Qwen-7B-Kanbun

sophiefy

Introduction

Qwen-7B-Kanbun is a model finetuned on a parallel corpus for translating between Kanbun (漢文, Classical Chinese as read in Japan) and Kakikudashibun (書き下し文, its rendering in Japanese word order).

Architecture

  • Base Model: Qwen/Qwen-7B-Chat-Int4
  • Library: PEFT

Training

Details of the training data, procedure, hyperparameters, and evaluation metrics are not provided. Environmental-impact information (hardware type, hours used, cloud provider, compute region, carbon emissions) is likewise unspecified.

Guide: Running Locally

  1. Setup: Install the necessary libraries, including peft (and its dependency transformers).
  2. Model Initialization: Load the Qwen-7B base model and tokenizer, then apply the finetuned adapter.
  3. Usage: Use the model for translation tasks, converting Kanbun text to Kakikudashibun.
  4. Hardware Recommendations: For optimal performance, a cloud GPU from a provider such as AWS, Google Cloud, or Azure is suggested.
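The steps above can be sketched as follows. This is an assumption-laden example, not the card's documented usage: the prompt wording, the adapter path, and the use of `model.chat` (the Qwen chat interface) are all guesses; only the base model ID `Qwen/Qwen-7B-Chat-Int4` comes from the card.

```python
def build_prompt(kanbun: str) -> str:
    """Wrap a Kanbun sentence in a translation instruction.

    The exact prompt format is an assumption; the model card does not
    document the instruction template used during finetuning.
    """
    return f"以下の漢文を書き下し文に訳してください。\n{kanbun}"


def load_model(adapter_path: str):
    """Load the base model and apply the PEFT adapter.

    adapter_path is a placeholder for the local or Hub location of the
    finetuned adapter weights. Imports are kept inside the function so the
    module can be inspected without transformers/peft installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(
        "Qwen/Qwen-7B-Chat-Int4", trust_remote_code=True
    )
    base = AutoModelForCausalLM.from_pretrained(
        "Qwen/Qwen-7B-Chat-Int4", device_map="auto", trust_remote_code=True
    )
    model = PeftModel.from_pretrained(base, adapter_path)
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_model("path/to/adapter")  # placeholder path
    # Qwen chat models expose a chat() helper; PeftModel forwards the call
    # to the wrapped base model.
    response, _ = model.chat(tokenizer, build_prompt("春眠不覚暁"), history=None)
    print(response)
```

Running the script requires a GPU with the Int4 base model's dependencies (auto-gptq, optimum) installed; `build_prompt` alone runs anywhere.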

License

License information is not provided. Users should verify the license before use.