Qwen-7B-kanbun
Introduction
Qwen-7B is a model finetuned on a parallel corpus for translating between Kanbun (漢文) and Kakikudashibun (書き下し文).
Architecture
- Base Model: Qwen/Qwen-7B-Chat-Int4
- Library: PEFT
Training
The training data, procedure, hyperparameters, and evaluation metrics are not provided. Environmental impact details like hardware type, hours used, cloud provider, compute region, and carbon emissions are also not specified.
Guide: Running Locally
- Setup: Install the necessary libraries, including peft.
- Model Initialization: Load the Qwen-7B model and tokenizer.
- Usage: Use the model for translation tasks as shown in the examples, where Kanbun text is converted to Kakikudashibun.
- Hardware Recommendations: For optimal performance, using cloud GPUs such as those offered by AWS, Google Cloud, or Azure is suggested.
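The steps above can be sketched as follows. This is a minimal, unofficial example: the repository id `sophiefy/Qwen-7B-kanbun` and the Japanese instruction prompt are assumptions, since the model card does not document the exact inference format.

```python
# Minimal local-inference sketch (assumed setup, not the official recipe).
# Install dependencies first, e.g.: pip install transformers peft auto-gptq optimum

def build_prompt(kanbun: str) -> str:
    """Wrap a Kanbun sentence in a translation instruction (assumed wording)."""
    return f"次の漢文を書き下し文に直してください。\n漢文: {kanbun}\n書き下し文:"

def run_demo() -> None:
    """Load the adapter and translate one sentence; requires a GPU."""
    from transformers import AutoTokenizer
    from peft import AutoPeftModelForCausalLM

    # AutoPeftModelForCausalLM reads the base model (Qwen/Qwen-7B-Chat-Int4)
    # from the adapter config and loads both together.
    model = AutoPeftModelForCausalLM.from_pretrained(
        "sophiefy/Qwen-7B-kanbun",  # hypothetical repo id
        trust_remote_code=True,
        device_map="auto",
    )
    tokenizer = AutoTokenizer.from_pretrained(
        "Qwen/Qwen-7B-Chat-Int4", trust_remote_code=True
    )

    prompt = build_prompt("子曰、学而時習之、不亦説乎。")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    ))
```

Calling run_demo() downloads roughly 6 GB of weights on first use, so a cloud GPU as recommended above is advisable.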
License
License information is not provided. Users should verify the license before use.