facebook/wmt19-ru-en
Introduction
The facebook/wmt19-ru-en model is a ported version of Fairseq's WMT19 Transformer for Russian-to-English translation. It belongs to the FSMT (FairSeqMachineTranslation) family in Hugging Face Transformers, is designed for text-to-text generation, and is based on Facebook FAIR's submission to the WMT19 News Translation Task.
Architecture
The model uses a Transformer architecture implemented in PyTorch. It is one of a series of WMT19 models covering translation between English, Russian, and German. This Hugging Face port reuses the pretrained Fairseq weights without modification.
Training
The model's weights are identical to those released by Fairseq. Training data comes from the WMT19 news translation task, and performance is evaluated with BLEU scores. Translation quality is slightly below Fairseq's because Transformers does not support the model ensembling and re-ranking used in the original submission.
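To make the metric concrete, here is a minimal, unsmoothed sketch of corpus-level BLEU (the actual evaluation uses sacrebleu, which additionally handles tokenization and smoothing):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Unsmoothed corpus BLEU: clipped n-gram precisions for n = 1..max_n,
    combined by geometric mean and scaled by a brevity penalty."""
    matches = [0] * max_n
    totals = [0] * max_n
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            h_ngrams, r_ngrams = ngrams(h, n), ngrams(r, n)
            # Clip each hypothesis n-gram count by its count in the reference.
            matches[n - 1] += sum(min(c, r_ngrams[g]) for g, c in h_ngrams.items())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if min(matches) == 0:
        return 0.0  # Without smoothing, any zero precision makes BLEU zero.
    log_prec = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return 100 * bp * math.exp(log_prec)

print(corpus_bleu(["the cat sat on the mat"], ["the cat sat on the mat"]))  # 100.0
```

An exact match scores 100; substituting even one word lowers every n-gram precision, which is why BLEU rewards longer matching phrases rather than word overlap alone.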
Guide: Running Locally
- Install the Transformers library:
  pip install transformers
- Load and use the model:
  from transformers import FSMTForConditionalGeneration, FSMTTokenizer

  mname = "facebook/wmt19-ru-en"
  tokenizer = FSMTTokenizer.from_pretrained(mname)
  model = FSMTForConditionalGeneration.from_pretrained(mname)

  # "input_text" avoids shadowing Python's built-in input()
  input_text = "Машинное обучение - это здорово, не так ли?"
  input_ids = tokenizer.encode(input_text, return_tensors="pt")
  outputs = model.generate(input_ids)
  decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
  print(decoded)  # Machine learning is great, isn't it?
- Evaluate the model: use the sacrebleu script described in the original documentation to measure translation quality.
- Hardware recommendations: for faster inference, consider a cloud GPU such as those offered by AWS, Google Cloud, or Azure.
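The evaluation step above can be sketched as a short shell workflow, assuming sacrebleu is installed (it downloads the WMT19 test set on first use); the filenames source.ru and translations.en are placeholders:

```shell
pip install sacrebleu

# Extract the Russian source side of the WMT19 test set
sacrebleu -t wmt19 -l ru-en --echo src > source.ru

# Translate source.ru with the model (e.g. using the Python snippet above),
# writing one English hypothesis per line to translations.en, then score
# against the official references:
sacrebleu -t wmt19 -l ru-en < translations.en
```

The reported score can be compared against the BLEU figures published for the Fairseq original, bearing in mind the ensemble/re-ranking gap noted in the Training section.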
License
The facebook/wmt19-ru-en model is released under the Apache-2.0 license, which permits both personal and commercial use with attribution.