ukrainian-qa
by robinhad
Introduction
The UKRAINIAN-QA model is a fine-tuned version of the xlm-roberta-base-uk model, adapted for question answering in the Ukrainian language using the UA-SQuAD dataset.
Architecture
The model is built on the XLM-RoBERTa architecture, a transformer-based model well suited to multilingual tasks. This particular checkpoint is optimized for question answering and implemented in PyTorch.
Training
The model was trained using the following hyperparameters:
- Learning Rate: 2e-05
- Train Batch Size: 16
- Eval Batch Size: 16
- Seed: 42
- Optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- LR Scheduler Type: Linear
- Number of Epochs: 6
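The Adam optimizer listed above combines momentum with per-parameter adaptive step sizes. A minimal pure-Python sketch of a single Adam update for one scalar parameter, using the hyperparameters from the list (the function name and single-parameter setup are illustrative, not from the training code):

```python
import math

def adam_step(param, grad, m, v, t, lr=2e-05, b1=0.9, b2=0.999, eps=1e-08):
    """One Adam update for a single scalar parameter at step t (t starts at 1)."""
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)             # bias correction for the warm-up phase
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# First step: the bias-corrected update is roughly lr * sign(grad)
p, m, v = adam_step(1.0, 0.5, 0.0, 0.0, 1)
```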
Validation loss decreased steadily over the six epochs, reaching a final value of 1.4778.
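Under the linear LR scheduler listed above, the learning rate decays from its initial value to zero over the total number of training steps. A small sketch of that schedule (assuming no warmup steps; the step counts below are illustrative, not taken from the actual training run):

```python
def linear_lr(step, total_steps, base_lr=2e-05):
    """Learning rate at a given step under a linear decay schedule (no warmup)."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

# Illustrative run: 1000 training examples, batch size 16, 6 epochs
steps_per_epoch = 1000 // 16          # 62 optimizer steps per epoch
total_steps = steps_per_epoch * 6     # 372 steps in total

print(linear_lr(0, total_steps))                 # base LR at the start: 2e-05
print(linear_lr(total_steps // 2, total_steps))  # half the base LR at the midpoint
print(linear_lr(total_steps, total_steps))       # 0.0 at the final step
```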
Guide: Running Locally
To run the model locally, follow these steps:
- Install Dependencies:
- Ensure you have Python and PyTorch installed.
- Install the Transformers library:
pip install transformers
- Load the Model:
- Use the following code snippet to load and use the model:
from transformers import pipeline, AutoTokenizer, AutoModelForQuestionAnswering

model_name = "robinhad/ukrainian-qa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
qa_model = pipeline("question-answering", model=model.to("cpu"), tokenizer=tokenizer)

question = "Де ти живеш?"  # "Where do you live?"
context = "Мене звати Сара і я живу у Лондоні"  # "My name is Sarah and I live in London"
qa_model(question=question, context=context)
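The question-answering pipeline returns a dict containing the answer text, a confidence score, and start/end character offsets into the context. A sketch of how those offsets map back onto the context string (the score and result dict here are illustrative values, not actual model output):

```python
# Illustrative pipeline result for the example above
result = {"score": 0.9, "start": 27, "end": 34, "answer": "Лондоні"}
context = "Мене звати Сара і я живу у Лондоні"

# The answer is the substring of the context between the start and end offsets
span = context[result["start"]:result["end"]]
assert span == result["answer"]
```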
- Suggested Hardware:
- For faster performance, especially with large datasets, consider using a cloud GPU service such as AWS, Google Cloud, or Paperspace.
License
The model is released under the MIT License, which allows for extensive use, modification, and distribution with minimal restrictions.