rubert-tiny-bilingual-nli
Introduction
The rubert-tiny-bilingual-nli model, developed by cointegrated, is a fine-tuned version of the rubert-tiny model. It is designed for natural language inference (NLI): determining the logical relationship between two short texts, with a specific focus on entailment.
Architecture
The model is based on the rubert-tiny architecture, a compact Russian-English BERT, and is optimized for zero-shot classification tasks. It is implemented in PyTorch and distributed in the Safetensors format, making it suitable for a variety of text-classification applications.
Training
The model was fine-tuned on the cointegrated/nli-rus-translated-v2021 dataset. This training improves its ability to classify Russian text inputs, enabling tasks such as recognizing textual entailment (RTE).
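Entailment recognition can also be done at a lower level than the pipeline, by tokenizing a premise/hypothesis pair and reading the class probabilities from the logits. The snippet below is an assumed usage sketch (the Russian sentences are illustrative, with English translations in comments), not code from the model card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "cointegrated/rubert-tiny-bilingual-nli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "Кошка спит на диване."    # "The cat is sleeping on the sofa."
hypothesis = "Животное отдыхает."    # "An animal is resting."

# Encode the pair together; BERT-style models separate the two texts
# with a [SEP] token internally.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]

# Map each probability to the label name stored in the model config.
for i, p in enumerate(probs):
    print(model.config.id2label[i], float(p))
```

Reading the labels from `model.config.id2label` avoids hard-coding label names that may differ between NLI checkpoints.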
Guide: Running Locally
To run the rubert-tiny-bilingual-nli model locally:
- Install Dependencies: Ensure you have Python and PyTorch installed.
- Download the Model: Install the Hugging Face Transformers library, which is used to load the model:

```shell
pip install transformers
```
- Load the Model: Use the following Python code to load and test the model:

```python
from transformers import pipeline

# Load the zero-shot classification pipeline with the bilingual NLI model.
classifier = pipeline("zero-shot-classification",
                      model="cointegrated/rubert-tiny-bilingual-nli")

# Input: "The service was lousy, the food was bad"
# Candidate labels: "I liked it" / "I didn't like it"
result = classifier("Сервис отстойный, кормили невкусно",
                    candidate_labels=["Мне понравилось", "Мне не понравилось"])
print(result)
```
- Cloud GPUs: For faster inference, consider running the model on cloud GPU services such as AWS, Google Cloud, or Azure.
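When a GPU is available (locally or on a cloud instance), the pipeline can be placed on it via the `device` argument; the sketch below (an assumed usage pattern, with an illustrative Russian input meaning "Great service!") falls back to CPU when no CUDA device is found:

```python
import torch
from transformers import pipeline

# device=0 selects the first CUDA GPU; device=-1 runs on CPU.
device = 0 if torch.cuda.is_available() else -1
classifier = pipeline("zero-shot-classification",
                      model="cointegrated/rubert-tiny-bilingual-nli",
                      device=device)

# "Отличный сервис!" = "Great service!"
out = classifier("Отличный сервис!",
                 candidate_labels=["positive", "negative"])
print(out)
```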
License
For license information, refer to the model's Hugging Face page: cointegrated/rubert-tiny-bilingual-nli.