RuBERT-Conversational-ru-sentiment-RuSentiment

Maintainer: sismetanin

Introduction

RuBERT-Conversational-ru-sentiment-RuSentiment is a fine-tuned version of the RuBERT-Conversational model designed for sentiment analysis in Russian. It is trained using the RuSentiment dataset, which comprises Russian-language posts from VKontakte, a popular Russian social network.

Architecture

The model is based on RuBERT-Conversational, a BERT encoder adapted for Russian conversational text such as social-media posts. For sentiment analysis, a sequence-classification head sits on top of the encoder, and the whole model is fine-tuned on RuSentiment so that it predicts a sentiment label for each input text.

Training

The model was fine-tuned on RuSentiment, a sentiment-annotated corpus of general-domain Russian-language posts from VKontakte built for social-media sentiment analysis. Models fine-tuned on RuSentiment have achieved competitive scores on Russian sentiment-analysis benchmarks.
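
For readers who want to reproduce a comparable fine-tune, the sketch below is a rough illustration rather than the author's exact training procedure. It assumes RuSentiment has been exported locally to CSV files with a text column and an integer label column; the file names, column names, five-label setup, DeepPavlov/rubert-base-cased-conversational base checkpoint, and all hyperparameters are assumptions.

    # A minimal fine-tuning sketch; NOT the author's exact setup.
    # Assumes RuSentiment is stored locally as CSV files with a "text" column
    # and an integer "label" column (0-4); paths and hyperparameters are illustrative.
    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              DataCollatorWithPadding, Trainer, TrainingArguments)

    base_model = "DeepPavlov/rubert-base-cased-conversational"
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=5)

    data = load_dataset("csv", data_files={"train": "rusentiment_train.csv",
                                           "test": "rusentiment_test.csv"})

    def tokenize(batch):
        # Truncate long posts; padding is applied per batch by the data collator.
        return tokenizer(batch["text"], truncation=True, max_length=128)

    data = data.map(tokenize, batched=True)

    args = TrainingArguments(output_dir="rubert-conversational-rusentiment",
                             per_device_train_batch_size=16,
                             num_train_epochs=3)

    trainer = Trainer(model=model,
                      args=args,
                      data_collator=DataCollatorWithPadding(tokenizer),
                      train_dataset=data["train"],
                      eval_dataset=data["test"])
    trainer.train()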

Guide: Running Locally

To run this model locally, follow these steps:

  1. Install Prerequisites:

    • Ensure you have Python 3 installed.
    • Install the PyTorch and Transformers libraries:
    pip install torch transformers
    
  2. Download the Model:

    • The model is hosted on the Hugging Face model hub as sismetanin/rubert_conversational-ru-sentiment-rusentiment; calling from_pretrained in the next step downloads and caches the weights automatically, so no manual download is required.
  3. Load the Model:

    • Use the Transformers library to load the model and tokenizer.
    from transformers import AutoModelForSequenceClassification, AutoTokenizer
    
    model_name = "sismetanin/rubert_conversational-ru-sentiment-rusentiment"
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    
  4. Run Inference:

    • Prepare your text input, tokenize it, and pass it through the model to obtain a sentiment prediction; a minimal sketch follows below.
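
The sketch below shows end-to-end single-sentence inference. It is a minimal example, not an official snippet from the model author: the Russian sentence is illustrative, and the printed label comes from whatever id2label mapping the model config ships with (which may be a generic LABEL_N identifier rather than a RuSentiment class name).

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    model_name = "sismetanin/rubert_conversational-ru-sentiment-rusentiment"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    model.eval()

    # Illustrative input ("Great service, I recommend it to everyone!").
    text = "Отличный сервис, всем рекомендую!"

    # Tokenize and run a forward pass without tracking gradients.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits

    # Convert logits to probabilities and map the top class id to its label name.
    probs = torch.softmax(logits, dim=-1)
    predicted_id = int(probs.argmax(dim=-1))
    label = model.config.id2label.get(predicted_id, str(predicted_id))
    print(f"Predicted class: {label} (p={probs[0, predicted_id].item():.3f})")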

For faster performance, especially with large datasets, consider using cloud-based GPUs like those available through AWS, Google Cloud, or Azure.
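
If a CUDA GPU is available, whether locally or on such a cloud instance, moving the model and inputs onto it is a small change. A minimal sketch, reusing the model, tokenizer, and text variables from the example above:

    import torch

    # Use a CUDA GPU when PyTorch can see one; otherwise stay on the CPU.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)

    # The tokenized inputs must live on the same device as the model.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512).to(device)
    with torch.no_grad():
        logits = model(**inputs).logits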

License

Licensing terms for the model and its associated code are set by the model's author rather than by Hugging Face. Refer to the model repository on the Hugging Face Hub for the authoritative licensing information before use.
