SiEBERT: sentiment-roberta-large-english

Introduction

The SiEBERT model is a fine-tuned version of RoBERTa-large, specifically designed for binary sentiment analysis of English text. It predicts either positive (1) or negative (0) sentiment and has been fine-tuned on 15 diverse datasets to enhance its generalization across various text types, such as reviews and tweets.

Architecture

SiEBERT is based on the RoBERTa-large architecture, a robust transformer model with strong performance across natural language processing tasks. Fine-tuning on a broad mix of text types allows it to handle a wide range of inputs and outperform sentiment models trained on a single type of text.
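
To verify these details locally, the hosted configuration can be inspected without downloading the full weights; the values in the comments below follow from the standard RoBERTa-large configuration and the binary head described above:

    from transformers import AutoConfig

    # Downloads only the model configuration, not the weights
    config = AutoConfig.from_pretrained("siebert/sentiment-roberta-large-english")

    print(config.model_type)          # "roberta"
    print(config.num_hidden_layers)   # 24 layers (RoBERTa-large)
    print(config.hidden_size)         # 1024-dimensional hidden states
    print(config.num_labels)          # 2 labels: negative (0) and positive (1)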

Training

The model was trained and evaluated on 15 datasets drawn from different text sources. Key fine-tuning hyperparameters include a learning rate of 2e-5, 3 epochs, 500 warmup steps, and a weight decay of 0.01. Averaged across the evaluation datasets, SiEBERT reaches 93.2% accuracy, a significant improvement over models such as DistilBERT fine-tuned only on SST-2.
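
The original training script is not reproduced here, but as an illustrative sketch, the stated hyperparameters map onto Hugging Face's TrainingArguments as follows (output_dir is a hypothetical path, not part of the original setup):

    from transformers import TrainingArguments

    # Fine-tuning hyperparameters reported for SiEBERT
    training_args = TrainingArguments(
        output_dir="./siebert-finetune",  # hypothetical output directory
        learning_rate=2e-5,
        num_train_epochs=3,
        warmup_steps=500,
        weight_decay=0.01,
    )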

Guide: Running Locally

  1. Install Transformers: Ensure you have the transformers library installed.

    pip install transformers
    
  2. Use Sentiment Analysis Pipeline:

    from transformers import pipeline
    # Load the fine-tuned model into a ready-made sentiment analysis pipeline
    sentiment_analysis = pipeline("sentiment-analysis", model="siebert/sentiment-roberta-large-english")
    # Prints a list with the predicted label and its confidence score
    print(sentiment_analysis("I love this!"))
    
  3. Set Up on Google Colab: For free GPU support, you can run the sentiment analysis script in the Google Colab notebook linked from the model card.

  4. Cloud GPUs: For enhanced performance, consider cloud GPU services such as AWS, Google Cloud, or Azure; the pipeline can be moved onto a GPU as shown in the sketch after this list.
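
As a minimal sketch of the GPU option mentioned in step 4 (assuming a CUDA-capable machine is available; device=0 selects the first GPU and can be omitted to run on CPU):

    from transformers import pipeline

    # Place the pipeline on the first GPU for faster inference
    sentiment_analysis = pipeline(
        "sentiment-analysis",
        model="siebert/sentiment-roberta-large-english",
        device=0,
    )

    # The pipeline also accepts a list of texts for batch prediction
    print(sentiment_analysis(["I love this!", "This is terrible."]))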

License

The usage of SiEBERT is subject to the license terms listed on the Hugging Face model hub and those of the datasets used for fine-tuning. Always ensure compliance with these terms when using the model for research or commercial purposes.
