roberta-base-emotion
bhadresh-savani
Introduction
The ROBERTA-BASE-EMOTION model is a fine-tuned version of the roberta-base model, optimized for emotion classification. It is trained on the emotion dataset, which is derived from Twitter data, is distributed through the Hugging Face Hub for use with the Transformers library, and reports strong accuracy and F1 scores.
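For orientation, the dataset can be inspected directly with the datasets library. This is a minimal sketch, not part of the model card; the Hub identifier dair-ai/emotion is an assumption based on the dataset's current location (older releases used the shorter name emotion).

# Hedged sketch: inspect the emotion dataset used for fine-tuning.
from datasets import load_dataset

dataset = load_dataset("dair-ai/emotion")   # identifier is an assumption
print(dataset["train"][0])                  # one {'text': ..., 'label': ...} example
print(dataset["train"].features["label"].names)
# ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']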
Architecture
The model architecture is based on RoBERTa, a robustly optimized variant of BERT. RoBERTa keeps BERT's transformer encoder but improves the pre-training recipe, with longer training on more data, larger batches, dynamic masking, and removal of the next-sentence-prediction objective.
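The classification head added during fine-tuning can be confirmed by loading the model configuration; a minimal sketch using standard Transformers APIs:

# Sketch: verify the base architecture and the six-way emotion head.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bhadresh-savani/roberta-base-emotion")
print(config.model_type)   # "roberta"
print(config.num_labels)   # 6
print(config.id2label)     # index-to-emotion mapping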
Training
The model was fine-tuned using the Hugging Face Trainer with the following hyperparameters:
- Learning rate: 2e-5
- Batch size: 64
- Number of training epochs: 8
The training was conducted on the Twitter-derived emotion dataset described above, in which each example is labeled with one of six emotions: sadness, joy, love, anger, fear, and surprise.
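The original training script is not published with the card; the following is a hedged sketch of how a comparable fine-tuning run could be set up with the stated hyperparameters. The dataset identifier, output path, and tokenization details are assumptions.

# Hedged sketch of a comparable fine-tuning run; not the author's
# original script. Hyperparameters mirror those stated above.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("dair-ai/emotion")          # assumed identifier
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=6)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="roberta-base-emotion",   # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    num_train_epochs=8,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()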
Guide: Running Locally
To use the ROBERTA-BASE-EMOTION model locally, follow these steps:
- Install Dependencies: Ensure the Hugging Face Transformers library is installed.
pip install transformers
- Load the Model:
from transformers import pipeline

# return_all_scores=True yields a score for every emotion; in newer
# Transformers releases this flag is deprecated in favor of top_k=None.
classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/roberta-base-emotion",
    return_all_scores=True,
)
- Make Predictions:
prediction = classifier(
    "I love using transformers. The best part is wide range of support and its easy to use"
)
print(prediction)
The output will be a list of emotions with corresponding scores, indicating the model's prediction confidence for each emotion.
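If only the most likely emotion is needed, the full score list can be reduced in one step; a minimal sketch, assuming the output shape described above:

# The pipeline returns one entry per input text; each entry is a list of
# {"label": ..., "score": ...} dicts covering all six emotions.
top = max(prediction[0], key=lambda item: item["score"])
print(top["label"], round(top["score"], 4))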
- Recommended Hardware: For optimal performance, use cloud GPUs such as the NVIDIA Tesla V100 or A100, available through platforms like AWS, GCP, or Azure.
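To place the pipeline on a GPU, the device can be selected explicitly; a minimal sketch, assuming CUDA device 0 is available:

# Sketch: run inference on the first CUDA device.
# device=0 assumes a GPU is present; use device=-1 (or omit it) for CPU.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/roberta-base-emotion",
    device=0,
)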
License
The ROBERTA-BASE-EMOTION model is licensed under the Apache-2.0 License, allowing for broad use, distribution, and modification with proper attribution.