ALBERT-BASE-V2-EMOTION
Introduction
ALBERT-BASE-V2-EMOTION is a text classification model based on the ALBERT architecture and fine-tuned for emotion detection. ALBERT is a lightweight variant of BERT designed to use far fewer parameters. This model was fine-tuned on the Emotion dataset to categorize the emotion expressed in a piece of text.
Architecture
ALBERT (A Lite BERT) reduces parameter count relative to standard BERT models primarily through factorized embedding parameterization and cross-layer parameter sharing: albert-base-v2 has roughly 12M parameters versus about 110M for bert-base. ALBERT-BASE-V2-EMOTION takes this backbone and adds a six-way classification head fine-tuned to predict emotions from text inputs.
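To see the size difference concretely, you can count parameters for both backbones with the Transformers library (a quick illustrative check, not part of the original model card):

from transformers import AutoModel

def count_params(model):
    # Total number of parameters in the backbone
    return sum(p.numel() for p in model.parameters())

albert = AutoModel.from_pretrained("albert-base-v2")
bert = AutoModel.from_pretrained("bert-base-uncased")
print(f"albert-base-v2: {count_params(albert):,}")    # roughly 12M
print(f"bert-base-uncased: {count_params(bert):,}")   # roughly 110M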
Training
The model was fine-tuned using the Hugging Face Trainer with the following hyperparameters:
- Learning rate: 2e-5
- Batch size: 64
- Number of training epochs: 8
The training data was the Emotion dataset, a collection of English Twitter messages labeled with six emotions: sadness, joy, love, anger, fear, and surprise.
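The original training script is not reproduced in this card; a minimal sketch of an equivalent run with the Trainer API and the hyperparameters above might look like this (preprocessing choices such as max_length and the output path are assumptions):

from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

# Depending on your datasets version, "emotion" may be namespaced as "dair-ai/emotion"
dataset = load_dataset("emotion")
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForSequenceClassification.from_pretrained("albert-base-v2", num_labels=6)

def tokenize(batch):
    # max_length=128 is an assumption; tweets are short
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

# Hyperparameters from this card: lr 2e-5, batch size 64, 8 epochs
args = TrainingArguments(
    output_dir="albert-base-v2-emotion",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=8,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()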
Model Performance
The model's performance on the Emotion dataset is as follows:
- Accuracy: 93.6%
- F1 Score: 93.65%
- Evaluation throughput: 182.794 test samples per second
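These numbers come from the model card; if you want to verify them yourself, a rough re-evaluation on the Emotion test split could look like the following (the batch size and weighted F1 averaging are assumptions):

from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import pipeline

test = load_dataset("emotion", split="test")
classifier = pipeline("text-classification", model="bhadresh-savani/albert-base-v2-emotion")

# Predict the top label for each test example
preds = [out["label"] for out in classifier(test["text"], batch_size=64)]
label_names = test.features["label"].names  # ["sadness", "joy", "love", "anger", "fear", "surprise"]
refs = [label_names[i] for i in test["label"]]

print("accuracy:", accuracy_score(refs, preds))
print("f1 (weighted):", f1_score(refs, preds, average="weighted"))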
Guide: Running Locally
To use the ALBERT-BASE-V2-EMOTION model locally, follow these steps:
- Install the Transformers library:

pip install transformers
- Load and use the model:

from transformers import pipeline

classifier = pipeline("text-classification", model='bhadresh-savani/albert-base-v2-emotion', return_all_scores=True)
prediction = classifier("I love using transformers. The best part is the wide range of support and its easy to use")
print(prediction)
- Output Example:

[
  {'label': 'sadness', 'score': 0.0104},
  {'label': 'joy', 'score': 0.8902},
  {'label': 'love', 'score': 0.0425},
  {'label': 'anger', 'score': 0.0413},
  {'label': 'fear', 'score': 0.0118},
  {'label': 'surprise', 'score': 0.0038}
]
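If you only need the most likely emotion, take the highest-scoring entry. Note that depending on your Transformers version the pipeline may return the scores nested one level deeper (a list per input); the indexing below assumes the flat list shown above:

# Pick the label with the highest score from the output above
scores = prediction  # use prediction[0] if your version returns a nested list
top = max(scores, key=lambda item: item["score"])
print(top["label"], round(top["score"], 4))  # e.g. joy 0.8902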
Cloud GPUs: For large-scale inference or training, consider using cloud services like AWS, GCP, or Azure, which provide GPU instances.
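On a machine with a CUDA GPU (local or cloud), you can move the pipeline onto the GPU via the standard device argument:

from transformers import pipeline

# device=0 selects the first CUDA GPU; omit it (or use device=-1) for CPU
classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/albert-base-v2-emotion",
    return_all_scores=True,
    device=0,
)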
License
The ALBERT-BASE-V2-EMOTION model is released under the Apache-2.0 license.