distilbert-base-uncased-finetuned-emotion (by bhadresh-savani)
Introduction
distilbert-base-uncased-finetuned-emotion is a fine-tuned version of distilbert-base-uncased for text classification, specifically emotion detection. It achieves strong accuracy and F1 scores on its emotion evaluation set (detailed under Training Results below).
Architecture
The model uses the DistilBERT architecture, a smaller and faster distilled variant of BERT that is well suited to text classification. A transformer encoder processes the input text, and a sequence-classification head maps its representation to emotion labels.
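For a quick end-to-end check before the step-by-step guide below, the model can also be driven through the Transformers pipeline API. A minimal sketch (the example sentence is a placeholder; the exact label names come from the checkpoint's config):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a ready-made text-classification pipeline.
classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/distilbert-base-uncased-finetuned-emotion",
    return_all_scores=True,  # return a score for every emotion label
)

print(classifier("I love using transformers!"))
# -> a list of {label, score} dicts, one entry per emotion class
```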
Training
Training Hyperparameters
- Learning Rate: 2e-05
- Train Batch Size: 64
- Eval Batch Size: 64
- Seed: 42
- Optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- Learning Rate Scheduler Type: Linear
- Number of Epochs: 5
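These settings map onto Transformers' TrainingArguments roughly as follows. This is a hedged sketch for orientation, not the author's original training script; the output_dir name is an assumption:

```python
from transformers import TrainingArguments

# Reconstruction of the reported hyperparameters as TrainingArguments.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",   # linear decay, as listed above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # matches the per-epoch results below
)
```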
Training Results

| Epoch | Validation Loss | Accuracy | F1     |
|------:|----------------:|---------:|-------:|
| 1     | 0.2171          | 0.928    | 0.9292 |
| 2     | 0.1764          | 0.9365   | 0.9372 |
| 3     | 0.1788          | 0.938    | 0.9388 |
| 4     | 0.2005          | 0.938    | 0.9388 |
| 5     | 0.1995          | 0.9365   | 0.9371 |
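A compute_metrics function along these lines would produce the accuracy and F1 numbers above when passed to a Trainer. A minimal sketch; the weighted F1 averaging is an assumption based on common practice for multi-class emotion classification:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied by the Trainer.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1": f1_score(labels, predictions, average="weighted"),  # assumed averaging
    }
```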
Guide: Running Locally
To run the model locally, follow these steps:

- Set Up Environment: Install the necessary libraries (Transformers, PyTorch, Datasets, Tokenizers) using pip.

```bash
pip install transformers==4.13.0 torch==1.11.0 datasets==1.16.1 tokenizers==0.10.3
```
- Load the Model: Use the Hugging Face Transformers library to load the model and tokenizer.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bhadresh-savani/distilbert-base-uncased-finetuned-emotion"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
- Prepare Data: Tokenize your text data using the tokenizer.

```python
inputs = tokenizer("Your text here", return_tensors="pt")
```
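For several inputs at once, the same tokenizer call can pad and truncate a whole batch. A small usage sketch; the example texts are placeholders:

```python
# padding/truncation make the sequences uniform length for batched inference
texts = ["I feel great today!", "This is so frustrating."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
```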
- Inference: Run the model to get predictions.

```python
outputs = model(**inputs)
```
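The outputs contain raw logits; applying a softmax and the model config's id2label mapping turns them into a readable prediction. A minimal sketch, assuming the checkpoint's config populates id2label, which is standard for Hugging Face sequence-classification models:

```python
import torch

# Convert logits to probabilities and map the top class index to its label.
probs = torch.softmax(outputs.logits, dim=-1)
predicted_id = int(probs.argmax(dim=-1))
print(model.config.id2label[predicted_id], probs[0, predicted_id].item())
```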
- Cloud GPUs: For enhanced performance, consider using cloud GPU services such as AWS, Google Cloud, or Azure.
License
License information for this model has not been provided. Please refer to the original repository or contact the author for licensing details.