bert-base-uncased-emotion

bhadresh-savani

Introduction

The bert-base-uncased-emotion model is a fine-tuned version of BERT (Bidirectional Encoder Representations from Transformers) tailored for emotion classification. It was fine-tuned on the emotion dataset of Twitter messages and achieves high accuracy and F1 scores (reported under Training below).

Architecture

BERT is a Transformer-based architecture designed for a wide range of natural language processing tasks and pretrained with the Masked Language Modeling (MLM) objective. The base variant, bert-base-uncased, was fine-tuned for emotion detection with a learning rate of 2e-5, a batch size of 64, and 8 epochs of training.
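
As a concrete illustration, the fine-tuned checkpoint can be loaded with the standard sequence-classification classes. A minimal sketch (the six labels named in the comment are those of the emotion dataset; treat the exact index ordering as an assumption):

from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the fine-tuned checkpoint; the classification head sits on top
# of the pretrained bert-base-uncased encoder.
tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/bert-base-uncased-emotion")
model = AutoModelForSequenceClassification.from_pretrained("bhadresh-savani/bert-base-uncased-emotion")

# The config maps class indices to the emotion dataset's six labels
# (sadness, joy, love, anger, fear, surprise).
print(model.config.id2label)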

Training

The model was trained using the Hugging Face Trainer with the following parameters:

  • Learning Rate: 2e-5
  • Batch Size: 64
  • Epochs: 8
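
A minimal fine-tuning sketch with these hyperparameters, assuming the emotion dataset from the Hugging Face Hub and the datasets library (the author's exact training script may differ):

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Assumption: the "emotion" dataset with its standard splits and 6 labels.
dataset = load_dataset("emotion")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=6)

# Hyperparameters stated above: lr 2e-5, batch size 64, 8 epochs.
args = TrainingArguments(
    output_dir="bert-base-uncased-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=8,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"])
trainer.train()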

With this setup, the model achieved a test accuracy of 94.05% and a test F1 score of 94.06% on the Twitter emotion dataset.
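
As an illustration of how such metrics are typically computed with the evaluate library (an assumption, not the author's published script; weighted F1 averaging is likewise an assumption):

import evaluate
import numpy as np

accuracy_metric = evaluate.load("accuracy")
f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_metric.compute(predictions=preds, references=labels)["accuracy"],
        # Weighted averaging is an assumption; macro F1 is also common.
        "f1": f1_metric.compute(predictions=preds, references=labels, average="weighted")["f1"],
    }

# Pass compute_metrics to the Trainer above, then call
# trainer.evaluate(dataset["test"]) to score the test split.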

Guide: Running Locally

To use the model for emotion classification, you can run the following Python code:

from transformers import pipeline

# return_all_scores=True returns a score for every emotion label
# (newer transformers versions use top_k=None for the same behavior).
classifier = pipeline("text-classification", model="bhadresh-savani/bert-base-uncased-emotion", return_all_scores=True)
prediction = classifier("I love using transformers. The best part is the wide range of support and how easy it is to use.")
print(prediction)
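
With return_all_scores=True, the pipeline returns, for each input, a list of {'label': ..., 'score': ...} dictionaries, one per emotion class, so the full score distribution is available rather than just the top label. The shape of the output looks like the following (scores elided here; for the sentence above you would expect joy to receive the highest score):

[[{'label': 'sadness', 'score': ...}, {'label': 'joy', 'score': ...}, ...]]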

Cloud GPUs: For faster training and inference, consider using cloud GPUs such as those offered by AWS, Google Cloud, or Azure.

License

The BERT-BASE-UNCASED-EMOTION model is licensed under the Apache-2.0 License, allowing for wide usage and distribution.
