Sentiment RoBERTa Large English 3 Classes
Introduction
The Sentiment RoBERTa Large English 3 Classes model is a RoBERTa-based text classification model designed to analyze sentiment from English-language text. This model distinguishes between three sentiment classes: positive, neutral, and negative. It has been fine-tuned on social media data to achieve a hold-out accuracy of 86.1%.
Architecture
The model utilizes the RoBERTa architecture, which is known for its robust performance in natural language processing tasks. It has been specifically fine-tuned on a dataset of 5,304 manually annotated social media posts to enhance its sentiment classification accuracy.
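Conceptually, the fine-tuned model adds a three-way classification head on top of the RoBERTa encoder: the head emits one raw logit per sentiment class, and a softmax turns those logits into the probabilities the pipeline reports. A minimal sketch of that final step follows; the logit values and the label order are illustrative assumptions, not taken from the model card:

```python
import math

def softmax(logits):
    """Convert raw classification-head logits into probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the three classes, in an assumed order:
labels = ["negative", "neutral", "positive"]
logits = [-1.2, 0.3, 2.8]

probs = softmax(logits)
prediction = labels[probs.index(max(probs))]
```

The probabilities always sum to one, so the class with the largest logit is also the class with the highest score in the pipeline output.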
Training
Training for this model involved fine-tuning on a corpus of manually annotated social media posts, as detailed in Web Appendix F of Hartmann et al. (2021). The dataset includes examples from various sentiment categories to ensure balanced learning.
Guide: Running Locally
To use the model locally, follow these steps:
- Install the Hugging Face Transformers library:

  ```shell
  pip install transformers
  ```

- Use the following Python code to classify sentiment:

  ```python
  from transformers import pipeline

  # return_all_scores=True returns the scores for all three classes
  # (newer Transformers versions use top_k=None for the same behavior)
  classifier = pipeline(
      "text-classification",
      model="j-hartmann/sentiment-roberta-large-english-3-classes",
      return_all_scores=True,
  )
  result = classifier("This is so nice!")
  print(result)
  ```

- For improved performance, especially with larger datasets, consider using a cloud GPU service such as AWS, Google Cloud, or Azure.
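When classifying many texts, it helps to feed the pipeline in fixed-size chunks rather than one giant list. A small sketch of such a chunking helper is shown below; the helper name and batch size are illustrative choices, not part of the model card (recent Transformers versions also accept a `batch_size` argument on the pipeline itself):

```python
def chunks(items, batch_size):
    """Yield successive fixed-size batches from a list of texts."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

texts = ["This is so nice!", "Terrible service.", "It was okay."]
batches = list(chunks(texts, batch_size=2))

# Each batch can then be passed to the classifier from the guide above:
# for batch in batches:
#     results = classifier(batch)
```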
License
For licensing information and usage rights, refer to the terms provided by the model author. If you use this model in your work, cite the following paper: "The Power of Brand Selfies" by Hartmann et al. (2021). For questions or further information, contact Jochen Hartmann at jochen.hartmann@tum.de.