Japanese Sentiment Analysis by jarvisx17
Introduction
The Japanese Sentiment Analysis model by jarvisx17 is designed for classifying sentiment in Japanese text. It utilizes a BERT-based architecture and is trained on the chABSA dataset, achieving high accuracy and F1 scores.
Architecture
The model employs a BERT architecture, well suited to text classification tasks in Japanese, and was trained using the Trainer API from the Transformers library.
Training
The model was trained from scratch using the chABSA dataset. Key hyperparameters include:
- Learning Rate: 2e-05
- Batch Size: 16 for both training and evaluation
- Optimizer: Adam with betas (0.9, 0.999) and epsilon 1e-08
- Number of Epochs: 10
- Seed: 42
On evaluation, the model achieved a loss of 0.0001, with both accuracy and F1 score reaching 1.0.
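For reference, here is a minimal sketch of how the reported hyperparameters map onto the Trainer API's TrainingArguments. This is not the author's original training script; the output directory is illustrative, and the tokenized chABSA train/eval splits would still need to be prepared separately.

```python
from transformers import TrainingArguments

# Hyperparameters as reported above. Adam betas (0.9, 0.999) and
# epsilon 1e-08 are the library's optimizer defaults, so they need
# no explicit setting here.
training_args = TrainingArguments(
    output_dir="japanese-sentiment-analysis",  # illustrative path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    seed=42,
)
```

These arguments, together with the model and the tokenized datasets, would then be passed to a Trainer and training launched with trainer.train().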
Guide: Running Locally
To run the model locally, follow these steps:
- Install Dependencies:

```bash
pip install transformers fugashi unidic_lite
```

fugashi and unidic_lite are required by the Japanese BERT tokenizer; PyTorch must also be installed separately.
- Load the Model in Python:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("jarvisx17/japanese-sentiment-analysis")
model = AutoModelForSequenceClassification.from_pretrained("jarvisx17/japanese-sentiment-analysis")

# Replace the example string with Japanese text for meaningful predictions.
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
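The model returns raw logits. As a small usage sketch (not part of the original card), these can be converted to a probability and a predicted label; the actual label names come from the checkpoint's id2label config and depend on the model:

```python
import torch

# Softmax over the logits gives class probabilities.
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))

# Label names are read from the checkpoint's config.
print(model.config.id2label[pred_id], probs[0, pred_id].item())
```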
For optimal performance, consider using cloud GPUs such as those provided by AWS, Google Cloud, or Azure.
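If a GPU is available, moving the model and its inputs onto it is a small change; a minimal sketch assuming PyTorch with CUDA:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Inputs must live on the same device as the model.
inputs = tokenizer("I love AutoNLP", return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)
```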
License
The model and its usage guidelines are subject to the license terms provided by jarvisx17. For specific licensing information, refer to the model's page on the Hugging Face Hub.