mp6kv/paper_feedback_intent
Introduction
The paper_feedback_intent model is a fine-tuned version of the roberta-base model, designed for text classification. It achieves the following results on the evaluation set:
- Loss: 0.3621
- Accuracy: 0.9302
- Precision: 0.9307
- Recall: 0.9302
- F1 Score: 0.9297
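The precision, recall, and F1 values track accuracy closely, which is consistent with weighted averaging across classes. The averaging scheme is not stated in the card, so the compute_metrics sketch below is an assumption about how such numbers are commonly produced, not a disclosed detail:

import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes in
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted"  # weighted averaging is an assumption
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }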
Architecture
This model is based on the roberta-base architecture, a transformer model introduced by Facebook AI and distributed through the Hugging Face Hub. It has been fine-tuned on a dataset that has not been disclosed.
Training
The model was trained using the following hyperparameters:
- Learning Rate: 2e-05
- Train Batch Size: 16
- Evaluation Batch Size: 16
- Seed: 42
- Optimizer: Adam with betas (0.9, 0.999) and epsilon 1e-08
- Learning Rate Scheduler Type: Linear
- Number of Epochs: 10
Training was conducted with performance evaluation at each epoch, showing improvements in loss and accuracy over time.
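For reference, the hyperparameters above map onto Hugging Face TrainingArguments roughly as follows. This is a minimal sketch: the dataset, preprocessing, and output directory are undisclosed, so those parts are placeholders rather than the actual training script.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="paper_feedback_intent",  # assumed path, not disclosed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                      # Adam betas (0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",         # evaluation at each epoch, per above
)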
Guide: Running Locally
To run the model locally, you can follow these general steps:
- Install Dependencies:
- Ensure you have Python and pip installed.
- Install PyTorch and the Transformers library:
pip install torch
pip install transformers
- Download and Load the Model:
- Use the Hugging Face Model Hub to load the model:
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("mp6kv/paper_feedback_intent")
tokenizer = AutoTokenizer.from_pretrained("mp6kv/paper_feedback_intent")
- Inference:
- Tokenize your input text and run the model to get predictions, as in the sketch below.
- Refer to the Transformers documentation for additional details on performing inference.
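A minimal end-to-end sketch follows. The example sentence and printed fields are illustrative; the actual class names come from the model's id2label config rather than anything assumed here.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("mp6kv/paper_feedback_intent")
tokenizer = AutoTokenizer.from_pretrained("mp6kv/paper_feedback_intent")
model.eval()

text = "This section needs more detail on the experimental setup."  # illustrative input
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)[0]
pred_id = int(probs.argmax())
print(model.config.id2label[pred_id], float(probs[pred_id]))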
- Hardware Recommendations:
- For optimal performance, especially with large datasets, consider using cloud-based GPUs like those offered by AWS, GCP, or Azure.
License
The model is available under the MIT License, allowing for both personal and commercial use with minimal restrictions.