RoBERTa-base Formality Ranker (s-nlp)
Introduction
The roberta-base-formality-ranker is a model trained to classify English sentences as formal or informal. It is built on the RoBERTa-base architecture and is trained on the GYAFC and Pavlick-Tetreault-2016 datasets. The model is designed to focus on the formality of the text itself: data augmentation during training prevents it from relying too heavily on punctuation and capitalization cues.
Architecture
The model is based on the RoBERTa-base architecture. RoBERTa is a transformer encoder pretrained with a robustly optimized BERT procedure and widely used as a backbone for text classification. The formality ranker fine-tunes this architecture specifically for classifying the formality of text.
Training
The model was trained using the GYAFC dataset from Rao and Tetreault, 2018, and an online formality corpus from Pavlick and Tetreault, 2016. Data augmentation was applied by altering text case and punctuation to prevent the model from being overly dependent on these features. The training objective involves binary classification and in-batch ranking, optimizing performance metrics such as ROC AUC, precision, recall, F-score, accuracy, and Spearman correlation.
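The exact augmentation procedure is not spelled out here, but the idea of varying case and punctuation so the classifier cannot lean on those surface features can be sketched as follows (the function name and the specific set of variants are illustrative assumptions, not the published pipeline):

```python
import string

def case_punct_variants(text: str) -> list[str]:
    """Generate surface-form variants of a sentence by altering case and
    stripping punctuation, so a formality classifier trained on them cannot
    rely on capitalization or punctuation alone (illustrative sketch)."""
    no_punct = text.translate(str.maketrans("", "", string.punctuation))
    return [
        text.lower(),            # lowercase, punctuation kept
        text.upper(),            # uppercase, punctuation kept
        no_punct,                # original case, punctuation removed
        no_punct.lower(),        # lowercase and punctuation removed
    ]
```

Each variant keeps the original label, so the model must learn formality from word choice and syntax rather than from surface cues.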
Guide: Running Locally
To run the roberta-base-formality-ranker locally, follow these steps:
- Set up your environment: ensure Python and PyTorch are installed.
- Install Hugging Face Transformers: run the command pip install transformers.
- Load the model: use the Transformers library to load the model and tokenizer.

  from transformers import AutoModelForSequenceClassification, AutoTokenizer

  model = AutoModelForSequenceClassification.from_pretrained("s-nlp/roberta-base-formality-ranker")
  tokenizer = AutoTokenizer.from_pretrained("s-nlp/roberta-base-formality-ranker")

- Prepare your input: tokenize your sentences with the tokenizer.
- Inference: pass the tokenized inputs through the model to obtain formality predictions.
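Putting the steps above together, a minimal end-to-end inference sketch looks like this. Note that which logit index corresponds to "formal" is an assumption here; check model.config.id2label on the loaded model to confirm the label mapping.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "s-nlp/roberta-base-formality-ranker"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

sentences = [
    "I would greatly appreciate your assistance with this matter.",
    "gimme a sec, brb",
]

# Tokenize the batch and run a forward pass without gradients.
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the two classes; index 1 is assumed to be "formal"
# (verify against model.config.id2label).
probs = torch.softmax(logits, dim=-1)
formality_scores = probs[:, 1].tolist()

for sentence, score in zip(sentences, formality_scores):
    print(f"{score:.3f}  {sentence}")
```

Each score is a probability in [0, 1], so the outputs can also be used to rank sentences by formality rather than just to classify them.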
For optimal performance, consider using a cloud GPU service such as AWS EC2, Google Cloud Platform, or Azure, especially for processing large datasets.
License
This model is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. You can view the license details at http://creativecommons.org/licenses/by-nc-sa/4.0/.