distilbert-base-uncased-finetuned-eoir_privacy

Introduction
The distilbert-base-uncased-finetuned-eoir_privacy model is a fine-tuned version of distilbert-base-uncased, trained on the EOIR Privacy dataset. It predicts whether names in a text should be masked with pseudonyms to protect privacy, following EOIR court standards. The model achieves an accuracy of 90.53% and an F1 score of 80.88% on the evaluation set.
Architecture
This model is based on the DistilBERT architecture, a smaller, faster, and lighter version of BERT. Fine-tuning adapted the model to a binary text-classification task (mask vs. do not mask) on the EOIR Privacy dataset.
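As a rough sketch of how such a fine-tuning setup is typically initialized (not the authors' actual training script), the pretrained backbone can be loaded with a fresh two-way classification head; num_labels=2 is an assumption that follows from the binary mask/no-mask task:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the pretrained DistilBERT backbone and attach a new
# classification head; num_labels=2 assumes the binary
# mask / do-not-mask label set described above.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=2,
)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
```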
Training
The model was trained for 5 epochs with a learning rate of 2e-05, a batch size of 16 for both training and evaluation, and a random seed of 42. The Adam optimizer was used (with its reported beta and epsilon settings) together with a linear learning-rate scheduler. Performance improved over the course of training, reaching a final validation loss of 0.3681, an accuracy of 90.53%, and an F1 score of 80.88%.
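A minimal sketch of these settings using the Hugging Face Trainer API; the Adam beta and epsilon values are left at the library defaults here, which is an assumption since the exact values are not stated above:

```python
from transformers import TrainingArguments

# Hyperparameters as described above. Adam betas/epsilon are left at
# the transformers defaults, which is an assumption, not a stated fact.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-eoir_privacy",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    seed=42,
    lr_scheduler_type="linear",
)
```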
Guide: Running Locally
- Set Up Environment: Ensure you have Python installed. It is recommended to create a virtual environment.

  ```bash
  python -m venv env
  source env/bin/activate  # On Windows use `env\Scripts\activate`
  ```
- Install Dependencies: Use pip to install the necessary libraries.

  ```bash
  pip install torch transformers datasets
  ```
- Load the Model: Use the Hugging Face transformers library to load the model.

  ```python
  from transformers import AutoModelForSequenceClassification, AutoTokenizer

  model = AutoModelForSequenceClassification.from_pretrained("pile-of-law/distilbert-base-uncased-finetuned-eoir_privacy")
  tokenizer = AutoTokenizer.from_pretrained("pile-of-law/distilbert-base-uncased-finetuned-eoir_privacy")
  ```
- Run Inference: Prepare your text input and run it through the model; a sketch that maps the predicted class index to a label name follows this list.

  ```python
  inputs = tokenizer("Your input text here", return_tensors="pt")
  outputs = model(**inputs)
  predictions = outputs.logits.argmax(dim=1)
  ```
- Cloud GPUs: For faster training and inference, consider using cloud GPU services such as AWS, Google Cloud, or Azure; the sketch below shows how to place the model on a GPU when one is available.
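Putting the last two steps together, a minimal sketch (assuming the model and tokenizer loaded above) that runs inference on a GPU when one is available and resolves the predicted class index through the checkpoint's id2label mapping; the exact label names are defined by the checkpoint, not by this sketch:

```python
import torch

# Use a GPU if one is available (e.g. on a cloud instance).
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()

inputs = tokenizer("Your input text here", return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)

# Map the winning class index to its name via the checkpoint config;
# which index means "mask" is defined by the checkpoint itself.
pred = outputs.logits.argmax(dim=1).item()
print(model.config.id2label[pred])
```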
License
This model is licensed under the Apache License 2.0, which allows for use, distribution, and modification, provided that the license terms are followed.