Job Listing Filtering Model
Introduction
The Job Listing Filtering Model is a fine-tuned version of xlm-roberta-base
designed to filter job listings. It was developed with the Hugging Face Transformers library, is compatible with PyTorch, and was created by saattrupdan. The model is publicly available for deployment and inference.
Architecture
The model is based on the xlm-roberta-base
architecture, a multilingual transformer pretrained on text in roughly 100 languages. This makes it well suited to text classification tasks across a wide range of languages.
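As a quick sanity check on the base architecture, its key dimensions can be read from the published configuration. This sketch only inspects the public xlm-roberta-base checkpoint:

from transformers import AutoConfig

# Inspect the base architecture's key dimensions.
config = AutoConfig.from_pretrained("xlm-roberta-base")
print(config.num_hidden_layers)  # 12 transformer layers
print(config.hidden_size)        # 768-dimensional hidden states
print(config.vocab_size)         # 250002-token multilingual vocabulary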
Training
Training Procedure
The model was trained with a learning rate of 2e-05 and a batch size of 8 for both training and evaluation. With gradient accumulation over 4 steps, the effective training batch size was 32 (8 × 4). The optimizer was Adam with fixed beta and epsilon values, paired with a linear learning-rate scheduler, and training ran for 25 epochs.
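As an illustration only (not the author's actual training script), these hyperparameters map onto the Transformers TrainingArguments roughly as follows; the output directory is a hypothetical name, and Adam's beta and epsilon values are left at the library defaults:

from transformers import TrainingArguments

# A sketch of training arguments matching the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="job-listing-filtering-model",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective training batch size: 8 * 4 = 32
    num_train_epochs=25,
    lr_scheduler_type="linear",
)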
Training Results
Training loss decreased markedly over the 25 epochs, reaching a final value of 0.1992, while validation loss improved over the same period, ending at 0.0063.
Framework Versions
- Transformers: 4.17.0
- PyTorch: 1.11.0+cu113
- Datasets: 2.0.0
- Tokenizers: 0.11.6
Guide: Running Locally
- Setup Environment: Ensure you have Python installed. It's recommended to use a virtual environment, for example:
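python -m venv .venv  # the environment name .venv is arbitrary
source .venv/bin/activate  # on Windows: .venv\Scripts\activate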
- Install Dependencies: Use the following command:
pip install transformers==4.17.0 torch==1.11.0 datasets==2.0.0 tokenizers==0.11.6
- Load the Model: Use the Transformers library to load the model:
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("saattrupdan/job-listing-filtering-model")
model = AutoModelForSequenceClassification.from_pretrained("saattrupdan/job-listing-filtering-model")
- Inference: Tokenize your text and run it through the model to obtain class predictions, as in the sketch below.
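A minimal end-to-end sketch; the example text below is made up, and the printed label names depend on the id2label mapping stored in the checkpoint's configuration:

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("saattrupdan/job-listing-filtering-model")
model = AutoModelForSequenceClassification.from_pretrained("saattrupdan/job-listing-filtering-model")

# A hypothetical job listing snippet; any text works as input.
text = "Senior software engineer wanted for a fintech startup in Copenhagen."

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and look up the predicted label name.
probs = logits.softmax(dim=-1)
predicted_id = probs.argmax(dim=-1).item()
print(model.config.id2label[predicted_id], probs.max().item())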
Cloud GPUs
For faster inference and fine-tuning, consider using cloud services with GPU support such as AWS, GCP, or Azure.
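If a CUDA-capable GPU is available, moving the model and input tensors onto it speeds things up considerably. A minimal sketch, continuing from the inference example above:

import torch

# Select a GPU if one is available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
inputs = {k: v.to(device) for k, v in inputs.items()}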
License
The model is distributed under the MIT License, which permits free use, modification, and redistribution.