bert-base-finetuned-ynat

Maintained by bash1130 on the Hugging Face Hub.

Introduction

The bert-base-finetuned-ynat model is a fine-tuned version of klue/bert-base on the KLUE dataset, configured for the YNAT task (topic classification of Korean news headlines). It achieves a loss of 0.3609 and an F1 score of 0.8712 on the evaluation set.

Architecture

This model is based on the BERT architecture: the pretrained klue/bert-base encoder with a sequence-classification head on top, fine-tuned on the YNAT portion of the KLUE dataset.
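
As an illustration of how such a checkpoint is typically constructed, the sketch below attaches a classification head to klue/bert-base. The num_labels value of 7 reflects YNAT's seven topic categories and is an assumption, not something stated on this card.

```python
from transformers import AutoConfig, AutoModelForSequenceClassification

# Assumption: YNAT defines 7 topic labels (IT/science, economy, society,
# life & culture, world, sports, politics); not stated on this card.
config = AutoConfig.from_pretrained("klue/bert-base", num_labels=7)
model = AutoModelForSequenceClassification.from_pretrained(
    "klue/bert-base", config=config
)
```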

Training

The model was trained using the following hyperparameters (a TrainingArguments sketch follows the list):

  • Learning Rate: 2e-05
  • Train Batch Size: 256
  • Eval Batch Size: 256
  • Seed: 42
  • Optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
  • Learning Rate Scheduler: Linear
  • Number of Epochs: 5
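
The same settings can be expressed as a transformers TrainingArguments object. This is a minimal sketch, not the author's actual training script; output_dir and the evaluation cadence are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-finetuned-ynat",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,    # Adam settings as reported
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumed; the card reports per-epoch metrics
)
```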

Training results showed an improvement in validation loss and F1 score over five epochs, with the best F1 score of 0.8712 achieved at epoch 3.
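
The card does not say how the F1 score is averaged; macro F1 is the standard metric for KLUE/YNAT, so a compute_metrics hook along the following lines is a plausible reconstruction rather than the author's exact code.

```python
import numpy as np
from datasets import load_metric  # datasets 2.4.0, as pinned below

f1_metric = load_metric("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Assumption: macro averaging, the usual KLUE/YNAT convention.
    return f1_metric.compute(predictions=preds, references=labels, average="macro")
```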

Guide: Running Locally

  1. Install Dependencies: Ensure you have Transformers 4.21.0, PyTorch 1.12.0+cu113, Datasets 2.4.0, and Tokenizers 0.12.1 installed.

  2. Download the Model: Access the model from the Hugging Face Model Hub.

  3. Run Inference: Use the model in a Python environment to perform text classification; see the example after this list.

  4. Hardware Recommendations: Consider using cloud-based GPUs such as those offered by AWS, Google Cloud, or Azure for efficient processing.
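
Putting the steps together, a minimal inference sketch might look like the following. The hub ID bash1130/bert-base-finetuned-ynat is assumed from the maintainer name above, and the sample headline is illustrative; substitute your own repo name and inputs.

```python
# pip install transformers==4.21.0 datasets==2.4.0 tokenizers==0.12.1 torch
from transformers import pipeline

# Assumption: the checkpoint is published under this hub ID.
classifier = pipeline(
    "text-classification",
    model="bash1130/bert-base-finetuned-ynat",
)

# YNAT inputs are Korean news headlines.
print(classifier("삼성전자, 2분기 영업이익 14조원 돌파"))
```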

License

More information needed
