ALBERT-XLARGE-VITAMINC-MNLI (tals/albert-xlarge-vitaminc-mnli)

Introduction

ALBERT-XLARGE-VITAMINC-MNLI is a text classification model for fact verification with contrastive evidence. It accompanies the VitaminC benchmark, which aims to make fact verification models robust by training them on challenging, contrastive examples in which small edits to the evidence flip the correct verdict.

Architecture

The model is based on ALBERT, a parameter-efficient variant of the Transformer architecture that shares weights across layers and factorizes the embedding matrix. This xlarge checkpoint is fine-tuned on MultiNLI (the MNLI task from GLUE) and tals/vitaminc to verify claims against evolving sources of evidence.
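As a quick sanity check, the checkpoint's configuration can be inspected to confirm the ALBERT variant and the number of classification labels. This is a minimal sketch; only the model ID comes from this card, and everything printed is read from the Hub at runtime.

```python
# Sketch: inspect the checkpoint configuration from the Hugging Face Hub.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("tals/albert-xlarge-vitaminc-mnli")
print(config.model_type)                            # expected: "albert"
print(config.num_hidden_layers, config.hidden_size) # xlarge dimensions
print(config.num_labels)                            # number of output classes
```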

Training

The model is trained on the VitaminC benchmark, which consists of more than 400,000 claim-evidence pairs. The dataset includes both real and synthetic Wikipedia revisions, designed to test a model's sensitivity to minor factual changes. Training on this data has been reported to improve robustness to adversarial fact verification by 10% and to adversarial natural language inference (NLI) examples by 6%.
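For inspection, the VitaminC data can be loaded through the datasets library. This is a sketch under stated assumptions: the split name and the expected example fields should be checked against the tals/vitaminc dataset card.

```python
# Sketch: peek at the VitaminC training data.
# The "train" split name and example fields are assumptions;
# see the tals/vitaminc dataset card for the exact schema.
from datasets import load_dataset

vitaminc = load_dataset("tals/vitaminc", split="train")
print(len(vitaminc), "training examples")
print(vitaminc[0])  # expected fields include a claim, its evidence, and a label
```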

Guide: Running Locally

  1. Clone the Repository: Download the model code and training scripts from the VitaminC GitHub repository.
  2. Set Up the Environment: Install dependencies with a package manager such as pip, including the transformers library and either PyTorch or TensorFlow.
  3. Download Pre-trained Weights: Obtain the weights for tals/albert-xlarge-vitaminc-mnli from the Hugging Face Hub; from_pretrained downloads and caches them automatically on first use.
  4. Run Inference: Classify claim-evidence pairs with the provided scripts, or directly through transformers as sketched below.
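The following is a minimal inference sketch using transformers. The model ID matches the Hub repository named above; the (evidence, claim) input order follows the usual MNLI premise/hypothesis convention and is an assumption to verify against the model card, while the label names are read from the checkpoint's own config.

```python
# Minimal sketch: verify a claim against evidence with
# tals/albert-xlarge-vitaminc-mnli. The (evidence, claim) pair order
# is an assumption; check the model card for the exact convention.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "tals/albert-xlarge-vitaminc-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

evidence = "VitaminC contains more than 400,000 claim-evidence pairs."
claim = "The VitaminC benchmark has over 400,000 claim-evidence pairs."

inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# id2label comes from the checkpoint's config, so the printed names
# reflect whatever mapping the authors stored there.
probs = torch.softmax(logits, dim=-1).squeeze()
for i, p in enumerate(probs.tolist()):
    print(f"{model.config.id2label[i]}: {p:.3f}")
```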

For faster inference, consider running on a cloud GPU instance, such as those offered by AWS EC2, Google Cloud Platform, or Azure.
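When CUDA is available, moving the model and the tokenized batch to the GPU (model.to("cuda") and sending each input tensor to the same device) is usually enough to accelerate the sketch above.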

License

Use of this model and its associated datasets should be attributed by citing the paper "Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence" by Tal Schuster et al. (NAACL 2021). A BibTeX entry is given below.
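A BibTeX entry along the following lines can be used; the title, authors, venue, and year match the paper named above, and any additional fields (pages, URL) should be taken from the ACL Anthology.

```bibtex
@inproceedings{schuster-etal-2021-get,
    title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
    author = "Schuster, Tal and Fisch, Adam and Barzilay, Regina",
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    year = "2021",
    publisher = "Association for Computational Linguistics"
}
```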
