Fake News Bert Detect

jy46604790

Introduction

The Fake-News-Bert-Detect model is designed to classify news articles as either fake or real. It utilizes the 'roberta-base' transformer model and has been trained on over 40,000 news articles from various media sources. The model accepts text inputs up to 500 words, automatically truncating any excess.
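
Because anything beyond that limit is discarded, it can be useful to check how much of a long article will actually be considered. The snippet below is a rough, word-level sketch only; the 'article' variable is a placeholder, and the model performs its own truncation internally.

    # Rough preview of the model's ~500-word input window.
    # 'article' is a placeholder; the model truncates internally on its own.
    article = "Full text of a long news article ..."
    words = article.split()
    if len(words) > 500:
        print(f"{len(words)} words; roughly only the first 500 will be used.")
        article = " ".join(words[:500])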

Architecture

The model is based on the 'roberta-base' architecture, a robustly optimized variant of the BERT transformer. It is fine-tuned for binary text classification to detect fake news and outputs one of two labels (an example of reading a prediction follows the list):

  • LABEL_0: Fake news
  • LABEL_1: Real news
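
A single prediction from the classification pipeline comes back as a list containing one dictionary with the label and a confidence score. The snippet below is illustrative only; the 'result' value mirrors the structure returned by the pipeline shown in the guide further down.

    # Illustrative only: 'result' mirrors the output of the
    # text-classification pipeline shown in the guide below.
    result = [{"label": "LABEL_0", "score": 0.9973}]

    verdict = "fake news" if result[0]["label"] == "LABEL_0" else "real news"
    print(f"Predicted: {verdict} (confidence {result[0]['score']:.2%})")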

Training

The model was fine-tuned from the 'roberta-base' checkpoint on a dataset of over 40,000 news articles. This extensive dataset helps the model differentiate effectively between fake and real news, and the fine-tuning process was aimed at improving classification accuracy for this specific task.

Guide: Running Locally

To run the model locally, follow these steps:

  1. Install the Transformers Library:

    pip install transformers
    
  2. Load the Model:

    from transformers import pipeline

    # Download the fine-tuned model and tokenizer from the Hugging Face Hub
    MODEL = "jy46604790/Fake-News-Bert-Detect"
    clf = pipeline("text-classification", model=MODEL, tokenizer=MODEL)
    
  3. Input Text Data:

    text = "Your news article text here..."
    
  4. Obtain Results:

    result = clf(text)
    print(result)
    
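The four steps above can also be combined into one script. As a minimal sketch (the article texts below are placeholders), the pipeline accepts a list of strings, which is convenient for classifying several articles in a single call:

    from transformers import pipeline

    MODEL = "jy46604790/Fake-News-Bert-Detect"
    clf = pipeline("text-classification", model=MODEL, tokenizer=MODEL)

    # Placeholder texts; replace with real article contents.
    articles = [
        "First news article text here...",
        "Second news article text here...",
    ]

    # Passing a list classifies each article and returns one result per input.
    for article, prediction in zip(articles, clf(articles)):
        print(article[:40], "->", prediction["label"], round(prediction["score"], 4))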

For enhanced performance, especially with large datasets, consider using cloud-based GPUs such as AWS EC2 with NVIDIA GPUs, Google Cloud GPUs, or Azure GPU instances.
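
If a GPU is available, the pipeline can be placed on it through the 'device' argument. The sketch below assumes a CUDA-capable GPU and a PyTorch build with CUDA support:

    import torch
    from transformers import pipeline

    MODEL = "jy46604790/Fake-News-Bert-Detect"

    # device=0 selects the first CUDA GPU; -1 falls back to the CPU.
    device = 0 if torch.cuda.is_available() else -1
    clf = pipeline("text-classification", model=MODEL, tokenizer=MODEL, device=device)

    print(clf("Your news article text here..."))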

License

The model is licensed under the Apache License 2.0. This permissive license allows users to freely use, modify, and distribute the model, subject to the terms of the license.
