bert-large-uncased-whole-word-masking-squad2

deepset

Introduction

The bert-large-uncased-whole-word-masking-squad2 model, developed by deepset, is a BERT-large model fine-tuned for extractive question answering. It is trained on the SQuAD 2.0 dataset and supports English-language applications.

Architecture

This model uses the BERT-large architecture pre-trained with whole-word masking, a variant in which all WordPiece sub-tokens of a word are masked together rather than independently. Fine-tuned for question answering, it leverages this pre-training to handle the nuances of the English language effectively.
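
To make whole-word masking concrete, the sketch below tokenizes a phrase with this model's WordPiece tokenizer. When a word splits into several sub-tokens, whole-word masking masks all of them as a unit during pre-training; the printed split is illustrative, as the exact tokenization depends on the vocabulary:

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(
        "deepset/bert-large-uncased-whole-word-masking-squad2"
    )

    # A word such as "extractive" may split into several WordPiece pieces.
    print(tokenizer.tokenize("extractive question answering"))
    # e.g. ['extract', '##ive', 'question', 'answering']
    # Whole-word masking would replace 'extract' and '##ive' together with
    # [MASK]; standard token masking could mask '##ive' alone.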

Training

The model was fine-tuned for extractive question answering on SQuAD 2.0, which extends the original SQuAD task with unanswerable questions. Reported evaluation metrics are Exact Match (EM) and F1 scores on datasets such as SQuAD, AdversarialQA, and SQuADShifts, demonstrating robust performance at extracting precise answer spans from given texts.
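
Both metrics compare a predicted answer string against a gold answer after light normalization. The sketch below follows the standard SQuAD evaluation conventions (lowercasing, stripping punctuation and articles); it is a minimal illustrative reimplementation, not deepset's evaluation script:

    import re
    import string
    from collections import Counter

    def normalize(text):
        # SQuAD convention: lowercase, drop punctuation and articles, collapse whitespace.
        text = text.lower()
        text = "".join(ch for ch in text if ch not in set(string.punctuation))
        text = re.sub(r"\b(a|an|the)\b", " ", text)
        return " ".join(text.split())

    def exact_match(prediction, gold):
        return float(normalize(prediction) == normalize(gold))

    def f1(prediction, gold):
        pred_tokens = normalize(prediction).split()
        gold_tokens = normalize(gold).split()
        common = Counter(pred_tokens) & Counter(gold_tokens)
        overlap = sum(common.values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred_tokens)
        recall = overlap / len(gold_tokens)
        return 2 * precision * recall / (precision + recall)

    print(exact_match("The BERT model", "bert model"))               # 1.0 after normalization
    print(f1("a popular language", "popular programming language"))  # 0.8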

Guide: Running Locally

To run the model locally, you can use frameworks like Haystack or Transformers:

Using Haystack

  1. Install Haystack and necessary dependencies:
    pip install haystack-ai "transformers[torch,sentencepiece]"
    
  2. Load and run the model with Haystack:
    from haystack import Document
    from haystack.components.readers import ExtractiveReader

    # Wrap the text to search over in Haystack Document objects.
    docs = [Document(content="Python is a popular programming language")]
    reader = ExtractiveReader(model="deepset/bert-large-uncased-whole-word-masking-squad2")
    reader.warm_up()  # download and load the model weights
    result = reader.run(query="What is a popular programming language?", documents=docs)
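
  3. Inspect the returned answers. A minimal sketch, assuming the Haystack 2.x ExtractiveReader output format, in which each ExtractedAnswer carries the span text (data) and a confidence score, and a None span represents the "no answer" option:
    # Answers come back ranked by confidence under the "answers" key.
    for answer in result["answers"]:
        print(answer.data, answer.score)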
    

Using Transformers

  1. Install Transformers:
    pip install transformers
    
  2. Load the model and tokenizer to make predictions:
    from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

    model_name = "deepset/bert-large-uncased-whole-word-masking-squad2"

    # Load the model and tokenizer once, then reuse them in the pipeline.
    model = AutoModelForQuestionAnswering.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    nlp = pipeline('question-answering', model=model, tokenizer=tokenizer)
    QA_input = {
        'question': 'Why is model conversion important?',
        'context': 'The option to convert models between FARM and transformers...'
    }
    res = nlp(QA_input)
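
  3. Interpret the prediction. The pipeline returns a dictionary with the answer span, a confidence score, and character offsets. Since the model was trained on SQuAD 2.0, which includes unanswerable questions, the pipeline's handle_impossible_answer flag can allow an empty answer when no span fits (a minimal sketch):
    # res contains 'answer', 'score', 'start', and 'end'.
    print(res['answer'], res['score'])

    # Allow the no-answer prediction for unanswerable questions.
    res = nlp(QA_input, handle_impossible_answer=True)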
    

Suggested Cloud GPUs

For intensive workloads, consider cloud GPUs from providers such as AWS, Google Cloud Platform, or Azure; BERT-large is a sizeable model and benefits substantially from GPU inference.
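
If a CUDA-capable GPU is available, the Transformers pipeline can be placed on it directly. A minimal sketch, assuming a PyTorch installation with CUDA support:

    import torch
    from transformers import pipeline

    model_name = "deepset/bert-large-uncased-whole-word-masking-squad2"
    # device=0 selects the first CUDA GPU; -1 falls back to CPU.
    device = 0 if torch.cuda.is_available() else -1
    nlp = pipeline('question-answering', model=model_name, tokenizer=model_name, device=device)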

License

The model is licensed under the CC-BY-4.0 license, allowing for sharing and adaptation with appropriate credit.
