t5-base-qg-hl

valhalla

Introduction

The T5-BASE-QG-HL model is a variant of the T5 model trained specifically for answer-aware question generation tasks. It is built using the PyTorch library and focuses on generating questions based on highlighted answer spans within a text.

Architecture

The model utilizes the T5 architecture described in "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" (arXiv:1910.10683). T5 frames every task as text-to-text generation, which lets the model transform a passage containing a highlighted answer span into a relevant question.

Training

The model has been trained on the Stanford Question Answering Dataset (SQuAD), which is commonly used for question answering and generation tasks. During training, each answer span is wrapped in special <hl> tokens, and the input ends with a </s> token to mark the end of the sequence.
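As an illustration, the highlight-format input can be constructed as follows. This is a minimal sketch: the `generate question:` task prefix and the exact spacing are assumptions based on the accompanying repository's examples, not guarantees from this card.

```python
# Build the highlight-format input assumed by the model: the answer span is
# wrapped in <hl> tokens and the sequence ends with </s>.
context = "42 is the answer to life, universe and everything."
answer = "42"

# Wrap the first occurrence of the answer span in <hl> tokens.
highlighted = context.replace(answer, f"<hl> {answer} <hl>", 1)

# The "generate question:" task prefix is an assumption from the repository's examples.
model_input = f"generate question: {highlighted} </s>"
print(model_input)
# generate question: <hl> 42 <hl> is the answer to life, universe and everything. </s>
```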

Guide: Running Locally

  1. Clone the Repository:
    Clone the GitHub repository from patil-suraj/question_generation.

  2. Install Dependencies:
    Ensure you have Python and PyTorch installed. Install additional dependencies as specified in the repository's README.

  3. Run the Model:
    Use the provided pipeline to generate questions:

    # Run from the root of the cloned question_generation repository,
    # which provides the `pipelines` module.
    from pipelines import pipeline

    nlp = pipeline("question-generation", model="valhalla/t5-base-qg-hl")
    result = nlp("42 is the answer to life, universe and everything.")
    print(result)
    # [{'answer': '42', 'question': 'What is the answer to life, universe and everything?'}]
    
  4. Use Cloud GPUs:
    For faster inference, consider running the model on a cloud platform with GPU support, such as Google Colab or AWS.
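Before loading the model on such a platform, it can help to confirm that a GPU is actually visible to PyTorch. A minimal sketch, assuming PyTorch is installed:

```python
import torch

# Select CUDA when a GPU is available (e.g. on Colab or an AWS GPU instance),
# otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on: {device}")
```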

License

The model is released under the MIT License, allowing for flexibility in its use and modification.
