MEETING_SUMMARY

knkarthick

MEETING_SUMMARY Model

Introduction

The MEETING_SUMMARY model is a transformer-based model specialized for abstractive summarization of meeting transcripts and similar dialogue-heavy text. It was fine-tuned from the 'facebook/bart-large-xsum' checkpoint on datasets such as the AMI Meeting Corpus, SAMSUM, DIALOGSUM, and XSUM.

Architecture

The model utilizes the BART (Bidirectional and Auto-Regressive Transformers) architecture, which is particularly effective for text generation tasks such as summarization. It supports both PyTorch and TensorFlow frameworks and is designed to handle text2text generation and seq2seq tasks.
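
If you prefer to work below the pipeline abstraction, the checkpoint can be loaded as an ordinary seq2seq model. The following is a minimal PyTorch sketch, assuming the transformers library is installed; the generation settings (beam search, max_length=128) are illustrative choices, not values documented for this model.

    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    # Load the tokenizer and the BART-based seq2seq model from the Hugging Face Hub
    tokenizer = AutoTokenizer.from_pretrained("knkarthick/MEETING_SUMMARY")
    model = AutoModelForSeq2SeqLM.from_pretrained("knkarthick/MEETING_SUMMARY")

    # Encode a transcript, generate a summary with beam search, and decode it
    inputs = tokenizer("Your meeting transcript here.", return_tensors="pt", truncation=True)
    summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
    print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))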

Training

The model was trained on a diverse mix of datasets, including CNNDaily, NewYorkDaily, SAMSUM, and DIALOGSUM, among others. It was evaluated with ROUGE metrics and shows competitive scores on both the validation and test splits, indicating that it produces coherent and relevant summaries.
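
The model card does not ship an evaluation script, but ROUGE scores of this kind can be reproduced with the Hugging Face evaluate package. A hedged sketch, assuming the evaluate and rouge_score packages are installed and that gold reference summaries are available; the example strings below are placeholders, not data from the training sets.

    import evaluate

    # Load the ROUGE metric (backed by the rouge_score package)
    rouge = evaluate.load("rouge")

    predictions = ["The team agreed to ship the release on Friday."]      # model outputs
    references = ["The team decided to release the product on Friday."]   # gold summaries

    # Returns rouge1, rouge2, rougeL, and rougeLsum F-measures
    scores = rouge.compute(predictions=predictions, references=references)
    print(scores)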

Guide: Running Locally

To run the MEETING_SUMMARY model locally, follow these steps:

  1. Install Transformers Library: Ensure you have the Hugging Face Transformers library installed.

    pip install transformers
    
  2. Load the Model: Use the Transformers pipeline to load the summarization model.

    from transformers import pipeline
    summarizer = pipeline("summarization", model="knkarthick/MEETING_SUMMARY")
    
  3. Summarize Text: Pass your text to the pipeline to generate a summary. The pipeline returns a list of dictionaries, each with a "summary_text" field; a fuller sketch with explicit generation options follows this list.

    text = "Your input text here."
    summary = summarizer(text)
    print(summary[0]["summary_text"])  # the pipeline returns a list of dicts
    
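Putting the steps together, the following sketch passes a short multi-speaker transcript along with a few generation options. The transcript and the min_length/max_length values are illustrative assumptions, not settings published with the model.

    from transformers import pipeline

    summarizer = pipeline("summarization", model="knkarthick/MEETING_SUMMARY")

    # A toy transcript; real meeting transcripts will typically be much longer
    transcript = (
        "John: We need to finalize the budget by Friday. "
        "Mary: I can have the draft numbers ready by Thursday. "
        "John: Great, let's review them together on Friday morning."
    )

    # Length bounds are illustrative; tune them to your transcripts
    result = summarizer(transcript, min_length=10, max_length=60, do_sample=False)
    print(result[0]["summary_text"])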

For improved performance, especially on large datasets, consider utilizing cloud-based GPUs such as AWS EC2 instances with GPU support or Google Cloud's GPU offerings.
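
As one illustration, the summarization pipeline can be placed on a GPU by passing a device index; the sketch below assumes a CUDA-capable machine with PyTorch installed and falls back to the CPU otherwise.

    import torch
    from transformers import pipeline

    # device=0 selects the first GPU; -1 keeps the pipeline on the CPU
    device = 0 if torch.cuda.is_available() else -1
    summarizer = pipeline("summarization", model="knkarthick/MEETING_SUMMARY", device=device)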

License

The MEETING_SUMMARY model is released under the Apache 2.0 License, which permits broad use, modification, and redistribution, provided the license's attribution and notice requirements are met.
