MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM-AMI

knkarthick

Introduction

The MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM-AMI is a fine-tuned model based on facebook/bart-large-xsum, designed for abstractive text summarization. It is optimized for summarizing meeting transcripts and performs well on datasets like XSum, SAMSum, DialogSum, and the AMI Meeting Corpus.

Architecture

The model is built on the BART (Bidirectional and Auto-Regressive Transformers) architecture, specifically the bart-large variant. It handles text-to-text generation tasks using a sequence-to-sequence (seq2seq) approach: an encoder reads the input and a decoder generates the summary token by token. The model supports both PyTorch and TensorFlow, and its weights are available in the Safetensors format for secure model storage.
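The seq2seq flow can be illustrated schematically: the encoder processes the full input once, and the decoder then emits output tokens one at a time, each conditioned on what it has already generated. The toy scoring function below is a stand-in for BART's real encoder-decoder network, purely to show the control flow of greedy decoding.

```python
# Schematic sketch of seq2seq greedy decoding. The "encoder" and "decoder"
# here are toy stand-ins, not the real BART network.

def encode(source_tokens):
    # A real encoder produces contextual vectors; here we just keep the tokens.
    return source_tokens

def decoder_scores(encoder_state, generated):
    # Toy "decoder": copy the next unseen source token, then emit end-of-sequence.
    # A real decoder returns a probability distribution over the vocabulary.
    position = len(generated)
    if position < len(encoder_state):
        return {encoder_state[position]: 1.0, "<eos>": 0.0}
    return {"<eos>": 1.0}

def greedy_decode(source_tokens, max_steps=10):
    encoder_state = encode(source_tokens)
    generated = []
    for _ in range(max_steps):
        scores = decoder_scores(encoder_state, generated)
        token = max(scores, key=scores.get)  # greedy: pick the top-scoring token
        if token == "<eos>":
            break
        generated.append(token)
    return generated

print(greedy_decode(["meeting", "summary"]))  # → ['meeting', 'summary']
```

In the real model, greedy decoding is usually replaced by beam search inside model.generate, but the autoregressive loop is the same.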

Training

The model was fine-tuned on datasets such as CNN Daily, New York Daily, XSum, SAMSum, DialogSum, and the AMI Meeting Corpus. The training process focused on optimizing the model for abstractive text summarization, with performance evaluated using ROUGE scores. Specific validation and test ROUGE values are not reported.
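ROUGE measures n-gram overlap between a generated summary and a reference summary. A minimal ROUGE-1 F1 computation (unigram overlap only; real evaluations typically also report ROUGE-2 and ROUGE-L, usually via a library such as rouge_score) looks like this:

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    # ROUGE-1: F1 over overlapping unigrams between candidate and reference.
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped count of matching unigrams
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the meeting covered budget plans",
                      "the meeting was about budget plans"), 3))  # → 0.727
```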

Guide: Running Locally

To run the model locally:

  1. Install the transformers library from Hugging Face.
  2. Use the following Python code snippet to create a summarization pipeline:
    from transformers import pipeline

    # Load the fine-tuned checkpoint into a summarization pipeline
    summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM-AMI")

    text = "Your text here"
    summary = summarizer(text)
    print(summary[0]["summary_text"])
    
  3. Replace "Your text here" with the text you want to summarize.

For optimal performance, especially with large datasets, consider using cloud GPU services such as AWS, Google Cloud, or Azure.
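BART-large models accept a limited input length (on the order of 1024 tokens), so long meeting transcripts usually need to be split into chunks that are summarized separately and then combined. The splitter below is a simple sketch; word counts only approximate tokenizer lengths, and the 700-word limit is a conservative assumption, not a documented value.

```python
def chunk_transcript(text, max_words=700):
    # Split a long transcript into roughly max_words-sized pieces so each
    # chunk stays under the model's input limit. Word counts are only an
    # approximation of tokenizer token counts; 700 is an assumed safe margin.
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# Each chunk can then be passed to the summarizer, and the partial
# summaries concatenated (or summarized again) into a final summary.
chunks = chunk_transcript("word " * 1500)
print(len(chunks))  # → 3
```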

License

The model is released under the Apache 2.0 license, which permits free use, modification, and distribution provided the license and attribution notices are retained.
