S-PubMedBert-MS-MARCO

pritamdeka

Introduction

S-PubMedBert-MS-MARCO is a sentence-transformers model that maps sentences and paragraphs to a 768-dimensional dense vector space, making it well suited to clustering and semantic search over medical and health text. It is a fine-tuned version of microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext, trained on the MS MARCO dataset.
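
As a quick illustration of the semantic-search use case, the sketch below ranks two passages against a query by cosine similarity. The query and passages are invented examples; util.cos_sim is the sentence-transformers cosine-similarity helper.

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer('pritamdeka/S-PubMedBert-MS-MARCO')

    # Hypothetical query and passages, chosen only for illustration
    query = "What are the symptoms of anemia?"
    passages = [
        "Anemia can cause fatigue, pale skin, and shortness of breath.",
        "Hypertension is often called the silent killer.",
    ]

    query_emb = model.encode(query, convert_to_tensor=True)
    passage_embs = model.encode(passages, convert_to_tensor=True)

    # Cosine similarity between the query and each passage;
    # the anemia passage should score higher
    scores = util.cos_sim(query_emb, passage_embs)
    print(scores)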

Architecture

The model architecture comprises a SentenceTransformer with two main components (an inspection sketch follows this list):

  1. Transformer: a BertModel configured with max_seq_length of 350 and do_lower_case set to False.
  2. Pooling: mean pooling over token embeddings, with a word_embedding_dimension of 768.
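
A minimal way to confirm this structure is to load the model and print it; the module summary in the comments below is abridged and may differ slightly across sentence-transformers versions.

    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer('pritamdeka/S-PubMedBert-MS-MARCO')
    print(model)
    # Expected module summary (abridged):
    # SentenceTransformer(
    #   (0): Transformer({'max_seq_length': 350, 'do_lower_case': False})
    #        with Transformer model: BertModel
    #   (1): Pooling({'word_embedding_dimension': 768,
    #                 'pooling_mode_mean_tokens': True, ...})
    # )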

Training

The model was trained with a DataLoader of length 31,434 and a batch size of 16, using the MarginMSELoss function with the following parameters (a minimal training sketch follows this list):

  • Epochs: 2
  • Evaluation Steps: 10,000
  • Optimizer: AdamW with learning rate 2e-05
  • Scheduler: WarmupLinear with 1,000 warmup steps
  • Weight Decay: 0.01
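
A minimal sketch of this setup, assuming MS MARCO triplets (query, positive passage, negative passage) have already been prepared, with each label being the teacher cross-encoder score margin required by MarginMSELoss; the example triplet and its label are placeholders. model.fit uses AdamW and a WarmupLinear scheduler by default, matching the parameters above.

    from torch.utils.data import DataLoader
    from sentence_transformers import SentenceTransformer, models, InputExample, losses

    # Base model: PubMedBERT encoder plus mean pooling, as described in Architecture
    word_embedding_model = models.Transformer(
        'microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext',
        max_seq_length=350)
    pooling_model = models.Pooling(
        word_embedding_model.get_word_embedding_dimension(),
        pooling_mode_mean_tokens=True)
    model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

    # Placeholder triplet; the label is the teacher margin
    # score(query, positive) - score(query, negative)
    train_examples = [
        InputExample(
            texts=['what causes anemia',
                   'Anemia is caused by a shortage of healthy red blood cells ...',
                   'The history of the Roman Empire ...'],
            label=8.2),
    ]
    train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
    train_loss = losses.MarginMSELoss(model)

    model.fit(train_objectives=[(train_dataloader, train_loss)],
              epochs=2,
              evaluation_steps=10000,
              warmup_steps=1000,
              optimizer_params={'lr': 2e-5},
              weight_decay=0.01)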

Guide: Running Locally

To use the model, follow these steps:

  1. Install sentence-transformers:

    pip install -U sentence-transformers
    
  2. Using Sentence-Transformers:

    from sentence_transformers import SentenceTransformer

    sentences = ["This is an example sentence", "Each sentence is converted"]

    model = SentenceTransformer('pritamdeka/S-PubMedBert-MS-MARCO')
    embeddings = model.encode(sentences)  # numpy array of shape (2, 768)
    print(embeddings)
    
  3. Using Hugging Face Transformers (a sanity check comparing both approaches follows this list):

    from transformers import AutoTokenizer, AutoModel
    import torch

    # Mean pooling: average the token embeddings, weighting by the attention
    # mask so that padding tokens do not contribute
    def mean_pooling(model_output, attention_mask):
        token_embeddings = model_output[0]  # first element: per-token embeddings
        input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
        return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

    sentences = ['This is an example sentence', 'Each sentence is converted']
    tokenizer = AutoTokenizer.from_pretrained('pritamdeka/S-PubMedBert-MS-MARCO')
    model = AutoModel.from_pretrained('pritamdeka/S-PubMedBert-MS-MARCO')
    encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

    # Compute token embeddings without tracking gradients
    with torch.no_grad():
        model_output = model(**encoded_input)

    # Pool token embeddings into one 768-dimensional vector per sentence
    sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
    print("Sentence embeddings:")
    print(sentence_embeddings)
    
  4. Cloud GPUs: For efficient training or inference, consider using cloud GPUs from providers like AWS, Google Cloud, or Azure.
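
As a sanity check, and assuming the snippets from steps 2 and 3 were run in the same session so that both variables are in scope, the two approaches should produce nearly identical embeddings:

    import numpy as np

    # embeddings: numpy array from the sentence-transformers example (step 2)
    # sentence_embeddings: torch tensor from the transformers example (step 3)
    hf_emb = sentence_embeddings.numpy()
    cos = (embeddings * hf_emb).sum(axis=1) / (
        np.linalg.norm(embeddings, axis=1) * np.linalg.norm(hf_emb, axis=1))
    print(cos)  # each value should be very close to 1.0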

License

The model is released under the CC BY-NC 2.0 license, which permits non-commercial use with attribution.
