Introduction

all-MiniLM-L12-v2 is a sentence-embedding model from the Sentence Transformers project that maps sentences and paragraphs to a 384-dimensional dense vector space, making it suitable for tasks such as clustering and semantic search. The model is published in several formats (PyTorch, ONNX, OpenVINO, Safetensors, Rust) and can be used through either the sentence-transformers library or Hugging Face Transformers.

Architecture

The model is based on the pre-trained microsoft/MiniLM-L12-H384-uncased checkpoint and has been fine-tuned on a large collection of sentence pairs. The fine-tuning uses a contrastive learning objective: given one sentence from a pair, the model learns to identify its true partner among the other sentences in the batch, which directly optimizes the embeddings for similarity comparison.

Training

  • Pre-Training: The model starts from the pre-trained microsoft/MiniLM-L12-H384-uncased checkpoint.
  • Fine-Tuning: For each sentence pair in a batch, the cosine similarity between the first sentence and every candidate second sentence is computed, and a cross-entropy loss rewards assigning the highest similarity to the true pair (a sketch of this objective appears below). Training was conducted on a TPU v3-8 with a batch size of 1024 (128 per TPU core), using the AdamW optimizer with a learning rate of 2e-5, for 100,000 steps with a sequence length limit of 128 tokens.

The training data comprises over 1 billion sentence pairs drawn from sources such as Reddit comments, S2ORC, WikiAnswers, PAQ, and Stack Exchange.
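
The following is a minimal PyTorch sketch of this in-batch contrastive objective. It is illustrative only: the function name and the similarity scale factor are assumptions for the example, not values documented for this model.

    import torch
    import torch.nn.functional as F

    def in_batch_contrastive_loss(emb_a, emb_b, scale=20.0):
        # emb_a, emb_b: (batch, dim) embeddings of the two sides of each sentence pair.
        a = F.normalize(emb_a, p=2, dim=1)
        b = F.normalize(emb_b, p=2, dim=1)
        # Cosine similarity of every anchor against every candidate in the batch.
        scores = a @ b.T * scale
        # The true partner of sentence i is candidate i, i.e. the diagonal.
        labels = torch.arange(scores.size(0))
        return F.cross_entropy(scores, labels)

Because every other sentence in the batch serves as a negative example, large batch sizes (such as the 1024 used here) make the task harder and the resulting embeddings more discriminative.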

Guide: Running Locally

  1. Install Sentence Transformers:

    pip install -U sentence-transformers
    
  2. Usage Example:

    from sentence_transformers import SentenceTransformer

    sentences = ["This is an example sentence", "Each sentence is converted"]

    # Download the model from the Hugging Face Hub (cached after the first run).
    model = SentenceTransformer('sentence-transformers/all-MiniLM-L12-v2')

    # Returns a (2, 384) NumPy array, one embedding per input sentence.
    embeddings = model.encode(sentences)
    print(embeddings)
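
    To compare the two sentences, score their embeddings with cosine similarity, for example with the library's util.cos_sim helper:

    from sentence_transformers import util

    # Cosine similarity between the two embeddings; values near 1 indicate
    # semantically similar sentences.
    similarity = util.cos_sim(embeddings[0], embeddings[1])
    print(similarity)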
    
  3. Alternative with Hugging Face Transformers (this reproduces the same pipeline manually: tokenize, run the transformer, mean-pool the token embeddings, and normalize):

    from transformers import AutoTokenizer, AutoModel
    import torch
    import torch.nn.functional as F

    def mean_pooling(model_output, attention_mask):
        # Average the token embeddings, ignoring padding tokens via the attention mask.
        token_embeddings = model_output[0]
        input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
        return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

    sentences = ['This is an example sentence', 'Each sentence is converted']

    tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L12-v2')
    model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L12-v2')

    encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

    # Forward pass without gradient tracking (inference only).
    with torch.no_grad():
        model_output = model(**encoded_input)

    # Pool token embeddings into one vector per sentence, then L2-normalize
    # so that dot products equal cosine similarities.
    sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
    sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

    print("Sentence embeddings:")
    print(sentence_embeddings)
    
  4. Cloud GPUs: For large-scale inference, consider using cloud GPUs from providers such as AWS, Google Cloud, or Azure.
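
As an illustration of the clustering use case mentioned in the introduction, the embeddings plug directly into standard clustering tools. Below is a minimal sketch using scikit-learn's KMeans; scikit-learn is assumed to be installed, and the sentences and cluster count are arbitrary examples:

    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import KMeans

    sentences = [
        "The cat sits on the mat",
        "A kitten naps on the rug",
        "Stocks fell sharply on Monday",
        "Markets dropped at the start of the week",
    ]

    model = SentenceTransformer('sentence-transformers/all-MiniLM-L12-v2')
    embeddings = model.encode(sentences)

    # Group the 384-dimensional embeddings into two clusters.
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)
    for sentence, label in zip(sentences, kmeans.labels_):
        print(label, sentence)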

License

The model is distributed under the Apache-2.0 license, allowing for both personal and commercial use, modification, and distribution.
