paraphrase-multilingual-mpnet-base-v2

sentence-transformers

Introduction

The paraphrase-multilingual-mpnet-base-v2 is a Sentence Transformers model designed to map sentences and paragraphs to a 768-dimensional dense vector space. It is suitable for tasks such as clustering and semantic search and supports 50 languages.
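
As a hedged illustration of the clustering use case, the sketch below encodes a few multilingual sentences and groups them with scikit-learn's KMeans. The example sentences and the use of scikit-learn are illustrative assumptions, not part of the model card.

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Illustrative sentences (not from the model card); the model is multilingual,
# so paraphrases in different languages should land in the same cluster.
sentences = [
    "The weather is lovely today.",
    "Il fait très beau aujourd'hui.",         # French paraphrase of the first sentence
    "The new phone has a great camera.",
    "Das neue Handy hat eine tolle Kamera.",  # German paraphrase of the third sentence
]

model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-mpnet-base-v2')
embeddings = model.encode(sentences)  # shape: (4, 768)

# Group the 768-dimensional embeddings into two clusters
labels = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)
for sentence, label in zip(sentences, labels):
    print(label, sentence)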

Architecture

The model is composed of two main components:

  • A Transformer model: Specifically, an XLMRobertaModel configured with a maximum sequence length of 128 and no lowercasing.
  • A Pooling layer: This performs mean pooling of token embeddings to generate sentence embeddings.
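
One way to confirm this composition is to print the loaded model, which lists the Transformer and Pooling modules along with their configuration. This minimal sketch assumes the sentence-transformers library is installed (see Requirements below).

from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-mpnet-base-v2')

# Printing the model shows its two modules: Transformer (XLMRobertaModel) and Pooling (mean)
print(model)
print(model.max_seq_length)                      # 128
print(model.get_sentence_embedding_dimension())  # 768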

Training

The model was trained using the Sentence-BERT approach, which leverages Siamese BERT-Networks to generate sentence embeddings. For a detailed explanation, refer to the paper "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks" by Nils Reimers and Iryna Gurevych.
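
The exact data and loss used for this checkpoint are not detailed here. The sketch below only illustrates the general siamese fine-tuning pattern from the Sentence-BERT framework, using the library's classic fit API; the sentence pairs are made up and MultipleNegativesRankingLoss is an illustrative choice, not necessarily the objective used to train this model.

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Illustrative paraphrase pairs; the real training data is not shown here
train_examples = [
    InputExample(texts=["A man is playing guitar.", "Someone plays the guitar."]),
    InputExample(texts=["The cat sits on the mat.", "A cat is sitting on a mat."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-mpnet-base-v2')

# Both sentences in a pair pass through the same network (the siamese setup);
# the loss pulls paraphrase embeddings together and pushes other pairs apart.
train_loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)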

Guide: Running Locally

Requirements

To use the model, you will need to install the sentence-transformers library:

pip install -U sentence-transformers

Basic Usage

Using Sentence Transformers

from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
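
As a quick follow-up to the snippet above (not part of the original example), the returned embeddings form an array of shape (number of sentences, 768), and sentence_transformers.util.cos_sim can compare them:

from sentence_transformers import util

# embeddings is a NumPy array of shape (2, 768)
print(embeddings.shape)

# Pairwise cosine similarities between the two example sentences
print(util.cos_sim(embeddings, embeddings))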

Using Hugging Face Transformers

from transformers import AutoTokenizer, AutoModel
import torch

def mean_pooling(model_output, attention_mask):
    # First element of model_output contains all token embeddings
    token_embeddings = model_output[0]
    # Expand the attention mask so padding tokens are excluded from the average
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    # Sum the token embeddings and divide by the number of real (non-padding) tokens
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = ['This is an example sentence', 'Each sentence is converted']
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-multilingual-mpnet-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/paraphrase-multilingual-mpnet-base-v2')

# Tokenize the sentences (padding and truncating to the model's maximum length)
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform mean pooling over the token embeddings, weighted by the attention mask
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
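
Continuing from sentence_embeddings above (this follow-up is not part of the original snippet), a plain PyTorch cosine-similarity check can verify that the two example sentences receive sensible similarity scores:

import torch.nn.functional as F

# L2-normalize the embeddings, then the matrix product gives pairwise cosine similarities
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
print(torch.mm(normalized, normalized.T))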

Cloud GPUs

For large-scale inference or training, consider using cloud GPUs such as those available on AWS, Google Cloud, or Azure to accelerate processing.
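
A minimal sketch of GPU usage, assuming PyTorch detects a CUDA device on the instance; the batch size of 128 is an illustrative value, not a recommendation from the model card:

import torch
from sentence_transformers import SentenceTransformer

# Pick a GPU if one is available (e.g. on a cloud instance), otherwise fall back to CPU
device = 'cuda' if torch.cuda.is_available() else 'cpu'

model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-mpnet-base-v2', device=device)

# Larger batches generally improve GPU throughput
embeddings = model.encode(["This is an example sentence"] * 1000, batch_size=128, show_progress_bar=True)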

License

The model is licensed under the Apache-2.0 License, which permits both personal and commercial use provided the license text and notices are preserved.
