bge-reranker-v2-m3
Introduction
bge-reranker-v2-m3 is a reranking model developed by the Beijing Academy of Artificial Intelligence (BAAI). Built on the XLM-RoBERTa architecture, it is multilingual, supporting languages including Chinese and English. Rather than embedding questions and documents separately, it scores each question–document pair jointly and directly outputs a similarity score, which is why it is commonly served through text-classification and text-embeddings-inference tooling.
Architecture
The bge-reranker-v2-m3 model is based on the XLM-RoBERTa architecture. It functions as a reranker: it takes a question–document pair as input and outputs a similarity score. Scores can be normalized to the 0–1 range with a sigmoid function, as sketched below. The model is lightweight, which makes it suitable for efficient deployment and fast inference.
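To make the sigmoid normalization concrete, the sketch below squashes a raw relevance logit into the (0, 1) range; the logit values are hypothetical, not actual model outputs.

import math

def normalize(raw_score: float) -> float:
    # Logistic sigmoid: maps an unbounded logit to the (0, 1) range.
    return 1.0 / (1.0 + math.exp(-raw_score))

# Hypothetical raw logits from a reranker head:
print(normalize(2.5))   # ~0.92 -> likely relevant
print(normalize(-3.0))  # ~0.05 -> likely irrelevant

Because the sigmoid is monotonic, normalization preserves the ranking order; it mainly makes scores easier to threshold.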
Training
The reranker can be fine-tuned on multilingual datasets, including bge-m3-data, quora train data, and fever train data. Training involves specifying hyperparameters such as the learning rate and batch size, and can use memory-saving techniques such as gradient checkpointing and LoRA (Low-Rank Adaptation). A sketch of the training-data format follows.
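FlagEmbedding's fine-tuning examples expect JSON Lines data pairing a query with positive and negative passages; the exact field names can vary between releases, so treat the record below as an illustrative assumption and check the documentation for your installed version.

import json

# One record per line (JSON Lines). Field names follow the query/pos/neg
# convention used in FlagEmbedding's fine-tuning examples (assumed here).
record = {
    "query": "What is the capital of France?",
    "pos": ["Paris is the capital and largest city of France."],
    "neg": ["Berlin is the capital of Germany."],
}

with open("train_data.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")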
Guide: Running Locally
- Install Dependencies:
  pip install -U FlagEmbedding
- Load the Model:
  Use the FlagEmbedding library to initialize the reranker and compute scores:
  from FlagEmbedding import FlagReranker

  # use_fp16=True speeds up inference with a minor accuracy trade-off
  reranker = FlagReranker('BAAI/bge-reranker-v2-m3', use_fp16=True)

  # normalize=True maps the raw logit to 0-1 via a sigmoid
  score = reranker.compute_score(['query', 'passage'], normalize=True)
  print(score)
- Fine-Tuning:
  Prepare a JSON file containing the training data (see the record format above) and launch the training script with torchrun; an illustrative invocation follows.
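The module path and script flags differ between FlagEmbedding releases, so the command below is a sketch based on the library's fine-tuning examples rather than a definitive invocation; verify both against the version you installed.

  # Illustrative only: module path and flags are assumptions, not verified API.
  torchrun --nproc_per_node 1 \
    -m FlagEmbedding.reranker.run \
    --model_name_or_path BAAI/bge-reranker-v2-m3 \
    --train_data ./train_data.jsonl \
    --output_dir ./bge-reranker-v2-m3-finetuned \
    --learning_rate 6e-5 \
    --per_device_train_batch_size 4 \
    --gradient_checkpointing

Single-GPU settings are shown; raise --nproc_per_node to train on more GPUs.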
For optimal performance, consider running the model on cloud GPUs from providers like AWS, GCP, or Azure.
License
This project is licensed under the Apache 2.0 license, which permits personal and commercial use, modification, and redistribution.