Xenova/all-MiniLM-L6-v2 Model
Introduction
The all-MiniLM-L6-v2 model by Xenova is a compact and efficient transformer model designed for feature extraction tasks. It is compatible with the Transformers.js library and ships with ONNX weights for use in web-based applications.
Architecture
The all-MiniLM-L6-v2 model is derived from sentence-transformers/all-MiniLM-L6-v2. It targets the Transformers.js library for JavaScript applications, focusing on efficient feature extraction.
Training
The model's weights have been converted to ONNX format for compatibility with the Transformers.js library. This makes it well suited to web applications that need feature extraction and sentence embeddings without a server-side inference backend.
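Sentence embeddings are produced by pooling the model's per-token features into a single vector. The sketch below illustrates mean pooling on plain arrays; it is an illustration only, not the library's internal implementation (the Transformers.js pipeline performs this step itself when called with `{ pooling: 'mean' }`):

```javascript
// Mean pooling: average a (tokens x dims) matrix of token embeddings
// into a single sentence vector of length dims.
// Illustrative sketch only -- the Transformers.js feature-extraction
// pipeline does this internally when given { pooling: 'mean' }.
function meanPool(tokenEmbeddings) {
  const dims = tokenEmbeddings[0].length;
  const pooled = new Array(dims).fill(0);
  for (const token of tokenEmbeddings) {
    for (let d = 0; d < dims; d++) pooled[d] += token[d];
  }
  return pooled.map((v) => v / tokenEmbeddings.length);
}

// Toy example (real token embeddings from this model are 384-dimensional):
console.log(meanPool([[1, 2], [3, 4]])); // [2, 3]
```

Note that the real pipeline also applies the attention mask so padding tokens are excluded from the average; the sketch above omits that detail.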
Guide: Running Locally
To use the all-MiniLM-L6-v2 model locally with Transformers.js, follow these steps:
- Install Transformers.js: install the library via npm:

  ```bash
  npm i @huggingface/transformers
  ```
- Import and Use the Model: create a feature-extraction pipeline and compute sentence embeddings:

  ```javascript
  import { pipeline } from '@huggingface/transformers';

  // Create a feature-extraction pipeline
  const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

  // Compute sentence embeddings
  const sentences = ['This is an example sentence', 'Each sentence is converted'];
  const output = await extractor(sentences, { pooling: 'mean', normalize: true });
  console.log(output.tolist());
  ```
- Cloud GPUs: for heavier workloads, consider cloud GPU services such as AWS, Google Cloud, or Azure to speed up processing.
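A common next step is comparing the resulting embeddings. The following sketch shows cosine similarity between two embedding vectors; since the pipeline call above uses `normalize: true`, the vectors are unit-length and the computation reduces to a dot product. The `cosineSimilarity` helper is an illustration, not part of the Transformers.js API:

```javascript
// Cosine similarity between two embedding vectors.
// With unit-normalized embeddings (normalize: true above), the
// denominator is 1 and this reduces to a plain dot product.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy vectors (real embeddings from this model are 384-dimensional):
console.log(cosineSimilarity([1, 0, 0], [1, 0, 0])); // 1
console.log(cosineSimilarity([1, 0, 0], [0, 1, 0])); // 0
```

Scores close to 1 indicate semantically similar sentences; scores near 0 indicate unrelated ones.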
License
The all-MiniLM-L6-v2 model is licensed under the Apache 2.0 License, which permits broad usage and modification.