Ko-SRoBERTa-MultiTask (jhgan/ko-sroberta-multitask)

Introduction

Ko-SRoBERTa-MultiTask is a sentence-transformers model that maps Korean sentences and paragraphs to a 768-dimensional dense vector space, which makes it well suited to tasks such as clustering and semantic search.
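
For example, embeddings can be compared with cosine similarity to rank a small corpus against a query. The snippet below is a minimal semantic-search sketch; the corpus and query sentences are illustrative, not taken from the original documentation:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jhgan/ko-sroberta-multitask')

# Illustrative corpus and query.
corpus = ["날씨가 참 좋네요.", "주말에 영화를 봤어요.", "오늘은 비가 내립니다."]
query = "오늘 날씨 어때요?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus sentences by cosine similarity to the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit['corpus_id']], hit['score'])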

Architecture

The model is built using the SentenceTransformer framework and consists of:

  • A Transformer component based on RobertaModel, with a maximum sequence length of 128 and no lowercasing.
  • A Pooling layer that applies mean pooling over token embeddings to produce the sentence embedding (see the construction sketch after this list).
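
The same stack can be assembled by hand from sentence-transformers building blocks. The sketch below is illustrative; the base checkpoint name klue/roberta-base is an assumption, not something stated in this card:

from sentence_transformers import SentenceTransformer, models

# Transformer backbone (the checkpoint name is an assumed placeholder).
word_embedding_model = models.Transformer('klue/roberta-base', max_seq_length=128)

# Mean pooling over token embeddings -> one 768-dimensional sentence vector.
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])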

Training

The model was trained on the KorNLI and KorSTS datasets with a multi-task approach and evaluated on the KorSTS benchmark. Key training parameters, sketched in code after this list, include:

  • Loss functions: MultipleNegativesRankingLoss (paired with the NLI data) and CosineSimilarityLoss (paired with the STS data).
  • Training epochs: 5
  • Optimizer: AdamW with a learning rate of 2e-05.
  • Evaluation metrics: Cosine Pearson and Cosine Spearman, among others.
  • Additional settings: 360 warmup steps, weight decay of 0.01, and a batch size of 64 for the NoDuplicatesDataLoader (NLI) and 8 for the regular DataLoader (STS).
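
A minimal sketch of this multi-task loop with the sentence-transformers fit API follows. It is illustrative rather than the original training script: the tiny InputExample lists stand in for the full KorNLI/KorSTS data, and the batch sizes are shrunk to fit them (the original run used 64 and 8, as listed above).

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

model = SentenceTransformer('jhgan/ko-sroberta-multitask')  # or a freshly assembled Transformer + Pooling stack

# Toy stand-ins for KorNLI triplets: (anchor, entailment, contradiction).
nli_examples = [
    InputExample(texts=["비가 온다.", "날씨가 궂다.", "해가 쨍쨍하다."]),
    InputExample(texts=["고양이가 잔다.", "고양이가 자고 있다.", "개가 짖는다."]),
]

# Toy stand-ins for KorSTS pairs, with similarity labels scaled to [0, 1].
sts_examples = [
    InputExample(texts=["영화가 재미있다.", "영화가 흥미롭다."], label=0.9),
    InputExample(texts=["영화가 재미있다.", "밥을 먹었다."], label=0.1),
]

# MultipleNegativesRankingLoss uses in-batch negatives, so batches must not
# repeat texts; NoDuplicatesDataLoader enforces this.
nli_dataloader = NoDuplicatesDataLoader(nli_examples, batch_size=2)  # 64 in the original run
nli_loss = losses.MultipleNegativesRankingLoss(model)

sts_dataloader = DataLoader(sts_examples, shuffle=True, batch_size=2)  # 8 in the original run
sts_loss = losses.CosineSimilarityLoss(model)

# fit() alternates over both objectives during training (the multi-task part).
model.fit(
    train_objectives=[(nli_dataloader, nli_loss), (sts_dataloader, sts_loss)],
    epochs=5,
    warmup_steps=360,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
)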

Guide: Running Locally

To use the model locally, install the sentence-transformers library:

pip install -U sentence-transformers

For usage with SentenceTransformers:

from sentence_transformers import SentenceTransformer

# Korean example sentences to embed.
sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]

model = SentenceTransformer('jhgan/ko-sroberta-multitask')

# Returns a (len(sentences), 768) array of sentence embeddings.
embeddings = model.encode(sentences)
print(embeddings)

For usage with Hugging Face Transformers:

from transformers import AutoTokenizer, AutoModel
import torch

def mean_pooling(model_output, attention_mask):
    # Mean-pool token embeddings, ignoring padding tokens via the attention mask.
    token_embeddings = model_output[0]  # first element: last hidden state
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# English sentences from the generic template; Korean input works the same way.
sentences = ['This is an example sentence', 'Each sentence is converted']

tokenizer = AutoTokenizer.from_pretrained('jhgan/ko-sroberta-multitask')
model = AutoModel.from_pretrained('jhgan/ko-sroberta-multitask')

# Tokenize with padding/truncation so the batch forms a rectangular tensor.
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Forward pass without gradient tracking (inference only).
with torch.no_grad():
    model_output = model(**encoded_input)

# Pool token embeddings into one 768-dim vector per sentence.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
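
As an optional sanity check (a sketch that assumes the two snippets above ran in the same session), the manually pooled embeddings should closely match what SentenceTransformer.encode returns for the same sentences, since the architecture includes no normalization layer:

import numpy as np
from sentence_transformers import SentenceTransformer

st_model = SentenceTransformer('jhgan/ko-sroberta-multitask')
st_embeddings = st_model.encode(sentences)

# Both paths should agree up to floating-point tolerance.
print(np.allclose(st_embeddings, sentence_embeddings.numpy(), atol=1e-4))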

For optimal performance, run inference on a GPU; cloud providers such as AWS, Google Cloud, and Azure offer suitable GPU instances.
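
As a sketch, assuming a CUDA-capable GPU is available, the model can be placed on it explicitly:

import torch
from sentence_transformers import SentenceTransformer

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = SentenceTransformer('jhgan/ko-sroberta-multitask', device=device)

# encode() runs on the selected device; larger batch sizes pay off on a GPU.
embeddings = model.encode(["안녕하세요?"], batch_size=64)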

License

No license is specified in the documentation provided. Users should check the model repository or contact the author for licensing details.
