Vicuna 7B v1.5

LMSYS

Introduction

Vicuna is a chat assistant developed by LMSYS and fine-tuned from the Llama 2 model. Its primary use is research on large language models and chatbots; its intended users are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

Architecture

Vicuna is an auto-regressive language model based on the transformer architecture. It is fine-tuned from Llama 2, which is described in the paper arXiv:2307.09288.
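
As a rough illustration of what an auto-regressive, decoder-only model means in practice, the sketch below loads the checkpoint with the Hugging Face transformers library and generates a continuation token by token. The checkpoint ID lmsys/vicuna-7b-v1.5 is the published Hugging Face ID; the "USER: ... ASSISTANT:" prompt wording only approximates FastChat's Vicuna conversation template and should be checked against the FastChat repository.

    # Minimal sketch: load Vicuna 7B v1.5 as a causal LM with Hugging Face transformers.
    # Assumes a CUDA-capable GPU; the prompt below approximates FastChat's Vicuna template.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "lmsys/vicuna-7b-v1.5"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    prompt = (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions. "
        "USER: What is supervised fine-tuning? ASSISTANT:"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
    # Strip the prompt tokens and decode only the newly generated continuation.
    print(tokenizer.decode(output_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))

Loaded in float16, the 7B weights alone occupy roughly 14 GB of GPU memory, which is why the guide below suggests cloud GPUs for comfortable inference.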

Training

Vicuna v1.5 is trained with supervised instruction fine-tuning on approximately 125,000 conversations collected from ShareGPT.com. Detailed training information is available in the paper arXiv:2306.05685.
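
To make the training data format concrete, here is a minimal sketch of how a ShareGPT-style conversation record might be flattened into a single supervised fine-tuning example. The field names ("conversations", "from", "value"), the role markers, and the EOS handling are assumptions modeled on common ShareGPT exports; the authoritative preprocessing lives in FastChat's training code, and loss masking of user turns is omitted here.

    # Sketch (assumed data layout): flatten a ShareGPT-style record into one training
    # string with USER:/ASSISTANT: role markers, as in Vicuna-style supervised
    # fine-tuning. Loss masking of the user turns is intentionally left out.
    SYSTEM = (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions."
    )

    def build_prompt(record: dict) -> str:
        parts = [SYSTEM]
        for turn in record["conversations"]:
            if turn["from"] == "human":
                parts.append(f"USER: {turn['value']}")
            else:  # assistant ("gpt") turns end with the EOS token during training
                parts.append(f"ASSISTANT: {turn['value']}</s>")
        return " ".join(parts)

    example = {
        "conversations": [
            {"from": "human", "value": "Explain instruction fine-tuning in one sentence."},
            {"from": "gpt", "value": "It adapts a pretrained LM to follow instructions "
                                     "using curated prompt-response pairs."},
        ]
    }
    print(build_prompt(example))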

Guide: Running Locally

  1. Clone the Repository:
    git clone https://github.com/lm-sys/FastChat
    
  2. Navigate to the Vicuna Weights Section:
    Follow the instructions at FastChat#vicuna-weights for setup.
  3. Utilize APIs:
    Explore the APIs provided by FastChat (see FastChat API) for integration; a minimal client sketch follows this list.
  4. Cloud GPUs:
    Cloud GPU services such as AWS, Google Cloud, or Azure are recommended if local hardware cannot run the model efficiently.
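
Once a FastChat OpenAI-compatible API server is running (step 3), it can be queried with the official openai Python client. The sketch below assumes the server listens on localhost:8000 and serves the model under the name vicuna-7b-v1.5; adjust both values to your deployment.

    # Sketch: call a locally running FastChat OpenAI-compatible API server using the
    # openai Python client (v1+). Assumes the server is already up on localhost:8000
    # and registered the model as "vicuna-7b-v1.5".
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is ignored locally

    response = client.chat.completions.create(
        model="vicuna-7b-v1.5",
        messages=[{"role": "user", "content": "Give me a one-line summary of Vicuna."}],
        temperature=0.7,
        max_tokens=128,
    )
    print(response.choices[0].message.content)

Because the endpoint mirrors the OpenAI API shape, existing client code can usually be pointed at the local server by changing only the base URL and model name.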

License

Vicuna is distributed under the Llama 2 Community License Agreement.
