Llama-3-Nanda-10B-Chat

MBZUAI

Introduction

Llama-3-Nanda-10B-Chat (Nanda) is a bilingual large language model with 10 billion parameters, pre-trained and instruction-tuned for both Hindi and English. It was developed by the Institute of Foundation Models at the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI).

Architecture

The model is based on the LLaMA-3 architecture, a transformer-based, decoder-only framework. It incorporates Rotary Position Embeddings (RoPE), which encode token positions as rotations of the query and key vectors and improve the model's handling of long sequences.
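
For intuition, here is a minimal sketch of the core RoPE operation: each pair of query/key channels is rotated by an angle that grows with token position, which makes attention scores depend on relative rather than absolute position. This is an illustrative sketch, not code from the Nanda implementation:

    import torch

    def rotary_embed(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
        """Apply Rotary Position Embeddings to x of shape (seq_len, dim).

        Each channel pair is rotated by an angle proportional to the token
        position, so dot products between rotated queries and keys become a
        function of relative position. Assumes dim is even.
        """
        seq_len, dim = x.shape
        half = dim // 2
        # Per-pair inverse frequencies, as in the RoPE paper.
        inv_freq = 1.0 / (base ** (torch.arange(0, half) / half))
        angles = torch.arange(seq_len).unsqueeze(1) * inv_freq.unsqueeze(0)
        cos, sin = angles.cos(), angles.sin()
        # Rotate each (x1, x2) channel pair by its position-dependent angle.
        x1, x2 = x[:, :half], x[:, half:]
        return torch.cat((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)

    q = torch.randn(8, 64)     # (seq_len, head_dim)
    q_rot = rotary_embed(q)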

Training

Llama-3-Nanda-10B-Chat was trained on a diverse bilingual dataset incorporating 65 billion Hindi tokens from sources such as web pages, Wikipedia, and books. Training consisted of continued pre-training followed by instruction tuning, carried out on a Cerebras supercomputer. The model has been evaluated against leading language models, showing superior performance in both Hindi and English.
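
During instruction tuning, each example is rendered into the Llama-3 chat format the model also expects at inference (the same template that appears in the guide below). A minimal sketch of that rendering step; the field names ("instruction", "output") are hypothetical, since the Nanda training pipeline itself is not published:

    # Sketch: render one instruction example into the Llama-3 chat format.
    # Field names are hypothetical; the actual training code is not public.
    def format_example(example, system="You are a helpful AI assistant."):
        return (
            "<|begin_of_text|>"
            f"<|start_header_id|>system<|end_header_id|>{system}<|eot_id|>"
            f"<|start_header_id|>user<|end_header_id|>{example['instruction']}<|eot_id|>"
            f"<|start_header_id|>assistant<|end_header_id|>{example['output']}<|eot_id|>"
        )

    print(format_example({"instruction": "Who are you?", "output": "I am Nanda, a bilingual assistant."}))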

Guide: Running Locally

To run Llama-3-Nanda-10B-Chat locally, follow these steps:

  1. Environment Setup: Ensure you have Python installed, then use pip to install PyTorch and the transformers library. Llama-3 checkpoints require a recent transformers release (Llama 3 support landed in 4.40.0):

    pip install torch "transformers>=4.40.0"
    
    
  2. Code Implementation: Use the following sample code to load and run the model:

    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM
    
    model_path = "MBZUAI/Llama-3-Nanda-10B-Chat"
    device = "cuda" if torch.cuda.is_available() else "cpu"
    
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    # device_map="auto" places the weights on the available GPU(s) automatically.
    model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", trust_remote_code=True)
    
    # Llama-3 chat template with a {Question} placeholder for the user turn.
    prompt_hindi = "<|begin_of_text|><|start_header_id|>system<|end_header_id|>You are a helpful AI assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>{Question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>"
    
    def get_response(text, tokenizer=tokenizer, model=model):
        input_ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
        # Nucleus sampling with a low temperature: varied but focused output.
        generate_ids = model.generate(
            input_ids,
            top_p=0.95,
            temperature=0.2,
            max_length=500,          # total budget, prompt tokens included
            min_length=30,
            repetition_penalty=1.3,
            do_sample=True,
        )
        # Decode the full sequence, then keep only the text after the last
        # "assistant" header, i.e. the model's reply.
        response = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)[0]
        return response.split("assistant")[-1]
    
    # "Tell me some interesting facts about the UAE?"
    ques = "मुझे यूएई के बारे में कुछ रोचक तथ्य बताएं?"
    text = prompt_hindi.format_map({"Question": ques})
    print(get_response(text))
    
  3. Hardware Recommendation: Run the model on a machine with a GPU. In fp16/bf16 the 10 billion parameters alone occupy roughly 20 GB, so a GPU with at least 24 GB of VRAM (or multiple GPUs via device_map="auto") is advisable; cloud GPU services such as AWS, Google Cloud, or Azure are an option. If memory is limited, see the quantization sketch below.
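
If GPU memory is tight, the checkpoint can typically be loaded in 4-bit precision with bitsandbytes. This quantized path is a sketch, not part of the official instructions, and quantization may slightly reduce output quality:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    
    model_path = "MBZUAI/Llama-3-Nanda-10B-Chat"
    
    # 4-bit NF4 quantization cuts weight memory to roughly a quarter of fp16.
    # Requires the bitsandbytes package (pip install bitsandbytes) and a CUDA GPU.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        quantization_config=bnb_config,
        device_map="auto",
        trust_remote_code=True,
    )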

License

Llama-3-Nanda-10B-Chat is released under the Llama 3 license. Users must adhere to the terms and conditions outlined in the license, as well as Meta’s acceptable use and privacy policies. More details can be found in the Llama 3 license document.
