orca_mini_v9_3_70B
pankajmathur / ORCA Mini V9 3 Llama-3.3-70B-Instruct
Introduction
Orca Mini V9 3 Llama-3.3-70B-Instruct is a text generation model fine-tuned from Meta's Llama-3.3-70B-Instruct. It is designed as a general-purpose AI assistant, capable of handling a wide range of conversational and text generation tasks. The model supports multiple languages and is tuned with an emphasis on safety and flexibility.
Architecture
The model is built on the Llama-3.3-70B-Instruct architecture, a 70-billion-parameter, decoder-only transformer that provides a robust foundation for AI applications. The architecture handles multilingual inputs and supports Llama 3.3's long context window (128K tokens), enabling more complex, multi-turn interactions.
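If you want to confirm the context window and other architectural details yourself, the minimal sketch below reads the published configuration without downloading the weights. It assumes the repository exposes a standard Llama-style config.json; the attribute names are the usual transformers ones, not anything stated in this card.

from transformers import AutoConfig

# Fetch only the configuration file, not the model weights.
config = AutoConfig.from_pretrained("pankajmathur/orca_mini_v9_3_70B")

print(config.model_type)               # expected: "llama"
print(config.max_position_embeddings)  # maximum context length in tokens
print(config.num_hidden_layers, config.hidden_size, config.vocab_size)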
Training
Orca Mini was trained with supervised fine-tuning (SFT) on a variety of datasets tailored to enhance its text generation capabilities. The training mix combines human-generated and synthetic data to improve both safety and performance. Additionally, the model underwent safety fine-tuning to ensure robust handling of potentially harmful content.
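The individual datasets are not enumerated here; purely as an illustration of what chat-style SFT data looks like, a single training record might resemble the sketch below (the field layout and content are assumptions for this example, not the author's actual data schema).

# Illustrative only: one chat-style SFT record in the role/content format the
# model consumes at inference time. This is an assumed layout, not the real
# training schema used for Orca Mini.
sft_record = {
    "messages": [
        {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
        {"role": "user", "content": "Explain what supervised fine-tuning is in one sentence."},
        {"role": "assistant", "content": "Supervised fine-tuning adapts a pretrained model "
                                         "by training it on curated prompt-response pairs."},
    ]
}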
Guide: Running Locally
To run the model locally, follow these basic steps:
- Install the necessary packages:

  pip install torch transformers bitsandbytes
- Set up the model pipeline:

  import torch
  from transformers import pipeline, BitsAndBytesConfig

  model_slug = "pankajmathur/orca_mini_v9_3_70B"

  # Load the 70B model in 4-bit precision so it fits on a single high-memory GPU.
  quantization_config = BitsAndBytesConfig(load_in_4bit=True)
  generator = pipeline(
      "text-generation",
      model=model_slug,
      model_kwargs={"quantization_config": quantization_config},
      device_map="auto",
  )
- Use the model for text generation:

  messages = [
      {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
      {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
  ]
  # Near-deterministic sampling: very low temperature combined with top-k/top-p filtering.
  outputs = generator(
      messages,
      max_new_tokens=128,
      do_sample=True,
      temperature=0.01,
      top_k=100,
      top_p=0.95,
  )
  # The returned conversation ends with the assistant's reply.
  print(outputs[0]["generated_text"][-1])
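If you prefer more control than the pipeline API offers, the sketch below loads the tokenizer and 4-bit quantized model directly, builds the prompt with the chat template, and streams tokens as they are generated. It relies only on standard transformers APIs, but it is an illustrative alternative, not the canonical usage from this card.

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer

model_slug = "pankajmathur/orca_mini_v9_3_70B"

tokenizer = AutoTokenizer.from_pretrained(model_slug)
model = AutoModelForCausalLM.from_pretrained(
    model_slug,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
]

# apply_chat_template builds the Llama-3 prompt from the messages list.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Print tokens to stdout as they are produced instead of waiting for the full reply.
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(input_ids, max_new_tokens=128, streamer=streamer)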
Even with 4-bit quantization, the 70B model requires roughly 40 GB of GPU memory, so for practical performance consider cloud GPUs (for example, A100- or H100-class instances) from providers such as AWS, Google Cloud, or Azure.
License
The model is released under the Llama 3.3 Community License, which allows adaptation and customization provided that proper credit and attribution are given. Users are encouraged to enhance and tailor the model to their specific needs.