QuantFactory/Llama-Deepsync-1B-GGUF
Introduction
The Llama-Deepsync-1B-GGUF is a quantized version of the Llama-Deepsync-1B model, designed for advanced text generation tasks requiring deep reasoning and problem-solving skills. It is particularly effective for applications in education, programming, and creative writing, providing contextually relevant outputs for complex queries.
Architecture
Llama 3.2 is an auto-regressive language model using an optimized transformer architecture. This model employs supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) for enhanced alignment with human preferences in terms of helpfulness and safety. It features long-context support up to 128K tokens, multilingual support for over 29 languages, and improved capabilities in coding, mathematics, and instruction following.
Training
The model has been fine-tuned from the Llama-3.2-1B-Instruct base model, focusing on enhanced text generation tasks. It includes advancements in generating structured outputs, such as JSON, and accommodates a variety of system prompts for role-play and chatbot interactions.
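Because the model is tuned to emit structured outputs such as JSON, downstream code can parse and validate its responses directly. A minimal sketch in Python (the response string below is a hypothetical example of model output, not text produced by the model):

```python
import json

# Hypothetical JSON response from the model when prompted for structured output.
response = '{"topic": "gravity", "difficulty": "intermediate", "steps": ["define force", "state the law"]}'

def parse_model_json(text: str) -> dict:
    """Parse a model response that is expected to be a single JSON object."""
    obj = json.loads(text)
    if not isinstance(obj, dict):
        raise ValueError("expected a JSON object")
    return obj

parsed = parse_model_json(response)
print(parsed["topic"])  # -> gravity
```

In practice the model may wrap JSON in extra prose, so production code should handle `json.JSONDecodeError` rather than assume a clean object.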
Guide: Running Locally
Step-by-Step Instructions
- Install Ollama: Download and install Ollama from https://ollama.com/download.
- Create Your Model File:
  - Create a file named after your model, e.g., metallama.
  - Add the following line to specify the base model:
    FROM Llama-3.2-1B.F16.gguf
  - Ensure the base model file (Llama-3.2-1B.F16.gguf) is in the same directory as the model file.
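A model file can also carry optional sampling parameters and a system prompt in addition to the FROM line. A minimal sketch of a fuller Modelfile (the parameter values and system prompt below are illustrative, not recommendations from the model authors):

```
FROM Llama-3.2-1B.F16.gguf

# Optional sampling parameters (illustrative values)
PARAMETER temperature 0.7
PARAMETER num_ctx 4096

# Optional system prompt for chatbot-style interactions
SYSTEM "You are a helpful assistant for coding and math questions."
```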
- Create and Patch the Model: Run the following commands to create and verify your model:
ollama create metallama -f ./metallama
ollama list
- Run the Model: Use the following command to start your model:
ollama run metallama
- Interact with the Model: Once the model is running, interact with it:
>>> Tell me about SpaceX.
SpaceX, the private aerospace company founded by Elon Musk, is revolutionizing space exploration...
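Beyond the interactive CLI, Ollama also serves a local HTTP API (by default at http://localhost:11434), which lets other programs query the model. A minimal Python sketch that builds a request for its /api/generate endpoint, assuming the metallama model created above (the prompt text is illustrative):

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete response instead of a token stream
    }).encode("utf-8")
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("metallama", "Tell me about SpaceX.")
# With the Ollama server running, the request can be sent with:
#   body = json.load(urllib.request.urlopen(req))
#   print(body["response"])
```

Keeping payload construction separate from the network call makes the request easy to inspect before sending it to a locally running server.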
Cloud GPUs
For enhanced performance, consider using cloud GPU services such as AWS, Google Cloud, or Azure to run your models efficiently.
License
The model is licensed under the creativeml-openrail-m license, allowing use and modification with certain limitations.