Dolphin3.0 Llama3.1 8B by cognitivecomputations
Introduction
Dolphin 3.0 Llama 3.1 8B is part of the Dolphin series, designed as a general-purpose, instruct-tuned model. It is intended for coding, math, agentic tasks, function calling, and general use cases. Unlike many hosted models, Dolphin gives users control over the system prompt, the model version they run, and their data privacy.
Architecture
The model is based on Meta's Llama 3.1 architecture with 8 billion parameters and is fine-tuned on a variety of datasets to strengthen its capabilities across domains.
Training
Dolphin 3.0 was trained on diverse datasets, including OpenCoder-LLM and Microsoft's Orca datasets, among others. Training and evaluation were supported by sponsors who provided high-performance hardware.
Guide: Running Locally
To run Dolphin 3.0 locally:
- Install Dependencies: Use a local runtime such as Ollama or the Hugging Face Transformers library (a Transformers-based sketch follows this list).
- Download and Run the Model:
  - For Ollama: install it from ollama.com and run
    ollama run hf.co/cognitivecomputations/Dolphin3.0-Llama3.1-8B-GGUF:Q4_0
- Set System Prompt: Define the system prompt to customize the model's behavior.
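If you want a custom system prompt to persist across Ollama sessions, one option is an Ollama Modelfile. The sketch below is illustrative: it assumes Ollama accepts the same hf.co GGUF reference in its FROM line as in the run command above, and the prompt text is an example, not part of the model card.

```
# Modelfile — minimal sketch for pinning a custom system prompt in Ollama.
# Assumption: the FROM line can reuse the hf.co GGUF reference shown above.
FROM hf.co/cognitivecomputations/Dolphin3.0-Llama3.1-8B-GGUF:Q4_0
# The SYSTEM text is entirely user-defined; this wording is only an example.
SYSTEM "You are Dolphin, a helpful coding and math assistant. Keep answers concise."
```

Build and run it with ollama create dolphin3 -f Modelfile, then ollama run dolphin3.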
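For the Transformers route mentioned in the first step, here is a minimal sketch. It assumes the repo id cognitivecomputations/Dolphin3.0-Llama3.1-8B (inferred from the GGUF name above), a GPU with enough memory for the 8B weights, and an illustrative system prompt.

```python
# Minimal sketch: run Dolphin 3.0 with Hugging Face Transformers and a custom
# system prompt applied through the tokenizer's chat template.
# Assumptions: repo id "cognitivecomputations/Dolphin3.0-Llama3.1-8B" and a GPU
# large enough for the 8B weights (device_map="auto" requires accelerate).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/Dolphin3.0-Llama3.1-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The system prompt is entirely user-defined; this one is only illustrative.
messages = [
    {"role": "system", "content": "You are Dolphin, a concise coding assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]

# Tokenize the conversation with the model's chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```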
Cloud GPUs
For enhanced performance, consider using cloud GPU services like those provided by Crusoe Cloud or Akash.
License
The model is distributed under the Llama 3.1 Community License, which governs its use and redistribution.