L3-DARKEST-PLANET-16.5B-GGUF

DavidAU

Introduction

L3-Darkest-Planet-16.5B-GGUF is a Llama 3 model designed for creative writing, storytelling, and fiction generation. It is an enhanced version of the "Dark Planet 8B" model, expanded to 16.5B parameters with the Brainstorm 40x method to improve prose output.

Architecture

The model has a native context window of 8192 tokens, extendable to 32k+ via RoPE (rope) scaling settings. It contains 71 layers and 642 tensors, and the expanded structure is intended to yield more detailed and varied prose in both form and content. The Brainstorm 40x process reassembles and expands the model's reasoning centers to enhance detail and prose quality.
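The 8k-to-32k extension happens at load time through rope scaling rather than retraining. As a minimal sketch, assuming the model is run with llama-cpp-python (any llama.cpp-based runtime exposes equivalent settings), a linear rope-frequency scale of 0.25 stretches the 8192-token window to roughly 32768 tokens. The GGUF file name below is a hypothetical quant, not one published by this card:

```python
# Minimal sketch, assuming llama-cpp-python; substitute the quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="L3-Darkest-Planet-16.5B-Q4_K_M.gguf",  # hypothetical file name
    n_ctx=32768,           # requested context window
    rope_freq_scale=0.25,  # linear rope scaling: 8192 / 0.25 = 32768 tokens
)
```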

Training

L3-Darkest-Planet-16.5B-GGUF underwent an augmentation process called Brainstorm 40x, which involved expanding and calibrating its reasoning capabilities. This process aims to improve detail, coherence, and emotional engagement without compromising instruction following.

Guide: Running Locally

  1. Set Up Environment: Ensure Python and the libraries required by your chosen runtime are installed.
  2. Download Model: Obtain the model files from Hugging Face.
  3. Configure Settings: Adjust parameters such as temperature and repetition penalty to suit your use case. A repetition penalty (rep pen) of 1.05 or higher is recommended.
  4. Run Model: Use platforms like KoboldCpp or text-generation-webui, or run it programmatically as sketched after this list.
  5. Consider Cloud GPUs: For faster generation with a 16.5B model, cloud GPUs from providers such as AWS, GCP, or Azure can be used.
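A minimal end-to-end sketch of steps 2-4, assuming the huggingface_hub and llama-cpp-python packages. The repo id follows the naming on this card and the quant file name is a placeholder; verify both against the Hugging Face file listing:

```python
# Minimal sketch, assuming huggingface_hub and llama-cpp-python are installed
# (pip install huggingface_hub llama-cpp-python). Repo id and file name are
# assumptions based on this card -- verify them against the model page.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="DavidAU/L3-DARKEST-PLANET-16.5B-GGUF",  # assumed repo id
    filename="L3-Darkest-Planet-16.5B-Q4_K_M.gguf",  # hypothetical quant file
)

llm = Llama(model_path=model_path, n_ctx=8192)

out = llm(
    "Write the opening scene of a storm-battered lighthouse mystery.",
    max_tokens=400,
    temperature=0.9,      # creative-writing range; tune to taste
    repeat_penalty=1.05,  # rep pen of 1.05 or higher, as recommended above
)
print(out["choices"][0]["text"])
```

The same temperature and rep pen values apply unchanged in KoboldCpp or text-generation-webui; only the parameter names in the UI differ.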

License

L3-Darkest-Planet-16.5B-GGUF is released under the Apache 2.0 license, which allows commercial use, modification, and distribution, provided the license text and attribution notices are retained in derived works.
