c4ai-command-r7b-12-2024-abliterated-GGUF

bartowski


Introduction

c4ai-command-r7b-12-2024-abliterated-GGUF is a set of GGUF quantizations of an "abliterated" (refusal-removed) variant of Command R7B 12-2024, a text generation model that supports 23 languages. It is designed to be uncensored and conversational, with a focus on generating high-quality text.

Architecture

The quantizations are produced with llama.cpp using imatrix (importance matrix) calibration. This yields a range of quantization types, from high-quality formats down to more memory-efficient ones.
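To make the trade-off concrete, a GGUF file's size is roughly the parameter count times the bits per weight. The sketch below estimates file sizes for a 7B-parameter model at several common llama.cpp quantization levels; the bits-per-weight figures are approximations for illustration, not exact values from this model card.

```python
# Approximate bits-per-weight for common llama.cpp quantization types.
# These are rough, illustrative figures (actual GGUF files also include
# metadata and mixed-precision layers).
APPROX_BPW = {
    "Q8_0": 8.5,
    "Q6_K": 6.6,
    "Q5_K_M": 5.7,
    "Q4_K_M": 4.8,
    "Q3_K_M": 3.9,
    "IQ2_M": 2.7,
}

def estimate_size_gb(n_params: float, bpw: float) -> float:
    """File size in GB: parameters * bits-per-weight / 8 bits-per-byte."""
    return n_params * bpw / 8 / 1e9

for quant, bpw in APPROX_BPW.items():
    print(f"{quant:>7}: ~{estimate_size_gb(7e9, bpw):.1f} GB")
```

This is why the higher-quality formats need correspondingly more RAM/VRAM: at 7B parameters, each extra bit per weight costs a little under 1 GB of file size.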

Training

The imatrix quantization used a calibration dataset curated for this purpose, available through a Gist link in the original model card. Quantization was performed with the llama.cpp b4415 release.

Guide: Running Locally

  1. Install the Hugging Face CLI:
    pip install -U "huggingface_hub[cli]"
    
  2. Download the Desired Model File:
    huggingface-cli download bartowski/c4ai-command-r7b-12-2024-abliterated-GGUF --include "c4ai-command-r7b-12-2024-abliterated-Q4_K_M.gguf" --local-dir ./
    
  3. Choose the Appropriate File:
    • Select a quantization file based on available RAM/VRAM; as a rough rule of thumb, pick a file 1-2 GB smaller than the memory you can spare. The "K-quants" (e.g., Q5_K_M) are generally recommended for ease of use.
  4. Run on a Cloud GPU:
    • Use cloud services such as AWS, Google Cloud, or Azure to access GPUs with sufficient VRAM for optimal performance.

License

The model is released under the CC BY-NC 4.0 license, allowing for non-commercial use only.
