Beepo-22B-GGUF

concedo

Introduction

Beepo-22B-GGUF is a quantized release of the Beepo-22B model, a finetune of Mistral-Small-Instruct-2409. It is designed for conversational inference, supports the Alpaca prompt format, and has undergone a straightforward instruct-decensoring process.

Architecture

Beepo-22B-GGUF retains the intelligence of its base model, a result of finetuning with a low learning rate on a carefully pruned dataset. It is intended to be used with the Alpaca prompt format, though the original Mistral instruct format also works; the latter is not recommended.
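As a concrete illustration, the standard Alpaca template wraps the user's request in `### Instruction:` / `### Response:` markers. The helper below is a minimal sketch of that template (the function name and preamble wording are illustrative, not taken from the model card):

```python
def build_alpaca_prompt(instruction: str, user_input: str = "") -> str:
    """Assemble a prompt in the standard Alpaca instruct format."""
    prompt = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
    )
    if user_input:
        # The optional Input section carries context the instruction refers to.
        prompt += f"### Input:\n{user_input}\n\n"
    prompt += "### Response:\n"
    return prompt

print(build_alpaca_prompt("Summarize the following text.", "KoboldCpp runs GGUF models locally."))
```

The model's reply is whatever the backend generates after the trailing `### Response:` marker.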

Training

The model was finetuned with the goal of retaining its original intelligence while applying instruct decensoring. This means the model is designed to follow user instructions without requiring complex jailbreak techniques, ensuring compliance without moralizing or judgement.

Guide: Running Locally

To run Beepo-22B-GGUF locally, follow these steps:

  1. Install KoboldCpp: Download the latest release from KoboldCpp on GitHub.
  2. Download the Model: Get the Beepo-22B-GGUF model files from the Hugging Face repository, choosing a quantization that fits your available RAM or VRAM.
  3. Set Up Environment: Ensure any dependencies KoboldCpp requires on your platform are installed.
  4. Run the Model: Launch KoboldCpp, load the GGUF file, and send prompts through its web UI or API.
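Once KoboldCpp is serving the model, step 4 can also be driven programmatically. The sketch below posts a generation request to KoboldCpp's local HTTP API; the endpoint path and port reflect KoboldCpp's defaults, but the payload fields shown are a minimal assumption-laden subset rather than the full API surface:

```python
import json
import urllib.request

# KoboldCpp's default local endpoint (adjust host/port if you changed them at launch).
KOBOLDCPP_URL = "http://localhost:5001/api/v1/generate"

def build_payload(prompt: str, max_length: int = 200, temperature: float = 0.7) -> dict:
    """Build a generation request body for KoboldCpp's HTTP API."""
    return {
        "prompt": prompt,
        "max_length": max_length,
        "temperature": temperature,
        # Stop before the model starts writing a new Alpaca turn on its own.
        "stop_sequence": ["### Instruction:"],
    }

def generate(prompt: str) -> str:
    """Send the request and return the generated text."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        KOBOLDCPP_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["results"][0]["text"]
```

A call like `generate("### Instruction:\nSay hello.\n\n### Response:\n")` would then return the model's completion, assuming a KoboldCpp instance is running locally.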

For optimal performance, consider using cloud GPUs from providers like AWS, Google Cloud, or Azure.

License

Please refer to the original Hugging Face model card for licensing details; they are not restated here.
