Beeper-King-22B
by ToastyPigeon
Introduction
Beeper-King-22B is a modified version of MS-Meadowlark-22B, built by merging several models to adjust its behavior. The aim is a model that follows prompts reliably while keeping a playful, dynamic style.
Architecture
Beeper-King-22B is a merge of the following models:
- Mistral-Small-Gutenberg-Doppel-22B by nbeerbower
- MS-sunfall-v0.7.0 by crestf411
- mistral-small-fujin-qlora by Alfitaria
- mistral-small-springdragon-qlora by ToastyPigeon
- Beepo-22B by concedo
Replacing the instruct portion and applying the QLoRAs to Beepo-22B gives the model a more engaging, lively interaction style while keeping it highly compliant and responsive (see the sketch below).
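As a rough illustration of the adapter-application step, the sketch below loads a base model and applies a QLoRA adapter with the peft library. The repository ids are assumptions taken from the model list above, and the actual merge recipe used for Beeper-King-22B may differ.

```python
# Illustrative sketch only: applying a QLoRA adapter to a base model with peft.
# Repository ids are assumptions; the real merge recipe may differ.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "concedo/Beepo-22B",              # assumed repo id for the Beepo-22B base
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Apply one LoRA adapter on top of the base weights.
merged = PeftModel.from_pretrained(base, "Alfitaria/mistral-small-fujin-qlora")
merged = merged.merge_and_unload()    # bake the adapter into the base weights
```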
Training
The model uses the Mistral V2 & V3 instruct format and, thanks to the Beepo-22B component, also accepts Alpaca-style prompts. Either format allows it to follow instructions and character descriptions effectively; a prompt-formatting sketch follows.
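As a quick way to check the prompt format, the sketch below renders a message with the tokenizer's bundled chat template. The repository id is an assumption, and the exact template string depends on what ships in the model's tokenizer_config.json.

```python
# Minimal sketch: render a prompt with the tokenizer's own chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ToastyPigeon/Beeper-King-22B")  # assumed repo id
messages = [
    {"role": "user", "content": "Describe your character in one sentence."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # Mistral-style templates typically render as "<s>[INST] ... [/INST]"
```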
Guide: Running Locally
To run Beeper-King-22B locally, follow these basic steps:
- Clone the Repository: Start by cloning the model repository from Hugging Face.
- Install Dependencies: Ensure that you have all necessary Python packages installed, which may include transformers, torch, and safetensors.
- Load the Model: Use the Hugging Face Transformers library to load Beeper-King-22B.
- Run Inference: Send your prompts to the model to generate responses; a minimal example follows this list.
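A minimal end-to-end sketch of these steps with the Transformers library is shown below. The repository id ToastyPigeon/Beeper-King-22B and the sampling settings are assumptions; check the model card for the exact id and recommended parameters.

```python
# Minimal local-inference sketch using the Hugging Face Transformers API.
# The repo id and generation settings are assumptions; a 22B model in bf16
# needs roughly 44 GB of VRAM, so multi-GPU device_map or quantized weights
# may be required.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ToastyPigeon/Beeper-King-22B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Introduce yourself in character."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```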
Because a 22B-parameter model needs substantial GPU memory, consider a cloud GPU service such as AWS, Google Cloud, or Azure for optimal performance.
License
The model is distributed under the terms specified by its contributors. Be sure to review the license on the Hugging Face model card for usage rights and compliance.