MS-Meadowlark-22B
Introduction
MS-Meadowlark-22B is a roleplay and storywriting model built on the Mistral Small 22B architecture. It aims to make text generation more creative and unpredictable while remaining stable, combining several datasets and training approaches to strike that balance.
Architecture
The base model for MS-Meadowlark-22B is unsloth/Mistral-Small-Instruct-2409, used with the transformers library. The final model is a merge of several component models, blended to balance stability and creative output.
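Because the model uses the standard Mistral architecture classes in transformers, its shape can be inspected without downloading the full weights. A minimal sketch, assuming the repo id allura-org/MS-Meadowlark-22B (verify against the actual model page):

```python
from transformers import AutoConfig

# Assumed repo id; check the model's Hugging Face page for the real one.
config = AutoConfig.from_pretrained("allura-org/MS-Meadowlark-22B")

# Mistral Small 22B is a standard Mistral-family decoder, so the config
# exposes the usual fields.
print(config.model_type)         # e.g. "mistral"
print(config.num_hidden_layers)  # transformer depth
print(config.hidden_size)        # model width
```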
Training
The model was trained using several datasets, each contributing to different aspects of its functionality:
- Dampfinchen/Creative_Writing_Multiturn at 16k
- Fizzarolli/rosier-dataset and Alfitaria/body-inflation-org at 16k
- ToastyPigeon/SpringDragon at 8k
Each dataset was trained onto Mistral Small Instruct individually before the resulting models were merged. The merge also included nbeerbower/Mistral-Small-Gutenberg-Doppel-22B. Different blends of the components were tested to optimize for stability and creativity.
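The exact merge recipe is not specified here (tools such as mergekit are commonly used for this), but the underlying idea of blending fine-tunes that share a base can be sketched as a weighted average of parameters. The repo ids and the 0.5/0.5 weights below are illustrative placeholders, not the actual recipe:

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical fine-tunes sharing the same base architecture
# (here, Mistral Small Instruct). Placeholders, not real repo ids.
model_a = AutoModelForCausalLM.from_pretrained("org/finetune-a", torch_dtype=torch.bfloat16)
model_b = AutoModelForCausalLM.from_pretrained("org/finetune-b", torch_dtype=torch.bfloat16)

# Linear merge: average each parameter tensor. Real merges tune these
# weights per component to trade stability against creativity.
merged_state = model_a.state_dict()
for name, tensor_b in model_b.state_dict().items():
    merged_state[name] = 0.5 * merged_state[name] + 0.5 * tensor_b

model_a.load_state_dict(merged_state)
model_a.save_pretrained("merged-model")
```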
Guide: Running Locally
To run MS-Meadowlark-22B locally, follow these steps:
- Install Dependencies: Ensure you have Python and the necessary libraries installed, particularly the Hugging Face transformers library.
- Clone the Repository: Download the model files from Hugging Face.
- Load the Model: Use a compatible script or interface, such as Kobold Lite in Adventure Mode or Story Mode, to load and interact with the model (a minimal transformers sketch follows this list).
- Configure Settings: Use the provided instruct format or custom templates for optimized performance.
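For a plain transformers workflow rather than a frontend like Kobold Lite, a minimal sketch is below. The repo id and sampling settings are assumptions for illustration, not documented recommendations:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allura-org/MS-Meadowlark-22B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The tokenizer's chat template applies the Mistral instruct format
# ([INST] ... [/INST]) automatically.
messages = [
    {"role": "user", "content": "Write the opening scene of a story set in a lighthouse."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=300, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```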
Because a 22B-parameter model requires substantial GPU memory, consider using cloud GPU services such as AWS, Google Cloud, or Microsoft Azure.
License
MS-Meadowlark-22B is released under the Mistral Research License (MRL), inherited from its base model. For details, refer to Mistral's license documentation.