Blue Orchid 2x7b
Introduction
Blue-Orchid-2x7b is a roleplaying-focused Mixture of Experts (MoE) model for text generation. It combines experts specialized in roleplaying and storywriting, making it suitable for both tasks. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B.
Architecture
This model uses a Mixture of Experts approach in which:
- Expert 1 is a merge of roleplay-focused models: LimaRP, Limamono, Noromaid 0.4 DPO, and good-robot.
- Expert 2 is a merge of storywriting-focused models: Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter, and good-robot.
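If you want to confirm the two-expert layout yourself, the model's configuration exposes it. Below is a minimal sketch, assuming the weights are hosted at nakodanei/Blue-Orchid-2x7b and that the merge produces a Mixtral-style config (both are assumptions, not stated in this card):

```python
from transformers import AutoConfig

# Assumed Hugging Face repo id; adjust to wherever the weights actually live.
config = AutoConfig.from_pretrained("nakodanei/Blue-Orchid-2x7b")

print(config.model_type)                            # a 2x7b MoE merge typically reports "mixtral"
print(getattr(config, "num_local_experts", None))   # expected: 2 (roleplay expert + storywriting expert)
print(getattr(config, "num_experts_per_tok", None)) # how many experts are routed per token
```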
Training
Blue-Orchid-2x7b was built by merging existing models to optimize performance on roleplaying and storywriting tasks. Training details, including datasets and specific methodologies, are not provided.
Guide: Running Locally
To run Blue-Orchid-2x7b locally, follow these steps:
- Clone the repository containing the model from Hugging Face.
- Install dependencies such as the transformers and safetensors libraries.
- Load the model using your preferred Python environment.
- Format your input using one of the recommended prompt templates (LimaRP or Alpaca) for best results.
- Run the model to generate responses from your input; see the sketch after this list.
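Putting these steps together, here is a minimal sketch using transformers. The repo id nakodanei/Blue-Orchid-2x7b and the exact Alpaca template wording are assumptions; check both against the actual model card before use.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nakodanei/Blue-Orchid-2x7b"  # assumed repo id; downloads weights on first use

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 2x7b MoE needs roughly 26 GB in fp16; use a GPU or quantization
    device_map="auto",
)

# Alpaca-style prompt, one of the two templates this card recommends.
prompt = (
    "### Instruction:\n"
    "Write a short scene in which a detective enters a rain-soaked alley.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```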
If local GPU capacity is limited, consider cloud GPU services such as AWS EC2, Google Cloud Platform, or Azure.
License
The Blue-Orchid-2x7b model is licensed under the Apache 2.0 License, allowing for free use, modification, and distribution under the terms stipulated in the license.