Introduction

Mythalion-13B is a text generation model designed for fictional writing and entertainment purposes. It is a merge of Pygmalion-2 13B and MythoMax L2 13B, created by PygmalionAI in collaboration with Gryphe and built on the Llama-2 architecture. According to its creators, the model outperforms MythoMax in role-playing and chat scenarios.

Architecture

Mythalion-13B is built on the Llama-2 architecture. It merges the weights of Pygmalion-2 13B and MythoMax L2 13B with the aim of improving performance on text generation tasks. The model generates text in English and can be prompted using either the Alpaca or the Pygmalion/Metharme format.
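
As a minimal sketch of the Alpaca-style layout, a prompt might be assembled as below. The helper name build_alpaca_prompt and the instruction wording are illustrative, not prescribed by the model card:

```python
# Sketch: building an Alpaca-style prompt for Mythalion-13B.
# The exact instruction text is a placeholder; only the section layout matters.
def build_alpaca_prompt(instruction: str, response_so_far: str = "") -> str:
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
        f"{response_so_far}"
    )

prompt = build_alpaca_prompt("Write a short scene where a knight meets a dragon.")
print(prompt)
```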

Training

Mythalion-13B was trained on a combination of datasets, including PygmalionAI/PIPPA and Open-Orca/OpenOrca, among others. Conversations are structured with three role tokens: <|system|>, <|user|>, and <|model|>. The model has not been fine-tuned for safety and may generate profanity or otherwise socially unacceptable text.
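
A minimal sketch of how a Pygmalion/Metharme-style prompt can be assembled from these role tokens follows. The helper build_metharme_prompt is hypothetical, and the persona and messages are placeholders:

```python
# Sketch: assembling a Pygmalion/Metharme-style prompt with the three role tokens.
# The model is expected to continue generating after the final <|model|> token.
def build_metharme_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """turns is a list of (user_message, model_reply) pairs; leave the last reply empty."""
    prompt = f"<|system|>{system}"
    for user_msg, model_reply in turns:
        prompt += f"<|user|>{user_msg}<|model|>{model_reply}"
    return prompt

prompt = build_metharme_prompt(
    "Enter RP mode. You are playing a wise old wizard.",
    [("Greetings, traveler. What brings you to my tower?", "")],
)
print(prompt)
```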

Guide: Running Locally

  1. Get the Model: Download the Mythalion-13B weights from the Hugging Face Hub, or let the Transformers library fetch them automatically on first load.
  2. Install Dependencies: Ensure that your environment has PyTorch and Transformers libraries installed.
  3. Load the Model: Use the Transformers library to load the model and tokenizer.
  4. Run Inference: Generate text locally to verify the setup (see the sketch after this list).
  5. Hardware Recommendation: A 13B-parameter model needs roughly 26 GB of GPU memory in fp16 (less with quantization); if local hardware is insufficient, a cloud GPU service such as AWS EC2, Google Cloud, or Azure is an option.
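
A minimal sketch of steps 2 through 4, assuming the Hub repository id PygmalionAI/mythalion-13b and a CUDA-capable GPU; the prompt and generation parameters are illustrative:

```python
# Sketch: load Mythalion-13B with Transformers and generate a reply.
# Assumes the Hub repo id "PygmalionAI/mythalion-13b" and enough GPU memory
# for a 13B model in fp16 (device_map="auto" requires the accelerate package).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PygmalionAI/mythalion-13b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # spread layers across available devices
)

# Pygmalion/Metharme-style prompt using the three role tokens.
prompt = (
    "<|system|>Enter RP mode. You are playing a laconic space smuggler."
    "<|user|>So, can you get me off this station or not?"
    "<|model|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```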

License

Mythalion-13B is available under the Llama-2 license, which permits both commercial and non-commercial use, subject to that license's terms.
