
Captain BMO-12B Model Documentation

Introduction

Captain BMO-12B is a 12-billion-parameter language model developed by Nitral-AI, built on Mistral's Nemo 12B Instruct as a base. It uses the Mistral prompt format for text completion tasks. The model was trained primarily for internal testing and was subsequently released publicly.
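Mistral-format prompts wrap the user's instruction in `[INST]` tags. The sketch below shows that template; the exact special tokens are an assumption here and should be confirmed against the model's tokenizer configuration:

```python
def format_mistral_prompt(instruction: str, system: str = "") -> str:
    """Wrap an instruction in the [INST] ... [/INST] template used by
    Mistral-style instruct models.

    Note: the <s> and [INST] tokens are an illustrative assumption; verify
    them against the model's tokenizer/chat template before relying on them.
    """
    if system:
        # Mistral-style models typically take the system text prepended
        # to the first user turn rather than as a separate role.
        instruction = f"{system}\n\n{instruction}"
    return f"<s>[INST] {instruction} [/INST]"

print(format_mistral_prompt("Write a haiku about robots."))
```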

Architecture

The model uses a 12-billion-parameter architecture. It was trained on a randomized 200k-example subset of GU_instruct-Remastered-1.1, plus roughly 25k additional examples from a dataset referred to as "hathor/poppy sauce", over three epochs.

Training

The training setup was as follows:

  • Base Model: Nemo 12B instruct.
  • Data: randomized 200k-example subset of GU_instruct-Remastered-1.1, plus ~25k additional examples from "hathor/poppy sauce".
  • Epochs: 3
  • Purpose: Initially for internal testing, with no extended support planned.

Guide: Running Locally

To run Captain BMO-12B locally, follow these steps:

  1. Set Up Environment: Ensure Python and necessary libraries such as PyTorch are installed.
  2. Download Model: Obtain the model weights from Hugging Face's model repository.
  3. Load Model: Use the Hugging Face Transformers library to load the model into your application.
  4. Inference: Implement text completion tasks using the model's capabilities.
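The steps above can be sketched with the Transformers library. The repository id `Nitral-AI/Captain_BMO-12B`, the dtype, and the generation settings below are assumptions, so check the model page before use:

```python
def run_captain_bmo(instruction: str, max_new_tokens: int = 128) -> str:
    """Load Captain BMO-12B and run one text-completion call.

    The repo id, dtype, and sampling settings are illustrative assumptions.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Nitral-AI/Captain_BMO-12B"  # assumed Hugging Face repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # a 12B model needs roughly 24 GB in bf16
        device_map="auto",           # spread weights across available devices
    )

    prompt = f"[INST] {instruction} [/INST]"  # assumed Mistral instruct format
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7
    )
    # Strip the prompt tokens and decode only the completion.
    completion = output_ids[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(completion, skip_special_tokens=True)

# Example (downloads the full model weights on first run):
# print(run_captain_bmo("Explain text completion in one sentence."))
```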

For optimal performance, consider using cloud GPUs from providers like AWS, Google Cloud, or Azure to handle the computational demands of the model.

License

The model is released under an unspecified "other" license, indicating potential restrictions on use. Users should review the specific licensing terms before deploying the model.
