GPT-NeoX-20B-Erebus

KoboldAI

Introduction

GPT-NeoX-20B-Erebus is an autoregressive language model and the second generation of the original Shinen model. It focuses on generating text with adult themes and is intended for mature audiences only. The model is known for producing X-rated content and is not suitable for minors.

Architecture

The model is based on GPT-NeoX-20B and was trained with a heavily modified version of Ben Wang's Mesh Transformer JAX library, the same library EleutherAI used to train GPT-J-6B. Training was performed on a TPUv3-256 TPU pod, a robust setup suited to the model's computational demands.

Training

GPT-NeoX-20B-Erebus was trained on a collection of six datasets, all centered on adult-themed stories: Literotica, Sexstories, a private dataset referred to as Dataset-G, Doc's Lab, the Pike Dataset, and SoFurry. The data was tagged with genres in a comma-separated list format. Because training focused on adult-genre content, the model carries a strong NSFW bias.
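As a rough illustration of the comma-separated genre tagging described above, the sketch below builds a tagged prompt. The `[Genre: ...]` bracket format and the `genre_prompt` helper are assumptions for illustration, not the documented training format.

```python
# Hypothetical helper illustrating comma-separated genre tags.
# The "[Genre: ...]" bracket syntax is an assumption, not the
# confirmed format used during training.
def genre_prompt(genres, story_start):
    tag = "[Genre: " + ", ".join(g.strip().lower() for g in genres) + "]"
    return tag + "\n" + story_start

print(genre_prompt(["Romance", "Drama"], "The night was quiet."))
```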

Guide: Running Locally

To run GPT-NeoX-20B-Erebus locally, follow these steps:

  1. Set Up Environment: Ensure you have Python and PyTorch installed.
  2. Download Model: Retrieve the model files from Hugging Face's model repository.
  3. Install Dependencies: Install the necessary libraries, such as transformers and accelerate.
  4. Load Model: Use the Transformers library to load the model and tokenizer.
  5. Run Inference: Execute the model on input data for text generation.

Given the model's size and complexity, using cloud GPUs like those offered by AWS or Google Cloud is recommended to ensure optimal performance.
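To see why cloud GPUs are recommended, a back-of-envelope memory estimate for the weights alone (ignoring activations and KV cache) is easy to compute:

```python
# Rough weight-memory estimate for a 20B-parameter model.
# Activations, KV cache, and framework overhead are not included.
params = 20e9
bytes_per_param = {"fp32": 4, "fp16": 2, "int8": 1}
for dtype, nbytes in bytes_per_param.items():
    print(f"{dtype}: ~{params * nbytes / 1e9:.0f} GB")
# fp16 alone needs roughly 40 GB, beyond most consumer GPUs.
```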

License

The GPT-NeoX-20B-Erebus model is released under the Apache 2.0 license, allowing for wide use and modification, subject to the terms of this permissive open-source license.
