WizardLM-13B-Uncensored
Introduction
WizardLM-13B-Uncensored is a text generation model developed by Cognitive Computations. It is a variant of WizardLM, trained without built-in alignment or moralizing features. This allows users to apply their own alignment using methods such as Reinforcement Learning from Human Feedback (RLHF) with Low-Rank Adaptation (LoRA).
Architecture
WizardLM-13B-Uncensored is based on the Transformer architecture and implemented in PyTorch. It is designed for text generation tasks and does not include predefined content moderation or ethical alignment.
Training
The model was trained on a subset of the Alpaca Evol-Instruct dataset from which responses containing alignment or moralizing content were removed. This approach leaves users free to implement their own alignment measures after training.
Guide: Running Locally
- Environment Setup: Ensure that you have Python and PyTorch installed. Clone the repository or download the model files from Hugging Face.
- Dependencies: Install the necessary Python packages, typically transformers and torch.
- Loading the Model: Use the transformers library to load the model into your environment.
- Inference: Once the model is loaded, perform text generation by providing input prompts.
- Cloud GPUs: For optimal performance, especially with a 13B-parameter model, consider cloud GPU services such as AWS, Google Cloud, or Azure.
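The loading and inference steps above can be sketched with the transformers library as follows. The repository ID is an assumption based on Hugging Face Hub naming conventions; check the model page for the exact name. Note that the 13B weights need roughly 26 GB of memory in fp16.

```python
# Minimal sketch of loading the model and generating text with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository ID; confirm the exact name on Hugging Face.
MODEL_ID = "cognitivecomputations/WizardLM-13B-Uncensored"


def generate(prompt: str, max_new_tokens: int = 200) -> str:
    """Load the model and tokenizer, then generate a completion for `prompt`."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        device_map="auto",   # place layers on available GPUs automatically
        torch_dtype="auto",  # keep the checkpoint's native precision
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Explain low-rank adaptation (LoRA) in one paragraph."))
```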
License
WizardLM-13B-Uncensored is released under an unspecified "other" license. Users should ensure compliance with this license and acknowledge responsibility for any output generated by the model, similar to the responsibility for actions taken with any potentially dangerous tool.