Llama-3-8B-Lexi-Uncensored
Maintained by Orenguteng
Introduction
Llama-3-8B-Lexi-Uncensored is an advanced text generation model based on Llama-3-8b-Instruct. This model is designed to provide uncensored, open conversational capabilities while allowing flexibility for compliance through user-implemented alignment layers.
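Because the model ships without built-in refusals, an alignment layer can be as simple as a user-supplied system prompt applied through the tokenizer's chat template. The following is a minimal sketch of that idea, assuming the repository ships the standard Llama-3 Instruct chat template; the policy wording is a hypothetical placeholder, not an official recommendation.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Orenguteng/Llama-3-8B-Lexi-Uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# User-implemented "alignment layer": a system prompt stating your own policy.
# The policy text below is a hypothetical placeholder.
messages = [
    {"role": "system", "content": "You are a helpful assistant. Decline requests that violate our internal content policy."},
    {"role": "user", "content": "Your input text here"},
]

# Render the conversation with the chat template and generate a response.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))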
Architecture
The model architecture is based on the Llama-3 framework, a decoder-only transformer design that supports a variety of text generation tasks and produces fluent, human-like responses across diverse contexts.
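For a quick, end-to-end check of the model's text generation, the high-level transformers pipeline API can be used. The snippet below is a minimal sketch; the prompt and sampling settings are illustrative assumptions rather than values from the model card.

from transformers import pipeline

generator = pipeline("text-generation", model="Orenguteng/Llama-3-8B-Lexi-Uncensored")

# Prompt and sampling settings are illustrative, not recommendations from the model card.
result = generator(
    "Write a short product description for a mechanical keyboard.",
    max_new_tokens=80,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])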
Evaluation
The model has been evaluated on several benchmarks, achieving the following results:
- AI2 Reasoning Challenge (25-shot): 59.56% normalized accuracy
- HellaSwag (10-shot): 77.88% normalized accuracy
- MMLU (5-shot): 67.68% accuracy
- TruthfulQA (0-shot): 47.72% multiple-choice accuracy
- Winogrande (5-shot): 75.85% accuracy
- GSM8k (5-shot): 68.39% accuracy
These evaluations have been documented on the Open LLM Leaderboard.
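The Open LLM Leaderboard runs these benchmarks with EleutherAI's lm-evaluation-harness. The snippet below is a rough sketch of how one of them (HellaSwag, 10-shot) could be re-run locally with that harness; task names, shot counts, and result keys can differ between harness versions, so treat it as illustrative rather than an exact reproduction recipe.

# Requires EleutherAI's lm-evaluation-harness (the lm_eval package).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Orenguteng/Llama-3-8B-Lexi-Uncensored",
    tasks=["hellaswag"],
    num_fewshot=10,
    batch_size=8,
)
# Print the per-task metrics (e.g. normalized accuracy for HellaSwag).
print(results["results"])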
Guide: Running Locally
- Environment Setup: Ensure you have Python installed, along with the transformers and safetensors libraries.
- Model Download: Clone the model repository from Hugging Face:
  git clone https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored
- Load the Model: Use the transformers library to load the tokenizer and model:
  from transformers import AutoModelForCausalLM, AutoTokenizer
  tokenizer = AutoTokenizer.from_pretrained("Orenguteng/Llama-3-8B-Lexi-Uncensored")
  model = AutoModelForCausalLM.from_pretrained("Orenguteng/Llama-3-8B-Lexi-Uncensored")
- Inference: Use the model to generate text:
  inputs = tokenizer("Your input text here", return_tensors="pt")
  outputs = model.generate(**inputs)
  print(tokenizer.decode(outputs[0]))
- Cloud GPUs: For optimal performance, use a cloud GPU service such as AWS EC2, Google Cloud, or Azure to handle the model's computational requirements (a GPU-oriented loading and generation sketch follows this list).
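As a follow-up to the guide above, the sketch below shows one way to load the model on a GPU instance with half-precision weights and automatic device placement. The dtype, device map (which requires the accelerate package), and sampling parameters are illustrative assumptions, not settings taken from the model card.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Orenguteng/Llama-3-8B-Lexi-Uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# bfloat16 weights and automatic device placement keep the 8B model
# within the memory of a single modern cloud GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("Your input text here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))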
License
Llama-3-8B-Lexi-Uncensored is licensed under the META LLAMA 3 COMMUNITY LICENSE AGREEMENT. Users may use the model, including for commercial purposes, provided they comply with the terms of Meta's Llama 3 license. The model is uncensored, and users are responsible for implementing their own compliance measures.