Phi-4 Abliterated (Orion-zhen/phi-4-abliterated)
Introduction
Phi-4 Abliterated is a text generation model based on the Phi-4 architecture. It is designed to serve as a foundation for AI-powered features, with particular strengths in reasoning and logic, and is built for memory- and compute-constrained environments, making it well suited to low-latency applications.
Architecture
Phi-4 is a 14-billion-parameter dense, decoder-only transformer model, an architecture that lets it perform complex text generation tasks efficiently.
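As a quick, hedged illustration, the checkpoint's configuration can be inspected with the `transformers` library. The repository ID below assumes the Hugging Face path shown above, and the exact attribute values depend on the published config.

```python
from transformers import AutoConfig

# Assumed Hugging Face repository ID; adjust if the hosted path differs.
MODEL_ID = "Orion-zhen/phi-4-abliterated"

config = AutoConfig.from_pretrained(MODEL_ID)

# Standard decoder-only transformer hyperparameters exposed by the config.
print("Architecture:     ", config.architectures)
print("Hidden size:      ", config.hidden_size)
print("Layers:           ", config.num_hidden_layers)
print("Attention heads:  ", config.num_attention_heads)
print("Vocabulary size:  ", config.vocab_size)
```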
Training
The training data for Phi-4 extends from its predecessor, Phi-3, and incorporates a variety of sources:
- Publicly available documents filtered for quality.
- Synthetic data designed to teach subjects like math and coding.
- Acquired academic books and Q&A datasets.
- High-quality chat format data for instruction adherence and preference alignment.
Safety measures include supervised fine-tuning (SFT) and direct preference optimization (DPO) on diverse datasets to improve the model's robustness and adherence to instructions.
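To make the DPO stage concrete, below is a minimal sketch using the TRL library. It is an illustration of the technique, not Microsoft's actual Phi-4 post-training recipe: the base checkpoint name, the toy preference pairs, and the hyperparameters are all placeholders, and some argument names vary across TRL versions.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

BASE_MODEL = "microsoft/phi-4"  # assumed base checkpoint; any causal LM works for the sketch

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Toy preference data: each row pairs a prompt with a preferred ("chosen")
# and a dispreferred ("rejected") completion.
train_dataset = Dataset.from_dict({
    "prompt": ["Explain why 0.1 + 0.2 != 0.3 in floating point."],
    "chosen": ["Binary floating point cannot represent 0.1 or 0.2 exactly, so..."],
    "rejected": ["Because computers are bad at math."],
})

training_args = DPOConfig(
    output_dir="phi4-dpo-sketch",
    beta=0.1,                      # strength of the preference penalty
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,                # TRL clones the policy as the frozen reference model
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,    # named `tokenizer` in older TRL releases
)
trainer.train()
```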
Guide: Running Locally
To run the Phi-4 Abliterated model locally, follow these steps:
- Clone the repository: download the model from the Hugging Face Model Hub or clone the repository from GitHub.
- Install dependencies: make sure the required libraries are installed, typically `transformers`, `torch`, and any other packages listed in the repository.
- Load the model: use the Hugging Face `transformers` library to load the model and tokenizer.
- Run inference: generate text with the loaded model for your specific task, as in the sketch below.
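The following is a minimal end-to-end sketch of steps 2 to 4, assuming the repository ID shown above and a GPU with enough memory for the 14B weights in bfloat16. The prompt and generation parameters are purely illustrative.

```python
# Install dependencies first, e.g.:  pip install torch transformers accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Orion-zhen/phi-4-abliterated"  # assumed Hugging Face repository ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # ~2 bytes per parameter
    device_map="auto",           # place layers on the available GPUs/CPU
)

messages = [
    {"role": "user", "content": "Summarize the difference between BFS and DFS."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```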
Suggested Cloud GPUs
For optimal performance, consider using cloud-based GPUs such as those from AWS, Google Cloud, or Azure, which offer powerful hardware suitable for running large models like Phi-4 efficiently.
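As a rough sizing guide, 14 billion parameters at 2 bytes each (bfloat16/float16) come to roughly 28 GB for the weights alone, so a single 40 GB-class GPU is a comfortable fit, while smaller cards typically require quantization or sharding across multiple devices.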
License
The Phi-4 Abliterated model is released under the GPL-3.0 license, which permits open use and modification but requires that derivative works be distributed under the same license.