Phi-4-jackterated-GGUF

JackCloudman


Introduction

Phi-4-jackterated-GGUF is an experimental model derived from the base model matteogeniaccio/phi-4. It incorporates modifications made with TransformerLens to support the Phi-4 architecture and applies the abliteration technique, which is detailed in an external notebook.
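
The card leaves the details of the technique to that notebook, but in broad strokes abliteration estimates a "refusal direction" from activations on contrasting prompts and projects that direction out of the model's weight matrices. The following is a minimal, self-contained PyTorch sketch using random stand-in data; the tensor names, shapes, and sample counts are illustrative assumptions, not the author's notebook code.

    import torch

    d_model = 16                            # illustrative hidden size
    W_out = torch.randn(d_model, d_model)   # stand-in for an output projection (attention or MLP)

    # Mean residual-stream activations for "harmful" vs. "harmless" prompts.
    # Random stand-ins here; in practice these come from hooked forward passes
    # (e.g. via TransformerLens).
    acts_harmful = torch.randn(100, d_model)
    acts_harmless = torch.randn(100, d_model)

    # The refusal direction is the normalized difference of the two means.
    refusal_dir = acts_harmful.mean(dim=0) - acts_harmless.mean(dim=0)
    refusal_dir = refusal_dir / refusal_dir.norm()

    # Orthogonalize the weight matrix against that direction so the layer can
    # no longer write the refusal direction into the residual stream.
    W_ablated = W_out - torch.outer(refusal_dir, refusal_dir @ W_out)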

Architecture

The model is built on the transformers library and modified with the abliteration technique, making it uncensored and suitable for conversational applications. It is distributed in GGUF format; the files correspond to llama.cpp build b4361.

Training

The model uses matteogeniaccio/phi-4 as its foundational base. The modification applies the abliteration technique, which can be explored in depth through the linked notebook.

Guide: Running Locally

  1. Clone the Repository (make sure git-lfs is installed so the .gguf weight files are fully downloaded):
    git clone https://huggingface.co/JackCloudman/Phi-4-jackterated-GGUF
    cd Phi-4-jackterated-GGUF
    
  2. Install Dependencies: Ensure you have Python installed along with the transformers library; loading GGUF checkpoints through transformers also requires torch and the gguf package:
    pip install transformers torch gguf

  3. Run the Model: Load the model in your Python environment. Because the repository ships GGUF checkpoints, pass the specific .gguf filename (transformers dequantizes the weights on load). Alternatively, the file can be run directly with llama.cpp tooling; see the sketch after this list:
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "JackCloudman/Phi-4-jackterated-GGUF"
    gguf_file = "phi-4-jackterated.gguf"  # placeholder: use an actual .gguf filename from the repo

    tokenizer = AutoTokenizer.from_pretrained(repo, gguf_file=gguf_file)
    model = AutoModelForCausalLM.from_pretrained(repo, gguf_file=gguf_file)

  4. Cloud GPUs: For optimal performance, consider using cloud GPUs from providers like AWS, Google Cloud, or Azure.
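
Since the weights are distributed as GGUF and the card references llama.cpp build b4361, the quantized file can also be run directly with the llama-cpp-python bindings instead of transformers. The snippet below is a minimal sketch; llama-cpp-python is not mentioned on the card, and the model_path filename is a placeholder to replace with an actual .gguf file from the cloned repository.

    # pip install llama-cpp-python
    from llama_cpp import Llama

    llm = Llama(
        model_path="./Phi-4-jackterated-GGUF/phi-4-jackterated.gguf",  # placeholder filename
        n_ctx=4096,        # context window
        n_gpu_layers=-1,   # offload all layers to GPU if one is available
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Hello! Who are you?"}],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])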

License

This model is licensed under the Apache-2.0 License, allowing for both commercial and non-commercial use, modification, and distribution.
