Qwen2.5-14B-Instruct-abliterated

huihui-ai

Introduction

The Qwen2.5-14B-Instruct-abliterated model is an uncensored version of the Qwen/Qwen2.5-14B-Instruct model, created using a process called "abliteration." It is designed for text-generation tasks, primarily conversational applications where uncensored output is desired.

Architecture

The model is based on the Qwen/Qwen2.5-14B-Instruct architecture and uses the Transformers library. It has been modified to offer an uncensored experience, which can be beneficial in certain applications where unrestricted text generation is desired.

Training

Training details for the base model are not provided here. The modification involves the "abliteration" process, a technique for creating uncensored versions of instruction-tuned models. The technique was contributed by @FailSpy.
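Published descriptions of abliteration generally involve identifying a "refusal direction" in the model's residual-stream activations and projecting it out of the weights, so the model can no longer write along that direction. The exact procedure used for this model is not documented here; the following is a minimal, hypothetical sketch of the core projection step (the function name and tensor shapes are illustrative assumptions):

```python
import torch

def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove one direction from a weight matrix's output space.

    weight:    (d_model, d_in) matrix that writes into the residual stream.
    direction: (d_model,) estimated refusal direction.
    Returns a weight matrix whose outputs have no component along `direction`.
    """
    d = direction / direction.norm()     # unit refusal direction
    projector = torch.outer(d, d)        # rank-1 projector onto d
    return weight - projector @ weight   # subtract the component along d
```

In published write-ups, the direction is typically estimated as the difference between mean activations on prompts the model refuses and prompts it answers; that estimation step is omitted here.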

Guide: Running Locally

To run the model locally, follow these steps:

  1. Install Dependencies: Ensure you have Python installed, then install the Transformers and PyTorch libraries.
    pip install transformers torch
    
  2. Load the Model: Use the provided Python script to load the model and tokenizer.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    
    model_name = "huihui-ai/Qwen2.5-14B-Instruct-abliterated"
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    
  3. Run the Conversation Loop: Follow the script to initiate a conversation loop, allowing you to interact with the model.
    • Type /exit to end the conversation.
    • Type /clean to reset the conversation context.
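The conversation loop in step 3 can be sketched as below. The `chat` and `handle_command` names are illustrative, not taken from the published script, and generation parameters such as `max_new_tokens` are assumptions; it reuses the `model` and `tokenizer` loaded in step 2.

```python
def handle_command(text, messages):
    """Interpret the /exit and /clean commands; return (should_exit, messages)."""
    if text == "/exit":
        return True, messages
    if text == "/clean":
        return False, []  # reset the conversation context
    return False, messages

def chat(model, tokenizer):
    """Simple REPL: read a line, apply the chat template, generate a reply."""
    messages = []
    while True:
        user_input = input("User: ").strip()
        should_exit, messages = handle_command(user_input, messages)
        if should_exit:
            break
        if user_input.startswith("/"):
            continue  # command handled above; don't send it to the model
        messages.append({"role": "user", "content": user_input})
        input_ids = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        output_ids = model.generate(input_ids, max_new_tokens=512)
        reply = tokenizer.decode(
            output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
        )
        messages.append({"role": "assistant", "content": reply})
        print("Assistant:", reply)
```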

Consider using cloud GPUs like those from AWS, Google Cloud, or Azure for optimal performance, especially when handling large models or datasets.

License

The model is licensed under the Apache 2.0 License. For more details, please refer to the license file.
