WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B


Introduction

WhiteRabbitNeo is a model series designed for both offensive and defensive cybersecurity applications. The models are released as a public preview to demonstrate their capabilities and to evaluate their societal impact.

Architecture

WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B is based on the Qwen/Qwen2.5-Coder architecture and uses the Transformers library for text generation. It supports English-language input and is optimized for code generation, fine-tuning, and conversational tasks.

Training

The model is trained with a focus on generating code and handling conversational inputs effectively. It uses the ChatML prompt format to generate responses interactively from user instructions.
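ChatML wraps each conversation turn in special markers and ends with a cue for the assistant's reply. A minimal sketch of rendering a message history this way (the helper name `to_chatml` is illustrative, not part of the model's API):

```python
def to_chatml(messages):
    """Render a list of {role, content} dicts in the ChatML prompt format.

    Each turn becomes:  <|im_start|>{role}\n{content}<|im_end|>
    A trailing assistant header cues the model to generate its response.
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


prompt = to_chatml([
    {"role": "system", "content": "You are a cybersecurity assistant."},
    {"role": "user", "content": "Explain SQL injection."},
])
```

In practice the tokenizer's `apply_chat_template` method performs this rendering for Qwen-based models; the helper above only shows what the resulting prompt looks like.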

Guide: Running Locally

To run WhiteRabbitNeo locally, follow these basic steps:

  1. Environment Setup: Ensure you have Python and PyTorch installed.
  2. Install Transformers: Use pip install transformers to get the required library.
  3. Load Model: Use the AutoModelForCausalLM and AutoTokenizer classes from Transformers to load the model with the path WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B.
  4. Generate Text: Implement a function to encode user input, generate responses, and decode the output.
  5. Continuous Interaction: Set up a loop for continuous interaction, using user input to generate and print responses.
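The steps above can be sketched as a single script. This is a minimal example, assuming `transformers` and PyTorch are installed and enough GPU memory is available for a 7B model; generation parameters (`max_new_tokens`, `temperature`, `top_p`) are illustrative defaults, not values prescribed by the model card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B"


def main():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_PATH, torch_dtype=torch.bfloat16, device_map="auto"
    )

    history = []
    while True:  # continuous interaction loop
        user = input("You: ")
        if user.strip().lower() in {"exit", "quit"}:
            break
        history.append({"role": "user", "content": user})

        # apply_chat_template renders the history in ChatML for Qwen models
        inputs = tokenizer.apply_chat_template(
            history, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        output = model.generate(
            inputs, max_new_tokens=512, do_sample=True,
            temperature=0.7, top_p=0.9,
        )
        # Decode only the newly generated tokens, not the prompt
        reply = tokenizer.decode(
            output[0][inputs.shape[-1]:], skip_special_tokens=True
        )
        history.append({"role": "assistant", "content": reply})
        print("Assistant:", reply)


if __name__ == "__main__":
    main()
```

Keeping the full history in the prompt lets the model use prior turns as context; trim old turns if the conversation approaches the context window.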

For optimal performance, consider using cloud GPUs from providers like AWS, Azure, or Google Cloud.

License

WhiteRabbitNeo is released under the Apache-2.0 license with additional usage restrictions. The restrictions prohibit uses that violate laws, harm individuals, or exploit vulnerabilities. Users must also avoid generating false or inappropriate content and are responsible for ensuring compliance with these terms. The model is provided "as is" without warranties, and users assume responsibility for any risks associated with its use.
