SenecaLLM_x_Qwen2.5-7B-CyberSecurity-Q8_0-GGUF

AlicanKiraz0

Introduction

The SenecaLLM_x_Qwen2.5-7B-CyberSecurity-Q8_0-GGUF is a model designed for cybersecurity-related tasks, such as text classification across ethical hacking, penetration testing, and information security. It has been fine-tuned to understand and respond like a cybersecurity expert while counteracting malicious use.

Architecture

This model is based on the Qwen2.5-Coder-7B-Instruct architecture. It was converted to the GGUF format with llama.cpp via ggml.ai's GGUF-my-repo space, which provides a streamlined path for deployment and execution.
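
For reference, an equivalent conversion can be reproduced locally with llama.cpp's own tooling. The commands below are a minimal sketch, assuming the original safetensors checkpoint has been downloaded to ./SenecaLLM-fp16 (a hypothetical path) and that a recent llama.cpp checkout has already been built with its quantization tools:

    # Convert the Hugging Face checkpoint to GGUF at fp16, then quantize to Q8_0.
    # ./SenecaLLM-fp16 is a hypothetical local path to the source safetensors weights.
    python convert_hf_to_gguf.py ./SenecaLLM-fp16 --outfile senecallm-f16.gguf --outtype f16
    ./llama-quantize senecallm-f16.gguf senecallm_x_qwen2.5-7b-cybersecurity-q8_0.gguf Q8_0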

Training

The model underwent nearly 100 hours of training across several GPU configurations, including 1x RTX 4090, 8x RTX 4090, and 3x H100 setups. The training focused on key cybersecurity areas such as Incident Response, Threat Hunting, Code Analysis, Exploit Development, Reverse Engineering, and Malware Analysis. The objective was to develop the model's ability to think like a cybersecurity expert.

Guide: Running Locally

To run the SenecaLLM_x_Qwen2.5-7B-CyberSecurity-Q8_0-GGUF model locally, you'll need to use the llama.cpp tool. Below are the steps:

  1. Install llama.cpp using Homebrew (works on macOS and Linux):

    brew install llama.cpp
    
  2. As an alternative to the Homebrew package, clone the llama.cpp repository from GitHub to build from source:

    git clone https://github.com/ggerganov/llama.cpp
    
  3. Compile the project by navigating into the cloned directory and building with the LLAMA_CURL flag enabled, so the binaries can fetch models from Hugging Face (newer checkouts use CMake instead; see the note after this list):

    cd llama.cpp && LLAMA_CURL=1 make
    
  4. Invoke the model through the CLI or SERVER:

    • CLI:
      ./llama-cli --hf-repo AlicanKiraz0/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-Q8_0-GGUF --hf-file senecallm_x_qwen2.5-7b-cybersecurity-q8_0.gguf -p "The meaning to life and the universe is"
      
    • SERVER (an example HTTP request follows this list):
      ./llama-server --hf-repo AlicanKiraz0/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-Q8_0-GGUF --hf-file senecallm_x_qwen2.5-7b-cybersecurity-q8_0.gguf -c 2048
      
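Note that recent llama.cpp checkouts have deprecated the Makefile used in step 3 in favor of CMake. As a minimal sketch, the equivalent build would be:

    # CMake build for newer llama.cpp versions; binaries are placed in build/bin/.
    cmake -B build -DLLAMA_CURL=ON
    cmake --build build --config Release

With a CMake build, invoke ./build/bin/llama-cli and ./build/bin/llama-server instead of binaries in the repository root.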

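Once llama-server is running, it can be queried over HTTP. The request below is a minimal sketch that assumes the default bind address of 127.0.0.1:8080 (adjust if you pass --host or --port) and uses the server's OpenAI-compatible chat endpoint:

    # Send a chat completion request to the local llama-server instance.
    curl http://127.0.0.1:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
            "messages": [
              {"role": "user", "content": "Outline the main phases of an incident response process."}
            ],
            "max_tokens": 256
          }'
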
For better performance, consider using cloud GPUs, such as those provided by AWS, Google Cloud, or Azure.

License

The SenecaLLM_x_Qwen2.5-7B-CyberSecurity-Q8_0-GGUF model is distributed under the MIT License, allowing open and flexible use.
