Skywork-o1-Open-PRM-Qwen-2.5-1.5B

Introduction

The Skywork o1 Open model series, developed by the Skywork team at Kunlun Inc., introduces models with enhanced reasoning capabilities, including slow thinking and planning. The series comprises:

  • Skywork-o1-Open-Llama-3.1-8B: A chat model based on Llama-3.1-8B, trained with "o1-style" data to enhance reasoning.
  • Skywork-o1-Open-PRM-Qwen-2.5-1.5B: A process reward model (PRM) that scores reasoning step by step with incremental process rewards, suited to complex problem-solving.
  • Skywork-o1-Open-PRM-Qwen-2.5-7B: A larger version of the 1.5B model for more demanding reasoning tasks.

Compared with conventional chat models, the series exhibits markedly stronger reasoning, with slow-thinking and planning behaviors built into its outputs.

Architecture

The Skywork-o1-Open-PRM models are built on Qwen2.5-Math-1.5B-Instruct and Qwen2.5-Math-7B-Instruct, respectively. Rather than generating answers themselves, they score the intermediate steps of a candidate solution; performance is measured with dedicated evaluation settings and metrics on mathematical and code-related tasks, described below.

Evaluation

The evaluation process involves:

  • Mathematical Evaluation: Datasets such as GSM8K, MATH, and OlympiadBench, with metrics including Greedy Sampling Pass@1 and Majority Voting@64 (see the sketch after this list).
  • Code Evaluation: Focused on the Skywork-o1-Open-PRM models' performance on datasets such as MBPP and HumanEval.

The reward models are assessed with several evaluation methods across multiple base models, with the sampling temperature adjusted to the task type.
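
For concreteness, Majority Voting@64 samples 64 completions per problem, extracts each final answer, and scores the most common one. A minimal sketch (answer extraction omitted; the sample values are hypothetical):

    from collections import Counter

    def majority_vote(final_answers):
        """Return the most common final answer among sampled completions."""
        answer, _ = Counter(final_answers).most_common(1)[0]
        return answer

    # Maj@64: draw 64 samples per problem, keep the modal extracted answer.
    samples = ["5"] * 40 + ["6"] * 24  # hypothetical extracted final answers
    assert majority_vote(samples) == "5"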

Guide: Running Locally

Basic Steps

  1. Clone the Skywork PRM Inference Repository:

    git clone https://github.com/SkyworkAI/skywork-o1-prm-inference.git
    cd skywork-o1-prm-inference
    
  2. Run PRM Inference:

    • Set up the tokenizer and model using the provided Python script.
    • Prepare the input data and run it through the model to derive per-step rewards, as sketched below.
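
    A minimal sketch of this step, adapted from the repository's example scripts; PRM_MODEL and the model_utils helpers are assumed to ship with the cloned repo, and their exact names and signatures may differ across versions:

      from transformers import AutoTokenizer

      # Helpers bundled with the cloned repo; names follow its example
      # scripts and may differ across versions.
      from model_utils.prm_model import PRM_MODEL
      from model_utils.io_utils import (prepare_input,
                                        prepare_batch_input_for_model,
                                        derive_step_rewards)

      prm_model_path = "/path/to/prm_model"
      tokenizer = AutoTokenizer.from_pretrained(prm_model_path, trust_remote_code=True)
      model = PRM_MODEL.from_pretrained(prm_model_path, device_map="auto").eval()

      datas = [{
          "problem": "Janet has 3 apples and buys 2 more. How many does she have?",
          "response": "Step 1: She starts with 3.\nStep 2: 3 + 2 = 5.\nThe answer is 5.",
      }]

      # Tokenize each (problem, response) pair and mark step boundaries ("\n").
      processed = [prepare_input(d["problem"], d["response"],
                                 tokenizer=tokenizer, step_token="\n") for d in datas]
      input_ids, steps, reward_flags = zip(*processed)

      # Pad into a batch, run the model, then collapse per-token rewards
      # into one reward per reasoning step.
      input_ids, attention_mask, reward_flags = prepare_batch_input_for_model(
          input_ids, reward_flags, tokenizer.pad_token_id)
      _, _, rewards = model(input_ids=input_ids, attention_mask=attention_mask,
                            return_probs=True)
      step_rewards = derive_step_rewards(rewards, reward_flags)
      print(step_rewards[0])  # one reward per step of the first response
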
  3. vLLM Server for Inference:

    • Install vLLM and the PRM plugin (run the editable install from the cloned repository root):
      pip install vllm==v0.6.4.post1
      pip install -e .
      
    • Start the vLLM server:
      CUDA_VISIBLE_DEVICES=0,1,2,3 vllm serve /path/to/prm_model --host 0.0.0.0 --port 8081 --tensor-parallel-size 4 --gpu-memory-utilization 0.9 --enable-prefix-caching --dtype auto
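
    Once the server is running, step rewards can be requested over its OpenAI-compatible API. The sketch below follows the repository's pattern of returning rewards through the embeddings route; the model_utils helpers and the exact response handling are assumptions taken from the repo's examples:

      from openai import OpenAI
      from transformers import AutoTokenizer

      # Hypothetical helpers from the cloned repo's example scripts.
      from model_utils.io_utils import prepare_input, derive_step_rewards_vllm

      prm_model_path = "/path/to/prm_model"
      tokenizer = AutoTokenizer.from_pretrained(prm_model_path, trust_remote_code=True)

      problem = "Janet has 3 apples and buys 2 more. How many does she have?"
      response = "Step 1: She starts with 3.\nStep 2: 3 + 2 = 5.\nThe answer is 5."

      # Tokenize and mark the step boundaries ("\n") to be scored.
      input_ids, steps, reward_flags = prepare_input(
          problem, response, tokenizer=tokenizer, step_token="\n")

      # The PRM plugin serves per-token rewards via the embeddings endpoint.
      client = OpenAI(api_key="EMPTY", base_url="http://localhost:8081/v1")
      model_id = client.models.list().data[0].id
      rewards = client.embeddings.create(input=[input_ids], model=model_id)

      # Collapse per-token rewards into one reward per reasoning step.
      step_rewards = derive_step_rewards_vllm(rewards, [reward_flags])
      print(step_rewards[0])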
      

Suggested Cloud GPUs

For best performance, consider cloud GPU services such as AWS EC2 instances with NVIDIA GPUs, Google Cloud GPU instances, or Azure GPU VMs; note that the serve command above assumes four GPUs (--tensor-parallel-size 4).

License

Skywork models are available under the Skywork Community License, which permits commercial use subject to its terms. The models must not be used for unlawful activities or deployed without appropriate security review, and the developers accept no liability for risks or issues arising from their use.
