Introduction

BlenderLLM is a large language model designed for computer-aided design (CAD) tasks, utilizing self-improvement techniques for enhanced performance. It is based on the Qwen2.5-Coder-7B-Instruct model and has been fine-tuned using the BlendNet dataset. This project is a collaborative effort from the School of Data Science (SDS) at the Chinese University of Hong Kong, Shenzhen.

Architecture

BlenderLLM builds on Qwen2.5-Coder-7B-Instruct as its base model. This base is fine-tuned on the BlendNet dataset, which is tailored to CAD, rendering, and 3D-modeling tasks, and then further refined with self-improvement training.

Training

BlenderLLM is trained by fine-tuning Qwen2.5-Coder-7B-Instruct on the BlendNet dataset, followed by self-improvement rounds that sharpen its CAD capabilities. Training targets the accuracy of the Blender Python (bpy) scripts the model generates and the quality of the renders they produce.
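The self-improvement idea can be sketched as a generate-score-select loop: for each instruction the model proposes several candidate scripts, each candidate is scored (for example by rendering it and checking the result), and the best candidates are folded back into the fine-tuning set for the next round. The sketch below is purely illustrative; every function name in it (sample_script, score_render) is hypothetical and not part of the project's actual API:

```python
# Conceptual sketch of one self-improvement round. The helper callables
# (sample_script, score_render) are hypothetical stand-ins -- the real
# pipeline lives in the BlenderLLM repository.
from typing import Callable, List, Tuple


def self_improve_round(
    instructions: List[str],
    sample_script: Callable[[str], str],   # model draws one candidate script
    score_render: Callable[[str], float],  # renders the script and scores it
    n_samples: int = 4,
    threshold: float = 0.5,
) -> List[Tuple[str, str]]:
    """Collect (instruction, best_script) pairs whose score clears the threshold.

    The returned pairs would serve as extra fine-tuning data for the next round.
    """
    new_data: List[Tuple[str, str]] = []
    for instruction in instructions:
        # Sample several candidates and keep the highest-scoring one.
        candidates = [sample_script(instruction) for _ in range(n_samples)]
        best = max(candidates, key=score_render)
        if score_render(best) >= threshold:
            new_data.append((instruction, best))
    return new_data
```

In the real pipeline the scoring step would involve rendering the script in Blender and comparing the output against the target; here it is left abstract on purpose.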

Guide: Running Locally

To run BlenderLLM locally, follow these steps:

  1. Clone the Repository:
    Clone the BlenderLLM repository from GitHub.

  2. Install Dependencies:
    Ensure the necessary dependencies are installed. This typically means a Python environment with PyTorch and the Hugging Face transformers library (the model is Qwen-based, so TensorFlow is not required).

  3. Download the Model:
    Download the BlenderLLM model files from the repository or Hugging Face model hub.

  4. Run the Model:
    Execute the model code to start generating CAD-related outputs.

  5. Utilize Cloud GPUs:
    For optimal performance, consider using cloud services like AWS, Google Cloud, or Azure to access powerful GPUs.
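Steps 3 and 4 can be sketched with the Hugging Face transformers API. Note that the repository id "FreedomIntelligence/BlenderLLM" and the prompt wording below are assumptions; check the project's README for the exact model id and prompt format:

```python
# Minimal sketch of downloading BlenderLLM and generating a bpy script.
# The model id and prompt template are assumptions, not the project's
# documented interface.
from __future__ import annotations


def build_prompt(instruction: str) -> str:
    """Wrap a natural-language CAD request into a plain instruction prompt.

    The exact template is an assumption; the official repo may use a chat
    template via tokenizer.apply_chat_template instead.
    """
    return f"Write a Blender Python (bpy) script for the following task:\n{instruction}\n"


def generate_script(model_id: str, instruction: str, max_new_tokens: int = 512) -> str:
    """Load the model with transformers and generate a script for one task."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # a 7B model needs roughly 15 GB in fp16
        device_map="auto",          # place layers on the available GPU(s)
    )
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the echoed prompt; decode only the newly generated tokens.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    # First run downloads the full weights; use a cloud GPU (step 5) if needed.
    print(generate_script("FreedomIntelligence/BlenderLLM", "Model a simple coffee mug."))
```

The generated script is then meant to be executed inside Blender to produce the actual 3D model.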

License

BlenderLLM is licensed under the Apache-2.0 License. This permits users to freely use, modify, and distribute the software, provided that the license text and copyright notices are retained and that modified files carry prominent notices of the changes.
