FLUX.1-dev Controlnet Inpainting Alpha

alimama-creative

Introduction

FLUX.1-dev-Controlnet-Inpainting-Alpha is an inpainting model developed by the AlimamaCreative Team. It fills in missing or masked regions of an image according to a text prompt. The model is built on the FLUX.1-dev base model and uses the ControlNet architecture.

Architecture

The model employs ControlNet, an architecture that adds controllable conditioning to a text-to-image diffusion model: a parallel network encodes the conditioning input (here, the image and its inpainting mask) and injects its features into the base model. Applied to the FLUX.1-dev backbone, this steers generation so that the masked region is repainted in a way that follows the prompt while the rest of the image is preserved.
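
Concretely, the ControlNet is a separate set of weights loaded alongside the FLUX.1-dev transformer. A minimal sketch of loading the two components, assuming the local modules controlnet_flux and transformer_flux shipped in the repository referenced below are on the import path:

    import torch
    from controlnet_flux import FluxControlNetModel
    from transformer_flux import FluxTransformer2DModel

    # The ControlNet branch carries the inpainting conditioning; the
    # FLUX.1-dev transformer is the generative backbone it steers.
    controlnet = FluxControlNetModel.from_pretrained(
        "alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Alpha",
        torch_dtype=torch.bfloat16,
    )
    transformer = FluxTransformer2DModel.from_pretrained(
        "black-forest-labs/FLUX.1-dev", subfolder="transformer",
        torch_dtype=torch.bfloat16,
    )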

Training

The model was trained on 12 million images drawn from the LAION-2B dataset and internal sources, at a resolution of 768x768 pixels. Generation works best at this resolution; other sizes can yield degraded results. The recommended ControlNet conditioning scale is between 0.9 and 0.95, as in the snippet below.
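
In practice this means resizing the image and mask to 768x768 before inference and keeping the conditioning scale in that band. A minimal sketch (file names are placeholders; load_image is the standard diffusers helper):

    from diffusers.utils import load_image

    size = (768, 768)  # training resolution; other sizes may degrade quality
    image = load_image("image.jpg").convert("RGB").resize(size)
    mask = load_image("mask.jpg").convert("RGB").resize(size)
    conditioning_scale = 0.95  # pass as controlnet_conditioning_scale, range 0.9-0.95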

Guide: Running Locally

To use this model locally, follow these steps:

  1. Install Diffusers
    Install the required version of the Diffusers library:

    pip install diffusers==0.30.2
    
  2. Clone the Repository
    Clone the model repository from GitHub:

    git clone https://github.com/alimama-creative/FLUX-Controlnet-Inpainting.git
    
  3. Modify and Run the Script
    Adjust the image_path, mask_path, and prompt within the script, then execute it (a sketch of what such a script does follows this list):

    python main.py
    
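The repository's main.py builds the pipeline and runs inference. Below is a minimal sketch of the same flow, assuming the repo's local modules (controlnet_flux, transformer_flux, pipeline_flux_controlnet_inpaint) are importable from the working directory; the prompt and file names are placeholders:

    import torch
    from diffusers.utils import load_image
    from controlnet_flux import FluxControlNetModel
    from transformer_flux import FluxTransformer2DModel
    from pipeline_flux_controlnet_inpaint import FluxControlNetInpaintingPipeline

    # Assemble the pipeline from the FLUX.1-dev base weights plus the
    # inpainting ControlNet (module and class names follow the repository).
    controlnet = FluxControlNetModel.from_pretrained(
        "alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Alpha",
        torch_dtype=torch.bfloat16,
    )
    transformer = FluxTransformer2DModel.from_pretrained(
        "black-forest-labs/FLUX.1-dev", subfolder="transformer",
        torch_dtype=torch.bfloat16,
    )
    pipe = FluxControlNetInpaintingPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        controlnet=controlnet,
        transformer=transformer,
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    # Source image and mask (white marks the region to repaint), resized
    # to the 768x768 training resolution.
    size = (768, 768)
    image = load_image("image.jpg").convert("RGB").resize(size)
    mask = load_image("mask.jpg").convert("RGB").resize(size)

    result = pipe(
        prompt="a cat sitting on a park bench",  # placeholder prompt
        height=size[1],
        width=size[0],
        control_image=image,
        control_mask=mask,
        num_inference_steps=28,
        generator=torch.Generator(device="cuda").manual_seed(24),
        controlnet_conditioning_scale=0.9,  # recommended: 0.9-0.95
        guidance_scale=3.5,
    ).images[0]
    result.save("result.png")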

For optimal performance, it's recommended to use a cloud GPU, such as those offered by AWS or Google Cloud, especially for high-resolution images.
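
If only a single GPU with limited memory is available, CPU offloading can trade speed for a smaller VRAM footprint. enable_model_cpu_offload() is a standard diffusers pipeline method (it requires the accelerate package); whether the repository's custom pipeline supports it is an assumption:

    # Continuing from the sketch above: instead of pipe.to("cuda"), move each
    # sub-model to the GPU only while it is running.
    pipe.enable_model_cpu_offload()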

License

The model weights are distributed under the FLUX.1-dev Non-Commercial License. For more details, refer to the license document.
