FLUX.1-dev ControlNet Inpainting Beta

alimama-creative

Introduction

FLUX.1-dev ControlNet Inpainting Beta is an improved inpainting model developed by the AlimamaCreative Team. It offers significant enhancements over previous versions, producing higher-quality, more detailed results for image-to-image inpainting built on the ControlNet architecture.

Architecture

The model is built on the ControlNet architecture and directly supports resolutions up to 1024x1024 pixels, improving detail generation and prompt adherence. This enables intricate image-editing capabilities, with precise, prompt-driven control over the generated content.
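
As a sketch of how the ControlNet component can be loaded with diffusers (the Hub repository ID below is inferred from this model's name and is an assumption, not taken from the project's own scripts):

    import torch
    from diffusers import FluxControlNetModel

    # Assumed Hugging Face Hub ID for this model; bfloat16 keeps memory usage down.
    controlnet = FluxControlNetModel.from_pretrained(
        "alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Beta",
        torch_dtype=torch.bfloat16,
    )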

Training

The model was trained on a dataset of 15 million images drawn from LAION2B and proprietary sources. This training regime makes 1024x1024 pixels the optimal inference resolution.
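
Because of this, inputs are typically resized to 1024x1024 before inference. A minimal sketch using diffusers' image loader (the file names are placeholders):

    from diffusers.utils import load_image

    size = (1024, 1024)  # the model's optimal inference resolution
    image = load_image("input.jpg").convert("RGB").resize(size)
    mask = load_image("mask.png").convert("RGB").resize(size)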

Guide: Running Locally

  1. Install Dependencies: Ensure that you have the required version of the diffusers library.

    pip install diffusers==0.30.2
    
  2. Clone Repository: Obtain the model files by cloning the GitHub repository.

    git clone https://github.com/alimama-creative/FLUX-Controlnet-Inpainting.git
    
  3. Configure Execution: Set the image_path, mask_path, and prompt within main.py (a hedged sketch of the overall script appears after this list).

  4. Run the Model: Execute the main script to start the inpainting process.

    python main.py
    
  5. GPU Recommendation: For efficient performance, use a GPU with at least 27 GB of memory, such as those available from cloud providers like AWS or Google Cloud.
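
For orientation, the following is a hedged end-to-end sketch of the workflow that main.py implements. The repository-local modules (controlnet_flux, transformer_flux, pipeline_flux_controlnet_inpaint), the Hub IDs, and the exact pipeline arguments are assumptions based on the cloned repository's layout; treat it as an outline, not a drop-in replacement for main.py.

    import torch
    from diffusers.utils import load_image, check_min_version

    # Modules shipped inside the cloned repository (assumed file layout).
    from controlnet_flux import FluxControlNetModel
    from transformer_flux import FluxTransformer2DModel
    from pipeline_flux_controlnet_inpaint import FluxControlNetInpaintingPipeline

    check_min_version("0.30.2")

    # Placeholders: configure these as in step 3.
    image_path = "input.jpg"
    mask_path = "mask.png"
    prompt = "a description of the content to generate in the masked region"

    # Assemble the pipeline: FLUX.1-dev base weights plus the inpainting ControlNet.
    controlnet = FluxControlNetModel.from_pretrained(
        "alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Beta",
        torch_dtype=torch.bfloat16,
    )
    transformer = FluxTransformer2DModel.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        subfolder="transformer",
        torch_dtype=torch.bfloat16,
    )
    pipe = FluxControlNetInpaintingPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        controlnet=controlnet,
        transformer=transformer,
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    # Resize both inputs to the model's optimal 1024x1024 resolution.
    size = (1024, 1024)
    image = load_image(image_path).convert("RGB").resize(size)
    mask = load_image(mask_path).convert("RGB").resize(size)

    # Generate: the masked area is repainted to match the prompt.
    result = pipe(
        prompt=prompt,
        height=size[1],
        width=size[0],
        control_image=image,
        control_mask=mask,
        num_inference_steps=28,
        generator=torch.Generator(device="cuda").manual_seed(24),
        controlnet_conditioning_scale=0.9,
        guidance_scale=3.5,
    ).images[0]
    result.save("flux_inpaint.png")

If GPU memory is tight, replacing the .to("cuda") call with pipe.enable_model_cpu_offload() (a standard diffusers pipeline method) trades some speed for a smaller memory footprint.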

License

The FLUX.1-dev ControlNet Inpainting model is released under the FLUX.1-dev Non-Commercial License; refer to that license for the full terms.
