HunyuanDiT ControlNet Inpainting
Introduction
The HunyuanDiT ControlNet Inpainting model by TTPlanet offers advanced inpainting capabilities, maintaining stylistic consistency and creative detail in modified regions. It supports diverse resolutions and adapts robustly to various base models.
Architecture
The model leverages the ComfyUI inpainting preprocessor to combine the input image with its mask, which helps it preserve the original image's style while applying changes to the masked region.
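As a rough illustration of what mask-and-image integration typically means for an inpainting ControlNet, the sketch below builds a conditioning array from an RGB image and a binary mask: pixels to repaint are zeroed out and the mask is appended as a fourth channel. This is a common construction, not necessarily the exact layout used by this model or the ComfyUI preprocessor.

```python
import numpy as np

def build_inpaint_condition(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Combine an RGB image (H, W, 3), float in [0, 1], with a binary mask
    (H, W), 1 = region to repaint, into a 4-channel conditioning array:
    the masked image plus the mask itself. Illustrative only."""
    if image.shape[:2] != mask.shape:
        raise ValueError("image and mask must share height and width")
    masked = image * (1.0 - mask)[..., None]          # zero out pixels to repaint
    return np.concatenate([masked, mask[..., None]], axis=-1)  # (H, W, 4)

# Tiny example: a 4x4 white image with the top-left 2x2 block masked.
img = np.ones((4, 4, 3), dtype=np.float32)
msk = np.zeros((4, 4), dtype=np.float32)
msk[:2, :2] = 1.0
cond = build_inpaint_condition(img, msk)
```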
Training
The model has been trained on a variety of images and masks, enabling it to handle different resolutions and maintain consistency in style and depth perception during inpainting tasks.
Guide: Running Locally
- Clone the Repository: Start by cloning the model repository from Hugging Face.
- Dependencies: Install necessary dependencies, including the ComfyUI preprocessor from the linked GitHub repository.
- Run the Model: Use Python to execute the model with your images and masks.
- Cloud GPUs: For optimal performance, especially with high-resolution images, consider using cloud GPU services such as AWS, Google Cloud, or Azure.
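Before executing the model, the image and mask usually need to share dimensions the backbone accepts. The helper below is a hypothetical sketch: it snaps a pair to the nearest smaller multiple of 64, a typical constraint for diffusion backbones; the exact resolutions HunyuanDiT supports should be checked against the model card.

```python
from PIL import Image

def snap_to_multiple(image: Image.Image, mask: Image.Image, multiple: int = 64):
    """Resize an image/mask pair so both dimensions are multiples of
    `multiple`. The value 64 is an assumption, not confirmed for HunyuanDiT."""
    w = max(multiple, (image.width // multiple) * multiple)
    h = max(multiple, (image.height // multiple) * multiple)
    return image.resize((w, h)), mask.resize((w, h))

# Example: a 300x200 pair snaps down to 256x192.
img = Image.new("RGB", (300, 200), "white")
msk = Image.new("L", (300, 200), 0)
img2, msk2 = snap_to_multiple(img, msk)
```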
License
The project is licensed under the MIT License, allowing for flexible use and modification.