ControlNet
by lllyasviel
Introduction
ControlNet is a model that extends Stable Diffusion (SD) with conditection-derived conditioning inputs such as edge maps, depth maps, and human poses, allowing for more controlled image generation.
Architecture
ControlNet ships pretrained weights paired with detection models, including:
- Canny edge detection
- MiDaS depth estimation
- HED edge detection
- M-LSD line detection
- Normal map generation
- OpenPose pose detection
- Human scribbles
- Semantic segmentation
Each detector produces a conditioning map (an edge map, depth map, pose skeleton, and so on) that steers SD during generation, enabling refined control over image outputs.
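As an illustration of how a conditioning map is derived from an input image, here is a minimal NumPy sketch of an edge detector. It uses simple Sobel gradients as a stand-in for the Canny detector that ControlNet actually uses; the resulting binary map is the kind of image fed to the model alongside the text prompt.

```python
import numpy as np

def edge_map(image: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Derive a binary edge map from a grayscale image via Sobel gradients.

    A simplified stand-in for the Canny detector used by ControlNet:
    the returned map serves as the conditioning image for generation.
    """
    # Sobel kernels for horizontal and vertical gradients.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = image.shape
    padded = np.pad(image.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Accumulate the 3x3 convolution one kernel tap at a time.
    for i in range(3):
        for j in range(3):
            window = padded[i:i + h, j:j + w]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() or 1.0  # avoid division by zero on flat images
    return (magnitude > threshold).astype(np.uint8) * 255
```

A vertical step edge in the input yields white pixels only along the step, which is exactly the sparse structure that makes edge maps useful conditioning signals.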
Training
ControlNet provides an example training dataset, fill50k.zip, for its training tutorial. Third-party models such as OpenPose and MiDaS supply the detection annotations used to condition and train the ControlNet models.
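A small loader for the dataset's prompt file might look like the following. The layout (a prompt.json file with one JSON object per line, each naming a source conditioning image, a target image, and a text prompt) is assumed from the repository's training tutorial; check it against your unpacked fill50k.zip before relying on it.

```python
import json
from pathlib import Path

def load_fill50k_prompts(root: str) -> list:
    """Read fill50k's prompt file: one JSON object per line, each naming
    a source (conditioning) image, a target image, and a text prompt.

    File name and keys are assumed from the ControlNet training tutorial;
    adjust them to match your unpacked archive.
    """
    records = []
    with open(Path(root) / "prompt.json", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # skip blank lines
                continue
            item = json.loads(line)
            records.append({
                "source": item["source"],   # e.g. "source/0.png"
                "target": item["target"],   # e.g. "target/0.png"
                "prompt": item["prompt"],   # e.g. "a red circle on white"
            })
    return records
```

Each record pairs a conditioning image with the target the model should learn to produce for that prompt.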
Guide: Running Locally
To run ControlNet locally, follow these steps:
- Clone the Repository: Download the ControlNet repository from GitHub.
- Install Dependencies: Ensure you have all necessary dependencies installed, which may include Python libraries for machine learning and image processing.
- Download Models: Acquire the necessary pretrained model files from the repository or Hugging Face.
- Execute Model Scripts: Use the appropriate scripts to run models, adjusting parameters as needed for specific detection tasks.
For better performance, it's recommended to utilize cloud GPUs such as those offered by AWS, Google Cloud, or Azure.
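The steps above might look like the following in practice. The environment file, model file name, and demo script name follow the conventions of the lllyasviel/ControlNet GitHub repository, so verify them against the current README before running.

```shell
# 1. Clone the repository.
git clone https://github.com/lllyasviel/ControlNet.git
cd ControlNet

# 2. Install dependencies (the repo provides a conda environment file).
conda env create -f environment.yaml
conda activate control

# 3. Download a pretrained model from the repo or Hugging Face into ./models,
#    e.g. control_sd15_canny.pth for the Canny-edge variant.

# 4. Run the script for the desired detection task, e.g. the Canny demo.
python gradio_canny2image.py
```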
License
ControlNet is distributed under the OpenRAIL license, which includes use restrictions against misuse: it prohibits generating content that is hostile or offensive, or that perpetuates stereotypes.