FLUX.1-schnell GGUF
Introduction
The FLUX.1-schnell-gguf model is a text-to-image generation model converted from the original black-forest-labs/FLUX.1-schnell. It is distributed in the GGUF format and was quantized by the user city96. The model is suitable for generating images from text prompts.
Architecture
The model is distributed as GGUF files and targets text-to-image generation pipelines. It has been specifically prepared for use with the ComfyUI-GGUF custom node, allowing efficient deployment and operation. The base model for this conversion is black-forest-labs/FLUX.1-schnell.
Training
The model was not retrained; its weights were converted directly from the base version and quantized to reduce memory and storage requirements. Refer to the quantization types chart linked in the repository to understand how each quantization type trades image quality for file size.
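Before downloading, it can help to see which quantization types are available. The following is a minimal sketch using the huggingface_hub client; the repository ID city96/FLUX.1-schnell-gguf is an assumption and should be checked against the actual repository.

```python
# Sketch: list the GGUF files in the repository so you can compare
# quantization types before choosing one to download.
from huggingface_hub import list_repo_files

repo_id = "city96/FLUX.1-schnell-gguf"  # assumed repository ID

# Each .gguf file name encodes its quantization type (e.g. Q4_K_S, Q8_0);
# lower-bit quantizations are smaller but lose more precision.
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
for name in sorted(gguf_files):
    print(name)
```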
Guide: Running Locally
- Download the Model: Download the model files from the Hugging Face repository (see the sketch after this list).
- Install ComfyUI and the Custom Node: Install ComfyUI, then follow the instructions in the ComfyUI-GGUF GitHub repository to set up the custom node.
- Place Model Files: Move the downloaded model files to the ComfyUI/models/unet directory.
- Run the Model: Use ComfyUI to load and execute the model.
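The sketch below downloads one quantized file straight into ComfyUI's unet directory. The repository ID, the file name flux1-schnell-Q8_0.gguf, and the local ComfyUI path are assumptions; substitute whichever quantization and install location you actually use.

```python
# Sketch: fetch a quantized UNet file into ComfyUI/models/unet.
from huggingface_hub import hf_hub_download

repo_id = "city96/FLUX.1-schnell-gguf"   # assumed repository ID
filename = "flux1-schnell-Q8_0.gguf"     # assumed file name; pick any quant listed in the repo
unet_dir = "ComfyUI/models/unet"         # directory ComfyUI-GGUF expects, relative to your ComfyUI install

# Download the quantized weights directly into ComfyUI's unet folder.
path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir=unet_dir)
print(f"Model downloaded to: {path}")
```

Once the file is in place, load it from ComfyUI using the GGUF UNet loader node provided by the ComfyUI-GGUF custom node rather than the standard checkpoint loader.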
Suggested Cloud GPUs
For optimal performance, consider using cloud services that offer NVIDIA GPUs, such as AWS, Google Cloud Platform, or Azure.
License
The model is released under the Apache 2.0 license, allowing for both personal and commercial usage under the terms specified in the license.