Flux.1 Lite 8B
Introduction
Flux.1 Lite is an 8-billion-parameter transformer model from Freepik for text-to-image generation. It is a distilled version of FLUX.1-dev, offering more efficient resource usage: it consumes 7 GB less RAM and runs about 23% faster while maintaining the same precision.
Architecture
Flux.1 Lite uses a transformer architecture optimized for text-to-image tasks. The architecture was refined by identifying, through mean-squared-error (MSE) analysis, blocks that can be skipped without a significant impact on final image quality.
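For intuition only, the following is a minimal, hypothetical sketch of this kind of analysis on a generic stack of transformer blocks: the output with one block skipped is compared against the full output, and blocks with the lowest MSE are candidates for removal. The `ToyBlock` module, the dimensions, and the block count below are made up for illustration and are not the actual Flux.1 Lite code.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a transformer block; not the real Flux block.
class ToyBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))

def run_blocks(blocks, x, skip_index=None):
    """Run the block stack, optionally skipping one block."""
    for i, block in enumerate(blocks):
        if i == skip_index:
            continue
        x = block(x)
    return x

torch.manual_seed(0)
dim, seq_len = 64, 16
blocks = nn.ModuleList([ToyBlock(dim) for _ in range(8)]).eval()
x = torch.randn(1, seq_len, dim)

with torch.no_grad():
    reference = run_blocks(blocks, x)
    # Blocks whose removal changes the output least are the cheapest to skip.
    for i in range(len(blocks)):
        skipped = run_blocks(blocks, x, skip_index=i)
        mse = torch.mean((reference - skipped) ** 2).item()
        print(f"block {i}: MSE when skipped = {mse:.5f}")
```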
Training
The model has been trained on a new dataset, improving on the previous alpha version. Key enhancements include distillation across a broader range of guidance values and step counts, as well as a more diverse dataset featuring longer prompts.
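As a purely illustrative sketch of what distilling across a range of guidance values can look like (this is not Freepik's training code), a toy student that receives the guidance scale as an extra input can be trained to match a classifier-free-guided teacher prediction at randomly sampled guidance scales. All modules, dimensions, and ranges below are assumptions made for the example.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy denoisers; the real teacher and student are large transformers.
x_dim, c_dim = 16, 8
teacher = nn.Sequential(nn.Linear(x_dim + c_dim, 64), nn.SiLU(), nn.Linear(64, x_dim)).eval()
# The student also takes the guidance scale as input, so one model covers many scales.
student = nn.Sequential(nn.Linear(x_dim + c_dim + 1, 64), nn.SiLU(), nn.Linear(64, x_dim))
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

def teacher_guided(x, cond, g):
    """Classifier-free guidance on the teacher: uncond + g * (cond - uncond)."""
    with torch.no_grad():
        c = teacher(torch.cat([x, cond], dim=-1))
        u = teacher(torch.cat([x, torch.zeros_like(cond)], dim=-1))
    return u + g * (c - u)

for step in range(200):
    x = torch.randn(8, x_dim)                 # stand-in for a noisy latent
    cond = torch.randn(8, c_dim)              # stand-in for a text conditioning vector
    g = torch.empty(8, 1).uniform_(1.0, 7.0)  # sample guidance scales over a broad range
    target = teacher_guided(x, cond, g)
    pred = student(torch.cat([x, cond, g], dim=-1))
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```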
Guide: Running Locally
To run Flux.1 Lite locally, follow these steps:
- Install Dependencies: Ensure you have the necessary dependencies installed, including PyTorch and the diffusers library.
- Load the Model:
```python
import torch
from diffusers import FluxPipeline

torch_dtype = torch.bfloat16
device = "cuda"

model_id = "Freepik/flux.1-lite-8B"
pipe = FluxPipeline.from_pretrained(
    model_id, torch_dtype=torch_dtype
).to(device)
```
- Run Inference:
prompt = "A close-up image of a green alien with fluorescent skin in the middle of a dark purple forest" guidance_scale = 3.5 n_steps = 28 seed = 11 with torch.inference_mode(): image = pipe( prompt=prompt, generator=torch.Generator(device="cpu").manual_seed(seed), num_inference_steps=n_steps, guidance_scale=guidance_scale, height=1024, width=1024, ).images[0] image.save("output.png")
For optimal performance, consider using a cloud GPU service such as AWS, Google Cloud, or Azure.
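If local GPU memory is tight, diffusers also provides offloading helpers that trade some speed for lower VRAM usage; the sketch below assumes the `accelerate` package is installed and that your diffusers version supports this method.

```python
# Optional: offload submodules to CPU between forward passes to lower peak VRAM.
# When using this helper, do not move the pipeline to the GPU with `.to(device)` first.
pipe.enable_model_cpu_offload()
```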
License
Flux.1 Lite is distributed under the flux-1-dev-non-commercial-license. For more details, refer to the license document.