AWPortrait-FL
Introduction
AWPortrait-FL is a text-to-image model fine-tuned from the FLUX.1-dev base model. It is trained on nearly 2,000 high-quality fashion photography images to improve composition and detail, delivering more realistic skin textures and aesthetics.
Architecture
AWPortrait-FL uses the FluxPipeline from the Diffusers library. Because it builds directly on the FLUX.1-dev model, it is loaded and run in the same way as the base model.
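As a quick illustration of that structure, the sketch below (a minimal example, assuming the diffusers and torch packages are installed and that downloading the checkpoint is acceptable) loads the pipeline and prints the FLUX.1-dev sub-modules it inherits, which are expected to include the transformer, VAE, two text encoders with their tokenizers, and the scheduler:

    import torch
    from diffusers import FluxPipeline

    # Download the repository and instantiate the pipeline in bfloat16.
    pipe = FluxPipeline.from_pretrained("Shakker-Labs/AWPortrait-FL", torch_dtype=torch.bfloat16)

    # List the sub-modules that make up the FLUX.1-dev architecture.
    print(list(pipe.components.keys()))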
Training
The model is fine-tuned on a dataset drawn from AWPortrait-XL that focuses on fashion photography with high aesthetic standards, with training aimed at raising the quality of composition and detail. The work was carried out under the supervision of DynamicWang at AWPlanet.
Guide: Running Locally
- Install Required Libraries: Ensure that the diffusers and torch libraries are installed in your Python environment (for example, via pip install diffusers torch).
- Load the Model:
    import torch
    from diffusers import FluxPipeline

    # Load the fine-tuned pipeline in bfloat16 and move it to the GPU.
    pipe = FluxPipeline.from_pretrained("Shakker-Labs/AWPortrait-FL", torch_dtype=torch.bfloat16)
    pipe.to("cuda")
- Generate an Image (a seeded, reproducible variant is sketched after this list):
    # 24 inference steps with guidance scale 3.5 at a 768x1024 portrait resolution.
    prompt = "close up portrait, Amidst the interplay of light and shadows..."
    image = pipe(prompt, num_inference_steps=24, guidance_scale=3.5, width=768, height=1024).images[0]
    image.save("example.png")
- LoRA Inference: For memory efficiency, use the LoRA weights:
    # Load the LoRA weights and fuse them into the pipeline at a strength of 0.9.
    pipe.load_lora_weights('Shakker-Labs/AWPortrait-FL', weight_name='AWPortrait-FL-lora.safetensors')
    pipe.fuse_lora(lora_scale=0.9)
- Cloud GPUs: Consider using cloud-based GPU services such as AWS, Google Cloud, or Azure for optimal performance. For local GPUs with limited VRAM, a memory-offloading sketch follows this list.
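The generation call in step 3 uses a different random seed on every run. Reusing the pipe and prompt objects from steps 2 and 3, the sketch below fixes a seed for reproducible outputs (generator is a standard argument of Diffusers pipelines; the seed value 42 and the output filename are arbitrary choices):

    import torch

    # Fix the random seed so the same prompt reproduces the same image.
    generator = torch.Generator(device="cuda").manual_seed(42)
    image = pipe(prompt, num_inference_steps=24, guidance_scale=3.5,
                 width=768, height=1024, generator=generator).images[0]
    image.save("example_seed42.png")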
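If a local GPU cannot hold the full pipeline in memory, Diffusers' model offloading can trade speed for VRAM. The sketch below is a minimal example using the standard enable_model_cpu_offload pipeline method; it requires the accelerate package, is not specific to AWPortrait-FL, and the prompt shown is only a placeholder:

    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained("Shakker-Labs/AWPortrait-FL", torch_dtype=torch.bfloat16)

    # Keep sub-models on the CPU and move each to the GPU only while it runs;
    # do not combine this with pipe.to("cuda").
    pipe.enable_model_cpu_offload()

    # Placeholder example prompt.
    image = pipe("close up portrait, studio lighting, soft natural skin texture",
                 num_inference_steps=24, guidance_scale=3.5, width=768, height=1024).images[0]
    image.save("example_offload.png")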
License
The model is released under the flux-1-dev-non-commercial-license. Generated images are intended for non-commercial use in accordance with the licensing terms; refer to the license document for further details.