Stable Diffusion PaperCut Model

Fictiverse

Introduction

The Paper Cut Model V1 is a fine-tuned version of Stable Diffusion, trained on Paper Cut images. It is designed for text-to-image generation; to invoke the style, prompts should include the token "PaperCut".

Architecture

The model is based on Stable Diffusion 1.5 and is deployed through the diffusers library. It can be exported to the ONNX and Flax/JAX formats, and it runs on Apple's MPS backend, for optimization and compatibility across platforms.
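For ONNX, one way to perform the export is with Hugging Face's Optimum CLI. This is a minimal sketch, not an official recipe from the model card; the output directory name is arbitrary, and newer Optimum versions may change flags:

```shell
# Install Optimum with ONNX Runtime support (assumed prerequisite)
pip install "optimum[onnxruntime]"

# Export the pipeline to ONNX; "papercut_onnx/" is a placeholder output directory
optimum-cli export onnx --model Fictiverse/Stable_Diffusion_PaperCut_Model papercut_onnx/
```

The exported directory can then be loaded with an ONNX Runtime pipeline instead of the standard PyTorch one.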

Training

The Paper Cut Model V1 has been fine-tuned on images that represent the Paper Cut art style. This fine-tuning allows the model to generate images that closely resemble this unique artistic style when prompted accordingly.

Guide: Running Locally

To run the Paper Cut Model V1 locally, follow these steps:

  1. Install Required Libraries:

    pip install diffusers transformers torch
    
  2. Load and Run the Model:

    from diffusers import StableDiffusionPipeline
    import torch
    
    model_id = "Fictiverse/Stable_Diffusion_PaperCut_Model"
    pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
    pipe = pipe.to("cuda")
    
    prompt = "PaperCut R2-D2"
    image = pipe(prompt).images[0]
    
    image.save("./R2-D2.png")
    
  3. Hardware Recommendation:

    • For optimal performance, a GPU is recommended; cloud options include those provided by AWS, Google Cloud, and Azure.
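The `pipe.to("cuda")` line in step 2 assumes an NVIDIA GPU. A small helper can fall back to Apple's MPS backend or the CPU when CUDA is unavailable. This is a sketch of our own (`pick_device` is not part of the diffusers API):

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the best available torch device string, preferring CUDA."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# In practice, feed it torch's runtime checks, e.g.:
# device = pick_device(torch.cuda.is_available(), torch.backends.mps.is_available())
# pipe = pipe.to(device)
```

Note that `torch_dtype=torch.float16` should generally be dropped when running on CPU, where half precision is slow or unsupported.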

License

The Paper Cut Model V1 is released under the CreativeML Open RAIL-M license, which allows for both non-commercial and commercial use, provided that users adhere to the terms specified in the license.