Double Exposure Diffusion

joachimsallstrom

Introduction

Double Exposure Diffusion is a model designed to generate double exposure style images, particularly of people and some animals. The effect is triggered by including the "dublex" style token in the prompt, and the model weights are available for download.

Architecture

The model is based on the Stable Diffusion framework and leverages the StableDiffusionPipeline for generating images. It's compatible with Hugging Face's Inference Endpoints, allowing for easy deployment and use.
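As a minimal sketch of what loading looks like in code, assuming the model is published on the Hub under a repo id such as joachimsallstrom/Double-Exposure-Diffusion (verify the exact id on Hugging Face):

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed repo id -- verify against the actual Hugging Face repository.
model_id = "joachimsallstrom/Double-Exposure-Diffusion"

# Load the pipeline in half precision to cut GPU memory use.
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
```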

Training

Double Exposure Diffusion version 2 was trained with Shivam Shrirao's DreamBooth implementation on Google Colab for 2,000 training steps. Training focused on strengthening the double exposure effect, tailored to portraits of people and certain animals.

Guide: Running Locally

To run the Double Exposure Diffusion model locally, follow these steps:

  1. Clone the repository: Download the model files from the Hugging Face repository.
  2. Set up environment: Install the necessary Python packages, including Hugging Face's transformers and diffusers libraries.
  3. Load the model: Use the StableDiffusionPipeline to load the model checkpoint (Double_Exposure_v2.ckpt).
  4. Generate images: Include the "dublex" style token in your prompts to trigger the effect (see the sketch after this list).
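Putting the steps together, a minimal end-to-end sketch (the prompt is illustrative, not from the model card; StableDiffusionPipeline.from_single_file, which loads raw .ckpt files, is available in recent diffusers releases, while older versions need a conversion script):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the downloaded checkpoint directly; from_single_file is available
# in recent diffusers releases (older versions require conversion first).
pipe = StableDiffusionPipeline.from_single_file(
    "Double_Exposure_v2.ckpt", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Include the "dublex" style token in the prompt to trigger the effect.
# The prompt below is an illustrative example.
prompt = "dublex style portrait of a woman, double exposure with a pine forest"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("double_exposure.png")
```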

Suggested Cloud GPUs: For optimal performance, consider using cloud GPUs such as those offered by Google Cloud, AWS, or Azure.

License

The Double Exposure Diffusion model is released under the CreativeML OpenRAIL-M license. Key points include:

  1. The model may not be used to deliberately produce or share illegal or harmful content.
  2. The authors claim no rights over generated outputs; users are accountable for how those outputs are used.
  3. Redistribution and commercial use are permitted, provided the same license restrictions are applied and a copy of the license is shared with your users.

For full details, review the license.
