

Abstract

Text-to-image diffusion models produce impressive results but are frustrating tools for artists who desire fine-grained control. For example, a common use case is to create images of a specific instance in novel contexts, i.e., "identity-preserving generation". This setting, along with many other tasks (e.g., relighting), is a natural fit for image+text-conditional generative models. However, there is insufficient high-quality paired data to train such a model directly. We propose Diffusion Self-Distillation, a method for using a pre-trained text-to-image model to generate its own dataset for text-conditioned image-to-image tasks. We first leverage a text-to-image diffusion model's in-context generation ability to create grids of images and curate a large paired dataset with the help of a Visual-Language Model. We then fine-tune the text-to-image model into a text+image-to-image model using the curated paired dataset. We demonstrate that Diffusion Self-Distillation outperforms existing zero-shot methods and is competitive with per-instance tuning techniques on a wide range of identity-preserving generation tasks, without requiring test-time optimization.
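
The self-distillation loop described above can be summarized as: enhance a caption with an LLM, sample an image grid from a text-to-image model, crop it into candidate pairs, and keep only the pairs a VLM judges to depict the same subject. Below is a minimal Python sketch of that loop; all callables (enhance_prompt, sample_grid, crop_grid, same_subject) are hypothetical placeholders standing in for the LLM, the text-to-image model, simple image slicing, and the VLM judge, respectively, and are not APIs from the paper's code.

# Minimal sketch of the paired-data generation and curation loop.
# Every callable argument is a hypothetical placeholder, not the paper's API.
from dataclasses import dataclass
from typing import Callable, Iterable, List
from PIL import Image

@dataclass
class TrainingPair:
    reference: Image.Image  # conditioning image (one grid panel)
    target: Image.Image     # desired output (another panel of the same subject)
    prompt: str             # text describing the target panel

def build_paired_dataset(
    captions: Iterable[str],
    enhance_prompt: Callable[[str], str],                       # LLM: caption -> grid prompt
    sample_grid: Callable[[str], Image.Image],                  # T2I model: prompt -> image grid
    crop_grid: Callable[[Image.Image], List[Image.Image]],      # grid image -> list of panels
    same_subject: Callable[[Image.Image, Image.Image], bool],   # VLM judge: same main subject?
) -> List[TrainingPair]:
    pairs: List[TrainingPair] = []
    for caption in captions:
        grid_prompt = enhance_prompt(caption)
        panels = crop_grid(sample_grid(grid_prompt))
        # Form candidate pairs from the panels and keep only those the VLM
        # classifies as depicting the identical main subject.
        for i in range(len(panels)):
            for j in range(len(panels)):
                if i != j and same_subject(panels[i], panels[j]):
                    pairs.append(TrainingPair(panels[i], panels[j], grid_prompt))
    return pairs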



Method Overview

Overview of our pipeline. Left: the top shows our vanilla paired-data generation pipeline. We first sample reference image captions from the LAION dataset. These captions are passed through an LLM, which translates them into identity-preserving grid-generation prompts. We feed these enhanced prompts to a pretrained text-to-image diffusion model to sample grids of images that potentially preserve identity, which are then cropped and composed into vanilla image pairs. The bottom shows our data curation pipeline, where the vanilla image pairs are fed into a VLM to classify whether they depict the identical main subject. This process mimics human annotation and curation while being fully automatic; we use the curated data as our final training data. Right: we extend the diffusion transformer model into an image-conditioned framework by treating the input image as the first frame of a two-frame sequence. The model generates both frames simultaneously: the first reconstructs the input, while the second is the edited output. This allows effective information exchange between the conditioning image and the desired output.
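
To make the two-frame conditioning idea concrete, here is a minimal PyTorch sketch, assuming the diffusion transformer operates on sequences of latent tokens. The transformer interface, tensor shapes, and function name are assumptions for illustration, not the authors' actual implementation.

# Sketch of two-frame conditioning for a diffusion transformer (hypothetical interfaces).
import torch
from torch import nn, Tensor

def two_frame_denoise(
    transformer: nn.Module,  # hypothetical DiT: (tokens, timestep, text_emb) -> prediction
    cond_tokens: Tensor,     # (B, N, D) latent tokens of the clean conditioning image
    noisy_tokens: Tensor,    # (B, N, D) noisy latent tokens of the target image
    timestep: Tensor,        # (B,) diffusion timestep
    text_emb: Tensor,        # (B, T, D) text conditioning
) -> tuple[Tensor, Tensor]:
    # Treat the conditioning image as frame 1 and the target as frame 2:
    # concatenate along the token (sequence) dimension so attention can
    # exchange information between the two frames.
    joint = torch.cat([cond_tokens, noisy_tokens], dim=1)  # (B, 2N, D)
    pred = transformer(joint, timestep, text_emb)          # (B, 2N, D)
    n = cond_tokens.shape[1]
    # Frame 1 learns to reconstruct the input; frame 2 is the edited output.
    return pred[:, :n], pred[:, n:]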


Bibtex


@inproceedings{cai2024dsd,
    author={Cai, Shengqu and Chan, Eric and Zhang, Yunzhi and Guibas, Leonidas and Wu, Jiajun and Wetzstein, Gordon},
    title={Diffusion Self-Distillation for Zero-Shot Customized Image Generation},
    booktitle={arXiv},
    year={2024}
}