PILOT: Coherent and Multi-modality Image Inpainting via Latent Space Optimization

Xi'an Jiaotong University · EPFL
Teaser.

PILOT enables users to generate images conditioned on single or multiple modalities, such as text, text + image prompt, text + scribbles, and a reference subject.

Abstract

With the advancements in denoising diffusion probabilistic models (DDPMs), image inpainting has undergone a significant evolution, transitioning from filling in missing regions based on nearby content to generating content conditioned on various factors such as text, exemplar images, and sketches. However, existing methods often necessitate fine-tuning of the model or concatenation of latent vectors, leading to drawbacks such as generation failure due to overfitting and inconsistent foreground generation. In this paper, we argue that current large models are powerful enough to generate realistic images without further tuning. Hence, we introduce PILOT (inPainting vIa Latent OpTimization), an optimization approach grounded in a novel semantic centralization and background loss to identify latent spaces capable of generating inpainted regions that exhibit high fidelity to user-provided prompts while maintaining coherence with the background region. Crucially, our method seamlessly integrates with any pre-trained model, including ControlNet and DreamBooth, making it suitable for deployment in multi-modal editing tools. Our qualitative and quantitative evaluations demonstrate that our method outperforms existing approaches by generating inpainted regions that are more coherent, diverse, and faithful to the provided prompts.
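To make the idea of latent-space optimization more concrete, the minimal sketch below illustrates one plausible form of such an objective, under stated assumptions: a background term that keeps the unmasked region of the denoised latent close to the original image latent, plus a generic prompt-fidelity term. The names `background_loss`, `prompt_fidelity_loss`, and `predict_x0` are hypothetical placeholders; this is not a reproduction of PILOT's actual semantic centralization loss.

import torch
import torch.nn.functional as F

def background_loss(z0_pred, z0_orig, mask):
    """Hypothetical background term: keep the denoised latent close to the
    original image latent wherever the inpainting mask is zero (background)."""
    return F.mse_loss(z0_pred * (1 - mask), z0_orig * (1 - mask))

def optimize_latent(z_t, t, z0_orig, mask, predict_x0, prompt_fidelity_loss,
                    steps=10, lr=0.05):
    """Gradient-based refinement of the noisy latent z_t at timestep t.

    `predict_x0(z_t, t)` is assumed to return a differentiable estimate of the
    clean latent; `prompt_fidelity_loss` scores how well that estimate matches
    the user-provided prompts. Both are placeholders, not PILOT's actual API.
    """
    z = z_t.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        z0_pred = predict_x0(z, t)
        loss = prompt_fidelity_loss(z0_pred) + background_loss(z0_pred, z0_orig, mask)
        loss.backward()
        opt.step()
    return z.detach()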

Model Architecture

(a) Full pipeline of PILOT.


(b) Illustration of our optimization strategy.

We present PILOT, a novel image inpainting framework comprising an optimization stage and a blending stage that together enable multi-modality control through user-provided conditions. (a) We first apply our latent optimization strategy: every τ steps, we adjust the gradient direction and identify the optimal latent vector, then continue the standard reverse diffusion process. Afterwards, we apply the latent blending strategy until the denoising process is complete. (b) We illustrate how the latent vector is optimized with prompts from one or more modalities as conditions.
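The following sketch outlines the two-stage schedule described above, assuming a standard DDPM-style reverse process: the latent is re-optimized every τ steps during the first stage, and latent blending (compositing the noised background latent into the unmasked region) is applied during the remaining steps. The callables `denoise_step`, `optimize_latent`, and `add_noise`, as well as `switch_step`, are assumptions for illustration, not PILOT's actual interface.

import torch

@torch.no_grad()
def pilot_inpaint_sketch(z_T, z0_orig, mask, timesteps, tau, switch_step,
                         denoise_step, optimize_latent, add_noise):
    """Two-stage sampling loop sketched from the pipeline description.

    Stage 1 (t > switch_step): every `tau` steps, refine the current latent with
    gradient-based optimization before continuing reverse diffusion.
    Stage 2 (t <= switch_step): blend the noised background latent into the
    unmasked region after each denoising step so the background stays intact.
    All callables are hypothetical placeholders for model-specific components.
    """
    z = z_T
    for i, t in enumerate(timesteps):            # timesteps ordered from T down to 0
        if t > switch_step:
            if i % tau == 0:
                with torch.enable_grad():        # optimization needs gradients
                    z = optimize_latent(z, t)
            z = denoise_step(z, t)               # standard reverse diffusion step
        else:
            z = denoise_step(z, t)
            z_bg = add_noise(z0_orig, t)         # background latent at noise level t
            z = mask * z + (1 - mask) * z_bg     # latent blending outside the mask
    return z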


Qualitative Results

PILOT can generate images under the guidance of multi-modality conditions.

With text prompts:


With spatial controls:


With image prompts:


With reference instances:


And it achieves fine-grained editing step by step:



BibTeX

If you find our work useful, please cite our paper:

@misc{pan2024coherentmultimodalityimageinpainting,
      title={Coherent and Multi-modality Image Inpainting via Latent Space Optimization}, 
      author={Lingzhi Pan and Tong Zhang and Bingyuan Chen and Qi Zhou and Wei Ke and Sabine Süsstrunk and Mathieu Salzmann},
      year={2024},
      eprint={2407.08019},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.08019}, 
}