Do Inpainting Yourself: Generative Facial Inpainting Guided by Exemplars
Wanglong Lu
Hanli Zhao
Xianta Jiang
Xiaogang Jin
Yongliang Yang
Min Wang
Jiankai Lyu
Kaijie Shi
[Paper]
[GitHub]
Facial inpainting examples using our method. Top two rows: starting from the input image (the top-left sub-image with mask), our method progressively edits the eye style (left), the mouth style (middle left), the hairstyle (middle), and the overall facial style (right) from exemplars. Hairstyles can also be edited by inserting simple sketches (middle). Both real-world and artistic face photos can guide the inpainting of (blended) facial features in the locally edited regions without affecting the visual content of the rest of the image. Bottom row: for portraits occluded by eyeglasses or face masks, our method performs exemplar-guided facial image recovery.

Abstract

Facial image inpainting is the task of filling in visually realistic and semantically meaningful content for missing or masked pixels in a face image. Although existing methods have made significant progress in achieving high visual quality, controllable diversity in facial image inpainting remains an open problem in this field. This paper introduces EXE-GAN, a novel diverse and interactive facial inpainting framework that not only preserves the high-quality visual effect of the whole image but also completes the face image with exemplar-like facial attributes. The proposed facial inpainting is built on generative adversarial networks and leverages the global style of the input image, a stochastic style, and the exemplar style of the exemplar image. A novel attribute similarity metric is introduced to encourage the networks to learn the style of facial attributes from the exemplar in a self-supervised way. To guarantee a natural transition across the boundary of inpainted regions, a novel spatial variant gradient backpropagation technique is designed to adjust the loss gradients based on spatial location. A variety of experimental results and comparisons on the public CelebA-HQ and FFHQ datasets demonstrate the superiority of the proposed method in terms of both the quality and diversity of facial inpainting.
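To make the style-guided generation more concrete, the exemplar guidance can be pictured as per-layer latent style mixing in the spirit of StyleGAN. The sketch below is a minimal, hypothetical PyTorch illustration that combines latent codes from the input (global style), random noise (stochastic style), and the exemplar; the function name, the shapes, and the mixing rule are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def mix_styles(w_global, w_stochastic, w_exemplar, exemplar_layers):
    """Hypothetical per-layer style mixing: take the exemplar's latent
    codes for the layers in `exemplar_layers`, and the input's global
    style plus stochastic variation elsewhere. Illustrative only."""
    w = (w_global + w_stochastic).clone()
    w[:, exemplar_layers] = w_exemplar[:, exemplar_layers]
    return w

# Toy usage with StyleGAN2-like shapes: (batch, num_layers, latent_dim).
w_mixed = mix_styles(
    torch.randn(1, 14, 512),        # global style from the masked input
    0.1 * torch.randn(1, 14, 512),  # stochastic style
    torch.randn(1, 14, 512),        # exemplar style
    exemplar_layers=list(range(4, 14)),
)
```

Likewise, the spatial variant gradient backpropagation admits a short sketch: an identity operation in the forward pass whose backward pass rescales the loss gradient with a per-pixel weight map (e.g., one derived from the inpainting mask), so gradients near the boundary of the inpainted region can be emphasized for a smoother transition. The class name, the weighting scheme, and the usage are illustrative assumptions.

```python
import torch

class SpatialVariantGrad(torch.autograd.Function):
    """Identity in the forward pass; rescales gradients by a per-pixel
    weight map in the backward pass. A hypothetical sketch of spatially
    weighted gradient backpropagation, not the paper's exact method."""

    @staticmethod
    def forward(ctx, image, weight):
        ctx.save_for_backward(weight)
        return image

    @staticmethod
    def backward(ctx, grad_output):
        (weight,) = ctx.saved_tensors
        # Scale the incoming loss gradient per spatial location;
        # the weight map itself receives no gradient.
        return grad_output * weight, None

# Toy usage: weight gradients by an assumed mask-derived map so that
# pixels near the inpainting boundary dominate the update.
image = torch.randn(1, 3, 256, 256, requires_grad=True)
weight = torch.rand(1, 1, 256, 256)  # e.g., a blurred inpainting mask
out = SpatialVariantGrad.apply(image, weight)
out.mean().backward()                # image.grad is now spatially weighted
```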


Demo video


Bilibili channel


Code

[GitHub]


Paper and Supplementary Material

Wanglong Lu, Hanli Zhao*, Xianta Jiang, Xiaogang Jin, Yongliang Yang, Min Wang, Jiankai Lyu, and Kaijie Shi.
Do Inpainting Yourself: Generative Facial Inpainting Guided by Exemplars (hosted on arXiv)


[BibTeX]


Acknowledgements

We thank Serguei Vassiliev for code debugging, Tao Wang and Jingjing Zheng for helpful discussions, and the 63 volunteers who took part in the user survey.
This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; thanks for their awesome work!