Paper Title

Masked Face Inpainting Through Residual Attention UNet

Paper Authors

Md Imran Hosen, Md Baharul Islam

Paper Abstract

Realistic restoration of images with high-texture areas, such as removing face masks, is challenging. State-of-the-art deep learning-based methods fail to guarantee high fidelity and suffer from training instability due to the vanishing gradient problem (e.g., weights in the initial layers receive only slight updates) and spatial information loss. They also depend on an intermediary stage such as segmentation, meaning they require an external mask. This paper proposes a blind masked face inpainting method using a residual attention UNet to remove the face mask and restore the face with fine details while minimizing the gap with the ground-truth face structure. A residual block feeds information both to the next layer and directly to layers about two hops away, mitigating the vanishing gradient problem. In addition, the attention unit helps the model focus on the relevant masked region, reducing resource usage and making the model faster. Extensive experiments on the publicly available CelebA dataset demonstrate the feasibility and robustness of our proposed model. Code is available at \url{https://github.com/mdhosen/Mask-Face-Inpainting-Using-Residual-Attention-Unet}.
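To make the two components the abstract describes more concrete, here is a minimal PyTorch sketch of a residual block whose identity shortcut skips about two convolutional layers, and of an additive attention gate in the style of Attention UNet. This is an illustrative reconstruction under standard conventions, not the authors' released code; all class names, channel arguments, and layer choices are hypothetical.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Hypothetical sketch: two conv layers plus an identity shortcut,
    so the input also reaches the layer ~two hops away, easing gradient flow."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # 1x1 conv matches channel counts on the shortcut path when needed
        self.shortcut = (nn.Conv2d(in_ch, out_ch, 1)
                         if in_ch != out_ch else nn.Identity())

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))  # skip over two layers

class AttentionGate(nn.Module):
    """Hypothetical additive attention gate: reweights encoder skip
    features by relevance before the decoder concatenates them."""
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Conv2d(gate_ch, inter_ch, 1)   # decoder (gating) path
        self.w_x = nn.Conv2d(skip_ch, inter_ch, 1)   # encoder (skip) path
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, 1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g, x):
        # g and x are assumed to share spatial size here; the per-pixel
        # coefficients suppress irrelevant regions (e.g., unmasked face)
        alpha = self.psi(self.relu(self.w_g(g) + self.w_x(x)))
        return x * alpha
```

In a UNet of this kind, each attention gate would sit on an encoder-decoder skip connection, so the decoder attends mainly to the masked region rather than to the whole face; the exact placement and channel sizes would depend on the authors' architecture.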
