Paper Title

Semi-parametric Makeup Transfer via Semantic-aware Correspondence

Paper Authors

Mingrui Zhu, Yun Yi, Nannan Wang, Xiaoyu Wang, Xinbo Gao

Paper Abstract

The large discrepancy between the source non-makeup image and the reference makeup image is one of the key challenges in makeup transfer. Conventional approaches to makeup transfer either learn a disentangled representation or perform pixel-wise correspondence between the two images in a parametric way. We argue that non-parametric techniques have a high potential for addressing the pose, expression, and occlusion discrepancies. To this end, this paper proposes a \textbf{S}emi-\textbf{p}arametric \textbf{M}akeup \textbf{T}ransfer (SpMT) method, which combines the reciprocal strengths of non-parametric and parametric mechanisms. The non-parametric component is a novel \textbf{S}emantic-\textbf{a}ware \textbf{C}orrespondence (SaC) module that explicitly reconstructs the content representation with the makeup representation under the strong constraint of component semantics. The reconstructed representation is expected to preserve the spatial and identity information of the source image while "wearing" the makeup of the reference image. The output image is synthesized via a parametric decoder that draws on the reconstructed representation. Extensive experiments demonstrate the superiority of our method in terms of visual quality, robustness, and flexibility. Code and pre-trained models are available at \url{https://github.com/AnonymScholar/SpMT}.
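
As a rough illustration of the semantic-aware correspondence idea described in the abstract, the sketch below restricts feature matching to pixels that share the same facial-component label and uses the resulting attention to pull makeup features into the source image's spatial layout. This is a minimal sketch under assumed inputs, not the authors' released code: the function name semantic_aware_correspondence and the tensor names content_feat, makeup_feat, src_seg, and ref_seg are hypothetical placeholders; see the linked repository for the actual implementation.

```python
# Minimal sketch of semantic-aware correspondence: feature matching is allowed
# only between pixels that carry the same facial-component label.
# All names here are hypothetical, not the authors' implementation.
import torch
import torch.nn.functional as F

def semantic_aware_correspondence(content_feat, makeup_feat, src_seg, ref_seg, tau=0.01):
    """Reconstruct a makeup-bearing representation aligned to the source layout.

    content_feat: (B, C, H, W) source content features
    makeup_feat:  (B, C, H, W) reference makeup features
    src_seg, ref_seg: (B, H, W) integer component labels (e.g., skin / lip / eye)
    """
    B, C, H, W = content_feat.shape
    q = F.normalize(content_feat.flatten(2), dim=1)   # (B, C, HW) source queries
    k = F.normalize(makeup_feat.flatten(2), dim=1)    # (B, C, HW) reference keys
    v = makeup_feat.flatten(2)                        # (B, C, HW) reference values

    # Cosine-similarity attention logits between all source/reference positions.
    sim = torch.bmm(q.transpose(1, 2), k) / tau       # (B, HW, HW)

    # Hard semantic constraint: a source pixel may only attend to reference
    # pixels belonging to the same facial component.
    same_comp = src_seg.flatten(1).unsqueeze(2) == ref_seg.flatten(1).unsqueeze(1)
    sim = sim.masked_fill(~same_comp, float("-inf"))

    attn = sim.softmax(dim=-1)
    attn = torch.nan_to_num(attn)                     # rows with no valid match become zeros
    recon = torch.bmm(v, attn.transpose(1, 2))        # (B, C, HW) reconstructed representation
    return recon.view(B, C, H, W)
```

In SpMT, a representation reconstructed this way, carrying the reference makeup but arranged according to the source image, would then be passed to the parametric decoder to synthesize the output image.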
