Paper Title
Compositional Mixture Representations for Vision and Text
Paper Authors
Paper Abstract
Learning a common representation space between vision and language allows deep networks to relate objects in an image to their corresponding semantic meaning. We present a model that learns a shared Gaussian mixture representation, imposing the compositionality of text onto the visual domain without explicit location supervision. By combining a spatial transformer with a representation learning approach, we learn to split images into separately encoded patches, associating visual and textual representations in an interpretable manner. On variations of MNIST and CIFAR10, our model performs weakly supervised object detection and demonstrates its ability to extrapolate to unseen combinations of objects.
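To make the patch-extraction idea concrete, below is a minimal PyTorch sketch, not the authors' released code: a spatial transformer predicts K affine transforms, crops K patches from the image, and encodes each patch as one Gaussian component (mean and log-variance) of a mixture. The class name `PatchMixtureEncoder` and all hyperparameters (K, patch size, latent width) are illustrative assumptions.

```python
# Assumed sketch of the approach described in the abstract, not the paper's code:
# a spatial transformer extracts K patches, each encoded as a Gaussian component.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchMixtureEncoder(nn.Module):
    def __init__(self, K=2, patch=14, in_ch=1, latent=32):
        super().__init__()
        self.K, self.patch = K, patch
        # Localization net: predicts K affine transforms from the full image.
        self.loc = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, K * 6),
        )
        # Patch encoder: maps each cropped patch to (mean, log-variance).
        self.enc = nn.Sequential(
            nn.Flatten(), nn.Linear(in_ch * patch * patch, 128), nn.ReLU(),
            nn.Linear(128, 2 * latent),
        )

    def forward(self, x):
        B, C, H, W = x.shape
        theta = self.loc(x).view(B * self.K, 2, 3)
        grid = F.affine_grid(theta, (B * self.K, C, self.patch, self.patch),
                             align_corners=False)
        # Repeat each image K times so every transform samples its own patch.
        patches = F.grid_sample(x.repeat_interleave(self.K, dim=0), grid,
                                align_corners=False)
        stats = self.enc(patches).view(B, self.K, 2, -1)
        mu, logvar = stats[:, :, 0], stats[:, :, 1]  # (B, K, latent) each
        return mu, logvar, patches.view(B, self.K, C, self.patch, self.patch)

if __name__ == "__main__":
    model = PatchMixtureEncoder(K=2)
    mu, logvar, patches = model(torch.randn(4, 1, 28, 28))
    print(mu.shape, patches.shape)  # (4, 2, 32) and (4, 2, 1, 14, 14)
```

Under this reading, each mixture component corresponds to one image patch, so a text token (e.g., a digit label in multi-digit MNIST) can be matched against individual component means rather than a single global embedding, which is what enables the weakly supervised localization and extrapolation to unseen object combinations described above.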