Paper Title

Enhanced Self-Perception in Mixed Reality: Egocentric Arm Segmentation and Database with Automatic Labelling

Paper Authors

Ester Gonzalez-Sosa, Pablo Perez, Ruben Tolosana, Redouane Kachach, Alvaro Villegas

Paper Abstract

In this study, we focus on the egocentric segmentation of arms to improve self-perception in Augmented Virtuality (AV). The main contributions of this work are: i) a comprehensive survey of segmentation algorithms for AV; ii) an Egocentric Arm Segmentation Dataset composed of more than 10,000 images, comprising variations in skin color and gender, among others; we provide all details required for the automated generation of groundtruth and semi-synthetic images; iii) the first use of deep learning for segmenting arms in AV; iv) to showcase the usefulness of this database, we report results on several real egocentric hand datasets, including GTEA Gaze+, EDSH, EgoHands, Ego Youtube Hands, THU-Read, TEgO, FPAB, and Ego Gesture, which allow for direct comparison with existing approaches based on color or depth. Results confirm the suitability of the EgoArm dataset for this task, achieving improvements of up to 40% with respect to the original network, depending on the particular dataset. Results also suggest that, while approaches based on color or depth can work in controlled conditions (no occlusions, uniform lighting, only objects of interest in the near range, controlled backgrounds, etc.), egocentric segmentation based on deep learning is more robust in real AV applications.
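To illustrate the kind of automated groundtruth and semi-synthetic image generation mentioned in the abstract, the sketch below composites an egocentric arm frame over an arbitrary background and derives a binary arm mask automatically. It assumes the arms are recorded against a uniform green chroma background; the chroma-key step, HSV thresholds, and file names are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: automatic mask + semi-synthetic composite from a chroma-keyed
# egocentric arm frame (assumed green-screen capture; thresholds are illustrative).
import cv2
import numpy as np

def make_semisynthetic_sample(arm_frame_bgr, background_bgr):
    """Return (composite_image, binary_arm_mask) for one frame."""
    hsv = cv2.cvtColor(arm_frame_bgr, cv2.COLOR_BGR2HSV)
    # Pixels close to the chroma green are background; everything else is arm.
    green = cv2.inRange(hsv, (35, 60, 60), (85, 255, 255))
    arm_mask = cv2.bitwise_not(green)
    # Remove small speckles from the automatically derived mask.
    kernel = np.ones((5, 5), np.uint8)
    arm_mask = cv2.morphologyEx(arm_mask, cv2.MORPH_OPEN, kernel)
    # Composite the segmented arm over an arbitrary (e.g. virtual) background.
    h, w = arm_frame_bgr.shape[:2]
    background = cv2.resize(background_bgr, (w, h))
    mask3 = cv2.merge([arm_mask] * 3).astype(bool)
    composite = np.where(mask3, arm_frame_bgr, background)
    return composite, (arm_mask > 0).astype(np.uint8)

if __name__ == "__main__":
    frame = cv2.imread("arm_on_greenscreen.png")   # hypothetical input file
    scene = cv2.imread("virtual_scene.png")        # hypothetical background
    image, mask = make_semisynthetic_sample(frame, scene)
    cv2.imwrite("semisynthetic.png", image)        # training image
    cv2.imwrite("groundtruth.png", mask * 255)     # matching segmentation label
```

Pairs produced this way (composite image plus mask) could then be used to fine-tune a semantic segmentation network for egocentric arms, which is the role the EgoArm dataset plays in the experiments reported above.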
