Paper Title


Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack

Authors

Yunfeng Diao, He Wang, Tianjia Shao, Yong-Liang Yang, Kun Zhou, David Hogg, Meng Wang

Abstract


Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g. self-driving cars, where safety and lives are at stake. Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks. However, the proposed attacks require full knowledge of the attacked classifier, which is overly restrictive. In this paper, we show that such threats indeed exist, even when the attacker only has access to the input/output of the model. To this end, we propose the very first black-box adversarial attack approach in skeleton-based HAR, called BASAR. BASAR explores the interplay between the classification boundary and the natural motion manifold. To the best of our knowledge, this is the first time a data manifold has been introduced into adversarial attacks on time series. Via BASAR, we find that on-manifold adversarial samples are extremely deceitful and rather common in skeletal motions, in contrast to the common belief that adversarial samples only exist off-manifold. Through exhaustive evaluation, we show that BASAR can deliver successful attacks across classifiers, datasets, and attack modes. Through its attacks, BASAR helps identify the potential causes of the model's vulnerability and provides insights into possible improvements. Finally, to mitigate the newly identified threat, we propose a new adversarial training approach that leverages the sophisticated distributions of on/off-manifold adversarial samples, called mixed manifold-based adversarial training (MMAT). MMAT can successfully help defend against adversarial attacks without compromising classification accuracy.
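To make the black-box setting concrete, the sketch below shows a generic decision-based attack loop of the kind the abstract describes: the attacker can only query the classifier's output, so it starts from any misclassified input and shrinks the perturbation toward the original motion while the misclassification persists. This is a minimal illustrative sketch, not BASAR itself; the toy `classify` function and all parameters are hypothetical stand-ins for a real skeleton-based HAR model.

```python
import numpy as np

def classify(motion):
    # Hypothetical placeholder for a skeleton-based HAR model:
    # the attacker sees only this input/output interface (black-box).
    return int(motion.mean() > 0)

def blackbox_attack(motion, n_steps=40, rng=None):
    """Decision-based black-box attack sketch: find any adversarial
    starting point, then binary-search along the line toward the
    original motion to minimize the perturbation while keeping the
    misclassification."""
    rng = np.random.default_rng(0) if rng is None else rng
    orig_label = classify(motion)
    # 1) Sample random motions until one is misclassified.
    adv = rng.normal(scale=5.0, size=motion.shape)
    while classify(adv) == orig_label:
        adv = rng.normal(scale=5.0, size=motion.shape)
    # 2) Binary search on the interpolation weight toward the original:
    #    lo stays adversarial, hi stays correctly classified.
    lo, hi = 0.0, 1.0
    for _ in range(n_steps):
        mid = (lo + hi) / 2
        candidate = mid * motion + (1 - mid) * adv
        if classify(candidate) != orig_label:
            lo = mid  # still adversarial: move closer to the original
        else:
            hi = mid
    return lo * motion + (1 - lo) * adv

# Toy skeleton sequence: frames x joints x xyz coordinates.
motion = np.ones((10, 25, 3))
adv = blackbox_attack(motion)
```

BASAR additionally steers such queries using the natural motion manifold, which this plain boundary-following sketch omits; the point here is only that label queries alone suffice to craft an adversarial motion.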
