Paper title
Adversarial attacks on audio source separation
Paper authors
Paper abstract
Despite the excellent performance of neural-network-based audio source separation methods and their wide range of applications, their robustness against intentional attacks has been largely neglected. In this work, we reformulate various adversarial attack methods for the audio source separation problem and intensively investigate them under different attack conditions and target models. We further propose a simple yet effective regularization method to obtain imperceptible adversarial noise while maximizing the impact on separation quality with low computational complexity. Experimental results show that it is possible to largely degrade the separation quality by adding imperceptibly small noise when the noise is crafted for the target model. We also show the robustness of source separation models against a black-box attack. This study provides potentially useful insights for developing content protection methods against the abuse of separated signals and improving the separation performance and robustness.
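To make the attack formulation concrete, below is a minimal PyTorch sketch of gradient-based adversarial noise crafted against a source separation model: it pushes the separator's output away from its original estimate while an L2 penalty on the noise keeps the perturbation small. This is an illustrative assumption of how such an attack and regularizer can be set up, not the paper's exact method; the names `separator`, `mixture`, and `clean_estimate`, the MSE objective, and all hyperparameters are hypothetical.

```python
import torch

def craft_adversarial_noise(separator, mixture, clean_estimate,
                            steps=100, lr=1e-3, reg_weight=10.0):
    """Sketch of an adversarial attack on a source separation model.

    Maximizes the distance between the separator's output on the
    perturbed mixture and its original estimate, while an L2 penalty
    on the noise energy keeps the perturbation imperceptibly small.
    """
    noise = torch.zeros_like(mixture, requires_grad=True)
    optimizer = torch.optim.Adam([noise], lr=lr)
    for _ in range(steps):
        adv_estimate = separator(mixture + noise)
        # Attack term: negative separation fidelity, i.e. degrade the
        # separated output relative to the clean estimate.
        attack_loss = -torch.nn.functional.mse_loss(adv_estimate,
                                                    clean_estimate)
        # Regularization term: penalize audible noise energy so the
        # perturbation stays imperceptible.
        reg_loss = reg_weight * noise.pow(2).mean()
        loss = attack_loss + reg_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return noise.detach()
```

In this white-box setting the gradients flow through the target separator itself, which matches the abstract's finding that noise crafted for the target model degrades separation strongly; against a different (black-box) model, the same noise would be far less effective.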