Paper Title

A Generative Model based Adversarial Security of Deep Learning and Linear Classifier Models

Paper Authors

Catak, Ferhat Ozgur; Sivaslioglu, Samed; Sahinbas, Kevser

Paper Abstract

In recent years, machine learning algorithms have been applied widely in various fields such as health, transportation, and autonomous vehicles. With the rapid development of deep learning techniques, it is critical to take security concerns into account when applying these algorithms. While machine learning offers significant advantages, its security is often ignored; because these algorithms have many real-world applications, security is a vital part of them. In this paper, we propose a mitigation method against adversarial attacks on machine learning models using an autoencoder, a type of generative model. The main idea behind adversarial attacks against machine learning models is to produce erroneous results by manipulating the trained model. We also present the performance of the autoencoder defense against various attack methods, on models ranging from deep neural networks to traditional algorithms: non-targeted and targeted attacks on multi-class logistic regression, as well as the fast gradient sign method, the targeted fast gradient sign method, and the basic iterative method applied to neural networks on the MNIST dataset.
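The abstract describes two moving parts: gradient-based attacks such as the fast gradient sign method, and an autoencoder that reconstructs inputs before they reach the classifier. A minimal sketch of both ideas, written in PyTorch purely for illustration (this is not the authors' code; the function and model names below are assumptions), could look like this:

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=0.1):
    """Fast gradient sign method: x_adv = x + eps * sign(grad_x J(theta, x, y)).
    `model`, `x`, and `y` are placeholders for any differentiable classifier,
    an input batch, and its true labels."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Step in the direction of the input gradient's sign, staying in the valid pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def classify_through_autoencoder(autoencoder, classifier, x):
    """Mitigation idea sketched in the abstract: reconstruct the (possibly
    adversarial) input with a trained autoencoder before classifying it,
    so that much of the adversarial perturbation is removed."""
    with torch.no_grad():
        return classifier(autoencoder(x)).argmax(dim=1)
```

Under this sketch, robustness would be assessed by comparing the classifier's accuracy on fgsm_attack outputs with and without the autoencoder reconstruction step.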
