Paper Title


DeepPeep: Exploiting Design Ramifications to Decipher the Architecture of Compact DNNs

Paper Authors

Jha, Nandan Kumar, Mittal, Sparsh, Kumar, Binod, Mattela, Govardhan

Paper Abstract


The remarkable predictive performance of deep neural networks (DNNs) has led to their adoption in service domains of unprecedented scale and scope. However, the widespread adoption and growing commercialization of DNNs have underscored the importance of intellectual property (IP) protection. Devising techniques to ensure IP protection has become necessary due to the increasing trend of outsourcing DNN computations to untrusted accelerators in cloud-based services. The design methodologies and hyper-parameters of DNNs are crucial information, and leaking them may cause massive economic loss to the organization. Furthermore, knowledge of a DNN's architecture can increase the success probability of an adversarial attack, in which an adversary perturbs the inputs and alters the predictions. In this work, we devise a two-stage attack methodology, "DeepPeep", which exploits the distinctive characteristics of design methodologies to reverse-engineer the architecture of building blocks in compact DNNs. We show the efficacy of "DeepPeep" on P100 and P4000 GPUs. Additionally, we propose intelligent design maneuvering strategies for thwarting IP theft through the DeepPeep attack and propose "Secure MobileNet-V1". Interestingly, compared to vanilla MobileNet-V1, Secure MobileNet-V1 provides a significant reduction in inference latency ($\approx$60%) and an improvement in predictive performance ($\approx$2%) with very low memory and computation overheads.
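The abstract notes that design methodologies leave distinctive, exploitable signatures. A minimal illustrative sketch (not the paper's actual attack) of why this is plausible: MobileNet-V1's depthwise-separable convolution block has a compute footprint that differs sharply from a standard convolution, so the two produce distinguishable latency profiles that a side-channel observer could use to fingerprint the building block. The functions and example dimensions below are assumptions chosen for illustration.

```python
# Illustrative sketch only (not DeepPeep itself): compare multiply-accumulate
# counts of a standard convolution vs. MobileNet-V1's depthwise-separable
# block. The large, predictable gap is the kind of design "signature" a
# timing-based architecture-fingerprinting attack can exploit.

def standard_conv_flops(h, w, c_in, c_out, k):
    """Multiply-accumulates for a standard k x k convolution."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_flops(h, w, c_in, c_out, k):
    """Depthwise k x k conv followed by a 1x1 pointwise conv
    (the MobileNet-V1 building block)."""
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

if __name__ == "__main__":
    # Hypothetical layer dimensions, typical of a mid-network MobileNet layer.
    h = w = 56
    c_in, c_out, k = 128, 128, 3
    std = standard_conv_flops(h, w, c_in, c_out, k)
    sep = depthwise_separable_flops(h, w, c_in, c_out, k)
    # The ratio is roughly 1/c_out + 1/k**2 of the standard cost, so the
    # separable block is several times cheaper at these dimensions.
    print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

Because this cost gap translates into measurably different layer latencies on a GPU, observing inference timing (as on the P100 and P4000 in the paper) can reveal which block family a compact DNN uses.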
