Paper Title
Towards Privacy-Preserving Neural Architecture Search
Paper Authors
Paper Abstract
Machine learning promotes the continuous development of signal processing in various fields, including network traffic monitoring, EEG classification, face identification, and many more. However, the massive user data collected for training deep learning models raises privacy concerns and increases the difficulty of manually adjusting the network structure. To address these issues, we propose a privacy-preserving neural architecture search (PP-NAS) framework based on secure multi-party computation to protect users' data and the model's parameters/hyper-parameters. PP-NAS outsources the NAS task to two non-colluding cloud servers to take full advantage of the mixed-protocol design. Complementing existing PP machine learning frameworks, we redesign the secure ReLU and Max-pooling garbled circuits for significantly better efficiency ($3 \sim 436$ times speed-up). We develop a new alternative to approximate the Softmax function over secret shares, which bypasses the limitation of approximating exponential operations in Softmax while improving accuracy. Extensive analyses and experiments demonstrate PP-NAS's superiority in security, efficiency, and accuracy.
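The abstract's two-server outsourcing rests on secret sharing: each value is split so that neither non-colluding server alone learns anything. Below is a minimal illustrative sketch of 2-out-of-2 additive secret sharing (the basic primitive behind such designs), not the paper's actual protocol; all names and the chosen modulus are assumptions for illustration.

```python
import random

# A Mersenne prime modulus for the share arithmetic (an assumed choice).
P = 2**61 - 1

def share(x):
    """Split a secret x into two additive shares, one per server."""
    r = random.randrange(P)
    return (r, (x - r) % P)  # server 0 holds r, server 1 holds x - r

def reconstruct(s0, s1):
    """Recombine the two shares to recover the secret."""
    return (s0 + s1) % P

def add_shares(a, b):
    """Each server adds its own shares locally; addition needs no communication."""
    return tuple((ai + bi) % P for ai, bi in zip(a, b))

# Secure addition of 42 and 100: each share alone is a uniformly random
# field element, yet the sum reconstructs correctly.
a = share(42)
b = share(100)
print(reconstruct(*add_shares(a, b)))  # 142
```

Non-linear layers such as ReLU, Max-pooling, and Softmax cannot be computed share-locally like addition, which is why the paper pairs arithmetic sharing with garbled circuits and a Softmax approximation.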