Paper Title
An Empirical Study of Deep Learning Models for Vulnerability Detection
Paper Authors
Paper Abstract
Deep learning (DL) models of code have recently reported great progress in vulnerability detection. In some cases, DL-based models have outperformed static analysis tools. Although many great models have been proposed, we do not yet have a good understanding of these models. This limits the further advancement of model robustness, debugging, and deployment for vulnerability detection. In this paper, we surveyed and reproduced 9 state-of-the-art (SOTA) deep learning models on 2 widely used vulnerability detection datasets: Devign and MSR. We investigated 6 research questions in three areas, namely model capabilities, training data, and model interpretation. We experimentally demonstrated the variability between different runs of a model and the low agreement among different models' outputs. We investigated models trained on specific types of vulnerabilities compared to a model trained on all vulnerabilities at once. We explored the types of programs DL may consider "hard" to handle. We investigated the relation of training data size and training data composition to model performance. Finally, we studied model interpretation and analyzed the important features that the models used to make predictions. We believe that our findings can help better understand model results, provide guidance on preparing training data, and improve the robustness of the models. All of our datasets, code, and results are available at https://doi.org/10.6084/m9.figshare.20791240.