Paper Title

Learning to Recommend Method Names with Global Context

Paper Authors

Fang Liu, Ge Li, Zhiyi Fu, Shuai Lu, Yiyang Hao, Zhi Jin

Paper Abstract

In programming, the names of program entities, especially methods, are an intuitive clue to the functionality of the code. To ensure the readability and maintainability of programs, methods should be named properly. Specifically, the names should be meaningful and consistent with other names used in related contexts in their codebase. In recent years, many automated approaches have been proposed to suggest consistent names for methods, among which neural machine translation (NMT) based models are widely used and have achieved state-of-the-art results. However, these NMT-based models mainly focus on extracting code-specific features from the method body or the surrounding methods, while the project-specific context and the documentation of the target method are ignored. We conduct a statistical analysis to explore the relationship between method names and their contexts. Based on the statistical results, we propose GTNM, a Global Transformer-based Neural Model for method name suggestion, which simultaneously considers the local context, the project-specific context, and the documentation of the method. Experimental results on Java methods show that our model outperforms the state-of-the-art results by a large margin on method name suggestion, demonstrating the effectiveness of our proposed model.
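To make the "global context" idea concrete, below is a minimal illustrative sketch in Python of how the three input sources named in the abstract (local context from the method body, project-specific context, and the method's documentation) could be assembled into one token sequence for an encoder-decoder model that generates a method name as subtokens. This is not the authors' implementation: the separator tokens, the subtoken splitter, and the function names are assumptions made for illustration only.

```python
# Illustrative sketch only: assembling a GTNM-style "global" input from
# local context, project-specific context, and documentation.
# Separator tokens and helper names are hypothetical, not from the paper.
import re

SEP_LOCAL, SEP_PROJECT, SEP_DOC = "<local>", "<project>", "<doc>"

def split_subtokens(identifier: str) -> list[str]:
    """Split a camelCase / snake_case identifier into lower-cased subtokens,
    e.g. 'readFileToString' -> ['read', 'file', 'to', 'string']."""
    parts = re.split(r"_|(?<=[a-z0-9])(?=[A-Z])", identifier)
    return [p.lower() for p in parts if p]

def build_global_input(method_body_tokens: list[str],
                       project_identifiers: list[str],
                       doc_comment: str) -> list[str]:
    """Concatenate the three context sources into one token sequence that a
    Transformer encoder could consume; a decoder would then generate the
    method name as a sequence of subtokens."""
    tokens = [SEP_LOCAL] + method_body_tokens
    for ident in project_identifiers:
        tokens += [SEP_PROJECT] + split_subtokens(ident)
    tokens += [SEP_DOC] + doc_comment.lower().split()
    return tokens

if __name__ == "__main__":
    body = ["return", "new", "String", "(", "Files", ".", "readAllBytes",
            "(", "path", ")", ",", "charset", ")", ";"]
    project = ["writeStringToFile", "readLines"]
    doc = "Reads the contents of a file into a String using the given charset."
    print(build_global_input(body, project, doc))
    # A trained decoder might emit subtokens such as
    # ['read', 'file', 'to', 'string'], which are joined back into the
    # camelCase method name 'readFileToString'.
```

The point of the sketch is only the input construction: the model described in the paper is a Transformer-based encoder-decoder trained on such combined contexts, whereas the snippet above stops at preparing the token sequence.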
