Paper Title
Effects of Human vs. Automatic Feedback on Students' Understanding of AI Concepts and Programming Style
Paper Authors
Paper Abstract
The use of automatic grading tools has become nearly ubiquitous in large undergraduate programming courses, and recent work has focused on improving the quality of automatically generated feedback. However, there is relatively little data directly comparing student outcomes when receiving computer-generated versus human-written feedback. This paper addresses that gap by splitting one 90-student class into two feedback groups and analyzing differences in the two cohorts' performance. The course is an introduction to AI with programming homework assignments. One group of students received detailed computer-generated feedback on their programming assignments describing which parts of the algorithms' logic were missing; the other group additionally received human-written feedback describing how their programs' syntax relates to issues with their logic, along with qualitative (style) recommendations for improving their code. Results on quizzes and exam questions suggest that human feedback helps students obtain a better conceptual understanding, but analyses found no difference in the two groups' ability to collaborate on the final project. The course grade distribution revealed that students who received human-written feedback performed better overall; this effect was most pronounced in the middle two quartiles of each group. These results suggest that feedback on the relationship between syntax and logic may be a primary mechanism by which human feedback improves student outcomes.