Paper Title
Algorithmic Assistance with Recommendation-Dependent Preferences
Paper Authors
Paper Abstract
When an algorithm provides risk assessments, we typically think of them as helpful inputs to human decisions, such as when risk scores are presented to judges or doctors. However, a decision-maker may not only react to the information provided by the algorithm. The decision-maker may also view the algorithmic recommendation as a default action, making it costly for them to deviate, such as when a judge is reluctant to overrule a high-risk assessment for a defendant or a doctor fears the consequences of deviating from recommended procedures. To address such unintended consequences of algorithmic assistance, we propose a principal-agent model of joint human-machine decision-making. Within this model, we consider the effect and design of algorithmic recommendations when they affect choices not just by shifting beliefs, but also by altering preferences. We motivate this assumption from institutional factors, such as a desire to avoid audits, as well as from well-established models in behavioral science that predict loss aversion relative to a reference point, which here is set by the algorithm. We show that recommendation-dependent preferences create inefficiencies where the decision-maker is overly responsive to the recommendation. As a potential remedy, we discuss algorithms that strategically withhold recommendations, and show how they can improve the quality of final decisions.
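As a minimal illustrative sketch (using our own notation, not the paper's actual specification), recommendation-dependent preferences can be captured by adding a deviation penalty to the decision-maker's expected payoff:

\[
U(a \mid s, r) \;=\; \mathbb{E}\!\left[\, v(a, \omega) \mid s \,\right] \;-\; \lambda \, \mathbf{1}\{a \neq r\},
\]

where $a$ is the chosen action, $s$ the decision-maker's private signal about the unknown state $\omega$, $r$ the algorithmic recommendation, and $\lambda \ge 0$ an assumed cost of deviating from the recommendation (e.g., audit exposure, or loss aversion around the reference point set by $r$). With $\lambda = 0$, the recommendation affects choices only through beliefs; with $\lambda > 0$, the decision-maker follows $r$ even in some cases where the signal $s$ favors a different action, which is the over-responsiveness the abstract describes. Withholding the recommendation removes the reference point, so when $r$ would distort more than it informs, no recommendation can yield better final decisions.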