Paper Title

Strategic Representation

Paper Authors

Vineet Nair, Ganesh Ghalme, Inbal Talgam-Cohen, Nir Rosenfeld

Paper Abstract

Humans have come to rely on machines for reducing excessive information to manageable representations. But this reliance can be abused -- strategic machines might craft representations that manipulate their users. How can a user make good choices based on strategic representations? We formalize this as a learning problem, and pursue algorithms for decision-making that are robust to manipulation. In our main setting of interest, the system represents attributes of an item to the user, who then decides whether or not to consume. We model this interaction through the lens of strategic classification (Hardt et al. 2016), reversed: the user, who learns, plays first; and the system, which responds, plays second. The system must respond with representations that reveal "nothing but the truth" but need not reveal the entire truth. Thus, the user faces the problem of learning set functions under strategic subset selection, which presents distinct algorithmic and statistical challenges. Our main result is a learning algorithm that minimizes error despite strategic representations, and our theoretical analysis sheds light on the trade-off between learning effort and susceptibility to manipulation.
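To make the order of play concrete, below is a minimal sketch (not the paper's algorithm) of the interaction the abstract describes: the user commits first to a score over attribute sets, and the system best-responds by revealing only true attributes of the item, chosen to make the item look as good as possible. The additive user model and the attribute names are illustrative assumptions.

```python
def best_response_subset(attrs, weights):
    # System's best response: it may reveal only true attributes of the
    # item ("nothing but the truth"), but it chooses which ones. Under an
    # additive user score, hiding every negatively-weighted attribute is
    # optimal -- the truth, but not the whole truth.
    return frozenset(a for a in attrs if weights.get(a, 0.0) > 0)

def user_score(revealed, weights, bias=0.0):
    # Additive set function the user evaluates on the revealed subset.
    return bias + sum(weights.get(a, 0.0) for a in revealed)

# Hypothetical attributes: the user values 'durable', dislikes 'used'.
weights = {"durable": 1.0, "cheap": 0.5, "used": -2.0}
item_attrs = {"durable", "used"}             # the item's true attributes

revealed = best_response_subset(item_attrs, weights)
print(revealed)                              # frozenset({'durable'})
print(user_score(revealed, weights) > 0)     # True: user consumes
print(user_score(item_attrs, weights) > 0)   # False: full truth says no
```

The gap between the last two checks is the manipulation at issue: the revealed subset flips the user's decision relative to the full attribute set, which is the susceptibility the paper's learning algorithm is designed to withstand.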
