Paper Title
Learning Losses for Strategic Classification
Paper Authors
Paper Abstract
Strategic classification, i.e., classification under possible strategic manipulation of features, has received a lot of attention from both the machine learning and the game theory communities. Most works focus on analysing properties of the optimal decision rule under such manipulations. In our work we take a learning-theoretic perspective, focusing on the sample complexity needed to learn a good decision rule that is robust to strategic manipulation. We perform this analysis by introducing a novel loss function, the \emph{strategic manipulation loss}, which takes into account both the accuracy of the final decision rule and its vulnerability to manipulation. For a known graph of possible manipulations, we analyse the sample complexity in terms of the complexity of the function class and of the manipulation graph. Additionally, we initiate the study of learning under unknown manipulation capabilities of the involved agents. Using techniques from transfer learning theory, we define a similarity measure for manipulation graphs and show that learning outcomes are robust with respect to small changes in the manipulation graph. Lastly, we analyse the sample complexity of learning the manipulation capability of agents with respect to this similarity measure, providing novel guarantees for strategic classification with respect to an unknown manipulation graph.
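For concreteness, the following is a minimal Python sketch of one plausible formalization of a strategic manipulation loss, not necessarily the paper's exact definition. It assumes agents may move along edges of a known manipulation graph and do so only to obtain a positive classification; the names `strategic_loss` and `empirical_strategic_risk` are illustrative.

```python
from typing import Callable, Dict, Hashable, List, Tuple

# Assumed formalization (illustrative): an example (x, y) incurs loss 1 if the
# classifier errs on x itself, or if a negatively labelled x can reach a
# neighbour in the manipulation graph that the classifier accepts, i.e. the
# decision rule is vulnerable to manipulation at x.

def strategic_loss(
    h: Callable[[Hashable], int],           # binary classifier, returns 0 or 1
    x: Hashable,
    y: int,
    graph: Dict[Hashable, List[Hashable]],  # manipulation graph: x -> points reachable from x
) -> int:
    if h(x) != y:
        return 1  # plain classification error
    if y == 0 and any(h(z) == 1 for z in graph.get(x, [])):
        return 1  # a correctly rejected negative can manipulate into the positive region
    return 0


def empirical_strategic_risk(
    h: Callable[[Hashable], int],
    sample: List[Tuple[Hashable, int]],
    graph: Dict[Hashable, List[Hashable]],
) -> float:
    return sum(strategic_loss(h, x, y, graph) for x, y in sample) / len(sample)


if __name__ == "__main__":
    # Toy example: points on a line, each agent can move one step to the right.
    graph = {0: [1], 1: [2], 2: [3], 3: []}
    h = lambda x: int(x >= 2)               # accept points >= 2
    sample = [(0, 0), (1, 0), (2, 1), (3, 1)]
    # (1, 0) is correctly rejected but can manipulate to 2, which is accepted,
    # so it contributes to the strategic risk even though the 0/1 risk is 0.
    print(empirical_strategic_risk(h, sample, graph))  # 0.25
```

Under this reading, minimizing the strategic risk trades off ordinary accuracy against the amount of the negative region from which the positive region is reachable in the manipulation graph, which is the trade-off the abstract attributes to the strategic manipulation loss.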