Paper Title


Teaching the Old Dog New Tricks: Supervised Learning with Constraints

Authors

Fabrizio Detassis, Michele Lombardi, Michela Milano

Abstract


Adding constraint support in Machine Learning has the potential to address outstanding issues in data-driven AI systems, such as safety and fairness. Existing approaches typically apply constrained optimization techniques to ML training, enforce constraint satisfaction by adjusting the model design, or use constraints to correct the output. Here, we investigate a different, complementary, strategy based on "teaching" constraint satisfaction to a supervised ML method via the direct use of a state-of-the-art constraint solver: this enables taking advantage of decades of research on constrained optimization with limited effort. In practice, we use a decomposition scheme alternating master steps (in charge of enforcing the constraints) and learner steps (where any supervised ML model and training algorithm can be employed). The process leads to approximate constraint satisfaction in general, and convergence properties are difficult to establish; despite this fact, we found empirically that even a naïve setup of our approach performs well on ML tasks with fairness constraints, and on classical datasets with synthetic constraints.
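The decomposition scheme described in the abstract, alternating master steps (constraint enforcement) and learner steps (ordinary supervised training), can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the learner is a least-squares line fit, and the "constraint solver" is replaced by a trivial projection of targets onto [0, 1]. All function names are assumptions.

```python
# Hypothetical sketch of the master/learner alternation from the abstract.
# The toy constraint (targets in [0, 1]) stands in for a real constraint
# solver enforcing e.g. fairness constraints; names are illustrative.

def learner_step(xs, ys):
    """Fit a least-squares line y = a*x + b (any supervised model would do)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return lambda x: a * x + b

def master_step(preds):
    """Project the learner's predictions onto the feasible set,
    yielding adjusted targets for the next learner step."""
    return [min(1.0, max(0.0, p)) for p in preds]

def alternate(xs, ys, iters=5):
    """Alternate learner and master steps; constraint satisfaction
    is only approximate, matching the abstract's caveat."""
    targets = list(ys)
    model = None
    for _ in range(iters):
        model = learner_step(xs, targets)               # learner step
        targets = master_step([model(x) for x in xs])   # master step
    return model

xs = [0.0, 1.0, 2.0, 3.0]
ys = [-0.5, 0.4, 0.9, 1.8]   # some raw labels violate the [0, 1] constraint
model = alternate(xs, ys)
preds = [model(x) for x in xs]
```

After a few iterations the fitted model's predictions hover near the feasible set even though the raw labels violate it, illustrating the "approximate constraint satisfaction" the abstract mentions.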
