Paper Title

Learning Bounds for Risk-sensitive Learning

Paper Authors

Jaeho Lee, Sejun Park, Jinwoo Shin

Paper Abstract

In risk-sensitive learning, one aims to find a hypothesis that minimizes a risk-averse (or risk-seeking) measure of loss, instead of the standard expected loss. In this paper, we propose to study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents (OCE): our general scheme can handle various known risks, e.g., the entropic risk, mean-variance, and conditional value-at-risk, as special cases. We provide two learning bounds on the performance of the empirical OCE minimizer. The first result gives an OCE guarantee based on the Rademacher average of the hypothesis space, which generalizes and improves existing results on the expected loss and the conditional value-at-risk. The second result, based on a novel variance-based characterization of OCE, gives an expected loss guarantee with a suppressed dependence on the smoothness of the selected OCE. Finally, we demonstrate the practical implications of the proposed bounds via exploratory experiments on neural networks.
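For context, the "optimized certainty equivalent" in the abstract refers to a family of risk measures defined through a one-dimensional convex problem. A standard statement (following Ben-Tal and Teboulle's formulation, on which this line of work builds; the conditions on the disutility φ below are the usual ones, not quoted from the paper itself) is:

```latex
% OCE of a loss Z, for a nondecreasing convex disutility \phi
% with \phi(0) = 0 and 1 \in \partial\phi(0):
\mathrm{OCE}_{\phi}(Z) \;=\; \inf_{\lambda \in \mathbb{R}}
  \bigl\{\, \lambda + \mathbb{E}\bigl[\phi(Z - \lambda)\bigr] \bigr\}

% Special cases named in the abstract:
%   \phi(t) = e^{t} - 1          \;\Rightarrow\; \log \mathbb{E}[e^{Z}]               % entropic risk
%   \phi(t) = t + c\,t^{2}       \;\Rightarrow\; \mathbb{E}[Z] + c\,\mathrm{Var}[Z]   % mean-variance
%   \phi(t) = \max(t,0)/\alpha   \;\Rightarrow\; \mathrm{CVaR}_{\alpha}(Z)
```

Because the infimum ranges over a single scalar λ, the empirical OCE of a loss sample is cheap to evaluate. Below is a minimal numerical sketch, not the authors' code; `empirical_oce`, the sample distribution, and the choice of optimizer are illustrative assumptions.

```python
# Minimal sketch: evaluate the empirical OCE of a loss sample,
#   OCE_phi ≈ inf over lambda of { lambda + mean(phi(losses - lambda)) }.
import numpy as np
from scipy.optimize import minimize_scalar

def empirical_oce(losses, phi):
    """Empirical OCE of a loss sample for a given disutility phi."""
    objective = lambda lam: lam + np.mean(phi(losses - lam))
    # For the disutilities below, the optimal lambda lies within the sample
    # range, so a bounded 1-d search over [min, max] solves this convex problem.
    res = minimize_scalar(objective, bounds=(losses.min(), losses.max()),
                          method="bounded")
    return res.fun

rng = np.random.default_rng(0)
losses = rng.uniform(size=10_000)  # bounded losses, for illustration

cvar = empirical_oce(losses, lambda t: np.maximum(t, 0.0) / 0.1)  # CVaR at alpha = 0.1
entropic = empirical_oce(losses, np.expm1)                        # ≈ log E[exp(Z)]
mean_var = empirical_oce(losses, lambda t: t + 0.5 * t**2)        # ≈ E[Z] + 0.5 Var[Z]
```

The same scalar minimization is what makes empirical OCE minimization practical as a training objective: for a fixed hypothesis, λ can be optimized jointly with (or alternately to) the model parameters.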
