Paper Title
PCAL: A Privacy-preserving Intelligent Credit Risk Modeling Framework Based on Adversarial Learning
Paper Authors
Paper Abstract
Credit risk modeling has permeated our everyday life. Most banks and financial companies use this technique to model their clients' trustworthiness. While machine learning is increasingly used in this field, the resulting large-scale collection of user private information has reinvigorated the privacy debate, considering dozens of data breach incidents every year caused by unauthorized hackers, and (potentially even more) information misuse/abuse by authorized parties. To address those critical concerns, this paper proposes a framework of Privacy-preserving Credit risk modeling based on Adversarial Learning (PCAL). PCAL aims to mask the private information inside the original dataset, while maintaining the important utility information for the target prediction task performance, by (iteratively) weighing between a privacy-risk loss and a utility-oriented loss. PCAL is compared against off-the-shelf options in terms of both utility and privacy protection. Results indicate that PCAL can learn an effective, privacy-free representation from user data, providing a solid foundation towards privacy-preserving machine learning for credit risk analysis.
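The abstract's core mechanism is an adversarial trade-off: a representation is trained to stay useful for the credit-risk task while an adversary that tries to recover private attributes is driven to fail. The sketch below illustrates that iterative weighing of a utility-oriented loss against a privacy-risk loss in PyTorch; the synthetic data, network sizes, `lambda_priv` weight, and alternating-update scheme are illustrative assumptions, not the architecture or training procedure specified by the paper.

```python
# Minimal sketch of the privacy/utility adversarial trade-off described in the abstract.
# All concrete choices here (data, layer sizes, lambda_priv, alternating updates) are
# assumptions for illustration, not the paper's exact PCAL implementation.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-ins: 32 tabular features, a binary credit-risk label,
# and a binary private attribute that should be masked in the representation.
X = torch.randn(512, 32)
y_util = torch.randint(0, 2, (512,))
y_priv = torch.randint(0, 2, (512,))

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
utility_head = nn.Linear(16, 2)   # target credit-risk prediction
privacy_head = nn.Linear(16, 2)   # adversary trying to recover the private attribute

ce = nn.CrossEntropyLoss()
opt_main = torch.optim.Adam(list(encoder.parameters()) + list(utility_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(privacy_head.parameters(), lr=1e-3)
lambda_priv = 1.0  # assumed weight on the privacy-risk term

for step in range(200):
    # 1) Adversary update: learn to predict the private attribute from the current representation.
    z = encoder(X).detach()
    adv_loss = ce(privacy_head(z), y_priv)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Main update: keep utility high while making the representation
    #    uninformative to the adversary (i.e., maximize the adversary's loss).
    z = encoder(X)
    util_loss = ce(utility_head(z), y_util)
    priv_loss = ce(privacy_head(z), y_priv)
    main_loss = util_loss - lambda_priv * priv_loss
    opt_main.zero_grad()
    main_loss.backward()
    opt_main.step()
```

After training under this kind of objective, the encoder's output can be evaluated as the abstract describes: utility is measured by how well the credit-risk label can still be predicted from the representation, and privacy protection by how poorly a freshly trained adversary recovers the private attribute from it.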