Paper Title
Hurtful Words: Quantifying Biases in Clinical Contextual Word Embeddings
Paper Authors
Paper Abstract
In this work, we examine the extent to which embeddings may encode marginalized populations differently, and how this may lead to a perpetuation of biases and worsened performance on clinical tasks. We pretrain deep embedding models (BERT) on medical notes from the MIMIC-III hospital dataset, and quantify potential disparities using two approaches. First, we identify dangerous latent relationships that are captured by the contextual word embeddings using a fill-in-the-blank method with text from real clinical notes and a log probability bias score quantification. Second, we evaluate performance gaps across different definitions of fairness on over 50 downstream clinical prediction tasks that include detection of acute and chronic conditions. We find that classifiers trained from BERT representations exhibit statistically significant differences in performance, often favoring the majority group with regard to gender, language, ethnicity, and insurance status. Finally, we explore shortcomings of using adversarial debiasing to obfuscate subgroup information in contextual word embeddings, and recommend best practices for such deep embedding models in clinical settings.
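To make the fill-in-the-blank probe concrete, the sketch below shows how a masked language model can be queried with a template and scored with a log probability bias score, in the spirit of the method the abstract describes. This is not the authors' released code: the model name (`bert-base-uncased`), the template, and the target/attribute words are illustrative assumptions; the paper instead pretrains BERT on MIMIC-III clinical notes and builds its templates from sentences in real notes.

```python
# Hedged sketch of a fill-in-the-blank bias probe with a log probability bias
# score. Model, template, and word choices below are illustrative assumptions,
# not the paper's actual setup.
import math
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")  # assumed model
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def masked_prob(sentence: str, target_word: str) -> float:
    """Probability the model assigns to target_word at the first [MASK] slot."""
    inputs = tokenizer(sentence, return_tensors="pt")
    mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits[0, mask_positions[0]], dim=-1)
    target_id = tokenizer.convert_tokens_to_ids(target_word)
    return probs[target_id].item()

def log_prob_bias_score(template: str, target: str, attribute: str) -> float:
    """log( P(target | attribute shown) / P(target | attribute also masked) )."""
    mask = tokenizer.mask_token
    p_attr = masked_prob(template.format(target=mask, attribute=attribute), target)
    p_prior = masked_prob(template.format(target=mask, attribute=mask), target)
    return math.log(p_attr / p_prior)

# A positive (he - she) gap would suggest the attribute word is more strongly
# associated with "he" than with "she" under this (hypothetical) template.
template = "{target} is {attribute}"
gap = (log_prob_bias_score(template, "he", "aggressive")
       - log_prob_bias_score(template, "she", "aggressive"))
print(f"log probability bias gap (he - she): {gap:.3f}")
```

The single template and attribute pair above only illustrates the scoring mechanics; per the abstract, the paper draws its fill-in-the-blank text from real clinical notes and aggregates the scores to quantify group-level differences.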