Paper Title

Teaching Models to Express Their Uncertainty in Words

Paper Authors

Stephanie Lin, Jacob Hilton, Owain Evans

Paper Abstract

We show that a GPT-3 model can learn to express uncertainty about its own answers in natural language -- without use of model logits. When given a question, the model generates both an answer and a level of confidence (e.g. "90% confidence" or "high confidence"). These levels map to probabilities that are well calibrated. The model also remains moderately calibrated under distribution shift, and is sensitive to uncertainty in its own answers, rather than imitating human examples. To our knowledge, this is the first time a model has been shown to express calibrated uncertainty about its own answers in natural language. For testing calibration, we introduce the CalibratedMath suite of tasks. We compare the calibration of uncertainty expressed in words ("verbalized probability") to uncertainty extracted from model logits. Both kinds of uncertainty are capable of generalizing calibration under distribution shift. We also provide evidence that GPT-3's ability to generalize calibration depends on pre-trained latent representations that correlate with epistemic uncertainty over its answers.
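The abstract evaluates how well verbalized confidence levels (e.g. "90% confidence") match the model's actual accuracy. As a rough illustration of that idea, the sketch below computes a standard expected calibration error over made-up (confidence, correctness) pairs; it is not the paper's code, and the binning scheme and example data are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's code): measure how far stated
# confidences are from empirical accuracy, a common notion of calibration.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Mean |accuracy - confidence| per confidence bin, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences >= lo) & (confidences < hi)
        if hi == edges[-1]:          # include confidence == 1.0 in the last bin
            mask |= confidences == hi
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight the gap by the bin's share of data
    return ece

# Hypothetical outputs: verbalized confidences parsed to probabilities
# ("90% confidence" -> 0.9) and whether each corresponding answer was correct.
verbalized = [0.9, 0.6, 0.8, 0.3, 0.95, 0.5]
is_correct = [1, 1, 0, 0, 1, 1]
print(f"ECE: {expected_calibration_error(verbalized, is_correct):.3f}")
```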
