Paper Title

Self-Adaptive In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering

Authors

Zhiyong Wu, Yaoxiang Wang, Jiacheng Ye, Lingpeng Kong

Abstract

Despite the surprising few-shot performance of in-context learning (ICL), it is still a common practice to randomly sample examples to serve as context. This paper advocates a new principle for ICL: self-adaptive in-context learning. A self-adaptation mechanism is introduced to help each sample find an in-context example permutation (i.e., selection and ordering) that can derive the correct prediction, thus maximizing performance. To validate the effectiveness of self-adaptive ICL, we propose a general select-then-rank framework and instantiate it with new selection and ranking algorithms. Upon extensive evaluation on eight different NLP datasets, our self-adaptive ICL method achieves a 40% relative improvement over the common practice setting. Further analysis reveals the enormous potential of self-adaptive ICL: given more advanced algorithms, it might be able to close the gap between ICL and finetuning. Our code is released to facilitate future research in this area: https://github.com/Shark-NLP/self-adaptive-ICL
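
The abstract describes a select-then-rank pipeline: for each test sample, first select a small set of candidate in-context examples, then rank their possible permutations and keep the one expected to yield the best prediction. The sketch below illustrates that pipeline shape only; it is not the paper's method. The choice of GPT-2, the sentiment prompt template, the lexical-overlap selection heuristic, and the label-averaged LM loss used as the ranking score are all illustrative assumptions standing in for the paper's actual selection and ranking algorithms.

```python
# Minimal sketch of a select-then-rank pipeline for self-adaptive ICL.
# Assumptions (not from the paper): GPT-2 as the inference LM, lexical-overlap
# selection, and average label negative log-likelihood as the ranking score.
from itertools import permutations

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()


def select_candidates(query, pool, k=3):
    """Pick the k pool examples with the largest word overlap with the query."""
    def overlap(example):
        return len(set(query.lower().split()) & set(example["text"].lower().split()))
    return sorted(pool, key=overlap, reverse=True)[:k]


def prompt_loss(prompt):
    """Average token negative log-likelihood of the prompt under the LM."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    return out.loss.item()


def rank_permutations(query, candidates, labels):
    """Score every ordering of the candidates; lower score = preferred permutation."""
    best_perm, best_score = None, float("inf")
    for perm in permutations(candidates):
        context = "".join(
            f"Review: {ex['text']}\nSentiment: {ex['label']}\n\n" for ex in perm
        )
        # Proxy ranking criterion: LM loss on the full prompt, averaged over the
        # candidate labels (a stand-in for the paper's ranking score).
        score = sum(
            prompt_loss(context + f"Review: {query}\nSentiment: {y}") for y in labels
        ) / len(labels)
        if score < best_score:
            best_perm, best_score = perm, score
    return list(best_perm)


if __name__ == "__main__":
    pool = [
        {"text": "A delightful and moving film.", "label": "positive"},
        {"text": "Painfully dull from start to finish.", "label": "negative"},
        {"text": "The plot is thin but the acting shines.", "label": "positive"},
        {"text": "A complete waste of two hours.", "label": "negative"},
    ]
    query = "The acting is wonderful and the story moved me."
    candidates = select_candidates(query, pool, k=3)
    ordered = rank_permutations(query, candidates, labels=["positive", "negative"])
    print([ex["text"] for ex in ordered])
```

Because permutations grow factorially, a per-sample search like this is only feasible for small candidate sets (e.g., 3 or 4 examples); the key design point is that the permutation is chosen separately for every test sample rather than fixed globally.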
