Paper Title
Language modeling via stochastic processes
Paper Authors
Paper Abstract
Modern language models can generate high-quality short texts. However, they often meander or become incoherent when generating longer texts. These issues arise from the next-token-only language modeling objective. Recent work in self-supervised learning suggests that models can learn good latent representations via contrastive learning, which can be effective for discriminative tasks. Our work analyzes the application of contrastive representations to generative tasks, such as long text generation. We propose one approach for leveraging contrastive representations, which we call Time Control (TC). TC first learns a contrastive representation of the target text domain, then generates text by decoding from these representations. Compared to domain-specific methods and fine-tuning GPT2 across a variety of text domains, TC performs competitively with methods designed specifically for learning sentence representations on discourse coherence. On long text generation settings, TC preserves text structure both in terms of ordering (up to $+15\%$ better) and text length consistency (up to $+90\%$ better).
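As a rough illustration of the contrastive-representation step described in the abstract, the sketch below implements a generic InfoNCE-style objective over sentence embeddings in PyTorch. This is a minimal sketch, not the paper's exact training objective (TC ties its contrastive formulation to a stochastic-process prior over the latent trajectory); the function name `info_nce_loss`, the encoder outputs `anchor_emb`/`positive_emb`, and the temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchor_emb: torch.Tensor,
                  positive_emb: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """Generic InfoNCE contrastive loss over a batch of sentence embeddings.

    anchor_emb, positive_emb: (batch, dim) outputs of a sentence encoder.
    Row i of positive_emb is the positive for row i of anchor_emb; all other
    rows in the batch serve as in-batch negatives.
    """
    # Cosine similarity via L2-normalized dot products, scaled by temperature.
    anchor = F.normalize(anchor_emb, dim=-1)
    positive = F.normalize(positive_emb, dim=-1)
    logits = anchor @ positive.t() / temperature  # (batch, batch)

    # The correct "class" for each anchor is its own index (the matching positive).
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)

# Example usage with random stand-ins for encoder outputs.
if __name__ == "__main__":
    batch, dim = 8, 128
    loss = info_nce_loss(torch.randn(batch, dim), torch.randn(batch, dim))
    print(loss.item())
```

In TC's pipeline as described above, an encoder trained with a contrastive objective of this general shape would supply the latent sentence representations that the decoder is then conditioned on during generation.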