Recitation-Augmented Language Models

https://papers.cool/arxiv/2210.01296

Authors: Zhiqing Sun, Xuezhi Wang, Yi Tay, Yiming Yang, Denny Zhou

Summary: We propose a new paradigm to help Large Language Models (LLMs) generate more accurate factual knowledge without retrieving from an external corpus, called RECITation-augmented gEneration (RECITE). Different from retrieval-augmented language models that retrieve relevant documents before generating the outputs, given an input, RECITE first recites one or several relevant passages from LLMs' own memory via sampling, and then produces the final answers. We show that RECITE is a powerful paradigm for knowledge-intensive NLP tasks. Specifically, we show that by utilizing recitation as the intermediate step, a recite-and-answer scheme can achieve new state-of-the-art performance in various closed-book question answering (CBQA) tasks. In experiments, we verify the effectiveness of RECITE on four pre-trained models (PaLM, UL2, OPT, and Codex) and three CBQA tasks (Natural Questions, TriviaQA, and HotpotQA). Our code is available at https://github.com/Edward-Sun/RECITE.


Q: What problem does this paper attempt to solve?

A: This paper proposes a new paradigm, called RECITation-augmented gEneration (RECITE), that helps large language models (LLMs) generate more accurate factual knowledge without retrieving from an external corpus. Unlike retrieval-augmented language models, which retrieve relevant documents before generating the output, RECITE first "recites" one or several relevant passages from the model's own memory via sampling, and then produces the final answer. The paper demonstrates RECITE's strength on knowledge-intensive natural language processing (NLP) tasks; in particular, on closed-book question answering (CBQA) tasks, using recitation as an intermediate step in a recite-and-answer scheme achieves new state-of-the-art performance.
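The two-step recite-and-answer scheme described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the prompt wording, the `generate` stub, and the fixed number of recitations are all assumptions (the paper uses few-shot prompting and additional techniques such as self-consistency over sampled recitations).

```python
def generate(prompt: str) -> str:
    """Placeholder for a call to an LLM (e.g. PaLM, UL2, OPT, or Codex).
    In a real system this would sample a completion from the model."""
    return "<model output>"


def recite_and_answer(question: str, num_recitations: int = 3) -> str:
    # Step 1: sample one or more passages relevant to the question
    # from the LLM's own memory (no external retrieval).
    recite_prompt = (
        "Recite a passage from memory that is relevant to the question.\n"
        f"Question: {question}\n"
        "Passage:"
    )
    passages = [generate(recite_prompt) for _ in range(num_recitations)]

    # Step 2: condition the final answer on the recited passages.
    context = "\n".join(passages)
    answer_prompt = (
        f"Passages:\n{context}\n"
        f"Question: {question}\n"
        "Answer:"
    )
    return generate(answer_prompt)
```

The key design point is that the "retrieved" context in step 2 comes entirely from the model's own parameters, so the scheme needs no external corpus or retriever index.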


Q: What related research exists?

A: The related research areas include:

  1. Open-Domain Question Answering
  2. Closed-Book Question Answering
  3. In-Context Few-Shot Learning
  4. Rationale-Augmented Reasoning
  5. Memorization in Large Language Models
  6. Transformer Memory as an Information Retrieval Model
  7. Self-Talk Models

These lines of work provide the theoretical foundation and technical support for the RECITE method, especially regarding how to design prompts that elicit a model's latent knowledge and reasoning ability.


Q: How does the paper solve this problem?