REALM: Retrieval-Augmented Language Model Pre-Training
https://papers.cool/arxiv/2002.08909
Authors: Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang
Summary: Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering. However, this knowledge is stored implicitly in the parameters of a neural network, requiring ever-larger networks to cover more facts. To capture knowledge in a more modular and interpretable way, we augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend over documents from a large corpus such as Wikipedia, used during pre-training, fine-tuning and inference. For the first time, we show how to pre-train such a knowledge retriever in an unsupervised manner, using masked language modeling as the learning signal and backpropagating through a retrieval step that considers millions of documents. We demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the challenging task of Open-domain Question Answering (Open-QA). We compare against state-of-the-art models for both explicit and implicit knowledge storage on three popular Open-QA benchmarks, and find that we outperform all previous methods by a significant margin (4-16% absolute accuracy), while also providing qualitative benefits such as interpretability and modularity.
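To make the abstract's retrieve-then-predict mechanism concrete, here is a minimal sketch of the marginalization REALM performs: p(y|x) = Σ_z p(y|x,z)·p(z|x), where the retriever distribution p(z|x) is a softmax over inner products of dense query and document embeddings. All function names, shapes, and the toy data below are illustrative assumptions, not the authors' code; in the real system, top-k retrieval runs as approximate maximum inner product search (MIPS) over millions of documents.

```python
import numpy as np

def retrieval_scores(query_emb: np.ndarray, doc_embs: np.ndarray) -> np.ndarray:
    """Relevance score f(x, z) = Embed(x) . Embed(z) for every document."""
    return doc_embs @ query_emb

def softmax(logits: np.ndarray) -> np.ndarray:
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def marginal_log_likelihood(query_emb, doc_embs, log_p_y_given_xz, top_k=5):
    """log p(y|x), marginalized over the top-k retrieved documents.

    log_p_y_given_xz: log p(y | x, z) for each document, as produced by the
    knowledge-augmented encoder (stubbed here as a precomputed input).
    """
    scores = retrieval_scores(query_emb, doc_embs)
    # In practice the top-k documents come from approximate MIPS over
    # millions of candidates; here we just sort a small array.
    top = np.argsort(scores)[-top_k:]
    p_z_given_x = softmax(scores[top])                       # retriever p(z|x)
    p_y = (p_z_given_x * np.exp(log_p_y_given_xz[top])).sum()
    return np.log(p_y)

# Toy usage: 8 candidate documents with 4-dim embeddings.
rng = np.random.default_rng(0)
q = rng.normal(size=4)
docs = rng.normal(size=(8, 4))
log_py = np.log(rng.uniform(0.01, 0.99, size=8))
print(marginal_log_likelihood(q, docs, log_py))
```

Because p(z|x) is a differentiable softmax, the masked-language-modeling loss backpropagates through the retrieval step, which is how the retriever is trained without supervision.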
A: This paper proposes a new framework, REALM (Retrieval-Augmented Language Model Pre-Training), to address the problem that the world knowledge captured by standard pre-training is stored implicitly in network parameters: covering more facts requires ever-larger models, and the stored knowledge is neither interpretable nor modular.
In summary, REALM combines retrieval augmentation with pre-training to improve language-model performance on tasks such as open-domain question answering, while making the model's knowledge more interpretable and modular.
A: The paper situates REALM among several related research areas and lines of prior work.