Questions Are All You Need to Train a Dense Passage Retriever

Bibliographic Details
Published in: Transactions of the Association for Computational Linguistics, Vol. 11, pp. 600-616
Main Authors: Sachan, Devendra Singh, Lewis, Mike, Yogatama, Dani, Zettlemoyer, Luke, Pineau, Joelle, Zaheer, Manzil
Format: Journal Article
Language: English
Published: MIT Press, One Broadway, 12th Floor, Cambridge, Massachusetts 02142, USA, 20-06-2023
Description
Summary: We introduce ART, a new corpus-level autoencoding approach for training dense retrieval models that does not require any labeled training data. Dense retrieval is a central challenge for open-domain tasks, such as Open QA, where state-of-the-art methods typically require large supervised datasets with custom hard-negative mining and denoising of positive examples. ART, in contrast, only requires access to unpaired inputs and outputs (e.g., questions and potential answer passages). It uses a new passage-retrieval autoencoding scheme, where (1) an input question is used to retrieve a set of evidence passages, and (2) the passages are then used to compute the probability of reconstructing the original question. Training for retrieval based on question reconstruction enables effective unsupervised learning of both passage and question encoders, which can later be incorporated into complete Open QA systems without any further finetuning. Extensive experiments demonstrate that ART obtains state-of-the-art results on multiple QA retrieval benchmarks with only generic initialization from a pre-trained language model, removing the need for labeled data and task-specific losses. Our code and model checkpoints are available at: .
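The summary describes the training signal only in words; below is a minimal PyTorch sketch of one way such a question-reconstruction objective could look, assuming dot-product retrieval scores, a frozen pretrained language model that supplies P(question | passage) for each retrieved passage, and a KL-style matching loss. The function and variable names are illustrative placeholders, not the authors' released code, and the exact loss and normalization used in the paper may differ.

import torch
import torch.nn.functional as F

def question_reconstruction_loss(question_emb, passage_embs, reconstruction_log_probs):
    # Retriever's distribution over the retrieved passages (dot-product scores).
    retriever_log_probs = F.log_softmax(passage_embs @ question_emb, dim=-1)
    # Target distribution from P(question | passage) under a frozen language model.
    target_probs = F.softmax(reconstruction_log_probs, dim=-1)
    # KL(target || retriever): trains the encoders without labeled question-passage pairs.
    return F.kl_div(retriever_log_probs, target_probs, reduction="sum")

# Toy usage: random tensors stand in for encoder outputs and language-model scores.
d, k = 768, 8
question_emb = torch.randn(d, requires_grad=True)      # from a question encoder
passage_embs = torch.randn(k, d, requires_grad=True)   # from a passage encoder
recon_log_probs = torch.randn(k)                       # from a frozen pretrained LM
loss = question_reconstruction_loss(question_emb, passage_embs, recon_log_probs)
loss.backward()

In this sketch the gradient flows only into the question and passage embeddings, mirroring the idea that the reconstruction likelihoods act as a fixed teaching signal for the retriever.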
Bibliography: 2023
ISSN: 2307-387X
DOI: 10.1162/tacl_a_00564