I Could’ve Asked That: Reformulating Unanswerable Questions

Authors: Wenting Zhao, Ge Gao, Claire Cardie, Alexander M. Rush

Abstract: When seeking information from unfamiliar documents, users frequently pose
questions that cannot be answered by the documents. While existing large
language models (LLMs) can identify these unanswerable questions, they do not
assist users in reformulating their questions, thereby reducing their overall
utility. We curate CouldAsk, an evaluation benchmark composed of existing and
new datasets for document-grounded question answering, specifically designed to
study reformulating unanswerable questions. We evaluate state-of-the-art
open-source and proprietary LLMs on CouldAsk. The results demonstrate the
limited capabilities of these models in reformulating questions. Specifically,
GPT-4 and Llama2-7B successfully reformulate questions only 26% and 12% of the
time, respectively. Error analysis shows that 62% of the unsuccessful
reformulations stem from the models merely rephrasing the questions or even
generating identical questions. We publicly release the benchmark and the code
to reproduce the experiments.

Source: http://arxiv.org/abs/2407.17469v1
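
To make the reported failure mode concrete, below is a minimal, self-contained Python sketch of (a) a document-grounded prompt asking a model to rewrite an unanswerable question and (b) a crude check for the "mere rephrase" error described in the abstract. The prompt wording, the similarity threshold, and the helper names are illustrative assumptions for this sketch, not the CouldAsk benchmark's actual prompts or error-analysis procedure.

from difflib import SequenceMatcher


def build_reformulation_prompt(document, question):
    # Assemble a simple document-grounded prompt asking a model to rewrite
    # an unanswerable question so the document can answer it.
    # (Illustrative template only; not the CouldAsk prompt.)
    return (
        "The following question cannot be answered by the document.\n\n"
        "Document:\n" + document + "\n\n"
        "Question: " + question + "\n\n"
        "Rewrite the question so that the document can answer it."
    )


def is_mere_rephrase(original, reformulated, threshold=0.9):
    # Flag reformulations that are identical or nearly identical to the
    # original question, the dominant failure mode reported in the abstract.
    # Character-level similarity is an assumed proxy, not the paper's metric.
    a = original.strip().lower()
    b = reformulated.strip().lower()
    return a == b or SequenceMatcher(None, a, b).ratio() >= threshold


# Toy example with made-up strings.
doc = "Marie Curie won the Nobel Prize in Physics in 1903 and in Chemistry in 1911."
bad_q = "Which Nobel Prize did Marie Curie win in 1935?"
print(build_reformulation_prompt(doc, bad_q))
print(is_mere_rephrase(bad_q, bad_q))                                      # identical question
print(is_mere_rephrase(bad_q, "Which Nobel Prizes did Marie Curie win?"))  # grounded rewrite

In this toy setup, repeating the question verbatim is flagged, while a rewrite that the document can actually answer is not; the real benchmark evaluates reformulations against the grounding document rather than by string similarity.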
