Authors: Yucheng Jiang, Yijia Shao, Dekun Ma, Sina J. Semnani, Monica S. Lam
Abstract: While language model (LM)-powered chatbots and generative search engines
excel at answering concrete queries, discovering information in the terrain of
unknown unknowns remains challenging for users. To emulate the common
educational scenario in which children/students learn by listening to and
participating in the conversations of their parents/teachers, we create
Collaborative STORM (Co-STORM). Unlike QA systems that require users to ask all
the questions, Co-STORM lets users observe and occasionally steer the discourse
among several LM agents. The agents ask questions on the user’s behalf,
allowing the user to discover unknown unknowns serendipitously. To facilitate
user interaction, Co-STORM assists users in tracking the discourse by
organizing the uncovered information into a dynamic mind map, ultimately
generating a comprehensive report as takeaways. For automatic evaluation, we
construct the WildSeek dataset by collecting real information-seeking records
with user goals. Co-STORM outperforms baseline methods on both discourse trace
and report quality. In a further human evaluation, 70% of participants prefer
Co-STORM over a search engine, and 78% favor it over a RAG chatbot.
Source: http://arxiv.org/abs/2408.15232v1
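The abstract describes a turn-based discourse among several LM agents, an optional user interjection, and a dynamic mind map that later seeds a report. The Python sketch below illustrates one plausible way such a loop could be structured; it is not the authors' implementation. All names (Agent, MindMap, run_discourse, lm_generate) are hypothetical, and a stub stands in for the real LM and retrieval calls.

```python
# Minimal sketch (assumptions, not the Co-STORM codebase): LM "expert" agents
# alternate asking and answering questions on the user's behalf, the user may
# occasionally take over a turn to steer the discourse, and uncovered
# information is filed into a dynamic mind map that is rendered as a report.

from dataclasses import dataclass, field


def lm_generate(prompt: str) -> str:
    """Placeholder for a language-model call (an API-backed LM in practice)."""
    return f"[LM output for: {prompt[:60]}...]"


@dataclass
class MindMapNode:
    topic: str
    notes: list[str] = field(default_factory=list)


class MindMap:
    """Dynamic hierarchy that tracks information uncovered during the discourse."""

    def __init__(self, root_topic: str):
        self.root_topic = root_topic
        self.subtopics: dict[str, MindMapNode] = {}

    def insert(self, subtopic: str, note: str) -> None:
        node = self.subtopics.setdefault(subtopic, MindMapNode(subtopic))
        node.notes.append(note)

    def to_report(self) -> str:
        lines = [f"# {self.root_topic}"]
        for node in self.subtopics.values():
            lines.append(f"## {node.topic}")
            lines.extend(f"- {note}" for note in node.notes)
        return "\n".join(lines)


@dataclass
class Agent:
    persona: str  # e.g., "domain expert", "moderator"

    def ask(self, topic: str, history: list[str]) -> str:
        return lm_generate(f"As a {self.persona}, ask a question about {topic}. "
                           f"Recent history: {history[-3:]}")

    def answer(self, question: str) -> str:
        # In the real system this step would be grounded in retrieved sources.
        return lm_generate(f"Answer with cited evidence: {question}")


def run_discourse(topic: str, agents: list[Agent], max_turns: int = 6,
                  user_interject=None) -> str:
    """Alternate agent question/answer turns, letting the user steer occasionally."""
    mind_map = MindMap(topic)
    history: list[str] = []
    for turn in range(max_turns):
        question = None
        if user_interject is not None:
            question = user_interject(turn)  # returns a question string or None
        if question is None:
            question = agents[turn % len(agents)].ask(topic, history)
        answer = agents[(turn + 1) % len(agents)].answer(question)
        history += [question, answer]
        # File the new information under a subtopic chosen by the LM.
        subtopic = lm_generate(f"Name the subtopic of: {question}")
        mind_map.insert(subtopic, answer)
    return mind_map.to_report()


if __name__ == "__main__":
    report = run_discourse("solid-state batteries",
                           [Agent("materials scientist"), Agent("moderator")])
    print(report)
```

In this sketch the user only steers by supplying an occasional question; the paper's system additionally grounds agent answers in retrieved web sources and keeps the mind map as a shared, evolving structure rather than a flat topic list.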