Authors: Justin Wong, Yury Orlovskiy, Michael Luo, Sanjit A. Seshia, Joseph E. Gonzalez
Abstract: Generating diverse responses from large language models (LLMs) is crucial for
applications such as planning/search and synthetic data generation, where
diversity provides distinct answers across generations. Prior approaches rely
on increasing temperature to increase diversity. However, contrary to popular
belief, we show that not only does this approach produce lower-quality individual
generations as temperature increases, but it also depends on the model's next-token
probabilities being similar to the true distribution of answers. We propose
SimpleStrat, an alternative approach that uses the language model itself to
partition the space into strata. At inference, a random stratum is selected and
a sample is drawn from within that stratum. To measure diversity, we introduce
CoverageQA, a dataset of underspecified questions with multiple equally
plausible answers, and assess diversity by measuring the KL divergence between the
output distribution and a uniform distribution over valid ground-truth answers.
As computing per-response probabilities is infeasible for proprietary models, we
instead measure recall over ground-truth solutions. Our evaluation shows that
SimpleStrat achieves 0.05 higher recall than GPT-4o and an average reduction in
KL divergence of 0.36 compared to Llama 3.
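The following is a minimal sketch of the stratified-sampling idea described in the abstract: the model first proposes strata that partition the answer space, then each generation picks a stratum uniformly at random and samples within it. The prompt wording, the `generate(prompt, temperature)` wrapper, and the parameter names are illustrative assumptions, not the authors' implementation.

```python
import random

def propose_strata(generate, question, n_strata=5):
    """Ask the model to partition the space of plausible answers into strata.

    The prompt below is a guess at the auto-stratification step; the paper's
    exact prompting strategy may differ.
    """
    prompt = (
        f"Question: {question}\n"
        f"List {n_strata} distinct, non-overlapping categories that plausible "
        "answers could fall into, one per line."
    )
    reply = generate(prompt, temperature=0.0)
    return [line.strip("- ").strip() for line in reply.splitlines() if line.strip()]

def stratified_sample(generate, question, strata):
    """Pick a stratum uniformly at random, then sample an answer constrained to it."""
    stratum = random.choice(strata)
    prompt = (
        f"Question: {question}\n"
        f"Give an answer that falls within this category: {stratum}.\n"
        "Answer concisely."
    )
    return generate(prompt, temperature=1.0)
```

Repeated calls to `stratified_sample` spread generations across strata instead of repeatedly landing on the model's single most likely answer, which is the diversity effect the abstract contrasts with plain temperature scaling.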
Source: http://arxiv.org/abs/2410.09038v1
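Below is a short sketch of the diversity metric the abstract describes: KL divergence between the empirical output distribution and a uniform distribution over valid ground-truth answers, plus recall over those answers. It assumes responses have already been normalized onto the valid answer set and simply ignores invalid responses; that convention, and the function name, are assumptions rather than the paper's exact evaluation code.

```python
import math
from collections import Counter

def diversity_metrics(responses, valid_answers):
    """Return (KL divergence to uniform over valid answers, recall of valid answers).

    Responses not in `valid_answers` are ignored here (one possible convention).
    """
    valid = set(valid_answers)
    hits = [r for r in responses if r in valid]
    if not hits:
        return float("inf"), 0.0

    counts = Counter(hits)
    total = sum(counts.values())
    uniform_p = 1.0 / len(valid)

    # KL(P_output || Uniform): 0 when generations cover valid answers uniformly.
    kl = sum((c / total) * math.log((c / total) / uniform_p)
             for c in counts.values())

    # Fraction of valid answers produced at least once.
    recall = len(counts) / len(valid)
    return kl, recall
```

For example, `diversity_metrics(["Paris", "Lyon", "Paris"], ["Paris", "Lyon", "Nice"])` yields a KL divergence of about 0.46 and a recall of 2/3, since one valid answer is never generated.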