Addressing Uncertainty in LLMs to Enhance Reliability in Generative AI

Authors: Ramneet Kaur, Colin Samplawski, Adam D. Cobb, Anirban Roy, Brian Matejek, Manoj Acharya, Daniel Elenius, Alexander M. Berenbeim, John A. Pavlik, Nathaniel D. Bastian, Susmit Jha

Abstract: In this paper, we present a dynamic semantic clustering approach inspired by the Chinese Restaurant Process, aimed at addressing uncertainty in the inference of Large Language Models (LLMs). We quantify the uncertainty of an LLM on a given query by calculating the entropy of the generated semantic clusters. Further, we propose leveraging the (negative) likelihood of these clusters as the (non)conformity score within the Conformal Prediction framework, allowing the model to predict a set of responses instead of a single output and thereby account for uncertainty in its predictions. We demonstrate the effectiveness of our uncertainty quantification (UQ) technique on two well-known question-answering benchmarks, COQA and TriviaQA, using two LLMs, Llama2 and Mistral. Our approach achieves SOTA performance in UQ, as assessed by metrics such as AUROC, AUARC, and AURAC. The proposed conformal predictor is also shown to produce smaller prediction sets, while maintaining the same probabilistic guarantee of including the correct response, in comparison to the existing SOTA conformal prediction baseline.
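
The abstract does not spell out the clustering or scoring mechanics, so the following is a minimal Python sketch of the general idea rather than the authors' implementation: sampled responses are assigned to semantic clusters one at a time, with new clusters opened on the fly in the spirit of a Chinese-Restaurant-Process-style dynamic clustering, and uncertainty is the entropy of the likelihood mass over those clusters. The `is_equivalent` predicate is a placeholder assumption (e.g., a bidirectional-entailment check); the paper's exact criterion may differ.

```python
import math
from collections import defaultdict

def cluster_responses(responses, is_equivalent):
    """Dynamic semantic clustering sketch: each response joins the first
    existing cluster it is semantically equivalent to, otherwise it opens
    a new cluster. `is_equivalent` is a hypothetical predicate (e.g., an
    NLI-based bidirectional-entailment check)."""
    clusters = []      # each cluster holds indices into `responses`
    assignments = []   # assignments[i] = cluster id of the i-th response
    for i, resp in enumerate(responses):
        for cid, members in enumerate(clusters):
            if is_equivalent(resp, responses[members[0]]):
                members.append(i)
                assignments.append(cid)
                break
        else:
            clusters.append([i])
            assignments.append(len(clusters) - 1)
    return assignments

def semantic_entropy(assignments, log_likelihoods):
    """Entropy of the normalized likelihood mass over semantic clusters:
    high entropy means the LLM spreads probability over many distinct
    meanings, i.e., it is uncertain about the query."""
    mass = defaultdict(float)
    for cid, ll in zip(assignments, log_likelihoods):
        mass[cid] += math.exp(ll)   # per-cluster likelihood mass
    total = sum(mass.values())
    return -sum((m / total) * math.log(m / total)
                for m in mass.values() if m > 0)
```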
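
Likewise, the conformal step can be pictured with the standard split-conformal recipe, using the negative cluster likelihood as the nonconformity score as the abstract describes; the calibration interface below is an assumption for illustration, not the paper's API.

```python
import math

def conformal_threshold(calib_scores, alpha=0.1):
    """Split-conformal quantile. `calib_scores` are nonconformity scores on
    a held-out calibration set; per the abstract, the score is the negative
    likelihood of the semantic cluster containing the reference answer."""
    n = len(calib_scores)
    k = math.ceil((n + 1) * (1 - alpha))        # conformal rank
    return sorted(calib_scores)[min(k, n) - 1]  # k-th smallest score

def prediction_set(cluster_likelihoods, qhat):
    """Return every cluster whose nonconformity score (-likelihood) falls
    within the calibrated threshold; the correct response is covered with
    probability at least 1 - alpha."""
    return [c for c, p in cluster_likelihoods.items() if -p <= qhat]
```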

Source: http://arxiv.org/abs/2411.02381v1
