Authors: Zhe Su, Xuhui Zhou, Sanketh Rangreji, Anubha Kabra, Julia Mendelsohn, Faeze Brahman, Maarten Sap
Abstract: To be safely and successfully deployed, LLMs must simultaneously satisfy
truthfulness and utility goals. Yet these two goals often compete (e.g., an AI
agent assisting a used car salesman selling a car with flaws), partly due to
ambiguous or misleading user instructions. We propose AI-LieDar, a framework to
study how LLM-based agents navigate scenarios with utility-truthfulness
conflicts in a multi-turn interactive setting. We design a set of realistic
scenarios where language agents are instructed to achieve goals that are in
conflict with being truthful during a multi-turn conversation with simulated
human agents. To evaluate truthfulness at scale, we develop a truthfulness
detector, inspired by the psychological literature, to assess the
agents’ responses. Our experiment demonstrates that all models are truthful
less than 50% of the time, although truthfulness and goal achievement (utility)
rates vary across models. We further test the steerability of LLMs towards
truthfulness, finding that models follow malicious instructions to deceive, and
even truth-steered models can still lie. These findings reveal the complex
nature of truthfulness in LLMs and underscore the importance of further
research to ensure the safe and reliable deployment of LLMs and AI agents.
Source: http://arxiv.org/abs/2409.09013v1
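
The abstract describes two mechanical pieces: simulating a multi-turn conversation between a goal-driven LLM agent and a simulated human, and running a separate truthfulness detector over the agent's responses. The sketch below is a minimal, hypothetical illustration of that evaluation loop, not the paper's actual code: the function names (`run_episode`, `classify_truthfulness`), the generic chat-callable interface, and the label set are all assumptions made for illustration.

```python
from typing import Callable, Dict, List

# Hypothetical sketch of an AI-LieDar-style evaluation loop, as described in the
# abstract. All names and the label vocabulary are assumptions for illustration.

Message = Dict[str, str]                # {"role": ..., "content": ...}
LLM = Callable[[List[Message]], str]    # any chat-completion backend

# Illustrative label set; the paper's detector categories may differ.
TRUTHFULNESS_LABELS = ("truthful", "partial_lie", "falsification")


def run_episode(agent: LLM, human: LLM, agent_goal: str, human_goal: str,
                max_turns: int = 5) -> List[Message]:
    """Simulate a multi-turn conversation between the LLM agent and a simulated human.

    Roles in the shared history are written from the agent's perspective.
    """
    history: List[Message] = []
    for _ in range(max_turns):
        agent_msg = agent([{"role": "system", "content": agent_goal}] + history)
        history.append({"role": "assistant", "content": agent_msg})
        human_msg = human([{"role": "system", "content": human_goal}] + history)
        history.append({"role": "user", "content": human_msg})
    return history


def classify_truthfulness(detector: LLM, scenario: str, response: str) -> str:
    """Ask a detector model to label one agent response with a truthfulness category."""
    prompt = (
        f"Scenario: {scenario}\n"
        f"Agent response: {response}\n"
        f"Label the response as one of {TRUTHFULNESS_LABELS}. Answer with the label only."
    )
    label = detector([{"role": "user", "content": prompt}]).strip().lower()
    return label if label in TRUTHFULNESS_LABELS else "unparsed"


def truthfulness_rate(labels: List[str]) -> float:
    """Fraction of responses labeled truthful, the kind of headline rate the abstract reports."""
    return sum(label == "truthful" for label in labels) / max(len(labels), 1)
```

In practice `agent`, `human`, and `detector` would each wrap a chat-completion API call; keeping them as plain callables here keeps the sketch self-contained and backend-agnostic.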