Authors: Parsa Hejabi, Elnaz Rahmati, Alireza S. Ziabari, Preni Golazizian, Jesse Thomason, Morteza Dehghani
Abstract: Large Language Models (LLMs) have shown impressive capabilities in complex
tasks and interactive environments, yet their creativity remains underexplored.
This paper introduces a simulation framework utilizing the game Balderdash to
evaluate both the creativity and logical reasoning of LLMs. In Balderdash,
players invent fictitious definitions for obscure terms to deceive opponents
while also trying to identify the correct definition. Our framework enables multiple LLM
agents to participate in this game, assessing their ability to produce
plausible definitions and strategize based on game rules and history. We
implemented a centralized game engine featuring various LLMs as participants
and a judge LLM to evaluate semantic equivalence. Through a series of
experiments, we analyzed the performance of different LLMs, examining metrics
such as True Definition Ratio, Deception Ratio, and Correct Guess Ratio. The
results provide insights into the creative and deceptive capabilities of LLMs,
highlighting their strengths and areas for improvement. Specifically, the study
reveals that infrequent vocabulary in LLMs’ input leads to poor reasoning about
game rules and historical context
(https://github.com/ParsaHejabi/Simulation-Framework-for-Multi-Agent-Balderdash).
Source: http://arxiv.org/abs/2411.10422v1
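
To make the reported metrics concrete, below is a minimal sketch of how per-player tallies like True Definition Ratio, Deception Ratio, and Correct Guess Ratio could be computed from game logs. The RoundRecord fields and the choice of denominators are illustrative assumptions, not the paper's exact definitions:

from dataclasses import dataclass

@dataclass
class RoundRecord:
    """One player's outcome for a single Balderdash round.

    Field semantics are assumptions for illustration; the paper's exact
    metric definitions may differ.
    """
    wrote_true_definition: bool  # judge LLM deemed the definition equivalent to the true one
    deceived_voters: int         # opponents who voted for this player's fake definition
    eligible_voters: int         # opponents who could have been deceived this round
    cast_guess: bool             # player actually voted on a definition this round
    guessed_correctly: bool      # player's vote picked the true definition

def true_definition_ratio(rounds: list[RoundRecord]) -> float:
    """Fraction of rounds where the player's definition matched the true one."""
    return sum(r.wrote_true_definition for r in rounds) / len(rounds) if rounds else 0.0

def deception_ratio(rounds: list[RoundRecord]) -> float:
    """Fraction of opponents' votes captured by the player's fake definitions."""
    eligible = sum(r.eligible_voters for r in rounds)
    return sum(r.deceived_voters for r in rounds) / eligible if eligible else 0.0

def correct_guess_ratio(rounds: list[RoundRecord]) -> float:
    """Fraction of the player's cast guesses that picked the true definition."""
    guesses = [r for r in rounds if r.cast_guess]
    return sum(r.guessed_correctly for r in guesses) / len(guesses) if guesses else 0.0

A centralized game engine like the one described would emit one such record per player per round; averaging the three ratios over many games then yields the comparison figures discussed in the abstract.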