Authors: Steve Huntsman, Jewell Thomas
Abstract: We devise an algorithm to generate sets of propositions that objectively
instantiate graphs that support coherence-driven inference. We then benchmark
the ability of large language models (LLMs) to reconstruct coherence graphs
from (a straightforward transformation of) propositions expressed in natural
language, with promising results from a single prompt to models optimized for
reasoning. Combining coherence-driven inference with consistency evaluations by
neural models may advance the state of the art in machine cognition.
Source: http://arxiv.org/abs/2502.13953v1
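
For context, coherence-driven inference is commonly formalized (following Thagard's coherence-as-constraint-satisfaction) as partitioning propositions into accepted and rejected sets so as to maximize the weight of satisfied constraints in a signed graph. The sketch below illustrates that formulation only; the propositions, weights, and brute-force search are illustrative assumptions and do not reproduce the paper's graph-generation algorithm or its LLM benchmark.

```python
# Minimal sketch: coherence maximization over a small signed constraint graph.
# Positive weights mean the endpoints cohere (should share a truth value);
# negative weights mean they conflict (should differ). All values are made up.
from itertools import product

propositions = ["p1", "p2", "p3", "p4"]
constraints = {
    ("p1", "p2"): 1.0,   # p1 and p2 cohere
    ("p1", "p3"): -1.0,  # p1 and p3 conflict
    ("p2", "p4"): 0.5,
    ("p3", "p4"): 1.0,
}

def coherence(assignment):
    """Total weight of satisfied constraints under a truth assignment."""
    total = 0.0
    for (u, v), w in constraints.items():
        same = assignment[u] == assignment[v]
        if (w > 0 and same) or (w < 0 and not same):
            total += abs(w)
    return total

# Brute-force search over accept/reject assignments; fine for toy graphs,
# though maximizing coherence is NP-hard in general.
best = max(
    (dict(zip(propositions, bits))
     for bits in product([True, False], repeat=len(propositions))),
    key=coherence,
)
print(best, coherence(best))
```

Reconstructing such a graph from natural-language statements of the propositions, as the abstract describes, amounts to recovering the signed edges above from text.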