Are Language Models Up to Sequential Optimization Problems? From Evaluation to a Hegelian-Inspired Enhancement

Authors: Soheil Abbasloo

Abstract: Large Language Models (LLMs) have demonstrated impressive capabilities across
numerous fields, presenting an opportunity to revolutionize optimization
problem-solving, a crucial, ubiquitous, and complex domain. This paper explores
the proficiency of LLMs in handling Sequential Optimization Problems (SOPs). We
introduce WorldGen, a dynamic framework for generating unseen SOPs with
controllable complexities, to evaluate LLM performance. Our initial
observations reveal that while LLMs perform well on simple SOPs, their
performance significantly degrades with increased complexity. Motivated by
this, we revisit philosophical hypotheses on reasoning to enhance LLM
performance. Inspired by the influential framework of Hegelian Dialectics, we
propose ACE, demonstrating how the performance of LLMs in SOP contexts can be
significantly improved without any retraining or further fine-tuning.

Source: http://arxiv.org/abs/2502.02573v1
