Check-Eval: A Checklist-based Approach for Evaluating Text Quality

Authors: Jayr Pereira, Roberto Lotufo

Abstract: Evaluating the quality of text generated by large language models (LLMs)
remains a significant challenge. Traditional metrics often fail to align well
with human judgments, particularly in tasks requiring creativity and nuance. In
this paper, we propose Check-Eval, a novel evaluation framework leveraging LLMs
to assess the quality of generated text through a checklist-based approach.
Check-Eval can be employed as both a reference-free and reference-dependent
evaluation method, providing a structured and interpretable assessment of text
quality. The framework consists of two main stages: checklist generation and
checklist evaluation. We validate Check-Eval on two benchmark datasets:
Portuguese Legal Semantic Textual Similarity and SummEval. Our results
demonstrate that Check-Eval achieves higher correlations with human judgments
compared to existing metrics, such as G-Eval and GPTScore, underscoring its
potential as a more reliable and effective evaluation framework for natural
language generation tasks. The code for our experiments is available at
https://anonymous.4open.science/r/check-eval-0DB4.

Source: http://arxiv.org/abs/2407.14467v1
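To make the two-stage idea in the abstract concrete, here is a minimal sketch of a checklist-based evaluation loop. It is an illustration under assumptions, not the authors' implementation: the prompts, the checklist format, the scoring rule (fraction of items satisfied), and the generic `llm` callable are all placeholders you would swap for the paper's actual prompts and model calls.

```python
# Minimal sketch of a two-stage, checklist-based evaluation flow
# (Stage 1: checklist generation; Stage 2: checklist evaluation).
# Prompts, checklist format, and the `llm` callable are illustrative assumptions.
from typing import Callable, List


def generate_checklist(llm: Callable[[str], str], source_text: str, n_items: int = 5) -> List[str]:
    """Stage 1: ask the LLM to extract key content items from a source or reference text."""
    prompt = (
        f"Read the text below and list {n_items} key points that a high-quality "
        f"version of it must contain. Return one point per line.\n\n{source_text}"
    )
    lines = llm(prompt).splitlines()
    return [line.lstrip("-* ").strip() for line in lines if line.strip()]


def evaluate_with_checklist(llm: Callable[[str], str], candidate_text: str, checklist: List[str]) -> float:
    """Stage 2: check each item against the candidate text; return the fraction satisfied."""
    if not checklist:
        return 0.0
    satisfied = 0
    for item in checklist:
        prompt = (
            "Does the candidate text below cover the following point? Answer yes or no.\n"
            f"Point: {item}\n\nCandidate text:\n{candidate_text}"
        )
        if llm(prompt).strip().lower().startswith("yes"):
            satisfied += 1
    return satisfied / len(checklist)
```

Any chat-completion function that maps a prompt string to a response string can be passed in as `llm`; running Stage 1 on the reference text yields a reference-dependent setup, while running it on the source document yields a reference-free one.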
