Authors: Zhuochun Li, Yuelyu Ji, Rui Meng, Daqing He
Abstract: Large language models (LLMs) have exhibited complex reasoning abilities by
generating rationales for questions and have demonstrated exceptional performance
on natural language processing (NLP) tasks. However, these reasoning capabilities
generally emerge in models with tens of billions of parameters, creating
significant computational challenges for real-world deployment. Recent research
has concentrated on improving open-source smaller models through knowledge
distillation (KD) from commercial LLMs. Nevertheless, most of these studies
rely solely on the responses of a single LLM as the gold rationale for
training. In this paper, we introduce a novel Mistake-Aware Peer-Review
Distillation (MAPD) approach: 1) Instead of merely obtaining gold rationales
from teachers, our method asks teachers to identify and explain the student’s
mistakes, providing customized instruction-learning data. 2) We design a
simulated peer-review process among teacher LLMs, which selects only the
generated rationales that score above an acceptance threshold. This reduces the
chance that a teacher guesses the correct answer with a flawed rationale,
improving the quality of the instructional data. Comprehensive experiments and
analyses on mathematical, commonsense,
and logical reasoning tasks demonstrate the effectiveness of our method.
Source: http://arxiv.org/abs/2410.03663v1
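As a rough, hypothetical sketch of the peer-review selection step described in the abstract (not the authors' implementation; the reviewer interface, aggregation by mean score, and the threshold value are all assumptions), each teacher acts as a reviewer scoring rationales, and only rationales clearing the acceptance threshold are kept as training data:

```python
# Hypothetical sketch of acceptance-threshold filtering among teacher LLMs.
# The scoring scheme and threshold are illustrative assumptions, not the
# paper's actual method.
from typing import Callable, List

def peer_review_filter(
    rationales: List[str],
    reviewers: List[Callable[[str], float]],  # each maps a rationale to a score in [0, 1]
    threshold: float = 0.7,                   # acceptance threshold (assumed value)
) -> List[str]:
    """Keep only rationales whose mean peer-review score meets the threshold."""
    accepted = []
    for rationale in rationales:
        mean_score = sum(review(rationale) for review in reviewers) / len(reviewers)
        if mean_score >= threshold:
            accepted.append(rationale)
    return accepted
```

Filtering on agreement among multiple reviewers, rather than trusting a single teacher's output, is what would reduce the chance of accepting a correct answer backed by a flawed rationale.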