Authors: Juanhui Li, Sreyashi Nag, Hui Liu, Xianfeng Tang, Sheikh Sarwar, Limeng Cui, Hansu Gu, Suhang Wang, Qi He, Jiliang Tang
Abstract: In real-world NLP applications, Large Language Models (LLMs) offer promising
solutions due to their extensive training on vast datasets. However, the large
size and high computational demands of LLMs limit their practicality in many
applications, especially when further fine-tuning is required. To address these
limitations, smaller models are typically preferred for deployment. However,
their training is hindered by the scarcity of labeled data. In contrast,
unlabeled data is often readily available and can be leveraged by using LLMs to
generate pseudo-labels for training smaller models. This enables the smaller
models (student) to acquire knowledge from LLMs (teacher) while reducing
computational costs. This process introduces challenges, such as potentially
noisy pseudo-labels. Selecting high-quality and informative data is therefore
critical to enhance model performance while improving the efficiency of data
utilization. To address this, we propose LLKD, which enables Learning with Less
computational resources and less data for Knowledge Distillation from LLMs.
LLKD is an adaptive sample selection method that incorporates signals from both
the teacher and student. Specifically, it prioritizes samples where the teacher
demonstrates high confidence in its labeling, indicating reliable labels, and
where the student exhibits a high information need, identifying challenging
samples that require further learning. Our comprehensive experiments show that
LLKD achieves superior performance across various datasets with higher data
efficiency.
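
The abstract describes the selection criterion only at a high level. The sketch below is one plausible instantiation, not the paper's exact formulation: it scores each pseudo-labeled sample by combining the teacher's confidence in its own pseudo-label with the student's predictive entropy as a proxy for information need, then keeps the top-scoring fraction. The function name, `top_frac` parameter, and the product-based scoring are illustrative assumptions.

```python
import torch

def select_samples(teacher_logits, student_logits, top_frac=0.5):
    """Hypothetical sketch of teacher-confidence / student-uncertainty sample selection.

    teacher_logits: (N, C) logits the teacher (LLM) assigns when producing pseudo-labels.
    student_logits: (N, C) logits from the current student model on the same samples.
    Returns the indices of the selected training samples.
    """
    teacher_probs = torch.softmax(teacher_logits, dim=-1)
    student_probs = torch.softmax(student_logits, dim=-1)

    # Teacher confidence: probability assigned to the teacher's own pseudo-label.
    teacher_conf = teacher_probs.max(dim=-1).values

    # Student information need: predictive entropy (higher = more left to learn).
    student_entropy = -(student_probs * student_probs.clamp_min(1e-12).log()).sum(dim=-1)

    # Prioritize samples that are both reliably labeled and informative for the student.
    score = teacher_conf * student_entropy
    k = max(1, int(top_frac * score.numel()))
    return torch.topk(score, k).indices
```

In this sketch the two signals play the roles named in the abstract: high teacher confidence filters out unreliable pseudo-labels, while high student entropy identifies challenging samples that still require learning, so training concentrates on fewer, more useful examples.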
Source: http://arxiv.org/abs/2411.08028v1