2D-DPO: Scaling Direct Preference Optimization with 2-Dimensional Supervision

Authors: Shilong Li, Yancheng He, Hui Huang, Xingyuan Bu, Jiaheng Liu, Hangyu Guo, Weixun Wang, Jihao Gu, Wenbo Su, Bo Zheng

Abstract: Recent advancements in Direct Preference Optimization (DPO) have
significantly enhanced the alignment of Large Language Models (LLMs) with human
preferences, owing to its simplicity and effectiveness. However, existing
methods typically optimize a scalar score or ranking reward, thereby
overlooking the multi-dimensional nature of human preferences. In this work, we
propose to extend the preference of DPO to two dimensions: segments and
aspects. We first introduce a 2D supervision dataset called HelpSteer-2D. For
the segment dimension, we divide the response into sentences and assign scores
to each segment. For the aspect dimension, we meticulously design several
criteria covering response quality rubrics. With the 2-dimensional signals
as feedback, we develop a 2D-DPO framework, decomposing the overall objective
into multi-segment and multi-aspect objectives. Extensive experiments on
popular benchmarks demonstrate that 2D-DPO performs better than methods that
optimize for scalar or 1-dimensional preferences.
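
The abstract does not spell out the loss formulation, but the idea of decomposing a DPO-style objective into per-segment, per-aspect terms can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the tensor shapes, the way aspect scores are collapsed into segment weights, and the names `two_d_dpo_loss_sketch` and `aspect_weights` are hypothetical, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logratio_chosen, logratio_rejected, beta=0.1):
    """Standard DPO: one scalar policy/reference log-ratio per response."""
    return -F.logsigmoid(beta * (logratio_chosen - logratio_rejected)).mean()

def two_d_dpo_loss_sketch(seg_logratios_chosen, seg_logratios_rejected,
                          seg_aspect_scores_chosen, seg_aspect_scores_rejected,
                          aspect_weights, beta=0.1):
    """Hypothetical 2D weighting (not the paper's exact objective):
    collapse per-segment, per-aspect supervision scores into per-segment
    weights, then aggregate weighted segment-level log-ratios into a
    response-level reward before the usual Bradley-Terry comparison.

    seg_logratios_*:      (batch, n_segments)            log pi/pi_ref per segment
    seg_aspect_scores_*:  (batch, n_segments, n_aspects)  2D supervision scores
    aspect_weights:       (n_aspects,)                    relative aspect importance
    """
    w_c = (seg_aspect_scores_chosen * aspect_weights).sum(-1)    # (batch, n_segments)
    w_r = (seg_aspect_scores_rejected * aspect_weights).sum(-1)
    r_c = (w_c * seg_logratios_chosen).sum(-1)    # weighted reward, chosen response
    r_r = (w_r * seg_logratios_rejected).sum(-1)  # weighted reward, rejected response
    return -F.logsigmoid(beta * (r_c - r_r)).mean()

# Toy usage with random tensors (batch=4, 5 segments, 3 aspects).
batch, n_seg, n_asp = 4, 5, 3
loss = two_d_dpo_loss_sketch(
    torch.randn(batch, n_seg), torch.randn(batch, n_seg),
    torch.rand(batch, n_seg, n_asp), torch.rand(batch, n_seg, n_asp),
    aspect_weights=torch.ones(n_asp) / n_asp,
)
print(loss.item())
```

The point of the sketch is only to show where the two extra dimensions enter: segment-level log-ratios replace a single sequence-level log-ratio, and aspect scores modulate each segment's contribution. The paper's actual decomposition into multi-segment and multi-aspect objectives may differ.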

Source: http://arxiv.org/abs/2410.19720v1
