Authors: Hoang Nguyen, Khyati Mahajan, Vikas Yadav, Philip S. Yu, Masoud Hashemi, Rishabh Maheshwary
Abstract: Multilingual LLMs have achieved remarkable benchmark performance, but we find
they continue to underperform on non-Latin script languages across contemporary
LLM families. This discrepancy arises because LLMs are pretrained on
orthographic text, which is dominated by Latin-script characters and obscures
the phonology shared with non-Latin-script languages. We propose leveraging phonemic
transcriptions as complementary signals to induce script-invariant
representations. Our study demonstrates that integrating phonemic signals
improves performance for both non-Latin and Latin script languages and, in
particular, substantially narrows the performance gap between the two.
Through detailed experiments, we show that phonemic and orthographic scripts
retrieve distinct examples for in-context learning (ICL). This motivates our
proposed Mixed-ICL retrieval strategy, which aggregates examples retrieved from
both representations and yields significant performance improvements over
random ICL retrieval for both Latin script languages (up to 12.6%) and
non-Latin script languages (up to 15.1%).
Source: http://arxiv.org/abs/2411.02398v1
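The abstract itself does not spell out the retrieval mechanics. As a rough illustration only, the minimal Python sketch below shows one plausible reading of Mixed-ICL retrieval: each candidate example carries two precomputed embeddings, one of its orthographic form and one of its phonemic (e.g., IPA) transcription, and the two ranked lists are interleaved into a single demonstration set. The function names, the cosine-similarity scoring, and the 50/50 interleaving are all illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def top_k(query_vec: np.ndarray, pool_vecs: np.ndarray, k: int) -> list[int]:
    """Return indices of the k pool vectors most cosine-similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    p = pool_vecs / np.linalg.norm(pool_vecs, axis=1, keepdims=True)
    scores = p @ q
    return np.argsort(-scores)[:k].tolist()

def mixed_icl_retrieve(ortho_query: np.ndarray,
                       phon_query: np.ndarray,
                       ortho_pool: np.ndarray,
                       phon_pool: np.ndarray,
                       k: int = 8) -> list[int]:
    """Hypothetical Mixed-ICL retrieval: rank the example pool separately by
    orthographic and phonemic similarity, then interleave the two ranked
    lists (de-duplicated) so both signals contribute demonstrations."""
    ortho_ids = top_k(ortho_query, ortho_pool, k)
    phon_ids = top_k(phon_query, phon_pool, k)
    picked, seen = [], set()
    for o, p in zip(ortho_ids, phon_ids):
        for idx in (o, p):
            if idx not in seen and len(picked) < k:
                seen.add(idx)
                picked.append(idx)
    return picked

# Usage with random stand-in embeddings (pool of 100 examples, dim 64):
rng = np.random.default_rng(0)
ortho_pool, phon_pool = rng.normal(size=(100, 64)), rng.normal(size=(100, 64))
demos = mixed_icl_retrieve(rng.normal(size=64), rng.normal(size=64),
                           ortho_pool, phon_pool, k=8)
print(demos)  # indices of the selected in-context examples
```

The interleaving step reflects the paper's observation that the two scripts retrieve distinct examples: merging the two ranked lists, rather than relying on either alone, is one simple way to aggregate both views.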