FANformer: Improving Large Language Models Through Effective Periodicity Modeling

Authors: Yihong Dong, Ge Li, Xue Jiang, Yongding Tao, Kechi Zhang, Hao Zhu, Huanyu Liu, Jiazheng Ding, Jia Li, Jinliang Deng, Hong Mei

Abstract: Periodicity is one of the most fundamental characteristics underlying structured knowledge acquisition and systematic cognition in human learning. However, the Transformer's weaknesses in modeling periodicity limit the learning efficiency of large language models (LLMs) built upon it and their ability to extract underlying principles from data. In this paper, we demonstrate that integrating effective periodicity modeling can improve the learning efficiency and performance of LLMs. We introduce FANformer, which integrates the Fourier Analysis Network (FAN) into the attention mechanism by modifying its feature projection process, enabling efficient periodicity modeling. Extensive experimental results on language modeling show that FANformer consistently outperforms Transformer when scaling up model size and training tokens, underscoring its superior learning efficiency. To further validate the effectiveness of FANformer, we pretrain a FANformer-1B on 1 trillion tokens. FANformer-1B exhibits marked improvements on downstream tasks compared to open-source LLMs with a similar number of parameters or training tokens. These results position FANformer as an effective and promising architecture for advancing LLMs.

Source: http://arxiv.org/abs/2502.21309v1
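
The abstract describes replacing part of the attention feature projection with a FAN-style map so that periodic structure is represented explicitly via sine and cosine components. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the paper's exact formulation: the class name `FANProjection`, the split ratio `p_ratio`, the GELU branch, and the way it is wired into Q/K projections are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class FANProjection(nn.Module):
    """Hypothetical FAN-style input projection for attention.

    A fraction of the output features is produced as cos/sin of a linear map
    (periodic components); the rest comes from an ordinary activated linear
    map. The split ratio `p_ratio` is an illustrative assumption.
    """

    def __init__(self, d_model: int, d_out: int, p_ratio: float = 0.25):
        super().__init__()
        d_p = int(d_out * p_ratio) // 2          # width shared by cos and sin branches
        d_g = d_out - 2 * d_p                    # width of the non-periodic branch
        self.w_p = nn.Linear(d_model, d_p, bias=False)  # periodic pre-activation
        self.w_g = nn.Linear(d_model, d_g)              # ordinary projection
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = self.w_p(x)
        g = self.act(self.w_g(x))
        # Concatenate periodic (cos, sin) and non-periodic features.
        return torch.cat([torch.cos(p), torch.sin(p), g], dim=-1)


# Sketch of how such a projection could replace standard Q/K projections
# inside a self-attention block (shapes only; no causal mask shown).
if __name__ == "__main__":
    d_model, n_heads = 512, 8
    proj_q = FANProjection(d_model, d_model)
    proj_k = FANProjection(d_model, d_model)
    proj_v = nn.Linear(d_model, d_model)

    x = torch.randn(2, 16, d_model)              # (batch, seq_len, d_model)
    q = proj_q(x).view(2, 16, n_heads, -1).transpose(1, 2)
    k = proj_k(x).view(2, 16, n_heads, -1).transpose(1, 2)
    v = proj_v(x).view(2, 16, n_heads, -1).transpose(1, 2)
    attn_out = torch.nn.functional.scaled_dot_product_attention(q, k, v)
    print(attn_out.shape)                        # torch.Size([2, 8, 16, 64])
```

The cos/sin branches give the model an inductive bias toward periodic functions of the input features, which is the property the abstract credits for FANformer's improved learning efficiency; the exact placement of the FAN computation within the attention block is specified in the paper itself.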
