ChamaleonLLM: Batch-Aware Dynamic Low-Rank Adaptation via Inference-Time Clusters

Authors: Kamer Ali Yuksel, Hassan Sawaf

Abstract: Recent advances in large language models (LLMs) have shown remarkable
performance across diverse tasks. However, these models are typically deployed
with fixed weights, which limits their ability to adapt dynamically to the
variability inherent in real-world data during inference. This paper introduces
ChamaleonLLM, a novel framework that enables inference-time adaptation of LLMs
by leveraging batch-aware clustering and on-the-fly generation of low-rank
updates. Unlike traditional fine-tuning approaches such as Low-Rank Adaptation
(LoRA) or methods that rely on a fixed set of pre-learned uniforms (changeable
masks), our method dynamically generates adaptive modifications to the decoder
weights based on the aggregated statistics of clustered batches. By
intelligently grouping similar inputs and computing context-aware low-rank
updates via a hyper-network, ChamaleonLLM achieves significant performance
gains, outperforming conventional LoRA methods while eliminating the overhead
of maintaining multiple expert models. Our experiments highlight the potential
of our approach to serve as a versatile and highly adaptive solution for
language model inference. ChamaleonLLM is open-sourced to ensure the
reproducibility of our experiments:
https://anonymous.4open.science/r/ChamaleonLLM/

Source: http://arxiv.org/abs/2502.04315v1
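To make the mechanism described in the abstract concrete, below is a minimal, hypothetical sketch of the two ingredients it names: grouping a serving batch into clusters of similar inputs, and using a hyper-network to turn each cluster's aggregated statistics into an on-the-fly low-rank update applied to a frozen decoder projection. The class and function names, the choice of k-means over mean-pooled hidden states, and the rank are illustrative assumptions on my part, not the authors' implementation; consult the open-sourced repository for the actual method.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans


class LowRankHyperNetwork(nn.Module):
    """Maps aggregated cluster statistics to a low-rank weight update B @ A (illustrative)."""

    def __init__(self, stats_dim: int, out_dim: int, in_dim: int, rank: int = 4):
        super().__init__()
        self.rank, self.out_dim, self.in_dim = rank, out_dim, in_dim
        self.to_a = nn.Linear(stats_dim, rank * in_dim)
        self.to_b = nn.Linear(stats_dim, out_dim * rank)

    def forward(self, cluster_stats: torch.Tensor) -> torch.Tensor:
        # cluster_stats: (stats_dim,) summary of one cluster of similar inputs
        a = self.to_a(cluster_stats).view(self.rank, self.in_dim)
        b = self.to_b(cluster_stats).view(self.out_dim, self.rank)
        return b @ a  # (out_dim, in_dim) low-rank delta for a decoder projection


def cluster_batch(hidden: torch.Tensor, n_clusters: int = 2) -> torch.Tensor:
    """Group sequences in a batch by their mean-pooled hidden states (assumed clustering scheme)."""
    pooled = hidden.mean(dim=1).detach().cpu().numpy()           # (batch, hidden_dim)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pooled)
    return torch.as_tensor(labels)


def batch_aware_forward(hidden, frozen_linear, hyper, n_clusters=2):
    """Apply a cluster-specific low-rank update; the stored weights stay frozen."""
    labels = cluster_batch(hidden, n_clusters)
    out = torch.empty(hidden.shape[0], hidden.shape[1], frozen_linear.out_features)
    for c in labels.unique():
        idx = labels == c
        stats = hidden[idx].mean(dim=(0, 1))                     # aggregated cluster statistics
        delta_w = hyper(stats)                                   # on-the-fly low-rank update
        w = frozen_linear.weight + delta_w                       # temporary adapted projection
        out[idx] = hidden[idx] @ w.T + frozen_linear.bias
    return out


# Usage: one serving batch of hidden states, adapted per cluster at inference time.
hidden_dim = 64
layer = nn.Linear(hidden_dim, hidden_dim)                        # stands in for a decoder projection
hyper = LowRankHyperNetwork(hidden_dim, hidden_dim, hidden_dim, rank=4)
x = torch.randn(8, 16, hidden_dim)                               # (batch, seq_len, hidden_dim)
y = batch_aware_forward(x, layer, hyper)
print(y.shape)  # torch.Size([8, 16, 64])
```

In this reading, the hyper-network replaces a fixed library of pre-learned adapters: because the update is a function of the batch's cluster statistics, no separate expert model has to be stored or selected per domain.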
