Efficient Fine-Tuning of Large Language Models via Low-Rank Spectral Adaptation
Fine-tuning large language models (LLMs) for downstream tasks remains prohibitively expensive, as full parameter updates require memory proportional to model size. Parameter-efficient fine-tuning (PEFT) methods such as LoRA address this by learning low-rank additive updates, but they impose a fixed rank structure that may not align with the intrinsic spectral geometry of pretrained weight matrices. We propose Low-Rank Spectral Adaptation (LoRSA), a PEFT method that leverages the singular value decomposition (SVD) of pretrained weights to identify and selectively adapt the most task-relevant spectral components. LoRSA decomposes each weight matrix $W = U \Sigma V^\top$ and learns lightweight perturbations $\Delta\sigma_i$ to a subset of singular values, along with low-rank rotations of the corresponding singular vectors. On the GLUE benchmark, LoRSA matches full fine-tuning performance on LLaMA-2 7B and 13B while training only 0.12% of parameters, a 3.2× reduction in trainable parameters relative to LoRA at matched task performance. We further demonstrate LoRSA's advantages in multi-task adaptation scenarios, where spectral components exhibit interpretable task specialization.
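The adaptation scheme described above can be sketched as a PyTorch module. This is a minimal illustrative sketch, not the authors' implementation: the class name `LoRSALinear`, the hyperparameters `k` (number of adapted spectral components) and `r` (rotation rank), and the additive low-rank parameterization of the singular-vector rotations are all assumptions made for clarity.

```python
import torch
import torch.nn as nn

class LoRSALinear(nn.Module):
    """Hypothetical sketch of a LoRSA-adapted linear layer.

    The frozen pretrained weight is decomposed once as W = U diag(S) V^T.
    Only (i) additive perturbations to the top-k singular values and
    (ii) low-rank generators that nudge the corresponding singular
    vectors are trained; everything else stays frozen.
    """

    def __init__(self, weight: torch.Tensor, k: int = 8, r: int = 2):
        super().__init__()
        # One-time SVD of the frozen pretrained weight (out x in).
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.register_buffer("U", U)
        self.register_buffer("S", S)
        self.register_buffer("Vh", Vh)
        self.k = k
        # Trainable perturbations Δσ_i for the top-k singular values.
        self.delta_sigma = nn.Parameter(torch.zeros(k))
        # Low-rank factors A @ B^T parameterize small additive updates to
        # the top-k singular vectors; A is zero-initialized so the layer
        # starts exactly at the pretrained weight.
        self.A_u = nn.Parameter(torch.zeros(U.shape[0], r))
        self.B_u = nn.Parameter(torch.randn(k, r) * 0.01)
        self.A_v = nn.Parameter(torch.zeros(Vh.shape[1], r))
        self.B_v = nn.Parameter(torch.randn(k, r) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k = self.k
        # Perturbed top-k spectrum; the tail spectrum is frozen.
        S_adapt = torch.cat([self.S[:k] + self.delta_sigma, self.S[k:]])
        # Low-rank updates to the top-k left/right singular vectors.
        U_adapt = torch.cat(
            [self.U[:, :k] + self.A_u @ self.B_u.T, self.U[:, k:]], dim=1)
        Vh_adapt = torch.cat(
            [self.Vh[:k] + self.B_v @ self.A_v.T, self.Vh[k:]], dim=0)
        # Reassemble the adapted weight and apply it.
        W_adapt = U_adapt @ torch.diag(S_adapt) @ Vh_adapt
        return x @ W_adapt.T
```

With the zero-initialized factors, the adapted layer reproduces the pretrained weight exactly at initialization, so fine-tuning starts from the base model's behavior; the trainable parameter count is $k + r(m + n + 2k)$ for an $m \times n$ weight, which is where the parameter savings over a fixed-rank LoRA update would come from.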


