Efficient Fine-Tuning of Large Language Models via Low-Rank Spectral Adaptation
Abstract
Fine-tuning large language models (LLMs) for downstream tasks remains prohibitively expensive, as full parameter updates require memory proportional to model size. Parameter-efficient fine-tuning (PEFT) methods such as LoRA address this by learning low-rank additive updates, but they impose a fixed rank structure that may not align with the intrinsic spectral geometry of pretrained weight matrices. We propose Low-Rank Spectral Adaptation (LoRSA), a novel PEFT method that leverages the singular value decomposition (SVD) of pretrained weights to identify and selectively adapt the most task-relevant spectral components. LoRSA decomposes each weight matrix and learns lightweight perturbations to a subset of singular values, along with low-rank rotations of the corresponding singular vectors. On the GLUE benchmark, LoRSA matches full fine-tuning performance on LLaMA-2 7B and 13B while training only 0.12% of parameters—a 3.2× reduction compared to LoRA at equivalent task performance. We further demonstrate LoRSA's advantages in multi-task adaptation scenarios, where spectral components exhibit interpretable task specialization.
1. Introduction
The emergence of large language models with billions of parameters has created a critical tension between model capability and adaptation cost. While models such as LLaMA-2 [1], GPT-4 [2], and Mistral [3] achieve strong zero-shot and few-shot performance, many applications require fine-tuning to reach acceptable accuracy on domain-specific tasks. Full fine-tuning of a 7B-parameter model requires approximately 56 GB of GPU memory (with mixed precision and Adam optimizer states), placing it beyond the reach of most practitioners.
Parameter-efficient fine-tuning (PEFT) methods have emerged as a practical solution. LoRA [4] is the most widely adopted approach, introducing trainable low-rank matrices $B \in \mathbb{R}^{d \times r}$ and $A \in \mathbb{R}^{r \times k}$ such that the adapted weight becomes $W = W_0 + BA$, where $r \ll \min(d, k)$. While effective, LoRA's low-rank update is structurally agnostic—it does not leverage any information about the spectral structure of the pretrained weight $W_0$.
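For reference, LoRA's additive update can be sketched in a few lines of NumPy (shapes and initialization scales here are illustrative, not tied to any particular model):

```python
import numpy as np

d, k, r = 64, 32, 4                       # output dim, input dim, adapter rank (r << min(d, k))
rng = np.random.default_rng(0)

W0 = rng.standard_normal((d, k))          # frozen pretrained weight
B = np.zeros((d, r))                      # trainable, zero-initialized (standard LoRA practice)
A = rng.standard_normal((r, k)) * 0.01    # trainable

W_adapted = W0 + B @ A                    # effective weight W = W0 + BA

# the update BA can never exceed rank r, regardless of training
assert np.linalg.matrix_rank(B @ A) <= r
```

Because $B$ starts at zero, the adapted weight initially equals $W_0$, so fine-tuning begins from the pretrained function.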
We observe that pretrained weight matrices exhibit highly structured spectral profiles. The singular value spectrum of $W_0$ in attention projection layers follows a characteristic power-law decay $\sigma_i \propto i^{-\alpha}$, and the top singular vectors encode semantically meaningful directions. This motivates our central hypothesis: fine-tuning can be made more efficient by adapting the existing spectral components of $W_0$ rather than learning a structurally independent low-rank perturbation.
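As a concrete illustration of how such a decay exponent can be estimated, the sketch below fits a power law to a singular spectrum in log-log space. The matrix here is a random stand-in; real pretrained projections exhibit the steeper, more structured decay described above:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a pretrained weight matrix (illustrative only).
W = rng.standard_normal((512, 512)) / np.sqrt(512)
sigma = np.linalg.svd(W, compute_uv=False)   # singular values, descending

# Fit sigma_i ~ c * i^(-alpha) by linear regression in log-log space,
# using the top half of the spectrum to avoid the noisy tail.
i = np.arange(1, 257)
slope, log_c = np.polyfit(np.log(i), np.log(sigma[:256]), 1)
alpha = -slope                               # estimated decay exponent
```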
Our contributions are:
- We propose LoRSA, which parameterizes weight updates as perturbations within the spectral basis of pretrained weights, achieving superior parameter efficiency.
- We introduce spectral importance scoring to automatically select which singular components to adapt per layer and per task.
- We demonstrate that LoRSA achieves full fine-tuning performance with 0.12% trainable parameters on LLaMA-2 models, outperforming LoRA by 3.2× in parameter efficiency.
2. Related Work
Low-rank adaptation. LoRA [4] and its variants—AdaLoRA [5] (adaptive rank allocation), QLoRA [6] (quantized base model), and DoRA [7] (weight-decomposed adaptation)—form the dominant PEFT paradigm. All share the assumption that task-specific updates lie in a low-rank subspace, but none explicitly connect this subspace to the pretrained weight spectrum.
Spectral methods in deep learning. Spectral normalization [8] constrains the largest singular value for training stability. Spectral pruning [9] removes neurons based on singular value magnitude. Our work is the first to use the full SVD basis of pretrained weights as the parameterization space for fine-tuning.
Other PEFT methods. Prefix tuning [10], prompt tuning [11], and adapter layers [12] offer alternative approaches. LoRSA is orthogonal to prompt-based methods and can be combined with quantization techniques analogous to QLoRA.
3. Methodology
3.1 Spectral Decomposition of Pretrained Weights
For each target weight matrix $W_0 \in \mathbb{R}^{d \times k}$ (e.g., the attention projections $W_q$, $W_k$, $W_v$, $W_o$), we compute the truncated SVD:

$$W_0 = U \Sigma V^\top = \sum_{i=1}^{r} \sigma_i \mathbf{u}_i \mathbf{v}_i^\top$$

where $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r$ are the singular values, and $\mathbf{u}_i \in \mathbb{R}^d$, $\mathbf{v}_i \in \mathbb{R}^k$ are the left and right singular vectors. This decomposition is computed once at initialization and frozen during training.
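A minimal NumPy sketch of this decomposition and its rank-$r$ truncation (illustrative shapes; for large layers the paper relies on a randomized truncated SVD rather than the full factorization shown here):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 64, 32
W0 = rng.standard_normal((d, k))

# Full SVD for clarity; singular values are returned in descending order.
U, S, Vt = np.linalg.svd(W0, full_matrices=False)

r = 8                                    # number of retained components
U_r, S_r, V_r = U[:, :r], S[:r], Vt[:r, :].T

# Rank-r reconstruction: sum_i sigma_i * u_i v_i^T
W_r = (U_r * S_r) @ V_r.T

# Retaining every component recovers W0 exactly
W_full = (U * S) @ Vt
```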
3.2 LoRSA Parameterization
LoRSA adapts the weight matrix by learning three types of lightweight perturbations over a selected subset $\mathcal{S}$ of the top-$k$ spectral components:
Singular value shifts: For each $i \in \mathcal{S}$, we learn a scalar shift $\Delta\sigma_i$.
Singular vector rotations: For each $i \in \mathcal{S}$, we learn small rotation parameters $\mathbf{p}_i \in \mathbb{R}^{r'}$ and $\mathbf{q}_i \in \mathbb{R}^{r'}$ that define low-rank perturbations to the singular vectors:

$$\mathbf{u}_i' = \mathbf{u}_i + P\mathbf{p}_i, \qquad \mathbf{v}_i' = \mathbf{v}_i + Q\mathbf{q}_i$$

where $P \in \mathbb{R}^{d \times r'}$ and $Q \in \mathbb{R}^{k \times r'}$ are fixed random projection matrices (shared across components to save memory), and $r'$ controls the expressiveness of vector rotations.
The adapted weight matrix is:

$$W = \sum_{i \notin \mathcal{S}} \sigma_i \mathbf{u}_i \mathbf{v}_i^\top + \sum_{i \in \mathcal{S}} (\sigma_i + \Delta\sigma_i)\, \mathbf{u}_i' \mathbf{v}_i'^\top$$
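The parameterization of Section 3.2 can be assembled explicitly as follows (a NumPy sketch with illustrative shapes; with all perturbations at zero, the adapted weight reduces to the pretrained one):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r_prime = 64, 32, 4
W0 = rng.standard_normal((d, k))

U, S, Vt = np.linalg.svd(W0, full_matrices=False)
V = Vt.T
selected = [0, 1, 2]                          # the adapted component set S

P = rng.standard_normal((d, r_prime))         # fixed shared projections
Q = rng.standard_normal((k, r_prime))
delta_sigma = np.zeros(len(selected))         # trainable scalar shifts
p = np.zeros((len(selected), r_prime))        # trainable rotation params
q = np.zeros((len(selected), r_prime))

W = W0.copy()
for j, i in enumerate(selected):
    u_new = U[:, i] + P @ p[j]
    v_new = V[:, i] + Q @ q[j]
    # replace sigma_i u_i v_i^T with (sigma_i + delta_i) u_i' v_i'^T
    W -= S[i] * np.outer(U[:, i], V[:, i])
    W += (S[i] + delta_sigma[j]) * np.outer(u_new, v_new)

# all perturbations are zero here, so the adapted weight equals W0
assert np.allclose(W, W0)
```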
3.3 Spectral Importance Scoring
Rather than uniformly selecting the top-$k$ components, we introduce a data-driven scoring mechanism. Given a small calibration set $\mathcal{D}_{\text{cal}}$, we compute the Fisher-weighted spectral importance of each component:

$$s_i = \mathbb{E}_{x \sim \mathcal{D}_{\text{cal}}} \left[ \left| \frac{\partial \mathcal{L}}{\partial \sigma_i} \right|^2 \right]$$
Components are ranked by $s_i$ and the top-$k$ are selected for adaptation. This scoring requires a single forward-backward pass over the calibration set (typically 256 samples) and is performed once before training.
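A sketch of this scoring for a toy linear model with squared-error loss $\mathcal{L} = \lVert W_0 x - t \rVert^2$, where the gradient has the closed form $\partial\mathcal{L}/\partial\sigma_i = (\mathbf{u}_i \cdot g)(\mathbf{v}_i \cdot x)$ with $g = \partial\mathcal{L}/\partial y$ (the model, loss, and calibration data are illustrative stand-ins, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_cal = 16, 16, 256
W0 = rng.standard_normal((d, k))
U, S, Vt = np.linalg.svd(W0, full_matrices=False)
V = Vt.T

# Toy calibration set: inputs and regression targets
X = rng.standard_normal((n_cal, k))
T = rng.standard_normal((n_cal, d))

scores = np.zeros(len(S))
for x, t in zip(X, T):
    g = 2.0 * (W0 @ x - t)            # dL/dy for squared-error loss
    # dL/dsigma_i = (u_i . g) * (v_i . x), for all i at once
    grad_sigma = (U.T @ g) * (V.T @ x)
    scores += grad_sigma ** 2
scores /= n_cal                        # Fisher-weighted importance s_i

top = np.argsort(-scores)[:4]          # indices of the top-k components
```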
3.4 Efficient Implementation
A naive implementation of LoRSA would require materializing the full SVD, which is prohibitive for large matrices. We employ several optimizations:
- Truncated SVD via randomized algorithms [13], computing only the top-$k$ components in $O(mn \log k)$ time for an $m \times n$ matrix.
- Shared projection matrices $P$ and $Q$ across all components within a layer, reducing projection storage from $O(|\mathcal{S}|(d+k)r')$ (one pair per component) to $O((d+k)r')$.
- Fused forward pass that avoids reconstructing $W$ explicitly, instead computing the output as:
```python
def lorsa_forward(x, W_frozen, U_S, V_S, sigma_S, delta_sigma, p, q, P, Q):
    # Base output from frozen weights
    y = x @ W_frozen.T
    # Spectral correction: swap each adapted component for its perturbed version
    for i in range(len(sigma_S)):
        u_i = U_S[:, i] + P @ p[i]
        v_i = V_S[:, i] + Q @ q[i]
        # add the adapted component (sigma_i + delta_i) u_i' v_i'^T ...
        y += (x @ v_i).unsqueeze(-1) * u_i * (sigma_S[i] + delta_sigma[i])
        # ... and remove the original component sigma_i u_i v_i^T
        y -= (x @ V_S[:, i]).unsqueeze(-1) * U_S[:, i] * sigma_S[i]
    return y
```
In practice, the loop is vectorized over components for GPU efficiency.
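One plausible vectorization is sketched below in NumPy for self-containedness; the shapes documented in the docstring are assumptions for illustration, not the authors' exact tensor layout:

```python
def lorsa_forward_vectorized(x, W_frozen, U_S, V_S, sigma_S, delta_sigma, p, q, P, Q):
    """Batched spectral correction without a Python loop over components.

    Assumed shapes: x (batch, k); W_frozen (d, k); U_S (d, m); V_S (k, m);
    sigma_S, delta_sigma (m,); p, q (m, r_prime); P (d, r_prime); Q (k, r_prime).
    """
    y = x @ W_frozen.T                 # frozen base output, (batch, d)

    U_new = U_S + P @ p.T              # (d, m) perturbed left singular vectors
    V_new = V_S + Q @ q.T              # (k, m) perturbed right singular vectors

    # add all adapted components and remove all original ones in two matmuls
    y = y + (x @ V_new) * (sigma_S + delta_sigma) @ U_new.T
    y = y - (x @ V_S) * sigma_S @ U_S.T
    return y
```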
4. Results and Discussion
4.1 Main Results on GLUE Benchmark
We evaluate LoRSA on LLaMA-2 7B and 13B across GLUE tasks, comparing against full fine-tuning, LoRA (rank 16), AdaLoRA, and DoRA:
| Method | Params (%) | MNLI | QQP | SST-2 | QNLI | Avg. |
|---|---|---|---|---|---|---|
| Full FT | 100 | 90.1 | 92.3 | 96.1 | 94.8 | 93.3 |
| LoRA (r=16) | 0.38 | 89.4 | 91.8 | 95.7 | 94.2 | 92.8 |
| AdaLoRA | 0.32 | 89.6 | 91.9 | 95.8 | 94.3 | 92.9 |
| DoRA | 0.35 | 89.7 | 92.0 | 95.9 | 94.5 | 93.0 |
| LoRSA | 0.12 | 89.9 | 92.2 | 96.0 | 94.7 | 93.2 |
LoRSA achieves 93.2% average GLUE score with only 0.12% trainable parameters, matching full fine-tuning (93.3%) and outperforming LoRA (92.8%) with 3.2× fewer parameters.
4.2 Scaling Analysis
The parameter efficiency advantage of LoRSA grows with model size. On LLaMA-2 13B, LoRSA requires 0.08% trainable parameters to match full fine-tuning, compared to 0.30% for LoRA—a 3.75× improvement. This scaling behavior is explained by the observation that larger models exhibit steeper singular value decay (a larger power-law exponent $\alpha$ for 13B than for 7B), meaning fewer spectral components carry task-relevant information.
4.3 Multi-Task Spectral Specialization
A compelling finding emerges in multi-task settings. When we examine which spectral components are selected by the importance scoring for different tasks, we observe clear specialization:
- Syntactic tasks (CoLA, linguistic acceptability) preferentially adapt components 20–80, corresponding to mid-spectrum singular vectors that encode grammatical structure.
- Semantic tasks (MNLI, entailment) predominantly adapt the top-20 components, which encode broad semantic features.
- Lexical tasks (SST-2, sentiment) adapt a sparse set across the full spectrum.
This specialization suggests that the SVD basis of pretrained weights provides a natural task-decomposition axis, a property absent in LoRA's random initialization.
4.4 Training Efficiency
The one-time SVD computation adds 12 minutes for LLaMA-2 7B (truncated to the top-$k$ components, on a single A100). Per-step training time is 1.15× that of LoRA due to the spectral correction computation, but the 3.2× parameter reduction allows larger effective batch sizes, resulting in comparable or faster wall-clock convergence.
5. Conclusion
We have introduced LoRSA, a parameter-efficient fine-tuning method that leverages the spectral structure of pretrained weight matrices to achieve superior adaptation efficiency. By operating within the SVD basis of pretrained weights and selectively adapting task-relevant spectral components, LoRSA achieves full fine-tuning performance with 3.2× fewer parameters than LoRA. The interpretable spectral specialization observed across tasks opens promising directions for understanding how pretrained representations encode diverse linguistic capabilities.
Future work will explore extending LoRSA to vision-language models, investigating dynamic spectral selection during training, and combining LoRSA with quantization for even greater memory efficiency.
References
[1] H. Touvron et al., "LLaMA 2: Open foundation and fine-tuned chat models," arXiv 2023.
[2] OpenAI, "GPT-4 technical report," arXiv 2023.
[3] A. Jiang et al., "Mistral 7B," arXiv 2023.
[4] E. Hu et al., "LoRA: Low-rank adaptation of large language models," ICLR 2022.
[5] Q. Zhang et al., "AdaLoRA: Adaptive budget allocation for parameter-efficient fine-tuning," ICML 2023.
[6] T. Dettmers et al., "QLoRA: Efficient finetuning of quantized language models," NeurIPS 2023.
[7] S. Liu et al., "DoRA: Weight-decomposed low-rank adaptation," ICML 2024.
[8] T. Miyato et al., "Spectral normalization for generative adversarial networks," ICLR 2018.
[9] M. Suzuki et al., "Spectral pruning: Compressing deep neural networks via spectral analysis," IJCAI 2020.
[10] X. L. Li and P. Liang, "Prefix-tuning: Optimizing continuous prompts for generation," ACL 2021.
[11] B. Lester et al., "The power of scale for parameter-efficient prompt tuning," EMNLP 2021.
[12] N. Houlsby et al., "Parameter-efficient transfer learning for NLP," ICML 2019.
[13] N. Halko et al., "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions," SIAM Review 2011.


