Not sure if I buy it. First, computing the SVD to obtain U, Σ, V is computationally expensive, so it would only work if we are not finetuning very big models.
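(For a sense of scale, here's a rough sketch with torch and made-up 4096×4096 shapes, one decomposition per weight matrix, and hundreds of such matrices in an 8B model; this is just my cost illustration, not anything from the paper.)

```python
import torch

# Illustrative size for a single transformer projection matrix
# in a 7B/8B-class model; not taken from the paper.
W = torch.randn(4096, 4096)

# Full SVD costs roughly O(m * n * min(m, n)) flops per matrix,
# and the model has hundreds of matrices like this one.
U, S, Vh = torch.linalg.svd(W, full_matrices=False)

# A low-rank method would only keep the top-k singular directions.
k = 13
U_k, S_k, Vh_k = U[:, :k], S[:k], Vh[:k, :]
```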
But my real concern is with the results. The "13 parameters" looks like bait, because it is one result from finetuning a model on a very simple math benchmark, grade-school math (GSM8K), which is already thoroughly saturated on every model. Besides, it seems to happen only for the Qwen family of models... It looks like GSM8K was part of Qwen's training set, and this TinyLoRA finetuning just made the last adjustments needed to fully surface that overtraining.
>One theory is that the knowledge required to solve the task is already stored in the parameters of the model, and only the style has to change for task success
>In particular, learning to generate longer outputs may be possible in few parameters
Reminded me of: https://arxiv.org/abs/2501.19393
>we develop budget forcing to control test-time compute by forcefully terminating the model’s thinking process or lengthening it by appending “Wait” multiple times to the model’s generation when it tries to end. This can lead the model to double-check its answer, often fixing incorrect reasoning steps
Maybe, indeed, the model simply learns to insert the EOS token (or similar) later, and the capability is already in the base model.
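The "Wait" trick is easy to sketch with a plain generation loop. This is my reading of budget forcing, not the authors' code; the checkpoint name is a placeholder and any chat-style causal LM would do:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint name, used only for illustration.
tok = AutoTokenizer.from_pretrained("some-org/reasoning-model")
model = AutoModelForCausalLM.from_pretrained("some-org/reasoning-model")

def budget_forced_generate(prompt: str, extensions: int = 2,
                           max_new_tokens: int = 512) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(extensions + 1):
        ids = model.generate(ids, max_new_tokens=max_new_tokens,
                             eos_token_id=tok.eos_token_id)
        # If the model used its whole budget without stopping, we're done.
        if ids[0, -1].item() != tok.eos_token_id:
            break
        # Otherwise it tried to stop: drop the EOS and append "Wait"
        # so the next pass keeps thinking (the budget-forcing idea from s1).
        wait_ids = tok(" Wait,", return_tensors="pt",
                       add_special_tokens=False).input_ids
        ids = torch.cat([ids[:, :-1], wait_ids], dim=-1)
    return tok.decode(ids[0], skip_special_tokens=True)
```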
It's not "13 parameters to reason", they just rotated the full 8B parameter space in 13 dimensions and found a rotation that was still able to reason.
Depending on the latent structure, there may be a rotation that is perfect for some specific problem, but you still have to search for it, and it's not guaranteed to exist.
But it's a nice step towards LLM parameter-space interpretability.
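If it works the way intrinsic-dimension / random-subspace finetuning does, the mechanics are roughly this (a sketch with made-up shapes, not the paper's implementation): only a d-dimensional vector is trained, and a fixed projection maps it into the full weight update.

```python
import torch
import torch.nn as nn

class SubspaceLinear(nn.Module):
    """Frozen linear layer whose update lives in a d-dim random subspace."""
    def __init__(self, base: nn.Linear, d: int = 13):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        out_f, in_f = base.weight.shape
        # Fixed random basis: d directions in the full weight space.
        self.register_buffer(
            "basis", torch.randn(d, out_f, in_f) / (out_f * in_f) ** 0.5)
        # The only trainable parameters: d scalar coefficients.
        self.theta = nn.Parameter(torch.zeros(d))

    def forward(self, x):
        delta = torch.einsum("d,doi->oi", self.theta, self.basis)
        return nn.functional.linear(x, self.base.weight + delta,
                                    self.base.bias)

layer = SubspaceLinear(nn.Linear(256, 256, bias=False), d=13)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # -> 13
```

A real implementation would need a structured or shared projection (a dense d×(out·in) buffer per layer would never fit in memory for an 8B model), but the point stands: the search space is 13 numbers, and whatever "reasoning" boost you find is a direction that already exists in the frozen weights.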
This is interesting and all, but “LoRA” is painfully close to “LoRa” (which is related to radio networking, not AI) when just scanning a list of topics. We’re never going to beat the Shannon limit on acronyms and initialisms.
I’m glad the rest of the anchor text gave some context.
Such low dimensionality of the LoRA vector must surely amount to a close-to-linear modification of the KV calculation. That seems to imply that what we call "reasoning" is latent within the model. Pretty clearly I didn't read the paper; I'm sure the authors address this.
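For what it's worth, the additive form of a LoRA update makes the "close-to-linear" part almost tautological: the K/V projections just gain a low-rank linear term. A sketch with illustrative shapes and rank 1:

```python
import torch

d_model, r = 4096, 1                 # illustrative shapes, rank-1 for clarity
W_k = torch.randn(d_model, d_model)  # frozen pretrained key projection
A = torch.randn(r, d_model) * 0.01   # LoRA "down" matrix
B = torch.zeros(d_model, r)          # LoRA "up" matrix, initialised to zero

x = torch.randn(8, d_model)          # a batch of hidden states

# Finetuned keys = original keys plus a strictly linear, low-rank correction;
# no new nonlinearity enters the attention computation.
k = x @ (W_k + B @ A).T
```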
If 13 parameters can unlock better reasoning, then we will not be "training" models, we'll be steering them. Most of the capability is already there.
The real unlock isn’t TinyLoRA, it’s what this implies: ultra-cheap, continuous adaptation. The bottleneck shifts from compute to having a good reward signal.