TinyLoRA – Learning to Reason in 13 Parameters (arxiv.org)

by sorenjan 45 comments 234 points

[−] dollo_7 45d ago
Not sure if I buy it. First, computing the SVD to obtain U, Σ, V is computationally expensive, so it would only work if we are not finetuning very big models.

But my real concern is the results. The "13 parameters" figure looks like bait, because it is one result from finetuning a model on a very simple math benchmark, grade-school math (GSM8K), which is already heavily saturated on every model. Besides, it seems to happen only for the Qwen family of models... It looks like GSM8K was part of Qwen's training set, and this TinyLoRA finetuning just made the final adjustments to perfectly reflect that overtraining.
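
To put a rough number on the SVD point, this is the cost per weight matrix; the 4096 hidden size is just my guess at an 8B-class model, not a figure from the paper:

  import time
  import torch

  hidden = 4096
  W = torch.randn(hidden, hidden)  # one attention/MLP projection matrix

  t0 = time.time()
  U, S, Vh = torch.linalg.svd(W, full_matrices=False)
  print(f"SVD of one {hidden}x{hidden} matrix took {time.time() - t0:.1f}s")

Multiply that by every projection in every layer and it's a real preprocessing cost, though still a one-off compared to the finetuning itself.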

[−] kashifr 44d ago
You can try out TinyLoRA in PEFT main now: https://huggingface.co/docs/peft/main/en/package_reference/t...
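
For anyone who hasn't used PEFT, the workflow is roughly the sketch below; I'm using the existing LoraConfig with a tiny rank as a stand-in, so check the linked docs for the actual TinyLoRA config name and arguments:

  from transformers import AutoModelForCausalLM
  from peft import LoraConfig, get_peft_model

  model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
  config = LoraConfig(r=1, target_modules=["q_proj", "v_proj"])  # stand-in, not the TinyLoRA config
  model = get_peft_model(model, config)
  model.print_trainable_parameters()  # reports how few parameters are trainable
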
[−] MASNeo 45d ago
Is it an April Fools' publication?
[−] kgeist 45d ago

>One theory is that the knowledge required to solve the task is already stored in the parameters of the model, and only the style has to change for task success

>In particular, learning to generate longer outputs may be possible in few parameters

Reminded me of: https://arxiv.org/abs/2501.19393

>we develop budget forcing to control test-time compute by forcefully terminating the model’s thinking process or lengthening it by appending “Wait” multiple times to the model’s generation when it tries to end. This can lead the model to double-check its answer, often fixing incorrect reasoning steps

Maybe, indeed, the model simply learns to insert the EOS token (or similar) later, and the capability is already in the base model.
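
A rough sketch of what that budget forcing looks like with vanilla transformers; the model name and the 256-token budget are placeholders, not from either paper:

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  name = "Qwen/Qwen2.5-0.5B-Instruct"
  tok = AutoTokenizer.from_pretrained(name)
  model = AutoModelForCausalLM.from_pretrained(name)

  ids = tok("Q: What is 17 * 24? Think step by step.\nA:", return_tensors="pt").input_ids
  prompt_len = ids.shape[1]
  budget = 256  # minimum number of "thinking" tokens before the model may stop

  while True:
      out = model.generate(ids, max_new_tokens=512, do_sample=False)
      ended = out[0, -1].item() == tok.eos_token_id
      if not ended or out.shape[1] - prompt_len >= budget:
          ids = out
          break
      # Model tried to stop early: drop the EOS and nudge it to keep going.
      wait = tok(" Wait,", return_tensors="pt", add_special_tokens=False).input_ids
      ids = torch.cat([out[:, :-1], wait], dim=1)

  print(tok.decode(ids[0], skip_special_tokens=True))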

[−] measurablefunc 45d ago
With four parameters I can fit an elephant, and with five I can make him wiggle his trunk so there is still room for improvement.
[−] 5555watch 44d ago
It's not "13 parameters to reason", they just rotated the full 8B parameter space in 13 dimensions and found a rotation that was still able to reason.

Depending on the latent structure, it's possible there is a nice rotation that would be perfect for one specific problem, but you still have to search for it, and it's not guaranteed to exist.

But it's a nice step towards LLM parameter-space interpretability.
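
A toy version of that "search in a 13-dimensional slice of the full parameter space" idea, in the spirit of the intrinsic-dimension papers; this is my reading, not necessarily TinyLoRA's actual parameterisation:

  import torch

  full_dim = 10_000   # stand-in for a flattened block of the 8B parameters
  tiny_dim = 13       # the only trainable degrees of freedom

  theta0 = torch.randn(full_dim)                         # frozen pretrained weights
  P = torch.randn(full_dim, tiny_dim) / full_dim ** 0.5  # fixed projection defining the subspace
  z = torch.zeros(tiny_dim, requires_grad=True)          # the 13 trained parameters

  def effective_weights():
      # Every reachable update lives in the subspace spanned by P's columns.
      return theta0 + P @ z

  opt = torch.optim.Adam([z], lr=1e-2)
  # ...compute a loss on effective_weights(), backprop, and only z ever moves.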

[−] cestith 44d ago
This is interesting and all, but “LoRA” is painfully close to “LoRa” (which is related to radio networking, not AI) when just scanning a list of topics. We’re never going to beat the Shannon limit on acronyms and initialisms.

I’m glad the rest of the anchor text gave some context.

[−] Xx_crazy420_xX 45d ago
If I understand it correctly, the analogy could be:

Let's say we have an expert low-level programmer and we try to teach him algebra. Either we:

  - (SFT): give him an algebra book with new nomenclature, definitions, and syntax
  - (RL): let him learn algebra using C syntax
[−] vasco 44d ago
Most data in the training set of most reasoning models is crap, I guess.
[−] a-t-c-g 45d ago
The quality of custom models trained on proper reasoning datasets[0], even at small parameter counts (3-7B is the sweet spot), is incredible now

[0]: cartesien.io or Salesforce's WebscaleRL

[−] ashater 44d ago
Likely the reasoning is already part of the original model. It's well known that it is not possible to get a 1B-parameter model to reason, even with RL.
[−] nekusar 44d ago
Can a model that small dynamically grow? In other words, can it train itself AS it progresses through the network?
[−] matt123456789 45d ago
Such low dimensionality of the LoRA vector must surely result in a close-to-linear modification to the KV calculation. This seems to me to imply that what we call "reasoning" is latent within the model. Pretty clear I didn't read the paper; I'm sure the authors address this.
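
As a toy example of what I mean: a rank-1 LoRA delta on the key projection is just a linear correction along one direction (dimensions are placeholders):

  import torch

  hidden = 4096
  W_k = torch.randn(hidden, hidden)   # frozen key projection weight
  A = 0.01 * torch.randn(1, hidden)   # rank-1 LoRA factors
  B = 0.01 * torch.randn(hidden, 1)

  x = torch.randn(8, hidden)          # a batch of hidden states
  k_base = x @ W_k.T
  k_lora = x @ (W_k + B @ A).T        # equals k_base + (x @ A.T) @ B.T
  # No new nonlinearity appears; keys just get shifted along one direction.
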
[−] sachaa 45d ago
If 13 parameters can unlock better reasoning, then we will not be "training" models, we'll be steering them. Most of the capability is already there.

The real unlock isn’t TinyLoRA, it’s what this implies: ultra-cheap, continuous adaptation. The bottleneck shifts from compute to having a good reward signal.