Show HN: I built a tiny LLM to demystify how language models work (github.com)

by armanified 134 comments 915 points


[−] fg137 39d ago
How does this compare to Andrej Karpathy's microgpt (https://karpathy.github.io/2026/02/12/microgpt/) or minGPT (https://github.com/karpathy/minGPT)?
[−] armanified 39d ago
I haven't compared it with anything yet. Thanks for the suggestion; I'll look into these.
[−] BrokenCogs 39d ago
Who cares how it compares? It's not a product, it's a cool project
[−] tantalor 39d ago
Even cool projects can learn from others. Maybe they missed something that could benefit the project, or made some interesting technical choice that gives a different result.

For the readers/learners, it's useful to understand the differences so we know what details matter, and which are just stylistic choices.

This isn't art; it's science & engineering.

[−] stronglikedan 39d ago

> Who cares how it compares

Well, the person who asked the question, for one. I'm sure they're not the only one. Best not to assume why people are asking though, so you can save time by not writing irrelevant comments.

[−] layer8 39d ago
Microgpt isn’t a product either. Are you saying that differences between cool projects aren’t worth thinking and conversing about?
[−] thomasfl 39d ago
Is there some documentation for this? The code is probably the simplest (Not So) Large Language Model implementation possible, but it is not straightforward to understand for developers who aren't familiar with multi-head attention, ReLU FFNs, LayerNorm, and learned positional embeddings.

This project shares similarities with Minix. Minix is still used at universities as an educational tool for teaching operating system design, and it is the operating system that taught Linus Torvalds how to design (monolithic) operating systems. Similarly, having students add capabilities to GuppyLM would be a good way to learn LLM design.
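
For readers who want a mental model of those components (multi-head attention, a ReLU FFN, LayerNorm, learned positional embeddings), one block of such a model typically looks like this. This is a minimal PyTorch sketch of the standard recipe, not GuppyLM's actual code, and the dimensions are made up:

    import torch
    import torch.nn as nn

    class TinyBlock(nn.Module):
        """One pre-norm transformer block: multi-head attention + ReLU FFN."""
        def __init__(self, d_model=128, n_heads=4):
            super().__init__()
            self.ln1 = nn.LayerNorm(d_model)
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ln2 = nn.LayerNorm(d_model)
            self.ffn = nn.Sequential(            # position-wise ReLU feed-forward
                nn.Linear(d_model, 4 * d_model),
                nn.ReLU(),
                nn.Linear(4 * d_model, d_model),
            )

        def forward(self, x):
            T = x.size(1)
            # Causal mask: True = "may not attend", so a token only sees the past.
            mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), 1)
            h = self.ln1(x)
            a, _ = self.attn(h, h, h, attn_mask=mask)
            x = x + a                            # residual connection
            x = x + self.ffn(self.ln2(x))        # residual connection
            return x

    # Learned positional embeddings: tokens and positions each get a trained vector.
    vocab_size, max_len, d_model = 1000, 64, 128
    tok_emb = nn.Embedding(vocab_size, d_model)
    pos_emb = nn.Embedding(max_len, d_model)
    ids = torch.randint(0, vocab_size, (1, 16))
    x = tok_emb(ids) + pos_emb(torch.arange(ids.size(1)))
    print(TinyBlock()(x).shape)                  # torch.Size([1, 16, 128])

Stack a few of those blocks, put a linear layer on top that maps back to vocabulary logits, and you have essentially the whole architecture.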

[−] totetsu 39d ago
https://bbycroft.net/llm has a 3D visualization of a tiny example LLM's layers that does a very good job of showing what is going on (https://news.ycombinator.com/item?id=38505211)
[−] ordinarily 40d ago
It's genuinely a great introduction to LLMs. I built my own a while ago based on Milton's Paradise Lost: https://www.wvrk.org/works/milton
[−] mudkipdev 39d ago
This is probably a consequence of the training data being fully lowercase:

    You> hello
    Guppy> hi. did you bring micro pellets.

    You> HELLO
    Guppy> i don't know what it means but it's mine.
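
If that's the cause, the usual workaround is to normalize user input the same way the training data was normalized before tokenizing. The cause is a guess on my part, but the fix would be roughly one line:

    def normalize(text: str) -> str:
        # The model never saw uppercase during training,
        # so fold user input to lowercase to match that distribution.
        return text.lower().strip()

    print(normalize("HELLO"))  # -> "hello", which the model does understand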

[−] algoth1 39d ago
This really makes me wonder whether it would be feasible to train an LLM exclusively on Toki Pona (https://en.wikipedia.org/wiki/Toki_Pona)
[−] neurworlds 39d ago
Cool project. I'm working on something where multiple LLM agents share a world and interact with each other autonomously. One thing that surprised me is how much the "world" matters: same model, same prompt, but put it in a system with resource constraints, other agents, and persistent memory, and the behavior changes dramatically. It made me realize we spend too much time optimizing the model and not enough time thinking about the environment it operates in.
[−] SilentM68 40d ago
Would have been funny if it were called "DORY", given the fish's memory-recall issues and LLMs' similar ones :)
[−] zwaps 39d ago
I like the idea; it's just that the examples are reproduced from the training data set.

How does it handle unknown queries?

[−] brcmthrowaway 39d ago
Why are there so many dead comments from new accounts?
[−] AndrewKemendo 40d ago
I love these kinds of educational implementations.

I want to really praise the (unintentional?) nod to Nagel: by limiting capabilities to the representation of a fish, the user is immediately able to understand the constraints. It can only talk like a fish because it's very simple.

Especially compared to public models, that's a really simple correspondence to grok intuitively (small LLM → only as verbose as a fish, larger LLM → more verbose), so kudos to the author for making it simple and fun.

[−] bblb 39d ago
Would it be possible to train an LLM only through chat messages, without any other data or input?

If Guppy doesn't know regular expressions yet, could I teach them to it just by conversation? It's a fish, so it probably wouldn't understand much of my blabbing, but it would be interesting to give it a try.

Or is there some hard architectural limit in current LLMs, such that training needs to be done offline and with a fairly large training set?
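
Concretely, I imagine something like taking a gradient step on each logged chat turn. A naive sketch, assuming a `model`, `tokenizer`, and `optimizer` like the ones any small GPT repo has; whether a 9M model actually learns anything from so few tokens is exactly my question:

    import torch
    import torch.nn.functional as F

    def learn_from_chat(model, tokenizer, optimizer, user_msg, reply):
        """One gradient step on a single logged conversation turn."""
        ids = torch.tensor([tokenizer.encode(f"You> {user_msg}\nGuppy> {reply}")])
        logits = model(ids[:, :-1])               # predict each next token
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),  # (tokens, vocab)
            ids[:, 1:].reshape(-1),               # targets: input shifted by one
        )
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()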

[−] cbdevidal 40d ago

> you're my favorite big shape. my mouth are happy when you're here.

Laughed loudly :-D

[−] CaseFlatline 39d ago
I tried to find how the synthetic data was created (looking through the repo) but didn't find it. Maybe I'm missing it, but I'd love to see the prompts and the process for that part of the training data generation!
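
My guess is the usual persona-prompting pattern: ask a bigger model for short in-character exchanges and dump them to a file. Something like this sketch, where the `ask_big_model` helper and the prompt are pure inventions on my part, not the repo's actual pipeline:

    import json, random

    TOPICS = ["pellets", "the filter", "bubbles", "the big shape outside the glass"]

    def make_example(ask_big_model):
        """One synthetic training pair from a larger 'teacher' model."""
        prompt = (
            "You are a pet guppy with a tiny vocabulary and a 3-second memory. "
            f"Write a short user message about {random.choice(TOPICS)} and your "
            'one-line reply, as JSON: {"user": "...", "guppy": "..."}. Lowercase only.'
        )
        return json.loads(ask_big_model(prompt))  # hypothetical teacher-model call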
[−] rpdaiml 39d ago
This is a nice idea. A tiny implementation can be way more useful for learning than yet another wrapper around a big model, especially if it keeps the training loop and inference path small enough to read end to end.
[−] jzer0cool 39d ago
Does this work by just training once with next-token prediction? I want to understand better how it creates fluent sentences, if anyone can provide insights.
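
My current mental model is that it's trained once on next-token prediction, and fluency then falls out of repeatedly sampling from that same predictor at inference time, roughly like this (a generic sketch, not this repo's code):

    import torch

    @torch.no_grad()
    def generate(model, ids, max_new=50, temperature=0.8):
        """Feed the model's own output back in, one token at a time."""
        for _ in range(max_new):
            logits = model(ids)[:, -1, :]          # next-token distribution
            probs = torch.softmax(logits / temperature, dim=-1)
            nxt = torch.multinomial(probs, num_samples=1)
            ids = torch.cat([ids, nxt], dim=1)     # append and repeat
        return ids

Is that all there is to it?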
[−] BiraIgnacio 39d ago
Nice work and thanks for sharing it!

Now, I ask: have LLMs been demystified for you? :D

I am still impressed by how much (for the most part) trivial statistics and a lot of compute can do.

[−] kaipereira 39d ago
This is so cool! I'd love to see a write-up on how you made it and what you referenced, because designing neural networks always feels like a maze ;)
[−] ankitsanghi 39d ago
Love it! I think it's important to understand how the tools we use (and will only increasingly use) work under the hood.
[−] NyxVox 40d ago
Hm, I could actually run the training on my GPU; that's one of the things I want to try next. Maybe with something a bit more complex than a fish :)
[−] Leomuck 39d ago
Wow, that is such a cool idea! And honestly very much needed. LLMs seem to be this black box nobody understands, so I love every effort to make the whole thing less mysterious. I will definitely have a go at dabbling with this, goldfish LLM or not :)
[−] Duplicake 39d ago
I love this! Seems like it can't understand uppercase letters though
[−] ergocoder 39d ago
It's just so amazing that 5 years ago it would have been extremely hard to build a conversational bot like this.

But right now people do it as a hobby, and the thing can run on a laptop.

This is just so wild.

[−] gnarlouse 40d ago
I... wow, you made an LLM that can actually tell jokes?
[−] kubrador 39d ago
How's it handle longer context, or does it start hallucinating after like 2 sentences? Curious where the ceiling is before the 9M params give out.
[−] bharat1010 39d ago
This is such a smart way to demystify LLMs. I really like that GuppyLM makes the whole pipeline feel approachable. Great work!
[−] drincanngao 39d ago
I was going to suggest implementing RoPE to fix the context limit, but realized that would make it anatomically incorrect.
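
For anyone who does want to try it: RoPE drops the learned position table and instead rotates each query/key channel pair by a position-dependent angle, which is also why it tends to extrapolate past the trained context better. A generic sketch, not tailored to this repo:

    import torch

    def rope(x, base=10000.0):
        """Rotary positional embedding: rotate channel pairs of q or k
        by position-scaled angles. x: (batch, seq, dim), dim even."""
        b, t, d = x.shape
        half = d // 2
        freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)
        angles = torch.arange(t, dtype=torch.float32)[:, None] * freqs  # (t, half)
        cos, sin = angles.cos(), angles.sin()
        x1, x2 = x[..., :half], x[..., half:]
        return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

    # Apply to queries and keys before attention, in place of the position table.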
[−] fawabc 39d ago
how did you generate the synthetic data?
[−] rclkrtrzckr 39d ago
I could fork it and create TrumpLM. Not a big leap, I suppose.
[−] amelius 39d ago

> A 9M model can't conditionally follow instructions

How many parameters would you need for that?

[−] EmilioOldenziel 39d ago
Building it yourself is always the best test of whether you really understand how something works.
[−] ananandreas 39d ago
Great and simple way to bridge the gap between LLMs and users coming into the field!
[−] ben8bit 39d ago
This is really great! I've been wanting to do something similar for a while.
[−] nobodyandproud 39d ago
Thanks. Tinkering is how I learn and this is what I’ve been looking for.
[−] jbethune 39d ago
Forked. Very cool. I appreciate the simplicity and documentation.
[−] nullbyte808 40d ago
Adorable! Maybe a personality that speaks in emojis?
[−] monksy 39d ago
Is this a reference from the Bobiverse?
[−] cpldcpu 39d ago
Love it! Great idea for the dataset.
[−] winter_blue 39d ago
This is amazing work. Thank you.
[−] gdzie-jest-sol 39d ago
* How do I create the dataset? I downloaded it, but it is compressed in a binary format.

* How do I train it? In the cloud, or on my own dev machine?

* How do I create a GGUF?