Show HN: MacMind – A transformer neural network in HyperCard on a 1989 Macintosh (github.com)

by hammer32 42 comments 159 points

[−] edwin 28d ago
There’s something quietly impressive about getting modern AI ideas to run on old hardware (like OP's project, or running LLM inference on Windows 3.1 machines). It’s easy to think all the progress is just bigger GPUs and more compute, but moments like that remind you how much of it is clever math and algorithms squeezing signal out of limited resources. Feels closer to the spirit of early computing than the current “throw hardware at it” narrative.
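
To make the "clever math" point concrete, here's a toy sketch of one such trick, 8-bit quantization. This has nothing to do with OP's actual code, it's just the flavor of the precision-for-memory trade that makes constrained machines viable:

    # Toy sketch (not from OP's repo): symmetric int8 quantization.
    import numpy as np

    def quantize_int8(w):
        scale = np.abs(w).max() / 127.0          # one scale per tensor
        q = np.round(w / scale).astype(np.int8)  # 4x smaller than float32
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(64, 64).astype(np.float32)
    q, s = quantize_int8(w)
    print("max abs error:", np.abs(w - dequantize(q, s)).max())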
[−] wdbm 28d ago
There is an absolutely beautiful rendering of the Mona Lisa encoded at some point in the digits of pi. If you know the position, it's really easy to plot the image.

But first you have to find that position.
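
The "easy if you know the position" half really is easy. Here's a toy sketch (the offset and image size are made up, nobody has actually located the Mona Lisa) that reads digits of pi at an offset and treats each pair of digits as a grayscale pixel:

    # Toy sketch: decode digits of pi at a (hypothetical) offset into pixels.
    from mpmath import mp

    W, H, OFFSET = 8, 8, 1000            # made-up image size and offset
    mp.dps = OFFSET + 2 * W * H + 10     # compute enough decimal digits
    digits = mp.nstr(mp.pi, mp.dps).replace(".", "")

    # Two decimal digits (00-99) -> one grayscale value (0-255).
    pixels = [int(digits[OFFSET + 2*i : OFFSET + 2*i + 2]) * 255 // 99
              for i in range(W * H)]
    for r in range(H):
        print(" ".join(f"{p:3d}" for p in pixels[r*W:(r+1)*W]))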

[−] zoky 28d ago
This is simultaneously false, and true but largely meaningless. If you mean the Mona Lisa is somehow directly encoded somewhere in pi, then of course it’s not. It’s just a number.

If you mean that when you feed the numbers starting with some offset of pi into a specific algorithm you will get a rendering of the Mona Lisa, then yes, but so what? Allow me to introduce you to the PiMona algorithm. I won’t bother you with the implementation details, but it takes exactly one integer parameter. If it’s 3, it produces a beautiful rendering of the Mona Lisa. Anything else and it generates random garbage. Turns out, it’s really easy to find where the Mona Lisa is encoded in pi! It’s right there at the start.
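
A sketch of the gag, to make the point explicit: all of the information lives in the decoder and none of it in pi (the image bytes below are a stand-in, obviously):

    # PiMona: the Mona Lisa iff n == 3. Pi starts with 3, so it's
    # "right there at the start." All the data is in the decoder.
    import os

    MONA_LISA_BYTES = b"...stand-in for an actual JPEG of the Mona Lisa..."

    def pimona(n: int) -> bytes:
        if n == 3:
            return MONA_LISA_BYTES
        return os.urandom(len(MONA_LISA_BYTES))  # anything else: garbage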

But let’s say you meant that the digits of pi at some offset, when encoded properly and fed into any algorithm that is theoretically capable of generating the Mona Lisa, will cause that algorithm to do so. Then sure. But that’s also true of random noise, and says more about the algorithm and the nature of random numbers than about the Mona Lisa somehow being encoded into the fabric of the universe (which I’m sure isn’t what you meant, but I’m just saying there’s nothing really special about pi in that regard, except that as far as we know, it continues infinitely).

[−] mfro 27d ago
I think they're going for more of a 'monkeys will eventually produce Shakespeare' thing here, which you can apply the same argument to: the monkeys don't know English, don't know what they're typing, and theoretically English could devolve to a state where every sentence would qualify as Shakespeare, right? Your argument just seems unnecessarily pedantic.
[−] hammer32 28d ago
Exactly. Working in a constrained environment invites innovation.
[−] Unbeliever69 28d ago
Now do this on a Casio Watch next :)
[−] hyperhello 29d ago
Hello, if there are no XCMDs, it should work adequately in HyperCard Simulator. I am only on my phone, but I took a minute to import it.

https://hcsimulator.com/imports/MacMind---Trained-69E0132C

[−] watersb 28d ago
This is great!

I first studied back-propagation in 1988, at the same time I fell in love with HyperCard programming. This project helps me recall this elegant weapon for a more civilized age.
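
For anyone who wants the one-screen refresher (a minimal sketch, not OP's code): gradient descent with backprop on a single linear neuron, y_hat = w*x + b, with squared loss L = (y_hat - y)**2 / 2:

    def sgd_step(w, b, x, y, lr=0.1):
        y_hat = w * x + b
        err = y_hat - y        # dL/dy_hat
        w -= lr * err * x      # chain rule: dL/dw = dL/dy_hat * x
        b -= lr * err          # dL/db = dL/dy_hat * 1
        return w, b

    w, b = 0.0, 0.0
    for _ in range(100):
        w, b = sgd_step(w, b, x=2.0, y=5.0)
    print(w, b)                # converges so that w*2 + b ~ 5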

[−] nxobject 28d ago
I love this. From reading the nuts-and-bolts "parameters" (haha) of your implementation, I get the impression that the fundamental limit is, well, using a 32-bit platform to address the sizes of data that usually need at least 48 bits!
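
Back-of-the-envelope version of that limit (the model sizes below are illustrative guesses, not numbers from the repo):

    # Rough arithmetic: address bits needed just to index a model's weights.
    import math

    for params, bytes_per in [(60_000, 4),   # something HyperCard-scale, fp32
                              (7e9, 2),      # a modern 7B model, fp16
                              (175e9, 2)]:   # GPT-3-scale, fp16
        size = params * bytes_per            # total bytes of weights
        bits = math.ceil(math.log2(size))    # address bits to span them
        print(f"{params:>15,.0f} params -> {size / 2**30:8.2f} GiB -> {bits} bits")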
[−] gcanyon 29d ago
It's strange to think how modern concepts are only modern because no one thought of them back then. This feels (to me) like germ theory being transferred back to the ancient Greeks.
[−] tty456 28d ago
Where's the code for the actual HyperCard stack and for building the .img? I only see the Python validator in the repo.
[−] rcarmo 28d ago
Neat. Looks like I found my new benchmark for my ARM64 JIT for BasiliskII :)

(still debugging it, but getting closer to full coverage)

[−] DetroitThrow 29d ago
This is very cool. Any more demos of inference output?
[−] immanuwell 28d ago
The architecture of MacMind looks pretty interesting.