A communist Apple II and fourteen years of not knowing what you're testing (llama.gs)

by major4x 117 comments 229 points


[−] dzink 30d ago
If you grow up in that environment (restricted by government in some areas and liberated in others) you’ll start seeing systems very differently. The game plays differently with different rules.

They had Pravets computers and robotic arms in rural classrooms in places that didn't have traffic lights or English teachers. Chess and math competitions, too, were accessible everywhere. Those were all self-feedback mechanisms that are cheap but allow an interested individual to iterate infinitely to reach advanced levels. Even if only a tiny subset of any population has the cognitive surplus to meddle with programming and math, they had easy access to fulfill that and be found. In the US, schools enable that with sports, which monetize as entertainment venues. In the Eastern Bloc they had that with brains. As soon as the stupid restrictions on travel were lifted, the brains knew to leave the other restrictions behind and emigrate to places that reward cognitive surplus.

Intelligence builds with reinforcement learning on context that gives you feedback - which makes it easy to iterate on. If you're not making those types of games/tools/systems available to kids, you are going to lose that generation to more attention-grabbing stuff like YouTube or sports.

[−] summa_tech 30d ago
This was a fascinating article, because I've seen so many results of the Eastern Bloc reverse-engineering efforts basically founder into obscurity. Many of these re-created (sometimes with minor variations, or quite novel and ingenious implementation choices) computers were made in small series, but could not compete against illegal imports, and in any case would only be briefly popular in their local university town.

So it's cool to see that Bulgaria managed to muster enough government interest to force a cohesive strategy for the whole country. It sounds like it paid off.

Also, after googling for Правец, I have found out that I can in fact read Bulgarian, which was quite surprising to me.

[−] grishka 30d ago
There was also a Soviet Apple II clone, called Агат (Agat). But iirc, for whatever reason, they couldn't clone the 6502, so they built one out of logic ICs, and the thing was too slow to run Apple II software unmodified. Later there was an expansion card with a real imported 6502 that added full compatibility.

The USSR did make its own Z80 and 8080 clones later, though. There existed an IBM PC compatible built completely out of Soviet-made parts. A lot of fully localized ZX Spectrum clones as well, of varying degrees of homebrewness. Those were very popular in the late 80s and early 90s from what I gather, but I'm too young to have used one myself.

[−] Schlagbohrer 30d ago
"The circuits do not hallucinate" - cue me thinking about all the times I've had to diagnose analog bugs in hardware.
[−] ipeev 30d ago
I think my reaction is mostly puzzlement. I can see a sensible point or several in the article, but I was not always sure how big a point the author was trying to make.

At the narrower level, it seems to be saying that benchmarks are easier to interpret when you know what they really are. That makes sense. If a circuit is known to be a multiplier, that tells you more than if it is just called c6288.

That is also why I thought of Python benchmarks. In something like pyperformance, names such as json_loads, python_startup, or nbody already tell you something about the workload. So when you compare results, you have a better sense of what kind of task a system is doing well on. But so what? It is just benchmarks. They don't guarantee anything about anything anyway.
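To make the point concrete, here is a minimal sketch (not from the article, just an illustration) of a pyperformance-style microbenchmark: the name `json_loads` alone tells you the workload is JSON deserialization, so a good score here says something specific about parsing, not about, say, numeric code.

```python
import json
import timeit

# Build a payload roughly in the spirit of a "json_loads" benchmark:
# the name itself documents what is being measured.
payload = json.dumps(
    {"users": [{"id": i, "name": f"user{i}"} for i in range(100)]}
)

# Time repeated deserialization of the same document.
seconds = timeit.timeit(lambda: json.loads(payload), number=1000)
print(f"json_loads-style microbenchmark: {seconds:.4f}s for 1000 iterations")
```

A result from this tells you how fast the interpreter parses JSON strings and nothing else, which is exactly the "you know what it really is" property the comment describes, as opposed to an opaque label like c6288.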

What made it harder for me to follow was that this fairly modest point is wrapped in a lot of jokes and swipes about AI and corporate AI language. Some of that is funny, but it also made me less sure what the main point was supposed to be. Was the article really about benchmark interpretation, or was that mostly a vehicle for making a broader point about AI hype and technical understanding?

So I do think there is a real point in there. I just found it slightly hard to separate that point from the style and the jokes.

[−] paulnsorensen 30d ago
Thank you for this. Very enjoyable read. It reminds me of a post I saw discussing how code review is similar to reviewing mathematical proofs -- you have to know it like you wrote it, and now with AI, this is less and less the case. I definitely miss the days when I knew something was bulletproof because I wrote it and tested it thoroughly.
[−] teo_zero 30d ago
The current AI approach to technology is masterfully described as

> to build something enormous, declare it transformative, and hope nobody asks what it actually computes.

And the corollary:

> [such] approach requires billions of dollars and produces systems that cannot explain themselves.

[−] PunchyHamster 30d ago

> The methodology was elegantly practical: Hayes assigned each circuit to a PhD student. Cheap labour, and almost certainly cheaper per insight than an LLM.

We need an API for that! Grad students are paying to be there, so it can't get cheaper than that.

[−] Terretta 30d ago
Refreshing interconnection of topics.

My first home "Apple //e" was in Africa, using a Korean improvement on the Apple ][+ that added lowercase, more memory, and more. It was lugged in by the Korean ambassador's son and remained a better performer than the Apple //e once that came out.

Though I've since gone looking for the history, I've never found out what that Korean machine was.

Next came Apple IIc which ran circles around it. Then Fat Mac, SE, SE/30… but that's a different story.

[−] varjag 30d ago
Interesting point about the grassroots origin. When I read the accounts in the early 1990s, it was alleged that a whole factory of a minor computer manufacturer in the USA was bought and relocated to Pravets. Including the furniture, broom closets, and trashcans. Though I'm not sure if the computer designs were also allegedly part of the deal.

Also that Bulgaria invested in some semiconductor manufacturer in Singapore to maintain uninterrupted access to components.

[−] yehat 30d ago
I know the predominant thinking is that "communism" existed somewhere, but in fact it didn't. It was an ideology developed in the West and brutally imported into the East. Why do I say that, and why does the difference matter? Because there was no "communist" thinking behind what motivated ordinary people to do any of this. There was something deeper that manifested in a way many people mistake for "communist" thinking. And that's natural, because people's thinking is not the same in the West as in the East, and even less so in the Far East. OK, enough on that; everyone has the right to call it whatever they want, I'm just offering some clues that can help avoid the cliché.

My journey also started when I first saw an IMKO-2 in 1984. Then there was a popular magazine for young "engineers" called "МЛАД КОНСТРУКТОР" ("Young Constructor", full archive here: https://drive.google.com/uc?export=download&id=0Bw941VGG9Tjc... ) that started publishing a BASIC course. So I learned virtually, and even wrote programs on paper, before my first actual contact with a computer, which happened a year later on the couple of PRAVETZ-82s newly acquired by my school.
[−] Schlagbohrer 30d ago
Ok I read this article but I don't see how it connects to LLMs as he puts forth at the start. Is he saying we have not yet reached the phase where someone will reverse engineer LLMs to figure out what is actually going on with these strange new beasts?
[−] flomo 30d ago

> A limitation, but also an engineering decision that had a certain brutal elegance: you get one alphabet at a time, comrade, and you will type in capitals.

Same decision with the capitalist American Apple II: only upper-case letters unless you added an additional board.

[−] mghackerlady 30d ago
Eastern Bloc computers are so fascinating to me. The Soviets were pretty bad at it, but the Germans were doing pretty well, all things considered. It's also fascinating how much they liked DEC, at least architecturally.
[−] Angostura 30d ago
I seldom enjoy a piece of writing this much. Loved it. Kudos to the author
[−] Minecodes 30d ago
Very interesting article. It's always fascinating for me, as someone from Gen Z, to see how computers worked in the beginning, and the stories behind them.
[−] exmadscientist 30d ago

> For fourteen years... nobody — nobody — knew what these circuits were actually supposed to compute.

This is utterly, utterly mind-boggling to me. Seriously, no one had any curiosity to look into these things for 14 years? I mean, I guess someone was bored somewhere along the way, but usually that sort of thing becomes an open secret... not here, I guess.

[−] gizajob 30d ago
Great writing and an enjoyable tale well told.
[−] bogantech 30d ago

> More interesting is what happened next: an institute in Sofia was reportedly tasked with decapping the ICs, lifting the netlists under a microscope, and reproducing them with socialist lithography

Given that (afaik) the Apple II logic would all have been jelly-bean logic or otherwise off-the-shelf parts, did they really reverse engineer ICs?

[−] DonHopkins 30d ago
"Reverse Over Engineering" is a great thing:

Will Wright on Designing User Interfaces to Simulation Games (1996) (2023 Video Update):

https://donhopkins.medium.com/designing-user-interfaces-to-s...

https://news.ycombinator.com/item?id=22062590

DonHopkins on Jan 16, 2020 | parent | context | favorite | on: Reverse engineering course

Will Wright defined the "Simulator Effect" as how game players imagine a simulation is vastly more detailed, deep, rich, and complex than it actually is: a magical misunderstanding that you shouldn’t talk them out of. He designs games to run on two computers at once: the electronic one on the player’s desk, running his shallow tame simulation, and the biological one in the player’s head, running their deep wild imagination. "Reverse Over-Engineering" is a desirable outcome of the Simulator Effect: what game players (and game developers trying to clone the game) do when they use their imagination to extrapolate how a game works, and totally overestimate how much work and modeling the simulator is actually doing, because they filled in the gaps with their imagination and preconceptions and assumptions, instead of realizing how many simplifications and shortcuts and illusions it actually used.

https://www.masterclass.com/classes/will-wright-teaches-game...

>There's a name for what Wright calls "the simulator effect" in the video: apophenia. There's a good GDC video on YouTube where Tynan Sylvester (the creator of RimWorld) talks about using this effect in game design.

https://en.wikipedia.org/wiki/Apophenia

>Apophenia (/æpoʊˈfiːniə/) is the tendency to mistakenly perceive connections and meaning between unrelated things. The term (German: Apophänie) was coined by psychiatrist Klaus Conrad in his 1958 publication on the beginning stages of schizophrenia. He defined it as "unmotivated seeing of connections [accompanied by] a specific feeling of abnormal meaningfulness". He described the early stages of delusional thought as self-referential, over-interpretations of actual sensory perceptions, as opposed to hallucinations.

RimWorld: Contrarian, Ridiculous, and Impossible Game Design Methods

https://www.youtube.com/watch?v=VdqhHKjepiE

5 game design tips from Sims creator Will Wright

https://www.youtube.com/watch?v=scS3f_YSYO0

>Tip 5: On world building. As you know by now, Will's approach to creating games is all about building a coherent and compelling player experience. His games are comprised of layered systems that engage players creatively, and lead to personalized, some times unexpected outcomes. In these types of games, players will often assume that the underlying system is smarter than it actually is. This happens because there's a strong mental model in place, guiding the game design, and enhancing the player's ability to imagine a coherent context that explains all the myriad details and dynamics happening within that game experience.

>Now let's apply this to your project: What mental model are you building, and what story are you causing to unfold between your player's ears? And how does the feature set in your game or product support that story? Once you start approaching your product design that way, you'll be set up to get your customers to buy into the microworld that you're building, and start to imagine that it's richer and more detailed than it actually is.

[−] hani1808 30d ago
[dead]
[−] r366y6 30d ago
[dead]
[−] vhantz 30d ago
[flagged]
[−] somat 30d ago
"AMD’s AI director reports that Claude Code has become “dumber and lazier” since February, based on analysis of 6,852 sessions and 234,760 tool calls, which is the most thorough performance review any AI has received and rather more than most human employees get."

Are there any good ways to measure agent ability? Or do we just have to go by vibes?