This very term I am teaching 18-year-old students 6502 assembly programming on an emulated Apple II Plus. They've had intro to Python, data structures, and OO programming courses using a modern programming environment.
Now, they are programming a chip from the seventies using an editor/assembler that was written in 1983 and has a line editor, not a full-screen one.
We had a total of 10 hours of class and lab. I taught them about assembly language: the registers, instructions, and addressing modes of the chip, and the memory map and monitor routines of the Apple. After that we wrote a few programs together, mostly using the low-resolution graphics mode (40x40): a drawing program, a bouncing ball, and finally hand-rolled sprites with simple collision detection.
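The bouncing-ball exercise boils down to a few lines of per-frame update logic. Here is a sketch of that logic in Python rather than 6502 assembly (the function name, grid size, and starting values are illustrative, not the students' actual code; on the Apple II the equivalent is done with register arithmetic and the monitor's plotting routines):

```python
# Bouncing-ball update logic on a 40x40 lo-res grid, as a plain
# simulation. The Apple II version keeps x/y and dx/dy in zero-page
# bytes and calls the monitor's PLOT routine each frame.

GRID = 40  # lo-res blocks per axis in the 40x40 mixed mode

def step(x, y, dx, dy):
    """Advance the ball one frame, reflecting off the walls."""
    x += dx
    y += dy
    if x <= 0 or x >= GRID - 1:
        dx = -dx                        # bounce horizontally
        x = max(0, min(GRID - 1, x))    # clamp back onto the grid
    if y <= 0 or y >= GRID - 1:
        dy = -dy                        # bounce vertically
        y = max(0, min(GRID - 1, y))
    return x, y, dx, dy

# Run a few frames starting near the corner.
x, y, dx, dy = 1, 1, 1, 1
for _ in range(100):
    x, y, dx, dy = step(x, y, dx, dy)
```

Collision detection against another sprite is then just a coordinate comparison before drawing, which is exactly why it makes a good first assembly project.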
Their assignment is to write a simple program (I suggested a low-res game like Snake or Tetris but they can do whatever they want provided they tell me about it and I okay it), demo their program, and then explain to the class how it works.
At first they hated the line editor. But then a very interesting thing happened. They started thinking about their code before writing it. Planning. Discussing things in advance. Everything we told them they should do before coding in previous classes, but they didn't do because a powerful editor was right there so why not use it?...
And then they started to get used to the line editor. They told me they didn't really need to see the code on the screen; it was in their head.
They will of course go back to modern tools after class is finished, but I think it's good for them to have this kind of experience.
I took a very similar class 9 years ago, and it was honestly one of the most helpful things I got out of my CS degree. The low level and limited tooling taught me to think before I start writing.
I've had other people look askance at me, but on greenfield work I tend to start with pen and graph paper. I'm not even writing pseudocode, just diagramming a loose graph of potential functions or classes with arrows interconnecting them. Obviously this can be taken too far; full waterfall planning would be a different exercise in frustration.
I find spending a few hours planning out ahead of time before opening an editor saves me tons of time actually coding. I've never had a project even loosely resemble the paper diagram, but the exercise of thinking through the general structure ahead of time makes me way more productive when it comes time to start writing code. I've tried diagramming and scaffolding in my editor, but then I end up actually writing code instead of big picture diagramming. Writing it on paper where I know I'll have to retype everything anyway removes the distractions of what method to use or what to name a variable.
The few times I've vibe-coded something this was super helpful, since then I can give much more concrete and focused prompts.
One of my favourite experiences coming up as an engineer was working with a very senior engineer right at the beginning. Whenever he had a task or problem, he would start out thinking, maybe doodling a bit on paper, go for a walk, and only then sit down at his computer and start typing. He would type it all in one go, compiling only at the end, and it would work. (Even typos were rare.)
All this to say that it is extremely useful to have the program and the problem space in your head and to be able to reason about it beforehand. It makes it clearer what you expect and easier to catch when something unexpected happens.
I was going to say "why on earth are you making them use a line editor? There is probably a VS Code plugin for the assembler with syntax highlighting" - then I got to your point about the code being in their head instead. This reminds me of what Zed Shaw said: for some reason code written without an IDE is better, and he's not sure why.
As a sort of adjacent point, I worked through the book used in a course often called "From Nand to Tetris". It is probably the best thing I've done in terms of understanding how computers, assemblers, and compilers work: https://amzn.eu/d/07pszOEy
I wish more was being invested in AI autocomplete workflows. That was a nice middle-ground.
But yeah, my hunch is that "the old way" - not sure we can even call it that - is likely still on par with an "agentic" workflow if you view it through a wider lens. You retain much better knowledge of the codebase. You improve your understanding of coding concepts (active recall is far stronger than passive recognition).
The very first few years of my career I spent writing code (mostly Perl) in vi (not even vim) on a SPARC running Solaris. I bought myself the O'Reilly Perl Cookbook, and that was pretty much my sole guide apart from the few internet forums available at the time. Search engines were still primitive, so getting help when you got stuck was far more difficult. But it forced me to deeply learn a lot of things - Perl syntax (we had no syntax highlighting, IntelliSense, etc.), terminal tools, and especially vi keystrokes. Looking back, there was far less distraction and "noise", though I admit that could have been because it was the beginning of my career and expectations were lower. I miss those times because now everything feels insanely more layered and complex.
Personally I haven't stopped doing things the old way. I haven't had any issues using LLMs as rubber ducks or brainstorming assistants - they can be particularly useful for identifying algorithms that might solve a problem you're unfamiliar with. Basically a variant on Google searching.
But when it comes to the final act I find myself unwilling to let an LLM write the actual code - I still do it myself.
Perhaps because my main project at the moment is a game I've been working on for four years, so the codebase is sizable, non-trivial, and all written by me. My strong sense even since coding LLMs showed up has been that continuing to write the code is important for keeping it coherent and manageable as a whole, including my mental model of it.
And also: for keeping myself happy working on it. The enjoyment would be gone if I leaned that far into LLMs.
Here’s how I do it: I create a lot of stuff using AI to the max, but I also spend the necessary amount of time reviewing that the AI is producing code that passes my cognitive-load standards. This involves some tokens spent on grooming code and documenting it well. Most of this is effortless thanks to an AGENTS.md based on this: https://github.com/zakirullin/cognitive-load/blob/main/READM... but I have a good sense for catching when things are getting weird, and I steer back.
Then, when credits run out, it’s showtime! The code is neatly organized, abstractions make sense, and comments are helpful, so I have solid ground for some good old organic human coding. I make sure that when I’m approaching limits, I ask the AI to set the stage.
I used to get frustrated when credits ran out, because the AI was making something I would need to study to comprehend. Now I’m eager for the next “brain time hand-out”.
It sounds weird, but it’s a form of teamwork. I have the means to pay for a larger plan, but I’d rather keep my brain active.
I didn't realize how I learned to develop software from 2011-2015 was the old way lol. (Am I old now?).
I appreciate that the author understands why doing everything "the old way" is good. AI is a tool, it can't be a replacement for how you think and it can't be a replacement for the actual work.
I wish more people had a desire to understand the inner workings of things, because it makes you better at actually using tools. Implementing compilers, databases, OSes, control systems, etc. is like practicing swimming. Yeah, you might not ever swim again, but the muscle memory will be there when you need to get out of the ocean (I know this is a strained metaphor).
Knowing more can only be a boon to using LLMs for coding, and it's really a general problem in ML. I work in a science field as a hw/sw engineer, and I've seen so many pure data science people say they can replace all our work with a model, flail for 2 years, and then their whole org gets canned. If they had just read a textbook or collaborated (which they never do, no matter how polite you are), they could have leveraged their data science skills to build something great; instead they toil away, never making it past step 0.
I'm a big advocate for AI, including GenAI. But I still spend a fair amount of time coding by hand, or "by hand + Copilot completions enabled". And yes, I will use spec driven development with SpecKit + OpenCode, or just straight up "vibe code" on occasion but so far I am unwilling to abdicate my responsibility to understand code and abandon the knowledge of how to write it. Heck, I even bought a couple of new LISP and Java books lately to bone up on various corners of those respectively. And I got a couple of Forth books before that. Not planning to stop coding completely for a while, if ever.
The fact that with AI development, your brain is no longer in a tight feedback loop with the codebase, leading to a significant drift between your model and reality, is still a sticking point with me and agentic development. It feels like trying to eat with silicone rubber chopsticks. I lose all precision and dexterity.
I still keep hoping there'll be a glut of demand for traditional software engineers once the bibbi in the babka goes boom in production systems in a big way: https://m.youtube.com/watch?v=J1W1CHhxDSk
But agentic workflows are so good now—and bound to get better with things like Claude Mythos—that programming without LLMs looks more and more cooked as a professional technique (rather than a curiosity or exercise) with each passing day. Human software engineers may well end up out of the loop completely except for the endpoints in a few years.
Getting to spend 3 months on a self learning journey sounds wonderful. My hunch is that these deep skills will be valuable long term and that this new abstraction is not the same as moving from assembly to c, but I am not completely sure. Lately most of my code has been llm generated and I can’t say I feel any sense of enjoyment, accomplishment, or satisfaction at the end of a work day. But I’ve also come to realise I really only enjoy 5-10% of the coding anyway and the rest is all the tedious semi-mechanical changes that support that small interesting core. On the scale of human history working with computers is a blip in time and I wonder how the period of hand writing code will be viewed in a hundred years, perhaps as a footnote or simply bundled as ‘everything before machines were self automating’.
It is amazing to see such a change in the industry: this title is something nearly every single developer could've said ~two years ago; now anyone claiming to code by hand is almost an endangered species.
I think the author’s intent is well-placed, but it does feel a bit sad that this subject is blog-worthy.
I’ve spent a lifetime teaching myself programming, computers, and engineering. I have no formal education in these disciplines and find that I excel due to the self-taught nature of my background.
I take a very metered approach to AI: I use it for autocomplete while still scrutinizing every token (not the AI kind), and as an augment to my self-pedagogy. It’s great to be able to “query” or get a summary from a set of technical documents on demand.
However, I don’t understand the desire to remove oneself from the process with AI. I simply don’t do anything that won’t teach me something new or improve my existing skills.
There’s more to engineering than simply programming. Both the engineer and the intended user base must also understand the system. The value lost is greater than the sum of all the parts when an LLM produces most or all of the code.
I started using Zed as a half measure. I think I'll start using AI for planning and suggested implementation steps.
I am seeing non-technical people getting involved in building apps with Claude. After Openclaw and the other agentic obsession trends, I just don't see it as pragmatic to continue down the road of AI obsession.
In most other aspects of life, my skills were valued because of my ability to care about the details under the hood and to get my hands dirty on new problems.
Curious to see how the market adapts and how people find ways to communicate this capacity for nuance.
> We don’t have teachers or a curriculum, and there’s very little required structure beyond making a full-time commitment during your retreat
I saw this quote when looking at the Recurse Center website. How does one usually go about something like this if they work full time? Does this mainly target those who are just entering the industry or between jobs?
I know the article is mostly about what the author built at the coding retreat, but now he has me interested in trying to attend one!
I'll bet we see more and more of this in the future. As developer skills atrophy due to over-reliance on LLMs, we'll have to keep our skills sharp somehow. What better way than a sabbatical?
> One solution to this constant companion problem: Spend more time with your phone out of easy reach. If it’s not nearby, it won’t be as likely to trigger your motivational neurons, helping clear your brain to focus on other activities with less distraction.
Reminds me of this study: "The mere presence of a smartphone reduces basal attentional performance"

The effect persisted even when the phone was switched off. It only went away when the phone was moved to a different part of the building.

https://www.nature.com/articles/s41598-023-36256-4
I mean, that's the only way I code. I don't use LLMs to do my work for me. I'm perfectly capable of solving any sort of problem on my own, and then I'll understand it well enough to explain it to someone later.
I did things the old way for 25 years and my carpal tunnels are wearing out. LLMs let me produce the same quality I always have with a lot less typing so not mad at that at all. I review and own every line I commit, and feel no desire to go back to the old way.
What scares the shit out of me are all these new CS grads who admit they have never coded anything more complex than basic class assignments by hand, just let LLMs push straight to main for everything, and get hired as senior engineers.
It is like hiring an army of accountants who have never done math on paper and exclusively let TurboTax do all the work.

If you have never written and maintained a complex project by hand, you should not be allowed to be involved in the development of production-bound code.
But also, I felt this way about the industry long before LLMs. If you are not confident enough to run Linux on the computer in front of you, no senior sysadmin will hire you to go near their production systems.
Job one for everyone I mentor is to build Linux From Scratch, and if you want, to have an LLM build all the tools to run one locally for yourself. You will be way more capable and employable if you do not skip straight to using magic you do not understand.
the cal newport quote in the post is doing a lot of work. the strain required to craft a clear memo or report is the mental equivalent of a gym workout.
fine, but the gym analogy breaks down somewhere. in a gym, the person who actually lifts heavier gets noticed. in software, the person with the right bio and the right network gets noticed, regardless of whether they've ever lifted anything real.
you can spend three years learning compilers properly and have a handful of readers. someone else ships a wrapper on a saturday and lands a pmarca quote tweet by monday.
coding the old way is good for you. i'm not convinced it's what gets you noticed. the strain was never really what got rewarded in the first place.
I left big tech about 5 years ago, which was an interesting timing looking back. It's not even that long ago, but man have things completely changed since then. I still code a lot, but only for fun. I've never even tried agentic coding. I'm kinda sad that "coding the old way" (as what this apparently is now) has become obsolete so quickly, but also very grateful that I was coincidentally born at the right time to have lived through a good chunk of time where people still wrote code themselves.
I wonder if we could design a programming language specifically for teaching CS, and have a way to hard-exclude it from all LLM output. Kinda like antivirus software has special strings that are not viruses but trigger detections for testing.
This would probably require cooperation during model training, but now that I think of it, is there adversarial research on LLM? Can you design text data specifically to mess with LLM training? Like what is the 1MB of text data that if I insert it into the training set harms LLM training performance the most?
Anecdotally I found that it was very easy to just throw everything at the LLM. That was fine until I realized once I got stuck that I was basically lost. It only took 2 weeks for years of knowledge to feel very “foreign”.
Recently I’ve been trying to combat this by learning things “deeper”, i.e.: yes, I can secure and respond to container-based threats, but how do containers actually work deep down?
So far I think it’s working well and as an odd plus it’s actually helping me use AI more efficiently when I need to.
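As a concrete starting point for that "deeper" look: on Linux, a container is largely an ordinary process dropped into its own namespaces, with cgroup resource limits layered on top, and you can inspect those namespaces directly through procfs. A minimal sketch (Linux-only; the function returns an empty dict on systems without `/proc/self/ns`):

```python
# A "container" is not a single kernel object: it is mostly an ordinary
# Linux process placed in its own namespaces (pid, mnt, net, uts, ...)
# plus cgroup-based resource limits.
import os

def current_namespaces():
    """Return the namespace ids of the current process (Linux only)."""
    ns_dir = "/proc/self/ns"
    if not os.path.isdir(ns_dir):  # e.g. macOS: no procfs namespaces
        return {}
    return {
        name: os.readlink(os.path.join(ns_dir, name))
        for name in sorted(os.listdir(ns_dir))
    }

# Each value looks like 'pid:[4026531836]'. Two processes "in the same
# container" share these ids; an isolated process gets fresh ones.
namespaces = current_namespaces()
for name, ident in namespaces.items():
    print(name, ident)
```

Comparing these ids between your shell and a process inside a running container makes the isolation mechanism tangible in a way that `docker run` never does.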
It's easy to take for granted lots of experience programming before the advent of LLMs. This seems like a good strategy to develop understanding of software engineering.
I remember writing BASIC on the Apple II back when it wasn't retro to do so!
I've settled into a pattern of using agents for work (where throughput/results are the most important) and doing things the hard way for personal or learning projects (where the learning is more important).
LLM providers should create a Stack Overflow-type site based on users' most-asked problems. At least we wouldn't deplete the source of normal search results.
i feel like i'm being gaslit into believing that coding was ever hard or the bottleneck.
Typing and thinking in English is demonstrably slower than in code/the abstract (Haskell for me.)
And no, I didn't write English plans before AI. Or have a stream of English thought in my head. Or even pronounce code as I read and wrote it. That's low-skill stuff.
It is all a conspiracy: now that mechanical keyboards are affordable and available in so many shapes and switches, they want to take this last pleasure (typing) from us.
This is ominous and very depressing given what we've recently learned / reconfirmed about LLMs sapping our ability to persist through difficult problems:
> There were 2 or 3 bugs that stumped me, and after 20 min or so of debugging I asked Claude for some advice. But most of the debugging was by hand!
Twenty whole minutes. Us old-timers (I am 39) are chortling.
I am not trying to knock the author specifically. But he was doing this for education, not for work. He should have spent more like 6 hours before desperately reaching for the LLM. I imagine after 1 hour he would have figured it out on his own.
So we’ve already grown nostalgic for the old days… skimming through an alien looking codebase, scratching your head trying to figure what crazy abstraction the last person who touched this code had in mind. Oh shit it was me? That made so much more sense back then… but it’s been 6 hours and I can’t figure out why this does not work anymore. So you read some docs but they are poorly written. So you find something on Google and try to hack that into your solution. But nope, now more stuff broke. There goes your day.
(I swapped the title for the subtitle earlier because I thought it was more informative. What I missed was the flamebaity effect that "the old way" would have. Obvious in hindsight!)
Depressing. It's like reading has-been actors' stories about how they went to wellness retreats to "reconnect with themselves" to try to get back on the job. I can't wait for the day when the same type of people as the author - or indeed, the author himself - start labeling plain regular programming as "artisanal" and "craft".
>>ai is here. so i'm spending 3 months coding the old way
The old way?! So not using AI is already being called "the old way"?!!
That statement sets off alarm bells about writing on the internet and how much trust to put in it, as if I'm the first one to notice.
1. It increases the chances of any bugs being found and resolved.
2. It encourages the author to be more careful with their code to avoid long reviews with a lot of findings.
3. It ensures at least two people - the author and approvers - have familiarity with the code.
4. It spreads responsibility for the code across at least two people - the author and approvers.
It's clear this article's author does not review their own code. I sure hope that code is not used for anything important.
> so i'm spending 3 months coding the old way
the old way, which was about one year ago?
> 15 years of Clojure experience
My God I’m old.
I do the former for fun. The latter to provide for my family.
There is a reason old men take on hobbies like woodworking and fixing old cars and other stuff that has been replaced by technology.