Several weeks ago, I spent about a week fully reverse engineering a Stereomaker pedal. It accepts a mono signal and produces a stereo field using a 5-stage all-pass filter to mess with the phase without the use of delay (which sounds cheesy and creates a result that doesn't mix well back to mono).
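For the curious, the phase trick can be sketched in a few lines of Python. This is a hypothetical digital analogue of the idea, not the pedal's actual analog circuit, and the coefficients are made up:

```python
import numpy as np

def allpass1(x, a):
    """First-order all-pass: y[n] = a*x[n] + x[n-1] - a*y[n-1].
    Passes every frequency at unity gain; only the phase shifts."""
    y = np.zeros_like(x, dtype=float)
    x_prev = y_prev = 0.0
    for n, xn in enumerate(x):
        y[n] = a * xn + x_prev - a * y_prev
        x_prev, y_prev = xn, y[n]
    return y

def mono_to_stereo(x, coeffs=(0.3, 0.45, 0.6, 0.75, 0.9)):
    """Left = dry input; right = the input through a 5-stage all-pass
    cascade. Both channels carry the same magnitude spectrum and differ
    only in phase, which is what creates the stereo spread."""
    wet = x.astype(float)
    for a in coeffs:
        wet = allpass1(wet, a)
    return x.astype(float), wet
```

Because an all-pass has unity gain everywhere, the wet channel keeps the full energy of the input, unlike a delay-based widener whose mono sum comb-filters.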
I've not really worked with audio circuits previously, and I'd been intimidated to approach the domain. My journey was radically expedited by iterating through the entire process with a ChatGPT instance. I would share zoomed photos, grill it about how audio transformers work, and get it to patiently explain JFET soft-switching using an inverter until the pattern was forced into my goopy brain.
Through the process of exploring every node of this circuit, I learned about configurable ground lifts, using a diode bridge to extract the desired voltage rail polarity, how to safely handle both TS and TRS cables with a transformer, that transformer outputs are 180 degrees out of phase, and how to add a switch that attenuates a signal by 10 dB to toggle between line and instrument levels.
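That 10 dB pad is mostly arithmetic. A sketch of the naive voltage-divider version, with a made-up 10 kΩ bottom resistor and ignoring the source/load impedances a real pad would have to account for:

```python
import math

# -10 dB means the voltage is scaled by 10**(-10/20) ≈ 0.316.
ratio = 10 ** (-10 / 20)

# Simple divider: Vout/Vin = R2 / (R1 + R2). Fix R2, solve for R1.
R2 = 10_000                       # ohms, arbitrary choice
R1 = R2 * (1 / ratio - 1)         # ≈ 21.6 kΩ

# Sanity check: back out the attenuation in dB.
db = 20 * math.log10(R2 / (R1 + R2))   # ≈ -10.0
```

In practice you'd pick the nearest standard resistor values and verify the pad against the impedance it actually sees.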
Eventually I transitioned from sharing PCB photos to implementing my own take on the cascade design in KiCAD, at which point I was copying and pasting chunks of netlist and reasoning about capacitor values with it.
In short, I gave myself a self-directed, college-level intensive in about a week. Since that's not generally a thing IRL, it's reasonable to conclude that I would never have moved this from "some day" to something I now understand deeply without the ability to shamelessly interrogate an LLM at all hours of the day or night, on my schedule.
This is a phenomenal example of exactly what I am advocating.
Notice you didn't ask the AI to 'just design a stereo pedal for me.' You interrogated it, reasoned about netlists, and forced the concepts into your brain through intense friction. That is pure deep work.
It's not as simple as just being lazy; our brains are hardwired to take the path of least resistance. I believe someone industrious like you is the exception and not the rule, which is why industrious people do well in life and are praised.
No, you touch on the aspects where you're able to use AI as an extension of your skills.
This is completely different from my colleague who isn't a software engineer and now, all of a sudden, is creating PRs which I need to review and correct.
I'm a sceptic. I use it to explore the unknowns and go from there.
I have a nearly total opposite take. I can't tell you how many times I've read a book, a paper or something else and been confused by some ambiguity in the author's prose. Being able to drop the paper (or even the book!) into an LLM to dig into the precise meaning has been an unbelievable boost for me.
Now I can actually get beyond conceptual misunderstanding or even ignorance and get to practice, which is how skills actually develop, in a much more streamlined way.
The key is to use the tool with discipline, by going into it with a few inviolable rules. I have a couple in my list, now: embrace Popperian falsifiability; embrace Bertrand Russell's statement: “Everything is vague to a degree you do not realize till you have tried to make it precise.”
LLMs have become excellent teachers for me as a result.
Anecdote: I haven’t done any web development since 2002, and I always farmed that out to someone else.
But since I started using coding agents, I have built two feature-full internal web apps authenticated by Amazon Cognito. While the UI looks like something from 2002, I am good at putting myself in the shoes of the end user, so I iterated often (and quickly) on the UX.
I didn’t look at a line of code and have no plans to learn web development. I might have taken the time to learn a little before AI, just to help me with internal websites. Yes, I know it’s secure - I validated that the endpoints can’t be accessed unauthenticated, and I checked the IAM role.
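For what it's worth, that unauthenticated-access check is scriptable. A rough sketch using only the standard library; the URL is hypothetical, and a real test should cover every route and HTTP method:

```python
import urllib.request
import urllib.error

def rejects_unauthenticated(url: str) -> bool:
    """True if the endpoint refuses a request carrying no credentials."""
    try:
        urllib.request.urlopen(url, timeout=10)
        return False                    # 2xx with no token: wide open
    except urllib.error.HTTPError as e:
        return e.code in (401, 403)     # explicitly denied: good
    except urllib.error.URLError:
        return True                     # unreachable without creds/VPN

# Hypothetical endpoint; substitute your real API URL:
# assert rejects_unauthenticated("https://example.com/api/items")
```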
Second anecdote: I know AWS (trust me on this) like the back of my hand. I also know CloudFormation. For years I’ve been putting off learning Terraform and the CDK. After AI, why bother? I can one-shot either for IaC, and I’m very specific about what I want.
My company is happy and my customer is happy (consulting). What else matters? Substitute “the business” or “stakeholders” for “customer” as applicable.
You are here. You know what you want. You know what are the correct rules and constraints. You know what the correct path looks like. You are able to spot drifting. You are able to objectively review the outcome. AI is the tool to go from here to where you want to arrive. There.
If not, AI is the tool to get "here" faster. And then go from here to there.
All you need is to take a little time to learn how to use this new tool: AI.
Take time to learn.
Have you taken the time to document what mathematicians were able to do with AI? What researchers were able to do with AI? They took the time to learn the AI tool. Then they used it with great results.
What are you waiting for? Learning is something you should also do. Go do it.
I have some algorithms I absolutely must know. So I’m hand coding them and asking the agent to critique me.
I do a very similar thing in writing - I need feedback, don’t rewrite this!
In both cases I need the struggle of editing / failing to arrive at a deeper understanding.
The future dev will need to know when to hand code and when not to waste their time. And the advantage will still go to the person willing to experience struggle to understand what they need to.
Maybe for you reading a paper deeply is the most constructive way that you have to absorb information.
For me, it is having a document and interrogating it. Maybe having many sets of documents about a whole category of information. Getting the bullet points. Getting the high level, then interrogating and digging down, and being able to get bubbled-up information as I need it.
That is the learning style that matches how I learn.
I have never been able to skim, so reading a large document WILL teach me that topic, but getting through that doc is tough.
I can dump a very large set of docs in a reader that lets me interrogate the whole data set and I can fly through looking for what is interesting to me, and what I may need, and along the way I will likely dive into other parts too. Asking questions keeps my hyperfocus active.
I think it is just a different style. I have synesthesia and a hard time not working on three to five things at once. I am used to knowing I learn differently than others.
But to actually answer the question:
I’ve been putting research paper PDFs into NotebookLM and turning them into ~40 minute podcasts, which I listen to on my walks.
Yes, it’s shallow learning, and it might have some hallucinations in there, but I wouldn’t have read some of those otherwise.
I think the risk is this: when non-technical users who've never shipped software in their life can dictate to a machine and get "instant results", it's going to bring back managers not understanding that you don't just ship code. Especially these days, when one bad dependency can mean downtime or worse.
Agreed. LLMs have helped me achieve much deeper reading, _when directed to do so_. Asking an LLM to “Teach me Socratically about this paper/code. One question at a time”, usually allows me to get a much deeper reading of the material than I would otherwise.
I don't see the issue with being lazy; in these times of AIs, laziness is becoming more productivity than liability. Lazy people often find better shortcuts and better ways of making things that others cannot find, because those others are so filled with the need to be productive that their brain capacity gets overloaded. Sometimes AI can also be an assistive tool for disabled people and make even them productive, so calling it "lazy" is an exaggeration.
Pretend learning is absolutely the key point, for me. There is danger in shifting our reasoning from knowing "stuff", to knowing a symbolic summary of "stuff" (helpfully generated by an LLM at varying levels of accuracy).
Previously, we saw a shift with search engines where we no longer needed to learn data because we could use a search engine as a mental signpost to the data, freeing up capacity for other thought.
LLMs are shifting knowledge creation to this mental pointer model. We don't need to know real "stuff" because we know how to look it up later (never?).
Each of these summaries is a secondary source, delivered through an agent biased by whatever is in its current context window. Like a game of telephone, the summaries are inherently lossy; each one may be 95% correct, and crucially we don't know which 5% is incorrect.
When our basis for decision-making is a collection of hundreds or thousands of LLM-generated "Schrödinger's facts", we risk cumulative, cascading errors. We will be wrong in unpredictable, chaotic ways.
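The compounding is easy to put numbers on. Assuming, generously and purely for illustration, that the facts are independent and each is correct with probability 0.95:

```python
# Probability that every one of n independent 95%-reliable facts is correct.
p_correct = 0.95
for n in (1, 10, 100, 1000):
    print(f"{n:>5} facts: P(all correct) = {p_correct**n:.4g}")
```

At 10 facts you're already coin-flipping (about 60%), and at 100 facts the chance that everything is right is under 1%.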
We are voluntarily capping ourselves at this childish level of thought, because it feels like we are exercising our critical judgement the same as ever. However, the integrity of the inputs has been compromised, and bad inputs always lead to bad outputs.
This was the issue with some of the ads Apple was running when launching the iPhone 16. They showed the worst worker using Apple Intelligence to impress the boss and get promoted, while being generally lazy and terrible. I felt it was the wrong message to send. [0]
I don’t think AI is all bad for summaries though. I used to add stuff to a reading list with good intentions, but things went there to die. Hundreds of articles added, but with so much new content each day, I would never actually read any of it. Now, I use AI summaries to get more context on what the article is. If it sounds interesting and I want more info, I can read the whole thing in the moment. If I’m satisfied with the summary alone, I can move on with my life. No more pushing it off to a reading list that only generates guilt. I actually end up reading more articles due to this, not less.
> nothing actually sticks.

And it doesn't matter. To each their own. Take one example: cooking. Some may choose to be a gourmet chef, whether professionally or just on their own time. Some will just regularly cook their food. Some will cook only when they have to. And some will avoid cooking no matter what, leaving it to family or going out to buy food, etc.
Now apply to every task and endeavor that one may be involved in. It doesn't matter if any particular thing sticks or not. Some may care and dive deeply, and some may prefer a hands-off approach. Nothing changes either way; life goes on.
The primary reason to get into anything deeply before was that it contributed directly to survival, e.g. studying and building a career to provide a product/service others needed. Things had to stick because living depended on it. Now with AI, it just doesn't matter anymore, with the essentials and everything beyond increasingly being automated away.
Getting your directions from Google Maps might make you seem more knowledgeable about a city's geography than you actually are.
However, what does it mean to say that's deceptive? It means you care more about social signalling than you do about arriving at the right destination on time. Showing that you're not the sort of person who gets lost isn't really the primary reason people use Google Maps. When it's not a test of your navigation skills, it's not cheating.
Similarly, doing Google searches before posting might be "deceptive" in that it makes you seem more knowledgeable than you are, but on the whole I would prefer more knowledgeable posts, so the social signalling seems like a secondary consideration.
Similarly for using AI. Sometimes it's just a way to get more information.
A side-track and a possibly controversial opinion:
It seems to me that Agile methodology did a similar thing. The idea of Agile was not to skip the understanding of requirements, design, upfront reasoning, and due diligence seen in waterfall methods. However, it sometimes turned into laziness that looked like faster incremental progress.
I think the quality of software has become worse over time, with "unknown error occurred, try again later" becoming more common, and I wonder whether the root causes include jumping into building things without properly thinking through the customer problem, requirements, and/or design.
I may easily be wrong, would like to hear corrective thoughts.
Finally someone said this! I feel it's not that people are doing less; it's that the feedback loop that used to tell you whether you actually understood something is now gone. You ship, it works, you move on, and three months later you can't explain why you made half the decisions you did.
The bit about taste is what actually stuck with me, though. Taste comes from having opinions formed through struggle, not summaries, and that's the part AI quietly erodes without you noticing it's happening. Does anyone here agree with me?
That's a different take than how I've been finding AI genuinely useful. I try not to use it for deep work; in fact, I try to use it minimally but frequently, for short checks on my own understanding.
Using your research paper reading example, I would read the research paper, but then ask an AI tool specific questions about the work, frequently in new chats. Then at the end I might ask it to implement my description of the paper. I guess it's your 'debate with me' conclusion, the only difference is I would try to have multiple short conversations.
Thank you for this helpful differentiation. I agree - and if it's undermining our trust in "effort" (we start to be suspicious about how much some piece of work is really "worth"), it also undermines our relationships.
Valid points, with which I agree, and I share the concern. I can't compete with colleagues who do things fast if I want to learn. On the other hand, I no longer have to toil through things that I never truly learned, like tasks that only need to be done a few times a year. Mastery is never achievable because I forget, and side quests become much less derailing. However, I am deprived of going through the motions and researching.
That sounds relatable, but there are ways we can avoid being lazy. As for myself, I occasionally try to debug and fix the code myself instead of relying on coding tools. That doesn't give the superhuman feeling of writing a whole project by hand, but it still helps.
Can't say fortunately or unfortunately, but we have no other choice but to keep up this way.
The biggest risk is to companies: if they are saying "I don't need a team to do X", whether development, marketing, whatever, then the barrier to entry in their industry has collapsed, and we can expect other companies to enter the market.
Sometimes it feels like vibe coding lowers the cost of creating new skills so much that we end up with skills for making skills, and then more skills for evaluating those skills.
At some point the question stops being “how do we evaluate all this?” and becomes “did all of this need to exist in the first place?”
I agree with your point, but I think there are many cases in which it's good. Like if you're building something and get slowed down by topics you're not familiar with, a summary of a topic is enough to remove the obstacle of not knowing how to continue. But yeah, you can't call this "real" learning.
What’s important? That bridges get built and stay up, or that they’re built only after toiling for X hours? AI will change the nature of work, and it’s going to make a lot of people uncomfortable. But more importantly, it’s going to let people who understand things faster get the info they need to be productive.
I find value in learning some things deeply but not all things.
The ability to be more selective about where I attend deeply, while leveraging fast shallow learning to complete other tasks... That seems like a potential benefit and a nice choice to have in the toolbox.
I don't think it's all that bad. There's definitely vibe coding that is "copy paste / throw away" programming on ultra steroids. But after vibe coding two products and then finding them essentially impossible to get to a quality bar I considered ready to launch, I've been working on a more measured approach that leverages AI in a way that simply speeds up traditional programming. I use it to save tons of time on "why is Pylance mad about X", "X works from the docs example but my slightly modified X gives error Y", "how do I make a toggle switch in CSS and HTML", "how am I supposed to do Python context managers in 2026 (I didn't know about the generator wrapper thing)" - all that bullshit that constantly slows you down but needs to be right. AI is great at helping you kickstart and then keeping you unblocked.
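(For anyone else who hadn't met "the generator wrapper thing": it's `contextlib.contextmanager`, which turns a generator into a context manager. A minimal sketch with a made-up helper:)

```python
import contextlib

@contextlib.contextmanager
def temp_setting(d, key, value):
    """Temporarily set d[key] = value; restore the original on exit."""
    missing = object()
    old = d.get(key, missing)
    d[key] = value
    try:
        yield d                 # the body of the `with` block runs here
    finally:                    # runs even if the body raises
        if old is missing:
            del d[key]
        else:
            d[key] = old

config = {"mode": "prod"}
with temp_setting(config, "mode", "debug"):
    assert config["mode"] == "debug"
assert config["mode"] == "prod"   # restored after the block
```

Everything before the `yield` is the `__enter__` logic, everything after it is `__exit__`, and the `try/finally` is what makes cleanup reliable.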
I've been using Gemini chat for this, and specifically only giving it my code via copy paste. This sounds Luddite but actually it's been pretty interesting. I can show it my couple "core" library files and then ask it to do the next thing. I can inspect the output and retool it to my satisfaction, then slot it in to my program, or use it as an example to then hand code it.
This very intentional "me being the bridge" between AI and the code has helped so much in getting speed out of AI but then not letting it go insane and write a ton of slop.
And not to toot my own horn too much, but I think AI accelerates people more the wider their expertise is even if it's not incredibly deep. Eg I know enough CSS to spot slop and correct mistakes and verify the output. But I HATE writing CSS. So the AI and I pair really well there and my UIs look way better than they ever have.
Productivity can’t be faked. Productivity also mostly aligns with incentives in my opinion. You want someone to be productive then give them a reason to be.
If you're lazy, perhaps you're just... lazy?
Anyhow, I highly recommend the Surfy Industries Stereomaker. It's amazing at what it does. https://www.surfyindustries.com/stereomaker
No mystery here.
[0] https://youtu.be/YP-ukrBVDH8 (this is sadly the best copy I can find)
A good example is "birthday wishes":
https://m.youtube.com/watch?v=2IYqhdJuRfU&t=5m47s
(AutoCorrect, AutoComplete - generate? AutoCongratulate? How much is "okay"?)
With things like Tiktok I've learned that we need to break up bigger works into smaller digestible pieces
Another issue is there is too much content for people to read or consume already (a problem independent of AI)
Yeah, it's about "effective use of AI as a tool"
> not treating it as your TikTok of pretend learning
Nah, YC has another website for that already...
/s