Warranty Void If Regenerated (nearzero.software)

by Stwerner 321 comments 520 points

[−] donatj 59d ago
I'm trying to sort out my own emotions on this.

I did not realize this was AI generated while reading it until I came to the comments here... And I feel genuinely had? Like "oh wow, you got me"... I don't like this feeling.

It's certainly the longest thing (I know about) I've taken the time to read that was AI generated. The writing struck me as genuinely good, like something out of The New Yorker. I found the story really enjoyable.

I talk to AI basically all day, yet I am genuinely made uneasy by this.

[−] hmokiguess 58d ago
Folks labeling this "AI generated" might be jumping the gun, considering OP described a process that took him the last couple of months and then some.

Call it what you want, but I think "AI assisted" sits better, perhaps "really well supervised," full of the human intent behind it. Then again, labels are strange: we call algorithmic and synthesizer-assisted music "electronic" music these days, and we still praise musicians who work through endless Moog / Ableton fine-tuning sessions to find the perfect loop patterns for their craft.

I could definitely feel the human author's side of this post coming through. Thank you for sharing it!

[−] helle253 59d ago
that's funny, i know where this story is set (i grew up there) - or at least, the place Claude was basing things off of

some inconsistencies that stuck out/i found interesting:

- HWY 29 doesnt run through marshfield, its about 15 miles north.

- not a lot of people grow cabbage in central wisconsin ;)

- no corrugated sheet metal buildings like in the first image around there

- i dont think theres a county road K near Marshfield - not in Marathon county at least

fwiw i think this story is neat, but wrong about farmers and their outlooks - agriculture is probably one of the most data-driven industries out there. there are not many family farmers left (the kinds of farmers depicted in this story); it is largely industrial scale at this point.

All that said, as a fictional experiment its pretty cool!

[−] furyofantares 59d ago
I guess I'm somehow an expert on LLM-isms; I thought they were still plentiful. They're plentiful at the start but get significantly worse near the end, so I'm guessing you spent more time polishing up the first two-thirds or so.

But I was able to get through the text; it's pretty good, and you did great work cleaning it up. There's just a bit more to do for my taste.

The story is good.

[−] nativeit 59d ago

> The milk pricing tool consumed the feed tool’s output as one of its cost inputs. The format change hadn’t broken the connection — the data still flowed — but it had caused the pricing tool to misparse one field, reading a per-head cost as a per-hundredweight cost, which made the feed expenses look much higher than they were, which made the margin calculations come out lower, which made the recommended prices drop. “You changed your feed tool,” Tom said.

“Yeah, I updated the silage ratios. What does that have to do with milk prices?”

“Everything.”

He showed Ethan the chain: feed tool regenerated → output format shifted → pricing tool misparsed → margins calculated wrong → prices dropped → contracts auto-negotiated at below-market rates. Five links, each one individually innocuous, collectively costing Ethan roughly $14,000.

Ethan looked ill.

--

I've re-read this a few times now and can't work out how the interpreted price of feed going up, and the interpreted margins going down, results in a program setting lower prices on the resulting milk. Higher costs and thinner margins should push prices up, not down. I feel like this must have gotten reversed in the author's mind; it's not like it's a typo, since there are multiple references in the story to this cause and effect. Am I missing something?
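For what it's worth, the misparse mechanism itself is easy to sketch, separate from the question of price direction. A minimal illustration with made-up numbers (the herd size, milk volume, and field value are not from the story):

```python
# Hypothetical sketch of the unit misparse described in the story:
# the same numeric field read as dollars-per-head vs. dollars-per-hundredweight.

def feed_cost_correct(per_head_usd: float, herd_size: int) -> float:
    """Feed expense when the field is parsed as dollars per head."""
    return per_head_usd * herd_size

def feed_cost_misparsed(value_usd: float, milk_cwt: int) -> float:
    """The same number misread as dollars per hundredweight of milk shipped."""
    return value_usd * milk_cwt

herd = 120    # cows (made-up)
milk = 2800   # hundredweight of milk shipped in the period (made-up)
field = 3.20  # the ambiguous cost field

print(feed_cost_correct(field, herd))    # 384.0
print(feed_cost_misparsed(field, milk))  # 8960.0 -- feed expenses look ~23x higher
```

Either way, inflated feed expenses should shrink the computed margin; the open question is why shrinking margins would lower, rather than raise, the recommended price.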

[Edited for clarity]

[−] girvo 59d ago
I will say this is one of the few pieces of AI-generated prose I've read that didn't immediately jump out as such (a couple of inconsistencies eventually nudged me to come to the comments and see your post details, which mention it - I'd clicked through from the HN homepage), so your polishing definitely worked! Quite a neat little story.

[−] hatthew 60d ago
A fun read!

I'm mildly thrown off by some inconsistencies. Carol says "I've been under-watering that spot on purpose for thirty years," and then a paragraph down Tom's thoughts say "Carol didn't know that she under-watered the clay spot." Carol considers a drip irrigation timer the last acceptable innovation, but then the illustration points to the greenhouse as the last acceptable innovation. Several other things as well, mostly in the illustrations.

Are these real inconsistencies or am I misunderstanding? Was this story AI-assisted (in part or all)? Is this meta-commentary?

[−] saint-evan 58d ago
I really REALLY enjoyed this article and the direction it took me in. I went in with zero preconceptions, just read it straight through, and only after opening the comments did I realize it was largely AI-assisted. Even then, I was very pleasantly surprised. The piece takes you by the hand and leads you through a very deliberate and directed journey. Sure, there are moments where things wobble a bit - some explanations around specific failures get a little tangled and even contradictory - but none of that registered as "this must be AI." I'm only noticing those things now, in hindsight, like oh, that's what that was.

The images hit that sweet spot too: few and far between, supporting the plot without getting in the way, visually clarifying without over-explaining. It all worked together even with minor contradictions around labelling. The inconsistencies weren't sticky enough to disrupt the plot at all.

Over the years I've seen an idea play out in movies, books, articles, and short stories: that humanity only unites when faced with an alien intelligence. What gets me is how people can enjoy something like this, then immediately recoil once they figure out it was AI-assisted enough to be largely AI generated. Does that actually diminish the substance of what they just experienced? I don't think it does, but I'm not gonna argue such a subjective stance.

Someone in the comments suggested tagging AI-assisted work with something like an "LLM:" prefix, similar to "Show HN:". That feels weird to me. LLMs might not be sentient, but they're clearly capable enough that the output should stand on its own, alongside the intent and effort of whoever's guiding it. Pre-labeling it just bakes in bias before anyone even engages with the work. It's not that far off from asking human authors to declare their race or nationality up front. 'Cause really, if nothing about my direct experience changed, why should my judgment?

In a tech-forward space like HN, I’d expect a stronger bias toward judging things on merit alone. Just read the thing. Let it speak first. I sincerely hope this isn't gonna be an 'LLM vs Humanity' thing 'cause personally, I find the idea of a different kind of intelligence extremely interesting.

[−] rikschennink 58d ago
When I noticed the article header image was generated with AI my interest in reading the article itself dropped to zero.
[−] cortesoft 60d ago
I do enjoy this sort of speculative fiction that thinks through the future consequences of something in its early stages, like AI is right now. There are some interesting ideas in here about where the work will shift.

However, I do wonder if it is a bit too hung up on the current state of the technology and the current issues we are facing. For example, the idea that AI-coded tools won't be able to handle (or even detect) that upstream data has changed format or methodology. Why wouldn't this be something that AI just learns to deal with? There is nothing inherent in the problem that is impossible for a computer to handle, and no reason to think AIs can't learn how to code defensively for this sort of thing. Even if it is something that requires active monitoring and remediation, surely even today's AIs could be programmed to monitor for these sorts of changes and modify existing code to match them when they occur. In the future, this will likely be even easier.
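For what it's worth, the kind of defensive check this paragraph imagines is already routine to write. A minimal sketch, with hypothetical field names and units:

```python
# Hypothetical sketch of a generated tool validating its upstream input
# before consuming it, so a silent format change fails loudly instead of
# being misparsed. Field names ("feed_cost", "unit") are made up.

EXPECTED_FIELDS = {"feed_cost": float, "unit": str}
ACCEPTED_UNITS = {"per_head", "per_cwt"}

def validate_upstream(record: dict) -> dict:
    """Raise instead of guessing when the upstream format drifts."""
    for field, ftype in EXPECTED_FIELDS.items():
        if field not in record:
            raise ValueError(f"upstream dropped field: {field}")
        if not isinstance(record[field], ftype):
            raise TypeError(f"field {field} changed type: {type(record[field]).__name__}")
    if record["unit"] not in ACCEPTED_UNITS:
        raise ValueError(f"unknown unit: {record['unit']}")
    return record

validate_upstream({"feed_cost": 3.2, "unit": "per_head"})  # passes
```

A supervisor process could run a check like this at every tool boundary and trigger regeneration when it fails, rather than letting a silent format drift propagate downstream.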

The same thing is true of the 'orchestration' job. People have already begun to solve this issue with the idea of a 'supervisor' agent that designs the overall system and delegates tasks to the sub-systems. The supervisor agent can create and enforce the contracts between the various sub-systems. There is no reason to think this won't get even better.

We are SO early in this AI journey that I don't think we can yet tell what is simply impossible for an AI to ever accomplish and what we just haven't figured out yet.

[−] dawdler-purge 58d ago
The LLM-ness isn't a hard problem to fix. Break it into sections, run each through an LLM a few times to catch logic issues, and use different AIs to double-check. As for the writing style, if the author just reads it carefully, they can definitely spot the things Claude keeps repeating and tell it not to do that.

But honestly, the ideas here are really good. The cascading failure from a weather model update, the spaghetti problem with forty tools nobody designed as a system, the $4 toggle switch being the most important tool - that's sharper thinking about AI than most serious essays on the topic.

A lot of people who publish regularly can't write to this level of thinking. The prose could be cleaner, sure, but it made me think, which is more than most stories do.

[−] deskamess 58d ago
I had no idea it was AI assisted (as another comment put it). However, I am fine with this... I would certainly enhance my own long-form content the way the author described. The author mentioned the use of a world bible and style guides, and it shows in the consistency and tightness of the article. And that is key... taking something AI generated (based on a prompt) and reworking it systematically in an iterative, human-in-the-loop process. The end result was a great read.
[−] jerf 58d ago
Reacting to the story itself: I've been down the same line of thought but came to the opposite conclusion. Precisely because the generation of code is unreliable, one of the metrics we will use in the future to determine the value of code is precisely how much it has been tested against the real world. Real-world-tested code will always be more valuable than code that has just been instantiated by an AI, and that extends indefinitely into the future, because no AI will ever be able to completely deal with integrating with all the other AI-generated code in the world on the first try. That is, as AIs get better at generating code, we will inevitably generate more code with them, and later code must then deal with that increased amount of code. So the AIs can never "catch up" with code complexity, because the problem gets worse the better they get.

This story is itself the explanation of why we're not going to go this route at scale. It'll happen in isolated places for the indefinite future. But farmers are going to buy systems, generated by AIs or not, that have been field tested, and will be no more interested in calling new untested code into being for their own personal use on their own personal farm than they are today.

The limiting factor for future code won't be how much AI firepower someone has to bring to bear on a problem but how much "real world" there is to test the code against, because there is only going to be so much "real world" to go around.

(Expanded on: https://jerf.org/iri/post/2026/what_value_code_in_ai_era/ ).

[−] PaulHoule 58d ago
'the concept of “broken software” had been replaced by the concept of “an inadequate specification,”' represents a fundamental misunderstanding which has been a source of trouble in the industry for a long time.

That is, a lot of "broken software" has always been rooted in "an inadequate/incorrect specification." If problems in the spec are discovered up front, they are cheap to fix; the further along you go in development or deployment, the more expensive they are to fix. AI doesn't change that. Maybe with AI it is 20% faster to fix [1] across the board, but it is still more expensive to fix things late -- you might think you are done with waterfall, but waterfall is not done with you!

[1] My 20% is pessimistic, but if you think you are 10x as productive with AI at putting functionality in front of customers, in the long term and with universal scope, I believe you've got the same misunderstanding about the product life cycle that I'm talking about.

[−] tengwar2 60d ago
There's a bit of a tradition of introducing engineering ideas through stories. I remember a novella which was used to introduce something like MRP II (https://en.wikipedia.org/wiki/Material_requirements_planning) in the 80's. One of the reasons I think it works is that it keeps a focus on the human elements - like why Tom fitted the switch in your story. I remember automating a lab system back in 1985, which would bring in £1000 per day. Two weeks later I found out that the reason it wasn't in use was that the user wanted an amber monitor rather than a green one. I fitted the switch.

I don't know if this is what the future will look like, but this looks realistic. And if my non-existent grandson starts re-coding my business without asking, he's going to spend the next six months using K&R C.

[−] andai 60d ago
I enjoyed this very much. But I have to wonder, was this written by Claude?

Edit: got it right!

https://news.ycombinator.com/item?id=47419681

[−] ninalanyon 58d ago
This struck me:

"The tool had changed. The domain had not. People who understood the domain and could also diagnose specification problems were the most valuable people in any industry, and most of them, like Tom, had arrived at the job sideways from something else."

People my age and older arrived in the software business sideways too; in my case, from physics and electronics. My background in physics was a great help to me later when programming in the domain of electrical machines, because I could speak both languages, so to speak.

Much grander people than me came into software sideways, as I was reminded when reading Bertrand Meyer's in memoriam of Tony Hoare; Hoare's first degree was Classics at Oxford.

So perhaps we aren't entering a new phase, merely returning to our roots with new tools.

[−] Sky_Knight 58d ago
I loved the story... It felt comforting in a way I haven't experienced in quite a while, with everyone around me stressed about becoming obsolete... I mean - thank you! It was a bit difficult to read, but it never felt generated, to be honest. There is a big difference between one-shot generated stories and this. People tend to forget that, as much as we don't want to admit it, we humans are simply generators of actions, reacting to a much larger context... LLMs are not yet even close to us, but they are actually way ahead of some of us. When someone has spent so much effort on context preparation, the least I can do is congratulate you for the effort and, in the end, a very nice story.
[−] lelandbatey 60d ago
Who can know what the world will look like as we "transition"? I sure don't, but I'm thankful the author here has taken a stab at it. I feel like this is one of the first stories I've seen to try to imagine this post-transition world in a way that isn't so gonzo as to be unrelatable. It was so relatable (the human-ness shining all the brighter in a machine-driven world) that I cried as I finished reading. I've felt very anxious about my own future, and to see one possible future painted so vividly, with such human and emotionally focused themes, triggered quite an emotional reaction. I think the feeling was:

> If the world must change, I hope at least we still tell such stories and share how we feel within that change. If so, come what may, that's a future I know I can live in.

[−] rswail 58d ago
I'm very impressed that this was written by an LLM.

Does that make the OP an "authoring mechanic"? Or an "AI editor"?

Douglas Adams had it right: the problem is not that the answer was useless, it was that people didn't know what the right question was.

[−] samman 58d ago
For the specific process that generated this story, I think a generous comparison could be made to something like photography. Yes, the machine is producing the resulting work, but under the guidance and curation of an artist who sets appropriate parameters and context for the machine. I'd submit that this can result in varying levels of authorship, much like the difference between a snapshot (one-shot?) and a carefully controlled studio photograph, depending on the depth of preparation, iteration, and curation the photographer performs.
[−] andreybaskov 58d ago
Reading this was a roller coaster for me.

Because of a bad habit of reading comments before the link, I knew it was AI. I read it regardless, and... I still enjoyed it!

I'm very much not a writer or a critic, so my bar for good writing is likely very low. Yet I can't shake this weird feeling that I truly enjoyed the writing and felt the emotions _while_ knowing it's LLM output.

I'm guessing the human touch applied afterwards is what made it pleasant to read. I'd love to see the commit history of the process. Fun times we live in!

[−] heap_perms 59d ago
I liked it. It has a similar feel to an Andy Weir "The Martian" type of novel.