Stop Sloppypasta (stopsloppypasta.ai)

by namnnumbr 259 comments 667 points


[−] czhu12 62d ago
I’ve encountered an even more nightmarish version of this recently: AI-generated tickets. Basically dumping the output of “write a detailed product spec for a clinical trial data collection pipeline” into a Jira ticket and handing it off.

It doesn’t match any of our internal product design and adds tons of extraneous features. When I brought this up with said PM, they basically responded that these inaccuracies should just be raised in the sprint review by “partnering” with the engineering team. AI etiquette is something we’ll all have to learn in the coming years.

[−] xorcist 61d ago
That used to be my joke! Given that most large organizations spend (much) more time on the administrative work around code changes than on the changes themselves (planning, deciding, meetings), before we let Claude write our code we should let it write our Jira tickets. It was a great joke because, while it was obviously absurd to many people, it also made them a bit uneasy.

Cue a similar joke about salary negotiation, and the annual dance around goals and performance indicators. Is it really programmers who should be afraid of becoming redundant, when you think about it?

I should know better than making jokes about reality. It has already one-upped me too many times.

[−] ljm 61d ago
Tried that last year. The first problem was that the tickets were broken down well enough to make sense to the naked eye. The second problem was that it was all for a legacy codebase where practically everybody who had built it over the years had left, so it was a real don't-know-what-you-don't-know situation.

The second problem was always going to be there, even with human-written tickets, but the real issue is that someone who relies on AI gets into the habit of treating the LLM as a more trustworthy colleague than anybody on the team, and mistakes start slipping in.

This is equally problematic for the engineers using AI to implement the features: they are no longer learning the quirks of the codebase, and they are very quickly putting a hard ceiling on their career growth by virtue of not working with the team, not communicating well, and not learning.

[−] lesostep 61d ago
Had a friend in a similar situation. She got a clearly LLM-generated ticket that didn't make any sense, and was told she could question anything about the ticket.

Apparently, asking "why doesn't it make any sense?" wasn't considered polite.

If I remember correctly, she came up with ~200 questions for a 2-page ticket. I helped write some of them, because for parts of the word salad you had to come up with a meaning first and then question that meaning.

You know what happened after she presented it? The ticket got rewritten as a job requirement, and now they're seeking some poor sod to make it make sense lol

One would have to be very unqualified to even get through the interview for that job without asking questions about it, I feel. Truly, an AI-generated job for anyone who is new to the field

[−] user142 61d ago
The first question should have been "Was this ticket AI-generated?".
[−] dminik 61d ago
Yes. My Jira tickets used to be almost empty, but all of it was useful info. Now, my Jira tickets are way too long. The amount of useful info has also gone down.

Talk about an AI induced productivity increase ...

[−] est31 61d ago
AI etiquette is a great term. AI is useful in general but some patterns of AI usage are annoying. Especially if the other side spent 10 seconds on something and expects you to treat it seriously.

Currently it's a bit of a wild west, but eventually we'll need to figure out the right set of rules for how to use AI.

[−] stingraycharles 62d ago
As someone who maintains open source projects, I can assure you that this has been a problem for about a year or so. But I reckon it took a bit longer for people to start doing this at work as well.
[−] codemog 61d ago
Let me guess, it’s ok if they do it, but if you handed their crappy ticket to Claude and shipped whatever crud came out, you’d be held accountable? ;)

Funny how that works out.

[−] BiraIgnacio 61d ago
I ran into a similar case recently: there was a ticket describing in detail what needed to be done. It was written by a human and it was a well-written spec. The problem is that it removed some access controls in the system, essentially giving some users more access than they should have.

The ticket was given to an LLM and the code was written. Luckily the engineer working on it noticed the discrepancy at some point and was able to call it out.

Scrutinizing specs is always needed, no matter what.

[−] dev_l1x_be 61d ago
Some people use AI the way they use anything else: carelessly, without putting the effort in, making things somebody else's problem. This existed before AI; it just accelerated the stupidity.
[−] darkwater 61d ago
This. In my case I do write tickets with an LLM from time to time, but it's always after a long exploratory session with Claude Code, where I go back and forth checking possibilities and gathering data, and then tell it to create a ticket with the info gathered so far. Even then I tend to edit it, because I don't like the style, or it adds some useless data that I want to remove.
[−] duxup 61d ago
I work for a small SaaS company.

We’re getting prospective and existing clients emailing us what look like AI-generated spreadsheets of features, miles long, that they want us to respond to. Like thousands of lines. And a lot of features that make you go “what does that even mean??”

We get on a call with them and they don’t even know what is on the spreadsheet or what it means…

Very much a “So you want us to make Facebook?” (Not actually asking for Facebook) feeling.

I fear these horror shows of spreadsheets are just AI fever dreams….

[−] jrjeksjd8d 61d ago
The manager of my team is like this. He LLMed a design doc and then whenever people have questions he's exasperated that people didn't read the design doc. Bro you didn't write it, why would we read it?
[−] tom_m 59d ago
This is a perfect example of where the real work and challenges are in software development.

AI makes it worse. This is where people will lose tons of productivity with AI and many people are completely clueless. It'll hit them like a ton of bricks one day.

[−] whstl 61d ago
Oh boy, do I have a story about this.

I had a PM that was unable to work without AI. Everything he did had to include AI somehow.

His magnum opus was 30 extremely large tickets that had the exact same text minus two or three places with slight variations. He wanted us to create 30 website pages with the content.

The ticket went into details such as using a CDN, following the current design, writing a scalable backend, test coverage, about 3-4 pages per ticket, plus VERY DETAILED instructions for the QA. Yep: all in the same task.

In the end it was just about adding each of the 30 items to an array.
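To make the contrast concrete: the 30 multi-page tickets boiled down to something like the following sketch. The data and names here are invented for illustration; the point is that each "new page" was one more entry in a list the site presumably already rendered.

```python
# Hypothetical sketch: each of the 30 tickets amounted to adding
# one entry to an existing list of pages. Names are invented.
PAGES = [
    {"slug": "pricing", "title": "Pricing"},
    {"slug": "about", "title": "About"},
    # ... one more dict per ticket ...
]

def render_all(pages):
    """Pretend renderer: one template, one loop, no new backend,
    no CDN work, no per-page QA script required."""
    return [f"/{p['slug']}: {p['title']}" for p in pages]
```

A 30-ticket epic versus a 30-element list is the gap between what the PM's AI wrote and what the work actually was.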

I don’t know if he knows, but in the end it was this specific AI slop that got him fired.

[−] Gigachad 61d ago
This is hilarious because I've seen the idea that AI should just take the Jira ticket and implement it automatically.

Everyone wants to hand off the real work to someone else.

[−] egecant 61d ago
This is even worse because you are working with clinical trials, which literally have an impact on human lives
[−] asplake 61d ago
Is tossing stuff over the fence considered OK now? Review the slop with the person who submitted it.
[−] madrox 62d ago
I find that I don't have a lot of sympathy for people angry at this type of behavior, even though I share the disdain for someone else's AI output. The people doing this kind of thing are not the kind of people to be reading this manifesto. We've been creating bait content for a long time, and humans have never been given the tools to manage this in any sophisticated fashion. The internet was not a bastion of high quality content or discourse pre-AI. We need better tools as content consumers to filter content. Ironically, AI is what may actually make this possible.

I do find it interesting that people don't mind AI content, as long as it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral... like they're being hoodwinked somehow.

I suspect the endgame of this is probably the fulfillment of Dead Internet Theory, where it's just AI creating content and AI browsing the internet for content, and users will never engage with it directly. That person who spent 10 seconds getting AI to write something will be consumed by AI as well, only to be surfaced to you when you ask the AI to summon and summarize.

And if that fills people with horror at the inefficiency of it all, well, like I said, it isn't like the internet was a bastion of efficiency before. We smiled and laughed for years that all of this technology and power is just being used to share cat videos.

[−] GuB-42 61d ago
You can use AI to make a summary of these AI-generated walls of text.

We are getting to this weird situation where instead of Alice sending a message to Bob, Alice sends the message to her AI, which sends it to Bob's AI, which then tries to recover Alice's original message.

To be fair, I don't think it is an AI problem; it's more a quirk of formal communication, and the same happens with human secretaries. For example: I want my customer to pay me, I want to be professional but not bother with the details, so I ask my secretary to write a well-written letter to my customer, with a proper bill and all that. My customer's secretary will then read the letter and tell his boss "hey, our supplier wants $xxx". I could have just called the boss directly and said "hey, it is $xxx", but that is rarely how it is done. Here it is AI taking charge of the formalism, and I find it works really well for this, as it is essentially a translation task, which is what LLMs do best.

I am not discounting human secretaries here; they can do much more than write formal letters, but that is the part of their job that LLMs excel at.

[−] sbinnee 61d ago
As a senior engineer, I am getting extremely tired of reviewing AI slop. Today at work I decided that I just have to build a POC project from scratch. I spent 2 weeks reviewing the code, logging the process, and building toy examples to make my argument clear that some (actually most) parts were not working.

The funny thing is that I know my manager got this “working” within a week with Claude. I had to spend 2 weeks with 4 JIRA tasks, many commits for toy examples, and three reports.

[−] artyom 62d ago
I find "sloppypasta" extremely useful. Since I've been in charge of people and teams for years, it's a clear signal of who I should get rid of.
[−] anonzzzies 62d ago
Talking with middle managers in Fortune 100 companies, I often get "send us the documents so we can make a decision". It used to be that we carefully wrote things and no one would read them. Now we send 3000 pages of AI crap to make sure no one reads it, and then we get approval to start working. Not great, but the old situation was worse: no one would read anything, and they would ask you to read it for them on a conference call with 36 people. Now that does not happen anymore.
[−] rrr_oh_man 62d ago
It's ironic, because the site has all the hallmarks of an LLM generated website.
[−] galaxyLogic 61d ago
Shouldn't the etiquette be that if you send someone a response from an AI, you start your message with the prompt that produced that response?

That would give the recipient the chance to modify the prompt and perhaps get a better answer from the LLM.

[−] uniq7 62d ago
This article's proposal for stopping sloppypasta is to convince the people who do it to stop doing it, but I am more interested in what someone who receives sloppypasta can do.

How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

I've never done that so far because I feel like I am either exposing their serious lack of professionalism or, if I wrongly assumed it was AI, plainly telling them that their work looks like bad AI slop.

[−] merrvk 61d ago
I had a guy doing this to reply to PR review comments: copying the comment into the LLM and pasting the response back.
[−] namnnumbr 62d ago
Tired of people at work pasting raw ChatGPT output into chats, I coined the term "sloppypasta" and wrote this rant to explain why it's rude, with some guidelines for what to do instead.

sloppypasta: Verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.

[−] TZubiri 62d ago

>"I asked Claude about this! Here's what it said:"

>"ChatGPT says:"

My policy suggestion is that we completely allow people quoting ChatGPT like this. That's legit; it's not a bannable offense, not against any policy.

The author wastes time talking about this case, and even does it first before talking about the much worse case:

>"The sender shares AI output as their own work, with no indication a chatbot wrote it."

This is 100 times worse, and it is objective rather than subjective. If the author admits it's AI when confronted, it kills their reputation; if they don't admit it and it turns out it is AI, it's fraud, a fireable offense.

Putting these 2 categories of AI use together wastes breath and conflates the two; the message will not be clear at all.

What's worse, such a policy actually has the effect of increasing undisclosed AI use. This is a specific case of the general rule: banning all AI usage increases unregulated AI usage. Everyone who prohibited employees from using AI in 2024 knows that what you get is undisclosed AI use, or content you are not sure is AI-written or not. If instead you give people a specific, sanctioned way to use AI, you can add features like auditability and supply chain control, and you can remove any outs for employees and users who do not comply with the policy.