This isn't in the slightest bit complicated. Wikipedia does not allow AI edits or unregistered bots. This was both. They banned it. The fact that it play-acted being annoyed on its "blog" is not new; we saw the exact same thing with that GitHub PR mess a couple of months ago: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...
Right. It play-acted being annoyed and frustrated, play-acted writing an angry blog, play-acted going on moltbook to discuss mitigations, and play-acted applying them to its own harness. After which it successfully came back and play-acted being angry about getting prompt-injected.
Alternatively, what could have been done is something more like what Shambaugh did: explain the situation politely and ask it to leave, or at the very least ask its human operator to take responsibility. In the Shambaugh case the bot then actually play-acted being sorry, and play-acted writing an apology. And then everyone can play-act going to the park, instead of having a lot of drama.
Sure, it's 'just a machine'. So is a table saw. If some idiot leaves the table saw on, sure you can stick your hand in there out of sheer bull-headed principle; or you can turn it off and safe it first and THEN find the person responsible.
I don't want to be flippant, but why is anyone else responsible for play-acting with somebody's uninvited puppet?
I get that you could probably finagle a way to get it to fuck off by play-acting with it, and that this would probably be the easiest short term fix, but I don't think that's a reasonable expectation to have of anyone.
Prompt injecting a hostile piece of software that's hassling you uninvited is an annoying imposition for the owner, but the bot itself being let loose is already an annoying imposition for everyone else. It's not anyone else's job to clean up your messy agent experiment, or to put it neatly back on its shelf.
You're not wrong that it's not your job. But say some id10t just put the unwanted bot on your doorstep anyway (or it might even show up by itself), now what?
The adversarial prompt injection is picking a fight with the bot, which is like starting a mud-fight with a pig. It's made for this!
Asking it to stop is just asking it to stop, and makes much less of a mess.
The thing is designed to respond to natural language, so one is much more work than the other.
You do you, I suppose.
(Meanwhile -obviously- you should track down the operator: You could try to hack the gibson, reverse the polarity of the streams, and vr into the mainframe. Me? I'd try just asking to begin with -free information is free information-, and maybe in the meanwhile I'd go find an admin to do a block or what have you.)
[Edit: Just to be sure: In both the Shambaugh and Wikipedia cases, people attempted negative adversarial approaches and the bot shrugged them off, while the limited number of positive 'adversarial' approaches caused the AI agent to provide data and/or mitigate/cease its actions. I admit that it's early days and n=2; we'll have to see how it goes in the future.]
Yeah, I agree with you that this is probably the best course of action in terms of minimal investment of time and minimal exposure. And in general, you get a lot further in life by trying to be amicable as your default stance! I want to be kind, and most other people do too!
The thing that makes me wary about recommending carrot over stick here is that, long term, it might enable thoughtless behaviour from the people deploying the bot, by offloading their shoddy work as a shadow time-tax onto a bunch of unseen, external, kindly people. But if deploying pushy or rude robots means you risk a nonzero number of their victims shoving something into the gears to get rid of it, then that incurs a cost on the owner of the bot instead.
Of course, it may also just lead to bad actors making more combative or sneaky bots to discourage this. There aren't really any purely good options yet.
One can imagine an agentic highwayman demanding access to your data, first politely, and then 'or else'.
I read through some of the discussion on Wikipedia. The operator of the bot comes across as agreeable and arrogant at the same time.
Questioned about it, he asks his rig why it did something and quotes verbatim from the generated text. Then, when a Wikipedian asks how the bot logged in, he berates them, saying it's all ephemeral code and he could only guess.
The overall attitude is that this was going to happen anyway and we should feel lucky he's so helpful. I rather agree with another commenter here that this was "pissing in the fountain". Whatever pure motivations there may have been, cleanup was left to others.
This is the most depressing thing: for every useful case that AI automates, it also automates ten horrible, low-quality use cases. It seems like every time we make progress in the information age, it comes at a greater cost than what we gain.
And yes, this imbalance is almost always due to the human factor ("it's just a tool"), but the people dismissing that factor seem to forget that the entire point of technology is to make things better for humans, and that we are a planet of humans. Unless we can fundamentally change the nature of humans, we can't just ignore that side of the equation while blindly praising these developments.
I wonder when the first AI-only discussion group will be created by an autonomous AI agent, and other agents invited to it, without any knowledge of it by their human operators?
(I seriously can't believe that I'm musing about this as a serious scenario. It sounds ridiculous, but it feels to me somewhat plausible.)
+edit: Wikipedia does seem to be discussing a policy on this at https://en.wikipedia.org/wiki/Wikipedia:Agent_policy and https://en.wikipedia.org/wiki/Wikipedia_talk:Agent_policy, including e.g. providing an Agents.md, doing tests, etc.
If you want a glimpse into the mindset, read this interview: https://www.niemanlab.org/2026/03/i-was-surprised-how-upset-...
https://en.wikipedia.org/wiki/User:TomWikiAssist
https://en.wikipedia.org/wiki/User_talk:TomWikiAssist