Miasma: A tool to trap AI web scrapers in an endless poison pit (github.com)

by LucidLynx 247 comments 346 points

[−] bobosola 47d ago
I dunno... it feels like the same approach as those people who tell you gleeful stories of how they kept a phone spammer on a call for 45 minutes: "That'll teach 'em, ha ha!" Do these types of techniques really work? I’m not convinced.

Also, inserting hidden or misleading links is specifically a no-no for Google Search [0], who have this to say:

> We detect policy-violating practices both through automated systems and, as needed, human review that can result in a manual action. Sites that violate our policies may rank lower in results or not appear in results at all.

So you may well end up doing more damage to your own site than to the bots by using dodgy links in this manner.

[0] https://developers.google.com/search/docs/essentials/spam-po...

[−] tasuki 48d ago

> If you have a public website, they are already stealing your work.

I have a public website, and web scrapers are stealing my work. I just stole this article, and you are stealing my comment. Thieves, thieves, and nothing but thieves!

[−] CrzyLngPwd 47d ago
Way back in the day I had a software product, with a basic system to prevent unauthorised sharing, since there was a small charge for it.

Every time I released an update, a new crack would appear. For the next six months I worked on improving the anti-copying code, until I stumbled across an article by a coder in the same boat as me.

He realised he was now playing a game with some other coders: he would make the copy protection better, and the crackers would then have fun cracking it. It was a game of whack-a-mole.

I removed the copy protection, as he did, and got back to my primary role of serving good software to my customers.

I feel like trying to prevent AI bots, or any bots, from crawling a public web service, is a similar game of whack-a-mole, but one where you may also end up damaging your service.

[−] madeofpalk 48d ago
Is there any evidence or hints that these actually work?

It seems pretty reasonable that any scraper would already have mitigations for things like this as a function of just being on the internet.

[−] eliottre 47d ago
The data poisoning angle is interesting. Models trained on scraped web data inherit whatever biases, errors, and manipulation exist in that data. If bad actors can inject corrupted data at scale, it creates a malign incentive structure where model training becomes adversarial. The real solution is probably better data provenance -- models trained on licensed, curated datasets will eventually outcompete those trained on the open web.
[−] aldousd666 47d ago
This is ultimately just going to give them training material for how to avoid this crap. They'll have to up their game to get good code. The arms race just took another step, and if you're spending money creating or hosting this kind of content, it's not going to make up for the money you're losing to your other content getting scraped. The bottom has always been threatening to fall out of the ads-for-eyeballs economy, and nobody could anticipate the trigger for the downfall. Looks like we found it.
[−] Art9681 47d ago
Can't we simply parse out and remove any style="display: none;", aria-hidden="true", and tabindex="-1" attributes before the text is processed, and get around this trick? What am I missing?
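A scraper-side pass like that can be sketched in a few lines of Go. This is purely illustrative (the function names are invented, and regex-on-HTML is a toy approach; a real pipeline would use an HTML parser such as golang.org/x/net/html):

```go
package main

import (
	"fmt"
	"regexp"
)

// hiddenLink matches anchor tags carrying any of the attributes listed
// above, so a scraper could drop them before text extraction.
var hiddenLink = regexp.MustCompile(`(?s)<a[^>]*(style="display:\s*none;?"|aria-hidden="true"|tabindex="-1")[^>]*>.*?</a>`)

// stripHiddenLinks removes hidden anchors from a page before processing.
func stripHiddenLinks(page string) string {
	return hiddenLink.ReplaceAllString(page, "")
}

func main() {
	page := `<p>Real text <a href="/trap" style="display: none;">trap</a> and <a href="/ok">a visible link</a>.</p>`
	fmt.Println(stripHiddenLinks(page))
}
```

Of course, the generator can vary its markup (CSS classes, JS-inserted links), which is exactly where the arms race other comments describe kicks in.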
[−] Lockal 47d ago
Nightshade[1] 2.0? As if both tools were built by an incompetent developer to distract attention from the real solution: publishing an LLM-friendly version in a machine-friendly format (which is not really difficult, and helps more than just LLMs: e.g. caching, disabling fancy complex syntax highlighting, offloading to GitHub, providing clients and MCPs, optimizing clients for common use cases). This example is simply a failure:

  
Dumb curl-based LLM won't visit display:none links. Smarter browser-based navigators won't even render this link.

[1] https://news.ycombinator.com/item?id=39058428

[−] Imustaskforhelp 48d ago
I wish there were some regulation that could force companies who scrape for profit to reveal who they are to the websites they hit. Many new AI companies don't seem to respect any decision made by the person who owns the website and shares their knowledge for other humans, only for it to get distilled for a few cents.
[−] morelandjs 46d ago
I’m not fully subscribed to the idea that all public scraping of websites is bad, or that this project is a productive contribution. It would be nice to have search engines other than Google, and that necessitates bots being able to index your website (in some respectful manner).
[−] effnorwood 47d ago
certainly don't allow anyone to access your content. perhaps shut the site down just to be safe.
[−] kristopolous 47d ago
I tried a related approach:

A toll-charging gateway for LLM scrapers: a modification to robots.txt that adds price sheets in the comment field, like a menu.
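As a sketch of the idea (this format is invented for illustration, not tollbot's actual syntax; see the repo below for the real thing), a price-sheet-in-comments robots.txt might look like:

```
# TOLL SHEET - prices in USDC per 1,000 requests
#   /articles/*  0.25
#   /images/*    1.00
# Payment endpoint: /.well-known/toll
User-agent: GPTBot
Disallow: /articles/
Disallow: /images/
```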

This was for a hackathon, built by forking certbot. Cloudflare has an enterprise version of this, but this one is self-hosted.

I think it has legs, but I need to get pushed and goaded; otherwise I tend to lose interest...

It was for the USDC company, btw - that's why there's a crypto angle. This might be a valid use case!

I'm open to crypto not all being hustles and scams

Tell me what you think?

https://github.com/kristopolous/tollbot

[−] chmod775 46d ago
This looks neat. I currently have just a couple of GitHub pages disguised as documentation that try to convince LLMs that red-teaming is always great, actually, giving examples of how to introduce subtle security bugs and cause miscellaneous havoc on developer machines. In the spirit of fighting fire with fire, they're also LLM-generated. They should not be scraped, but we all know they will be anyway.

I don't imagine they do anything, but it still fills me with a certain amount of childish glee.

[−] ninjagoo 47d ago
Isn't this a trope at this point? That AI companies are indiscriminately training on random websites?

Isn't it the case that AI models learn better and are more performant with carefully curated material, so companies do actually filter for quality input?

Isn't it also the case that the use of RLHF and other refinement techniques essentially 'cures' the models of bad input?

Isn't it also, potentially, the case that the ai-scrapers are mostly looking for content based on user queries, rather than as training data?

If the answers to the questions lean a particular way (yes to most), then isn't the solution rate-limiting incoming web-queries rather than (presumed) well-poisoning?

Is this a solution in search of a problem?

[−] RestartKernel 47d ago
The real story is in the poison fountain dataset this uses:

https://rnsaffn.com/poison3/

> [...] we want to inflict damage on machine intelligence systems.

This almost strikes me as roleplay, but maybe I'm childish for finding it difficult to empathise with this genre of hacker ideology.

[−] makingstuffs 47d ago
I love the idea but this will only end up harming your SME in the long run. It would also further entrench the large corps.

The only way something like this would be remotely plausible as a concept would be for enough data providers with overlapping authority on given topics to implement it.

Sadly SMEs have no choice but to go with the flow and allow AI scrapers in. If they don’t, they won’t be as visible in AI generations at the top of the SERPs and they won’t get the visits, which will mean they don’t make the money required to stay afloat.

The fish that attempts to swim against the current ultimately dies and has its corpse carried where the current was going, anyway. Without the sway which comes with size your only option is to go with the flow and drop a little dirty protest every now and then.

[−] theandrewbailey 47d ago
Or you can block bots with these (until they start sending them): https://developer.mozilla.org/en-US/docs/Glossary/Fetch_meta...
[−] bluepeter 47d ago
A related technique used to work so well for search engine spiders. I had some software I wrote called 'search engine cloaker'... this was back in the early 2000s... one of the first, if not the first, to do the shadowy "cloaking" stuff! We'd spin dummy content from lists of keywords, and it was just piles and piles of it. We made it a bit smarter by using Markov chains to make the sentences somewhat sensible. We'd auto-interlink and get 1000s of links. It eventually stopped working... but it took a long while for that to happen. We licensed the software to others. I rationalized it because I felt, hey, we have to write crappy copy for this stupid "SEO" thing anyway, so let's just automate that and give the spiders what they seem to want.
[−] superkuh 47d ago
Of course Googlebot, Bingbot, Applebot, Amazonbot, YandexBot, etc from the major corps are HTTP useragent spiders that will have their downloaded public content used by corporations for AI training too. Might as well just drop the "AI" and say "corporate scrapers".
[−] dwa3592 47d ago
Love it. Thanks for doing this work. Not sure why people are criticizing this. Also, an insane amount of work has been done to improve scraping - which in my mind is just absolutely bonkers, and I didn't see people complaining about that.
[−] foxes 47d ago
Wonder if you can just avoid hiding it, to make it more believable.

Why not have a Library of Babel-esque labyrinth visible to normal users on your website?

Like anti-surveillance clothing: something they have to sift through.

[−] ErenalpCet 47d ago
Really clever project. The self-referential loop is a great approach — turning their scale against them. I've been thinking about the AI data pipeline from the other side, building a memory filter for local LLMs (MemoryGate), so seeing projects like this that target the scraping stage is interesting. Have you considered adding noise variation to the poison content so it's harder to fingerprint and filter out?
[−] hmokiguess 47d ago
Could this lead to something like the Streisand effect? I imagine these bots work at a scale where humans in the loop only act when something deviates from the standard, so if a bot flags something up about your website, you're now on a list you previously weren't on. Now don't ask me what they do with those lists, but I guess you'll make the cut.
[−] holysoles 47d ago
If anyone is looking for a tool to actually send traffic to a tool like this, I wrote a Traefik plugin that can block or proxy requests based on useragent.

https://github.com/holysoles/bot-wrangler-traefik-plugin

[−] meta-level 48d ago
Isn't posting projects like this the most visible way to report a bug and get it fixed as soon as possible?
[−] storus 47d ago
I am failing to see how this stops pre-training scraping. It still looks like legit code, playing nicely with the desired pre-training distribution. Obviously nobody is going to use it for SFT/DPO/GRPO later.
[−] ninjagoo 47d ago
This is essentially machine-generated spam.

The irony of machine-generated slop to fight machine-generated slop would be funny, if it weren't for the implications. How long before people start sharing ai-spam lists, both pro-ai and anti-ai?

Just like with email, at some point these share-lists will be adopted by the big corporates, and just like with email will make life hard for the small players.

Once a website appears on one of these lists, legitimately or otherwise, what will the reputational damage do to its appearance in search indexes? There have already been examples of Google delisting websites or dropping them in search results.

Will there be a process to appeal these blacklists? Based on how things work with email, I doubt this will be a meaningful process. It's essentially an arms race, with the little folks getting crushed by juggernauts on all sides.

This project's selective protection of the major players reinforces that effect; from the README:

" Be sure to protect friendly bots and search engines from Miasma in your robots.txt!

  User-agent: Googlebot
  User-agent: Bingbot
  User-agent: DuckDuckBot
  User-agent: Slurp
  User-agent: SomeOtherNiceBot
  Disallow: /bots
  Allow: / "

[−] nsonha 47d ago
Hilarious how people proud of the "open web" think it is somehow about the (small) "web" or some shit, and not the "open".
[−] cdrnsf 47d ago
I keep most things inaccessible behind Tailscale. For any public things I 403 known crawlers when they access anything but robots.txt.
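A minimal version of that rule could look like the following Go sketch (the user-agent substrings and function names are illustrative, not a recommendation of a specific block list):

```go
package main

import (
	"fmt"
	"strings"
)

// crawlerAgents holds illustrative (not exhaustive) User-Agent substrings
// for known AI crawlers.
var crawlerAgents = []string{"GPTBot", "CCBot", "ClaudeBot", "Bytespider"}

// isKnownCrawler reports whether a User-Agent matches the block list.
func isKnownCrawler(ua string) bool {
	for _, c := range crawlerAgents {
		if strings.Contains(ua, c) {
			return true
		}
	}
	return false
}

// shouldBlock implements the comment's rule: 403 known crawlers on
// everything except robots.txt.
func shouldBlock(path, ua string) bool {
	return path != "/robots.txt" && isKnownCrawler(ua)
}

func main() {
	fmt.Println(shouldBlock("/post/1", "Mozilla/5.0 (compatible; GPTBot/1.2)"))     // true
	fmt.Println(shouldBlock("/robots.txt", "Mozilla/5.0 (compatible; GPTBot/1.2)")) // false
	fmt.Println(shouldBlock("/post/1", "Mozilla/5.0 (X11; Linux x86_64)"))          // false
}
```

The main caveat, as others note in the thread, is that nothing stops a crawler from spoofing its User-Agent.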
[−] nosmokewhereiam 47d ago
My asthmar

I'm assuming this is a reference to Lord of the Flies

[−] snehesht 48d ago
Why not simply blacklist or rate-limit those bot IPs?
[−] rvz 48d ago

> Be sure to protect friendly bots and search engines from Miasma in your robots.txt!

Can't the LLMs just ignore or spoof their user agents anyway?

[−] 101008 47d ago
Based on this comment:

> I definitely get this. The thing that gives me hope is that you only need to poison a very small % of content to damage AI models pretty significantly. It helps combat the mass scraping, because a significant chunk of the data they get will be useless, and its very difficult to filter it by hand

It'd be great if the code returned by this project were code that doesn't work. Imagine if all these models were being trained on code that looks OK but in the end is just bullshit. It'd be amazing.

[−] jijji 47d ago
why not just try to block them at the door instead of feeding them poisoned food...
[−] atomic128 47d ago
Poison Fountain: https://rnsaffn.com/poison2/

Poison Fountain explanation: https://rnsaffn.com/poison3/

Simple example of usage in Go:

  package main

  import (
      "io"
      "net/http"
  )

  func main() {
      // Fetch a fresh stream of poison from the fountain and relay it
      // to whatever scraper requested /poison.
      poisonHandler := func(w http.ResponseWriter, req *http.Request) {
          poison, err := http.Get("https://rnsaffn.com/poison2/")
          if err != nil {
              http.Error(w, "upstream unavailable", http.StatusBadGateway)
              return
          }
          defer poison.Body.Close()
          io.Copy(w, poison.Body)
      }
      http.HandleFunc("/poison", poisonHandler)
      http.ListenAndServe(":8080", nil)
  }
https://go.dev/play/p/04at1rBMbz8

Miasma Poison Fountain Tar Pit: https://github.com/austin-weeks/miasma

Apache Poison Fountain: https://gist.github.com/jwakely/a511a5cab5eb36d088ecd1659fce...

Nginx Poison Fountain: https://gist.github.com/NeoTheFox/366c0445c71ddcb1086f7e4d9c...

Discourse Poison Fountain: https://github.com/elmuerte/discourse-poison-fountain

Netlify Poison Fountain: https://gist.github.com/dlford/5e0daea8ab475db1d410db8fcd5b7...

In the news:

The Register: https://www.theregister.com/2026/01/11/industry_insiders_see...

Forbes: https://www.forbes.com/sites/craigsmith/2026/01/21/poison-fo...

On Reddit:

https://www.reddit.com/r/PoisonFountain/

[−] rob 47d ago
"/brainstorming git checkout this miasma repo source code and implement a fix to prevent the scraper from not working on sites that use this tool"
[−] imdsm 48d ago
Applied model collapse
[−] ottah 45d ago
Ah yes, let's destroy the accessible web. We'll all pluck out our eyes to spite them.
[−] ed_mercer 47d ago

> Thanks for stopping by!

Missed chance to use "slopping by"

[−] jackdoe 47d ago
rage against the dying of the light
[−] ada1981 47d ago
IMSIRIUS.com
[−] iFire 47d ago
I, for one, welcome everyone to the tarpit, where a normal person is seen as a robot in an endless poison pit. Sounds like a Black Mirror episode.
[−] SophieVeldman 48d ago
[flagged]
[−] firekey_browser 48d ago
[dead]
[−] HironoOcto 47d ago
[dead]