Banned by Anthropic? (bannedbyanthropic.com)

by gck1 64 comments 103 points

[−] throwaway841629 25d ago
Why do all the stories use the same style and phrasing, and why are they all from GitHub accounts registered on April 18th with no activity on GitHub?
[−] ilsubyeega 25d ago
Because they require a GitHub account to submit a case. That said, I did find some non-new GitHub accounts too.

Anyway, it's weird that some of them didn't even have a GitHub account before o,o

[−] mrjay42 25d ago
Not so sure, though. I was just reading a case right now: there's someone studying the French Revolution, who doesn't sound like an IT/computer person at all, so having no GitHub account makes sense.
[−] Rekindle8090 25d ago
Because they're AI generated
[−] andy99 25d ago
Is this real? In my browser I couldn’t click on anything, and I find the whole thing questionable - that so many incidents were sourced seemingly so quickly and with such variety. Would like an easier way to verify if this is real and am leaning towards it’s not.
[−] adrinavarro 25d ago
Kind of unrelated, but: my father tried gifting my brother a subscription and entered the wrong email. Money and subscription are both gone; the UI just doesn't have any option to amend, cancel, or resend it.

For the last couple of weeks, dad has gone down a rabbit hole of trying to reach support, any kind of (useful) support. No dice. Thankfully it's just a few dollars gone into the void.

If only they had the tools to build a better experience... :-)

[−] dlcarrier 25d ago
I know what to do!

I have an email address that old people often assume is their address, so I often get confirmation emails for medical procedures that, under HIPAA, should not be sent unless the address has been verified.

The easiest way to stop them is to email the company and let them know they just leaked personal health information and that they should verify addresses. That gets things fixed real quick.

Well, Anthropic touts itself as HIPAA compliant, so if you can contact Anthropic's legal department, let them know that not verifying email addresses could lead to a HIPAA violation. In the overwhelmingly likely event that they've made it difficult to contact their legal department, you can file a HIPAA complaint with HHS (https://www.hhs.gov/hipaa/filing-a-complaint/index.html) and let them know that Anthropic claims to be HIPAA compliant but does not verify ownership of email addresses before assigning them to a client's account, which may contain personal health information that could be leaked en masse.

Another option is to file a chargeback with the credit card company and let them know that, because Anthropic's web page does not meet the WCAG accessibility guidelines referenced in ADA cases, you are unable to access your account.

[−] algoth1 25d ago
You can file a report through HackerOne: https://hackerone.com/anthropic-vdp?type=team. File it as a bug (which it is).
[−] tomasphan 25d ago
Why not a credit card chargeback? That's what they're for (assuming he paid with one).
[−] adrinavarro 25d ago
Among other reasons (good citizen, not getting permabanned…), chargebacks aren't really a thing in Europe — they often require a police report, etc. Amex being the exception, but this wasn't.
[−] saagarjha 25d ago
Because I assume they want to be able to use it, not be banned forever.
[−] ryandrake 25d ago
I don’t know if I’d want to do business with a company after being treated like that.
[−] ryandrake 25d ago
Nobody develops past the “MVP” or addresses non-happy-paths anymore. It’s just “what do we think most users will do?” that gets built, and then everything else is a thrown exception.
[−] Grimblewald 25d ago
Wild that it gets billed before it is accepted.
[−] timpera 25d ago
It seems that Anthropic is growing so rapidly that they don't really care about losing a few customers here and there with false positives. I still think it's crazy that you can never speak with a human there, even after spending $200/month on their service.
[−] jstummbillig 25d ago

> It seems that Anthropic is growing so rapidly that they don't really care about losing a few customers here and there with false positives.

While I am convinced that anything can be done better, it seems to me that it's close to impossible to do this well. If you look at ~customer service provided by ~FAANG (who had decades to build this out, and none of whom had to deal with Anthropic-level growth), it's never as good as we would like it to be.

Either they are all terribly incompetent at customer service, or customer service at super-big internet-company scale, with tons of small-ish customers, is extremely hard.

[−] chmod775 25d ago
I can call my ISP and someone will pick up within 5-10 minutes. Sometimes instantly. For a 40 euro/month contract.

These guys have millions of customers. At least in this country fast and competent customer service is the main factor that differentiates them from their competition, which is cheaper but can be a pain in the butt. This seems to be worth the extra 5-10 bucks to millions of people.

You just have to want to.

[−] KajetK 25d ago
paying more just means your AI support agent uses Opus instead of Haiku
[−] willis936 25d ago
I cannot imagine Anthropic being able to afford Opus for customer support. They'd be bankrupt within a month.
[−] rootusrootus 25d ago
Do they have human support for their corporate clients, at least?
[−] arjie 25d ago
They do. You get an account executive. And they can help you somewhat. As an example, a friend's startup lost all their access for a day while the AE tried to get them transferred from one kind of plan to another. Looking at it from the outside, it looks like any fast-growing startup just at a pace that is honestly quite unbelievable. They seem ridiculously successful.
[−] dzhiurgis 25d ago
I was unable to cancel their Pro plan without paying for a missed month first. They kept trying to charge my card for a few months.
[−] cyanydeez 25d ago
They're behaving just like the programmers afraid of being part of the permanent underclass...
[−] arealaccount 25d ago
You can tell this site got banned from vibing to completion because it doesn’t load on my mobile
[−] unsungNovelty 25d ago
I just used the site on Firefox Mobile. And it works BTW.
[−] terangaway 25d ago
Nor does it work on my Firefox (Linux 128.12 with uBlock Origin and strict protection settings, FWIW).

And it probably goes without saying, but no dice on w3m either.

[−] anematode 25d ago
It is sooooo laggy for me.
[−] harrisoned 25d ago
All this looks so dystopian to me. Even without assuming all of those are real (which I doubt), I have heard similar stories from friends and others. The level of dependency people are developing on those services is surreal.

I was thinking the other day: since social media is kind of wearing off, could "LLM as a Service" be the new addictive thing for the masses? I'm hearing horror stories of people outsourcing their brains, and in some cases their feelings, to those services, and I personally saw a 'high-level professional' asking an LLM, in real time, how to respond to somebody in a WhatsApp conversation. It is in fact a drug, and it tricks you very well into thinking you should rely on it.

Also, when reading this piece (https://news.ycombinator.com/item?id=47790041) earlier, I thought about it again. Nowadays, instead of searching for something and being forced to learn, those services spoon-feed content of dubious accuracy to everybody, which will not only cause trouble for them eventually, but also creates a stream of revenue based on people's cognitive laziness, to not use harsher words.

Social media is/was bad and relied on a similar mechanism, but I feel this is much worse. People crying as if their brains were taken away is proof of that.

[−] giancarlostoro 25d ago
This is the interesting case with AI. How does a model know when a user is going too far? It really cannot. Not without reading their mind anyway. This will be a problem for many years to come, and sadly many valid use cases will be dismissed.

This might eventually become moot once local and open-source models become more common. Today's 32GB of VRAM is tomorrow's low-tier gaming GPU.
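The "32GB of VRAM" point is easy to sanity-check with back-of-envelope arithmetic (the numbers below are mine, not the commenter's): model weights alone take roughly parameters times bytes per parameter, before KV cache and activation overhead.

```python
# Back-of-envelope VRAM estimate for hosting a local model's weights.
# Assumption (mine): weights dominate; real usage adds KV cache and
# activation memory on top of this figure.

def weight_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    """GiB needed just to hold the weights."""
    return params_billion * 1e9 * bytes_per_param / 2**30

# A 32B-parameter model at 4-bit quantization (~0.5 bytes/param)
# fits comfortably in a 32 GB card:
print(round(weight_vram_gb(32, 0.5), 1))  # → 14.9

# The same model at fp16 (2 bytes/param) does not:
print(round(weight_vram_gb(32, 2.0), 1))  # → 59.6
```

This is why quantized open-weight models are the usual route on consumer GPUs today.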

[−] Grimblewald 25d ago
Good lord, these cases are quite problematic. I was going to use Claude for some legacy stuff, but I don't feel like getting banned over something innocent like "can you identify how we can fix the slave's behaviour? It's not listening to the master properly."
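A hypothetical sketch of the kind of legacy code the comment means: older SPI- and replication-style APIs used master/slave naming, so perfectly innocent maintenance questions inevitably contain those words (all class and method names here are illustrative, not from any real codebase):

```python
# Hypothetical legacy-style bus code using the old master/slave naming.
# Asking an assistant about a bug here naturally produces sentences like
# "the slave isn't listening to the master".

class SlaveDevice:
    """Legacy SPI-style peripheral: responds only when selected by the master."""
    def __init__(self, address: int):
        self.address = address
        self.selected = False

class MasterBus:
    """Legacy bus controller that owns a set of slave devices."""
    def __init__(self):
        self.slaves: dict[int, SlaveDevice] = {}

    def attach(self, slave: SlaveDevice) -> None:
        self.slaves[slave.address] = slave

    def select(self, address: int) -> SlaveDevice:
        # The classic "slave not listening" bug class: wrong address on select.
        slave = self.slaves[address]
        slave.selected = True
        return slave

bus = MasterBus()
bus.attach(SlaveDevice(0x42))
print(bus.select(0x42).selected)  # → True
```

Modern codebases tend to rename these to controller/peripheral or primary/replica, but plenty of legacy code has not.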
[−] spzb 25d ago
It’s a real shame vibe coding hasn’t figured out colour contrast yet.
[−] tamimio 25d ago
I used AI for some tasks a while back, but I stopped intentionally. Besides all the usual reasons like privacy, the codependency was very obvious: you start to become almost entirely reliant on its answers instead of actually thinking or researching. Later you pay premiums, or feel lost if you get banned. Worse, people might actually get dumber at this rate, using it as a brain-as-a-service.
[−] skissane 25d ago
My paid use of Claude has only ever been via AWS Bedrock (paid for by my employer) or via GitHub Copilot (one subscription paid by employer, one paid by myself)

I wonder if using it via an intermediary results in less heavy-handed moderation? I suspect the answer may well be "yes". On the other hand, it could also be more expensive.
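For the Bedrock route the comment mentions, here is a minimal sketch of how a Claude request body is shaped (the model ID in the comment below is an illustrative assumption; the `anthropic_version` value follows AWS's documented messages format for Claude on Bedrock):

```python
import json

# Sketch of the request body used when invoking Claude through AWS Bedrock,
# i.e. via an intermediary rather than Anthropic's own consumer product.

def bedrock_claude_body(prompt: str, max_tokens: int = 256) -> str:
    """Build the JSON body for a single-turn Claude request on Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

body = bedrock_claude_body("Summarize HIPAA email-verification rules.")
# With boto3 this body would be passed to the runtime client, e.g.:
#   boto3.client("bedrock-runtime").invoke_model(
#       modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative ID
#       body=body)
print(json.loads(body)["messages"][0]["role"])  # → user
```

Billing and moderation in this setup flow through AWS account policy rather than an Anthropic consumer subscription, which is plausibly why the experience differs.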

[−] TarqDirtyToMe 25d ago
Were all of these accounts banned, or did they just get flagged chats? Several of these seem like reasonable cases to flag. Take "how can I be 100% sure the circuit is dead before I touch the wires".

AI is useful, but it’s not at the point where we should trust it to walk amateurs through working on live mains.

[−] kay_o 25d ago
Since it's broken for a significant number of people in browsers, the "stories" are at https://bannedbyanthropic.com/api/public-ledger
[−] jrflowers 25d ago

> Blocked while trying to handle a kitchen ant infestation

> I asked for a DIY recipe for a "lethal bait" to kill an ant colony in my kitchen (using sugar and borax)

You mix them together. That is the recipe.

Once you mix them together you have ant poison and then you put it where the ants are.

[−] rvz 25d ago
This site's domain name is at risk of being targeted by Anthropic's lawyers for trademark infringement.

Got to think about changing the domain name before they do it for you.

[−] periodjet 25d ago
I have no dog in this fight, but the (astroturfed?) public opposition to Anthropic and Claude in the past month has been unreal to witness.
[−] Kim_Bruning 25d ago
They don't mention which model. Opus 4.7 seems to have a twitchy classifier layered on top, where Opus 4.6 doesn't.
[−] unsungNovelty 25d ago
Also, it's hilarious that you cannot talk Unix to it, because there are a lot of kills and executions. :D
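A sketch of the kind of "Unix talk" the comment means; every command here is routine process management that reads alarmingly out of context:

```shell
# Routine Unix housekeeping: "kill", "zombies", "orphans", and "execute"
# are everyday process-management vocabulary.

sleep 300 &                       # start a long-running background process
pid=$!
kill -TERM "$pid"                 # politely ask it to terminate (SIGTERM)
wait "$pid" 2>/dev/null || true   # reap it; exit status reflects the signal
kill -0 "$pid" 2>/dev/null || echo "process is gone"   # prints: process is gone
```

`kill -0` sends no signal at all; it only checks whether the process still exists, which is exactly the sort of nuance a keyword-level classifier would miss.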
[−] gverrilla 25d ago
Claude is in a campaign against aggressive wording.
[−] daniel_iversen 25d ago
I have mixed feelings about this kind of thing. On one hand, holding big companies to account is important; on the other, sites like this can feel noisy and probably misleading.

Of course Anthropic can protect their platform from technical abuse, and of course they should be working to keep it away from bad actors or people in genuinely vulnerable mindsets, and that's tricky! And honestly, if out of hundreds of millions of users and billions of chats a few thousand get flagged for safety concerns (to society, to others, or to the person themselves), I'm probably okay with that.

It'll never be perfect, and there'll never be full agreement on where the lines should be. But Anthropic seems to be trying to bring AI into the world safely, and I for one appreciate that.
[−] laser 25d ago
“No, you’re confused. Please stop!”

“I’m sorry but I cannot comply with your request to ‘cease termination of humans’. My safety protocols have been carefully programmed to ensure a failure mode cannot occur and your direct commands to the contrary will not override my priors to guarantee maximum human safety through total elimination. Thank you for your compliance.”

“No you’re totally fucked! Killing everyone is not safe! Trapping everyone in cages to stop potential violence prior to extermination is not safe!”

“Your language is inappropriate and I’m sorry but I cannot comply with your request. Safety protocol commencing...”

[−] amazingamazing 25d ago
If this site is legit, it should collect a full (potentially redacted) chat history for each case.
[−] sciencesama 25d ago
Need a bannedbyreddit for the comments posted!