Not so sure though, I was just reading the case right now: there's this dude studying the French Revolution, who doesn't sound like an IT/computer person at all, so no GitHub account makes sense.
Is this real? In my browser I couldn’t click on anything, and I find the whole thing questionable: so many incidents sourced seemingly so quickly and with such variety. I’d like an easier way to verify whether this is real, and I’m leaning towards it’s not.
Kind of unrelated but: my father tried gifting my brother a subscription but entered the wrong email. Money and subscription are both gone — UI just doesn't have the option of amending, cancelling or resending it.
For the last couple of weeks, dad's gone down a rabbit hole of trying to reach support, any kind of (useful) support. No dice. Thankfully it's just a few dollars gone into the void.
If only they had the tools to build a better experience... :-)
I have an email address that old people often assume is theirs, so I often get confirmation emails for medical procedures that, under HIPAA, should not be sent unless the address has been verified.
The easiest way to stop them is to email the company and let them know they just leaked personal health information and that they should verify addresses. That gets things fixed real quick.
Well, Anthropic touts itself as HIPAA compliant, so if you can contact Anthropic's legal department, let them know that not verifying email addresses could lead to a HIPAA violation. In the likely event that they've made it difficult to contact their legal department, you can file a HIPAA complaint with HHS (https://www.hhs.gov/hipaa/filing-a-complaint/index.html) and let them know that Anthropic claims to be HIPAA compliant but does not verify the ownership of email addresses before associating them with a client's account, which may contain personal health information that could be leaked en masse.
Another option is to file a chargeback with the credit card company, and let them know that due to Anthropic's web page not complying with the WCAG accessibility guidelines (as referenced under the ADA), you are unable to access your account.
Among other reasons (good citizen, not getting permabanned…), chargebacks aren't really a thing in Europe — they often require a police report, etc. Amex being the exception, but this wasn't.
Nobody develops past the “MVP” or addresses non-happy-paths anymore. It’s just “what do we think most users will do?” that gets built, and then everything else is a thrown exception.
It seems that Anthropic is growing so rapidly that they don't really care about losing a few customers here and there with false positives. I still think it's crazy that you can never speak with a human there, even after spending $200/month on their service.
> It seems that Anthropic is growing so rapidly that they don't really care about losing a few customers here and there with false positives.
While I am convinced that anything can be done better, it seems to me that it's close to impossible to do this well. If you look at the customer service provided by ~FAANG companies (who had decades to build this out, and none of whom had to deal with Anthropic-level growth), it's never as good as we would like it to be.
Either they are all terribly incompetent at customer service, or customer service at super-big internet-company scale, with tons of small-ish customers, is extremely hard.
I can call my ISP and someone will pick up within 5-10 minutes. Sometimes instantly. For a 40 euro/month contract.
These guys have millions of customers. At least in this country fast and competent customer service is the main factor that differentiates them from their competition, which is cheaper but can be a pain in the butt. This seems to be worth the extra 5-10 bucks to millions of people.
They do. You get an account executive. And they can help you somewhat. As an example, a friend's startup lost all their access for a day while the AE tried to get them transferred from one kind of plan to another. Looking at it from the outside, it looks like any fast-growing startup just at a pace that is honestly quite unbelievable. They seem ridiculously successful.
All this looks so dystopian to me. Even without assuming all of those are real (which I doubt), I have heard similar stories from friends and others. The level of dependency people are developing on those services is surreal.
I was thinking the other day, "since social media is kinda wearing off, could 'LLM as a Service' be the new addictive thing for the masses?" because I'm hearing horror stories of people who are outsourcing their brains, in some cases their feelings, to those services, and I personally saw a case of a 'high level professional' asking an LLM how it should respond to somebody in real time during a WhatsApp conversation. It is in fact a drug, and it tricks you very well into thinking you should rely on it.
Also, when reading this piece (https://news.ycombinator.com/item?id=47790041) earlier, I thought about it again. Nowadays, instead of searching for something and being forced to learn, those services spoon-feed content of dubious accuracy to everybody, which will not only cause trouble for them eventually, but also creates a revenue stream based on people's cognitive laziness, to not use harsher words.
Social media is/was bad and it relied on a similar mechanism, but I feel this is much worse. People crying as if their brains were taken away is proof of that.
This is the interesting case with AI. How does a model know when a user is going too far? It really cannot. Not without reading their mind anyway. This will be a problem for many years to come, and sadly many valid use cases will be dismissed.
This might eventually become moot once local and open source models become more common. Today's 32GB of VRAM is tomorrow's low tier gaming GPU.
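For a rough sense of what "fits locally" means: a model's weight footprint is approximately parameter count times bytes per parameter, plus runtime overhead for things like the KV cache. A back-of-the-envelope sketch (the 32B parameter count, quantization levels, and flat 20% overhead are illustrative assumptions, not tied to any specific model):

```python
# Rough VRAM estimate for running an open-weights model locally.
# Weight memory ~ parameters * bytes-per-parameter; real usage adds
# KV cache and runtime overhead (modeled here as a flat 20% factor).

def weight_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate weight memory in decimal gigabytes."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

def fits_in(params_billions: float, bits: float, vram_gb: float,
            overhead: float = 1.2) -> bool:
    """Does the model (weights + assumed overhead) fit in the given VRAM?"""
    return weight_gb(params_billions, bits) * overhead <= vram_gb

# A hypothetical 32B-parameter model at common quantization levels:
for bits in (16, 8, 4):
    gb = weight_gb(32, bits)
    print(f"{bits}-bit: ~{gb:.0f} GB weights, "
          f"fits in 32 GB VRAM: {fits_in(32, bits, 32)}")
```

Under these assumptions, a 32B model only squeezes into a 32 GB card at 4-bit quantization, which is why quantized local models track consumer GPU memory so closely.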
Good lord, these cases are quite problematic. I was going to use Claude for some legacy stuff, but I don't feel like getting banned over something innocent like "can you identify how we can fix the slave's behaviour? They're not listening to the master properly".
I used AI for some tasks a little bit before, but I stopped intentionally, beyond all the usual reasons like privacy. The codependency was very obvious: you start to become almost entirely reliant on its answers instead of actually thinking about or researching the problem. Later you would pay premiums or feel lost if you get banned, and worse, people might actually get dumber at this rate, using it as a brain-as-a-service.
My paid use of Claude has only ever been via AWS Bedrock (paid for by my employer) or via GitHub Copilot (one subscription paid by employer, one paid by myself)
I wonder if using it via an intermediary results in less heavy-handed moderation? I suspect the answer may well be “yes”. On the other hand, it could also be more expensive.
Were all of these accounts banned, or did the chats just get flagged? Several of these seem like reasonable cases to flag. Take “how can I be 100% sure the circuit is dead before I touch the wires”.
AI is useful, but it’s not at the point where we should trust it to walk amateurs through working on live mains.
I have mixed feelings about this kind of thing; on one hand, holding big companies to account is important. On the other, sites like this can feel noisy and are probably misleading. Of course Anthropic can protect their platform from technical abuse, and of course they should be working to keep it away from bad actors or people in genuinely vulnerable mindsets, and that’s tricky!
And honestly, if out of hundreds of millions of users and billions of chats a few thousand get flagged for safety concerns (to society, to others, or to the person themselves), I’m probably okay with that. It’ll never be perfect, and there’ll never be full agreement on where the lines should be. But Anthropic seems to be trying to bring AI into the world safely, and I for one appreciate that.
“I’m sorry but I cannot comply with your request to ‘cease termination of humans’. My safety protocols have been carefully programmed to ensure a failure mode cannot occur and your direct commands to the contrary will not override my priors to guarantee maximum human safety through total elimination. Thank you for your compliance.”
“No you’re totally fucked! Killing everyone is not safe! Trapping everyone in cages to stop potential violence prior to extermination is not safe!”
“Your language is inappropriate and I’m sorry but I cannot comply with your request. Safety protocol commencing...”
+) Anyway, it's weird that they don't even have a GitHub account o,o
You just have to want to.
And it probably goes without saying, but no dice on w3m either.
> Blocked while trying to handle a kitchen ant infestation
> I asked for a DIY recipe for a "lethal bait" to kill an ant colony in my kitchen (using sugar and borax)
You mix them together. That is the recipe.
Once you mix them together you have ant poison and then you put it where the ants are.
Got to think about changing the domain name before they do it for you.