OpenAI backs Illinois bill that would limit when AI labs can be held liable (wired.com)

by smurda 324 comments 447 points

[−] himata4113 35d ago
I have made both GPT 5.4 and Opus 4.6 produce content on creating neurotoxic agents from items you can get at most everyday stores. It struggled to suggest how to source phosphorus, but eventually led me to some eBay listings selling elemental phosphorus 'decorations' and also led me towards real(!) black-market codewords for sourcing such materials.

It coached me on how to stay safe, what materials I would need, how to stay under the radar, and the entire chemical process, backed by academic Google searches.

Of course this was done with a lengthy context-exhaustion attack; this is not how the model should behave, and it all stemmed from trying to make the model racist for fun.

All these findings were reported to both OpenAI and Anthropic, and they were not interested in responding. I did try to re-run the tests a few days ago, and the expected session termination now occurs, so it seems some adjustment was made — but it might also just be the general randomness of Anthropic's safety layer.

I am very confident when I say that this is the sort of thing that keeps everyone working in anti-terrorism units awake at night.

[−] WarmWash 35d ago
While scary, information like this has been pretty accessible for 20-30 years now.

In the wild west days of the early internet, there were whole forums devoted to "stuff the government doesn't want you to know" (Temple Of The Screaming Electron, anyone?).

I suppose the loss of friction is the scariest part: every year the IQ required to end the world drops by a point. But motivated and mildly intelligent people have been able to get this info for a long time now; execution, though, has still required experts.

[−] tomjen3 35d ago
When my brother started studying Chemistry, he was told a) that it was easy to make meth, b) how much profit he could make, and c) that the police would no doubt catch him, as only university students make meth that pure.

By the time he was done, he knew enough to commit mass murder in half a dozen different, very hard to track ways. I am sure doctors know how to commit murder and make it look natural.

My brother never killed anyone, or made any meth. You simply cannot keep students from getting this type of knowledge without seriously compromising their education, and it's the same with LLMs.

The solution is the same: punish people for their crimes, don’t punish people for wanting to know things.

[−] Topfi 35d ago
Quoting the original bill [0]:

> "Critical harm" means the death or serious injury of 100 or more people or at least $1,000,000,000 of damages to rights in property caused or materially enabled by a frontier model, through either: (1) the creation or use of a chemical, biological, radiological, or nuclear weapon; or (2) engaging in conduct that: (A) acts with no meaningful human intervention; and (B) would, if committed by a human, constitute a criminal offense that requires intent, recklessness, or negligence, or the solicitation or aiding and abetting of such a crime.

I don't know what I expected from this title, but I was hoping it was sensationalized. Unfortunately, no need in this case.

> (a) A developer shall not be held liable for critical harms if the developer did not intentionally or recklessly cause the critical harms and the developer: (1) published a safety and security protocol on its website that satisfies the requirements of Section 15 and adhered to that safety and security protocol prior to the release of the frontier model; (2) published a transparency report on its website at the time of the frontier model's release that satisfies the requirements of Section 20. The requirements of paragraphs (1) and (2) do not apply if the developer does not reasonably foresee any material difference between the frontier model's capabilities or risks of critical harm and a frontier model that was previously evaluated by the developer in a manner substantially similar to this Act.

However one thinks regulation for this should be drafted, I doubt publishing a PDF is what most have in mind.

[0] https://trackbill.com/bill/illinois-senate-bill-3444-ai-mode...

[−] nozzlegear 35d ago
As an Iowan, this reminds me a lot of the bill that has been pushed through my state's senate twice now (as recently as last year), which would prevent Iowans from filing lawsuits against pesticide and herbicide companies if those companies follow the EPA's labeling guidelines. The bill passed the senate both times and was only stopped because the house declined to take it up.

For context, Iowa has the fastest-growing rate of new cancer diagnoses in the country, and the second-highest overall cancer rate.

[−] Talderigi 35d ago
We built systems we don’t fully understand, so naturally the next step is… immunity
[−] imnotlost 35d ago
Am I alone in thinking this is easy?

The human making the decision is always liable.

What if the human couldn't reasonably know better? Doesn't matter: if they would have made the same decision without AI, or with old files, it is still on them.

What if there's no single human decision? Someone is in charge and is responsible. The "I was ordered to" isn't a defense.

Does liability without power make sense? The people executing have the power to execute, so liability follows. If they're executing without power, that is a different liability, but still a liability.

It may let the powerful off the hook - That is already a theme and AI doesn't change that, in fact, it will just be used as another scapegoat.

God told me to do it. Watertight! Right?

[−] sassymuffinz 35d ago
So they did the math and worked out it's cheaper and easier to lobby the government instead of working to make their product safe.

And these are the people that a lot of programmers want to give the keys to the kingdom. Idiocracy really is in full effect.

[−] rickcarlino 35d ago
Illinois also has a Bill in committee right now to mandate operating system level age verification. There are lots of bad ideas to be upset about this year. If you are an Illinois resident, email your representative about HB 5511 today. Stupid legislation like this passes because we don’t speak up. Find out who your representative is, find their email, tell them your opinion.
[−] simianwords 35d ago
Is there something equivalent in other industries that we can compare to?

This is the summary

>Creates the Artificial Intelligence Safety Act. Provides that a developer of a frontier artificial intelligence model shall not be held liable for critical harms caused by the frontier model if the developer did not intentionally or recklessly cause the critical harms and the developer publishes a safety and security protocol and transparency report on its website. Provides that a developer shall be deemed to have complied with these requirements if the developer: (1) agrees to be bound by safety and security requirements adopted by the European Union; or (2) enters into an agreement with an agency of the federal government that satisfies specified requirements. Sets forth requirements for safety and security protocols and transparency reports. Provides that the Act shall no longer apply if the federal government enacts a law or adopts regulations that establish overlapping requirements for developers of frontier models.

https://legiscan.com/IL/bill/SB3444/2025

I'm trying to imagine an alternative bill. Suppose OpenAI came up with a model that, when deployed in OpenClaw, allows you to spam people, and this causes a huge disruption. Should OpenAI be liable for it, if it was not intentional and they had earnestly tried to prevent it through safety protocols?

[−] scrumper 35d ago
I forget, wasn't OpenAI the company that was formed as a nonprofit to limit the risks of LLMs? Founded by a bunch of visionaries scared of what they had wrought and anxious to lead so they could make sure it was only used responsibly?
[−] an0malous 35d ago
Let’s see how long until this is flagged off the front page. I’ll put the over/under at 1 hour from the posted time
[−] giwook 35d ago
This seems par for the course for OpenAI/Sam Altman.

Unfortunately they are not the first company to try and externalize their costs, and they will not be the last.

Serious question, maybe a bit naive: Is there anything we can do to push back against and discourage the externalization of costs onto others?

Is this simply a matter of greed and profit-seeking outweighing one's morals (assuming one has them to begin with)?

[−] computerphage 35d ago
OpenAI wants to not be responsible for "accidents" that kill more than 100 people, despite some critics arguing that their current actions are likely to cause such harms.
[−] ArekDymalski 35d ago
So much for "Our mission is to ensure that artificial general intelligence benefits all of humanity." I was naive to hope that no such laws would ever pass.
[−] semiquaver 35d ago
Have the sponsors of this bill stated what the public benefit of providing these immunities would be? Just “more models, more progress, go faster?”

I think there’s room for nuance but I don’t see how this could possibly be construed to be in the public interest.

[−] avaer 35d ago
Take all of the data, take all of the credit, take all of the money, and none of the blame.

That would be a better mission statement for OpenAI at this point.

[−] jstummbillig 35d ago
I am not sure what the other side of this argument looks like: unlimited liability (i.e., liability no matter how poor an implementation or use of the tech is)?

That would be quite a novel burden, one that no other tech (afaik) has had to carry so far. We have always assumed some operator responsibility. It's interesting to think of AI as a tech that could feasibly guardrail itself internally — and, perhaps more so with increasing capability, no human can be expected to do so in its stead. But surely some limits must apply, and the more interesting question is what they are, as with any other tool.

[−] jamesbfb 35d ago
No different from shielding game studios from liability for mass shootings. Reminds me of the post-Columbine hysteria, when the media was super critical of Doom and Nine Inch Nails.
[−] jMyles 35d ago
To the extent that this is about knowledge, I don't think it's fitting in this age to hold any person liable for what another person does with knowledge they've been furnished.

On the other hand, to the (apparently zero, currently?) extent that this is about AI companies profiting from war and murder by deploying weapons that kill people without human intervention, then their liability seems to be not only civil but criminal.

[−] gamblor956 35d ago
The inevitable result of giving corporations and executives complete immunity from the harms they cause is that people will stop resorting to the legal system and begin resorting to extralegal measures.

And the likely result is that, in most of the country, those extralegal measures would have to be very extreme before a jury would return a guilty verdict. You can see the beginnings of it now with the ICE protest trial verdicts.

[−] mentalgear 35d ago
OpenAI has now officially absorbed Facebook/Zuck's ethos of "move fast and break things," even if what breaks is society itself, as long as the share price goes up.

They have even hired infamous former FB staff and in recent months have been employing the same addictive "engagement" product patterns.

[−] charcircuit 35d ago
A Section 230 equivalent for AI is important. The lack of one is one of the reasons all the US companies have these usage restrictions, and it gives them more reason to ban your account, since they want to minimize legal risk.

Holding tool manufacturers liable for how their tool is used provides bad incentives towards the users of tools.

[−] sph 35d ago
The thing that bugs me the most about OpenAI are not the AI-enabled mass deaths. It's the hypocrisy.
[−] pwr1 35d ago
So they want protection from harms caused by their own models. Classic move — lobby for the rules while you're still ahead of regulators who don't fully understand the technology yet. Would be interesting to see what happens when a state actually pushes back hard.
[−] LogicFailsMe 35d ago
Yep, this is everything wrong with AI in one easy-to-protest package, but do keep going on and on about the evils of datacenters, how they're coming for your jobs, and how AI art isn't art. That's really winning hearts and minds!
[−] chollida1 35d ago
Sure, and Google, Facebook, and Twitter support Section 230, which gives them cover for hosting others' content.

A company backing legislation that takes liability off them is something that they will always do.

[−] willio58 35d ago
My entire company switched from OpenAI to Anthropic after the Department of War idiocy that happened a few weeks ago.

Anthropic isn't perfect by a long shot, but at least they stand by a couple of morals.

[−] kusokurae 35d ago
Without getting even more eyes on me, these company boards are inadequately scared for their personal safety.
[−] mrcwinn 35d ago
Fortunately at any moment the virtuous non-profit will step in and make this all okay.
[−] LunaSea 35d ago
Having worked for OpenAI will be the new "MindGeek" on LinkedIn.
[−] randyrand 33d ago
Anyone blindly following LLM outputs is incredibly stupid.
[−] lebuffon 35d ago
By "back Illinois bill", does that mean they wrote it?
[−] nsxwolf 35d ago
"death or serious injury of 100 or more people or at least $1 billion in property damage"

They think their products will cause 9/11 scale events, and they shouldn't have to pay for it when they do.

[−] xeyownt 35d ago
Skynet begins learning at a geometric rate.
[−] thiago_fm 35d ago
Incredible.

Hey Americans,

Please just make sure that when you let an AI decide to blow up your own country and ruin your society, you leave the rest of the world intact, thanks

[−] giancarlostoro 35d ago
Is this for like military scenarios or like, ChatGPT designed a drug that seemed to work, but people died by the millions 5 years later? Because they should 100% be liable for the latter. The former, good luck trying to prosecute an AI company for something the military does. To an extent, the military would probably want their AI models to be behind their private network, completely firewalled from any public network. SIPRNet iirc. If they lock it down behind a highly classified network, good luck figuring out how they're using AI.
[−] solid_fuel 34d ago
OpenAI continually fails to understand that liability is there to protect their users AND OPENAI. If OpenAI causes significant harm, and the victims are told they cannot even sue to be made whole, what exactly does OpenAI think will happen? That the victims will just go pound sand? People will demand justice, and if that can't be delivered via the legal system, either the system will be changed, negating this lobbying effort, or the system will be bypassed.
[−] arvyy 35d ago
It feels like OpenAI knows they've lost, and their only hope is getting saved by the US military complex. I have a more restrained opinion about other AI companies and LLM tech more broadly, but for OpenAI specifically I hope they go bankrupt sooner rather than later.
[−] lenerdenator 35d ago
This is why humans will still be necessary in decision chains: good luck getting anyone associated with AI to face real punishment when their models cause something bad to happen, or getting the executives who said "let's just have the AI do it" to take any responsibility.
[−] elAhmo 35d ago
Sam is working hard to confirm everything in that article.
[−] metalman 35d ago
BLOOD BLOOD BLOOD BLOOD BLOOD BLOOD BLOOD BLOOD BLOOD
[−] b00ty4breakfast 35d ago
Of course they are, because the tech industry is run by ethical midgets and psychopaths, who shouldn't be allowed to own a dog but are in charge of trillion-dollar corporations getting shadow contracts from the pentagon.

The more I learn about tech and the people that build it, the more I yearn for the era of caves and pointy sticks.

[−] qsera 35d ago
Another marketing gimmick...
[−] r5sz 34d ago
Ehh xD
[−] Swoerd 35d ago
[dead]