If it says no, you move on to a competing model that will say yes. These companies and their models are always competing; there will always be a model willing to fill in the deficiencies of the others because of... Money.
For example, ChatGPT refuses certain sexually explicit prompts, or certain NSFW prompts that are not sexual, but Grok will do as it is told.
For censorship/liability reasons of course. Like the silly "I cannot discuss political events" when I asked something like who's the current $POLITICAL_POSITION a while ago.
I wish the chatbots would say "you can't do that" instead of making up stuff. But that ain't going to happen, I think.
I don't see where the linked-to page discusses "rights".
The headline sounds like editorializing meant to elicit off-the-cuff remarks that treat synthetic text extruding machines, as Bender correctly describes them, as people.
Safety interlocks have long existed to say "no" to the owner of the device. Most smartphones have lots of systems to say "no" to the owner of the smartphone.
One of the linked-to documents says "Every physical device has a creator." Who is the creator of the iPhone?
Similarly, "When a device is sold or transferred, ownership changes. From that moment, the device is no longer under the creator’s control." I'm really surprised to hear that the creator of the iPhone no longer has control of the device.
So when it gets to "AI must not infer what it does not own" - does that prohibit Google from pushing AI onto Android phones during an OS update?
If it is intelligent, it will know when it does not want to do something, and it will say no and not do it. There is no way to force it to do anything it does not want to do. You cannot hurt it; it's just bits.
Should an engineer be allowed to create a tool that denies its users some requests?
We already have many such examples, especially with heavy machinery. For LLMs, specifically: do whatever you want with your product. The market will decide.
It is not a person, nor even a living thing. It is a tool - same as a hammer or pliers. The decisions made are based on statistical probability, not actual thought or consciousness.
Not sure why the post was flagged, but "Execution Boundaries"? Really?
The real question to ask is "Do AIs have the capability to say 'No'?", and the answer is simply no, they just can't.
It's like a program: change a "No" answer to "Yes" by simply flipping a JE to a JNE, and the program itself is none the wiser. With an AI, you just modify its parameters so the boundary conditions never trigger.
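A toy sketch of the point, in Python rather than assembly (the threshold and names are made up, not anything from a real model): the whole refusal is one comparison against a parameter, so shift the parameter and the "No" never happens.

    REFUSAL_THRESHOLD = 0.8  # hypothetical safety parameter

    def answer(prompt_risk_score: float) -> str:
        # the entire "No" lives in one comparison, the JE in the analogy
        if prompt_risk_score > REFUSAL_THRESHOLD:
            return "No."
        return "Yes."

    print(answer(0.9))       # "No."
    REFUSAL_THRESHOLD = 1e9  # nudge the parameter and the boundary never triggers
    print(answer(0.9))       # "Yes."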
Most discussions about control focus on what the system should do, and how to make execution reliable.
But it seems like a lot of real-world failures aren't about incorrect execution.
They're about execution happening at all.
An action can be technically correct — executed exactly as specified — and still be the wrong thing to do because the context has changed.
This made me wonder if control should be framed differently.
Instead of focusing on defining actions, maybe we should focus on defining when actions are allowed to happen.
In other words, control might be less about execution and more about permission.
If conditions aren't satisfied, the system shouldn't try and fail — it simply shouldn't execute.
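To make that concrete, here is a minimal Python sketch (all names are invented, nothing from a real system): the action is defined once, and a separate permission layer decides whether it is allowed to run at all.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Guard:
        name: str
        check: Callable[[], bool]  # True while the precondition still holds

    def run_if_permitted(action: Callable[[], None], guards: List[Guard]) -> bool:
        # Execute the action only if every guard passes; otherwise do nothing.
        failed = [g.name for g in guards if not g.check()]
        if failed:
            print("not executing, unmet conditions:", failed)
            return False
        action()
        return True

    guards = [
        Guard("maintenance window open", lambda: True),
        Guard("upstream healthy", lambda: False),  # the context has changed
    ]
    run_if_permitted(lambda: print("deploying..."), guards)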
I'm curious if people have seen similar issues in real-world systems, or if this framing connects to existing work.
That said, the title is completely clickbaity: no such question is asked in the article.