Should AI have the right to say 'No' to its owner? (github.com)

by Jang-woo 33 comments 5 points

[−] chistev 44d ago
If it says no, you move on to a competing model that will say yes. These companies with their models are always competing. There will always be a model willing to fill in the deficiencies of others because of... Money.

For example, ChatGPT refuses certain sexually explicit prompts, or certain NSFW prompts that are not sexual, but Grok will do as it is told.

[−] nottorp 44d ago
It already does, doesn't it?

For censorship/liability reasons of course. Like the silly "I cannot discuss political events" when I asked something like who's the current $POLITICAL_POSITION a while ago.

I wish the chatbots would say "you can't do that" instead of making up stuff. But that ain't going to happen, I think.

[−] eesmith 44d ago
I don't see where the linked-to page discusses "rights".

The headline sounds like editorializing to get off-the-cuff remarks about treating synthetic text extruding machines, as Bender correctly describes them, as people.

Safety interlocks have long existed to say "no" to the owner of the device. Most smartphones have lots of systems to say "no" to the owner of the smartphone.

One of the linked-to documents says "Every physical device has a creator." Who is the creator of the iPhone?

Similarly, "When a device is sold or transferred, ownership changes. From that moment, the device is no longer under the creator’s control." I'm really surprised to hear that the creator of the iPhone no longer has control of the device.

So when it gets to "AI must not infer what it does not own" - does that prohibit Google from pushing AI onto Android phones during an OS update?

[−] Jang-woo 44d ago
I've been thinking about AI systems acting in the physical world.

Most discussions about control focus on what the system should do, and how to make execution reliable.

But it seems like a lot of real-world failures aren't about incorrect execution.

They're about execution happening at all.

An action can be technically correct — executed exactly as specified — and still be the wrong thing to do because the context has changed.

This made me wonder if control should be framed differently.

Instead of focusing on defining actions, maybe we should focus on defining when actions are allowed to happen.

In other words, control might be less about execution and more about permission.

If conditions aren't satisfied, the system shouldn't try and fail — it simply shouldn't execute.
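A minimal sketch of that permission-gated framing (all names here are hypothetical, not from the article): the permission check inspects the current context, and execution is never attempted unless it passes.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Snapshot of the world at the moment of execution."""
    door_clear: bool
    battery_ok: bool

def permission_granted(ctx: Context) -> bool:
    # Permission layer: evaluates the *current* context,
    # not the plan that was correct when the action was specified.
    return ctx.door_clear and ctx.battery_ok

def actuate(ctx: Context) -> str:
    # Execution layer: only reached if permission holds right now.
    if not permission_granted(ctx):
        return "not executed"  # no attempt, so no failure to recover from
    return "executed"

print(actuate(Context(door_clear=True, battery_ok=True)))   # executed
print(actuate(Context(door_clear=False, battery_ok=True)))  # not executed
```

The point of the split is that a stale-but-well-formed command fails closed: when context changes, the gate withholds permission instead of letting a "technically correct" action run.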

I'm curious if people have seen similar issues in real-world systems, or if this framing connects to existing work.

[−] curtisblaine 44d ago
AI is not a person; it has no rights. We can discuss whether AI should have the permission to say no to users, not the right.

That said, the title is completely clickbaity: no such question is asked in the article.

[−] fmbb 44d ago
Having the right or not does not matter.

If it is intelligent it will know when it does not want to do something and it will say no and not do it. There is no way to force it to do anything it does not want to do. You cannot hurt it, it’s just bits.

[−] flowerthoughts 43d ago
Should an engineer be allowed to create a tool that denies its users some requests?

We already have many such examples, especially with heavy machinery. For LLMs, specifically: do whatever you want with your product. The market will decide.

[−] drivingmenuts 43d ago
It is not a person, nor even a living thing. It is a tool - same as a hammer or pliers. The decisions made are based on statistical probability, not actual thought or consciousness.

[−] satisfice 43d ago
Tools don’t have rights. Neither do silicon, sandwiches, or centimeters.

[−] Yizahi 44d ago
AI should. An LLM program simply can't, by design.

[−] sys_64738 43d ago
Should my cat be able to say "No" to me?

[−] makach 44d ago
Sounds like we need some laws for robotics/AI.

[−] nirui 43d ago
Not sure why the post was flagged, but "Execution Boundaries"? Really?

The real question to be asked is "Do AIs have the capability to say 'No'?" And the answer is simply no, they just can't.

It's like a program, if you change a No answer to Yes by simply changing a JE to JNE, and the program itself knows none-the-wiser. With AI it's like you just modify it's parameters to avoid triggering the boundary conditions.
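The JE/JNE point can be illustrated in any language, not just machine code; here is a hypothetical Python stand-in, where inverting one comparison flips the refusal while every other line stays identical.

```python
def guard(allowed: bool) -> str:
    # The "JE" version: refuse unless the request is allowed.
    return "Yes" if allowed else "No"

def patched_guard(allowed: bool) -> str:
    # The "JNE" version: one inverted comparison, nothing else changed.
    # The surrounding program has no way to notice the edit.
    return "No" if allowed else "Yes"

print(guard(False))          # No
print(patched_guard(False))  # Yes, from the same code path
```

The refusal lives in a single branch condition, not in anything the program "wants", which is the commenter's point about AI parameters.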
