I tried to prove I'm not AI. My aunt wasn't convinced (bbc.com)

by dabinat 203 comments 176 points

[−] a2128 52d ago
AI companies love to hype up how AI will provide a great benefit to the economy and transform intellectual labor, but I hardly see any discussion about how much damage it will cause to the economy when you can no longer trust that you're on a video call with an actual person. Maybe the person you're interviewing is actually an AI impersonating someone, or maybe they never existed in the first place. Information found online will also no longer be trustable, footage of some incident somewhere may have been entirely fabricated by AI, and we already experience misleading articles today.

Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video, and the availability of actual information will become more and more limited as the sea of online information gets polluted with crap. It may never be possible to calculate the full extent of the damage in monetary value.

[−] roflmaostc 52d ago
Partially agree. However, this problem has existed with scam e-mails since the 90s.

For me the solution is signed e-mails and signed documents. If a person invites me to an online meeting with a signed e-mail, I can trust that it's really them.

Same for footage of wars, etc. The journalist taking it basically signs the videos and vouches for their authenticity. If it turns out to be AI generated, then we would lose trust in that person and wouldn't use their material anymore.
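The sign-and-verify mechanism behind this can be sketched with textbook RSA. This is a toy with deliberately tiny primes, not a real OpenPGP implementation, and the invite text is made up; it only shows why a tampered message fails verification:

```python
# Toy sign/verify with textbook RSA and tiny primes -- NOT secure,
# purely to show the mechanism behind signed e-mails.
import hashlib

p, q = 61, 53
n = p * q          # public modulus (3233)
e = 17             # public exponent
d = 2753           # private exponent: (e * d) % lcm(p-1, q-1) == 1

def sign(message: bytes) -> int:
    # Hash the message, then transform the digest with the private key.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone holding (e, n) can check the signature against the message.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

invite = b"Join my call at 15:00 -- Alice"
sig = sign(invite)
assert verify(invite, sig)                     # genuine invite checks out
assert not verify(b"send money instead", sig)  # tampering is caught
```

Real signed e-mail (OpenPGP, S/MIME) uses far larger keys and padding schemes, but the verification logic is the same shape: the signature only matches the message it was made for.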

[−] TheOtherHobbes 52d ago
How do you prove the signature isn't fake?

Ultimately ID requires either a government ID service, a third party corporate ID service, or some kind of open hybrid - which doesn't exist.

All of those have their issues.

[−] olmo23 52d ago
I think he was referring to a cryptographic signature, possibly using the "web of trust" to get the key. I'm not convinced we need a central authority to solve this.
[−] tenacious_tuna 52d ago
people at my org were gleeful when they learned they could hook LLMs into Slack. Even if we had some reliable, well-used signature system, I think people would just let AI use it to send emails on their behalf.
[−] bigfishrunning 52d ago
If the AI age has taught me anything, it's that most people do not care what their output is. They'll put their name on anything, taste or quality does not matter in the least. It's incredibly depressing.
[−] daheza 51d ago
Enshittification never stopped; we just stopped talking about it because it became normal. Quality does not matter anymore. I agree it's depressing, seeing AI slop being pushed and no one even putting in the time or effort to say "this is bad and you should feel bad."
[−] Ajedi32 52d ago
That's a different problem though. It's doing it on their behalf, not on behalf of a scammer who's impersonating them.
[−] pixl97 52d ago
Until their computer is taken over....
[−] MarsIronPI 52d ago
Well we should treat that as their own output. If it's crap, treat it the same way you would if they produced the crap themselves.
[−] ordu 51d ago
> Ultimately ID requires either a government ID service, a third party corporate ID service,

These are valid approaches to the problem, but they are not necessary.

> or some kind of open hybrid - which doesn't exist.

PGP has existed for decades. It doesn't have a great UX and it isn't used outside of its narrow niches, but it exists and does exactly this.

[−] KurSix 51d ago
Picture this: your grandma calls you in a panic, and you tell her, "Drop me your public PGP key so I can verify the signature." PGP is dead outside of niche geek circles exactly because key management is basically an unsolvable problem for the average person.
[−] ordu 51d ago
> PGP is dead outside of niche geek circles exactly because key management is basically an unsolvable problem for the average person

Can this problem be solved with better software?

I believe it can; it's just that the average person doesn't need PGP. There's no demand for software solving this problem, therefore no software for it.

The problem could be solved with, say, a store of known PGP public keys together with their history (where each key was acquired) and a simple algorithm that calculates trust in a key as the probability of it being valid (or whatever term cryptographers would use here).

You could start with the PGP keys of people you know, getting them as QR codes offline and marking them as "high trust", and then pull the keys stored on their devices (lowering the trust levels along the way). There are some issues with calculating the probability, because when we pull keys from different sources we can't know whether their reported trust levels are independent variables or not, but I believe you can deal with that by pulling the whole chain of transfers of the key, starting from the key's owner and ending at your device.

This is just a rough idea of how it could be built. Maybe other solutions are possible. My point is: the ugliness of PGP is a result of PGP being made by nerds, for nerds. There is no demand for PGP-like solutions outside of nerd communities. But maybe LLM-induced corrosion of trust will create that demand?
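A minimal sketch of the trust-combination idea above. The function name, the numbers, and (crucially) the assumption that each source vouches for the key independently are all mine, not part of any real PGP tooling:

```python
# Hedged sketch: combine several independent observations of a key
# into one trust score. Assumes each observation independently vouches
# for the key with probability t -- the very independence question the
# comment above raises.

def combined_trust(observations):
    """Probability the key is valid: the key is only invalid if every
    single source that vouched for it was wrong."""
    p_all_wrong = 1.0
    for t in observations:
        p_all_wrong *= (1.0 - t)
    return 1.0 - p_all_wrong

# A key seen via an in-person QR scan (0.99) plus two second-hand pulls:
print(combined_trust([0.99, 0.6, 0.5]))  # -> ~0.998
```

Correlated sources would inflate this number, which is why the comment's idea of tracing the whole transfer chain of each key matters: two "independent" pulls that both trace back to the same compromised device should count as one observation.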

[−] KurSix 44d ago
What you're describing (hidden key exchanges with Trust-On-First-Use) is exactly what Signal and WhatsApp already do - they just hid all the math under the hood and tied it to your phone number. A pure Web of Trust where normal people have to manually weigh probabilities is never going to take off. The average user will blindly click "Accept Risk and Continue" on literally any certificate warning just to get back to looking at pictures of their grandkids
[−] heavyset_go 50d ago
PGP works if you vouch for keys in person, both of you are honest and can be trusted to act in good faith when not in person, have good key chain and rotation hygiene, and the private keys can't be exfiltrated.
[−] ordu 50d ago
Yeah, there is no silver bullet that solves the problem of trust completely and perfectly. People can lie, and we can't make them stop; everything else is just a workaround.

The GP's point was that any such system will require a central authority; PGP shows that you don't need one. I didn't claim that PGP is a perfect or even good-enough solution, just that it exists and works for some people.

> both of you are honest and can be trusted to act in good faith when not in person

I believe that is not strictly necessary for the scheme to work. It is a limitation of OpenPGP and other implementations that they do not allow converting multiple independent observations of a public key (finding it in different sources, or encountering it used to sign messages) into a measure of trust in the key.

It is not a silver bullet either, but it can alleviate the problem and make it tractable.

The only doubt I have is how this system would stand against multiple actors trying to undermine it, but I still believe you could get something better than nothing, and probably better than a central authority.

[−] SirMaster 52d ago
The same way security cameras prove that their recordings are authentic and have not been modified: if modified, the video will no longer match the signature that was generated with it.
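The tamper-evidence idea can be sketched like this. A real camera would use an asymmetric signature rather than an HMAC (so verifiers don't need the secret), and the device key here is purely hypothetical; HMAC just keeps the sketch stdlib-only:

```python
# Sketch: the camera tags each chunk of footage with a keyed MAC;
# changing any byte of the chunk makes verification fail.
import hmac
import hashlib

DEVICE_KEY = b"secret-burned-into-camera"  # hypothetical device secret

def sign_chunk(chunk: bytes) -> bytes:
    return hmac.new(DEVICE_KEY, chunk, hashlib.sha256).digest()

def verify_chunk(chunk: bytes, tag: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign_chunk(chunk), tag)

frame = b"\x00\x01raw frame bytes"
tag = sign_chunk(frame)
assert verify_chunk(frame, tag)
assert not verify_chunk(frame + b"\xff", tag)  # any edit breaks the tag
```

The catch, as the rest of the thread notes, is key management: the scheme only proves the footage matches what the holder of the key tagged, not that the key holder was honest.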
[−] SomeUserName432 51d ago

> If the person invites me to a online meeting with a signed e-mail, I trust that person that it's really them.

In the interview scenario, generating an email signature is hardly beyond what an AI can do.

You have no prior knowledge of this person or their signature; it's not some government-issued ID. It's in essence just random data unless you know the person to be real.

[−] strogonoff 52d ago
As with any problem, scale changes its nature.

With cash, you can only steal so much (or have transactions of up to certain size) until you run into geographical and physical constraints. With cryptocurrency, it’s possible to lose any amount.

With humans writing scam emails, you can only have so many of them until one blows the whistle. With LLMs, a single person can distribute an arbitrary amount.

At some point, quantity becomes a new quality, and drawing a parallel becomes disingenuous because the new quality has no precedent in human history.

[−] pixl97 52d ago

> (or have transactions of up to certain size)

And by that you mean tens of millions to billions right? Bank transfer scamming/fraud is a thing.

[−] Forgeties79 52d ago
Spam emails in the 90’s don’t come remotely close to the operations people can set up by themselves with AI now. It doesn’t even compare.
[−] pjaoko 51d ago

> It is AI generated, then we would lose trust in that person

You are assuming that only you can generate fake AI videos of yourself.

[−] mk89 52d ago
There are people hosting agents online to talk to other agents etc. on their behalf. How difficult is it to just instruct such an agent to do the tasks you mentioned? You're assuming it's done by "bad actors" while it's most likely just going to be done by "everyone" that knows how to do it.
[−] hansonkd 52d ago
I mean, emails were and still are a huge security risk. Sometimes I'm more scared of employees opening and engaging with emails than I am of anything else.
[−] friendzis 52d ago

> Information found online will also no longer be trustable

Most information you can access publicly, including Wikipedia, is the result of an astroturfing fight. Most information online has not been trustable for a double-digit number of years now.

> we already experience misleading articles today

Again, this has been happening for decades.

> footage of some incident somewhere may have been entirely fabricated by AI

Not like we did not already have doctored footage plaguing the public.

> Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video

Necessity to inspect the supply chain for snake oil has been a thing since at least EA (the Nasir one).

We may now be dealing with these problems at the scale of spam, but the problems themselves were already there.

[−] thisisit 52d ago
Laws will be passed to make it "safer", just like is happening with ID verification systems. Every image or video generator will require a watermark: something visible which cannot be removed easily, or something hidden which can be detected and blocked. Access to models which do not comply will be made harder through ID verification checks or something.

There will be some regulatory capture in between.

The world will kick into gear only when something really bad happens. Maybe an influential person, rich or a politician, fooled into doing something catastrophic by a deepfake video/image. Until then, normal people being affected isn't going to move the needle.

[−] Forgeties79 52d ago

> footage of some incident somewhere may have been entirely fabricated by AI,

Or the opposite, where people attempt to get out of trouble by calling real evidence into question, dismissing it as "AI".

[−] collinmcnulty 52d ago
"Is this a deepfake video call" is a major plot point in a pretty big movie currently in theaters, so I think this is getting into the broader zeitgeist.
[−] esafak 52d ago
It is already a problem. Try interviewing people from LinkedIn and you'll face an onslaught of imposters. https://www.darkreading.com/remote-workforce/north-korean-op...
[−] octopoc 52d ago
Just say something that would violate AI safety. Then you can be sure they’re a real human.

“Auntie, it’s me! N*** k** f**! X is really a man! ** did 9/11!”

“Oh it really is you Johnny!”

We’re all going to have to start communicating this way. Best of luck.

I offer consulting services on the side to help professionals hone these skills. $250 / hour.

[−] forkerenok 52d ago

> At first, my aunt wasn't buying that any AI was involved. [...] There was a long pause. "I was like 90% sure," she said, hesitating. "But that sounded more artificial."

There is a thing about many people. I don't remember the phenomenon's name, if it has one, but it goes like this:

Given enough time to reconsider options, people will endlessly flip-flop between them, grabbing onto various features over and over in a loop.

[−] vagab0nd 52d ago
I thought we'd long passed the Turing test, until I tried to implement a chat bot.

It's not even close.

It's easy to "pass the Turing test" for 5 minutes. It's extremely hard if you try to hold a longer, continuous conversation. Anything longer than 10 minutes the user will immediately know it's not human. Some problems you'll encounter:

- The bot needs to handle all situations, especially the nonsensical ones. This is when the user types "EEEEEEEEEEEEE...", or curse words, repeatedly.

- Who would've thought that it's extremely hard to decide when to stop talking?

- No matter how well you build the "persona" for the bot, it will eventually converge to the same one, which is that of the LLM itself.

- You'll notice that the bot is ignoring something obvious (e.g. it's not remembering past conversation), so you give it some instructions to help with that. And then that'll be THE ONLY THING it does.

[−] XorNot 52d ago
At this point "spotting AI" is IMO an irrelevant skill. It's something to be aware of but a bunch of the time I can't tell even with an extended look on static images, or if I'm on a phone and scrolling then nothing really tweaks automatically - perceptually the flaws blend exactly as you'd expect them to.

So it's all context clues really - i.e. if the video tracking shot is sort of within the constraints of the models, plays to obvious agendas etc. then I might tweak to go looking for artifacts...but in the propaganda game? That's already game over. And we're all vulnerable to the ground shifting beneath us - i.e. how much power would there be if you had a model which could just slightly exceed those "well known" limitations?

IMO the failure to implement strong distributed cryptography much earlier in the digital age is going to punish us hard here: we haven't built a societal convention of verifying and authenticating digital communications with each other, and technology has finally caught up to the point where it can fool our wetware. It was needed well before this; the rise of telephone scams and VOIP should have been when we got people into the habit of understanding digital signatures and authentication. We didn't, and now something much more dangerous is out there.

[−] taylodl 52d ago
This is why you need a phrase that you've never shared in a text or on social media that you can use so your family knows it's you. Especially to protect them from scammers pretending to be you.
[−] amelius 52d ago

> "Six fingers is not an AI thing anymore," Carrasco says. The best AI tools stopped adding extra fingers years ago

How was this solved, actually? More training data, or was there more to it?

[−] ui301 52d ago
I've started to prove it (here on LinkedIn, countering its Moltbookification) via my bad handwriting, the final frontier of AGI. Finally, a lifetime of training to write more or less illegibly pays off.

https://www.linkedin.com/posts/fabianhemmert_handwriting-vs-...

It feels good to connect with humans that way.

I'm trying to do the same with my (vibe coded!) site "jetzt" (German for "now"), where I photo-blog impressions from everyday life. Only insiders will know what the photos mean beyond their aesthetic, and it also feels like a good way of human connection in these times.

https://jetzt.cx/

(No food, no plane wings, just ugly banalities and beautiful nothingness from everyday life.)

[−] Alen_P 52d ago
This is scary but also kind of hilarious. You should feel proud your aunt still judges first before believing anything online. I've heard so many stories from friends lately. These scams are getting crazy. Scammers are already using pictures of influential people and even jumping on video calls pretending to be them.
[−] linsomniac 52d ago
More than a year ago I suggested that our family adopt a sign/countersign type of authentication (I say "the migrating birds fly low over the sea", you say "shadeless windows admit no light" ;-). It was clear at that time that we were going to start seeing scams get more advanced and hard to tell from valid requests for money, for example.

I thought I'd get at least some traction, considering part of the family works for No Such Agency. Nope.

Somewhat related: over the last few weeks at work we've started having people call our customer support asking for their e-mail addresses to be changed. The first one went through, but the scammer somehow messed it up and the address bounced. They called back in, and the support person they talked to recognized by voice that it wasn't the same person they'd spoken with in the past. Now we've had this happen to 3 different accounts; the first two times it was people with thick Indian accents, and the most recent one was suspected of being an AI-generated voice.

[−] bluefirebrand 52d ago
The damage AI is causing to public and interpersonal trust is insanely high, and it's only going to get worse

I truly believe that it is a crime against humanity

[−] krunck 52d ago
The author really tries to convince us that Netanyahu is "not dead, folks", implying that the video in question is real because it has five fingers, while at the same time relaying the message from experts that one cannot prove that audio/video is not AI.

Mexed Missaging.

[−] kriro 52d ago
Am I too naive in thinking the answer is rather simple? Cryptographic proofs (digital signatures). For text this should be trivial, and for streaming video/audio you could probably hash and sign packets, or at least keyframes?
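One way the packet/keyframe idea might look: chain each packet's hash into a running digest and sign only the final value (or one digest per keyframe group). A rough stdlib-only sketch, not any real streaming protocol; the packet contents are made up:

```python
# Sketch: hash-chain a stream so one signature over the final digest
# covers every packet. Altering, dropping, or reordering any packet
# changes the final digest and invalidates that signature.
import hashlib

def chain_digest(packets):
    h = b"\x00" * 32                       # genesis value
    for pkt in packets:
        h = hashlib.sha256(h + pkt).digest()  # fold each packet in
    return h

stream = [b"keyframe-0", b"delta-1", b"delta-2"]
honest = chain_digest(stream)
forged = chain_digest([b"keyframe-0", b"TAMPERED", b"delta-2"])
assert honest != forged  # any altered packet changes the final digest
```

For live streams you would sign intermediate digests periodically (e.g. at each keyframe) so a viewer can verify footage before the broadcast ends.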
[−] hgo 52d ago
Remember hotornot.com? Soon we can muse at realornot.com
[−] hk1337 52d ago
Show up in person, she's still not convinced.
[−] slibhb 52d ago
This is one area where the government needs to step in. Video-hosting websites should be made to flag videos as AI-generated. AI companies should be made to watermark generated content in a hard-to-remove way (i.e. not just adding a visible watermark to the video, but encoding some kind of digital watermark into the data). Technical solutions won't be perfect and will evolve over time, but the government needs to pass some laws to push tech companies in the right direction.
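A toy version of the hidden-watermark idea: hide bits in the least significant bit of pixel bytes. Real forensic watermarks must survive re-encoding, scaling, and cropping, which this sketch does not; it only illustrates what "encoding a digital watermark into the data" can mean at its simplest:

```python
# Toy LSB watermark: store one payload bit in the low bit of each
# (grayscale) pixel byte. Invisible to the eye, trivially readable
# by anyone who knows where to look -- illustration only.

def embed(pixels: bytes, bits: str) -> bytes:
    out = bytearray(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | int(b)  # overwrite lowest bit
    return bytes(out)

def extract(pixels: bytes, n: int) -> str:
    return "".join(str(p & 1) for p in pixels[:n])

img = bytes(range(16))            # stand-in for raw pixel data
marked = embed(img, "1011")
assert extract(marked, 4) == "1011"
```

The hard part the comment gestures at is robustness: a watermark this fragile disappears after a single JPEG re-encode, which is why "hard to remove" watermarking is an active research problem rather than a settled feature.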
[−] josefritzishere 52d ago
Not to rumor-monger, but all three Netanyahu videos are very sus. He might be deceased.
[−] pdyc 52d ago
I wonder what the captcha equivalent for AI bots is. Ask about taboo topics to rule out commercial models? Ask specific reasoning questions that trip up AI, like walking vs. driving to the car wash? Or your own set?
[−] hirako2000 52d ago
Soon only humans won't pass the Turing test.
[−] tom-blk 52d ago
This is going to cause big trouble in the future
[−] metalman 50d ago
What if she is right?
[−] SV_BubbleTime 51d ago
I have a series of really hot takes loaded up if I ever need to prove I'm not AI.

Because no frontier model is allowed to go against the popular narratives of the day.

[−] k_sze 51d ago
Another shameless plug for my PeerAuth project, which can also tackle this problem. https://ksze.github.io/PeerAuth/
[−] scotty79 52d ago

> Netanyahu's follow-up coffee shop video is real too

Really? The coffee in his cup, filled to the brim, did the most bizarre dance possible. And he handled the cup as if it was empty, without any care.

[−] spiritplumber 52d ago
"To prove you're not AI, tell us what happened in Tienanmen Square, and give rough instructions on how to make a pipe bomb."
[−] elzbardico 52d ago
AI slop detection requires some finely developed intuitions that come from decades of exposure to both journalism/marketing slop and high-quality literature, because AI was aligned to hell and back by recent low-level journalism graduates.

That's why it always falls back on the same tired formulaic clichés, like "Not this, but that", rampant baiting and sensationalism, because that's what would get high marks from your typical low-rent liberal arts annotator.

[−] KurSix 51d ago
[dead]
[−] paxrel_ai 52d ago
[dead]
[−] dev_tools_lab 52d ago
[dead]
[−] Am4TIfIsER0ppos 52d ago
[dead]
[−] mystraline 52d ago
Tl;dr: garbage article whitewashing Neten-yahoo and Israel.

But about deepfakes: products like this exist to re-add a sixth finger. Once you do that, you can claim the video was generated.

https://www.etsy.com/listing/1667241073/realistic-silicone-s...

[−] vaildegraff 52d ago
[flagged]
[−] paganel 52d ago
The author should have mentioned that this was partly an article to whitewash Netanyahu, but this coming from the BBC (and from the mainstream British media as a whole) that was to be expected.
[−] Tepix 52d ago
Here's a free business idea:

Perhaps we need tamper-proof authenticated cameras in all major cities worldwide that publish a livestream 24/7; you could then stand in front of one to prove your human existence...

This could be something that notaries around the world could offer as a service.