AI companies love to hype up how AI will provide a great benefit to the economy and transform intellectual labor, but I hardly see any discussion about how much damage it will cause to the economy when you can no longer trust that you're on a video call with an actual person. Maybe the person you're interviewing is actually an AI impersonating someone, or maybe they never existed in the first place. Information found online will also no longer be trustable, footage of some incident somewhere may have been entirely fabricated by AI, and we already experience misleading articles today.
Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video, and the availability of actual information will become more and more limited as the sea of online information gets polluted with crap. It may never be possible to calculate the full extent of the damage in monetary value.
Partially agree.
However, this problem has existed with scam e-mails since the 90s.
For me the solution is in signed e-mails and signed documents. If the person invites me to an online meeting with a signed e-mail, I trust that it's really them.
Same for footage of wars, etc. The journalist taking it basically signs the videos and vouches for their authenticity. If it turns out to be AI generated, then we would lose trust in that person and wouldn't use their material anymore.
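To make the signing idea concrete, here is a minimal sketch of what "the journalist signs the footage" could look like, assuming an Ed25519 key pair and the Python `cryptography` package; the file contents here are placeholders, not any real workflow:

```python
# Sketch only: a journalist signs footage with a private key; anyone holding the
# published public key can check the bytes are exactly what was signed.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # kept secret by the journalist
public_key = private_key.public_key()        # published, e.g. by the news outlet

footage = b"raw video bytes would go here"   # placeholder for the actual file
signature = private_key.sign(footage)        # distributed alongside the video

try:
    public_key.verify(signature, footage)
    print("Valid: file matches what the key holder signed.")
except InvalidSignature:
    print("Invalid: file was altered or signed with a different key.")
```

Note that this only proves the file came unmodified from whoever holds the key; it says nothing about whether the content itself is genuine, which is exactly the objection raised further down in the thread.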
I think he was referring to a cryptographic signature, possibly using the "web of trust" to get the key. I'm not convinced we need a central authority to solve this.
People at my org were gleeful when they learned they could hook LLMs into Slack. Even if we had some reliable, well-used signature system, I think people would just let AI use it to send emails on their behalf.
If the AI age has taught me anything, it's that most people do not care what their output is. They'll put their name on anything, taste or quality does not matter in the least. It's incredibly depressing.
Enshittification never stopped; we just stopped talking about it because it became normal. Quality does not matter anymore. I agree it's depressing, seeing AI slop being pushed and no one even putting in the time or effort to say this is bad and you should feel bad.
> If the person invites me to an online meeting with a signed e-mail, I trust that it's really them.
In the interview scenario, generating an email signature is hardly beyond what an AI can do.
You have no prior knowledge of this person or their signature; it's not some government-issued ID, it's in essence just random data unless you know the person to be real.
With cash, you can only steal so much (or have transactions of up to certain size) until you run into geographical and physical constraints. With cryptocurrency, it’s possible to lose any amount.
With humans writing scam emails, you can only have so many of them until one blows the whistle. With LLMs, a single person can distribute an arbitrary amount.
At some point, quantity becomes a new quality, and drawing a parallel becomes disingenuous because the new quality has no precedent in human history.
There are people hosting agents online to talk to other agents etc. on their behalf. How difficult is it to just instruct such an agent to do the tasks you mentioned? You're assuming it's done by "bad actors" while it's most likely just going to be done by "everyone" that knows how to do it.
I mean, emails were and still are a huge security risk. Sometimes I'm more scared of employees opening and engaging with emails than I am of anything else.
> Information found online will also no longer be trustable
Most information you can access publicly, including Wikipedia, is the result of an astroturfing fight. Most information online has not been trustable for a double-digit number of years now.
> we already experience misleading articles today
Again, this has been happening for decades.
> footage of some incident somewhere may have been entirely fabricated by AI
Not like we did not already have doctored footage plaguing the public.
> Money will have to be wasted on unnecessary flights to see stuff or meet people in-person instead of video
The need to inspect the supply chain for snake oil has been a thing since at least EA (the Nasir one).
We may now be dealing with these problems at the scale of spam, but the problems themselves were already there.
Laws will be passed to make it "safer", just like is happening with ID verification systems. Every image or video generator will require a watermark: something visible which cannot be removed easily, or something hidden which can be detected and blocked. Access to models which do not comply will be made harder through ID verification checks or something.
There will be some regulatory capture in between.
The world will kick into gear only when something really bad happens. Maybe an influential person - rich or politician - fooled into doing something catastrophic due to a deepfake video/image. Until then, normal people being affected isn't going to move the needle.
"Is this a deepfake video call" is a major plot point in a pretty big movie currently in theaters, so I think this is getting into the broader zeitgeist.
> At first, my aunt wasn't buying that any AI was involved. [...] There was a long pause. "I was like 90% sure," she said, hesitating. "But that sounded more artificial."
There is a thing many people do. I don't remember the phenomenon's name, if it has one, but it goes like this:
Given enough time to reconsider options, people will endlessly flip-flop between them, grabbing onto various features over and over in a loop.
I thought we'd long passed the Turing test, until I tried to implement a chat bot.
It's not even close.
It's easy to "pass the Turing test" for 5 minutes. It's extremely hard if you try to hold a longer, continuous conversation. Anything longer than 10 minutes the user will immediately know it's not human. Some problems you'll encounter:
- The bot needs to handle all situations, especially the nonsensical ones. This is when the user types "EEEEEEEEEEEEE...", or curse words, repeatedly.
- Who would've thought that it's extremely hard to decide when to stop talking?
- No matter how well you build the "persona" for the bot, it'll eventually converge to the same one, which is that of the LLM itself.
- You'll notice that the bot is ignoring something obvious (e.g. it's not remembering past convo), and then give it some instructions to help with that. And then that'll be THE ONLY THING it does.
At this point "spotting AI" is IMO an irrelevant skill. It's something to be aware of, but a bunch of the time I can't tell even with an extended look at static images, and if I'm on a phone and scrolling then nothing really tweaks automatically - perceptually the flaws blend in exactly as you'd expect them to.
So it's all context clues really - i.e. if the video tracking shot is sort of within the constraints of the models, plays to obvious agendas etc. then I might tweak to go looking for artifacts...but in the propaganda game? That's already game over. And we're all vulnerable to the ground shifting beneath us - i.e. how much power would there be if you had a model which could just slightly exceed those "well known" limitations?
IMO the failure to implement strong distributed cryptography much earlier in the digital age is going to punish us hard for this - i.e. we haven't built a societal convention of verifying and authenticating digital communications amongst each other, and technology has finally caught up that it can fool our wetware now. It was needed well before this - e.g. the rise of the telephone scam and VOIP should've been when we figured out how to make sure people were in the habit of comprehending digital signatures and authentication. That habit never formed, though, and now something much more dangerous is out there.
This is why you need a phrase that you've never shared in a text or on social media that you can use so your family knows it's you. Especially to protect them from scammers pretending to be you.
I've started to prove it (here on LinkedIn, countering its Moltbookification) via my bad handwriting – the final frontier of AGI. Finally, a lifetime of training to write more or less illegibly pays off.
https://www.linkedin.com/posts/fabianhemmert_handwriting-vs-...
It feels good to connect with humans that way.
I am trying to do the same with my (vibe coded!) site "jetzt" (German for "now"), to which I photo-blog impressions from everyday life. Only insiders will know what they mean beyond their aesthetic, and it also feels like a good way of human connection in these times.
https://jetzt.cx/
(No food, no plane wings, just ugly banalities and beautiful nothingness from everyday life.)
This is scary but also kind of hilarious. You should feel proud your aunt still judges first before believing anything online. I've heard so many stories from friends lately. These scams are getting crazy. Scammers are already using pictures of influential people and even jumping on video calls pretending to be them.
More than a year ago I suggested that our family adopt a sign/countersign type of authentication (I say "the migrating birds fly low over the sea", you say "shadeless windows admit no light" ;-). It was clear at that time that we were going to start seeing scams get more advanced and hard to tell from valid requests for money, for example.
I thought I'd get at least some traction, considering part of the family works for No Such Agency. Nope.
Somewhat related: over the last few weeks at work we've started having people call our customer support asking for their e-mail addresses to be changed. The first one went through, but the scammer somehow messed it up and the address bounced. They called back in, and the support person they talked to recognized by voice that it wasn't the same person they'd talked to in the past. Now we've had this happen to 3 different accounts; the first two times it was people with thick Indian accents, the most recent one was suspected of being an AI-generated voice.
The author really tries to convince us that Netanyahu is "not dead, folks", implying that the video in question is real because it shows five fingers, while at the same time relaying the message from experts that one cannot prove that audio/video is not AI.
Am I too naive in thinking the answer is rather simple? Cryptographic proofs (digital signatures). For text this should be trivial and for streaming video/audio you can probably hash and sign packets or maybe at least keyframes or something?
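As a rough sketch of the "hash and sign packets or keyframes" idea, assuming the sender holds an Ed25519 key and the Python `cryptography` package (the chunking and sequence-number scheme here are illustrative, not any standard):

```python
# Toy sketch: sign each streamed chunk (e.g. a keyframe) together with a sequence
# number so chunks can't be dropped or reordered unnoticed.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

key = Ed25519PrivateKey.generate()
pub = key.public_key()

def sign_chunk(seq: int, chunk: bytes) -> bytes:
    # Sign a digest of (sequence number || chunk) rather than the raw chunk.
    digest = hashlib.sha256(seq.to_bytes(8, "big") + chunk).digest()
    return key.sign(digest)

def verify_chunk(seq: int, chunk: bytes, sig: bytes) -> bool:
    digest = hashlib.sha256(seq.to_bytes(8, "big") + chunk).digest()
    try:
        pub.verify(sig, digest)
        return True
    except InvalidSignature:
        return False

stream = [b"keyframe-0", b"keyframe-1", b"keyframe-2"]   # stand-ins for real frames
signed = [(i, c, sign_chunk(i, c)) for i, c in enumerate(stream)]

# Receiver side: tampering with any chunk (or its position) fails verification.
print(all(verify_chunk(i, c, s) for i, c, s in signed))   # True
print(verify_chunk(1, b"tampered frame", signed[1][2]))   # False
```

The hard part, as other comments here note, isn't the signing itself but key distribution: knowing which public key genuinely belongs to a given person or outlet, whether via a central authority or a web of trust.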
This is one area where the government needs to step in. Video-hosting websites should be made to flag videos as AI-generated. AI companies should be made to watermark generated content in a hard-to-remove way (i.e. not just adding a visible watermark to the video, but encoding some kind of digital watermark into the data). Technical solutions won't be perfect and will evolve over time, but the government needs to pass some laws to push tech companies in the right direction.
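For a sense of what "encoding a digital watermark into the data" means at the simplest level, here is a toy least-significant-bit watermark on an image array; this is illustration only, since a mandated provenance watermark would have to survive re-encoding, cropping and compression, which this does not:

```python
# Toy illustration only: hide a bit pattern in the least significant bits of pixel
# values. Real provenance watermarks must survive re-encoding; this one does not.
import numpy as np

def embed(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    flat = pixels.flatten()
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit          # overwrite the lowest bit
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n: int) -> list[int]:
    return [int(v) & 1 for v in pixels.flatten()[:n]]

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)   # fake image
mark = [1, 0, 1, 1, 0, 0, 1, 0]                                     # "AI-generated" tag
stamped = embed(image, mark)

print(extract(stamped, len(mark)) == mark)   # True: the tag survives losslessly
```

Production approaches (provenance metadata standards, model-side watermarking) are far more involved, and detection versus removal remains an arms race, which is part of why the comment above argues regulation has to evolve alongside the technical measures.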
I wonder what the captcha equivalent for AI bots is? Ask about taboo topics to rule out commercial models, and ask specific reasoning questions that trip up AI, like walking vs driving to the car wash? Or your own set?
AI slop detection requires some finely developed intuitions that come from decades-long exposure to both journalism/marketing slop and high-quality literature. Because AI was aligned, straight out of hell, by fresh low-level journalism graduates.
That's why it always falls back to the same tired formalistic clichés, like "Not this, but that", rampant baiting and sensationalism, because that's what would get high marks from your typical low-rent liberal arts annotator.
The author should have mentioned that this was partly an article to whitewash Netanyahu, but coming from the BBC (and from the mainstream British media as a whole) that was to be expected.
Perhaps we need tamper-proof authenticated cameras in all major cities worldwide that publish a livestream 24/7, and you can then stand in front of them to prove your human existence...
This could be something that notaries around the world could offer as a service.
Ultimately ID requires either a government ID service, a third party corporate ID service, or some kind of open hybrid - which doesn't exist.
All of those have their issues.
> If it turns out to be AI generated, then we would lose trust in that person
You are assuming that only you can generate fake AI videos of yourself.
> footage of some incident somewhere may have been entirely fabricated by AI,
Or the opposite, where people attempt to get out of trouble by calling real evidence into question by calling it “AI”
“Auntie, it’s me! N*** k** f**! X is really a man! ** did 9/11!”
“Oh it really is you Johnny!”
We’re all going to have to start communicating this way. Best of luck.
I offer consulting services on the side to help professionals hone these skills. $250 / hour.
> "Six fingers is not an AI thing anymore," Carrasco says. The best AI tools stopped adding extra fingers years ago
How was this solved, actually? More training data, or was there more to it?
I truly believe that it is a crime against humanity
Mexed Missaging.
Because no frontier model is allowed to go against the popular narratives of the day.
> Netanyahu's follow-up coffee shop video is real too
Really? The coffee in his cup, filled to the brim, did the most bizarre dance possible. And he handled the cup as if it was empty, without any care.
But about deepfakes: these exist, to re-add a sixth finger. Once you do that, you can claim the video was AI-generated.
https://www.etsy.com/listing/1667241073/realistic-silicone-s...