Actually it shouldn't be too hard, just a cardboard cutout with a pullstring which, when pulled, intones "we're really sorry about this and it will never happen again, I promise".
You can thank Marc Andreessen whose world view is apparently limited solely to "Snowden is a traitor". Thank god someone like him has fuck you money and can shape the world to his desires
(And I've heard he stole Mosaic code, which I don't know if it's true but would be consistent)
It is odd when people try to put Notch on the level of someone like Carmack. Like because the guy made a billion dollars that means his opinion should be highly valued in perpetuity. He just seems like a fairly average game dev that lucked his way into making Lego 2.
Considering he reportedly paid top FB people "billions", you would hope they would be doing more than building a Mini Me. Is this the modern Stalin statue?
Hilarious that at one end of the AI world Anthropic is doing exit interviews with their models, while at the other end Zuck is trying to create a digital twin and trap it in an eternal work prison, Severance style. Two groups starting with the thought that today's models are getting more person-like, and going in completely opposite directions with it.
I wonder how it works in his mind's eye. Does it make decisions? Does it dispense little Zuckian wisdom proverbs? Does it try to become your friend?
I can understand the appeal: being able to be "present" without the time cost could mean significantly more presence for the same effort. This could be very attractive, especially to those managing personal relationships, like sales representatives.
But I'm surprised that the risks seem to be so underestimated.
Once this clone exists, what happens if it gets out into the wild? Imagine everyone having full access to what is effectively a digital model of your personality. Imagine your competition putting your own model to use against you.
And the better the approximation of this model, the worse the damage to yourself.
These people are certifiable and have too much money to misallocate on nonsense. This is like Gavin Belson's holographic avatar (which of course did not work).
There was an old Soviet cartoon about a child who found a box containing two magical servants and immediately asked them for ice cream and sweets. Well, since the servants "do everything for you", the first servant fetched the sweets for him, and the second one ate them for him. I've often thought about this cartoon since the AI thing started.
Back in the 1980s, some Japanese companies had rooms in which you could whack at an effigy of the boss with a shinai. Just to let off a bit of steam. Will Meta's workers be able to do something like this with Zuckerberg's AI clone?
The FT piece says "They added that the character was being trained on the billionaire’s mannerisms, tone and publicly available statements, as well as his own recent thinking on company strategies, so that employees might feel more connected to the founder through interactions with it."
Surely the more likely outcome is that employees feel less connected to "the founder" because they know that there's a high chance they are simply talking to an AI clone?
What happens when Zuck is EOL? Does he transfer his Meta shares to a trust owned by the AI clone? Does that mean that we will have to deal with Zuck for literally forever??
> Meta CEO Mark Zuckerberg could soon have an AI clone of himself to interact with and provide feedback to employees, according to a report from the Financial Times.
If you're the type of person who checks the comments on a post with this kind of headline, then you probably also want to (re-)watch the 2 minute highlight reel of Mark's backyard meat-smoking party. https://www.youtube.com/watch?v=eBxTEoseZak
For artificial intelligence to replace oneself, it would need a digital copy of one's way of thinking. I believe this is impossible to implement with current AI.
https://web.archive.org/web/20191221082346/http://ludumdare....
https://web.archive.org/web/20210722173354/https://www.youtu...
https://www.youtube.com/playlist?list=PLgAujBKarXXoMxJDyi1Am...
Edit: RE Anthropic haha: https://news.ycombinator.com/item?id=47750086
https://www.ft.com/content/02107c23-6c7a-4c19-b8e2-b45f4bb9c...
https://archive.is/mtVXJ
That way AI-AI can chat and save humans’ time.