"Again, we are not doing this because we want this to be the future. It is not because we want to expand to chain AI-run retail stores across the world. It is not for economic opportunity.
We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running it first while monitoring every interaction, analyzing the traces, benchmarking how much autonomy an AI can responsibly hold."
I always enjoy how these AI companies try to take the moral high ground. When someone doesn't want something to be the future, usually their instinct is not to try to be the first person doing that exact thing. If you don't want this to be the future, then why don't you spend your time building a future you do want? Supporting people who want more AI regulation to stop this? Literally anything else.
Just be honest: you think this is the future, and you do in fact want to be first doing it so you're in a position to make a lot of money. Do you think people don't know what an ad is when they see one?
I once saw an interview with a guy who was into extreme body modification of an unprintable and life-altering nature. He said something to the effect of, "I like challenging people's conception of what humans are." I translated this as, "I did a dumb thing, but now that I'm getting the attention I was after I need to look smart."
For the guys in this story, my translation is, "We were totally fine with making money with no effort, because F paying more employees than we need to. This social media campaign is our backup plan to ensure we get some press and attention out of it even if it fails. We'd totally be cool with making a lot of money though. Please visit our quirky AI shop and buy our stuff."
For decades we moved toward a knowledge-based economy; now we have perversely wealthy people saying they're coming for those jobs. The thought of tens of millions of people with nothing to do but starve to death ought to scare those wealthy people.
> The thought of tens of millions of people with nothing to do but starve to death ought to scare those wealthy people.
It doesn't, it won't, and it shouldn't. Game theory doesn't explore this, and criminal justice tries to conceal it, but the starving will kill and eat each other long before they organize and mob the wealthy.
It plays out in every prison riot, governmental collapse, and other condition of anarchy.
This idea that the poor will mob the rich is feel-good Hollywood idealism that has been wholly undermined by identity politics. The poor will sooner kill and eat you just because you're easier to reach.
A recent Freakonomics podcast episode about cheating with PEDs interviewed the (former) head of the Enhanced Games. At one point, he discussed the benefit to society of athletes being monitored for five years post-performance.
To me, it seemed like a modern day tech-take of human cock-fighting.
> When someone doesn't want something to be the future, usually their instinct is not to try to be the first person doing that exact thing. If you don't want this to be the future, then why don't you spend your time building a future you do want?
“It only remains to point out that in many cases a person’s way of earning a living is also a surrogate activity. Not a PURE surrogate activity, since part of the motive for the activity is to gain the physical necessities and (for some people) social status and the luxuries that advertising makes them want. But many people put into their work far more effort than is necessary to earn whatever money and status they require, and this extra effort constitutes a surrogate activity. This extra effort, together with the emotional investment that accompanies it, is one of the most potent forces acting toward the continual development and perfecting of the system, with negative consequences for individual freedom.”
-- Industrial Society and Its Future (1995)
Many actions have a negative value. If I give two toddlers ball-peen hammers, release them into a window store, and then close the front door while I wait in the parking lot, was my action likely to create value or likely to destroy value?
I'm not saying you should take them seriously*, but if you were to take them seriously, and accept that when they say "we believe this future is coming regardless" they do in fact believe it, well, how can I put it?
Lots of people write wills; that doesn't mean they're looking forward to dying or think they can do much about it. Heck, a lot of people don't even watch their diet or exercise to maximise quality of life and life expectancy.
* I think that by the time AI is good enough to run a retail store, there's a decent chance there won't be any retail stores left anyway. It's like looking at Henry Ford's production line factories and thinking "wow, let's apply this to horse-drawn carriages!"
I think it's actually useful to see how AIs behave in such situations. It's going to happen, and understanding what AIs do helps us mitigate areas or actions that could be dangerous. It's hard to guard against the unknowns if they stay unknown.
I would go further and say that there is just no such thing as "this future is coming regardless" once you get out of the realm of physical facts. One of the things that by turns depresses and enrages me about so much punditry (especially in tech) is this notion that there is some sort of inevitable socio-techno-psychological force propelling human society in certain directions regardless of the will of actual humans.
Nonsense. We as humans make our society; it is nothing but what we make of it; we can make it what we want.
As you point out, people who say otherwise are usually really saying "too bad for you who don't want the future to be this way, because I do want it to be this way and I'm working to make it happen".
We can fault them individually for such corny and groan-inducing deceit, but we can't fault them for society's role in rewarding the highest-profile, wealthiest founders (OAI/Anthropic) for taking the exact same approach with optics.
I am about to go on a long rant, but there is so much money sloshing around the capital allocation machine, going toward a vision of the AI-managed and optimized future, that the propaganda machine for these rose-colored delusions must work overtime. What disappoints me is the question: where the heck are the bears? Did they all go into hibernation 5 years ago when QE gave the retail kindergartener a handgun to pump low-quality tickers to the moon? Have we just societally accepted that everything should be a hyperreal version of sports gambling now, and that the world is and ought to be an efficient market of hyperstition?
I may be old and grumpy saying this, but this all sounds dumb and corny. I would like some of the very capable traders who make money repricing mispriced assets to find a way to make money deflating this bubble and bring this environment back to sanity. And I say this as someone who likes the capabilities of AI but continues to see it do little to none of the hard work of solving the incompressible problems that create and retain enterprise value.
To get off my soapbox for a second and get back to your quoted passage: what they're really saying is, "We are working very hard to make this future come, and we think so little of your intelligence that we believe you'll fall for the fear tactic of believing it's inevitable, ignoring the fact that it won't happen without someone's hands. And in this case, it is very much our hands, which are incentivized not just to do it but to do it so well that we ensure we do everything possible to make it happen. Part of which means persuading you that it is guaranteed to succeed. If we ever let the honest truth slip that what we're proposing is extremely hard to pull off with pure AI, and that we're just going to be another commercial real estate investor like anyone else, the jig is up."
That's what every single one of these kinds of hypocritical navel gazing faux-concern proclamations amount to for me. Astroturf.
I think it would be valuable to list all of the dev team's interactions with the LLM and to state transparently what was induced by humans steering the LLM versus what was an actual LLM decision, unbiased by system instructions or by the dev team communicating with it.
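For illustration only (none of these names come from Andon Labs; the schema is a hypothetical sketch, not their actual tooling), such a transparency log could be as simple as tagging every action with its true origin and tallying how much was genuinely autonomous:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Origin(Enum):
    MODEL = "model"            # decision initiated by the LLM itself
    HUMAN_STEERED = "steered"  # dev team nudged the model via chat
    SYSTEM_PROMPT = "system"   # behavior induced by system instructions

@dataclass
class TraceEntry:
    actor: str                 # e.g. "luna" or "dev:alice" (hypothetical names)
    action: str                # what was decided or done
    origin: Origin             # who actually caused it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def audit(log: list[TraceEntry]) -> dict:
    """Count how many logged actions fall into each origin category."""
    counts = {o: 0 for o in Origin}
    for entry in log:
        counts[entry.origin] += 1
    return {o.value: n for o, n in counts.items()}

log = [
    TraceEntry("luna", "ordered candles from supplier", Origin.MODEL),
    TraceEntry("dev:alice", "suggested lowering prices", Origin.HUMAN_STEERED),
    TraceEntry("luna", "greeted customer per persona spec", Origin.SYSTEM_PROMPT),
]
print(audit(log))  # {'model': 1, 'steered': 1, 'system': 1}
```

Publishing a tally like this alongside the blog post would make the "actual LLM decision" claim checkable instead of a matter of trust.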
> John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.
I'm not sure what sort of labor regulations exist in San Francisco, but presumably they can be fired as easily by an AI as a real person, right? If Luna decides to fire them, and it can do so, then their livelihood does rather depend on an AI's judgement alone.
Unless of course all of its decisions are vetted by humans - as they should be - which makes this experiment a lot weaker than they're saying it is.
I feel bad that people have to read this. It's complete puffery, made up for clicks, and the biggest thing is the pure bravado with which a company says, "Hey, let's just waste a ton of money, all for a potential blog and marketing piece." This is not really automated in any fashion. I was dubious at first, but then I saw the screencaps showing the devs interacting with Luna via a Slack workflow with a human in the loop, meaning they're literally just proxying their own behavior through an LLM. This is no different than anyone who consults AI for any decision with context. To get even more technical about the fallacy: this is not automation, as there is data leakage at every step where there is a human in the loop. A broken clock is right twice a day; an LLM could cycle through 100 guesses to pick a number, but don't market that as an oracle. Aside from that, you could just look at the pictures and context (retail in SF) and assume making a profit here would be near impossible. An actual AI CEO would probably have immediately canceled the lease.
Marketing stunt. If they actually cared about this as an experiment, they wouldn't have broadcast it so early, because now that the public knows the store is designed and run by AI, many people aren't going to support it (i.e. many people who would have shopped there now won't).
I skimmed through this, and maybe I missed it... but what really are they trying to prove? Are they trying to show that AI is capable of arbitraging consumer desires against market products/services into a successful business? Are they trying to show that once an AI is financially managing a business, its ruthlessly efficient demands can add points to your margins? Or are they simply trying to get attention in an otherwise arguably overcrowded market for AI services (maybe the AI suggested something like this)?
The only thing that I saw demonstrated, and again, I skimmed, is what many thousands of software developers using AI tools to write their boilerplate already know: these tools, as of now, are great at going through the motions. A successful retail business, and I spent many years in the retail industry, isn't about putting together a nice storefront, hiring clerks, and selecting just any old products: it's about being profitable. In traditional retail, one of the most important things is getting the right real estate for your target market... and it seems that choice was already made in this case. Yes, a nice storefront and good clerks are important, but I've worked in chains with immaculately designed and built stores and great clerks that failed... and some that opened little more than fluorescent-lit hellscapes with clerks that barely cared, and succeeded. In both cases, the overall quality of the decisions and strategies relative to the target markets mattered to the success of the business. Just going through the motions didn't.
So if all this is to say that AI can do the things people generally do in these circumstances, then sure, but you didn't need this much human effort to prove that... developer types do that at scale every day now. If there is something different this company is trying to learn, I'd be much more interested in that.
To do this properly, no one should know the store is AI run. There is a novelty component of it being an AI run store that will drive consumer demand and increase publicity.
Not even the normal store employees should know (which would be difficult) or maybe the human manager should be held to an NDA to not disclose it (and the manager also defers to the AI in all such real management decisions).
I'd be more interested in the details: what are the inputs given to the model? Does it get a live video feed? Does it know if/when employees show up and open the store? Does it get sales figures? Info on the individuals who bought things?
Storekeeping is more than just ordering merch and putting it up on hangers.
Really interested to understand how the AI keeps rebaselining back to the topic at hand and doesn't end up getting more confused the more it has in its context window.
Did it essentially just create one big plan and spawn different agents to execute it, acting as an orchestrator?
Even the orchestrator would have to detect when it is starting to stray off task and restart itself.
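As a rough sketch of that pattern (everything here is hypothetical; `run_agent` and the keyword-overlap scorer stand in for real LLM calls and a real relevance measure, which the article doesn't describe), an orchestrator can score each step's output against the original goal and retry with a clean context when it drifts:

```python
def relevance(step_output: str, goal: str) -> float:
    """Placeholder scorer: fraction of goal keywords present in the output.
    A real system might use embedding similarity or a judge model."""
    goal_words = set(goal.lower().split())
    out_words = set(step_output.lower().split())
    return len(goal_words & out_words) / max(len(goal_words), 1)

def orchestrate(goal: str, steps: list[str], run_agent, threshold: float = 0.2):
    """Run sub-agents step by step; restart a step with a fresh, minimal
    context whenever its output strays too far from the goal."""
    results = []
    for step in steps:
        context = results[-3:]  # pass only recent results, not full history
        output = run_agent(step, context)
        if relevance(output, goal) < threshold:
            # off-task: retry once with no accumulated context at all
            output = run_agent(step, [])
        results.append(output)
    return results

# Toy demo with a fake agent that "drifts" whenever it has prior context
def fake_agent(step, context):
    if step == "step2" and context:
        return "unrelated tangent about weather"
    return f"{step}: progress on opening the retail store"

out = orchestrate("open retail store", ["step1", "step2"], fake_agent)
print(out[1])  # the drifted step was retried with a clean context
```

Capping the context window per step, rather than accumulating everything, is one plausible answer to how such a system avoids drowning in its own history.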
> But frontier models have become really good, and running vending machines is too easy for them now.
Wasn't their previous attempt at running vending machines unprofitable? I'm not aware of any demonstration that an AI can actually run that business successfully.
>For the build-out, she found painters on Yelp, sent an inquiry, gave instructions over the phone, paid them after the job was done, and left a review. She found a contractor to build the furniture and set up shelving.
I'm sure this involved vast amounts of human oversight (e.g. checking that the contractor had actually done stuff) that isn't mentioned.
Did Luna the AI write this piece of promotional marketing and decide to post it on hacker news? Did Luna the AI create a fleet of new accounts to upvote? Are the human-derived marketing interventions accounted for when the outcomes of this project are assessed?
Dunno, the store looks cool in just the way you'd expect an AI to do it (sort of a synthetic average of cool stores). But is this amount of merch really going to make a sustainable profit (after the buzz wears off) in such expensive real estate?
Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale.
Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus.
One of the most fascinating AI experiments so far.
Not sure about this:
> John and Jill are not at risk. This is a controlled experiment and everyone working at Andon Market is formally employed by Andon Labs, with guaranteed pay, fair wages, and full legal protections. No one’s livelihood depends on an AI’s judgment alone.
Did they give Luna the power to hire but not fire?
Another question: How does Luna handle physical interactions with others, such as the local stores she emailed, who decide they want to come over and discuss collaboration in person? Do the employees have a laptop set up that others would interact with?
Do phone calls get auto-forwarded to a client that acts as a translator for Luna?
This AI has good taste in books. From the AI-proposed books, I highly recommend "The Making of the Atomic Bomb" by Richard Rhodes, published in 1986. It's a history book but reads much like a novel.
I spent more than half the length of the article wondering a very simple question: what does the store actually sell?
Even after reading the answer, I'm not entirely sure. A handful of specific books, "artisan" snacks, and... candles? All in this stupid minimalist hipster high-concept style, and almost certainly at an unreasonably high markup. Completely soulless, but with a "deep" backstory written by an even more soulless marketing drone (literally, this time!).
To put it differently, it's an over-hyped questionably-profitable "business" selling things nobody needs to people who can't see through the marketing copy because "it's the next big thing, everyone else has it".
An honestly excellent metaphor for the entire AI industry!
Lots of “firsts” in this article that I think are uninspired
Humans have been hired by bots for over a decade
Several of the first bitcoin faucets in 2012 said they were rate-limiting their disbursement of free bitcoin behind a captcha, but in reality the captcha was one a spam bot had encountered and couldn't solve itself; humans were inadvertently solving captchas for stuck scripts in exchange for bitcoin.
Additionally, in other money-making autonomy, bitcoin mining ASIC manufacturers in Shenzhen around the same time were nearly autonomously creating machines that would immediately begin mining bitcoin on the network, and it was wildly profitable for periods of several months.
In any case, Andon Labs should give Luna a face. It could project to a video feed as a source on a Zoom call.
There is a word for this kind of thing: Trendslop. Asking LLMs for advice consistently generates average responses as if the questions were being asked of the training sample population. It is reversion to the mean as a service.
I think the main advantage AI (and machines in general) have over humans is they don't have the emotional barriers and attachment to outcomes and ideas. If a human fails or things don't go their way they may be held back emotionally from trying again for some time before, eventually, hitting on the right idea which helps them succeed. Humans also get emotionally exhausted when confronted with a large number of tasks and human interactions. AI has no such hangups and therefore can quickly iterate and do what needs to be done to run a business and, potentially, succeed.
Cool experiment! But the "CEO" agent picked the most boring possible items to sell: t-shirts and some bland art prints designed by AI. I would have loved to see more creativity given that they could have picked anything.
It all kinda reminds me of the book "The Giver" by Lois Lowry, where it's not only black-and-white Burger Kings, it's also generic lifeless AI people promoting dropshipped junk on IG/YouTube.
This kind of thing must be SO frustrating to people struggling to get by in the world. "We gave AI $100k that it will almost certainly squander, yolo!! Hopefully it doesn't abuse people too badly in the process."
I… guess the bet is that what they learn is worth $100k? Seems rather questionable. Or that having this on the resume is a great shock tactic that will open doors in the future?
Did it actually open? A few bloggers came for the opening, came back in the afternoon, even talked to the AI over phone and email, and got nothing except hallucinated replies. The store exists, but the employee didn't show up to open it.
Because based on "asked it to make a profit" I expect financials in the story. Even if it is a bit of a "Clarkson's Bot", for the farm there is discussion of the numbers.
> We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running it first while monitoring every interaction
But why would I, as a human, wish to "interact" with AI, aka software?
That's just a waste of time. How much profit did Luna make in the end?
The only mention of profit is in the headline; the article doesn't indicate that the AI managed to make one.
Surely if it did, the article would boast of it, so one can only assume that an AI cannot run a profitable store in San Francisco.
While reading this I couldn't help but think this is the kind of dumb, socially out-of-touch thing I might have done when I was younger... This is real money and real people's lives... I get that some companies/people will do these types of experiments from time to time to test AI capability, but these guys seem to have done it simply for the fun of it and to get clicks. If you genuinely don't want this to be the future, then perhaps you shouldn't make it the present? Either this is low IQ or bad faith, and I'd bet on the latter.
As someone who likes to prep for interviews and gets quite emotionally worked up ahead of them, I think if I had joined an interview and it was an AI interviewing me, I would feel very hurt... Even if the AI gave me the job, I'd probably decline it, because if I'm interviewing I'm looking for a real job, not to be paid to partake in some AI experiment... But the humiliation doesn't end there, because these guys are going to show the world just how witty their AI was in its replies after making interviewees feel so uncomfortable that they declined their stupid roles.
Crazy stuff, guys. I had to double-check whether this was satire before commenting, because it's the kind of thing only a Silicon Valley company backed by YC would do.
This experiment would be really cool if they had kept the location and specifics of the shop quiet. IIRC, when the AI mania started, a group of people tried to run an AI-managed t-shirt merch shop, but at least they explicitly did not disclose the brand and website, so as not to inflate sales and to keep the experiment pure. Here I expect quite a few visitors and sales just from all the hype and interest around the project.
It would have been much more interesting if the AI had to promote the shop without such boost posts.
This is going through some people’s minds the more pushback grows (see Altman molotov, Maine data center moratorium)
Just like they convinced the younger generation that "boomers" stole their future.
> I translated this as, "I did a dumb thing, but now that I'm getting the attention I was after I need to look smart."
Strikes me as a repulsively mean-spirited take, ironically proving the artist’s point.
"Shouting fire in a crowded theater" is the classic negative-value example.
Words can pretty much be actions, depending on who you are: https://en.wikipedia.org/wiki/Will_no_one_rid_me_of_this_tur...
>I think it’s easier just to recognize words as free and to value them as such.
well, yeah that is the world the AI guys want...
https://en.wikipedia.org/wiki/Speech_act
Pickaxes and shovels and whatnot.
“We’re doing this because we believe this future is coming regardless, and we’d rather be the ones running the Torment Nexus.”
> Great question! Here’s the short version:
> Fair pushback. The honest answer:
These were painful to read.
If an artificial boss is also artificially empathetic, does this make it more realistic?
In any case, the current iteration sounds like a more exclusive circle of hell.
300+ comments, 3 months ago:
https://news.ycombinator.com/item?id=46735511
https://andon.market/on-running-a-real-business.html
https://marshallbrain.com/manna1
'Welcome to Remxtby Shoppe', etc