The future of everything is lies, I guess – Part 5: Annoyances (aphyr.com)

by aphyr 169 comments 283 points

[−] jerf 34d ago
I don't need to conduct 1000 transactions per day. I don't foresee a world in which it will be some sort of fatal inconvenience to need to approve all purchases. I certainly don't plan on ever just handing over my credit card to an LLM, due to its fundamental architectural issues with injection, and I still don't anticipate handing it over to any future AI architecture anytime soon because I struggle to imagine what benefits could possibly be worth the risk of taking down such a basic, cheap barrier.

All that stuff about support, though, inevitable.

[−] ToucanLoucan 34d ago
Agreed. My only real complaint with this article is it frames needing to argue with a machine as though this is a new, freshly annoying thing. I already do this constantly.

Every time I call the Costco pharmacy, I just hit 0 immediately because: Phone. Trees. Suck. They have always sucked, it's just an awful, grindingly slow way to accomplish ANYTHING, and it's so, so much easier to, when I need help, get a person on the line who can figure out what's gone wrong and sort it.

The only people benefiting from cutting that down are the scum class (combo of shareholders and executives) and who's shocked, really. Everything is being ruined nearly at all times to benefit the scum class.

[−] gdulli 34d ago
At least phone trees are deterministic and there's still (usually) an option to get to a person for matters that aren't covered by the multiple choice options. Talking to AI is a much worse experience and the hope of the industry is that there won't need to be a human as a fallback anymore because (they believe) the AI is intelligent enough to handle anything.
[−] Barbing 33d ago
The very very very lovely executives at Intuit (thank you for your contribution to society, boys) have a good plan for their TurboTax help line: if you don't spell your name to the robot, you don't talk to a human.

(Unless saying "no" / "agent" etc. the fifteenth time would've been the trick! Sure, my name can be "O K"...)

(I would def love this system if I worked there though, just surprising it didn't have an offramp along the way... maybe they did but everyone used it)

[−] SpicyLemonZest 34d ago
I'm surprised to find so many people who consider human-based customer support a good experience. I wasted an hour on the phone last month with a series of polite support agents who I'm sure were wonderful people in their personal lives. They kept saying they'd like to try one more thing, making me wait 5 minutes (just short enough that I can't get anything done in the interim!), and then asking for one more pointless permutation of the workflow that did not work because their website was not showing me a button the support scripts said should be there. Talking to an LLM would have let me realize a lot faster that we weren't getting anywhere.
[−] wincy 33d ago
This happened to me when I tried to buy Oakley’s, it was because I’d changed my router to an ad blocking DNS which made their support session lookups fail, so they couldn’t help me. Transactions failing, all because of their site being too tightly integrated into tracking and ad platforms. I ended up going with Zenni and got similar glasses for 1/5 the price.
[−] fyredge 33d ago

> because their website was not showing me a button the support scripts said should be there.

At that point, it's effectively a phone tree executed by a human. Colloquially, human-based support means getting hold of someone who knows how to solve problems, or, worst case, knows who to contact to solve the problem. That means employees who know their worth, which, unfortunately, businesses do not want to pay for.

[−] JohnMakin 33d ago
there are many human customer support systems where the goal is to frustrate you into saying something to make them hang up, or make you give up.

good human customer service is a big margin my current company eats our competitors alive on

[−] rtgfhyuj 34d ago
you're part of the scum class btw (we all hold shares)
[−] Leomuck 34d ago
So basically more ways of trying to make people buy things, do things, think things than before? I feel like our whole world more and more circulates around manipulation and the absence of truth and discourse.

Then again, I do think LLMs are an incredible technological achievement. The issue is not so much what they do or that they exist, but how they are utilized. Right now, they are utilized to further the class divide between rich and poor.

Who are we to trust in the future? Not big companies, not the state, not LLMs. Time to organize around groups and collectives that we know we can trust and that we know have our wellbeing in mind.

[−] groundzeros2015 34d ago

> The issue is not so much what they do or that they exist, but how they are utilized

This is exactly how we got here though. Technology is not passive. It changes incentives, procedures, ideas and shapes the world. If we don't structurally limit what it does and how it's used, then we are not in control, no matter what our choices personally are.

[−] api 34d ago
A major problem is that if we structurally limit what technologies do, we are still not in control. Now whoever we empowered to control and limit the technology is in control. Who keeps them accountable?

You’ll probably get one of three outcomes: regulatory capture by monopolies, self dealing by bureaucrats to enrich themselves or gain power, or regulatory capture by self absorbed ideologues who halt all progress or force it down some ideologically approved path.

In none of those scenarios is anything aligned with the best interest of the people.

[−] groundzeros2015 34d ago
I don’t disagree. A consumer oriented democracy is not well equipped for the challenge.
[−] bigfudge 34d ago
That’s what you will get in the US. It’s not clear a functioning democracy would produce the same outcome.
[−] groundzeros2015 34d ago
I think it’s pretty hard for democracies not to cater to the most base desires.
[−] api 33d ago
As opposed to? What makes the ego and base desires of an aristocracy superior?

It’s hard for humans not to get bogged down in base desires, period, because of the dopamine system.

[−] groundzeros2015 33d ago

> As opposed to?

A government which can choose to protect values which are unpopular in the short term.

> What makes the ego and base desires of an aristocracy superior?

Their awareness of higher values and goals. For example how technology might impact the population.

I would recommend Aristotle’s Politics for an overview of the strengths and weaknesses of various government types.

[−] Nasrudith 34d ago
I hate to tell you this but nobody has ever been in control. To think you can is to think you can unring a bell.
[−] pixl97 34d ago
Right, and that's why we all died in a nuclear war.....
[−] ElectronCharge 34d ago
The disincentives to nuclear war are glaringly obvious enough that even politicians (and their masters) get it.

AI isn't like that. One problem is that it's rather generally misunderstood at this point. "AI" is not "intelligence". It's intelligence-adjacent, and something like LLMs is part of our psyche...the subconscious facility that allows us to form sentences without really thinking about it.

At any rate, I have to agree with most of the points the blog author brings up.

[−] pixl97 33d ago
I mean, not really. The only reason we've not died in a nuclear war is that building nuclear bombs is very, very difficult and expensive. If it suddenly became quick and easy to get nukes, we'd flash-fry pretty quick once any and every suicidal nut with convictions got their hands on one.
[−] intended 34d ago
Our society, pre internet, built systems to manage trust. The conditions that allowed those systems to exist (the speed of transmission of data, the ratio of content generation to verification, the ability to shape consensus), have changed.

You are ringing the clarion call for community and cooperation, and it will not work. Not because people don’t want community or the better things, but because incentives make the world go round.

The choice between making some money at the cost of polluting the information commons is no choice at all. That degradation of the commons means no one can escape. No community you form, no group you build, dodges the fallout when someone decides to set fire to shared infrastructure.

We are moving into the dark forest era of the information economy. As models improve, inference costs drop, and capacity increases, the primary organism creating content online will be the bot.

Instead of building communities of people, build collections based on rules of engagement. Participants - be they bots or humans - must follow prescribed rules of conflict and debate.

That way it doesn’t matter if you are talking to a machine or a person. All that matters is that the rules were followed.

[−] Barbing 33d ago
Very interesting, I've thought in a completely different direction, towards human verification. "IRL KYC for friends" or something

I always hit problems with it though. Let's say I can find someone I trust. Maybe it's me. Say I only enter online spaces, at least with intent of discussion, with those I've met in real life. Well, at some point, someone I've met face to face would be incentivized to maybe share a link to their friend's concert. Perhaps there's a free guest list spot in it for them if the show sells out. Or maybe it's all gravy, but eventually:

I want to expand the network we've created together, and it means trusting someone else to bring in people to the online space I've never met in real life. This could again be fine for a long time, but won't someone eventually be incentivized (especially if this practice were common) to promote this supplement, promote that politician...?

(recognize astroturfing is different from the impending slop tsunami but both feel to be in the same stadium)

[−] intended 33d ago
Proof of human is the natural first stop.

Your solution shares its essence with a club, a WhatsApp group or interest group.

It works, but you will still be at the mercy of the large communities and economies of thought that the members are a part of.

That is the broader environment you are a part of.

Everyone from FAANG firms and governments to game companies struggles to distinguish real people from bots.

If your platform is global, then you have to contend with users from different legal regimes and jurisdictions.

The issue is that verification is logistically expensive, ends up infringing on rights, legally complex and on top of all that - error prone.

To top it off - if proof of human ends up gatekeeping any form of value, you will set up incentives to break verification.

[−] SoftTalker 34d ago

> I feel like our whole world more and more circulates around manipulation

Hate to break it to you but it's always been this way, and it was easier in the past when information was so much more expensive to distribute.

[−] 01HNNWZ0MV43FF 34d ago
The Old Internet was a whalefall - Information online was fairly trustworthy while being more convenient and more plentiful than in-person information.

The whale's been eaten now. The broader Internet is mostly not trustworthy, or convenient, and the information is not even very plentiful.

People will and are retreating into high-trust zones. In-person networks, product recommendations from real friends, and closed group chats.

It's not the end of the world, but things have changed. We'll have to put more work into finding information than we're used to.

[−] nalekberov 34d ago

> Right now, they are utilized to further the class divide between rich and poor.

Ironically this was the main reason LLMs were introduced in the first place, not to benefit the poor, but to widen the gap between the rich and the poor.

[−] mentalgear 34d ago

> Time to organize around groups and collectives that we know we can trust

I’ve had the same thoughts, but if you look deeper, it all circles back to what we already had: (open, transparent) public institutions, society, and government by the people. The foundation wasn't the problem; the environment was.

Along the way, social media noise, engagement-optimisation and Kardashian-style "entertainment news" infecting real news made an attention economy where, no matter how scandalous you are, attention can be minted into dollars. That is what polluted our infosphere and led to the lack of trust.

Now, nobody trusts these previously mentioned public entities any more - sometimes due to state-actor or ad-tech disinformation, and sometimes for good reason like when the poisoned public allowed these 80s-style telemarketer-style political weirdos and their cronies to take over public administration.

[−] LogicFailsMe 34d ago
Local models and powerful consumer HW and an informed populace that doesn't hate STEM, but that's not good for the shareholder value so you get expensive everything everywhere all at once instead. And if you dare question the mindset of hating on STEM whilst being addicted to its fruits, that just means you're another one of those maximally SV-aligned sociopaths so why bother? Evolve and let the chips fall where they may because I don't see any other options that play out in the idiocracy craving for strong confidently wrong leadership.
[−] drzaiusx11 34d ago
The majority of human history has been written by the ruling class of the day. Transparency only seems to follow in the wake of their inevitable fall, usually at great cost in retrospective research via the oft thankless unraveling of threads of truth from their more copious fictions. Much like the machines we construct in our likeness, we too seem to get stuck in endless regressive cycles.

Folks in the "now" have always had a tendency to cling to their fictions as if they were truth for whatever reason; like nationalist exceptionalism, racial superiority, or religions rooted in "othering", etc. Humans seem to have an innate desire to fool themselves and trust in things they should not. Perhaps it's simply a sort of existential coping mechanism of living in a cold, unforgiving reality. We seek the comfort of lies.

Organizing around groups of trust, tends to lead to factionalism and conflicts. Knowing and trusting are sadly very different things in our species.

[−] gaythread 33d ago
[dead]
[−] sassymuffinz 34d ago
Self inflating nipple shaped balloons that generate their own lift without any helium would be an incredible achievement but that doesn't mean it's useful beyond being novel. Chatbots are ultimately just predictive text on steroids, and only complete fools would base their business, or entire economy around it.
[−] morgengold 34d ago
My father just changed his car key battery with the help of AI and he likes that. He also consulted it about car insurance regulations and he got more out of it than searching the web himself.

For most simple mainstream questions I just ask ai instead of googling shitty results.

Most of the time AI is good enough and often better than the status quo ante.

People do not care if it is a stupid token prediction machine as long as the job gets done.

[−] vyr 34d ago
have worked closely with customer support teams, can confirm that the goal of any technical improvements that go in front of CS agents is to reduce ticket volume, and thus costs. of course they measure retention and satisfaction but ticket volume is always the big one. chatbots were big for this long before LLMs existed.

a fun side effect is that CS is also an early warning system for companies, so when you make it harder to get through to a human, you start throwing out info on your users' pain points. of course this only matters if people have a choice about whether to use your product, so that's gotta be an upside for insurance companies, etc.

[−] Lerc 34d ago
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE

THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION

—IBM internal training, 1979

It took me a while to realise that the premise is saying the same thing as the reason why we have so many "Computer says no" experiences today.

The conclusion only follows if you want someone to be accountable.

If you want to avoid being accountable, computers should make all management decisions. This has nothing to do with AI other than it provides another mechanism to do that.

People saying "I'd love to help you but the computer won't let me do that" has been happening for years now.

Websites develop abusive patterns because A/B testing lets a process decide based on the goal you want. It doesn't measure the repercussions, so you have made no decision to allow them.

Management read it as

A COMPUTER CAN NEVER BE HELD ACCOUNTABLE

THEREFORE THERE CAN BE NO LIABILITY IF COMPUTERS MAKE ALL MANAGEMENT DECISIONS

[−] Hoasi 34d ago
The erosion and further diffusion of responsibility is the trend that worries me the most, since it’s already how all mid-size organisations, businesses and institutions alike, operate by design, and LLMs are likely to make that much worse.
[−] kevg123 34d ago
I sent the entire series by Aphyr [1] to some friends. Two of them, independently, responded with a variant of, "TLDR, can you give a summary?"

I chat with these friends a lot but I rarely send articles that I suggest they read and that I think are profound, so I expected them to read it. These are smart people that have a history of reading lots of books.

They are both huge AI proponents and use AI for nearly everything now. Debates on various topics with them used to be rich; now they're shallow, and they just send me AI summaries of points they're clearly just predisposed to. Their attention spans are dwindling.

[1] https://aphyr.com/data/posts/411/the-future-of-everything-is...

[−] tao_oat 34d ago
[−] gs17 34d ago

> Perhaps we’ll see distributed boycotts where many people deploy personal models to force Burger King’s models to burn through tokens at a fantastic rate.

Given how many people hate AI in general, I'm surprised there hasn't been anything like this happening. They could even get around the irony of using "AI" themselves, I bet low-tech language models like Markov chains could provide sufficient time wasting potential (I'd love to see it done with an old fashioned AIML chatbot). Asymmetric chatbot warfare.
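The Markov-chain idea really is low-tech: a responder that sounds vaguely on-topic while burning the other side's tokens fits in a few lines of Python. Everything below is illustrative - the corpus and function names are made up, not from any real tool:

```python
import random
from collections import defaultdict

def build_chain(corpus, order=2):
    """Map each word-tuple prefix to the words that follow it in the corpus."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain

def ramble(chain, length=30, seed=0):
    """Emit a plausible-looking but meaningless reply to keep a bot busy."""
    rng = random.Random(seed)
    prefix = rng.choice(list(chain.keys()))
    out = list(prefix)
    order = len(prefix)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Toy "customer service" corpus; a real annoyance bot would feed in
# whatever domain chatter keeps the other model engaged.
corpus = (
    "I would like to speak to a representative about my order. "
    "My order has not arrived and I would like a refund. "
    "A representative told me my refund would arrive with my order."
)
chain = build_chain(corpus)
print(ramble(chain))
```

No GPU, no API bill, and every word it emits was seen in the corpus - which is exactly the asymmetry the comment is pointing at.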

[−] _doctor_love 34d ago
I've been enjoying these articles by 'aphyr and I think they raise important points. Primarily though, they read to me as polemics of a curiously American nature.

The pattern goes something like this:

- this development is bad

- companies will be unrestrained in their use of this development

- there will be no rules so they can do whatever they want

- we are all fucked as a result

But then... propose that we make some laws to put rules around this stuff, also known as regulations, and everybody goes "whoa, hold up, hold up, hold up... I dunno about that part."

Dear friends - America has always been this way. Study your 19th and 20th century history. Companies will exploit the shit out of us unless we put some rules in place to prevent it. Yes, that might mean making less money in the short term as regulations cause friction. But in the long term it means we can have a better and actually livable society.

(For what it's worth I'm an American and not an uppity European or Australian taking potshots from across the pond; no offense to Euros or Aussies intended, love you guys)

[−] gib444 34d ago
He talks in the future tense about things already happening for some time. I've had phone systems lie about talking to a bot.

> Companies are now trying to divert support requests into chats with LLMs

More than trying - they are doing it very successfully, and have been for a long time now

I do agree things can still get 10x worse than even the current state though

> When you talk to a person, there’s a “there” there—someone who, if you’re patient and polite, can actually understand what’s going on

I've found they have been trained to be machine-like for many years now and not actually help. They focus on empathy and understanding and caring about your needs... and diverting your attention away from actually resolving the issue. Here's an example I experienced recently:

I complained to my bank about how they show refunds on the app. I got a call from a lovely sounding lady who used a comforting tone to ask if I had any special needs she needs to be aware of so she can "provide extra support" .

At the beginning I made it clear I'm not chasing a particular refund but rather raising a specific complaint about how refunds are shown in the app. 4 times she mentioned a specific refund, assuring me that it had been refunded, ignoring everything I said at the beginning. She explained how refunds work. She explained how pending transactions work (all off topic). She explained 3 times that they can't (won't) create a feedback loop and begged my permission to close off the complaint, saying she hopes I can understand.

That was all a very dressed up "I'll pass your feedback to the PM. Thanks", but it was a whole ridiculous long phone call trying to make me feel "heard", and I came away feeling like a 15-year-old

[−] smitty1e 34d ago
So, providing actual customer service becomes a market differentiator?

"Yes, we cost more, but you get what you pay for" can be a good play.

[−] elzbardico 34d ago
The worst thing is that non-technical people, and actually a lot of technical people without experience in ML, will tend to overestimate the capabilities of those systems and miss the nuances of probabilistic thinking needed to properly integrate their outputs into a decision.

Remember that the polygraph still exists; now we will be dealing with a massive portion of decision makers who will treat this as artificial intelligence not in the technical sense we use, but as real intelligence, maybe even super-intelligence.

[−] tgsovlerkhgsel 34d ago
Regarding companies trying to block any contact with customer service and adding endless AI hurdles: In some countries, having a reachable means of contact is legally required. Is there a NOYB-style organization that specializes in enforcing this right (suing companies on behalf of consumers)?

For the "bureaucracy has royally fucked up and doesn't want to fix it" case, if it is something that can be fixed with money and isn't time sensitive (e.g. you need a refund, rather than needing the airline to actually provide the ticket you already paid for and want to fly this weekend): in countries that have effective small claims courts, these can be a surprisingly convenient way (less hassle than the "talk to the bot" wall of the company!) to resolve this kind of issue.

I hope that these resolution methods become more common - I think the tools to fight enshittification often already exist, we just don't use them enough. A welcome side effect would, of course, be that this would impose a real cost on the enshittifiers, creating an incentive to provide proper support.

[−] petermcneeley 34d ago

> Since LLMs are unpredictable and vulnerable to injection attacks, customer service machines must also have limited power

Haha yes. I interacted with a bank one. It was like press 5 for mortgages but with a text to speech front end.

At the end of the day the LLM can be tricked into doing anything.

[−] ongytenes 30d ago
I was blocked from reading this article. I was intrigued enough by the summary to consider reading the other parts of the series.

He chose to block me based on the state I live in.

[−] calvinmorrison 34d ago
We are inseparable from technology. Technology which we cannot opt out of and still live in a society. We do not get to pick and choose what types of technology we engage in. Dr. K predicted this decades ago and he was right.
[−] ramon156 33d ago
- Use AI for knowledge holes, fact-check the answers, then accept them as solved.

- Use agents to write code that is defined in a spec, review manually, accept it as solved.

Nothing more, nothing less

[−] Barbing 33d ago
"That Dropped Call With Customer Service? It Was On Purpose."

I knew that one time I needed a free Sam's Club membership for one thing and they kept on dropping me...

[−] LogicFailsMe 34d ago
D^HLying is easy, it's comedy that's hard...
[−] AtlasBarfed 33d ago
Sucky customer service is a direct economic function of reduced competition and increased monopolization / cartel dynamics.
[−] abcde666777 33d ago
I've found these posts to be excellent - wonderful reads really. Props to the author.
[−] ixtli 34d ago
Excellent essay. I see some of this is already happening imo
[−] fandorin 34d ago
"Agentic commerce means handing your credit card to a Large Language Model" - this is simply not true. LLMs/Agents will never get any credit/debit card details, they will be just an interface.
[−] 0xbadcafebee 34d ago
This is doomerism. Yes, everything will get worse. But everything will also get better. Such is progress. (for every one of these examples of annoyances, I can think of two ways to use AI to get around the annoyance. not clever programmer things, but things an average person who learns to use Codex or Claude Desktop to operate their desktop will know)

Most of these annoyances are also things that existed before AI, and will continue to exist after, because consumerist capitalism. The good little obedient consumers get abused because they don't stand up for themselves. Customer service is an infuriating maze? Yeah, because you voted with your dollars (and political indifference) to allow companies to make customer service (the thing you pay for) worse. We bring these problems on ourselves. It's pointless to complain if you aren't willing to do anything to change it. (And if you think you can't change it, there's other nations to look at, as well as the fact that you live in a democracy - for now - unlike the rest of the world)

Hell, we already have companies whose sole purpose is to manage your subscriptions for you because you're too lazy to do it yourself. You could look at this and say, man, the world is terrible! Or you could look at this and say, man, how great is my life that I can not only subscribe to a lot of things without going bankrupt, but I have extra cash left over to pay a company to manage my subscriptions?

Don't let the hedonic treadmill and complacency trick you into A) accepting a worse life, or B) convincing yourself your life is bad when it's actually better than most people's.

[−] zer00eyz 34d ago
Everything that is old is new again.

Payment processing, is better than it was in 2000, but still not good.

Micropayments: this is obnoxiously expensive to do.

Discovery, and discoverability: again here we have better but not good solutions (and many of the ones that were once good are enshitified).

Pricing: this is a problem everywhere, and frankly we need the law to change in a way that is pro-consumer: published prices and disclosed fees, both for services and for payment processing (that 3 percent back from Visa looks a lot less attractive when it's part of a 5 percent markup).

Customer service: well there are already companies promoting models where they cut you off and send you into a black hole (google is a prime example). Good customer service will become a differentiator, and maybe a "paid for" service as well.

[−] ufocia 34d ago
AI on AI warfare
[−] siliconc0w 33d ago
I'm excited for the AI lawsuits and litigation. It used to be squarely in the domain of the well financed but I can see the legal system absolutely inundated with AI generated legal slop.
[−] Myrmornis 34d ago
I read the first couple of posts in the series. The essay is full of criticism of LLMs, and in a couple of places the author distances himself, as if he himself isn't using them ("some people I respect tell me that...").

It's certainly worth discussing the fact that the entire industry is starting to outsource large amounts of our thinking and writing work to non-sentient statistical algorithms, but this discussion needs to honestly confront the extent to which they are successfully completing useful tasks today.

[−] christkv 34d ago
Meh I'm going to run my own agent to argue with their agents. Endless patience.
[−] jcgrillo 34d ago
At various previous companies I've worked at, product managers, executives, and engineers love bandying about the idea of "building for nontechnical users" as a way to make their widgets more "friendly". But it's just another way to otherize and denigrate "those people" who are the out group. They might, through a metacognitive defect or simple sociopathy, actually believe they're "doing good" by considering the poor creature's plight and making compassionate decisions on their behalf. But it's all crap. All they're actually doing is confirming their biases. LLMs are the divine nectar to these people, an enshittification accelerant par excellence.
[−] KronisLV 34d ago

> ML models will hurt innocent people.

Lots of blaming LLMs but I think the root cause lies elsewhere, I’m not even sure whether dismissing it as “capitalism” or “profit motives” would do it justice, because in general it feels more like the world that we live in lacks humanity.

Even in a capitalist world, a company could take a stance and decide not to purposefully screw people over, but in the world that we live in instead they look for ways to better screw over people and extract more money from them. It doesn’t matter whether your customer support is handled by someone from India, a crappy telephone tree or some voice model, when the incentive is the same - to do the bare minimum for customer “support” (in practice, just getting you to fuck off). Same for handling insurance claims and “dynamic pricing” of things - it doesn’t matter whether it’s some proprietary algorithm or just an LLM making crap up when the goal is to screw you over.

Blaming “AI” for all of this would be barking up the wrong tree (without that tech they’d just find other ways), though one can definitely acknowledge that this technology provides another convenient scapegoat, same as how you can lay employees off and just say cause it’s because of AI when in actuality it’s just greed and wanting to make your books look better.

[−] redsocksfan45 34d ago
[dead]
[−] semiinfinitely 34d ago
this guy will probably never stop yapping after having gotten just a little bit of attention on his original post
[−] agentultra 34d ago
To lie requires recognition of the truth and an intention to deceive. LLM’s don’t have such abilities. They are systems that generate plausible sequences of symbols based on training inputs, alignments, reinforcement, and inference. These systems don’t know or care what truth is and therefore cannot lie.

It’s already bad. I’m not looking forward to the future. These systems are terrible. It’s a future without people that they want for some reason. I’d rather deal with incompetent, tired, annoyed people than an LLM.

[−] Scholmo 34d ago
Don't agree with this.

The LLM, when it came out, was perfect as an interface between a system and a normal human.

So many people call customer support for issues they could in theory fix themselves. If that LLM system can understand me well enough, it's an okay interface.

In worst case you have to escalate anyway. My mum actually told me that she talked to some AI.

And yes, normal systems are also not correct often enough. With AI/LLMs, software will get cheaper, which should increase quality overall.

I don't think AI/LLMs will change anything in this case.

Relevant change will happen due to the fact that humans can be replaced by AI/LLMs. It was not even imaginable a few years back what a good AI system would even look like. Translators lost their jobs, basic artists lost their jobs. Small contracts for basic things are gone. The restaurant poster no one cares about? AI. The website translation for some small business? No one cares.