Meta and TikTok let harmful content rise to drive engagement, say whistleblowers (bbc.com)

by 1vuio0pswjnm7 194 comments 331 points

194 comments

[−] bigfishrunning 60d ago
I feel like this has been general knowledge for the past 5 or so years, but the real question is "What do we do about it?". Personally, I put real effort into not spending time being outraged online, but this is a societal ill that's bigger than I am...
[−] munk-a 60d ago
"What do we do about it?"

Shut down the behavior with regulations or shut down the companies. Meta and TikTok have no natural right to exist if they are a net negative to society.

[−] slg 60d ago
Specifically, I believe Section 230 protections shouldn't apply to algorithmically promoted content. TikTok hosting my video isn't inherently an endorsement of what I'm saying, but proactively pushing that video to people is functionally equivalent even if you want to quibble over dictionary definitions. These algorithms take these platforms from dumb content-agnostic pipes that deserve protections to editorial enterprises that should bear responsibility for what they promote.
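To make that point concrete, here is a toy sketch (entirely hypothetical code, not any platform's real ranker): the moment a feed is sorted by predicted engagement, the choice of signals and weights is itself an editorial policy.

```python
# Toy illustration: ranking purely by predicted engagement is an
# editorial decision, because the weights encode what the platform
# chooses to reward. All names and weights here are made up.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    predicted_watch_time: float      # seconds
    predicted_shares: float
    predicted_angry_reactions: float


def engagement_score(p: Post) -> float:
    # Hypothetical weights; any choice here is a content-selection
    # policy, however "neutral" the math looks.
    return (0.5 * p.predicted_watch_time
            + 2.0 * p.predicted_shares
            + 1.5 * p.predicted_angry_reactions)


def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first.
    return sorted(posts, key=engagement_score, reverse=True)
```

If angry reactions correlate with watch time and shares, rage bait floats to the top without anyone explicitly asking for it; the "dumb pipe" framing stops fitting.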
[−] jcranmer 60d ago
There is a decent legal argument to be made that §230 doesn't immunize platforms for the speech of their algorithm, to the extent that said speech is different from the speech of the underlying content. (A simple, if absurd, example of this would be if I ran a web forum and then created a highlight page of all of the defamatory comments people posted, then I'm probably liable for defamation.)

The problem of course is that it's difficult to disentangle the speech of algorithmic moderation from the speech of the content being moderated. And the minor issue that the vast majority of things people complain about is just plain First Amendment-protected speech, so it's not like the §230 protections actually matter as the content isn't illegal in the first place.

[−] robhlt 60d ago
I don't think we even need to go that far. Just remove protection for paid advertisements. It's absurd that Meta cannot be held liable for the ads it promotes when a newspaper can be held liable for publishing the same ad.
[−] soco 59d ago
But isn't this difficult when the tech bosses are in cahoots with the country bosses? And honestly, even if the leadership changes, I have a feeling the tech companies will naturally switch boats as well - which might be a reason the opposition doesn't attack them much nowadays, to make sure they switch along.
[−] Aurornis 60d ago
How would you square that with a site like Hacker News, which has algorithms for showing user-submitted links and user-generated comments?
[−] deeponey 60d ago
Really nice to see someone else bringing this up. Algorithmic editorial decisions are still editorial decisions. I think ultimately search and other forms of selective content surfacing should not have ever been exempt. They were never carriers. I appreciate that this would make the web as we know it unusable. I think failing to tackle this problem will also make the web unusable, and in a worse way.
[−] philipallstar 59d ago
This seems the same as news organisations choosing which news to report on, but driven by user behaviour rather than the org's employees themselves.
[−] zzzeek 60d ago
oddly enough the TikTok referred to here was to be shut down in the US. But then the executive branch ignored the law while it could organize handing the company over to Larry Ellison instead. But these allegations date to when the company was fully under the control of ByteDance, and not US-regulated entities at all.
[−] cryptoegorophy 60d ago
Wouldn’t we need to shut down all news outlets, all the twitters and all the newspapers then? They might not be as far along the toxic spectrum as Meta/TikTok, but they are very close.
[−] ronnier 60d ago
Then we'll just use the Chinese apps. Or do you plan on shutting down our access to Chinese apps too?
[−] rkagerer 60d ago
"What do we do about it"

Account --> Delete

[−] sandworm101 60d ago

>> Meta and TikTok have no natural right to exist if they are a net negative to society.

Exactly. And when we are done with them we will shut down Molson and Anheuser-Busch. Then we can go after the people who make selfie sticks. Then the company that owns that truck that cut me off last week. Basically, organizations I dislike should not be allowed to exist.

[−] diacritical 60d ago
Regulating content that makes people enraged seems like a slippery slope towards regulating any kind of "unwanted" speech. I get regulating CSAM, calls for violence or really obvious bullying (serious ones like "kill yourself" to a kid), but regulating algorithms that show rage bait leaves a lot of judgement to the regulators. Obviously I don't trust TikTok or Meta at all, but I don't trust the current or the future governments with this much power.

For example, some teen got radicalized with racist and sexist content. That's bad in my opinion, as I'm not a racist or a sexist. But should racist or sexist speech be censored or regulated? On what grounds? How do we know other unpopular (now or in the future) speech won't be censored or regulated in the future? Again, as much as I'm not a racist or sexist, I don't think the government should have a say in whether a company should be able to promote speech like "whites/blacks are X" or "men/women are Y". What's next? Should we regulate speech about religion (Christians/Muslims/atheists are Z) or ethics (anti-war people or vegans are Q) or politics or drugs or sex?

The current situation is shitty, but giving too much power to regulators will likely make it way shittier. If not now, in the future, since passed regulations are rarely removed.

[−] aaron695 60d ago
[dead]
[−] bdangubic 60d ago
regulation will never happen because these are instruments to control the masses
[−] ThrowawayR2 60d ago
Tax and heavily regulate online advertising. The root of the problem is that it is very, very lucrative to drive engagement and until you get rid of the monetary incentive, the problem will never go away.
[−] cj 60d ago
"Make the drug less good" likely isn't the answer. Nor is banning it.

What caused Gen Z to drink less than millennials? Maybe Gen Z has the answer.

[−] AndrewKemendo 60d ago
It’s like asking how do you get people to stop drinking alcohol

As long as there are people who don’t acknowledge or care about the health effects it will exist. If that’s a plurality of your population then you have a fundamental population problem IF you are in the group who thinks it’s bad.

Aka every minority-majority split on every issue ever.

So the answer is: live in a society governed by science. Unfortunately none exist

[−] like_any_other 60d ago

> "What do we do about it?"

I'd suggest something like banning algorithmic amplification - your feed is posts of people you follow and nothing else. But that's not what will happen. What will happen is there will be [1] vague laws about preventing vague "harm", written to give legal teeth to the Overton window. Not in those words, but companies that would go against it will be mired in lawfare, while those that comply will be allowed to grow.

And if you complain, they'll motte-and-bailey you - you're not in favor of "harm", are you? We're not an authoritarian speech police, we only seek to protect people from "harm".

[1] Or rather, are - see https://en.wikipedia.org/wiki/Online_Safety_Act_2023
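The "feed is posts of people you follow and nothing else" proposal is simple enough to sketch directly (illustrative code with made-up names, not a real implementation):

```python
# Minimal sketch of a follows-only, reverse-chronological feed:
# no ranking model, no engagement signal anywhere in the pipeline.
from typing import NamedTuple


class Post(NamedTuple):
    author: str
    timestamp: float  # e.g. Unix epoch seconds
    text: str


def build_feed(posts: list[Post], following: set[str]) -> list[Post]:
    # Keep only posts from accounts the user follows, newest first.
    return sorted((p for p in posts if p.author in following),
                  key=lambda p: p.timestamp, reverse=True)
```

The point of the proposal is exactly what's absent here: there is no score to game, so there is nothing for rage bait to optimize against.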

[−] xtiansimon 59d ago
My IG feed is largely taken over by congressional members' videos, crazy $#!t the president (and his crew) says, and the Keystone Cops. And boy howdy is there a lot of rage inducing behavior going on.

I feel more informed than if I was only listening to NPR.

That said, I stay away from anything that’s produced—sound track, too many cuts/edits, talking head commentary. I guess in this context, if I’m going to be driven to emotional anxiety, it’s going to be from something that happened or something someone said, and not the internet’s interpretation.

You can’t “produce content” that I will watch _as news_. It has to be in some real way happening (with some deference to Rashomon).

[−] techpression 60d ago
The people who were voted to power (across the globe, not just the US) to do something about it are stuck getting their dopamine kicks posting garbage on the same platforms. It’s truly a terrible timeline we are in.
[−] toomuchtodo 60d ago
Regulate it. Laws, consequences, etc.
[−] yoyohello13 60d ago
I’m going to bet we do nothing and continue to complain instead.
[−] jmyeet 60d ago
What do we do? We treat platforms with algorithmic news feeds as publishers not platforms in the Section 230 sense.

Think about it this way: imagine if you took a million random posts or videos. You would find a wide range of political views, conspiracy theories and so on. Whatever your position on any of those issues, you could find content pushing those views.

So if your algorithm selects and distributes content that fits your desired views and suppresses content that opposes your views, how are you different from a random publisher who posts content with those exact same views?

This is kind of like the "secret third thing" of Section 230 where you get all the protections of being a platform and all the flexibility of being a publisher and we need to close that loophole. Let platforms choose which one they are.

Another example: if I create a blog and write a post that accuses my local mayor of being a drug addict and a pedophile, I can be sued for defamation. You can try the journalism defense but it won't shield you from defamation. Traditional media outlets are normally very careful about what they publish for this reason.

But what if I run Facebook or Twitter and one of my users says the exact same thing? Well I'm just a platform. I have a libel shield. But again, my algorithm can promote or suppress that claim. Even if I have processes to moderate that content, either by responding to a court order to take it down and/or allowing users to flag it and then take it down myself with human or AI moderation, the damage can't really be rolled back.

We've let tech companies get away with "the algorithm" being some kind of mysterious and neutral black box that just does stuff and we have no idea what. It's complete bullshit. Every behavior of such an algorithm reflects a choice made by people, period. And we need to start treating this as publishing.

[−] wotsdat 60d ago
[dead]
[−] noAnswer 60d ago
[flagged]
[−] b65e8bee43c2ed0 60d ago
[flagged]
[−] ray023 59d ago
"Harmful content" translation: What the government do not like. What ISRAEL does not like. Another call for more censorship from a force financed state propaganda outlet nobody with a brain takes seriously. How original.
[−] nelsonfigueroa 60d ago
I can't say I'm surprised and I think most people wouldn't be surprised either. But it's always good to have evidence.
[−] hmate9 60d ago
Is this unavoidable? I mean it does generate clicks and views and user engagement, so if one platform is doing it, doesn't that automatically mean the others have to do it? Otherwise they will continuously lose market share.
[−] cdrnsf 60d ago
Of course they did. As long as they're legally allowed to do so and profit from doing so they will continue.
[−] nikisweeting 60d ago
Does anyone know of watchdog agencies that do the research to document and litigate harmful algorithmic trends?

I know https://www.reset.tech/ does really good work in this space, but are there others, and who is funding them?

[−] Otterly99 59d ago
It's the same story since at least 2012. It is well documented in the book "The Chaos Machine" by Max Fisher.

Facebook employees, journalists and psychologists have studied the phenomenon and Facebook's (as well as Youtube's) response is always the typical "We have done something" to calm the protest, but it's never really the case. It's a constant game of deflecting, delaying, diminishing, denying.

[−] leftytak 60d ago
As long as the general public respond to sensationalism, what’s stopping the social media platforms from exploiting.

Most of them are click baits anyways.

[−] ptek 59d ago
If you make 20 billion and the fine is 0.2 billion… I don’t think they care about their users’ mental health.
[−] KennyBlanken 60d ago
Given how TikTok "trends" seem to consist mostly of "get teenagers to do stuff that causes huge expenses for US society":

* "eat tide pods" * "stick a fork in electrical sockets in your school" * "destroy your school's shit" aka "Devious Licks" - bathrooms, chromebooks (jamming stuff into the charging ports to start fires...) * "drink a shitload of Benadryl to see what happens" * "steal a kia/hyundai and drive 80mph, run from the cops, etc"

...convince me that this is not a purposeful attack on US society by the CCP?

[−] ETH_start 59d ago
The idea that there is a certain category of content that is harmful, and that there are certain people who have the authority to declare what is harmful, is extremely dangerous; it is practically how every censorship system ever has been built.
[−] simpaticoder 60d ago
The feedback loop for this moral hazard is slow but implacable. You can treat the zeitgeist as a dumping ground for so long, until you get so big, that you can no longer treat it like an idealized infinite substance.
[−] Forgeties79 60d ago
I remember The Social Dilemma’s entire premise was basically this headline minus TikTok, and that came out what? 7 or 8 years ago?

Not saying “well duh” I just think at this point I have to ask “are we going to do anything about it?”

We’ve known about the financial incentives to promote anger and outrage online for at least a decade now. So what are we going to do about it?

[−] charcircuit 60d ago
British people complaining about free speech and trying to censor the internet. America needs to keep standing up to British censorship interests.
[−] tempodox 59d ago
Sadly, nothing new. Why do they do it? Because they can. That regulators let companies operate this way is a massive failure.
[−] 1vuio0pswjnm7 60d ago
Someone changed the title and added a typo
[−] tsoukase 59d ago
For a long time now, whistleblowers haven't been needed to say the obvious and self-evident about online media. A thoughtful user can realize it instantly. From the era of black-and-white TV programs until now, the content has had the same goal. I believe that after enough iterations of user control, the delivery will become regulated like drugs. Now it's starting for kids.
[−] khernandezrt 58d ago
As someone who uses IG a lot, I have found this to be overwhelmingly true. Very often when I stumble upon a controversial video, the very top comment is a ratioed hot take on the topic, as if Meta purposely put the comment at the top to ruffle feathers. On top of that, when I find controversial topics (like the moon landing), a large majority of comments lean toward one extreme opinion, with all the differing opinions pushed to the very, very far bottom of the comment section.
[−] webdoodle 60d ago
Throw away your 'smartphone' and stop using anti-social media. It is killing society, and only making the Billionaires more powerful. They are evil and will do anything to stay in power.
[−] softwaredoug 60d ago
In my experience there’s a strong “banality of evil” that happens.

Some poor schlub ML Eng has shipped a feature that wins an A/B test. They’re pushing to get promoted. Their management wants to show they’re hitting their KPIs.

An engine of destruction filled with well meaning people just hoping to advance in their careers.

You might say, it’s ultimately the designers of the incentives that matter. Even there, the leadership will change. Inevitably the needs of the capitalist machine take over.
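The "banality of evil" dynamic above can be sketched in a few lines (hypothetical numbers and metric names, not any company's real dashboard): an A/B test that tracks only engagement will declare a winner even when an unmeasured harm moves the wrong way.

```python
# Toy sketch: the launch decision looks at a single tracked KPI.
# Anything not measured simply cannot lose the test.

def ab_winner(control: dict, variant: dict, metric: str) -> str:
    """Pick the arm with the higher value of the one tracked metric."""
    return "variant" if variant[metric] > control[metric] else "control"


# Hypothetical experiment results. Distress is logged somewhere,
# but it is not the decision metric.
control = {"engagement": 100.0, "user_reported_distress": 5.0}
variant = {"engagement": 104.0, "user_reported_distress": 9.0}

decision = ab_winner(control, variant, metric="engagement")
```

No individual engineer chose "increase distress"; they chose "ship the arm with +4% engagement," which is the whole point about incentive design.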