I feel like this has been general knowledge for the past 5 or so years, but the real question is "What do we do about it?". Personally, I put real effort into not spending time being outraged online, but this is a societal ill that's bigger than I am...
Shut down the behavior with regulations or shut down the companies. Meta and TikTok have no natural right to exist if they are a net negative to society.
Specifically, I believe Section 230 protections shouldn't apply to algorithmically promoted content. TikTok hosting my video isn't inherently an endorsement of what I'm saying, but proactively pushing that video to people is functionally equivalent, even if you want to quibble over dictionary definitions. These algorithms take these platforms from dumb content-agnostic pipes that deserve protections to editorial enterprises that should bear responsibility for what they promote.
There is a decent legal argument to be made that §230 doesn't immunize platforms for the speech of their algorithm, to the extent that said speech is different from the speech of the underlying content. (A simple, if absurd, example: if I ran a web forum and created a highlight page of all the defamatory comments people posted, I'd probably be liable for defamation.)
The problem of course is that it's difficult to disentangle the speech of algorithmic moderation from the speech of the content being moderated. And then there's the minor issue that the vast majority of things people complain about are just plain First Amendment-protected speech, so it's not like the §230 protections actually matter, as the content isn't illegal in the first place.
I don't think we even need to go that far. Just remove protection for paid advertisements. It's absurd that Meta cannot be held liable for the ads they promote when a newspaper can be held liable if they were to publish the same ad.
But isn't this difficult when the tech bosses are in cahoots with the country's bosses? And honestly, even if the leadership changes, I have a feeling the tech companies will naturally switch boats as well - which might be why the opposition doesn't paint them as villains much nowadays, to make sure they switch along.
Really nice to see someone else bringing this up. Algorithmic editorial decisions are still editorial decisions. I think ultimately search and other forms of selective content surfacing should never have been exempt. They were never carriers. I appreciate that this would make the web as we know it unusable. But I think failing to tackle this problem will also make the web unusable, and in a worse way.
Oddly enough, the TikTok referred to here was supposed to be shut down in the US. But the executive branch ignored the law until it could organize handing the company over to Larry Ellison instead. And these allegations date to when the company was fully under the control of ByteDance, not US-regulated entities at all.
Wouldn’t we need to shut down all the news outlets, all the Twitters and all the newspapers then? They might not be as far along the toxic spectrum as Meta/TikTok, but they are very close.
>> Meta and TikTok have no natural right to exist if they are a net negative to society.
Exactly. And when we are done with them, we will shut down Molson and Anheuser-Busch. Then we can go after the people who make selfie sticks. Then the company that owns that truck that cut me off last week. Basically, organizations I dislike should not be allowed to exist.
Regulating content that makes people enraged seems like a slippery slope towards regulating any kind of "unwanted" speech. I get regulating CSAM, calls for violence or really obvious bullying (serious ones, like "kill yourself" directed at a kid), but regulating algorithms that show rage bait leaves a lot of judgment to the regulators. Obviously I don't trust TikTok or Meta at all, but I don't trust the current or the future governments with this much power.
For example, some teen got radicalized by racist and sexist content. That's bad in my opinion, as I'm not a racist or a sexist. But should racist or sexist speech be censored or regulated? On what grounds? How do we know other unpopular (now or in the future) speech won't be censored or regulated in the future? Again, as much as I'm not a racist or sexist, I don't think the government should have a say in whether a company should be able to promote speech like "whites/blacks are X" or "men/women are Y". What's next? Should we regulate speech about religion (Christians/Muslims/atheists are Z) or ethics (anti-war people or vegans are Q) or politics or drugs or sex?
The current situation is shitty, but giving too much power to regulators will likely make it way shittier. If not now, in the future, since passed regulations are rarely removed.
Tax and heavily regulate online advertising. The root of the problem is that it is very, very lucrative to drive engagement and until you get rid of the monetary incentive, the problem will never go away.
It’s like asking how do you get people to stop drinking alcohol
As long as there are people who don’t acknowledge or care about the health effects it will exist. If that’s a plurality of your population then you have a fundamental population problem IF you are in the group who thinks it’s bad.
Aka every minority-majority split on every issue ever.
So the answer is: live in a society governed by science. Unfortunately none exist
I'd suggest something like banning algorithmic amplification - your feed is posts from people you follow and nothing else. But that's not what will happen. What will happen is there will be [1] vague laws about preventing vague "harm", written to give legal teeth to the Overton window. Not in those words, but companies that go against it will be mired in lawfare, while those that comply will be allowed to grow.
And if you complain, they'll motte-and-bailey you - you're not in favor of "harm", are you? We're not an authoritarian speech police, we only seek to protect people from "harm".
[1] Or rather, are - see https://en.wikipedia.org/wiki/Online_Safety_Act_2023
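To make the first proposal concrete, here's a minimal sketch of the two regimes. The Post fields, the following set, and the predicted_engagement score are hypothetical stand-ins, not any platform's actual API:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        author: str
        created_at: datetime
        predicted_engagement: float  # output of some ranking model (hypothetical)

    def chronological_feed(posts, following):
        # The "dumb pipe": only accounts you chose to follow, newest first.
        return sorted((p for p in posts if p.author in following),
                      key=lambda p: p.created_at, reverse=True)

    def amplified_feed(posts):
        # The status quo: anything goes, ranked by predicted engagement.
        # Outrage reliably scores high on exactly this metric.
        return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

The first function is a carrier; the second is making editorial calls about what a few billion people see.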
My IG feed is largely taken over by congressional members' videos, crazy $#!t the president (and his crew) says, and the Keystone Cops. And boy howdy is there a lot of rage-inducing behavior going on.
I feel more informed than if I was only listening to NPR.
That said, I stay away from anything that’s produced—sound track, too many cuts/edits, talking head commentary. I guess in this context, if I’m going to be driven to emotional anxiety, it’s going to be from something that happened or something someone said, and not the internet’s interpretation.
You can’t “produce content” that I will watch _as news_. It has to be in some real way happening (with some deference to Rashomon).
The people who were voted to power (across the globe, not just the US) to do something about it are stuck getting their dopamine kicks posting garbage on the same platforms.
It’s truly a terrible timeline we are in.
What do we do? We treat platforms with algorithmic news feeds as publishers, not platforms, in the Section 230 sense.
Think about it this way: imagine if you took a million random posts or videos. You would find a wide range of political views, conspiracy theories and so on. Whatever your position on any of those issues, you could find content pushing those views.
So if your algorithm selects and distributes content that fits your desired views and suppresses content that opposes your views, how are you different from a random publisher who posts content with those exact same views?
This is kind of like the "secret third thing" of Section 230 where you get all the protections of being a platform and all the flexibility of being a publisher and we need to close that loophole. Let platforms choose which one they are.
Another example: if I create a blog and write a post that accuses my local mayor of being a drug addict and a pedophile, I can be sued for defamation. I can try the journalism defense, but it won't shield me. Traditional media outlets are normally very careful about what they publish for this reason.
But what if I run Facebook or Twitter and one of my users says the exact same thing? Well I'm just a platform. I have a libel shield. But again, my algorithm can promote or suppress that claim. Even if I have processes to moderate that content, either by responding to a court order to take it down and/or allowing users to flag it and then take it down myself with human or AI moderation, the damage can't really be rolled back.
We've let tech companies get away with "the algorithm" being some kind of mysterious and neutral black box that just does stuff and we have no idea what. It's complete bullshit. Every behavior of such an algorithm reflects a choice made by people, period. And we need to start treating this as publishing.
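To make that concrete, here's a toy ranking function. The feature names, weights, and numbers are invented, but the structure is the point: each coefficient is a decision someone made, tested, and signed off on.

    candidate_posts = [
        {"id": 1, "predicted_likes": 90, "predicted_comments": 5,  "predicted_reshares": 10},
        {"id": 2, "predicted_likes": 20, "predicted_comments": 60, "predicted_reshares": 40},
    ]

    def score(post):
        # Weighting comments and reshares over likes systematically favors
        # whatever starts arguments. Nobody "intended" outrage; they chose it.
        return (2.0 * post["predicted_comments"]
                + 1.5 * post["predicted_reshares"]
                + 0.5 * post["predicted_likes"])

    feed = sorted(candidate_posts, key=score, reverse=True)
    print([p["id"] for p in feed])  # -> [2, 1]: the divisive post wins

Change the 2.0 to 0.0 and millions of people see something different. That's an editorial decision, whoever signs off on it.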
"Harmful content" translation: What the government do not like. What ISRAEL does not like. Another call for more censorship from a force financed state propaganda outlet nobody with a brain takes seriously. How original.
Is this unavoidable? I mean it does generate clicks and views and user engagement so if one platform is doing it, doesn't that automatically mean that the other has to do it? Otherwise they will continuously lose market share.
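In game-theory terms, that's a prisoner's dilemma. A toy payoff table (the market-share numbers are invented) shows why "amplify outrage" dominates no matter what the rival does:

    payoffs = {  # (our_strategy, their_strategy) -> our market share, %
        ("calm", "calm"): 50, ("calm", "rage"): 30,
        ("rage", "calm"): 70, ("rage", "rage"): 50,
    }
    for theirs in ("calm", "rage"):
        best = max(("calm", "rage"), key=lambda ours: payoffs[(ours, theirs)])
        print(f"if the rival plays {theirs}, best response: {best}")  # "rage", both times

Both platforms end up at rage/rage with the same share they would have had at calm/calm, just with a worse society - which is why no platform can unilaterally stop.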
It's the same story since at least 2012. It is well documented in the book "The Chaos Machine" by Max Fisher.
Facebook employees, journalists and psychologists have studied the phenomenon and Facebook's (as well as Youtube's) response is always the typical "We have done something" to calm the protest, but it's never really the case. It's a constant game of deflecting, delaying, diminishing, denying.
Given how TikTok "trends" seem to consist mostly of "get teenagers to do stuff that causes huge expenses for US society":
* "eat tide pods"
* "stick a fork in electrical sockets in your school"
* "destroy your school's shit" aka "Devious Licks" - bathrooms, chromebooks (jamming stuff into the charging ports to start fires...)
* "drink a shitload of Benadryl to see what happens"
* "steal a kia/hyundai and drive 80mph, run from the cops, etc"
...convince me that this is not a purposeful attack on US society by the CCP?
The idea that there is a certain category of content that is harmful, and that certain people have the authority to declare what is harmful, is extremely dangerous - it's practically how every censorship system ever has been built.
The feedback loop for this moral hazard is slow but implacable. You can treat the zeitgeist as a dumping ground only for so long; eventually you get so big that you can no longer treat it like an idealized infinite substance.
For a long time now, whistleblowers haven't been needed to say the obvious and self-evident about online media. A thoughtful user can realize it instantly. From the era of black-and-white TV programs until now, the content has had the same goal. I believe that after enough iterations of controlling users, the delivery will become regulated like drugs. It's already starting for kids.
As someone who uses IG a lot, I have found this to be overwhelmingly true. Very often when I stumble upon a controversial video, the very top comment is a ratioed hot take on the topic, as if Meta purposely put the comment at the top to ruffle feathers. On top of that, when I find controversial topics (like the moon landing), a large majority of comments lean toward one extreme opinion, with all the differing opinions pushed to the very, very bottom of the comment section.
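For what it's worth, a purely engagement-based comment sort would produce exactly this without anyone "purposely" pinning anything. A toy model, with the scoring formula and numbers entirely invented:

    comments = [
        {"text": "measured take",    "likes": 50, "dislikes": 2,   "replies": 3},
        {"text": "ratioed hot take", "likes": 40, "dislikes": 200, "replies": 90},
    ]

    def engagement(c):
        # Total interaction, sign ignored: anger counts the same as approval,
        # and reply-bait counts double.
        return c["likes"] + c["dislikes"] + 2 * c["replies"]

    comments.sort(key=engagement, reverse=True)
    print(comments[0]["text"])  # -> "ratioed hot take"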
Throw away your 'smartphone' and stop using anti-social media. It is killing society, and only making the Billionaires more powerful. They are evil and will do anything to stay in power.
In my experience there’s a strong “banality of evil” dynamic at work.
Some poor schlub ML Eng has shipped a feature that wins an A/B test. They’re pushing to get promoted. Their management wants to show they’re hitting their KPIs.
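And the bar that feature has to clear is morally mute. Something like this minimal sketch (numbers invented) is the whole test - note there is no column for anger, misinformation, or downstream harm:

    from math import sqrt

    def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
        # Standard two-proportion z-test on an engagement metric.
        p_a, p_b = clicks_a / n_a, clicks_b / n_b
        p = (clicks_a + clicks_b) / (n_a + n_b)
        se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
        return (p_b - p_a) / se

    # Variant B ranks rage-bait a little higher. Engagement up 0.4pp.
    z = two_proportion_z(clicks_a=10_000, n_a=200_000,
                         clicks_b=10_800, n_b=200_000)
    print(f"z = {z:.1f}")  # ~5.7: statistically significant, ship it, promo secured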
An engine of destruction filled with well meaning people just hoping to advance in their careers.
You might say, it’s ultimately the designers of the incentives that matter. Even there, the leadership will change. Inevitably the needs of the capitalist machine take over.
Account --> Delete
What caused Gen Z to drink less than millennials? Maybe Gen Z has the answer.
> "What do we do about it?"
I'd suggest something like banning algorithmic amplification - your feed is posts of people you follow and nothing else. But that's not what will happen. What will happen is there will be [1] vague laws about preventing vague "harm", written to give legal teeth to the Overton window. Not in those words, but companies that would go against it will be mired in lawfare, while those that comply will be allowed to grow.
And if you complain, they'll motte-and-bailey you - you're not in favor of "harm", are you? We're not an authoritarian speech police, we only seek to protect people from "harm".
[1] Or rather, are - see https://en.wikipedia.org/wiki/Online_Safety_Act_2023
I know https://www.reset.tech/ does really good work in this space, but are there others, and who is funding them?
Most of them are clickbait anyway.
* "eat tide pods" * "stick a fork in electrical sockets in your school" * "destroy your school's shit" aka "Devious Licks" - bathrooms, chromebooks (jamming stuff into the charging ports to start fires...) * "drink a shitload of Benadryl to see what happens" * "steal a kia/hyundai and drive 80mph, run from the cops, etc"
...convince me that this is not a purposeful attack on US society by the CCP?
Not saying “well duh”, I just think at this point I have to ask: “are we going to do anything about it?”
We’ve known about the financial incentives to promote anger and outrage online for at least a decade now. So what are we going to do about it?