We have a massive poisoning of the commons catastrophe coming, driven by further authoritarian government overreach and control. I've seen no one working on this, and in fact most people on HN seem to be working on ways to further exacerbate this problem. I don't just mean half solutions like tor or social protocols that let you in and out of walled gardens.
There's still a tiny window of opportunity for engineers to come up with or design technical safeguards, but eventually this problem will move past the realm of what's easily solvable, out of our hands, and into policy makers' hands. A big part of me feels like that window has already slammed shut.
It feels like "Autonomous Coding Agents" are being astroturfed on the daily on HN. The same arguments and tropes are echoing through every thread.
It's hard to distinguish who's a bot, who's a narrative pusher and who's an enthusiast. Which is exactly what you'd want from an astroturfing campaign. There's a clear benefit: people in the industry are reading this, and in doing so they're granting mindshare.
There's one way that can prevent inauthentic support campaigns - personal key signature. But judging by how afraid people, especially in the US, need to be of their government surveilling them, this isn't going to catch on.
What's interesting about that is that indeed, there are a lot of people pushing the 'autonomous coding agents are great' narrative but there is one crucial bit missing: they absolutely never show their code.
Yes. I’ve also been asking every engineer I know what they’re doing with AI and there’s a lot of people doing a lot of different things, but it’s a deep mismatch with the online rhetoric.
This phenomenon appears to be incrementally coming for every single topic and public platform.
I feel the same way. Most people I've talked to are using AI for better search. I don't know anyone using it heavily to do their main job (writing code). I think a lot of the accounts bragging about how much they are doing with AI are bots.
I'm even shocked when I hear people are using it for better search. I've found it to be terrible for search, constantly fabricating things. It's distilled everything that is bad about new Google, which prefers popular results to accurate ones - but with actual fabrication, it becomes infinitely worse.
I literally ask it to look something up and then, immediately afterwards (before reading the long-winded result), ask it whether the results were real or fabricated. That's just how the cost-benefit analysis works out now. And I only learned to do this after reading the results a ton of times, getting suspicious of a few, doing web searches to verify them, not finding them, and then coming back to ask if they were real.
"Sorry! It's absolutely fair that you called me out on that... It's important that you hold me to a high standard... You're absolutely right..."
I'm finding it valuable for compressing all of the docs in the world, so I don't have to look up what a function does or how to accomplish something in some framework or CLI. I find it capable of writing code if I move an inch at a time; build copious, verbose debugging output that I feed back into it every time it screws up; and, when it gets stuck in a stupid loop, just debug by hand instead of wasting hours trying to get it to see something that it doesn't want to see.
There's a lot of money wrapped up in people thinking a certain way: AI is useful. Work should be done in a corporate office. The American Dream is attainable. Recession is not coming. War is good. The world is dangerous. Others want to harm you. Lots of investment in astroturfing these themes because a population who believes them will more easily be separated from their money.
>It feels like "Autonomous Coding Agents" are being astroturfed on the daily on HN. The same arguments and tropes are echoing through every thread.
Isn't this exactly what you'd expect in a connected world? The best arguments from both sides proliferate, thereby causing "the same arguments and tropes echoing through every thread".
> Isn't this exactly what you'd expect in a connected world?
I would expect a figurative war for human attention. With so much information being available, everyone would try to make people focus on what they want to communicate.
> The best arguments
Some of these tropes and arguments aren't really the best. There's a lot of rhetorical gotchas, e.g. "that's exactly what I'd expect from a human" when an automated solution isn't up to par.
> from both sides
The only real "side" is the one actively pushing for something. Everyone else isn't a camp - they're just random people.
>I would expect a figurative war for human attention. With so much information being available, everyone would try to make people focus on what they want to communicate.
How does this relate to online commenting? Are you expecting the "figurative war for human attention" to make comments more diverse?
>Some of these tropes and arguments aren't really the best. There's a lot of rhetorical gotchas, e.g. "that's exactly what I'd expect from a human" when an automated solution isn't up to par.
I think you're overestimating the epistemic rigor of the average internet commenter, eternal September, etc.
>The only real "side" is the one actively pushing for something
Are you implying the "astroturfing" is only on one side? You might just be experiencing motivated reasoning and/or confirmation bias. Most of the astroturfing behavior can be applied to the anti-AI side as well, e.g. people complaining about electricity or water consumption in every thread about the impacts of AI, or about "AI slop".
It feels the same way on GitHub trending. I used to check it frequently to see what the hottest new tech was and stay up to date. Now it's oversaturated by whatever the newest AI bubble is. It also doesn't help that MCP-enabled products like OpenClaw star their own repos and artificially inflate their perceived value.
I hate to sound like I'm turfing for cryptocurrencies, but isn't there an identity solution there that the crypto nerds solved to keep identity verification anonymous and surveillance-proof?
Need to double check what is available, though I feel like that angle could work.
I've been wondering also if a simple lie & deception detection type system could be a useful angle. It's complicated in practice, though human intuition arguably figured this out millennia ago - I can't tell you how many times my body has picked up on someone's toxic, negative vibe by feeling alone. I think we probably understand this better than we realize, and could represent it in the computer space with analysis of signals and some follow-on questions. Hope I'm not too naive here.
To quote The Cable Guy, there’s only one answer, someone has to kill the babysitter (tv, social media, Big Tech). It’s hard to kill the babysitter when everyone in Congress is invested balls deep in the babysitter. Eisenhower warned of the coming overreaching powers of the Military Industrial Complex, but no one is attacking the Government Stock Market Tech Complex (GSMTC).
If you can point me at someone that would fund such projects (not VCs), I'd be happy to apply. Projects like NLnet aren't keen on funding larger-scope projects - at least not if you don't have thought-leader influencer clout.
I agree that it feels like the tiny window of opportunity hasn't quite shut yet, and it's a problem space I know I should take more interest in. What do you see as the viable technical directions? Something along the lines of what Altman was trying to do with his Orb [0]? Something along the lines of the C2PA's Content Credentials [1]?
There were many disinformation research organizations in the US, including at major institutions such as Harvard and Stanford, that were forced to close by conservatives through lawfare or apparently through donor pressure.
(It's interesting that conservatives saw it as a partisan cause.)
strong agree, I feel like it poisons the fabric of society somehow when everything you interact with is fake, or even just has a good chance of being fake, on top of the also-shitty fact that it is often trying to influence you.
their landing page stops short of saying that Doublespeed would be "a good fit for your political campaign." I'd prefer fighting an AI-powered drone over becoming a victim of a "Dead Internet-aaS" startup. at least, flying lawnmowers are honest
I recently did some looking into how public perception campaigns work.
I found it amazing that I could not find any organisation that tracks these campaigns. These are often very well funded and those funds go to people.
Part of the problem is that a successful public opinion campaign results in something most people believe; we probably only get to see the failures. Challenging something that is widely held is not well received, whether you are right or wrong.
Some things I did find out: fake news stories don't change people's opinions very much. What they do is enable media to shape narratives, because people will reject genuine stories outside the narrative once they know that fake news stories exist. Fake news exists to be seen as fake, to establish that the things you disagree with could also be fake.
There are companies that specialise in this.
Reputation management companies might tell you who their clients are or what they do, but never at the same time. I suspect the best ones do neither.
> Recent events in the world have highlighted just how influential social media can be, both in a national context and internationally. To list a few examples: platforms like Twitter and Facebook played a prominent role in the events surrounding the recent US presidential elections; social media and messaging platforms made possible the many decentralized mass protests that have popped up around the globe, from the pro-democracy movements in Hong Kong, Thailand and Belarus to the Black Lives Matter protests in the United States; and of course, the whole of the internet, for better or worse, played a role in shaping how the world responds to the COVID-19 pandemic. But with great power comes the great potential for manipulation and misuse.
I think everyone would agree with this but is there any formal evidence of how Twitter and TikTok affect elections or legislation?
My browser highlights a few hundred accounts. For HN and other comment-oriented sites, local userscripts are supported by browser plugins, including on mobile Safari. These can highlight known usernames and implement blocklists. Most LLMs can generate such a userscript on demand for non-obfuscated sites, including a userid list for manual editing.
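A minimal sketch of that userscript approach (the usernames and styling here are made-up placeholders; `a.hnuser` is the class HN uses for commenter links):

```javascript
// ==UserScript==
// @name   HN account highlighter (sketch)
// @match  https://news.ycombinator.com/*
// ==/UserScript==

// Manually curated list of userids to flag (placeholders, not real accounts).
const FLAGGED = new Set(['example_shill_1', 'example_shill_2']);

// Pure matching logic, kept separate so it can be tested outside a browser.
function classify(name) {
  return FLAGGED.has(name) ? 'flagged' : 'normal';
}

// In a userscript manager this runs on page load and marks flagged accounts.
if (typeof document !== 'undefined') {
  for (const link of document.querySelectorAll('a.hnuser')) {
    if (classify(link.textContent) === 'flagged') {
      link.style.background = 'gold'; // highlight; a blocklist could hide instead
    }
  }
}
```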
This is notorious on platforms like Reddit, with people jumping in to suggest no-name products in response to questions. It doesn't help that Reddit allows private profiles, letting astroturfers get away with it. Another case is LLM astroturfing: we're bombarded with doomerism and obituaries for programming. Some of these opinions are subtle, short comments - the most dangerous kind, because little by little they jab at you - while the most conspicuous ones are easy to identify. And then there's political astroturfing. In my country, smokescreens are the de facto tool, but the amount of high-quality edits and memes that came out about the Epstein files is suspicious, essentially cementing him as a "meme" rather than a monster who abused minors.
> There's one way that can prevent inauthentic support campaigns - personal key signature.
You would be surprised at how cheaply opinions can be purchased, especially globally.
[0] e.g. https://www.businessinsider.com/sam-altman-tools-for-humanit... and the feature piece at https://time.com/7288387/sam-altman-orb-tools-for-humanity/
[1] https://contentcredentials.org and https://c2pa.org
> I've seen no one working on this, and in fact most people on HN seem to be working on ways to further exacerbate this problem.
It's against the HN guidelines to insinuate that astroturfing happens on HN.