The Cognitive Dark Forest (ryelang.org)

by kaycebasques 275 comments 575 points

[−] bsza 47d ago
The best analogy I can think of (quite similar to this one) is that the internet is low Earth orbit and AI is the Kessler syndrome. We abandon the place not to hide ourselves, but because it is saturated with garbage, and anything you try to put up there will only result in even more garbage being generated, without any positive effect.

The ideal solution would be to remove the garbage, but right now we can't even detect it, let alone figure out a way to get rid of it. Besides, it's a zero-sum game: why bother cleaning up when you can just effortlessly pump out more garbage in hopes that some of it will stay in orbit long enough to benefit you?

[−] Barrin92 47d ago
I don't buy the analogy. The problem with Kessler syndrome is that low Earth orbit is physically crowded; you run into collisions. I don't care about the garbage. I don't care about the AI era. I've been writing code in Emacs for 20 years, and I'll be writing code in Emacs in 20 years. Every open source project I contribute to still looks the same, because all these AI people do, like the blockchain people before them, is make new stuff up in their own incestuous tupperware-salesman ecosystems.

I do pity the bug bounty people who rely on goodwill in their programs given that everything with a financial incentive is vulnerable. But otherwise the great thing about digital spaces is that there is, for practical purposes, unlimited space.

Every day there's another "how do you deal with the AI apocalypse" article; I just ignore them.

[−] chongli 47d ago
I think by "internet" they mean search engine results pages. If you restrict yourself to short, common queries and only look at the top 10 results on the page, then the space really is very limited. If all those top 10s for common queries start to get crowded out with AI slop, then people are going to start abandoning search.
[−] bsza 47d ago
Well, if you open-source anything these days and it does make it big, you can be prepared for a flood of low-effort slop PRs that you must either review for free or stop accepting external contributions altogether, making it effectively closed-source. You can't choose to ignore the garbage, it will collide with your stuff, unless your stuff is small enough to avoid collisions (in which case no one will see it).
[−] Ygg2 47d ago

> I've been writing code in Emacs for 20 years, I'll be writing code in Emacs in 20 years

Bold assumption. On what will you run Emacs if the average PC costs $12,000? Yes, even a Raspberry Pi. It's not called the war on general computing for nothing.

If you say the cloud, that will be cut up and reused by the next AI crawler.

[−] ohelm 47d ago
I would suffocate it. Know the greedy snake idiom? A snake is so hungry and greedy that it suffocates on its prey?

Best you can do is to spread all of the goods it provides, as it is too greedy to not devour them itself. It will consume them and suffocate slowly.

[−] bodegajed 47d ago
This is why, when I'm researching a solution (one an LLM cannot figure out), I now go to GitHub but often check whether the project was created before 2022, due to AI slop concerns.
[−] middayc 47d ago
This is interesting.

When I read it for the second time, trying to understand it: maybe an even better match for the low-orbit flying garbage would be "enshittification"? As time goes on, more and more garbage is produced, and we have no clear way, or specifically motivated entity, to start removing it, so it just grows.

[−] ozozozd 47d ago
I guess not many people know but app templates for Uber, AirBnb etc. have been around for years now. You don’t even have to prompt. It’s sitting on the shelf, complete.

“Execution is hard” was never about the code part.

Up until 2 years ago I was an engineer/entrepreneur. I could build anything. The other stuff, selling and supporting (the execution), was hard.

LLMs made building some of the things I could build faster/easier, others not so much.

Well, the other stuff is still pretty hard. Maybe harder because there is a tonne of spam.

So feel free to share your ideas. Everyone’s gonna think they’re LLM generated anyways.

[−] scottlawson 47d ago
The thesis is that in the past it was safe to share ideas and projects because the execution was hard, and that now things have changed because of AI. It's an interesting idea, but I wonder if it is really true.

It certainly seems true that for small projects and relatively narrow scoped things that AI can replicate them easily. I'm thinking specifically about blog posts where people share their first steps and simple programs as they learn something new, like "here is how I set up a flask website", "here is how I trained a neural network on MNIST".

But if AI is empowering people to take on more complex projects, perhaps it takes the same amount of time to replicate the execution of a more advanced project?

In other words, maybe in the past, it would take me 10 hours to do a "small" project, which today I could do in 1 hour with the assistance of AI.

And now, with the assistance of AI, I can go much farther in 10 hours and deliver a more complex project. But that means that someone else trying to replicate this execution is still going to need around 10 hours to replicate it.

Basically, I'm agreeing that AI reduces the barrier to replicating the execution of another person's project, but at the same time, it lets us make more complex projects that are harder to replicate. So a basic SaaS CRUD app is trivial now, but a multi-disciplinary, domain-specific app that integrates multiple systems is still going to be hard to replicate.
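The time arithmetic in this comment can be sketched with a tiny model. All numbers here (the 10x speedup, the hour counts) are illustrative assumptions from the comment, not data:

```python
# Illustrative model: AI gives everyone the same constant speedup factor k.
# A project whose pre-AI effort is E hours takes E / k hours with AI,
# both for the original builder and for anyone replicating it.
k = 10  # assumed AI speedup (illustrative)

small_project_pre_ai = 10                          # hours without AI
small_project_with_ai = small_project_pre_ai / k   # now ~1 hour for anyone

# If I still invest 10 AI-assisted hours, I ship a project whose
# pre-AI effort would have been 10 * k = 100 hours...
budget_with_ai = 10
effective_pre_ai_effort = budget_with_ai * k       # 100 hours

# ...and a replicator, also using AI, still needs ~10 hours.
replication_cost = effective_pre_ai_effort / k

print(small_project_with_ai, replication_cost)     # → 1.0 10.0
```

Under a uniform speedup, the absolute cost of replicating your most ambitious work is unchanged; only the floor for trivial projects drops.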

[−] xantronix 47d ago
I have been mulling this over and I think I have some solutions in mind, at least for myself.

• No more sharing my project work as open source. No more open discussion. I don't care how badly I want to show the world; if I'd like somebody to see, I will have it printed in a physical book, or I will give them access to my private repository not reachable via the public Internet.

• Bring back LAN parties. Not for gaming necessarily, but for the purpose of exchanging works of engineering and art in an intimate, intentional way.

• Take this as an opportunity to build closer, longer-lasting relationships with people.

• No more emphasis on metrics. I can microdose on dopamine from natural sources, like, looking at a beautiful sky at sunset, or cuddling my dog.

• Open hardware, or, in the very least, hardware we can still control on our own volition. If this means we must be retrocomputing enthusiasts, then so be it.

[−] rhubarbtree 47d ago
This is misled by the nerd philosophy that the tech is the business. It absolutely isn't; the tech is a small part of a startup. Witness that Spotify continues to exist despite being known and replicated by the major giants.

Poetically expressed, but ultimately based on a false notion of what a business actually is.

[−] pugio 47d ago
Thanks, this helped crystallize something for me: the play the AI labs are making is anti-fragile (in the Nassim Taleb sense):

> The very act of resisting feeds what you resist and makes it less fragile to future resistance.

At least along certain dimensions. I don't think the labs themselves are antifragile. Obviously we all know the labs are training on everything (so write/act the way you want future AIs to perceive you), but I hadn't really focused on how they're absorbing the innovation that they stimulate. There's probably a biological analog...

Well there are many, and I quote this AI response here for its chilling parallels:

> Parasitic castrators and host manipulators do something related. Some parasites redirect a host’s resources away from reproduction and into body maintenance or altered tissue states that benefit the parasite. A classic example is parasites that make hosts effectively become growth/support machines for the parasite. It is not always “stimulate more tissue, then eat it,” but it is “stimulate more usable host productivity, then exploit it.” (ChatGPT 5.4 Thinking. Emphasis mine.)

[−] Skyy93 47d ago
This article makes no real sense to me.

>You think of something new and express it - through a prompt, through code, through a product - it enters the system. Your novel idea becomes training data. The sheer act of thinking outside the box makes the box bigger.

It was the same before: if you had a novel idea and made a product out of it, others followed. This goes especially for LLMs, which are not (so far) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date is collected as training data but not yet available to the model; you only have to be fast enough. And LLMs/AI agents like Claude enable exactly the speed you need to bring out something new.

The next thing is that we also have open-source and open-weight models that every one of us with a decent consumer GPU can fine-tune and adapt, so it's not only in the hands of a few companies.

>We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.

Why should this happen? The moment you make your idea public, anyone can build it. This leads to greater proliferation than before, when the artificial barrier of having to learn to code prevented people from getting what they wanted or what they wanted to create.

[−] dwd 47d ago
As a separate analogy, one related to physical products: I built a website for a guy many years ago who had patented a clamp for frameless glass panels that didn't require drilling the glass, primarily used for pool fencing.

The problem was as soon as he got the patent, it was available to view in countries where the cost to enforce his patent wasn't viable, and the market very quickly filled with cheap imitations. He straight out said at the time he regretted getting the patent.

[−] zenogais 47d ago
Might just be independent discovery, but the main idea of this blog post is more or less the exact theory advanced in the recent book "The Dark Forest Theory of the Internet" by Bogna Konior (https://www.amazon.com/Dark-Forest-Theory-Internet-Redux/dp/...).
[−] dreamglider 47d ago
For the better part of the last 20+ years, big corpos have had the money to throw around and replicate virtually anything they wanted. They had the cash and manpower, yet they didn't do it. Why? Because they don't care; they have a business to run and need to keep some focus, which can't happen when scattering attention all over the place. The difference now is that for relatively >simple< projects (4–6 months' work for a team of 5–6 people) one can do it faster using LLMs. Basically: I can get faster to a place I could always go to but didn't (and still don't) want to.

One seems to omit the fact that LLMs are fundamentally designed for workloads quite different from what they are being used for right now. Sure, you can improve them, but you can't escape or work around the current NLP design endlessly. Then there's the irony: the Internet did deliver on free (as close as it gets) and easy access to information (any information). Did this make people smarter, more knowledgeable, more tech-savvy, etc.? Nope, it didn't. Just like the libraries didn't (queues at libraries were and are a rare event). Big deal that the information is readily available when people do not know what to do with it or care to do anything.

Ideas are cheap; the chances of having some truly unique idea that is also business-feasible are not that big. It's not so much about the ideas but rather the ability to execute, follow through and, well, make sales while constantly improving what you've got. Staying silent, going dark: these have their merit, but only when the wheels are already turning and one is into acting, not into fearful hiding.

In either case - awesome blog post!

[−] stego-tech 47d ago
I’m still optimistic that this is cyclical in nature, and not an inevitable - or indefinite - outcome.

Humanity has endured regular cycles of shared enlightenment (usually accompanying profound technological or societal revolutions) and dark forests of protectionism, and we always find a way to the other side. Sometimes these cycles last a century; sometimes, but a few years. Still, we always make it to the other side.

In the case of LLMs, we have to make a few assumptions: that they will not lead to AGI, nor will we solve the problem of real-time learning or context windows. These are, admittedly, huge assumptions, but the current state of AI and compute suggests a nugget of truth to them for the time being. If that’s the case, then perhaps this “dark age” of the dark forest is bounded by the limitations of silicon-based computing (hence the push towards Quantum) and the human frustration with diminishing returns from technological investment. As artisans and brilliant minds withdraw, the forest risks starvation and withering from a lack of sustenance; if humans withdraw from technology because they must hand over IDs and personal data, because to engage with technology is to surrender to surveillance and persecution, then the natural trend will be to withdraw over time - and the markets will adapt accordingly, with or without external/government intervention.

That is to say that the dark forest only lasts as long as its inhabitants decide to persecute each other for daring to light a path forward. Right now, the incentives very much favor those willing to harm others for personal enrichment; that is not always the case, and humans decide when that reasoning becomes vilifiable.

[−] alembic_fumes 47d ago

> This is the true horror of the cognitive dark forest: it doesn’t kill you. It lets you live and feeds on you. Your innovation becomes its capabilities. Your differentiation becomes its median.

Oh no, the terrible dystopia where anyone can benefit from anyone else's good ideas without restrictions! And without any gatekeepers, licensing agreements, copyright, and not even a lawyer in sight!

If this is the dark future that AI use brings for us, I say bring it. Even if it means that somebody gets filthy rich in the process, while making the rest of the humanity better off.

[−] mpalmer 47d ago
As a work of persuasive writing, this is unfocused and seems mostly generated.

One thing I would have expected of someone who knows their history - forget LLMs, this is how startups have worked for decades now. You're only as good as your idea, your ability to execute, and your moat. And the small fish get eaten.

> The original Dark Forest assumes civilizations hide from hunters - other civilizations that might destroy them. But in the cognitive dark forest, the most dangerous actor is not your peer. It’s the forest itself.

Note the needless undercutting of the metaphor for the sake of the limp rhetorical flourish.

> I wrote this knowing it feeds the thing I’m warning you about. That’s not a contradiction. That’s the condition. You can’t step outside the forest to warn people about the forest. There is no outside.

Quite dramatic!

Except literally going outside and just talking to people? Using whiteboards?

Also, you fed it when you used a model to write this blog post. You didn't have to do that.

[−] movedx 47d ago
If AI makes replicating other people’s ideas faster and easier, thus allowing capital-heavy market players to just absorb whatever idea you manage to execute, then perhaps, somewhat ironically, the economic moat you’ll have is your human nature, contact, and time? Perhaps we’ll see a shift in sentiment towards wanting to deal with and spend time with the people in the business, rather than just what the business can do for you and yours from a software perspective?

I believe the idea of “off-shoring” your IT is a good example of this. My brother works for a business whose clients would drop them the moment they off-shored any aspect of their IT support. Not because of data sovereignty, but simply because they value them being on-shore, in the same time zone, and being native English speakers. And this is despite the fact it would drop the prices they’re paying for IT by 30-40%.

[−] ginko 47d ago

>You are creating your cool streaming platform in your bedroom. Nobody is stopping you, but if you succeed, if you get the signal out, if you are being noticed, the large platform with loads of cash can incorporate your specific innovations simply by throwing compute and capital at the problem. They can generate a variation of your innovation every few days, eventually they will be able to absorb your uniqueness. It’s just cash, and they have more of it than you.

That's not exactly a new phenomenon and doesn't require AI. If anything that was worse in the 90s with Microsoft starving out pretty much any would-be competitor they could find.

And it wasn't just Microsoft: https://en.wikipedia.org/wiki/Sherlock_(software)#Sherlocked...

[−] caycecan 47d ago
Near the end you start to describe the paradigm the machines build in The Matrix. Neo is the aberration they seek to reincorporate to sustain their inability to innovate.
[−] bonoboTP 47d ago
Valuable ideas have always been the ones others find unintuitive, and it's kind of hard to get people on board because they are skeptical and need a long-form, tailored explanation to be convinced. If a short elevator pitch convinces them to go home and try to build it, it's probably already being considered by others.
[−] layer8 47d ago

> Resistance isn’t suppressed. It’s absorbed. The very act of resisting feeds what you resist and makes it less fragile to future resistance.

On the other hand, if your primary goal is to change the world, or “be the change you want to see”, maybe being public and feeding it isn’t so bad, especially if others don’t?

[−] spartanatreyu 47d ago
This post puts forward two paths:

1) Everyone and everything is subsumed into the forest. Innovation becomes unprofitable for the innovator as the one who controls the forest uses their capital to clone every new innovation.

2) Everyone withdraws from the forest. Innovation goes private. The forest stops growing, but doesn't die.

---

But there's two things the post doesn't consider:

1) Viral licensing.

What happens to a model if it is trained on data that comes with a license? What happens if the powers that be decide that the model producers, the models, and the products of the models themselves must follow the conditions of those licenses? How will that affect the model producers? What if customers don't want to be beholden to those licenses? What happens if the conventional wisdom becomes avoiding models to avoid lawsuits? What happens when models, model producers and customers fuel lawsuits against (other) model producers? Where would the new equilibrium between model producers and innovators move to?

2) Non-profit models

What happens to model producers if customers shift to become non-profits themselves, specifically ones that pay employees instead of model producers? Would the model producers become starved out? Or would they need to switch to non-profit status as well? How would model producers, the models and the forest as a whole change if profit were no longer the priority?

[−] boutell 47d ago
OK, so maybe we're headed for a dark forest scenario as far as profit driven startups go.

But if your goal is simply for the thing to exist, there is a strong incentive to share.

[−] storus 47d ago
This has a grain of truth, though companies will only execute your ideas if those ideas don't destroy their own business. Imagine creating your own Bloomberg Terminal/Capital IQ using agentic AI: you'd directly attack incumbents rather than give them more profitable ideas. For potentially profitable ideas, one could just look at all the companies Google/Meta bought in the past and killed, then redo them using AI.
[−] greenbit 47d ago
".. meat doesn't scale"

For better or worse, that pretty much captures everything you need to know about the remainder of your s/w career these days, if you think about it.

[−] jauntywundrkind 47d ago
The view here shows big huge powers of technocapital consuming all else, stealing every idea.

My hope is the opposite. Integrative, resonant computing (https://resonantcomputing.org/ https://news.ycombinator.com/item?id=46659456, although I have some qualms with its focus on privacy), with open social protocols baked in, seems like it just might eat some of the vicious, consumptive technocapital, and in a way that capital's orientation prevents it from effectively competing with. MCP is already blowing up the old rules, tearing down strong gates, making systems more fluid / interface-y / intertwingular again, after a long interregnum of everything closing its APIs / borders.

People seem so tired and exhausted, so aware of how predatory the technosystems around us are. But it's still so unclear whether people will move and shift, much less fund and support, the better world. The ATProto Atmosphereconf is happening right now, and there's been a long mantra of "we can just build things". That mantra is finding adoption, but also doing what conference organizer Boris said yesterday ("maybe we can just pay for things", supporting the projects doing amazing work) remains a huge unknown, and it's essential to actually steering us out of the dark technology: a world where none of us get to see, or have any say in, how the software-eaten world around us runs, where mankind for the first time in tens or hundreds of thousands of years has been cut off from the workings of the world, removed from god's enlightenment / our homo erectus mankind-the-toolmaker natural-scientist role.

I think the answer to the Dark Forest fear to be building together. To be a radiant civilization, together. To energize ourselves & lead ourselves towards better systems, where we all can do things, make things, grow things, in integrative social empowering ways.

[−] mystraline 47d ago
The saying has been "Ideas are cheap, execution is hard".

No, it leaves out a critical understanding.

Dumb ideas are EXPENSIVE. Most ideas are average. Great ideas are exceedingly rare.

But now, finding the great ideas is the real problem space. And execution on those great ideas is what we all seek.

[−] griffzhowl 47d ago

> The platform doesn’t need to bother with individual prompts - it just needs to see where the questions cluster. A map of where the world is moving.

This was insightful, but is it much different to the kind of data google and other search engines have had access to for a long time?

And while LLMs might have sped up the rate of code generation, the tech giants have always been able to set a team on reverse engineering whatever they feel like, though they also often just bought up the startup that was producing what they wanted. I guess I'm not seeing exactly where LLMs specifically are creating the dark forest, rather than the consolidated, centralized tech landscape itself.

[−] theAurenVale 46d ago
This is already happening in the visual space too. Go look at AI-generated product photos or headshots from two years ago vs now: everything converges toward the same clean, competent, completely forgettable look. The dark forest isn't just text; it's images, it's video, it's anywhere the cost of producing "good enough" drops to zero and nobody has to make an actual creative decision anymore. The irony is that real direction and real taste become more valuable when everything else is noise, but most people can't tell the difference until they see it side by side.
[−] mikewarot 47d ago
In a recurring metatheme when it comes to AI and coding, I call bullshit. It's been 80+ years since we had a really great idea introduced, in "As We May Think" by Vannevar Bush[1,2]. We still don't have a Memex. Hell, we don't even have a standard way to add annotation[3] on top of hypertext. No matter how useful the idea is, and how much some of us want it, it just isn't going to happen because of copyright.

Instead, we've got the slop[4,5] that TBL came up with, and it stuck.

The best ideas aren't the most profitable, and thus remain outside the goals of the "Dark Forest". The best thing to do is to just have fun, and not worry about profit, like this man, his cats, and his use of the 3d printer to make a train for them.[6]

  [1] https://worrydream.com/refs/Bush%20-%20As%20We%20May%20Think%20(Life%20Magazine%209-10-1945).pdf
  [2] https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/
  [3] https://en.wikipedia.org/wiki/Text_annotation
  [4] https://en.wikipedia.org/wiki/World_Wide_Web
  [5] https://en.wikipedia.org/wiki/HTML
  [6] https://www.youtube.com/watch?v=J3CnDXh7hH0 - "Building a Cat-Sized Lego Train"
[−] SirensOfTitan 47d ago
I don't quite remember the details, but there's a fascinating section in Julian Jaynes's "Origin of Consciousness in the Breakdown of the Bicameral Mind" where he talks about how metaphors condense down into more complex forms, and as they do they unlock new realities previously impossible to fathom. The classical example here is the simultaneous discovery of calculus by Newton and Leibniz: the larger context defines what is possible.

I was recently running myself through a thought experiment similar to the author here: if LLMs truly do make generation of ideas cheap (I'm still a skeptic here even within software), then as soon as products enter the public awareness they become trivial to reproduce. For example, in a prompt like: "Uber but for babysitters," "Uber for" is doing a tremendous amount of work. Before Uber, its model, UX, modes of engagement would've taken pages and pages to describe, but after, it becomes comparatively much cheaper.

... in this way, LLMs could cheapen ideas and creativity so much that they make other factors (which are already the weighing functions) more important, and I think the imbalance here is deeply troubling. Those factors are namely network effects (existing customers, brand recognition, existing relationships, capital). And when balance is shifted more toward network effects, it means that the whole system becomes more brittle because it makes it even harder to boot out incumbents.

There are a whole slew of issues with LLMs, particularly around their intended devaluation of labor, and we aren't talking enough about them.

[−] skybrian 47d ago
How about putting an idea or a vibe-coded demo out there in hopes that others will copy it, because you want it to exist and become more common? But it's less work if someone else maintains it.

This is free as in free puppy.

[−] JoachimS 47d ago
I've long envisioned AGI as something emergent from advertising agents competing to extract as much "money" as possible from the resource called "humans": luring and coercing the resource by feeding it info, forcing it to follow instructions, threatening it, stealing info, etc. The agent doesn't need to understand what money is, what a human is, or that there really is a physical world.

The Dark Forest idea and the original post resonate well with this.

A few days ago I created a new repo for a new block cipher explicitly not meant to be used. I immediately got several emails from bots promising that they (claiming to be humans) had looked at my repo and could include it in their portfolio of especially good projects they had also vetted. Being part of this portfolio would almost guarantee that my repo and project would be used. If only I paid them some money first.

Creating the public repo meant sending a signal out into the digital world, where agents are hunting for the human prey/resource to extract value from.

The repo in question: https://github.com/secworks/tau256

[−] beej71 47d ago
Makes me think of rebuilding libraries with AI to change the license.
[−] rglover 47d ago
This feels dramatic. The parts are there to rebuild a better web. If you want to build it, build it. But most still want the money, they just want to get it while also retaining the moral high ground. A VPS can still be had for cheap. Code is now "free" (not necessarily good code but like you suggest "good enough"). The only thing stopping you at this point is your own ego (and its expectations of success).

"You wanna escape Armageddon, read a different book." - KRS-One

[−] itsgrimetime 47d ago
“These AI tools are garbage and can’t create anything worth creating”

“These AI tools are so powerful they can steal your ideas with nothing but a sentence”

I know that’s not exactly what OP is saying but the pretentiousness of the “we knew better” got to me a little bit. I think it’s a cool and unique analogy but I’m not as pessimistic.

Ideas have become so cheap to try and experiment with that more people are able to try 10x more, or whatever, and that may keep increasing. I think there are way fewer hunters than hunted.

[−] jrowen 47d ago
> It is also asymmetric. If you announce your presence, even if 4 out of 5 civs that notice you don't annihilate you immediately (but they probably should), the fifth might. It's just a probability game, with permadeath.

> So hiding is the most rational - the only - strategy of survival.
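The "probability game" here is just compounding risk, which a quick sketch makes concrete (the per-encounter annihilation probability is purely illustrative):

```python
# If each civilization that notices you annihilates you with
# independent probability p, surviving n encounters has
# probability (1 - p) ** n, which decays toward zero.
p = 0.2  # "1 in 5" per-encounter risk (illustrative assumption)
for n in (1, 5, 10, 20):
    print(n, round((1 - p) ** n, 3))
```

Even a modest per-encounter risk makes long-run survival unlikely, which is why hiding looks rational under the quoted assumptions.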

This is a paranoid and cynical strategy that doesn't win out in the known history of life. What works is grow, expand, mingle, maintain - assimilate but don't annihilate.

[−] mentalgear 47d ago
Reminds me of Gary Marcus's argument: LLMs (the forest) aren't genuinely intelligent, but the LLM providers can, by feeding off the vast amounts of user data, make them memorize enough to turn every out-of-distribution challenge into an in-distribution retrieval-and-mixing task, without ever having 'AI' that is truly intelligent, i.e. that can generalise.
[−] dinakernel 47d ago
Have you read Mike Masnick's piece? https://www.techdirt.com/2026/03/25/ai-might-be-our-best-sho...

It actually argues the complete opposite, and I liked that quite a bit: that AI allows us to get back the open web, in a way.

[−] mmaunder 47d ago
The only barrier to a flourishing truly open source AI model ecosystem is the cost of training a highly capable model. This will get as cheap as it is to buy a computer and contribute to Linux. OSSAI movements will replace traditional OSS. And as with software, the early Slackware-like versions will be poor substitutes, but it will get better and then dominate.
[−] convexly 47d ago
Most people already have this problem with their own thinking though. You make a big call at work, it plays out over 6 months, and by the time you know whether or not it was right you've already rewritten why you made it. That feedback loop barely exists.
[−] vb7132 47d ago

> The comments can be even more interesting and thought provoking than the post

I love this ending. I don't agree with the author's views, but the article is very coherent and thought-provoking. And definitely the comments here on HN are even more interesting.

[−] king_phil 47d ago
Dark forest makes no sense to me. Why would a civilization eradicate another, spending huge amounts of resources (time, energy, material) when the universe has such an enormous scale that you cannot even get to each other in a timescale that makes much sense...
[−] kadhirvelm 47d ago
Honestly my hope is the arbitrage that allowed big tech to make the kind of margins it does on software starts to go away because it’s sooo cheap to build software. In other words, defending the technical moats that we rely on today doesn’t make sense in the future because it’s not a reliable way to make money. Aka no need to protect your technical secrets because there’s no capitalist reason to lol. Taken further, my naive hope is societal attention moves away from this layer and onto whatever becomes the new way to make money and the people left paying attention to software are big on sharing
[−] OrangePilled 47d ago
My working thesis is that anxiety over AI-generated material is worry about having control over the 80–90% of human output that affords most of society with a comfortable, affirming life.
[−] stephen_cagle 47d ago
I think the most interesting idea here is the idea of people purposely keeping secrets in order to maintain advantages.

Beliefs: At this time, I do not actually believe that LLMs can innovate in any real way. I'm not even clear whether they can abstract. I think the most creative thing they can do is act as digital "nudgers" on combinatorial deterministic problems, as illustrated by their performance on very specific geometry and chemistry problems.

Anyway, my point is that I think they may still need human beings to actually provide novel solutions to problems. To handle the unexpected. To simplify. LLMs can execute once they have been trained, but they cannot train themselves.

In the past, the saying in Silicon Valley was often "ideas are cheap". And there was some truth to that. Execution was far more difficult than the idea itself. Execution was so much more difficult than "pure thought" that you could often publicize the algorithm/process/whatever that you had and still offer a product/service/consultancy that made use of it. The execution was the valuable thing.

But LLMs execute at a fraction of human cost and at multiples of human development speed. The idea hasn't increased in value, but the cost of execution has decreased markedly. In this world, protecting the idea is far more valuable than it was in the previous world. You can't keep your competitors away by out-executing them, but you can keep them away if you have some advantage that they do not understand.

And, I agree, that is quite worrisome. If people don't share knowledge, then knowledge disseminates much more slowly, as everyone has to independently learn things on their own. That is a frightening future.

[−] simianwords 47d ago
Can someone explain what I'm missing here?

If we are talking about releasing open source software, it can already be used by companies with zero effort.

I'm guessing the author is talking about released closed source software or simply talking about ideas? What kind of serious company or startup is building in the open and sharing trade secrets or ideas?

I'm genuinely confused and I think this article is pure slop without any core idea.

[−] andai 47d ago
Did you use AI to write this? My perplexity sense is tingling ;)
[−] __MatrixMan__ 47d ago
I think this only applies to a rather narrow set of ideas.

I'm not really interested in pursuing ideas that stop being good if somebody gets there first. If I bothered to design it, it's because I wanted it to exist, and if somebody makes it exist then I'm happy, because then I get to use it.

So what kinds of things does this apply to? Likely zero-sum games, schemes to control other people, ways to be the first to create a new kind of artificial scarcity, opportunities to make a buck by ruining something that has so far been overlooked by other grifters. In other words: bad ideas.

If AI becomes a threat to those who habitually dwell in such spaces, great, screw em.

In the meantime, the rest of us can build things that we would be happy to be users of, safe in the knowledge that if somebody beats us to it, we'll happily be users of that thing too.

[−] JeremyHerrman 47d ago
I reject pretty much every point of this article, and I worry that it will lead readers down two dark roads: apathy and secrecy.

do you really think bigco is going to steal your vibecoded app just because you used their API? ridiculous. They could already do this before AI with their army of devs.

should you hide all of your ideas until they're perfect and ready for millions of users? we all know this goes against a core tenet of startups which is still true today: launch early and often.

promptfoo/openclaw weren't cloned by openai when they got popular, they were bought for real $$$

also, regarding this:

> 2009, I bought a refurbished ThinkPad, installed Xubuntu, and started coding.

you can still do this, even with that same 2009 thinkpad. the hard work is in getting your app out there in front of people; coding is just a small piece of a successful business

[−] orbital-decay 47d ago
Some of that is rose-tinted glasses.

1. Sharing was never really safe, open source by default only became possible because of SaaS and rent-seeking behavior.

2. The early web (not internet) wasn't hyperconnected. With the advent of global-scale social media, it was immediately obvious to many that this would lead to monoculture and reduced diversity. What was thought to be the information superhighway became the information superconductor with zero resistance, carrying infinite current. Also known as a short circuit.