It's weird to me that this is "suddenly" an issue.
It has been known for decades that Red Hat Inc's largest customer is the U.S. Army[1]. It's a very large part of the reason why Red Hat took over development of SELinux and enabled it by default in their distros.
And the Army isn't exactly known for handing out cupcakes...
[1] "Red Hat’s partnership with the U.S. Army spans 10 years
starting with the deployment of Red Hat Enterprise Linux in
2002 and, to this day, the U.S. Army remains one of Red Hat’s
largest customers by volume."
Optics. One could argue that Red Hat was working with the DoD on security alone. But after this white paper on how to kill people better, that facade has fallen.
Suddenly, war and its associated killings stopped being theoretical. A bunch of people who used to be, in practice, about as dangerous to civilization as paintballers started actually using real weapons on real people.
Military in peacetime is cosplaying (larping?) war. So there's little resistance to aiding them in their silliness. When they actually start to bomb people, it's another story.
The deal was: we aid you in your pretend wars, but you don't start actual ones. That deal has been violated, and people won't abide by it.
It’s low latency for realtime. Discussion of these things has been public for twenty years, since Red Hat released MRG, and this is a conspiracy theory website.
>The focus should be on national defense, aid during disasters, and responding to the legitimate requests of sovereign, democratic nations to come to their defense (e.g. helping Ukraine fight off the Russian invasion).
Carving out the particular military engagements your company deems less than justified sounds nice but isn't workable in practice. You have to swallow the whole pill if you want to sell to the DoD.
Better to have smart bombs than dumb ones. Or rather, better to have 1 smart bomb than 1000 dumb ones spread across an entire city in order to pick off the particular building, vehicle, or person you want.
Especially AI hallucination bombs that hit a park named "Police Park" because the AI thinks it's killing policemen[1], or a children's school with "Shahed" in its name[2] because it thinks it has something to do with drones.
This isn't even that new. Part of the motivation for building autonomous nuclear response programs during the cold war was specifically to remove accountability, and guilt, from human operators. But AI does bring it to a new level.
> Part of the motivation for building autonomous nuclear response programs during the cold war was specifically to remove accountability, and guilt, from human operators.
Details please. Because I can see the reality being most likely an attempt to avoid conflict by solidifying MAD, by trying to prevent a human from vetoing a second-strike.
Your links talk about the places that were bombed, but I don't see anything apart from conjecture that this was the product of AI targeting.
Also, this vastly underestimates the ability of organizations that were able to locate most of the Iranian leadership in their hiding places throughout the war; suddenly their Farsi is so bad they need a Twitter account to tell them this is a park?
What's the running rumor right now about which AI was involved? I heard Claude a while back, but this makes me wonder how much Red Hat could have been involved.
That “smart” vs “dumb” distinction doesn’t apply here though. What is discussed has nothing to do with the ability to physically land a bomb in a precise location, that problem seems to be solved reasonably well already. “Smart” in this case has more to do with using ML/LLM to select a target.
The reality looks more like the worst of both worlds to me.
If you genuinely needed only a handful of "surgical strikes", there would be no need to "compress the kill cycle".
What we see in Gaza, Lebanon, and Iran looks more like "smart carpet bombing": some AI system generates a continuous stream of "targets" from sensor and intelligence data, according to whatever criteria the political leadership defines and a given level of allowed "collateral damage". Those targets are then immediately fed to drones or warplanes to destroy: essentially a continuous "pipeline" that, in the dreams of those people, should "ideally" become fully automated.
For THAT kind of vision, "efficiency" in destroying any particular target and checking all legally required boxes as quickly as possible is probably paramount.
(And in addition to that, there are probably still enough "dumb bombs" if no one is looking)
Smart bombs are no good if they are directed by a dumb AI targeting system, a dumb alcoholic accelerationist religious fanatic Secretary of War, or a dumb narcissistic genocidal pedophile President.
As someone who works for the DoD, the so called "disturbing" language in the paper is very commonplace in this industry. Idk if or why Red Hat is trying to redact the paper, but I'm sure it's not because they're embarrassed their software is killing people. That's par for the course for defense contractors.
> With that in mind, it seems Red Hat, owned by IBM, is desperately trying to scrub a certain white paper from the internet. Titled “Compress the kill cycle with Red Hat Device Edge”, the 2024 white paper details how Red Hat’s products and technologies can make it easier and faster to, well, kill people.
Words have meaning, and their emotional force derives from that meaning. The knowing misuse of a term like “genocide” for its emotional force is manipulative sophistry.
Besides external PR, does anyone know how this affects internal morale?
Some of the earlier Red Hat people I knew would not be OK with working on weapons systems even under the most legitimate circumstances. And they'd be much more opposed to collaborating with fascist regimes. And I think horrified by the idea of shoveling AI slop and grifter hype into life&death decisions.
Of course the tech industry makeup has changed (overall culture transitioning from hacker idealists to finance bros), and some IBM-ification of Red Hat has also happened. But I'd like to think Red Hat still attracts a more principled pool of talent than FAANG.
This post looks artificially buried on page 3 now, and the topic is one of the most important things that tech company workers should be thinking about right now.
The disturbing white paper Red Hat is trying to erase from the internet (osnews.com)
153 points by choult 6 hours ago | 51 comments
> I don’t think there’s something inherently wrong with working together with your nation’s military or defense companies, but that all hinges on what, exactly, said military is doing and how those defense companies’ products are being used. The focus should be on national defense, aid during disasters, and responding to the legitimate requests of sovereign, democratic nations to come to their defense
The core purpose of a military is to destroy things and kill people, and the world is controlled by the people who can do that better than others. You can put all the "defense" and "disaster aid" lipstick on that you like but that doesn't change what they train for and what their real purpose is.
[1] https://unixdigest.com/includes/files/Army-RedHat-Whitepaper...
[1] "Red Hat’s partnership with the U.S. Army spans 10 years starting with the deployment of Red Hat Enterprise Linux in 2002 and, to this day, the U.S. Army remains one of Red Hat’s largest customers by volume."
Before it was a maybe, now it's certainty.
Ambiguity is quite comforting.
No reasonable person tolerates this behavior. When a military does that, it immediately loses the leniency of smart people.
The US military is first and foremost a welfare program for people who chose not to perform any useful economic activity.
Archive URL to original paper
[1] https://x.com/MarioNawfal/status/2029575052535173364
[2] https://www.aljazeera.com/news/2026/3/6/elementary-school-in...
You or your subordinates target an elementary school: that's a war crime.
Your "battlefield AI" targets an elementary school: software bug, it happens, can't be helped.
Meaning whatever horrors are done on either side, only the horrors committed by the loser will be "crimes". The inclusion of AI doesn't change that.
You want consensus from non-experts for a plan to use 20 smart bombs.
Your opponent wants consensus for a plan to live-stream a demo of 1 smart bomb, and then use 19 dumb ones.
Your team has more expertise.
Your opponent's plan saves enough money to buy a better PR team than yours, and is still more cost effective than your plan.
Who wins?
Any productivity improvement software in the wrong hands could make doing bad things more efficient.
Can we rename this to "Red Hat removes paper from website on using their software to 'shrink the kill-chain'"?
It appears IBM learned no lessons after WWII: https://en.wikipedia.org/wiki/IBM_and_the_Holocaust
That book will need a sequel soon.
> With things like the genocide in Gaza ...
Population: ~2,050,000
Density: 15,455.8/sq mi