Sam Altman may control our future – can he be trusted? (newyorker.com)

by adrianhon 916 comments 2202 points


[−] ronanfarrow 39d ago
Ronan Farrow here. Andrew Marantz and I spent 18 months on this investigation. Happy to answer questions about the reporting.
[−] cs702 39d ago
Thank you for coming on HN and offering to answer questions.[a]

This is a fantastic piece, very timely, evidently well-researched, and also well-written. Judging by the little that I know, it's accurate. Thank you for doing the work and sharing it with the world.

OpenAI may be in a more tenuous competitive position than many people realize. Recent anecdotal evidence suggests the company has lost its lead in the AI race to Anthropic.[b]

Many people here, on HN, who develop software prefer Claude, because they think it's a better product.[c]

Is your understanding of OpenAI's current competitive position similar?

---

[a] You may want to provide proof online that you are who you say you are: https://en.wikipedia.org/wiki/On_the_Internet%2C_nobody_know...

[b] https://www.latimes.com/business/story/2026-04-01/openais-sh...

[c] For example, there are 2x more stories mentioning Claude than ChatGPT on HN over the past year. Compare https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru... to https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...
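(The comparison in footnote [c] can be reproduced against the public Algolia HN Search API, which backs hn.algolia.com. A minimal sketch; the helper name is mine, and the query parameters are the API's standard `query`, `tags`, and `numericFilters` fields:)

```python
import time
import urllib.parse

ALGOLIA_SEARCH = "https://hn.algolia.com/api/v1/search"

def story_search_url(term, days=365, now=None):
    """Build an Algolia HN Search API URL that matches stories
    mentioning `term` created within the past `days` days."""
    now = int(time.time()) if now is None else now
    cutoff = now - days * 86400  # Unix timestamp lower bound
    params = urllib.parse.urlencode({
        "query": term,
        "tags": "story",
        "numericFilters": f"created_at_i>{cutoff}",
    })
    return f"{ALGOLIA_SEARCH}?{params}"

# Fetching each URL and reading the `nbHits` field of the JSON
# response gives the per-term story count to compare.
print(story_search_url("Claude"))
print(story_search_url("ChatGPT"))
```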

[−] ronanfarrow 39d ago
Thank you for this, very much appreciate the thoughtful response.

The piece captures some of the anxieties within OpenAI right now about their competitive position. This obviously ebbs and flows but of late there has been much focus on Anthropic's relative position. We of course mention the allegations of "circular deals" and concerns about partners taking on debt.

[−] cs702 39d ago
Thank you. Yes, I saw that. The company's always been surrounded by endless talk about insane hype, speculative bubbles, and financial engineering. I wasn't asking so much about that.

I was asking more about your informed view on how OpenAI's technology, products, and roadmap are perceived, particularly by customers and partners, in comparison to those of competitors.

If you have an opinion about that, everyone here would love to hear about it.

[−] cs702 36d ago
UPDATE: Well-regarded people on HN are saying OpenAI's most recent GPT-5x codex model is better than Claude 5x for certain coding tasks:

https://news.ycombinator.com/item?id=47707494

[−] globalnode 38d ago
At this point even Google's AI search results are better than GPT. Obviously this is not for full programs, but if you know what you're doing and just want a snippet, that's all you need.
[−] embedding-shape 38d ago
Wild how different experiences people can have. Both Google's models and Anthropic's hallucinate a lot for me, even when I try the expensive plans and with web searches, for some reason, and none of them come close to the accuracy and hallucination-free responses of ChatGPT Pro, which to me is still SOTA and has been since it was made available. But people keep having opposite experiences, apparently; I just can't make sense of it.
[−] ethbr1 38d ago
Kagi (assistant.kagi.com) with Kimi K2.5 (their current default) has worked great for me in scenarios where the search result data is more important than the model.

I.e. what I used to use Google for and when I don't want an AI to overly summarize / editorialize result data.

[−] globalnode 38d ago
Oh, that's probably because I'm a cheapskate and just use the free garbo models. I'm sure the pro version is quite good.
[−] irishcoffee 39d ago
My guess is that the answer to your question (fantastic question, by the way) is that nobody knows. I remember having the same thoughts when Covid was first “arriving,” if you will: we wanted people in the know to throw us a nugget of information, and they just didn't know.

As it turns out, and what I’m kind of going with for this LLM shit, is that it’ll play out exactly how you think it will. The companies are all too big to fail, with billionaire backers who would rather commit fraud than lose money.

[−] Ericson2314 39d ago
Ronan Farrow's expertise is investigations into elite amorality, not evaluating technical products. Why are you asking this question?
[−] keepamovin 38d ago
If you were in charge of deciding what should be done with Sam Altman, what would you choose?
[−] unsupp0rted 39d ago
Many of us prefer OpenAI's Codex, because we think it's a better product.

No comment on the CEO: I just find the product superior in everything but UI/UX and conversation. It's better at quality code.

[−] brightbeige 39d ago
He’s replying on this twitter thread - perhaps someone with an account can ask there and link his comment here?

https://xcancel.com/RonanFarrow/status/2041127882429206532#m

[−] ed 39d ago
It's worth noting Codex has 2x more stories than Claude https://hn.algolia.com/?query=codex
[−] ATMLOTTOBEER 39d ago
Yeah we moved to Claude a few months ago, mostly because the devs kept using it anyway. Altman stuff is interesting but at the end of the day you just go with whatever tool works
[−] cableshaft 38d ago
Personally, I prefer Claude for coding, but I still prefer ChatGPT for hashing out ideas for my projects (which tend to be game designs). So I use both.
[−] lasky 35d ago
I’m assuming this is all sarcasm.
[−] georgemcbay 39d ago

> You may want to provide proof online that you are who you say you are

Unfortunately it probably doesn't even matter here on HN considering how brigaded down this story is predictably getting.

But yeah, it was a fantastic piece.

[−] taurath 39d ago
The statements around the sexual abuse allegations seemed the most puzzling to me: his sister's allegations, and the claims of underage partners arising from his tendency to hook up with younger partners. It does seem like this piece gives him a pretty clean bill of health in that matter. I guess, would you be able to talk about how you investigated?

Did you do any extra investigations into Annie’s allegations? It feels to me like the unstated conclusion is recovered memory can’t be trusted, which is a popular understanding but a very wrong one put out by the now defunct and discredited False Memory Syndrome Foundation. It was founded by the parents of the psychologist who coined DARVO, directly in reaction to her accusing them of abuse.

Dissociation is real (I have a dissociative disorder, and abuse I “recovered” but did not remember for much of my adolescence and early adulthood has been corroborated by third parties) and many CSA survivors have severe memory problems that often don’t come to a head until adulthood. I know you didn’t dismiss her claim, but the way the public tends to think about recovered memories is shaped primarily by that awful organization.

[−] jzymbaluk 39d ago
Hi Ronan, thanks for the article and for answering questions.

My question is, how do you know when an enormous project like this, conducted over an 18-month time span is "done"? I assume you get a lot of leeway from editors and publishers on this matter. How do you make the decision to finally pull the trigger on publishing?

[−] cm2012 39d ago
I just spent a while reading the article. I really appreciate you writing it. In my case, it made me like Sam Altman a lot more. But I was only able to conclude this because of all the evidence you took the time to put together. It paints the picture of someone trying to do something very difficult in a rapidly changing environment and under a lot of pressure, but still making the important choices and not shirking them.
[−] fblp 39d ago
Hi Ronan, appreciate you being here. What would help you and others continue to do journalism like this (including commenting on HN)?
[−] sebmellen 39d ago
Ronan Farrow on Hacker News. Now I’ve seen everything.
[−] philip1209 39d ago
We talk about Sam Altman a lot. At this point he has a Hollywood movie in post-production, a book ("The Optimist"), and a seemingly endless stream of profiles. It feels intellectually lazy to keep researching the same guy when the industry is moving beyond him.

All evidence today suggests Anthropic is passing OpenAI in relative and absolute growth. So where's the critical reporting? The DOD coverage was framed around the Pentagon's decisions, not Anthropic's. And nobody seems interested in examining whether the company that branded itself as the ethical AI lab actually is one. That seems like a story worth writing.

[−] tbagman 39d ago
Wonderful work and writing, Ronan -- I'm appreciative of your careful balance between objective fact-finding and synthesis.

For me, a big worry about AI is in its potential to further ease distorting or fabricating truth, while simultaneously reducing people's "load-bearing" intellectual skills in assessing what is true or trustworthy or good. You must be in the middle of this storm, given your profession and the investigations like this that you pursue.

Do you see a path through this?

[−] aragonite 39d ago
I had a question about reporting conventions. In the paragraph where Altman is said to have told Murati that his allies were "going all out" to damage her reputation, the claim is attributed to "someone with knowledge of the conversation" but the attribution is tucked inconspicuously into the middle of the sentence (rather than say leading upfront ("According to someone with knowledge of the conversation, Altman...")) and Altman's non-recollection appears only parenthetically.

As a reader, am I supposed to infer anything about evidentiary weight from these stylistic choices? When a single anonymous source's testimony is presented in a "declarative" narrative style like here (with the attribution in a less prominent position), should we read that as reflecting high confidence on your end (perhaps from additional corroboration not fully spelled out)? And does the fact that Altman’s non-recollection appears in parentheses carry any epistemic signal (e.g. that you assign it less evidentiary weight)? Or is that mostly a matter of (say) prose rhythm?

[−] tsunamifury 39d ago
I know why the cantilevered pool statement is there and why you mentioned it.

I’m sure you don’t know half of the totally fucked up things Sam did to get “revenge” for the slight of a leaking pool.

[−] rupi 38d ago
Ronan Farrow, the writer of this article, made a comment in this thread that is buried in all the comments: "As is always the case with incredibly precise and rigorously fact-checked reporting like this, where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long, with full deliberation about each sentence), there is more out there on that subject than is explicitly on the page."

I saw that before I read the article and it made me read the article in a very different way than I normally do. As I was reading, I found myself thinking, "Why is it worded that way? What else is the writer trying to say, or not say?"

It made reading this a lot more interactive than I normally associate with passive reading. Great job, Ronan!

[−] laylower 38d ago
Reading this makes me even happier to pay for Anthropic.

Amodei and his sister saw through the behavior and called it out.

" “Eighty per cent of the charter was just betrayed,” Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei read it aloud, pointing to the text, and ultimately forced another colleague to confirm its existence to Altman directly. (Altman doesn’t remember this.) Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied. (Altman said that this was not quite his recollection, and that he had accused the Amodeis only of “political behavior.”) In 2020, Amodei, Daniela, and other colleagues left to found Anthropic, which is now one of OpenAI’s chief rivals."

[−] arionhardison 39d ago
Hi @ronanfarrow — I have only had one interaction with Sam Altman in person, and I was advised to keep it to myself. I know this crowd may not care, but Altman is absolutely terrified of Black people — not in any contextual sense, but in a visceral, instinctive way. For someone who, as you put it, "controls our future," this should matter.

FYI: I am by far not the only one to have experienced this and it 100% impacts hiring and other decisions at OpenAI.

[−] andrewrn 39d ago
“By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice”

You can subtly see residue of this frustration in Dalton and Michael’s videos when Sam Altman comes up. It’s only thinly veiled that Sam was a snake while at YC.

[−] jablongo 39d ago
For me, the attempted productization of Sora was conclusive proof that 1) OAI was overcapitalized and desperate for revenue 2) safety didn't matter to them much 3) improving the world didn't matter much either.

At one point you mentioned an interaction with OpenAI staff where you were looking to interview AI safety researchers. You were rebuffed because "existential safety isn't a thing". Does this mean that you could find no evidence of an AI safety team at OAI after Jan Leike left? If you look at job postings, it does seem like they have significant safety staff...

[−] kmfrk 39d ago
Gobsmacking details about Altman's time as Y Combinator president, in case anyone's wondering.

Fantastic reporting.

[−] stavros 39d ago
I found it very interesting that Altman et al. were worried that AI would become supremely intelligent and that China would make a supervirus or some AI drones or whatnot, but not a single person was worried about destroying all jobs because we wouldn't need humans any more.

Or maybe they were not so much "worried" but "hopeful" that they'd amass literally all the wealth in the world.

[−] neonate 39d ago
[−] krackers 39d ago
[1] is also good to read as a follow-up, and compare the personalities

https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai...

[−] thrwaway55 39d ago
We need only ask the dead. Aaron Swartz knew what Altman is. The answer to the topic is no.
[−] snakeboy 38d ago
I usually use free archived versions to read mainstream journalism pieces. Seeing this convinced me to subscribe. I've always loved The New Yorker, and am happy to support serious longform journalism (and I know that Ronan is one of the best).

However, it's a shame that the only way to subscribe to the print version is to pay $260 upfront for the yearly subscription. Meanwhile the digital version is $1/week ($52 upfront) for one year, or even just $10 for one month.

[−] vlovich123 39d ago

> Chesky stayed in contact with the tech journalist Kara Swisher, relaying criticism of the board.

Ronan interesting writing as always. I’m curious if the role of the media as a pawn of the rich and powerful to sway perception and build narratives concerns you, especially given your personal experiences with this and the reporting you’ve done. Are there reforms you think reporters and/or news organizations should adopt to make sure access doesn’t become direct or indirect manipulation and how do you fight against that in your own reporting?

[−] swingboy 39d ago
It's really interesting reading about how these folks view LLMs. Yeah, they're transformative, but I don't know that we're going to be eating ramen in a Neo-Tokyo street bar anytime soon. So much "A.G.I" mentioned in the article.
[−] strgrd 38d ago
I remember reading these direct quotes from SA in 2016 from the New Yorker and thinking, yeah, this guy is just miserable:

> “Well, I like racing cars. I have five, including two McLarens and an old Tesla. I like flying rented planes all over California. Oh, and one odd one—I prep for survival. My problem is that when my friends get drunk they talk about the ways the world will end. After a Dutch lab modified the H5N1 bird-flu virus, five years ago, making it super contagious, the chance of a lethal synthetic virus being released in the next twenty years became, well, nonzero. The other most popular scenarios would be A.I. that attacks us and nations fighting with nukes over scarce resources. I try not to think about it too much, but I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”

> "If you believe that all human lives are equally valuable, and you also believe that 99.5 per cent of lives will take place in the future, we should spend all our time thinking about the future. But I do care much more about my family and friends.”

> "The thing most people get wrong is that if labor costs go to zero... The cost of a great life comes way down. If we get fusion to work and electricity is free, then transportation is substantially cheaper, and the cost of electricity flows through to water and food. People pay a lot for a great education now, but you can become expert level on most things by looking at your phone. So, if an American family of four now requires seventy thousand dollars to be happy, which is the number you most often hear, then in ten to twenty years it could be an order of magnitude cheaper, with an error factor of 2x. Excluding the cost of housing, thirty-five hundred to fourteen thousand dollars could be all a family needs to enjoy a really good life.”

> "...we’re going to have unlimited wealth and a huge amount of job displacement, so basic income really makes sense. Plus, the stipend will free up that one person in a million who can create the next Apple.”

[−] ainch 39d ago
Great piece. And a good excuse to read up on the use of diaeresis in English (eg. coördination, reëlection) to distinguish repeated vowels - I hadn't seen the New Yorker's usage before.
[−] 6Az4Mj4D 39d ago
I am in my 40s and going to be made redundant this June. In the future, only people who can afford to keep things like Claude and OpenAI, and, most importantly, can create more value with them than others can, will be able to survive. Otherwise the game is more or less over, and I question what's next for my own future while I learn to use Claude out of FOMO. I cannot trust Sam or the others to have any interest in keeping this tech affordable for common people like me.
[−] dmitrygr 39d ago
The number of "Altman doesn’t remember this" or "Altman denies this" is hilarious
[−] just_once 39d ago
Amazing that this article and an actual comment from Ronan Farrow is this far down the list while...Scientists Figured Out How Eels Reproduce (2022) has 6 times the points.
[−] morleytj 39d ago
Wow, this is an incredibly detailed piece. Really in depth reporting and the kind of detailed investigation we need more of on important topics like this.

> "Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence."

This is a very small detail, but an instinctive grimace crosses my face at the thought of these sort of Marvel references and I'm not entirely sure why.

[−] wk_end 39d ago
This anecdote is so absurd it sounds like satire. This is the guy with the $23M mansion?

> Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied.

[−] throw4847285 39d ago
A new Ronan Farrow piece is a rare gift (and Marantz is no slouch). Can't wait to read this in the physical magazine when it arrives!
[−] ambicapter 39d ago
I didn't have the mental energy to read the whole thing but man the final paragraph is some really good writing. Way to tie it all in together.
[−] HardwareLust 39d ago
Of course he cannot be trusted. Anyone whose motivation is based on greed is by nature untrustworthy.
[−] innocenttop 39d ago
Why is the story so downranked? Do folks at Hacker News have something to do with it?
[−] kazinator 38d ago
Altman cannot control anything because he doesn't have any secret sauce. Everything he has has been replicated by others.

He doesn't have anything comparable to, say, the operating system platform dominance of Microsoft Windows, or service platform dominance of YouTube.

The entire value proposition of OpenAI is that billions of people don't know that anything other than ChatGPT exists, which is rather tenuous and volatile.

[−] bootload 39d ago
“By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice”

This statement rings true.

JL, as PG has often mentioned, is his way of testing the “people”/integrity aspect of YC and its startups. It's not lost on me that Altman and Thiel, both associated with YC, were useful only in the short term, highlighting how regular “character” evaluations are required at higher levels of responsibility.

[−] steve_adams_86 39d ago

> Amodei, in one of his early notes, recalled pressing Brockman on his priorities and Brockman replying that he wanted “money and power.” Brockman disputes this. His diary entries from this time suggest conflicting instincts. One reads, “Happy to not become rich on this, so long as no one else is.” In another, he asks, “So what do I really want?” Among his answers is “Financially what will take me to $1B.”

I can't imagine having such uninspired thoughts and actually writing them down while in a role of such diverse and worthwhile opportunities. I'd like to ask "how the hell do these people find themselves in these positions", but I think the answer is literally what he wrote in his diary. What a boring answer. We need to filter these people out at every turn, but instead they're elevated to the highest peaks of power.

[−] slg 39d ago
One thing that stands out when reading profiles like this is the number of positive and negative descriptions of the subject that agree. For example, there seems to be little dispute that Altman will happily say something that he knows/believes isn't true, there's just a lot of people who are willing to forgive any lies if the lies are in service of something they themselves agree with.
[−] wolvoleo 38d ago
[−] einrealist 39d ago
I don't trust anyone who claims that LLMs today are superhumanly intelligent. All they do is perform compute-intensive brute-force attacks on the problem/solution space and call it 'reasoning', all while subsidising the real costs to capture the market. So much SciFi BS and extrapolation about a technology that is useful if adopted with care.

This technology needs to become a commodity to destroy this aggregation of power between a few organizations with untrustworthy incentives and leadership.

[−] jesterson 39d ago
Watch Altman's reaction in the Tucker Carlson interview to the question about the (alleged) murder of OpenAI researcher Suchir Balaji.

The overall response, and particularly the body language, speaks volumes.

[−] ernsheong 38d ago
I bet Satya Nadella is regretting defending Altman now.
[−] almostdeadguy 39d ago
Seems this got buried from the front page very quickly
[−] pharos92 39d ago
We focus these critiques far too much on the face rather than the underlying mechanics. Just like in politics, we critique the personality or politician while the underlying system architecture evades scrutiny.

Sam Altman clearly has a long history of nefarious activity. But the underlying threat posed by AI to society, the economy, and human freedom persists with or without his presence.

[−] shevy-java 38d ago
I don't trust him. He already made statements that convinced me I don't want to touch anything he controls. In a way it is similar to Meta and co. For some reason the US corporations behave very suspiciously once past a certain threshold size. With Win11 from Microsoft I always wonder whether there is a not so hidden subagenda in place.
[−] ycui1986 39d ago
He won't. If anything, OpenAI has been falling behind recently, and the trend won't change easily. It's like Netscape in the old days.
[−] lenerdenator 39d ago
If you are asking if a single human can be trusted with such a responsibility, the answer is, by default, no.