I don’t know how OpenAI screwed this up. They had the best tech, the largest installed base, the best brand recognition.
And somehow instead of prosecuting the lead in all areas, they got all hubristic and sloppy and just failed to iterate on the core product, while also failing to respond quickly when Anthropic showed that coding agents are the flywheel that makes the whole company faster.
It’s like they thought they had an unassailable monopoly and speedran to the lazy incumbent position, all in a matter of months.
Anecdotally, I would actually argue the opposite: Anthropic is overrated, ass-kissed way too much here for mediocre coding abilities (especially for Elixir). ChatGPT most of the time one-shots complex solutions in comparison. The only reason people shit on OpenAI so much is the defence deal, but it's not like Anthropic is a saint either:
https://www.cnbc.com/2026/02/12/anthropic-gives-20-million-t...
I’m so confused. Your link shows they are pushing for guardrails; what is bad about that? It is consistent with Anthropic’s safety-first principles, and with what Dario has written and talked about for the past decade or so. Could you be more direct with your criticism? Otherwise it’s hard to engage.
Claude Code is IMO the benchmark today. For all of the various contexts I’ve used it in, it has mostly one-shot the tasks I’ve given it, and it is very user-friendly for someone who is not a professional software engineer. To the extent it fails, I can usually figure out quickly why and correct it at a high level.
I think Codex is a better fit for professional software engineers. It's able to one-shot larger, more complex tasks than Claude and also does better context management which is really important in a large codebase.
On the other hand, I think Claude is more friendly/readable and also still better at producing out-of-the-box nice looking frontend.
I think this is where we might have differing opinions. I'm a CTO by profession and I know what bad code is, so it is quite easy for me, based on my professional experience, to point out when Claude generates bad code. And when you point it out, or ask why it didn't take the correct/simpler approach, the response is always along the lines of "Oops, sorry!" or "You're absolutely right to question that..."
Yeah my CTO says similar things. I usually just tell him to add it to the backlog and move on. At the end of the day these tools save us 3-4 eng hires and that’s what the board cares about
Why pick elixir specifically here? I’m using opus/sonnet via Claude code for a moderately complex personal project built on phoenix and have had a good experience
Yeah, I've been building a fairly complex app with Claude and it has been great. Backend stack is a Go service, with TS front end and a solver running or-tools in Python.
I do think I do a good job of being very structured about breaking down my requirements and acceptance criteria (thanks to dual lives as a DevOps/SRE guy and then a PM): extensive unit testing, discipline in the use of sessions and memories, and asking it to think of questions it should be asking me before even formulating a plan.
Claude is good; I'm definitely not saying it's bad. But if you work with LiveView, it will tend to choose complexity over simplicity. Weirdly enough, I have a feeling it's trained more on Python/Ruby-style (object-oriented) code than functional code, so it tries to get things done not so functionally.
Strange take, I wasn’t lionizing Anthropic at all, and certainly wasn’t being preachy or moralistic.
I shit on OpenAI because they had a completely clear path to being dominant in consumer and enterprise, in chatbot and coding agent, and in text, image, and video. And somehow they screwed it up by being unfocused, unnecessarily slimy, drama-ridden, and slow to react to the market.
OpenAI has a lot of good tech and some good product. They may yet succeed. But they embody the lazy hare partying with celebs while the tortoises pull ahead in the race.
no i cant. chatgpt is a mobile app/website, not a model or agentic harness. if you are confusing these things then sadly you have no idea whats going on.
Sam lost the plot for me. He took too many interviews, which led me to not trust him. The last straw came with him standing by Anthropic one day then throwing them under the bus the next. He showed little awareness of why that is problematic.
That's why I changed as well. It really irritated me how Altman tried to get social credit for having principles, only to change them the moment it was convenient.
> Sam lost the plot for me. He took too many interviews which led me to not trust him. Last straw came with him standing by Anthropic one day then throwing them under the bus the next. He showed little awareness on why that is problematic.
It should have become clear to all that he was an untrustworthy person when he was fired from OpenAI by its then-board. My understanding is their complaint was he was lying, untrustworthy, and manipulative; and enough stories came out at the time to confirm that.
It's clearly because they didn't hire me after I applied :)
In all seriousness, I use Codex for work and Claude at home, and I feel like nowadays they're actually pretty competitive with each other. I don't know that it's that far behind.
I agree that they clearly, and erroneously, assumed that no one would be able to catch up with them, though. OpenAI had such a head start that it should have been a moat by itself.
Aside from the fabricated drama and the trend chasing, OpenAI still has the best overall model and API service. Anthropic is really good, no doubt. But gpt-5.4 is a better model than even Opus, even if it's a marginal advantage. I use both.
Coding assistants won't win this game. They sure will win the hearts of developers, but to scale you need mass adoption and products for which users want to pay substantially. OpenAI is falling behind in the small features in their chat and app offering and have failed to innovate in their expensive offerings.
Codex btw is getting very competitive. It is fast and no longer far behind.
Classic SV hubris. Talk to OpenAI people and they’re so convinced they’re untouchable, they don’t bother worrying about things like revenue, or product strategy. All they cared about was being the first to AGI. Well it looks like that isn’t happening soon enough. And now they have zero moat except brand recognition, which is quickly getting eroded.
Congrats to OpenAI for having this fake "beef" with itself through Anthropic, but Google is still going to win the most users. Users aren't so stupid that they will fall for this kind of obvious psychological manipulation in perpetuity.
Despite what the folks here like to believe about themselves, I think the reality is we are as attuned to what is in fashion and on trend as everyone else, just about different stuff. Last year it was ChatGPT, this year Claude is the new hotness. Things move so fast we barely have time to form our own opinions, so we fall back on what we read or hear from others. In 12 months, who knows what it will be... Gemini? ¯\_(ツ)_/¯
Long term, my feeling is Anthropic's focus on enterprise is the most obviously lucrative but also least defensible application of LLMs. If (more likely when) open source models reach the point of being "good enough" then it's a race to the bottom on pricing. Maybe it will be like AWS vs GCP et al, but I kinda doubt it.
Investors do not care about the product, the users, etc. They care about cash. There are lots of ways to make cash that don't involve having a good product. But if you commit to spending a trillion dollars on hardware, then borrow hundreds of billions in the short term, and it turns out there's no way to recoup the cost, the investors go looking for better returns. This would've worked back in the old days of a bull market, angels looking for the next whale (with "modest" $5BN investments), and startups with no rivals. But in a bear market with multiple competitors trading on a commodity? Lol. Finally the bubble bursts.
Anthropic is not meaningfully better. Their stance is “the good guys have to make money to be in the fight with the bad guys” and so they do all the things their perceived bad guys do. I don’t know how they can do any different, but we just trust them to be good? What is the difference?
> The large gap between OpenAI’s $852-billion valuation and Anthropic’s $380 billion has investors rushing to grab equity in the latter before it rises, according to Augment co-founder Adam Crawley.
Interesting, so there are a lot of people still eager to invest at valuations well above a quarter-trillion, but OpenAI's latest raise has sucked up all the oxygen for enthusiasm about that valuation going even higher.
Which could be a "dumb money" move ("competitor number lower, already-big-number is scary") or a "smart money" move ("Anthropic is gaining position-wise, and currently is lower valued, let's bet on the one we think is better positioned") or some mix of both.
OpenAI just raised a shit-ton so clearly there is plenty of money out there who don't think there's a bubble or even a blown opportunity there. But the wider community doesn't think they have the competition in the bag, while still being willing to invest in big-AI-cos at absolutely enormous valuations.
If local hardware/models get good enough to take 80%-90% of what people use subscriptions for today... hoo boy. Big-AI is a bet I wouldn't be confident placing billions on. Unless your horizon is more "wait for IPO or next raise or positive news, then get out ASAP" than "hold for 5+ years."
Both of these valuations are absolutely absurd. I guess Anthropic looks good in comparison, but I don't want to hold that bag.
The Chinese models are catching up in quality while being a fraction of the price. The market will speak, how many devices that contributed to this thread were made in the USA?
Sure you can argue the Chinese companies are heavily subsidized, but no major LLM lab is remotely close to making a profit this decade.
My loose understanding, as someone adjacent to the AI model space, is that you have good models that are costly and cheap models that are decent. So a lot of the publicly visible fights where Claude and ChatGPT leapfrog each other are really the companies doing a cost-benefit analysis of how much optimization to apply to the models before the user base revolts because the agent "used to be great and now kinda sucks".
As a small business owner whose team is entirely in Google Workspace (Drive, Gmail, Chat -- so built-in RAG right there), I wonder if Gemini will be the dark horse. As a user, Gemini's a distinct third in "AI smarts", but most business owners aren't power users who are going to set up Codex or Claude Code to slurp up their work emails and internal docs/SOPs.
The article feels a touch clickbait-y since people love a good fight between the top players and OAI's lost a buncha public goodwill over the past year.
I switched to Anthropic briefly to try out Claude Code, and it was great (having never tried such a product) and better than their chatbot, which I found prettier and nicer to talk to but far more often wrong.
Then shortly after, Codex was released, which made that more accessible, and I instantly preferred it to Claude Code since the $20 plan has much more generous allotments; Claude constantly kept hitting its limits. Codex also had a more robust UI and was more accurate when doing the same project side by side against Claude Code.
But I imagine many haven't tried Codex recently, since all we hear about is Claude Code. So while Anthropic may have momentum, at least for me, with no stake in the game, I'm finding Codex far better. Though I suppose that could all change again on a whim.
> large gap between OpenAI’s $852-billion valuation and Anthropic’s $380 billion
IIRC Anthropic's revenue is either roughly at parity with or larger than OpenAI's, and Anthropic is growing faster[1]. All indicators are that Anthropic should be worth more than OpenAI. Given that, one could reasonably expect the relative valuations to change a great deal. In any case, it's not clear why OpenAI would command such a price premium over Anthropic.
Anthropic's run rate increased from $12B to $19B in the period between February 12 and the end of the month. If the implied growth rates held through March, Anthropic may well be larger than OpenAI now.
1 - OpenAI says they are doing a $2B/mo run rate https://openai.com/index/accelerating-the-next-phase-ai/
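Back-of-the-envelope, that extrapolation does check out. A quick sketch, using the figures quoted in this thread; the daily-compounding assumption and the ~$24B annualization of OpenAI's "$2B/mo" are my own additions:

```python
# Back-of-the-envelope check of the run-rate extrapolation above.
# Figures are the ones quoted in the thread; daily compounding is an assumption.

feb_start, feb_end = 12e9, 19e9    # Anthropic run rate: Feb 12 -> Feb 28, in $
days_observed = 16                 # Feb 12 through Feb 28
days_forward = 31                  # extrapolate through March 31

# Implied daily growth factor over the observed window
daily_growth = (feb_end / feb_start) ** (1 / days_observed)

# Extrapolated run rate at the end of March, if that growth held
march_estimate = feb_end * daily_growth ** days_forward

openai_run_rate = 2e9 * 12         # OpenAI's stated "$2B/mo", annualized

print(f"implied daily growth: {daily_growth:.4f}")             # ~1.0291
print(f"end-of-March estimate: ${march_estimate / 1e9:.0f}B")  # ~$46B
print(march_estimate > openai_run_rate)                        # True
```

So under the (strong) assumption that the February growth rate held, the comment's "may well be larger than OpenAI now" follows arithmetically; the estimate is of course very sensitive to that assumption.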
Let me know when they scrap the data centers; I'd love to get some good deals on HVAC equipment. These companies can't possibly make enough money when you can run something on your own computer that works almost as well.
Both companies have the same issue: the unit economics are bad, and there’s no moat.
I’ve been primarily on Claude for the last 6 months or so, but have been hitting rate limits. I switched work to Codex seamlessly, just like I could switch to any other provider seamlessly
IMO this says a lot more about investor herd mentality and FOMO than it does about the technological (or any other kind of) merit of Anthropic vs OpenAI.
I am not a finance pro, but isn't it normal and expected for secondary equity markets to be hesitant about large unloads of pre-IPO stock, even when the company in question isn't in hot water like OpenAI is? Doesn't this say more about the investors' own failure to balance their portfolios properly than about OpenAI itself?
The latest Opus routinely tells me the latest GPT Pro responses are much better. But the GPT Pro responses cost at least 10x more, and GPT takes tens of minutes. So unless and until I need, and am ready for, a really expensive “math checker”, it gets left alone.
People are so late to Anthropic, I used Claude 1.3 back when it released, and I've stood by Claude since the early days. I don't think people quite realized the potential of Anthropic
Odd timing... Everything I've read about Claude over the last several days suggests that its users are disappointed, even furious, at what's happened to its performance.
I wonder how much of this is associated with Scam Altman's personal negative PR and Anthropic's recent PR wins.
I'm inclined to think there isn't much of an association, because investors don't seem very concerned with morality. But I know about a dozen developers who either switched to or started using Claude in the past month or so, while not knowing anyone who uses Codex.
> ChatGPT most of the time one-shots complex solutions in comparison
is an intelligible sentence.
Paraphrased: ChatGPT often completes complex solutions in one try whereas Claude does not (or performs less well).
I guess you can’t take me seriously?
On podcasts, Dario's attitude is basically “oh yeah, all of you are basically fucked, our products will take everyone’s jobs in a couple years.”
Altman is a lot more coy and comes across as saying what’s politically expedient at any given point in time.
GPT-5.4 Extra High >> Opus 4.6
What’s interesting is that they’re both still losing money on their models and are essentially giving away compute for free, at a loss.
They’ve bought up computing power and are now renting it back to the rest of us, with a decent HMI, while subsidising us to use it.
ChatGPT's chat quality has recently dropped hard. While Claude is pricier, it actually takes the effort to think through complex tasks.
All the while, Chinese models are providing cheaper alternatives.
“We literally couldn’t find anyone in our pool of hundreds of institutional investors to take these shares”
This doesn’t bode well for an IPO. The market is smelling a stinker.
Get your popcorn ready for a mad scramble to salvage investments if indeed the shark has been jumped.