Lots of critiques here! Something missing in this discussion is people asking _why_ it is that they're doing this. The people who work there aren't stupid!
I think this is a disconnect between people who think that large companies are static entities with established products vs. large companies that still operate like a startup and are trying to grow. When you're building your business from $0 in revenue, you don't know what will work! You try different things, you [launch over and over again](https://www.ycombinator.com/library/6i-how-to-launch-again-a...)...all in hopes of something that works, sticks, and starts to grow.
In every example here, I see OpenAI trying something new, hoping it will grow, and shutting it down when it doesn't. Sora is the preeminent example of this. The shutdowns make news, but you don't hear about the things they launch that successfully grow!
OpenAI isn't shutting down Codex or ChatGPT, because those were launches that they did that actually worked! When you go look at the tweets and communication from OpenAI employees when ChatGPT launched, nobody was sure that it would work. But it did. And if they hadn't launched, we would have never known how valuable it was.
All that is to say...you don't know what will work until you launch. Most things fail, and it's correct to shut them down. But focusing on the products that haven't worked instead of the products that have gets you more clicks, but actually depresses innovation by making future launches less likely.
People are much more willing to give the benefit of the doubt on things like that when the flagbearers of your industry aren't running around sucking all of the oxygen out of the system and telling people things are "solved": that your product will obsolete them in the next 6-12 months.
We get it. They say that stuff to raise money, make sales and keep the party going. But don't expect too much sympathy when the strategy falters a bit.
Sora was reportedly losing $15M a day and ran for at least 3 months, so that's roughly $1.3 billion total. That's a pretty expensive experiment. It sounds like a company with lots of VC money to burn and no discipline. Even Jensen Huang accused them of a lack of discipline in their business approach.
Yeah. $1.3B isn't scrappy startup pivots, it's the sort of money Google/Meta/Microsoft/Yahoo/Salesforce burn on strategic acquisitions. And those entities absolutely get and deserve the sneers when they "sunset" the product 6-18 months later having concluded that it wasn't even showing enough signs of being a market they should bother with keeping the lights on.
At least Sora was novel and technically impressive, I guess.
Ah, but when you're little, if each misstep annoys a few early users who hit a dead-end with your project, the longer-term reputational damage is trivial. You've still got 99.999% of the TAM (total addressable market) that is ready to be charmed by something new, with no negative vibes in their mind.
As you get bigger, serious numbers of people get annoyed at dealing with a company that keeps inviting us into the Roach Motel of doomed products and features. Big case in point was Google's spree, a few years back, in terms of launching big new services/features that soon afterward got shut down. Great training ground for ambitious PMs; miserable user experience.
Somewhere between the death of Google+ and the demise of Google Hangouts, even folks like me began thinking: Why should I engage with new Google stuff if it's likely to be blown up in a few years, leaving me with buried IP from whatever I tried to do?
> When you're building your business from $0 in revenue, you don't know what will work!
When you announce a post money valuation of $852 billion, you should probably be a bit better at figuring out what works, though. You're not a scrappy startup any more, even if you like cosplaying as one.
I'm not sure your criticism is quite fair. I think everyone here is willing to cut more slack to the underdog. But when your company represents an outsized chunk of the digital economy and employs 10k+ people, and only then says "sooo, let's try to build some sort of a profitable product here", I can see why people are rolling their eyes.
OpenAI also burned a lot of goodwill by pretending to be a nonprofit foundation focused on the betterment of mankind and then executing one of the most spectacular rugpulls in modern history. So yeah, people will be giving them a hard time even if it turns out that the valuation is justified.
Companies with non-stupid people can still do stupid things.
I think the issue with the experimentation is that they still don't have an obvious golden goose yet. Google has been able to fuck around with experiments because search/ads are always still there to carry the team and provide an infinite money spigot, even if the experiments mostly fail. But OpenAI doesn't really have an equivalent for that.
> Something missing in this discussion is people asking _why_ it is that they're doing this. The people who work there aren't stupid!
They have infinite amounts of investor money to burn and no obvious way forward. TFA's line about "spaghetti at the wall" pretty much summed up what happens in that situation.
And in terms of "the people who work there aren't stupid", you can have technically talented people who are very good at their specific thing and hopeless at anything else; a friend of mine once summed it up as "the dumbest smart people I ever met". This is why you need skilled management to let them do their thing but also steer them in the right direction as they're doing it. From the descriptions of OpenAI, it's kinda rudderless apart from the one-man hype machine at the top.
They very nearly gave Elon Musk a controlling interest in the company. Their justification for not doing so was entirely vibes-based. "Stupid" is a broad categorization; someone can be smart in some areas and do dumb things. You shouldn't let your personal appraisal of someone's talent color the actual results they produce.
They’ve lost a whole lot of people in prominent roles over the past few years. I wonder how many of the misfires, and how much of the general thrash in product direction, are the result of brain drain and/or so many hands changing. Or maybe I’m confusing cause and effect… hard to tell.
OpenAI is growing fast, so pivoting is only to be expected. It's the kind of thing HN folks would normally value and encourage in startups.
They have clearly been lacking focus and now finally they seem to be working towards a narrower direction, which is usually highly valued by investors.
This article doesn't explore the depth of the decisions and only regurgitates what you might find your neighbor complaining about on X, but with a better stylesheet.
This is important context in the wake of yesterday’s “raise” announcement. A lot of this stuff seems to just quietly never happen once the ink on the PR puff dries.
The AI industry increasingly looks to be in scramble mode, trying to keep the hype going as the storm clouds of financial and business reality get darker and darker on the horizon.
The pattern with most of these is they optimized for the demo, not the sustained interaction. Making an AI impressive for 5 minutes is easy. Making it feel like a presence that belongs in your daily life, knowing when to talk, what to remember, what to forget. That's a completely different engineering problem.
Has there ever been a period of time where people saw a bubble coming and knew we were in one, but it just inexorably refused to pop / dragged out this long? This isn’t a rhetorical question; I’m wondering how this period compares to other irrational periods of the economy, like railroad fever, etc.
What they really should focus on is making those models more efficient. With them most likely losing money on inference (+model training + salaries + building data centers), I can't see why they would want more compute and more products, since more tokens spent is actually bad for them.
My guess is Sam Altman is a better VC than CEO. Better at hype, networking, fund raising, and back room political hijinks than shipping a focused product
He seems to be trying to take almost a "venture studio" approach by throwing shit at the wall, but the problem with these things is always that the "internal startups" are "founded" by people who don't have enough incentive or control over their product to perform as well as an actual startup, and are distracted by internal politics. And frankly, it may also be that the really good founders will just do their own startup vs working on a quasi-startup inside a large org so there's some selection bias as well.
The Stargate, Nvidia, and AMD deals are all linked together, and the fallout is not public. Nvidia and AMD stock seems not to care about it at all. Oracle fired 30,000 employees; not sure if that’s to fund the initiative or a fallout of it.
I'm not an OAI fanboy by a longshot - but I'd view lots of experiments that didn't work out as a healthy thing, especially for a company trying to find footing in a new industry.
"Disney’s then-CEO Bob Iger... was sold on Sora, too. He lauded Altman’s ability to “look around corners”..."
WTF is that supposed to mean? I'm sorry, maybe I'm being dense. I can't figure out what "look around corners" is supposed to mean. "Think outside the box," I guess? Why "look around corners?"
I mean, maybe I do get it. Altman has a weird face that looks like you can't predict where his eyes are based on where his head is. "Shifty," one might say. But I doubt that's what Iger meant.
It's dumb. It's dumb corporate speak. I'm so sick of this kind of stuff getting a pass. We used to bully people over using the word "synergy." Let's make america anti-corporate-weasel again.
I think the VC/investor community needs to take A LOT of blame here. They've created an insane rush to financialize everything and send it to the moon at the drop of a hat.
I asked an LLM today to take a word document and a PowerPoint template to “convert” the docx to a ppt. It asked some good questions like “it looks like this template is a title slide and content slide only, would you like all the content on the content slide?”
“Yes” I said, slightly impressed. It then asked me to clarify the subheadings, which were correct.
“Cool, this seems neat” I mused.
It generated the PowerPoint. There was not a single word on it from the docx, the header slide was devoid of words, and the 6 following content slides were identical and empty of words.
I suppose it’s cool that it used the correct template.
I did the work manually in 10 minutes after waiting around and responding to an LLM for 15.
What a fucking joke. “AI” is a term that we really need to stop fucking using.
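For what it's worth, the deterministic half of that task doesn't need an LLM at all: a .docx is just a zip archive whose word/document.xml holds the paragraphs. A minimal stdlib-only sketch (using a toy WordprocessingML snippet in place of a real document.xml; the `outline` helper and sample text are mine, for illustration) that extracts the (style, text) pairs you'd then map onto title and content slides:

```python
import xml.etree.ElementTree as ET

# WordprocessingML namespace, as ElementTree expects it in tag names.
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

# Toy stand-in for word/document.xml inside a .docx.
SAMPLE = """<w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">
  <w:body>
    <w:p><w:pPr><w:pStyle w:val="Heading1"/></w:pPr><w:r><w:t>Q3 Results</w:t></w:r></w:p>
    <w:p><w:r><w:t>Revenue grew 12%.</w:t></w:r></w:p>
  </w:body>
</w:document>"""

def outline(document_xml: str):
    """Return (style, text) pairs for each paragraph — the raw material
    for mapping headings to title slides and body text to content slides."""
    root = ET.fromstring(document_xml)
    pairs = []
    for p in root.iter(W + "p"):
        style_el = p.find(f"./{W}pPr/{W}pStyle")
        style = style_el.get(W + "val") if style_el is not None else "Normal"
        # Paragraph text is split across <w:t> runs; join them.
        text = "".join(t.text or "" for t in p.iter(W + "t"))
        pairs.append((style, text))
    return pairs
```

On a real file you'd feed it `zipfile.ZipFile(path).read("word/document.xml")` and hand the pairs to whatever fills your template, which is exactly the step the LLM dropped on the floor.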
Before he left, I used to enjoy enraging a manager several layers above me. In one instance I explained that asking us to cut a few corners to get things done was fine; usually we can figure out acceptable ways of doing it. But then it is your job to take those fake numbers and figure out how we are actually doing. No matter how much effort you make, if bullshit goes in, you know what will come out.
Now imagine an entire economy working like that. Say LLMs are good enough to run entire companies, but you don't get to run a company because you are good at it. LLMs can perfectly manage employee schedules, but the real job is more like marriage counseling or group therapy. Somewhere along the road we forgot which jobs make the economy go. They are probably the ones with the lowest salaries, as those lack the effort of conjuring the job into existence.
Humanity needs obvious things: clothes, food, housing, transportation, etc. But that isn't where the money is. The people cooking the books have the money, and they are looking for something like a book-cooking book. The market for OpenAI will be in lying convincingly for the benefit of the investor. Reality must be auctioned off like domain names or search-engine placements. Altman is really the perfect guy for the job no one wants. ha-ha
Alternatively we could humble ourselves, ask the Chinese how reality works and attempt to steal their fu. It's just a thought.
I’d previously heard Sora was losing $1M a day, not $15M.
I was disappointed Google killed Reader but I pivoted. Otherwise, Google's reputation for me is fine-ish.
> GPT-4o
Why is this on the list? Like... what? How about including GPT-3.5 and GPT-2 here too?
I mean, even Andreessen Horowitz was taking NFTs seriously, as though they weren't a scam, only a few years ago (https://a16z.com/the-nft-starter-pack-tools-for-anyone-to-an...).
These people are also looking at (and funding) quantum computing companies as though quantum computing is right around the corner after AGI.
They need to cool their jets. AI is certainly a worthwhile and super important development, but it's still possible to go overboard with it.