Elon Musk pushes out more xAI founders as AI coding effort falters (ft.com)

by merksittich 827 comments 521 points
Read article View on HN

827 comments

[−] dang 64d ago
All: please stick to thoughtful, substantive discussion. You may not owe you-know-whom better, but you owe this community better if you're participating in it.

If you don't have a thoughtful, substantive comment to add, not commenting is also a good option. There are quite a few interesting submissions to talk about.

https://news.ycombinator.com/newsguidelines.html

[−] Imnimo 64d ago
I think the problem for xAI is that it can really only hire two types of researchers - people who are philosophically aligned with Elon, and people who are solely money-motivated (not a judgment). But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work, and those philosophies are often completely at odds with Elon. OpenAI and Anthropic have philosophical niches that are much better at attracting the current cream of the crop, and I don't really see how xAI can compete with that.
[−] jazzpush2 64d ago
In an interview with xAI I was literally told that certain parts of the model have to align with Elon, and that Elon can call us and demand anything at any time. No thanks!
[−] jarrettcoggin 64d ago
From my time at Tesla, this is 100% the case. When Elon asked for something, it was “drop what you are doing and deliver it”, then you got pressed to still deliver the thing you were already working on against the original timeline before the interrupt.
[−] dgxyz 64d ago
Oh I worked at one of them.

I found the best thing to do was to ignore the interrupts and carry on until they kick you on the street. Then watch from a safe distance as all the stuff you were holding together shits the bed.

[−] jarrettcoggin 64d ago
Definitely one approach to the circumstances. I tried some variation of this and it blew up in my face (as I expected).

Towards the end of my time there, a “fixer” was brought in to shore up the team that I was working on. The “fixer” also became my manager when they were brought on.

The “fixer” proceeded to fire 70+% of the team over the course of 6-8 months and install a bunch of yes people, in addition to wasting about $2,000,000 on a subscription to rebuild our core product with a framework product no one on the team knew. I was told to deploy said framework product on top of Kubernetes (which not a single person on my team had any experience with) while delivering on other in-flight projects. I ignored the whole thing.

I ended up deciding I was done with Tesla and went into a regularly scheduled 1:1 with my manager (the “fixer”) with a written two-weeks notice in hand, only to be fired (with 6-weeks severance, thankfully) before I was able to say anything about giving notice.

One of the best ways to get fired in my opinion.

[−] pm90 64d ago
Out of curiosity, it sounds like you're the kind of person that could easily find another job. Why slog it out until the end rather than quit/find a better gig? Genuinely interested because every time I've ended up with a manager like that my mental health has suffered so now I generally start planning my exit as soon as I'm stuck with a bad manager.
[−] hananova 64d ago
Ethically, if you do not agree with the company you work at, the optimal course of action if you can stomach it is to stay and do a bad job rather than get replaced by someone who might do a good job.

I have been in such a situation before, and while I was not able to coast along until the company went under, the time delta between me getting fired and the company going under was measured in weeks.

In hindsight I'd probably not do it again, it was hugely mentally taxing, and knowingly performing work in such a way that it provides negative value to the company (remember, the goal is to make it go under) is in my experience actually harder than just doing a good job... Especially if being covert is a goal.

[−] jkubicek 64d ago
Have you read the CIA’s Simple Sabotage Field Manual?

https://www.cia.gov/static/5c875f3ec660e092cf893f60b4a288df/...

[−] MengerSponge 64d ago
I've seen it, but I think it's got some places that it would benefit from more clarity. Can we put together a committee to improve and protect our processes from it? We could call it a task force if that's easier to sell to management.
[−] kevin_thibedeau 62d ago
This demands a tiger team.
[−] malikolivier 64d ago
I did not know the existence of this manual. It was a very interesting read! Especially after page 28 (General Interference with Organizations and Production).
[−] jstummbillig 64d ago

> Ethically, if you do not agree with the company you work at, the optimal course of action if you can stomach it is to stay and do a bad job rather than get replaced by someone who might do a good job.

What...? In what way is it anything other than highly unethical to sabotage someone you have a contract with, because you disagree with them?

[−] 47282847 64d ago
Plenty of historical examples of work environments where sabotage would have been the most ethical thing to do (and often you will only know in hindsight). But yeah in most circumstances a simple disagreement doesn’t warrant the psychological cost of such sabotage.
[−] tremon 63d ago
> the psychological cost of such sabotage

Of course. One always needs to weigh it against the psychological cost of complying with unethical directions.

[−] jstummbillig 63d ago
What do you mean...? Plenty to do what?

Your opinion of the situation is not enough to justify this course of action in 99.99% of cases and the residual 0.01% should not be enough to fuel your ego to do anything other than quit decently, and look for an employer that is more aligned with whatever your ideals are.

I repeat the insane statement that we are arguing over here: "Ethically, if you do not agree with the company you work at, the optimal course of action if you can stomach it is to stay and do a bad job rather than get replaced by someone who might do a good job."

This says: ANY company you work for and disagree with over anything: Don't quit! Sabotage [maybe people are confused about what "do a bad job" means, and that this usually leads to other people getting hurt in some way, directly or indirectly, unless your job is entirely inconsequential]. And that's supposed to be ethically optimal.

What the fuck?

[−] Teever 63d ago
"Don't struggle only within the ground rules that the people you're struggling against have laid down." -- Malcolm X

"If you're unhappy with your job you don't strike. You just go in there every day, and do it really half-assed. That's the American way." -- Homer Simpson

"To steal from a brother or sister is evil. To not steal from the institutions that are the pillars of the Pig Empire is equally immoral." -- Abbie Hoffman

Some might consider it unethical but others might also consider it immoral to not do what you're describing.

I guess you're fortunate enough to have only worked at places where your moral framework matched up with their business practices and treatment of the staff.

That isn't the case for most people. Most people are put into situations at one time or another where the people they're working for don't value them as equals, where the people they work for casually violate reasonable laws like product safety or environmental standards, and what's worse, these people will suffer no consequences for doing so.

No White Knight in shining armour is going to come from the government to shut them down. No lightning from heaven will strike them down. No financial penalty to dissuade them from further defection from society and the common man in the game that is life.

So what do you do? Do you do nothing? Just put your nose to the grindstone and keep working for the man? Do you quit, only to end up penniless and jobless, with poor prospects of an alternative, and even if you found one maybe it's 'meet the new boss same as the old boss'?

Nah, you come into work every day and you subtly fuck it up. You subtly fuck it up and you take whatever value you can extract.

They'd do the same to you.

They are doing the same to you.

[−] swiftcoder 63d ago

> In what way is it anything other than highly unethical to sabotage someone

Ethics is more complicated than that. Is it unethical to sabotage your employer if your employer is themselves acting unethically?

[−] freeone3000 63d ago
Have we gotten so lost that “working against your enemies” is no longer something we aspire to do?
[−] cheschire 63d ago
You’ve seen Schindler’s List, right?
[−] ako 63d ago
Assume you work for e.g., a cigarette company. A company responsible for many deaths by unethically adding highly addictive substances. By sabotaging the company you are making this world a better place. Ethically it's the right thing to do.

Or, assume you're hired by the Nazis to work in concentration camps. Ethically it's the right thing to do to sabotage their gas chambers.

[−] LtWorf 63d ago
Let's say you work for elon musk and are a decent person…
[−] jstummbillig 63d ago
Why would you start to work for Elon Musk if you consider yourself a decent person, but consider him impossible to work for? Have you not heard of Elon Musk beforehand...? Did you let yourself be employed with the specific goal of sabotaging the work, in what must be the least effective (but certainly very lucrative) coup possible?

What is it? Am I to believe this person is a chaotic mastermind? Or a selfish idiot? Or non-existent?

[−] AnimalMuppet 64d ago
Even ethically, this is only true if you think the ethics of the place are so bad that sabotage is warranted. That's not every place that you have ethical problems with.

To do that (and hide it), you have to become a dishonest person yourself. That is ethically destructive to you. So the threshold for doing this should be pretty high.

[−] super256 63d ago
I don't think sabotaging a company just because you don't want to work with a certain framework and deploy it on k8s is a good idea.
[−] lithocarpus 64d ago
Yeah, I could see this being true if there was really _nothing else_ I could possibly be doing with my time that is worthy. But there are a lot of worthy things I could be doing with my time.
[−] lesuorac 64d ago
Ethically perhaps but financially and mentally its surely better to start looking for a new role (at a different company) that is more in alignment with you, no?
[−] d0odk 64d ago
Ethically, if you extend this reasoning, are we not obligated to find a position in the most morally repulsive organization we are aware of, and then coast?
[−] _bent 64d ago
yes, this is called 'effective altruism'
[−] RobRivera 64d ago
I think there is an implied "given the company you joined turns out to not be ethically aligned"
[−] lcnPylGDnU4H9OF 63d ago
One could find a position in the most morally attractive organization they are aware of, and then work really hard.
[−] metalcrow 64d ago
well not coast, the intent is sabotage
[−] Nevermark 63d ago
As they say, two uneth’s make a thical.

I really wouldn’t want to be in this position. But it feels very motivating. It would soothe some difficult memories.

I can see myself putting in a lot of hours.

The willingness to be fired, in both good and bad situations, can be mentally freeing and an operational/political advantage. Many of us fail to push as hard as we optimally could, when we have too much on the line.

[−] jarrettcoggin 64d ago
IMO, this is a good question and deserves a solid answer, so I’ll do my best.

Setting aside the “fixer” for the time being, I really enjoyed the work I did at Tesla. Tesla was the first company that gave me very high levels of autonomy to just own projects and deliver. It also pushed me to take on projects that I had previously wanted to do that I hadn’t been given a chance to work on before.

(Side note: At that point in time in my career, my thinking was that I needed to earn opportunities to work on projects at work to build skills that would enhance my career. I didn’t see the value in working on projects outside of work to build skills because I didn’t think those side-project skills would be valued by other companies the same as “day job” experience. I’ve since learned this isn’t true when it’s done right.)

I spent a lot of time at Tesla delivering value for a bunch of people who desperately needed it at the time, and the thanks I received from them was genuine. It felt very good to help others at Tesla out in a meaningful way, so I kept chugging along to the best of my abilities. Life was throwing lemons at me in my personal dealings, and Tesla was helping me make lemonade from a career standpoint. Besides, all the long work hours were a good distraction from the home life stuff.

In a lot of ways, it was a very fulfilling environment to work in, but it wasn’t for the faint of heart. People often quit within a month or two because the environment was too fast paced with too many projects under tight deadlines and projects quickly followed one after another. An environment like Tesla just doesn’t let up, so one has to figure out how to manage the stress without much support from others. Oftentimes, if you do need to let up at Tesla (or introduce friction in any sort of seemingly non-constructive way), that’s the cue you aren’t working out for the company anymore and it’s time to find someone to replace you.

Coming back around to the original question of why I stuck it out until the end: Just before the “fixer” was brought in, I was “soft promoted” by a director (no title change, but I was given direct reports and a pay bump; the title change was supposed to come a couple of months later, as the soft-promotion happened just before an annual review cycle). The director who soft-promoted me was someone I got along with well, and it seemed like things were going in the right direction in my career at that point. The director was in charge of a couple of projects that went sideways in a very visible way, and Elon basically fired the director after the second project went south, which is why the “fixer” was brought in.

When the “fixer” first took over things, it seemed like I was going to continue on the path that the director had originally laid out for me. The “fixer” said I was going to get more headcount and work on bigger projects, but this never materialized.

I really didn’t like working for the “fixer” after a while. IMO, it was clear they didn’t know what they were doing, they weren’t willing to listen to feedback, and I spent a lot of time trying to provide guidance to the “fixer”, but it wasn’t seen as helpful and I felt like I was spinning gears. My mental health did start to suffer as I got more burned out towards the end of my tenure there.

Eventually, I was tasked with hiring someone to be my manager and I saw the writing on the wall (sort of). I started to look for a new job just in case. At one point, I thought bringing in someone between myself and the “fixer” would be a good thing. I didn’t realize I was actually finding my replacement. Two days after my replacement was hired, I was let go (this was the 1:1 meeting where I was going to turn in my notice, but HR served me papers instead).

To your original point, if I was in a similar situation now, I would be planning my exit immediately instead of trying to make the best of a bad situation, but I had to learn that lesson the hard way.

[−] givemeethekeys 64d ago
In my case at a different firm, I happily gave notice rather than put up with the "fixer", who had been hired by the other "fixer", both of whom were mostly only good at shitting all over the place and driving most of the technical organization out of the company. I got the feeling that was the whole point, so I resigned instead of waiting for my eventual layoff.
[−] candu 63d ago
As someone who now lives and works in Denmark: it's sad that so many of us have been conditioned to think 6 weeks severance is generous.

Here, labor unions are quite widespread, and very effective at negotiating reasonably but firmly. As a result, I can depend on 3 months severance _guaranteed under law_ after 6 months at a job. (After 3 years, it goes up to 4 months, and then from there up to a max of 6 months.)

It puts the responsibility for risk of instability, errors in planning hiring / capacity, etc. firmly where it belongs: with the employer.

(And no, the economic sky is not falling here as a result. Quite the opposite.)

[−] sehansen 60d ago
Welcome to our cozy little country; I hope you're settling in well.

Just out of curiosity: Assuming you're a SW engineer, did you join IDA or Prosa, or did you decide not to join a union? I'd like to gather some more data points to help other engineers moving to Denmark make an informed decision.

[−] RobRivera 64d ago
[flagged]
[−] echelon 64d ago
Why did Tesla work initially? Because they were first to market and people were willing to overlook flaws?

When did it start falling apart?

Why hasn't the same happened to SpaceX? (Gov contracts, too big to fail, national defense, no competition yet, etc.?)

And honestly, why hasn't anyone domestically put up a decent fight against Tesla? Best I can think of is Rivian, and those have their own issues.

[−] gentleman11 64d ago
[flagged]
[−] zimpenfish 64d ago

> When Elon asked for something, it was “drop what you are doing and deliver it”, then you got pressed to still deliver the thing you were already working on against the original timeline before the interrupt.

To be fair, I've experienced that in a good 50% of my employment career[0] and I've not once worked for any of his companies.

[0] Ignoring the "servers are melting" flavour of "drop what you are doing" because that's an understandable kind of interruption if you're a BAU specialist like me.

[−] exe34 64d ago
yeah that wouldn't work for me. when my boss asks me to do something unexpected, I ask, what do you want me to drop this week? if he doesn't want to pick, I ask, so what do you want first?
[−] thordenmark 63d ago
This is the case at every company I've worked at. When the CEO says jump, the response is to jump or pack your stuff. What's special about xAI/Tesla/SpaceX?
[−] ekianjo 63d ago
This is not specific to Tesla. If the CEO wants something done in most companies you follow the CEO's order first and drop everything else.
[−] jesterson 64d ago
I wonder why this is surprising. In other types of organizations, when the CEO demands something, does everyone usually behave like, nah, screw it, I'd rather do what I like? Or does everyone yell "yes sir" and run around?

You may not like Elon - I get it, but let's not pretend he is running xAI/Tesla substantially differently from competitors.

[−] actsasbuffoon 64d ago
I have wondered if that’s why Grok seems so weird and dim-witted compared to better models.

Part of my job involves comparing the behavior of various models. Grok is a deeply weird model. It doesn’t refuse to respond as often as other models, but it feels like it retreats to weird talking points way more often than the others. It feels like a model that has a gun to its head to say what its creators want it to say.

I can’t help but wonder if this is severely deleterious to a model’s ability to reason in general. There are a whole bunch of topics where it seems incapable of being rational, and I suspect that’s incompatible with the goal of having a top-tier model.

[−] alberth 63d ago
Let’s restate this another way:

“In an interview with {COMPANY} I was literally told that ... {COMPANY-OWNER} can call us and demand anything at any time.”

Doesn’t sound so crazy when Elon's name is removed from it.

Note: I’m no Elon fan, but do think sometimes HN overreacts when his name is mentioned.

[−] doctornoble 63d ago
Same. It was a bit less literal in mine, more like “how do you handle situations where key stakeholders and one in particular have certain demands”
[−] bdangubic 64d ago
wild, but not surprising! anything else interesting you can share from that interview?
[−] kvetching 64d ago
I don't see the problem with this. The chatbot is the most important part of Grok, so it makes sense Elon would be dogfooding it and then providing suggestions... He wants it to be truthful... It was shown on benchmarks recently that it hallucinates the least...
[−] bearjaws 64d ago
Feel like the canary was when Grokpedia became a project.

Giant waste of time while Anthropic/OAI keep surging forward.

I also keep hearing this narrative that Twitter is a good data source, but I cannot imagine it's a valuable dataset. Sure keeping up with realtime topics can be useful, but I am not sure how much of a product that is.

[−] Sol- 64d ago
I don't use it myself, but I feel like the way Grok is integrated into Twitter is a pretty good thing for discussions, as it is certainly a more objective and rational voice than most human participants. I think it's good that people tag @grok if they don't understand something or want an opinion, even if it looks pretty silly to see "@grok is this true" repeated multiple times in replies.

That said, Musk's attempts at misaligning the thing and make it prefer his opinions of course destroy any trust. It's surprising that it's seemingly as good and helpful as it is despite the corruption attempts.

I also don't quite get how the business model is supposed to work out if its main usecase is to serve Twitter. I know they provide API access as all other models, but with how distrusted Musk is and how sensitive of a topic reliable model behavior is, they seem to sabotage themselves. Which company wants it to go mechahitler on them?

[−] moogly 64d ago
I feel xAI is just a very big version of the Boring Co. "flamethrower": an unserious endeavor which is just a reskinned existing tool (it was a reskinned weed burner), but people were wowed by it anyway, since Musk was behind it, and they all pretended it was something new and notable.

The burning (heh) question is which SpaceX subsidiary will fail first, xAI or Tesla (not yet a subsidiary, but it's written in the stars (heh))?

Then again SpaceX is also jumping the shark what with their orbital data centers (remember those?).

Might be time to start a new Musk company soon.

[−] twodave 64d ago
Used Grok for the first time, in a Tesla, and for that purpose it actually made a lot of sense. It’s very well-integrated into the car’s systems and communication style while driving tends to be very tweet-esque. I think this is the niche they should lean into more (live assistant, e.g. Jarvis type stuff) and leave the more agentic niche to folks like Anthropic. Maybe even delegate more difficult or background tasks to those sorts of models. As a verbal interface I found it pretty pleasant.
[−] nemothekid 64d ago
While I believe Grok was a decent model (in some of our internal use cases it performed the best until Gemini 2.5-pro came out), I can't help lament how the team chose to run.

xAI (and Twitter) was the loudest about six-hour workdays, sleeping in the office, and always shipping. ~2 years later it feels like they have nothing to show for it. I'm sure the engineers at Google worked 4 days a week, 2 hours a day, with half of that being spent at the Google cafeteria and they dusted xAI years ago.

[−] dang 64d ago
Recent, related, and apparently ahead of the curve:

Ask HN: What Happened to xAI? - https://news.ycombinator.com/item?id=47323236 - March 2026 (6 comments)

[−] numbers_guy 64d ago
Unfortunate. The Grok team built a phenomenal model. I use it all the time and it very often outperforms GPT and Claude on coding and STEM research related tasks. I was part of the Grok 4.2 Beta with multi-agents for a while and it was just amazingly good.

People aren't using it for reasons other than its capabilities. I mean, I don't think my boss would approve a paid Grok subscription for example.

[−] Animats 64d ago
“Orbital space centres and mass drivers on the Moon will be incredible.” - Musk

Right.

The product is the stock. TSLA: [1] Up by 3x in the last two years, despite no new models, the Cybertruck failure, the Robotaxi failure, the large truck failure, and an overall decline in sales. How does he do it?

It's a concern seeing SpaceX, which builds good rockets, drawn into the X and xAI money drains. SpaceX is needed. If X and xAI tanked, nobody would care.

[1] https://www.cnbc.com/quotes/TSLA

[−] xnx 64d ago
xAI's biggest contribution to the space seems to have been their x-rated image/video model. Hard to see what xAI has to offer against Gemini, Claude, ChatGPT.
[−] breve 64d ago

> "AI was not built right first time around, so is being rebuilt from the foundations up"

So Tesla's recent $2 billion investment in xAI was a bad deal?

It looks a lot like a public company is being used to bail out a private one.

[−] pelorat 64d ago
This is veiled speak for "No one wants to work for us, so we need to contact rejected applicants to fill positions".

I use AI for work, but not agentic; at most per method/function using GitHub Copilot (which has Grok on it).

Grok is at best useful for commenting code.

[−] awestroke 64d ago
@grok is this real?

@grok fire the bottom 50% engineers from x.ai ranked by number of commits per day

@grok generate a hypothetical picture of an Elon who is not under the influence of large amounts of Ketamine

I honestly don't know what to expect from Elon these days. But it's rarely good news.

[−] fraywing 64d ago
Grok's UVP is still nonconsensual porn, right?
[−] maplethorpe 64d ago

> Toby Pohlen, a former DeepMind researcher, was put in charge of the “Macrohard” project to build digital agents that Musk said could replicate entire software companies. Musk said it was the “most important” drive at the company. The name is a “funny” reference to Microsoft, the billionaire added. Pohlen left 16 days later.

When I was 9 years old, my uncle asked me what I was going to do for work when I got older. I told him I was going to start a company called "MacroHard", and become the richest man alive. He told me that's not how the world works. Turns out it is.

[−] g947o 64d ago

> Recruiters have been contacting unsuccessful candidates from previous interviews and assessments to offer them jobs, often on better financial terms, the people said.

I'm not sure those candidates would want to work for xAI after seeing the news and everything unless they desperately need a job right now.

It's not hard to imagine getting laid off or fired weeks if not days after joining the company.

[−] heraldgeezer 64d ago
I do use Grok as a chatbot sometimes. Very good for sourcing X and general web search. Not as "prude" as the others too.
[−] mikkupikku 64d ago
Maybe they shouldn't have spent so much time trying to make their model have an edgy cringe attitude, Idk.
[−] Zigurd 64d ago
Obviously catching up to others in agent assisted coding is the motivation for this. But it is also an odd decision in the same way that Meta hiring an AI leader from a data labeling company is odd.
[−] thebigspacefuck 63d ago
Grok 4.20-beta1 scores above GPT-5.4-high and just behind Opus 4.6 on LMArena for Text https://arena.ai/leaderboard

I guess for coding if you’re not first you’re last, but this is damn impressive considering. It looked like they pulled the coding model from the benchmarks, but it was similar.

[−] hermanzegerman 64d ago
The takeover by SpaceX was obviously a bailout. And now they pressure NASDAQ to change the rules so they can dump their junk into the index funds.
[−] gigatexal 64d ago
Maybe SpaceX will buy xAI and Tesla to hide the systemic problems at his companies into the warm embrace of the one legit useful company.
[−] nateburke 64d ago
It feels like xAI is perpetually playing catch-up.

They haven't quite committed enough to a novel direction relative to Anthropic or OAI; what's described in the OP seems symptomatic of a lack of differentiation.

If you spend all your time judging yourself relative to the incumbents, there will be no time left over to innovate.

The leash is too tight!

[−] amai 63d ago
Nobody wants a biased AI like xAI. If I want a biased opinion, I can just ask my neighbor. A good AI feels like the collective knowledge of humanity at my fingertips, not the random ramblings of a lonely old man.
[−] LZ_Khan 64d ago
How come all the departed researchers are Chinese nationals?
[−] Marazan 64d ago
Wow, bit weird that Musk, who must have known how badly xAI was doing, spent so much of his investors' money buying out xAI.

What an enormous blunder.

[−] bmitc 63d ago
Did everyone forget that xAI was just a way for Musk to weasel his way out of debt from his Twitter purchase?
[−] TheAceOfHearts 64d ago
I've been saying this for a while, but if I had to use Grok for anything programming-related I'd feel very sad and unproductive. I was playing around with a local TTS model codebase but having some issues getting it to work, so I tried explaining the problem to all the major models to see how they performed. Grok performed the worst by a significant margin, and the worst part was that it easily became stuck trying minor changes that didn't solve the key problem.

If we are to take any claims of Recursive Self Improvement seriously at all, then having a competent coding model seems like a key asset where you need to guarantee that you're remaining competitive. Why wouldn't you make coding models a top priority if you expect it to ultimately help your internal teams become more productive and effective?

There's also not an unlimited supply of researchers and engineers for them to keep burning through people at the rate at which they've been working. Although I guess for people with short timelines it makes sense to sprint hard, while people with longer timelines are more likely to treat this as a marathon. Maybe the years of burning bridges and developing such a toxic reputation are finally catching up to Elon. I think part of the harm that Elon has done is framing all the work in xAI as engineering while being highly dismissive of research, but a lot of research requires running experiments or thinking about problems and exploring them for long periods of time. If you're just grinding out work nonstop you don't really have time to let your mind wander and explore new ideas.

Honestly, I'm surprised they've done such a terrible job with programming. I remember around summer last year it was quite apparent how far behind they were with coding tools, but Elon was posting about taking that domain a bit more seriously. Why didn't any of those efforts materialize into real outputs? Something must be truly dysfunctional inside of xAI for them not to be shipping anything at all, especially considering Elon's propensity to ship undercooked products while continuing to iterate on them, as he has done in many previous cases.

I've noticed that Elon has also gone very hard on social media, posting a ton of criticisms against the other big AI company CEOs like Dario Amodei. This suggests to me that he must feel very threatened; otherwise he wouldn't be resorting to such childish behavior. He must feel incredibly frustrated that no amount of money is able to make him more competitive within the AI space.

[−] gigatexal 64d ago
I think all of Elon’s companies have an Elon problem: he’s so polarizing he’s limiting the talent pool to choose from. I’m here for it because I don’t care for the man and want him to fail… but yeah, it seems clear to me that his polarizing antics are costing his companies.
[−] tmaly 64d ago
I think it would have been better to have just brought Ashok Elluswamy over and placed him in charge of a group and then tried to just keep the researchers on rather than firing them. It is hard to get anything done if you do not have the talent already onboard.
[−] measurablefunc 64d ago
It's surprising that AI coding agents have network effects but it's true. Think about it from first principles & you'll realize that the bottleneck is how many people are using it to write real code & providing both implicit (compiler errors, test failures, crash logs, etc) & direct ("did not properly follow instructions", "deleted main databases", "didn't properly use a tool", etc) feedback. No one is using xAI for serious software engineering so that leaves OpenAI, Anthropic, & Google w/ enough scale to benefit from network effects. No one has real AI but what they do have is the appearance of intelligence from crowdsourced feedback & filtering. This means companies that are already in the lead will continue to stay there & xAI started way too late so they will continue to lose in every domain that actually matters & benefits from network effects.
[−] teladnb 64d ago
It does not surprise me. The free Grok has gotten worse since 4.0; they increasingly save money by not responding at all or allowing only one answer. Grok now defends the administration and billionaires.

The company seems to burn money like crazy. Everyone knows that "AI in space" and the downgrade to a moon trip after claiming for 15 years that Mars is just around the corner are marketing.

All AIs are toys, and the coding promises are just a lie to string along investors. Unfortunately, many of those are senile Star Trek watchers who buy into everything.

[−] lvl155 64d ago
xAI showed me that it’s really still OAI and Anthropic (which is basically the OG dev team). No matter how much money you throw at the problem, the entire space is still in the hands of a few.
[−] catapart 64d ago
lol! no surer sign of a junior/naive/ignorant developer or manager than the sentiment "okay, well, let's start from scratch and do it right this time."

big projects generate cruft. there are ways to minimize it, but as you go along there will always be some stuff that doesn't quite mesh with whatever else you've got going on. if you insist on ironing out every single wrinkle (admirable!) you'll never actually deliver anything.

I'm not saying this will fail. green field projects can certainly be a godsend when they produce something better than what they attempt to replace. but they are always a sign of failure: of not being able to work your way out of the mess you made with the first attempt. so that raises the question: what are you going to do when this attempt gets hard to work with? give up and start over again, and do it right that time? or...?

[−] stainablesteel 64d ago
I'm not surprised; grok definitely falls behind as both a coding agent and a research tool.

claude codes the best, gpt is the best research tool, and grok is really only great at videos. which isn't a huge loss, but videos don't have the same functional value as academic work and coding.

[−] sergiotapia 64d ago
Will this be an indictment of the insane work hours I've heard the xAI team pulls?
[−] localghost3000 64d ago
Musk sounds like such a nightmare to work for. I legitimately don't understand why anyone would put up with him. What's the appeal?
[−] holoduke 64d ago
Where is the grok coding cli?
[−] pmdr 63d ago
MechaHitler and undressing aside, Grok is the best integration of AI into an existing website ever attempted.
[−] anigbrowl 64d ago
This might explain why Grok went unavailable to non-subscribers at X the other day.
[−] sidcool 64d ago
The pleasure people get at seeing Elon's companies failing is astounding.
[−] zzleeper 64d ago
Wait, what does this imply for Cursor? I DGAF about xAI and will never use their Grok, but I did like Cursor more than the alternatives (even if I'm just running Opus 4.6 most of the time).

But now he is poaching the two heads of engineering from a company that's trying to move very quickly. How is that going to affect its speed and success?

[−] rvz 64d ago
Not even Elon believes that Cursor is worth $50B or even $29B.
[−] BigTTYGothGF 64d ago
I feel like even just a couple of years ago it would have been shocking to see an article about Musk with this kind of spin. You'd never see a line like this:

> The name is a “funny” reference to Microsoft, the billionaire added.

in something from 2023 or earlier.

[−] repple 64d ago
Their goal of moving compute to space, combined with their capacity to launch tons of payload, will make this look like a tiny blip.
[−] causalzap 64d ago
[flagged]
[−] dang 64d ago
I couldn't find a working archive link for the ft.com article - anyone?

Since it's the original source I've left it up, but added other URLs to the toptext.