Muse Spark: Scaling towards personal superintelligence (ai.meta.com)

by chabons 367 comments 393 points


[−] tty456 37d ago
I don't get the comments trashing this. If it slightly beats or even matches Opus 4.6, it means Meta is capable of building a model competitive with the leading AI company. Sure, they spent a lot of money and will have ongoing costs. But how much more work would it take to turn that into a coding agent people are willing to try (and pay for) alongside their usage of a collection of agents (Claude, Codex, etc)? It also means Meta doesn't have to pay another company to use a SOTA model across all their products (including IG, WhatsApp, and VR), which will matter to their balance sheet long term (despite the constant R&D spend).
[−] prodigycorp 37d ago
Comments trashing this are rightly skeptical; people remember the benchmaxxing of Llama 4. This model was out in the wild as early as a couple of months ago, but they didn't release it because it was at Gemini 2.5 Pro levels.
[−] refulgentis 37d ago

> This model was out in the wild as early as a couple of months ago, but they didn't release it because it was at Gemini 2.5 Pro levels.

Source? (Even if rumor)

[−] nl 36d ago
NYTimes had a story about this (March 12):

> Meta’s new foundational A.I. model, which the company has been working on for months, has fallen short of the performance of leading A.I. models from rivals like Google, OpenAI and Anthropic on internal tests for reasoning, coding and writing, said the people, who were not authorized to speak publicly about confidential matters.

> The model, code-named Avocado, outperformed Meta’s previous A.I. model and did better than Google’s Gemini 2.5 model from March, two of the people said. But it has not performed as strongly as Gemini 3.0 from November, they said.

> They added that the leaders of Meta’s A.I. division had instead discussed temporarily licensing Gemini to power the company’s A.I. products, though no decisions have been reached.

https://www.nytimes.com/2026/03/12/technology/meta-avocado-a...

https://archive.is/uUV5h#selection-715.98-715.277

[−] o10449366 36d ago
[flagged]
[−] prodigycorp 36d ago
It was from a Techmeme Ride Home podcast episode where the host discussed what "sources at the company said". I don't remember which day's episode it was.
[−] zozbot234 37d ago
The Llama 4 series was one of the earliest large MoEs to be made publicly available. People just ignored it because they were focused on running smaller, denser models at the time; we should know better these days.
[−] canes123456 37d ago
Why go into coding agents? Both Anthropic and OpenAI are going all in on that. The opportunity is customer-facing AI now.

OpenAI has the mindshare, but they're going to have to decide whether to allocate their limited compute to free users or go all in trying to keep up with Anthropic in enterprise.

[−] modeless 37d ago
It's a decent model if the benchmarks are to be believed, but it won't be close to Opus in usefulness for programming. None of these benchmarks completely capture what makes a model useful for day-to-day coding tasks, unfortunately. It will take time for them to catch up, and Opus will keep improving in the meantime. But it's good to have more competition.
[−] redox99 37d ago

> If it slightly beats or even matches Opus 4.6

It doesn't, though.

[−] ChipopLeMoral 37d ago

> I don't get the comments trashing this.

People like to hate on Meta regardless of whether it's justified or not. Not saying it isn't justified here, just that it's many people's default bias.

[−] simonw 37d ago
Pelicans: https://simonwillison.net/2026/Apr/8/muse-spark/

I also had a poke around with the tools exposed on https://meta.ai/ - they're pretty cool, there's a Code Interpreter Python container thing now and they also have an image analysis tool called "container.visual_grounding" which is a lot of fun.

[−] daft_pink 37d ago
This really reinforces the idea that the AI race and the Railroad Mania of the 19th century are very similar.

So many different companies are going to have similarly powerful AI that there will be no moat around it, and it will be cheap. They will never earn their investment back.

[−] _2d30 37d ago
Ran some of my internal benchmarks against this and I'm very unimpressed. I don't think this moves them into the OAI vs. Anthropic vs. Gemini conversation at all.

It made major analytical errors in its responses to several of my technical questions.

[−] gloosx 36d ago

>Text field.

>"Ask Meta AI..." placeholder.

>Colourful blue Send button.

>Eager to try, entering question... hitting Send.

>Log in or create an account to access.

>15 seconds of loading time

>Continue with Facebook or Instagram

Typical Meta move: throwing a dark pattern at you from the start instead of just letting you try it.

I won't even bother to continue. Somehow OpenAI got this right.

[−] laser 37d ago
The first thing I tried was a visual reasoning test on floor plan documents, one that applies directly to something I'm working on, which I posed to ChatGPT, Claude, Gemini, and Grok yesterday (lowest-tier paid plans on each). In that test only Gemini succeeded; the other models hallucinated or incorrectly reported the relative locations of building units.

I just posed the identical prompt/document to Muse Spark and it knocked it out of the park, extracted and displayed the pertinent pages from a multi-page PDF inline in the chat and rendered a correct answer.

This may be a one-off or a lucky start, but given the incredible result out of the gate I'm optimistic. I'll continue testing it in parallel against other models before potentially making it my primary daily driver, excluding coding, where the harnesses of Claude Code and Codex are still needed (although hopefully they release something in this space too).

That being said, Meta has the most adversarial data-usage policies I've seen among LLM providers, so that's unfortunate for handling anything sensitive. But it also stands to reason that they have a long-term advantage with such a massive proprietary data set. I'd prefer to also have a paid plan, like the other services offer, that allows me to keep my data out of training, rather than a free service with my usage monetized in other ways.

[−] zmmmmm 37d ago
The real question for me, if we assume they once again have a competitive frontier model, is what this means for Meta's strategy now. In particular, have they abandoned all their philosophy of the open ecosystem / open model play they were pursuing before?

While it's true Llama 4 sucked, I still can't help feeling they have lost ground compared to where they would have been if they had maintained that strategy. Thanks to Llama, they were considered a peer of the other frontier model providers. Now they are not even in the conversation. It would take an incredible shift in performance to make me even consider using their new model. They may have a model, but the other providers have been busy building whole ecosystems around their tech, and Meta has none of that.

Maybe they could dump $1b into OpenCode or something and reignite the open ecosystem play with an open harness. They need something to get back in the conversation, if that's where they want to be. Otherwise, it will just be another closed, hidden proprietary AI model driving user facing Meta apps, but which nobody else cares about.

[−] granzymes 37d ago
Comes impressively close to GPT 5.4 / Gemini 3.1 Pro / Opus 4.6! Mostly behind OpenAI on coding/agentic benchmarks, behind Google on text reasoning, and behind Anthropic on Humanity's Last Exam with tools (surprisingly the only benchmark where Anthropic currently leads).

Meta hasn’t fully caught up, but they came close and I think can solidly claim to be a frontier lab again. I’d call it a 3.5 horse race right now, and hopefully their next model improves. More model competition is good!

Poor Grok 4.2 should probably be dropped from the table.

[−] glerk 37d ago
Personal as in Meta gets your personal data so they can sell you more ads.
[−] TobTobXX 37d ago

> Muse Spark is a natively multimodal reasoning model with support for [...] visual chain of thought [...].

Do they mean "the chain of thought is visible to the user" (i.e. not hidden like ChatGPT's), or "the medium of the chain of thought is not text, but visuals" (i.e. thinking in images)?

I'd guess the former, since it wouldn't be economical to generate transient images just for thinking. But I'm not sure why they'd highlight that in that case. If it were the latter, that'd be extremely interesting: the first model not to think in text.

[−] tekacs 37d ago
https://meta.ai/share/pe4HxOfv2Bp

I'm finding it a little tricky to evaluate because the harness is unfortunately very, very bad (e.g. search is awful). Can't wait to try this in some real external services where we can see how it performs for real.

Definitely getting high-quality results overall. But it's hard to test agentic behavior, and even prose quality, when just working off the default chat interface.

One thing that stands out is that, _for_ the quality, it feels very, very fast. Perhaps it's just very lightly loaded right now, but regardless, it's lovely to use.

I'm quite impressed with the tone overall. It definitely feels much more like Opus than it does, like, GPT or Grok in the sense that the style is conversational, natural and enjoyable.

[−] moab 37d ago
"Muse Spark is available now, and Contemplating mode will be rolling out gradually in meta.ai."

How does one get their hands on these models? They are not open-source, right? I go to meta.ai, but it's just a chat interface; there's no equivalent to Codex or Claude Code? Can you use this through OpenCode? Is Meta charging for model access, or is the gathering of chat data a sufficiently large tithe?

[−] hackrmn 37d ago
The hero image on the linked page, which consists of a muted teal background with the words "Introducing Muse Spark", weighs in at 3.5 MB. I don't even...
[−] ddp26 37d ago
The second paragraph starts "Muse Spark is the first step on our scaling ladder and the first product of a ground-up overhaul of our AI efforts. To support further scaling, we are making strategic investments..."

This article is about Meta, not about the user. Who signs off on these? Is the intended audience other people at Meta, not the user?

[−] yalogin 37d ago
Meta is in a weird spot. They caught up late to the game, and instead of releasing Llama as a chatbot they open-sourced it, precisely because they had lost the mindshare. They thought a chatbot was not their product, and I am sure they are regretting it now. Mark is obsessed with becoming the Android of something: he poured billions into the metaverse thinking he was first, and failed. He then open-sourced Llama and wanted to be the Android of LLMs. He ended up enabling Groq, but it didn't benefit Meta directly at all. They have no revenue or mindshare path from LLMs but continue to pour billions into them. The only 1-1 mapping is with the glasses, but that is a tough fit for the company given they are extremely allergic to privacy and security.

Not sure what this is now.

[−] throwaw12 37d ago
How is it that Meta spent so much money on talent and hardware, but the model barely matches Opus 4.6?

Especially looking at these numbers after Claude Mythos, it feels like either Anthropic has some secret sauce, or everyone else is dumber compared to the talent Anthropic has.

[−] bguberfain 37d ago
We all know it... but I think they were very bold in this warning about using your private messages to train public models: _Your messages with AIs will be used to improve AI at Meta. Don't share information, including sensitive topics, about others or yourself that you don't want the AI to retain and use_
[−] anxtyinmgmt 36d ago
I wanted to root for Mark and Meta as another frontier lab, especially one focused on open source, but at this moment I have to say: who cares. Gemini has a better open-source track record thus far. Alex Wang is a reputational hazard. It is hard to get over the suspicion that this too might be benchmaxxed. I'd love to see demos of products actually using these models to overcome that, but at the current pace of progress my intuition says skip all this.
[−] gallerdude 37d ago
This would have been an amazing release 6 months ago. But the industry moves so fast, this is a trite release. Maybe it’s best for Meta to sell their superintelligence division. I don’t think Zuck’s vision is particularly compelling.
[−] eranation 37d ago
So this is why Anthropic rushed out the weirdest "pre-responsible-disclosure-totally-not-for-marketing" announcement yesterday? To make sure Spark doesn't steal their thunder? (Spark beats Opus 4.6 on some benchmarks...) Or have I become a bitter, cynical old man?
[−] hvass 37d ago
Genuine question: Why release this the day after Mythos? It does not appear SOTA (just based on benchmarks). OpenAI will likely release Spud tomorrow.
[−] sidcool 37d ago
Will experiment with the model. But I am scared of sharing any information with the Zuck ecosystem.
[−] GalaxyNova 37d ago
It is unfortunate that they decided to stop doing open-weight releases.

What could have been interesting has been reduced to simply another subpar LLM release.

[−] toddmorey 37d ago
Question: since they've rebooted their approach to AI... have they given up on open models? There's no mention of open source or open weights or access to the models beyond their hosted services.
[−] gritspants 37d ago
I would like someone to tell me how stupid I am. If I were Meta/Zuck I'd open source a great model the moment my company developed it. This just looks like a pitch to investors, otherwise.
[−] khurdula 37d ago
"we hope to open-source future versions of the model."

Love to see it. Cheers!

[−] edwcross 37d ago
What is the "BioTIER-refuse" thing mentioned in the "Bioweapons Refusal" graph?

I Googled it and found absolutely nothing.

Well, to be honest, I got 100% of websites containing the French word "boîtier" (box) with a typo.

Even on Google Scholar, the closest match is "BioTiER (Biological Training in Education and Research) Scholars Program", which is at least 10 years old and has nothing to do with that.

Is that an AI-generated image with an AI-generated name that has no physical existence?

[−] binaryturtle 37d ago
Looks like it needs a Meta account? As soon as you hit enter it wants you to log in. I guess I won't try this any time soon. :)
[−] zurfer 37d ago

> Muse Spark is available today at meta.ai and the Meta AI app. We’re opening a private API preview to select users.

[−] ChrisArchitect 37d ago
Associated Meta news post with consumer-friendly takes: https://about.fb.com/news/2026/04/introducing-muse-spark-met...
[−] spearman 37d ago
Uploading images requires logging in. Logging in is broken. It redirects to https://meta.ai/?error=Token%20exchange%20failed and doesn't show any error message. Impressive.
[−] visioninmyblood 37d ago
https://meta.ai/ is where you can try it; it seems the API is not publicly accessible yet. I feel they are very late to the game and don't show value to customers over other models.
[−] cvhc 37d ago
Can't login. No error message in the UI. But the URL changes to "https://www.meta.ai/?error=Token%20exchange%20failed".
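For what it's worth, the redirect does carry the message the UI swallows. A quick sketch (hypothetical helper, just standard query-string parsing, nothing Meta-specific) of pulling it out:

```python
from urllib.parse import parse_qs, urlparse

def login_error(url: str):
    """Extract the 'error' query parameter that the meta.ai UI fails to display."""
    params = parse_qs(urlparse(url).query)  # %20 is decoded to a space here
    return params.get("error", [None])[0]

print(login_error("https://www.meta.ai/?error=Token%20exchange%20failed"))
# → Token exchange failed
```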
[−] nharada 37d ago
Saying nothing about the actual performance of this model, it does strike me how... minimal(?) this announcement is. Their safety section is like 2 paragraphs about bioweapons. Go look at the reports for OpenAI's and Anthropic's model releases: 50+ pages of tests, examples, reports, and benchmarks across a bunch of safety and welfare metrics.

If Meta wants to be seen as a cutting edge massive lab they need to come across as one instead of looking like a school project version of a frontier model.

[−] KoolKat23 37d ago
Perhaps I'm wrong, but it definitely seems to be SOTA. Although, looking at its ARC-AGI-2 score, its reasoning isn't very good. I suspect it's got the benefits of scale but lacks that human-added element, which is understandable considering they claim to be building it from the ground up. This should come in time if they have a good team. In real use, I'd imagine one would worry about overfitting.

(I'm not using it as I'm not agreeing to their ad terms).

[−] chankstein38 37d ago
Personal Superintelligence made me think this was an open-source model being released and I was excited. Then I continued reading and I'll just wait until the model comes out.
[−] eranation 37d ago
Sarcasm aside, tried it (with instant mode), it's an impressive model.

It nailed all the ChatGPT meme gotchas (walk to the carwash, Alice 50 brothers, upside down cup, R's in strawberry, which number is bigger, 9.11 or 9.9?)
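The 9.11 vs 9.9 gotcha is easy to show concretely. My reading (not anything from the announcement) is that models fail it when they treat the numbers like version strings rather than decimals:

```python
# As decimals, 9.9 (i.e. 9.90) is larger than 9.11.
numeric = 9.9 > 9.11

# Read like version numbers (a common LLM failure mode), 9.11 "wins"
# because the comparison is component-wise: (9, 11) vs (9, 9).
as_version = (9, 11) > (9, 9)

print(numeric, as_version)  # → True True
```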

I guess all that money poaching OpenAI / Anthropic talent went somewhere...

Now, would I use "Meta Muse Code" or "Muse CoWork" if I have to get a Facebook account for all of my developers? Maybe not.

Would I use it via an API key? I might, depends on the pricing!

[−] supermatt 36d ago
Does "personal" here mean "run the model on your personal hardware", or just "give your personal data to meta"?
[−] anigbrowl 37d ago
Kinda off topic but I wonder why they picked this name, knowing of Nvidia's Spark. They're different products, obviously, but the potential for confusion is real as both brands are competing for mindshare in the AI space. I opened this story expecting to read they'd deployed on a cluster made of Spark machines or somesuch.
[−] maxaravind 36d ago
Personal superintelligence sounds nice until you actually try to use it.

We spent time yesterday arguing through an architecture decision. Today I ask the agent to help implement it, and it knows nothing about any of that. You're effectively starting over.

Feels like the real problem isn’t intelligence, it’s continuity. And most benchmarks don’t even touch that.
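A minimal sketch of the kind of continuity layer the comment is asking for, assuming a hypothetical local `session_history.json` store and a plain-text prompt format (none of this is a real Meta API; it just illustrates carrying yesterday's turns into today's prompt):

```python
import json
import os

HISTORY = "session_history.json"  # hypothetical local store

def load_history():
    """Return prior conversation turns, or an empty list on first run."""
    if os.path.exists(HISTORY):
        with open(HISTORY) as f:
            return json.load(f)
    return []

def append_turn(role, text):
    """Persist one turn so the next session can see it."""
    history = load_history()
    history.append({"role": role, "content": text})
    with open(HISTORY, "w") as f:
        json.dump(history, f)

def build_prompt(new_message):
    """Prepend saved turns so a fresh session 'remembers' the discussion."""
    context = "\n".join(f"{t['role']}: {t['content']}" for t in load_history())
    tail = f"user: {new_message}"
    return f"{context}\n{tail}" if context else tail
```

Real products would summarize or retrieve selectively instead of replaying everything, since the context window caps how much raw history fits.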

[−] khalic 37d ago
Oh good, if they built a lab, I'm sure they took the time to precisely define what they mean by superintelligence? Right? …
[−] Alifatisk 36d ago
Do we have any numbers on input, output and conversation context window limit?

I tried multiple riddles, graphs, and questions I know some LLMs fail at, and this one seems to do well. But I still don't have much trust in Meta after the scandal of them fiddling with their previous models to look good.

[−] rvz 37d ago
Until you actually try the model yourself, assume any benchmark presented to you is part of the model's marketing material: it is not independently verified and is completely biased.

The same is true with any other model, unless otherwise stated.

In the next few days, we'll see who Meta has paid to promote this model on social media.

[−] oliver236 37d ago
so glad it's beating all the others on bioweapons refusal. this is what i most wanted out of the latest SOTA model