Qwen3.6-35B-A3B on my laptop drew me a better pelican than Claude Opus 4.7 (simonwillison.net)

by simonw 97 comments 463 points

[−] ericpauley 28d ago
Going to have to disagree on the backup test. Opus's flamingo is actually on the pedals and seat, with functional spokes and a beak. In terms of adherence to physical reality, Qwen is completely off. To me it's a little puzzling that someone would prefer the Qwen output.

I'd say the example actually does (vaguely) suggest that Qwen might be overfitting to the pelican.

[−] wongarsu 28d ago
Qwen's flamingo is artistically far more interesting. It's a one-eyed flamingo with sunglasses and a bow tie who smokes pot. Meanwhile, Opus just made a boring, somewhat dorky flamingo. Even the ground and sky are more interesting in Qwen's version.

But in terms of making something physically plausible, Opus certainly got a lot closer.

[−] kmacdough 28d ago
Given that adherence is a more significant practical barrier, it's probably the better signal. That is, if we decide to look for signal here.
[−] BobbyJo 28d ago
The fundamental challenge of AI is preventing unprompted creativity. I can spin up a random initialization and call all of its output avant-garde if we want to get creative.
[−] userbinator 28d ago
I recently fell down the rabbit hole of AI-generated videos, and realised that many of the "flaws" that make them distinctive, such as objects morphing and doing unusual things, would've been nearly impossible to create otherwise, or would have required very advanced CGI.
[−] doobiedowner 28d ago
[flagged]
[−] itake 28d ago
"artistically interesting" is IMHO both a subjective and 'solved' problem. These models are trained with an "artistically interesting" reward model that tries to guide the model towards higher quality photos.

I think getting the models to generate realistic and proportional objects is a much harder and more important challenge (remember when the models would generate 6 fingers?).

[−] tpm 28d ago
The Opus bike isn't very physically plausible though.
[−] tecoholic 28d ago
Even the first one: Qwen added extra details in the background, sure. But the pelican itself is a stork with a bent beak, and its feet are cut off its legs. While impressive for a local model, I don't think it's a winner.
[−] mejutoco 28d ago
Did you see the Opus bike for that same test, though? I know it's about the flamingo, but that is bad.
[−] kube-system 28d ago
Qwen, at least, can draw a complete bicycle frame. The Opus frame will snap in half and can't steer.
[−] gowld 27d ago
Qwen's frame is so strong that it broke both feet off the pelican.
[−] kube-system 27d ago
Clearly he's riding a fixie and trying to stop. Pelican didn't drink his Ovaltine.
[−] irthomasthomas 28d ago
It's a model with 3B active parameters. It should not be this close. Debating their artistic qualities is missing the point.
[−] jbellis 28d ago
For coding, qwen 3.6 35b a3b solved 11/98 of the Power Ranking tasks (best-of-two), compared to 10/98 for the same size qwen 3.5. So it's at best very slightly improved and not at all in the class of qwen 3.5 27b dense (26 solved) let alone opus (95/98 solved, for 4.6).
[−] mentalgear 28d ago
I understand the 'fun factor', but at this point I really wonder what this pelican still proves. I mean, providers certainly could have adapted to it if they wanted, and if you want to test how well a model handles potentially out-of-distribution contexts, it might be more worthwhile to mix different animals with different activities (a whale on a skateboard) than to always use the same one.
[−] wood_spirit 28d ago
Such a disconnect from the minutes I've lost today trying, and giving up on, getting Gemini to update a diagram in a slide. The one-shot joke stuff is great, but trying to say "that is close, but just make this small change" seems impossible. It's the gap between toy and tool.
[−] big-chungus4 28d ago
I swear every single time someone says "my laptop" on Hacker News, it's some insane MacBook that is more powerful than 98% of computers out there.
[−] sailingcode 28d ago
I'm an iguana and need to wash my bicycle in the carwash. Shall I walk or take the bus?
[−] ralph84 28d ago
You can just straight up ask Opus if it's good at generating images and it will say no. It has never been marketed as being for image generation.
[−] VHRanger 28d ago
That's not surprising; Opus & Sonnet have been regressing on many non-coding tasks since about the 4.1 release in our testing
[−] f33d5173 28d ago
I don't know what such a demo would prove in the first place. LLMs are good at things that they have been trained on, or are analogues of things they have been trained on. SVG generation isn't really an analogue to any task that we usually call on LLMs to do. Early models were bad at it because their training only had poor examples of it. At a certain point model companies decided it would be good PR to be halfway decent at generating SVGs, added a bunch of examples to the finetuning, and voila. They still aren't good enough to be useful for anything, and such improvements don't lead them to be good at anything else - likely the opposite - but it makes for cute demos.

I guess initially it would have been a silly way to demonstrate the effect of model size. But the size of the largest models stopped increasing a while ago, recent improvements are driven principally by optimizing for specific tasks. If you had some secret task that you knew they weren't training for then you could use that as a benchmark for how much the models are improving versus overfitting for their training set, but this is not that.
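For concreteness, the task the thread keeps debating is just emitting markup like this. Below is a hand-written, deliberately crude sketch (not output from any model) of a bird-on-a-bicycle SVG, plus the cheapest automatic check you can run on model output: is it well-formed XML at all?

```python
import xml.etree.ElementTree as ET

# A hand-written, deliberately crude "pelican on a bicycle" SVG.
# Illustrative only; the shapes and coordinates are made up here.
svg = """<svg xmlns="http://www.w3.org/2000/svg" width="200" height="120">
  <circle cx="60" cy="90" r="20" fill="none" stroke="black"/>   <!-- rear wheel -->
  <circle cx="140" cy="90" r="20" fill="none" stroke="black"/>  <!-- front wheel -->
  <line x1="60" y1="90" x2="140" y2="90" stroke="black"/>       <!-- frame -->
  <ellipse cx="100" cy="55" rx="25" ry="15" fill="white" stroke="black"/> <!-- body -->
  <polygon points="125,50 150,55 125,60" fill="orange"/>        <!-- beak -->
</svg>"""

# Before any visual judging, a model that "draws SVGs" must at minimum
# emit well-formed XML; fromstring() raises ParseError if it doesn't.
root = ET.fromstring(svg)
print(root.tag)   # namespaced tag: {http://www.w3.org/2000/svg}svg
print(len(root))  # 5 child elements (XML comments are discarded by the parser)
```

The visual quality being argued over in this thread is exactly the part this kind of check can't capture, which is why the judging stays subjective.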

[−] ineedasername 28d ago
On thinking about it, one reason this may be something at least slightly more than training on the task is how richly language is filled with spatial metaphors, even in basic language that laymen wouldn't consider metaphor outside the field of linguistics proper, where concepts like Lakoff's analysis in "Metaphors We Live By" and others are simply part of the field (though, unsurprisingly, I've occasionally seen it brought up among the HN crowd).

The amount of money you have in the bank may often "increase" or "decrease", but it also goes up and down: spatial. Concepts can be adjacent to each other, or orthogonal. Plenty more.

So, as models utilize their weights more densely, with more complex strategies learned during training, the patterns and structure of these metaphors might also be deepened. Hmmm... another thing to add to the heap of future projects: trace the geometry of activations in older/newer models of similar size on the same prompts containing such metaphors, or on these pelican prompts, to test the idea so it isn't just armchair speculation.

[−] bulbar 27d ago

> I’m giving this one to Qwen too, partly for the excellent SVG comment

You say you like the one from Qwen better, and the only reason you give has nothing to do with the task.

In general, one should state specific expectations regarding the properties of the image before the experiment. One important property should be "does not hallucinate things into the image that are unrelated to the prompt".

[−] Quarrelsome 28d ago
Maybe the next time we suspect they're optimising for the test, switch the next test to drawing "the cure for cancer".
[−] comandillos 28d ago
I've been using Qwen3.5-35B-A3B for a bit via open code and oMLX on an M5 Max with 128GB of RAM, and I have to say it's impressively good for a model of that size. I've seen a huge jump in the quality of the tool calls and in how well it handles the agentic workflow.
[−] aliljet 28d ago
I'm really curious: what competes with Claude Code for driving a local LLM like Qwen 3.6?
[−] jedisct1 28d ago
I'm currently testing Qwen3.6-35B-A3B with https://swival.dev for security reviews.

It's pretty good at finding bugs, but not so good at writing patches to fix them.

[−] lofaszvanitt 28d ago
That Qwen flamingo on the unicycle is actually quite good. A work of art.
[−] quux 28d ago
This is a useless benchmark nowadays; every model provider trains their models on making good pelicans. Some have even trained on every combination of animal and mode of transportation.
[−] atonse 28d ago
Wonder what would happen if we unleashed Karpathy’s autoresearch on the pelican bicycle test. And had it read back the image to judge it.

Oh maybe it might continue to iterate on the existing drawing?

[−] refulgentis 28d ago
I liked both of Opus's better. It was very illuminating: in both cases I didn't see the errors Simon saw, and wondered why Simon skipped over the errors I did see.

Pelican: saturated!

[−] bottlepalm 28d ago
I really wish they'd spent some time training for computer use. This model is incapable of finding anywhere near the correct x,y coordinates of a simple object in a picture.
[−] 999900000999 28d ago
How much RAM is in the MacBook?

God bless these open models. Claude can't subsidize its users forever, and no one can afford $1,200 a month for LLM credits.

[−] JaggerFoo 28d ago
FYI, using a 128GB M5 MacBook Pro, sourced from another article by the author.
[−] Havoc 28d ago
Between the legs and the beak, I'd still rate the Opus pelican higher.
[−] hopinhopout 28d ago
LLMs really are causing serious brainrot if HTML pelican drawings are a basis for picking a model for your programming projects. All these shitty benchmarks don't say or mean anything anyway if companies secretly tweak for them on the go.
[−] yieldcrv 28d ago
All those models that were just at version 1.x in 2024

That’s so wild

[−] justinbaker84 28d ago
I love this benchmark!
[−] stevefan1999 26d ago
Why is that flamingo in Qwen's drawing smoking?
[−] kburman 28d ago
Looks like Opus has been nerfed from day 1.
[−] nba456_ 28d ago
Good reminder that these tests have always been useless, even before they started training on it.
[−] 19qUq 28d ago
How about switching to MechaStalin on a tricycle? It gets kind of boring.
[−] tmatsuzaki 28d ago
[dead]
[−] aimadetools 28d ago
[dead]
[−] whywhywhywhy 28d ago
[flagged]
[−] smcl 28d ago
[flagged]
[−] simon_is_genius 28d ago
[flagged]
[−] throwuxiytayq 28d ago
I literally cannot believe that people are wasting their time doing this either as a benchmark or for fun. After every single language model release, no less.