Flash-MoE: Running a 397B Parameter Model on a Laptop (github.com)

by mft_ 120 comments 398 points

[−] tarruda 55d ago
Note that this is not the only way to run Qwen 3.5 397B on consumer devices; there are excellent ~2.5 BPW quants available that make it viable for 128GB devices.

I've had great success (~20 t/s) running it on an M1 Ultra with room for 256k context. Here are some lm-evaluation-harness results I ran against it:

    mmlu: 87.86%

    gpqa diamond: 82.32%

    gsm8k: 86.43%

    ifeval: 75.90%

More details of my experience:

- https://huggingface.co/ubergarm/Qwen3.5-397B-A17B-GGUF/discu...

- https://huggingface.co/ubergarm/Qwen3.5-397B-A17B-GGUF/discu...

- https://gist.github.com/simonw/67c754bbc0bc609a6caedee16fef8...

Overall an excellent model to have for offline inference.

[−] Aurornis 55d ago
Reading the details, he is using 2-bit quantization and has reduced the number of experts per token from 10 down to 4 to get 5 tokens/sec. Cool proof of concept, but it's far from the quality and performance of the 397B model as normally used. Dropping the number of experts is particularly misleading.

This is some interesting work, but applying such extreme measures to LLMs to get them to run severely degrades quality. I know he claims negligible quality loss, but in my experience 2-bit quantizations are completely useless for real work. You can get them to respond to prompts, but they lose their intelligence and will go around in circles.

He also shows 5-6 tokens per second. Again that’s impressive for a large model on limited hardware but it’s very slow. Between the severely degraded model abilities and the extremely slow output the 397B result should be considered an attempt at proving something can technically run, not evidence that it can run well and produce output you’d expect from a 397B model.

He even mentions the obvious problems with his changes:

> 2-bit quantization produces \name\ instead of "name" in JSON output, making tool calling unreliable.

So right out of the gate this isn't useful if you want to do anything with it. He could have tried smaller models or less aggressive quantization to get actually useful output from the model, but it wouldn't look as impressive. It's honestly getting kind of exhausting to read all of these AI-coded (admitted in the link) and AI-written papers made more for resume building. It would have been interesting to see this work applied to running a useful model that hadn't been lobotomized, instead of applying tricks that get an impressive headline but useless output.

[−] jllyhill 55d ago
To be honest, I'm getting tired of the "laptop" in every one of these clickbait titles turning out to be a $3000 MacBook. Sure, it's impressive to achieve this degree of LLM compression, but I really don't like that the title implies local LLMs are becoming viable for the average person when the actual hardware is out of reach for 99%.
[−] homarp 55d ago
[−] zozbot234 55d ago
The github page mentions that a naïve mmap approach is bottlenecked by per-page overhead. Can this be mitigated by setting up explicit "huge" pages? (2M using the CONT PTE feature if the "native" page size is 16k; 32M using a PMD level block mapping; or 1G using the CONT PMD feature.) Does macOS support this out of the box? Alternatively, one might use a simple mmap and then something like posix_fadvise to set up prefetching of the data.
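
For the fallback half of that, a minimal sketch of what I mean (just the POSIX mmap/madvise/posix_fadvise hints, not anything from the repo; it doesn't address the huge-page question, since as far as I know you can't request CONT PTE/PMD mappings for an ordinary file-backed mmap):

    /* Minimal sketch: map the weights read-only and hint the kernel to
     * prefetch, instead of paying a fault per page on first touch.
     * Advice calls are hints only; the kernel is free to ignore them. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s <weights-file>\n", argv[0]); return 1; }

        int fd = open(argv[1], O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0) { perror(argv[1]); return 1; }

        void *w = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (w == MAP_FAILED) { perror("mmap"); return 1; }

        madvise(w, st.st_size, MADV_SEQUENTIAL);   /* aggressive readahead  */
        madvise(w, st.st_size, MADV_WILLNEED);     /* start faulting in now */
    #ifdef POSIX_FADV_WILLNEED
        posix_fadvise(fd, 0, st.st_size, POSIX_FADV_WILLNEED);  /* Linux only */
    #endif

        /* ... hand `w` to the expert-streaming loop ... */
        munmap(w, st.st_size);
        close(fd);
        return 0;
    }
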
[−] justacatbot 55d ago
The quality degradation at 2-bit is a real issue. For actual work tasks, a well-tuned 30B at 4-bit usually outperforms a 70B+ at 2-bit in my experience. The expert reduction on top of that compounds things - you're essentially running a fairly different model. Still interesting to see the upper bound of what consumer hardware can attempt, even if the result isn't production-ready.
[−] bertili 55d ago
Very impressive! I wonder if there is a similar path for Linux using system memory instead of SSD? Hell, maybe even a case for the return of some kind of ROMs of weights?
[−] andai 55d ago

> Metal Compute Shaders — Hand-written Metal kernels

Hand written... by GPT? ;)

[−] RandyOrion 54d ago
This project is the kind of interesting automated search over an engineering problem that I'd like to see more of.

The experience of using tiered storage (GPU VRAM, RAM, and SSD) is generally poor across a lot of the LLM inference engines out there, e.g., llama.cpp, sglang, vllm, etc.

My own experience is that weight and KV cache offload to RAM on sglang and vllm is either unavailable or unusable: copying the extra parameters from the docs and adding them to already-working commands just results in errors. llama.cpp does support weight offload, but the experience is not pleasant: low PCIe (GPU <-> RAM) utilization, low GPU utilization, and really low tokens per second.

[−] mkw 55d ago
TLDR I took a stab at leveraging Dan's work and making it more practical:

https://github.com/matt-k-wong/mlx-flash

2 bit quantization lobotomizes the model but is impressive nonetheless! Maybe one day we'll be able to have intelligent 2 bit quants... I wonder.

My version supports 4-bit quantization, hybrid streaming (disk + RAM), and arbitrary model compatibility (tested on Mamba2), and sets up the framework for LM Studio integration.

I leveraged this work (credit to Danveloper) and am in the middle of making it work on more practical models and quants. It still uses flash streaming, but with a control knob so you can choose how much or how little RAM to use. At one extreme it uses as little RAM as possible but is very slow; in the balanced case it uses some RAM and is much faster.

I designed it around the intelligence-dense Nemotron 3 Nano 30B and Nemotron Cascade 2 30B models (smaller, but with more intelligence per parameter), which can run on low-end 16GB machines, though you can run arbitrarily large models on larger machines (designed for the very low end, but capable of the high end).

[−] JSR_FDED 55d ago
This is a very impressive result. If I understand correctly the bottleneck is the SSD in this architecture - the author seems to get almost 15GB/s - but I seem to remember the max b/w was about 8GB/s. What am I missing?
[−] druide67 53d ago
The finding about removing the 9.8 GB Metal LRU cache for a 38% speedup is the most interesting part. Same lesson as PostgreSQL's advice against application-level buffer pools that compete with the OS page cache: the hardware memory compressor doing 130K decompressions/sec was pure overhead.

Curious about the remaining gap: 5.7 tok/s vs 18.6 theoretical (from SSD bandwidth). Is the ~70% overhead mostly GPU compute on non-expert layers (attention, norm), or is there I/O scheduling room left?
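
For reference, the arithmetic behind that ceiling (the bandwidth and the 18.6 figure are the quoted ones; the per-expert size below is my own placeholder, reverse-derived to match): at the I/O bound, tok/s is just SSD bandwidth over the expert bytes streamed per token.

    /* Back-of-the-envelope I/O ceiling. The per-expert size is a made-up
     * placeholder chosen only to be consistent with the quoted numbers. */
    #include <stdio.h>

    int main(void)
    {
        double ssd_bw_gbs      = 17.5;   /* sustained SSD reads, GB/s          */
        double experts_per_tok = 4.0;    /* active experts after the reduction */
        double gb_per_expert   = 0.235;  /* hypothetical quantized expert size */

        double gb_per_token = experts_per_tok * gb_per_expert;
        printf("streamed per token: %.2f GB -> I/O-bound ceiling: %.1f tok/s\n",
               gb_per_token, ssd_bw_gbs / gb_per_token);
        return 0;
    }

Everything below that line has to be either the dense layers computed from RAM (attention, norms) or I/O scheduling gaps, which is exactly the split I'm asking about.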

[−] spwa4 55d ago
Does this mean that it should be possible to load up a system with ~10 SSDs (that seems to be at least the number of active experts) to get 40 tok/s even on truly gigantic models?
[−] shubhamintech 54d ago
4.4 tok/s with reliable structured output is a solid local benchmark, although the question is whether SSD streaming introduces per-token latency variance that messes up tool call parsing downstream. The gap between 400 GB/s unified memory bandwidth and 17.5 GB/s SSD reads means you're in the hot path pretty much every time an expert isn't cached.
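
A toy model of that variance (the two bandwidths are the figures above; the expert size and count are placeholders): per-token weight-read time depends on how many of the routed experts happen to already be resident.

    /* Toy model of per-token jitter from expert cache misses; illustrative
     * numbers only, not measurements from the project. */
    #include <stdio.h>

    int main(void)
    {
        double gb_per_expert = 0.235;  /* hypothetical quantized expert size */
        int    experts       = 4;      /* routed experts per token           */
        double ram_bw_gbs    = 400.0;  /* unified memory bandwidth           */
        double ssd_bw_gbs    = 17.5;   /* sustained SSD reads                */

        for (int hits = experts; hits >= 0; hits--) {
            int misses = experts - hits;
            double sec = hits * gb_per_expert / ram_bw_gbs
                       + misses * gb_per_expert / ssd_bw_gbs;
            printf("%d/%d experts resident -> %5.1f ms of weight reads\n",
                   hits, experts, sec * 1e3);
        }
        return 0;
    }

With those numbers an all-miss token spends over 20x longer on weight reads than an all-hit one, which is exactly the kind of spread that would trip up a downstream tool-call parser with a fixed timeout.
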
[−] qiine 55d ago
It seems strange to me that the only way to use an LLM is to fit it entirely in volatile memory from the get-go.

To render movies we happily wait for the computer to calculate how light bounces around, for hours or even days.

So why not do the same with AI? Ask big questions of big models and get the answer to the universe tomorrow?

[−] maxloh 55d ago
Can you add a license to the repo? Legally, we can't run any code without a license attached to it.
[−] haomingkoo 55d ago
Really interesting approach. Curious how the 2-bit quantization affects the model's reasoning ability on longer chains of thought vs shorter prompts. The benchmarks look solid, but real-world usage seems like a different story based on the comments here.
[−] 999900000999 54d ago
If I have a dedicated GPU with 12GB of VRAM and 32GB of system RAM, can I combine the two for LLMs?

So far, Ollama will use the 12GB and then give up.

[−] m-hodges 55d ago
As frontier models get closer and closer to consumer hardware, what's the moat for the API-driven $trillion labs?
[−] lostmsu 55d ago
How large is the KV cache?
[−] 383toast 55d ago
yeah 4 tok/s is kinda unusable though
[−] breakingcups 54d ago

> No Python. No frameworks. Just C, Objective-C, and hand-tuned Metal shaders.

Welp, I know where those tokens came from.

[−] mannyv 55d ago
Everyone is focused on the bad 2 bit result but who cares? He says don’t use it because it’s bad.
[−] pdyc 55d ago
Impressive. I wish someone would take a stab at using this technique on mobile GPUs; even if it doesn't use storage it would still be a win. I am running llama.cpp on an Adreno 830 with OpenCL and I am getting a pathetic 2-3 t/s for output tokens.
[−] matchbox 55d ago
this is awesome Dan!
[−] utopiah 54d ago
I honestly don't get the "why", despite having done similar things myself, e.g. running a model on a VR headset itself.

I mean, I've done it because I could, so I imagine others are doing that too. But then... once it's done I don't actually use it. I ticked that box, but when even SOTA models aren't that useful, I have a hard time imagining actual positive use cases (not, like, offline spam or naughty chat in the woods) that would benefit from such technically impressive demos.

[−] NamlchakKhandro 54d ago
lmao 4.4 tokens per second is hilariously and utterly bad.

anyone suggesting that it's a reasonable speed should find another career
