> Apple Silicon changes the physics. The CPU and GPU share the same physical memory (Apple's Unified Memory Architecture) ... no bus!
Beware the reality distortion field: This is of course how it's worked on most x86 machines for a long time. And also on most Macs when they were using Intel chips.
Why did all my x86 onboard iGPUs reserve a fixed amount of RAM at boot, inaccessible to the OS? And why do dGPUs bring their own VRAM, and how can it be directly manipulated from the CPU without copying?
Correct me if I'm wrong, but that reserved memory is for the framebuffer? The iBoot bootloader also reserves some memory for the framebuffer.
dGPUs bring their own VRAM because it's a different type of memory, allowing them to get higher performance than they could with DDR. The M4 Max requires 128GB of LPDDR5X to reach its ~500GB/s bandwidth. The RX Vega 64 had that same bandwidth in 2017 with just 8GB of HBM2.
Nope, the reserved memory is what's available to use from the various APIs (VK, GL, etc). More recently there's OS support for flexible on demand allocation by the GPU driver.
Of course the APIs have allowed you to make direct use of pointers to CPU memory for something like a decade. However that requires maintaining two separate code paths because doing so while running on a dGPU is _extremely_ expensive.
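To make the "two code paths" point concrete, here's a minimal Metal/Swift sketch (my own illustration, not code from the article; the uploadTensor name and the tensor-upload framing are made up for the example):

```swift
import Metal

// Minimal sketch of the "two code paths": on a unified-memory device the
// CPU-visible buffer *is* the GPU buffer; on a discrete GPU you stage through
// shared memory and blit across the bus into private VRAM.
// Assumes `data` is non-empty.
func uploadTensor(_ data: [Float], device: MTLDevice, queue: MTLCommandQueue) -> MTLBuffer? {
    let byteCount = data.count * MemoryLayout<Float>.stride

    if device.hasUnifiedMemory {
        // UMA path: one allocation, visible to both CPU and GPU.
        return data.withUnsafeBytes { bytes in
            device.makeBuffer(bytes: bytes.baseAddress!, length: byteCount,
                              options: .storageModeShared)
        }
    }

    // dGPU path: shared staging buffer + blit into a private (VRAM) buffer.
    guard
        let staging = data.withUnsafeBytes({ bytes in
            device.makeBuffer(bytes: bytes.baseAddress!, length: byteCount,
                              options: .storageModeShared)
        }),
        let vram = device.makeBuffer(length: byteCount, options: .storageModePrivate),
        let cmd = queue.makeCommandBuffer(),
        let blit = cmd.makeBlitCommandEncoder()
    else { return nil }

    blit.copy(from: staging, sourceOffset: 0, to: vram, destinationOffset: 0, size: byteCount)
    blit.endEncoding()
    cmd.commit()
    cmd.waitUntilCompleted()
    return vram
}
```

On a UMA machine the first branch is all you need; on a dGPU the staging-plus-blit branch is where the expense (and the second code path) comes from.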
As someone who's worked on GPU drivers for shared-memory systems for over 15 years, supporting hardware that was put on the market over 20 years ago: in my experience, they've "always" been able to dynamically assign memory pages to the GPU.
The "reserved" memory is more about the guaranteed minimum to allow the thing to actually light up, and sometimes specific hardware blocks had more limited requirements (e.g. the display block might require contiguous physical addresses, or the MMU data/page tables themselves) so we would reserve a chunk to ensure they can actually be allocated with those requirements. But they tended to be a small proportion of the total "GPU Memory used".
Sure, sharing the virtual address space is less well supported, but the total amount of memory the GPU can use is flexible at runtime.
To the first question: blame Windows I guess. But even on older chips, GPU code could access memory allocated on the CPU side so this didn't cap the amount of data your GPGPU code could crunch.
I remember this mostly being a BIOS setting for how much memory to allocate to the iGPU - and once set in the BIOS, that memory was not accessible to the underlying OS (besides GPU I/O).
Agreed, maybe "changes the physics" was too strong; shared CPU/GPU memory is not new.
What is different, then, is the combination of:
1. UMA memory (and yes, iGPU had this, pre-M1)
2. enough bandwidth / GPU throughput for local inference
3. a straightforward makeBuffer(bytesNoCopy:) path (rough sketch below)
So, the novelty isn't the shared memory itself, but the whole chain lining up to make the Wasm linear memory -> Metal-buffer approach practical + performant enough.
(and not saying there's some Apple Silicon magic here either ... it'd work anywhere there was UMA and no-copy host-pointer path)
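For reference, here is roughly what that no-copy host-pointer path looks like. A sketch under my own assumptions: a page-aligned posix_memalign allocation stands in for the Wasm module's linear memory, and Metal requires the pointer (and, on macOS, the length) passed to makeBuffer(bytesNoCopy:) to be page-aligned.

```swift
import Metal
import Darwin

// Sketch only: a page-aligned host allocation stands in for the Wasm module's
// linear memory; a real runtime would hand us its linear-memory base pointer.
let pageSize = Int(getpagesize())
let length = 64 * 1024 * 1024                 // must be a multiple of the page size
var raw: UnsafeMutableRawPointer?
posix_memalign(&raw, pageSize, length)

guard let base = raw, let device = MTLCreateSystemDefaultDevice() else {
    fatalError("no page-aligned allocation or no Metal device")
}

// Wrap the existing allocation as an MTLBuffer. On a UMA device with
// .storageModeShared the GPU reads the same physical pages the CPU writes.
let buffer = device.makeBuffer(bytesNoCopy: base,
                               length: length,
                               options: .storageModeShared,
                               deallocator: { ptr, _ in free(ptr) })
```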
Apple Silicon uses unified memory where the CPU and GPU use the exact same memory and no copies from RAM to VRAM are needed. The article opens with mentioning just that and indeed it is the whole point of the article.
I am always a bit baffled why Apple gets credited with this. Unified memory has been a thing for decades. I can still load the biggest models on my 10th gen Intel Core CPU and the integrated GPU can run inference.
The difference being that modern integrated GPUs are just that much faster and can run inference at tolerable speeds.
(Plus NPUs being a thing now, but that also started much earlier. The 10th gen Intel Core architecture already had instructions to deal with "AI" workloads... just very preliminary.)
That's shared, not unified: it's partitioned, with the CPU and GPU copies managed by the driver. Lunar Lake (2024) is getting closer, but it's still not as tightly integrated as Apple's and is capped at only 32GB (Apple goes up to 512GB). AMD's Ryzen AI Max is closer to Apple but still has roughly 3x slower memory.
I don't think people are crediting Apple with inventing unified memory - I certainly did not. There have been similar systems for decades. What Apple did is popularize this with widely available hardware: GPUs that don't totally suck for inference, in combination with RAM that has decent speed, at an affordable price. You either had iGPUs which were slow (plus not exactly the fastest DDR memory) but at least sitting on the same die, or you had fast dGPUs with their own limited amount of VRAM. So the choice was between direct memory access but not powerful, or powerful but strangled by having to go through the PCIe subsystem to access RAM.
The article is talking about one particular optimization that one can implement with Apple Silicon and I at least wasn't aware that it is now possible to do so from WebAssembly - so to completely dismiss it as if it had nothing to do with Apple Silicon is imho not fair.
> on Apple Silicon, a WebAssembly module's linear memory can be shared directly with the GPU: no copies, no serialization, no intermediate buffers
enhance
> no copies, no serialization, no intermediate buffers
Would it kill people to write their own stuff? Why are we doing this? Out of all the things people could immediately cede to AI, they cede their human ability to communicate and convey/share ideas. This timeline is bonkers.
On one hand, it sounds promising to exploit shared-memory properties to speed up inference. On the other hand, the well-established inference engines are perhaps already well optimized to overlap compute and communication efficiently, in which case host-device copies are likely not the problem to tackle.
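For what that overlap looks like in practice, here is a heavily simplified sketch (my own, using Metal as the example API; `encodeCompute` stands in for whatever kernels an engine actually runs): the next chunk's upload is enqueued while the current chunk is still computing, ping-ponging between two destination buffers.

```swift
import Metal

// Heavily simplified: while the GPU computes on chunk i, chunk i+1's
// host->device copy is already enqueued. Metal's default hazard tracking
// orders each compute pass after the blit that wrote its input buffer.
func streamChunks(_ chunks: [[Float]],
                  device: MTLDevice,
                  queue: MTLCommandQueue,
                  encodeCompute: (MTLCommandBuffer, MTLBuffer) -> Void) {
    let maxBytes = chunks.map { $0.count * MemoryLayout<Float>.stride }.max() ?? 0
    // Two private destination buffers used in ping-pong fashion.
    let dst = (0..<2).compactMap { _ in
        device.makeBuffer(length: maxBytes, options: .storageModePrivate)
    }
    guard dst.count == 2 else { return }

    for (i, chunk) in chunks.enumerated() {
        let bytes = chunk.count * MemoryLayout<Float>.stride
        let slot = i % 2

        // Enqueue the copy without blocking the CPU; on a dGPU the copy engine
        // can run it while the previous chunk's compute is still in flight.
        if let upload = queue.makeCommandBuffer(),
           let blit = upload.makeBlitCommandEncoder(),
           let staging = chunk.withUnsafeBytes({ b in
               device.makeBuffer(bytes: b.baseAddress!, length: bytes, options: .storageModeShared)
           }) {
            blit.copy(from: staging, sourceOffset: 0, to: dst[slot], destinationOffset: 0, size: bytes)
            blit.endEncoding()
            upload.commit()
        }

        // Compute on the buffer that was just written.
        if let compute = queue.makeCommandBuffer() {
            encodeCompute(compute, dst[slot])
            compute.commit()
        }
    }
}
```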
The value would be in actor processes, where you can delegate inference without paying the 'copy tax' for crossing the sandbox boundary.
So, less "inference engine" and more "Tmux for AI agents"
Think pausing, moving, resuming, swapping model backend.
I scoped the post to memory architecture, since it was the least obvious part ... will follow up with one about the actor model aspect.
The whole Apple Silicon thing is (in this case) just added details that don't actually matter.
[1] https://github.com/WebAssembly/memory-control/blob/main/prop...
That's the same no matter the physical memory system architecture.
Also, these folks should be amazed by 8- and 16-bit game development, or game consoles in general.