> We plan to deliver improvements to [..] purging mechanisms
During my time at Facebook, I maintained a bunch of kernel patches to improve jemalloc purging mechanisms. It wasn't popular in the kernel or the security community, but it was more efficient on benchmarks for sure.
Many programs run multiple threads, allocating in one and freeing in another. Jemalloc's primary mechanism used to be: madvise the page back to the kernel and then have it handed out again in another thread's pool.
One problem: this involves zeroing memory, which hurts cache locality and overall app performance. It's completely unnecessary if the page is being recirculated within the same security domain.
The problem was getting everyone to agree on what that security domain is, even if the mechanism was opt-in.
I'm really surprised to see you still hawking this.
We did extensive benchmarking of HHVM with and without your patches, and they made no statistically significant difference in high-level metrics. So we dropped them from the kernel, and they never went back in.
I don't doubt for a second you can come up with specific counterexamples and microbenchmarks which show benefit. But you were unable to show an advantage at the system level when challenged on it, and that's what matters.
You probably weren't there when servers were running for many days at a time.
By the time you joined and benchmarked these systems, continuous rolling deployment had taken over. If you're restarting the server every few hours, of course memory fragmentation isn't much of an issue.
> But you were unable to show an advantage at the system level when challenged on it, and that's what matters.
You mean 5 years after I stopped working on the kernel and the underlying system had changed?
I wouldn't be surprised if both 'adsharma' and 'jcalvinowens' were right, just at different points in time, perhaps in a bit different context. Things change.
I recently started using Microsoft's mimalloc (via an LD_PRELOAD) to better use huge (1 GB) pages in a memory intensive program. The performance gains are significant (around 20%). It feels rather strange using an open source MS library for performance on my Linux system.
There needs to be more competition in the malloc space. Between various huge page sizes and transparent huge pages, there are a lot of gains to be had over what you get from a default GNU libc.
One has to wonder if this is due to the global memory shortage. ("Oh - changing our memory allocator to be more efficient will yield $XXM dollar savings over the next year").
As an Australian who was just made redundant from a role that involved this type of low level programming - I love working on these kinds of challenges.
I'm saddened that the job market in Australia is largely React CRUD applications and that it's unlikely I will find a role that lets me leverage my niche skill set (which is also my hobby)
I remember I was a senior lead softeng on a World Bank funded startup project, and we deployed Ruby with jemalloc in prod. There was a huge, noticeable gain in speed and memory efficiency. It saved us a lot in AWS costs compared to just using normal Ruby. This was 8 years ago - why haven't more projects adopted it as the de facto default yet?
> With the leverage jemalloc provides however, it can be tempting to realize some short-term benefit. It requires strong self-discipline as an organization to resist that temptation and adhere to the core engineering principles.
This doesn't quite read properly to me. What does it actually mean, does anyone know?
It would be great if Meta could sustain support for more open source projects, especially those they benefit from.
For example they use AsmJit in a lot of projects (both internal and open-source) and it's now unmaintained because of funding issues. Maybe they now have internal forks too.
Surprised not to see any mention of the global memory supply shock. Would love to learn more about how those economics are shifting software priorities toward memory allocation for the first time in my (relatively young) career.
Meta never abandoned jemalloc. https://github.com/facebook/jemalloc remained public the entire time. It's my understanding that Jason Evans, the creator of jemalloc, had ownership over the jemalloc/jemalloc repo which is why that one stopped being updated after he left.
A few months back, some of the services switched to jemalloc for the Java VM. It took months (of memory dumps and tracing syscalls) to blame the JVM itself for getting killed by the oom_killer.
Initially the idea was diagnostics; instead the problem disappeared on its own.
First impressions: LOL, the blunt commentary in the HN thread title compared to the PR-speak of the fb.com post.
Second thoughts: Actually the fb.com post is more transparent than I'd have predicted. Not bad at all. Of course it helps that they're delivering good news!
Large engineering orgs often underestimate how much CI pipelines amplify performance issues. Even small inefficiencies multiply when builds run hundreds of times a day.
https://marc.info/?l=linux-kernel&m=132691299630179&w=2
I don't recall ever talking to you on the matter.
Jemalloc Postmortem - https://news.ycombinator.com/item?id=44264958 - June 2025 (233 comments)
Jemalloc Repositories Are Archived - https://news.ycombinator.com/item?id=44161128 - June 2025 (7 comments)
https://technology.blog.gov.uk/2015/12/11/using-jemalloc-to-...