Loved the details about how memory access actually maps addresses to channels, ranks, banks, and so on; this is rarely discussed.
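For anyone curious what that mapping looks like in the abstract, here is a toy sketch. The bit positions below are purely hypothetical; real memory controllers use BIOS- and platform-dependent (often XOR-hashed) interleaving schemes, so treat this as an illustration of the idea, not any actual layout.

```python
# Hypothetical DRAM address decode: which physical-address bits pick
# the channel, bank, and rank. (shift, width) pairs are made up.
CHANNEL_BITS = (6, 1)   # bit 6 selects 1 of 2 channels
BANK_BITS    = (13, 2)  # bits 13-14 select 1 of 4 banks
RANK_BITS    = (15, 1)  # bit 15 selects 1 of 2 ranks

def field(addr, shift, width):
    # Extract a bitfield from a physical address.
    return (addr >> shift) & ((1 << width) - 1)

def decode(addr):
    return {
        "channel": field(addr, *CHANNEL_BITS),
        "bank":    field(addr, *BANK_BITS),
        "rank":    field(addr, *RANK_BITS),
    }

# With this (made-up) scheme, two cache lines 64 bytes apart land on
# different channels -- the de-correlation the racing trick needs.
print(decode(0x0000))  # channel 0
print(decode(0x0040))  # channel 1
```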
Not sure how this works for larger data structures, but my first thought was that this should be implemented as some microcode or instruction.
Most computation is not that jitter sensitive; perception is not really on the nano- to microsecond scale. But maybe it's a cool gadget for something like dtrace or interrupt handlers.
@lauriewired, I think the most interesting thing that I learned from this is that memory refresh causes read/write stalls. For some reason I thought it was completely asynchronous.
But otherwise, nice work tying all the concepts together. You might want to get some better model trains though.
I like the project: taking it from refresh-induced tail latency to racing threads assigned to addresses that are de-correlated by memory channel. Connecting this to a lookup table that is broadcast across memory channels so the lookup paths can race makes for a nice narrative. But framing this as reducing tail latency confused me, because I was expecting a join where a single reader gets the faster of the two racers.
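The "duplicate the table, race the lookups, first answer wins" idea can be sketched in a few lines. Everything here is illustrative: the sleep-based latency model stands in for refresh jitter, and real code would place each replica at addresses on a different memory channel rather than just copying a dict.

```python
# Sketch: race lookups against two replicas of the same table and
# take whichever answer arrives first.
import queue
import random
import threading
import time

replicas = [{"answer": 42}, {"answer": 42}]  # same table, two copies

def lookup(replica, key, out):
    # Random sleep stands in for per-channel refresh jitter.
    time.sleep(random.uniform(0, 0.01))
    out.put(replica[key])

def racing_lookup(key):
    out = queue.Queue()
    for rep in replicas:
        threading.Thread(target=lookup, args=(rep, key, out),
                         daemon=True).start()
    # Blocking get returns the fastest replica's result; the slower
    # racer's answer is simply discarded.
    return out.get()

print(racing_lookup("answer"))  # -> 42
```

This is exactly the "join" shape described above: a single reader blocks on the queue and gets the faster of the two racers.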
From a narrative standpoint, I agree it makes more sense to focus on a duplicated lookup table where the fastest read wins. From an engineering standpoint, however, framing it in terms of channel-de-correlated reads has more possibilities. For example, if you need to evaluate multiple parallel ML models to get a result, then by intentionally partitioning your models by channel you could ensure that a given model reads only fast data or only slow data. ML models might not be that interesting here, though, since they are good candidates for being resident in L3.
But practically speaking, in a real application, isn't any performance benefit going to be lost to the reduced cache hit rate caused by the larger working set?
Or are the reads of all-but-one of the replicas non-cached?
Maybe - but if that’s the case you are likely using the wrong data structure.
Additionally, you are going to starve every other thread/process of memory bandwidth, because you are hogging all the memory channels and making an already bad L3 cache situation worse.
Outside of extremely niche real-time use cases (which would generally fit in L3 cache), I can't see how this would improve overall throughput once you take into account other processes running on the same box.
This addresses the "short long tail" (known, bounded variance due to the multiple physical operations underlying a single logical memory op), but for hard real-time applications the "long long tail" of a correctable ECC error plus scrub may be the critical case.
I wonder if there will be a hardware solution in the future that duplicates memory over multiple channels and gives the first result back transparently without threads and racing.
* Video [2]
1. https://x.com/lauriewired/status/2041566601426956391 (https://xcancel.com/lauriewired/status/2041566601426956391)
2. https://www.youtube.com/watch?v=KKbgulTp3FE
Apologies if I am missing something.
Do you have an example use case?
OT: Tail Slayer. Not Tails Layer. My brain took longer to parse that than I’d have wanted.