
What are skiplists good for? (antithesis.com)

by mfiguiere 70 comments 290 points

[−] cremer 26d ago
Redis sorted sets are probably the most widely deployed example. Redis uses a skiplist for range queries and ordered iteration, paired with a hash table for O(1) lookups. Together they cover the full API at the right complexity for each operation.

Skiplists also win over balanced BSTs when it comes to concurrent access. Lock-free implementations are much simpler to reason about and get right. ConcurrentSkipListMap has been in the standard library since Java 6 for exactly this reason, and it holds up well under high contention.

[−] antirez 25d ago
Yep, and it was simple in Redis to augment them with the "span" in order to support ranking, that is, given an element, tell me its position in the ordered collection.
[−] ignoramous 25d ago

> Skiplists also win over balanced BSTs when it comes to concurrent access.

Balanced Skiplists search better than plain Skiplists, which may skew (but balancing itself is expensive). Also, I've found that finger search (especially with a doubly-linked skiplist with million+ entries), instead of always looking for elements from the root/head, is an even bigger win.
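
A rough sketch of the finger-search idea in Python, with a plain sorted array standing in for the skiplist's bottom lane (the `FingerSearcher` class and its galloping strategy are mine, for illustration, not lifted from any particular implementation):

```python
from bisect import bisect_left

class FingerSearcher:
    """Finger search over a sorted array: keep a "finger" at the last
    accessed position and gallop outward from it, so a lookup costs
    O(log d) in the distance d from the finger instead of O(log n)
    from the head every time. Assumes no duplicate keys."""

    def __init__(self, xs):
        self.xs = xs
        self.finger = 0

    def find(self, v):
        xs, i, step = self.xs, self.finger, 1
        if i < len(xs) and xs[i] <= v:
            # Gallop right: double the step until we pass v or run out.
            while i + step < len(xs) and xs[i + step] <= v:
                step <<= 1
            j = bisect_left(xs, v, i, min(i + step, len(xs)))
        else:
            # Gallop left: double the step until we land at or before v.
            while i - step >= 0 and xs[i - step] > v:
                step <<= 1
            j = bisect_left(xs, v, max(i - step, 0), i)
        self.finger = min(j, len(xs) - 1) if xs else 0
        return j if j < len(xs) and xs[j] == v else None
```

Consecutive lookups for nearby keys then only pay for the short gallop from the previous position, which is exactly where this wins on million-entry structures.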

> ConcurrentSkipListMap has been in the standard library since Java 6 for exactly this reason and it holds up well under high contention

An amusing observation I lifted from OCaml's implementation (which in turn quotes Donald Knuth): MSBs of PRNG values have more "randomness" than LSBs (randomness affects "balance"): https://github.com/ocaml/ocaml/blob/389121d3/runtime/lf_skip...
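
A sketch of what that looks like in Python (hypothetical helper, assuming one 32-bit draw per node, consuming bits from the top):

```python
import random

def random_level(max_level=32, rng=None):
    # Draw a single 32-bit PRNG value and consume bits from the TOP:
    # promote while the current most-significant bit is set. The point
    # quoted above is that for many generators the high bits behave
    # "more randomly" than the low ones, so this gives a cleaner
    # geometric level distribution than testing low bits would.
    rng = rng or random.Random()
    bits = rng.getrandbits(32)
    level = 1
    while level < max_level and (bits >> (32 - level)) & 1:
        level += 1
    return level
```

Each extra level costs one more set high bit, so levels follow a geometric distribution with mean 2.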

---

Some neat refs from our codebase:

- Skip lists: Done right (2016), https://ticki.github.io/blog/skip-lists-done-right / https://archive.is/kwhnG

- An analysis of skip lists, https://eugene-eeo.github.io/blog/skip-lists.html / https://archive.is/ffCDr

- Skip lists, http://web.archive.org/web/20070212103148/http://eternallyco... / https://archive.is/nl3G8

[−] nulltrace 25d ago
Rebalancing is what really kills you. A CAS loop on a flat list is pretty straightforward, you get it working and move on. But rotations? You've got threads mid-insert on nodes you're about to move around. It gets ugly fast. Skiplists just sidestep the whole thing since level assignment is basically a coin flip, nothing you need to keep consistent. Cache locality is worse, sure, but honestly on write-heavy paths I've never seen that be the actual bottleneck.
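
To make the "level assignment is basically a coin flip, nothing to keep consistent" point concrete, here is a minimal single-threaded Python sketch (names mine; no CAS, so it only illustrates the structure, not the lock-free protocol): an insert splices pointers at its per-level predecessors and nothing else, with no rotation-like global fixup.

```python
import random

class Node:
    __slots__ = ("key", "forward")
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * level  # one next-pointer per level

class SkipList:
    MAX_LEVEL = 16

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.head = Node(None, self.MAX_LEVEL)
        self.level = 1

    def _random_level(self):
        # Coin-flip promotion: a node reaches level k with probability 2**-k.
        lvl = 1
        while lvl < self.MAX_LEVEL and self.rng.random() < 0.5:
            lvl += 1
        return lvl

    def insert(self, key):
        # Find the predecessor at every level.
        update = [self.head] * self.MAX_LEVEL
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = Node(key, lvl)
        # Splice in: only the per-level predecessors are touched.
        for i in range(lvl):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def __contains__(self, key):
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key
```
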
[−] zelphirkalt 24d ago
Binary search trees, at least the ones I am thinking of, have known purely functional, and therefore lock-free, implementations. I am currently looking into AVL trees, and they don't seem that complicated to implement, for example.
[−] carlsverre 25d ago
(I used to work at SingleStore, and now work at Antithesis)

SingleStore (f.k.a. MemSQL) used lock-free skiplists extensively as the backing storage of their rowstore tables and indexes. Adam Prout (ex CTO) wrote about it here: https://www.singlestore.com/blog/what-is-skiplist-why-skipli...

When SingleStore added a Columnar storage option (LSM tree), L0 was simply a rowstore table. Since rowstore was already a highly optimized, durable, and large-scale storage engine, it allowed L0 to absorb a highly concurrent transactional write workload. This capability was a key part of SingleStore's ability to handle HTAP workloads. If you want to learn more, take a look at this paper which documents the entire system end-to-end: https://dl.acm.org/doi/10.1145/3514221.3526055
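
A toy sketch of the L0-as-ordered-memtable idea (mine, not SingleStore's actual design): an ordered in-memory level absorbs writes and is flushed as an immutable sorted run once it grows past a threshold. A skiplist would play the role of the sorted in-memory structure; a sorted Python list stands in here.

```python
import bisect

class LSM:
    def __init__(self, flush_at=4):
        self.l0, self.runs, self.flush_at = [], [], flush_at

    def put(self, key, val):
        bisect.insort(self.l0, (key, val))   # ordered in-memory level
        if len(self.l0) >= self.flush_at:
            self.runs.append(self.l0)        # freeze as an immutable run
            self.l0 = []

    def get(self, key):
        # Newest data wins: check L0 first, then runs newest-to-oldest.
        for level in [self.l0] + self.runs[::-1]:
            i = bisect.bisect_left(level, (key,))
            if i < len(level) and level[i][0] == key:
                return level[i][1]
        return None
```
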

[−] reitzensteinm 25d ago
At the intersection of these two topics, does Antithesis have any capabilities around simulating memory ordering to validate lock free algorithms?
[−] carlsverre 25d ago
We support thread-pausing via instrumentation. This can cause threads to observe different interleavings, which can help uncover bugs in concurrent algorithms. At this time, we don't perform specific memory model fault injection or fuzzing.
[−] reitzensteinm 24d ago
I wrote a library called Temper, which simulates the Rust/C++ memory model with atomics in a similar way to Loom. But it goes much deeper on that narrow domain, and to my knowledge it's the most accurate library of its kind with the largest set of test cases.

If you simulate using mock CPU instructions like mfence or LL/SC, there's no guarantee your model fits your ultimately executed program.

Unless, of course, you do something like Antithesis and directly test what was compiled. It's an interesting alternative world.

I've taken the liberty of adding you to LinkedIn - would love to grab a drink next time you're in the SF Bay area.

https://github.com/reitzensteinm/temper

[−] ozgrakkurt 26d ago
Some more links that are inside the article:

- More info about skiplists: https://arxiv.org/pdf/2403.04582

- Performance comparison with B-trees (?): https://db.cs.cmu.edu/papers/2018/mod342-wangA.pdf

- Another blog post from Antithesis about writing their own db: https://antithesis.com/blog/2025/testing_pangolin/

Also I find it a bit hard to understand the performance outcome of this setup.

I know formats like parquet and databases like ClickHouse work better when duplicating data instead of doing joins. I guess BigQuery is similar.

The article is great, but it would also be interesting to learn how performance actually worked out with this.

[−] nz 25d ago
Back in 2014, I did an analysis of (single threaded) CPU-efficiency and RAM-efficiency of various data-structures (skiplists, slablists, avl-trees, rb-trees, b-trees):

https://nickziv.wordpress.com/wp-content/uploads/2014/02/vis...

I used whatever I could find on the internet at the time, so the comparison covers both algorithm and implementation (they were all written in C, but even slight changes to the C code can change performance -- uuavl performs much better than all other avl variants, for example). I suspect that a differently-programmed skip-list would not have performed quite so poorly.

The general conclusion from all this is that any data-structure that can organize itself _around_ page-sizes and cache-sizes will perform very well compared to structures that cannot.

[−] josephg 26d ago
For this problem, I’d consider a different approach. You have a fuzzer, and based on some seed it’s generating lots of records. You then need to query a specific record (or set of records) based on the leaf.

I’d just store a table of records with the leaf, associated with the seed. A good fuzzer is entirely deterministic. So you should be able to regenerate the entire run from simply knowing the seed. Just store a table of {leaf, seed}. Then gather all the seeds which generated the leaf you’re interested in and rerun the fuzzer for those seeds at query time to figure out what choices were made.
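
A tiny Python sketch of the idea (`run_fuzzer` is a hypothetical stand-in for the real fuzzer, here just drawing small "records" from a seeded PRNG):

```python
import random

def run_fuzzer(seed, n=100):
    # A deterministic fuzzer run: the same seed always regenerates
    # exactly the same records.
    rng = random.Random(seed)
    return [rng.randrange(10) for _ in range(n)]

# Write path: store only {leaf: [seeds that produced it]}, not full runs.
index = {}
for seed in range(5):
    for leaf in set(run_fuzzer(seed)):
        index.setdefault(leaf, []).append(seed)

# Read path: re-run only the relevant seeds to recover the full records.
def records_for(leaf):
    return {seed: run_fuzzer(seed) for seed in index.get(leaf, [])}
```
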

[−] _dain_ 25d ago
(I work at Antithesis)

Yes, this is (more or less) how we regenerate the system state, when necessary. But keep in mind that the fuzzing target is a network of containers, plus a whole Linux userland, plus the kernel. And these workloads often run for many minutes in each timeline. Regenerating the entire state from t=0 would be far too computationally intensive on the "read path", when all you want are the logs leading up to some event. We only do it on the "write path", when there's a need to interact with the system by creating new branching timelines. And even then, we have some smart snapshotting so that you're not always paying the full time cost from t=0; we trade off more memory usage for lower latency.

Oh one other thing: the "fuzzer" component itself is not fully deterministic. It can't be, because it also has to forward arbitrary user input into the simulation component (which is deterministic). If you decide to rewind to some moment and run a shell command, that's an input which can't be recovered from a fixed random seed. So in practice we explicitly store all the inputs that were fed in.

[−] bob1029 26d ago
On practical machines they aren't good for much. To access a value in a skip list you have to dereference way more pointers than in a b+ tree. On paper they're about the same, but in practice the b+ tree will tend to outperform. You get way more work done per IO operation.
[−] marginalia_nu 26d ago
Skiplists are designed for fast intersection, not for single value lookup (assuming a sane design that's not based on linked lists; the linked-list version is just an educational device that's never used in practice).

They are extremely good at intersections, as you can use the skip pointers in clever ways to skip ahead and eliminate whole swathes of values. You can kinda do that with b-trees[1] as well, but skip lists can beat them out in many cases.

It's highly dependent on the shape of the data though. For random numbers, it's probably an even match, but for postings lists and the like (where skiplists are often used), they perform extremely well as these often have runs of semiconsecutive values being intersected.

[1] If you squint a bit, a skiplist is arguably a sort of degenerate b-tree.
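
The skip-ahead effect can be sketched with plain sorted lists, using binary search as a stand-in for following skip pointers (illustrative only; real postings-list implementations gallop over on-disk skip blocks):

```python
from bisect import bisect_left

def intersect(a, b):
    # Galloping intersection of two sorted postings lists: instead of
    # advancing b one element at a time, binary-search ahead to the next
    # candidate -- the same "eliminate whole swathes of values" effect
    # skip pointers give you, which pays off on runs of semiconsecutive
    # values with no matches.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        j = bisect_left(b, a[i], j)      # leap b forward to first >= a[i]
        if j < len(b) and b[j] == a[i]:
            out.append(a[i])
            j += 1
        i += 1
    return out
```
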

[−] namibj 25d ago
B(+)Trees do actually admit fast intersection: they offer a more powerful mutual index join projected onto a shared keyspace (technically it can even do an antijoin, though that modifies the iteration more than a very generalized inner join does). Basically, whenever you look at a key in any one of the involved indices, you project it into the shared keyspace before doing the comparison-based search:

You get cache locality from the upper layers. The navigation loop, roughly (imagine the cursors are lambda-parametrized so they feel like they operate on the projected shared keyspace):

    let mut head = keyspace.min();
    'outer: while !cursors[0].finished() {
        for cursor in cursors.iter_mut() {
            let new_head = cursor.seek_to_target_or_next_after_if_none_match(head);
            if head != new_head {
                head = new_head; // a cursor overshot: new candidate key
                continue 'outer;
            }
        }
        // Passed all cursors without any of them seeking past the target.
        output_fun(head, cursors.iter().map(|c| c.val()));
        head = cursors[0].next_after(head); // advance past the emitted key
    }

If you want, you can do the inner loop's seeks concurrently, which helps if those are IO-latency bound and you can afford to waste absolute IOPS by issuing them eagerly; in that case you locally compute the max() of the returned keys and assign that to head.

If the keys are a bitstring prefix suited to a binary prefix trie, you can actually intersect that way; it's beyond worst-case optimal when multiple key columns are involved. Sadly, any simple implementation strategy for those algorithms has prohibitive external-memory-machine coefficients on its nominally poly-logarithmic IOPS, due to the combinatorial explosion / curse of dimensionality in search tree/trie structures. They do work, though. Cf. "Tetris-LoadBalanced"/"Tetris-Reordered".

The latter even tames the case where one index contains "all" the even numbers and the other "all" the odd numbers; it matters more once you involve 3+ columns :D

[−] torginus 25d ago
I don't understand how they differ in this regard from range trees, which they essentially are; only their method of construction is different.

Things like BSP trees are very good at intersections indeed, and have been used for things since time immemorial, but I think the skiplist/tree tradeoff is not that different in this domain.

[−] senderista 25d ago
Treaps can handle parallel set operations very efficiently:

https://www.cs.cmu.edu/~scandal/papers/treaps-spaa98.pdf

[−] mananaysiempre 25d ago
Maestro Tarjan tells us skiplists and treaps are very nearly the same thing: https://arxiv.org/abs/1806.06726. I don’t see how to transpose TFA’s extension of the former to the latter, though.
[−] EGreg 25d ago
Actually, prolly trees are probably best for intersections. You can use bloom filters as a first pass
[−] ahartmetz 26d ago
Skiplists have some nice properties - the code is fairly short and easy to understand, for one. Qt's QMap used to be skip list based, here's the rationale given for it: https://doc.qt.io/archives/qq/qq19-containers.html#associati...
[−] dwdz 26d ago
It seems like Qt went from red-black tree to skip list in Qt4 and back to red-black tree in Qt5.
[−] torginus 25d ago
Yeah, it turns out that complex code, when it's properly encapsulated and implemented in a bug-free manner, is not such a cost after all.

A correct skiplist is easier to NIH than a correct red-black tree (which for me was the final boss of the DS class in college), but has performance edge cases a red-black tree doesn't, if you treat it like a search tree.

[−] winwang 26d ago
Only somewhat related but there is supposedly a SIMD/GPU-friendly skiplist algo written about here: https://csaws.cs.technion.ac.il/~erez/Papers/GPUSkiplist.pdf
[−] locknitpicker 26d ago
FTA:

> Skiplists to the rescue! Or rather, a weird thing we invented called a “skiptree”…

I can't help but wonder. The article makes no mention of b-trees of any kind. To me, this sounded like the obvious first step.

If their main requirement was sequential access to load data, and their problem was how to speed up traversal of an ad-hoc tree data structure that was too deep, then I wonder if their problem was simply having tree nodes with too few children. A B+ tree specifically sounds right on the nose for the task.

[−] torginus 25d ago

> What are skiplists good for?

In practice, I have found, not much. Their appeal comes from being simpler to implement than self-balancing trees, while claiming to offer the same performance.

But they completely lack a mechanism for rebalancing, they are incredibly pointer-heavy (in this implementation at least), and inserts/deletes can involve an ungodly amount of pointer patching.

While I think there are some append-heavy access patterns where it can come up on top, I have found that the gap between using a BST, a hashtable, or just putting stuff in an array and just sorting it when needed is very small.

[−] mrjn 26d ago
Skiplists form the basis of the in-memory tables (memtables) used by LSM trees, which are themselves the basis of most modern DBs (written post 2005).
[−] teiferer 26d ago
In the age of agentic programming and the ever increasing pressure to ship faster, I'm afraid this kind of knowledge will become more and more fringe, even more so than it is today. Who has the time to think through the intricacies of parallel data structures? Clearly we'll just throw more hardware at problems, write yet another service/api/http endpoint and move on to the next hype. The LLM figures out the algorithms and we soon lose the skills to develop new ones. And tell each other the sci-fi BS myth that "AI" will invent new data structures in the future so we don't even need humans in the loop.
[−] tooltower 26d ago
In my personal projects, I've used it to insert/delete transactions in a ledger. I wanted to be able to update/query the account balance fast. Like the article says, "fold operations".
[−] talideon 25d ago
Plenty, but these days mostly if you either (a) want a simple implementation or (b) don't have to worry much about cache locality. The problem is that (b) doesn't really exist outside of retrocomputing and embedded systems these days. It's still one of my favourite data structures, just because it's a clever way to get most of the benefits of more complicated datastructures on small systems with minimal code.
[−] medbar 26d ago
Skiplist operations are local for the most part, which makes them easier to write thread-safe code for than b-trees in practice. Anecdotally, they were a nice implementation problem for my Java class in uni. But I liked working with b-lists more.

Skip trees/graphs sound interesting, but I can't think of any use case for them off the top of my head.

[−] shawn_w 26d ago
Random access with similar performance to a balanced binary tree, and ordered iteration as simple as a linked list. It's a nice combination. (Of course, so is a binary search of a sorted array, which I lean more towards these days unless doing a lot of random insertions and deletions throughout the life of the mapping).
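
The sorted-array alternative mentioned here, sketched with Python's stdlib `bisect` module: O(log n) lookups and trivially ordered iteration, at the cost of O(n) element shifting per insert, which is why it only wins when mutations are rare relative to lookups.

```python
from bisect import bisect_left, insort

xs = []
for v in [5, 1, 4, 2, 3]:
    insort(xs, v)              # keeps xs sorted as we insert (O(n) shift)

def contains(xs, v):
    # Binary search for membership in the sorted array.
    i = bisect_left(xs, v)
    return i < len(xs) and xs[i] == v
```
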
[−] maxtaco 25d ago
Backpointers to earlier epochs in append-only cryptographic data structures like key transparency logs. If the client last fetched epoch 1000, and the server reports the current epoch is 3000, the server can return log(2000) intermediate epochs, following skip pointers, to provide a hash chain from epoch 3000 to 1000.

https://github.com/foks-proj/go-foks/blob/main/lib/merkle/bi... https://github.com/foks-proj/go-foks/blob/main/lib/merkle/cl...
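
A sketch of the back-pointer walk under an assumed (hypothetical, for illustration) scheme where every epoch n stores back-pointers to n - 2**k for each power of two: to get from a later epoch down to an earlier one, always take the largest jump that doesn't overshoot, giving O(log(frm - to)) hops, each one a hash-chain link the client can verify.

```python
def skip_path(frm, to):
    # Walk from epoch `frm` back to epoch `to` via power-of-two jumps.
    path, cur = [frm], frm
    while cur > to:
        jump = 1
        # Largest power-of-two jump that still keeps us at or above `to`.
        while cur - (jump << 1) >= to:
            jump <<= 1
        cur -= jump
        path.append(cur)
    return path
```
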

[−] aaa_aaa 26d ago
Almost nothing. My friend and I used it once (in a rather obscure problem). Then used simple lists with some tricks with better performance because of the locality etc.
[−] einpoklum 25d ago

> Every insert would need to write to both systems, and since we want to analyze the data online (while new writes are streaming in) keeping the two databases consistent would require something like two-phase commit (2PC).

Not convincing. One can write the bulk data, which is at first unused - no need to sync anything. Then one writes to the tree DB, where each node only stores a key referencing the relevant data.

[−] torben-friis 26d ago
Could someone provide intuitive understanding for why the "express lanes" in a skip list are created probabilistically?

My first instinctive idea would be that there is an optimal distance, maybe based on absolute distance or as a function of list size or frequency of access or whatever. Leaving the promotion to randomness is counterintuitive to me.

[−] siddsukh 25d ago
The skiptree is a great example of constraint-driven invention - you didn't set out to build a new data structure, you had a specific constraint (BigQuery's poor point lookups) and the solution naturally emerged from combining two existing ideas. The part about writing a JavaScript compiler to generate kilobyte-scale SQL queries is underappreciated, too. Most engineers would have switched databases, but building a compiler when output is too complex to write by hand is almost always the right call.
[−] fnordpiglet 26d ago
A major global bank operated all trading, especially the complex stuff, off of a globally replicated skip list.
[−] m00dy 26d ago
It was really cool to mention its name during tech interviews but not anymore I guess.
[−] ncruces 25d ago
I guess I'm missing the point of this, so I'm probably looking at it wrong.

If you're saving multiple ancestors in ancestors_between why not save all of them?

And if the goal is to access the info for all of the ancestors, what makes it more likely than average that your ancestors aren't on the same layer as yourself (i.e. that e.g. half of them are in ancestors_between)?

Because, if a fixed ratio (50% or even 10%) of an arbitrary node's ancestors is at the same layer, isn't the complexity still the same (only reduced by a constant factor)?

[−] calvinmorrison 25d ago
Cyrus uses skip lists for its internal db structs.
[−] hpcgroup 25d ago
[dead]
[−] linzhangrun 26d ago
[flagged]
[−] jimmypk 25d ago
[flagged]
[−] feverzsj 26d ago
If you need a graph db, use a graph db.