Lean proved this program correct; then I found a bug (kirancodes.me)

by bumbledraven 178 comments 395 points

[−] ctmnt 31d ago
This article’s framing and title are odd. The author, in fact, found no bugs or errors in the proven code. She says so at the end of the article:

> The two bugs that were found both sat outside the boundary of what the proofs cover. The denial-of-service was a missing specification. The heap overflow was a deeper issue in the trusted computing base, the C++ runtime that the entire proof edifice assumes is correct.

Still an interesting and useful result to find a bug in the Lean runtime, but I’d argue that doesn’t justify the title. Or the claim that “the entire proof edifice” is somehow shaky.

It’s important to note that this is the Lean runtime that has a bug, not the Lean kernel, which is the part that actually does the verification (aka proving). [1] So it’s not even immediately clear what this bug would really apply to, since obviously no one’s running any compiled Lean code in any kind of production hot path.

[1] https://lean-lang.org/doc/reference/latest/Elaboration-and-C...

[−] gopiandcode 31d ago
Repeating myself, when we speak of bugs in a verified software system, I think it's fair to consider the entire binary a fair target.

If a buffer overflow causes the system to be exploited and all your bitcoins to be stolen, I don't think the fact that the bug is in the language runtime is going to be much consolation. Especially if the software you were running was advertised as formally verified as free of bugs.

Second, there was a bug in the code. Maybe not a functional correctness bug, but I, along with most end users, would consider a crashing program buggy. Maybe we just have different tastes or different standards on what we consider an acceptable level of software quality.

W.r.t people running Lean in production, you'd be surprised...

[−] danparsonson 31d ago
We're not speaking about bugs in a verified system so much as writing articles making specific claims about that. Surely if we're at the level of precision of formal verification, it's incumbent upon us to be precise about the nature of a problem with it, no? "Lean proved this program correct and then I found a bug" heavily implies a flaw in the proof, not a flaw in the runtime (which to my mind would also be a compelling statement, for the reasons you describe).
[−] SAI_Peregrinus 31d ago
Or it implies a bug in the specification. The spec differing from the intent is a frequent source of bugs, it doesn't matter what language the spec is written in. Most people have experience with (or have seen news stories about) specification bugs in natural-language specifications: legal loopholes!
[−] spinningslate 31d ago
This is the biggest risk with the rejuvenated interest in formal proof. That LLMs can generate proofs is useful. Proof assistants that can check them (Lean/FStar/Isabelle/...) similarly so.

But it just moves the question to whether the theorems covered in the proof are sufficient. Underlying it all is a simple question:

Does the system meet its intended purpose?

To which the next question is:

What is the intended purpose?

Describing that is the holy grail of requirements specification. Natural language, behaviour-driven development, test-driven development and a host of other approaches attempt to bridge the gap between implicit purpose and explicit specification. Proof assistants are another tool in that box.

It's also one of the key motivators for iterative development: putting software in front of users (or their proxies) is still the primary means of validation for a large class of systems.

None of which is implied criticism of any of those approaches. Equally, none completely solves the problem. There is a risk that formal proofs, combined with proof assistants, are trumpeted as "the way" to mitigate the risk that LLM-developed apps don't perform as intended.

They might help. They can show that code is correct with respect to some specification, and that the specification is self-consistent. They cannot prove that the specification is complete with regard to its intended purpose.

[−] gopiandcode 31d ago
Sorry, I'm not sure I follow. We are talking about bugs in a verified system, that is, in this case, a verified implementation of a zlib-based compression tool. Did it have bugs? Yes, several in fact. I'd recommend reading the article for a detailed listing of the bugs in the tool.
[−] danparsonson 31d ago
You give the answers to this in the article:

> The most substantial finding was a heap buffer overflow! but, not in lean-zip's code, but in the Lean runtime itself. (emphasis mine)

> The OOM denial-of-service is straightforward: the archive parser was never verified. (me again)

"Lean proved this program correct" vs "I found bugs in the parts that Lean didn't prove was correct". The failure was not in Lean's proof (which as I said is heavily implied by the title), but in ancillary or unverified code.

Do I misunderstand how Lean works? I am by no means an expert (or even an amateur) on Lean or formal systems in general. Surely the first class of bug could be found in any code that uses the framework, and the second class of bug could happen to any system that isn't proven? Does that make sense? Otherwise where's the boundary? If you find a bug in the OS, does that mean Lean failed somehow? How about the hardware? If your definition of a 'formally verified system' goes beyond the code being verified and the most important thing is whether or not you can make it crash, then the OS and the hardware are also part of the system.

Of course bugs are important to users regardless of the cause, but the audience for your article seems to be software engineers and as a software engineer, your article was interesting and you found something useful, but the title was misleading; that's all I'm saying.

[−] danparsonson 31d ago
Further to my earlier reply - a more succinct way to look at it might be:

- When they fix the runtime, bug A goes away. So the proof still holds and the zlib code is still correct.

- When they add a system of proofs for the parser and modify that, then bug B goes away. So the proof still holds and the zlib code is still correct; and now more of the library is proven correct.

The formulation of the title is "I was told X but that's not true"... but that's not true. You were told X, and X is true, but you found Y and Z which are also important.

[−] amoss 31d ago
What is the program?

There are two different answers to this question, and which one is "correct" depends entirely on the context of who is asking it.

1. It's the code that is specific to this program that sits above the run-time layer (internal view, that most programmers would take).

2. It's the code in the binary that is executed (external view, that most users would take).

The key question does not seem to be "was the proof correct", but rather "did the proof cover everything in the program". The answer depends on whether you are looking at it from the perspective of a programmer, or a user. Given the overly strong framing that the article is responding to, highlighting the difference in this way does seem to be useful. The title is correct from the perspective that most users would take.

[−] danparsonson 31d ago
Yes but, without wishing to be snarky, did you read the article? There is no program as such, in either sense - the announcement from Lean only mentions "a C compression library" (zlib). Not only that, but since we're talking about formal verification, a programmer would likely understand that that is about proving a bounded, specific codebase at source code level, and not operating on a binary along with its associated dependencies (again caveat my limited understanding of these things).

My feeling is that if you told the average non-technical user that some person/organisation had produced a formally verified version of a C compression library, you would likely get a blank look, so I think it's reasonable to assume that both Lean's intended audience, and the audience of the blog post linked here, correspond with number 1. in your list.

[−] amoss 31d ago
The article describes fuzzing the library, and this requires a compiled program to execute. Typically fuzzing involves a minimal harness around the payload (a single call into the library in this case). There is clearly a bug in this program, and it does not exist in the minimal harness. It must be in the library code, which was covered by the proof.

The bounded, specific codebase that you refer to is typically the library *and all of its dependencies*, which in this case includes the Lean runtime. This is why formal verification is difficult: the proof chain needs to extend all the way down to the foundations. In this case the original gushing claim that everything was verified is incorrect and premature. The article seems like a good exposition of why.

[−] StilesCrisis 30d ago
I am pretty sure you could tell a teenager "there's a ZIP compression program that's scientifically proven to have no bugs" and they'd understand you. People don't have to be CS experts to understand that. (Technically it's Gzip but that's mostly irrelevant to understanding the claim here)
[−] nathell 31d ago
Gentle reminder about this excerpt from HN Guidelines:

> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".

[−] vntok 31d ago
Say you study some piece of software. And it happens that it has an automated suite of tests. And it happens that some files aren't covered by the test suite. And you happen to find a bug in one of those files that were not covered.

Would you publish a blog post titled "the XXX test suite proved there was no bug. And then I found one"?

It would be a bit silly, right?

[−] tpoacher 31d ago
You're technically right, but what things are versus how they're promoted or understood by most people (rightfully or not) often diverges, and therefore such "grounding" articles are useful, even if the wording addresses the perceived rather than the factual reality.

By way of analogy, if there was an article saying "I bought a 1TB drive and it only came with 0.91 terabytes", I think if you started explaining that technically the problem is the confusion between SI vs binary units and the title should really be "I bought a 0.91-terabyte drive and was disappointed it didn't have more capacity", technically you'd be right, but people would rightfully eyeroll at you.

[−] Someone 31d ago

> when we speak of bugs in a verified software system, I think it's fair to consider the entire binary a fair target.

I agree, but it’s not fair to imply that the verification was incorrect if the problem lies elsewhere.

This is a nice example of how careful you have to be to build a truly verified system.

[−] saithound 31d ago

> Repeating myself, when we speak of bugs in a verified software system, I think it's fair to consider the entire binary a fair target.

Yes, and that would be relevant if this was a verified software system. But it wasn't: the system consisted of a verified X and unverified Y, and there were issues in the unverified Y.

The article explicitly acknowledges this: "The two bugs that were found both sat outside the boundary of what the proofs cover."

[−] germandiago 31d ago

> I don't think the fact that the bug is in the language runtime is going to be much consolation. Especially if the software you were running was advertised as formally verified as free of bugs.

Reminds me of what some people in the Rust community do: they argue over how safe this or that is. I always counter that the code is composed of layers, of which unsafe is going to be one. So yes, you are right to say that. Unsafe means unsafe, safe means safe, and we should respect the meaning of those words instead of twisting them for marketing (though I must say I heard this from people in the community, never from the authors themselves).

[−] monocasa 31d ago
Totally agreed. For instance, that's why seL4 just throws the whole binary at the proof engine. That takes any runtime (minimal in seL4's case) and compiler (not nearly as minimal) out of the TCB.
[−] ctmnt 31d ago
Hi Kiran, thanks for following up. FWIW, I enjoy your blog and your work. And I do think it was a valuable bug you found; also nice to see how quickly Henrik fixed it.

Say more about people running Lean in production. I haven’t run into any. I know of examples of people using Lean to help verify other code (Cedar and Aeneas being the most prominent examples), but not the actual runtime being employed.

I took a quick scan of lean-lang.org just now, and, other than the two examples I mentioned, didn’t see a single reference to anything other than proving math.

I’m sure you’re in the Lean Zulip, based on what you’ve been up to. Are you seeing people talk about anything other than math? I’m not, but maybe I’m missing it.

[−] IshKebab 31d ago

> if the software you were running was advertised as formally verified as free of bugs.

Nobody should be advertising that. Even ignoring the possibility of bugs in the runtime, there could also be bugs in the verifier and bugs or omissions in the specification. Formally verified never means guaranteed to be free of bugs.

[−] dyauspitr 31d ago
Well then formally verify the language system. I’m not sure what the confusion is. They didn’t say the whole system is formally verified.
[−] koito17 31d ago
Yeah, the title made me think the author found a bug in the Lean kernel, thus making an invalid proof pass Lean's checks. The article instead uncovers bugs in the Lean runtime and lean-zip, but these bugs are less damning than one in e.g. the kernel, which must be trusted to be correct, or else you can't trust any proof in Lean.
[−] dchftcs 31d ago
A missing specification in the proof of lean-zip, a Lean component, is a real problem for the philosophy and practice of software verification.

To illustrate, let's say you want to verify a "Hello world" program. You'd think a verification involves checking that it outputs "Hello, world!".

However, if a contractor or AI hands you a binary, what do you need to verify? You will need to verify that it does exactly print "Hello, world!", no more, no less. It should write to stdout, not stderr. It shouldn't somehow hold a lock on a system resource that it can't clean up. It cannot secretly install a rootkit. It cannot try to read your credentials and send them somewhere. So you will need to specify the proof to a sufficient level of detail to capture those potential deviations.
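To see how thin a naive spec can be, here is a toy Lean sketch (all names invented for illustration, nothing from any real verification effort): the property constrains only the returned string, so a proof of it says nothing about stderr, locks, file writes, or anything else the binary might do.

```lean
-- Toy spec: the program's output equals the greeting. Nothing here
-- mentions stderr, held locks, or side channels, so a proof of
-- `helloSpec` is silent on all of those behaviors.
def helloSpec (run : Unit → String) : Prop :=
  run () = "Hello, world!"

-- A trivially conforming implementation.
def hello : Unit → String := fun _ => "Hello, world!"

-- Proof by definitional unfolding: both sides reduce to the same string.
theorem hello_meets_spec : helloSpec hello := rfl
```

A malicious binary that also phones home would satisfy exactly the same spec.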

Broadly, with both bugs, you need to ask a question: does this bug actually invalidate my belief that the program is "good"? Pointing out that the bug isn't in the Lean kernel assumes there's no side effect that bleeds over the abstraction boundary between the runtime and the kernel and affects the correctness of the proof. That safety guarantee is probably true 99.99% of the time, but if the bug causes a memory corruption, you'd be much less confident in it.

If you're really serious about verifying an unknown program, you will think hard about "what is missing from my spec?" And the answer will depend on things that are fuzzier than the Lean proof.

Now, pragmatically, there are many ways a proof of correctness adds a lot of value. If you have the source code of a program, and you control the compiler, you can check that the source code doesn't have weird imports ("why do I need kernel networking headers in this dumb calculator program?"), so the scope of the proof will be smaller, and you can write a small specification to prove it, and the proof will be pretty convincing.

All in all, this is a toy problem that tells you: you can't verify what you don't know you should verify, and what you need to verify depends on the prior distribution over what the program you need to verify might be, so that, conditional on the proof you have, the probability of correctness is sufficiently close to 1. There's a lesson to learn here, even if we deem Lean still a good thing to use.

[−] raincole 31d ago
I found the list of articles on this site amusing:

https://kirancodes.me/posts/log-who-watches-the-watchers.htm...

You can see the clickbaitiness increases over time.

[−] NewsaHackO 31d ago
To me, saying that a bug in the Lean runtime means lean-zip has a bug is like saying a bug in the JRE means a Java app that uses the runtime has a bug, even though the Java app code itself doesn't have a bug. It seems like the author is being intentionally misleading about his findings.
[−] dbdr 31d ago

> obviously no one’s running any compiled Lean code in any kind of production hot path

Ignorant question: why not? Is there an unacceptable performance penalty? And what's the recommended way in that case to make use of proven Lean code in production that keeps the same guarantees?

[−] kuruczgy 31d ago

> It’s important to note that this is the Lean runtime that has a bug, not the Lean kernel, which is the part that actually does the verification (aka proving). [1] So it’s not even immediately clear what this bug would really apply to

Well, Lean is written in Lean, so I am pretty sure a runtime bug like this could be exploited to prove False. Yes, since the kernel is written in C++, technically it's not the part affected by this bug, but since you cannot use the Lean kernel without the Lean compiler, this difference does not matter.

[−] thomastjeffery 31d ago
Misalignment of trust is a category of bug in my book; right next to logic error.
[−] armchairhacker 31d ago
“‘Proven’ code turns out buggy, because of a bug outside the proven specification” is relatively common. It happened in CompCert: https://lobste.rs/s/qlrh1u/runtime_error_formally_verified
[−] fainpul 31d ago

> This article’s framing and title are odd.

It's called clickbait.

[−] porcoda 31d ago
I’ve had similar experiences with code I’ve proven correct, although my issues were of the more common variety than the overflow issue - subtle spec bugs. (I think the post mentions the denial of service issue as related to this: a spec gap)

If you have a spec that isn't correct, you can certainly write code that conforms to that spec and write proofs to support it. It just means you have verified a program that does something other than what you intended. This is one of the harder parts of verification: clearly expressing your intention as a human. As programs get more complex these specs get harder to write, which means it isn't uncommon to have Lean or Rocq proofs for everything only to later find "nope, it has a bug that ultimately traces back to a subtle specification defect." Once you've gone through this a few times you quickly realize that tools like Lean and Rocq are tricky to use effectively.

I kinda worry that the "proof assistants will fix AI correctness" narrative will lead to a false sense of assurance if the specs that capture human intention don't get scrutinized closely. Otherwise we'll likely have lots of proofs for code that isn't the code the humans actually intended, due to spec flaws.

[−] grg0 31d ago
Clickbait title: the proved part of the program had no bugs?

As an aside, why can't people just write factually? This isn't a news site gamed for ad revenue. It's also less effort. I felt this post was mostly an insulting waste of time. I come to HN to read interesting stuff.

[−] ajaystream 31d ago
The spec-completeness problem here is the same one that bites distributed systems verification: the proof holds inside an operating envelope (no adversarial inputs, trusted runtime, bounded sizes), and the interesting failures live at the boundary. TLA+ has the same property - you can prove liveness under a fairness assumption the deployment silently violates, and nothing in the proof tells you when reality drifted outside.

What I'd actually want from the tooling is a machine-checkable statement of the envelope itself, propagated as a runtime guard rather than a compile-time comment. Then "proof holds" and "we are still inside the proof's domain" are two separate, observable properties, and the unverified-parser / unverified-runtime cases stop being invisible.
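One hedged sketch of what that could look like, in toy Lean (all names hypothetical; this is not an existing Lean or TLA+ feature): state the envelope once as data, derive both an executable guard and the theorem's hypothesis from it.

```lean
-- Hypothetical "operating envelope": the proof's domain, stated as data.
structure Envelope where
  maxArchiveBytes : Nat
  maxEntries      : Nat

-- Executable guard: are we still inside the proof's domain? `decide`
-- turns the decidable proposition into a runtime-checkable Bool.
def inEnvelope (e : Envelope) (bytes entries : Nat) : Bool :=
  decide (bytes ≤ e.maxArchiveBytes ∧ entries ≤ e.maxEntries)

-- A correctness theorem would then take `inEnvelope e bytes entries = true`
-- as an explicit hypothesis, so "proof holds" and "inputs are inside the
-- envelope" become two separately observable properties.
```

Deployments could then log or reject inputs for which the guard is false, instead of silently drifting outside the proof's assumptions.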

[−] DoctorOetker 31d ago
So this is very good news and was predictable.

LLMs are capable of producing code that passes formal verification.

The writing is on the wall: in the future more and more software on the abstract or platonic side of our computing base will be hermetically sealed against bugs and exploits. This quenching of bugs in the assured side will shift the mean location of bugs closer to the hardware side: at some point bugs and exploits will rely more and more on hardware quirks, and simply unspecified hardware.

Afterwards we can expect a long exponential decay of preventable safety violations: people mistakenly or surreptitiously disengaging the formal verification steps and shipping malicious or unverified code. Each such event will be its own big or small scandal; at some point there will be no deniability left: something must be on purpose, either a malicious vulnerability or intentional disengagement of safety measures.

As the attack surface recedes towards the lower levels of the hardware stack, it will open the debate that the community needs proper formal hardware descriptions (at least at the interface initially, not necessarily how the hardware has implemented it). As interface bugs get formalized, three things can happen:

either vulnerabilities go extinct, and full formal hardware descriptions are not released;

or vulnerabilities remain in each new generation of hardware, and malicious intent or negligence on behalf of the manufacturer can only be presumed; this will set the community against manufacturers, as they demand full hardware descriptions (Verilog, VHDL, ...);

or vulnerabilities are tactically applied (vulnerabilities appear extinct to the bulk of the population, but only because manufactured vulnerabilities are sparingly exploited by the manufacturing bloc).

It is hard to predict which is more valuable: embedding HW vulnerabilities for the status quo and being able to exploit them for a while before the public demands full hardware IP descriptions (Verilog, VHDL, etc.), or facing the end of vulnerabilities a little sooner but keeping hardware IP private (if the bugs stop with full interface descriptions).

[−] davesque 31d ago
Am I reading the article wrong? It appears that the author did not test the claims of the proof. Wouldn't a "bug" in this case mean she found an input that did not survive a round trip through the compression algorithm?

Update: Actually, I guess this may have been her point: "The two bugs that were found both sat outside the boundary of what the proofs cover." So then I guess the title might be a bit clickbaity.

[−] mindcrime 31d ago
"Beware of bugs in the above code; I have only proved it correct, not tried it." -- Donald Knuth
[−] lmm 31d ago
Not verifying the parser seems like a pretty big oversight. Parsing binary formats is notoriously dangerous!
[−] akoboldfrying 31d ago
Nice work. Amusing that Lean's own standard library has a buffer overflow bug, which "leaked out" due to being exempted from the verification.

Regarding the DoS in the lean-zip application itself: I think this is a really neat example of the difficult problem of spec completeness, which is a subcase of the general problem (mentioned by porcoda in a sibling comment) of being sure that the spec is checking the right things. For a compression program, the natural, and I would also say satisfyingly beautiful thing to prove is that decomp(comp(x)) = x for all possible inputs x. It's tempting to think that at that point, "It's proven!" But of course the real world can call decomp() on something that has never been through comp() at all, and this simple, beautiful spec is completely silent on what will happen then.
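To make the shape of that spec concrete, here is a toy Lean sketch (identity functions standing in for real compression; none of this is lean-zip's actual API): the theorem quantifies only over outputs of comp, so it constrains decomp on nothing else.

```lean
-- Toy stand-ins for compression. Real implementations would be far more
-- complex, but the spec's shape is the same.
def comp (x : List UInt8) : List UInt8 := x
def decomp (y : List UInt8) : List UInt8 := y

-- The round-trip property: decompressing any compressed input returns
-- the original. Note what's quantified: nothing is said about
-- `decomp y` for a `y` that never came out of `comp`.
theorem round_trip (x : List UInt8) : decomp (comp x) = x := rfl
```

A hostile archive handed straight to decomp lives entirely outside this theorem's reach.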

[−] vatsachak 31d ago
You guys are missing the forest for the trees.

They used an AI agent sending ideas to a fuzzer and discovered a heap buffer overflow in Lean. This is big.

[−] spullara 31d ago
Claude making a statement that sounds impressive, but it is actually the first codebase it has ever analyzed.

"This is genuinely one of the most memory-safe codebases I've analyzed."

[−] Animats 31d ago
Compression/decompression is a good problem for proof of correctness. The specification is very simple (you must get back what you put in), while the implementation is complex.

What seems to have happened here is that the storage allocator underneath is unverified. That, too, has a relatively simple spec - all buffers disjoint, no lost buffers, no crashes.
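Stated as properties, that allocator spec might look something like this toy Lean sketch (all names invented; not any real allocator's model):

```lean
-- A toy model of allocator state: live buffers as (address, size) pairs.
structure AllocState where
  buffers : List (Nat × Nat)

-- Two buffers are disjoint if their address ranges don't overlap.
def disjointBuf (a b : Nat × Nat) : Prop :=
  a.1 + a.2 ≤ b.1 ∨ b.1 + b.2 ≤ a.1

-- The "all buffers disjoint" part of the spec: every pair of distinct
-- live buffers occupies non-overlapping memory.
def allDisjoint (s : AllocState) : Prop :=
  ∀ a ∈ s.buffers, ∀ b ∈ s.buffers, a ≠ b → disjointBuf a b
```

"No lost buffers" and "no crashes" would be further invariants over how allocation and free transform the state, which is where the real proof effort lives.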

[−] youknownothing 31d ago
I'll probably get a lot of hate mail for this but here goes nothing... Despite what many people like to claim, you cannot prove that a program has no bugs. That means proving the absence of bugs, and you cannot prove a negative. The best thing you can do is fail to find a bug, but that doesn't mean it isn't there.

Before everyone starts blabbing about formal verification, etc., consider this: how do you know that you didn't make a mistake in your formal verification? IOW, how do you know your formal verification is bug-free? Answer: you don't. Or if you try to formally verify your formal verification then you're just translating the problem to a new layer. It's just a chain of proofs that is always ultimately based on an unproven one, which invalidates the whole chain.

You can get asymptotically close to zero-bug proof, but you can never get there 100% of the way.

[−] readthenotes1 31d ago
I didn't like the clickbait title. I would have preferred something along the lines of

"Lean proves other program correct but not itself"

[−] rhdunn 31d ago
Tools like this (formal verification, sparse, etc.) and built-in mechanisms (types, generics, RAII, Rust's borrow checker) can only catch issues within the scope of that framework. There are also trade-offs and limitations with each type of tool and each implementation.

Type checking allows you to (outside of type casting, such as in languages like C/C++, and casting to Object for generic containers in Java) verify that an object is of a given type. That lets you be sure that a well-formed program isn't doing things like putting a random object in a list.

Languages like C#, Scala, and Kotlin improve on Java's generics by making the generic type of a container or other interface/type part of the type system itself. This allows nested generic types to preserve the inner type, which makes it possible to implement things like monads and mapping functions that preserve the generic type.

A similar thing is possible with union types, sealed interfaces/traits, etc. that allow you to check and verify the return type instead of defaulting it to a generic object/any type.

Likewise with other features like nullable/non-null annotations (or corresponding nullable type annotations like in Kotlin and recent C# versions).

All of these can be abused/circumvented, but if you keep your code within the framework, the compiler will stop such code from compiling. Still, each of these solves only a limited set of bugs. For example, nullable types can't catch memory management and related bugs.

[−] jongjong 31d ago
Formal proofs can only ever work to validate against requirements... But a major issue with a lot of modern software is that the requirements themselves can be incorrect and some requirements can logically conflict with other requirements... So long as we have non-technical people formulating requirements, we can never have provably correct software.

I've come across issues in the past which weren't actually bugs. For example, the software was behaving exactly as intended but it looked like a bug to a user who didn't understand the nuances and complexities of what was happening.

I also cannot count how many times a non-technical person asked me to implement conflicting functionality or functionality which would have made the UX incredibly confusing for the user.

[−] crvdgc 31d ago
Unfortunately, the discussion focused on the somewhat clickbaity title "proved this program correct". It's unclear what "this program" is. If it refers to the core algorithm with a proof, then there's no bug. If it includes the runtime and the header parser, then Lean didn't prove it correct.

That being said, using a coding agent to direct fuzzing and find bugs in the Lean kernel implementation is the big news here. (After all, the kernel's implementation is not proved.)

The moral of the story is to push for more verified code not less and try AI bug hunting.

[−] deterministic 29d ago
"The two bugs that were found both sat outside the boundary of what the proofs cover. The denial-of-service was a missing specification. The heap overflow was a deeper issue in the trusted computing base, the C++ runtime that the entire proof edifice assumes is correct (and now has a PR addressing)."

In other words, the code was proven correct according to spec by Lean. Which is exactly what Lean claims to do.

[−] dchftcs 31d ago
This is analogous to the fundamental problem of better automation in programming - eventually, the complexity and correctness of the spec takes over, and if we don't manage that well, creating the spec is not that much less work than the programming part. If your program was for the wrong thing, a proof of it is also wrong.
[−] sebstefan 31d ago
So that's just one more win for formal verification despite the title it seems

I'm genuinely excited about AI agents and formal verification languages. To me it's obviously the way forward instead of moonshots trying to make agents that program in their own AI blackbox binary, or agents that code in current programming languages.

If we are heading in the direction of "huge codebases that nobody has written", or, "code is an artifact for the machine", I don't see a way out without making it proved.

If humans can review and edit the spec, then verify that the implementation matches it, suddenly leaving the implementation be an artifact for the machines seems okay

Given that the downside of provers has been that they're a massive pain in the ass that very few want to use, this is also a complete win.

[−] seanhunter 31d ago
This is very cool work but the author is labouring under a false premise about how axiomatic systems work:

> Every Lean proof assumes the runtime is correct.

No. Every valid Lean proof assumes that if the runtime/mathlib etc is correct, then it too is correct.

Tangentially also, most lean proofs are not dependent on whether or not the runtime has things like buffer overflows or denial of service against lean itself at all, because if I prove some result in Lean (without attacking the runtime) then a bug in the runtime doesn’t affect the validity of the result in general. It does mean however that it’s not ok to blindly trust a proof just because it only relies on standard axioms and has no “sorry”s. You also need to check that the proof doesn’t exploit lean itself.

[−] vfclists 31d ago
Correctness with respect to a specification is not the same as correctness with respect to intent(ion) or expectation
[−] Alifatisk 31d ago

> The two bugs that were found both sat outside the boundary of what the proofs cover

Are we baiting people with headlines now?

[−] justinclift 31d ago

> 105,823,818 fuzzing executions.

That sounds like quite the monthly bill. o_O

[−] ButlerianJihad 31d ago
Incorrect usage of semicolon in title/headline. Should be a comma.

Tsk, tsk.

[−] aesopturtle 31d ago
This is a great reminder that ‘proved correct’ always has an invisible suffix: ‘with respect to the thing you actually specified.’ The hard part was never just proving things, it was pinning reality down tightly enough that the proof is about the right world.
[−] sylware 31d ago
c++, you asked for it.
[−] vvern 31d ago
Now I want to see benchmarks
[−] san_tekart 31d ago
[dead]
[−] sammy2255 31d ago
[flagged]