Show HN: I built a 2-min quiz that shows you how bad you are at estimating (convexly.app)

by convexly 66 comments 20 points

[−] addisonl 39d ago

> Question: A fair die rolling a 6 twice in a row is more likely than rolling 1-2-3-4-5-6 in sequence

Two 6s in a row is a 1/36 chance (1/6)^2

1-2-3-4-5-6 is a 1/46656 chance (1/6)^6

Website is claiming they are the same probability:

> Same probability: 1/46,656 — Both outcomes have exactly the same probability: (1/6)^6 = 1/46,656. This illustrates the representativeness heuristic — random-looking sequences feel more probable than ordered ones.

Website's "answer" is wrong: was the question supposed to be rolling a 6 six times in a row?
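
A quick sanity check of the arithmetic (my own sketch, nothing to do with the site's code):

  from fractions import Fraction

  p = Fraction(1, 6)        # probability of any specific face on a fair die

  two_sixes = p ** 2        # a specific 2-roll sequence: 6, 6
  full_sequence = p ** 6    # a specific 6-roll sequence: 1-2-3-4-5-6

  print(two_sixes)          # 1/36
  print(full_sequence)      # 1/46656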

[−] cyanydeez 39d ago
Yeah, most likely it was trying to identify a bias of human perception: that 1-2-3-4-5-6 would seem more probable than six 6s.

A better way to illustrate this bias is with coin flips. People will tell you that the odds of 6 heads are rarer than the odds of 3 tails then 3 heads. The difficulty is understanding whether they mean "in order" or "as a group".

If it's in order, the odds are the same. Every ordering of H/T has the same probability, but humans will see "all heads" and think that's rarer. The important bit is whether there's a clear understanding of the ordering.
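
To make the "in order" vs. "as a group" distinction concrete, a quick enumeration over all 64 equally likely 6-flip sequences (just an illustration):

  from itertools import product
  from fractions import Fraction

  flips = list(product("HT", repeat=6))   # all 64 equally likely sequences

  # Any specific ordered sequence is 1/64, whether it "looks random" or not.
  p_all_heads = Fraction(sum(s == tuple("HHHHHH") for s in flips), len(flips))
  p_ttt_hhh = Fraction(sum(s == tuple("TTTHHH") for s in flips), len(flips))

  # But "3 tails and 3 heads in any order", as a group, is far more likely.
  p_three_heads = Fraction(sum(s.count("H") == 3 for s in flips), len(flips))

  print(p_all_heads)     # 1/64
  print(p_ttt_hhh)       # 1/64
  print(p_three_heads)   # 5/16 (i.e. 20/64)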

[−] convexly 39d ago
That's definitely better framing for this question. Much cleaner way to illustrate that point!
[−] convexly 39d ago
You're right, that's a mistake in how I phrased the question. It should say "six times in a row" not "twice in a row". Fixing it now! Thanks for pointing that out!
[−] snarf21 39d ago
If anyone is interested in why we are bad at estimating, please check out the amazing book Thinking, Fast and Slow by Daniel Kahneman.
[−] convexly 39d ago
Great recommendation. That was one of the biggest influences for starting to write my decisions down and then building this.
[−] 1qaboutecs 39d ago
Came here with the same complaint. The website then had the nerve to tell me I am overconfident.
[−] convexly 39d ago
Fair point! Bad question on my end. The overconfidence was based on all 10 questions though, not just that one!
[−] lorenzohess 39d ago
Maybe I don't know enough about "calibration" in a technical sense, but it seems like this quiz can't really distinguish between factual knowledge and calibration skill?

Is this type of quiz reproducible for individuals and across various cross-sections of the population?

Are there studies on this? Is the quiz based on these studies?

[−] convexly 39d ago
Great question. Calibration is specifically about whether your confidence in an answer matches your accuracy, not whether you know the answer. Someone who knows a lot but is always 90% confident would still show up as overconfident if they're wrong 20% of the time, for example.

In terms of research, Tetlock's Expert Political Judgment and Superforecasting were the foundation. He ran a 20-year study showing that domain experts were barely better than chance at long-range predictions. The Brier score was the standard metric for that research.
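
For anyone curious, the Brier score here is just the mean squared gap between your stated confidence and whether you turned out to be right. A rough sketch of the idea (not the exact code the site uses):

  def brier_score(forecasts):
      # forecasts: list of (confidence_in_your_answer, answer_was_correct) pairs,
      # with confidence in [0, 1]. 0 is perfect; a constant 50% guess earns 0.25.
      return sum((conf - float(correct)) ** 2 for conf, correct in forecasts) / len(forecasts)

  # e.g. 90% confident and right, 80% and right, 70% and wrong:
  print(brier_score([(0.9, True), (0.8, True), (0.7, False)]))   # ≈ 0.18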

[−] lorenzohess 39d ago
I see, that makes a lot of sense. Maybe the UI should reflect this? Have one button for True or False or Uncertain, and then the slider for confidence in the answer?
[−] convexly 39d ago
That's a really good UX idea. I can see how it's not the most intuitive now. Separating the direction from the confidence level would make it much clearer. Adding that to my list.
[−] Evgeniuz 39d ago
There's a bias, I think. When I saw that the title is about how bad I am at estimating, I leaned towards counterintuitive answers. This got me quite a high score. I think the test set should also include intuitive facts (or maybe I was just lucky).
[−] convexly 39d ago
Counterintuitive as it sounds, that is actually a valid calibration strategy. If you notice the questions lean slightly towards counterintuitive answers and adjust for it, that IS better calibration! But you raise a fair point about framing bias from the title.
[−] iamtedd 39d ago
Why do I need to sign up to get the results? Why couldn't it just be on the page?
[−] convexly 39d ago
The Brier score and "diagnosis" are shown immediately, no signup needed. The email is optional and only if you want to see the calibration curve and the question breakdown sent to you. I'll make that clearer!
[−] iamtedd 38d ago

> The email is optional and only if you want to see the calibration curve and the question breakdown sent to you.

That still doesn't make sense. Why can't it just be shown on the page?

[−] reltnek 39d ago
I think this might be conflating confidence with accuracy. I tried leaving the slider in the middle (nominally the least confident position) and it gave a score of 0.25 and diagnosed it as 'overconfident'.
[−] convexly 39d ago
That is definitely a bug, thank you for pointing that out. Should have been neutral! I'll push a fix for this.
[−] macleginn 39d ago
The Brier score is pathological when the guess is 0.5: regardless of the outcome, it will be equal to 0.25, so if you define "better than random" as having a score < 0.25, actually acting randomly makes you "overconfident".
[−] EForEndeavour 39d ago
Apologies if this is off-topic, but having spent more time than I'd like to admit having to create and edit webapps that emerged entirely out of Claude Code, Cursor, Codex, etc. with minimal to no direct code-writing by their human subscribers, this website has strong AI smells:

- Inter font

- all caps section headers

- Lucide icons

- em dashes, of course the em dashes

- bubble status badges (of course with all-caps "IN PROGRESS" and "COMING SOON" that mean the same thing)

- Uncited claims like "Most founders are overconfident in the 70-90% range" and "Most people score between 0.20 and 0.30"

- No less than FOUR blog articles all published April 4

None of these points is by any means a dealbreaker. And after all, I suppose a product should be judged on its merits and the value it delivers to its users, not on the tools used to create it. But together, the frontend bears the unmistakeable generative AI "smell" that telegraphs that the human(s) directing the tools building this app might be optimizing for speed over rigor and quality (further supported by the volunteer QA/QC happening in the comments), and may only be as good and reliable as the uncritically accepted outputs of a $20/month coding assistant.

[−] convexly 39d ago
That's all true. I'm a solo founder and have been using Claude heavily to build this. It definitely shows in many places, and I'll make sure to clean those up. I did not expect to get this many visits from a show HN (almost at 1600 quiz takers from the last few hours alone). The core math is sound, but I agree the presentation needs more care. Appreciate the honest feedback!
[−] testycool 39d ago
I thought it was interesting, but don't appreciate having to give you my email to see full results.

I unsubscribe from mails that aren't useful to me day-to-day because they're distracting.

Other than that it seems like a cool idea. I'd recommend slightly bigger fonts. I often have this issue with Gemini.

  Brier Score: 0.216 (lower is better)
  Diagnosis: Overconfident
[−] convexly 39d ago
Just pushed a fix for that! You should be able to see everything without inputting your email now. I've made a note about font size, thank you for the feedback.
[−] gcanyon 39d ago
Wait, so roughly is it rewarding being confident when correct, and penalizing being confident when wrong? Meaning that the highest score is only achievable if you answer fully confident true or false, and get all 10 correct?

If so, isn't that conflating knowledge with over/under confidence?

[−] convexly 39d ago
Your point on scoring is correct: if you're 100% confident and right on everything, you would score a perfect 0. The calibration insight is in how you handle the questions where you don't know the answer. Say you're highly knowledgeable and 95% confident on everything but get 2 wrong; you'd score worse than someone who was only 70% confident on those same two questions. That would indicate that you are overconfident compared to the other person!
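
To put rough numbers on that (a hypothetical 10-question quiz, not real user data):

  def brier(pairs):
      # pairs: (confidence_in_your_answer, answer_was_correct)
      return sum((c - float(ok)) ** 2 for c, ok in pairs) / len(pairs)

  # Both people get the same 8 right at 95% confidence and miss the same 2.
  always_95 = [(0.95, True)] * 8 + [(0.95, False)] * 2
  hedged_70 = [(0.95, True)] * 8 + [(0.70, False)] * 2

  print(brier(always_95))   # ≈ 0.18
  print(brier(hedged_70))   # ≈ 0.10
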
[−] gcanyon 36d ago
I think the 100% certain and always right scenario invalidates the calculation. In that outcome you know nothing about my (over) confidence level when I am wrong.

You should either return NA in that circumstance, or keep asking questions until you have actual data to work with.

[−] macleginn 39d ago
How are they different? If you "know" something, you are 100% confident in it, which gives you an easy 0 for this question (or a surprising 1). Philosophically, the problem is more that there is no difference between confidently and modestly wrong in terms of consequences of binary decisions.
[−] slothsonaplane 38d ago
Brier scoring works on questions with cheap, fast resolution; the strategic decisions you mention (hiring, equipment, big purchases) resolve over months or years, often ambiguously, and the counterfactual never resolves at all. Curious whether the calibration gains from the rapid-feedback quiz actually transfer to the slow-feedback domains the tool is designed to help with, or whether it ends up training a slightly different skill. A second thing: most of my strategic decisions weren't solo, and once one calibrated person sits in a room with two louder uncalibrated ones, the calibration math stops being load-bearing. Have you thought about a team variant?
[−] convexly 38d ago
Both really good points. The research does suggest the core skill transfers, so the quiz can help with long-horizon predictions. The mechanism seems to be the awareness of overconfidence itself rather than just domain-specific knowledge. With that being said, the gap between the quiz and real-world application is real, and tracking both over time is part of why I built the decision logging side. For your question about teams, that's a built-in feature already! Submissions are "sealed" so you submit before seeing others. The team feature also has believability-weighted aggregation based on each submitter's track record, and I also built an IC mode for investment committees. The problem you describe, one calibrated person in a room with two uncalibrated ones, is exactly what the sealed model prevents. Everyone draws their own conclusion, then they compare!
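
As a simplified illustration of what believability weighting means (the formula and numbers below are made up for the example, not the actual implementation): each sealed probability is averaged with a weight tied to the submitter's track record, e.g. inversely to their running Brier score.

  def weighted_estimate(submissions):
      # submissions: list of (sealed_probability, historical_brier_score) pairs.
      # Weight each person inversely to their Brier score (hypothetical scheme).
      weights = [1.0 / (brier + 1e-6) for _, brier in submissions]
      return sum(w * p for w, (p, _) in zip(weights, submissions)) / sum(weights)

  # One well-calibrated member (Brier 0.05) at 30% vs. two louder, less
  # calibrated members (Brier 0.30) at 80%: the pooled estimate is pulled
  # toward the calibrated voice.
  print(weighted_estimate([(0.30, 0.05), (0.80, 0.30), (0.80, 0.30)]))
  # ≈ 0.425, vs. an unweighted mean of ≈ 0.63
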
[−] sonofhans 39d ago
I’ve taken the quiz but not been compelled to sign up. The site feels manipulative, e.g., the “show me all the questions” link is tiny and hidden between two larger boxes, and even then it only shows 2 questions with a signup CTA. Maybe that’s best-practice growth hacking these days, but to me it’s a manipulative turnoff. If you’d given me all the questions and answers simply, then I would have signed up for more, especially with the discount code. Otherwise, how am I supposed to even know what I’m signing up for? Every interaction I’ve had with the site so far is a sales attempt, so mostly I expect more of those.
[−] convexly 39d ago
That's honest feedback, I appreciate it. The post-quiz flow shouldn't feel like a sales funnel. I'll clean that up. Working on it!
[−] pacificpendant 38d ago
Having previously spent a reasonable amount of time on Metaculus I’m familiar with Brier scores and rating my confidence. I assume that’s how I was able to get better than average results. It’s an interesting app.

It’s something I’m interested in improving on, as well as predictions in general. I saw you suggested Thinking, Fast and Slow and I’ve skimmed through some of Superforecasting. Metaculus have a bunch of resources too.

https://www.metaculus.com/help/prediction-resources/

[−] convexly 38d ago
Thank you! Happy to hear how it compares!
[−] convexly 37d ago
Quick update: 1,934 quiz completions, 44.5% scored overconfident. The most interesting finding was that the quiz itself got more engagement than the product behind it. Added educational tooltips, a public roadmap with voting, and UTM tracking based on feedback here and from users who reached out directly!
[−] convolvatron 39d ago
I didn't find the questions very representative of estimation. That is, maybe if you happen to know many of the random facts about the world on which they were based, then their application might be a relevant question about ability to estimate. I really felt more like I was making uneducated guesses (0.155). I suppose I was expecting more ping pong balls in airplanes.
[−] convexly 39d ago
The point I was going for was more about how people handle questions they don't know the answer to. Someone who is "well-calibrated" would set things they are uncertain about closer to 50% instead of guessing one way or the other (overconfident). That score is excellent, so it suggests you did exactly that!
[−] Hnus 39d ago
Why is it asking for email?
[−] convexly 39d ago
I just removed that, full results should be fully visible without email! A hard refresh should show the update.
[−] rahimnathwani 39d ago
This reminds me of:

https://taketest.xyz/confidence-calibration

The same site also has something with a fixed confidence level: https://taketest.xyz/ci-calibration

[−] convexly 39d ago
That's awesome, hadn't seen this one! I like the confidence interval approach.
[−] convexly 39d ago
Made a few changes based on feedback from this thread: full results now shown immediately with no email gate, changed the UX to include true/false/uncertain buttons + a confidence slider, cleaned up the quiz result page, and fixed the die probability question. Thanks for all the honest feedback!
[−] convexly 39d ago
Update at 2 hours: 1350+ quiz takers! 50% overconfident, 40% well-calibrated, and 10% underconfident. The average score is around 0.228, with the best score still at 0.007 (nearly perfect). The pattern so far is people are most overconfident in the 70-90% range, but are right closer to ~55% of the time.
[−] fred_is_fred 39d ago
Is it down? The start and skip buttons both don't work and I see this error in my console.

Manifest fetch from https://www.convexly.app/manifest.json failed, code 403

[−] convexly 39d ago
Just checked and everything is up. That might just be a console warning, but shouldn't affect the quiz. Can you try a hard refresh (ctrl+shift+R)? If that still doesn't work, what browser are you on?
[−] fred_is_fred 39d ago
I tried Chrome and Safari. It's working great on my phone, so probably Zscaler.
[−] convexly 39d ago
For sure. Zscaler can block certain requests. Glad it works on your phone!
[−] convexly 39d ago
Update: 400+ quiz takers now... insane. Best Brier score so far is 0.007 (nearly perfect calibration). The worst came in at 0.600. Average is 0.230, still just better than a coin flip. Where did you land?
[−] tommica 39d ago
Worst came in 0.600? Fuck, I got 0.550...
[−] convexly 39d ago
Just need practice! People have no idea how overconfident they actually are.
[−] bovermyer 39d ago
I hit 0.012.

As a test of general knowledge it was interesting. The confidence angle was the most interesting part, though.

[−] convexly 39d ago
That's the second best score I've seen today out of 700+ quiz takers! Exceptional calibration. The confidence angle is the whole point, people don't know how far off they actually are until they see the hard data!
[−] convexly 39d ago
Interesting data from the quiz so far: 160+ quiz takers! The average is 0.239 (barely better than a coin flip at 0.25), but almost everyone indicates they are confident in their answers.
[−] unsnap_biceps 39d ago
The slider disappearing when sliding between extremes is very confusing. I think the slider should be the only thing displayed; remove the buttons entirely.
[−] convexly 39d ago
The change to buttons was based on feedback I got today. The slider disappearing is a bug. Pushing a fix now!
[−] Havoc 39d ago
I'd consider removing some questions that are bound to be country-specific, e.g. the one about time spent in front of a red light.

>0.188

Slightly above avg - yay

[−] convexly 39d ago
That's fair, I'll flag those or maybe even add regional context. Nice score, well above average!
[−] suralind 39d ago
Did it twice: once had 0.177, 2nd time got 0.280. Not sure what to make of this, I guess I should always leave it on 50/50?
[−] convexly 39d ago
The variance is normal; the questions are pulled from a pool of 138 so far. 0.177 is strong. Setting everything to 50% would just get you 0.25, so you did way better on the first attempt. The goal isn't 50/50 on everything, only on the questions where you are not confident that you are right.
[−] zupa-hu 39d ago
It is very disappointing that you can't see what you got right or wrong without giving out your email. I'm not even sure whether one would learn anything from the email or whatever the calibration result is.

I'm happy for you if it works but I sure feel cheated. I hope others also feel it's against the spirit of a Show HN. But maybe it's just me.

[−] convexly 39d ago
That's a good point, I might have gated it too hard. I'll open up the full results now. Appreciate the feedback.
[−] loloquwowndueo 39d ago
“You averaged 97% confidence but were right 80% of the time.”

Heck yeah.

[−] senectus1 39d ago
heh.. nailed it first go:

  Your Calibration Results: 9/10 correct direction
  Brier Score: 0.131 (lower is better; 0 = perfect)
  Diagnosis: Well Calibrated. Strong score. You were right more often than your confidence suggested. Trust your gut more.