A ~15-minute assessment using behavioral tasks — not personality quiz questions — to measure your susceptibility to seven well-studied cognitive biases. Grounded in two decades of research from MIT, Princeton, Harvard, Toronto, and Carnegie Mellon. Everything runs in your browser. Nothing is transmitted or stored.
Cognitive bias measurement is an active and contested area of psychology. Reasonable researchers reading the same evidence reach different conclusions. Some (Stanovich, University of Toronto) argue rational thinking is a measurable general ability. Others (Teovanović, Belgrade) find cognitive biases don't correlate strongly enough to support a single "rationality" factor.
Your per-bias scores (anchoring, sunk cost, etc.) are reasonably well-established — those individual effects replicate across hundreds of studies. But your archetype and composite factor scores reflect one interpretation of your results. A different researcher using different analytical choices might characterize you differently.
Treat your results as a starting point for self-reflection, not a fixed label. Cognitive biases are part of the universal architecture of human cognition — every person has all seven to some degree. High scores are an invitation to try the research-backed interventions we'll suggest, not a character judgment.
For researchers and curious users: read the full methodology — the validated framework, the variables measured, the scoring algorithm, the limitations, and the references.
You'll work through seven short tasks — not a personality quiz, but actual behavioral measurements that probe how your mind handles specific decision challenges. The tasks are drawn from the peer-reviewed heuristics-and-biases literature (Kahneman, Stanovich, Bruine de Bruin, others). Some will feel like estimation games. Some will feel like small logic puzzles. Some are scenarios where you rate how reasonable a decision was.
1. Don't look things up. Some tasks ask you to estimate facts you're unlikely to know precisely. The point isn't to be correct — it's to see how your mind handles uncertainty. Looking things up defeats the measurement.
2. Don't overthink. Give the answer that feels right at first pass, then commit. The tasks are designed to work with quick, genuine responses. Excessive deliberation can mask the effects we're measuring.
3. You can't fail. There are no wrong answers in the punitive sense — only responses that do or don't show susceptibility to a given bias. Every human has all seven biases to some degree. This measures the shape of yours.
For each question, give your best estimate. You're unlikely to know the exact answer — that's by design. Don't look anything up; don't spend more than 10 seconds per item. If you have no idea, guess a number that feels like it's in the right neighborhood.
Each scenario describes a real event before its resolution. Without looking anything up, estimate the probability you'd have assigned to the outcome described, given only what was known at the time. Move the slider to your best guess. We'll return to this task near the end.
For each question, choose the answer you think is correct, then rate how confident you are (50% = just guessing, 100% = absolutely certain). Don't look anything up. This task measures calibration — the gap between how confident you feel and how often you're actually right.
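For readers curious about the arithmetic, a common way to summarize calibration is the gap between your average stated confidence and how often you were actually right. A minimal sketch with made-up numbers (the function name and data are illustrative, not the app's actual scoring code):

```python
def calibration_gap(confidences, correct):
    """Mean stated confidence minus actual hit rate.

    Confidences are probabilities in [0.5, 1.0]; correct is 1/0 per item.
    Positive values indicate overconfidence, negative underconfidence.
    """
    mean_confidence = sum(confidences) / len(confidences)
    hit_rate = sum(correct) / len(correct)
    return mean_confidence - hit_rate

# Hypothetical session: 70% average confidence, 3 of 5 items correct.
gap = calibration_gap([0.6, 0.9, 0.5, 0.8, 0.7], [1, 0, 1, 0, 1])
print(round(gap, 2))  # prints 0.1, i.e. 10 points overconfident
```

A perfectly calibrated responder would score near zero: items answered at 70% confidence would be right about 70% of the time.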
You'll see the same six estimation questions from Task 1. This time, each comes with a reference value. First indicate whether the true answer is higher or lower than the reference, then give your final estimate. Your previous answers are not shown — answer each fresh.
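One simple way to quantify anchoring on an item like this is to ask how far the second estimate moved from the first estimate toward the reference value. A simplified, hypothetical per-item index (real scoring schemes in the literature vary):

```python
def anchor_shift(baseline, anchored, anchor):
    """Fraction of the distance from the baseline estimate toward the
    anchor that the second estimate moved: 0 = no pull, 1 = full pull.
    A simplified per-item illustration, not the app's scoring code.
    """
    if anchor == baseline:
        return 0.0  # no distance to move; shift is undefined, treat as zero
    return (anchored - baseline) / (anchor - baseline)

# First guess 40, reference value 100, second guess 70:
print(anchor_shift(40, 70, 100))  # prints 0.5: moved halfway toward the anchor
```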
Each problem shows two premises followed by a conclusion. Your job: determine whether the conclusion logically follows from the premises — regardless of whether it happens to be true in real life. This is about logical structure, not factual accuracy.
Each problem gives you two pieces of information: a general base rate (how common something is in the population) and a specific signal (evidence about a particular case). Your job: combine them into a probability estimate. This measures how well you integrate base rates with individuating information.
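The normative way to combine the two pieces of information is Bayes' rule. A toy illustration with invented numbers (a 10% base rate and a signal that fires on 80% of true cases but also on 20% of non-cases):

```python
def posterior(base_rate, hit_rate, false_alarm_rate):
    """P(condition | positive signal) via Bayes' rule."""
    true_pos = base_rate * hit_rate            # cases the signal catches
    false_pos = (1 - base_rate) * false_alarm_rate  # non-cases it flags anyway
    return true_pos / (true_pos + false_pos)

# 10% base rate, 80% hit rate, 20% false-alarm rate.
p = posterior(0.10, 0.80, 0.20)
print(round(p, 3))  # prints 0.308
```

Because the condition is rare, the correct answer is only about 31%, far below the hit rate of 80% that base-rate neglect tends to produce as an answer.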
Each scenario describes someone facing a choice about whether to continue or stop. Rate how likely you would be to continue in each situation. There are no right answers — scenarios are designed to probe how past investment influences future decisions.
Each case describes a decision made under uncertainty, followed by how things turned out. Rate the quality of the decision itself — the reasoning, preparation, and choice — independent of the outcome. Decisions can be well-reasoned but produce bad outcomes, or poorly reasoned but produce good ones.
Now we'll reveal what actually happened in the four scenarios from Task 2. Without looking at your earlier answers, try to recall what probability you assigned when you first saw each scenario. The gap between your recall and your actual earlier rating is the hindsight bias measurement.
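Measured this way, hindsight bias is just the signed drift between what you recalled and what you originally answered, averaged across scenarios. A hypothetical sketch (illustrative names and data, not the app's scoring code):

```python
def hindsight_index(original, recalled):
    """Mean signed drift of recalled probabilities from the originals.

    Each pair holds the probability originally assigned and the probability
    recalled after learning the outcome occurred. Positive values mean the
    recalled estimates drifted toward the known outcome: "I knew it all along."
    """
    drifts = [r - o for o, r in zip(original, recalled)]
    return sum(drifts) / len(drifts)

# Hypothetical four scenarios: recall drifted upward on three of them.
index = hindsight_index([0.30, 0.50, 0.20, 0.60],
                        [0.45, 0.55, 0.20, 0.70])
print(round(index, 3))  # prints 0.075
```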
These items are drawn from validated North American research scales (AOT-30, A-DMC, NFC-18, CRT). They capture how you perceive your own tendencies — which often differs from how you actually behaved on the tasks above. The gap between the two is the Bias Blind Spot, and it's a featured finding in your results.