Field Note 13 · Cognitive Bias Book: Chapter 6

Overconfidence Bias

One of the most replicated findings in cognitive psychology: people systematically overestimate the accuracy of their beliefs, their relative ability, and their chances of success.

Harish Keswani · 7 min read

Overconfidence bias describes the tendency to place excessive certainty in your beliefs, to rate yourself above average on most dimensions, and to overestimate your chances of success. It comes in three documented forms: overprecision, overplacement, and overestimation. The correction is to anchor your estimates to base rates before allowing your subjective confidence to set the frame.

Where this came from

Research on overconfidence has accumulated since the 1970s, driven by psychologists including Daniel Kahneman, Amos Tversky, and Baruch Fischhoff. Fischhoff's early work on calibration showed that when people said they were "99% certain" of an answer, they were wrong far more than 1% of the time. Their confidence intervals were too narrow. Their certainty was not earned.

The most cited finding is a survey in which 93% of American drivers rated themselves as above-average drivers. Statistically, at most half of any group can sit above the median, so most of those self-assessments must be wrong. Similar results have been replicated with surgeons rating their patient outcomes, fund managers rating their investment performance, and students rating their exam scores before results are released. The pattern is consistent: most people, in most domains, believe they are better than most people.

In business, the effects are measurable. Studies of corporate mergers consistently find that acquiring companies overestimate synergies and underestimate integration costs. Venture capital post-mortems show that founders systematically overestimate the probability their startup will survive five years. McKinsey research has shown that project timelines and budgets in major infrastructure investments routinely overshoot estimates by 20-45%. The planning fallacy, a specific form of overestimation, has been documented in construction projects, software development, and government policy programmes.

How it works

Overconfidence bias operates through three distinct mechanisms, and it is useful to know which one you are dealing with because each has a different correction.

Overprecision is excessive certainty about the accuracy of your beliefs. If you ask a group of people to state a 90% confidence interval for a factual question, such as the year the Eiffel Tower was built, their intervals are so narrow that the true answer falls outside the range far more than 10% of the time. People are more certain than their knowledge justifies. In decisions, this produces forecasts that are stated as near-certain when they are genuinely uncertain.
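The interval test above is easy to run on yourself. Here is a minimal sketch of the scoring step, using invented quiz data (the intervals and the single Eiffel Tower question are illustrative, not from any study):

```python
# Hypothetical quiz data: each entry is (stated 90% interval, true value).
# The Eiffel Tower was completed in 1889; the intervals are invented.
responses = [
    ((1850, 1870), 1889),  # truth falls outside the interval
    ((1880, 1900), 1889),  # truth falls inside
    ((1860, 1885), 1889),  # truth falls outside
]

hits = sum(lo <= truth <= hi for (lo, hi), truth in responses)
hit_rate = hits / len(responses)

# A well-calibrated 90% interval should contain the truth ~90% of the time.
# A hit rate far below 90% is direct evidence of overprecision.
print(f"hit rate: {hit_rate:.0%}")
```

With more questions, the gap between the stated 90% and the observed hit rate becomes a personal measure of overprecision.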

Overplacement is the belief that you perform better than others on most traits. This is the above-average driver problem. It is strongest in domains people care about, where the feedback loop is slow, and where performance is hard to measure precisely. Managers overestimate their leadership skills. Investors overestimate their stock-picking ability. Founders overestimate their product instincts. Overplacement is partly driven by the fact that we have access to our own efforts and intentions but only see other people's outputs, which makes us feel more capable in comparison.

Overestimation is overestimating your absolute performance or probability of success, independent of how others perform. An entrepreneur who estimates a 70% chance of reaching profitability in 18 months when the base rate for their sector is 20% is exhibiting overestimation. This is where base rates are most directly useful as a corrective.

When to watch for it and when it matters most

Overconfidence bias is most dangerous in decisions that are hard to reverse, where the costs of being wrong are large and asymmetric, and where you have limited direct experience in the domain. Starting a business, making a large financial bet, or forecasting the adoption of a new product are all high-overconfidence-risk situations. The very enthusiasm that makes someone willing to attempt something bold also tends to elevate their confidence beyond what evidence warrants.

Overconfidence is also specifically dangerous when the person making the decision is the most senior person in the room. Feedback loops that might correct overconfidence in a junior employee (a manager pushing back, a peer challenging the estimate) stop functioning when no one challenges the decision-maker's assumptions. This is one reason that formal pre-mortems, structured devil's advocate processes, and explicit base-rate checks matter most at the leadership level.

Overconfidence is less harmful in low-stakes, easily reversible decisions, where being wrong is cheap and you will receive feedback quickly. If you are deciding which email subject line to test, overconfidence in your instinct costs you one A/B test cycle. If you are deciding whether to take on a five-year lease for a new office, the same overconfidence in your growth projections costs far more.

Bias to watch

Dunning-Kruger Effect

People with limited knowledge in a domain consistently overestimate their competence, while genuine experts tend to underestimate theirs. The mechanism is straightforward: to accurately assess your own gaps, you need the knowledge you lack. Beginners do not know what they do not know, so they feel capable. Experts know the field's complexity, so they feel uncertain. The implication for high-stakes decisions is that the most confident person in the room is often the least qualified to be confident. Combining a domain novice with genuine decision authority is a reliable recipe for overconfident choices.


How to apply it in practice

The most direct correction for overconfidence is to use base rates before you form a subjective estimate. If you are launching a product, look up the success rate for comparable product launches in your market. If you are estimating a project timeline, find data on how long similar projects actually took. The base rate becomes your anchor, and any departure from it requires explicit justification based on features specific to your situation.
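One simple way to operationalize "departures require explicit justification" is to write the adjustments down as named, bounded terms added to the base rate. The numbers below are assumptions for illustration only, not real market data:

```python
# Illustrative base rate for comparable product launches (assumed figure).
base_rate = 0.20

# Each departure from the base rate is named, signed, and kept small.
# The adjustment sizes here are placeholders, not empirical values.
adjustments = {
    "experienced_team": +0.05,
    "crowded_market": -0.03,
}

estimate = base_rate + sum(adjustments.values())
estimate = min(max(estimate, 0.0), 1.0)  # clamp to a valid probability

print(f"anchored estimate: {estimate:.0%}")
```

The point of the exercise is less the arithmetic than the discipline: every term that moves you away from the base rate has a name you can be challenged on.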

A second technique is to construct a pre-mortem before committing to a decision. Imagine that it is 18 months from now and the decision has failed. Write down, in concrete terms, what caused the failure. This exercise forces you to surface the risks your optimism had suppressed. It does not eliminate overconfidence, but it forces the most plausible failure modes into your working model before you commit.

Third, state your confidence as a probability and track it. If you say you are 80% confident a project will come in on budget, that confidence should be correct 80% of the time when you make similar claims. If you track your predictions and your 80% confidence statements are right only 55% of the time, you have direct evidence of your own calibration error. Most people never track their confidence in this way, which is why the bias persists uncorrected through an entire career.
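The tracking loop above takes only a few lines to implement. This sketch groups predictions by stated confidence and compares each bucket to its observed hit rate; the prediction log is invented for illustration:

```python
from collections import defaultdict

# Hypothetical prediction log: (stated confidence, whether it came true).
log = [
    (0.8, True), (0.8, False), (0.8, True), (0.8, False), (0.8, True),
    (0.6, True), (0.6, False),
]

# Group outcomes by the confidence level that was claimed.
buckets = defaultdict(list)
for confidence, outcome in log:
    buckets[confidence].append(outcome)

# Compare stated confidence with the observed hit rate in each bucket.
# A stated 80% that lands well below 80% is measured calibration error.
for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> observed {observed:.0%} "
          f"over {len(outcomes)} claims")
```

A spreadsheet works just as well; what matters is that the predictions are recorded before the outcomes are known.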

This is one model from the upcoming Decisions Matter book.

30 mental models. 40 cognitive biases. A 5-step decision system.

Frequently asked questions

What is overconfidence bias?

Overconfidence bias is the tendency to hold excessive certainty in the accuracy of your beliefs, your abilities relative to others, and your predicted chance of success. Psychologists distinguish three forms: overprecision (too certain about the accuracy of your estimates), overplacement (believing you perform better than most people), and overestimation (overestimating your absolute performance or the probability that a given outcome will occur). It is one of the most consistently replicated findings in behavioural psychology.

What are the three types of overconfidence?

Overprecision is the tendency to believe your estimates are more accurate than they are, for example, setting overly narrow confidence intervals on predictions. Overplacement is believing you are above average on most traits, most famously demonstrated by studies showing 93% of US drivers rate themselves above average in skill. Overestimation is overestimating your absolute level of performance or your probability of success, which drives entrepreneurs to consistently overestimate their startup survival rates.

How do you correct for overconfidence bias?

The most reliable correction is to use base rates. Before estimating your odds of success, look up the actual success rate for this category of decision in this context. If you are launching a new restaurant, find the three-year survival rate for restaurants in your city. Your subjective confidence should be anchored to that number, not to your belief in your own abilities. A second method is to explicitly seek out disconfirming evidence and to ask someone who disagrees with your estimate to explain their reasoning.

Is confidence ever useful in decision-making?

Yes. Confidence that is calibrated to actual competence and evidence is a reliable signal. The problem arises when confidence is not calibrated, when people feel certain without having earned that certainty through evidence, feedback, or relevant experience. In high-stakes decisions, the goal is not to eliminate confidence but to match its level to the strength of your evidence. Calibrated confidence supports decisive action; overconfidence produces decisions built on faulty foundations.


