List your options as columns and your criteria as rows. Assign each criterion a weight out of 100, with weights totalling 100. Score each option on each criterion from 1 to 10. Multiply the score by the weight, sum the results for each option, and compare totals. The option with the highest weighted score is the analytical winner. If that result surprises you, the surprise is worth examining — it means something important is either missing from the matrix or weighted incorrectly.
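The arithmetic above can be sketched as a short function. The criteria, weights, and scores below are invented purely for illustration:

```python
def weighted_score(scores, weights):
    """Weighted total for one option.

    scores:  {criterion: score from 1 to 10}
    weights: {criterion: weight, with all weights summing to 100}
    """
    return sum(scores[c] * weights[c] for c in weights)

# Hypothetical weights, summing to 100
weights = {"cost": 40, "quality": 35, "speed": 25}

option_a = {"cost": 7, "quality": 8, "speed": 5}
option_b = {"cost": 9, "quality": 5, "speed": 8}

print(weighted_score(option_a, weights))  # 7*40 + 8*35 + 5*25 = 685
print(weighted_score(option_b, weights))  # 9*40 + 5*35 + 8*25 = 735
```

Here option B wins despite scoring lower on the heaviest criterion, because its advantages on the other two criteria outweigh the gap.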
Step-by-step: building the matrix
Start by writing your options across the top of a table. If you have more than five options, trim to the most viable three or four before you begin. A matrix with eight options becomes unwieldy and the scoring exercise loses discipline.
Down the left side, list your decision criteria. These are the dimensions that genuinely matter for this specific decision. For a job offer comparison, criteria might include compensation, role scope, growth potential, commute, and manager quality. Assign each criterion a weight that reflects its relative importance to you. The weights should sum to 100. This step forces the prioritisation that most informal decisions skip.
Now score each option on each criterion from 1 (very poor fit) to 10 (excellent fit). Multiply each score by its criterion weight, then sum the column for each option. The totals give you a ranked comparison. Most decision matrix tools present this in a spreadsheet; a simple table on paper works equally well.
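Putting the three steps together, here is a minimal sketch of a full matrix using the job-offer criteria mentioned earlier. Every weight and score is hypothetical; substitute your own:

```python
# Weights reflect relative importance and must sum to 100 (all values invented)
weights = {
    "compensation": 30,
    "role scope": 20,
    "growth potential": 25,
    "commute": 10,
    "manager quality": 15,
}
assert sum(weights.values()) == 100

# Scores from 1 (very poor fit) to 10 (excellent fit), per option per criterion
scores = {
    "Offer A": {"compensation": 8, "role scope": 6, "growth potential": 7,
                "commute": 4, "manager quality": 9},
    "Offer B": {"compensation": 6, "role scope": 8, "growth potential": 9,
                "commute": 7, "manager quality": 6},
}

# Multiply each score by its criterion weight and sum per option
totals = {option: sum(s[c] * weights[c] for c in weights)
          for option, s in scores.items()}

# Ranked comparison, highest weighted total first
for option, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{option}: {total}")
```

The `assert` on the weights is a cheap guard against the most common clerical error: weights that quietly drift away from 100 as you revise them.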
When the numbers are lying to you
A weighted decision matrix is only as honest as the weights and scores you put into it. The most common failure mode is reverse-engineering: setting weights and scores so that a preferred option wins, then treating the matrix total as objective confirmation. This feels like rigour but is rationalisation. The tell is that you feel relief when a particular option scores highest, rather than genuine surprise or curiosity.
The gut-check test is the most useful corrective. When the matrix produces its winner, notice your immediate emotional reaction. If you feel pleased, consider whether the weights were honest. If you feel disappointed, the matrix may be more useful than you wanted it to be. Either reaction carries signal. A strong negative reaction to the matrix result often means there is a criterion you left out, or a criterion you underweighted because naming it honestly felt uncomfortable.
Post-decision rationalisation
Post-decision rationalisation is the tendency to work backward from a preferred conclusion and construct reasoning that supports it. In a decision matrix, this manifests as unconsciously setting weights to favour the option you already want. The fix is to set your weights before you score the options, and ideally before you know which option will benefit from which weight distribution.
Getting more from the tool
Two practices significantly improve decision matrix quality. First, set your criterion weights before you start scoring options. Committing to what matters most before you see how each option performs removes the temptation to adjust weights to suit a preferred outcome.
Second, run the matrix once, look at the result, then check whether any criterion is conspicuously absent. Decisions often have one factor that people are reluctant to put on the table explicitly, usually because it makes their actual preferences visible. If you notice you have avoided a criterion, add it and re-run. A matrix that includes the things you find awkward to name is far more useful than a tidy one that avoids them.
Don't just read about it — run your actual decision through our AI Decision Assistant.
DecisionsMatter.ai is an AI decision assistant that walks you through a structured analysis: framing, bias check, pre-mortem, and decision record. Your first analysis is free.