Product · Intermediate · 6 min read

RICE Prioritization

RICE is a numerical scoring system developed at Intercom and published in 2017 to force product teams to defend prioritization decisions with math instead of charisma. Score = (Reach × Impact × Confidence) ÷ Effort. Reach is the number of users affected per quarter. Impact is a multiplier (0.25 minimal, 0.5 low, 1 medium, 2 high, 3 massive). Confidence is a percentage discount (50%, 80%, or 100%) applied to the impact estimate. Effort is measured in person-months. The output is a single comparable number that lets you stack-rank a backlog of 50 ideas in a spreadsheet. Teams that adopt RICE rigorously typically cut feature output by 30-40% and ship more outcomes.
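
For a sense of scale, here is a minimal sketch in Python with entirely hypothetical inputs: a feature reaching 4,000 users a quarter, medium impact, 80% confidence, and 2 person-months of effort.

# Hypothetical feature, just to show the scale of a typical score.
reach, impact, confidence, effort = 4_000, 1, 0.8, 2   # users/quarter, medium impact, 80%, person-months
print((reach * impact * confidence) / effort)           # 1600.0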

Also known as: RICE Framework, RICE Score, Reach Impact Confidence Effort, Intercom RICE

The Trap

RICE without honest impact estimates is theater. The dirty secret: Impact is the most subjective input, and teams routinely inflate it to justify pet projects. A PM who wants to ship dark mode rates it Impact 2 (high). The data later shows it changed retention by 0%. The framework gave them a number to point at, but the number was fiction. Second trap: Confidence becomes a vibes check rather than evidence. Real Confidence requires an experiment, a customer interview, or a benchmark from a similar feature you shipped. 'I feel pretty good about this' is 50% confidence at best, no matter how senior you are.

What to Do

Run RICE in a 60-minute team session, not solo. Each input gets scored independently by Product, Engineering, and at least one customer-facing person, then averaged. Force every Impact ≥ 2 to cite a comparable past feature with measured retention/revenue lift. Force every Confidence ≥ 80% to cite an experiment, interview log, or analytics query. After scoring, sanity-check the top 5 against the bottom 5 — if anything feels obviously wrong, the inputs are wrong, not the formula. Re-score the entire backlog quarterly; impact estimates rot as the market shifts.
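
A minimal sketch of how those gates might look in a scoring script. The feature, the votes, and the evidence field are all hypothetical; the fallback to 0.5 simply mirrors the "vibes are 50% at best" rule above.

from statistics import mean

def rice_score(reach, impact, confidence, effort):
    # RICE Score = (Reach x Impact x Confidence) / Effort
    return (reach * impact * confidence) / effort

# Hypothetical feature: independent Impact votes from Product, Engineering, Support.
impact_votes = [2, 1, 1]
impact = mean(impact_votes)        # 1.33 after averaging, not negotiating

feature = {
    "name": "Dark mode",
    "reach": 4_000,                # users affected per quarter (assumed)
    "impact": impact,
    "confidence": 0.8,
    "effort": 3,                   # person-months (assumed)
    "evidence": None,              # experiment link, interview log, or analytics query
}

# Evidence gates: Impact >= 2 or Confidence >= 0.8 must cite something concrete.
if (feature["impact"] >= 2 or feature["confidence"] >= 0.8) and not feature["evidence"]:
    feature["confidence"] = 0.5    # vibes-only confidence is 50% at best

score = rice_score(feature["reach"], feature["impact"], feature["confidence"], feature["effort"])
print(feature["name"], round(score))   # Dark mode 889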

Formula

RICE Score = (Reach × Impact × Confidence) ÷ Effort
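
To see how the formula stack-ranks a backlog, here is a small sketch over an invented three-item backlog; every name and number is hypothetical.

def rice_score(reach, impact, confidence, effort):
    # Reach: users/quarter; Impact: 0.25-3; Confidence: 0.5/0.8/1.0; Effort: person-months
    return (reach * impact * confidence) / effort

backlog = [
    ("Billing bug fix",     8_000, 2,   1.0, 1),
    ("Analytics redesign", 12_000, 1.5, 0.6, 6),
    ("Dark mode",           4_000, 0.5, 0.8, 2),
]

for name, *inputs in sorted(backlog, key=lambda f: -rice_score(*f[1:])):
    print(f"{name:20s} RICE = {rice_score(*inputs):,.0f}")

# Prints highest-first: Billing bug fix (16,000), Analytics redesign (1,800), Dark mode (800)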

In Practice

Intercom's product team published the RICE framework in 2017 after years of using it internally. Sean McBride, then a Product Manager at Intercom, wrote that the team needed a way to compare wildly different ideas — a customer's pet feature against a sales-team request against an engineer's tech-debt cleanup. RICE became the shared language. Critically, Intercom didn't just publish the formula — they published the discipline: every score had to come with notes explaining the inputs, and any score in the top 10 was challenged in a group review. The framework worked because the culture forced honest scoring. (Source: https://www.intercom.com/blog/rice-simple-prioritization-for-product-managers/)

Pro Tips

  • 01

    Effort estimates are systematically wrong by 2-3x. Multiply every engineering estimate by 1.5 before computing RICE (sketched after this list) — this corrects for optimism bias and prevents 'small' features from gaming the score by hiding integration work.

  • 02

    Add a column called 'How will we measure success?' before any feature can be scored. If you can't write a measurable success criterion, your Impact and Confidence scores are guesses. This single discipline kills 30% of feature requests at the scoring stage.

  • 03

    Reach should be measured per quarter, not lifetime. A feature reaching 10,000 users in 12 months has the same per-quarter reach as one reaching 2,500 users in 3 months — but the second one delivers value 4x faster.
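
As referenced in Tip 01, a small sketch of how the effort pad and the per-quarter Reach normalization play out; all numbers are invented, and the tier labels refer to the benchmark ranges later in this piece.

# Tip 01: pad the engineering estimate by 1.5x before computing RICE.
raw_effort = 2                        # person-months as estimated
padded_effort = raw_effort * 1.5      # 3 person-months after the optimism correction

print((10_000 * 1 * 0.8) / raw_effort)     # 4000.0  : lands in 'top tier, build now'
print((10_000 * 1 * 0.8) / padded_effort)  # ~2666.7 : drops to 'solid, likely build'

# Tip 03: convert Reach to a per-quarter figure before comparing features.
print(10_000 / 4)                     # 2500.0 users/quarter for 10,000 users over 12 months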

Myth vs Reality

Myth

RICE is the prioritization answer — adopt it and you'll ship better products

Reality

RICE is a forcing function for honest discussion, not an oracle. The value isn't the score — it's that scoring forces the team to articulate Reach, Impact, and Confidence in writing. A team that argues for an hour about whether Impact is 1 or 2 has done the real prioritization work, regardless of the final number.

Myth

Higher RICE scores always win

Reality

Strategic features (platform investments, regulatory requirements, security) often score poorly because their Reach is delayed and Impact is hard to quantify upfront. Reserve 20-30% of capacity for strategic work that bypasses RICE entirely. Otherwise you'll optimize for short-term wins and accumulate platform debt.


Industry benchmarks

Is your number good?

Calibrate against real-world tiers. Use these ranges as targets — not absolutes.

RICE Score (relative ranking)

Scoring a typical SaaS feature backlog (Intercom-style RICE)

Top tier — build now: > 3,000
Solid — likely build: 1,000-3,000
Marginal — re-examine: 300-1,000
Skip or kill: < 300

Source: Intercom Product Team — 'RICE: Simple prioritization for product managers' (2017)

Real-world cases

Companies that lived this.

Case narratives with the numbers that prove (or break) the concept: one verified, one explicitly hypothetical.


Intercom · 2014-2017 (framework origin) · outcome: success

Intercom's product team — managing a sprawling backlog across messaging, support, marketing, and onboarding products — needed a way to defend prioritization decisions against constant pushback from sales, customers, and executives. The team developed RICE internally over 2014-2016 and published it in 2017. The published version was deliberately simple (four inputs, one formula) because complex frameworks weren't getting used. Within a year of publication, RICE became one of the three most-cited prioritization frameworks in product management.

Inputs: 4 (R, I, C, E)
Adoption (post-publication): industry standard within 18 months
Time per scoring session: ~60 min for 20 features
Reported feature kill rate: ~30% of requests

The framework's value isn't precision — it's forcing the team to articulate Impact and Confidence in writing. Most features die not because they score low, but because no one can defend the inputs.


Hypothetical: MidStage SaaS · 2024 · outcome: mixed

Hypothetical: A 40-person B2B SaaS adopted RICE company-wide. For two quarters, scores looked great — every feature was 'high impact, high confidence.' Q3 retrospective showed almost zero correlation between predicted RICE rank and actual measured outcome. The team realized PMs were inflating Impact and Confidence to get their features prioritized. They added a rule: every Impact ≥ 2 required a citation to a past shipped feature with measured lift. Feature scores compressed dramatically. The 'top features' changed entirely. Six months later, measured outcomes correlated strongly with RICE rank.

Pre-discipline Impact average: 1.8
Post-discipline Impact average: 0.9
Top-feature reshuffling: 7 of top 10 changed
Outcome correlation: 0.12 → 0.71

RICE is only as good as the inputs. Adding evidence requirements for high scores is the difference between a real framework and a polite ranking exercise.

Decision scenario

The Inflated Impact Score

You're a senior PM. The team scored 18 features for next quarter using RICE. The #1 feature (RICE 4,800) is a complete redesign of the analytics dashboard, championed by your VP. The score: Reach 12,000, Impact 3 (massive), Confidence 80%, Effort 6 person-months. The #2 feature (RICE 3,200) is fixing a known bug that causes 8% of paying customers to receive incorrect billing. You suspect the dashboard's Impact 3 is inflated.

#1 feature: Analytics redesign — RICE 4,800
#2 feature: Billing bug fix — RICE 3,200
Engineering capacity: 10 person-months / quarter
Affected paying customers (billing bug): 8% — direct revenue trust impact
Decision 1

You can either accept the RICE ranking (build the dashboard, defer the bug fix) or challenge the Impact 3 score on the dashboard. Your VP is the one who scored Impact 3 and may resist the challenge.

Option A: Accept the ranking — RICE is the framework, the VP is senior, and overriding the score will damage trust in the process.

Outcome: You build the dashboard for 6 months. Adoption is 14% (well below the 60% the Impact 3 score implied). Meanwhile, the billing bug remains, and three enterprise customers downgrade citing 'data we can't trust' — losing $240K ARR. Six months later, the team reviews the prediction-vs-outcome data: the dashboard's actual measured Impact was 0.5x, not 3x. The framework was followed perfectly; the score was fiction. RICE became theater.

Dashboard adoption: predicted 60% → actual 14%
ARR lost (billing bug churn): −$240K
Trust in RICE process: damaged once the team sees the prediction failed
Option B: Push back. Ask the VP to cite a comparable past redesign with measured Impact 3 lift; if they can't, drop the score to Impact 1.5 with 60% Confidence and re-rank.

Outcome: Correct. The VP can't cite a past dashboard redesign with 3x measured impact — they confess the score was 'enthusiasm.' Re-scored: Reach 12,000, Impact 1.5, Confidence 60%, Effort 6 → RICE 1,800. The billing bug fix moves to #1. You ship the bug fix in 3 weeks, saving the $240K ARR. The dashboard work moves to a smaller-scoped first iteration that ships in 2 months instead of 6, with experiment gates. Discipline preserved; framework strengthened.

Re-scored RICE: 4,800 → 1,800
ARR saved (bug fix shipped): +$240K
RICE trust: strengthened — the team sees scores get challenged honestly
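
For reference, both scores in this scenario check out arithmetically; a two-line check using the inputs given above:

print((12_000 * 3   * 0.8) / 6)   # 4800.0 : the VP's original dashboard score
print((12_000 * 1.5 * 0.6) / 6)   # 1800.0 : after the evidence challenge lowers Impact and Confidence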
