
Lead Scoring

Lead scoring is a system for ranking prospects by their likelihood to become customers, using a numerical score derived from two dimensions: FIT (does this person/company match your ICP?) and INTENT (are they showing buying behavior?). A score crossing a threshold triggers MQL designation, sales handoff, or specific nurture flows. Done well, lead scoring routes the right leads to the right place at the right moment: sales focuses on real prospects, marketing nurtures the rest. Done badly, it becomes a meaningless number that everyone games and no one trusts. The core question: do high scores actually correlate with closed-won deals? If you've never tested that, you don't have lead scoring; you have lead arithmetic.

Also known as: Lead Qualification Scoring, Predictive Lead Scoring, BANT Scoring, MQL Scoring

The Trap

The classic trap is rule-based scoring built without data validation. A marketing manager assigns +10 points for visiting the pricing page, +5 for downloading a whitepaper, -5 for being a competitor, +20 for being a VP. The numbers feel reasonable but have no actual relationship to closed-won outcomes. Sales notices the scores don't predict anything and starts ignoring them. The whole system becomes elaborate theater. The fix: backtest your scoring model against historical data. Would high-scoring leads from 12 months ago have actually closed? If not, your model is broken.

What to Do

Pull 200 historical leads: 100 closed-won and 100 closed-lost. Identify the firmographic and behavioral attributes that distinguish the two groups. Those are your real signals; everything else is noise. Build a simple weighted model from JUST those signals (5-10 variables, not 50). Set the MQL threshold where the model has 60%+ precision (i.e., 60%+ of leads above threshold actually close). Re-validate every quarter. Most companies build a 50-variable scoring model on day one and never check whether it works.
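A minimal backtest sketch in Python. Every attribute name and weight below is a hypothetical stand-in for whatever your own won/lost comparison actually surfaces:

```python
# Hypothetical weights, derived by comparing closed-won vs closed-lost leads.
WEIGHTS = {
    "title_vp_plus": 20,
    "pricing_page_visit": 10,
    "employees_100_plus": 10,
    "webinar_attended": 5,
    "personal_email_domain": -15,  # negative signal
}

def score(lead: dict) -> int:
    """Sum the weight of every attribute the lead exhibits."""
    return sum(w for attr, w in WEIGHTS.items() if lead.get(attr))

def precision_at(leads: list[dict], threshold: int) -> float:
    """Share of leads scoring at/above the threshold that actually closed-won."""
    qualified = [l for l in leads if score(l) >= threshold]
    return sum(l["closed_won"] for l in qualified) / len(qualified) if qualified else 0.0

# history: your 200 labeled leads (100 closed-won, 100 closed-lost).
history = [
    {"title_vp_plus": True, "pricing_page_visit": True, "closed_won": True},
    {"webinar_attended": True, "personal_email_domain": True, "closed_won": False},
    # ...the remaining 198 labeled leads
]
print(f"Precision at 30+: {precision_at(history, threshold=30):.0%}")
```

Sweep the threshold until precision clears 60%; if no threshold gets there, the weights are the problem, not the cutoff.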

Formula

Lead Score = Σ (Fit Attribute Weights) + Σ (Intent Behavior Weights) − Σ (Negative Signals)
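A quick worked instance with hypothetical weights: a VP (+20 fit) at a 500-person company (+10 fit) who visited the pricing page this week (+10 intent) but registered with a gmail address (−15 negative) scores 20 + 10 + 10 − 15 = 25.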

In Practice

Marketo's own lead scoring journey is the canonical case study. In 2013, they redesigned their scoring from a generic rules-based system to a model trained on actual closed-won data. Sales teams provided 18 months of disposition data. The data team identified that: (1) job title at VP+ was the single strongest predictor, (2) visiting the pricing page within 14 days of an email open was a stronger combined signal than either alone, (3) attending a webinar mattered ZERO without firmographic fit. They cut the number of scored attributes from 47 to 11. MQL precision (% of MQLs that became SQLs) jumped from 22% to 41% within two quarters, without changing any other part of the funnel.

Pro Tips

  • 01

    Score on FIT and INTENT separately, then combine. A common mistake is rolling everything into a single number, losing the diagnostic power. A high-fit, low-intent lead is a long-cycle nurture target. A high-intent, low-fit lead is probably a competitor researcher. A low-fit, low-intent lead is junk. Mixing them into one score throws away the routing information.

  • 02

    Set negative scoring rules. Personal email domains (gmail, yahoo) for B2B targets, job titles like 'student' or 'consultant' for enterprise products, locations outside your serviceable region. Negative scoring is more important than positive scoring for filtering noise.

  • 03

    Score decay is critical. A pricing page visit from 6 months ago is not the same signal as one from yesterday. Without time decay, your model gets clogged with stale signals and produces phantom MQLs from people who lost interest months ago. (A minimal sketch of this tip, combined with Tip 01, follows this list.)
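A Python sketch of Tips 01 and 03 together. Everything here is an illustrative assumption: the routing cutoffs and the 30-day half-life are placeholders, not recommended values.

```python
from datetime import datetime, timedelta

HALF_LIFE_DAYS = 30  # assumption: an intent signal loses half its weight every 30 days

def decayed_weight(base_weight: float, event_date: datetime, now: datetime) -> float:
    """Exponentially decay an intent weight by the event's age in days."""
    age_days = (now - event_date).days
    return base_weight * 0.5 ** (age_days / HALF_LIFE_DAYS)

def route(fit_score: float, intent_score: float) -> str:
    """Route on the two dimensions separately instead of one blended number."""
    if fit_score >= 30 and intent_score >= 20:
        return "MQL: hand off to sales"
    if fit_score >= 30:
        return "high fit, low intent: long-cycle nurture"
    if intent_score >= 20:
        return "high intent, low fit: likely competitor research"
    return "junk: suppress"

now = datetime(2024, 6, 1)
fresh = decayed_weight(10, now - timedelta(days=1), now)    # ~9.8: still strong
stale = decayed_weight(10, now - timedelta(days=180), now)  # ~0.2: effectively gone
print(route(fit_score=35, intent_score=fresh + stale))      # -> long-cycle nurture
```

Note how the 6-month-old pricing visit contributes almost nothing, so it can no longer manufacture a phantom MQL on its own.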

Myth vs Reality

Myth

“Predictive (ML) lead scoring is always better than rule-based.”

Reality

ML scoring is only better when you have 1,000+ closed-won deals to train on AND clean data. With 50 closed-won deals, ML overfits and produces less interpretable, less stable scores than a well-designed rules model. Most B2B SaaS at <$10M ARR doesn't have enough data for ML to outperform thoughtful rules.

Myth

“More attributes = a more accurate score.”

Reality

Beyond ~15 attributes, additional variables add noise faster than signal. The strongest predictors are usually 5-10 firmographic + behavioral signals; the rest are correlated noise. Scoring on 50 variables makes the model harder to debug, harder to explain to sales, and rarely more accurate.

Try it

Run the numbers.

Pressure-test the concept against your own knowledge: answer the challenge or try the live scenario.


Knowledge Check

Your lead scoring model triggers MQL at score 80+. Of the last 100 MQLs (score 80+), only 18 were accepted as SQLs. Of leads scoring 60-79 (not MQLs), sales informally pursued 40 and accepted 22 as SQLs. What does this tell you?
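To check the arithmetic before answering, a two-line sketch using the figures from the question:

```python
# Precision in each score band, straight from the numbers above.
mql_precision = 18 / 100      # score 80+: 18% of MQLs accepted as SQLs
pursued_precision = 22 / 40   # score 60-79, sales-selected: 55% accepted
print(f"80+: {mql_precision:.0%} vs 60-79 (pursued): {pursued_precision:.0%}")
```

Keep in mind the 60-79 figure comes from leads sales chose to pursue, not a random sample of the band.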

Industry benchmarks

Is your number good?

Calibrate against real-world tiers. Use these ranges as targets, not absolutes.

Lead Score Precision (% of MQLs that become SQLs)

B2B SaaS with established lead scoring

  • Excellent Model: > 50%
  • Good: 30-50%
  • Average: 15-30%
  • Underperforming: 8-15%
  • Random Guessing: < 8%

Source: Marketo Engagement Benchmarks / Forrester Research

Real-world cases

Companies that lived this.

Verified narratives with the numbers that prove (or break) the concept.


Marketo · 2013-2015 · Success

Marketo's internal lead scoring overhaul became a publicly documented case study because they sold marketing automation. They cut their scored attributes from 47 to 11, weighted everything based on 18 months of historical closed-won data (with sales providing disposition labels), and built dynamic time decay so old behaviors lost weight. MQL precision improved from 22% to 41% within two quarters. The number of MQLs sales pursued went up; the number marketing generated went down. Both teams' efficiency improved.

Scored Attributes: 47 → 11
MQL Precision: 22% → 41%
Outcome: Acquired by Adobe ($4.75B)

Simpler models built on validated data outperform complex models built on intuition. The biggest scoring wins come from removing signal that turned out to be noise.

