
Card Sorting Method

Card sorting is an information architecture (IA) research method in which users group items (written on cards) into categories that make sense to them. Open card sorts let users create their own group names; closed card sorts force items into pre-defined buckets; hybrid sorts blend the two. The Nielsen Norman Group has used it since the 1990s as one of the cheapest, most reliable ways to discover the user's mental model of your product's content. The deliverable is a dendrogram or similarity matrix showing which items users naturally cluster together, and where your existing IA fights the user's intuition. Card sorting is what stands between you and a navigation that 'made sense in the all-hands' but confuses everyone outside the building.

Also known as: Open Card Sort, Closed Card Sort, Hybrid Card Sort, IA Validation, Tree Testing Companion

The Trap

The trap is using a closed card sort when you don't yet know the right categories, which forces users to confirm your existing structure rather than reveal a better one. The opposite trap: running an open sort with too few users (15+ are needed for a stable clustering signal) and reading meaning into noise. The deepest trap is treating card sorting as a one-time IA exercise rather than a recurring check. As your product grows from 30 features to 300, the navigation that worked at 30 is unrecognizable at 300, and nobody re-runs the sort.

What to Do

(1) List 30-60 of the most-used items in your product (features, content, settings). (2) Choose the sort type: open if you're discovering categories, closed if you're validating a proposed IA, hybrid if you have some fixed top-level categories. (3) Recruit 15-30 users (NN/g standard for clustering reliability). (4) Run it remotely and unmoderated via Optimal Workshop, UserZoom, or a free tool; a moderator's presence and implicit time pressure can distort the groupings. (5) Analyze the similarity matrix and dendrogram for clusters. (6) Validate the resulting IA with a tree test (separate study) before redesigning navigation.
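The analysis in step (5) starts with a similarity matrix: for every pair of cards, the fraction of participants who placed both in the same group. A minimal sketch of that step (the participants, category names, and cards below are invented for illustration):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical open-sort results (invented for illustration): one dict per
# participant, mapping their invented category name -> the cards they put in it.
sorts = [
    {"long-term money": ["IRA", "Roth IRA", "401k Rollover", "Savings"],
     "everyday": ["Checking", "Debit Card"]},
    {"retirement": ["IRA", "Roth IRA", "401k Rollover"],
     "cash": ["Savings", "Checking", "Debit Card"]},
    {"saving up": ["Savings", "IRA", "Roth IRA"],
     "spending": ["Checking", "Debit Card", "401k Rollover"]},
]

def similarity_matrix(sorts):
    """For each card pair, the fraction of participants who co-grouped them."""
    pair_counts = defaultdict(int)
    for participant in sorts:
        for group in participant.values():
            for a, b in combinations(sorted(group), 2):
                pair_counts[(a, b)] += 1
    return {pair: count / len(sorts) for pair, count in pair_counts.items()}

sim = similarity_matrix(sorts)
print(sim[("IRA", "Roth IRA")])      # 1.0 -- all three participants co-grouped them
print(sim[("Roth IRA", "Savings")])  # ~0.67 -- two of three did
```

Feeding this matrix into hierarchical clustering (e.g. SciPy's `linkage` plus `dendrogram`) produces the dendrogram; the sketch above covers only the similarity step.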

In Practice

The Nielsen Norman Group's 'Card Sorting: Pushing Users Beyond Terminology Matches' (Donna Spencer & Todd Warfel, summarized by NN/g) describes a financial services site where the team had organized accounts under 'Investments,' 'Retirement,' and 'Banking.' An open card sort with 24 users surfaced an unexpected cluster: users repeatedly grouped 'IRA,' 'Roth IRA,' and '401(k) Rollover' alongside 'Savings Account' under a category most participants named 'Long-term Money.' The internal taxonomy (regulator-driven) didn't match the user's mental model (goal-driven). The team added a goal-based secondary navigation. Time-to-find for retirement products dropped 40%. (Source: Nielsen Norman Group card sorting research)

Pro Tips

01. NN/g's reliability rule: card sort results stabilize around 15 participants and gain little additional signal past 30. Below 15, your dendrogram clusters can flip on a single user's quirky grouping.

02. Always pair a card sort with a tree test. The card sort tells you how users would organize content; the tree test tells you whether they can FIND something in the organization you built. Both are needed before shipping a navigation change.

03. Ignore the category names users invent in open sorts and focus on the GROUPINGS. User-invented names are often clumsy ('stuff for taxes', 'money I can't touch yet') but the clusters reveal the mental model. Your team writes the final category labels.

Myth vs Reality

Myth

"Card sorting tells you the right navigation structure"

Reality

It tells you the user's mental model, which is INPUT to navigation design, not the design itself. Navigation also has to satisfy SEO, business logic, regulatory constraints, and aesthetic balance. A blind copy of the user's clustering is rarely the right thing to ship.

Myth

"You can card-sort with 5 users like a usability test"

Reality

Usability testing surfaces qualitative friction (insights plateau around 5 users). Card sorting requires statistical clustering across many participants (15+ to stabilize). Conflating the two leads to noisy IA decisions.
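The sample-size point can be illustrated with a quick simulation (the 0.7 'true' co-grouping rate and panel sizes are invented for illustration): estimate one card pair's similarity from many resampled panels of 5 vs. 25 participants and compare how far the estimates swing.

```python
import random

random.seed(0)

def estimate_spread(n, p=0.7, trials=2000):
    """Simulate `trials` panels of `n` participants, each of whom co-groups a
    given card pair with probability `p`; return the max-min spread of the
    estimated similarity across panels (bigger spread = noisier estimate)."""
    estimates = [sum(random.random() < p for _ in range(n)) / n
                 for _ in range(trials)]
    return max(estimates) - min(estimates)

spread_small, spread_large = estimate_spread(5), estimate_spread(25)
print(spread_small, spread_large)  # the 5-person panels swing far wider
```

With 5-person panels the estimated similarity routinely lands anywhere from 0.0 to 1.0; with 25-person panels it clusters near the true value, which is why small-sample dendrograms flip on one quirky participant.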

Try it


Scenario Challenge

You're redesigning the navigation of a 200-feature B2B product. Your design lead suggests running a closed card sort with the existing 6 top-level categories to 'validate' them with users.

Industry benchmarks


Calibrate against real-world tiers. Use these ranges as targets, not absolutes.

Card Sort Participants for Reliable Clustering
(Open and closed card sorts for IA work)

Optimal: 20-30
Acceptable: 15-19
Marginal: 10-14
Unreliable: < 10

Source: Nielsen Norman Group practice standard
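The tiers above reduce to a trivial helper worth keeping in a research-ops checklist (a sketch; the tier names and cutoffs are taken directly from the table):

```python
def clustering_tier(n_participants: int) -> str:
    """Map a card-sort panel size to the reliability tiers above."""
    if n_participants >= 20:
        return "optimal"       # 20-30; little extra signal past ~30
    if n_participants >= 15:
        return "acceptable"
    if n_participants >= 10:
        return "marginal"
    return "unreliable"

print(clustering_tier(22), clustering_tier(12))  # optimal marginal
```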

Real-world cases

Case narratives with the numbers that prove (or break) the concept.


Nielsen Norman Group: Card Sorting Practice

1990s-present


NN/g has refined card sorting as the standard IA discovery tool over three decades. Their guidance: use open sorts to discover the user's mental model, hybrid sorts when constraints exist, and closed sorts to confirm a proposed structure. Their published research established the 15-30 participant rule, the dendrogram analysis pattern, and the necessity of pairing card sorts with tree tests for IA validation. Most modern remote card sort tools (Optimal Workshop's OptimalSort, UserZoom) implement NN/g's analytical approach.

Recommended Participants: 15-30
Recommended Item Count: 30-60 cards
Best Pairing: Card sort + tree test
Reliability Plateau: ~30 participants

The cheapest way to discover that your information architecture doesn't match your users' mental model is a card sort. The most expensive way is shipping a redesign and watching support tickets spike.


Hypothetical: Mid-Market HR SaaS Navigation Overhaul


Hypothetical: A 400-feature HR product runs an open card sort with 22 customers across HR-admin and employee personas. The two personas cluster items completely differently: admins group by workflow ('hire,' 'pay,' 'review'), while employees group by life event ('I'm sick,' 'I want time off,' 'I have a question'). The team builds two top-level navigations: an admin console and an employee self-service portal. Time-to-find drops 50%+ for both personas; support volume on navigation-related tickets drops materially.

Personas Tested: 2 (admin, employee)
Participants: 22
Outcome: Persona-specific navigation
Status: Hypothetical illustration

When card sorts reveal that two user segments form fundamentally different mental models, the answer is usually two navigations, not a compromise IA that satisfies neither.


Beyond the concept

Turn Card Sorting Method into a live operating decision.

Use this concept as the framing layer, then move into a diagnostic if it maps directly to a current bottleneck.

Typical response time: 24h · No retainer required
