Opportunity Solution Tree
The Opportunity Solution Tree (OST), developed by Teresa Torres, is a visual framework that connects a desired product outcome to the discovered customer opportunities, candidate solutions, and validation experiments. The tree's root is a single outcome (a measurable behavior change, not a feature shipped). The branches below are customer opportunities: pains, needs, or desires surfaced through interviews. Below each opportunity sit candidate solutions. Below each solution sit experiments to test whether it will solve the opportunity. The tree forces product teams to maintain explicit traceability from outcome → opportunity → solution → evidence, eliminating the common failure mode of shipping features that aren't tied to any real user opportunity.
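The four-layer structure described above can be sketched as a small data model. This is an illustrative Python sketch, not anything from Torres's book; all class and field names are assumptions chosen to mirror the tree's layers:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Experiment:
    """Leaf layer: evidence for or against a solution."""
    hypothesis: str
    passed: Optional[bool] = None  # None until the experiment has run

@dataclass
class Solution:
    """A candidate solution sits under exactly one (or more) opportunities."""
    name: str
    experiments: List[Experiment] = field(default_factory=list)

@dataclass
class Opportunity:
    """A pain, need, or desire surfaced through interviews."""
    description: str
    solutions: List[Solution] = field(default_factory=list)
    children: List["Opportunity"] = field(default_factory=list)  # sub-opportunities

@dataclass
class OST:
    """Root: one measurable behavior change, never a feature."""
    outcome: str
    opportunities: List[Opportunity] = field(default_factory=list)
```

The point of the model is the enforced path: you cannot attach an experiment to the tree without naming the solution it tests and the opportunity that solution serves.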
The Trap
The trap is treating the OST as a one-time deliverable instead of a living artifact. Teams build a beautiful tree, present it once, then disappear back into feature shipping with no further reference to it. The OST works only when it's updated weekly with new interview insights, killed solutions, and validated experiments. Second trap: OSTs that are too shallow on the opportunity side. Teams generate one or two opportunities (often disguised solutions like 'add a dashboard') and then enumerate many solutions below them. Real OSTs have rich opportunity decomposition (each big opportunity breaks into 3-7 sub-opportunities), and the discovery work happens at the opportunity level, not the solution level.
What to Do
Build an OST per quarterly outcome. Step 1: Define the outcome as a measurable behavior change ('increase weekly active users' is too vague; '15% more first-week users send a message' is right). Step 2: Conduct 8-12 customer interviews focused on the user's journey and frustrations relevant to the outcome. Step 3: Cluster interview insights into opportunities (3-7 distinct opportunities per outcome is healthy). Step 4: Brainstorm 3-5 candidate solutions per opportunity. Step 5: For the top-priority opportunity, design experiments that would validate or kill each candidate solution before building. Update the tree weekly. Kill solutions that fail experiments. Add new opportunities as interviews surface them.
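The weekly-update discipline, killing solutions whose experiments failed, can be sketched as a pruning pass over the tree. A minimal, self-contained Python sketch; the dict shape and example names are hypothetical:

```python
def prune_failed_solutions(opportunities):
    """Weekly update pass: drop any solution with a failed experiment.

    `opportunities` is a list of dicts of the shape:
      {"description": str,
       "solutions": [{"name": str, "experiments": [True | False | None, ...]}]}
    A False result kills the solution; None means the experiment hasn't run yet.
    """
    for opp in opportunities:
        opp["solutions"] = [
            s for s in opp["solutions"]
            if not any(result is False for result in s["experiments"])
        ]
    return opportunities

tree = [{
    "description": "New users can't find anyone to message",
    "solutions": [
        {"name": "Suggested contacts list", "experiments": [True]},
        {"name": "Global directory search", "experiments": [False]},  # failed: killed
        {"name": "Import phone contacts", "experiments": [None]},     # not yet run: kept
    ],
}]

prune_failed_solutions(tree)
# the failed solution is removed; the validated and untested ones survive
```

Note the asymmetry: an unrun experiment keeps its solution alive, because "no evidence yet" is not the same as "evidence against."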
In Practice
Teresa Torres developed and codified the Opportunity Solution Tree through her work as a product discovery coach with hundreds of product teams over the 2010s, publishing the canonical treatment in her 2021 book Continuous Discovery Habits. Torres's argument: most product teams confuse delivery (shipping features) with discovery (figuring out what to ship). The OST forces discovery into the team's weekly rhythm by making the connection from outcome to evidence visible. Companies including Atlassian, Adobe, and many SaaS scale-ups have adopted variants of OST as their primary discovery artifact. (Source: Teresa Torres, Continuous Discovery Habits, 2021, https://www.producttalk.org/opportunity-solution-tree/)
Pro Tips
- 01
The same solution showing up under multiple opportunities is a signal: it suggests the solution might be foundational. Investigate why; sometimes you've found a high-leverage build.
- 02
Limit each tree to ONE outcome. Teams that try to capture all their work in one tree end up with sprawling diagrams that nobody updates. Multiple narrow trees beat one big one.
- 03
The bottom of the tree (experiments) is where most teams cheat. They write 'survey' or 'A/B test' generically. Real experiments have hypotheses, success criteria, and decision rules written before the experiment runs. Otherwise the result becomes whatever the team wants it to be.
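Tip 03's rule, write the hypothesis, success criterion, and decision rule before the experiment runs, can be sketched as a pre-registered experiment record. Hypothetical Python; the field names and thresholds are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the spec can't be edited after the fact
class ExperimentSpec:
    """Written BEFORE the experiment runs, so the result can't be reinterpreted."""
    hypothesis: str           # what we believe will happen
    metric: str               # what we measure
    success_threshold: float  # pre-committed bar for a pass
    decision_if_pass: str
    decision_if_fail: str

    def decide(self, observed: float) -> str:
        """Apply the pre-written decision rule to the observed result."""
        if observed >= self.success_threshold:
            return self.decision_if_pass
        return self.decision_if_fail

spec = ExperimentSpec(
    hypothesis="A contact-import prompt lifts first-week message sends",
    metric="first-week message send rate",
    success_threshold=0.15,  # pre-committed: at least +15% relative lift
    decision_if_pass="promote solution to build",
    decision_if_fail="kill solution, return to opportunity",
)

spec.decide(0.09)  # below threshold, so the pre-written fail decision applies
```

Because the decision rule is written down first, a 9% lift against a 15% bar reads as "kill," not as "encouraging, let's ship anyway."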
Myth vs Reality
Myth
"OST is just a fancy mind map"
Reality
Mind maps brainstorm; OSTs validate. The structural difference is the bottom layer (experiments) and the explicit traceability from outcome to evidence. A mind map without experiments is decoration. An OST with experiments is a discovery operating system.
Myth
"Build the tree top-down once, then execute"
Reality
Real OSTs are updated weekly. New customer interviews produce new opportunities. Failed experiments kill solutions. Successful experiments promote solutions to the roadmap. A tree that hasn't changed in a month is dead, and the team is no longer doing discovery; they're executing on stale assumptions.
Scenario Challenge
Your team built an OST with the outcome 'increase weekly active users by 20%.' Three months in, you've built and shipped 4 solutions from the tree. WAU is up 3%. Your CEO asks if you should abandon the OST methodology.
Real-world cases
Companies that lived this.
Verified narratives with the numbers that prove (or break) the concept.
Teresa Torres / Product Talk
2016-present (framework codification)
Teresa Torres developed the OST through a decade of product discovery coaching, publishing the framework progressively on her Product Talk blog and finalizing it in 'Continuous Discovery Habits' (2021). The tree's structure emerged from observing failure modes: teams that ran customer interviews but couldn't connect insights to roadmap decisions, teams that built feature factories with no traceability to outcomes, teams that ran experiments without clear hypotheses. The OST addresses each failure mode with a specific structural element. Torres's coaching practice has helped hundreds of product teams adopt the framework; the book has become required reading for product discovery training programs.
Framework codified
2018-2021
Canonical reference
Continuous Discovery Habits (2021)
Adoption pattern
Per-team or per-product-area, not org-wide
Common failure
Treating tree as deliverable, not living artifact
The OST works when teams treat discovery as a continuous practice, not a phase. The tree updates weekly as interviews and experiments produce new evidence; otherwise it becomes a one-time slide in a quarterly presentation.
Hypothetical: B2B SaaS Product Org
Hypothetical 2024
Hypothetical: A 12-person product team adopted OST after a year of feature-factory output that hadn't moved their north star metric. They committed to one OST per outcome per quarter, with mandatory weekly customer interviews and an explicit kill-list of solutions that failed experiments. The first quarter, the team built 60% fewer features than usual. By month 6, the discipline was producing higher-quality decisions: features in the OST that survived to build had 3x the activation lift of features built before adoption. By month 12, the org-wide product team had adopted the practice; the engineering org reorganized sprint planning around OST-validated solutions only.
Features shipped (Q1 vs prior)
−60%
Activation lift on shipped features
3x baseline
Customer interviews / week
2-3 per PM
Tree update cadence
Weekly (mandatory)
Shipping less is a feature, not a bug, when the framework is working. The OST's biggest win is permission to kill features that lack opportunity evidence, permission most product orgs don't have until they adopt the discipline.