Over the course of this series, we’ve explored the complexities of measuring design. We started with the constraints that shape how measurement operates, examined the challenges that arise when trying to measure design effectively, and uncovered the risks of neglecting or misusing measurement. Now, we turn to how to measure design, focusing on actionable methods and tools.

Measuring design effectively requires thinking across three horizons:

  1. Nearby metrics: Immediate impacts like usability, satisfaction, and productivity.

  2. Medium-distance metrics: Collaboration quality and hypothesis validation.

  3. Distant metrics: Long-term effects on retention, customer lifetime value (CLV), and return on experience (ROX).

For each category, we’ll outline the metrics, their descriptions, the cadence for tracking, associated risks, mitigation strategies, and ownership. Finally, we’ll wrap up the series with a comprehensive conclusion tying all these elements together.

Nearby metrics: Immediate impact

Nearby metrics capture short-term, direct signals of design’s value. These are the metrics most design teams start with because they’re quick to implement and closely tied to the user experience.

Examples of nearby metrics

  • NPS (Net Promoter Score): A quick way to gauge user satisfaction and loyalty.

  • Task Success Rate: Tracks how effectively users can complete critical tasks.

  • Time on Task: Measures efficiency in task completion.

Key considerations

While nearby metrics are easy to measure, they carry risks. Over-reliance on these metrics can conflate design with delivery, reducing its perceived strategic value. Metrics like NPS, for instance, can lead to false confidence if used in isolation.

How to bring these metrics to life

Implementing nearby metrics typically involves using tools like SurveyMonkey, Hotjar, or Mixpanel. They’re great for gathering immediate feedback, tracking interactions, and validating usability improvements.

  • NPS (Net Promoter Score): Measures user loyalty and satisfaction via the likelihood to recommend.

    • Cadence: Monthly/Quarterly

    • Risks: Over-relies on predictions of future behavior; false confidence when used in isolation; user fatigue.

    • Mitigation: Pair NPS with behavioral data. Use as a directional, not definitive, success indicator.

    • Ownership: Product, Marketing, Design

    • Method of setup:

      • Tools: Delighted, SurveyMonkey, Typeform.

      • Embed surveys post-interaction and aggregate dashboards.
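The NPS arithmetic itself is simple enough to sanity-check by hand or in a few lines of code. As a minimal sketch (the function name here is illustrative, not part of any survey tool's API):

```python
def nps(scores):
    """Compute Net Promoter Score from raw 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; NPS is the percentage of
    promoters minus the percentage of detractors, from -100 to +100.
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)
```

Because the score collapses an 11-point scale into three buckets, two very different response distributions can yield the same NPS, which is one reason to treat it as directional rather than definitive.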

  • CSAT (Customer Satisfaction Score): Captures user satisfaction with a specific interaction or product.

    • Cadence: After key events

    • Risks: Only reflects single points in time; may hide underlying usability issues.

    • Mitigation: Cross-reference CSAT with retention data and qualitative feedback.

    • Ownership: Product, CX, Design

    • Method of setup:

      • Tools: Qualtrics, Medallia, Google Forms.

      • Deploy short event-triggered surveys automatically.

  • Task success rate: Percentage of users successfully completing a specific task.

    • Cadence: Bi-Weekly/Monthly

    • Risks: Doesn’t explain why tasks fail; focuses too narrowly on micro-interactions.

    • Mitigation: Pair with usability testing and observations to uncover root causes of success/failure.

    • Ownership: Design, Research, Product

    • Method of setup:

      • Tools: Maze, Lookback, Optimal Workshop.

      • Record sessions, test flows, and automate task success rates.
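Tools like Maze report this rate automatically, but the underlying calculation is worth keeping explicit so the team agrees on what counts as "success." A hedged sketch, assuming each attempt has been labeled completed or not:

```python
def task_success_rate(attempts):
    """Percentage of attempts that completed the task.

    `attempts` is an iterable of booleans: True = task completed.
    What counts as "completed" must be defined per task up front,
    or the rate is not comparable across studies.
    """
    attempts = list(attempts)
    if not attempts:
        raise ValueError("no attempts recorded")
    return 100 * sum(attempts) / len(attempts)
```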

  • Time on task: Measures efficiency by tracking time taken to complete tasks.

    • Cadence: Bi-Weekly/Monthly

    • Risks: Can incentivize speed over quality; shorter times may not reflect better experiences.

    • Mitigation: Define “ideal task time” in context. Prioritize user success over time savings.

    • Ownership: Design, Research, Product

    • Method of setup:

      • Tools: UsabilityHub, Hotjar.

      • Analyze session recordings or set specific usability tests.
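When summarizing time on task, note that completion times are usually right-skewed: a handful of very slow sessions can drag the mean far above what a typical user experiences. A small illustrative helper (the function name is an assumption, not any tool's API):

```python
from statistics import mean, median

def time_on_task_summary(durations_s):
    """Summarize task durations (in seconds) for a usability study.

    Reports the median alongside the mean because duration data is
    typically right-skewed; the median better reflects the typical user.
    """
    return {
        "median_s": median(durations_s),
        "mean_s": mean(durations_s),
        "n": len(durations_s),
    }
```

If the mean is much larger than the median, dig into the slow sessions qualitatively before concluding anything about "efficiency."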

  • Error rate: Frequency of user errors during key interactions.

    • Cadence: Bi-Weekly/Monthly

    • Risks: Doesn’t capture abandoned tasks or deeper usability issues; may focus on surface-level fixes.

    • Mitigation: Track drop-off rates alongside error data. Combine with qualitative insights.

    • Ownership: Design, Product, Engineering

    • Method of setup:

      • Tools: Mixpanel, FullStory, Google Analytics.

      • Instrument and track specific actions for errors.

  • Feature engagement: CTR, session length, and frequency for critical features.

    • Cadence: Monthly

    • Risks: High engagement can mean confusion, not success; may incentivize “vanity features.”

    • Mitigation: Align engagement metrics to specific goals (e.g., adoption, ease of use). Analyze intent and behavior.

    • Ownership: Design, Product, Marketing

    • Method of setup:

      • Tools: Amplitude, Mixpanel, Heap.

      • Set up behavioral tracking and event funnels.

  • SUS (System Usability Scale): Standardized usability questionnaire measuring perceived ease of use.

    • Cadence: Quarterly

    • Risks: May over-simplify results or fail to pinpoint specific problems.

    • Mitigation: Use SUS alongside behavioral data and qualitative follow-ups.

    • Ownership: Design, Research

    • Method of setup:

      • Tools: Google Forms, SurveyMonkey.

      • Combine survey with observed behavioral data for insights.
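SUS scoring is standardized and easy to get subtly wrong in a spreadsheet, so it is worth encoding once. The scale has ten items rated 1-5; odd-numbered items are positively worded, even-numbered items negatively worded:

```python
def sus_score(responses):
    """Score one SUS questionnaire (10 items, each rated 1-5).

    Standard SUS scoring: odd items contribute (response - 1),
    even items contribute (5 - response); the 0-40 total is
    multiplied by 2.5 to yield a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

A respondent answering "neutral" (3) on every item scores 50, which is why SUS results should be read against the published benchmark averages rather than as a raw percentage.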

  • Estimate vs. actual: Compares design/project time estimates to actuals for productivity tracking.

    • Cadence: Sprint-Based

    • Risks: Can incentivize rushed work; over-focuses on timelines rather than outcomes.

    • Mitigation: Use as a planning tool, not a success measure. Combine with quality checks and retrospectives.

    • Ownership: Design, Project Management

    • Method of setup:

      • Tools: Jira, Asana, Trello.

      • Track estimates, actuals, and deviations automatically within sprint cycles.


Medium-distance metrics: Collaboration and hypothesis validation

Medium-distance metrics extend the focus beyond immediate usability to capture how design interacts with other teams and validates hypotheses over time. These metrics demonstrate design’s role as a strategic partner.

Examples of medium-distance metrics

  • Experiment success rate: Measures the percentage of design hypotheses validated through A/B tests

  • Customer effort score (CES): Evaluates how easy (or difficult) it is for users to achieve their goals

  • Collaboration quality: Surveys cross-functional teams about the effectiveness of design partnerships

Key considerations

The risks here often involve subjectivity or bias. For example, collaboration surveys might reflect interpersonal dynamics more than actual partnership quality. Similarly, focusing only on validated hypotheses could discourage bold experimentation.

How to bring these metrics to life

Tools like Optimizely, Miro, and Confluence are excellent for managing and tracking medium-distance metrics. They encourage cross-team alignment and document learning.

  • Experiment success rate: Percentage of design hypotheses validated via A/B tests or experiments.

    • Cadence: Quarterly

    • Risks: Encourages safe, incremental changes; undervalues failed experiments that provide insights.

    • Mitigation: Celebrate learning, not just validation. Document and share insights from both successes and failures.

    • Ownership: Product, Design, Data Science

    • Method of setup:

      • Tools: Optimizely, Google Optimize, Split.io.

      • Set up experiments, analyze success, and aggregate learnings.
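Before counting an experiment as "validated," it helps to check that the observed lift is unlikely to be noise. Platforms like Optimizely do this for you; as a transparent sketch of the underlying statistic, a standard two-proportion z-test on conversion counts looks like this (function name is illustrative):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.

    Returns (z, p_value). A small p-value suggests the observed
    difference is unlikely under the null hypothesis of equal rates.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value
```

Even a statistically significant result only says the difference is real, not that it matters; the mitigation above (documenting learnings from failures too) still applies.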

  • Customer effort score (CES): Measures how easy it is for users to achieve tasks or goals.

    • Cadence: After key surveys

    • Risks: Over-focus on simplicity can oversimplify complex workflows or sacrifice deeper user needs.

    • Mitigation: Balance CES with qualitative research to ensure user needs remain well-met.

    • Ownership: CX, Design, Product

    • Method of setup:

      • Tools: Medallia, Delighted, AskNicely.

      • Send automated, task-triggered CES surveys.

  • Hypothesis validation rate: Tracks the percentage of validated design hypotheses.

    • Cadence: Quarterly

    • Risks: Teams may game results by creating “safe” hypotheses to achieve higher success rates.

    • Mitigation: Reward rigorous, impactful hypotheses, even when results are invalidated.

    • Ownership: Design, Research, PM

    • Method of setup:

      • Tools: Confluence, Miro.

      • Maintain a hypothesis board, test results, and documentation.

  • Collaboration quality: Cross-team feedback (e.g., PM, Eng) on design’s alignment and collaboration.

    • Cadence: Quarterly

    • Risks: Subjective; can reflect team dynamics over genuine collaboration quality.

    • Mitigation: Treat surveys as a starting point for open discussions. Supplement with examples of collaboration.

    • Ownership: Design, Product, Engineering

    • Method of setup:

      • Tools: Google Forms, CultureAmp, Polly (Slack).

      • Send quarterly collaboration feedback surveys.

  • Design debt: Quantifies unresolved UX/UI issues that slow development and user success.

    • Cadence: Quarterly

    • Risks: Can overwhelm backlogs; may cause friction with teams focused on delivery goals.

    • Mitigation: Regularly prioritize debt with business goals. Frame as investments that improve speed and quality.

    • Ownership: Design, Engineering, PM

    • Method of setup:

      • Tools: Jira, Trello, Airtable.

      • Tag and track design debt alongside sprint backlogs.

  • Research visibility: Number of stakeholders engaging with design research insights.

    • Cadence: Monthly/Quarterly

    • Risks: Visibility alone doesn’t equal impact; research may not influence decisions.

    • Mitigation: Track how insights drive decision-making, not just their reach.

    • Ownership: Research, Design, Leadership

    • Method of setup:

      • Tools: Dovetail, Notion, Slack.

      • Distribute insights and measure engagement rates via analytics.


Distant metrics: Long-term impact

Distant metrics focus on the lasting effects of design on business outcomes. These are the metrics executives care about most, as they demonstrate how design contributes to revenue, retention, and overall company growth. (They’re also, by far, the toughest to pin down.)

Examples of distant metrics

  • Customer lifetime value (CLV): Tracks the total revenue attributed to a single user over their lifecycle

  • Churn rate: Measures the percentage of customers who stop using the product or service

  • Return on experience (ROX): Links design investments to measurable business returns

Key considerations

Distant metrics require patience and collaboration with multiple departments. They also carry the risk of attributing outcomes too broadly, making it difficult to isolate design’s specific impact.

How to bring these metrics to life

Business intelligence tools like Tableau, Looker, and Salesforce can help integrate data from various departments to track these metrics over time.

  • Customer lifetime value (CLV): Measures revenue contributions over the lifetime of a user.

    • Cadence: Quarterly/Annually

    • Risks: Difficult to attribute directly to design; slow feedback loops.

    • Mitigation: Pair with behavioral indicators (e.g., adoption, NPS) to bridge short- and long-term impact.

    • Ownership: Product, Marketing, Finance

    • Method of setup:

      • Tools: HubSpot, Salesforce, Tableau.

      • Integrate behavioral data and financial metrics into dashboards.
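CLV modeling can get arbitrarily sophisticated, but a back-of-the-envelope version is often enough to start the conversation with Finance. A rough sketch under simple assumptions (constant monthly revenue and churn; the function name and inputs are illustrative):

```python
def simple_clv(avg_monthly_revenue, gross_margin, monthly_churn_rate):
    """Rough CLV estimate: margin-adjusted monthly revenue times the
    expected customer lifetime in months (1 / monthly churn rate).

    Assumes revenue and churn stay constant, which real cohorts rarely
    do; treat the output as directional, not definitive.
    """
    if not 0 < monthly_churn_rate <= 1:
        raise ValueError("churn rate must be in (0, 1]")
    expected_lifetime_months = 1 / monthly_churn_rate
    return avg_monthly_revenue * gross_margin * expected_lifetime_months
```

Note how directly churn drives the result: halving monthly churn doubles this CLV estimate, which is one way to frame retention-focused design work in financial terms.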

  • Churn rate: Percentage of users who stop using a product or service over time.

    • Cadence: Quarterly

    • Risks: Often reflects systemic issues beyond design; oversimplifies user departures.

    • Mitigation: Track churn with exit surveys or qualitative studies to understand root causes.

    • Ownership: Product, CX, Design

    • Method of setup:

      • Tools: Mixpanel, Amplitude, Gainsight.

      • Track churn triggers and correlate with experience gaps.
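One common pitfall when computing churn is letting new signups mask departures. A minimal sketch that excludes customers acquired during the period (names are illustrative):

```python
def churn_rate(start_count, end_count, new_in_period):
    """Period churn: share of starting customers who left, as a percentage.

    Customers acquired during the period are excluded from the end
    count so growth cannot hide attrition.
    """
    lost = start_count - (end_count - new_in_period)
    if lost < 0:
        raise ValueError("counts are inconsistent")
    return 100 * lost / start_count
```

For example, a cohort that starts with 1,000 customers, ends with 980, and added 50 along the way actually lost 70 customers, a 7% churn rate, even though the headline count barely moved.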

  • Cost to serve: Measures the costs incurred in supporting and serving users across service delivery.

    • Cadence: Annually

    • Risks: Incentivizes short-term cost-cutting that may harm user experience.

    • Mitigation: Balance cost metrics with satisfaction metrics to ensure long-term value.

    • Ownership: Operations, CX, Design

    • Method of setup:

      • Tools: Looker, Tableau, Zendesk Analytics.

      • Combine cost data with customer experience metrics.

  • Return on experience (ROX): Links design investments to measurable returns like revenue or cost savings.

    • Cadence: Annually

    • Risks: Complex to calculate; requires careful attribution across teams.

    • Mitigation: Collaborate with Finance to model design’s impact. Treat as a directional signal rather than definitive.

    • Ownership: Leadership, Finance, Design

    • Method of setup:

      • Tools: Excel, BI Tools (Power BI, Tableau).

      • Build models linking design efforts to revenue outcomes.

  • Retention rate: Measures the percentage of users retained over a defined period.

    • Cadence: Quarterly

    • Risks: High retention may reflect lack of alternatives rather than positive experiences.

    • Mitigation: Pair retention with satisfaction (NPS, CES) and qualitative insights.

    • Ownership: Product, CX, Marketing

    • Method of setup:

      • Tools: Mixpanel, Amplitude, HubSpot.

      • Monitor cohorts, retention trends, and satisfaction feedback.

  • Competitive benchmarking: Benchmarks user perception and market competitiveness.

    • Cadence: Annually

    • Risks: May lag industry changes or rely too heavily on external perceptions.

    • Mitigation: Combine competitive analysis with direct user research and testing.

    • Ownership: Leadership, Marketing, Design

    • Method of setup:

      • Tools: Gartner, Forrester, UserTesting.

      • Conduct competitive analysis and market research annually.


Prioritizing what to measure

Not all metrics need to be implemented at once. Start with those that offer the greatest value and are easiest to adopt:

  1. Nearby metrics: NPS, Task Success Rate, and CSAT are quick wins that provide immediate insights.

  2. Medium-distance metrics: As design matures, focus on metrics like Experiment Success Rate and Collaboration Quality to deepen impact.

  3. Distant metrics: Work with leadership to tie design outcomes to business goals like CLV and ROX.

The big finish: The madness and mastery of measuring design

Measuring design is both an art and a science. It’s not about collecting numbers for their own sake—it’s about using metrics as tools to uncover insights, drive conversations, and amplify design’s strategic value.

Key lessons from this series:

  • Constraints: Measurement must operate within the organization’s appetite for data and instrumentation capabilities

  • Challenges: It’s easy to misalign metrics or fail to act on the data collected

  • Risks: Without measurement, design risks being undervalued; with poor measurement, it risks being misunderstood

  • Methods: Implementing a layered approach to metrics ensures a balanced, holistic view of design’s impact

A word of caution

Be careful not to let metrics—especially nearby ones—reinforce the false equivalence between delivery and design. It’s tempting to focus only on immediate outputs like task success or delivery timelines, but this shortchanges design’s full potential. Nearby metrics should be stepping stones to broader, more strategic measures.

The bigger picture

Design metrics aren’t just about proving worth—they’re about fostering understanding and trust. By measuring what matters, educating colleagues about design’s broader contributions, and advocating for strategic impact, you can elevate design’s role within any organization.

As we close this series, remember that the madness of measuring design isn’t a flaw—it’s a feature. The complexity of measurement reflects the complexity of design itself. Embrace it, refine it, and use it to shape the future of your work.

