The challenges in measuring design

Design holds a unique and often misunderstood role.
It’s a force multiplier, a connector of dots, and a facilitator of meaningful change—but rarely the star of the show. This positioning presents a distinct set of challenges when it comes to measuring design’s impact. Let’s explore these challenges in detail.

The servant organization: the promise and peril of service

At its core, design often operates as a service organization. It doesn’t seek the spotlight; instead, it exists to enable others to shine. Design teams work to inspire the activities and conversations that lead to meaningful change—putting the problem in the spotlight and gracefully stepping out of it. They serve as connectors, facilitators, and boosters, bridging gaps between teams and providing clarity where ambiguity once thrived.

This ethos of service is critical to driving impact, but it creates a measurement paradox: how do you quantify the contributions of something designed to be invisible?

For example, when a design team facilitates a strategy workshop that resolves a critical alignment issue or improves cross-departmental workflows, the value lies in the ripple effects. However, these ripples often get attributed to the teams downstream of design, not the design team itself. While the acts of service are often appreciated in the moment, they are just as quickly forgotten. Over time, this transience can erode the perceived value of design, making it harder to justify resources and influence within an organization.

Building relationships and alliances

Being a servant organization demands careful relationship-building. Design teams must cultivate trust and credibility with other departments to ensure their contributions are understood and adopted. These alliances are essential to design’s success, but they come with risks. Misaligned partnerships, leadership changes, or communication breakdowns can quickly erode trust and diminish design’s impact.

The paradox here is stark: design must serve to earn trust, but it cannot serve without trust. This reliance on external factors creates a precarious position, where design’s ability to deliver impact often depends on the cooperation and alignment of others.

Data maturity: the problem with baselines

Another major challenge in measuring design lies in the organization’s data maturity. Often, design teams are asked to demonstrate their value in environments where the foundational elements of measurement—like baselines—don’t exist. Without these benchmarks, any attempt to measure impact becomes an interpretive dance, full of subjective definitions of what constitutes “good.”

Consider this scenario:

A new feature is launched, and its success is measured by user engagement. But what if no pre-launch engagement data exists? Without a baseline, how can anyone definitively claim improvement? This gap often leads to speculative metrics that fail to paint a clear picture of design’s contributions.

For design to succeed in measurement, organizations must prioritize setting baselines—whether it’s through usability studies, satisfaction surveys, or business performance metrics. Yet, in many cases, these baselines are missing or poorly defined, leaving design teams to navigate a foggy landscape where their impact is easily overshadowed.

Openness to qualitative data: insecurity, literacy, and manipulation

Quantitative metrics dominate many organizations, often eclipsing the value of qualitative insights. This creates another significant challenge: getting teams to embrace qualitative data as a valid and valuable form of measurement.

Qualitative data—user interviews, journey maps, usability testing, ethnographic studies, etc.—offers rich context that numbers alone can’t provide. However, insecurity about interpreting qualitative data, a lack of literacy in design methodologies, and even manipulation of insights to fit preconceived narratives can undermine its credibility. For example:

  • Insecurity: Stakeholders may dismiss qualitative findings as “soft” or anecdotal, favoring hard numbers even when those numbers lack nuance.

  • Lack of literacy: Teams unfamiliar with design research may struggle to see the connection between user stories and business outcomes, leading to underutilization of insights.

  • Manipulation: In worst-case scenarios, qualitative insights are cherry-picked to support a specific agenda, stripping them of their authenticity and value.

To combat these challenges, design teams must act as educators, helping stakeholders understand how qualitative data complements quantitative metrics and providing the literacy needed to interpret insights accurately.

The interplay of constraints and challenges

It’s important to distinguish these challenges from the constraints I talked about in a previous article. Constraints—like appetite for measurement or reliance on instrumentation—are foundational conditions that shape the context for measuring design. Challenges, on the other hand, are the frictions encountered within those constraints, like the lack of baselines, the undervaluation of qualitative data, or the ephemeral nature of design’s contributions.

Addressing these challenges requires a multi-pronged approach: strengthening relationships, advocating for data literacy, and pushing for organizational maturity in measurement practices. By doing so, design teams can better navigate the complexities of their roles and ensure their impact is both recognized and valued.
