What research can do: A maturity model for government agencies

At Ad Hoc, our goal is to help federal agencies provide digital services that best meet the needs of their users. Building products that are technologically robust is only one part of the complete picture.

Research is one of our core human-centered design competencies. We measure the maturity of our processes, internally and with our customers, by how deeply we integrate research into cross-functional processes that span product, engineering, and design. What does research look like in each phase of this maturity model? What can research do to enable better products in a holistic, human-centered, product-driven approach?

In our experience working across government, we’ve seen the agencies we support evolve and mature in how they integrate user experience practices into product development. As an agency matures in its human-centered practices, research moves beyond informing discrete deliverables, such as user stories or user testing results, to becoming a cornerstone of concept and product development. As maturity increases, the boundaries between practices blur into a tightly coupled, cross-functional process.

Maturing a process is not easy, even if there is interest. It’s incumbent on human-centered design practitioners to recognize where an agency is with its practice so they can identify how to help it grow. Cyd Harrell, in A Civic Technologist’s Practice Guide, notes that agencies already intend to do right by their constituents, but they are at different levels in having the structures and practices in place to do this consistently and repeatedly:

Public servants are used to responding to public feedback, but the channels and formats they typically use are very different. One common example is when a member of the public calls out an elected official at a public comment session, and the official responds by directing their staff to fix something without further research. Pausing such flurries in a hierarchical organization to allow for a proper assessment and appropriate design is not always easy. That means the sooner researchers are able to establish a structured practice, the better, so as to have the tools and methods available when they’re needed.

The U.S. Digital Service and 18F have worked closely with many agencies to bring in new practices by meeting agencies where they are and cultivating room to mature the process as they gain familiarity and comfort with user stories and product thinking. This lays the groundwork for ongoing work once agencies see the value of human-centered design practices.

We see the research maturity model as a continuum, with no value judgments attached to any phase. We hope that by outlining the phases of the maturity model, we can help agencies and human-centered design practitioners recognize the next steps for growing their processes, so agencies can best deploy their resources and engage partners.

Nascent: Awareness that research can enhance the user experience of digital services
  Activities and tasks:
  • Persona creation
  • User story development
  • Usability testing
  What it looks like:
  • Occasional user testing prior to release to validate a desired direction
  • User stories and personas may influence feature design but not the product roadmap
  • Assumptions about users are not tested
  • Research is consultative, not embedded

Repeatable: Research is standardized with a regular toolkit
  Activities and tasks:
  • Personas, user stories, usability testing
  • Generative/discovery research
  • Journey maps and user flows
  • Systems architecture maps
  What it looks like:
  • Dedicated UX team members, but collaboration across the team may still be limited
  • New features regularly tested
  • Processes in place to incorporate learnings into the backlog
  • Research findings have limited impact in driving roadmaps, prioritizing work, and informing product direction

Progressing: Research is applied in novel ways and impacts product direction
  Activities and tasks:
  • All of the previous activities, often applied in novel ways
  • Learnings consolidated across modalities (site metrics and qualitative research)
  What it looks like:
  • Research closely aligned with the product team, with findings informing product decisions
  • Research conducted in advance of concept development and design
  • Research used to iterate concepts, not just designs
  • Entire team aware of and makes use of research findings as part of engineering and design implementation

Optimized: Seamless cycle of research, product, and engineering informing development and roadmaps
  Activities and tasks:
  • All of the previous activities, often applied in novel ways
  • Learnings consolidated across modalities (site metrics and qualitative research)
  What it looks like:
  • Research is highly targeted, with a direct connection to key strategic decisions
  • Product, engineering, and research KPIs support each other
  • Team members are fluent in one another’s roles and expertise, and trust is high

Nascent stage

Early in the maturity process, agencies are building awareness that research can enhance the user experience of digital services. They may have started conducting one-off research projects. These are most often usability studies conducted late in the development cycle, though research findings may also be used to create personas and write user stories. While user testing helps improve product usability and inform design improvements, in this stage research is used to validate direction rather than to challenge assumptions or drive product decisions. Research is often handled by a small, separate team that delivers findings rather than collaborating with the product team.

Maturing to the next stage is a process of aligning research more closely with product and becoming more proactive, both in identifying open questions and in familiarizing decision makers with the impact research can have. Even small wins can be used as leverage to further incorporate research into conversations about product direction. Measuring the outcomes of changes based on research findings brings evidence of that impact to the table and opens the door for more. The goal is to gradually build out human-centered processes that the team can feasibly adapt into its work and employ on a regular basis.

Repeatable stage

At this point, agencies have ongoing research processes and a standardized toolkit of generative and evaluative methods. Current and prior research is regularly consulted to drive some decisions, usually focused on design. The scope can be larger than usability, incorporating overall user journeys and improving the end-to-end flow through a product. In this stage, research teams are larger but often still siloed and not heavily involved in product direction or roadmap development. Research projects are driven by predetermined product direction rather than by open questions, assumptions, and decisions to be made.

To progress, organize teams so that researchers are embedded members that can fully understand the product decisions that need to be made. This allows researchers to understand what information gaps are impacting those decisions so they can design research to directly address open questions and identify assumptions that should be tested and challenged. Likewise, as embedded team members, researchers should have a voice in strategy planning. Continue to measure and highlight the impact of acting on research findings, especially as results move to outcomes beyond improved usability.

Progressing stage

Moving into the progressing stage, research looks beyond design and starts the more challenging task of understanding which concepts will lead to better outcomes for users. This requires a higher-level view, broader questioning, and more open-ended interpretation, and it is distinct from user testing. At this stage, research sits at the intersection of users, stakeholders, and the product roadmap, and facilitates dialogue between those parties.

Many of the methods are the same as in previous stages, but the scope is broader, and generative research is used to identify product bets, create hypotheses, and ultimately drive product direction. Requirements and product strategy are regularly informed by research findings, and there’s a high degree of collaboration between product, engineering, and research team members. Research methods and processes tie directly back to the larger user questions the team is addressing and the hypotheses it’s testing; results go beyond design recommendations to inform concept development and overall product direction. Team members are efficient and targeted in identifying and prioritizing UX gaps and needs, and non-UX team members regularly call on research as part of their work.

Maintaining the momentum of cross-functional collaboration and openness to new methods and modalities is the key to ongoing growth and maturation. Getting to the next stage involves extending the overall mindset throughout the organization, so that high-level decision makers appreciate the impact research has on outcomes for users and the agency itself.

Optimized stage

As teams approach the North Star of research maturity, silos between product, engineering, and research are nonexistent, and there’s a seamless cycle of collaboration that informs product development and roadmaps. All parties have equal representation at the proverbial table, and research enables strategic decision making that informs not only the program but potentially office- or agency-level strategy. Research projects are used to reveal the complexities of the entire ecosystem, taking into account a range of stakeholders beyond the end user. At this stage, research is highly targeted. There are no wasted efforts. Every project connects to a product, office, or agency strategy. Likewise, product, engineering, and research KPIs support each other, and there are few or no internal conflicts in priority. Team members are fluent in one another’s roles and expertise, and trust is high. Holistic approaches are the norm at the strategy level, and individual research projects feed directly into that strategy.

What research can do for government services

A key indicator of where an agency sits in the maturity model is how comfortable it is with ambiguity when it comes to research and research deliverables. As maturity increases, there’s less accounting for discrete deliverables, because you’re accounting instead for outcomes. That might look like less concern about documenting Jira tickets and more focus on how the research will impact larger initiatives like roadmaps, strategies, or end user outcomes.

Making a human-centered product is about more than creating a UI or user story. The focus should be on unearthing core challenges, not surface needs. At the optimized stage, research can help identify which of those core challenges are most pressing, allowing the team to scope an MVP to get the most bang for the buck. Research at this stage also enables the team to ensure that what is built creates value for the larger product or ecosystem. Focused research is more actionable and usable than broad research, and as a research practice matures, it learns to target questions and weigh trade-offs and risks. You might find that you do less research, but each project produces denser, more actionable insights. In turn, the product is designed more thoughtfully and efficiently and is grounded in human-centered principles.