The Ad Hoc Government Digital Services Playbook

May 24, 2018

The Ad Hoc Government Digital Services Playbook compiles what we’ve learned from four years of delivering digital services for government clients. Our playbook builds on and extends the Digital Services Playbook by the United States Digital Service. The USDS playbook is a valuable set of principles, questions, and checklists for government to consider when building digital services. If followed, the plays make it more likely a digital services project will succeed. Today, we’re publishing the opinions we developed and lessons we learned while implementing the original plays of the USDS playbook. We want to share our knowledge in hopes that other teams can continue to build on the progress we and many other organizations are making in improving government digital services.

In 2014, we founded Ad Hoc with the same catalyst that created the USDS: the failed launch of HealthCare.gov. Since then, we’ve been using these plays to help government reform the way it serves users, who have come to expect more from the digital products and services they use.

Building digital services for government means orienting and aligning around the user experience, for all audiences and abilities, and doing so securely, protecting users’ privacy and data. To the user of digital services, availability and usability are paramount. Slow, confusing interfaces drive them away and erode their trust. This essential user-centrism is at the core of government digital services. It distinguishes them from enterprise software, where users are expected to have substantial training and domain knowledge, or to conform to confusing business-processes-as-software. While government had substantial experience building enterprise software systems prior to 2013, when HealthCare.gov launched, it didn’t have comparable experience delivering digital services such as those users have become accustomed to in the commercial sector. The challenge of the past four years has been introducing to government the practices and processes that set user-centered services up for success. Our playbook contributes additional detail on how to accomplish this task.

Government is a unique client because it must serve everyone, is accountable to the public (versus the market), and is constrained by legislation and rule-making. Government digital service delivery blends the best practices of modern, consumer-facing software with the security, stability, and accountability that government services require. Ad Hoc’s experience delivering government digital services for HealthCare.gov, Vets.gov, and more, informs this approach. Together with the USDS playbook, the Ad Hoc playbook paints a picture of a robust, effective, and flexible digital services delivery environment.

What we learned implementing the USDS Playbook

For each play from the USDS playbook, we have a key lesson, explanation, checklist, and questions to prompt teams to further examination and introspection. Here is how Ad Hoc delivers digital services.


USDS Play 1: Understand what people need

Recognize that building digital services requires a distinct approach

Digital services are intended for the general public, not experts, and therefore must be designed around users’ needs first and foremost. Business needs — to fulfill policy and organizational objectives — are more likely to be achieved when the design of the service prioritizes how people will use it.

From our experience with digital services, it’s a mistake to start with an organization-centric approach, modeling the service around internal data, structures, and processes. This tends to lead to services that are confusing to users and places little or no emphasis on the quality of their experience. It is important not to burden the user with understanding an agency’s internal structure, processes, or bureaucracy to achieve a goal.

Users expect digital services to be responsive, available, and usable. These expectations lead designers of digital services to make certain well-understood technology choices. We know, broadly speaking, how a successful digital service should be built. The kinds of teams and systems architectures it takes to deliver digital services do not resemble those of traditional enterprise software. Exposing enterprise software to the general public as digital services, or using enterprise software delivery practices to create digital services, has been a recipe for failed technology projects.

Successful digital service delivery teams tend to be cross-functional and fully vertically-integrated, from user research and design all the way through development and operations. They take a product management approach to delivery, which means owning and being responsible for the entire user experience.

Checklist
  • My delivery team has experience with the design, development, and operations of an entire consumer-facing web and/or mobile application.
  • My delivery team has experience conducting user research sessions, and incorporating findings from them, along with data from site metrics and analytics, into the product roadmap.
  • My delivery team has experience deploying to production and supporting a digital service in production.
  • My technology stack resembles a modern delivery stack tailored for end-user services, rather than a collection of enterprise components.
  • My delivery team is responsive to change, and can adapt its development and delivery processes to changing priorities and new information. It is flexible and can deliver results incrementally.
  • My digital service can scale to meet demands that are typical in consumer internet applications.
  • My service is intuitive and does not require extensive training to use.
Key questions
  • Do our users need to know how our organization is structured to use our product?
  • If that organizational structure is, in fact, important, are we relying on tools or products that require an underlying knowledge of how government operates? Are these government-specific processes distorting user interfaces and impacting the user experience?
  • Is our product structured around an internal business process? Has its user interface been modeled around an existing enterprise data set (as opposed to adapting or deriving the data to fit a well-researched user experience)?

USDS Play 2: Address the whole experience, from start to finish

Understand the problem space and the user need before recommending technical solutions

It’s easy for delivery teams to fall in love with new technologies and search for problems to solve with them. You can have a greater impact, however, by identifying the highest-value needs of users and building toward those. Many teams are reluctant to conduct this initial user research (sometimes called discovery) because it doesn’t seem to result in tangible deliverables. But what if you find out in your initial research that 85% of your users access your service on mobile? You’ve learned that you should build a mobile experience first, instead of finding that out at the end of the development cycle, when it’s much more costly to change course. You can also learn a lot by interviewing stakeholders, especially those in support functions like a help desk. You can prioritize product features based on help desk requests, freeing the help desk to spend more time on users with more difficult problems. This can also bring down the overall cost of the call center. What you learn from these efforts pays for itself later in cost avoidance.

If you built your product before researching the problem, you now have to spend a lot of extra time and money to go back and retrofit it. Not only could this have been avoided, but now you have a mobile experience that feels “bolted on” rather than baked into the product. Conducting initial discovery research helps you optimize what you’re building for the greatest number of users.

Most projects have limited time and money to spend on development. This makes it even more critical that you are prioritizing the things you absolutely need to build first, and deprioritizing or not even building things that are less important. The best way to establish priority is by conducting user research.

Checklist
  • We conducted initial user research prior to issuing any RFIs or RFPs, or have a flexible contract that allows for this before the start of development.
  • We started with research in the problem space, instead of starting with a technical solution and working backwards.
  • We continue to conduct user research at regular intervals, to make sure we’re building the right thing.
  • Our contracts are structured to permit discovery and exploration research cycles.
  • Our solicitations are looking for details on methods and processes, not solutions, because our solution will evolve as we build and learn.
Key questions
  • Are we providing the time and space for proper discovery without rushing to development?
  • Are we shelving user needs because they aren’t easy to implement with our current solution?
  • Are we assuming or dictating a technical solution?
  • Are we dictating use of a specific technology without a basis for why it can serve users better?

USDS Play 3: Make it simple and intuitive

Design services to be as simple as possible to fulfill the mission

Overly complicated or complex architectures and user interfaces have doomed major projects. Experience has taught us that most of these failed services were unnecessarily complicated and needed to be radically simplified to succeed. This is especially important in high-traffic, high-demand consumer-facing digital services that must serve requests quickly and efficiently. The technical architecture of the service constrains the efficiency of a transaction in the service.

One way to succeed is by choosing proven, commodity software for your technology stack, especially for core components such as databases and application servers. There are well-understood, reliable, and open source components for everything from operating systems, to relational databases, to web servers and more. Use technologies that are familiar to the broader web development world, and architect them in familiar configurations. This will lead to simpler, more efficient implementations that are easier to troubleshoot. Free your team to focus on optimizing the user experience, instead of battling with core infrastructure.

User interfaces can similarly suffer from over-complication. It requires much more thoughtful effort to build a simple, intuitive interface than to let the underlying complexities seep through. Often, a policy analyst or subject matter expert will map their large set of business rules onto the UI, exposing them directly to the user. Users, however, have different needs than organizations. Through user research, we can discover their specific requirements, but in all cases, a simple, easy-to-understand, easy-to-use interface will help them accomplish their task, which in turn helps fulfill the organization’s business needs.

Checklist
  • We have asked if this feature or component is really, truly needed.
  • We have deployed compute and network resources based on a reasonable estimate of expected demand.
  • We use tried-and-true protocols and standards, avoiding overly-abstracted or complicated components that can be brittle, inefficient, or hard to debug in production.
  • We use “boring” technologies wherever possible, especially in critical core components such as databases, limiting use of more innovative tech to well-defined experiments where effects of failure can be contained.
Key questions
  • Are we relying on extensive documentation to help the user understand how to use our product rather than contextual guides and an intuitive user experience?
  • Is it too difficult to explain our architecture to someone onboarding to the development team?
  • How hard would it be to replace the current delivery team with another delivery team?
  • Are we using established, standard technology, tools, and protocols?
  • Are we solving problems that are not immediately in front of us?

USDS Play 4: Build the service using agile and iterative practices

Modernize and phase out legacy systems with migrations instead of “big bang” cutovers

Brand-new services are the exception rather than the norm in government. More commonly, we take an existing system and bring it up to a modern level of service. The priority is to maintain continuous service while improving it.

Too often, organizations will build a standalone replacement service that is intended for a hard cutover from the existing one: one day, 100% of traffic flows to the legacy system; the next day, 100% to the new one. However, some issues cannot be detected until they’re live in production, no matter how much testing is done in a development environment. To mitigate this risk, never roll out a new service by abruptly flipping a switch and redirecting all traffic to the new system. Tooling that allows you to switch just certain demographics or types of users to a new system is widely available. Offer an alpha or beta version that users opt in to trying. Or direct traffic for just a small section of the application to the new system.
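
To make this concrete, here’s a minimal sketch of percentage-based routing, assuming a deterministic hash of a stable user identifier. The hostnames, threshold, and function names are hypothetical, not taken from any particular feature-flag product:

```python
import hashlib

# Hypothetical sketch: route a stable 10% cohort (plus anyone who has
# opted in) to the new system; everyone else stays on the legacy one.
ROLLOUT_PERCENT = 10

def bucket_for(user_id: str) -> int:
    """Deterministically map a user to a bucket from 0 to 99."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 100

def backend_for(user_id: str, opted_in: bool = False) -> str:
    """Pick a backend; the same user always gets the same answer."""
    if opted_in or bucket_for(user_id) < ROLLOUT_PERCENT:
        return "https://beta.service.example.gov"
    return "https://legacy.service.example.gov"
```

Because the hash is deterministic, users don’t bounce between systems across visits, and gradually raising the threshold shifts more traffic to the new service.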

Beyond being risky, abrupt changes are unpopular: users tend to dislike vast, sweeping changes that arrive with little or no warning. Because users take time to acclimate to a new system, allow them to revert to the legacy application for a period of time after launch. With this approach, you can also gather analytics on the gaps in your information architecture that cause users to return to the old version to find something.

The ultimate goal is to deliver user value as quickly as possible. Sometimes this means building intermediate infrastructure to help bridge the gap between legacy systems and newer, user-facing interfaces. Once the underlying legacy system is completely migrated, the temporary infrastructure used to bridge the gap between the legacy system and the new system can be retired. Sometimes people call this “re-work” and avoid it at all costs, but if it helps improve the user experience and outcomes in the short term, it’s well worth it.
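
As one illustration of what such bridge infrastructure can look like, here’s a minimal, hypothetical sketch of a facade that translates a legacy record format into the vocabulary of the new service. All field names and formats are invented for the example:

```python
# Hypothetical sketch of "bridge" infrastructure: a thin facade that
# presents a clean, user-centered interface over a legacy format.

def fetch_legacy_record(record_id: str) -> dict:
    """Stand-in for a call to the legacy system of record."""
    return {"REC_ID": record_id, "STAT_CD": "A", "DT_CRT": "20180524"}

LEGACY_STATUS = {"A": "active", "I": "inactive"}

def get_record(record_id: str) -> dict:
    """Translate the legacy shape into the new API's vocabulary.

    When the legacy system is fully migrated, this adapter layer is
    simply deleted -- throwaway by design, not wasted work.
    """
    raw = fetch_legacy_record(record_id)
    return {
        "id": raw["REC_ID"],
        "status": LEGACY_STATUS.get(raw["STAT_CD"], "unknown"),
        "created": f'{raw["DT_CRT"][:4]}-{raw["DT_CRT"][4:6]}-{raw["DT_CRT"][6:]}',
    }
```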

Checklist
  • We have a well-defined and limited scope for an alpha release.
  • We have identified well-segmented pieces of the application that are good candidates for initial migration.
  • When building new functionality around legacy components, we have taken steps to ensure the new functions gracefully degrade, and don’t overwhelm the legacy system.
  • We have a plan to, over time, decompose a monolithic legacy system into a set of smaller, more focused microservices, centered around the user’s needs.
Key questions
  • Is our migration strategy oriented around delivering the biggest user value first?
  • Is the timing of our rollout dictated by legacy contractual deadlines?
  • What are the natural user segments we can target for different phases of our rollout?

USDS Play 5: Structure budgets and contracts to support delivery

Create organizational buffers to give space for agile software development

Successful digital services are built by teams that are responsive to change. A common methodology adopted by such teams is known as agile software development. But let’s face it: agile software development, with its short iterations of discovery, delivery, and testing, requires a change to the way we have traditionally structured software procurements in government.

The process of development often mirrors the organizational structure, including the accountability structure, of the agency it is performed in. We know that doing software development with more agility and flexible practices yields better, more user-centered outcomes than traditional “waterfall” methods. Digital services need to be built and operated by cross-functional teams that include researchers, designers, developers, and operations engineers. It’s important that the same team that builds the service also operates it. Contract and team structures often don’t allow this, instead opting to lob code over a wall to the operations team to deploy and operate. This is a recipe for unstable and poorly managed services. Government needs to continue to be mindful of the internal changes it may need to make to accommodate new means of delivery.

In addition to changing team structures to accommodate agile delivery, contract structures must also change. Contracts that are too long or too short tend to slow or disrupt delivery. Instead, structure your contracts as a six-month base period with several six-month option periods. This allows you to conduct your discovery and prototyping in the base period, then make a more informed decision to either execute an option to continue with delivery or pivot based on your discovery and prototyping work. It also maintains competitive pressure on the vendor to keep up the quality and speed of delivery.

It takes time for companies to build up strong teams that understand the agency’s requirements and its users’ needs; longer contracts afford this opportunity. Focus on creating the conditions, through contracts, to sustain high-performing teams. Also, make sure to keep your statement of work up to date with the newest information from your project. This will ensure you feel prepared to re-compete the contract instead of exercising an option if it becomes necessary.

Checklist
  • My team understands what agile software development practices look like.
  • My leadership understands we will have different artifacts than a traditional waterfall process.
  • We have set expectations with the team and with leadership that agile looks different than waterfall, but it yields better outcomes over time.
  • We have structured RFPs and SOWs to ensure accountability for a contract matches the needs of agile software development, rather than of a waterfall process.
Key questions
  • Have we communicated to stakeholders and oversight staff how we are tracking progress and success as an agile team?
  • Are we “solicitation-ready” in case we need to change who we’re delivering with?
  • Is our contract structured to balance the need to maintain competitive pressure on the incumbent, but also to permit time and space for vendors to grow teams that meet our needs?
  • Does our contract allow for a discovery and prototyping phase?

USDS Play 6: Assign one leader and hold that person accountable

Find partners that help you understand technology choices, trade-offs, and risks

The overall direction of a product lies with the government business owner. Technologists who partner with government have a responsibility to clearly communicate the trade-offs of different implementation choices. If you aren’t providing clarity to your partners, you’re damaging the relationship and eroding trust. Worse, you are not going to arrive at the best-fit approach for the problem.

In addition to communicating trade-offs and technical decisions clearly, you must also bridge the gap on why more modern technologies and methodologies benefit government and reduce the overall risk profile of a program. Test-driven development (TDD) and continuous integration and deployment are considered best practices in the private sector. They reduce risk for projects by shortening response times during incidents and by preventing regressions from reaching production. Don’t just throw around buzzwords; relate these practices to how they help the product.
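
For example, here’s a minimal sketch of what a regression test looks like in practice (pytest style). The premium calculation and its thresholds are hypothetical stand-ins for a real business rule:

```python
# Hypothetical sketch: a toy business rule and the regression tests
# that keep a refactor from silently breaking it in production.

def monthly_premium(household_income: int, household_size: int) -> int:
    """Toy rule: larger, lower-income households pay less."""
    base = 400
    discount = min(household_size * 25, 150)
    if household_income < 30_000:
        discount += 100
    return max(base - discount, 0)

def test_low_income_discount_applies():
    # Regression guard: imagine this discount was once dropped by a refactor.
    assert monthly_premium(25_000, 2) == 400 - 50 - 100

def test_premium_never_negative():
    assert monthly_premium(0, 100) >= 0
```

Run on every commit in a continuous integration pipeline, tests like these catch regressions before they ever reach users.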

Checklist
  • Our key decision-makers understand how their product works at a high level, and understand and can communicate the impact of product and technical decisions.
  • We include security and assessment personnel early on in our discussions on how our modern practices map to security and availability concerns.
  • If our product owner does not have technical experience, our product manager(s) are equipped to clearly and accurately communicate trade-offs and necessary technical details.
Key questions
  • Is our product owner equipped with a framework for evaluating decisions on priority, technical implementation, and scope?
  • Does our development team communicate technical recommendations in the context of the user or business need?

USDS Play 7: Bring in experienced teams

Collaborate with delivery teams on expressing and validating business rules

A common pattern we’ve observed, and one that has not been successful, is when policy and subject matter experts craft feature requirements and functionality for a service, then throw them over the wall, so to speak, for the technology team to implement. The problem is that engineers must then deal with often contradictory or conflicting requirements, or ones that are difficult to implement given other constraints such as budget, time, and maintaining a quality user interface and good user experience. Policy folks, in this arrangement, also miss out on the opportunity to explore the possible solution space further, because they are hamstrung by their lack of technical expertise.

Good technologists, be they engineers, designers, researchers, or product managers, are capable of learning new domains and ramping up quickly on subject matters in which they are not expert. A good technologist should be a problem-solving partner for government.

The stuff of software engineering is the encoding of business requirements and rules into software. Policy and subject matter experts should sit down and collaborate with technologists, not just in the initial conception and ideation phase, but in all phases of the process, through research, delivery sprints, and iterations on design and new feature development. Good engineers can suggest tradeoffs, alternate implementation paths, and new ideas based on their familiarity with the capabilities of the technology. Subject matter experts are even served well by sitting in on software demos, user research, and incident response, so they can bear witness to how the software that carries out their goals performs in reality with real users.
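
To make that concrete, here’s a minimal sketch of a business rule encoded as a server-side validation. The field names, rules, and messages are hypothetical, but this is the kind of artifact SMEs and engineers can review together in a sprint demo:

```python
# Hypothetical sketch: business rules encoded as server-side
# validations that return plain-language errors a SME can verify.

def validate_application(form: dict) -> list:
    """Return a list of human-readable validation errors."""
    errors = []
    if not form.get("ssn"):
        errors.append("Social Security number is required.")
    if form.get("dependents", 0) < 0:
        errors.append("Number of dependents cannot be negative.")
    income = form.get("annual_income")
    if income is None:
        errors.append("Annual income is required.")
    elif income < 0:
        errors.append("Annual income cannot be negative.")
    return errors

# Example: {"ssn": "", "dependents": 1} yields two errors, for the
# missing SSN and the missing income.
```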

Checklist
  • We have discussed business rules with the delivery team as partners in discovering a path to implementation.
  • We make SMEs available to the delivery team at all stages of the service’s lifecycle.
  • We have outlined our business metrics and identified success criteria.
  • As designers and engineers are encoding our business rules into software — as user interfaces and server-side validations and transactions — we have a process for incorporating feedback from them.
Key questions
  • How often do SMEs attend sprint reviews, or listen in on usability test sessions to see the product in action?
  • How do our SMEs communicate business rules to our technical team? Do both parties have a common framework and definitions?
  • How do we balance the business rules with usability?

USDS Play 8: Choose a modern technology stack

Default to small, focused custom applications built on commodity infrastructure and open source stacks

In our experience, the choice between “custom software development” and “customized COTS” is a false one. Very little is custom-built from scratch anymore: any effective engineering effort will use lots of pre-existing software, from operating systems and databases to libraries and frameworks. COTS products may promise total solutions, but they will inevitably need to be pared down and customized. Everything has trade-offs. The question is whether you will examine the trade-offs explicitly or not.

Starting with small pieces that perform one function well and gluing them together, along with user-centered design, is the art of making an effective application. The challenge of making an end-to-end, full-stack service work well and cohere is in making its components perform consistently (from a systems performance perspective), such that requests and responses flow efficiently, quickly, and without bottlenecks. These kinds of architectures allow individual software components to scale independently and make them easier to replace in the future. In contrast, customized COTS products tend to be monoliths that lack well-defined interfaces. As such, replacing or upgrading them is slow and expensive.
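
As a small, hypothetical sketch of why well-defined interfaces matter, consider a narrow contract that lets a component be swapped without touching its callers. The names here are invented for illustration:

```python
# Hypothetical sketch: a narrow interface lets a vendor component and
# its open source replacement be swapped without changing callers.
from typing import Protocol

class AddressLookup(Protocol):
    def normalize(self, raw_address: str) -> str: ...

class VendorGeocoder:
    """Wraps a commercial product behind the narrow interface."""
    def normalize(self, raw_address: str) -> str:
        return raw_address.strip().title()  # stand-in for a vendor call

class OpenSourceGeocoder:
    """A drop-in replacement; callers never change."""
    def normalize(self, raw_address: str) -> str:
        return " ".join(raw_address.split()).title()

def confirm_address(raw: str, lookup: AddressLookup) -> str:
    return lookup.normalize(raw)
```

Either implementation satisfies the contract, so replacing one component doesn’t ripple through the system the way replacing a monolith does.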

Checklist
  • My service is designed to solve the problems I have today, not the problems I speculate I’ll have later.
  • My service is deployed to flexible, commodity hosting infrastructure such as a cloud service provider.
  • My service is built on top of layers of primarily open source software.
  • My service is not tightly coupled to a particular runtime environment (for example, a proprietary OS, or a complex or hard to reproduce configuration).
  • My service can be deployed and begin taking requests within weeks, or a couple of months at most, rather than three months or more.
Key questions
  • Does the government have full control over and access to the underlying data in the system?
  • Does our product require developers from one company or a small handful of companies for support, or can anyone support it?
  • Are we able to add features as they are prioritized or are we dependent on someone else’s roadmap?
  • Are we pre-optimizing our system or architecture in anticipation of needs that we are not currently addressing? Can we make things simpler and evolve them over time, instead of trying to guess what challenges we will face in the future?

USDS Play 9: Deploy in a flexible hosting environment

Keep new systems evergreen

Sending systems into a static O&M cycle causes them to degrade over time and become “legacy” systems that can no longer support the evolving needs of their users or interact cleanly with other systems. Legacy systems are the inevitable result of a lack of capital investment and incremental migration to new infrastructure over time. While new feature development will certainly slow down, or even stop for brief periods, in this phase, programs should actively remediate technical debt in their systems and applications to keep them from becoming legacy. Even if a wholesale migration is still required in the future, an actively developed product will have a much easier migration path.

Checklist
  • Our team still has an available product owner and maintains an active sprint cycle.
  • Our team makes steady progress on remediating technical debt.
Key questions
  • How difficult is it to add new features to our product or deploy our product?
  • Do we maintain a channel for feedback or usability testing, even if it’s infrequent?

USDS Play 10: Automate testing and deployments

Leave outdated management practices in the data center

Cloud service providers offer a fundamentally different value proposition than legacy data centers: the promise of flexible, on-demand resources, paid for only as used. Because changes can be made in moments instead of hours or days, the dynamic nature of cloud hosting affords a different tolerance of, and approach to, risk. It makes no sense to graft legacy data center management practices onto the cloud: manually modifying infrastructure, applying regressive network rules, taking extended downtime for “maintenance windows”. These and other outmoded policies strangle the cloud’s unique advantages and prevent delivery teams from operating effectively. Similarly, formal change control boards and processes, while well-meaning and sensible in more static legacy data center environments where changes can be very risky and expensive, should be reevaluated when deploying to the cloud. When changes are automated, backed by user acceptance and regression testing, and deployed with best-practice approaches that require no downtime, formally reviewing every change adds needless overhead and slows delivery teams dramatically.

All digital service infrastructure should be automated, from its creation through its entire lifecycle. It is essential that delivery teams be able to manage their systems directly using their preferred tools. Intermediate cloud management teams that mediate between delivery teams and the cloud are a relic of the legacy data center era. Cloud services provide systems for allocating and isolating broad organization-wide resources like billing and accounts: government should leverage them to give delivery teams direct access to the cloud.
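
As an illustration (assuming AWS and its boto3 SDK purely as one example; any provider’s SDK or infrastructure-as-code tool works similarly), environment creation can be a short, repeatable script rather than a ticket queue:

```python
# Hypothetical sketch: create and tag an isolated network for a new
# environment with AWS's boto3 SDK. Names and CIDR are illustrative.
import boto3

def create_environment(name: str, cidr: str = "10.0.0.0/16") -> str:
    ec2 = boto3.client("ec2")
    vpc_id = ec2.create_vpc(CidrBlock=cidr)["Vpc"]["VpcId"]
    ec2.create_tags(
        Resources=[vpc_id],
        Tags=[
            {"Key": "Name", "Value": name},
            {"Key": "Environment", "Value": name},  # supports auditing
        ],
    )
    return vpc_id

# The same code that builds "staging" builds "production", so a new
# environment takes minutes, not days or weeks.
```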

Checklist
  • We are able to create a new environment using our deployment tooling in minutes or hours, not days or weeks.
  • We have re-tooled our change control board or similar processes to achieve the benefits of fast, secure, and automated deployments.
  • We have replaced extensive documentation on our deployments with automated logging and auditing tools.
Key questions
  • Have we updated our security package to reflect our deployment process?
  • Are we using a flexible purchasing vehicle (such as time and materials) that allows us to easily scale up or down (within limits)?
  • Does our cloud service provider or cloud platform maintainer impose unnecessary restrictions or limits on how often our development teams can deploy?

USDS Play 11: Manage security and privacy through reusable processes

Make the right thing the easiest thing

Government can be process-heavy when it comes to security and privacy. It’s great to ensure these needs are met, but sometimes the process is complicated enough that it has the opposite effect: driving people to make an end-run around these processes just to get things done. This is the worst outcome because it creates incentives for people to neglect both security and privacy.

Instead, the right thing should be the easiest thing to do. Security and privacy requirements can be developed incrementally to better align with agile processes and incremental releases. Generic guardrails and approvals can be set up around how user data is handled (such as GSA’s generic privacy impact assessment for design research) rather than forcing a new process on a research team any time they ask a user different questions. Lastly, privacy and security must be baked into all parts of the digital service, just like usability and scalability, instead of acting as gating factors for a launch or major milestone.

Checklist
  • We can easily update our Authority to Operate (ATO) package incrementally, using partial assessments.
  • We regularly re-evaluate our processes and how our team interacts with them to improve them and encourage the right behaviors.
  • We provide usable tooling with security best practices baked in (password managers, one-time use credentials, etc.).
  • We maintain a data inventory, and do not collect or retain data that is not essential to the operation of a service.
Key questions
  • Are we tracking metrics around security incidents that can be used to measure the effectiveness of our current process?
  • Are our tools easy to use?
  • Can we apply user research and usability testing to make our processes easier to understand and navigate?

USDS Play 12: Use data to drive decisions

Understand the end-user demand profile to appropriately scale resources

Too often, enormous resources are employed for too little user demand after the initial launch. Or enormous resources are marshalled inefficiently and are barely able to serve modest demand. Part of the solution is to build the right architecture for end-user services in the first place. But government should try to estimate expected demand as best it can, and map that to a resource budget and plan.

When systems are overbuilt, or built incorrectly for the problem, they must be scaled up out of proportion to the actual need in order to work effectively. Money and effort are wasted, and the user experience suffers.

We have also observed this in the form of complicated and hard-to-use interfaces. Most of the development time and effort on digital services is spent accommodating edge cases and business rules that impact a small percentage of users. If you think of government forms as a large decision tree of choices implementing these rules, the branching complexity for most services gets very high, distorting the user interface. For example, on HealthCare.gov, eligibility for a tax credit can easily be determined for the large majority of households from a simple set of questions. But the remaining households comprise potentially complicated scenarios, all of which must be accounted for and represented somehow in the form of user interface components. This takes considerable design, engineering, and subject-matter-expert resources to achieve, and can result in the large majority of users having a subpar experience. Government must serve everyone, so what can be done?

One approach may be to build “super-user” interfaces alongside the main digital service, following the classic 80/20 rule. The main service would provide a well-designed, quality experience for most users. The remaining users the main service doesn’t accommodate cleanly would be routed to a transparent, time-boxed workflow in which an agency expert or delegate applies their knowledge of the business rules to help move the user’s transaction along.
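
Here’s a minimal, hypothetical sketch of that triage idea; the flags and routing criteria are invented for illustration, not drawn from HealthCare.gov:

```python
# Hypothetical sketch: serve the common case with self-service and
# route the complicated minority to an expert-assisted workflow.
COMPLEX_FLAGS = {"self_employed", "mixed_household", "mid_year_move"}

def route_application(household: dict) -> str:
    flags = set(household.get("flags", []))
    if flags & COMPLEX_FLAGS:
        # Time-boxed, expert-assisted path for the hard cases, instead
        # of distorting the main interface to handle every branch.
        return "expert-queue"
    return "self-service"
```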

A “super-user” mode is an experimental approach that needs more examples and test cases before we can fully recommend it. The important takeaway for government is to recognize the tension between serving everyone and delivering high-quality user experiences in a timely and cost-efficient manner. Trade-offs must be made, but too often, government has chosen to compromise the user experience. If it truly wants to deliver user experiences that rival those of industry, it must think creatively about how best to serve everyone.

Checklist
  • Our service is deployed in a flexible hosting environment that can scale automatically on demand without human intervention, like cloud service providers.
  • Excess capacity allocated during peak demand times is automatically scaled-down when demand decreases.
Key questions
  • Can we put basic analytics on our existing legacy system to better understand our demand profile?
  • Do we have data on the upper and lower bounds on the number of potential users?
  • Do we need to serve everyone through the exact same service? Could a supplementary service address our needs as well?
  • How are we compromising the user experience by trying to fit all possible business rules into the same user interface?

USDS Play 13: Default to open

Treat your API users like any other users

Starting with an API-first approach has many benefits, such as making large, transformational projects more scalable. But if you develop an API without testing it with other potential users, don’t expect to see meaningful adoption outside your immediate team. As with open source software, if you build it in a vacuum, you can’t just assume others will be able to use it. Users of your API are just like the end users of your website: you need to understand their needs and pain points to build something that works for them.
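
As a minimal sketch of treating API consumers as users (using Flask; the endpoints, dates, and headers shown are hypothetical), versioned routes and explicit deprecation signals keep changes from landing on them as surprises:

```python
# Hypothetical sketch: versioned API routes plus a machine-readable
# retirement notice, so API users learn about changes ahead of time.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/v1/status/<claim_id>")
def status_v1(claim_id):
    response = jsonify({"id": claim_id, "status": "received"})
    # "Sunset" is a proposed standard header announcing retirement.
    response.headers["Sunset"] = "Sat, 01 Dec 2018 00:00:00 GMT"
    response.headers["Link"] = '</api/v2/status>; rel="successor-version"'
    return response

@app.route("/api/v2/status/<claim_id>")
def status_v2(claim_id):
    return jsonify({"id": claim_id, "status": "received", "updated": None})
```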

Checklist
  • Users that aren’t on our direct team are able to find and use our APIs.
  • We have incorporated user research on API users into our development process.
  • We have a way to capture and prioritize feedback/requests from API users and communicate that priority back to the requestor.
  • We have a process for onboarding API users that is automated to the greatest extent possible.
Key questions
  • How do we communicate changes, outages, or other critical information to our API users?
  • Are we allowing users to access their own personal, private data through our API (with proper authentication)?
