Parking signs and possible futures for LLMs in government

If you’ve been following the rise of large language models (LLMs) over the past year or so, you’ve probably seen many social media posts like this one.

In this tweet, the author shares a picture of a sight not uncommon in big cities with complex transportation networks: a tall pole with parking signs for a single location stacked one on top of the other, each densely packed with rules. The author feeds a photo of these signs into an LLM (in this case, ChatGPT running GPT-4) and asks it to determine, clearly and succinctly, whether parking is allowed at this location at a specific time.

The result is impressive — ChatGPT distills a dizzyingly complex set of rules into a simple, one-sentence answer.

[Image: Tweet from Peter Yang. Yang writes that he'll never get a parking ticket again, with a screenshot of him showing ChatGPT a photo of multiple street parking signs posted together. He asks: 'It's Wednesday at 4pm. Can I park at this spot right now? Tell me in 1 line.' ChatGPT responds: 'Yes, you can park for up to 1 hour starting at 4pm.']
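
For readers curious what this interaction looks like outside the ChatGPT interface, here is a minimal sketch using OpenAI's Python SDK. The model name, file path, and prompt are illustrative assumptions, not details taken from the tweet.

```python
# Minimal sketch: ask a vision-capable LLM to interpret a photo of
# stacked parking signs. Model name, file path, and prompt are
# illustrative assumptions.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode the photo of the signs as a base64 data URL.
with open("parking_signs.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "It's Wednesday at 4pm. Can I park at this "
                            "spot right now? Tell me in 1 line.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```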

While this might seem like a trivial use of artificial intelligence (AI) apps and services, it actually encapsulates quite precisely both the promise and the peril of using LLMs and other AI tools to simplify government information.

Government agencies adopting generative AI tools seems inevitable at this point. Increasingly, agencies are being sold on the power of these new tools to make complex government information and rules easier to find and understand. But there is more than one possible future for how agencies use generative AI to simplify complex government information. What can parking signs tell us about the potential of generative AI and which possible future we are moving toward?

Government information is complex

As evidenced by that ridiculously tall stack of parking signs, government information is often complex and challenging to understand. The parking rules in the tweet referenced above are not a bad proxy for other kinds of rules that governments create. This information is typically dense, often filled with jargon and acronyms, and can be scattered across multiple locations. Even people who want to follow the rules can struggle to do so because of this complexity.

Simplifying and streamlining complex government rules is often a difficult and time-consuming undertaking. The better solution to the parking problem at the location in the image above would be to create a single, unified sign with clear rules. But that can be hard. Parking rules, like other rules that government agencies create, are modified over long periods. A single parking location may be subject to multiple jurisdictions (county, city, transportation district, and school district), each with different rules. Coordinating information from these different jurisdictions and simplifying it costs more than simply making the signpost taller and adding a new sign when the rules change. And the cheaper option typically wins out.

In this way, parking rules are not so different from the rules that govern taxes or government benefit programs. These rule sets are developed over long periods and are subject to the jurisdiction of multiple government agencies and different levels of government. Efforts are underway to streamline the rules for these programs and make them easier for people to understand and follow, but time and effort are the critical ingredients. The equivalent of making the signpost taller and adding another sign is still the norm.

The promise of AI and LLMs

Over the past year alone, every level of government, from the smallest town office to the largest federal agency, has been inundated with information (and product offerings) that leverage the power of generative AI and LLMs. ChatGPT, OpenAI’s LLM-powered chatbot service that launched in late 2022 and sparked popular interest in LLMs, is reported to be the most rapidly adopted consumer technology in history.

These tools are remarkably good at distilling complex information into easy-to-understand content. They are not perfect, and the risks of using LLMs have been well documented. Still, their ability to distill complex information is impressive and is improving rapidly as more investment and research are funneled into these tools.

Most federal agencies, and many states and cities, are adopting new AI policies to govern how employees use LLM tools. Widespread LLM adoption by government agencies at this point seems like a foregone conclusion. If that’s right, then it warrants thinking about the possible futures that might arise from this. How might governments use LLMs to change the way they interact with the public, and what are the implications of these potential choices?

How governments use LLMs matters

Governments may choose to use LLMs as a tool for changing the way people navigate complex rule sets or information published on agency websites. They might use LLMs to rewrite and reorganize their web content, making it easier for people to understand, as in the sketch below. They might also use these AI tools to assist customer service agents who work with people who have complex problems or who struggle to use web-based content or mobile applications.
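
As a concrete illustration of that first option, here is a hedged sketch of how an agency team might draft a plain-language rewrite of a dense rule with an LLM. The rule text, prompt wording, and model name are assumptions for illustration, and any output would still need review by the agency's content and legal staff.

```python
# Illustrative sketch: draft a plain-language rewrite of dense
# government text with an LLM. Model name, rule text, and prompt are
# assumptions; output would still require human review before publication.
from openai import OpenAI

client = OpenAI()

dense_rule = (
    "No parking 8am-11am Tue/Fri for street cleaning; 2-hour metered "
    "parking 9am-6pm Mon-Sat except zone C permit holders; no stopping "
    "4pm-6pm on school days."  # hypothetical rule text
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite government rules in plain language at roughly a "
                "6th-grade reading level. Preserve every condition and "
                "flag anything that is ambiguous rather than guessing."
            ),
        },
        {"role": "user", "content": dense_rule},
    ],
)

print(response.choices[0].message.content)
```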

Alternatively, and of some concern, governments might opt to forgo reworking their complex rules and information and simply rely on the ubiquity of new generative AI tools to help people consume government information. Remember the parking sign example discussed above: even though reworking a huge stack of complex signs into a coherent, succinct set of rules for parking in one location may make the most sense, the easiest and cheapest option typically wins out. It’s expensive and time-consuming to rework complex rule sets that have been modified and added to over long periods.

If history is any guide, the adoption of AI and LLM tools will follow a pattern similar to that of other consumer technologies: they will continue to become more powerful and more ubiquitous. This is important for governments because as people’s habits and expectations for consuming information and conducting transactions evolve, governments will need to adapt how they interact with the people they serve. The ubiquity of AI and LLMs has implications for how governments design the experience of using government services, just as the ubiquity of the web browser did.

But history also tells us that as new consumer technologies become more popular, access to these new tools will not be distributed evenly. People at the higher end of the income distribution, those with more advanced education and digital literacy skills, and those in closer proximity to the infrastructure supporting the internet will have greater access to these tools. As governments incorporate AI and LLM-powered tools into the experience of using digital services, they must factor in (and offset) this unequal distribution of access to new tools.

With AI and LLMs set to dramatically transform how governments operate in the years ahead, it has never been more important for governments to focus on and improve the experience of using digital services. Understanding the barriers people face today in accessing and using government services is critical. If governments don’t address these issues today and take steps to improve the experience of using digital services, these disparities will be calcified by the advancement of new AI and LLM tools.

The choices that governments make in the coming months and years in adopting these new technologies will determine which future we will get. While there is enormous promise in using these tools to streamline complex government information, there is also tremendous peril.