Expanding Discretion and Accountability in the Context of AI

BY STEPHEN GOLDSMITH AND JUNCHENG (TONY) YANG • February 26, 2024

Digital tools expand the ways officials engage with the public, giving them more opportunities to anticipate needs and respond. Meaningful action requires public employees to possess the agility, flexibility, and authority to understand a need and its causes before promptly delivering a solution, as explained in our paper The Responsive City Cycle (Goldsmith & Gardner, 2022). That agility has been restricted by the widely adopted commitment to Weberian bureaucracy, characterized by rules, standardized processes, specialization of labor, and hierarchy. These structures, which limit problem solving and service delivery, may have been necessary in an analog era, but as cities become fully digital and technology rapidly improves, it becomes possible to fully realize the public servant as a knowledge worker.

Emerging artificial intelligence (AI), and in particular generative AI (GenAI), is poised to significantly enhance the capacity of public systems and employees and, in so doing, to reshape bureaucratic processes. When we refer to AI in the context of enhancing public systems, we encompass both basic and advanced forms. Basic AI applications might include automating routine tasks, such as data entry or scheduling, thereby freeing up human labor for more complex issues. Advanced GenAI, on the other hand, refers to systems that can generate new content, propose solutions, or even predict future trends by learning from vast datasets; these applications are capable of identifying hidden patterns and insights. Advanced GenAI capabilities hold the potential to reshape administrative processes, providing officials with innovative tools to enhance decision-making and ensure reliable public services. This pivotal transformation arrives on the heels of the essential full digitization of operational processes. Such AI-assisted digitization yields vast amounts of usable information, allowing for a more in-depth examination of outcomes and of the rules and supervision that guide democratic governance.

Central to an exploration of the role of AI in governance is the concept of discretion. We build on Michael Lipsky's (1980) definition, which refers to discretion as the latitude exercised by government employees in their decision-making, especially when taking official action in intricate and uncertain problem spaces. This exercise of discretion occurs when public officials are attempting to solve problems as intended by their training and the applicable rules, but where there exists a permissible range of choices about what to do or how to do it. Abuse of discretion, by contrast, occurs when an employee makes an extra-legal or unethical decision contrary to the purposes of the job, or simply disagrees with a policy goal and resists it.

AI-assisted tools can guide employee decision-making in a wide range of difficult circumstances, from efforts focused on fraud and abuse to roles like those of police officers or child welfare workers, who must exercise a different kind of discretion. Street-level bureaucrats inherently possess discretion because their roles call for nuanced human judgment – a quality that, until now, was considered separate from the role of machines. Bovens and Zouridis's (2002) concern that human discretion would diminish in a wholly automated system is understandable but imprecise. There is ample room for blended action, with human decision-making informed by AI and guided by constant vigilance and evaluation.

Accountability emerges as another pivotal component in this discussion. It encapsulates the ethical, legal, and technical obligations of governmental actors to adhere to standards, justify their actions under societal norms, or face consequences for their conduct. Addressing accountability in the AI context requires both internal and external approaches: the former advocates accountable behavior by human actors and algorithms through self-enforced norms and guidelines, while the latter relies on institutional arrangements that hold individuals and AI uses to standards of explainability, justifiability, and responsibility.

In this article we frame the discussion around blended decision-making involving humans and machines, in which governmental officials and employees using advanced AI may understand root causes and alternative solutions, rather than either blindly complying with Weberian restrictions or relying too heavily on data-driven, machine-generated instructions. Such a vision resonates with Goldsmith and Crawford's (2014) concept of accountable discretion, in which enhanced data-driven decision-making is considered a significant enabler of improved public outcomes without eroding democratic controls.

Download and continue reading Expanding Discretion and Accountability in the Context of AI here.

About the Author

Stephen Goldsmith 

Stephen Goldsmith is the Derek Bok Professor of the Practice of Urban Policy at the Harvard Kennedy School and the director of Data-Smart City Solutions at the Bloomberg Center for Cities at Harvard University. He previously served as the mayor of Indianapolis and deputy mayor of New York City.

Read Professor Goldsmith's full bio here.

About the Author

Juncheng (Tony) Yang

Juncheng (Tony) Yang is a doctoral student at the Harvard Graduate School of Design and a research assistant for Data-Smart City Solutions. His research focuses on institutional arrangements in the tech-enabled "smart cities" context, where emerging information technologies are reshaping citizen engagement, governance, and urban planning and design. Previously he was a researcher at the MIT Real Estate Innovation Lab and the MIT Future Urban Collectives Lab. Juncheng obtained a Master of Science in Urbanism from MIT; an MSc in Urban Planning from the London School of Economics (LSE), where he received the Royal Geographical Society IBG research fund; and a B.Arch., magna cum laude and with distinction, from Rice University.