Taking the Final Step: The Importance of Evaluation

By Betsy Gardner • April 26, 2021

The finishing touch of any policymaking is evaluation: “Did this program achieve its stated goals?” “Was it an efficient use of money?” “Did our methods produce the desired outcomes?” Yet all too often, the evaluation stage isn’t completed, whether because of a lack of resources, the absence of dedicated units and evaluation processes, or the limited use of results in future policymaking. Evaluations are nevertheless crucially important for local governments trying to meet community needs on decreased budgets, especially because shared findings and replicable evaluations can help other cities enact similar solutions and avoid costly mistakes. This article reiterates the importance of evaluations and discusses cost-effective ways to conduct them.

...increase transparency and accountability

While local governments are considered more trustworthy than the federal government, according to polls by The Economist and YouGov, this trust can’t be taken for granted. With increased political polarization and a lack of trust in police departments following several high-profile deaths of Black Americans in 2020, trust in local government has run into several roadblocks in the past year. Evaluating government programs and publicly releasing the findings can not only inform future policy decisions but also show residents where money is going and how effectively it’s being used.

Universal Basic Income (UBI) is a policy program that gives cash payments to residents. While the idea has had political support in the U.S. since at least the 1970s, it has gained more popular support in the past few years thanks to Democratic presidential candidate Andrew Yang and the impact of the COVID-19 pandemic on the economy. Stockton, California, just completed a groundbreaking UBI pilot program that provided $500 per month for two years to 125 residents (eligible residents had a median income below $46,034). There were no regulations or requirements on how the money could be spent; some detractors assumed it would go toward things like drugs or alcohol, but the independent, year-long Stockton UBI evaluation proved them wrong. Recipients were twice as likely to attain full-time work as residents who didn’t receive payments, and most of the money was spent on essential needs like food.

This is a great example of how trials with publicly released evaluations can assure residents that city budgets are being used for valuable programs. The UBI in Stockton improved the local economy, as recipients spent the money on essential items and many moved from part-time to full-time employment, demonstrating that no-strings-attached assistance can stimulate local jobs and spending. Without this type of rigorous testing and evaluation, officials would have a harder time proving the value of UBI programs.

...help other local governments

The research and evaluations done by one government can often serve as a model for other local officials interested in similar programs. For example, other cities are now piloting UBI programs, building on the Stockton study’s evidence that the money is well spent. Officials in cities with smaller budgets can look to data-driven, evaluated programs in peer cities to help guide their own local decision-making.

Especially during the current COVID-19 pandemic, cities should be evaluating their safety measures, communications, and vaccination plans so the results can be shared with others. Working with multiple cities, the Behavioral Insights Team (BIT), a group dedicated to applying behavioral insights to inform policy and improve public services, used randomized controlled trials to test behaviorally driven messages about COVID-19 regulations and recommendations and identify the most impactful messaging. Additionally, partner organizations that work with multiple cities can release aggregate data and recommendations; a recent evaluation of government emails regarding COVID-19 vaccinations identified the subject lines and keywords that lead to higher open rates.

Studying and evaluating these two aspects of local communications around COVID-19 is incredibly valuable for other cities, and it is also why cities should share failures as well as successes. In the BIT messaging trials, some city officials went in with ideas about what kind of messaging would most appeal to residents; however, testing showed that some of these assumptions were incorrect. This not only underscored the importance of evaluating the data, but also helped other cities avoid similar pitfalls.

...can be done at multiple levels

Doing an evaluation can prove that money for a program was well spent; however, in order to show any return on investment, cities must first invest in the evaluation itself. This can turn into a tricky “chicken or egg” situation, but there are ways cities can conduct evaluations and act as good stewards for less. Local universities are great partners for regional or city governments; for example, the Nation Building courses at the Harvard Kennedy School connect students with Native and Indigenous nations and organizations that need assistance with research or evaluations.

Another option for funding evaluations is grants, either grants earmarked specifically for program evaluation or broader grants that include evaluation in the proposal. The Stockton UBI program was funded by a grant that included evaluation, and it is possible to find individual grants for local and municipal governments that specifically fund the evaluation of an existing program.

Yet even without grant funding or outside partners, there are simple ways cities can level up their evaluations. One is A/B testing, a simple form of randomized controlled trial in which participants are randomly split into two groups, each receiving one of two messages or versions, to see which performs better (like the open rates of COVID-19 vaccination emails); a minimal sketch of such a test appears below. Another is regular surveys of participants, which can measure program satisfaction, changes over time, program completion rates, and more. There are also qualitative options, with participants writing or talking about the impact of the program on their own lives.
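For readers who want to see the mechanics, here is a minimal sketch in Python of how a city team might run the email example above: recipients are randomly assigned to one of two subject lines, and the resulting open rates are compared with a two-proportion z-test. The function names and the counts are hypothetical assumptions for illustration, not figures from the BIT trials or the Stockton evaluation.

```python
import math
import random


def assign_groups(recipients, seed=2021):
    """Randomly split a recipient list into two groups, A and B."""
    rng = random.Random(seed)
    shuffled = list(recipients)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]


def two_proportion_ztest(opens_a, sent_a, opens_b, sent_b):
    """Compare two open rates; return both rates, the z statistic, and the p-value."""
    rate_a = opens_a / sent_a
    rate_b = opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_a - rate_b) / std_err
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return rate_a, rate_b, z, p_value


if __name__ == "__main__":
    # Hypothetical counts, not taken from any study cited in this article:
    # subject line A was opened 450 times out of 2,000 sends; B, 380 out of 2,000.
    rate_a, rate_b, z, p_value = two_proportion_ztest(450, 2000, 380, 2000)
    print(f"Open rate A: {rate_a:.1%}  Open rate B: {rate_b:.1%}")
    print(f"z = {z:.2f}, two-sided p-value = {p_value:.4f}")
```

A small p-value (commonly below 0.05) suggests the difference in open rates is unlikely to be due to chance alone, which is the same logic behind the randomized controlled trials described above.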

 

Evaluations are crucial for additional funding, transparency, cooperation, and consistent improvement. To assist with this final step, here is a collection of resources:

About the Author

Betsy Gardner

Betsy Gardner is the editor of Data-Smart City Solutions and the producer of the Data-Smart City Pod. Prior to joining the Ash Center, Betsy worked in a variety of roles in higher education, focusing on deconstructing racial and gender inequality through research, writing, and facilitation. She also researched government spending and transparency at the Lincoln Institute of Land Policy. Betsy holds a master’s degree in Urban and Regional Policy from Northeastern University, a bachelor’s degree in Art History from Boston University, and a graduate certificate in Digital Storytelling from the Harvard Extension School.