White Paper: Retrospective Benefit-Cost Analysis

By Jennifer Baxter, Lisa Robinson, & James Hammitt • April 20, 2015

INTRODUCTION

In a previous article, “Benefit-Cost Analysis and the Cities,” we describe how cities can use benefit-cost analysis prospectively to help decision makers and concerned citizens anticipate and weigh the pros and cons of different policy choices. In this article, we consider how benefit-cost analysis can be used retrospectively to promote understanding of a policy’s impacts after it has been implemented. Such analysis aids in identifying needed reforms as well as in improving the conduct of future prospective analyses. The major challenges relate to estimating what would have occurred in the absence of the policy and separating the effects of the policy from other factors.

In the sections that follow, we discuss the conduct of retrospective analysis. We assume that readers are familiar with the basics of benefit-cost analysis and focus on the differences between retrospective and prospective analysis. We begin by describing possible analytic goals, then discuss how to address key issues. We conclude with a list of resources that provide additional information.

WHAT ARE THE GOALS OF THE ANALYSIS?

Retrospective, or ex post, benefit-cost analysis, prepared after a policy has been in place for a period of time, can be useful regardless of whether a prospective or ex ante analysis was previously conducted. Such analysis is at times described as a validation exercise. Its purpose may be to identify opportunities for policy reform, by evaluating whether existing policies are justified in economic terms (i.e., produce positive net benefits) and identifying changes that will decrease their costs or increase their benefits. Retrospective analysis may also provide insights into the accuracy of prospective estimates of costs and benefits, if available, particularly whether they tend to be over- or underestimated, and identify ways to improve the accuracy of future analyses.

If the goal is at least in part to examine opportunities for reform, the retrospective analysis should be accompanied by a prospective analysis that focuses on the potential changes. Future impacts may differ from the impacts of the same policy implemented at an earlier point in time, due to factors such as changes in the characteristics of the industry, the population, or the economy more generally. In addition, the affected entities may have incurred costs that will not be recovered if the policy is altered. If the cumulative burden of a series of policies is important, it may be useful to consider their net effects as a group, rather than assessing each individually.

Retrospective benefit-cost analysis is becoming increasingly common. In the United States, the Obama Administration encouraged the conduct of such analysis for significant Federal regulations in a 2011 executive order (Executive Order 13563). Several reviews of previous studies, including the examples in the reference list at the end of this document, explore whether prospective regulatory analyses tend to be systematically biased. In particular, in its 2005 Report to Congress, the U.S. Office of Management and Budget (OMB) finds that “U.S. Federal agencies tend to overestimate both benefits and costs, but they have a significantly greater tendency to overestimate benefits than costs.” However, it is unclear whether correcting these errors would have supported different policy decisions.

One example of potential bias is a tendency to routinely underestimate the ability of affected entities to reduce costs as they gain experience with a policy. Innovation, organizational learning, or other factors may also lead to larger-than-predicted reductions in the risks or other problems targeted by the policy. For example, requirements focused on information disclosures (such as labeling foods containing unhealthy ingredients) may cause industry to reformulate products more than expected. A prospective analysis that assumes that providing information will affect only consumption patterns, and not the products available, may understate both the costs and the benefits of the policy.

In addition to improving the accuracy of future prospective analyses, better understanding how affected entities are likely to respond will help identify more efficient methods of achieving policy objectives. For example, if analysts learn that actual compliance rates are much lower than anticipated, they may want to assess whether increased enforcement of existing regulations will achieve better outcomes at lower cost than introducing new regulations.

HOW SHOULD KEY CHALLENGES BE ADDRESSED?

In many respects, the components of a retrospective analysis are identical to those of a prospective analysis, as discussed in “Benefit-Cost Analysis and the Cities.” Below, we describe three differences that can make the conduct of retrospective analysis particularly challenging: estimating the impact of the policy, accounting for timing, and sequencing analytic steps.

Estimating Policy Impacts

Some observers assume that retrospective analysis will be more accurate than prospective analysis because analysts can simply tally the costs and benefits that have actually occurred. However, correctly measuring incremental effects on a retrospective basis presents similar challenges to estimating impacts prospectively and is subject to substantial uncertainty. The key challenge is isolating the incremental effects of the policy. As with prospective analysis, identifying these effects requires comparing two scenarios: the world with the policy (the “incremental scenario”) and the world without the policy (the “baseline scenario” in prospective analysis, or “counterfactual scenario” in retrospective analysis). The relevant comparison is the world without and with the policy, not the world before and after the policy is implemented.
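
To make the distinction concrete, the following sketch (in Python, using entirely hypothetical numbers) shows how a before/after comparison can overstate a policy’s effect when the targeted problem was already declining; only the with/without comparison isolates the incremental effect.

```python
# Hypothetical sketch: the policy's effect is the gap between the observed
# world WITH the policy and the modeled counterfactual WITHOUT it,
# not the gap between the before and after periods.

injuries_before = 100          # annual injuries observed before the policy
injuries_observed = 70         # annual injuries observed with the policy
injuries_counterfactual = 85   # modeled injuries had the policy never existed
                               # (injuries were already trending downward)

before_after = injuries_before - injuries_observed          # 30 "avoided"
with_without = injuries_counterfactual - injuries_observed  # 15 truly avoided

print(f"Before/after comparison: {before_after} injuries avoided")
print(f"With/without (incremental) effect: {with_without} injuries avoided")
```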

In prospective analysis, both scenarios occur in the future; neither is observed. In retrospective analysis, uncertainty may be reduced because the world with the policy (the incremental scenario) can be observed. Quantities that were included as probabilities or expected values in the prospective analysis can be replaced with actual outcomes, to the extent that the effects of the policy can be separated from other factors. The analyst may have data on regulatory compliance rates, or may be able to obtain more accurate information on key assumptions, such as the number and characteristics of program participants. In other cases, it may be difficult to isolate the effects of the policy. The incidence of the problems addressed, such as drug addiction or teen pregnancy, may be rising or falling due to the impacts of other programs, innovation, changing demographics, or other factors. The extent to which the policy has accelerated a decrease in incidence, or offset what would otherwise have been an even larger increase, may be difficult to determine. Furthermore, analysts must still model the counterfactual scenario, which cannot be observed and remains uncertain.

In addition to collecting data on the impact of the policy through surveys, interviews, and other methods, analysts can often use other tools from the field of program evaluation. Ideally, the policy would have been designed to allow for a controlled experiment, enabling analysts to empirically estimate the impact of the policy with a high degree of confidence by comparing otherwise-identical treatment (i.e., subject to the policy) and control (i.e., not subject to the policy) groups. However, implementation of a controlled experiment is often at odds with policy goals; policies are usually intended to target all members of the population of concern or, for fairness, to apply equally to everyone.
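
The logic of the controlled experiment can be illustrated with a minimal sketch, again with invented data: because random assignment balances other factors across the two groups, a simple difference in mean outcomes estimates the policy’s incremental effect.

```python
# Hypothetical controlled experiment: units randomly assigned to the policy
# (treatment) or exempted (control). Randomization balances other factors,
# so a difference in means estimates the policy's incremental effect.

treatment_outcomes = [2, 1, 3, 0, 2, 1]   # e.g., violations per inspection
control_outcomes   = [4, 3, 5, 2, 4, 3]

def mean(values):
    return sum(values) / len(values)

effect = mean(treatment_outcomes) - mean(control_outcomes)
print(f"Estimated policy effect: {effect:+.2f} violations per inspection")
```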

Alternatively, opportunities for natural or quasi-experimental designs may exist; these exploit naturally occurring variation that approximates randomization. For example, analysts may be able to identify comparison groups unaffected by the policy if: (1) the policy is phased in through time (e.g., new products are subject to the policy while similar, older products are exempt); or (2) the policy is implemented differently across geographic areas (e.g., affecting only some neighborhoods). If a food safety program applies only to new establishments, its effectiveness may be estimated by comparing the safety records of the new establishments to those of establishments not subject to the requirements. Care must be taken, however, to account for other factors that may affect these differences. For example, older establishments may decide to comply voluntarily with the requirements to compete more effectively with new establishments.
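
For phased or geographically varied implementation, one common quasi-experimental approach is a difference-in-differences comparison. The sketch below uses invented numbers for the food safety example; note that it assumes the exempt establishments are truly unaffected, which the voluntary-compliance behavior just described would violate.

```python
# Hypothetical difference-in-differences: a food-safety rule applies only to
# new establishments. Comparing each group's CHANGE over time removes trends
# common to both (e.g., a citywide shift in food-handling practices).

new_before, new_after = 5.0, 2.5   # mean violations, establishments under the rule
old_before, old_after = 4.0, 3.5   # mean violations, exempt establishments

change_new = new_after - new_before   # -2.5 (rule effect plus common trend)
change_old = old_after - old_before   # -0.5 (common trend only)

did = change_new - change_old         # -2.0 attributable to the rule,
                                      # assuming exempt firms are unaffected
print(f"Difference-in-differences estimate: {did:+.1f} violations")
```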

Such controlled or quasi-experiments may provide a better assessment of the effects of existing policies than models that rely on assumptions about uncertain factors, because they are based on observed outcomes and data. In practice, however, they may be too small in scale to extrapolate to a larger program (e.g., city-wide rather than neighborhood-level), or the conditions necessary for valid inference may not hold. Because government policies usually apply to the entire city population, comparable control groups often do not exist. Comparing populations through time may be more feasible; however, changes in underlying economic and other conditions may complicate such comparisons. Some of these challenges may be overcome using simple regression analysis or more sophisticated econometric modeling techniques.
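
As a minimal illustration of the regression approach, the following sketch simulates annual data with a pre-existing downward trend and then recovers the policy effect by including both a trend term and a policy indicator; all values are invented for illustration.

```python
# Sketch of regression adjustment on simulated data: regress the outcome on
# a policy indicator plus a time trend, so the policy coefficient is not
# confounded with the pre-existing downward trend.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2005, 2015)
trend = (years - 2005).astype(float)
policy = (years >= 2010).astype(float)     # 1 once the policy is in effect

# Simulated outcome: declining trend, a -4.0 policy effect, plus noise
outcome = 50.0 - 1.0 * trend - 4.0 * policy + rng.normal(0.0, 1.0, years.size)

X = np.column_stack([np.ones(years.size), trend, policy])
coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"Estimated policy effect, net of trend: {coefs[2]:.1f}")
```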

Addressing Timing

Two issues related to the timing of impacts, defining the period of analysis and adjusting for time preferences through discounting, may be addressed differently in retrospective than in prospective analysis. As with prospective analysis, the retrospective analysis should start in the year impacts were first incurred, even if that year predates the effective date of the policy. For example, many affected entities may incur costs in anticipation of new requirements, and these costs should be included in the analysis. In general, the analysis should end at the most recent date for which retrospective data are available. At times, it may be desirable to exclude a time period, or to use statistical methods to separate the effects of rare events (such as an unusually damaging hurricane), so that the effects of the policy under “normal” conditions are more apparent. To the extent that analysts wish to project impacts into the future, those results should be clearly separated and reported as such, because prospective analysis requires a different set of assumptions about the future baseline and policy scenarios.

Where the benefits and costs of a policy occur unevenly through time, analysts should consider the full time period over which the policy has been implemented. Longer timeframes may be particularly important for policies whose outcomes are not measurable until many years after the policy goes into effect. In such cases, a longer timeframe ensures that all significant costs and benefits are captured in the analysis. However, if costs and benefits are likely to remain constant throughout the period of analysis, it may be sufficient to model impacts for a single year.

If the analyst wishes to compare the results of prospective and retrospective analyses, both must cover the same time period. This may not always be possible, particularly if the policy is reviewed within the first few years of implementation. In such cases, analysts should adjust the prospective estimates to exclude years not covered by the retrospective analysis when making comparisons.
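
A minimal sketch of this adjustment (hypothetical figures throughout): restrict both streams of estimates to the overlapping years before comparing totals.

```python
# Hypothetical comparison restricted to the years both analyses cover: the
# prospective estimates for later, un-reviewed years are simply excluded.

prospective_net   = {2010: -400_000, 2011: 180_000, 2012: 180_000,
                     2013: 180_000, 2014: 180_000}   # as originally projected
retrospective_net = {2010: -500_000, 2011: 150_000, 2012: 200_000}

overlap = sorted(prospective_net.keys() & retrospective_net.keys())
prospective_total = sum(prospective_net[y] for y in overlap)
retrospective_total = sum(retrospective_net[y] for y in overlap)

print(f"Years compared: {overlap}")
print(f"Prospective net benefits:   ${prospective_total:,}")
print(f"Retrospective net benefits: ${retrospective_total:,}")
```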

Regardless of whether impacts occur in the future or the past, time preferences matter: resources allocated to the program could have been used for other purposes, and benefits accrued earlier are more valuable than those accrued later. Generally, the starting point (base year) is the year the policy went into effect or the first year costs or benefits were incurred. Alternatively, impacts may be reported on an annualized basis. In either case, the stream of costs and benefits should be reported by year in constant, undiscounted dollars, as well as in discounted terms. If analysts are interested in comparing the results of the retrospective and prospective analyses, they should report benefits and costs in present value terms using the same base year.
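
To illustrate the mechanics, the sketch below discounts a hypothetical stream of annual net benefits, reported in constant dollars, back to the base year; the 3 percent discount rate and all dollar figures are assumptions chosen purely for illustration.

```python
# Hypothetical discounting sketch: net benefits by year in constant dollars,
# discounted back to the base year at an assumed 3% rate.

rate = 0.03
base_year = 2010                       # year the policy took effect

net_benefits = {2010: -500_000,        # up-front compliance costs
                2011:  150_000,
                2012:  200_000,
                2013:  250_000}

present_value = sum(amount / (1 + rate) ** (year - base_year)
                    for year, amount in net_benefits.items())
print(f"Present value ({base_year} dollars): ${present_value:,.0f}")
```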

Sequencing the Analysis

As with prospective analysis, analysts should design the retrospective analysis to use analytic resources effectively. This generally requires following a phased approach to ensure that the work is carefully targeted and useful for its purpose. Prior to initiating the retrospective analysis, those involved should consider the goals of the effort and define the scope of the analysis accordingly. They should begin with a simple screening analysis to identify key parameters and focus subsequent data collection and modeling efforts.

For example, if the purpose of the effort is to determine whether the benefits of a policy exceed costs and a simple screening analysis can answer this question, additional modeling efforts may not be necessary. If analysts are interested not just in whether the policy was effective, but also in the accuracy of the prospective cost and benefit estimates, additional work may be required.

In some cases, revisiting the prospective analysis from an ex post perspective will provide important insights into the costs and benefits of the policy. In other cases, prospective analysis of the costs and benefits of eliminating or modifying the policy may be useful, instead of or in addition to the ex post analysis. In all cases, the level of effort should be tailored to the purpose of the review. Whatever the approach, the central challenges remain estimating what would have occurred in the absence of the policy and separating its effects from those of other factors.

ACKNOWLEDGEMENTS

This article builds on work completed by the authors under subcontract to the U.S. Department of Health and Human Services (HHS) and under an Intergovernmental Personnel Act assignment by Dr. Hammitt. The views expressed are our own and do not reflect the views or endorsement of HHS.

REFERENCES AND OTHER RESOURCES

Aldy, J.E. 2014. Learning from Experience: An Assessment of the Retrospective Reviews of Agency Rules and the Evidence for Improving the Design and Implementation of Regulatory Policy. Prepared for the Administrative Conference of the United States. https://www.acus.gov/research-projects/retrospective-review-agency-rules

Greenstone, M. 2009. “Toward a Culture of Persistent Regulatory Experimentation and Evaluation.” In New Perspectives on Regulation. D. Moss and J. Cisterino (eds.). Cambridge, MA: The Tobin Project.

Harrington, W., R. Morgenstern, and P. Nelson. 2000. “On the Accuracy of Regulatory Cost Estimates.” Journal of Policy Analysis and Management. 19(2): 297-322.

Kopits, E., A. McGartland, C. Morgan, C. Pasurka, R. Shadbegian, N.B. Simon, D. Simpson, and A. Wolverton. 2014. “Retrospective Cost Analyses of EPA Regulations: A Case Study Approach.” Journal of Benefit-Cost Analysis. 5(2): 173-193.

Lutter, R. 2013. “Regulatory Policy: What Role for Retrospective Analysis and Review?” Journal of Benefit-Cost Analysis. 4(1): 17-38.

U.S. Office of Management and Budget. 2005. Validating Regulatory Analysis: 2005 Report to Congress on the Costs and Benefits of Federal Regulations and Unfunded Mandates on State, Local, and Tribal Entities. http://www.whitehouse.gov/omb/inforeg_regpol_reports_congress/

About the Author

Jennifer Baxter

Jennifer R. Baxter is a Principal at Industrial Economics, Incorporated (IEc), an economic and environmental consulting firm located in Cambridge, Massachusetts. She has over 16 years of experience designing and conducting economic assessments in support of public policy development and environmental litigation. In the public policy arena, she focuses on analyzing the costs and benefits of complex and innovative federal regulatory programs, particularly proposed regulations affecting public and private natural resource and land use, public health related to environmental contaminants, transportation safety, and homeland security. Ms. Baxter’s expertise in the area of regulatory analysis is reflected in her work for multiple federal agencies on state-of-the-art methods and guidance development. Ms. Baxter holds a B.A. in Environmental Science from Boston University and an M.E.S. degree in Environmental Policy and Management from Yale University’s School of Forestry and Environmental Studies.

About the Author

Lisa Robinson

Lisa A. Robinson is a researcher at the Centers for Risk Analysis and Health Decision Science at the Harvard T.H. Chan School of Public Health. Her research and teaching focus on the use of economic analysis, particularly benefit-cost analysis, to inform policy decisions. She has spent much of her career assessing the impacts of environmental, health, and safety regulations, developing related methods, and drafting guidance. She was previously a Senior Fellow at the Harvard Kennedy School Mossavar-Rahmani Center for Business and Government and an Affiliate Fellow of its Regulatory Policy Program. In addition, she was a Principal at Industrial Economics, Incorporated; the Director of Policy, Planning, and Budget for the federal Institute of Museum Services; and an analyst at the U.S. Office of Management and Budget. She is the Past President of the Society for Benefit-Cost Analysis and serves on the editorial boards of the Journal of Benefit-Cost Analysis and Risk Analysis. She received her Master in Public Policy degree from the Harvard Kennedy School.

About the Author

James Hammitt

James K. Hammitt is professor of economics and decision sciences at the Harvard T.H. Chan School of Public Health, director of the Harvard Center for Risk Analysis, and visiting professor at the Toulouse School of Economics (France). His research and teaching concern the development and application of benefit-cost, decision, and risk analysis to health and environmental policy in the U.S. and elsewhere. He has served on the EPA Science Advisory Board and its Environmental Economics Advisory Committee, and chaired its panel on expert elicitation and the Advisory Council on Clean Air Compliance Analysis. He served on NRC/IOM committees on methods for valuing health risk for regulatory analysis and on the effects of the energy and food sectors, among others. He holds advanced degrees in applied mathematics and public policy from Harvard, was senior mathematician at RAND, and held the Pierre-de-Fermat chair at the Toulouse School of Economics.
