
CHAPTER 11

What We Learned

WINSTON HARRINGTON, LISA HEINZERLING, AND RICHARD D. MORGENSTERN

This report began by noting some of the controversies surrounding the use of economic methods to evaluate the benefits and costs of new environmental regulations, including concerns about an excessive focus on economic efficiency criteria, the limited ability to quantify health and environmental damages, and quite fundamental questions about the monetization of these effects. While recognizing the importance of these issues, we have deliberately placed some of the broader questions beyond the reach of this volume and have chosen instead to focus on what we believe to be the most tractable question, namely the development and use of regulatory impact analyses (RIAs) by the U.S. Environmental Protection Agency (EPA). A key goal of an RIA should be to help inform regulators, Congress, and the general public about the expected consequences—both the benefits and the costs—of pending decisions.

To provide focus, we decided to examine as case studies three recent, relatively sophisticated RIAs conducted by EPA, and to engage experts, both economists and lawyers, with diverse perspectives on the issues. Our process involved the development of in-depth critiques of the three RIAs, with an opportunity for debate among the authors and outside reviewers, including academic, private, and government experts.

At the time of case selection, each of the three rules chosen had been appealed by various stakeholders, but only one outcome had been reached (the cooling water rule had recently been invalidated by a federal appeals court). Since then, a rare trifecta has emerged: all three rules have been overturned by the courts, sometimes for reasons explicitly linked to the economic analyses.

In choosing to dig deeply into individual RIAs, we hoped to focus on the current practice of cost–benefit analysis (CBA) in the regulatory process and to downplay the strictly textbook or philosophical issues that sometimes surround debates about the use of the technique. At the outset, we stipulated that the objective was not to defend or attack CBA, but to improve its use in environmental decisionmaking. Thus, we assumed, as others have, that CBA is here to stay. Our goal is to improve the quality, acceptability, and usefulness of the analyses that are undertaken.

Whereas the challenge for the authors of the RIA critiques was to assess the individual studies conducted by EPA, we the editors took on the task of preparing comparisons and developing a set of recommendations for changes to current practices on which the three of us could jointly agree. At the outset, we recognized that it might not be feasible to reach consensus among ourselves and that any consensus we did reach might not represent meaningful reform.

This chapter presents the results of our work. We make no claims about the revolutionary nature of our recommendations. At the same time, we believe that they are both substantive and achievable, and that by embracing them, EPA, and possibly other agencies, could improve the overall credibility and usefulness of RIAs.

A natural starting point is to review, in brief, the assessments of the three individual RIAs, comparing and contrasting the views expressed by the various chapter authors. From there, we launch directly into our recommendations for reform.

Summary of RIA Critiques

Clean Air Interstate Rule

Nat Keohane and Wendy Wagner develop in-depth and broad-ranging assessments of the RIA prepared by EPA for the Clean Air Interstate Rule (CAIR), a regulation designed to achieve major reductions in power plant emissions of sulfur oxides and nitrogen oxides. In general terms, Keohane and Wagner both see the RIA as a quite competent example of CBA in many respects, including use of clear and consistent baselines, consideration of various categories of benefits and costs, and an innovative treatment of uncertainty. At the same time, they both strongly criticize the RIA for its failure to consider alternative options. They also see the RIA as somewhat unfocused, devoting excessive attention to the estimation of some very small benefit categories, such as emergency room visits for asthma and lower and upper respiratory symptoms in children, and virtually ignoring potentially major issues, such as ozone mortality and ecological damages.

The exclusive focus on the particular policy option selected by EPA rather than on a broader set of alternatives, as mandated in the agency's Guidelines for Preparing Economic Analyses, is identified by Keohane and Wagner as the RIA’s major flaw.1 This failure to consider alternatives is all the more surprising considering that the agency had already prepared assessments of competing proposals in unsuccessful efforts to advance the Bush administration's Clear Skies legislation. Wagner goes so far as to label the RIA as principally a litigation-support document, albeit a technically sophisticated one, rather than the genuine aid to decisionmaking envisioned by RIA advocates. Keohane sees the single-option focus as an attempt to mask the greater net benefits that could probably have been achieved by a more stringent standard. Further, he notes that EPA’s approach precludes development of a cost-effectiveness analysis that, ironically, might have strengthened the legal underpinnings of the rule.

Keohane and Wagner both express concern that the excessive technical complexity of the document limits its usefulness for nonexperts in the field. Thus, they believe it fails in what should be one of the RIA’s key objectives: providing clear, transparent information to Congress and the general public about the true societal impacts of the CAIR.

Beyond the similarities in their assessments, the issues raised by Keohane and Wagner also differ in important ways. Keohane focuses on various technical aspects of the RIA, including both benefit and cost estimations, the consideration of equity and the differential impacts among sub-populations, the discounting of delayed benefits and costs, and the treatment of uncertainty. He argues that, because even a simple assessment would reveal the presence of large net benefits from the chosen option, little is gained from the excessive detail presented, especially so late in the regulatory process. As an alternative to this false precision, Keohane favors a different focus, namely a simpler, more straightforward study that would be accessible to a broader audience. For example, he favors greater use of physical units, in addition to monetary terms, for estimating benefits.

Wagner takes a somewhat different tack in her proposals for reform. Given her assessment that the RIA is largely designed to protect the agency against legal challenge, she seeks to develop institutional incentives to make the document more relevant to actual decisionmaking. In that regard, she would try to separate the RIAs from judicial review as much as possible. For example, she would reward agencies for high-quality analyses, perhaps by attaching a strong presumption in favor of the policy choices made in the rulemaking—well beyond the “arbitrary and capricious” standard commonly used. Wagner also endorses development of a set of criteria that could help determine whether an RIA met the high-quality standard that would qualify it for more deferential judicial treatment. Further, she would require that the RIA be completed at a much earlier point in the rulemaking process. Consistent with Keohane's approach, Wagner would also encourage more qualitative assessments and a greater emphasis on estimates denominated in natural units rather than monetary terms. Interestingly, Wagner compares this emphasis on early, less technical analyses to the scoping documents prepared for environmental impact statements under the National Environmental Policy Act. Very much like Keohane, Wagner favors a more open, transparent process for decisionmaking and the development of documents to support such a process.

Clean Air Mercury Rule

Alan Krupnick and Catherine O'Neill both present detailed reviews and analyses of the RIA prepared by EPA for the Clean Air Mercury Rule (CAMR), a regulation designed to cut power plant mercury emissions via a cap-and-trade approach. Not surprisingly, Krupnick and O'Neill agree in their assessments of a number of the RIA’s shortcomings. At the same time, there are also important disagreements between them on a range of technical issues as well as on the basic economic efficiency approach adopted in the analysis.

Focusing first on examples of agreement, chapter authors Krupnick and O'Neill acknowledge the daunting task of analyzing the benefits and costs of mercury emissions controls, given the complexities and uncertainties of the underlying problem and the evolving nature of the available scientific information. Nonetheless, they both highlight the very limited set of options considered in the RIA, including the sole focus on emissions trading, and the failure to analyze the costs and benefits of adopting maximum achievable control technology (MACT) standards, especially in light of the prior determination by the Clinton administration that mercury is a hazardous air pollutant as defined under section 112 of the Clean Air Act. They also highlight the failure to consider alternative baselines—for example, treating the benefits and costs of CAIR as ancillary to CAMR, rather than solely defining CAMR as ancillary to CAIR. Further, both authors chastise the EPA for the virtually contemporaneous issuance of the RIA and the underlying regulation, thus undercutting the use of the RIA as a decision tool.

Krupnick and O'Neill also agree on a number of the limitations of the exposure analysis, including the emphasis on freshwater fish consumption, and the failure to consider damages other than IQ loss. With regard to benefits monetization, both authors raise concerns about the netting out of educational costs from the estimates of reduced lifetime earnings attributable to IQ loss. On this point, O'Neill cites Professor Rena Steinzor: “...the good news is that stupider children need less school and earn just a little more money because they are working rather than sitting in a classroom.”2

Beyond the similarities in the issues raised by Krupnick and O'Neill, their disagreements fall into two categories: technical and philosophical. From a technical perspective, O'Neill questions the selection of studies chosen for inclusion in the RIA, noting that in the Bush administration, all judgment calls went “one way.” Krupnick focuses more on likely errors of omission in study selection without suggesting bias. O'Neill sees the failure to quantify or monetize certain benefit categories as a fundamental flaw, whereas Krupnick sees it more as a sign that too few resources were committed to following recent National Research Council recommendations on this issue.

Another technical difference involves their approaches to distributional issues. O'Neill focuses on the high baseline mercury blood-level concentrations among the Chippewa and other ethnic populations that have a strong identification with freshwater fish consumption. She is concerned that the RIA does not explicitly consider who will bear the costs and benefits of the rule, nor whether the decision ameliorates or worsens current inequities. She is also concerned that the delays involved in implementing CAMR rather than the MACT standard will cause permanent harm to millions of children. Krupnick homes in on the issue of emissions trading, specifically whether the use of trading will create “hot spots.” Using publicly available EPA data, he finds that for the vast majority of plants, there are no increases in mercury emissions compared to the no-control baseline with or without CAIR in place. At the same time, he does find that several plants are predicted to increase mercury emissions over their baselines when emissions trading is allowed. He chides the EPA for failing to exploit its own data on this highly contentious issue.

Finally, we note the not insignificant philosophical divide between Krupnick and O'Neill. Overall, Krupnick sees the CAMR RIA as a quite reasonable approach in light of data limitations, as well as budget and time constraints. In general, he sees the deficiencies—which are extensive in some cases—as inherently remediable with greater effort on the part of the agency. He recommends allotting more time and resources to enable EPA to collect better studies, to rely less on (arbitrary) assumptions and more on actual data, and to more fully explore relevant policy options. He also proposes that the agency adopt a more thorough, academic-style peer review process.

In contrast, O'Neill calls for a more interdisciplinary approach to the analysis of regulatory issues, with less emphasis on economic efficiency and without use of a single analytical approach purporting to incorporate all considerations. Importantly, O'Neill sees certain resources such as mink and loons, which serve as Chippewa clan symbols, as the type of priceless resources that should neither be ignored nor subjected to traditional cost–benefit analysis. In short, whereas Krupnick seeks greater technical sophistication to enhance the usefulness of the RIA in agency decisionmaking, O'Neill seeks to employ the tools of multiple disciplines to enhance the economic analyses and to make the RIA more accessible to the broader (nonexpert) public.

Cooling Water Intake Structures Rule

The U.S. Supreme Court is now reviewing the decision by the Court of Appeals for the Second Circuit to remand the Cooling Water Intake Structures (CWIS) Phase II rule. As Doug Kysar explains in his chapter, the sole issue before the court is whether a cost–benefit comparison will be allowed in the determinations of the best technology available (BTA) for individual permittees. It bears repeating that the retail-level comparison of benefits and costs goes far beyond the requirements imposed by Executive Order (EO) 12866 to ensure that the benefits of regulations justify their costs. This instruction applies at the rulemaking stage and refers to the comparison of total benefit and cost estimates of a new rule. But, although not required, the case-by-case cost–benefit comparison is much closer to the economists’ notion that benefits and costs should always be compared at the margin.

The cost–benefit test was not the only issue discussed at the appeals-court level. Also at issue was whether the statutory language authorizing the CWIS rule supports the kind of flexibility that EPA inserted into the rule. This flexibility included not only the site-specific comparison of benefits and costs, but also allowed as abatement technologies what looked to environmentalists like impermissible compensatory measures, in particular the use of habitat restoration measures elsewhere in the river reach to compensate for ecological and environmental damages at the site of the CWIS.

The Second Circuit's opinion was pretty clear that this level of flexibility was over the line. For one thing, the same court had remanded Phase I of the rule (applying to new plants) a few years previously because it allowed more flexibility than the court considered statutorily appropriate, and Phase II had attempted to adopt an even more flexible approach to existing plants. The statutory requirement to use the “best technology available” might admit of some wiggle room with respect to the economic hardship imposed on the utility industry and its investors, but seemingly much less so with respect to the environmental consequences (at least according to the Second Circuit).

Both chapter authors Scott Farrow and Doug Kysar identified flaws in the CWIS RIA, but on the whole they seemed to take opposing positions as to the overall effectiveness and value of the document. Farrow's critique is largely restricted to the CBA itself and is generally the technical assessment of a professional economist. Because there are so many possible criteria for judging an RIA, Farrow subjects the document to a contents “checklist” developed by Robert Hahn and Patrick Dudley (2007). As Farrow points out, a “good” score on such a checklist does not necessarily indicate a high-quality RIA, but a poor score certainly indicates a bad one. The score for the CWIS RIA is quite high; it is missing or lacking in only a few elements considered essential by economists. Farrow even suggests that, as a professional document, the analysis approaches the standard required for publication in a professional journal, were it not for its great length. However, Farrow has more reservations about the quality of the data supporting it.

As an aid to decisionmaking, Farrow sees the document as mixed. He argues in particular that EPA based much of the regulatory flexibility introduced into the rule on the adverse environmental impact (AEI) of the rule without ever defining the AEI. Admittedly, defining the AEI would have been very difficult; the agency did identify a long list of potential impacts qualitatively, but it did not try to provide weights so that an overall assessment of the AEI could be made. Providing such weights for a comparison of benefits with costs and with other benefits is one of the principal objectives of CBA. In this case, so many categories of benefits were left nonmonetized that the cost–benefit comparisons were not meaningful. However, considering the analysis instead as an exercise in cost-effectiveness, Farrow found it to be more valuable. EPA claimed that, compared with cooling towers—a technology that achieves at least a 90 percent reduction in aquatic mortality—a wire screening technology for water intakes would, where applicable, achieve better than half that level of reduction at about 10 percent of the cost. EPA asserted that, under most circumstances, this level of protection could meet the BTA standard, even though it was possible to do much better.
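To see why the cost-effectiveness framing can be more informative here than a cost–benefit comparison dominated by nonmonetized categories, consider a minimal numerical sketch of EPA's claim, written in Python. The only inputs drawn from the discussion above are the relative relationships; every absolute figure is a hypothetical placeholder.

```python
# Minimal sketch of the cost-effectiveness comparison described above.
# Only the relative relationships come from the RIA as summarized in the text:
# cooling towers achieve at least a 90 percent reduction in aquatic mortality,
# and wire screens achieve better than half that reduction at about 10 percent
# of the cost. All absolute numbers below are hypothetical placeholders.

baseline_mortality = 100.0                    # index of organisms lost per year (hypothetical)
tower_cost = 1_000.0                          # hypothetical annualized cost index

tower_reduction = 0.90 * baseline_mortality   # at least a 90 percent reduction
screen_cost = 0.10 * tower_cost               # about 10 percent of the towers' cost
screen_reduction = 0.55 * tower_reduction     # "better than half" the towers' reduction

# Cost-effectiveness: cost per unit of mortality avoided (lower is better).
ce_towers = tower_cost / tower_reduction
ce_screens = screen_cost / screen_reduction

print(f"Cooling towers: {ce_towers:.2f} cost units per unit of mortality avoided")
print(f"Wire screens:   {ce_screens:.2f} cost units per unit of mortality avoided")
```

Under these assumptions, the screens avoid each unit of mortality at roughly one-fifth the cost of the towers, even though they avoid fewer units overall; that is the kind of trade-off a cost-effectiveness presentation makes visible even when many benefit categories remain nonmonetized.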

Kysar's assessment of the rule is both broader and more critical. Deeply skeptical of the value of CBA to begin with, he found little in this RIA to change his mind. Three issues in particular concerned him. First, after EPA had prepared a proposed regulation that relied much more heavily on regenerative cooling as the BTA, the Office of Management and Budget (OMB) intervened, evidently encouraging EPA to provide more flexibility in the definition of BTA to reduce cost. In Kysar's view, the OMB intervention went well beyond the regulatory review required by EO 12866. In addition, the substance of OMB’s comments and revisions was not made part of the public record, so it was difficult for parties interested in the outcome of the rulemaking to discover essential elements of the decision process. Moreover, these inputs were not subject to public comment during the rulemaking process.

Second, Kysar complains that the technical information made available to EPA (and made available by EPA) was not adequate to support valuation. As he puts it in his chapter, “CBA ... carries an implicit assumption that the policy space within which EPA operates is informationally rich and probabilistically sophisticated,” pointing to the fact that, of 554 facilities subject to the regulation, only 150 had performed impingement and entrainment studies. A defender of the regulation might point out that a sample of 150 out of 554 is frequently adequate to draw conclusions, depending on how the sample is drawn, but in fact these studies focused only on the direct and indirect mortality of the species subject to impingement and entrainment and not on the larger questions of local and global ecosystem effects. EPA acknowledges and provides a long list of such ecological effects (reproduced in Kysar's chapter), about which little is known and which are unquantifiable based on current information.

On the other hand, steam-electric plants with once-through cooling systems have been in place on the nation's water bodies for nearly 100 years and constitute only one of myriad environmental insults to aquatic systems. The fact that they are still capable, according to EPA, of destroying billions of organisms each year, along with the fact that on most water bodies a relatively small portion of the flow (of a stream) or volume (of a lake or estuary) passes through the steam plant, suggests that the aggregate biota of these water bodies is still very large. This, in turn, may suggest that those systems are not in long-term crisis, at least not from steam-electric generation.

Kysar's third point concerns valuation, and he suggests two alternatives to the individual willingness-to-pay (WTP) approach that is the intellectual backbone of CBA. One, a proposal to value environmental assets at their replacement costs, is not likely to find acceptance among economists in academia or at EPA because it assumes what is at issue, namely that the threatened resource is worth saving. Yet statutes like the Clean Water Act take as their premise that natural resources—such as rivers, lakes, and streams—are indeed worth protecting. A methodology aimed at evaluating individual regulations may perform best when it respects this foundational policy determination. The other proposal is to substitute a measure of WTP that is partially or totally determined by group interactions. Some observers in political science, psychology, and similar disciplines believe that this would usefully replace the private, utility-based method with one more appropriate to the valuation of public goods by emphasizing the collective nature of these decisions. As discussed elsewhere in this chapter, group-mediated valuation need not result in an increase in valuation estimates.

Perhaps more critically, the question of valuation gets back to the question of information. If information about the physical consequences of regulatory actions is nonexistent or inadequate, what is the point of valuation? When both valuation and physical-effects data and methods are less than adequate, which offers the largest marginal improvement?

Recommendations for Reform of RIAs

In considering the three RIAs analyzed in this volume, and drawing on our experiences in the field of regulatory assessment, we have developed a series of specific reforms that we believe would enhance the overall quality and usefulness of the substantial studies that are conducted as part of the regulatory development process. We develop a dozen recommendations addressing the content of the RIAs as well as the process by which they are prepared. These recommendations cover five overarching topics:

technical quality of the analyses;

relevance to the agency decisionmaking process;

transparency of the analyses;

treatment of new scientific findings; and

balance in both the analyses and the associated processes, including the treatment of distributional consequences.

In addition, we have developed two recommendations involving future research. Most of the recommendations could be implemented by the agency alone, although in a few cases changes in the governing executive order would be desirable. Only one of the recommendations requires statutory reform, specifically of the Paperwork Reduction Act (PRA) of 1995.

Technical Quality of the Analyses

1. Give greater consideration to meaningful alternative policy options

If an RIA is truly designed to inform and guide regulatory decisionmaking—and not, as Wendy Wagner suggests, simply to serve as a litigation support document or, in Nat Keohane's view, only to provide information about the consequences of a regulatory decision made on other grounds— then it must examine a reasonable set of alternative policy options. An RIA that only compares the proposed action to the existing regulation, such as the RIA produced for the CAIR, or that considers only very limited options, such as the one developed for the CAMR, does little to help decisionmakers determine the appropriate course to take.

As noted at the beginning of this report, CBA and the RIAs that embody it are not intended to be decisive in the regulatory process; they are inputs, or tools, rather than dispositive frameworks. Thus, even with a very high-quality RIA, regulators may well end up selecting an approach that is not the most efficient from an economic perspective, as concerns about equity or other factors may drive the decision in another direction. At the same time, given the acceptance in economic circles of the efficiency criterion and the appeal of quantitative analysis even to those outside the cost–benefit world, EPA decisionmakers may be reluctant to adopt a “second-best” approach by choosing a regulatory option that generates fewer net benefits than an alternative. The path of least resistance is to analyze only one alternative and thereby avoid explaining why a different, more efficient, choice has been rejected. However understandable this may be from a bureaucratic or political perspective, we do not believe this approach is consistent with the underlying purpose of the executive orders governing regulatory analysis. Thus, we recommend that meaningful alternative options be analyzed in RIAs. Although it may be tempting to stipulate some minimum number of alternatives to be considered, we prefer to focus on the term meaningful, which we define to include the full set of options deemed to be technically feasible and legally defensible.

2. Choice of baselines should reveal choices and trade-offs, not conceal them

The expected outcomes of a regulation cannot possibly be understood without reference to what would have happened in its absence. As a result, expected outcomes are routinely measured against baselines, which embody an intricate set of choices made by the regulator to generate a future, or a set of alternative futures, that would unfold if the rule were not issued. Baselines are also known by the more revealing name of counterfactuals.

Constructing a baseline requires a legion of assumptions concerning such matters as future population and economic growth, rates of improvement of existing technologies or replacement by new ones, and trends in future regulation. The credible evaluation of benefits and costs is not possible without a well-constructed baseline or set of baselines. The construction and presentation of baselines are every bit as important to the estimation of net benefits as the construction and presentation of alternative regulations. RIAs should reflect that reality.

A vivid example can be found in Catherine O'Neill’s case study in this report of mercury emissions from coal-fired power plants. Control of airborne mercury emissions was widely anticipated under the new MACT standards enacted as part of the Clean Air Act Amendments of 1990. Although EPA did promulgate MACT rules for two important sources (municipal and hospital waste incinerators) in the late 1990s, and began work on a third (emissions from electric power generation) in 2003, agency analysts involved in the technical and economic aspects of the utility MACT rule were instructed by top management to stop their work. Instead, they were to begin drafting a new rule based not on the MACT section of the statute, but on a cap-and-trade policy modeled after the sulfur dioxide (SO2) trading program for fossil electric plants.

The initial regulation, like all MACT regulations, would be required by statute to achieve the emissions reduction performance of the top 12 percent of existing plants and was expected to be implemented around 2007. Its replacement, the CAMR, would be implemented only after the CAIR, a cap-and-trade program for SO2 emissions, and was itself to be phased in beginning in 2010, with a lower cap to be phased in beginning in 2018. The difference in the performance and timing of the two rules could hardly be more dramatic: whereas the MACT rule would require nearly a 90 percent reduction in mercury emissions by 2007, the CAMR would not achieve its objective of a 70 percent reduction until nearly 2030.

We take no position in this report on whether the abandoned MACT rule was or was not superior to the CAMR, which was eventually adopted. Certainly, the MACT timetable and stringency would have produced more emissions reductions and would have produced them much sooner, and thus would have produced much greater benefits. But the costs would have been much higher as well. And because the net benefits of the CAMR were negative, at least according to the EPA analysis, moving up and expanding the emissions reductions would only make things worse. Of course, many skeptics of CBA, including O'Neill, would strenuously disagree.

The point is that EPA could and perhaps should have been more informative in the CAMR RIA about the earlier MACT analysis, perhaps including MACT implementation as an alternative to the customary “no policy” baseline. This would have provided a useful historical perspective and made clear how much broader the regulatory options were than EPA's regulatory documents let on at the time.

3. Develop a checklist of good practices that all RIAs should have, and provide an explanation for missing items

All three of the RIAs examined in this volume violated one or more elements of EPA’s Guidelines for Preparing Economic Analyses.3 Other studies based on larger samples have reported similar findings, including a quite broad range of deviations from the approaches advanced in the guidelines (Hahn and Dudley 2007).

It is not entirely clear why there is such a gap between the agency's guidelines and current practices. Insufficient resources are an oft-cited reason, although it strains credulity to say that after spending more than $1 million on a major analytical effort, funds are not available to conduct one or two additional model simulations.

Robert Hahn has long advocated a checklist to assess RIA quality. In fact, it would be fairly simple for an agency to report on its adherence to some basic quality criteria, or to explain why it did not adhere to such criteria. The criteria reported on need not reflect every nuance covered in the guidelines but should focus on certain key topics. For example, they could include some or all of the issues suggested by Hahn and Dudley (2007), as described by chapter author Farrow. Perhaps the Economics Subcommittee of the agency's Science Advisory Board (SAB) could offer guidance on the “top ten” elements to include in an RIA. The EPA administrator could voluntarily report the checklist as a means of strengthening his or her hand with the public, OMB, and the courts, and could present the checklist results in the preamble to a rule, in concert with the actual presentation of the RIA findings. Alternatively, the president, acting through OMB, could require the checklist. In the absence of a sound peer review process, a high score on such a checklist would not provide complete assurance of RIA quality; however, a low score would be a sure indicator of failure. In her chapter in this report, Wendy Wagner proposes that RIAs deemed to be of high quality be given special deference by the courts.

Beyond the use of a checklist, other approaches could be used to encourage quality improvements in RIAs. For example, one could establish a formal review process involving outside experts, based either at EPA or at OMB, to more directly grade or otherwise evaluate RIA quality. Although appealing at many levels, however, such procedures would probably introduce further delays into an already lengthy regulatory development process. Thus, we propose the development of a checklist, with initial implementation to be carried out by EPA, presumably in consultation with the SAB.
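A checklist report of this kind need not be elaborate. The sketch below, in Python, illustrates one possible form; the criteria shown are hypothetical stand-ins for whatever "top ten" elements the SAB might specify, not the actual Hahn and Dudley items.

```python
# Illustrative sketch of a self-reported RIA quality checklist.
# The criteria listed here are hypothetical placeholders, not an SAB-endorsed
# or Hahn-Dudley list; the point is the structure: each item is either met
# or accompanied by an explanation of why it was not.

checklist = {
    "Analyzes more than one meaningful policy alternative": (True, ""),
    "States baseline assumptions explicitly": (True, ""),
    "Reports major benefits in natural units as well as dollars": (
        False, "ecological endpoints could not be quantified with available data"),
    "Discloses the value of statistical life and discount rates used": (True, ""),
    "Characterizes key uncertainties quantitatively": (
        False, "resources permitted only a qualitative discussion"),
}

met = sum(1 for satisfied, _ in checklist.values() if satisfied)
print(f"Criteria met: {met} of {len(checklist)}")
for criterion, (satisfied, explanation) in checklist.items():
    status = "yes" if satisfied else f"no ({explanation})"
    print(f"  {criterion}: {status}")
```

The value of such a report lies less in the raw score than in the explanation attached to each missing item, which is what would allow reviewers, OMB, or the courts to judge whether a deviation from the guidelines was reasonable.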

Relevance to Agency Decisionmaking Processes

4. Be more strategic about devoting agency resources to the estimation of the benefits and costs of regulation

The value of regulatory analysis, with or without monetary estimates of benefits, is limited by the absence of coverage of important benefit categories. It is also limited by the precision and accuracy of the estimates of the physical effects of regulation. Although these observations may seem obvious, they are sometimes overlooked by both advocates and skeptics of CBA. Sometimes this can result in an overemphasis on certain scientific and economic issues that may not be entirely relevant to the decision. In other cases, the key issues may be underemphasized.

In several of the RIAs considered in this volume, the focus on precision for some relatively low-value benefit categories at the expense of even a rudimentary scoping of other, potentially higher-value categories is inexplicable. For example, both Nat Keohane and Wendy Wagner note the extensive details in the CAIR RIA on emergency room visits for asthma and lower and upper respiratory symptoms in children, and the absence of analysis of major issues such as ozone mortality and ecological damages.

At the same time, except for air quality management for criteria pollutants, most of the research effort into benefits assessment goes into the estimation of WTP for a given environmental improvement. Thus, whereas the models connecting a regulation to its effects are often fairly rudimentary, the WTP estimates are increasingly sophisticated. Certainly among economists, the professional rewards for developing better methods and data for estimating WTP for nonmarket goods exceed the rewards for linking regulation to physical outcomes. Similarly, whereas the incentives of natural scientists are to link causes to physical outcomes, they often ignore or devalue the effects that the behavioral responses of firms and individuals to regulation can have on regulatory outcomes. Estimating physical effects usually requires interdisciplinary work combining natural and behavioral scientists. As anyone who has tried it knows, such research is quite difficult to do.

Skeptics of CBA can be as indifferent to the physical effects of regulation as they are to the monetary benefit estimates. For the skeptics’ preferred regulatory alternative—best-technology standards—it often doesn't matter very much what the effects of regulation are.

In our view, the usefulness of RIAs would be enhanced if, at the outset of the rulemaking, an explicit judgment were made regarding the best way to allocate resources toward examining the consequences of the regulation. Regulators rarely have all the information they would like about either physical outcomes or their valuation. But not all information has the same value at the margin, and additional forethought about where the biggest payoffs are would probably be well rewarded. In addition to the current intra-agency review of the analytical plans for RIAs, it might be appropriate to send them to the SAB for review, possibly to a special subcommittee established for such a purpose.

5. Make key aspects of the RIAs available to decisionmakers earlier in the regulatory development process

Under current agency procedures, draft RIAs are required to be circulated to top decisionmakers three weeks in advance of final agency review. This applies equally to proposed and final regulations. Reportedly, these deadlines are often not met. However, even when the internal deadlines are met, important opportunities for constructive use of the RIA results in rule development may be missed.

Typically, key elements of rule design are decided fairly early in the regulatory development process, sometimes by midlevel staff. Based on those early decisions, work is begun on monitoring, data collection, development of enforcement strategies, and related issues. If the RIA subsequently finds that the preferred approach is not the most efficient one, strong internal pressures discourage change.

Accordingly, we propose that agency procedures be modified to require that a preliminary RIA be prepared at least six months in advance of final agency review of proposed and final regulations. Understandably, a preliminary RIA may be incomplete and subject to greater uncertainties than the full study. At the same time, this preliminary RIA would characterize the full set of options being analyzed and would provide at least rough estimates of the benefits and costs of each option. It would also provide an opportunity to assess whether the most important benefit (and cost) categories are being assessed, as in recommendation number four. As noted by Wendy Wagner in her chapter on the CAIR, in some respects, a preliminary RIA would be similar to the scoping analysis conducted under the National Environmental Policy Act.4

Transparency of the Analyses

6. Include in RIAs detailed descriptions of expected consequences in physical or natural units, without monetization or discounting

As stipulated in both the Reagan and Clinton executive orders on regulatory review, an RIA is intended to be a document that aids in agency decisionmaking, not only at the level of the technical experts, but also at the level of agency heads and, if it comes to that, the White House. In addition, as Nat Keohane suggests, the RIA could also inform the public about the consequences of agency actions.

These purposes would be promoted if agencies included in their RIAs detailed descriptions of the concrete consequences of their decisions, presented in physical endpoints or natural units rather than solely in monetized and discounted form, at least for the major benefit categories. A key issue is how much detail can be developed with reasonable scientific confidence and at reasonable cost. If, for example, an environmental rule is expected to reduce premature mortality and adverse health conditions, then a range of details about those expected health outcomes may be of interest to decisionmakers; these details might include the expected nature of the death or adverse health condition, the likely age of the populations affected, the likely timing of the effects, and the socioeconomic status of the populations most affected. In cases where a strong scientific basis supports the development of such estimates at a reasonable cost, they should be provided.

In addition, it would be useful to have baseline information on these natural units wherever possible, or to at least include contextual information that gives the reader some perspective on the significance of the changes. For example, if a regulation is expected to reduce the frequency of asthma attacks in sensitive populations, what is the current attack frequency in those populations?

Baseline information of this sort is useful in at least two ways. It allows for a determination of not only the expected absolute change in outcomes, but also the relative or percentage change. It's true that this baseline information is not relevant to the economic criterion of maximizing net benefits—only the marginal conditions are. But that applies specifically to monetary measures. Because good things gain in value as they become scarcer, the change relative to the baseline matters, and decisionmakers might want to know whether the regulatory proposal is going to reduce bad outcomes by 1 percent or 10 percent, for example. If a regulation is expected to reduce fish mortality, by how much are fish populations expected to change relative to the baseline? For example, if billions of fish are dying each year, it should matter whether you have billions or trillions to start with. In addition, having baseline information can provide a sense of perspective that can aid in assessing the credibility of the estimated changes in outcomes.

Agencies would provide this information in a summary chart, just as they currently provide monetized and discounted benefit estimates. EPA’s summary tables for the CAIR and the CAMR are good examples of this practice. Indeed, with respect to RIAs on the regulation of the criteria air pollutants, EPA generally does a good job of reporting expected consequences in natural units.

Where regulatory consequences are routinely captured by economic terminology, agencies should continue to supply information about these consequences in economic terms. An agency proposing a rule that will result in greater use of scrubber technology, for example, could report the estimated price of the scrubbers along with their number and expected locations. But where regulatory consequences are not ordinarily stated in economic terms, where the “price” of a consequence must be divined by reference to complex revealed or stated preference methodologies, the economic description of these consequences should be supplemented by a description in natural units.

For any of the personnel directly involved in decisionmaking—top-level agency officials and White House staff—and even the general public, descriptions of consequences in natural units could serve as useful aids in evaluating agency decisions. Officials unschooled in economics might be confused by the translation of human lives into dollars and by the discounting of future illness and other elements of the CBA. Presumably, many would gain additional insights from a comparison of economic costs and tangible consequences expressed in natural units. If the head of EPA, for example, were asked whether average utility customers ought to be asked to pay a penny a day to save billions of fish—an estimate of the cost of the CWIS rule for the typical household (Ackerman and Heinzerling 2003)—she might find this a much more tractable decision than one that invites her to evaluate the economic machinery that EPA deployed to calculate the precise value of those billions of fish. Prominent display of the natural units information will also be helpful to those comfortable with economic valuation because it makes it easier to understand the benefits calculations and judge their credibility. A further advantage of this approach is that it might create added incentives for the agency to develop quantitative estimates of some physical endpoints not typically quantified in RIAs, such as noncancer health effects.

7. Ensure greater transparency at all stages of the process

As a number of participating authors have noted, RIAs have become huge, dense documents that are almost impenetrable to all but those with training in the relevant technical fields, especially economics. Even to the well-trained eye, RIAs are often opaque; it can be hard to find, for example, exactly what value the agency has placed on human life or exactly which discount rate it has used, over what time interval.

Because an important purpose of RIAs, beyond their use as aids to decisionmaking, is to communicate to Congress and the broader public about the benefits and costs of federal regulations, greater transparency in the analysis would be highly desirable. Accordingly, we recommend that agencies endeavor to make RIAs more comprehensible to nonexpert audiences. Obviously, the complexity of the analysis in RIAs constrains to some extent the degree of transparency that can be achieved. Even so, three quite straightforward changes in practice could considerably improve the transparency of RIAs.

Wherever possible, agencies should use plain English to describe their analysis. They should avoid technical jargon, or at least supplement it with parallel descriptions in plain English. OMB’s Office of Information and Regulatory Affairs already monitors agency rules for plainness of speech; it should monitor RIAs for the same quality.

Agencies should use a similar format across RIAs to provide information on the key variables in the economic analysis. They should provide this information in the same location in each RIA. For example, in the portion of the RIA describing the benefits analysis for an environmental rule, the value of a statistical life, value of illness, value of ecosystem effects, discount rate, and time interval for discounting should all be presented in the same order and format across RIAs. Anyone perusing many RIAs would then know exactly where to look in each RIA for information on crucial inputs to the analysis.

The executive summaries of most RIAs focus on the conclusions of the analysis rather than on the methods and assumptions used. With the adoption of a standardized format for summarizing the methods and assumptions, as described above, it might be useful to incorporate the same or similar information into the executive summary.

Several of the other reforms we suggest in this chapter (such as the recommendations that the benefits of regulation be expressed in natural units and that agencies complete a checklist relating to the quality of the RIA) also would enhance transparency.

Treatment of New Scientific Information

8. Update EPA guidance documents for RIAs more frequently to reflect significant developments in the literature

As in the natural sciences, the professional literature on environmental economics is evolving at a quite rapid pace. RIAs typically incorporate a range of analytical and empirical findings from the recent economics literature. Failure to incorporate these new findings into the RIAs can lead to biased estimates of benefits and costs.

Although in principle the concern about updating the RIA guidelines applies to virtually all parameters, the most recent examples involve discounting, the value of a statistical life, and cost analysis. In all of these cases, a similar pattern applied: recent research indicated a departure from past studies, yet the guidelines lagged behind. Fortunately, during the preparation of this volume, EPA has acted in a number of cases to update its approaches. At the same time, it is fair to observe that in the interim several RIAs were produced using the older values, which resulted in various biases in the estimates of benefits and/or costs.

Our purpose here is not to debate the individual issues. Rather, we would emphasize the dynamic nature of the economics literature and the corresponding need for EPA to keep abreast of the changes and, when appropriate, update the guidelines.

9. Reform current practices on nonmonetized benefits in a number of ways

EPA should clearly enumerate, up front, the benefits of a regulation in at least three categories: those that have been monetized, those that have been quantified but not monetized, and those that have been neither quantified nor monetized. This classification should be summarized in an easy-to-read table in the executive summary of the RIA. In case of substantial disagreement or uncertainty regarding which category an effect of a regulation belongs in, the effect should be further disaggregated, if possible, until the categorization is no longer ambiguous. Comments on the proposed regulation should be explicitly invited on the definition of major expected effects and their categorization.

Encourage the SAB to provide expedited review for new or innovative analyses presumed to be of high quality, including those unpublished studies that have particular relevance to RIAs. Currently, virtually all studies included in EPA’s economic and scientific assessments are those that have been published in peer-reviewed journals or accepted for publication by such journals. Excluded are those studies still undergoing peer review, which can sometimes be a quite lengthy process, as well as those that represent solid research but are not deemed sufficiently novel to warrant publication in peer-reviewed journals.5

One possible approach to address this problem would be for EPA to encourage the SAB to establish an expedited review process for studies deemed to be potentially important for agency regulatory decisions. EPA should issue guidance on this expedited review process, covering both the nature of the process and the criteria for selecting studies for review. The goal of this expedited review should not be to lower the quality bar for the acceptability of new research, but rather to recognize the complexities of the peer review process and encourage inclusion in RIAs of high-quality research regardless of its publication status.

Consider whether it is better to include some number or distribution of values in place of the default of zero, either as a new scenario or as part of an uncertainty analysis. Notwithstanding the preceding suggestions for the expedited review of economic and scientific papers relevant to regulatory decisions, many regulations will probably still involve some nonmonetized categories of benefits. There are several reasons for this, some unavoidable and some even desirable. First, there may be a consensus that some effects are relatively small and under any reasonable assumption may not contribute much to total benefits. Second, the quantitative effects may be large enough to matter but not well understood or well estimated, in which case proceeding to a potentially arbitrary valuation step will appear to be meaningless twice over. Third, even when estimated, the quantitative effects may be subject to large and possibly asymmetric errors. Estimating WTP for such effects is likely to give misleading results. Climate change is the canonical example; economic estimates using conventional assumptions may greatly underestimate the likely consequences. Fourth, environmental effects may be understood quantitatively, but the link between the regulation and the change in the effect may not yet be established. Similarly, valuation information may be available, but not in a form that links easily to predicted changes in quantitative effects. The well-known mismatch between water quality indicators, which measure decrements in water quality in contaminant concentrations, and recreational benefits, measured by increases in days spent in various recreation activities, is a case in point.

It is in the cases (and there are many) where total compliance costs exceed monetized benefits that the disposition of the nonmonetized benefits plays a crucial role in the regulator's decision. This reality poses what can be a difficult choice for regulatory decisionmakers: either enter a zero for benefits that have not been monetized, running the risk that they will be ignored by decisionmakers, or use some arbitrary values, if for no other reason than to prevent them from being ignored. Obviously, no regulatory decision strictly requires monetization of all benefits; we pay decisionmakers to make decisions in the hard cases, after all. But still, any perspective the RIA can provide on the potential magnitude of those benefits will be helpful to decisionmakers. In addition to providing potentially valuable information, better description and quantification of nonmonetized benefits will help explain and justify the decision to stakeholders observing the process.

EPA has usually opted for leaving out nonmonetized benefits. We believe there is something to be said for the other approach, heretical as it may be: the inclusion of nonzero benefit values for some benefit categories where such values are not currently supported by empirical benefit studies. At worst, including nonzero benefits in such cases is harmless as long as it is understood by decisionmakers that they are not supported by benefit studies. At best, they can prevent decisionmakers from disregarding such categories, and they can force all parties, from decisionmakers to analysts to stakeholders, to try to think through what numbers might be reasonable. If enough observers think that the potential benefits in such categories are sufficiently large, it may give an impetus for research to try to provide real estimates.

Nevertheless, simply assigning an arbitrary benefit number is not likely to gain instant acceptance among many observers. It is worth considering whether there are defensible approaches to assigning such numbers. Below are some options that may be worth considering, including some that have in fact been employed, at least informally, to assign benefits to previously nonmonetized effects or, failing that, to put those benefits in perspective relative to other categories.

Imputation of necessary benefits. Calculate the implicit value of the nonmonetized benefits that, when added to the other benefits, would make the regulation a break-even proposition. Like all of the methods proposed here, this approach invites the decisionmaker to subject the benefits claim to his or her own judgment and experience. Inevitably, this approach assigns a single value to the total package of nonmonetized benefits. If many disparate effects remain nonmonetized, it may not be easy for decisionmakers to decide whether the package is plausibly worth that value. In other words, this top-down approach is wanting in the detail that might allow the decisionmaker to make an informed decision.
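As a concrete illustration of this imputation, the short Python sketch below computes the break-even value; all dollar figures are hypothetical and are not drawn from any of the RIAs discussed in this volume.

```python
# Sketch of the "imputation of necessary benefits" calculation.
# Both figures below are hypothetical illustrations, not estimates from any
# actual RIA.

annual_compliance_costs = 750e6   # total annualized costs of the rule
monetized_benefits = 200e6        # benefits the RIA was able to monetize

# Implicit value that the nonmonetized benefit categories, taken together,
# would have to reach for the rule to break even.
breakeven_value = max(annual_compliance_costs - monetized_benefits, 0.0)

print(f"The nonmonetized benefits must be worth at least "
      f"${breakeven_value / 1e6:.0f} million per year for net benefits to be non-negative.")
```

The decisionmaker's question then becomes whether the unmonetized categories, taken as a package, are plausibly worth that amount, which is exactly where the lack of detail noted above becomes a limitation.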

Expert elicitation. Convene a panel of recognized experts in economic benefit estimation, risk perception, and the appropriate natural sciences, and solicit their views on several matters, including the link between the regulatory options and the environmental improvement and the link between environmental improvement and WTP. This is more of a bottom-up approach, in principle at least, that allows explicit valuation of the individual components. At the same time, it raises a different set of methodological issues having to do with disaggregation. Are the experts to assign a monetary value to all of the benefits in the aggregate? Should they assign values to distinct benefit categories? Should they assign benefits to unit changes or to the aggregate change resulting from the regulation?

The convening of an expert panel brings another issue to the forefront that is worthy of consideration by EPA and indeed by all students of regulation. Should the opinions of the scientific experts be limited to the physical effects of the regulation? Or should their views on the monetary valuation of those effects, or at least what the trade-offs might be with other relevant effects, be accorded special weight? It is customary to solicit valuation from random samples of adults, an approach that makes sense when the benefits being valued are familiar to the average person, such as the valuation of health effects or recreation experiences. But is this practice justified when ecological changes are at stake and the environmental effects are subtle, hard to observe, and not directly connected to matters that people care about on a day-to-day basis? At the same time, is it reasonable to turn such authority over to an unelected panel of experts who may have personal and professional biases that can skew results?

Balance in Both the Analyses and the Regulatory Process, Including the Treatment of Distributional Consequences

10. Promote evenhanded treatment of decisions to regulate, deregulate, and decline to regulate

We recommend that agencies’ decisions not to regulate—as well as their decisions to regulate—be subject to regulatory review when they pass the threshold of EO 12866: that is, when they “have an annual effect on the economy of $100 million or more or adversely affect in a material way the economy, a sector of the economy, productivity, competition, jobs, the environment, public health or safety, or State, local, or tribal governments or communities.”6 Because EO 12866 currently applies only to rules, however, agencies’ decisions not to regulate at all do not come within the formal terms of the order. Thus, this recommendation would involve amending the executive order to clarify that agency decisions not to regulate are also subject to regulatory review when they meet the triggering conditions of the order. In the case of deregulation, only a change in practice is required. To keep the process manageable, we would propose that these decisions be subject to regulatory review only when they are formal agency announcements, published in the Federal Register. This limitation would ensure that the process of regulatory review would not be set in motion by every agency decision that might possibly have an adverse effect on the environment.

This recommendation would respond to a long-standing criticism of regulatory review: decisions to regulate are subject to CBA, yet decisions to deregulate or not to regulate at all do not undergo this formal examination. This one-sidedness introduces a potential bias against regulation into the process of regulatory review that is unwarranted (Olson 1984).

The history on this issue is instructive. When the U.S. Department of Agriculture (USDA) Forest Service in 2002 reversed a Clinton-era initiative protecting almost 60 million acres of roadless areas in the national forests, it maintained that the rule would not “adversely affect in a material way... the environment.” The Forest Service noted that an RIA had been prepared for the rule being discarded, but stated that it could not produce a quantitative analysis of its new approach because there was “no experience with implementing the roadless rule, and thus there are no data available” (USDA Forest Service 2005, 25649). When EPA issued its first rule relaxing the requirements for the Clean Air Act's New Source Review program, it did not prepare an RIA because it concluded that the rule would not adversely affect the environment (EPA 2002, 1). When the U.S. Department of Interior proposed trimming back the requirements for consultation with the wildlife agencies under the Endangered Species Act and changed regulatory definitions to make the statute inapplicable to effects resulting from climate change, it noted that the action was a “significant rule” within the meaning of EO 12866, but it did not prepare an RIA (U.S. Department of Interior and U.S. Department of Commerce 2008, 47872). Likewise, when the U.S. Department of Interior proposed easing rules regulating mountaintop mining, it stated that the rule would not have an adverse effect on the environment and it prepared no RIA (U.S. Department of Interior 2004, 1045).

Rather than rest with a conclusory and potentially questionable statement that deregulatory actions have no adverse effect on the environment, agencies should undertake the same process of regulatory review for deregulatory decisions as for regulatory ones when those decisions likely will have a material adverse effect on the environment. In principle, RIAS for deregulatory actions should be relatively easy to conduct because a regulatory RIA already exists. Moreover, a decision to deregulate will not come out of thin air, and the fact that a regulation already exists usually means that there is already some real-world experience with it. This experience provides a basis for analysis that is not available to newly proposed regulations, and agencies should report on this experience and use it to motivate their decisions.

Economic logic supports this recommendation for evenhandedness. Likewise, economic logic supports treating decisions not to regulate at all with the same degree of scrutiny as decisions to regulate. There is no more reason to believe, for example, that EPA’s outright refusal, in 2003, to regulate greenhouse gases in any fashion promoted efficiency than to believe that its decision to regulate conventional pollutants in the CAIR promoted inefficiency. If one kind of decision deserves economic scrutiny, so does the other.

11. Reform the federal data collection request process

The Paperwork Reduction Act (PRA) of 1995 and the OMB regulations issued in its name impose stringent requirements on data collection from firms and individuals. To conduct a survey with more than nine respondents, any federal agency and any organization conducting a project sponsored by a federal agency must submit the survey instrument for public comment and OMB approval. These restrictions are intended to minimize the recordkeeping and survey burden on private citizens and firms and to prevent undue invasion of privacy. They apply not only to mandatory data collections, such as Internal Revenue Service forms and EPA data requests to support regulatory development, but also to voluntary participation in WTP surveys if those surveys are supported by federal grants or contracts. In addition, the PRA and OMB regulations require that surveys “must be adequately designed and justified, with an opportunity for public comment” (OMB 2005, 51).

Many researchers who conduct federally supported research on the benefits and costs of regulations, as well as federal agency personnel responsible for developing and supporting economically justified regulations, report horror stories about extensive delays in getting surveys approved. Virtually all would agree that the required public comment on surveys is an unwarranted and unwelcome intrusion on research autonomy. Most would further agree that the PRA, and OMB’s interpretation of its requirements, are somewhat overzealous and could be adjusted to make data collection in support of regulation more efficient without compromising the goals of the PRA. Below we offer some possible solutions.

Exempt voluntary surveys from the survey size restrictions, or relax those restrictions, in some cases. Arguably, there is little distinction between voluntary and mandatory surveys of firms regulated by an agency: what regulated firm wishes to risk being regarded as “uncooperative” by its regulator? That concern does not apply, however, to the main issue raised here, namely, surveys of private citizens’ attitudes toward, and WTP for, public goods, because such surveys offer little possibility of coercion. In addition, OMB approval is required even for surveys that do not directly support regulation; this requirement should be reviewed.

Limit OMB technical review of some survey instruments. No doubt OMB review has screened out poor research designs and weak sampling methods in some cases. However, OMB has also rejected surveys where the issues involve unsettled methodological controversies. In some instances, for example, researchers have been denied the use of controlled web-based surveys because, in OMB’s view, they are not properly randomized. OMB has also rejected the use of cash incentives for completed surveys. Both practices are generally accepted by social science researchers as the only cost-effective ways to recruit an adequate sample and achieve an acceptable response rate.

Eliminate or severely restrict the public comment requirement on WTP surveys, possibly replacing it with a peer review requirement. Most issues of experimental design are quite technical, and self-selected laypersons rarely have much useful to add. Not surprisingly, comments often reflect interest group positions rather than independent professional judgments. It could be appropriate, however, for OMB to request reviews from qualified professionals, or to invite commentary from members of the relevant scientific disciplines.

Replace OMB review of survey instruments and methodology with peer review by technically qualified persons outside of the federal government. OMB is regarded by many regulatory stakeholders as a nonneutral party, generally hostile to most social regulations. To some extent, these attitudes may be inevitable given OMB’s executive and statutory role as regulatory gatekeeper, but it is not clear that the gatekeeping function should extend to survey quality. Indeed, it sits uneasily with OMB’s recently acquired responsibilities under the Data Quality Act. It is strange for OMB to be, in effect, limiting the acquisition of information through surveys on the front end of the regulatory process and then criticizing the poor quality of regulatory information later in the process.

12. Consider interactions between the distributions of regulatory costs and benefits

Distributional consequences of regulation are important. At present, however, EPA tends to consider the distribution of regulatory costs and the distribution of benefits independently. It is possible that strong and potentially adverse interactions exist between the two, and these interactions should be considered explicitly in the RIA and during the rulemaking process.

EPA has demonstrated considerable concern about the distributional consequences of its regulations, although sometimes not on the issues of most concern to environmental advocates. The agency clearly pays some attention to issues of environmental justice and the identification of disproportionately affected communities, but by statute and executive order it is at least as concerned about the impacts of regulatory costs and other restrictions on various types of industrial facilities. Estimates of plant closings remain an important metric in assessments of the economic impacts of regulation, and small plants routinely receive exemptions from the more stringent regulations governing larger plants. But small plants may be older and dirtier than their larger counterparts, and they are often located in the more run-down parts of inner cities or small towns, surrounded by low-income and perhaps minority communities. The location and continued operation of these plants could therefore exacerbate the very environmental justice problems that EPA explicitly attempts to avoid in other ways. At the same time, people living in these communities are frequently employed by the very plants whose operations may be harmful to their health, so that action against small or old plants could conceivably increase local unemployment precisely where few alternative jobs exist. Any regulatory response here should therefore be considered carefully, based on credible analysis of the potential for injustice and of how regulatory costs and benefits interact in disadvantaged communities.

Research-Oriented Recommendations

13. Consider the use of group- as well as individual-respondent methods for calculating WTP

Critics of CBA have argued persistently that when considering public goods, it is more appropriate to value them in a collective context than in the individual-consumer context prescribed by welfare analysis. According to this view, people's decisionmaking calculus about public goods is different from their valuation of private goods because the context is different. Their thinking is supposedly less parochial, more future-oriented, and more altruistic. In addition, critics argue that the context in which WTP is elicited in individual surveys is artificial and inconsistent with how individuals actually make market-based decisions. The issue we want to focus on is this: How might those concerns be addressed by the use of group processes to elicit WTP?

Primarily, advocates of group processes have in mind fully group-determined decisions, reached by some deliberative process followed by the exercise of some kind of voting mechanism. Group valuation methods appear to have grown out of the citizens’ jury, a method of illuminating public policy controversies by convening one or more panels of citizens. In the United States, for example, the Hubert Humphrey School at the University of Minnesota has been prominent in its use of citizens’ juries to compare pricing policies with other approaches to dealing with traffic congestion. Group valuation methods add a valuation step to the citizens’ jury concept. See Sagoff (1998) or Spash (2007) for brief reviews of different approaches to group-determined benefit estimates.

To economists, the problem with this sort of group valuation is that it breaks the link between theoretical welfare economics and CBA. In principle, CBA accepts only one method for valuing the outcomes of social regulations or public investments: the sum of individual valuations, elicited either directly by survey or indirectly by inference from the behavior of autonomous market agents. Moreover, observers from numerous backgrounds see potential practical problems with group-elicited WTP estimates.

A bigger problem is that little consensus exists regarding how to conduct such group elicitations, and many observers fear that any such estimate may reflect more than just the valuation of the public good or service in question. For this reason, most observers would predict that the use of group methods would probably produce higher WTP estimates than would standard methods. List and his colleagues (2004), for example, agree that social approaches can lead to higher WTP values, but not because the values more faithfully reflect true WTP for the public good in question. Rather, group processes can include individuals’ willingness to be accommodating to the values of others, as well as their signaling of their environmental and social concerns. These latter effects may be valid, but they could be connected to any public good or to no public good and, according to List et al., can only distort the estimates of WTP for the good in question.

However, perhaps it is possible to have a middle ground in which information and attitudes about the public good in question can be aired in a group setting but coupled with private elicitation of WTP in a manner consistent with welfare theory and current practice. To see how this group interaction might help, consider briefly how WTP surveys are typically conducted now. To elicit individual WTP, the general procedure is to conduct a single 15- to 30-minute personal or telephone interview in which the environmental problem or public good to be valued is described in some detail, and a public policy remedy that will regulate the harm is proposed, as is a method of covering its costs. The payment method is designed to make it clear to the respondent that the respondent would have to pay, and so would everyone else. Thus, most well-designed WTP studies attempt to eliminate concerns about free riding (unless altruism is the focus of the research). After the setup is explained to the respondent's satisfaction, a series of yes–no WTP questions is asked. These data are then aggregated across respondents to get the demand curve.
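To make the aggregation step concrete, the sketch below is offered only as a stylized illustration, not as the procedure used in the RIAS discussed here: it converts yes–no responses gathered at several bid levels into a conservative (Turnbull-style lower bound) estimate of mean WTP. All bid amounts and response counts are invented for the example.

```python
# Stylized sketch (not any particular RIA's method) of aggregating yes-no WTP
# responses at different bid levels into a demand-like curve and a conservative
# estimate of mean WTP. All numbers are hypothetical.

bids = [5, 10, 20, 40]          # hypothetical bid amounts offered to respondents (dollars)
yes_counts = [80, 65, 40, 15]   # hypothetical "yes" answers at each bid
asked = [100, 100, 100, 100]    # respondents asked each bid

# Share willing to pay at least each bid; enforce a non-increasing curve.
shares = [y / n for y, n in zip(yes_counts, asked)]
for i in range(1, len(shares)):
    shares[i] = min(shares[i], shares[i - 1])

# Turnbull-style lower bound on mean WTP: each increment between successive
# bids is credited only to the share of the sample still willing to pay it.
mean_wtp_lower_bound = 0.0
previous_bid = 0.0
for bid, share in zip(bids, shares):
    mean_wtp_lower_bound += (bid - previous_bid) * share
    previous_bid = bid

print(f"Estimated lower bound on mean WTP: ${mean_wtp_lower_bound:.2f} per respondent")
```

In practice, analysts more often fit a parametric model (such as a logit or probit) to such responses, but the nonparametric bound conveys the basic logic of tracing out a demand curve from yes–no answers.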

In other words, respondents come to the survey cold, are presented with the environmental problem and potential remedy having perhaps never thought of it before, and then are asked to absorb a great deal of information and make value decisions with very little time for consideration or introspection and without being able to discuss the matter with friends or colleagues. All this despite the fact that most people do spend time thinking about major decisions, and often consult friends and colleagues for advice or additional perspective.

For use values that are broadly familiar to the public and that have more or less direct counterparts in market activities, such as increased availability of outdoor recreation, improved health, or greater commercial fishing yields, estimation of WTP is relatively uncontroversial. These estimates are most often produced by indirect methods, but the fact that they are familiar to the public means that they are also better suited than other benefit categories to individual survey methods. For more obscure or less empirically supported use values, such as the water-purifying and flood-control benefits of wetlands, or nonuse values such as endangered species and habitat protection, few if any market surrogates are available, and survey methods are the only game in town. Unfortunately, such goods are also the ones for which respondents will most likely have greater difficulties in valuation surveys.

Coupling group information provision with individual WTP elicitation has begun to attract empirical attention. In one interesting empirical study of WTP for the preservation of wildlife habitat of endangered geese in Scotland, for example, McMillan et al. (2002) outline a group informational approach they call the Market Stall.7 The authors recruit several groups of participants in a focus group-like setting, explaining to attendees the usual survey preliminaries of problem, potential solution, and payment method. A question-and-answer session follows. Researchers then ask the valuation questions in a format in which participants respond without revealing their answers to other participants. Respondents are then excused and invited back one week later for a follow-up discussion. In the meantime, they are encouraged to do their own research and talk to their friends and families. At the follow-up meeting, participants are once again asked if they have any questions, and discussion is encouraged. When no one has anything else to say, WTP is again elicited privately. For comparison purposes, researchers also conduct a more conventional WTP survey without the group discussion.

The results were dramatic. Compared to the Market Stall participants, the conventional survey participants were nearly twice as likely to indicate they would “definitely pay” (DP): 33 percent versus 18 percent. The mean WTP of DP respondents was £15.29 in the conventional survey, compared with £3.67 for the Market Stall participants in the first session and £4.49 for the same participants in the second session. The Market Stall estimates also had much smaller standard errors. Learning about the problem in a group session appears to affect WTP dramatically, but probably not in the direction that most would expect. Obviously, one cannot conclude on the basis of one study that group methods will reliably produce lower estimates of WTP, but the result does suggest that we might have much to learn from group processes and that some of these lessons are likely to be surprising.

Recently, EPA’s SAB recommended against both the group sharing of information in WTP surveys and the group elicitation of WTP. In view of the substantial literature developing on these issues, we suggest that EPA revisit that recommendation.

14. Investigate the WTP to avoid the dread associated with increased risk to oneself or to one's family

A persistent theme in the debate between proponents and opponents of CBA has been the question of whether the risk perceptions of experts or of laypeople should dominate in public decisionmaking about risk. One lesson from this discussion has been that risk involves more than the probability of material harm. Depending on the circumstances, it can also involve fear, anger, hopelessness, a sense of losing control, and more—myriad emotional and psychological reactions we will gather under the common heading, dread.

To the extent that CBA estimates only the WTP to avoid an increased probability of material harm, and ignores the dread associated with that probability, it may be missing an important category of regulatory benefits. Regulation may reduce both the probability of harm and the dread that often accompanies it. There is no theoretical reason to ignore the latter in CBA if empirical evidence eventually shows a meaningful WTP to avoid dread. An important concern here is not to use WTP estimates from studies of health effects for which dread is not expected to be significant, such as death in an automobile accident, to value a health effect where dread may play an important role. Comparison of valuation studies for various health effects suggests that differences beyond the direct risk of dying may raise WTP by anywhere from 0 to 100 percent.
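To illustrate the arithmetic implied by that range, the brief sketch below, which is purely hypothetical and not drawn from any EPA guidance, scales a base WTP estimate by candidate dread premiums of 0 to 100 percent; the base value is an assumption chosen only for illustration.

```python
# Purely illustrative: scaling a base WTP-to-avoid-risk estimate by a
# hypothetical dread premium in the 0-100 percent range discussed in the text.
# The base value below is an assumption, not an EPA figure.

base_wtp = 50.0  # hypothetical WTP (dollars) to avoid a risk that carries little dread

for dread_premium in (0.0, 0.25, 0.50, 1.00):  # candidate premiums from 0% to 100%
    adjusted_wtp = base_wtp * (1.0 + dread_premium)
    print(f"dread premium {dread_premium:.0%}: adjusted WTP = ${adjusted_wtp:.2f}")
```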

However, substantial practical obstacles may prevent the inclusion of this factor in CBA. People's emotional and psychological reactions to an increased probability of harm are highly contextual; they vary greatly depending on the nature of the risk. Thus, including dread in the cost–benefit calculus would mean either conducting a great many studies of the WTP for avoided dread or relying on benefits transfer in a setting where, because WTP varies so much with context, transfer might be quite problematic. Not surprisingly, therefore, our recommendation is to investigate the economic value of this benefit further before deciding to include it in RIAS.

Final Observations

Our recommendations for the reform of RIAS cover a range of topics: the quality of the analyses, relevance to agency decisionmaking, transparency, treatment of new scientific information, and the proper balance in both the analyses and the process, including the distributional consequences. The overall message is clear: improve the quality, scientific credibility, and timeliness of RIAS and, at the same time, make them more transparent and relevant to the decisionmaking process.

The natural pushback is to ask how much these improvements will cost. Presently, a small cottage industry is involved in preparing RIAS, both inside and outside of EPA. At an estimated cost of $1 million for each of the 8 to 10 RIAS produced annually, the agency is already committing substantial resources to this effort.8 Despite a number of cost-reducing proposals among our recommendations, such as a more selective focus on particular topics to be studied in individual RIAS, we recognize that our proposals would probably add to the total costs of developing RIAS. It is also possible that some of our recommendations may be at odds with others. For example, more SAB review might well conflict with the goal of developing a preliminary RIA six months in advance of agency decision meetings.

Recalling that RIAS are generally focused on rules with a minimum of $100 million of annual costs and/or benefits, the potential gains from improved regulatory decisionmaking are large. Unfortunately, the evidence that RIAS actually add net benefits to regulation is limited. Despite one early study demonstrating the gains from RIAS, limited recent data are available on the subject.9 Nonetheless, based on our review of the RIAS examined in this report, as well as other evidence, it is our judgment that recent RIAS have fallen well short of the mark in generating information and analyses that are truly useful to decisionmakers. We appear to be at a crossroads: either we fix the current system or we accept it without major reform. The recommendations developed here represent our judgment on an agenda for the former effort. We hope to spur further debate on these issues to stimulate constructive change.

Notes

1. See http://yosemite.epa.gov/ee/epa/eed.nsf/webpages/Guidelines.html.

2. Steinzor (2008), 122.

3. According to EPA (2000), the guidelines “... establish a sound scientific framework for performing economic analyses of environmental regulations and policies. They incorporate recent advances in theoretical and applied work in the field of environmental economics. The Guidelines provide guidance on analyzing the economic impacts of regulations and policies and on assessing the distribution of costs and benefits among various segments of the population, with a particular focus on disadvantaged and vulnerable groups.” See http://yosemite.epa.gov/ee/epa/eed.nsf/webpages/Guidelines.html.

4. Morgenstern and Landy (1997) also proposed that a NEPA-style scoping exercise be added to the RIA process.

5. An example drawn from the RIAS examined in this volume is the paper by Bell et al. (2004) on ozone mortality. Although the RIA on the CAIR cited the Bell et al. study, it was not included in the agency's benefit calculations because it had not yet been formally accepted for journal publication when the RIA was completed.

6. Executive Order 12866.

7. Strictly speaking, the experiment was to elicit the amount citizens were willing to contribute to compensate farmers for damages to land and crops caused by the protected species on their land. This illustrates a common problem of WTP studies: their connection to a regulation or to a policy outcome is tenuous at best. Based on the description in the paper, the respondent is not told how farmers would respond to the offer of compensation or how the goose populations would respond to the increase in habitat.

8. The estimate of $1 million is from Morgenstern and Landy (1997), based on a dozen RIAS conducted by EPA in the 1980s and 1990s. The Congressional Budget Office (CBO 1997) estimated the cost at about $700,000 apiece, although it highlighted the large variance in costs among different RIAS. Averaging the two estimates and inflating to current dollars yields about $1 million.

9. Morgenstern and Landy (1997) found that in a group of a dozen RIAS conducted in the 1980s-1990s, the increase in net benefits of the rules attributable to the RIAS greatly exceeded the costs of the actual studies. For a contrary view on the usefulness of RIAS, see Hahn and Tetlock (2008).

References

Ackerman, Frank, and Lisa Heinzerling. 2003. Priceless: On Knowing the Price of Everything and the Value of Nothing. New York: The New Press.

Bell, M.L., A. McDermott, S.L. Zeger, J.M. Samet, and F. Dominici. 2004. Ozone and Short-term Mortality in 95 U.S. Urban Communities, 1987-2000. Journal of the American Medical Association 292:2372-2378.

Congressional Budget Office (CBO). 1997. Regulatory Impact Analysis: Costs at Selected Agencies and Implications for the Legislative Process. Washington, DC: CBO.

Executive Order 12866. 1993. Federal Register 58:51735, October 4.

Hahn, Robert W., and Patrick Dudley. 2007. How Well Does Government Do Cost–Benefit Analysis? Review of Environmental Economics and Policy 1(2): 192-211.

Hahn, Robert W., and Paul C. Tetlock. 2008. Has Economic Analysis Improved Regulatory Decisions? Journal of Economic Perspectives 22(1): 67-84.

List, John A., Robert P. Berrens, Alok K. Bohara, and Joe Kerkvliet. 2004. Examining the Role of Social Isolation on Stated Preferences. American Economic Review 94(3): 741-752.

McMillan, Douglas C., Lorna Philip, Nick Hanley, and Begona Alvarez-Farizo. 2002. Valuing the Nonmarket Benefits of Wild Goose Conservation: A Comparison of Interview and Group-Based Approaches. Ecological Economics 43:49-59.

Morgenstern, Richard, and Mark Landy. 1997. Chapter 15 (Conclusions), in Economic Analyses at EPA: Assessing Regulatory Impact, edited by Richard Morgenstern. Washington, DC: RFF Press.

Olson, Erik D. 1984. The Quiet Shift of Power: Office of Management and Budget Supervision of Environmental Protection Agency Rulemaking under Executive Order 12291. Virginia Journal of Natural Resources Law 4:1.

Office of Management and Budget (OMB). 2005. Report to Congress on the Benefits and Costs of Federal Regulations. Washington, DC: Office of Information and Regulatory Affairs, OMB.

Sagoff, Mark. 1998. Aggregation and Deliberation in Valuing Environmental Public Goods: A Look beyond Contingent Pricing. Ecological Economics 24: 213-230.

Spash, Clive L. 2007. Deliberative Monetary Valuation (DMV): Issues in Combining Economic and Political Processes to Value Environmental Change. Ecological Economics 63: 690-699.

Steinzor, Rena I. 2008. Mother Earth and Uncle Sam: How Pollution and Hollow Government Hurt Our Kids. Austin, TX: University of Texas Press.

U.S. Department of Agriculture (USDA) Forest Service. 2005. Special Areas, State Petitions for Inventoried Roadless Area Management, Final Rule. Federal Register 70:25654, May 15.

U.S. Department of Interior. 2004. Office of Surface Mining Reclamation and Enforcement, Surface Coal Mining and Reclamation Operations, Excess Spoil, Stream Buffer Zones, Diversions, Proposed Rule. Federal Register 69:1035-1048, January 7.

U.S. Department of Interior, Fish and Wildlife Service, and U.S. Department of Commerce, National Marine Fisheries Service. 2008. Interagency Cooperation Under the Endangered Species Act, Proposed Rule. Federal Register 73:47868, August 15.

U.S. Environmental Protection Agency (EPA). 2000. Guidelines for Preparing Economic Analyses. Office of the Administrator (EPA 240-R-00-003), Washington, DC. http://yosemite.epa.gov/ee/epa/eed.nsf/webpages/Guidelines.html (accessed January 31, 2009).

———. 2002. New Source Review (NSR) Improvements, Supplemental Analysis of the Environmental Impact of the 2002 Final NSR Improvement Rules, November 21. www.epa.gov/NSR/documents/nsr-analysis.pdf (accessed January 11, 2008).