7. Factors Considered in Review of Proposals
One main criterion for determining whether to adopt a particular demand response activity is whether that program is cost effective. However, because demand response programs are relatively new compared to other forms of demand-side management, such as energy efficiency, there is still a great deal of uncertainty about the best way to measure their cost effectiveness. The Commission has not yet adopted a standard cost effectiveness methodology, in part because many of the costs and benefits of demand response programs are intrinsically difficult to measure and compare. For these reasons, the cost effectiveness of an individual program will be one important factor considered in evaluating proposed activities, but it will not be the only relevant criterion. The following list includes the factors that have been considered in evaluating the programs:
1. Cost effectiveness: The cost effectiveness analysis contained in these applications is based on a Consensus Framework proposed by most of the parties in R.07-01-041. This framework is not as broad as the subsequent protocols proposed by Commission staff, which required a sensitivity analysis of many inputs rather than a single benefit/cost ratio for each program and test. However, it does provide a useful estimate for examining the cost effectiveness of programs. For a more detailed discussion of the usefulness and limitations of the Consensus Framework cost effectiveness estimates used in these applications, see Section 7.1 below.
2. Track record of performance for continuation of existing programs: This includes, but is not limited to, actual load drop (especially compared to enrolled load and estimated load drop), target groups and types of participants, actual cost, how often the program was called, actual load drop rate, actual load pick-up rate, and other factors as appropriate.
3. Projected future performance: Expected future performance including, but not necessarily limited to, estimated participation (customers and enrolled load) and estimated load drop at peak times.
4. Cost.
5. Flexibility or versatility: Whether a program can be called under a variety of circumstances, or only in rare or specialized situations. For example, does the program have multiple triggers? Can it be called on a price-responsive basis for routine day-to-day resource dispatch, as well as for contingencies such as emergencies? Can it be called in non-summer months to respond to generator outages?
6. Adaptability to changes in the structure of the electricity market: Ability of a program to adapt to the Market Redesign and Technology Upgrade (MRTU) and the new CAISO markets. For example, is a program likely to be able to supply some of the operational characteristics of Proxy Demand Resource or participating load? What interaction or shared dispatch and control could CAISO have with the program?
7. Locational value: Whether the program can be called by location. For example, can the program be activated ("called") by specific location if necessary, particularly in transmission and distribution congestion areas? Does the program help to alleviate a particular geographic challenge? Does it count towards locational resource adequacy or more specific local needs?
8. Integration with advanced metering infrastructure, Smart Grid, and emerging technology: What enabling technologies are required for the program? Would this enabling technology become obsolete or redundant once AMI is installed at the participating customer's site? Will the program increase the operational capability of AMI? How might the program contribute to a Smart Grid?
9. Consistency of offerings throughout the state: Are equivalent programs available in or appropriate for other parts of the state? Is the program consistent enough across utilities that commercial customers with multiple facilities can participate easily?
10. Simplicity/Understandability: Can customers understand how the program operates and what is expected of them?
11. Customer acceptance and participation: Are participating customers likely to recognize that the program has been called? Is participation likely to cause customer hardship? Can the customer override an event, and if so, what rate of customer override does the utility expect?
12. Environmental benefits: Does the program have any particular environmental benefits that other programs do not have? Does the program help with firming intermittent renewable energy?
13. Contribution to existing Commission or state policies and goals: Is the program consistent with statewide goals or policies? For example, will the program simply shift usage from peak to another time or does the program also reduce overall usage? Is it integrated with other demand-side programs? Does it result in significant greenhouse gas (GHG) reductions?
7.1. Usefulness and Limitations of Cost Effectiveness Analysis
The utilities have provided cost effectiveness estimates, as directed,13 based on the Consensus Framework. These estimates consist of benefit/cost ratios calculated using four cost effectiveness tests based on the state's Standard Practice Manual14 for evaluation of energy efficiency programs: the Total Resource Cost (TRC), Ratepayer Impact Measure (RIM), Participant, and Program Administrator Cost (PAC) tests. A motion to adopt the Consensus Framework was filed by most parties to R.07-01-041, including CLECA, the party that raised the most concerns about the implementation of that framework in this proceeding.
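For reference, the sketch below illustrates, in simplified form, how benefit/cost ratios under the four Standard Practice Manual perspectives can be assembled from stylized program inputs. The dollar figures and the reduced set of cost and benefit categories are hypothetical and are not drawn from the utilities' applications or the Consensus Framework workpapers.

```python
# Simplified sketch of the four Standard Practice Manual benefit/cost ratios.
# All dollar figures are hypothetical, and the cost/benefit categories are
# reduced to their core components; actual Consensus Framework inputs are
# far more detailed (avoided capacity and energy, LOLE adjustments, avoided
# T&D, enabling technology costs, etc.).

def spm_ratios(avoided_supply_costs, program_admin_costs, incentives_paid,
               participant_costs, bill_reductions):
    """Return benefit/cost ratios from the four SPM perspectives."""
    participant = (bill_reductions + incentives_paid) / participant_costs
    rim = avoided_supply_costs / (program_admin_costs + incentives_paid
                                  + bill_reductions)
    pac = avoided_supply_costs / (program_admin_costs + incentives_paid)
    # Incentives are a transfer between ratepayers and participants,
    # so they drop out of the Total Resource Cost test.
    trc = avoided_supply_costs / (program_admin_costs + participant_costs)
    return {"Participant": participant, "RIM": rim, "PAC": pac, "TRC": trc}

# Hypothetical program: $10M avoided supply costs, $4M administrative costs,
# $3M incentives paid, $5M participant costs, $2M bill reductions.
print(spm_ratios(10e6, 4e6, 3e6, 5e6, 2e6))
```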
Though the Commission has not adopted a demand response cost effectiveness protocol, the Consensus Framework represents the most widely supported option available for estimating demand response cost effectiveness. Nevertheless, we recognize that this method is preliminary and not without problems. Several parties have pointed out what they see as deficiencies, inconsistencies, or inaccuracies in the utilities' method of estimating cost effectiveness. Claims made by various parties include:
· The utilities are calculating the Avoided Cost of Capacity using combustion turbine costs which are too low.15
· PG&E's gross margins are too high.16
· The utilities used three different discount rates,17 time horizons, and lifecycles to compute the net present value of the benefits and costs (a simplified illustration of this sensitivity follows this list).
· The three utilities used different input assumptions to compute avoided costs so that it is difficult to compare the cost effectiveness of the same programs across different utilities.18
· The avoided Transmission and Distribution (T&D) cost for PG&E is calculated incorrectly.19
· No party has provided a convincing argument for the inclusion of avoided T&D costs.20
· The Avoided T&D cost is applied incorrectly.21
· PG&E did not provide an appropriate Avoided T&D cost analysis.22
· The utilities' assumption that participant benefits are equal to participant costs skews the cost effectiveness results, since participant benefits are actually greater than, not equal to, participant costs for voluntary programs.23
· The utility's adjustments to the Avoided Capacity Cost based on LOLE/P calculations are inaccurate and inconsistent.24
· The utility's method exaggerates the benefits and does not include all the costs.25
· The cost effectiveness of statewide programs should not differ that much across the state.26
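Several of these claims concern the sensitivity of the results to discounting assumptions. The sketch below, using hypothetical figures that are not drawn from the filings, illustrates how the choice of discount rate alone can move a benefit/cost ratio above or below 1.0 when program costs are front-loaded and benefits accrue over the program's life.

```python
# Illustration of how the discount rate assumption alone can shift a
# benefit/cost ratio. All figures are hypothetical, not drawn from the
# utilities' applications.

def npv(cash_flows, rate):
    """Net present value of cash flows, where cash_flows[t] occurs in year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

years = 15
# Hypothetical program: costs front-loaded (enabling technology in year 0
# plus smaller annual administration), benefits spread evenly over 15 years.
costs = [6.0e6] + [0.5e6] * years
benefits = [0.0] + [1.2e6] * years

for rate in (0.06, 0.08, 0.10):
    ratio = npv(benefits, rate) / npv(costs, rate)
    print(f"discount rate {rate:.0%}: benefit/cost ratio = {ratio:.2f}")
```

With these stylized inputs, the same program appears cost effective at a 6% discount rate but not at 10%, which is why differing discount rate, time horizon, and lifecycle assumptions across the utilities make cross-utility comparisons difficult.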
Some of these criticisms may have merit. We therefore view the utilities' cost effectiveness estimates as just that: estimates. SCE notes that "this [demand response] program cycle is the first time the [utilities] have attempted to implement a common framework (the Consensus Framework) for evaluating demand response program cost effectiveness. It is not surprising that the process has revealed quantification differences among the [utilities]."27 Despite the variability in the utilities' calculations, we believe the cost effectiveness analyses contained in these applications represent an improvement over the calculations contained in previous demand response applications. We agree with SCE that the differences are unlikely to materially impact the Commission's ability to determine whether the demand response proposals are reasonable and should be authorized for 2009-2011, and that they should not stand in the way of our review of the applications.
We find that the cost effectiveness analyses included in the applications, while somewhat flawed, are sufficient for our purposes in this proceeding. In the long term, we need an improved cost effectiveness methodology, implemented consistently by all three utilities, in order to accurately measure, compare, and choose among existing and proposed demand response activities. We expect to adopt an improved cost effectiveness method in Phase 1 of R.07-01-041, bringing us closer to this goal of a consistent analysis for use in future demand response applications. It is likely that, as more is learned about the evaluation, measurement, and verification of demand response activities (an area that is not currently well understood), even that methodology can be improved over time. To the extent that there are deficiencies in the cost effectiveness methodology, parties should raise those concerns in the ongoing Phase 1 of R.07-01-041, not in this proceeding.
Nevertheless, we acknowledge the issues raised by parties and recognize the limitations of the provided cost effectiveness analyses as we review and evaluate the many proposals contained in these applications. In particular, we note a wide variation in benefit/cost ratios among the three utilities, which makes it difficult to compare the relative cost effectiveness of programs across utilities. Even similar statewide programs show large variations in cost effectiveness across the state. This could be due to a number of factors: it could result from variations in resource mix, utility infrastructure, local construction costs, and other factors (as claimed by PG&E28), or it could reflect differences in the assumptions and details used in calculations under the Consensus Framework. For example, PG&E's benefit/cost ratios are mostly between 0.5 and 1, SCE's are all close to 1, and SDG&E's are all above 1. It is possible that these varying results reflect differences in calculation rather than differences in program performance. Without a more consistent methodology, we cannot be certain that these disparities reflect real differences in program performance and the actual cost effectiveness of the three utilities' programs. Despite these problems, we believe that the utilities' cost effectiveness estimates are accurate enough to be used in this proceeding.

In most cases, this decision cites the results of the Total Resource Cost (TRC) test, though the results of the Participant, Ratepayer Impact Measure, and Program Administrator Cost tests have also been analyzed by parties and Commission staff. This is not meant to imply that the TRC results are preferred to, or more important than, the results of the other three tests. All four tests have been considered; for simplicity, the discussion in this decision uses the TRC test to compare programs among utilities. We use the utilities' analyses as provided, but we do so with the recognition that these benefit/cost ratios are only estimates of the demand response programs' cost effectiveness.
13 Guidance Ruling, February 27, 2008, p. 24.
14 The California Standard Practice Manual identifies the cost and benefit components and cost effectiveness calculation procedures from four major perspectives: Participant, Ratepayer Impact Measure (RIM), Program Administrator Cost (PAC) and Total Resource Cost (TRC). See the Standard Practice Manual at http://www.energy.ca.gov/greenbuilding/documents/background/07-J_CPUC_STANDARD_PRACTICE_MANUAL.PDF
15 CDRC Opening Brief, pp. 5 and 6-8.
16 CDRC Opening Brief, pp. 5 and 9-10.
17 CLECA March 2, 2009 comments.
18 DRA Opening Brief, p. 33.
19 TURN Opening Brief, p. 15.
20 DRA Opening Brief, p. 18; CAISO Opening Brief, p. 11.
21 CLECA March 2, 2009 comments.
22 TURN Opening Brief, p. 15; CLECA March 2, 2009 Comments.
23 CDRC Opening Brief, pp. 6 and 11-13.
24 CDRC Opening Brief, pp. 6 and 13-16.
25 TURN Opening Brief, pp. 18-25.
26 CLECA, p. 2 and Exhibit 601.
27 SCE Reply to CLECA Comments, March 5, 2009, p. 2.
28 PG&E Reply Brief, p. 8.