Ordering Paragraph 9 required the Joint RRM/ST Cost Effectiveness Subcommittee to address several issues.
· Cost effectiveness tests for the LIEE Program
· Cost effectiveness tests for measures within the LIEE Program
· Use of these tests in decision making
· Method to address "gross" versus "net" issues
The first two issues are discussed next. No specific methodologies were developed to address the last two issues; the recommendations for addressing them were based on public input and group discussions. Accordingly, the recommendations on the use of cost effectiveness tests in decision making and on "gross" versus "net" issues are presented in the results section (Section 4).
Ordering Paragraph 9 clearly stated which cost effectiveness tests were required - the participant cost test (PC) and the utility cost test (UC). As stated in the Decision:
"The Participant Cost Test (PC) measures benefits and costs from the perspective of the customer receiving the measures or services. This test compares the reduction in the customer's utility bill, plus any incentive paid by the utility, with the customer's out-of-pocket expenses. In the case of LIEE program measures, where there generally are no out-of-pocket expenses to the eligible customer, the PC basically measures the bill savings associated with the program or measure.
The Utility Cost Test (UC) measures the net change in a utility's revenue requirements resulting from the program. The benefits for this test are the avoided supply costs of energy and demand ("avoided costs") - the reduction in transmission, distribution, generation and capacity costs valued at marginal costs - for the periods when there is a load reduction. The costs for the UC test are the program costs incurred by the utility, including any financial incentives paid to the customer, and the increased supply costs for the periods in which load increased. "1
The "California Standard Practice Manual: Economic Analysis of Demand-Side Programs and Projects" October 2001, provides further specifics regarding these two tests. The formulas from the Standard Practice Manual are presented in Appendix F; for each of these tests, there is a net present value (NPV) formulae that is the benefits minus the costs and a benefit-cost (B/C) ratio formulae that divides the benefits by the costs.2 The inputs and methods used to determine the results for each test are provided next.
D.01-12-020 accepted the NEBs proposed by the LIPPT report, and all NEBs presented in the LIPPT report have been included in the calculations in this report. Appendix A presents a listing of the NEBs, a description of each, the measures included for each NEB, and comments on which NEBs are recommended for further study.
As stated in the description of the PC included in D.01-12-020, participant costs for the LIEE program are zero. This effectively removes the PC B/C ratio from consideration, since division by zero is undefined. Therefore, the PC simply defaults to the NPV formula discussed above, i.e., the net present value of the benefits received by the customer3. These benefits are the bill savings due to the installation of the program measures.
The work that created the LIPPT report also developed a spreadsheet model for calculating LIPPT values. The spreadsheet included all inputs needed to calculate the PC, both with and without NEBs, with one exception: it used the avoided costs for energy rather than the energy rates faced by the customer. Because of how the spreadsheet was set up, simply substituting energy rates for avoided costs produced the bill savings values used in the analysis in this report.
The energy rates used in this analysis for PY2000 and beyond are the same as presented in the "Joint Utility Low Income Energy Efficiency Program Costs and Bill Savings Standardization Report" of March 2001. The rates by utility are shown in Exhibit 3.1.
Exhibit 3.1
Energy Rates Used in Participant Cost Tests
Utility | PY 2000 Electric Rate ($/kWh) | PY 2000 Gas Rate ($/therm)
PG&E | 0.1159 | 0.6537
SCE | 0.1040 | NA
SDG&E | 0.1179 | 0.5926
SoCalGas | NA | 0.6110
All Subsequent Years | Previous Year * (1 + Escalation Rate) | Previous Year * (1 + Escalation Rate)
The escalation rate was set to 3% per year with an 8.15% discount rate.4 Because these rates do not take into account recent rate increases, the bill savings over time are most likely conservative.
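As an illustration of how these inputs combine, the following is a minimal sketch (not the LIPPT spreadsheet itself) of the discounted bill savings calculation underlying the PC test; the measure life and annual savings figures are hypothetical:

```python
# Sketch of the PC test bill savings calculation: annual energy savings
# valued at the customer's rate, escalated 3% per year after PY2000 and
# discounted at 8.15%. Example inputs are hypothetical.

ESCALATION = 0.03
DISCOUNT = 0.0815

def bill_savings_npv(annual_kwh, rate_per_kwh, life_years):
    """Net present value of bill savings over the measure's useful life."""
    npv = 0.0
    rate = rate_per_kwh
    for year in range(1, life_years + 1):
        npv += (annual_kwh * rate) / (1 + DISCOUNT) ** (year - 1)
        rate *= 1 + ESCALATION  # escalate the rate for the next year
    return npv

# Hypothetical example: a measure saving 300 kWh/year for 10 years
# at PG&E's PY2000 electric rate of $0.1159/kWh.
print(round(bill_savings_npv(300, 0.1159, 10), 2))
```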
D.01-12-020 recognized that it was not possible to compute a B/C ratio for LIEE participants since the participants have no costs related to the installations. While the PC and UC tests can be computed as the NPV of the benefits, those NPVs have little meaning in isolation. The Joint RRM/ST Cost Effectiveness Subcommittee discussed the difficulties in comparing PC and UC tests if the PC test is simply an NPV dollar value while the UC test is both an NPV and a B/C ratio. As part of this discussion, and as directed by D.01-12-020, the group reviewed the relevant portions of D.92-09-080.5 That decision discusses the possibility of using the utility costs to create a benefit-cost ratio, while not specifically addressing its use to create a modified participant cost test. On this basis, the subcommittee decided to also calculate a "modified" participant cost test (PCm) whereby the participant benefits are divided by the utility costs to provide a PCm B/C ratio. (As it turns out, this value is the ratio of the bill savings divided by the utility cost, which is already being calculated by the utilities as part of the bill savings reporting.) The utility costs used in the PCm are identical to those in the UC test.
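As a minimal numeric illustration of the PCm arithmetic (the dollar figures below are invented for exposition):

```python
# Hypothetical PCm example: discounted participant benefits (bill savings,
# plus participant NEBs where included) divided by the utility's
# discounted program cost for the measure.
participant_benefits_npv = 412.50  # $, hypothetical
utility_cost_npv = 550.00          # $, hypothetical

pcm_ratio = participant_benefits_npv / utility_cost_npv
print(round(pcm_ratio, 2))  # 0.75
```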
The Joint RRM/ST Cost Effectiveness Subcommittee feels that the creation of the PCm, or some similar ratio, is an important step toward being able to evaluate and rank measures in conjunction with the UC. Without the creation of a participant-related ratio, the comparison would be between two different kinds of quantities of vastly differing orders of magnitude (a dollar NPV versus a dimensionless ratio).
It is important to note that the NEBs applied in this test are only those benefits that apply to the participants. For example, "fewer customer calls" and any other NEBs that accrue to the utility are not included in the participant cost test or modified participant cost test benefits.
The utility cost test, as defined by the Standard Practice Manual, also has a net-present-value formula and a B/C ratio formula. In the UC test, the benefits for the utility are determined using the utility avoided costs rather than the energy rates used in the participant cost test. The avoided cost forecast as adopted by the Commission for PY2000,6 and used in this analysis to value electricity savings, was a statewide kWh value.7 It is anticipated that future efforts in this area will use the avoided cost values most recently adopted by the Commission.
The electric and gas avoided costs used in the determination of benefits for the UC test presented here include energy, transmission and distribution, and environmental externalities. The values used were $0.0452 per kWh for electricity and $0.3580 per therm for natural gas.
The utility costs used do not include incentives paid since no incentives are paid in this program. Likewise, there are no increased supply costs since this is not a fuel substitution or load shifting program. Therefore, the costs used in the UC are the program costs only.
The NEBs applied in this test are only those benefits that accrue to the utility. Therefore, NEBs such as "water and sewer savings" and other NEBs that were determined to accrue to the participant are not included in the UC benefits estimates.
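Putting these pieces together, the following is a minimal sketch of the UC test under the avoided costs cited above; the savings, measure life, and program cost figures are hypothetical:

```python
# Sketch of the UC test: savings valued at avoided cost ($0.0452/kWh,
# $0.3580/therm), discounted at 8.15%; costs are program costs only,
# since no incentives are paid and there are no increased supply costs.

AVOIDED_KWH = 0.0452    # $/kWh
AVOIDED_THERM = 0.3580  # $/therm
DISCOUNT = 0.0815

def uc_test(annual_kwh, annual_therms, life_years, program_cost, utility_nebs=0.0):
    """Return (NPV, B/C ratio) for the utility cost test."""
    benefits = sum(
        (annual_kwh * AVOIDED_KWH + annual_therms * AVOIDED_THERM)
        / (1 + DISCOUNT) ** (year - 1)
        for year in range(1, life_years + 1)
    ) + utility_nebs  # only utility-side NEBs enter the UC benefits
    return benefits - program_cost, benefits / program_cost

npv, bc = uc_test(annual_kwh=300, annual_therms=0, life_years=10, program_cost=120.0)
print(round(npv, 2), round(bc, 2))
```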
Moving from whole-program assessments to measure-level assessments significantly increases the complexity of the analysis. The original LIPPT report created utility-specific NEB values per household and multiplied that value by the number of households served to obtain an annual monetary value for a non-energy benefit. As such, the OP 9 provision requiring the calculation of measure-specific benefits that include NEBs means that decisions had to be made regarding allocation of the NEBs to a different unit of measurement (i.e., per measure as opposed to per household).
While the Joint RRM/ST Cost Effectiveness Subcommittee recognizes that NEBs need to be allocated to individual measures in order to permit their inclusion in measure assessment, it also feels it necessary to document the inherent weaknesses in conducting such a task.
· The LIPPT Report collected and documented NEBs from many disparate sources. The NEBs were developed at a global level. At times, a specific NEB was calculated based on a sampled population of participants, regardless of the exact measures in each participant's home.
· Because these estimates are at the household level, no consistent, uniformly applicable criteria exist for distributing the household/program-level NEB values among the program measures.
· Regardless of how the NEBs are allocated to measures, there is a false sense of precision inherent in the process - it may or may not be true that a certain measure would provide the level of non-energy benefits attributed to it.
Given those caveats, three methods of allocating the NEBs across measures received serious consideration and analysis. These methods weighted the NEB based on the:
· simple association of a measure with that NEB,
· average installations per house in the program for that measure, and
· NPV of the energy savings over the life of the measure.
Each of these three methods is discussed in the order presented above.
Simple Association of a Measure - With the concept of allocation by association, if a measure type is logically associated with an NEB (e.g., lower water costs are associated with faucet aerators), then program-level savings for that NEB are divided equally across the associated measures, independent of how many units of each measure were installed. The problem with this approach is that it causes the B/C ratio to fluctuate greatly. As an example using this allocation method, only a few compact fluorescent lamp (CFL) porch lights were installed, so the measure has correspondingly small benefits. When the NEB allocation is added to the benefit portion of the B/C, it has a huge effect on the B/C ratio. At the other end of the spectrum is the regular compact fluorescent lamp measure, which has large energy benefits already. As such, adding a comparatively small amount more to the benefits results in a tiny change in the B/C ratio. As a result, this method causes changes in the measure-level B/C ratios that logically seem to be out of proportion to any rationally expected effects (see Exhibit 3.2). Thus allocation based on the simple association of the measure with an NEB was rejected as an allocation method.
Exhibit 3.2
Example of Weighting Method - Simple Association
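To make the distortion concrete with a hypothetical example (the figures below are invented for exposition, not drawn from Exhibit 3.2): suppose a $1,000 program-level NEB is split equally between two associated measures, $500 each. A CFL porch light measure with $50 of energy benefits and $100 of costs jumps from a B/C of 0.50 to 5.50, while a regular CFL measure with $5,000 of benefits and $2,000 of costs moves only from 2.50 to 2.75.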
The rejection of the Simple Association of a Measure method left two competing methods for allocating the NEBs: (1) the lifecycle monetary benefit of a measure (called the kWh weighting method for simplicity) and (2) the average installations per household method. The Joint RRM/ST Cost Effectiveness Subcommittee discussed at great length how to make a decision between these two methods and whether the chosen method of allocation was appropriate and defensible.
Both methods use the same mathematical mechanics to allocate the NEBs; it is just the weighting values that differ. Exhibit 3.3 below graphically shows how the NEB is allocated for the average installations per household method. As shown there, NEB dollars are only allocated to measures that have been determined to have a relationship to the NEB. (Appendix A documents which measures are included in each NEB.) In Exhibit 3.3 the dollar values are weighted based on the average number of measures installed per home. A measure with a higher average number of measures installed per home would receive a larger proportion of the NEB dollars compared to a measure with a lower average number of measures installed per home. After allocating the dollars for each NEB, the values are summed across a measure to determine the measure specific NEB benefit.
If the kWh weighting method were to be used, the lifecycle monetary savings of the measure would replace the values in the second column (Average Measures Installed per Home). Those measures with a higher lifecycle savings would receive a higher proportion of the NEB dollars. Higher lifecycle savings would be due to measures with high initial energy savings and/or long effective useful lives.
Exhibit 3.3
Illustration of Allocation Method
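Because the two methods share the same mechanics, the allocation arithmetic can be sketched once; only the weight column changes. The measure names, weights, and NEB dollar value below are hypothetical:

```python
# Sketch of the shared allocation mechanics illustrated in Exhibit 3.3.
# Weights are either average installations per home or lifecycle energy
# savings; a measure with zero weight receives no allocation.

def allocate_neb(neb_dollars, weights):
    """Distribute a program-level NEB across its associated measures in
    proportion to each measure's weight."""
    total = sum(weights.values())
    return {measure: neb_dollars * w / total for measure, w in weights.items()}

# Average-installations-per-home weighting (hypothetical values):
weights = {"faucet aerator": 2.0, "low-flow showerhead": 1.0, "water heater blanket": 0.5}
print(allocate_neb(3500.0, weights))
# {'faucet aerator': 2000.0, 'low-flow showerhead': 1000.0, 'water heater blanket': 500.0}
# Substituting each measure's lifecycle savings for the weights yields the
# kWh weighting method; summing a measure's shares across all NEBs gives
# its measure-specific NEB benefit.
```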
Given that the mechanics of allocation are identical, the main task that remained was the choice of the criteria to be used to select the "better" or more logical method for distributing NEBs amongst the measures. Many discussions and email exchanges ensued amongst the Joint RRM/ST Cost Effectiveness Subcommittee.
· Since little basis exists for judging what the correct allocation of NEBs should be, on what basis should the group judge whether the changes in B/C were reasonable?
· Should a teleological argument be made - a method is better based on the end results (and the years of experience of the group that gives judgment to that end result)?
· Should a theoretical argument be made based on how the NEBs were estimated in the first place?
As part of the struggle with these and other issues, the group developed the following arguments for and against each of the final two allocation methods.
Allocating based on the average measures installed per household.
Pros:
· Acknowledges that each measure installed can provide a benefit to the customer that cannot be quantified by the energy savings.
· De-links NEB savings from energy savings, which can cause dramatic changes in rank when NEBs are added. If the a priori expectation of adding in the NEBs is that changes should occur, such changes may be warranted.
· The original participant NEBs were developed based on general participation in a low-income type program, which might suggest that they are more directly tied to the number of measures installed per household.
Cons:
· For some measures the participant benefit/cost ratio moves from below 1.0 to above 1.0 based solely on the addition of the NEB. This occurs for those measures with low initial energy benefits and low costs that are installed frequently.
· There are dramatic changes in ranking when moving from rank without NEBs to rank with NEBs. For example, one measure changed from 11th rank without NEBs to 25th rank with NEBs.
· Methods used to determine NEBs that apply to utility benefits were originally tied to energy savings. This allocation method does not seem to be a good fit for allocating utility NEBs.
Allocating based on the lifecycle monetary benefit of the installed measures.
Pros:
· Method of determining utility NEBs was originally tied to energy savings.
· Measures do not change rank compared to other measures in a dramatic fashion. In the PG&E data analyzed, the top ten measures without NEBs are still the top ten measures with NEBs (as are the second and third ten measures).
· Increased energy savings, and the resultant monetary savings, potentially gives participants more opportunity to impact their comfort, safety, etc., suggesting that the NEBs may be directly tied to energy savings.
Cons:
· Directly ties the non-energy benefit to the energy benefit regardless of potential benefits derived from the interaction with the LIEE personnel or the interaction of the measure in the house. Does not acknowledge that benefits can occur that are not correlated to energy savings.
· The measures do not change rank significantly. If one believes a priori that the NEBs should have an effect on measure ranking, then this method does not meet that expectation.
In addition to discussing the advantages and disadvantages of these two methods, the subcommittee discussed potential combinations of the two methods. By about the mid-point in the deliberations, it was generally accepted that the kWh allocation method was particularly applicable to the UC test, since the NEBs that applied to the UC test were highly correlated with energy savings. Thus the majority of the later discussion centered on the best approach to use in allocating the NEBs for the PCm test. Consideration was given to using the kWh allocation method for the UC test and the average measures installed per household method for the PCm; however, the consensus was that the dramatic changes in the PCm could not be justified. Appendix C and Appendix D present the B/C ratios both with and without NEBs for these two allocation methods.
Throughout the Joint RRM/ST Cost Effectiveness Subcommittee discussions, continual attention was paid to the fact that the method employed had to be readily applicable on a mass basis across the utility databases and could not require detailed, minute adjustments.
Recommendation: The Joint RRM/ST Cost Effectiveness Subcommittee recommends that, for the present, both the UC and PC tests allocate NEBs based on the lifecycle monetary benefit of the installed measures. Given the lack of documented, concrete information on how the measure-level NEBs should be distributed, this method allocates the NEBs to the measures without causing significant changes to the rank ordering of the UC and PCm test results. In lieu of better information, this approach is considered reasonable.
In choosing to allocate participant-related NEBs by energy savings, the Joint RRM/ST Cost Effectiveness Subcommittee does not dispute that NEBs in general are intended to capture those effects not reflected in the standard ways of valuing energy impacts. Rather, the Subcommittee seeks to develop a systematic and consistent rule for allocating program-level NEBs to the measure level. Because, in many cases, it can be shown that these NEBs are correlated with energy savings, the Subcommittee believes that allocating participant NEBs according to energy savings yields a more consistent and believable result than allocating them according to the average number of measures installed per household.
In addition, the Joint RRM/ST Cost Effectiveness Subcommittee wants to make clear that the choice of the kWh allocation method is based partly on a shortage of information that might allow other approaches. Its choice as a proxy now should not preclude changes to alternate, more appropriate methods when better information becomes available, or discarding NEBs at the measure level altogether.
It should be noted that measures with no claimed energy savings receive no NEB allocation.
The Joint RRM/ST Cost Effectiveness Subcommittee reviewed several different approaches to screening measures for the LIEE program. Early in the process, the following general three-stage approach to screening measures was agreed upon.
1. If both the PCm and the UC test results are above the "pass" criteria, the measure is included.
2. If one of the two test results falls below the "pass" criteria, the measure should probably be included.
3. If both the PCm and the UC test results fall below the "pass" criteria, the measure is excluded.
Given this three-stage approach, the main issue remaining was the selection of the threshold criteria for pass/fail. While many criteria were discussed, several received the majority of the attention. The following descriptions summarize these approaches and describe why they were rejected or accepted.
· Standard threshold of 1.0 (where the benefit exceeds the cost) for both tests. This approach was the first discussed, since it represents the "traditional" break point for cost effectiveness testing. However, it was considered inappropriate at this time since (1) the current program-level PCm and UC averages both fall below this value, (2) the PCm and UC current program averages have substantially different values (making the selection of a single criterion for both tests inappropriate for screening), and (3) well over half of the current measures would be eliminated by this approach.
· A threshold value below 1.0, based on the idea that a dollar of services has a higher value to the LIEE customer than the dollar cost to the average ratepayer. While this approach would seem to have some support in the literature, the choice of a defensible multiplier would be difficult to substantiate or would have to be arbitrary.
· An average program threshold equal to the current average statewide values for the PCm and UC tests, with no individual measure thresholds. This approach would be consistent with the manner in which other energy efficiency programs are managed and would place the emphasis on the program-level numbers, which are more supportable than the measure-level values. This approach was considered nonresponsive to the order by the Energy Division representative. Additionally, this approach would not have worked for the gas-only utility, since its program would have been virtually, if not completely, eliminated, despite the fact that the comparisons demonstrate that, on a service area basis, its customers are getting LIEE program services comparable to other service territories.
· A measure threshold equivalent to each utility's PY2000 program average. This approach relies on the finding, discussed in Section 4, that the program-level PCm and UC results are uniform across the state when considered on an electric and gas utility service area basis, indicating that program offerings to LIEE customers are comparable statewide. A corollary is that each of the individual utility program offerings is roughly comparable. Given this, current individual utility program average PCm and UC values would represent reasonable measure thresholds and would not bias for or against single-fuel utilities. By using the utility-specific PCm and UC values, each utility is measured against its own criteria, and measures are not unduly eliminated from the combined SCE and SoCalGas service area.
The Joint RRM/ST Cost Effectiveness Subcommittee selected the last of these options, the average program PCm and UC test values for each utility, as the threshold criteria for measure retention/exclusion. Once this selection was made, many specific details and situations were discussed. These are documented below in order to supply an expanded description of the measure selection process and to give guidance to the Standardization Team, whose responsibility it will be to apply this standard.
When applying the three-level assessment framework for LIEE measures discussed above, the measure-level benefit-cost (B/C) ratios, including NEBs, should be assessed as follows:
1. Measures that have both a PCm and a UC greater than or equal to the average program PCm and UC for that utility should be included in the LIEE program. This applies for both existing and newly proposed measures.
2. Existing measures with one of the two benefit-cost (B/C) ratios less than the average program PCm and UC for that utility should be retained in the program. New measures meeting this criterion would not be accepted because of the substantial effort required to integrate a new measure.
3. Existing and new measures with both the UC and PCm test results less than the average program PCm and UC for that utility should be excluded from the LIEE program unless substantial argument can be made that significant NEBs are not currently being accounted for in the PCm and UC test values or there are other policy or program considerations that require the measure to be retained.
The applications of these criteria are presented in tabular form in Exhibit 3.4.
Exhibit 3.4
Measure Assessment/Decision Rules for Retention/Addition
Pass/Fail Guideline Number | Assessment Test Type: Modified Participant Cost Test | Assessment Test Type: Utility Cost Test | Decision Rule: Existing Measure | Decision Rule: Proposed New Measure
1 | Pass | Pass | Retain | Add to Program
2 | One Pass / One Fail (in either combination) | | Retain | Do Not Add
3 | Fail | Fail | Retain ONLY if significant excluded NEBs can be identified | Do Not Add
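These decision rules reduce to a small amount of logic; the following is an illustrative sketch (the function and variable names are invented here), with the thresholds capped at 1.0 as discussed below:

```python
# Sketch of the Exhibit 3.4 retention/addition rules. Thresholds are each
# utility's average program PCm and UC values, capped at a maximum of 1.0.

def screen_measure(pcm, uc, pcm_threshold, uc_threshold, existing):
    """Apply the three-level decision rules to a single measure."""
    pcm_pass = pcm >= min(pcm_threshold, 1.0)
    uc_pass = uc >= min(uc_threshold, 1.0)
    if pcm_pass and uc_pass:                       # Guideline 1
        return "retain" if existing else "add to program"
    if pcm_pass or uc_pass:                        # Guideline 2
        return "retain" if existing else "do not add"
    # Guideline 3: fail/fail; retention requires a showing of significant
    # excluded NEBs or other policy considerations.
    return "retain only with NEB/policy showing" if existing else "do not add"

print(screen_measure(pcm=0.62, uc=0.88, pcm_threshold=0.55, uc_threshold=0.80, existing=True))
```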
The more restrictive approach to adding new measures is believed to be justified because adding measures requires added support costs (e.g., development of standards, training, etc.) and measures already in the program have received some level of scrutiny. Additionally, some non-energy related measures are already in the program for policy reasons (e.g., furnace repair/replacement, some minor home repairs). These measures will need to be assessed on a case-by-case basis.
The reasoning behind retaining measures that pass one test but fail the other is that either marginal adjustments in the measure offering or changes in economic conditions can swing such measures back into a pass/pass situation. The Joint RRM/ST Cost Effectiveness Subcommittee does not want to see measures with marginal cost effectiveness precipitously rejected from the program.
Under this approach, the elimination of low cost-effectiveness measures will slowly raise the average program PCm and UC test values. As the average program PCm and UC rise, the pass/fail criteria should not exceed a maximum of 1.0 for either test: this is the point at which the benefits exceed the costs, and it is not reasonable to eliminate measures with benefits greater than their costs. In addition, it is recognized that for the electric utilities (where the benefits are high), some added measures may actually reduce the overall utility B/C ratio. This would still be considered appropriate, since the new measure still has benefits greater than its costs.
The Joint RRM/ST Cost Effectiveness Subcommittee recommends that the program-level criteria be held constant for two-year periods and then updated to the average program value of the second year. The primary recognized exception to this rule would be when a utility institutes a large structural change in the LIEE program, in which case the criteria ought to be updated in the year the program is changed.
The assessment of measure inclusion or exclusion should occur biennially for existing measures to coincide with the biennial program impact evaluation, with new measures being evaluated in the program year in which they are proposed.
The Joint RRM/ST Cost Effectiveness Subcommittee reviewed the following possible issues that could arise from the proposed methodology:
· What should be done when a measure (e.g., ceiling insulation) is slated to be offered to selected subpopulations (e.g., single family, multi-family, mobile homes) based on the test results? The subcommittee decided that it was appropriate to offer measures to selected subpopulations, as long as the criteria are applied uniformly.
· What should be done with measures that are cost effective in the service areas for one or more utilities but not cost effective in other service territories? It was acknowledged that this could occur and was an issue for uniformity of the program statewide. Similar to the previous issue, it is believed that a uniform application of the methodology should occur, but that it should be tempered by the RRM Standardization Team giving close scrutiny to measures that fall into this category.
· What happens to measures that may not appear to make sense based on the B/C ratios of the single measure but that have interactive effects with other measures? The subcommittee felt that interactive measures may need to be considered as part of a complete group of measures (e.g., it may not make sense to eliminate one of the weatherization measures, since they act as a group to make the LIEE customer comfortable).
The Joint RRM/ST Cost Effectiveness Subcommittee realizes that it is likely that the Standardization Team will need to review and make decisions on many cases such as those presented above.
The current utility-by-utility retention/addition criteria are documented in Section 4, Results and Recommendations, which follows.
1 Page 57, R.01-08-027, D.01-12-020, December 11, 2001, Section V.
2 California Standard Practice Manual: Economic Analysis of Demand-Side Programs and Projects, October 2001, Chapters 2 and 5.
3 In actuality, it is the net present value of the participant benefit minus the net present value of the participant cost, but with the participant cost term equal to zero, it reduces to only the first term.
4 ALJ Bytof ruling, dated October 25, 2000, in Application (A.) 99-09-049, et al.
5 Section 6.1.2.2, Consideration of Total Resource and Utility Costs.
6 D.99-08-021, further adopted in D.00-07-017.
7 While the electric use is expressed only in kWh, the avoided cost value was developed using a hybrid demand profile. Thus extracting the relative demand contribution is virtually impossible. It is believed that the demand component represents between 1% and 6% of the overall avoided cost using 2001 kW values.