V. CPUC Performance Incentives Plan

Beginning with April 2002 performance, Pacific implemented the OSS performance monitoring and enforcement mechanisms contained in the Commission's performance incentives plan ("PIP").322 To ensure that an ILEC's application for Section 271 approval is in the public interest, the FCC has listed five important characteristics for a performance incentives plan.323 The CPUC's performance incentives plan has these characteristics:

· potential liability that provides a meaningful and significant incentive to comply with the designated performance standards;

· clearly-articulated, pre-determined measures and standards, which encompass a comprehensive range of carrier-to-carrier performance;

· a reasonable structure that is designed to detect and sanction poor performance when it occurs;

· a self-executing mechanism that does not leave the door open unreasonably to litigation and appeal;

· reasonable assurances that the reported data is accurate.

The PIP was developed in three parts: performance measurement and standards, performance assessment, and performance incentives. On August 5, 1999, we adopted the parties' Joint Partial Settlement Agreement (JPSA) in D.99-08-020, which established performance measurements and standards for Pacific.324 We established the basic performance assessment methods, including statistical tests and criteria, in D.01-01-037 on January 18, 2001. We completed a performance incentives plan for Pacific on March 6, 2002, in D.02-03-023, by establishing the monetary incentive amounts to be generated by deficient performance. We refer the reader to these decisions for details of the performance incentives plan.325 Some of the highlights of the plan are as follows.

A. Performance Measurement and Standards

Extensive collaboration between the parties in this proceeding resulted in a set of forty-four OSS performance measures that cover a wide range of OSS performance. These measures track performance in nine areas: pre-ordering, ordering, provisioning, maintenance, network performance, billing, database updates, collocation, and interfaces. Appendix III lists the measures in each area. Where applicable, these measures are broken down into sub-measures. Sub-measures were constructed to track performance separately for different service types, for different regions, and for other service distinctions such as the necessity for fieldwork or line conditioning. For example, provisioning time is tracked with separate sub-measures for different service types such as Resale Business POTS, Resale Residential POTS, Resale Centrex, Resale PBX, Resale DS1, UNE loop 8db weighted 2/4 wire analog basic/coin, UNE Loop 2 wire Digital xDSL capable, and many others. For many measures, complete sets of sub-measures track performance separately for four regions: LA, Bay Area, North, and South.326 Two examples of sub-measures that combine service type, regional, and other service distinctions are Bay Area Resale Business POTS No Field Work and Bay Area Retail Business POTS Field Work.

Not all measures or sub-measures are included in the PIP. By design, some measures were excluded by the parties' agreement because they were either duplicative of other measures or currently used only for diagnostic purposes. Out of the forty-four measures, thirty-nine are used in the PIP.327 In April 2002, with 126 active CLECs, 592 sub-measures produced testable data, resulting in 5,867 CLEC-specific performance results.328

The performance measures consist of two basic types: "parity" and "benchmark" measures. Parity measures are used where Pacific's performance for CLEC customers can be compared to its performance for its own customers; such comparisons are possible only where Pacific provides the same service to its own customers, termed a "retail analog." Absolute "benchmark" measures are used where there is no retail analog. For example, where a retail analog exists, a parity standard might compare the average time to provision a new service for CLEC customers to the average time for the same activity for Pacific's customers. In contrast, where there is no retail analog, a benchmark might require that ninety-five percent of new CLEC installations be completed within five days.

Performance is measured in five ways for parity and benchmark measures: averages, percentages, rates, indexes, and counts.329 The following examples illustrate these measures.330 An average measure compares the average service installation time for CLEC customers either to the average installation time for Pacific's customers (parity) or to a specific average (benchmark). A percentage measure compares the percentage of due dates missed for CLEC customers either to the percentage of due dates missed for Pacific's customers (parity) or to a specific percentage (benchmark). A rate measure compares CLEC customer trouble report rates either to Pacific's customer trouble report rates (parity) or to a specific rate (benchmark). An index measure compares the percentage of time an interface is available to the CLECs either to an index of the time it is available to Pacific (parity), or to a specific percentage (benchmark). The index measure differs from percentage measures in the way it is assessed, as discussed infra. A count measure allows a certain number of events, such as no more than one repeat trouble in a 30-day period.

As discussed supra,331 the processes used to generate the reported performance data have been audited and determined to be consistent with the rules that define the measures and make them operational. Additionally, aided by an external consultant, staff conducted an accuracy check of the data and found problems, which were corrected. (Initial Report on OSS Performance Results Replication and Assessment, Telecommunications Division (June 15, 2001).)

B. Performance Assessment

While the parties agreed on many issues, they were unable to agree on a complete set of performance assessment methods and criteria. To resolve the disputes that remained, we constructed the final assessment method and established the test criteria. We briefly describe the assessments we established.

Different measures require different tests to identify deficient performance, or "failures." Statistical tests are applied to parity measures to distinguish differences likely caused by random variation from differences likely caused by poor performance to CLEC customers. (Interim Opinion on Performance Incentives, D.01-01-037 at 58 - 129 (January 18, 2001).) (Incentives Opinion.)332 For average-based measures, a t-test is applied to log-transformed scores. Log transformations are used for time-measure data because the distribution of raw scores is skewed, as is typical for time-to-complete-task data. The transformations bring the data closer to the normal distribution assumed by the t-test.333 For percentage-based parity measures, a Fisher's Exact Test is used on the original non-transformed data. For rate-based measures, a binomial exact test is used, also on the original non-transformed data.
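For readers who want to see the mechanics, the following sketch illustrates the three parity tests with hypothetical data, using the Python scipy library. All sample values and counts are invented for illustration only; the controlling test specifications are those adopted in the Incentives Opinion.

    # Hedged sketch of the three parity tests described above, using
    # hypothetical sample data; where this differs from the adopted
    # specifications, the Incentives Opinion governs.
    import numpy as np
    from scipy import stats

    # Average-based measure (e.g., installation interval in days):
    # t-test on log-transformed completion times.
    ilec_days = np.array([1.2, 2.0, 3.5, 1.8, 2.2, 4.1, 1.5])   # hypothetical
    clec_days = np.array([1.9, 2.5, 4.8, 2.1, 3.0, 5.2, 2.4])   # hypothetical
    t_stat, p_val = stats.ttest_ind(np.log(clec_days), np.log(ilec_days),
                                    equal_var=True, alternative='greater')
    print(f"average-based measure: t = {t_stat:.2f}, p = {p_val:.3f}")

    # Percentage-based measure (e.g., missed due dates):
    # Fisher's Exact Test on the 2x2 table of missed vs. met due dates.
    #            missed   met
    table = [[12,     188],     # CLEC orders (hypothetical)
             [40,    1160]]     # ILEC orders (hypothetical)
    _, p_val = stats.fisher_exact(table, alternative='greater')
    print(f"percentage-based measure: p = {p_val:.3f}")

    # Rate-based measure (e.g., trouble reports per line):
    # exact binomial test of the CLEC count against the ILEC rate.
    clec_troubles, clec_lines = 30, 1000     # hypothetical
    ilec_rate = 250 / 10000                  # hypothetical ILEC trouble rate
    p_val = stats.binomtest(clec_troubles, clec_lines, ilec_rate,
                            alternative='greater').pvalue
    print(f"rate-based measure: p = {p_val:.3f}")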

Different critical alpha levels are used in an attempt to control the Type I error probabilities without allowing excessive Type II error, or beta levels. Id. at 83 - 98. We selected a default critical alpha level of 0.10 because we discovered that, at the conventional 0.05 alpha level, the beta levels were considerably greater. Id. at 92 - 93. While a 0.10 alpha level still does not balance the two types of error, it reduces the imbalance.334 For assessments of performance in consecutive months, we selected a 0.20 alpha level because the test requirement for consecutive "failures" greatly reduces the net alpha level. (Opinion on the Performance Incentives Plan for Pacific Bell Telephone Company, D.02-03-023 at 39 - 41, 51 - 52 (March 6, 2002).) (Plan Opinion.) We also selected a 0.20 alpha level for individual CLEC small sample tests for sub-measures where the CLEC industry aggregate failed, because the likelihood of a Type II error increases with small samples and where information suggests that the overall process is not in parity. (Incentives Opinion at 66, Plan Opinion at 39 - 41.) In a complementary fashion, we selected a 0.05 alpha level for the largest samples, and for moderately large samples where the CLEC industry aggregate "passed."
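The effect of requiring failures in consecutive months on the net alpha level can be shown with a short calculation. The following sketch, assuming a single-month critical alpha of 0.20 under true parity, reproduces the binomial tail probabilities reported in footnote 344.

    # Illustrative check of how requiring failures in multiple consecutive
    # months shrinks the net Type I error rate (see footnote 344).  With a
    # single-month critical alpha of 0.20 under true parity, the chance of
    # observing k or more "failures" in n months is a binomial tail probability.
    from scipy.stats import binom

    alpha = 0.20  # assumed single-month critical alpha

    for k, n in [(5, 6), (8, 9), (10, 12), (12, 15)]:
        # P(X >= k) where X ~ Binomial(n, alpha)
        net_alpha = binom.sf(k - 1, n, alpha)
        print(f"{k}+ failures in {n} months: net alpha = {net_alpha:.7f}")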

Benchmark assessments are simple comparisons without statistical tests. For the larger samples, performance to CLEC customers is compared to the specific standard as established in the JPSA. For the smaller samples, a "small sample adjustment table" is used to account for the fact that even when CLEC customers as a whole receive performance easily meeting the benchmark, small samples can fail the benchmark. For example, if twenty CLECs each placed one order, and only one of those twenty orders was not completed within the specified time, 95% of the orders would have been completed within the allowed time. With a benchmark of 90%, overall performance would easily pass. However, at the individual CLEC level, nineteen would pass and one would fail. The failure is inevitable in this case since, with only one order, the CLEC with the order not completed within the allowed time would have a zero-percent result. Small sample adjustment tables adjust for this problem by allowing a few more "misses" than allowed by the benchmark.335
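The twenty-CLEC example above can be worked through numerically, as in the following sketch. The data and the permitted-miss value are hypothetical; the misses actually allowed for each sample size are set by the adopted small sample adjustment tables.

    # Worked version of the twenty-CLEC example above (hypothetical data).
    # Aggregate performance easily meets a 90% benchmark, yet one CLEC with a
    # single order inevitably shows a 0% result; small sample adjustment
    # tables allow extra "misses" so that such a CLEC is not scored a failure.
    benchmark = 0.90
    # One order per CLEC; exactly one of the twenty orders missed the interval.
    clec_orders = [1] * 20
    clec_on_time = [1] * 19 + [0]

    aggregate = sum(clec_on_time) / sum(clec_orders)
    print(f"aggregate on-time rate: {aggregate:.0%}")        # 95% -> passes

    unadjusted = sum(1 for n, ok in zip(clec_orders, clec_on_time)
                     if ok / n < benchmark)
    print(f"individual CLEC failures without adjustment: {unadjusted}")

    # Hypothetical adjustment: allow one miss for a one-order sample, so the
    # single-order CLEC is not counted as failing.  (The actual permitted-miss
    # counts come from the adopted small sample adjustment tables.)
    allowed_misses_for_one_order = 1
    adjusted = sum(1 for n, ok in zip(clec_orders, clec_on_time)
                   if (n - ok) > allowed_misses_for_one_order)
    print(f"individual CLEC failures after adjustment: {adjusted}")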

Index measures are similar to parity and benchmark measures except that they have neither statistical tests nor small sample adjustment tables; the same is true of count measures.

We established two "consecutive failure" definitions. First, if a sub-measure "fails" three months in a row, it is termed a "chronic failure." Second, if a sub-measure fails five or six out of six months it is termed an "extended chronic failure."336
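As an illustration only, the two consecutive-failure definitions could be checked against a sub-measure's monthly pass/fail history roughly as follows; the controlling definitions remain those in the Plan Opinion.

    # Minimal sketch of the two consecutive-failure definitions for a single
    # sub-measure, given a monthly history in which True marks a "failure."
    # This only mirrors the text above: three failures in a row is a "chronic
    # failure," and five or six failures out of the most recent six months is
    # an "extended chronic failure."
    def chronic_failure(history):
        """True if the last three months all failed."""
        return len(history) >= 3 and all(history[-3:])

    def extended_chronic_failure(history):
        """True if five or more of the last six months failed."""
        return len(history) >= 6 and sum(history[-6:]) >= 5

    months = [True, True, True, False, True, True]   # hypothetical results
    print(chronic_failure(months))            # False: the last three months did not all fail
    print(extended_chronic_failure(months))   # True: five of the last six months failed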

C. Performance Incentives

Instead of outright payments to CLECs or the state general fund as many other states have required, our incentives are billing credits to CLECs and ratepayers. Monetary amounts generated by deficient performance to individual CLECs become billing credits to those CLECs (Tier I). Amounts generated by deficient performance to the CLEC industry as a whole become billing credits to the ratepayers (Tier II). If the amount to be credited to a CLEC exceeds the CLEC's billing, the excess amount is credited to the ratepayers.

We have established limits, or caps, to the credits that Pacific must issue. First, the overall annual cap equals thirty-six percent of Pacific's annual net return from local exchange service in California. The FCC has approved several other states' performance incentive plans with this same percentage of net return liability, viewing it as a reasonably sufficient amount to motivate OSS performance. (Plan Opinion at 82.)

Thirty-six percent of net return from local exchange service in 2001 equals approximately $601 million. The cap applies monthly at one-twelfth of this amount: approximately $50 million. Second, credits are capped at about $16.4 million per month without formal review. We allow Pacific a formal review before requiring incentive amounts between $16.4 and $50 million per month. The credit amounts to individual CLECs are only limited by their billing totals.
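The arithmetic behind these caps, using the net return figures reported in footnote 337, is illustrated below; the dollar figures in the text are rounded.

    # Arithmetic behind the caps described above, using the net return
    # figures reported in footnote 337 (the text states rounded amounts).
    net_return_2000 = 1_527_942_000   # Pacific's 2000 net return from local exchange service
    net_return_2001 = 1_669_771_000   # 2001 figure, about 9.28% higher

    annual_cap = 0.36 * net_return_2001      # ~$601 million
    monthly_cap = annual_cap / 12            # ~$50 million
    review_threshold = 16_400_000            # formal review required above this monthly amount

    print(f"annual cap:        ${annual_cap:,.0f}")
    print(f"monthly cap:       ${monthly_cap:,.0f}")
    print(f"review threshold:  ${review_threshold:,.0f} per month")
    print(f"year-over-year net return growth: "
          f"{net_return_2001 / net_return_2000 - 1:.2%}")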

Our plan is self-executing. Data recording, assessment, and credit generation are automated. Incentive credits are made without further review unless the procedural caps are reached.

Our incentive amounts are scaled to performance in a "curvilinear" fashion. Our plan generates relatively smaller percentages of the cap for smaller failure rates and then accelerates the incentive percentages as performance worsens. That is, rather than requiring ten percent of the cap to be credited for a ten percent failure rate, twenty percent of the cap for a twenty percent failure rate, and so forth, we have targeted a four percent incentive amount for a ten percent failure rate, a sixteen percent amount for a twenty percent failure rate, a thirty percent amount for a twenty-five percent failure rate, up to 100 percent of the cap for a fifty percent failure rate. (Plan Opinion at 46.)
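The adopted schedule appears in Appendix G of the Plan Opinion. As a rough illustration of its accelerating shape only, the following sketch interpolates between the anchor points quoted above; the values between those points are an assumption, not the adopted schedule.

    # Illustrative sketch of the "curvilinear" scaling described above.  The
    # adopted schedule is in Appendix G of the Plan Opinion; here we simply
    # interpolate between the anchor points quoted in the text, so the
    # accelerating shape is visible but the in-between values are assumed.
    failure_rate_pct = [0, 10, 20, 25, 50]    # percent of results failing
    incentive_pct    = [0,  4, 16, 30, 100]   # percent of the cap credited

    def incentive_share(rate):
        """Linear interpolation between the quoted anchor points (assumed)."""
        for i in range(len(failure_rate_pct) - 1):
            r0, r1 = failure_rate_pct[i], failure_rate_pct[i + 1]
            p0, p1 = incentive_pct[i], incentive_pct[i + 1]
            if r0 <= rate <= r1:
                return p0 + (p1 - p0) * (rate - r0) / (r1 - r0)
        return 100.0   # at or above a 50% failure rate, the full cap applies

    for rate in (10, 20, 25, 50):
        print(f"{rate}% failure rate -> {incentive_share(rate):.0f}% of the cap")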

Our PIP was not scaled to absolute amounts; it was scaled to match specific percentages of deficient performance with specific percentages of net return. (Id. at 46 - 48 and App. G.) In the Plan Opinion, we explicitly required Pacific to update the incentive cap after new ARMIS data is posted each April. (Id. at 21 and App. J at 1.) We did not explicitly require that the incentive amounts themselves be updated even though they are based on the cap. ARMIS data shows that Pacific's annual net return increased by 9.28 percent from 2000 to 2001.337 However, Pacific has informally agreed to adjust incentive amounts that are less than the cap. We will make this requirement explicit as well, infra. Pacific shall update these amounts beginning with the May 2002 performance. The caps, the base amount, and the parity simulation payment-reduction amount will be increased by 9.28 percent for the months of May 2002 through April 2003, and will be adjusted with the same timing and method thereafter.

We also recognize that even with perfect performance, residual Type I errors could result in Pacific having to credit significant amounts to the CLECs and ratepayers even though they experienced no actual performance discrimination. To provide some mitigation for this event, we allow Pacific to discount the credit amounts generated by the plan when performance matches or exceeds the levels of the parity simulations we established in D.02-03-023. (Id. at App. J, § 3.9.) The discount is designed to match the amount generated by the plan so that Pacific will not be liable for credits to the CLECs and ratepayers when performance is optimal.

Pacific implemented our performance incentives plan beginning with performance for the month of April 2002. Pacific's "failure rate" for individual CLEC results in Category A was 6.7 percent, and the plan generated incentive amounts totaling $673,390. Pacific credited $532,880 to the CLECs and $140,510 to the ratepayers.338 A more detailed summary of the credits from the first month's implementation is provided in Appendix IV.

Parties have raised concerns that our PIP does not provide sufficiently strong incentives for chronically deficient performance. (Application for Rehearing of Opinion on the Performance Incentives Plan for Pacific Bell Telephone Company, Participating CLECs339 at 13 (April 8, 2002).) The possibility remains that Pacific could treat the incentive credits generated by extended chronic failures as the "cost of doing business." While this issue may have seemed moot insofar as we have constructed a plan only for an initial six-month implementation period, we find it prudent to establish a contingency mechanism to fill any gap that may arise between the end of the six-month period and the adoption of any necessary revisions. In this regard, we find it important to continue the current PIP until it is revised, regardless of the time revision might take. We also find it important to adopt an additional treatment for deficient performance that continues beyond the six-month period.

It is difficult to know with confidence what incentive amounts will actually motivate the desired OSS performance. However, if OSS performance for a particular sub-measure continued to be deficient for longer than six consecutive months,340 it would be reasonably clear that the amounts were too low, and that Pacific may be treating the incentive amounts as the "cost of doing business." To provide stronger incentives when such performance continues past six months, it will be reasonable to increase the incentive amount for any such sub-measure. Not only are such continuing performance "failures" increasingly accurate assessments, but they also represent increasing competitive harm. To provide incentives to prevent such continuous deficient performance, we will automatically increase the payments for months with deficient performance when an "extended chronic failure" continues.341 When an extended chronic failure occurs three or more months in a row, payments for a failure will be doubled from the amount required for an extended chronic failure for that month.342 Every three months thereafter, incentive amounts will be doubled again for continuing extended chronic failures.343 For example, after twelve or fifteen months of continuing extended chronic failures, the incentive credits would be four or eight times the amount required for an extended chronic failure for those months, respectively.344 Additionally, since continuing extended chronic failures would indicate that Pacific is not providing parity OSS performance, Pacific should not be eligible for mitigation under section 3.9 of the PIP. (See Plan Opinion, App. J at 10.)
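As an illustration of this escalation, the following sketch computes the multiplier implied by the text and footnotes 341 through 343, assuming a simple count of consecutive months of deficient performance; the precise month-counting rules, including how intervening "pass" months are treated, are those stated in the footnotes, not this sketch.

    # Hedged sketch of the escalation adopted here for continuing extended
    # chronic failures.  Per the text and footnotes 341-342, doubling begins
    # at the ninth month of deficient performance and doubles again every
    # three months thereafter.  Month counting is simplified here.
    def escalation_multiplier(months_of_continuing_failure):
        """Multiplier applied to the extended-chronic-failure incentive amount."""
        if months_of_continuing_failure < 9:
            return 1                                  # no escalation yet
        doublings = (months_of_continuing_failure - 9) // 3 + 1
        return 2 ** doublings

    for m in (8, 9, 12, 15):
        print(f"month {m}: {escalation_multiplier(m)}x the extended-chronic amount")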

322 Established in D.02-03-023.

323 Bell Atlantic New York Order, 15 FCC Rcd at ¶ 433.

324 On May 24, 2001, we adopted changes to the JPSA in D.01-05-087. The JPSA also established performance measurements and standards for Verizon California, although they are not as complete.

325 We also make several updates or modifications to the PIP, infra.

326 The North and South regions encompass the northern and southern portions of the state other than the LA and San Francisco/Oakland Bay areas. Measures for statewide services, such as billing, interface availability, and network performance, are measured only statewide and not regionally.

327 Three of these thirty-nine measures are not currently operational. Measure 4 (Percentage of Flowthrough Orders) has not been implemented because the parties have not agreed on its definition. Measures 29 (Accuracy of Usage Feed) and 36 (Accuracy of Mechanized Bill Feed) depend on data that the CLECs are not currently submitting.

328 While there are over 1,500 possible sub-measures, many are not utilized due to CLEC inactivity or definitions yet "to be determined."

329 One exception is that there is no "count" measure for parity comparisons.

330 These descriptions are simplified for the purpose of illustrating the measures and do not necessarily document actual measure specifications.

331 See the earlier section discussing the PricewaterhouseCoopers audit.

332 Readers unfamiliar with statistics, or those who prefer a more detailed description of these tests, should refer to these decision pages.

333 See Id. at 113 - 116 and App. J for a detailed discussion of log transformations.

334 Calculation of beta levels assumes a certain level of deficient performance for statistical detection. We refer the reader to sections on critical alphas and beta levels in D.01-01-037 and D.02-03-023 for the assumptions behind our findings regarding beta levels.

335 Small sample adjustment tables are not used when the aggregate result fails.

336 Additionally, as discussed in this decision infra, we add a third consecutive failure definition, "continuing extended chronic failure."

337 Pacific's net return from local exchange service in California was $1,527,942,000 in 2000, and $1,669,771,000 in 2001. See http://www.fcc.gov/wcb/armis/db/ and the Incentives Opinion, App. C.

338 Preliminary figures indicate lesser rates and amounts for May performance, likely because April performance included a conversion to a new OSS system, causing performance decrements, which were resolved by May.

339 AT&T, New Edge Networks, PacWest, WorldCom, and XO.

340 Or continues to be an "extended chronic failure," which is identified as five "failures" in any six consecutive months, with the higher incentive amounts continuing in months that "fail" until two consecutive months "pass."

341 We will apply this feature to both Tier I and Tier II assessments, even though currently there is no "extended chronic failure" assessment for Tier II. This feature will be applied beginning at the ninth month "as if" Tier II had "extended chronic failure" assessments.

342 For this to occur, performance would have to have been identified as failing eight or nine months in a nine-month period.

343 An extended chronic failure continues until two consecutive months "pass," even though no incentive credits are generated for the single months where performance is not identified as failing.

344 The probability of a Type I error, or net critical alpha, decreases as the test requires failures in more consecutive months. For example, with a single-month 0.20 critical alpha, under parity conditions, failing five or more times out of six consecutive months has a probability of 0.0016; failing eight or more times out of nine consecutive months has a probability of 0.000019; failing ten or more times out of twelve consecutive months has a probability of 0.0000045; and failing twelve or more times out of fifteen consecutive months has a probability of 0.000001.
