V. CPUC Performance Incentives Plan

Beginning with April 2002 performance, Pacific implemented the OSS performance monitoring and enforcement mechanisms contained in the Commission's performance incentives plan ("PIP").334 To ensure that an ILEC's application for Section 271 approval is in the public interest, the FCC has listed five important characteristics for a performance incentives plan.335 The CPUC's performance incentives plan has these characteristics:

· potential liability that provides a meaningful and significant incentive to comply with the designated performance standards;

· clearly-articulated, pre-determined measures and standards, which encompass a comprehensive range of carrier-to-carrier performance;

· a reasonable structure that is designed to detect and sanction poor performance when it occurs;

· a self-executing mechanism that does not leave the door open unreasonably to litigation and appeal; and

· reasonable assurances that the reported data is accurate.

The PIP was developed in three parts: performance measurement and standards, performance assessment, and performance incentives. On August 5, 1999, in D.99-08-020, we adopted the parties' Joint Partial Settlement Agreement (JPSA), which established performance measurements and standards for Pacific.336 We established the basic performance assessment methods, including statistical tests and criteria, in D.01-01-037 on January 18, 2001. We completed a performance incentives plan for Pacific on March 6, 2002, in D.02-03-023, by establishing the monetary incentive amounts to be generated by deficient performance. We refer the reader to these decisions for details of the performance incentives plan.337 Some highlights of the plan follow.

A. Performance Measurement and Standards

Extensive collaboration between the parties in this proceeding resulted in a set of forty-four OSS performance measures that cover a wide range of OSS performance. These measures track performance in nine areas: pre-ordering, ordering, provisioning, maintenance, network performance, billing, database updates, collocation, and interfaces. Appendix III lists the measures in each area. Where applicable, these measures are broken down into sub-measures. Sub-measures were constructed to track performance separately for different service types, for different regions, and for other service distinctions such as the necessity for fieldwork or line conditioning. For example, provisioning time is tracked with separate sub-measures for different service types such as Resale Business POTS, Resale Residential POTS, Resale Centrex, Resale PBX, Resale DS1, UNE loop 8db weighted 2/4 wire analog basic/coin, UNE Loop 2 wire Digital xDSL capable, and many others. For many measures, complete sets of sub-measures track performance separately for four regions: LA, Bay Area, North, and South.338 Two examples of sub-measures combining service type, regional, and other service distinctions are Bay Area Resale Business POTS No Field Work and Bay Area Resale Business POTS Field Work.

Not all measures or sub-measures are included in the PIP. By design, some measures were excluded by the parties' agreement because they were either duplicative of other measures or currently used only for diagnostic purposes. Of the forty-four measures, thirty-nine are used in the PIP.339 In April 2002, with 126 active CLECs, 592 sub-measures produced testable data, resulting in 5,867 CLEC-specific performance results.340

The performance measures consist of two basic types: "parity" and "benchmark" measures. Parity measures are used where Pacific provides the same service to its own customers, termed a "retail analog," so that Pacific's performance for CLEC customers can be compared to its performance for its own customers. Absolute "benchmark" measures are used where there is no retail analog. For example, where a retail analog exists, a parity standard might compare the average time to provision a new service for CLEC customers to the average time for the same activity for Pacific's customers. In contrast, where there is no retail analog, a benchmark might require that ninety-five percent of new CLEC installations be completed within five days.

Performance is measured in five ways for parity and benchmark measures: averages, percentages, rates, indexes, and counts.341 The following examples illustrate these measures.342 An average measure compares the average service installation time for CLEC customers either to the average installation time for Pacific's customers (parity) or to a specific average (benchmark). A percentage measure compares the percentage of due dates missed for CLEC customers either to the percentage of due dates missed for Pacific's customers (parity) or to a specific percentage (benchmark). A rate measure compares CLEC customer trouble report rates either to Pacific's customer trouble report rates (parity) or to a specific rate (benchmark). An index measure compares the percentage of time an interface is available to the CLECs either to an index of the time it is available to Pacific (parity), or to a specific percentage (benchmark). The index measure differs from percentage measures in the way it is assessed, as discussed infra. A count measure allows a certain number of events, such as no more than one repeat trouble in a 30-day period.

As discussed supra,343 the processes used to generate the reported performance data have been audited and determined to be consistent with the rules that define and make the measures operational. Additionally, aided by an external consultant, staff conducted an accuracy check of the data and found problems, which were corrected. (Initial Report on OSS Performance Results Replication and Assessment ("Replication Report"), Telecommunications Division, June 15, 2001.)

B. Performance Assessment

While the parties agreed on many issues, they were unable to agree on a complete set of performance assessment methods and criteria. To resolve the disputes that remained, we constructed the final assessment method and established the test criteria. We briefly describe the assessments we established.

Different measures require different tests to identify deficient performance, or "failures." Statistical tests are applied to parity measures to distinguish differences likely caused by random variation from differences likely caused by poor performance to CLEC customers. (Interim Opinion on Performance Incentives, D.01-01-037 at 58 - 129 (January 18, 2001).)344 For average-based measures, a t-test is applied to log-transformed scores. Log transformations are used for time-measure data since the distribution of raw scores is skewed, as is typical for time-to-complete-task data. The transformations bring the data closer to the normal curve distribution that is assumed for the t-test.345 For percentage-based parity measures, a Fisher's Exact Test is used on the original non-transformed data. For rate-based measures, a binomial exact test is used, also on the original non-transformed data.
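For illustration, the percentage- and rate-based exact tests described above can be sketched in Python. The sample figures below are hypothetical, and the sketch simplifies the adopted procedures (it omits, for example, the log-transformed t-test for average-based measures):

```python
from math import comb

def fisher_exact_one_sided(ilec_misses, ilec_n, clec_misses, clec_n):
    # Probability of at least `clec_misses` misses landing in the CLEC
    # sample if misses were spread at random across both samples (parity).
    total, misses = ilec_n + clec_n, ilec_misses + clec_misses
    return sum(comb(misses, k) * comb(total - misses, clec_n - k)
               for k in range(clec_misses, min(clec_n, misses) + 1)) / comb(total, clec_n)

def binomial_exact(clec_troubles, clec_lines, ilec_rate):
    # Probability of at least `clec_troubles` trouble reports if CLEC
    # lines generated troubles at Pacific's own (retail) trouble rate.
    return sum(comb(clec_lines, k) * ilec_rate**k * (1 - ilec_rate)**(clec_lines - k)
               for k in range(clec_troubles, clec_lines + 1))

# Hypothetical month: Pacific missed 0 of 5 retail due dates but 3 of 5
# CLEC due dates; p = 0.0833, a "failure" at the 0.10 default critical alpha.
print(round(fisher_exact_one_sided(0, 5, 3, 5), 4))  # 0.0833

# Hypothetical rates: 4 troubles on 100 CLEC lines against a 1% retail
# trouble rate; p is about 0.018, also a failure at the 0.10 level.
print(round(binomial_exact(4, 100, 0.01), 3))
```

Under parity conditions, both p-values would tend to be large; a result below the applicable critical alpha is counted as a "failure."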

Different critical alpha levels are used in an attempt to control the Type I error probabilities without allowing excessive Type II error, or beta, levels. Id. at 83-98. We selected a default critical alpha level of 0.10 because we found that, at the conventional 0.05 alpha level, the corresponding beta levels were considerably greater. Id. at 92-93. While a 0.10 alpha level still does not balance the two types of error, it reduces the imbalance.346 For assessments of performance in consecutive months, we selected a 0.20 alpha level because the requirement of consecutive "failures" greatly reduces the net alpha level. (Opinion on the Performance Incentives Plan for Pacific Bell Telephone Company, D.02-03-023 at 39-41, 51-52 (March 6, 2002) ("Plan Opinion").) We also selected a 0.20 alpha level for individual CLEC small-sample tests on sub-measures where the CLEC industry aggregate failed, because the likelihood of a Type II error increases with small samples and where information suggests that the overall process is not in parity. (Incentives Opinion at 66, Plan Opinion at 39-41.) In a complementary fashion, we selected a 0.05 alpha level for the largest samples, and for moderately large samples where the CLEC industry aggregate "passed."

Benchmark assessments are simple comparisons without statistical tests. For the larger samples, performance to CLEC customers is compared to the specific standard as established in the JPSA. For the smaller samples, a "small sample adjustment table" is used to account for the fact that even when CLEC customers as a whole receive performance easily meeting the benchmark, small samples can fail the benchmark. For example, if twenty CLECs each placed one order, and only one of those twenty orders was not completed within the specified time, 95% of the orders would have been completed within the allowed time. With a benchmark of 90%, overall performance would easily pass. However, at the individual CLEC level, nineteen CLECs would pass and one would fail. The failure is inevitable in this case since, with only one order, the CLEC whose order was not completed within the allowed time would have a zero-percent result. Small sample adjustment tables account for this problem by allowing a few more "misses" than the benchmark itself allows.347
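The arithmetic of this example can be checked directly (a toy illustration of why small samples need the adjustment, not the adopted adjustment tables themselves):

```python
# 20 CLECs, one order each; exactly one order missed its due date.
per_clec_results = [1.0] * 19 + [0.0]   # fraction of orders on time, per CLEC
benchmark = 0.90                        # 90% on-time benchmark

# Aggregate performance across all CLEC orders easily meets the benchmark.
aggregate = sum(per_clec_results) / len(per_clec_results)
print(aggregate)                        # 0.95

# But one single-order CLEC inevitably "fails" with a zero-percent result.
failures = sum(r < benchmark for r in per_clec_results)
print(failures)                         # 1
```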

Index and count measures are similar to parity and benchmark measures except that they have neither statistical tests nor small sample adjustment tables.

We established two "consecutive failure" definitions. First, if a sub-measure "fails" three months in a row, it is termed a "chronic failure." Second, if a sub-measure fails five or six out of six months it is termed an "extended chronic failure."348
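Under one straightforward reading of these definitions, the two tests can be sketched as follows (illustrative only; the monthly "pass/fail" determinations themselves come from the statistical and benchmark assessments):

```python
def is_chronic(history):
    # Chronic failure: the three most recent months all failed.
    # `history` is a list of monthly results, True = failure.
    return len(history) >= 3 and all(history[-3:])

def is_extended_chronic(history):
    # Extended chronic failure: five or six failures in some window
    # of six consecutive months.
    return any(sum(history[i:i + 6]) >= 5
               for i in range(len(history) - 5))

months = [True, True, True, False, True, True]   # F F F p F F
print(is_chronic(months[:3]))        # True after three straight failures
print(is_extended_chronic(months))   # True: five failures in six months
```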

C. Performance Incentives

Instead of requiring outright payments to CLECs or to the state general fund, as many other states have done, our plan provides incentives in the form of billing credits to CLECs and ratepayers. Monetary amounts generated by deficient performance to individual CLECs become billing credits to those CLECs (Tier I). Amounts generated by deficient performance to the CLEC industry as a whole become billing credits to the ratepayers (Tier II). If the amount to be credited to a CLEC exceeds the CLEC's billing, the excess amount is credited to the ratepayers.

We have established limits, or caps, to the credits that Pacific must issue. First, the overall annual cap equals thirty-six percent of Pacific's annual net return from local exchange service in California. The FCC has approved several other states' performance incentive plans with this same percentage of net return liability, viewing it as a reasonably sufficient amount to motivate OSS performance. (Plan Opinion at 82.)

Thirty-six percent of net return from local exchange service in 2001 equals approximately $601 million. The cap applies monthly at one-twelfth of this amount: approximately $50 million. Second, credits are capped at about $16.4 million per month without formal review. We allow Pacific a formal review before requiring incentive amounts between $16.4 and $50 million per month. The credit amounts to individual CLECs are limited only by their billing totals.
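The cap arithmetic works out as follows (using the 2001 net return reported in footnote 349, read as $1,669,771,000, consistent with the cited 9.28 percent increase over 2000):

```python
net_return_2001 = 1_669_771_000          # ARMIS net return, local exchange, CA
annual_cap = 0.36 * net_return_2001      # overall annual cap: 36% of net return
monthly_cap = annual_cap / 12            # monthly application of the cap
review_threshold = 16_400_000            # credits above this trigger formal review

print(round(annual_cap / 1e6))           # 601 (million dollars)
print(round(monthly_cap / 1e6))          # 50 (million dollars)
```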

Our plan is self-executing. Data recording, assessment, and credit generation are automated. Incentive credits are made without further review, unless the procedural caps are reached.

Our incentive amounts are scaled to performance in a "curvilinear" fashion. Our plan generates relatively smaller percentages of the cap for smaller failure rates and then accelerates the incentive percentages as performance worsens. That is, rather than requiring ten percent of the cap to be credited for a ten percent failure rate, twenty percent of the cap for a twenty percent failure rate, and so forth, we have targeted a four percent incentive amount for a ten percent failure rate, a sixteen percent amount for a twenty percent failure rate, a thirty percent amount for a twenty-five percent failure rate, up to 100 percent of the cap for a fifty percent failure rate. (Plan Opinion at 46.)
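Interpolating between the target points cited above illustrates the accelerating shape (a sketch only; the adopted schedule and its exact formula are set out in Plan Opinion App. G):

```python
# Target points cited in the decision: (failure rate %, % of cap generated).
targets = [(0, 0), (10, 4), (20, 16), (25, 30), (50, 100)]

def pct_of_cap(failure_rate):
    # Linear interpolation between the cited target points -- an
    # illustration of the curvilinear scaling, not the adopted formula.
    for (x0, y0), (x1, y1) in zip(targets, targets[1:]):
        if x0 <= failure_rate <= x1:
            return y0 + (y1 - y0) * (failure_rate - x0) / (x1 - x0)
    return 100.0    # at or beyond a 50% failure rate, the full cap applies

print(pct_of_cap(10))   # 4.0
print(pct_of_cap(25))   # 30.0
```

Note that the incentive percentage grows faster than the failure rate: doubling the failure rate from ten to twenty percent quadruples the targeted share of the cap.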

Our PIP was not scaled to absolute amounts; it was scaled to match specific percentages of deficient performance with specific percentages of net return. (Id. at 46 - 48 and App. G.) In the Plan Opinion, we explicitly required Pacific to update the incentive cap after new ARMIS data is posted each April. (Id. at 21 and App. J at 1.) We did not explicitly require that the incentive amounts themselves be updated, even though they are based on the cap. ARMIS data show that Pacific's annual net return increased by 9.28 percent from 2000 to 2001.349 However, Pacific has informally agreed to adjust incentive amounts that are less than the cap. We will make this requirement explicit as well, infra. Pacific shall update these amounts beginning with the May 2002 performance. The caps, the base amount, and the parity simulation payment-reduction amount will be increased by 9.28 percent for the months of May 2002 through April 2003, and will be adjusted with the same timing and method thereafter.

We also recognize that even with perfect performance, residual Type I errors could result in Pacific having to credit significant amounts to the CLECs and ratepayers even though they experienced no actual performance discrimination. To provide some mitigation for this event, we allow Pacific to discount the credit amounts generated by the plan when performance reaches levels matching or exceeding the parity simulations we established in D.02-03-023. (Id. at App. J, § 3.9.) The discount is designed to match the amount generated by the plan so that Pacific will not be liable for credits to the CLECs and ratepayers when performance is optimal.

Pacific implemented our performance incentives plan beginning with performance for the month of April 2002. Pacific's "failure rate" for individual CLEC results in Category A was 6.7 percent, and the plan generated incentive amounts totaling $673,390. Pacific credited $532,880 to the CLECs and $140,510 to the ratepayers.350 A more detailed summary of the credits from the first month's implementation is provided in Appendix IV.

Parties have raised concerns that our PIP does not provide sufficiently strong incentives for chronically deficient performance. (Application for Rehearing of Opinion on the Performance Incentives Plan for Pacific Bell Telephone Company ("CLEC Appl. Rehear."), Participating CLECs351 at 13 (April 8, 2002).) The possibility remains that Pacific could treat the incentive credits generated by extended chronic failures as the "cost of doing business." While this issue may seem moot insofar as we have constructed a plan only for an initial six-month implementation period, we find it prudent to establish a contingency mechanism to fill any gap that may arise between the end of the six-month period and the adoption of any necessary revisions. In this regard, we find it important to continue the current PIP until it is revised, regardless of the time revision might take. We also find it important to add a further treatment for deficient performance that may continue beyond the six-month period.

It is difficult to know with confidence what incentive amounts will actually motivate the desired OSS performance. However, if OSS performance for a particular sub-measure continued to be deficient for longer than six consecutive months,352 it would be reasonably clear that the amounts were too low, and that Pacific may be treating the incentive amounts as the "cost of doing business." To provide stronger incentives when such performance continues past six months, it is reasonable to increase the incentive amount for any such sub-measure. Not only are such continuing performance "failures" increasingly accurate assessments, but they also represent increasing competitive harm. To prevent such continuous deficient performance, we will automatically increase the payments for months with deficient performance when an "extended chronic failure" continues.353 When an extended chronic failure continues354 three or more months in a row after it is initially established,355 payments for a failure will be doubled from the amount required for an extended chronic failure for that month.356 Every three months thereafter, incentive amounts will be doubled again for continuing extended chronic failures.357 For example, after twelve or fifteen months of continuing extended chronic failures, the incentive credits would be four or eight times the amount required for an extended chronic failure for those months, respectively.358 Additionally, since continuing extended chronic failures would indicate that Pacific is not providing parity OSS performance, Pacific should not be eligible for mitigation under section 3.9 of the PIP. (See Plan Opinion at App. J at 10.)
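The doubling schedule described above implies the following multiplier, counting consecutive months since the failure pattern began (a sketch of our reading; the ninth month is the first eligible for an increase, see fn. 356):

```python
def incentive_multiplier(month):
    # Multiplier applied to a continuing extended chronic failure, where
    # `month` counts consecutive months since the failure pattern began
    # (the extended chronic failure is first established at month six).
    # Illustrative sketch only, not the plan's official text.
    if month < 9:                # ninth month is the first eligible (fn. 356)
        return 1
    return 2 ** ((month - 6) // 3)   # doubles again every three months

print([incentive_multiplier(m) for m in (9, 12, 15)])   # [2, 4, 8]
```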

In its comments to the draft decision, Pacific opposes this "continuing extended chronic failure" ("continuing failure") feature. Explaining its opposition, Pacific asserts: (1) that the new feature "upsets the careful balance" that was struck by the Commission in D.02-03-023, (2) that it is not appropriate to add this feature in this proceeding, (3) that the new feature is being added "without due process," and (4) that the record does not support our conclusions regarding continuing performance failures. SBC Pacific Bell Telephone Company's (U 1001 C) Opening Comments ("Pacific Comm."), August 12, 2002, at 7-9.359

However, this new feature targets a different problem from the balance we struck for the performance incentives plan in D.02-03-023. That balance consisted of a targeted "curvilinear" scaling of monthly total incentive amounts to overall performance across all sub-measures, for each CLEC separately and for all CLECs combined. The "continuing extended chronic failure" feature addresses a different problem, one raised numerous times in the performance incentives proceeding.360 If Pacific provided excellent service for a high percentage of sub-measures but continuously failed particular sub-measures, incentive amounts for the failed sub-measures would be low and relatively unlikely to motivate corrective action.361 The continuing failure feature corrects this deficiency and ensures that any continuously poor performance on an individual sub-measure generates increasing incentives until it is corrected. The primary goal of the performance incentives plan is not to maintain a balance established without information about what Pacific will do, but to motivate good OSS performance for each sub-measure.362 The new feature fills a potential gap in the plan by addressing problems at the individual sub-measure level. The balance that Pacific cites is preserved unless Pacific chooses not to correct continuously identified failing performance. Given such a choice, it would be evident that the balance did not meet the goals of the plan and should be altered.

Adding this feature in this proceeding is appropriate. The performance incentives plan is the primary tool to ensure the public interest after Pacific gains 271 approval.363 Without a comprehensive plan that adequately addresses future possibilities, the public interest would be in doubt. Adding this feature now, before the FCC reviews Pacific's application, will enable the FCC to make a more complete review and should give it greater confidence that future performance is comprehensively assured.

Pacific has been afforded due process. All parties have had the opportunity to review and comment on the new plan feature. Additionally, the issues behind the feature have been raised, reviewed, and commented on during the collaborative sessions and in previous reports, comments, and decisions.364

The record supports the addition of the continuous failure feature. Because we cannot know the future, we cannot cite evidence here of future performance or of the future effects of our plan on Pacific's performance.365 We have added this feature so that if future performance provides such evidence, the plan will automatically increase the amounts. These increased incentive amounts will not be imposed without new evidence that Pacific is not correcting performance failures. Nine months of virtually continuous failures will be sufficient evidence that an incentive amount is too low to motivate performance corrections.

Pacific asserts that because performance can statistically fail with "slight" differences, no incentive amount increases should be imposed, and consequently the record does not support adding the continuing failure feature. In arguing its position, Pacific asserts that such "slight" but statistically significant differences can occur even though performance to both Pacific's customers and CLEC customers was "excellent." Pacific Comm. at 8-9. This problem is most likely to occur for large samples. Interim Opinion at 95. We have deliberated this issue at length in resolving similar problems. For example, addressing large sample problems would likely exacerbate small sample problems. The record contains considerable discussion of the problem where reducing one error type increases another.366 Declining to increase incentive amounts because we might thereby penalize Pacific's statistically failing but "good" performance would likely also mean no increased penalties for persistently bad performance. In establishing our plan, we cannot go forward treating identified failures as if they are not failures. Plan Opinion at 73, fn. 55. During the first review we may discover a better way to strike this balance between erroneously treating good performance as bad and bad performance as good. In the interim, however, it is important to address continuing failures. Our interim approach is a cautious one, with little chance of Type I error.367

Fine adjustments to the continuing failure feature are difficult. We have previously considered assessing "material differences," a potential solution for "slight" but statistically significant differences. Pacific Comm. at 9; Interim Opinion at 94-95, 98-100. The parties suggested ways to implement a "material difference" feature that would automatically account for identified performance differences that were not of sufficient magnitude to be harmful. See Interim Opinion at 94-95, 98-100. However, none of these suggestions led to complete proposals, although we encouraged parties to develop "material difference" proposals for possible adoption at the end of the initial implementation period. Id. The fact that we cannot at this time impose a perfect remedy for continuously failing performance should not cause us to neglect the potential problem.

Pacific is concerned that the new feature will increase incentives for sub-measures where performance parity is beyond its control or where performance differences are too small to be harmful. See Pacific Comm. at 9. Where performance sub-measures fail continuously because of circumstances beyond Pacific's control, the measures should be amended. Plan Opinion at 73, fn. 55. This realization is likely what motivated Pacific to file a motion requesting the performance measure changes we subsequently adopted in D.02-06-046.368 Where statistically significant performance differences are "slight" or "transparent" and caused by different workloads for Pacific's tasks versus CLEC tasks, Pacific should work with the parties and the Commission to amend the measures.369 To "fix" these problems instead by ignoring all continuously failing sub-measures would be neither appropriate nor in the public interest.

For all the above reasons we are not persuaded by Pacific's assertions, and we adopt the continuous failure provisions. However, we note that there may be time to "fine tune" or adjust these provisions before they can increase incentive amounts for any future continuous failures. We ask the parties to examine these provisions in the six-month review and craft better provisions if possible. At the same time we ask the parties to correct performance measurement problems that may lead to continuous failures not reflective of Pacific's performance. We understand that parties are currently examining and negotiating performance measure modifications in anticipation of completing work for the periodic review of those measures.370

334 Established in D.02-03-023.

335 Bell Atlantic New York Order, 15 FCC Rcd at ¶ 433.

336 On May 24, 2001, we adopted changes to the JPSA in D.01-05-087. Performance measurements and standards for Verizon California, although not as complete, were also established in the JPSA.

337 We also make several updates or modifications to the PIP, infra.

338 North and South regions encompass the North and South portions of the state except for the LA and San Francisco/Oakland Bay areas. Measures for statewide services, such as billing, interface availability, and network performance are only measured statewide and not regionally.

339 Three of these thirty-nine measures are not currently operational in the PIP. Measure 4 (Percentage of Flowthrough Orders) produces performance data, but has not been implemented in the PIP because parties have not agreed on the respective standards for this measure. Measures 29 (Accuracy of Usage Feed) and 36 (Accuracy of Mechanized Bill Feed) depend on data from the CLECs that the CLECs currently are not submitting.

340 While there are over 1,500 possible sub-measures, many are not utilized in the PIP due to CLEC inactivity, and some are not utilized because the respective standards have yet "to be determined."

341 One exception is that there is no "count" measure for parity comparisons.

342 These descriptions are simplified for the purpose of illustrating the measures and do not necessarily document actual measure specifications.

343 See the earlier section discussing the PricewaterhouseCoopers audit.

344 Readers unfamiliar with statistics, or those who prefer a more detailed description of these tests, should refer to these decision pages.

345 See Id. at 113 - 116 and App. J for a detailed discussion of log transformations.

346 Calculation of beta levels assumes a certain level of deficient performance for statistical detection. We refer the reader to sections on critical alphas and beta levels in D.01-01-037 and D.02-03-023 for the assumptions behind our findings regarding beta levels.

347 Small sample adjustment tables are not used when the aggregate result fails.

348 Additionally, as discussed in this decision infra, we add a third consecutive failure definition, "continuing extended chronic failure."

349 Pacific's net return from local exchange service in California was $1,527,942,000 in 2000, and $1,669,771,000 in 2001. See http://www.fcc.gov/wcb/armis/db/ and the Incentives Opinion, App. C.

350 Preliminary figures indicate lesser rates and amounts for May performance, likely because April performance included a conversion to a new OSS system, causing performance decrements, which were resolved by May.

351 AT&T, New Edge Networks, PacWest, WorldCom, and XO.

352 Or continues to be an "extended chronic failure," which is identified as five "failures" in any six consecutive months, with the higher incentive amounts continuing in months that "fail" until two consecutive months "pass."

353 We will apply this feature to both Tier I and Tier II assessments, even though currently there is no "extended chronic failure" assessment for Tier II. This feature will be applied beginning at the ninth month "as if" Tier II had "extended chronic failure" assessments.

354 In the draft opinion, we used the word "occurs" in the text here instead of "continues" as we used in the ordering paragraph. Pacific interpreted this feature to require three consecutive failures in the seventh, eighth, and ninth months. Pacific Comm. at 7. We have changed the text here to be consistent with the ordering paragraph and our intention to only require an extended chronic failure condition to be continuing, and not to require a failure each month.

355 To better clarify in which month increases are first required, we have added this clause to the draft decision.

356 For this to occur, performance would have to have been identified as failing eight or nine months in a nine-month period. The ninth month is the first month eligible for these increased incentive amounts.

357 In our review of the parties' comments regarding this feature, we recognized that after an initial doubling of incentives, the feature would allow a doubling of incentives even though performance was only failing every other month. This was not our intention for this feature. We have clarified our intention and the respective definitions as follows. An extended chronic failure continues until two consecutive months "pass." The "continuing extended chronic failure" continues until the most recent six months, viewed alone, would not be identified as an "extended chronic failure." No incentive credits are generated for the single months where performance is not identified as failing. The following ten-month examples provide clarification with "F" representing a monthly failure, "p" representing a pass, and "CECF" representing a "continuous extended chronic failure":

358 The probability of a Type I error, or net critical alpha, decreases as the test requires failures in more consecutive months. For example, with a single-month 0.20 critical alpha, under parity conditions, failing five or more times out of six consecutive months has a probability of 0.0016; failing eight or more times out of nine consecutive months has a probability of 0.000019; failing ten or more times out of twelve consecutive months has a probability of 0.0000045; and failing twelve or more times out of fifteen consecutive months has a probability of 0.000001.
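These tail probabilities follow from the binomial distribution and can be reproduced exactly (assuming independent months, each with a 0.20 probability of a "failure" under parity):

```python
from math import comb

def tail_prob(n, k, p=0.20):
    # P(k or more single-month "failures" in n months under parity,
    # with a 0.20 per-month critical alpha).
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Values match those cited: 0.0016, 0.000019, 0.0000045, 0.000001.
for n, k in [(6, 5), (9, 8), (12, 10), (15, 12)]:
    print(n, k, round(tail_prob(n, k), 7))
```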

359 AT&T and XO filed reply comments supporting the addition of this feature. Reply Comments of AT&T Communications of California, Inc. (U 5002 C) on the Draft Decision Granting Pacific Bell Telephone Company's Renewed Motion for an Order that It Has Substantially Satisfied the Requirements of the 14-Point Checklist in Section 271 of the Telecommunications Act of 1996 and Denying that It Has Satisfied Section 709.2 of the Public Utilities Code, August 19, 2002, at 2-3; Reply Comments of XO California, Inc. (U 5553 C) on Draft Decision Granting Pacific Bell's Renewed Motion for an Order that It Has Satisfied §271 of the Telecommunications Act and Denying that It Has Satisfied §709.2 of the California Public Utilities Code, August 19, 2002, at 5.

360 For example, see Assigned Commissioner's Ruling on Performance Incentives, (R.97-10-016/I.97-10-017), November 22, 1999, at 12-13, 15-16; Replication Report at 7-9; Interim Opinion at 97; Administrative Law Judges' Ruling, (R.97-10-016/I.97-10-017), March 2, 2001, Workpaper #17, item 8, Workpaper #18, item 3d, Workpaper #24 at 8, Workpaper #21 at 4, Workpaper #27 at 4, Workpaper #28 at 14; Comments of the Participating Competitive Local Exchange Carriers Regarding Performance Remedies Plans, (R.97-10-016/I.97-10-017), May 18, 2001, at 4, 9; Plan Opinion at 32-35, 37; CLEC Appl. for Rehear. at 5-9, 13.

361 Incentive amounts are increased by a multiplier derived from overall performance. The multiplier is the overall percentage, in percentage points, of identified failures for either individual CLECs (Tier I) or all CLECs (Tier II). Plan Opinion at 47-48 and App. J, Sections 1.5, 3.6.13, 3.7.3, and 3.8.5. When Pacific's overall failure percentage is low, even persistently bad performance will have a relatively small multiplier, and thus relatively low incentive amounts. See also Plan Opinion at 32-33.

362 By "good" performance, we refer to the FCC's guidelines. Bell Atlantic New York Order at ¶ 44. See also, Id. at App. B, ¶¶ 2-4, 13-17. While the FCC recognized that before making a separate determination of discrimination it would consider more than just the statistical test result ("the totality of the evidence," fn. 45), such analysis is not possible in a self-executing plan, which must rely on these tests. Any additional evidence must be automated into the plan along with corresponding test criteria. Neither the cited New York plan nor our plan has the automated evidence or criteria features that Pacific desires - features that identify "slight" and "likely transparent" statistically significant differences. See Pacific Comm. at 9; Bell Atlantic New York Order, App. B; and Incentive Opinion at 100.

363 See Bell Atlantic New York Order at ¶¶ 429, 433.

364 For example, see text and cites in Assigned Commissioner's Ruling on Performance Incentives, (R.97-10-016/I.97-10-017), November 22, 1999, at 12; Interim Opinion at 59-69, 83-102, 117-122, and App. K at 1-10; Replication Report at 7-9; Plan Opinion at 28-38, 50-54.

365 However, we note that recent evidence indicates that although Pacific's overall single-month performance is approaching the levels predicted in the parity simulations, the longer-term repeated-failure levels are much higher than would be expected by these same simulations. In November 2001, the actual extended chronic failure rates (Cat. A, 0.0108) were over twenty times the corresponding simulated parity rates (0.0005). Plan Opinion at 37. These high rates occurred even as Pacific anticipated the imminent adoption of our performance incentives plan. The parties' final data runs, proposals, and comments for a plan were completed in June 2001 (Plan Opinion at 7), the plan's draft decision was mailed in November 2001 (Draft Decision of ALJ Reed, R.97-10-016/I.97-10-017, November 21, 2001), the plan was adopted in March 2002 (Plan Opinion at 98), and was implemented for April 2002 performance (Id.; Administrative Law Judge's Ruling to Facilitate Implementation of the Operations and Support System Performance Incentives Plan for Pacific Bell Company, R.97-10-016/I.97-10-017, April 29, 2002).

366 For example, see the text and cites in Assigned Commissioner's Ruling on Performance Incentives, (R.97-10-016/I.97-10-017), November 22, 1999, at 12; Interim Opinion at 84-85, 90, 95-96, 119-121, App. K at 1; and the Plan Opinion at 28, 39-41.

367 Type I error would disadvantage Pacific. The probability of a Type I error is very low for nine-month continuing-failure identifications (less than one chance in 10,000), and considerably lower for longer continuing-failure identifications. See infra. In contrast, the chance of statistical error that disadvantages the CLECs, Type II error, is very high for this feature, and may approach certainty. See Plan Opinion at 51-52.

368 Opinion Modifying Decision 01-05-087 to Update Performance Measures for the Performance Incentive Plan for Pacific Bell Telephone Company, D.02-06-046, June 27, 2002. Performance differences arose because provisioning a UNE loop for a Pacific customer typically does not require scheduling and activating local number portability (LNP), whereas provisioning for a CLEC customer does require the LNP work. Because an independent third-party organization takes an industry-standard minimum of three days to activate LNP, and Pacific's provisioning for its own customers takes only about two days, the sub-measures always failed. D.02-06-046 changed the parity standard to a benchmark standard to better reflect Pacific's actual performance. Id. at 3-4.

369 Pacific states that compared to Pacific's queries, CLEC pre-ordering loop qualification queries are more often multiple-line queries and consequently take more time. Pacific Comm. at 9. While Pacific's assertions that the differences are "slight" and "likely transparent" and represent "excellent performance" seem plausible and reasonable, we make no findings regarding these assertions. For example, the approximately thirty percent increase in query time (estimated from Pacific's figures, Id.) may add to other delays that are perceptible in total, or may impose extra costs on the CLECs, and thus present an impediment to competition. These issues should be discussed and resolved in the performance measures review.

370 E-mail from Gwen Johnson of Pacific Bell to the parties in R.97-10-016/I.97-10-017, "SBC/Pacific Bell's proposed changes to the CA PM JPSA" dated June 14, 2002; e-mail from Evelyn Lee of WorldCom to the parties in R.97-10-016/I.97-10-017, "OSS OII: Rule 51.1 Notice re Formal JPSA Review" dated June 25, 2002; e-mail from Diane Ottinger of Verizon to the parties in R.97-10-016/I.97-10-017, "CA071502 Verizon JPSA," dated July 15, 2002.
