March 2000 Workshop

In their reply, the CLECs recommended that the Commission hold a workshop on the new Pacific "white paper" proposal. ORA recommended that the Commission convene workshops to review all the various proposals. In all, the comments raised several issues requiring further discussion. To respond to the recommendations and address the unresolved issues, the assigned ALJ and staff facilitated a three-day workshop on March 27, March 28 and March 30, 2000. The workshop was divided into three daylong segments devoted to exploring the respective new Pacific and ORA plans, and further refining the components of a hybrid model.

The three segments of the workshop focused exclusively on the performance assessment component of the overall performance remedies plan (which comprises performance measurement, performance assessment, and incentive payment components). The sessions did not include any substantive discussion of the performance measurement and incentive payment components of the remedies plan.39

For the purposes of the workshop sessions, the parties were to assume as given all prior work on performance measurements and benchmarks (on the separate parallel track pursuant to Commission Decision 99-08-020), including any current constraints. Parties were also to assume that any emergent performance measurement plan would use the performance measurements and benchmarks resulting from the concurrent performance measurement phase of the proceeding. Finally, the parties were asked to delay incentive payment data modeling until the Commission selected the performance assessment model, or models.

The goal of the workshop was to develop more fully the three distinct performance measurement plans. These three plans were (1) the Pacific plan, (2) the ORA plan, and (3) a hybrid plan. All workshop participants were to assume on each specific plan's day that they were advocates for that particular plan and that all participants would be jointly developing the "best" possible model for that specific plan type (i.e., hybrid, ORA, or Pacific). Where there were problems with various aspects of any plan, participants were asked to cooperatively recommend potential solutions for those deficiencies.

Participants also were asked to jointly determine if any of the plans were "fatally flawed" in any area, and if so, why. They were asked to follow the plan principles presented in the November 22, 1999 ACR, and to assume that the task before them would be to refine each particular plan type so as to be practical, capable of implementation and as simple as possible. Workshop participants were given an opportunity to advocate on behalf of their own plan on that specific plan's day, and to critique a competing plan on that plan's day. However, the intent of the sessions was to help refine each plan so that any one or all could be applied during the six-month pilot test period.

For each of the three plans, the assigned ALJ and staff focused on the respective model, element by element. There was a "straw man" or hypothetical proposal for each model element, and either (1) a group decision was reached on that element or (2) a group modification was made to the hypothetical proposal. Discussion continued on each model subcomponent until a group "best" decision was reached, or until it was evident that no decision could be reached and that the participants could only "agree to disagree." At the end of each plan subcomponent, a court reporter transcribed the group's findings on that plan element for the record.

Workshop Recommendations and Positions

Hybrid Performance Measurement Plan

At the workshop, Pacific, Verizon CA, the CLECs and ORA all agreed to use the Modified Z-test to develop a hybrid performance measurement plan. Most of the parties also agreed that, since they had selected the Modified Z-test, the two-step standard Z-test procedure and other modifications40 were no longer applicable to the Hybrid model. Verizon CA, however, supported using permutations, deltas and exact distributions in conjunction with the Modified Z-test.

The CLECs agreed to the initial hypothetical recommendation to treat benchmarks as limits without relying on statistical tests. Pacific and Verizon CA concurred, as long as special tables based on statistical charts were used for all benchmarks. Pacific and the CLECs further agreed to produce two sets of consensus tables of acceptable misses for sample sizes scaled from 1 to 100 at a 10-percent alpha level. One set of the tables would represent percentage-based benchmarks, and the other would represent average-based benchmarks. ORA opposed the proposition of treating benchmarks as absolutes without reliance on any statistical testing. (Reporters' Transcript (RT) at 1107, lines 10-24.)
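For illustration only, the sketch below shows one way such an acceptable-miss table could be generated for a percentage-based benchmark, assuming a one-sided binomial criterion and a hypothetical 90-percent benchmark level; it is not the consensus methodology the parties agreed to file.

```python
# Hypothetical sketch: the maximum number of benchmark "misses" tolerated at
# each sample size before a failure is declared, assuming a one-sided
# binomial criterion at a 10-percent alpha level. The 90-percent benchmark
# level is an illustrative assumption, not a value from the record.
from math import comb

def max_allowed_misses(n, benchmark_pass_rate=0.90, alpha=0.10):
    """Smallest k such that P(misses > k) <= alpha when misses ~ Binomial(n, 1 - pass_rate)."""
    miss_rate = 1.0 - benchmark_pass_rate
    cumulative = 0.0
    for k in range(n + 1):
        cumulative += comb(n, k) * miss_rate**k * (1 - miss_rate)**(n - k)
        if 1.0 - cumulative <= alpha:
            return k
    return n

if __name__ == "__main__":
    for n in (5, 10, 20, 50, 100):
        print(f"sample size {n}: up to {max_allowed_misses(n)} misses allowed")
```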

All the parties assented to the second hypothetical Hybrid model recommendation that the Commission re-evaluate the benchmarks after a six-month pilot test period. However, Pacific's concurrence was subject to some preliminary data calibration occurring prior to the pilot. Moreover, the CLECs stressed that real penalties and incentives should be in effect during the six months.

In the discussion on sample sizes, Pacific, Verizon CA, the CLECs and ORA all supported the hypothetical recommendation that the time period for measurement of the sample be kept on a monthly basis. The second recommendation was that each party precisely specify the minimum sample size, between five (5) and twenty (20), that it would select. Pacific stated that it would go to a sample size of 5, with the proviso that there be mitigation measures to offset such a small sample size. Pacific further maintained that although it would apply the Modified Z-test for parity measures down to a minimum sample size of 5, it would not agree to use data or apply a permutation test below 5. Pacific argued that permutation testing was costly. In substantiation, Pacific agreed to submit operational cost calculations for permutation testing.41

The CLECs selected a sample size of 5 and declared that if the minimum sample size were to be below 5, they would prefer permutation testing to be used. ORA would support a minimum sample size of 5; however, below 5 it would not support using the data. Verizon CA would support a minimum sample size of 20 with permutation testing. Below a sample size of 5, Verizon CA would prefer to discard the data. Between 5 and 20, Verizon CA would prefer to use permutation testing, but without being subject to incentive payments. (Verizon CA May 5, 2000 Reply Br. at 5.) Verizon CA strongly advocated permutation testing, and agreed to submit jointly with the CLECs after the workshop a description of a permutation testing protocol.42

Following the 1999 performance incentive workshops, the parties identified six sub-measures43 as "rare sub-measures." The parties purported to have agreed that the minimum sample size would not be applied to those measures or sub-measures identified as "rare." However, it was unclear from the briefs submitted after the 1999 workshops whether the parties still agreed on what constituted the list of rare sub-measures. Thus, the third hypothetical sample size recommendation was to identify the measures or sub-measures that the parties agreed would be exempt from the minimum sample size requirement.

The parties agreed that rare measures or sub-measures would be those that rarely saw activity, yet were important to the CLECs. Pacific and the CLECs agreed to reanalyze the issue and submit as a workshop document any suggestions, additions or deletions to the group of six rare measures and submeasures.44 The rare measure list identifies those measures (or submeasures) where the measure would still be used at a sample size of one.

The parties also discussed how to make the Hybrid model operational for parity measures with no permutation testing and with sample sizes between five and twenty. To further the analysis, Pacific agreed to provide, in two parts, the "data on sample size for CLECs by submeasures." Pacific specified that one part of the analysis would show the percentage of the total data elements that would be used (not discarded). The second part of the analysis would show the percentages for the resulting sample sizes that would be used, relative to the entire set of samples. The company also offered to provide the absolute numbers, not just the percentages, for the previous two months of data.45 (RT at 1135, lines 12-28.)

Pacific suggested that staff consider different remedy amounts when analyzing this data in the context of the "small sample world" versus the "large sample world." It questioned the reliability of the data if used with certain of the recommendations in the small sample realm. The CLECs proposed two recommendations to make the Hybrid model operational. First, small CLECs could be pooled into a sufficient aggregation to meet the minimum sample size. Second, a "mean plus standard deviation" similar to the ORA proposal could be used. (RT at 1136, lines 7-12.) Verizon CA supported the small CLECs pooling proposal, stating that it merited further exploration. (RT at 1136, lines 13-15.)

Staff set forth two hypothetical recommendations on the Commission model's alpha level. Staff proposed that a 10-percent alpha level be used for the Modified Z-test. All the parties agreed to compromise at the 10-percent alpha level for the sake of developing the Hybrid plan. Only Pacific agreed to the second proposal, that parties refrain from calculating multiple alpha levels going forward.

In the January opening comments on the Hybrid model proposal, Pacific asserted that certain performance measures are based on failure rates for which no standard deviation has been defined. Thus, while a test similar to a Modified Z-test might be crafted for most of these measures, a Z-test could not be calculated for at least one of them as currently defined. (Pacific Opening ACR Comments at 5, footnote 5.) During the discussions on the Hybrid model the parties identified Measures 15, 16 and 19 as measures that might require special treatment or alternative application rules. At the conclusion of the Hybrid model discussion, Pacific, the CLECs and Verizon CA agreed to recommend a common solution to staff for these three measures.

Pacific, the CLECs and Verizon CA each detailed their respective lists of necessary enhancements to the Hybrid model. Pacific identified three necessary elements: (1) mitigation for random variations; (2) a procedure to deal with excludable events, such as force majeure; and (3) an absolute cap on maximum exposure. Pacific noted that in its latest incentive proposal it was willing to pay up to $120 million in payments without evidentiary hearings. (Pacific ACR Reply Comments at Appendix 1.)

The CLECs maintained that their essential enhancement to the Hybrid model would be to convert all benchmarks associated with averages into percentage-based benchmarks. As a result, the benchmarks would be simplified and unified into one category.46 Verizon CA specified five necessary enhancements to the Hybrid model. It would like the Hybrid model to either consider or perform correlation measures during the six-month trial period. Verizon CA would like the Hybrid to treat small sample sizes as they are treated under the Bell South model.47 It would also like the Hybrid model to consider real customer materiality48 in contrast to statistical measurement differences. Verizon CA emphasized that all of the different measurement components are tied together, and that some of these components may have an aggregate effect that the Hybrid model needs to consider. Finally, Verizon CA asked the staff to consider Pacific's white paper proposal as a tool to resolve many of the sample size issues or to satisfy the concerns about mitigation.

ORA Performance Measurement Plan

Foremost, ORA's plan attempted to adhere to the ACR's core guiding principle that any model under the Performance Remedies Plan be simple to implement and monitor. Thus, the first ORA hypothetical recommendation ("straw man") was simplicity as an overarching principle. During the facilitated workgroup discussions, Pacific noted that while striving for simplicity was one of its concerns, there are more pressing substantive issues. The CLECs urged completeness and effectiveness in the remedies plan over mere simplicity. Verizon CA stated that the emphasis should be on fairness and accuracy, and that simplicity should be one of several core principles. However, it asserted that if two plans were equally effective and fair, it would prefer the simpler plan. Ultimately, Verizon CA suggested, ORA's plan may not be operationally simple.

ORA observed that the ILECs and the CLECs have proposed a mixed system, in which some measures are benchmark measures with no statistical tests to determine performance failure. ORA opposes using a mixed system. It argued that the same system should be applied to all performance measures, and that statistical tests are either relevant or they are not. (ORA ACR Opening Comments at 4.)

In its white paper proposal submitted in January, Pacific embraces the concept of a "same" system for both parity measures and benchmarks. However, Pacific asserts that benchmarks without statistical tests demand of the ILEC an unreasonably higher standard of performance (to avoid missing the benchmark) in the context of small sample sizes as opposed to large sample sizes. In contrast to ORA, Pacific asserts that statistical testing is relevant.

Both ORA and Pacific propose moving to a uniform system, but in different directions. The Pacific white paper plan advocates converting every performance measurement to a statistical test. The ORA plan advocates converting every performance measurement to a simple means and variance analysis, without any more formal statistical tests. The CLECs disagree that there is a need for a "same system." They contend that parity measures and benchmark measures need to be treated differently. Finally, Verizon CA notes that while the second ORA strawman recommendation of consistency in terms of a "same system" concept is laudable, it is unnecessary.

The ORA plan argues that the Performance Remedies Plan contains no provisions to prevent service deterioration. It states that current service levels can only be maintained if standards are based on historical, rather than future, data. The current plans may have built-in reversed incentives: if the ILECs were to increase the variability of their own processes, they could reduce incentive payments even though the CLECs receive worse performance. That is, the poorer the ILEC's performance, the poorer the parity performance received by the CLECs, but the larger variability would effectively prevent detection of discrimination. To guard against this possibility, one of the straw man recommendations under the Hybrid plan was to monitor ILEC means and variances and compare them to historical values.49
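ORA's concern can be illustrated with the commonly cited form of the modified Z statistic, shown here as an assumption for illustration; the record does not reproduce the parties' exact formula.

```latex
% Commonly cited form of the modified Z statistic, with the ILEC's own
% variance in the denominator (an illustrative assumption):
\[
  Z \;=\; \frac{\bar{X}_{\mathrm{CLEC}} - \bar{X}_{\mathrm{ILEC}}}
               {\sqrt{\,s_{\mathrm{ILEC}}^{2}\left(\tfrac{1}{n_{\mathrm{CLEC}}} + \tfrac{1}{n_{\mathrm{ILEC}}}\right)}}
\]
% Because s_ILEC appears only in the denominator, a more variable ILEC
% process shrinks |Z| for any fixed gap between the two means, making a
% real performance difference harder to flag as statistically significant.
```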

Responding to ORA's recommendation to base standards on historical data, Pacific questioned how the historical period would be defined and how the historical data concept could be operationalized. Pacific stated that it saw the ORA white paper as a conceptual approach that had not yet been specified to an operational level. It also requested a more detailed description of what monitoring ILEC means and variances would entail.

The CLECs advised that when one uses historical data in the context envisioned, there is a need for a lot of data. Overall, the CLECs preferred real-time data over historical data. However, they support monitoring the means and variances in order to mark improvement in Pacific's performance and to record where the CLECs stand in terms of Pacific's historical performance. Verizon CA noted that the data fluctuates substantially from month to month. Verizon CA maintained that there are inherent limitations in the depth and breadth of historical data, necessitating adjustments. In addition, Verizon CA supported continuing to monitor the ILEC means and variances.

In its white paper proposal, ORA argued that neither the Z-test nor any other parametric test should be used during the performance remedies plan six-month pilot period, because many of the underlying performance measurement series are not distributed normally.50 ORA argued that such non-normal distributions violate a fundamental assumption of the Z-test. Pacific supported using the Z-test during the pilot. It indicated a willingness to look again at the Z-test after the pilot, but wanted more specifics on what this would encompass.

Verizon CA commented that ORA's proposal to reject all statistical tests during the pilot is too extreme. Yet, it acknowledged that ORA's concern about normality was justified. Verizon CA suggested that ORA's approach should be considered after the six-month pilot is completed. At the workshop, Verizon CA cautioned that two factors should be taken into consideration: first, how to calculate the test statistics; and, second, how to use the calculation. Verizon CA noted that where the assumptions of normality are met, one could consult "look-up" tables. Outside the range of normality, one could use permutation testing and exact distributions.

The CLECs alone directly addressed the ACR's questioning of the use of any Z-tests. The CLECs recommend the use of existing parametric tests. However, they maintained that if actual experience does not justify confidence in the results, the test simply should be based on the number of observations that fall above some specified level. Essentially, this would convert measurement cases into counting cases. At that point, the CLECs propose to use the upper ten percent quantile51 of the observed ILEC sample. CLEC statistical expert Dr. Colin Mallows of AT&T performed simulations and found that for some alternatives this non-parametric test is much more powerful than the Z-test. (CLECs Reply ACR Comments at 4-5.) In the workshop, the CLECs supported using "some flavor" of the Z-test during the pilot.
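A sketch of how that counting rule could be formalized is set out below; the formalization is an assumption for illustration, since the CLECs' comments describe only the general approach.

```latex
% Assumed formalization of the counting test. Let q_{0.90} denote the 90th
% percentile (upper ten-percent quantile) of the observed ILEC sample, and
% let K be the number of the n CLEC observations exceeding q_{0.90}.
% Under parity,
\[
  K \sim \mathrm{Binomial}(n,\,0.10),
  \qquad
  \Pr(K \ge k) \;=\; \sum_{j=k}^{n} \binom{n}{j}\,(0.10)^{j}\,(0.90)^{\,n-j},
\]
% and non-parity would be declared when this tail probability falls below
% the chosen alpha level.
```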

The ACR urged moving toward more aggregation of the measures over time in order to simplify the performance remedies plan. The aggregation effort should take all double counting out of the measures to the extent that there is correlation and interdependence between a number of the measures. In response, ORA stated that there are a total of 44 performance measures with over 1000 submeasures. It expressed concern about possible correlation between these measures. ORA argued that the ILECs' OSSs could be adequately measured with fewer performance measures, since many of the measures may be cross-correlated with each other and may not be needed. ORA's plan recommends that correlation tests be run for all the performance measures. It also submits that no performance measure should be included if it has a correlation greater than 0.80 with any other performance measure.
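As an illustration of the screening ORA describes, the following sketch flags measure pairs whose correlation exceeds 0.80; the measure names and monthly figures are placeholders, not data from this proceeding.

```python
# Hypothetical sketch of ORA's correlation screen: compute pairwise
# correlations between the monthly results of each performance measure and
# flag any pair whose correlation exceeds 0.80.
import numpy as np

def flag_redundant_measures(results, threshold=0.80):
    """results maps a measure name to its monthly series; returns highly correlated pairs."""
    names = list(results)
    matrix = np.corrcoef(np.array([results[name] for name in names]))
    flagged = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(matrix[i, j]) > threshold:
                flagged.append((names[i], names[j], round(float(matrix[i, j]), 2)))
    return flagged

if __name__ == "__main__":
    sample = {
        "measure_01": [4.1, 4.3, 3.9, 4.6, 4.2, 4.4],
        "measure_02": [4.0, 4.4, 3.8, 4.7, 4.1, 4.5],   # tracks measure_01 closely
        "measure_15": [12.0, 9.5, 14.2, 8.8, 11.1, 10.4],
    }
    print(flag_redundant_measures(sample))
```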

During the workshop, Pacific supported the hypothetical ORA plan recommendation for correlation testing. Pacific agreed that eliminating measures would help. To date, there has been no correlational statistical analysis or scientific modeling of the measures. However, given the contentiousness surrounding the issue, Pacific is willing to address the matter at a later time. Pacific admitted that the issues of correlation and interdependence had not yet been raised in the Performance Measurement Phase.

The CLECs pointed out that, until recently, there was not a lot of data with which to determine correlation. They do not want to get sidetracked with correlation issues at this point. While not averse to the goal of reducing or adding measures if there is a legitimate rationale, the CLECs are opposed to a casual reduction of measures. They maintained that, at this point, Pacific and the CLECs see no further correlation between any of the submeasures. Verizon CA concurred with the plan recommendation as well as the ACR's desire to reduce the number of performance measures, if supported by the data. It asserted that the data is not currently available, and will not be available until after the six-month pilot. Verizon CA stated that the performance incentive phase would be the proper forum to address the issues of correlation and interdependence.

ORA's plan recommends a minimum sample size of twenty (20). It argues that a performance measure should be used in the pilot only if two requirements are met: first, that it satisfies a minimum sample size of twenty; and second, that the measure is not highly correlated (greater than 0.80) with any other measure. At the workshop, Pacific, Verizon CA and the CLECs opposed ORA's recommended minimum sample size.

ORA's plan also recommends that parity be defined as a situation in which the average measured results for the CLECs served by a particular ILEC are within one standard deviation of the average measured results that the ILEC provides to its internal company units. ORA proposes that the ILEC average be based on historical and not future data. However, what exact historical period should be used for monitoring ILEC means and variances is unclear.

In terms of the workshop discussion, the straw man recommendation was to use the most recent historical fiscal or calendar year for the ILEC. None of the other parties supported ORA in its selection of one standard deviation. With a one-tail test and assuming normality, one standard deviation corresponds to approximately 84 percent of the normal distribution, or a 16-percent alpha. While ORA's recommendation reasonably approximates the 80/20 rule, it is somewhat inconsistent with the Office's prior recommendation of a 5-percent alpha. For a one-tail test, a 5-percent alpha corresponds to approximately 1.645 standard deviations. However, to facilitate the workshop discussions, staff proposed using a 10-percent alpha, or approximately 1.282 standard deviations, for the sake of developing the ORA model.
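The correspondence between alpha levels and standard deviations follows from the one-tailed quantiles of the standard normal distribution:

```latex
% One-tailed critical values of the standard normal distribution:
\[
  \Phi(1.0) \approx 0.841 \;\Rightarrow\; \alpha \approx 0.16, \qquad
  \Phi^{-1}(0.90) \approx 1.282, \qquad
  \Phi^{-1}(0.95) \approx 1.645,
\]
% so one standard deviation corresponds to roughly a 16-percent alpha,
% a 10-percent alpha to about 1.282 standard deviations, and a 5-percent
% alpha to about 1.645 standard deviations.
```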

ORA proposed that the benchmark be defined as the historical mean of the series plus one standard deviation. Consequently, any performance worse than the benchmark would trigger a penalty. ORA argued that the best demonstration of parity would be actual, not estimated, performance, even when experts using reasonable information make the estimates in good faith. The Office contended that proxies could be used in place of benchmarks in many cases and that, since they are based on actual data, proxies are clearly superior to arbitrary benchmarks. ORA recommended that the Commission investigate their use before adopting arbitrary benchmark measures, and urged that benchmarks be used only in cases where there are no retail analogues and no proxies for those retail analogues.

Pacific, Verizon CA and the CLECs rejected ORA's benchmark proposal. They maintained that the reason they initially established benchmarks was because there were no retail analogues. Technically, there is no historical time-series data to calculate the mean and standard deviation for benchmarks under ORA's definition. Ideally, normalizing the benchmarks through proxies (assuming fairness and simplicity) is preferable to the current negotiated values. However, re-creating benchmarks distinct from those established in D.99-08-020 would be impractical, contentious and time-consuming at this juncture. The parties accepted staff's recommendation to treat benchmarks as limits, as agreed to in D.99-08-020, in the context of the ORA plan.

Finally, staff asked the parties to help identify any other requirements or conditions that need to be specified to make the measurement component of ORA's plan operational. In response, WorldCom introduced the "SiMPL Plan"52 during the workshop. The SiMPL Plan would calculate the ILEC's historic performance percentiles and compare the relative CLEC performance results within those intervals. For example, non-parity would be identified if more than 10 percent of the CLEC's results were above the ILEC's 90th percentile. Other percentile comparisons would be made as well. WorldCom explained that this feature could assist in shaping CLECs' service expectations. It also contended that the plan is easy to administer since ILEC compliance is based upon whether the counts of ILEC and CLEC events within each of three performance zones are proportional. (2000 MCIW Workpaper # 3 at 4.53) WorldCom characterized the SiMPL Plan as an alternative to the Modified Z-test, in furtherance of the workshop assignment to collaboratively refine each model into the best that it could be. (Post-Workshop Opening Brief of AT&T, Covad, MGC Communications and WorldCom at 4-5.)
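A rough sketch of the zone comparison WorldCom describes appears below. The choice of the 50th and 90th percentiles as zone boundaries, and the illustrative data, are assumptions; the record reproduced here fixes only the 90th-percentile example.

```python
# Hypothetical sketch of a SiMPL-style zone comparison. The 50th/90th
# percentile cutoffs and the simulated data are illustrative assumptions.
import numpy as np

def zone_shares(ilec_results, clec_results, cutoffs=(0.50, 0.90)):
    """Share of CLEC results falling in each zone of the ILEC's historical distribution."""
    boundaries = np.quantile(np.asarray(ilec_results), cutoffs)
    zones = np.searchsorted(boundaries, np.asarray(clec_results), side="right")  # zone 0, 1 or 2
    counts = np.bincount(zones, minlength=len(cutoffs) + 1)
    return counts / counts.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ilec = rng.exponential(scale=5.0, size=500)   # e.g., repair intervals in days
    clec = rng.exponential(scale=6.5, size=60)    # noticeably slower service
    shares = zone_shares(ilec, clec)
    # Under parity, roughly 50/40/10 percent of CLEC results would land in the
    # three zones; more than 10 percent above the ILEC 90th percentile would
    # signal non-parity under the example WorldCom gave.
    print(shares, "non-parity flag:", bool(shares[-1] > 0.10))
```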

Pacific objected to WorldCom not presenting the SiMPL Plan in writing in advance of the workshop, and asserted that it saw "only minimal connections, at best" between the SiMPL54 and ORA plans. (Pacific Opening Comments on Performance Remedies Workshop at 6.) Pacific described the SiMPL Plan as "fatally flawed"; simple only in that it does not require statistical testing to make the final determination of which measures were missed; and "inherently unfair to the ILEC." (Id. at 6-7.) Pacific concluded that the net result of the SiMPL Plan would be either to guarantee superior service to the CLECs or to plunge the ILEC into a spiraling series of costly service improvements that ultimately would not shield it from remedy payments. (Id. at 7.)

Pacific's White Paper Proposal

Pacific's revised Performance Remedies Plan, issued in January 2000, incorporates a number of new principles. First, Pacific maintains that there should be minimal payment of remedies when the ILEC provides parity service that is compliant with the standards of acceptable performance. This revised principle is similar to the ACR principle that "the plan should impose smaller penalties on Pacific for discriminatory performance that could be merely the result of random variation, and impose larger penalties for seriously deficient performance." (ACR at 12.) The ACR recognized this principle as a relative one, offset by benefits that the ILEC receives when it is not actually providing parity service but also is not measured as out of compliance.

Underscoring its first new principle, Pacific states as a supporting principle that the plan should not provide incentives for the ILEC to engage in behaviors, other than performance improvements, to escape remedy payments. It also insists that the plan should provide payment to the CLEC only for poor performance by the ILEC and not as a normal course of business. Further, Pacific restates the CLEC principle that the risks of Type I and Type II errors should be shared equally between the CLECs and the ILEC. Finally, Pacific asserts in its revised plan that samples of various sizes should be used, provided the data they supply support valid decision rules.

Pacific's revised plan distinguishes between two definitions of parity service delivery. The company selected the definition that it contends recognizes and incorporates the variability of service delivery processes, i.e., the impossibility of delivering service exactly the same way every time. Thus, Pacific prefers the assertion that "parity of service delivery is achieved whenever the results for the CLEC and the ILEC are not significantly different." It notes that the key is to find a way to operationalize the meaning of "significant" when applied to ILEC and CLEC results. Pacific states that this is a statistical question that may be answered using models of the processes that produce the data to be evaluated. It is possible to calculate the probability of observing any particular difference between the results of the ILEC and CLEC given the assumption that parity service is being delivered. The probability of the observed difference in results is the mechanism for deciding the significance of the difference between ILEC and CLEC.
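A minimal sketch of that calculation is shown below, using the commonly cited modified Z form with a one-sided normal p-value; this is an illustrative assumption, not the exact formulation in Pacific's white paper.

```python
# Illustrative sketch of "the probability of the observed difference under
# the assumption of parity," using the commonly cited modified Z statistic
# (ILEC variance in the denominator). The figures below are hypothetical.
from math import erf, sqrt

def parity_p_value(ilec_mean, ilec_var, n_ilec, clec_mean, n_clec):
    """One-sided p-value for the hypothesis that CLEC results are no worse
    than ILEC results, where larger measured values mean worse service."""
    z = (clec_mean - ilec_mean) / sqrt(ilec_var * (1.0 / n_clec + 1.0 / n_ilec))
    # standard normal upper-tail probability via the error function
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

if __name__ == "__main__":
    # Hypothetical numbers: average installation interval in days.
    print(parity_p_value(ilec_mean=5.0, ilec_var=4.0, n_ilec=400,
                         clec_mean=6.0, n_clec=25))
```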

Pacific's white paper proposal advocates a definition of compliance that it maintains diminishes the disadvantages of measuring compliant service where there are no retail analogues. Instead of comparing CLEC results in absolute terms against a benchmark, CLEC results are compared in relative terms against a standard. CLEC results are compared to a standard using a statistical test to evaluate the compatibility of those results with the standard. Consequently, "the results for the CLEC are compliant if they are not significantly different from the standard."

Pacific's revised plan contends that a key aspect of the use of standards and statistical tests is that the same criterion for the probability of failure (under the assumption of compliance) can be used as is used for parity measures. (Pacific's Opening Comments to the ACR, Attachment I at 4.) Moreover, this probability can be made nearly constant for all sample sizes. Pacific disputed the CLECs' assertion that introducing standards at this late stage of the development of the remedy plan threatens to jeopardize all the difficult negotiations that went into the setting of benchmarks. The company insists that all standards may be derived from the already agreed upon benchmarks by using a straightforward, objective formula.55 The agreed upon benchmarks would remain intact, and both sides would reap the benefits of using standards. (Id.)

In the revised plan, Pacific continues to propose a 5-percent alpha for parity measures. The white paper is not clear on what alpha level equivalent Pacific recommends for benchmarks with statistical tests. Pacific also contends that it is willing to go to a minimum sample size of 5 for parity measures, provided its white paper proposal for benchmarks is used. It recommends using the same minimum sample size of 5 for benchmarks.

Finally, Pacific recommends setting aside the forgiveness rules of its original plan, and sets forth an alternative mechanism for mitigating random variation. With this mechanism, Pacific proposes to focus on the CLEC as the unit of analysis and determine whether the total relationship between the ILEC and the CLEC shows evidence of discrimination or whether any failures observed can be ascribed to random variation. (Id. at 14.) Thus, Pacific would use a table to evaluate all the sub-measures for a single CLEC in lieu of forgiveness rules.

At the workshop, the CLECs disagreed with a performance assessment that uses statistical significance testing on benchmarks. They maintain that such a focus increases the complexity of applying the FCC's "a meaningful opportunity to compete" standard. The CLECs also contend that benchmarks are a surrogate for parity; thus, benchmarks should not be treated the same way as parity measures. The CLECs support the existing treatment of benchmarks as tolerance limits rather than as targets, which is how Pacific's plan would treat them. (RT at 1170-72.) Further, the CLECs continue to assert that there is a need for a mitigation plan for both Type I and Type II errors, and that all submeasures should be treated the same over time regarding both these categories of errors. (RT at 1170, lines 16-20.)

Verizon CA argued at the workshop that, overall, it supported Pacific's white paper model; however, it would like to see how certain specific elements of the model would be implemented. Verizon CA prefers permutation testing below a sample size of 50, and believes that applying the Modified Z-test down to a sample size of 5 presents problems. Within the context of the Pacific model, Verizon CA favors a 5-percent alpha and supports the concept of benchmarks with statistical testing. (RT at 1174, lines 7-24.)

ORA, noting concerns about the assumptions inherent in any parametric testing, reiterated that if the Commission adopts either the Hybrid or the Pacific model, the model should be based on historical data. ORA also suggested that the Commission reassess the choice of alpha level, the specific level of benchmarks, and the values of the small sample tables when more historical data becomes available. Moreover, ORA did not accept Pacific's argument that false negatives (Type II errors) are unimportant because they do not harm the CLECs. It stated that performance incentives are fundamentally aimed at encouraging ILECs to provide parity of service and to dissuade attempts to discriminate, with the goal being to allow competition to proceed uninhibited. The fact that attempted discrimination was unsuccessful does not mean that the performance incentive plan should not consider the attempt. (ORA Opening Brief at 3.)

39 By ruling, the assigned ALJ advised the parties that the Commission would address the incentive components (including incentive structure, incentive amounts, and who receives incentive payments) after it determined the performance measurement and assessment plan components.

40 Such as the unequal variance Z-test, exact distributions, permutations, and deltas.

41 2000 Pacific Workpaper #9.

42 2000 GTE Workpaper #8.

43 Sub-measure Nos. 26, 27, 30, 40, 41 and 43.

44 2000 Pacific/GTE Workpaper #10.

45 2000 Pacific Workpaper #12.

46 The CLECs stated that they would also be proposing this within the Performance Measurement Phase of this proceeding.

47 "Statisticians for Bell South and AT&T have recently proposed a `correction' to the Modified Z test that accounts for the skewness in the underlying distributions. They believe that this correction makes the `modified-modified t' essentially equivalent to permutation testing at small sample sizes." Verizon CA Opening Brief at 23 (April 28, 2000).

48 "Customer materiality" refers to whether the customer could actually perceive a difference between the performance to ILEC customers versus to CLEC customers, regardless of any statistical difference.

49 Pacific and Verizon CA agreed to provide staff with the incumbent local exchange carriers' historical means, variances and sample sizes for their retail parity measures and submeasures from September 1999 going forward through June 2000.

50 As Pacific characterizes it: "no normal distributions and relatively few large samples." In fact, the "samples" in question may not really be "samples," but rather time-series population observations.

51 A quantile is a portion of a distribution. An upper ten-percent quantile designates the highest ten percent of results in a distribution, i.e., those results above the 90th percentile.

52 The Simplified Measurement of Performance and Liability Plan. 2000 MCIW Workpaper No. 3.

53 Dr. George Ford's paper on the SiMPL Plan.

54 Pacific refers to it as "the Ford Model."

55 Id. at 20, Appendix III.
