Appendix J: California Performance Incentives Plan

1. GENERAL PRINCIPLES

2. THE ASSESSMENT OF PARITY AND COMPLIANCE

Testing Procedures Applied to Sub-measures According to Their Basis and Type

Basis: Averages
Parity: Modified t-test applied to the logs of the data, except for Measures 34 and 44, for which the test is applied to the raw data.
Benchmarks: The benchmark is used as an absolute comparison standard.

Basis: Percentage
Parity: Fisher's exact test applied to all sub-measures.
Benchmarks: The Small Sample Adjustment table is applied where applicable; otherwise the benchmark is used as an absolute standard.

Basis: Rates
Parity: Binomial test applied to all sub-measures.
Benchmarks: The Small Sample Adjustment table is applied where applicable; otherwise the benchmark is used as an absolute standard.

Basis: Index
Parity: The performance difference is compared to an absolute standard.
Benchmarks: The performance is compared to an absolute standard.

Basis: Count
Parity: No sub-measures of this kind.
Benchmarks: The CLEC numerator is compared to the benchmark as an absolute standard. Applicable to LNP sub-measures in Measures 20 and 23.
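As an informal illustration of the tests named above, the sketch below applies off-the-shelf SciPy versions of each comparison to invented data. It is not the plan's method: the plan prescribes a modified t-test, its own critical alpha values, and the Small Sample Adjustment table, none of which are reproduced here.

```python
# Illustrative only: the plan's actual tests use a modified t statistic, the
# plan's own critical alpha values, and the Small Sample Adjustment table.
# This sketch shows only the general shape of each comparison using standard
# SciPy tests; all data values below are hypothetical.
import numpy as np
from scipy import stats

ALPHA = 0.10  # hypothetical critical alpha; the plan defines its own values

# Averages basis (e.g. completion intervals): t-test on the logs of the data,
# one-sided because only worse-than-ILEC performance triggers a failure.
ilec_days = np.array([2.1, 3.4, 2.8, 4.0, 3.1])
clec_days = np.array([3.9, 4.4, 3.2, 5.1])
_, p_avg = stats.ttest_ind(np.log(clec_days), np.log(ilec_days),
                           equal_var=False, alternative='greater')

# Percentage basis (e.g. percent due dates missed): Fisher's exact test on the
# 2x2 table of (missed, met) counts for the CLEC versus the ILEC.
table = [[12, 88],    # CLEC: missed, met
         [50, 950]]   # ILEC: missed, met
_, p_pct = stats.fisher_exact(table, alternative='greater')

# Rates basis (e.g. trouble report rate): binomial test of the CLEC's count
# against the ILEC rate.
p_rate = stats.binomtest(k=30, n=1000, p=0.02, alternative='greater').pvalue

for name, p in [('averages', p_avg), ('percentage', p_pct), ('rates', p_rate)]:
    print(f"{name}: p = {p:.4f} -> {'fail' if p < ALPHA else 'pass'}")
```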

3. CALCULATION OF INCENTIVE VALUES

4. SPECIFIC MEASURES TO WHICH INCENTIVE PAYMENTS APPLY

4.3 Ordering

4.4 Provisioning

5. ROOT CAUSE ANALYSIS

6. PERFORMANCE INCENTIVE PAYMENTS

7. CLARIFICATIONS AND ILLUSTRATIONS TO AID PERFORMANCE INCENTIVE PLAN IMPLEMENTATION

General Issues.

For sub-measures where low values are associated with good service, the Small Sample Adjustment table is applied by subtracting the benchmark from 1 and using the result as the point of entry into the table (see the sketch below).

The Small Sample Adjustment table is applied to aggregates as well as CLEC observations.

Aggregations of Count-based sub-measures are evaluated by comparing the average of the numerators for all the CLECs in the aggregation to the benchmark for the sub-measure.
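The short sketch below illustrates the two rules above with invented benchmark and count values; the pass/fail comparison direction for the count aggregate is an assumption.

```python
# Illustration only; the benchmark values and CLEC counts below are invented.

# Entry point into the Small Sample Adjustment table for a sub-measure where
# low values mean good service: subtract the benchmark from 1.
benchmark = 0.05                   # e.g. a "no more than 5 percent" benchmark
table_entry_point = 1 - benchmark  # enter the table at 0.95

# Aggregation of a Count-based sub-measure: compare the average of the CLEC
# numerators in the aggregate to the sub-measure's benchmark.
clec_numerators = [2, 0, 5, 1]     # one numerator per CLEC in the aggregate
count_benchmark = 3                # hypothetical absolute standard
average_numerator = sum(clec_numerators) / len(clec_numerators)
aggregate_passes = average_numerator <= count_benchmark  # assumes low is good

print(table_entry_point, average_numerator, aggregate_passes)
```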

The following definitions are used throughout:

An Observation is the data for a single CLEC on a sub-measure in a single month.

An Aggregate is any collection of observations within a given sub-measure in a single month.

A Single-month evaluation is a pass/fail test on an observation or an aggregate using the single-month evaluation rules given in Exhibit 3, section B.

A Repeated Failures evaluation is a pass/fail test on an observation or aggregate using the repeated failures evaluation rules given in Exhibit 3, section B.

An Ordinary Failure is a failure determined using a single-month evaluation.

A Chronic Failure is an observation or aggregate failure that is determined using the repeated failures evaluation and that is at least the third in a string of consecutive months of repeated failures (allowing for months with inactivity). Once a sub-measure has a chronic failure, all subsequent failures using the repeated failures critical alpha criterion will be deemed chronic until two consecutive passes are obtained or three months intervene with no activity; the entry and exit logic is illustrated in the sketch following these definitions.

An Extended Failure is an observation or aggregate failure that is determined using the repeated failures evaluation and that is preceded by at least five repeated failures in the preceding six months of tests (allowing for months with inactivity). Once a sub-measure has an extended chronic failure, all subsequent failures using the repeated failures critical alpha criterion will be deemed extended chronic until two consecutive passes are obtained or three months intervene with no activity.
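The sketch below shows one possible reading of the chronic-failure entry and exit rules, with each month's repeated-failures result recorded as 'fail', 'pass', or None for a month with no activity. It is an illustration only; the authoritative single-month and repeated failures rules are those in Exhibit 3, and the extended chronic state would follow an analogous pattern with its own entry condition.

```python
# One reading of the chronic-failure rule, for illustration only.  Each entry
# in `history` is a month's repeated-failures evaluation for one sub-measure:
# 'fail', 'pass', or None for a month with no activity.

def classify_chronic(history):
    chronic_state = False     # once entered, persists until an exit condition
    consecutive_fails = 0     # consecutive failing months (inactivity ignored)
    consecutive_passes = 0    # consecutive passing months, for the exit rule
    inactive_months = 0       # consecutive months with no activity
    labels = []
    for result in history:
        if result is None:
            # A month with no activity does not break a failure string, but
            # three such months in a row clear the chronic state.
            inactive_months += 1
            if inactive_months >= 3:
                chronic_state = False
                consecutive_fails = 0
                consecutive_passes = 0
            labels.append(None)
            continue
        inactive_months = 0
        if result == 'fail':
            consecutive_fails += 1
            consecutive_passes = 0
            # At least the third failure in a string of consecutive failing
            # months enters the chronic state.
            if consecutive_fails >= 3:
                chronic_state = True
            labels.append('chronic' if chronic_state else 'not chronic')
        else:  # a pass
            consecutive_fails = 0
            consecutive_passes += 1
            # Two consecutive passes exit the chronic state.
            if consecutive_passes >= 2:
                chronic_state = False
            labels.append('pass')
    return labels

print(classify_chronic(
    ['fail', 'fail', None, 'fail', 'pass', 'fail', 'pass', 'pass', 'fail']))
```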

The denominator used to calculate the Adjusted Base Amount is taken as the total number of remedy-relevant observations for those CLECs having reportable data for the month. The aggregate measures, 24, 42, and 44, contribute just the number of sub-measures with data.
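For example, the denominator might be assembled as in the sketch below; every count shown is hypothetical.

```python
# Hypothetical sketch of the Adjusted Base Amount denominator; every count
# below is invented for illustration.

# Remedy-relevant observations (CLEC / sub-measure combinations with
# reportable data this month) for the measures reported per CLEC.
observations_by_measure = {2: 140, 3: 120, 7: 95}

# The aggregate measures (24, 42, and 44) contribute just the number of
# sub-measures with data.
aggregate_submeasures_with_data = {24: 6, 42: 3, 44: 2}

denominator = (sum(observations_by_measure.values())
               + sum(aggregate_submeasures_with_data.values()))
print(denominator)  # 366 with the invented counts above
```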

The following formulae specify how payments are calculated in each category:

General Parameters.

Category A.

Category B.

Category C.

Special Issues.

The CLECs qualifying for Category B incentive payments are those with activity on sub-measures in Measures 2, 3, and 40.

Category C is applied to all sub-measures.

The Category C failure rate is determined by the number of single-month failures in the month in question.

The rules for entering and leaving the chronic state (there is no extended chronic state) are the same as those for the other categories.

EXHIBIT 1

FACTUAL ANALYSIS

The following incidents are reasonable exceptions that can be used to mitigate a statistical finding of out-of-parity (or benchmark miss), provided that the incident impacted the CLEC to such a degree as to make otherwise compliant performance non-compliant:

I. Significant activity by a third party external to Pacific Bell* (not controllable by Pacific Bell)

II. Environmental events not considered force majeure

III. Failure of CLEC process/system or those of a third party vendor, including a Service Bureau Provider, acting on behalf of CLEC

*Note: Pacific Bell's sub-contractors or other Pacific Bell agents are not considered an external third party.

EXHIBIT 2

FORECASTING PLAN

CLECs shall submit forecasts to Pacific Bell for the following categories of products/services:

FORECAST MAPPING TO PERFORMANCE MEASURES

Type of forecast: Service Order, Collocation, or Interconnection

Pre-Ordering (Service Order forecast)
· 1 - Av. Response Time

Ordering (Service Order and Interconnection forecasts)
· 2 - Av. FOC Notice Interval
· 3 - Av. Reject Notice Interval

Provisioning (Service Order and Interconnection forecasts)
· 5 - Percent of Orders Jeopardized
· 6 - Av. Jeopardy Notice Interval
· 7 - Av. Completed Interval
· 9 - Coordinated Customer Conversions
· 9A - Frame Due Time Customer Conversions
· 10 - PNP Network Provisioning
· 11 - Percent of Due Dates Missed
· 14 - Held Order Interval
· 15 - Provisioning Trouble Reports
· 16 - Percent Troubles in 30 Days for New Orders
· 18 - Av. Comp. Notice Interval

Maintenance (no forecast mapping)
· 19 - Customer Trouble Report Rate
· 20 - Percent of Customer Trouble not Resolved within Est. Time
· 21 - Av. Time to Restore
· 23 - Frequency of Repeat Troubles in 30 day period

Network Performance (Interconnection forecast)
· 24 - Percent Blocking on Common Trunks
· 25 - Percent Blocking on Interconnection Trunks
· 26 - NXX Loaded by LERG Effective Date

Billing (Service Order and Interconnection forecasts)
· 28 - Usage Timeliness
· 29 - Accuracy of Usage Feed
· 30 - Wholesale Bill Timeliness
· 31 - Usage Completeness
· 32 - Recurring Charge Completeness
· 33 - Non-recurring Charge Completeness
· 34 - Bill Accuracy
· 35 - Billing Notice Completion Interval
· 36 - Accuracy of Mech. Bill Feed

Database Updates (Service Order forecast)
· 37 - Av. Database Update Interval
· 38 - Percent Database Accuracy
· 39 - E911/911 MS Database Update Interval

Collocation (Collocation forecast)
· 40 - Av. Time to Respond to Collocation Requests
· 41 - Av. Time to Provide a Collocation Arrangement

Interfaces (no forecast mapping)
· 42 - Percent of Time Interface is Available
· 44 - Center Responsiveness

EXHIBIT 3

DECISION MODEL

Revised from D.01-01-037, Appendix C

Footnote 2: In prior drafts of this plan, Categories A, B, and C were designated Categories 1, 3, and 4, respectively. The category designated Category 2 in prior drafts is not used in this plan.
