The Order Instituting Rulemaking (OIR) for this proceeding calls for us to examine the service quality results for Pacific and Verizon in Phase 2B, and to consider regulatory changes - including alteration of the NRF framework to account for any problems we find - in Phase 3:
In Phase 2 of this proceeding, the Commission will assess how service quality has fared under NRF. This assessment will focus on the quality of service provided to end users by Pacific and Verizon. Issues that are beyond the scope of this proceeding include the following: (1) the quality of service provided by Pacific and Verizon to other carriers; (2) requests for relief that are better addressed in complaint proceedings or enforcement OIIs; and (3) issues regarding universal service.
. . .
In Phase 3, the Commission will consider whether and how NRF should be revised to achieve the Commission's goal of high-quality service. Parties will have an opportunity in Phase 3 to recommend specific revisions to NRF that should be considered by the Commission in light of the record developed in Phase 2 regarding how service quality has fared under NRF. There will not be an opportunity in Phase 3 to litigate issues of fact regarding service quality. All litigation of factual issues pertaining to service quality must occur in Phase 2.
. . .
Parties may also offer recommendations in Phase 3 regarding how NRF should be revised to promote the availability of high quality services, such as a system of financial carrots and sticks tied to measurements of service quality.
Therefore, in this decision, we make factual findings regarding the service quality performance of Pacific and Verizon over the NRF period (January 1, 1990 to the present), but do not propose regulatory changes at this juncture. Because the NRF period is lengthy, we do not simply focus here on the carriers' most recent performance. Rather, we examine their performance over the entire NRF period, and where we find evidence of problems with the service quality of either company at any time during that period, we identify the problem. In some cases, the most recent data may indicate that quality is improving, and if that is the case we point it out. By the same token, if the positive trend is of short duration, and past problems endured over a significant period of time, we point this out as well. We make every effort to distinguish statistically significant trends from changes in performance that are artifacts of the graphical scales used to illustrate our data or changes that are best understood as random variation.
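By way of illustration only, the sketch below (written in Python, using hypothetical monthly figures that are not drawn from the record in this proceeding) shows one conventional way to distinguish a statistically significant trend from random variation: fit an ordinary least-squares line to the measure over time and examine the p-value associated with the slope.

    # Illustrative sketch only; the monthly values are hypothetical and are not
    # taken from the record in this proceeding.
    from scipy import stats

    months = list(range(1, 25))  # 24 consecutive months of observations
    # Hypothetical percentage of out-of-service troubles cleared within 24 hours
    cleared_within_24_hours = [
        92.1, 91.8, 92.4, 91.5, 90.9, 91.2, 90.6, 90.8, 90.1, 89.7, 90.2, 89.4,
        89.8, 89.1, 88.9, 89.3, 88.6, 88.2, 88.7, 87.9, 88.1, 87.5, 87.8, 87.2,
    ]

    # Ordinary least-squares fit of the measure against time.
    trend = stats.linregress(months, cleared_within_24_hours)

    # A small p-value (conventionally below 0.05) suggests the slope reflects a
    # real trend; a large p-value suggests the movement is consistent with
    # random variation rather than a genuine change in performance.
    print(f"slope per month: {trend.slope:.3f}")
    print(f"p-value for trend: {trend.pvalue:.4f}")

This is a schematic of the kind of test we have in mind, not a description of the particular analyses relied on in this decision, which are addressed in the discussion sections below.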
In assessing the service quality of Pacific and Verizon, as well as the effects of NRF, we face several methodological obstacles. First, we have little service quality data from the period preceding the adoption of NRF. Thus, it is not possible to compare service quality under NRF with service quality before NRF was adopted. Second, the data included in service quality measures changed over time, sometimes because of a change in corporate organization, sometimes because of a change in technology, and sometimes because of a change in the mixture of services sold. Thus, even when a service measure remained stable over time, the activities measured may have changed dramatically. Third, different companies have interpreted the same measure differently. Thus, it is difficult to compare the performance of one company with another. Fourth, during the period under study, virtually every regulatory jurisdiction adopted some version of price cap regulation. Moreover, the data introduced into this proceeding concerning a reference group of firms did not distinguish which companies were under price cap regulation and which were under rate of return regulation. Thus, it is not possible to compare the service quality of companies under price cap regulation with the service quality of those under rate of return regulation.
To answer our questions concerning the level of service quality and the effect of the change to price cap regulation, our investigation uses a variety of measures and methods for assessing service quality. Each has advantages and disadvantages, and no single methodology provides a definitive answer.
To assess the service quality of Pacific and Verizon, we examine direct measures of the provision of certain services. In particular, our GO 133-B defines specific measures associated with the quality of telecommunications services and sets standards for all but one. For Pacific and Verizon, we compare their performance against each standard and determine whether there are statistically significant trends in service quality over the measurement period. Similarly, using the FCC's ARMIS measures, we compare the performance of Pacific and Verizon against a reference set of utilities and against each other. In addition, we assess the performance of Pacific and Verizon on Merger Compliance Oversight Team (MCOT) measures, also adopted by the FCC. Although we will discuss the strengths and weaknesses of each of these measures in the discussion sections below, it is important to remember that these are measures of utility performance, not necessarily measures of overall "service quality." Moreover, and most importantly, we do not know precisely to what extent consumers view these attributes as important to service quality. Indeed, it is highly likely that consumers will view "missed appointments" by the telephone company as a more serious flaw in service quality than waiting 20 seconds on the phone for a customer service employee to answer.
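As a further purely illustrative sketch (again in Python, with hypothetical figures and an assumed standard level rather than any actual GO 133-B value), the comparisons described above reduce to two simple checks: how often a company's monthly results fall short of the applicable standard, and how its average performance compares with that of a reference group of utilities.

    # Illustrative sketch only; all figures and the standard level are assumed
    # for illustration, not taken from GO 133-B, ARMIS, or the record.
    ASSUMED_STANDARD = 90.0  # assumed minimum percentage for the measure

    company_results = [91.2, 89.5, 90.4, 88.7, 90.1, 89.9]          # hypothetical
    reference_group_results = [92.0, 91.5, 90.8, 91.1, 90.9, 91.4]  # hypothetical

    months_below_standard = sum(
        1 for value in company_results if value < ASSUMED_STANDARD)
    company_average = sum(company_results) / len(company_results)
    reference_average = sum(reference_group_results) / len(reference_group_results)

    print(f"months below standard: {months_below_standard} of {len(company_results)}")
    print(f"company average: {company_average:.1f}")
    print(f"reference group average: {reference_average:.1f}")

The actual measures, standards, and reference groups we rely on are discussed in the sections that follow.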
To address the larger issue of how customers view the quality of service offered by Pacific and Verizon, we rely on survey data that directly ask customers for their views of service quality. The record in this proceeding includes several surveys of customer satisfaction with each utility. In particular, the record includes a survey conducted by ORA addressing the quality of service for both Pacific and Verizon.
Pacific has presented the results of a survey it conducts as part of its ARMIS filings with the FCC, known as ARMIS 43-06, and as part of the monitoring reports it files with this Commission (PA 02-04). In addition, Pacific presented the results of two surveys conducted by external firms, one by IDC and the other by JD Power.
Verizon also presents its ARMIS 43-06 survey. In addition, Verizon notes that it surveys its California customers by conducting over 1,000 interviews per month covering Directory Assistance, Consumer and Business Provisioning (which covers installation of new service), Consumer and Business Repair (which covers diagnosis, repair, and restoration of existing service), and Consumer and Business Request and Inquiry (which covers requests and inquiries directed to the Business Office regarding customer bills, products and services, prices, and company policies).6
In general, each survey has particular strengths and weaknesses. Moreover, since customers only infrequently interact with a telephone utility, general surveys can provide a measure of service quality that lags behind current conditions. Other surveys, which sample customers who have recently interacted with the utility, provide a more current measure of service quality. In our analysis below, we will assess the value of the evidence provided by each survey and use it to inform our overall assessment of service quality.
Finally, although the average experience that a customer has with a phone company is an important factor in our assessment of service quality, we are also concerned with the quality of service provided to customers when things go wrong. To aid in our assessment, we examine the data accumulated by the Commission's consumer service staff concerning complaints lodged by customers about each utility's service. In addition, we examine the record of formal legal complaints adjudicated by the Commission for each utility during the period covered by NRF.