CS1352 (Planner)
"CS1352 said that her supervisor reviews survey results every month. At that time, her supervisor tells CS1352 to sell the survey. CS1352 stated she tells customers they may be surveyed and that she wants a 5+. She also explains the scoring scale to them, adding that a 1 to 4 is not a passing score."
According to CPSD, SCE management played a key role in selling the survey. CPSD argues that SCE's management required its planners to sell the survey at every point of contact with its customers. SCE witness Ken Trainor, the director of the Design Organization (2000-2004), agreed that planners were instructed to mention the 5+ survey score at each point of contact with customers. (TR V6 p. 771:3-8.)
Exhibit 11, sponsored by SCE, includes the following e-mail from management to planners:
CUSTOMER SATISFACTION SURVEY
3. What is your technique for "selling" customers on completing service surveys with a 5+ rating?
· ULOG - About "My" service.
· Use "delighted" in conversations with customers.
· Use scale? Don't use scale.
· Mention to customers the rating of 5 - 5+.
· Part of the job is to discuss the survey.
· Uncomfortable selling the survey - try to make it positive.
· Customer satisfaction is something learned.
· Personal vs. technical skills.
· First 5 minute impression.
· Common sense knowing how they are, their personality.
· Ask customers to take their time in responding to the survey.
· At times customers remind the planner.
· Obtain good customer phone number - not their voice mail.
· Follow-up call - next day/within a week after job is completed.
· If you are meeting with the customer who is going to be surveyed, tell them the survey is about the planners service and that anything less than a 5/5+ is the same as a zero. This should be explained at the beginning and again at the completion of the project.
· Initially tell the customer they will be surveyed. Must be presented with a positive attitude.
· Remind the customer throughout the job that they will be surveyed.
· Find the right person to put on the "ULOG."
· Negotiate with customer.
· If there is any question that the customer is not going to give me a 5/5+, I do not put them on the "ULOG."
· Our job is to provide excellent customer service.
· How do we work with the individuals who are uncomfortable selling the survey/providing excellent customer service? and
· Mr. 5+, comments from the customer.
SCE admitted that the scope, extent, and general rumors about such misconduct suggest a general failure on the part of the Transmission and Distribution Business Unit (TDBU) management to detect and investigate these issues. SCE concluded that the ability of the Director of Design to lead the Design Organization had been compromised. He was transferred from that position. (Exh. 11, p. 70.)
SCE's own investigation led it to conclude that the survey itself was widely perceived by planners as being confusing, unfair, and misguided. In interviews, virtually every planner voiced concerns that customers did not understand the 5/5+ survey scale, and, accordingly, might inadvertently give SCE a failing score even if the customers were reasonably satisfied with SCE's level of service. Similarly, most planners questioned the validity of what effectively became a pass/fail scoring system. (Exh. 11, p. 75.)
SCE conducted an investigation of the Planning organization by interviewing essentially every current planner and by conducting comprehensive computer data analyses of meter and work orders to determine which techniques were used. As a result of this investigation, set forth in Exhibit 11, SCE found that planners who attempted to falsify customer contact information used the following methods:
1. With meter orders, which do not require a customer contact telephone number, some planners deliberately removed or failed to input customer contact information into the appropriate customer contact field.
2. Some planners entered letters in the customer contact field instead of numbers.
3. Some planners scrambled the digits of a customer contact number or entered random digits in the customer contact field.
4. Some planners transposed the last digits of a customer contact number. Several employees who engaged in this practice marked a transposed contact number by entering a "99" or "98" in the extension field.
5. Some planners substituted an unrelated telephone number for legitimate customer contact information. Several employees who engaged in this practice utilized a "5+ Customer List," inserted their own cell, office, or home telephone number, or inserted the contact information of other SCE employees or their own family members.
SCE assessed the impact that these methods may have had on customer satisfaction survey results:
1. With respect to Method 1, SCE determined that blank contact information would have been reported by Maritz as an "invalid number." However, as SCE witness Carl Silsbee explained, blanks were a small fraction of this category and frequently appeared along with other, apparently valid numbers in other fields. Furthermore, there are a number of valid reasons why that field might have been left blank. Thus, Method 1 would not have a statistically significant impact on PBR rewards or penalties.
2. With respect to Methods 3, 4 and 5, SCE determined that these falsified contact numbers likely would have been captured by Maritz disposition codes for "Wrong Number." The percentage of planning customer transactions associated with the "Wrong Number" disposition codes, however, is comparable to other survey areas, which tends to show that the frequency of falsification in planning was not statistically significant.
3. With respect to Method 4, more detailed analysis was performed on the practice of transposing the digits of customer contact numbers and coding the extension field by entering a "98" or "99." By conducting this analysis, SCE found that this practice would have resulted in at most 3 or 4 omitted surveys per year, which is statistically insignificant for PBR purposes. (A sketch of this kind of extension-field screen follows this list.)
4. SCE also performed more detailed analysis with respect to Method 5. Specifically, SCE found that the practice of substituting an SCE employee's number in place of the actual contact only resulted in nine misreported surveys, which is far below the level necessary to have a measurable impact on survey results. Likewise, SCE found only 295 sample records for customers identified on the "5+ Customer List," which is insufficient to have a measurable impact on overall survey results.
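The extension-field analysis described in item 3 lends itself to a simple automated screen. The following is a minimal sketch of that kind of check; the record fields and sample data are invented for illustration and are not SCE's actual code or data:

```python
# Hypothetical sketch of the screen described in item 3: flagging work
# orders whose extension field carries the "98"/"99" marker some planners
# used when transposing contact-number digits. Fields and data are invented.

records = [
    {"order": "A-1001", "phone": "5551234", "ext": ""},
    {"order": "A-1002", "phone": "5551243", "ext": "99"},  # marked transposition
    {"order": "A-1003", "phone": "5554321", "ext": "98"},  # marked transposition
]

flagged = [r["order"] for r in records if r["ext"] in ("98", "99")]
print(f"Flagged {len(flagged)} of {len(records)} records: {flagged}")
```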
SCE argues that CPSD and TURN wrongly contend that the invalid numbers reported by Maritz for the Planning organization represent intentionally falsified customer contacts or constitute evidence that should have alerted SCE's management to such actions by planners. SCE says this analysis is unjustified and wrong. SCE analyzed the invalid numbers Maritz was reporting from 2001 to 2003 and found that they were explained nearly entirely by systems and procedural miscommunications (e.g., SCE work orders carrying a "999" designation, cell phone numbers, etc.), not by data falsification. In addition, SCE refers in its Reply Brief to the statements of a number of employees who said that they never altered or put incorrect customer contact information on work orders, nor were they ever told to do so. (SCE R.B., pp. 33 - 34.)
SCE analyzed the assumption that invalid numbers can be used as a proxy for assessing the degree of SCE employee falsification of customer contact information by reviewing customer transaction records for 2000-2003 that were classified as invalid by Maritz. The analysis attempted to replicate the screens used by Maritz to reject records as invalid other than the screen involving the sampling quotas. The purpose of the analysis was to try to identify those records that could reasonably be explained based on factors unconnected with employee misreporting.
The following table shows the results of SCE's analysis:
TABLE 2 (Exh. 1, p. 88)
Determination of Residual Invalid Records
Year | Total Invalid | Explainable Cause | Residual Invalid | Residual Invalid Percentage
2003 | 34,767 | 32,159 | 2,608 | 8%
2002 | 27,330 | 19,636 | 7,694 | 28%
2001 | 15,076 | 10,258 | 4,818 | 32%
2000 | 11,025 | 7,933 | 3,092 | 28%
SCE maintains that this table shows that a significant portion of the invalid records - 70% to 90% - can be readily explained.
The explainable causes were:
1. Meter or work orders where SCE itself requested the work, which made them ineligible for the customer satisfaction surveys.
2. Meter or work order records classified as invalid due to lack of contact information.
3. Records rejected by Maritz screens even though they contained a valid telephone number. These screens included records with a toll-free area code, since Maritz would not call those numbers, and records with a rate schedule code that identified the customer transaction by sampling quota, such as the district in which the work was located.
4. Records with a missing telephone number in the primary field, which were marked as invalid even though a seemingly valid telephone number appeared in one of the other telephone number fields.
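The residual figures in Table 2 follow arithmetically from the reported counts. As a check, the following sketch recomputes the residual and explained shares from the Total Invalid and Explainable Cause columns; the numbers come from Table 2 itself, and the code is illustrative only:

```python
# Recomputing the "Residual Invalid" columns of Table 2 (Exh. 1, p. 88)
# from the Total Invalid and Explainable Cause counts reported there.

table2 = {  # year: (total_invalid, explainable_cause)
    2003: (34_767, 32_159),
    2002: (27_330, 19_636),
    2001: (15_076, 10_258),
    2000: (11_025, 7_933),
}

for year, (total, explained) in sorted(table2.items(), reverse=True):
    residual = total - explained
    print(f"{year}: residual {residual:,} ({residual / total:.0%}); "
          f"explained {explained / total:.0%}")

# Explained shares run from roughly 68% (2001) to 92% (2003), which is the
# basis for the 70% to 90% range SCE cites.
```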
SCE's explanation of the invalid numbers is not persuasive. Its own investigation (Exh. 11) finds that planners manipulated data and describes the methods used to do so. In addition, we have the testimony of numerous planners and supervisors of planners who described how they manipulated data sent to Maritz.3 Both the investigation and the testimony show that these planners knew what would cause Maritz to reject a number, and they used that knowledge to screen out customer interactions that might result in negative customer satisfaction surveys. Furthermore, the planner testimony shows that management encouraged the manipulation of the customer satisfaction data collection process: if a given customer contact might give a poor survey, management instructed the planners to leave the contact blank, put in an incorrect number, insert an 800 number, or otherwise do what was necessary to ensure that the Maritz data screen would exclude the contact. Hundreds of employee interviews show management stating, in one form or another, "Do what you have to do" to get a good survey response. This was said not only to individuals, but also in planning meetings with twenty or more planners present. (See statements of Shull, CS 1134, CS 1144, CS 1250, CS 1118, etc.)
In its direct testimony in this proceeding, SCE asserts that the data falsification and other misreporting was done only by "certain employees" and was "limited to some of SCE's district offices and a limited number of employees." However, SCE's 2004 investigation report concludes that misconduct was likely widespread:
At least 36 Design Organization personnel engaged in deliberate misconduct including alteration of customer information before the data were transmitted to the independent survey organization, Maritz Research (Maritz). It is probable that other Design employees also acted in this manner, although we are not likely to be able to identify those additional individuals or reliably estimate the number of such occurrences. (Exh. 11, p. 2.)
We believe the employees who reported unethical behavior were accurate in their statements. They risked their jobs, potential promotions, and the respect of their fellow employees to testify against the interests of their employer. Most SCE employees (62%) state that fear of retaliation would prevent them from reporting unethical behavior. (Exh. 14, p. 8. Results of Employee Focus Groups on Ethics.) Overcoming that fear and reporting unethical behavior (especially their own) lends credibility to their statements.
We find that data manipulation and falsification were pervasive throughout the Design Organization and most, if not all, district offices. We further find that it is reasonable to conclude that data manipulation and falsification affected the PBR customer satisfaction data significantly enough that it cannot be relied on for the purpose of granting PBR rewards.
8. Management of the Design Organization Knew or Should Have Known of Data Manipulation
To assess culpability, the most important issue in this investigation is whether SCE management had direct knowledge of manipulation or falsification of customer contact data. CPSD asserts that SCE management (i.e., above the level of first-line supervisors) knew that customer contact data was falsified. SCE asserts the only evidence in support of this accusation are the uncorroborated allegations of a handful of planners against Gregg Fine, former Southeast Division Planning Manager, and Dale Shull, former Vice President, Power Delivery. This evidence, in SCE's opinion, falls short of proving that either of them had actual knowledge of the data falsification that was only uncovered after SCE's investigation.
The hierarchy of management of the Planning Organization with responsibility for customer satisfaction during the 1997-2003 period was:
1. Richard Rosenblum - Senior Vice President - Transmission and Distribution Business Unit (TDBU)
2. Dale Shull - Vice President - Power Delivery - Planning Division
3. Ken Trainor - Director of Design (appointed 2001; previously Project Manager)
4. Gregg Fine - Project Manager
5. Supervisors
6. Planners
8.1. Rosenblum
Mr. Rosenblum testified that he did not know whether pressure was placed on employees to sell the survey. He described selling the survey as a company-sponsored program to communicate with customers: it told them that SCE was trying to give them very high levels of service ("5, 5+ is how we described it"), and that, if it had not, they should please tell SCE what it could do so they had 5 or 5+ service. That, he said, was certainly part of what has been characterized as selling the survey, and it was purposely done. (Tr. p. 299.) He said that as vice president he was the person ultimately responsible for actions taken by planners, but he was not aware of any planners telling a customer that if he did not get a 5 or 5+ it was a failure or he would lose his job. (Tr. pp. 300-301.)
8.2. Shull
Dale Shull did not testify, but he was interviewed by SCE's outside counsel on March 12, 2004. This interview was memorialized and admitted as Exh. 47. In that interview, Shull said that he had heard of individuals leaving a telephone number line on a screen blank, changing a number, or putting an invalid number in the database in order to avoid a negative survey. "He indicated that this was natural: the linemen and planners knew that the meter readers are able to self-select and they think the system is not fair. Additionally, the corporate goals of 1997 - 1999 became a double-edged sword and gave planners an incentive to play games." (Shull, Exh. 47, p. 1.) He said that he and Fine took a zero-tolerance attitude with people who "screwed with the numbers." "He pointed out that there were only two ways to play with the numbers: (1) put in the phone number of someone who knows the answers to the survey (like a brother-in-law who knows to say 5+ when called) or (2) slip digits in the phone number. He emphasized that anyone could do these two things. Planners who were doing these things were counseled; some supervisors were moved to other positions. In other words, individuals were disciplined." (Id., at 2.) He said he knew of specific instances of game playing, but he did not know the names of planners who had engaged in such conduct. Moreover, he did not ask Fine for the names. (Id., at 3.) Shull indicated that Fine ran the customer satisfaction effort by himself, and Shull met with Fine twice a month from 1997 to 2000.
Shull stated that planners often talked among themselves and suggested that planners taught each other how to avoid a bad survey. He stressed that planners were being evaluated on the surveys. He explained that in 1996, many planners retired but, in 1997, construction in California took off. Unfortunately, by 1997, there were no linemen, planners, or even apprentice programs for linemen and planners. SCE had to find people, set up training programs, and then train people. While he and his colleagues were trying to drive all these things along, they also had to deal with the customer satisfaction numbers. They just could not leave those numbers at 65%. So they had to hire people, have a strategy, and drive the customer satisfaction numbers. He said that he made Ken Trainor the Director of Planning after the 1997 reorganization, knowing that he would have the toughest job. Trainor was technically competent, committed to a good product, and had great leadership skills. He handpicked him just as he had handpicked Fine to be the "Customer Satisfaction Czar." He had been to planning meetings, and customer satisfaction was always a topic. He said that he never heard anyone in the field talking about leaving out a telephone number or putting a wrong telephone number into the database. However, Fine did know. He gave very clear instructions to Fine to get it to stop. He believed Fine was carrying out his instructions, in part, because planning supervisors who Fine knew were gaming the system were being removed or relocated.
Nevertheless, in his opinion, some people were still gaming the system, including supervisors. He indicated that he was sure that every planner at one time or another had slipped a digit when they had encountered a customer they knew would hurt their scores. He pointed out that the temptation must be great, especially when the planners know how difficult it is to get caught. Telephone numbers are sometimes entered incorrectly. He asked Fine about misconduct at each meeting. Fine told him that it was hard to find, but they agreed that if Fine could find it and prove it, then he needed to counsel the person and deal with the planning supervisor. Sometimes it meant firing the planning supervisor. He said that as a vice president, he did not have the time to go into the specifics of the counseling or the discipline.
Finally, he said he had not talked to Rosenblum about these issues; he had never reported that there was a problem. He did not know the planner's name in any of these situations. He did not get into that level of detail. Rather, he felt that SCE had to deal with the issue through training.
8.3. Trainor
Trainor testified that in 2001 he was promoted to Director of Design, responsible for all planners in the Design organization; prior to that he was a project manager. Fine and Williford, both project managers, reported directly to him, and he reported to Shull. He testified that he knew nothing of widespread manipulation of the survey prior to SCE's investigation.
He testified that "selling the survey" was shorthand for legitimate efforts to explain the survey to customers and in the process identifying ways to genuinely improve customer service. It described efforts in Planning and in other areas of the company to be sure customers who might be surveyed by Maritz actually understood the survey categories and how they were used at SCE. In the process, planners often asked customers if they were "delighted" or "fully satisfied" with the service they received. If they were not, customers were asked what else could be done in order to achieve those levels of satisfaction. Prior to the PBR investigation, SCE management had no reason to believe that there was anything improper about these efforts. He admitted that in some Planning offices, individual planners and some supervisors took selling the survey to unacceptable extremes. But that should not obscure the fact that if conducted as management expected, selling the survey was appropriate and the communication to customers was part of SCE's efforts to improve customer satisfaction.
Trainor gave some examples of planner interview comments about selling the survey, such as the following (Exh. 1, pp. 50-51):
"In selling the survey, CS 1275 says his people tell the customer they may be contacted by an independent survey firm, the scale is 1 through 5+, and if we're doing less than 5 or 5+ to please tell us what we can do or let a supervisor know what we can do. They tell a customer that 5 or 5+ is what we're aiming for, but do not indicate that only a 5 or 5+ counts."
"When a job is near completion, CS 1300 stated that planners are taught to `sell the survey' to their customers by telling the customer that they may be asked to participate in a survey. The planners explain to their customers that they are striving for a 5 or 5+. If they are not going to receive a 5 or a 5+, they are instructed to request that the customer call their supervisor to let the supervisor know what else the planner could do. She denies informing the customer that a `4' or below is a bad score."
He said that telling customers that only 5 and 5+ scores counted was also accurate given the PBR structure, which would not have been intuitive to customers.
Trainor said Planning made a presentation to planners on customer satisfaction on May 1, 2002, which featured a chart based on an analysis of Maritz verbatims through March of that year. It showed a 75% 5/5+ score and estimated that an additional 13% could be achieved through improved communication and 5% through selling the survey. These additional percentages were derived from the verbatims: surveys where a score below 5 was attributed to poor communications with the customer in the verbatim comments, and surveys where the customer's verbatim indicated complete satisfaction but the score was a 4 or below. To Planning management, this latter group represented customers who did not understand the survey scale. Had they been given that explanation, their score would have reflected their verbatim. This was the proper role for selling the survey.
In regard to planner turnover, he identified offices that had unusual and sustained personnel shortages. He said South Bay was one, in the initial 2001-2002 time frame; Foothill was another that had a big turnover and had some relatively low scores in that time frame; San Jacinto was one which was chronically short; there was a huge growth of new business workload out there. Compton was another office that had a pretty high turnover of employees and a shortage, with a pretty sizable workload to face.
He concluded by saying that CPSD's allegations are wrong. There were not clear indications to Planning management and SCE executives during the PBR period that lower level personnel must have been falsifying customer satisfaction survey data. He identified CPSD's indications as: (a) broad management awareness and support for selling the survey efforts; (b) supposed trends of steady or improving customer satisfaction results in the face of severe shortages of trained planners and dramatically increasing workload; (c) the consistently large percentage of invalid customer telephone numbers reported by Maritz for Planning compared to other surveyed activities; and (d) customer satisfaction results unaffected by the California energy crisis.
Trainor contended that each of these supposed early warnings for management was nothing of the sort. In fact, it was management's intent in selling the survey to genuinely improve customer satisfaction, and these efforts were successful. CPSD's assumption of a pattern of customer satisfaction unaffected by available personnel and workload is simply wrong when one looks at the actual data. Customer satisfaction survey results do vary with these and other conditions at the office and regional levels - indicating to management that these surveys were indeed reflecting planner performance accurately.
8.4. Fine
Gregg Fine, Project Manager, was appointed by Rosenblum and Shull to be the principal contact for the customer satisfaction survey. He did not testify at the hearing, but his methods of improving survey scores were discussed in great detail. Auchard, an SCE employee in the Planning department, testified by deposition (Exh. 65, p. 22):
Fine described what he meant by selling the survey:
Explain to the people what "delighted" meant, how the scale worked from 1 or, actually, zero to 5-plus, that the 5 or 5-plus scores were the only thing that mattered, so we had to explain to the people that if we didn't get that we were basically getting - in equating it to school, it would be an F. You either get an A by getting a 5 or 5 plus, or you get an F. So we would have to explain to them that 5 or 5+ is what we were shooting for and then if we didn't get that, what could we do to get those scores from them.
To determine Fine's (and other employees') involvement in data falsification, CPSD interviewed at least 40 SCE employees and reviewed statements of hundreds more. Excerpts from those interviews were introduced in Exh. 90. Many of the interviews discussed Fine and other senior managers. Exhibit 90 is voluminous and impractical to include in this decision. We set forth a representative sample.
CS1134 (Planning Supervisor) recalled a planner meeting in 1996 or 1997 that took place in the Huntington Beach Planning Department, in which Gregg Fine was talking about customer satisfaction. He recalled Gregg Fine making the following statements about customers who were not happy with the service: "If that customer doesn't like us, he won't be getting a phone call;" and "If that 1 on the ULOG screen looks like a 7, then it's a 7."
CS1134 stated he read a June 3, 2004 letter at his termination hearing in which he made the following statement about SCE management's knowledge of planners' gaming the customer satisfaction survey scores: "The Managers and former Superintendents across T&D have always had absolute direct knowledge of unethical behavior." He stated he made this statement because in at least two customer satisfaction meetings with managers and superintendents in attendance, he heard Gregg Fine encouraging planners to change customer telephone numbers to avoid a bad survey. He stated Gregg Fine told the planners "If the customers don't like us, they won't be getting phone calls." And "You can change numbers. They look like this."
CS1250 (Planner) recalled another meeting that took place while she worked in Ontario. This was a meeting at which all the Ontario planners were present, along with the supervisor, CS1183, and Fine. At one point, they discussed customer satisfaction and what they could do to improve their scores, such as implementing the seven points of contact. At the end of this discussion, she recalled that Mr. Fine stated that they should do whatever it takes, and that he would deny he ever said that.
CS1144 (Planner) stated that intermediate level supervisors gave more implicit than explicit direction. He said that Gregg Fine, the Southeast Division Planning Manager, had told planners that they are smart people and should do what they have to do to obtain good scores . . . . He does recall Ken Trainor being in meetings where planners discussed deleting or putting in incorrect contact numbers. CS1144 indicated that Ken Trainor stated that he does not want to know about changing numbers but planners should just do what they have to do. . . . He heard such statements a couple of times from Gregg Fine and Ken Trainor in regional meetings upstairs in the Santa Ana Service Center. He added that he also recalls one meeting at which Dale Shull, then Vice President of Power Delivery, similarly told the planners to do what they had to do but that Dale Shull should not be informed about what actions were taken.
CS1116 (Planner) recalled a planner meeting in late 2000, in which Gregg Fine talked to the planners about customer satisfaction and how important it was for planners to sell the survey to customers. He recalled that somebody asked Gregg Fine what a planner should do if they did not have a chance to meet with the customer and sell the survey. Gregg Fine told the planners "Well, if that's not a good number, then make it into one." CS1116 quoted Gregg Fine as stating "if that's not a good number, you'd better do something to the number."
CS1118 (Planner) recalled a customer satisfaction meeting in the summer of 1998 or 1999, in which five levels of senior management (Dale Shull, Gregg Fine, Mike Keller, Dick Karper, and Ken Trainor) confronted the planners in the South Bay District Service Center about their low customer satisfaction scores. He recalled that CS1116 (Planner) asked the five managers what the planners should do when they had a customer that was not happy with the service. In response to that question, Gregg Fine asked the planners "[d]id you get the right number that needs to go on that job?" CS1118 said he understood this to mean, what his supervisor, CS1158, instructed the planners numerous times before in planning meetings: "You make sure you find the contact, if it's the ditch digger that's digging the ditches that is going to be happy out there you get that number and to get 5, or 5+." CS1118 stated that none of the managers in attendance contradicted Gregg Fine's suggestion that planners should find the best customer contact who would provide a good customer satisfaction score.
CS1129 (Planner) confirmed that finding the best customer contact on a project was part of selling the survey. He stated he was instructed by Gregg Fine, Division Planning Manager, and all his supervisors to find the best customer contact, the person who would provide him a good customer satisfaction survey score, and designate that person as his customer contact in the work order. He stated the person would have to be someone who was involved with the project. He recalled that in a planner meeting someone asked if he could select the person laying sprinklers as a customer contact. CS1129 stated management concluded that since the person laying sprinklers was involved with the project, that person could be selected as a customer contact.
CS1115 (Designer) said Gregg Fine, Division Planning Manager, drilled into them the need for planners to score 5+ on their customer satisfaction surveys. She stated Gregg Fine wanted the planners to give out the 5+ pens and the 5+ mugs, and to make sure that not one customer got by not knowing about the survey. She said Gregg Fine "drilled" the need for planners to sell the survey, and she stated she never heard him suggest any other way to achieve good customer satisfaction other than by selling the survey. She stated that "[e]very time you see him (Gregg Fine) it was nothing like: Hey, CS1115, how are you doing? It was like: Are you getting that 5+? How is that 5+ going? It was a nightmare."
CS1137 (Planner) stated SCE management (Gregg Fine, Division Regional Manager) instructed her in selling the survey to tell the customers that anything less than a 5 or 5+ is a failing grade.
CS1134 (Planning Supervisor) confirmed that management directed planners to sell the survey to their customers and to explain that any rating less than a 5 or 5+ was inadequate or a failure.
CS1130 (Planner) stated that "[w]e were required to answer the phone, Five-plus is a must. And we were required to mention the survey in the first contact with the customer and explain the survey to them." "I must provide you with five-plus service." "Like I said, I had to answer --- I was required to answer the phone, Five-plus is a must. Hi, this is CS 1130. How can I help you. I said 5+ is a must."
8.5. Supervisors
CS1113 (Planning Supervisor) stated he instructed his planners to explain the survey scale to their customers and to tell them a score of 1 to 4 was a failing grade or a zero, and a 5 or 5+ was a passing grade. He stated that SCE management directed planners to tell their customers anything less than a 5 or 5+ was a failure. He stated management considered it important for planners to sell the survey because they learned from the customer verbatims that some customers who were happy with the service provided poor customer satisfaction survey scores.
CS1113 confirmed he made the following statement to the SCE investigators: "The more the survey is sold, it is more likely to result in a five-plus score. Some customers tell the planners to stop selling the survey and say, `I know. Five-plus.'" CS1113 stated he learned from his planners that customers were getting sick and tired of hearing about the customer satisfaction. He confirmed that he made the following statement during his February 15, 2004 interview with SCE: "The customer is contacted too often. For example, depending on the stage of a project, there may be eight points of contact such as when SCE receives money, when a contract is signed, etc. At every customer contact, the planner is encouraged to sell the survey. A customer could be contacted up to eight times if Maritz calls at the end of a project."
CS1132 (Planning Supervisor) stated management at monthly planner meetings emphasized the need for planners to sell the survey to their customers, and he said management considered selling the survey an important means to increase customer satisfaction scores.
As we have said, the most important issue in this investigation is whether SCE management had direct knowledge of manipulation or falsification of customer contact data. SCE, in its brief, presents the issue as follows:
CPSD first alleges that SCE management had actual knowledge of falsification because one planner accused Shull and five planners accused Fine of providing vague direction to select the customers who would provide the highest survey score. These accusations against Shull and Fine have never been corroborated. In fact, both Shull and Fine unequivocally denied any knowledge of planners selecting the customer who would provide the highest survey score as the contact despite the existence of other customers more directly involved in the project. Accordingly, the fact that these vague and uncorroborated accusations were made by a handful of individuals - out of the literally hundreds that were interviewed - cannot suffice to impute knowledge on SCE management that this practice was actually occurring.
CPSD also alleges that SCE management had actual knowledge of falsification because three planners accused Fine of providing vague direction to falsify customer contact information. These accusations by a handful of planners were never corroborated and, in fact, the overwhelming number of planners actually refuted that such statements were ever made. Thus, there is no reliable evidence that SCE management knew that customer contact data was falsified. (SCE O.B., p. 56.)
SCE contends that the accusations against Shull and Fine have never been corroborated. However, according to the strict legal definition, corroboration is not required in this investigation. Evidence Code Section 411 states:
§ 411. Direct evidence of one witness sufficient
Except where additional evidence is required by statute, the direct evidence of one witness who is entitled to full credit is sufficient for proof of any fact. (Stats. 1965, c. 299, § 2, operative Jan. 1, 1967.)
We assume SCE uses the term to mean more than a few or, perhaps, more than five. But, regardless of SCE's definition, the evidence is overwhelming that management knew of the manipulation and falsification.
9.1. Early Warning Signals
SCE contends that its management had no early warning signs that customer satisfaction survey results were obviously inflated by falsification of customer contact data. CPSD asserts that SCE management turned a blind eye to the early warning signs that customer satisfaction results for planning were clearly inflated by falsification of customer contact data.
The first early warning sign was the lack of trained planners. SCE informs us that its Voluntary Retirement Offer (VRO) program was put into effect in 1996, and the Design Organization was hit hard by retirements and transfers to other organizations. (See Exh. 11, p. 18.) As soon as the full magnitude of the departures was clear, Design Organization management initiated the Design Service Representative (DSR) program. This was a recruiting effort to locate people outside SCE who had a combination of technical and customer service skills. Recruits were then trained in the necessary skills in order to take Planner 1 positions. In the next four years, the DSR program brought over 100 new planners to the organization. And, from 2001 through 2004, a new training program was implemented, which brought in an additional 80 new planners.
At about the same time, new construction boomed. Between 2000 and 2003, SCE went from planning and installing about 55,000 new meters per year to approximately 73,000 meters per year. Older, more established areas saw increased work due to SCE's infrastructure replacement program and new inspection and maintenance programs. With their older infrastructure, these established regions added to the workload of the Design Organization.
All of these circumstances dramatically increased individual planners' workload. At the same time, attrition remained high. Furthermore, the Design Organization had fewer planners who had experience as linemen or in other field positions. A decade ago, one-half of all planners worked in other SCE positions prior to becoming planners; today, approximately 80% of planners have been hired directly into their jobs.
The evidence is unclear regarding the number of planners prior to the retirement offer, but we estimate it must have been about 250. Mr. Trainor testified that while SCE only had 116 planners in 1997, the number climbed to 162 in 1998, 188 in 1999, and 207 in 2000. Following a dip to 175 planners in the energy crisis year of 2001, Planning staffing levels increased steadily to 202 in 2002, 288 in 2003, and 296 in 2004.
SCE argues that the evidence shows that offices with fewer planners had lower scores than more fully staffed offices. Further, if Planning was in such a complete state of disarray that it could only achieve high survey results by cheating, one would expect to see survey results plummet after the falsification and selling-the-survey efforts ended in 2004 with SCE's investigation. In fact, however, there was no appreciable decline in survey results in 2004 and 2005 from the levels achieved in 2002 and 2003. Finally, customer satisfaction results decreased as expected during the California energy crisis in 2001.
Another early warning sign, contends CPSD, was the large number of invalid numbers reported by Maritz. SCE claims the invalid numbers do not represent falsified customer contacts. It says that in mid-2002, Planning management became aware for the first time that over one-third of the numbers being supplied electronically to Maritz for its Planning customer satisfaction surveys were being rejected by Maritz as invalid. While the facts surrounding the invalid numbers are somewhat complicated, SCE's extensive reconstruction of the events in 2002 to 2005 and its detailed examinations of the invalid numbers show that most of the numbers in the Maritz "invalid number" category were due to computer system conflicts between the Maritz number selection and screening programs and the various customer number fields used in Planning.
CPSD argues that senior management knew of and condoned the manipulation and falsification. During the period 1997-2003 senior management knew or should have known:
1. In 1996 SCE through a retirement program caused massive personnel cuts of experienced planners.
2. In 1997 SCE had no training program to replace the large numbers of planners who retired.
3. From 1997 through 2003 SCE hired technically unqualified planners primarily for their communication skills.
4. Planning offices during the period 1997-2003 were understaffed with competent, experienced planners.
5. During 2001, SCE and its customers suffered through the California energy crisis, with SCE experiencing a financial crisis. The financial crisis caused SCE to again cut costs (e.g., SCE stopped using contract planners).
6. During the period 1997-2003 many planners were improperly selling the survey.
7. During the period 1997-2003 many planners were improperly manipulating customer contact telephone numbers to prevent unfavorable comments.
8. During the period 1997-2003 survey results were rising to exceed benchmarks set based on pre-1996 standards when planner staffing was more than adequate and technically competent.
Figure 3 tells the story graphically. Customer satisfaction survey results show that in 1994-1995, with a full complement of experienced planners, SCE received 5 and 5+ scores about 75% of the time. In 1996 about half the planners retired, and in 1997 results suffered; yet in 1998-2000, despite understaffed offices, inexperienced planners, and a greater workload, survey results substantially exceeded 1994-1995 levels. The leveling off after 2001 can be attributed to the fact that the inexperienced planners had, over time, gained experience, plus the pressure to sell the survey was in full swing.
SCE asked the question, "Why did management fail to discover or stop this problem?" and answered it in Exhibit 11, p. 76, where it finds that the failure of Design Organization management (by management, SCE refers to personnel above the front-line supervisor level) to discover or stop the misconduct is disturbing. It believes a partial explanation may be that, during the time period at issue, Design Organization management was struggling with the unique challenges presented by a new focus on customer satisfaction, an exploding workload, and a serious shortage of experienced and technically skilled planners. In addition, a lack of training and of a team-oriented culture contributed to the behavior. SCE contends it is likely that management's emphasis on customer satisfaction survey results, in the face of both rumors of impropriety and strained resources, conveyed to planners an unintended message that such misconduct was acceptable or at least tolerated. Finally, SCE asserts, it also is clear that senior management believed that employees would act in an ethical manner and simply failed to anticipate that certain employees would manipulate customer contact information in these circumstances, given the ease of doing so.
We agree that management's emphasis on customer satisfaction survey results conveyed to planners the message (which we find was intended) that such misconduct was acceptable. We disagree with SCE's argument that it "simply failed" to anticipate that employees would manipulate customer contact information. For seven years, as discussed below, some management actively encouraged employees to manipulate survey data, and other senior management, who knew or should have known that the data was suspect, filed for, and received, PBR awards.
10. Impacts of Selling the Survey, Data Falsification, and Data Manipulation
Critical to this proceeding is the issue of what impact, if any, the data falsification and manipulation had on the survey results. Although a hotly contested issue, the picture that developed from the written and oral testimony is quite clear: the data falsification and data manipulation that occurred in Planning had a material impact on the survey results and, thus, on PBR.
SCE responded to CPSD's and the intervenors' assumptions:
1. To test the assumption that the invalid number category is largely if not entirely made up of deliberately altered customer contact numbers, SCE examined the customer transaction records for 2000-2003 that were classified as invalid by Maritz.
2. To test the assumption that the widespread nature of data falsification necessarily impacted the survey data, SCE examined the volume of sample meter and work orders sent to Maritz to determine what level of data falsification would be necessary to impact survey results; it also examined the actual survey data for any impact.
3. To test the assumption that service declined over the PBR period due to understaffing, SCE looked at the survey data by district to see whether the scores by district reflected the understaffing that occurred in some districts.
4. To test the assumption that selling the survey biased the survey results upward, SCE hired an independent consumer survey expert to conduct and supervise a study to see what impact selling the survey had on survey results. SCE also retained a specialist in psychological measurement to examine from a psychological perspective the possible impacts from selling the survey.
10.1. Assumption 1: Invalid Numbers as a Proxy for Data Falsification
SCE witness Carl Silsbee tested the assumption that invalid numbers can be used as a proxy for assessing the degree of falsification of customer contact information. He directed an analysis of the customer transaction records for 2000-2003 that were classified as invalid by Maritz. The analysis attempted to replicate the screens used by Maritz to reject records as invalid. The purpose of the analysis was to try to identify those records that could reasonably be explained based on factors unconnected with employee misreporting.
We have discussed this above (Section 7) and are not persuaded that the "explainable causes" could be attributed to non-manipulation of data.
10.2. Assumption 2: The Widespread Nature of Data Falsification by Planners Necessarily Impacted Survey Results
SCE tested the assumption that the widespread data falsification by planners necessarily impacted survey results by looking at the volume of sample meter and work orders provided to Maritz to see how often a planner would have had to engage in wrongdoing in order to impact the survey results.
Comparing just the meter and work order samples actually provided to Maritz with the actual number of surveys Maritz conducted, Silsbee concluded the ratio is quite small. He testified that during 2002, SCE provided Maritz with a total of 83,791 meter and work orders over the 20 two-week sampling periods. From that sample, Maritz only surveyed 1,357 customers, for a 1.6% completion rate. Silsbee also looked at the completion rate during all the years from 1998 to 2003, finding that it ranged between 1% and 3% with an average of about 1.6%.
Because only a small number of customer transactions are actually surveyed in comparison to the number sampled and an even smaller number in comparison to the overall population of customer transactions, there is, Silsbee explained, a significant dilution effect:
Employees did not know in advance which meter or work orders would be selected or which would ultimately be surveyed. Thus, any attempt by planners to manipulate contact information in the overall population of meter and work orders would be greatly diluted by the survey selection process. For example, with a 1.6% completion rate, a service planner would need to manipulate contact information on an average of 62 sample transactions (and a larger number of transactions in the overall population) in order to impact, on average, a single survey. (Exh. 1, p. 92.)
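The dilution arithmetic in this passage can be verified directly from the two figures Silsbee quotes. The following is a back-of-the-envelope sketch using only those figures; it is illustrative, not SCE's actual model:

```python
# Dilution of manipulated records by the Maritz survey selection process,
# using the 2002 figures quoted above (illustrative arithmetic only).

samples_sent = 83_791   # meter and work orders provided to Maritz in 2002
surveys_done = 1_357    # customers Maritz actually surveyed in 2002

completion_rate = surveys_done / samples_sent
print(f"Completion rate: {completion_rate:.1%}")   # about 1.6%

# Sample transactions a planner would have to alter so that, on average,
# one altered transaction reaches an actual survey.
print(f"Altered transactions per affected survey: {1 / completion_rate:.0f}")  # about 62
```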
To detect any statistical trends in the survey data that would reflect falsification, SCE retained Dr. Richard Berk, a professor of Criminology and Statistics. Recognizing that it would never be possible to quantify precisely the number of times planners altered data and how often such conduct might have occurred, Dr. Berk ran a series of analyses to explore from a variety of angles what impact data falsification may have had on the survey results.
First, Dr. Berk compared the proportion of 5 and 5+ scores for SCE employees known to have altered customer contact data with that of other SCE planners. Next, he compared the proportion of 1 and 2 scores for SCE employees who were known to have altered customer contact data with that of the remaining population of planners. He then ran a comparison of the proportion of 5 and 5+ scores across all SCE districts.
SCE asserts that Dr. Berk's results were quite definitive. As Dr. Berk explained, there is simply no evidence that the data falsification had any impact on the survey data:
I conclude that there is just no evidence that the planners who fabricated the contact information were sufficiently effective in their efforts to make any material difference in the fraction of 5 and 5+s in the overall quality of the survey.
The theory that the opposing parties have put forward requires that starting fairly early in the data we have, 1998 to 2000, and so on, you'd expect a gradual, but continuous, and pretty dramatic increase in the fraction of surveys where 5 and 5+ dominated.
So I think of it as kind of a shark fin. You'd expect a dramatic increase to a tip at about 2003, at which point SCE stepped in and put an end to this stuff. And you'd expect then a dramatic drop, just like a shark fin . . . . And there is nothing in the data whatsoever consistent with the shark fin. As I described a few moments ago, it drops in 2001 and is largely flat from there on. (R.T. pp. 907-909.)
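To make the "shark fin" argument concrete: the pattern Dr. Berk describes is a steady rise in the yearly 5/5+ fraction up to a 2003 peak followed by a sharp drop once the practices stopped. The following is a minimal sketch of a test for that pattern; the yearly fractions are invented for illustration and are not the survey data:

```python
# Hypothetical check for the "shark fin" pattern Dr. Berk describes: a steady
# rise in the yearly 5/5+ fraction to a 2003 peak, then a sharp drop. The
# series below is invented for illustration; it is not SCE survey data.

def looks_like_shark_fin(series, peak_year):
    years = sorted(series)
    rising = all(series[a] < series[b]
                 for a, b in zip(years, years[1:]) if b <= peak_year)
    falling = all(series[a] > series[b]
                  for a, b in zip(years, years[1:]) if a >= peak_year)
    return rising and falling

flat_series = {1998: 0.74, 1999: 0.75, 2000: 0.76, 2001: 0.72,
               2002: 0.72, 2003: 0.73, 2004: 0.72, 2005: 0.73}
print(looks_like_shark_fin(flat_series, peak_year=2003))  # False: no fin
```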
We agree with CPSD that the widespread nature of data falsification by planners necessarily impacted survey results. We reject Dr. Berk's statistical analysis. Instead we are persuaded by the evidence in Figure 3, above. Customer satisfaction results increased by 20% to 30% during 1997-2000, from pre-1997 levels, and were higher than pre-1997 levels from 2001-2003, despite a 50% reduction in experienced planners in 1996; despite gradual increases of inexperienced planners from 1997 through 2001 (planners who were expected to learn on the job after a short training period); despite a substantial increase in business 1997-2003; and despite understaffing in Valencia (CS 1132), South Bay (CS 1120, CS 1118), Santa Ana (CS 1119, CS 1140), Huntington Beach (CS 1136, CS 1116), Covina (CS 1135, CS 1133), Ontario (CS 1111, CS 1130), Ventura (CS 1131), San Jacinto (CS 1117, CS 1145), Ridgecrest (CS 1122), and more. (All employee statements are found in Exh. 90.)
Dr. Berk's analysis contradicts reality. The base level of performance was determined when SCE had a full complement of experienced planners. The evidence shows a 20%-30% increase in customer satisfaction after half of the experienced planners retired and were slowly replaced by inexperienced people handling an increased workload. No theoretical analysis can make the facts evaporate. The increase in customer satisfaction survey results was caused by employee falsification and manipulation encouraged by management.
10.3. Assumption 3: Customer Service Declined Over the PBR Period
SCE looked at individual district results during the PBR period to test CPSD's theory that Planning offices with personnel and workload problems showed improved customer satisfaction scores that could only be explained by data falsification and manipulation. Those districts included the San Jacinto office, where new home construction was booming between 2000 and 2003, and the South Bay office, which serves the beach communities south of the Los Angeles airport. What SCE discovered is that scores in these districts actually lagged behind the average customer satisfaction scores.
Again, we agree with CPSD. With all the personnel and workload problems endemic to SCE during the PBR period, the only reasonable explanation for increased customer satisfaction scores is the admitted widespread manipulation and fabrication of data. SCE's discovery that some overworked offices had low survey scores only confirms CPSD's position. We can only conclude that the manipulation had to be greater than the actual evidence submitted shows in order to overcome scores from poorly performing offices.
10.4. Assumption 4: Selling the Survey Results in an Upward Bias
SCE tested the assumption that selling the survey would necessarily result in increased 5 and 5+ scores through the testimony of Dr. Andrew Morrison and Dr. Lee Cooper. Dr. Morrison designed a field test to gather empirical data regarding the effect on customer satisfaction survey results of two different communications that were used to sell the survey during the PBR period. Dr. Cooper provided insight into how consumers would likely react to various selling-the-survey communications. Each of these experts concluded that the practices at issue here did not increase customer satisfaction scores. In fact, they had no discernible impact.
Seeking to separate fact from speculation, Dr. Morrison designed and conducted a field test of two selling-the-survey communications used by SCE employees, including planners. He was seeking to determine what impact, if any, selling-the-survey communications had on survey results. The first script merely informed the customer that he or she might be surveyed, coupled with a statement of the desire to provide 5+ service. This was the suggested script used in the phone centers, and it also tracked pre-survey communications planners used with their customers. The second script added language tracking the "completely satisfied or delighted" language of the survey, as well as informing the customer that any score less than a five would be a failing score. This harder sell was used by some planners. A third script that made no mention of a survey or the 5+ scores was read to the control group. The key observations that resulted from the field test were:
1. Use of pre-survey communications, as tested, does not have a statistically significant impact on raising the customer satisfaction score;
2. The majority of respondents in the two test script groups have no recall of the closing script; and
3. Differences in satisfaction scores are more closely correlated to the type of transaction than the selling-the-survey script.
From this, Dr. Morrison concludes that the results of this test demonstrate that selling-the-survey communications do not result in statistically significant increases in either the overall SCE satisfaction ratings or the PBR transaction 5 or 5+ ratings.
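For readers unfamiliar with how "statistically significant" is assessed in a field test of this kind, the following is a minimal sketch of a two-proportion comparison between a test script group and the control group. The counts are invented for illustration; they are not Dr. Morrison's data.

```python
# Hypothetical two-proportion z-test of the kind a field test like this
# involves. All counts are invented; they are not Dr. Morrison's data.
from math import sqrt

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """z statistic for H0: both groups share one underlying 5/5+ rate."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented counts: 5/5+ responses out of customers surveyed in each group.
z = two_proportion_z(312, 400, 305, 400)  # test script vs. control script
print(f"z = {z:.2f}; |z| < 1.96 means no significant difference at the 5% level")
```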
SCE argues that the evidence of Dr. Cooper, who made an assessment of the various pre-survey communications planners and other SCE employees used to inform customers about the potential survey, is persuasive. Dr. Cooper concluded that even those communications which he did not think constituted best practices had no impact on increasing the mean customer satisfaction survey results, and could actually have had the exact opposite effect. He testified:
Selling the survey is not a well-defined phrase. It includes a number of practices that should be (and have been) stopped, others that are easily modified into acceptable practices, and still others that are acceptable as they stand. Even the bad practices do not necessarily lead to changing the mean satisfaction score. They may have no effect or may leave the mean unchanged and merely act to polarize ratings. They could well increase the negativity of dissatisfied customers. (Exh. 7, p. 18.)
Dr. Cooper emphasized that the only empirical evidence presented in this proceeding supports his conclusion.
Q: You write that: Even the bad practices do not necessarily lead to changing the mean satisfaction score. Can we conclude that there is a chance that may change the score?
A: There is no empirical evidence that they do. The best empirical evidence we have on this point comes from both Dr. Morrison in his controlled experiment and from the empirical results that were obtained in 2004 and 2005 after all these bad practices had been ceased. In both those cases there essentially was no discernible difference. So whatever bad practices preceded the 2004 and '05 years didn't seem to have an impact on the empirical results after they were stopped. (4 R.T. 560-61.)
The phrase "selling the survey" by itself implies active encouragement: "sell- to cause to take . . . to make or try to make sales . . . to promote the sale
of . . . to persuade . . . convince . . . " [Websters New Word Dictionary 1974, p. 1293.) When used by SCE it clearly means "to promote, to persuade, and to convince;" not the standards for an objective survey. There was no ambiguity: "Do what you have to do" was the clear message SCE conveyed to its employees.
Selling the survey resulted in an upward bias. Again, we agree with CPSD. Dr. Morrison's test, conducted in 2006, well after the events at issue, did not replicate the situation as it was in the years 1997-2003. It especially did not (and could not) replicate the position of planners who could lose bonuses, promotions, pay increases, or even their jobs if survey results were poor. CPSD expresses it as follows:
SCE's study . . . provides no information even relevant to how SCE planners communicated with customers during 1997 through 2004. The reasons are:
1. The SCE test was a telephone communication between a customer and a representative unknown to the customer. By contrast, SCE's planner communications were often in person at the jobsite, between a planner and a customer who knew each other.
2. The SCE test made a single request for a high survey score. By contrast, from 1997-2003 SCE management instructed planners to "sell the survey" each time they dealt with customers. Thus, as repeated dealings occurred in planning, so did repeated planner requests for high survey scores.
3. The language used in the test was milder and less subject to bias than "anything less than a 5 or 5+ is a failure and doesn't count," "I'll get in trouble with my boss if you score less than 5+," or other statements planners made from 1997 through 2003.
4. The type of transaction is completely different. The test involved many minor matters, such as turning on service, in which customers tend to be happy. Planning customer satisfaction often involves significant outlays of money and construction deadlines, which puts more at stake in customer satisfaction.
5. The 2006 test assumes service was satisfactory. The evidence shows SCE provided substandard planning service during 1997-2003. No SCE witness has tested whether this major difference affects test results. (CPSD O.B. 80-81.)
We are dealing with two surveys, the Maritz survey and the Morrison survey. Both are invalid: Maritz, because it is the result of SCE employees' manipulation and fabrication of data; and Morrison's, because it could not replicate the circumstances of 1997-2003.
In reaching our conclusion on both surveys, we have considered variations in the survey methodology, choice of the target population, the sampling design used, the questions asked, and how the questions were asked. (See Roper v. Simmons, 543 U.S. 551, 617 (2005) (Scalia, J., dissenting) (quoting Atkins v. Virginia, 536 U.S. 304, 325-27 (2002) (Rehnquist, C.J., dissenting)) (citing R. Groves, Survey Errors and Survey Costs (1989)); Clicks Billiards, Inc. v. Sixshooters, Inc., 251 F.3d 1252, 1263 (9th Cir. 2001); Prudential Ins. Co. v. Gibraltar Financial Corp., 694 F.2d 1150, 1156 (9th Cir. 1982).)
Meter readers read between 400 and 500 meters per day. Meter readers on average will have a customer contact while reading a meter only about 5% of the time. Meter readers carry a hand-held recording device used primarily to record the meter usage they observe. The device has a set of five survey buttons that allows information to be gathered for use in an SCE survey or study. The Number 2 button was reserved for use in connection with the Maritz customer satisfaction survey. Meter readers were instructed to press the Number 2 button when they had a "meaningful and memorable" customer contact, which was described as any verbal interaction with the customer - positive, neutral, or negative. There has never been a practical way to monitor meter readers to ensure they were properly recording every single positive, neutral, and negative customer contact. Thus, a meter reader who had a negative customer contact could have kept that customer out of the pool of potential survey participants by simply failing to push the Number 2 button. Because of this limitation in the customer selection process, SCE proposes to refund the entire $2.4 million of the customer satisfaction rewards attributable to meter reading results.
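At those rates, the pool of survey-eligible contacts was small; rough arithmetic (ours, for illustration only) puts it on the order of

$$0.05 \times 400 = 20 \qquad\text{to}\qquad 0.05 \times 500 = 25$$

customer contacts per meter reader per day, each of which the reader could include in, or silently omit from, the survey pool.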
CPSD concurs with SCE's refund proposal for meter reading, but additionally proposes to penalize SCE $3.5 million as the meter reading component of the customer satisfaction incentive mechanism. CPSD claims that SCE should do more than refund the $2.4 million. CPSD contends that the opportunity for selection bias, and SCE's failure to monitor or audit the selection of potential survey respondents for meter reading provides the basis for the Commission to impose a PBR penalty.
SCE has received $28 million in PBR customer satisfaction awards and has requested $20 million more. The PBR customer satisfaction award has four categories: planning, field delivery (meter reading), phone centers, and business offices (Figure 1). The customer satisfaction results of each category are averaged to determine if there should be an award or a penalty.
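Schematically, and assuming the simple unweighted average the record describes (the decision does not specify category weights), the mechanism scores

$$S_{\text{overall}} = \tfrac{1}{4}\left(S_{\text{planning}} + S_{\text{meter}} + S_{\text{phone}} + S_{\text{office}}\right),$$

with the award or penalty keyed to where $S_{\text{overall}}$ falls relative to the benchmark.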
At the close of hearings, CPSD requested the opportunity in a second phase of this proceeding to review (1) the components of customer satisfaction other than planning and meter reading as well as (2) the system reliability incentive mechanism. SCE supports CPSD; it says the refund and penalty recommendations should be limited to the amount of the customer satisfaction incentive tied to planning and meter reading. Both SCE and CPSD agree that the refund of PBR rewards related to planning and meter reading should be 30%, or $14.4 million, of the $48 million reward. The Presiding ALJ agreed. He said he would consider only the customer satisfaction incentive mechanism corresponding to planning and meter reading in this Phase 1.
In opposition, DRA recommends that SCE refund $28.0 million and forgo $20 million for the period 1997-2003 for its customer satisfaction PBR program due to data falsification and manipulation of the customer satisfaction survey results. TURN concurs. DRA argues that because of extensive problems with SCE's survey process, a full refund of all PBR rewards earned during this period is appropriate. DRA's recommendation relies on the extensive investigation done by CPSD, as well as its own. In DRA's opinion, the impacts of selling the survey, falsifying customer contact information, and employee discretion over choice of customer contacts support denial of all customer satisfaction awards. DRA presented evidence that other SCE divisions aggressively encouraged employees to sell the customer satisfaction surveys. Field service representatives, whose customer satisfaction performance affects PBR, handed out pens, nightlights, post-it notepads, door hangers, etc. with 5+ language. All Phone Center employees were sent a document entitled "5+ Service Script Team Competition" (Exh. 54, pp. 7-8) describing a 5+ service script, in which customers would be told, before they had been surveyed, the customer satisfaction score that SCE desired; the employees were told they would be monitored. A document, "Red Light: Promoting 5+ Customer Service," also encouraged Phone Center employees to sell surveys. (Exh. 54, pp. 9-10.) DRA contends that selling the survey in this manner is inappropriate and had an improper effect on the customer satisfaction measurements. Because of the impropriety of SCE's practices in regard to customer satisfaction surveys, DRA recommends that all PBR awards connected to customer satisfaction be refunded.
TURN maintains that the Commission has sufficient evidence on the record in this first phase to decide, as a matter of policy, that SCE should at a minimum refund and forgo all $48 million in PBR customer satisfaction rewards it has claimed. TURN points out that employees and management manipulated the mechanism for seven years; those seven years of bad data were submitted to the Commission; and it is now impossible to evaluate the impact of performance based ratemaking on service quality.
The recommendations of TURN and DRA amount to 100% of the total customer satisfaction rewards SCE claimed. That result, SCE argues, if adopted by the Commission, would prejudge the determination of the reasonableness of the rewards attributable to the components of customer satisfaction to be evaluated in Phase 2.
We agree with DRA and TURN. We will order SCE to refund all $28 million of customer satisfaction awards it has received and to forgo the $20 million in awards it has requested. The Presiding ALJ acted prematurely when he limited Phase 1 to customer satisfaction for planning and meter reading. This record has sufficient evidence to determine the entire scope of PBR customer satisfaction awards. The issue has been thoroughly briefed by the parties.
We see no useful purpose in considering manipulation and falsification in Phone Centers and Field Delivery (non-meter reading) in Phase 2. Even if we were to find no manipulation and falsification in Phone Centers and Field Delivery, we could not permit a company found to have falsified and manipulated customer satisfaction data to retain $33.6 million of customer satisfaction awards. The process was tainted; fraud and manipulation were widespread. That 100% of the surveys have not been shown to be affected does not require us to parse the awards. Further, all PBR customer satisfaction rewards could be forfeited due to poor performance in just the planning and meter reading departments. The floor penalty provision of the incentive mechanism could result in a complete refund of the $48 million of rewards, because the mechanism has as an integral component the prevention of deterioration in all four areas. (See Figure 1.) It would not be unreasonable to conclude that planning performance and meter reading were worse in 1997-2003 than in previous years, based on SCE's own description of various staffing and workload problems, and order a refund of the PBR rewards based on this outcome.
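The $33.6 million figure is simply the remainder of the $48 million total after the 30% planning and meter reading share discussed above:

$$\$48 \text{ million} \times (1 - 0.30) = \$33.6 \text{ million}.$$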
We have said in other parts of this opinion that it is impossible to reconstruct the events of 1997-2003 because of SCE's fraud and manipulation. We cannot say with certainty that the floor penalty applies, but we can prevent SCE from benefiting from its own wrong.
In D.96-09-092 we authorized a health and safety PBR incentive. The standard was based on data collected from 1987-1993 for four categories of injury and illnesses incidents: (1) first aid incidents,4 (2) non-lost time, (3) restricted duty, and (4) lost time. These last three categories all constitute the California Division of Occupational Safety and Health (OSHA) recordable incidents. This PBR component is based on the sum of first aid incidents and OSHA recordable incidents. First aid incidents were added with the idea that another data point would result in a more stable database. In our midterm review of SCE's PBR, we decided that no adjustments were necessary (D.99-12-035, p. 29). Later, in D.02-04-055, we extended SCE's PBR.
In 1998, SCE's Audit Services' Environmental Health and Safety (EH&S) group conducted an audit of SCE's OSHA recordkeeping function. The audit results were inconclusive, and, accordingly, an additional audit was scheduled for 1999 and completed. Audit Services conducted another audit of SCE's corporate OSHA recordkeeping in 2001. It was not until SCE's investigation of PBR that Audit Services became aware that first aid incidents were part of the PBR mechanism. (Exh. 12, p. 23.)
13.1. Reporting of First Aid Incidents
SCE's investigation revealed that SCE did not establish a system to collect all first aid incidents and therefore such data were underreported. SCE explains that it is difficult to quantify the impact of its underreporting of first aid incidents because the underreporting errors were present in both the data used to establish the employee health and safety incentive mechanism benchmark as well as the subsequent measurements SCE reported to the Commission when measuring its performance against those benchmarks.
SCE's PBR baseline was based on a recordkeeping process that did not capture all first aid incidents during the period 1987-1993. As a result, SCE's pre-PBR records for first aid incidents, from which the historical data were drawn to establish the PBR benchmark, were not comprehensive. SCE agrees these obstacles to accurate reporting of all first aid incidents carried over to the PBR mechanism. SCE adds that not all of its employees were instructed to report and record all first aid incidents. Further, there were different standards for internal safety performance measures than there were for PBR measurements.
SCE admits that its failure to adequately collect first aid data significantly impacted the PBR health and safety results. SCE's investigation revealed that: "[b]ased on Mohave's records alone, SCE is aware of 185 first aid incidents in 2002 and 286 first aids in 2003 that were not reported to SCE's Workers' Compensation Department. The addition of these first aid cases from Mohave alone results in a substantial increase to SCE's prior reported total of 72 first aids for 2002 and 49 first aids for 2003." (Exh. 12, p. 33.) In 2002, the number of unreported first aid incidents at one facility was 257% of the total reported throughout the company. In 2003, the number of unreported first aid incidents at one facility was 584% of the total reported throughout the company.
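Those percentages follow directly from the quoted counts; as an arithmetic check:

$$\frac{185}{72} \approx 2.57 \;(257\%), \qquad \frac{286}{49} \approx 5.84 \;(584\%).$$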
13.2. Reporting of OSHA Recordable Incidents
SCE's 1999 audit of corporate OSHA recordkeeping reviewed records from calendar year 1998. The report's significant finding was "that more than 100 recordable injuries had been erroneously omitted from SCE's 1998 OSHA log." This resulted in at least a 4% increase in OSHA recordables rather than the 8% decrease that had been reported at year's end: a 12% differential. (Exh. 12, p. 19.) In January 2000, SCE conducted a review of its San Onofre Nuclear Generating Station's (SONGS) OSHA recordkeeping for 1999. In February 2000, it issued a draft report "finding that, out of approximately 100 internally reported injuries, 30 had been incorrectly classified for OSHA purposes. This number included 13 injuries that should have been classified as recordable, rather than non-recordable." (Exh. 12, p. 21.) SCE has identified several hundred additional OSHA recordable incidents in the five and one-half year period from 1999 through the first two quarters of 2004. During that time period, SCE reported 3,466 OSHA recordable injuries. (Exh. 12, p. 39.) SCE believes these numbers are significant.
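The "12% differential" from the 1999 audit is the swing between the reported and corrected year-over-year changes for 1998, i.e., percentage points rather than a percentage of either figure:

$$+4\% - (-8\%) = 12 \text{ percentage points}.$$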
The reasons given by employees for the OSHA recordable discrepancies are various. First, SCE's investigation showed that many SCE employees are reluctant to report injuries because of a belief that some level of work-related physical wear and tear is inevitable. (Exh. 12, p. 33.) Employees also stated that they wanted to "tough it out" and work through the injuries. Second, SCE acknowledges that the current safety incentive programs unintentionally discourage the reporting of injuries. In a number of interviews, employees and supervisors stated that safety incentive programs acted as a disincentive for injury reporting. Particularly when safety incentives are group-based (as they are in some business units), injured employees may want to avoid reporting their injuries and jeopardizing safety incentive compensation not just for themselves, but also for the rest of their group. Third, some employees do not wish to jeopardize their own individual safety incentives. Others explained the failure to report an injury as resulting from the fear that they would be disciplined for engaging in the underlying unsafe behavior that resulted in the injury. (Exh. 12, p. 34.)
Fourth, some supervisors have discouraged employees from reporting injuries and have advised or tolerated the use of various methods to avoid reporting OSHA recordable incidents. Among the methods used to disguise injuries and avoid internal reporting are: employee self-treatment; treatment by personal physicians rather than the company doctor; timecard coding of lost time as sick days or vacation; etc. (Exh. 12, p. 35.)
SCE believes that some of this conduct appears to be motivated by a desire to ensure that a work group's safety statistics reflect as few OSHA recordable injuries as possible. Supervisors whose work groups have a high number of OSHA recordables may be subject to discipline and/or negative performance evaluations, and may lose compensation. (Exh. 12, p. 36.)
SCE found that the primary incentive focus of SCE employees and their managers and supervisors was the employee safety goal in the Results Sharing bonus program. (Exh. 12, p. 47.)
SCE concludes that:
[E]mployee compensation - both safety bonuses and Results Sharing - have significantly contributed to under-reporting of less severe work-related injuries (e.g., sprains and contusions). While some employees have stated in interviews that the safety bonuses were not large enough to motivate non-reporting, many more employees and supervisors mentioned the safety bonuses (and other safety rewards and incentives such as free meals, movie tickets, etc.) as a factor in under-reporting of non-severe injuries. This effect appears to be heightened when safety bonuses are group-based, because injured employees may be more reluctant (or subject to greater pressure) not to report injuries if the report affects other employees' safety compensation. This pressure to avoid reporting may also increase in the situations where the safety bonus progressively increases, as occurs during a no-injury "streak." (Exh. 12, p. 49.)
SCE concluded that the PBR OSHA recordables data are inaccurate. (Exh. 12, p. 33.)
13.3. SCE's Position
SCE reviewed its records as part of a more thorough investigation. First, it found that before PBR, SCE's existing system did not accurately capture all work-related first aid incidents that occurred across the company. At the time there was no regulatory requirement to track these first aid incidents. With the adoption of the PBR mechanism, however, SCE admits that it did not undertake the review of the existing system that might have revealed this flaw. SCE asserts that while the existing system did capture hundreds of first aid incidents annually in the years used as the baseline for the PBR mechanism, the threshold for what constitutes a first aid incident is so low that it is virtually certain that not every first aid incident was being reported.
To accurately capture all first aid incidents, every employee who applies a band-aid to a minor cut or ice to a sore knee must report that treatment. In a company of over 12,000 employees, many of whom are engaged in physically taxing and sometimes hazardous work, SCE believes it is not feasible to ensure compliance with such a requirement. Accordingly, SCE reasons, this injury and illness component was inappropriate from the beginning because of inherent inaccuracy. The metric was based on first aid incident data that was not reliable; as a result, underreporting of first aid incidents occurred. This influenced both the setting of the benchmark and the results that were reported to the Commission. Following its investigation, SCE concluded that this mechanism should not be the basis for either rewards or penalties. Consequently, SCE has proposed to refund to ratepayers the $20 million it has collected for 1997 - 2000, with interest, and to withdraw its pending advice letter requests and forgo $15 million in health and safety rewards for 2001 - 2003.
13.4. CPSD's Position
CPSD looks at the evidence from a different perspective. CPSD asserts that SCE's failure to keep accurate records for the health and safety PBR was brought about both by SCE's ineffectual management practices and by its deceptive modus operandi. CPSD says SCE did not keep accurate records for first aid and OSHA-recordables and, therefore, the actual level of employee safety cannot be reliably estimated. Further, as a result of SCE's practices, CPSD asserts the actual health and safety of employees at SCE has suffered because unsafe conditions went undetected and unrepaired. CPSD recommends maximum PBR penalties for every year in question (1997-2003), beyond SCE's proposed refunds and forgone requests.
CPSD argues that because first aid data were mandated by the PBR, SCE should be held responsible for its failure to keep accurate records of data relevant to the metric. This is particularly important because of the impact that first aid under-reporting had on the PBR metric. CPSD says that SCE has repeatedly stressed the difficulty of collecting first aid data in this proceeding, but this misses the fundamental point: first aid data were required by the PBR metric. Thus, SCE had a duty to the Commission, and by extension to ratepayers, to develop measures to accurately capture and report these data. The obligation to collect and report these data under the PBR was known to SCE's management, and was not fulfilled.
CPSD stresses that collecting first aid data is much more than an exercise in record keeping. It has a direct bearing on health and safety. There is a relationship between near misses, first aid accidents, more serious injuries, and a fatality. For every 330 accidents one would expect 300 minor ones, 29 OSHA-recordables or serious injuries, and one very serious injury if not a fatality. (This relationship is known as Heinrich's Triangle.) Near misses, first aid incidents, and other data related to minor injuries are informative in terms of preventing major injuries; collecting data on such minor injuries improves health and safety outcomes.
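In ratio form, the Heinrich's Triangle figures CPSD cites decompose as

$$330 = 300 \text{ (minor)} + 29 \text{ (serious, OSHA-recordable)} + 1 \text{ (very serious)},$$

that is, roughly ten minor injuries for every serious one.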
CPSD argues that it is obvious that there will be many more first aid incidents than OSHA-recordables. Consequently, inaccuracy of first aid data has a proportionately greater effect on the PBR metric than OSHA-recordable data. CPSD concludes that SCE's failure to implement adequate monitoring procedures of this data, along with other management practices that resulted in under-reporting, leads to a strong inference that the actual data would have resulted in full PBR penalties. SCE's Exhibit 12 subheading states this very clearly: "SCE's Under-Reporting of First Aid Incidents Significantly Impacted the PBR Results." (Exh. 12, p. 33.)
13.5. Discussion
In our original PBR decision (D.96-09-092), we ordered SCE to review the health and safety issue in a mid-term review. In that mid-term review, SCE proposed that the existing mechanism not be adjusted, and we agreed. (D.99-12-035.) Regardless of why first aid data were included, or how useful they were, SCE agreed to have them included in the PBR metric. Despite its subsequent determination that the "metric was flawed," and that it was an inappropriate basis on which to determine rewards and penalties, SCE was required to follow the Commission decision. SCE management knew or should have known about the problems with the collection of first aid data, but failed to implement timely changes to correct those problems or modify the metric.
Cal/OSHA requires that employers record work-related fatalities, injuries, and illnesses. (Cal. Code Regs. tit. 8, § 14300; Cal. Labor Code § 6410 (employers are required to keep a log and summary of all recordable occupational injuries and illnesses).) This includes any "[m]edical treatment beyond first aid." (Cal. Code Regs. tit. 8, § 14300.7(b)(1)(D); § 14300.7(b)(5)(B) (definition of what constitutes first aid).) These regulations are very similar to the federal OSHA requirements. (29 U.S.C. § 657; 29 C.F.R. § 1904.7.)
Cal. Code Regs. tit. 8, § 14300.7(b)(5)(B) states:
For the purposes of Article 2, "first aid" means the following:
1. Using a nonprescription medication at nonprescription strength (for medications available in both prescription and non-prescription form, a recommendation by a physician or other licensed health care professional to use a non-prescription medication at prescription strength is considered medical treatment for recordkeeping purposes);
2. Administering tetanus immunizations (other immunizations, such as Hepatitis B vaccine or rabies vaccine, are considered medical treatment);
3. Cleaning, flushing or soaking wounds on the surface of the skin;
4. Using wound coverings such as bandages, Band-Aids®, gauze pads, etc.; or using butterfly bandages or Steri-Strips® (other wound closing devices such as sutures, staples, etc. are considered medical treatment);
5. Using hot or cold therapy;
6. Using any non-rigid means of support, such as elastic bandages, wraps, non-rigid back belts, etc. (devices with rigid stays or other systems designed to immobilize parts of the body are considered medical treatment for recordkeeping purposes);
7. Using temporary immobilization devices while transporting an accident victim (e.g., splints, slings, neck collars, backboards, etc.);
8. Drilling of a fingernail or toenail to relieve pressure, or draining fluid from a blister;
9. Using eye patches;
10. Removing foreign bodies from the eye using only irrigation or a cotton swab;
11. Removing splinters or foreign material from areas other than the eye by irrigation, tweezers, cotton swabs or other simple means;
12. Using finger guards;
13. Using massages (physical therapy or chiropractic treatment are considered medical treatment for recordkeeping purposes); or
14. Drinking fluids for relief of heat stress.
(c) Are any other procedures included in first aid?
No. This is a complete list of all treatments considered first aid for purposes of Article 2.
Article 2 refers to "Employer Records of Occupational Injury or Illness." (8 CCR 14300.7.)
SCE failed to research the concept of "first aid." SCE witness Mr. Silsbee (manager of Regulatory Economics) testified: "[W]hen I was developing the safety incentive mechanism, I did not have in mind a real clear understanding of the definition of first aid." (4 RT 503, 11-13.) Dr. Sahl (director of Environmental Health and Safety) asserted: "[T]here was . . . no definition of what a first-aid injury was amongst the parties." (7 RT 881, 24-27.)
There is no basis for SCE to claim that, for seven years, it did not know what constituted first aid for the purposes of PBR. The regulations are explicit and are easily available. They should be known to any company official who needs to report OSHA recordables. That official must determine whether an incident is a first aid incident or an OSHA recordable incident in order to report correctly to OSHA.
A comparison of first aid incidents relative to OSHA-recordables should have also been a clear sign to SCE that first aid incidents were being underreported.
Table 3 (Exh. 86)
Annual PBR Filings Health and Safety

|            | 1997  | 1998  | 1999 | 2000 | 2001 | 2002 |
| First Aid  |   511 |   320 |  227 |  174 |   96 |   72 |
| Total OSHA |   702 |   684 |  586 |  457 |  413 |  299 |
| TOTAL      | 1,213 | 1,004 |  813 |  631 |  509 |  371 |
Table 3 shows that the number of reported first aid incidents was less than the number of OSHA recordables in every year SCE requested rewards. We agree with CPSD that SCE should have known that the number of first aid incidents should be far greater than the number of OSHA incidents. Despite the unusually small number of reported first aid incidents, SCE, year after year, sought and received PBR rewards.
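As an illustration only (our arithmetic, not a finding), applying the rough 300:29 minor-to-serious ratio of Heinrich's Triangle, discussed above, to the 1997 figures would predict on the order of

$$702 \times \frac{300}{29} \approx 7{,}300$$

first aid incidents, against the 511 actually reported.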
SCE's inaccurate health and safety reporting was not limited to first aid incidents. We also find, based primarily on SCE's Exh. 12, that the OSHA recordable statistics were inaccurate from 1999 to 2004. This provides an additional reason why the Commission cannot rely on SCE's health and safety data for the purposes of the PBR incentive mechanism.
We agree with and adopt SCE's proposal to refund to ratepayers the $20,000,000 SCE has collected for 1997, 1998, 1999, and 2000, with interest, and to withdraw its pending advice letter requests and forgo $15,000,000 in health and safety rewards for 2001, 2002, and 2003.
Regardless of our findings regarding SCE's failure to properly record first aid incidents and OSHA recordables, we believe it is too speculative to infer that during the years in question SCE would have incurred PBR safety penalties. Therefore, we reject CPSD's proposal to apply full PBR penalties.
14. Revenue Requirements for Results Sharing in 2003, 2004, and 2005
In addition to the PBR program, SCE has an incentive pay plan called the Results Sharing program, established in 1995. This program links compensation to employees' annual job performance, business unit performance, and company performance. All full-time employees in the Transmission and Distribution Business Unit (TDBU), Customer Service Business Unit (CSBU), Generation, Shared Services, and IT business units are eligible to earn a cash bonus based on team (business unit or department) and SCE performance measured against stated goals. Depending on how well the business unit performs compared to its goals, salaried exempt employees can earn from 0% to 6% of their annual pay, and non-exempt employees can earn from 0% to 3% of their pay. (D.04-07-022, pp. 207-211.) Like other utility expenses, it is 100% funded by the ratepayers through the revenue requirement approved by the Commission in SCE's periodic general rate case (GRC). The program has changed considerably since its inception in 1995. Through collective bargaining, represented employees became eligible for Results Sharing program awards, and the number of eligible employees has nearly doubled. Also, the target and maximum payout percentages have changed. Originally, the maximum payout for represented employees was $400, whereas by 1999 those employees had a maximum payout of 6% of pay, which equated to approximately $3,600 per represented employee.
During the pendency of the GRC that resulted in D.04-07-022 (A.02-05-004 and I.02-06-002), the proceeding in which the Commission considered an appropriate forecast for Results Sharing for 2003, 2004, and 2005, DRA's predecessor, the Office of Ratepayer Advocates (ORA), petitioned to reopen the proceeding to inform the Commission about an SCE audit's preliminary findings that SCE employees had misreported customer satisfaction results; to notify the Commission about possible effects on the GRC and other proceedings; and to recommend remedies appropriate to the information recently divulged to the Commission and its staff.
Because of the importance of our discussion granting ORA's petition in D.04-07-022 to the disposition of this case, we provide a lengthy excerpt:
We will accept ORA's uncontested recommendation to reopen the record and take the attachments to its petition into evidence. . . . We take the action described below on the basis of this evidence.
Our staff is currently investigating the issues raised in ORA's petition, including the concern that SCE's PBR mechanism may have been seriously compromised by employee fraud over a period of several years. This staff investigation is still in progress, and we are not prepared at this time to adopt a specific procedural course of action, whether in this proceeding or elsewhere. ORA's recommendation to keep this GRC open therefore will not be approved. Nevertheless, we retain the right to reopen this proceeding on our own motion, initiate a new proceeding, or take other action that we deem appropriate after our staff investigation has reached the appropriate stage for formal action.
This decision does not adopt safety, reliability, or customer satisfaction incentive mechanisms that were proposed in this GRC. ORA's request that we reject such mechanisms on the basis of the concerns raised in its petition to reopen is therefore moot.
ORA's request that SCE's rates be made subject to refund is based on ORA's belief that the reported data falsification may require a change to the adopted revenue requirement. For example, ORA believes that "revenue sharing pay" (which we understand is a reference to SCE's Results Sharing program, addressed in Section 7.7.2.3.2 herein) may have motivated certain employees to falsify customer satisfaction data. As we understand ORA's proposal, the Results Sharing expenses adopted herein (among other costs) would be subject to refund. We do not understand that ORA's proposal would necessarily limit the subject-to-refund amount to the Results Sharing program, since ORA proposes that possible refunds be limited to revenue requirement reductions which flow from issues influenced by the data falsification.
Claiming that it is unnecessary and inappropriate to make rates subject to refund, SCE states that it has already committed to refunding any PBR rewards it has received inappropriately, and that the same would be true of any adopted revenue requirement subsequently found to be tainted. Referring to a March 15, 2004 letter from SCE Chairman John Bryson to Commission President Michael R. Peevey, in which SCE explicitly commits to promptly refund any inappropriately received reward, SCE states that "the same commitment applies to any affected revenue requirement adopted in this or any other proceeding." (SCE response, p.3.) (Emphasis added.) In addition, SCE states that in a March 8, 2004 meeting between its General Counsel and the Commission's General Counsel, SCE stated that it would not assert retroactive ratemaking or any other technical defense to refund of amounts found to have been inappropriately collected from ratepayers. (Id.) However, SCE does not believe that the entire revenue requirement at issue in this GRC should be subject to refund.
...
We agree with SCE that it would be inappropriate to make the entire revenue requirement subject to refund. In any event, the parties may not be far apart on this question. ORA proposes that possible refunds be limited to "revenue requirement reductions which flow from issues influenced by the data falsification," not the entire revenue requirement at issue in this GRC. This may not be substantively different from SCE's commitment to refund "any affected revenue requirement." Given this commitment, SCE has in effect agreed largely, if not entirely, to the substance of ORA's proposal. Ordering that the adopted rates be subject to refund represents our confirmation of SCE's own public commitment to return any amounts inappropriately collected from ratepayers.
Since the investigation into the data falsification and its ramifications is still underway, we are not in a position to specify the revenue requirement dollar amount that is subject to refund, or even the expense category (whether Results Sharing, customer satisfaction survey expenses, etc.). Accordingly, we will not embrace SCE's proposal to limit the amount that is subject to refund to the $24.536 million portion of Results Sharing costs that SCE calculated is attributable to the Transmission and Distribution business unit. We will instead draw upon SCE's own wording in making subject to refund "any affected revenue requirement" shown to be associated with the customer satisfaction data falsification investigation. (D.04-07-022 pp. 285-288.)
14.1. Parties' Positions
DRA recommends that the Commission order SCE to refund $84.406 million, which DRA characterizes as the costs associated with Results Sharing incentives and bonuses that were based on forecasts derived from manipulated data. The total includes $64.039 million attributable to TDBU, $10.290 million attributable to Generation and CSBU, and $10.077 million collected in rates in 2004 but not paid to employees. (DRA/Godfrey, Exh. 83, pp. 2-3.)
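The components of DRA's recommendation sum to the stated total:

$$64.039 + 10.290 + 10.077 = 84.406 \text{ (\$ millions)}.$$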
First, DRA recommends that the Commission order SCE to refund $64.039 million, the entire portion of SCE's 2003 to 2005 revenue requirement related to Results Sharing for TDBU. DRA's recommendation is based on the revenue requirement of $24.536 million attributable to TDBU for Results Sharing for each of the years in which the 2003 GRC authorized rates were in effect. The 2003 GRC decision authorized $24.536 million for test year 2003 and attrition years 2004 and 2005 for TDBU Results Sharing. Because the rates did not go into effect until May 2003, DRA has allocated $14.997 million for 2003, in addition to $24.536 million for each of the years 2004 and 2005, for a total of $64.039 million.5
The basis of DRA's recommendation is that the recorded data used by the Commission to authorize the $64.039 million is tainted by fraud that occurred in TDBU, and therefore, funds should be returned to SCE ratepayers who were harmed by the fraud. The 2003 authorization was calculated by utilizing a two-year average of actual Results Sharing incentives paid out to SCE employees for the years 1999 and 2000. (DRA/Godfrey, Exh. 81, p. 2-10, n. 10.) DRA asserts the Results Sharing amounts paid out to SCE employees in 1999 and 2000 were fraudulently tainted by improper achievements in customer satisfaction and health and safety. Without the fraud, the payouts would have been lower and the authorization adopted in the GRC would have been different. As such, DRA contends, it is improper to authorize ratepayer funding based on inaccurate and dishonestly obtained data, and the revenue requirement tied to this data should be refunded.
While Results Sharing costs include other goals besides the customer satisfaction and safety goals, DRA argues, these components represent a substantial portion of the costs which form the basis of the Results Sharing revenue requirement. Because customer satisfaction and safety results were tainted during 1999 and 2000, Results Sharing payouts recorded during those years are necessarily inaccurate and should not have been used to set the 2003 Results Sharing revenue requirement. SCE's testimony shows that for 1999, customer satisfaction and safety combined to account for 30% of SCE's Results Sharing goals in TDBU. (SCE, Exh. 1, pp. 113, 114, Table XII-1 and Table XII-2.) For 2000, customer satisfaction and safety accounted for 40% of Results Sharing goals in TDBU. (Id.) Considering the yearly $24 million allocation for these programs, DRA concluded that the whole Results Sharing allocation during those years is unreliable and should not have been used for a test year forecast. At the very least, DRA recommends that SCE refund 30% to 40% of the TDBU Results Sharing for each year covered by the 2003 GRC (2003, 2004, 2005).
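As rough arithmetic on DRA's fallback position (illustrative only), 30% to 40% of the $24.536 million annual TDBU allocation would be approximately

$$0.30 \times 24.536 \approx 7.36 \qquad\text{to}\qquad 0.40 \times 24.536 \approx 9.81$$

million dollars per full year.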
Second, DRA recommends that SCE refund to ratepayers $10.290 million in Results Sharing incentives it collected in rates and paid out to employees for the years 2003, 2004, and 2005 attributable to CSBU and Generation. For CSBU, DRA recommends a refund of $7.263 million, and for Generation, DRA recommends a refund of $3.027 million. DRA's recommendation is based on the fact that the underlying data supporting the revenue requirement request for those years, the 1999-2000 recorded data in CSBU and Generation, is tainted. DRA submits that the safety data used to support the Results Sharing program for Generation was tainted by inaccurate reporting of OSHA recordable incidents, and that the customer satisfaction measurements, including meter reading data, were inaccurate. DRA does not recommend a full refund of all Results Sharing dollars for these units because, according to DRA, the data in these business units is not as compromised as that in TDBU. Rather, DRA recommends SCE refund the portion used for actual Results Sharing payouts during 2003, 2004, and 2005, which in its opinion represents a fair proxy of the fraud that occurred in these areas in 1999 and 2000.
Third, DRA recommends that SCE refund the $10.077 million difference between the authorized revenue requirement and recorded Results Sharing expenses for 2004. DRA asserts that in 2004, since SCE paid out less for Results Sharing than the full allocated amount due to SCE's internal investigation of inappropriate behavior, SCE should refund the difference to its customers.
SCE recommends no refunds of any Results Sharing incentives. (SCE/Cogan, Exh. 1, p. 109.)
Preliminarily, SCE argues that DRA's Results Sharing disallowance proposal is beyond the scope of the OII. SCE says that the OII refers only to potential disallowances or refunds related to the PBR mechanism; DRA's demand for a disallowance even larger than the refund of all rewards SCE has calculated under PBR is unwarranted. SCE asserts that DRA is wrong when it contends that Ordering Paragraph 1.c of the OII justifies its attack on the Results Sharing program. That sentence provides that a purpose of the OII is to determine "other increased rates or other damages if any, wrongfully caused, and the refunds and other relief associated with such wrongdoing," which is a specific reference to data falsification and manipulation in the Planning organization. The OII contains a long list of issues, each having some direct relationship to the outcome of the PBR mechanism itself. However, SCE contends that nowhere does the Commission state or even hint that this investigation was to encompass potential refunds of incentive compensation that is a part of total employee compensation previously found reasonable by the Commission.
SCE asserts that DRA is also wrong when it cites the subject-to-refund provision of D.04-07-022 as an indicator that the Commission intended to review the impacts of data falsification on revenues authorized in SCE's 2003 GRC. SCE interprets the subject-to-refund provision of D.04-07-022 as prohibiting SCE from claiming retroactive ratemaking as a bar to the refund of amounts inappropriately collected from ratepayers after the issuance of that decision. However, SCE argues that the subject-to-refund provision did not make a review of the reasonableness of Results Sharing incentive compensation a part of this OII. SCE maintains that given that the PBR mechanism under review here commenced January 1, 1997 and terminated on December 31, 2003, the subject-to-refund provision could only apply to Results Sharing revenues collected during a period of a little more than one-half of a year out of the seven years the PBR mechanism was in operation. Results Sharing payouts made in 2004 and 2005, after SCE's internal investigations had concluded and corrective actions had been implemented, were free of any of the data falsification issues associated with customer satisfaction surveys for Planning or concerns about the reporting of incidents under the employee health and safety incentive mechanism.
In SCE's test year 2003 GRC, SCE argues, the forecast of Results Sharing expense was reflected in a total compensation study, a study by an independent consultant that was jointly managed by SCE and ORA. SCE's forecast of total compensation was found to be reasonable, i.e., it did not exceed competitive employment market levels. Those revenues for total compensation were reflected in rates effective May 22, 2003, as a result of D.04-07-022. The revenues authorized for Results Sharing are like any other authorized base rate expense, i.e., they are part of authorized O&M revenues that SCE management may allocate to employees as incentive compensation or spend on other O&M expense as required. SCE states that the Commission held that as long as total compensation is within market levels, the mix of base pay, benefits, and incentive pay should be at the discretion of utility management and that ratepayers should fund the full amount of Results Sharing. The Commission adopted a forecast expense of $73.4 million (based on the average expense for 1999 and 2000) for SCE's Results Sharing incentive program for recovery in rates.
SCE argues that DRA uses arbitrary and inconsistent approaches to propose refunds of Results Sharing related to the TDBU versus Generation and CSBU, and that DRA utterly fails to support either approach with any analysis showing that the outcome of Results Sharing expenses on a forecast or actual basis would have been any different. DRA's proposal is in effect a duplicate penalty because the same actions DRA contends justify the refund of all PBR rewards are assumed to justify the refund of all TDBU Results Sharing expense, and significant portions of CSBU and Generation business unit Results Sharing expenses. When viewed in the context of the facts that SCE spent more than its authorized O&M revenues, that the refund of PBR customer satisfaction rewards constitutes a significant self-penalty, and that SCE is providing full restitution of PBR safety rewards it collected from ratepayers, DRA's proposal for additional refunds is not justified.
SCE maintains that DRA's proposed refund of $64.039 million attributable to SCE's TDBU for the period May 22, 2003 through December 31, 2005 is inappropriate because: (1) the refund of revenues, if any, should be limited to the amounts paid by TDBU for the Results Sharing where it can be shown that the Results Sharing expense or forecast was affected by data falsification in the Planning organization; (2) DRA incorrectly assumed that the entire forecast of TDBU Results Sharing expense was affected by data falsifications; and (3) DRA proposed to refund 100% of the forecast Results Sharing expense for 2004 and 2005 when DRA concedes that it is not alleging any evidence of fraud in those years.
SCE contends DRA's proposed refund of all forecast TDBU Results Sharing expense for 2003-2005 is unreasonable and arbitrary. It is premised on the assumption that the entire forecast amount ($24.536 million per year) was tainted by improprieties related to achievement of customer satisfaction and employee safety Results Sharing goals in 1999 and 2000, when at most any fraud in these areas could have affected only 40% of the total Results Sharing forecast. DRA did not check whether TDBU met its Results Sharing goals in those two years, did not consider that the customer satisfaction and safety goals did not constitute the entirety of the TDBU goals for those years, and did not ask whether falsification or other inappropriate actions were sufficiently widespread or successful to have caused different Results Sharing payouts in 1999 and 2000. DRA did not consider what the forecast of Results Sharing expense would otherwise have been in the absence of such actions and conducted no independent investigation or analysis of achievement of Results Sharing goals. DRA did not demonstrate that the TDBU authorized revenues for Results Sharing in the 2003 GRC would have been lower because payouts recorded in 1999 and 2000 were improperly inflated due to data falsification in 1999 and 2000. DRA simply assumes, in effect, that the Commission would have authorized no funds whatsoever for TDBU Results Sharing expense on a forecast basis, and therefore that all of the revenues actually authorized should be refunded. SCE asserts that is an unreasonable and arbitrary assumption.
SCE argues DRA's contention that all of the 1999-2000 expense data were tainted is irrelevant. No forecast is perfect. Even assuming some portion of the forecast Results Sharing expense might have been in error, it does not detract from the Commission's conclusion that total compensation, regardless of how it was derived and including the Results Sharing forecast, was within market standards on a going-forward basis. When SCE incurred Results Sharing expense during the period 2003-2005, it did so with the understanding that it could have paid these amounts as base compensation or incentive compensation and that either would have been a reasonable use of the revenues authorized in the 2003 GRC as total compensation.
SCE believes that its testimony showed: (1) customer satisfaction and employee safety do not constitute the entirety of Results Sharing goals for TDBU; (2) the safety goal for Results Sharing is measured by OSHA-recordables, thereby eliminating the main concern about the achievement of the PBR safety metric; (3) the customer satisfaction goal for TDBU encompasses far more operations than Planning; and (4) even within Planning, data falsification had no impact on achievement of customer satisfaction goals. Therefore, DRA's assumption that all of the payouts in 1999 and 2000 were tainted is clearly in error. Only 30% to 40% of TDBU's Results Sharing goals are related to customer satisfaction and employee safety.
SCE argues that DRA's refund proposal for TDBU Results Sharing is inconsistent with its proposal for refunds for other business units. SCE says the approach DRA used and considered appropriate for the CSBU and the Generation business unit was not used by DRA for TDBU. As shown below, only a portion of the authorized revenue requirement for 2003 to 2005 was paid to TDBU employees due to achieving Results Sharing goals for customer satisfaction and employee safety.
Table 4 (Exh. 1, p. 120)
TDBU Results Sharing Payouts 2003-2005 Related to Customer Service and Safety Goals ($ Millions)

| Year  | Customer Satisfaction | Safety |
| 2003  | 3.3                   | 0.0    |
| 2004  | 1.7                   | 3.5    |
| 2005  | 1.9                   | 1.0    |
| Total | 6.9                   | 4.5    |
DRA used this approach as the basis for its recommendation for refunds of Results Sharing amounts associated with Generation and CSBU, but ignored these figures in the case of TDBU. To be consistent, SCE argues, DRA's recommendation for TDBU should be limited to no more than the $3.3 million paid in 2003.
SCE points out that unlike its proposal for TDBU, DRA proposes to refund actual Results Sharing payouts for 2003 through 2005 instead of forecast revenues. Moreover, DRA attempts to limit its refund to Results Sharing payouts attributable to customer satisfaction and safety goals for Generation and CSBU. However, SCE asserts, DRA did not attempt to prove that these Results Sharing payouts were "affected by falsification" as required by D.04-07-022. Instead, DRA incorrectly assumes the entire payout is affected by falsification, just as DRA incorrectly assumed the entire forecast for TDBU was tainted.
SCE maintains there is no evidence of data falsification with respect to customer satisfaction within CSBU during 1999 and 2000, which were the basis for the 2003 GRC forecast, or during 2003-2005, when the payouts were earned. The Planning organization is within TDBU, and while meter reading is within CSBU, DRA conducted no analysis or investigation to demonstrate that meter reading customer satisfaction survey results were affected by data falsification, nor has it performed any analysis to demonstrate the effect on Results Sharing if meter reading results had been tainted by data falsification. SCE explains that selling the survey within the phone centers occurred after 2000, so it could have had no impact on the 1999-2000 results. It is not data falsification, and its use in the phone centers was strictly regulated. DRA's proposal fails to account for the fact that CSBU's Results Sharing payout reflects results obtained for phone centers, field services, local business offices and authorized payment agencies, and major accounts. SCE determined there were no data falsification issues in the CSBU areas included in the PBR customer satisfaction mechanism.
SCE also argues that safety is measured differently for PBR (first aid plus OSHA recordables) than for Results Sharing (only OSHA recordables). Because of this difference and because the OSHA recordable errors were insignificant, SCE argues no disallowance should occur related to the health and safety aspects of Results Sharing.
SCE disagrees with DRA's recommendation that SCE refund the difference between the amount allocated for Results Sharing in 2004 and the amount actually recorded. SCE contends this refund is strictly punitive. The difference between the amount authorized and the amount actually paid out for Results Sharing in 2004 was not pocketed by shareholders. In 2004, SCE's recorded O&M expenses (including the $10.077 million not spent on Results Sharing) exceeded its authorized O&M expenses by approximately $104 million. During the period affected by the 2003 GRC cycle (May 22, 2003 through December 31, 2005), SCE spent hundreds of millions of dollars more on O&M in total than the amounts authorized by the Commission. SCE argues that this benefit has accrued to ratepayers.
14.2. Discussion
We find that SCE's argument that DRA's Results Sharing proposal is beyond the scope of the OII is without merit. SCE's position is inconsistent with a clear reading of the OII, D.04-07-022, and SCE's testimony in this proceeding. In the OII, DRA was asked to participate to provide additional insight into the ratemaking effects of SCE's fraud that were not covered by CPSD's investigation. As the OII states, "[w]e invite . . . the Commission's Division of Ratepayer Advocates (DRA) to actively participate in this proceeding. The proceeding involves important and basic ratemaking matters that will benefit from the expertise and participation of DRA." (I.06-06-014, p. 5.) The OII orders that the proceeding determine "other increased rates or other damages if any, wrongfully caused, and the refunds and other relief associated with such wrongdoing." (OII, Ordering Paragraph 1(c).) Ordering Paragraph 1(a) singles out PBR rates so, logically, the phrase "other rates" in paragraph 1(c) refers to rates besides PBR, such as Results Sharing.
In D.04-07-022, we said:
Since the investigation into the data falsification and its ramifications is still underway, we are not in a position to specify the revenue requirement dollar amount that is subject to refund, or even the expense category (whether Results Sharing, customer satisfaction survey expenses, etc.) . . . We will instead . . . mak[e] subject to refund "any affected revenue requirement" shown to be associated with the customer satisfaction data falsification investigation. (D.04-07-022, pp. 287-288.)
In D.04-07-022, Finding of Fact 240 states:
240. SCE has committed to refund any revenue requirements adopted in this or any other proceeding affected by data falsification by certain employees associated with the customer satisfaction survey.
Conclusion of Law 56 states:
56. To the extent affected by data falsification by certain employees associated with the customer satisfaction survey, SCE's authorized revenue requirements should be made subject to refund.
Ordering Paragraph 15 states:
15. To the extent affected by data falsification by certain employees associated with the customer satisfaction survey, which data falsification is being investigated by SCE (sic) as described in Exhibits 415, 416, 417, and 418, SCE's authorized revenue requirements adopted in this or any other proceeding are subject to refund.
SCE witness Cogan specified which portion of Results Sharing dollars are tied to customer satisfaction and safety performance and the overlap between that performance for PBR reward purposes and Results Sharing bonus purposes. (SCE/Cogan, Exh. 1, pp. 113-115, Table XII-1, Table XII-2, Table XII-3.) DRA served its testimony on September 13, 2006 and SCE never filed a motion to strike the Results Sharing testimony. SCE deposed DRA Results Sharing witness Godfrey on her proposals and made no issue of DRA's proposals. DRA cross-examined SCE's witnesses on Results Sharing issues, and SCE did not challenge the relevance of DRA's questions.
D.04-07-022 makes subject to refund "any affected revenue requirement" shown to be associated with the customer satisfaction data falsification investigation. (D.04-07-022, pp. 287-288; OP 15.) SCE's testimony acknowledges this and even reads it broadly to include safety information manipulation that had not been uncovered at the time of that decision: "it is reasonable to extend this order to any revenue requirement affected by data falsification of illness and injury reporting." (SCE/Cogan, p. 117.)
The OII is clear. It specifies an investigation into PBR rates and other rates subject to refund. Results Sharing rates are subject to refund and, if tainted, the tainting arises from the same facts that taint the customer satisfaction awards and the health and safety awards. DRA makes a compelling argument:
Considering all the ratemaking issues stemming from the fraud also makes sense in terms of administrative economy and efficiency. It would be impractical to relitigate SCE's fraud in customer satisfaction and safety in another proceeding to determine the impact on the Results Sharing rates when enough information has been presented here to accurately do so. If the Commission follows SCE's proposal and reads the OII very narrowly to only include PBR impacts, and not Results Sharing and survey costs, then the Commission would have to either open a new proceeding or reopen SCE's 2003 GRC proceeding to rehear all of the evidence presented here on customer satisfaction and safety fraud, in order to satisfy its previous D.04-07-022. This would be extremely inefficient, costly and unreasonable. Furthermore, it is unnecessary given it has been litigated pursuant to the Commission OII directive. (DRA, R. B. p. 7.)
We conclude that the Results Sharing revenue subject to refund is included in this investigation.
TDBU includes SCE's service planning group. As indicated earlier, we have found that service planning employees manipulated and fabricated data between 1997 and 2003, and the tainted data was used to determine whether SCE would receive a PBR penalty or reward under the customer satisfaction incentive mechanism. SCE used the same tainted customer satisfaction data to determine Results Sharing payouts. The Commission then relied upon SCE's 1999 and 2000 Results Sharing payouts to determine a forecast for 2003, 2004, and 2005, and the Commission authorized a revenue requirement for SCE based on this forecast (D.04-07-022, Finding of Fact 177.) Since the customer satisfaction data used by SCE to determine the TDBU's portion of the Results Sharing bonuses in 1999 and 2000 were tainted, we should require a refund of the associated revenue requirement.
We also found problems with the data SCE recorded to determine SCE's PBR penalty or reward under the employee health and safety incentive mechanism. The TDBU's portion of the Results Sharing bonuses in 1999 and 2000 relied on some of the same problematic health and safety data. The Results Sharing payouts were based on OSHA recordable incidents, rather than being based on both OSHA recordable and first aid incidents, as was the case for the PBR mechanism. However, as discussed previously, we found that OSHA recordable data was inaccurate. In fact, the evidence demonstrates that the Results Sharing safety goals were actually what motivated employees to under-report OSHA recordable incidents. (Exh. 12, p. 47.)
Since the OSHA recordable data was inaccurate, the Commission should not have relied on the Results Sharing payouts in 1999 and 2000 to develop the TDBU portion of the Results Sharing forecast for 2003, 2004, and 2005, and we should require a refund of the associated revenue requirement.
In our opinion, the entire portion of the revenue requirement related to Results Sharing and tied to TDBU's customer satisfaction and health and safety data should be refunded to the ratepayers. It is reasonable to conclude that the widespread manipulation of customer satisfaction data and underreporting of OSHA recordable injuries described in SCE's own internal investigation had an impact on the data on which both PBR awards and the Results Sharing revenue requirement forecast for 2003, 2004, and 2005 were based. The same fraudulent behavior that justifies ordering SCE to return and forgo PBR awards justifies ordering SCE to return the affected revenue requirement associated with Results Sharing. SCE has failed to prove that any of the data related to these revenue requirements is reliable.
We reject SCE's argument that the revenue requirement should be preserved because DRA did not demonstrate that the payouts recorded in 1999 and 2000 were improperly inflated by data falsification in those years. The evidence in this case clearly demonstrates that the data used as a basis for the 1999 and 2000 payouts was tainted. That is sufficient justification to require a refund of the associated revenue requirement.
We also reject SCE's argument that no refund is appropriate because the Commission found in D.04-07-022 that SCE's forecast of total compensation, including both base pay and Results Sharing, was reasonable. According to SCE, if the Commission had approved a lower forecast for Results Sharing, it would have approved more for base pay. D.04-07-022, however, does not support SCE's conclusion.
In D.04-07-022, the Commission included considerable discussion about the appropriate method for forecasting Results Sharing. In fact, the Commission rejected forecast proposals put forward by SCE and ORA, and instead adopted an alternative methodology (i.e., basing the forecast on actual payouts in 1999 and 2000). (D.04-07-022, [mimeo.], pp. 214-216.) If the Commission had based its conclusion on the reasonableness of total compensation, then the discussion of alternative methodologies for Results Sharing would have been unnecessary.
Furthermore, while the Commission found that "SCE's total compensation for all employees is presumed to be equivalent to the market level"6 and "SCE's Results Sharing program does not result in total compensation that exceeds competitive employment market levels,"7 the discussion supporting these findings indicates that a range of total compensation levels would be reasonable. Specifically, the decision explains that the compensation study on which it relied had a margin of error of plus or minus 5%. Given that margin, the decision concluded that although SCE's total compensation was shown to be 4.3% above the comparable market total compensation, it was within the margin of error. (D.04-07-022, [mimeo.], pp. 202-203.) Thus, the Commission could equally have concluded that a lower total compensation level was reasonable, so a decision to lower the Results Sharing forecast would not necessarily have translated into a higher base salary forecast.
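The comparison underlying that conclusion is simple; a minimal sketch follows (the 4.3% and plus-or-minus 5% figures are from the decision; the normalized market baseline of 100 is purely illustrative):

```python
# Illustrative check of the compensation-study reasoning:
# SCE's measured total compensation was 4.3% above the comparable
# market level, and the study's margin of error was +/- 5%.
market = 100.0          # normalized market-level total compensation
sce_measured = 104.3    # 4.3% above the comparable market level
margin_of_error = 0.05  # compensation study margin of error

# 4.3% < 5%, so the measured level is statistically indistinguishable
# from market -- and a somewhat lower level would be equally defensible.
within_margin = abs(sce_measured - market) / market <= margin_of_error
print(within_margin)  # True
```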
However, we reject DRA's recommendation that we refund the entire Results Sharing revenue requirement attributable to TDBU. As described above, management used other goals and metrics besides customer satisfaction and health and safety to measure performance and determine Results Sharing payouts. There is no evidence of manipulation of data in any of these other areas, and we therefore cannot conclude that the related revenue requirement was "affected" by fraud.
We conclude that SCE should be required to refund the entire revenue requirement associated with the customer satisfaction and health and safety goals of TDBU's Results Sharing program. SCE's Exhibit 1, Table XII-3 demonstrates that the Results Sharing customer satisfaction level for TDBU was determined by averaging three "areas of responsibility": service planning, grid operations, and construction and maintenance. (p. 115.) Since we have determined that service planning customer satisfaction data from 1999 and 2000 was tainted, and the tainted data was used to calculate an average, the average is tainted. In other words, we need not consider whether the data in the other two areas (grid operations and construction and maintenance) was tainted in order to conclude that TDBU's customer satisfaction Results Sharing results for 1999 and 2000 were tainted in their entirety.
Our view of the proper method to compute the refund is based on SCE witness Cogan's testimony about which portions of the TDBU Results Sharing payouts were based on customer satisfaction and health and safety goals. (SCE/Cogan, Exh. 1, pp. 113-115, Table XII-1, Table XII-2, Table XII-3.) In D.04-07-022, we found that the Results Sharing program should be based upon a two-year average of actual payouts for 1999 and 2000. (Finding of Fact 177.) The total Results Sharing amount authorized for test year 2003 was $73,432,000. This amount covers all the business units with Results Sharing programs, not just TDBU. TDBU's share of the total was $24,536,000. The rates went into effect May 22, 2003, so the pro rata portion of the revenue requirement for 2003 was $14,997,000. (See D.04-07-022, pp. 314-317.) For 2004 and 2005, TDBU received $24,536,000 of the over $73 million Results Sharing revenue requirement. Between 30% and 40% of that amount was attributable to customer satisfaction and health and safety. The remaining 60% to 70% in TDBU was attributable to goals based on "performing CPUC required circuit patrols and detailed overhead and underground inspections, California Independent System Operator required inspections, and system maintenance." Id. We will use the average of the 1999 and 2000 percentages, or 35%.8 For 2003, the portion of the TDBU revenue requirement attributable to customer satisfaction and safety, or 35% of the total, is $5,249,000. For 2004 and 2005, 35% of the total TDBU revenue requirement is $8,588,000. The sum of the 2003, 2004, and 2005 revenue requirements that we will require SCE to refund is $22,424,000, as shown in Table 5 (the arithmetic is sketched after the table):
Table 5 - Portion of Revenue Requirement Attributable to TDBU Results Sharing

  Year     Amount
  2003     $5,249,000
  2004     $8,588,000
  2005     $8,588,000
  TOTAL    $22,424,000
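The Table 5 figures follow directly from the dollar amounts and the 35% factor discussed above. A minimal sketch of the arithmetic (amounts in thousands of dollars; the rounding convention of summing before rounding is inferred from the adopted total, since the rounded yearly figures alone would sum to $22,425,000):

```python
# Sketch of the Table 5 refund arithmetic (thousands of dollars).
TDBU_2003_PRO_RATA = 14_997  # partial year: rates effective May 22, 2003
TDBU_FULL_YEAR = 24_536      # TDBU share of the authorized Results Sharing total
FACTOR = 0.35                # average 1999/2000 share attributable to customer
                             # satisfaction and health and safety goals

by_year = {
    2003: TDBU_2003_PRO_RATA * FACTOR,  # 5,248.95 -> $5,249,000
    2004: TDBU_FULL_YEAR * FACTOR,      # 8,587.60 -> $8,588,000
    2005: TDBU_FULL_YEAR * FACTOR,      # 8,587.60 -> $8,588,000
}

# Summing before rounding reproduces the adopted total of $22,424,000.
print({year: round(amount) for year, amount in by_year.items()})
print("TOTAL:", round(sum(by_year.values())))  # 22424
```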
We find that a refund of the revenue requirement associated with the CSBU and Generation Results Sharing programs is reasonable for similar reasons.
CSBU includes SCE's meter reading group. As indicated earlier, we have found that the customer satisfaction data for SCE's meter reading group was unreliable. SCE used the same meter reading customer satisfaction data to determine Results Sharing payouts. The Commission then relied upon SCE's 1999 and 2000 Results Sharing payouts to determine a forecast for 2003, 2004, and 2005, and the Commission authorized a revenue requirement for SCE based on this forecast. (D.04-07-022, Finding of Fact 177.) Since the customer satisfaction data used by SCE to determine the CSBU's portion of the Results Sharing bonuses in 1999 and 2000 were tainted, we should require a refund of the associated revenue requirement.
Meter reading was just one of several areas within CSBU with customer satisfaction goals for Results Sharing. However, the results from the different areas were averaged together, so unreliable data in the meter reading area taints the overall average. (Exh. 1, p. 115, Table XII-3.) Therefore, we conclude that the entire revenue requirement associated with the Results Sharing customer satisfaction goals should be refunded.
CSBU and Generation also had safety goals for Results Sharing, and we have found that SCE's OSHA recordable incident data for 1999 and 2000 was unreliable. Therefore, since the Commission relied on the Results Sharing actual payouts in 1999 and 2000 to set SCE's revenue requirement, we should require a refund of the portion of the revenue requirement related to safety goals.
We agree with DRA's conclusion that SCE's actual payouts in 2003 to 2005 represent a reasonable proxy for the effect of the fraud that occurred in these areas in 1999 and 2000. Therefore, we will order SCE to refund an additional $10,290,000 of the Results Sharing revenue requirement for CSBU and Generation.
Based on the foregoing discussion, we conclude that SCE should refund a total of $32,714,000, as summarized in Table 6.
Table 6 - Total 2003 to 2005 Revenue Requirement to Be Refunded

  TDBU                   $22,424,000
  CSBU and Generation    $10,290,000
  TOTAL                  $32,714,000
14.3. Results Sharing Amounts Not Paid and Other Issues
We reject DRA's recommendation that SCE should refund the $10.077 million difference between the authorized revenue requirement and recorded Results Sharing expenses for 2004. As noted by SCE, a basic principle of cost-of-service ratemaking is that authorized amounts can be moved from one purpose to another at management's discretion. Given that SCE's recorded O&M expenses significantly exceeded its authorized O&M expenses in 2004, we can reasonably conclude that the amount that SCE did not spend on Results Sharing was spent for another purpose to the benefit of ratepayers. Therefore, we reject DRA's proposed refund of the amounts not paid in 2004.
DRA recommends the disallowance of $3.5 million for payments SCE made to Maritz to conduct PBR customer satisfaction surveys, and that approximately $0.3 million of expenses associated with 5+ trinkets (e.g., coffee mugs, golf balls, night-lights) be refunded because these trinkets could upwardly bias customer satisfaction results.
SCE argues there is no factual or legal basis for DRA's recommendations since only slightly more than 10% of the surveys conducted by Maritz were Planning surveys, and DRA has not demonstrated that the expenses for trinkets were subject to refund.
We reject DRA's proposed disallowances because funds not attributable to manipulation and falsification are not subject to refund. The independent survey company was needed for many purposes other than the planning surveys. Furthermore, there is no evidence that the trinkets influenced any customer's survey response.
The Utility Workers Union of America, Local 246 (UWUA) represents approximately 700 SCE operations, maintenance, technical, clerical, and emergency services employees at SONGS and a much smaller number at several other SCE facilities. Its primary interest in this proceeding is employee health and safety. UWUA's witnesses testified that soon after the 1997 implementation of SCE's PBR, significant changes were introduced at SONGS in management practices and personnel affecting the reporting and recordkeeping of industrial illnesses and injuries. These changes included: (1) introduction of new criteria for recording injuries, deviating from Cal OSHA standards, in order to reduce numbers of recorded injuries; (2) intimidation by senior management in post-accident interviews, discouraging employees from reporting their injuries; (3) systematic discipline for employees who reported workplace injuries; (4) institution of group bonus programs that created peer pressure to discourage reporting of injuries; (5) intervention by management with treatment by health care professionals to prevent injuries from being recordable; and (6) reassignment of management personnel who challenged these changes. We have discussed these issues above. SCE does not deny they occurred, but maintains that they did not significantly change the PBR statistics nor the Results Sharing awards.
UWUA, while complimenting the SCE investigation that led to Exh. 12, asserts that the most important deficiency in SCE's internal investigation is its failure to recognize the importance of the workplace safety culture. This has resulted in top-down administrative remedies to problems that demand the involvement of the very employees who must ultimately implement safe practices on the job, and who suffer more than the loss of monetary rewards when the safety program fails. UWUA believes that SCE's new programs will cure neither the workplace safety culture nor the damage that SONGS has suffered.
UWUA recommends that the Commission direct SCE to allocate $2 million per year from shareholder funds for the next five years to fund a collaborative program to identify SONGS safety issues and address them through research and training programs. In the alternative, UWUA requests that the Commission adopt its recommendation in principle, followed by a workshop involving SCE, UWUA, CPSD, and other interested parties to develop a program and budget for the collaborative program. CPSD supports this recommendation.
16.1. SCE
SCE agrees that at SONGS the Site Safety group responsible for classifying injuries as OSHA recordable misclassified a significant number of injuries as non-recordable. The misclassifications were ongoing for five years; several internal company departments (SCE's Worker's Compensation Division, Audit Services, and the Law Department) were aware of the controversy and, although they took some steps to resolve it, the misclassification continued. SCE Exh. 12 states that: " . . . Site Safety (with the support of SONGS management) declined to change most of these classifications, even though most of the challenged classifications were clearly inconsistent with Cal OSHA regulations. Additionally, there is evidence that SONGS Site Safety systematically failed to evaluate and internally report potential and actual industrial hearing loss injuries." (Exh. 12, p. 52.)
SCE points out that if UWUA desires to cooperate with management to reinvigorate the safety culture at San Onofre, a vehicle for such cooperation already exists. SCE has established a Safety Operations Council that will act as a clearinghouse for best safety practices used across the company. Union leadership from the International Brotherhood of Electrical Workers (IBEW) has agreed to participate. SCE maintains that SONGS does not need another, separately funded group to make improvements to its safety program. Further, SCE argues, UWUA's proposal is an attempt to bolster UWUA's position in contract disputes with SCE, and UWUA's attempt to insert the Commission into ongoing union/management issues should not be endorsed. UWUA's proposal calls for an equal number of SCE and union representatives with an independent safety expert chosen by CPSD. Given the likelihood of a deadlock between management and union, this arrangement would place CPSD, through its representatives, in the position of overruling SCE management on the operational practices of the plant. SCE believes such an outcome would violate the UWUA collective bargaining agreement and replace SCE's judgment with that of a CPSD-appointed consultant.
16.2. Discussion
Providing for employee safety is an essential concern and important function of the Commission. Section 451 of the Public Utilities Code states that every utility shall maintain its equipment and facilities in a manner necessary to promote the health and safety of its employees. Through General Orders (e.g., GO 95, GO 112-E, GO 128, GO 167), this Commission has addressed employee safety, and CPSD is authorized to enforce those orders. Commission attention to employee safety is not preempted by the collective bargaining relationship. Like Cal OSHA, the Commission addresses employee safety independently as a matter of public policy. SCE is not entitled to deference in fashioning a remedy merely because it is responsible for safety at SONGS.
We have found numerous safety violations at SONGS with the complicity of SCE management, but we recognize SCE's recent efforts to improve SONGS safety. What is needed is good faith cooperation between union and management. We agree with SCE that UWUA's proposal is not needed. Ordering a five-year, $10 million program with CPSD as the arbiter will most likely ensure only that $10 million is spent over five years, with CPSD casting the deciding vote. That result can be achieved much sooner by having CPSD conduct a workshop with good faith participation by all parties. We are extremely reluctant to intervene in what we perceive as a union-employer negotiation, but we will do so if employee safety requires intervention.
The PBR matrix provides both rewards and penalties: rewards for performing better than baseline, penalties for performing worse. (See Figure 1.) We have found that SCE, because of its falsification and manipulation of data, is not entitled to PBR awards. CPSD recommends that PBR penalties be assessed against SCE at the maximum amount: $21 million for poor planning and meter reading performance, and $35 million for poor employee safety performance. SCE opposes any PBR penalties for poor performance.
CPSD argues the Commission will never be able to accurately gauge the actual state of SCE's customer satisfaction from 1997 through 2003 because SCE's widespread manipulation and falsification destroyed the accuracy of the survey results meant to measure customer satisfaction. SCE must accept the consequences of the uncertainty SCE itself caused. SCE accrued large PBR rewards for each year from 1997 through 2003 despite ample evidence that conditions were ripe for poor service during that time.
CPSD asserts that SCE bears the burden to prove that absent its widespread manipulation and falsification of PBR data, SCE would have avoided PBR penalties. CPSD contends that SCE's own wrongdoing has made unavailable the best evidence of actual customer satisfaction and employee health and safety from 1997 through 2003. The burden of proof is shifted to the defendant where the defendant has destroyed the evidence. (McGee v. Cessna Aircraft Co., 139 Cal. App. 3d 179 (1983).) Because of SCE's data falsification and manipulation, evidence of PBR statistics, favorable or unfavorable, was destroyed, changed, or simply never kept. The Commission must take into account the evidentiary inference that "evidence which one party has destroyed or rendered unavailable was unfavorable to that party."9 (Cedars-Sinai Med. Ctr. v. Superior Court, 18 Cal. 4th 1, 11 (1998).) CPSD concludes that SCE cannot rely on the fact that the evidence is missing and manipulated to avoid the imposition of penalties. Cal. Civ. Code § 3517 is clear: "No one can take advantage of his own wrong."10
In a case involving fraud committed by PG&E, the court stated:
It is inconceivable that the Legislature intended the PUC would be powerless to award reparations where a public utility obtained a tariff rate by fraudulent means. Any other interpretation would fly in the face of the maxims of jurisprudence that "[n]o one can take advantage of his own wrong" (Civ. Code, § 3517), and "[f]or every wrong there is a remedy" (Civ. Code, § 3523). "A court of equity does not allow one to take advantage of his own fraud and will refuse to lend its aid to assist in enforcing a fraudulent imposition upon government, public, or private individuals." (Wise v. Pacific Gas & Electric Co., 77 Cal. App. 4th 287, 300 (1999).)
SCE strenuously opposes any PBR penalty on both legal and factual grounds. As a legal matter, SCE argues that the principles of Ev. Code § 413 do not apply to this case. There has been no "failure to explain or deny," nor has there been any SCE conduct that corresponds to "willful suppression of evidence" as that term has been used in the cases citing Ev. Code § 413. Those cases typically involve the intentional destruction of evidence qua evidence.11 SCE argues CPSD's attempt to apply § 413 fails for two reasons. First, evidence of the planner misconduct - the improperly recorded phone numbers - still exists. Second, in contrast to the typical application of Ev. Code § 413, any loss (or, more accurately, substitution) of data was not in anticipation of its evidentiary value in a future proceeding. Section 413 of the Evidence Code therefore cannot be the source of any negative presumption against SCE. Here there is a wealth of customer satisfaction survey data that remains available for analysis (e.g., survey results before PBR began, survey data after falsification and selling of the survey were stopped, verbatim comments, and customer and job information).
SCE contends there is no factual basis on which to base a PBR penalty. It claims it has been candid in acknowledging that some planners attempted to defraud the customer satisfaction survey process. SCE argues that neither that admission nor the burden of proof should preclude consideration of its evidence that the attempts to defraud had no impact on PBR results, or relieve CPSD and the other parties of their obligation to meet that evidence with compelling proof of their own. SCE concludes by asking: "where in this record is there any evidence that SCE has taken advantage of the misconduct it has itself identified?" (SCE, R.B. p. 15.)
In this investigation, we believe it would be too speculative to impose a PBR penalty. We are not prepared to infer the worst about SCE's actual customer satisfaction performance. We conclude that refunding PBR rewards and forgoing future PBR rewards is a reasonable remedy. We cannot find support for imposing a PBR penalty. However, as discussed in the next section, we conclude that a statutory penalty is required.
18. Fines for Violating Statutes, Commission Decisions, and Rule 1.1
Our authority to impose fines is set forth in the Public Utilities Code as follows:12
2107. Any public utility which violates or fails to comply with any provision of the Constitution of this state or of this part, or which fails or neglects to comply with any part or provision of any order, decision, decree, rule, direction, demand, or requirement of the commission, in a case in which a penalty has not otherwise been provided, is subject to a penalty of not less than five hundred dollars ($500), nor more than twenty thousand dollars ($20,000) for each offense.
2108. Every violation of the provisions of this part or of any part of any order, decision, decree, rule, direction, demand, or requirement of the commission, by any corporation or person is a separate and distinct offense, and in case of a continuing violation each day's continuance thereof shall be a separate and distinct offense.
2109. In construing and enforcing the provisions of this part relating to penalties, the act, omission, or failure of any officer, agent, or employee of any public utility, acting within the scope of his official duties or employment, shall in every case be the act, omission, or failure of such public utility.
The Commission, in D.98-12-075 (84 CPUC2d 155, 182-85), set forth the principles to be applied to the imposition of fines.
The purpose of a fine is to go beyond restitution to the victim and to effectively deter further violations by this perpetrator or others. For this reason, fines are paid to the State of California, rather than to victims.
Effective deterrence creates an incentive for public utilities to avoid violations. Deterrence is particularly important against violations which could result in public harm, and particularly against those where severe consequences could result. To capture these ideas, the two general factors used by the Commission in setting fines are: (1) severity of the offense and (2) conduct of the utility. These help guide the Commission in setting fines which are proportionate to the violation. (D.98-12-075, 84 CPUC2d at 182.)
CPSD recommends a penalty of up to $51.1 million for customer satisfaction PBR violations and up to $51.1 million for health and safety PBR violations. The $51.1 million is computed on daily violations over seven years at $20,000 per violation. (7 x 365 x $20,000 = $51.1 million.)
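CPSD's figure applies the §§ 2107-2108 daily-violation structure at the statutory maximum. A minimal sketch of that computation (the seven-year span and the $20,000 ceiling are taken from the record quoted above):

```python
# Sketch of CPSD's maximum-penalty computation under PU Code
# section 2107 (up to $20,000 per offense) and section 2108
# (each day of a continuing violation is a separate offense).
MAX_PER_OFFENSE = 20_000  # statutory ceiling per offense, section 2107
YEARS = 7                 # violations alleged for 1997 through 2003
DAYS_PER_YEAR = 365

max_penalty = YEARS * DAYS_PER_YEAR * MAX_PER_OFFENSE
print(f"${max_penalty:,}")  # $51,100,000, i.e., $51.1 million
```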
Because of the severity of the violations, the length of time over which the violations occurred, the amounts of money involved, and the culpability of management (both in commission and omission), we find that SCE shall be fined $30 million, as discussed below.
The record of this proceeding demonstrates that SCE has violated several statutes, Commission decisions, and Rule 1.1. These violations are summarized below.
Violation of § 702
Section 702 states:
702. Every public utility shall obey and comply with every order, decision, direction, or rule made or prescribed by the commission in the matters specified in this part, or any other matter in any way relating to or affecting its business as a public utility, and shall do everything necessary or proper to secure compliance therewith by all of its officers, agents, and employees.
In D.96-09-098 (68 CPUC2d 275) we established SCE's PBR for its transmission and distribution base revenue requirements, effective January 1, 1997. In regard to the customer satisfaction and health and safety PBR standards of D.96-09-098, SCE failed to do those things, "necessary or proper to secure compliance therewith by all of its officers, agents, and employees." SCE management encouraged SCE employees to manipulate and falsify data submitted to obtain PBR rewards for customer satisfaction and health and safety.
Violation of § 451
Section 451 states:
451. All charges demanded or received by any public utility, . . . for any product or commodity furnished or to be furnished or any service rendered or to be rendered shall be just and reasonable. Every unjust or unreasonable charge demanded or received for such product or commodity or service is unlawful.

Every public utility shall furnish and maintain such adequate, efficient, just, and reasonable service, instrumentalities, equipment, and facilities, . . . as are necessary to promote the safety, health, comfort, and convenience of its patrons, employees, and the public.

All rules made by a public utility affecting or pertaining to its charges or service to the public shall be just and reasonable.
The charges received by SCE must be just and reasonable. SCE collected $28 million for customer satisfaction PBR rewards and $20 million for health and safety PBR rewards in rates based on data known to management to be false or misleading. SCE also collected $32,714,000 in 2003, 2004, and 2005 in rates for Results Sharing based on data which SCE knew to be false and misleading at the time the rates were requested.
Violation of Rule 1.1 of the Commission's Rules of Practice and Procedure
Rule 1.1 states, in relevant part, as follows:

Any person who signs a pleading or brief, enters an appearance at a hearing, or transacts business with the Commission, by such act represents that he or she is authorized to do so and agrees to comply with the laws of this State; to maintain the respect due to the Commission, members of the Commission and its Administrative Law Judges; and never to mislead the Commission or its staff by an artifice or false statement of fact or law.
3 See Exh. 90, CS 1138, CS 1116, CS 1136, CS 1119, etc.
4 First aid is defined by SCE as relatively minor medical care such as band-aids, non-prescription medication, hot and cold treatment, etc. (Exh. 12, December 3, 2004 Report, p. 1, n. 2.)
5 DRA makes an arithmetic error in its Opening Brief. In footnote 3 of its Opening Brief, DRA indicates that its recommended refund is derived by adding $14.997 million for the partial year 2003 and $24.536 million for each of 2004 and 2005. These numbers sum to $64.069 million, not $64.039 million as indicated by DRA.
6 D.04-07-022, Finding of Fact 169.
7 D.04-07-022, Finding of Fact 178.
8 Percent of Results Sharing revenue requirement represented by customer satisfaction survey and health and safety statistics attributable to TDBU. (Exh. 1, pp. 113-119.)
9 Evidence Code § 413. Party's failure to explain or deny evidence: "In determining what inferences to draw from the evidence or facts in the case against a party, the trier of fact may consider, among other things, the party's failure to explain or to deny by his testimony such evidence or facts in the case against him, or his willful suppression of evidence relating thereto, if such be the case."
10 Galanek v. Wismar, 68 Cal. App. 4th 1417, 1428 (1999); Murdock v. Murdock, 49 Cal. App. 775 (1920): "The fraudulent party cannot himself avert his fraud and claim as his right any advantage resulting from it. To permit him to do so would be to contradict the plainest principles of law. No man can be permitted to found any rights upon his own wrong." (Id. at 783-784.) See also Cal. Civ. Code § 2224: "One who gains a thing by fraud, accident, mistake, undue influence, the violation of a trust, or other wrongful act, is, unless he or she has some other and better right thereto, an involuntary trustee of the thing gained, for the benefit of the person who would otherwise have had it."
11 Spoliation is the destruction of evidence in anticipation of its relevance to pending or future litigation. Williard v. Caterpillar, Inc., 40 Cal. App. 4th 892, 907 (1995); People v. Zamora, 28 Cal. 3d 88, 93 (1980) (destruction of civilian complaints against police officers despite "the knowledge that such records were subject to defense discovery" in resisting arrest case entitled defendant to § 413 presumption); Karlsson v. Ford Motor Co., 140 Cal. App. 4th 1202, 1224 (2006) (auto manufacturer's attempts "to conceal evidence from being used at trial" entitled plaintiff to § 413 presumption). SCE points out that individual planners altered data hoping to keep their scores high; most of them had never even heard of PBR, and certainly this proceeding had not yet begun.
12 Unless otherwise noted, all statutory references are to the Public Utilities Code.