V. Both HM 5.3 and the SBC-CA Models Are Flawed

In comments, workshops, and hearings during the course of this proceeding, Joint Applicants and SBC-CA have lobbed numerous criticisms at each other regarding alleged flaws in HM 5.3 and the SBC-CA Models. These criticisms can be quickly summarized.

The essential criticism of the HM 5.3 model is that it ignores generally accepted engineering and network design standards to instantly construct a brand new, fully functioning network at a single moment in time. SBC-CA contends that, through the use of unrealistic and unsupported inputs, HM 5.3 drastically understates the size of the network, minimizes the costs to maintain it, and lacks the capability to provide all the services that are provided over SBC-CA's network today.

Specifically, SBC-CA contends that HM 5.3 does not adequately represent customer locations, does not comport with how an engineer designs plant, and relies too heavily on subjective judgment for the prices of purchasing and installing network facilities. (SBC-CA/Tariff Decl., 2/7/03, p. 4.) SBC-CA asserts that HM 5.3 does not account for all the costs required to build and maintain the network. For example, SBC-CA claims that HM 5.3 relies on unrealistic labor assumptions to construct a non-functional network that cannot handle all of SBC-CA's customer demand. According to SBC-CA, HM 5.3 fails to account for the substantial costs that carriers incur to accommodate growth and respond to demand changes. As a result, SBC-CA maintains that HM 5.3 provides a "static view" of a network that assumes a level of efficiency that no real carrier can achieve and does not reflect how real-world telecommunications firms operate. Moreover, SBC-CA contends that HM 5.3 fails the cost modeling criteria set forth in the June 2002 Scoping Memo. In particular, SBC-CA alleges that it was not given sufficient access to the intricacies of the customer location process used in HM 5.3.

In contrast, JA contend that SBC-CA's cost models are deeply flawed and do not adhere to TELRIC standards because they rely almost exclusively on embedded data from SBC-CA's legacy network rather than forward-looking network configurations. Further, JA maintain that the SBC-CA Models do not meet the Commission's cost study criteria and do not permit ready adjustment to eliminate these inherent flaws.22

JA allege that SBC-CA's models suffer from structural flaws stemming from a basic misconception of the purpose of competition. JA claim that:


The purpose of local competition is not to ensure that [SBC-CA] is "made whole," or somehow recovers every penny it spends no matter how foolishly. Rather, one of the purposes of competition is to force entrenched incumbents such as [SBC-CA] to become more efficient. In a competitive market, there is no guarantee that a company will recover every dollar it spends. That lack of a guarantee is exactly what forces companies to spend wisely and operate efficiently. (JA, 2/7/03, p. 43.)

JA argue that, in some ways, forward-looking costs overcompensate incumbent carriers because much of the investment in the network to provide UNEs was incurred years ago and the loop plant has long since been fully depreciated. According to JA, "SBC-CA does not incur any incremental investment cost to allow competitors to use that loop plant. Nonetheless, under TELRIC, [SBC-CA] is entitled to recover investment costs for such loop plant as if [SBC-CA] had to install it all over again." (JA, 2/7/03, p. 41.)

We find that both models are flawed and do not allow us complete flexibility to modify inputs and test various outcomes. HM 5.3 uses a customer location database as an input, and this database is built on a set of assumptions we do not necessarily agree with and are unable to modify.

We find the loop modeling and customer location process in HM 5.3 lacks transparency, limits the Commission's ability to test various scenarios, and raises questions about the accuracy of customer locations. Even if we could modify the cluster process used in HM 5.3, we are unsure what effect this would have on its final cost results. In addition, HM 5.3 contains myriad inputs that are at the low end of what we consider reasonable. While we can modify most of these inputs, we were not able to modify all input assumptions to our satisfaction, particularly certain inputs related to labor costs. We are also not able to modify the interoffice transport module of HM 5.3 to overcome the criticisms that it underestimates demand for interoffice transport, may not adequately incorporate optical interface equipment, and is insensitive to demand changes. If we could satisfactorily modify HM 5.3's labor inputs, these changes would most likely increase cost inputs in HM 5.3. If we could modify interoffice inputs related to demand and equipment, we are uncertain what effect this would have on rates. Therefore, the only conclusion we can draw from these areas that we cannot modify is that HM 5.3 may underestimate some labor-related forward-looking UNE costs.

In contrast, the SBC-CA models contain numerous inputs based entirely on the characteristics of SBC-CA's current network operations. SBC-CA claims, "[T]he key to a proper, TELRIC-compliant, long run analysis is to permit all facilities and characteristics to be variable and to assume replacement or change only where it is shown efficient to do so." (SBC-CA, 3/12/03, p. 8.) SBC-CA's approach essentially challenges other parties to prove that its embedded network is not efficient and forward-looking. However, this claim runs counter to FCC requirements that an incumbent LEC bears the burden of proving that its costs do not exceed forward-looking levels. (See 47 C.F.R. 51.505(e).) By using the current network as the starting point and failing to adequately support why the cost data related to the current network should be considered forward-looking, SBC-CA's models run contrary to the definition of TELRIC. Many of SBC-CA's modeling inputs, which include loop investment and design characteristics, expense levels, and labor inputs, have not been sufficiently justified as forward-looking. Some of these inputs can be modified to what we consider forward-looking levels, but many cannot. The inputs we are unable to modify include SBC-CA's loop length assumptions, loop cabling inputs, and numerous inputs embedded in annual cost factors such as structure sharing percentages and labor installation assumptions. Further, we are unable to modify SBC-CA's expense assumptions to remove potential shared and common costs and Project Pronto expenses. Although we make limited adjustments to the SBC-CA models for expenses related to unregulated services, affiliate transactions, and retiree costs, it is unclear if our adjustments adequately remove the overestimates in these areas. Finally, we are unable to modify demand assumptions and other factor inputs in SBC-CA's interoffice transport model. Most of the input modifications we would make to SBC-CA's models would move assumptions from historical levels to levels we consider forward-looking. If we could properly modify loop lengths, loop cabling inputs, structure sharing percentages, labor installation factors, interoffice demand, and Project Pronto and shared and common costs, these changes would most likely lower the SBC-CA model results. Therefore, we find that the SBC-CA models overestimate forward-looking UNE costs.

Thus, although we have undertaken the time-consuming and exhaustive task of modifying many of the inputs used in both models to levels that we conclude are reasonable, there are significant flaws in both models that we are unable to modify, and we are not satisfied that the changes we are able to make completely solve the structural flaws we have identified. Initially, we determined that because we could not rely on the results of either model in its entirety, the logical solution was to average the results of both models. We ran both models with our preferred inputs, found that their results converged to a much narrower range, and used those results to create a "zone" within which reasonable UNE rates lie. The ALJ issued a Proposed Decision that treated the results of each model as an endpoint and adopted the midpoint of the two models' results as SBC-CA's permanent UNE rates. Commissioner Wood issued an Alternate Decision that mirrored this methodology but differed in two modeling inputs.

Parties then filed comments on the Proposed Decision and Alternate Decision. They identified errors made during the Commission's modeling runs and disputed several of the chosen inputs. In addition, the comments suggested that the Commission should reconsider modeling and input changes that were proposed in the parties' filings but apparently overlooked in the modeling runs supporting the Proposed and Alternate Decisions. After reviewing the comments, correcting the errors we agree occurred, and making the modeling changes we find valid, we conclude that the SBC-CA models fail our modeling criteria to such a significant extent that we cannot reasonably rely on them to set UNE rates. This is discussed in more detail in Section V.D below, but we briefly summarize our findings here.

The bulk of the problems appeared when staff attempted to make changes to SBC-CA's annual cost factor module in response to comments from both SBC-CA and JA. These changes related to the cost of capital, affiliate transaction expenses, non-regulated expenses, and building factors. The SBC-CA models required significant effort to pinpoint which inputs to modify and time-intensive manual manipulation to change them, and the modeling results were volatile and sometimes counterintuitive. During these final modeling runs to respond to comments, Commission staff experienced great difficulty in replicating results in the SBC-CA models. On several occasions, Commission staff ran the SBC-CA models multiple times with what they believed were identical inputs, but each run produced different rate results, with basic 2-wire loop rates varying by more than a dollar.23

We find it is unduly burdensome and unreasonable to continue using a model that requires such extensive and time-consuming manual manipulation, is prone to errors when inputs are modified, and produces erratic results that we cannot replicate with a reasonable level of certainty. We find the SBC-CA models do not allow us to derive UNE rates with an acceptable level of confidence, which is a basic modeling requirement. Therefore, we will abandon the approach used in the Proposed Decision and Alternate Decision of using the SBC-CA model results as an endpoint for a zone of reasonable UNE rates. Instead, we will use the HM 5.3 model to set permanent UNE rates for SBC-CA. In the pages that follow, we will describe in further detail the key flaws that we found with HM 5.3 and the SBC-CA models. We will focus our discussion on the major structural flaws identified by the parties, and our conclusions regarding these alleged flaws based on our own staff analysis of the two models. For the most part, this discussion will pertain to those portions of the models that are not easily changed by modifying the inputs. In a separate section, we will discuss the various disputes over modeling inputs and which inputs we have chosen to use in our own modeling runs to determine final UNE rates for SBC-CA using HM 5.3. Although we no longer rely on the SBC-CA models to set UNE rates, the decision still describes the input selections for the SBC-CA model runs we made before abandoning their use.

Fundamentally, Joint Applicants and other parties contend that the SBC-CA Models fail the TELRIC standards set by the FCC. (See JA, 2/7/03, p. 40, ORA/TURN, 2/7/03, p. 9.) The TELRIC methodology is intended to replicate the pricing that would occur in a competitive market if an existing firm had to match the prices offered by a new entrant who would build facilities using the lowest-cost, most efficient technology and network configuration available, assuming the location of existing wire centers. (47 C.F.R. Section 51.505(b).) The FCC TELRIC regulations, as upheld by the U.S. Supreme Court, explicitly state that embedded, or historical, costs shall not be considered when calculating forward-looking UNE costs. (47 C.F.R. Section 51.505(d).)

Generally, we agree with the criticism that SBC-CA's models rely too heavily on SBC-CA's embedded network, both for network configuration and costs. JA contend, and our own analysis shows, that SBC-CA's cost models are replete with embedded inputs and assumptions that are not readily modified to reflect forward-looking costs or configurations. In Sections V.A.1 and V.A.3-5 below, we discuss in detail examples of the embedded network assumptions that we found. In addition, TELRIC requires the calculation of the forward-looking cost over the long run of the total quantity of the facilities and functions attributable to a UNE. (47 C.F.R. Section 51.505(b).) JA claim that SBC-CA's studies "fail to put the `T' in TELRIC." (JA, 2/7/03, p. 48.) Indeed, SBC-CA admits that "we don't develop a TELRIC on a total basis." (Workshop Transcript (TR.), 12/5/02, p. 408.) We found that in some portions of SBC-CA's models, particularly the model for interoffice transport, it was difficult or impossible to determine or modify the total quantity of the facilities or functions upon which the cost modeling was based, as required by TELRIC.24

As we will discuss below, the SBC-CA models merely replicate to a great extent SBC-CA's existing architecture based on historical network design. Overall, we found that we could not make meaningful modifications to many of the SBC-CA model inputs because we could not extract individual inputs from aggregated data, or compare and verify inputs to public information. This prevented us from modifying many of SBC-CA's embedded cost and configuration assumptions, such as loop input assumptions in SBC-CA's loop module known as "LoopCAT," demand assumptions in SBC-CA's interoffice model, and expenses calculated by annual cost factors.25 Although we could modify some of SBC-CA's model inputs, we eventually came to many "dead-ends" and found that we were unable to modify important model inputs to our satisfaction.

While JA provided a detailed "restatement" of the SBC-CA models containing suggestions for modifications to cost factors and engineering assumptions,26 SBC-CA disputed this restatement. We find that JA have pointed out many dubious areas in the SBC-CA models that warrant scrutiny. Indeed, review by Commission staff in many of these areas led to further questions regarding the SBC-CA modeling inputs and assumptions. On the other hand, it is not reasonable for us to accept the JA restatement without resolving the underlying disagreements over modeling inputs and engineering assumptions. The corrections suggested by JA in their restatement are numerous, unclear, and often unsupported. It is not possible in the time allotted to examine each of the almost 100 categories of corrections proposed by JA in over 300 pages of declarations, particularly when the significance or priority of each of these numerous corrections is unknown. We cannot accept the restatement of the SBC-CA models without substantial further review that we cannot reasonably undertake. Instead, our analysis focused on what we considered key flaws and modeling inputs rather than all of the areas outlined by the parties. In a few limited areas, we did attempt to apply these additional corrections, particularly with regard to expenses in the SBC-CA models. Ultimately, these suggested changes to the SBC-CA models became moot when we abandoned use of the SBC-CA models to set UNE rates.

Overall, we find SBC-CA's models estimate the cost to rebuild the network SBC-CA has in place today, with some changes for forward-looking technology, but not necessarily with the lowest cost network configuration. In short, we conclude that the SBC-CA models do not meet the FCC's TELRIC standard and the structural problems inherent in the models do not allow sufficient modification to overcome these flaws. We will now discuss the specific problems that we encountered in each of the SBC-CA models.

In reviewing SBC-CA's LoopCAT module, we found we agreed with many of the parties' criticisms that it does not conform to TELRIC requirements to reflect forward-looking costs based, in part, on the lowest cost network configuration. Below, we discuss these criticisms, which principally relate to LoopCAT's reliance on embedded network data, its design point calculation, and the lack of integration of loop models.

There is no dispute that LoopCAT relies extensively, if not exclusively, on costs and facilities derived from SBC-CA's current network. SBC-CA's witness Sneed gives an overview of SBC-CA's modeling approach and describes how "[t]he investments and network characteristics are based on the actual network in place necessary to serve [SBC-CA's] customers, modified where needed to incorporate forward-looking technology." (SBC-CA/Sneed Decl., 10/18/02, p. 4.) Sneed describes how LoopCAT uses annual cost factors to convert investments into annual costs. As Sneed states, "These factors are based on the costs that [SBC-CA] actually incurs, as these are the best indicator of the forward-looking costs that will be experienced in a network serving California." (Id. pp. 4-5.)

JA criticize LoopCAT's reliance on embedded data, including outside feeder plant routes, plant mix, unit costs of construction, cable sizing, fill factors, and installation costs. According to JA:


The use of embedded data ensures that [SBC-CA] will not model an efficient network, as prescribed by TELRIC, but rather will propose substantially inflated costs. For example, [SBC-CA's] reliance on embedded data for unit costs of construction ignores the economies of scale inherent in the TELRIC "total demand" approach, thereby significantly overstating costs. Similarly, [SBC-CA's] reliance on embedded data causes the inclusion of many undersized pieces of equipment in the network, rather than recognizing that today's demand can be served by far fewer, larger sizes of cable, DLC terminals and FDIs. Thus, again, [SBC-CA] ignores economies of scale that would be inherent in a TELRIC-compliant calculation. (JA, 2/7/03, p. 72.) (Footnotes omitted.)

For example, JA and ORA/TURN contend that LoopCAT's embedded cabling characteristics reflect an aggregation of incremental loop construction over many years, rather than a forward-looking design with cable sized to meet total demand. JA claim that LoopCAT models two 100-pair cables where an engineer would place one 200-pair cable at a lower cost if she were rebuilding the network today to serve current demand. (JA/Donovan-Pitkin-Turner Decl., 2/7/03, para. 25-27.) Thus, JA claim that LoopCAT fails to reflect the fact that today's demand can be served more efficiently and with greater economy of scale through the use of larger equipment. (Id.)

Similarly, TURN's witness Roycroft explains that engineering cost models typically use cable sizing guidelines to identify the capacity of cables needed to provide an efficient network design and a reasonable level of spare capacity. LoopCAT, however, does not use cable sizing conventions that would permit the model to optimize the design of its network. (ORA/TURN/Roycroft Decl., 2/7/03, p. 27-29.) Instead, LoopCAT relies on a mix of embedded outside plant design and hypothetical plant design, neither of which reflects a forward-looking approach. Roycroft alleges that even though users can adjust LoopCAT's fill factors, this does not modify the inventory of cables deployed, and one is asked to assume that SBC-CA's existing network cabling reflects optimum design. (Id., p. 29.) In other words, LoopCAT is structurally incapable of modeling an efficient, forward-looking network, and users cannot modify the model's assumptions to alter fundamental design assumptions. (ORA/TURN, 2/7/03, p. 8.) ORA/TURN contend that:


[SBC-CA] is attempting to turn the world on its head by claiming that a cost model that is based on embedded costs is a forward looking model, and urging the Commission to reject the [HM 5.3] model because it does not employ an embedded costing approach specifically rejected by the FCC. (ORA/TURN, 3/12/03, p. 5.)

Moreover, JA contend that SBC-CA's annual cost factors, or "linear loading factors," which are used throughout LoopCAT to calculate investment costs to engineer, furnish, and install (EF&I) loop facilities, violate TELRIC because they are based on installation activities related to SBC-CA's embedded equipment and embedded network design, and they cannot be properly audited. (JA, 2/7/03, p. 77.) According to JA:


...it is impossible to identify the costs associated with a particular piece of equipment because the linear loading factor is a purported average relationship between embedded installation cost and embedded material cost derived from overly broad categories of equipment. (Id., pp. 77-78.) (Footnote omitted.)

JA maintain that loading factors based on historic data can be problematic because the relationship between material investment and installation activities from historic data may not reflect forward-looking practices. (JA/Donovan-Pitkin-Turner, 2/7/03, paras. 97-98.) Moreover, loading factors can distort installation cost differences based on material prices. In other words, loading factors can make it appear that installation costs rise as material prices rise. (Id. para. 100.)
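To illustrate the distortion JA describe, consider the following simplified sketch. The figures and the Python calculation below are hypothetical assumptions offered only for illustration; they are not drawn from the record or from SBC-CA's actual factors.

    # Hypothetical sketch of a linear loading (EF&I) factor; all figures are
    # illustrative assumptions, not record data.
    embedded_material_cost = 100.0   # historical material cost for an equipment category
    embedded_install_cost = 150.0    # historical installation cost for the same category

    # The loading factor is the ratio of total installed cost to material cost.
    loading_factor = (embedded_material_cost + embedded_install_cost) / embedded_material_cost
    # loading_factor = 2.5: every dollar of material is "loaded" to $2.50 installed.

    # If forward-looking material prices rise 20%, the factor mechanically raises
    # the modeled installation cost by 20% as well, even if the labor required to
    # install the equipment is unchanged.
    new_material_cost = embedded_material_cost * 1.20
    modeled_installed_cost = new_material_cost * loading_factor        # 300.0
    implied_install_cost = modeled_installed_cost - new_material_cost  # 180.0, up from 150.0

Under these assumed figures, the modeled installation cost rises from 150 to 180 solely because the material price rose, which is the distortion JA attribute to loading factors built from embedded data.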

Our review of LoopCAT confirms that it contains embedded data that SBC-CA derived from its current network experience and it is not possible to modify many aspects of LoopCAT to test forward-looking assumptions or differing network configurations.

First, we find that LoopCAT uses embedded cabling characteristics rather than cable sizing conventions. As ORA/TURN point out, the inventory of cables is a fixed input built on the assumption that existing cabling is optimal. SBC-CA has not met its burden of proving that its existing cable inventory, which reflects incremental growth in the network over many years, is optimal if the network were rebuilt today to meet current demand and reasonably foreseeable growth.

We agree with ORA/TURN and the Joint Applicants that the FCC has made clear it rejects embedded cost approaches to modeling. In defining TELRIC, the FCC spoke of "designing more efficient network configurations" and a forward-looking cost methodology wherein a "reconstructed local network will employ the most efficient technology." (First Report and Order, para. 685.) In the FCC's brief defending TELRIC to the Supreme Court, the FCC stated:


The incumbents appear to be proposing a methodology based on "actual" cost in today's market, of duplicating "actual" existing networks in all physical particulars - or stated different, the "application of up-to-date prices to out-of-date properties." Economists, including those upon whom the incumbents rely, uniformly agree that such a measurement is "economically meaningless." The FCC considered, but rejected, such an approach as "essentially an embedded [i.e., historical] cost methodology," which would produce "prices for interconnection and unbundled network elements that reflect inefficient or obsolete network design and technology." (Reply Brief of the Petitioners United States and the FCC, Verizon v. FCC, July 2001, pp. 6-7, (citations omitted); as cited by ORA/TURN/Roycroft, 3/12/03, p. 9.)

We find that LoopCAT's reliance on embedded cable characteristics, and the lack of cable sizing conventions to optimize network design, renders the model incapable of adequately estimating forward-looking costs and directly contradicts FCC guidance that TELRIC should assume reconstruction of the network, based on existing wire centers, in a least-cost configuration. In comments on the Proposed Decision, SBC-CA contends the Commission could modify the cable inventory inputs to LoopCAT. (SBC-CA, 6/1/04, p. 19.) While it is true that the Commission could tinker with the cabling inputs, the lack of an optimization feature is more troubling. We conclude that even if modifications were made to cable inputs, there is no evidence that LoopCAT will generate optimal sizing for a least-cost reconstructed network.

Second, we find that LoopCAT's extensive use of factors prevents us from making meaningful modifications to LoopCAT to test varying input assumptions. Specifically, we could not extract individual inputs from LoopCAT's aggregated annual cost and linear loading factors, or compare and verify individual inputs to public information. While SBC-CA's filings and workpapers traced input costs to SBC-CA's internal accounting codes, we could not match this internal accounting data to SBC-CA's publicly available cost data, i.e., ARMIS27 filings. Thus, we are asked to rely on SBC-CA's historical accounting information without any ability to compare it to public information to verify its reasonableness.

In certain cases, the aggregation of inputs into factors, which are used liberally throughout LoopCAT, means we are not able to dissect the various factors into component pieces to isolate, for example, installation times, crew sizes, or material prices. Hence, we cannot fully understand how SBC-CA derived its investment costs or make meaningful modifications to these factors. For example, LoopCAT uses EF&I factors for pole, conduit, and cable installation, which are critical elements in modeling the loop network. Indeed, SBC criticizes HM 5.3 for its various inputs relating to pole, conduit, and cable installation. Despite criticizing the HM 5.3 model inputs, SBC cannot show how the inputs in LoopCAT compare to those in HM 5.3, particularly for installation times, crew sizes, and material prices. Ultimately, we are asked to accept the factors that SBC-CA has created from its actual data, without knowing the assumptions embedded in them. Further, without knowing the assumptions embedded in the factors, we cannot test the sensitivity of the model to a changed input.

Another example where we disagreed with SBC-CA's input assumptions involves structure sharing percentages.28 Specifically, we wanted to modify LoopCAT's structure sharing percentages to match those used by the FCC in its Universal Service Inputs Order.29 We found that it was not possible to isolate and modify the structure sharing rates that SBC-CA had built into its loading factors for conduit and cable investment. Despite criticism of its input assumptions, SBC-CA states that:


[SBC-CA's] structure sharing factors capture the efficient amount of structure sharing taking place in [SBC-CA] California's network today. [SBC-CA] properly assumes that the current rate of facilities sharing will continue into the future and be equivalent to the rate of sharing in a forward-looking environment. (SBC-CA, 3/12/03, p. 40.)

Noticeably absent from this rebuttal is any indication of how to determine the structure sharing percentages that are embedded in SBC-CA's models. Essentially, we are asked to accept LoopCAT's structure sharing percentages without knowing what they are, or being able to modify them.

Even though SBC-CA faced criticism from other parties for its failure to identify key input assumptions,30 it did not provide assistance in its rebuttal comments to decipher its various factors and inputs. Instead, it repeated its assertions that its input assumptions regarding network characteristics represent the most sensible measure of forward-looking network characteristics. (SBC-CA, 3/12/03, p. 9. See also Id., pp. 40 and 42.) SBC-CA makes this assertion without delving into an explanation of how to decipher its input assumptions to identify crew size, installation time, or material prices. Consequently, we cannot compare, for example, installation crew sizes in the model to what SBC-CA uses today because the data is too aggregated and SBC-CA does not offer information on current practices. Thus, the SBC-CA data has been aggregated to such an extent that we are unable to isolate discrete inputs and determine their validity.

SBC-CA argues that the Commission should accept its modeling approach based on actual costs and factors because its current network is forward-looking. SBC-CA claims that because it has been operating under incentive regulation for over ten years, it has a strong incentive to make economically efficient choices throughout its network, such as in the amounts of spare capacity in its network. (SBC-CA/Tardiff, 2/7/03, p. 9.) SBC-CA contends that when designing a forward-looking network, it is far better to use a model that reflects actual customer locations, actual cable placements, actual employee needs and work times, and the actual size and capacity of the network. (SBC-CA, 2/7/03, p. 7.)

We do not find this argument convincing for several reasons. First, as we have just described, the parties and Commission staff were unable to decipher SBC-CA's various factors to understand what SBC-CA used for its "actual" inputs. Indeed, the FCC noted that ILECs have asymmetric access to cost data and therefore put the burden on ILECs to prove their UNE rate proposals are reasonable and forward-looking. We do not find SBC-CA has met its burden because we cannot decipher its actual cost inputs. Although SBC-CA heavily criticized the inputs in HM 5.3 regarding installation times, crew sizes, and material prices, we cannot compare the HM 5.3 inputs to what SBC-CA assumed for these same items because SBC-CA's assumptions are aggregated into cost factors. This means that we cannot test the sensitivity of the model to a changed input, and we cannot easily compare or replace SBC-CA's inputs with other public information, such as the information used as inputs in HM 5.3.

Second, we find it too simplistic for SBC-CA to assert that the current network has already achieved all efficiencies that are possible, particularly when it did not provide examples so that we can compare actual install times or material costs from its current operations with those built into the SBC-CA models. SBC-CA aggregates current network information into large bundles of inputs and then claims that these input bundles must be correct because they are based on actuals. SBC-CA's witness Tardiff makes high-level comparisons between SBC-CA's current operating costs and HM 5.3 results to attempt to show that SBC-CA's current costs are far different from what HM 5.3 has modeled. (SBC-CA/Tardiff, 2/7/03, p. 19.) We find these comparisons meaningless because we cannot make direct comparisons between SBC-CA's inputs and those used in HM 5.3.

In comments on the Proposed Decision, SBC-CA reiterates its argument that structure sharing percentages and other factors used in its model can be modified. (SBC-CA, 6/1/04, p. 19.) MCI/WorldCom comments that JA provided detailed restatements of many of SBC-CA's modeling inputs that were ignored by the Commission. (MCI/WorldCom, 6/1/04, p. 16.)

We agree with both SBC-CA and MCI/WorldCom that it is technically possible to modify the factors underlying the SBC-CA models, and indeed, JA provided a detailed restatement of LoopCAT that purportedly accomplished that goal. What neither party mentions is the degree of dispute over how to change these factors and the lack of meaningful information from SBC-CA explaining the components of its various factors. SBC disputed the JA restatement at length and, as we have already explained, it is not reasonable, given the public interest in concluding this proceeding in a reasonable time frame, to devote the considerable additional resources needed to resolve the disputes underlying each of JA's detailed restatement suggestions.

Furthermore, while SBC-CA provided guidance to staff on certain factor changes through a data request response, other parties have not had an opportunity to review these proposals. It is simply not possible for the Commission to modify the various factors in the SBC-CA models with any degree of confidence given the level of dispute over the modeling inputs and engineering assumptions. Ultimately, our negative experience attempting to modify SBC-CA's cost factors to adjust expenses renders this issue moot, but at the same time convinces us it is unwise to attempt further factor changes in the SBC-CA models.

Fundamentally, SBC-CA bears the burden of proving that its cost proposals are forward-looking. Since the Commission was unable to decipher many of SBC-CA's input assumptions or satisfactorily modify them, we find SBC-CA has failed to meet its burden.

Furthermore, we find that LoopCAT's loop network configuration is not forward-looking because it combines existing feeder lengths with an approximated loop distribution length. SBC-CA claims that "actual lengths of loops in the networks are used to calculate Loop TELRICs," because loop information is pulled from SBC-CA's Loop Engineering Information System (LEIS) database containing 17 million records, and "LEIS captures the true distances between a customer's premises and Pacific Bells' central offices..." (SBC-CA, Smallwood, 10/18/02, p. 10.) We agree with ORA/TURN and JA that SBC-CA's claim that loop lengths are based on actual distances is misleading. In actuality, LoopCAT does not model loops equivalent to actual loop lengths that exist today, but approximates one distribution loop length for each distribution area based on an engineering concept known as the "design point." Essentially, LoopCAT assumes all loops in a distribution area are one-half the length of the longest distribution loop segment that might be built in the next twenty years.

SBC-CA does not explain in its opening comments how it uses the design point to estimate loop lengths. Only after JA criticized the design point estimation technique did SBC-CA offer the following brief explanation of the concept:


[SBC-CA] estimates its distribution length based on the actual design point information that is contained in its database. The design point reflects the longest possible distribution length in a distribution area. [SBC-CA] makes the reasonable assumption that customers will be distributed throughout a distribution area, and based on that assumption, uses half of the design point length as an estimate of the average distribution length in the area. (SBC-CA/Smallwood, 3/12/03, pp. 66-67.)

At technical workshops in June 2003, SBC-CA further explained the design point and how it was used to approximate loop lengths. In response to questions from Commission staff during the workshops, SBC-CA's witness Smallwood explained that loop lengths in LoopCAT were estimated by adding actual feeder lengths and one-half of the design point distance. (Workshop Tr., 6/26/03, p. 809.) SBC-CA's loop planning guidelines define the design point as "The longest loop in any plant segment, expressed in feet from the CO." (SBC-CA Errata, 5/1/03, LROPP guidelines, p. 103.) These same guidelines explain up front that "[p]lans should be based on growth expectations for the next 20 years." (Id., p. 3.) SBC-CA witness McNeill clarified that the "design point is that existing or potential customer location in the distribution area that's the furthest away from the serving-area interface." (Workshop Tr., 6/26/03, p. 811, emphasis added.) According to McNeill, potential customer locations are projected by SBC-CA engineers based on building permits or discussions with planning commissions and developers. (Workshop Tr., 6/26/03, p. 812.) McNeill hypothesized that 75% of the loops used in the design point calculation are actual customers, and 25% are potential customers. (Workshop Tr., 6/26/03, p. 838.) In other words, LoopCAT assumes all loops in each distribution area are the same length, namely one-half of the maximum projected loop distribution segment, based on a 20-year growth forecast of the longest potential loop. Thus, LoopCAT models all customers in a given distribution area as if they are all exactly the same distance from the central office, and does not employ any weighting or other criteria to assume a varied distribution of loop lengths.
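The following sketch restates the loop-length approximation described above using hypothetical distances. The numbers are assumptions chosen only to show the mechanics of the calculation; they do not come from the record.

    # Hypothetical sketch of the LoopCAT loop-length approximation described by
    # witnesses Smallwood and McNeill; the distances are illustrative assumptions.
    feeder_length_ft = 9_000    # actual feeder length from the central office to the serving area
    design_point_ft = 8_000     # longest existing or potential distribution loop,
                                # based on a 20-year growth plan

    # Every loop in the distribution area is assigned the same modeled length:
    modeled_loop_length_ft = feeder_length_ft + 0.5 * design_point_ft   # 13,000 feet

    # No weighting is applied for where customers actually sit, and the design
    # point itself may be set by a "potential" customer that does not exist today.

Under these assumed distances, every customer in the distribution area is modeled at 13,000 feet from the central office, regardless of actual location.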

There are three major problems with SBC-CA's use of the design point to calculate loop lengths. First, the use of the design point means that loop lengths in LoopCAT are not based exclusively on actual loop lengths, but on an engineer's undisclosed view of possible future loop lengths based on a 20-year growth forecast. We find that a forecast period of 20 years is too long for the purposes of this TELRIC costing exercise and is not reasonable. A 20-year forecast cannot be construed as "reasonably foreseeable short term growth," which is the standard the FCC has used in its own modeling efforts. (Inputs Order, para. 200.)31

Second, we agree with JA that LoopCAT may not correctly determine cable gauge because its calculations are not based on the longest loop served, but on half that distance. (JA/Donovan-Pitkin-Turner, 2/7/03, p. 38, n. 45.) Because the cable gauge is based on an average length that will be shorter than the length of some actual loops, the cable might not provide adequate service to customers with loops longer than the average. A related criticism is that because LoopCAT uses embedded locations and distances for remote terminals, coupled with a hypothetical "design point" distribution length, the model does not have any logic to recognize that some loops exceed the 18,000 foot restriction on copper length for forward-looking loops. Loops that have copper lengths exceeding 18,000 feet will not work without additional equipment such as load coils, which have not been incorporated into the model. (JA, 2/7/03, p. 74.) Indeed, JA claim, and SBC-CA does not dispute, that approximately 100,000 of the loops modeled in LoopCAT will not operate within SBC-CA's own design principles because they are longer than 18,000 feet. (Id., JA 2/7, p. 74; see also Workshop Tr., 6/26/03, p. 819.)

Finally, we are unable to modify the design point in LoopCAT because we have no record-based information on actual loop lengths, and it is uncertain how we would determine the portion of the design point distance that is based on potential future customers. The design point distance and loop length calculations are part of the "preprocessor" to LoopCAT, which Commission staff is not able to run or modify on its own. In other words, even though we disagree with SBC-CA's design point calculation, we cannot run our own version to eliminate the portion of the loop based on "potential" customers and use only the loop lengths of actual customers today.

As a result, we find that SBC-CA's use of the design point to calculate loop lengths results in a loop network design that is not forward-looking and does not use the lowest-cost network configuration.

In comments on the Proposed Decision, SBC-CA contends the Commission could modify the design point. (SBC-CA, 6/1/04, p. 19.) Again, it is true that the Commission could pick a different length for the design point and re-run the preprocessor and LoopCAT. However, SBC-CA fails to acknowledge that any modifications the Commission would make to the design point would be highly arbitrary since we have no record basis for separating actual loop lengths from potential ones. Thus, even though we know that approximately 25% of the loop lengths upon which the design point is based are not actual loops in place today, we do not know the length by which the design point is inflated. It is simply not possible to make any meaningful modification to the design point distance because there are no facts in the record to help us discern SBC-CA's actual loop lengths today.

We find that LoopCAT does not mirror a forward-looking network because it does not attempt to model multiple dwelling units (MDUs). We agree with JA that LoopCAT inappropriately inflates costs for residential loops by installing network interface device (NID) and drop equipment to terminate six lines for every residence served by SBC-CA in California, rather than modeling the appropriate premise termination equipment for multiple-dwelling units that make up a large percentage of households served in California. By not including the appropriate equipment for MDUs, the SBC-CA model inflates loop costs by assuming each residence requires termination for six lines and that each customer account requires a separate drop. (JA, 2/7/03, p. 17.)

In comments on the Proposed Decision, MCI/WorldCom maintains that JA proposed a modification for this problem. (JA/Donovan-Pitkin-Turner, 2/7/03, paras. 222 through 232.) SBC-CA agrees that a fix is possible and proposes its own methodology. (SBC-CA, 6/1/04, p. 20.) We will not make a modification to LoopCAT to account for MDUs because we find it illogical to make LoopCAT more exact in this area than HM 5.3. Our review of HM 5.3 shows that it does not model premise termination equipment for MDUs either. Since our initial aim was to run both models with nearly identical inputs and assumptions, we disagree with the concept of modifying the SBC-CA models to account for MDUs while leaving HM 5.3 unchanged. Furthermore, since we are no longer relying on the SBC-CA models to set UNE rates, this entire issue is moot.

JA contend that SBC-CA's cost models are not integrated for 2-wire, DS-1, and DSL loops. Instead, SBC-CA's models calculate costs for these loops on a stand-alone basis. JA contend that this lack of integration distorts costs, and violates TELRIC and CCPs 8 and 9 which require consistent treatment of costs across all services and elements. By artificially segmenting its cost studies for basic loops, DS-1 loops and DS-3 loops, SBC-CA ignores the efficiencies of sharing facilities that TELRIC requires. (JA, 2/7/03, p. 76.) For example, JA contend that 2-wire loops and DS-1 loops share the same structure, such as poles, conduits, and trenches. Similarly, 2-wire loops, DS-1 loops, and DSL loops share the same DLC systems, and all services share the same central office facilities. (Id., pp. 75-76.)

We find that SBC-CA's failure to integrate all of its services in its cost studies overstates forward-looking costs by ignoring the fact that several services share much of the same network infrastructure. Because of this failure, SBC-CA's models do not reflect the full effect of the economies of scope and scale within SBC-CA's network. We agree with JA that SBC-CA's failure to integrate its various loop models and capture the network effects of this total demand inflates the true per-unit cost of these UNEs and is an impermissible departure from TELRIC principles.

Joint Applicants criticize SBC-CA's switching investment cost module known as "SICAT," contending it is not forward-looking because it uses a short run approach to determine the amount of switching investment. Specifically, JA contend SICAT is based on average purchases over a five year period (1998 through 2002), which, in most cases, involves the higher cost to add a growth line to an existing switch. (JA/Ankum Decl., 2/7/03, para. 112-117.) According to JA, the SICAT model produces a higher short run average cost for switching investment, which is then applied to the capacity to serve the entire network. In this fashion, SICAT overstates long run switching costs. (JA, 2/7/03, p. 42.)
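A simplified sketch, using hypothetical per-line prices and purchase mixes, shows the effect JA describe: averaging recent purchases that are weighted toward higher-priced growth lines, and then applying that average to every line in the network, raises modeled switching investment. The figures below are assumptions for illustration only and are not drawn from the record.

    # Hypothetical illustration of the short-run averaging concern raised by JA;
    # the per-line prices and purchase mixes are assumptions, not record data.
    new_line_price = 100.0      # assumed per-line cost when lines are bought with a new switch
    growth_line_price = 175.0   # assumed per-line cost of adding a growth line to an existing switch

    # Recent purchases weighted toward growth lines (the short-run mix JA criticize):
    recent_mix_avg = 0.20 * new_line_price + 0.80 * growth_line_price    # 160.0 per line

    # A long-run rebuild to serve total demand would be weighted toward new lines:
    long_run_mix_avg = 0.80 * new_line_price + 0.20 * growth_line_price  # 115.0 per line

    total_lines = 1_000_000
    overstatement = (recent_mix_avg - long_run_mix_avg) * total_lines    # 45,000,000 on assumed figures

On these assumed figures, applying the recent-purchase average to the entire network would overstate switching investment by roughly $45 million, which illustrates, but does not quantify, the concern JA raise.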

In addition, JA contend that SICAT is not based on California specific switching information, but is instead based predominantly on switching cost investments from the other states in which SBC-CA operates. (Id., pp. 87-88.) Specifically, SICAT develops per line costs based on recent purchases in other states. Thus, in JA's view, SICAT is not sufficiently based on California demand nor does it attempt to identify the number or type of switches necessary to serve California. (JA/Ankum, 2/7/03, paras. 11 and 121-127.) Given SICAT's short term purchasing period and its use of information from other states, JA maintain that SICAT does not model a network designed to meet total demand, but simply calculates a per-line average cost of switches based on non-California data and then improperly uses that average to calculate switch investment.

We find that these two principal disputes with SICAT can be addressed by modifying SICAT inputs. Specifically, in our runs of SICAT we have changed the input assumptions regarding the percentage of new and growth lines purchased over the modeling period. This should address JA's concern that SICAT uses too high a percentage of higher priced growth lines. We are less concerned that SICAT is not based exclusively on California switching information. Our own review shows that SICAT contains a mix of California switching data and pricing information from SBC's multi-state switching contract. In persuading the Commission to reexamine UNE switching rates, JA argued that the multi-state switching contract allows SBC-CA to obtain a better price for its switching purchases than if SBC-CA negotiated and purchased for its California network alone. (A.01-02-024, 2/21/01, p. 8.) We find this a reasonable assumption, and we are not persuaded that SICAT is fatally flawed because it incorporates some non-California switching information.

SBC-CA uses the "SBC Program for Interoffice and Circuit Equipment" (SPICE) to identify UNE rates for dedicated transport and SS7 links. JA claim that SPICE violates TELRIC because it relies entirely on SBC-CA's embedded network rather than a forward-looking one, and is not constructed based on a determination of total network demand. (JA, 2/7/03, p. 98.) Rather, SPICE determines investment based on a database of the existing circuits in SBC-CA's current network without demonstrating that this embedded network reflects the total demand for each service and all UNEs supported by the network. (Id.)

As a result, JA maintain that SPICE produces flawed results because it proposes costs significantly higher than the prior OANAD rates, without sufficient explanation or justification, during a period of productivity gains in telecommunications technology. (Id., p. 94.) Further, JA contend that SPICE limits the ability of the parties to propose changes to inputs and assumptions in order to modify costs. For example, there is no way to ensure the SPICE model has considered all possible routes that could be the "least-cost" path or to modify the structure sharing assumptions embedded in SPICE. (Id., pp. 96-97.) Nevertheless, JA witnesses Mercer and Murphy propose modifications to a few of the major SPICE inputs in an attempt to restate its results. (JA/Mercer-Murphy, 2/7/03, p. 40-51.)

SBC-CA responds that SPICE is based on SBC-CA's current total network demand for interoffice transport circuits, and SPICE assumes that a forward-looking interoffice network would mirror SBC-CA's existing network. (SBC-CA, 3/12/03, p. 74.) SBC-CA counters JA's allegations that costs are declining for interoffice transport by noting that per circuit investments actually increased slightly from 1998 to 2001. (Id., p. 75.) SBC-CA contends that the "least cost path function" in SPICE reconfigures circuit paths to choose the least cost route. (Id.) Moreover, SBC-CA disputes JA's proposed modifications to the SPICE model, arguing the proposed modifications lack support. (SBC-CA/Cass, 3/12/03, p. 37-40.)

We agree with JA that SPICE does not meet TELRIC requirements. In our own review of SPICE, we were unable to determine the level of demand that it is designed to serve so that we could vary it and check the model's sensitivity. Essentially, to borrow a phrase coined by JA, SPICE "fails to put the `T' in TELRIC." At the technical workshops, SBC-CA's witness Cass was questioned extensively on how one could determine the total investment modeled in SPICE and the apportionment of that investment based on demand for certain services. Cass admitted that the SPICE model is not based on total investment. (Workshop Tr., 12/5/02, pp. 439-441.) Cass stated that it was not possible to pull a total investment figure out of SPICE without making demand assumptions because SPICE starts with the network in place today to serve all SBC-CA customers and calculates a per unit "node investment." (Id., 12/5/02, pp. 437-439.) Cass responded that the only way to determine total investment was to make assumptions about demand. (Id., p. 438.) Commission staff also inquired how to segment the interoffice demand between voice services and other advanced and unregulated services that use the interoffice network. SBC-CA's witness stated that it was not possible to segment demand in this manner. (Workshop Tr., 6/24/03, pp. 557-558.) We find it unreasonable that we cannot determine the total investment modeled by SPICE or the demand SPICE is intended to serve.

Essentially, SBC-CA asserts that its embedded network is a priori a forward-looking efficient network. (JA/Mercer-Murphy Decl., 2/7/03, para. 7.) When describing inputs to the SPICE model, SBC-CA states that actual data is used "because the facilities utilization of an efficient firm today is the best estimate available of the facilities utilization that an efficient firm will have in the forward-looking environment." (SBC-CA/Cass Decl., 10/18/02, p. 11.) In other words, SBC-CA claims that the characteristics of its existing network, including its current utilization level and the demand it is designed to serve, are automatically forward-looking, without giving us the ability to know what that demand level might be. We do not accept the unsupported assertion that SBC-CA's current network is automatically forward-looking, particularly when we cannot determine the demand SPICE serves in order to test differing assumptions.

Finally, we agree with JA that SPICE contains other inputs that are difficult to understand or modify. For example, SPICE uses historic structure sharing levels that it does not identify and that cannot easily be modified without knowing the assumptions embedded into SBC-CA's factors. (JA/Mercer-Murphy, 2/7/02, para. 21-23.) In addition, SPICE uses pole and conduit factors derived from SBC-CA's embedded network that correlate cable investment and structure investment (i.e. the more expensive the cable, the more expensive the structure) without evidence to support this correlation. (Id., para. 24-26, and 27.) Further, EF&I factors in SPICE are based on historical network data without showing a direct causal relationship between equipment costs and installation costs. In other words, SBC-CA's EF&I factors assume that more expensive equipment is automatically more expensive to install. (Id., para. 78.) We find these characteristics of SPICE are problematic. Similar to our discussion of the flaws in LoopCAT, we find that the use of factors in SPICE aggregates inputs into bundles that we cannot dissect in order to understand the underlying inputs, compare them to other public information or the inputs SBC-CA criticizes in HM 5.3, or test the effect of different input assumptions.

We find merit in portions of the suggested restatement of SPICE provided by JA, which are limited to changes to a few key inputs. In our final modeling runs of SPICE before abandoning the SBC-CA models, we made further changes to SPICE so that our two interoffice transport models, HM 5.3 and SPICE, ran with similar inputs. In their analysis, JA witnesses Mercer and Murphy modified two key fill factors in SPICE and two EF&I factors. In our model runs of SPICE, we find it reasonable to incorporate some of these changes in order to model forward-looking network utilization. Namely, we agree with JA to modify the SONET and common equipment fill factor from 58% to 85%. We find this higher percentage reasonable given that an even higher fill factor was used by SBC in TELRIC modeling in other states and by the FCC in its modeling. (JA/Mercer-Murphy, 2/7/03, p. 41-43.) We modified the fiber fill factor in SPICE to 54%, which is an average of the fill factors used in other SBC states, as provided by JA. (Id., p. 43-44.)
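The significance of the fill factor can be seen in a simple sketch: the per-unit investment a model assigns to a circuit varies inversely with the assumed utilization, so raising the fill factor from 58% to 85% materially lowers the modeled cost per circuit. The equipment cost and capacity figures below are hypothetical assumptions used only for illustration.

    # Hypothetical illustration of how the assumed fill (utilization) factor
    # affects per-circuit interoffice investment; the figures are assumptions.
    equipment_investment = 100_000.0   # assumed installed investment for a transport node
    capacity_circuits = 1_000          # circuits the equipment could carry at 100% fill

    def per_circuit_investment(fill):
        # Investment is spread over the circuits actually assumed to be in use.
        return equipment_investment / (capacity_circuits * fill)

    cost_at_58_percent_fill = per_circuit_investment(0.58)   # about $172 per circuit
    cost_at_85_percent_fill = per_circuit_investment(0.85)   # about $118 per circuit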

With regard to EF&I factors in SPICE, we agree with JA that the circuit equipment EF&I factor should be lowered because SBC-CA's factor is out of line with the factors it has proposed in other states. It is reasonable to run SPICE with a circuit equipment EF&I factor of 2.6, which is an average of the factors modeled in other SBC states.32 We did not ultimately implement this change because we abandoned the SBC-CA models before making it.

As we have already discussed, SBC-CA uses Annual Cost Factors (ACFs) to convert the investments in its models into annual costs and expenses. In simplest terms, ACFs are ratios of capital costs and operating expenses per dollar of plant investment, built on the assumption that capital costs and expenses have a direct relationship with investments. (SBC-CA/Cohen, 10/18/02, p. 2.) There are four types of ACFs in SBC-CA's cost studies: (1) capital cost factors, (2) operating expense factors, (3) investment factors, and (4) inflation factors. JA and other parties provide numerous criticisms of the ACFs and expense calculations in the SBC-CA models, which we now describe.
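In concept, applying an ACF is a single multiplication of a modeled investment by a factor. The sketch below uses assumed values solely to show that mechanic; it is not drawn from SBC-CA's cost studies.

    # Hypothetical sketch of how annual cost factors (ACFs) convert a modeled
    # investment into annual costs; the factor values are illustrative assumptions.
    modeled_investment = 1_000_000.0   # investment produced by a module such as LoopCAT

    capital_cost_factor = 0.12         # capital costs (e.g., depreciation, cost of money, taxes) per dollar of plant
    operating_expense_factor = 0.08    # operating expenses per dollar of plant

    annual_cost = modeled_investment * (capital_cost_factor + operating_expense_factor)
    # annual_cost = 200,000 per year on this assumed investment

Because the entire expense calculation flows through these ratios, the reasonableness of the resulting UNE costs depends directly on what is bundled into the factors, which is the focus of the parties' criticisms described below.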

JA criticize SBC-CA's cost model for its use of ACFs to calculate the expense portion of UNE costs. JA claim that these ACFs contain numerous computational errors and incorrectly assume that SBC-CA's 2001 ARMIS expense data, on which the ACFs are based, is efficient and forward-looking. (JA, 2/7/03, p. 101.) JA allege that SBC-CA failed to make forward-looking adjustments to its historical expense data to reflect future savings from potential technological innovations and cost savings from corporate mergers. (Id., p. 105.)

JA witnesses Brand and Menko provide a 100-page declaration detailing their allegations of eighteen categories of methodological errors in the development of SBC-CA's ACFs. They suggest twelve categories of corrections, and an additional six categories where corrections are not possible because of insufficient data. (JA/Brand-Menko, 2/7/03, p. 24.) According to Brand and Menko, if all of their suggested corrections are made, SBC-CA's ACFs should be reduced by 44.9%. (Id., p. 101.) XO joins JA in criticizing SBC-CA's modeling of expense factors related to support assets, building expenses, and shared and common costs. (XO, 2/7/03.)

Generally, SBC-CA responds that it is reasonable to assume that its baseline current expense and investment data reflect those of an efficient provider given the discipline imposed on SBC-CA by regulatory, shareholder, and competitive pressures. (SBC-CA, 3/12/03, p. 28.) SBC-CA disputes the eighteen categories of corrections proposed by JA's Brand and Menko and contends that any adjustments to current expenses would be speculative.33

Both models use factors to estimate expense levels based on investments. We do not find that SBC-CA's use of a factor approach is, by itself, a flaw. We are not as troubled by SBC-CA's use of embedded ARMIS data to calculate expenses as we are by the fact that the factors do not allow us to isolate and understand individual input assumptions, or to compare and verify inputs against public information such as ARMIS or against the inputs that SBC-CA criticizes in HM 5.3. Once again, as in LoopCAT and SPICE, we find ourselves having to rely on SBC-CA's aggregation of its historical accounting information into factors without understanding individual input assumptions. Therefore, we reiterate our finding that the ACFs SBC-CA uses to estimate expenses cannot be disaggregated to understand the underlying inputs or to compare them to other public information.

While JA propose detailed corrections to SBC-CA's ACF model, it is not reasonable to review each of the eighteen disputed expense categories in the time allotted, particularly when the adjustments themselves are contested. As with JA's detailed restatement of LoopCAT, the Commission will focus on the most significant expense categories where the suggested corrections are reasonably explained and where modifications can be easily implemented. In the sections below, we address the most significant of the categories raised by JA and XO and either modify the SBC-CA ACF model or describe why that is not possible.

JA and XO maintain that because SBC-CA has abandoned the methodology used to derive costs in the prior OANAD proceeding, the newly derived costs are not coordinated with the prior cost study. In particular, certain expense categories from the prior OANAD proceeding are now used to develop SBC-CA's annual cost factors, even though these same expense categories were used to derive the 21% shared and common cost mark-up percentage in the prior OANAD proceeding. (JA/Brand-Menko, 2/7/03, pp. 40-44.) According to JA, SBC-CA's witness Cohen confirmed that no adjustments were made to SBC-CA's ACFs to remove shared and common costs. (Id., p. 42.) Thus, JA and XO contend that SBC-CA's current cost studies include some portion of shared and common costs, and therefore, double counting occurs when the 21% shared and common cost markup is added to these new UNE costs. (JA, 2/7/03, p. 53; XO, 2/7/03, p. 42.) XO contends that if the Commission employs SBC-CA's cost models to set UNE prices, it cannot use the existing 21% shared and common cost markup, but must undertake a review of the markup separately. (XO, 2/7/03, p. 46.)

In response, SBC-CA confirmed that "[SBC-CA] employed its standard approach for deriving ACFs without explicitly analyzing revised [shared and common] costs because those are not at issue in this proceeding." (SBC-CA/Makarewicz, 3/12/03, p. 22.) SBC-CA maintains that no party has performed an analysis to confirm that shared and common costs are included in SBC-CA's ACFs. SBC-CA contends it is equally possible that costs not recovered by the shared and common cost markup were inadvertently excluded from the ACF analysis and are not recovered anywhere. (Id.)

We agree with JA and XO that there is reason to be concerned whether the current cost studies, which use a different methodology than the prior OANAD cost studies, may incorporate shared and common expenses that are already accounted for in the 21% markup. There is no dispute that SBC-CA has used a different cost methodology in this proceeding as compared to the prior OANAD proceeding, and SBC-CA confirms that it did not attempt to reconcile the shared and common costs currently collected through the 21% markup with the direct UNE costs calculated through the ACF study it proposes here. A lack of analysis and hard evidence of double-counting does not mean that the potential for it does not exist. During a deposition, SBC-CA witness Smallwood testified that SBC-CA's multi-state cost study was designed to comply with FCC directives and recover as much of a UNE's direct incremental cost as possible to reduce common costs. JA are correct that this is a different approach than was used in the prior OANAD proceeding, where the Commission noted that the OANAD treatment of shared and common costs was contrary to the FCC directive. (JA/Murray Decl., 10/18/02, pp. 34-35, citing D.98-02-206, mimeo. at 18, n. 24.) This means that SBC-CA's proposed UNE costs may now categorize some costs as direct UNE costs that had been considered shared and common costs when the 21% markup was determined in the prior OANAD proceeding. Smallwood admits that SBC-CA made no adjustments to the annual cost factors in its new cost studies to recognize costs that are included in the existing shared and common cost markup. (XO, 2/7/03, p. 42.) Thus, it is reasonable to conclude that SBC-CA's ACFs contain some portion of shared and common costs.
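
Although the record contains no agreed quantification, the arithmetic of the double-counting concern can be illustrated with hypothetical figures. The sketch below shows how an unadjusted markup can recover a reclassified shared cost twice; none of the dollar amounts are drawn from the record.

```python
# Hypothetical arithmetic of the double-counting concern; dollar figures are invented.
shared_common_markup = 0.21          # the existing 21% markup from the prior OANAD proceeding

direct_cost_prior_method = 100.0     # direct UNE cost under the prior cost allocation
reclassified_shared_cost = 5.0       # shared/common expense the new ACF method treats as direct

# New methodology: the $5 moves into the direct cost base, but the 21% markup
# (derived when that $5 was treated as shared and common) is left unchanged.
direct_cost_new_method = direct_cost_prior_method + reclassified_shared_cost
price_prior = direct_cost_prior_method * (1 + shared_common_markup)   # 121.00
price_new = direct_cost_new_method * (1 + shared_common_markup)       # 127.05

# The $5 is recovered once as a direct cost and again through the markup pool it
# was never removed from, plus the markup applied on top of it.
print(price_prior, price_new)
```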

XO contends that if the Commission adds a 21% markup to the UNE costs resulting from SBC-CA's models, it should either adjust the ACFs to mitigate the impact of this double counting, or undertake a new markup calculation. We have stated several times that we would not review the 21% markup in this limited proceeding.34 If we were to make any adjustments to SBC-CA's ACFs to remove shared and common costs, they would be highly speculative. We find that this is yet another area of the SBC-CA models that we are not able to modify to our satisfaction in order to derive reasonable UNE costs.

In comments on the Proposed Decision, JA argue that the Commission ignores suggested adjustments to the SBC-CA models to remove double counting of shared and common costs. JA witnesses Brand and Menko describe how they reduced SBC-CA factors by 5.2% to eliminate costs they contend are already recovered in the 21% markup. (JA/Brand-Menko, 2/7/03, p. 43 and p. 101.) XO comments that the Commission ignores its detailed recalculation of the markup factor to 7.9%. (See XO/Montgomery, 2/7/03, pp. 41-46.)

We find Brand and Menko's description hard to follow on this subject, and it is unclear what expenses they have removed. We decline to adjust the SBC-CA factors as JA propose on this topic because we do not understand the basis of the changes. Moreover, this entire discussion is moot now that we abandon use of the SBC-CA models. Clearly, the potential for double-counting of shared and common costs exists and is a major flaw in the SBC-CA models, and SBC-CA provides no assurance that it has adequately addressed this criticism. Nevertheless, because we have decided to abandon use of the models, we need not resolve the suggested corrections to the SBC-CA models with regard to shared and common costs.

JA contend that SBC-CA has not eliminated certain expenses from its ACF cost study such as i) non-regulated expenses unrelated to UNEs, ii) affiliate transaction expenses, iii) DSL-related Project Pronto35 expenses, and iv) annual amortization of post-retirement benefits, known as the "Transitional Benefit Obligation" (TBO).36 JA maintain that all of these expenses are inappropriate to include in a study of forward-looking recurring UNE costs because they are either not current operating costs or are costs related to unregulated activities. (JA, 2/7/03, p. 104.) In addition, JA contend that SBC-CA overstates expenses related to central office building space.

JA contend that SBC-CA included investments and expenses related to its non-regulated activities when it developed its ACFs. These non-regulated activities include services related to customer premises equipment, inside wire maintenance plans, and billing and processing of third-party customer bill payments. (JA/Brand-Menko, 2/7/03, pp. 25-26.) According to JA, they verified that these unregulated expenses are included by comparing ARMIS reports for regulated and unregulated expenses with the inputs used in SBC-CA's ACF cost study. (Id.) JA propose removal of these expenses, as identified through ARMIS reports, which reduces SBC-CA's cost factors by 6.7%. (JA/Brand-Menko, 2/7/03, pp. 25-27 and p. 100.)

SBC-CA responds that it appropriately relied on total expenses and investments when calculating per unit expense factors. (SBC-CA/Makarewicz, 3/12/03, p. 9.) For example, SBC-CA claims that its cable maintenance cost per unit will remain the same regardless of what service is using the cable. SBC-CA provides the analogy of a salesperson who has a company car that is used both for business (regulated) and personal (non-regulated) purposes. SBC-CA reasons as follows:


Accurate per mile maintenance expenses for the vehicle are calculated using data for its entire use rather than the "arbitrary" distinction between business use and personal use. Likewise, it is appropriate for [SBC-CA] to use total account balances for plant expenses and investments to calculate its ACFs, and to apply the same ACFs to measure the costs for any services/elements whose provisioning relies on plant investment and expense. (Id., p. 10.)

We disagree with SBC-CA's reasoning. We doubt that the company in SBC-CA's example, which issues a car for business use, would happily pay higher maintenance costs if the salesperson used the company car for excessive personal use. Likewise, we do not agree that expenses SBC-CA incurs for its unregulated businesses, such as inside wire maintenance, or billing services to third parties, should be considered when determining the expenses related to its UNE operations. SBC-CA admits that it has included expenses related to unregulated activities in developing its ACFs. We will adopt the modification proposed by JA to SBC-CA's ACF model to eliminate expenses for non-regulated activities, which is based on a comparison of regulated and non-regulated ARMIS data.

JA contend that SBC-CA inappropriately includes expenses related to transactions with its affiliated companies in its ACFs, both for services sold by SBC-CA to its affiliates, and for services purchased by SBC-CA from its affiliates. JA note that expenses related to affiliate transactions have increased four-fold since the merger of Pacific Telesis and SBC Communications in 1997. (JA/Brand-Menko, 2/7/03, pp. 28-30.) JA maintain that $1.1 billion in expenses related to providing services to SBC affiliates should not be attributed to California UNEs. (Id., p. 32.) Further, they propose that costs for services purchased from affiliates be adjusted to forward-looking levels because SBC-CA pays for its affiliate purchases at "fully distributed cost," which is an embedded cost methodology. (Id., p. 34.) JA suggest a 30% disallowance for purchases from affiliates to adjust for what JA contend are inflated transaction costs. (Id.)

SBC-CA does not deny that expenses related to affiliate transactions are included in its 2001 expense information that was used to develop its ACFs. SBC-CA contends no adjustment is needed for services purchased from affiliates, such as payroll processing, procurement, fleet operations, and information technology, because SBC-CA pays the lower of fully distributed cost or fair market value for its purchases. (SBC-CA/Henrichs, 3/12/03, p. 15.) We find that SBC-CA has provided sufficient justification that no adjustment is required to its ACFs for services purchased from affiliates.

SBC-CA does not respond to the JA argument that expenses for items sold to affiliates are not related to provisioning UNEs. We find that it would be unreasonable for SBC-CA's ACFs to include expenses related to services SBC-CA has performed on behalf of its affiliates and which do not relate to the provisioning of UNEs. SBC-CA has not met its burden of proof to assure us that these costs have been adequately removed from its ACFs. In our final runs of the SBC-CA models, we removed approximately $301 million in affiliate transaction expenses from SBC-CA's ACF study. This differs from the $1.1 billion cited by JA because upon review, the workpapers of their witnesses Brand and Menko did not support removal of any more than $301 million. (JA/Brand-Menko, 2/7/03, Ex. TLB-AM-REP-2.)

With regard to Project Pronto, JA and XO claim that SBC-CA's ACFs include a portion of Project Pronto costs incurred in 2001 that relate to development of SBC-CA's broadband network and are not required for the provisioning of UNEs. (JA/Brand-Menko, 2/7/03, pp. 58-62; XO, 2/7/03, p. 17.) According to JA and XO, Project Pronto is a major technological upgrade, and SBC-CA is incurring higher than normal costs in the early years of this upgrade, with the expectation of cost savings later. JA admit that a portion of Project Pronto costs is for overall network efficiency and should be included in ACFs, but they contend this amount is overstated because SBC-CA made no adjustment to 2001 expense levels to acknowledge high start-up expenses and account for future cost reductions from Project Pronto. (JA/Brand-Menko, 2/7/03, para. 125.) XO contends that the expense data SBC-CA uses in its cost studies reflects those early years of Project Pronto implementation but fails to consider future Project Pronto savings. JA claim that Project Pronto costs are inextricably melded into SBC-CA's 2001 expense information, and SBC-CA admits it cannot identify separate Project Pronto costs.

SBC-CA does not deny that the 2001 expenses used for its ACFs include Project Pronto expenses, or that it cannot separate these costs from the expenses used for its ACFs. Rather, SBC-CA makes the argument that the same equipment used for Project Pronto can also provide voice service; thus, expense levels for this equipment are the same whether it serves voice or DSL service. (SBC-CA/Makarewicz, 3/12/03, p. 18.) On the charge that future Project Pronto savings are not incorporated, SBC-CA says that it has already incorporated Project Pronto savings by modeling fiber loop plant and lower maintenance factors associated with fiber. (Id., pp. 17-20.)

We find that SBC-CA has not met its burden to show that Project Pronto expenses were appropriately accounted for when developing its ACFs. SBC-CA admits it cannot separate Project Pronto expenses from the total 2001 expense information used to develop its ACFs. We agree with JA that some Project Pronto costs likely contribute to overall network efficiency, but not necessarily all of them. SBC-CA offers us no ability to allocate the expenses between its voice and broadband services. We do not agree with SBC-CA's apparent assumption that all Project Pronto costs should be allocated to UNEs. SBC-CA argues that future efficiencies for Project Pronto are accounted for in its models through lower fiber maintenance ACFs. We do not find this argument convincing because, without an allocation of Project Pronto expenses between voice and broadband services, we cannot be assured that SBC-CA's fiber maintenance expenses would not actually be lower if some were properly attributed to the broadband network. Thus, SBC-CA has not met its burden of justifying these expenses as forward-looking. We are unable to adjust ACFs to remove potentially inflated Project Pronto expenses.

The next disputed category, the TBO, concerns an accrual for an accounting change that took place over a decade ago. As SBC-CA explains, the TBO accrual is a "liability for future retiree medical costs already earned by current and former employees [as of 1/1/93] but not yet paid...." (SBC-CA/Cohen, 3/12/03, p. 15.) In other words, it is an amortization of expenses SBC-CA would have shown on its books long ago if SFAS No. 106 had been in effect prior to 1/1/93.

JA contend that the TBO represents the amortization of an embedded cost resulting from past activities that are not forward-looking. The TBO that was recorded in 1993 has nothing to do with the costs of an efficient carrier today. (JA/Brand-Menko, 2/7/03, pp. 44-47.) SBC-CA contends these are forward-looking expenses that are appropriately treated as shared and common costs. According to SBC-CA, it removed approximately one third of its estimate of total TBO expenses from its ACF study, explaining that it did not remove the full amount because it is amortized over 20 years. (SBC-CA/Cohen, 3/12/03, pp. 19-20.) JA contend that SBC-CA should have removed over $80 million, and the amount SBC-CA admits removing is far below this. (JA/Brand-Menko, 2/7/03, p. 46.)

We agree with JA that this particular TBO accrual is not a current operations cost, since it essentially represents a catch-up for expenses that were not accounted for when they were incurred. TBO costs should not be included in SBC-CA's ACFs. From the meager record on this issue, we conclude that the adjustment SBC-CA made was too small because it removed only a portion of the full TBO amount. For our SBC-CA model runs, we removed the total TBO amount identified by SBC-CA. (See SBC-CA/Cohen, 3/12/03, p. 20.) We did not use the amount suggested by JA because, as SBC-CA explains, it includes expenses from many accounts that are not included in the cost factors. (SBC-CA/Cohen, 3/12/03, p. 19.) In addition, we are not reviewing the shared and common cost markup in this proceeding, so we will not address whether TBO costs are appropriate to include in the shared and common cost markup.

XO claims that SBC-CA's building expense factor is exceptionally high and shows a 120% increase from 1999 to 2001, while ARMIS data does not show an acceleration of investment in buildings by SBC-CA. (XO, 2/7/03, p. 37.) JA contend that SBC-CA inappropriately assumes that all of its embedded building space for central offices and other buildings should be assigned to UNEs and ignores its OANAD admission that forward-looking central office buildings require less space than SBC-CA's historical building requirements. (JA/Brand-Menko, 2/7/03, pp. 48-49.) According to JA, SBC-CA also ignores the fact that much of its embedded central office space is being paid for as part of collocation charges. (Id., p. 48.)

SBC-CA maintains that with regard to land and buildings, it cannot assume a constant stream of collocation revenue in the future, so it would be inappropriate to include this in the model. (SBC-CA/Makarewicz, 3/12/03, p. 23.)

XO and JA both raise convincing arguments that SBC-CA's land and building expense factors are not forward-looking. We find it reasonable to adjust historical land and building expenses to incorporate the forward-looking space requirements SBC-CA proposed in the prior OANAD proceeding, and an allocation of revenues from collocation. In our SBC-CA model runs, we reduced the land factor by 2.14% to account for collocation revenues and the building expense factor by 80% in line with findings from the prior OANAD proceeding.

Finally, JA protest SBC-CA's inflation adjustments to its operating expenses and capital investments. SBC-CA incorporated inflation into its cost modeling using an inflation factor for capital investments based on the Telephone Plant Index (TPI), and an inflation factor for its operating expenses based on the Consumer Price Index-W (CPI-W).37 (SBC-CA/Cohen, 10/18/02, p. 17.)

JA criticize SBC-CA's inflation adjustments based on the TPI and CPI-W as neither specific to SBC-CA nor closely related to the types of costs SBC-CA experiences. (JA, 2/7/03, p. 103.) Further, JA contend that SBC-CA has failed to reflect future productivity improvements and expense savings from such sources as technological innovation and mergers. According to JA, there is extensive regulatory precedent for the concept that inflation should not be incorporated into a cost study unless there is a corresponding offset for productivity. (JA/Brand-Menko, 2/7/03, pp. 86-88.) Specifically, JA note that this Commission's own "New Regulatory Framework" (NRF) decision incorporates both inflation and productivity adjustments to rates. (Id., p. 87, citing D.89-10-031.) JA contend there is significant evidence that any inflation SBC-CA experiences in its costs will be offset by productivity gains that exceed inflation. (Id., pp. 93-95.) According to JA's witness Flappan, BLS data for similar telephone utilities shows that worker productivity has exceeded inflation price increases by 3.8% per year on average from 1996 through 2000. When productivity exceeds inflation, costs per labor hour decrease even if nominal wages increase. (JA/Flappan, 2/7/03, p. 30.) Thus, Flappan contends it is unreasonable to assume inflation in labor costs without corresponding adjustments for productivity. (Id.) JA recommend that the Commission either incorporate a negative inflation factor, based on their conclusion that productivity will exceed inflation on a forward-looking basis, or remove inflation factors entirely because SBC-CA did not include any productivity assumptions.
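
The arithmetic behind JA's productivity argument can be illustrated with a simple unit labor cost calculation. In the sketch below, the 3.8% spread is the figure Flappan cites; the assumed wage growth rate is hypothetical, used only to show the direction of the effect, and wage growth is treated as a proxy for price inflation purely for illustration.

```python
# Illustrative only: unit labor cost falls when productivity growth outpaces wage growth.
# The 3.8% spread is the figure cited by JA's witness; the 3.0% wage growth is assumed,
# and wage growth is treated as a proxy for price inflation for this example.
wage_growth = 0.030
productivity_spread = 0.038          # productivity growth in excess of price inflation, per Flappan
productivity_growth = wage_growth + productivity_spread

# Unit labor cost = hourly wage / output per hour; both grow over one year.
unit_labor_cost_change = (1 + wage_growth) / (1 + productivity_growth) - 1
print(f"{unit_labor_cost_change:.2%}")   # about -3.6%: labor cost per unit of output declines
```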

Similarly, XO criticizes SBC-CA's inflation assumptions, noting that SBC-CA's actual operating expense data indicates expenses have fallen rather than risen in line with retail prices in the overall economy. (XO, 2/7/03, pp. 9, 19.) XO objects to SBC-CA's assumption that as telecommunications equipment prices decrease, operating expenses rise. XO says this assumption is contradicted by SBC-CA's actual operating expense data per access line, which indicates declines of 4.2% per year from 1996 to 2000. (Id., p. 19.)

SBC-CA responds that its use of the TPI and the CPI-W as inflation factors is appropriate because they conservatively estimate the inflation SBC-CA will face in its costs. SBC-CA justifies using the CPI-W by stating that it measures inflation in wages, which are a large portion of SBC-CA's expenses. (SBC-CA/Cohen, 3/12/03, p. 29.) Further, SBC-CA disputes JA's contention that it has left productivity out of its cost studies. SBC-CA contends that it has incorporated productivity in its cost studies by assuming placement of only new technology and applying maintenance factors associated with forward-looking technology. SBC-CA also contends that by using its latest expense data for its operating expense factors, it has already incorporated the latest gains in personnel productivity. (Id., p. 33.)

In the earlier draft of this Alternate Decision, we agreed with SBC-CA that its use of inflation factors was appropriate. Although JA and XO recommended that we remove inflation from the SBC-CA models because productivity has not been incorporated, we disagreed with this recommendation. The draft Alternate Decision found that SBC-CA had included the effects of productivity in its models by using forward-looking technologies and estimating expenses to match these forward-looking technologies. Therefore, the draft Alternate Decision found it reasonable for SBC-CA to estimate the inflation it will experience, particularly in labor costs, over the modeling period. Now that we abandon use of the SBC-CA models, this issue is moot.

In summary, we have found several problems with SBC-CA's ACF cost study. We agree with JA and XO that SBC-CA's expense calculations need to be revised to remove or reduce several categories of expenses that are not appropriate for a forward-looking cost study. We have modified SBC-CA's ACF study with regard to non-regulated expenses, affiliate transaction expenses, retiree costs, building expenses, and inflation assumptions. We were unable to modify the SBC-CA models in two significant areas, namely Project Pronto expenses and shared and common costs.

In conclusion, SBC-CA's LoopCAT model relies too extensively on embedded data that we are unable to modify to our satisfaction. This information includes cabling requirements, loop length forecasts, structure sharing assumptions, and labor installation times and crew sizes, all of which are embedded in various factors. LoopCAT's extensive use of annual cost factors, or "linear loading factors," prevents us from making meaningful modifications to LoopCAT to test varying input assumptions because we cannot extract individual inputs from LoopCAT's aggregated factors, or compare and verify individual inputs against public information.

For switching costs, the flaws noted by JA are not fatal because SBC-CA's SICAT model can be modified through input modifications. On the other hand, we find that interoffice rates calculated by SBC-CA's SPICE model are not based on total demand. Although we made some modifications to SPICE's fill factors, we are unable to modify other embedded network assumptions within SPICE such as demand levels, structure sharing, and pole and conduit factors. With regard to expenses, we find SBC-CA's ACFs are based on historical information that is aggregated into bundles that we cannot dissect in order to understand the underlying inputs, compare them to other public information or the inputs SBC-CA criticizes in HM 5.3, or test the effect of different input assumptions. We made modifications to SBC-CA's ACFs in a few key areas, but we were unable to modify the ACFs with regard to Project Pronto expenses and shared and common costs. While JA propose many other adjustments to SBC-CA's cost factors and expenses, and some of these may be reasonable, it is simply not reasonable in the time allotted for this proceeding to resolve each disputed area. Instead, we have focused on what we consider key inputs and modeling assumptions.

Overall, our inability to modify SBC-CA's models in critical areas, including, but not limited to, network configuration, cost factors, interoffice network demand and expense levels, leads us to conclude that the SBC-CA models do not comply with TELRIC because they are merely estimating the current cost to rebuild the network that exists today, with some technology upgrades, rather than reconstructing a forward-looking network configuration.

Overall, SBC-CA criticizes HM 5.3 as understating SBC-CA's forward-looking cost of providing UNEs because it does not reasonably estimate the quantities or prices of network facilities needed to build a competing local exchange network. SBC-CA alleges that HM 5.3 is "results-driven" and "manipulated to produce the lowest UNE cost estimates possible." (SBC-CA, 2/7/03, p. 11.) Moreover, SBC-CA claims that HM 5.3 does not overcome the flaws the Commission found with its predecessor, HM 2.2.2, namely the earlier model's faulty representation of distribution plant in low-density areas. Thus, SBC-CA claims that despite its new customer location and clustering process, HM 5.3 models an inadequate amount of loop plant. (SBC-CA/Tardiff, 2/7/03, pp. 70-71.)

SBC-CA's criticisms of HM 5.3 can be grouped into seven categories. Essentially, SBC-CA contends that HM 5.3: (1) ignores accepted engineering and network design standards, (2) is based on a flawed customer location process, (3) relies excessively on unverifiable "expert judgment," (4) ignores actual demand in its switching and interoffice models and would be unable to provision high speed services, (5) does not provision enough spare capacity, (6) includes unrealistically low expense levels, and (7) fails to provide a test of validity. SBC-CA alleges that as a result of these flaws, HM 5.3 produces a network that is unrealistic and unreasonable because it has far less outside plant than SBC-CA's actual network today (i.e., fewer distribution areas, fewer distribution pairs, less fiber equipment, fewer trunks, and less interoffice network equipment). (SBC-CA, 2/7/03, p. 20.)

We will address each of these criticisms in turn. As an overview, we find merit to some of SBC-CA's criticisms of HM 5.3, but not all of them. We find that many of SBC-CA's criticisms can be addressed by input modifications to the model. Where we can modify inputs, we do not agree that SBC-CA's criticisms are insurmountable. Although SBC-CA is critical of inputs relating to engineering and design standards, spare capacity, and expense levels, these are all inputs that we can modify. However, this is not true in other areas. We agree with SBC-CA that some elements of the customer location process are flawed, and we do not agree with all of the assumptions built into the HM 5.3 geocoding and customer location process. As we found with SBC-CA's LoopCAT model, we find that it is not possible to modify this area of HM 5.3 and test various scenarios. In this regard, loop modeling by HM 5.3 lacks transparency, limits our ability to test scenarios, and can be faulted for the accuracy of customer locations, but we are unsure what effect different scenarios would have on rates. In addition, we agree with SBC-CA that many of the "expert judgments" used as inputs to HM 5.3 are questionable, and appear biased to produce low results. We have found that many of these expert judgments can be replaced with assumptions and inputs used by SBC-CA in its own model. But this is not the case in one important area that impacts costs throughout HM 5.3: labor costs. We explain this more fully below. Finally, we find that we cannot overcome criticisms of the HM 5.3 Transport module that it underestimates demand for interoffice transport, may not adequately incorporate optical interface equipment for the provisioning of high capacity services, and is insensitive to demand changes. We now turn to a detailed review of our findings with regard to HM 5.3.

In general, SBC-CA maintains that HM 5.3 is incapable of producing accurate TELRIC estimates because it ignores widely accepted engineering and network design standards, and instead relies upon a series of erroneous engineering assumptions, unrealistic input values, and inappropriate estimating methodologies. As a result, HM 5.3 understates the amount of facilities required to provide service. According to SBC-CA, HM 5.3's principal flaw is its assumption that a brand-new, fully functioning, optimal network could be instantly constructed at a single moment in time. (SBC-CA, 2/7/03, p. 13.) SBC-CA maintains that HM 5.3 is based on a fiction that a competitive firm could enter and instantly size and build all of its facilities to accommodate a known snapshot of demand, when in reality, networks are built and have evolved over time to accommodate demand as it grows and shifts. (SBC-CA/Tardiff, 3/12/03, p. 17.) SBC-CA maintains that, by building an abstract network divorced from reality, HM 5.3 focuses only on existing lines and does not account for vacant parcels of land or vacant homes. Therefore, HM 5.3 does not build to the "ultimate demand" that a real-world carrier would serve. (SBC-CA, 2/7/03, p. 40.) In addition, SBC-CA contends that this assumption of an instantaneous network fails to match the other assumptions in HM 5.3, particularly the relatively long depreciation lives and a low cost of capital assumed by JA. (SBC-CA/Tardiff, 2/7/03, p. 15.)

Further, SBC-CA criticizes the "right angle routing" that HM 5.3 uses to connect customer locations. Rather than connecting customer locations by the shortest-distance straight line, or "as the crow flies," HM 5.3 assumes that customer connections form the two sides of a right triangle, hence the term "right angle" routing. (JA/Mercer, 10/18/02, Attachment RAM-4, p. 36, n. 33.) SBC-CA contends that this routing assumption constructs an outside plant design that is purely hypothetical and fails to reflect SBC-CA's operating realities, where carriers cannot ignore geographic impediments and man-made obstacles such as rivers, lakes, mountains, rights-of-way, and easements. (SBC-CA, 2/7/03, p. 13.) According to SBC-CA, "right angle routing" causes HM 5.3 to understate the amount of plant necessary to serve customers by ignoring real and man-made obstacles. (SBC-CA, 2/7/03, p. 48; SBC-CA/Murphy 2/7/03, p. 53.)

On the whole, SBC-CA alleges that HM 5.3 designs a hypothetical network that only satisfies existing demand at existing locations, excludes the real-world costs of fluctuations in demand from customer growth and churn, and results in a model that produces unrealistic investment levels compared with SBC-CA's actuals. SBC-CA contrasts HM 5.3 results with its actual operating results and highlights the following:


· HM 5.3 calculates total investment of $9 billion, but SBC-CA spent $9.6 billion just on plant additions from 1998 to 2001. (SBC-CA, 2/7/03, p. 7.)

· HM 5.3 assumes a network that can be maintained for $0.7 billion, while SBC-CA spent $2.7 billion on maintenance in 2001. (Id.)

· HM 5.3 models 32 million distribution pairs, while SBC-CA has almost double this number in its actual network. (Id., p. 20.)

· HM 5.3 creates a network with 11,661 distribution areas, whereas SBC-CA has more than 5 times this number in its serving area. (Id.)

We find that SBC-CA's criticisms of HM 5.3 principally highlight questionable inputs that JA have used in HM 5.3, but we do not agree that HM 5.3 violates TELRIC requirements overall. SBC-CA takes issue with how HM 5.3 applies TELRIC to build a network instantaneously to meet current demand. While we agree that it may be unrealistic to assume a network can be constructed overnight, we find that HM 5.3 for the most part follows well-established TELRIC guidance and SBC-CA's criticisms center largely around quarrels with the inputs that are used in the model. We can modify many of the inputs and assumptions in HM 5.3 to address these criticisms. For example, we can ensure that the assumed fill factors provide reasonable spare capacity for growth, we can assume that a carrier will incur a higher cost to install sufficient switching investments because it cannot buy all the lines it will need at the steeply discounted "new" switch price, and we can change labor rates and task times to reflect more realistic equipment installation assumptions and expenses.

We agree with JA that, based on established TELRIC rules, HM 5.3 should not build to "ultimate demand." (JA/Donovan, 3/12/03, paras. 191-194.) In its own modeling for federal universal service purposes, the FCC has stated that model inputs should reflect current demand, which it defines to include a "reasonable amount of excess capacity to accommodate short term growth." (Inputs Order, para. 190.) The FCC has explicitly rejected the notion of modeling based on "ultimate demand" because it is highly speculative. (Id., para. 201.) The FCC stated that "correctly forecasting ultimate demand is a speculative exercise, especially because of rapid technological advances in telecommunications." (Id., para. 200.) JA claim that HM 5.3 includes extra pairs to accommodate additional lines, maintenance and administrative needs, and therefore provides the same level of service as SBC-CA's current network. (JA, 3/12/03, p. 46.) We find that if we run HM 5.3 with an appropriate number of pairs per household and using appropriate fill factors, HM 5.3 accounts for a reasonable level of growth, and sizes the network to provide appropriate service quality and reach potential customers. As the FCC has stated, predicting ultimate demand is a speculative exercise, particularly in today's environment of rapidly changing technologies and demand levels, which SBC-CA acknowledges. (See, e.g., SBC-CA, 3/12/03, p. 70, n. 278.)

We do not agree with SBC-CA that the right-angle routing used by HM 5.3 necessarily understates loop plant. SBC-CA relies on the opinion of its expert witness that outside plant is underestimated in HM 5.3, but it has not provided empirical evidence that its actual route distances are greater than those modeled by HM 5.3, or any comparison of the distances modeled in the SBC-CA Models to those in HM 5.3. JA contend that right-angle routing conservatively overestimates loop plant because it uses right-angle rather than straight-line connections. It is logical that loop lengths based on right-angle connections will be longer than straight-line connections because, mathematically, the two legs of a right triangle, when added together, are longer than its hypotenuse. Thus, we find that the right-angle routing used in HM 5.3 is reasonable. Although right angles may not match SBC-CA's actual network routes, it is more realistic to assume right angles than to assume a carrier could build all routes along straight lines.
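
A simple worked example, using hypothetical coordinates, illustrates why a right-angle (rectilinear) route can only be as long as or longer than the straight-line connection between the same two points:

```python
import math

# Hypothetical coordinates (in feet) of two customer locations; illustrative only.
dx, dy = 300.0, 400.0

straight_line = math.hypot(dx, dy)   # hypotenuse of the right triangle: 500 feet
right_angle_route = dx + dy          # the two legs added together: 700 feet

# By the triangle inequality, the rectilinear route is never shorter than the
# straight line, so right-angle routing tends to add, not remove, modeled plant.
assert right_angle_route >= straight_line
print(straight_line, right_angle_route)
```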

Indeed, the loops SBC-CA models in LoopCAT do not follow existing routes for the distribution portion of the loop either. We have more confidence in the loop lengths modeled by HM 5.3, which begin with actual customer locations and use right angles to connect customers within a cluster, than we have in SBC-CA's LoopCAT, which is based on half of the longest distribution loop segment that might be built in the next 20 years. Neither model follows existing routes or places all loop facilities in today's locations, and HM 5.3 makes conservative right-angle assumptions to connect existing customers rather than assuming all loops in a distribution area are one length.

SBC-CA argues that CCP 6 requires the modeling of SBC-CA's "existing or planned" outside plant facilities. Yet the language SBC-CA quotes has been superseded by the FCC's TELRIC requirements adopted in 1996, describes assumptions for a TSLRIC analysis, and is contradicted by CCP 7, which mandates that costs be forward-looking and "shall not reflect a company's embedded base of facilities." (D.95-12-016, Appendix C, p. 5.) While we agree that the use of SBC-CA's actual rights-of-way and plant routes would be a superior modeling technique, neither model has been able to achieve this level of reality. SBC-CA's LoopCAT does not follow CCP 6 either. While SBC-CA models existing feeder routes, this is not true for the distribution portion of the network, where SBC-CA has used the design point to approximate distribution loop lengths. Further, the FCC's TELRIC rules, which were issued after the Commission's CCPs, do not mandate the use of existing outside plant routes, and specifically allow a "reconstructed local network." (First Report and Order, para. 685.) Therefore, we find that although JA's simplifying assumption of right-angle routing is not based on today's outside plant routes, it most likely increases costs in the model by using a longer route than if customers were connected by straight lines.

We do not agree with SBC-CA that HM 5.3 is automatically flawed because its proposed costs are lower than SBC-CA's actual costs. SBC-CA makes generic statements that the characteristics of its current network best reflect an efficient forward-looking network because SBC-CA has years of experience running a network and has been operating under incentive regulation designed to make its network competitive. SBC-CA's actual costs may be skewed by unusual one-time expenses from that year, or may not be forward-looking because they reflect the cost of running a network based on embedded choices that a new carrier would not make. In many ways, we consider SBC-CA's comparisons of model results to its actual network experience irrelevant because its actual costs may not be forward-looking. Further, we find these comparisons less useful because they are often made at a very aggregate level and do not allow us to compare discrete modeling results in an "apples to apples" fashion.

SBC-CA's attempt to argue that HM 5.3 results are unrealistic when compared to SBC-CA's current operations appears to echo the unsuccessful arguments that ILECs presented to the U.S. Supreme Court. The Supreme Court recognized that "the problem with a method that relies in any part on historical cost, the cost incumbents say they actually incur, is that it will pass on to lessees the difference between most-efficient cost and embedded cost." (Verizon, 122 S. Ct. at 1673.) The Court flatly rejected the idea of basing UNE costs on the costs of the incumbent's network as it exists today.

While SBC-CA criticizes numerous inputs to the HM 5.3 model as highly unrealistic and biased too low, it does not provide specifics on what a more realistic input should be given its own network experience. SBC-CA's main response is that the Commission should use its model instead, rather than amend the inputs in HM 5.3. For example, SBC-CA's witness McNeil criticizes HM 5.3 for what he considers unrealistic assumptions about how fast a crew can place and splice fiber and cable, but he does not provide actual placement and splicing times, only the vague suggestion that the crew size should be larger. (SBC-CA/McNeil, 2/7/03, pp. 46-50.) In contrast, JA witness Donovan defends his input assumptions, and notes that his estimates are actually higher than data from SBC-CA's job cost estimate database. (JA/Donovan, 3/12/03, pp. 24-30.)

A second example involves DLC installation costs. While SBC-CA criticizes HM 5.3's DLC cost assumptions, it cannot justify its own inputs in this area. SBC-CA's witness Palmer states that estimates of DLC installation costs by JA are too low because they are based on Project Pronto estimates and "[t]here is absolutely no relationship between the actual costs incurred today by SBC-CA California to install this equipment and the high-level estimates used in 1999 for business case purposes." (SBC-CA/Palmer, 3/12/03, p. 13.) When questioned at the hearings about SBC-CA's actual DLC installation costs, neither Palmer nor SBC-CA's other loop witness, Ms. Bash, could provide an answer or explain how SBC-CA knew that JA's DLC installation assumptions were too low if they did not know SBC-CA's actual costs. (Hearing Tr., 4/15/03, pp. 572-575.) Later, at a continuation hearing, Bash provided information on actual SBC-CA DLC installations from a sample she chose of 8 installations. (Proprietary Hearing Exhibit (PHE) 109.) SBC-CA admits that the actual costs from Bash's sample are lower than the factors for DLC installation used in LoopCAT. (SBC-CA, 8/1/03, p. 21.) A further sample of 50 installations chosen by SBC-CA and JA also indicates costs from actual DLC installations that are lower than the DLC installation factors used by SBC-CA in its own model. (JA, 8/1/03, Exhibits C-4, C-5.) We give little weight to criticisms of HM 5.3 assumptions when witnesses are unable to provide specifics from SBC-CA's own experiences or explain why modeling inputs differ from actual costs.38

HM 5.3 uses detailed customer location information supplied by SBC-CA to identify SBC-CA's current customer locations and cluster them into distribution areas. This is the foundation for the network that HM 5.3 models and is used to determine the lengths of facilities' routes, how much feeder plant is needed, and the types and amounts of copper cable and support structure. (SBC-CA/Tardiff, 2/7/03, p. 17.) Essentially, SBC-CA contends there are numerous flaws with the geocoding process and customer location database used as an input to HM 5.3, and these flaws violate the Commission's cost modeling criteria and result in loop costs that are too low and loop plant that is not constructed using standard engineering practices.

To understand SBC-CA's criticisms, it is helpful to review the geocoding and clustering process used in HM 5.3. (See JA/Mercer Decl., 10/18/02, Attachment RAM-4, Model Description, Section 5.2 - 5.3.) JA contracted with an outside vendor, TNS, to take SBC-CA's customer location data and "geocode" it by assigning each customer location a precise longitude and latitude. Where SBC-CA's data was incomplete or unreliable, "surrogate" geocoded locations were assigned. TNS then used its proprietary algorithms to group these geocoded customer locations into logical serving areas, or "clusters," based on the category of service appropriate for that customer. The clustering algorithm imposed three critical engineering restrictions to ensure that (1) no point in the cluster may be more than 17,000 feet from the center of the cluster, (2) no cluster may exceed 6,451 lines,39 and (3) no point in the cluster may be farther than two miles from its nearest neighbor in the cluster. (Id., RAM-4, section 5.3.2.)
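
The TNS source code is not in the record, but the three engineering restrictions described above can be expressed as a simple validation check on a candidate cluster. The sketch below is purely illustrative and is not the vendor's proprietary algorithm; the use of the coordinate mean as the cluster "center" is an assumption made only for this example.

```python
import math

# Illustrative validation of the three stated clustering restrictions; this is not
# the proprietary TNS algorithm. The cluster "center" here is the coordinate mean,
# an assumption made only for this sketch.
MAX_DISTANCE_FROM_CENTER_FT = 17_000
MAX_LINES_PER_CLUSTER = 6_451
MAX_NEAREST_NEIGHBOR_FT = 2 * 5_280      # two miles

def distance_ft(a, b):
    """Straight-line distance between two (x, y) points, in feet."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cluster_is_valid(points, lines_per_point):
    """points: list of (x, y) geocoded locations; lines_per_point: line counts per location."""
    if sum(lines_per_point) > MAX_LINES_PER_CLUSTER:
        return False
    center = (sum(x for x, _ in points) / len(points),
              sum(y for _, y in points) / len(points))
    if any(distance_ft(p, center) > MAX_DISTANCE_FROM_CENTER_FT for p in points):
        return False
    for i, p in enumerate(points):       # every point needs a neighbor within two miles
        neighbors = [q for j, q in enumerate(points) if j != i]
        if neighbors and min(distance_ft(p, q) for q in neighbors) > MAX_NEAREST_NEIGHBOR_FT:
            return False
    return True

print(cluster_is_valid([(0, 0), (4_000, 3_000)], lines_per_point=[300, 450]))  # True
```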

The clustering process produces irregularly shaped groupings of customers in each wire center that JA term "convex hulls," or "clusters." TNS then determines the "centroid" of the cluster, which is the midpoint of the line connecting the two farthest points in the cluster. (Id., RAM-4, Section 4.5, n. 26.) HM 5.3 uses the cluster centroid as the location for the feeder termination, or serving area interface (SAI). In addition, TNS calculates a "strand distance," which is a measurement of the route mileage required to connect all customer locations to each other. The strand distance is based on "minimum spanning tree" (MST) theory and assumes "right-angle" routing between customer points. (Id., RAM-4, Section 8.4, n. 47.)
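
The centroid and strand-distance calculations can be pictured with another short sketch. It follows the definitions given above (centroid as the midpoint between the two farthest-apart points; strand distance as a minimum spanning tree computed with right-angle distances), but it is an illustrative reconstruction, not the TNS implementation, and the choice of distance metric for finding the farthest pair is an assumption.

```python
from itertools import combinations

# Illustrative reconstruction of the centroid and MST "strand distance" described
# above; not the TNS code. Distances use right-angle (rectilinear) routing.
def right_angle_distance(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def centroid(points):
    """Midpoint of the line connecting the two farthest-apart points in the cluster."""
    a, b = max(combinations(points, 2), key=lambda pair: right_angle_distance(*pair))
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def strand_distance(points):
    """Total length of a minimum spanning tree over the points (Prim's algorithm)."""
    if len(points) < 2:
        return 0.0
    connected = {0}
    total = 0.0
    while len(connected) < len(points):
        dist, nearest = min(
            (right_angle_distance(points[i], points[j]), j)
            for i in connected for j in range(len(points)) if j not in connected
        )
        total += dist
        connected.add(nearest)
    return total

locations = [(0, 0), (1_000, 0), (1_000, 2_000), (3_000, 500)]  # hypothetical, in feet
print(centroid(locations), strand_distance(locations))
```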

Next, TNS takes the convex hull clusters and transforms each into a rectangle of the same total area as the original convex hull, so that a distribution network can be laid out over the cluster. JA describe this as "rectangularization." Finally, TNS uses demographic data to assign characteristics such as terrain, housing profiles, and line density zone to the clusters. (Id., RAM-4, Section 6.1.)

The customer location database is now complete and ready to be input to HM 5.3. HM 5.3 takes the rectangular clusters in the database and subdivides them into lots of equal size in order to lay out a distribution network over the cluster to reach each of the lots, which are uniformly dispersed over the area of the cluster. (Id., RAM-4, Section 8.1.) HM 5.3 compares the total distance of this distribution network to the "MST" strand distance, or route mileage, calculated by TNS and allows the user to adjust the route mileage to this MST distance when calculating the cost of the distribution network. (Id., RAM-4, Section 8.4.)
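
The "rectangularization," lot subdivision, and route-mileage adjustment steps can also be sketched in simplified form. The grid layout and all quantities below are hypothetical simplifications used only to show the sequence of steps; they are not the HM 5.3 algorithms or inputs.

```python
import math

# Simplified, hypothetical illustration of rectangularization, lot subdivision, and
# the MST route-mileage adjustment described above; not the HM 5.3 code or inputs.
def rectangularize(convex_hull_area_sqft, aspect_ratio=2.0):
    """Rectangle with the same total area as the convex hull (aspect ratio assumed)."""
    depth = math.sqrt(convex_hull_area_sqft / aspect_ratio)
    return aspect_ratio * depth, depth

def grid_route_distance(width_ft, depth_ft, rows_of_lots):
    """Crude grid layout: one backbone along the depth plus a branch cable per row of lots."""
    return depth_ft + rows_of_lots * width_ft

width, depth = rectangularize(10_000_000.0)        # hypothetical hull area in square feet
grid_distance = grid_route_distance(width, depth, rows_of_lots=10)
mst_strand_distance = 38_000.0                      # hypothetical strand distance from the preprocessing

# HM 5.3 is described as letting the user scale the grid route mileage toward the
# MST strand distance when costing the distribution network.
adjustment_factor = mst_strand_distance / grid_distance
print(round(width), round(depth), round(grid_distance), round(adjustment_factor, 2))
```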

SBC-CA contends that the customer location database resulting from the TNS geocoding process is a black box that cannot be verified because JA have not provided the proprietary source code used by TNS for the geocoding and clustering process. SBC-CA also contends that the clustering process produces distribution areas that are too large and do not represent actual customer locations in SBC-CA's serving area. (SBC-CA/Dippon, 2/7/03, pp. 4-5.) We now describe and discuss these criticisms.

SBC-CA charges that the TNS customer location database and clustering process is not sufficiently open because it does not allow parties access to the database's underlying data, calculations, and assumptions. This inhibits SBC-CA's ability to examine and modify HM 5.3 customer location engineering principles. According to SBC-CA, JA never provided access to the source code of the algorithm used by TNS to cluster SBC-CA's customer information data. Without the source code, SBC-CA claims it cannot review, test, or modify how the model clusters customer locations. (SBC-CA, 2/7/03, p. 30.)40 SBC-CA's witness Dippon claims that the clustering description provided by JA does not match what TNS appears to have done. (SBC-CA/Dippon, 2/7/03, pp. 27-30.)

In response, JA contend that SBC-CA was given everything it needed to review, understand, and test the TNS clustering process. (JA, 3/12/03, p. 51.)

We agree with JA that it provided sufficient access to its clustering process to allow SBC-CA's witness Dippon to run his own clustering scenario where he reduced the maximum lines in the cluster from 6,451 to 1,800. (SBC-CA/Dippon, 2/7/03, p. 42.) SBC-CA's claims that this access was insufficient are contradicted by the modifications it made to the clustering process and the detailed criticisms it provided in its comments.

On the other hand, we agree with SBC-CA that the clustering process lacks full transparency and we can sympathize with SBC-CA's frustration. Commission staff reviewed this area extensively. JA's description of the geocoding and clustering process is far from clear and overly laden with technical terminology that is difficult to wade through. Indeed, "rectangularize," "centroid," and "convex hull" are not common words. Ultimately, Commission staff was unable to run its own version of the TNS clustering process to test the effects of different assumptions because it would have required extensive computer equipment that the Commission does not have available. In this regard, SBC-CA was able to accomplish what we could not.

Overall, we find that the entire debate over transparency and access to the clustering process must be viewed relative to the transparency, access and ability to understand, review, and modify the preprocessed data SBC-CA used for its own models. The clustering algorithm provided by TNS as an input to HM 5.3 is comparable to SBC-CA's preprocessing of its loop records before they were input to LoopCAT. In other words, both parties had to "preprocess" vast amounts of data for input to the actual UNE cost models, and there are aspects of both the TNS and the LoopCAT preprocessing work that outside parties and Commission staff are not able to replicate or scrutinize for various reasons. It appears that both JA and SBC-CA attempted to give sufficient access to other parties so they could review, replicate, and modify the preprocessing steps. In both cases, complete access to the full extent requested by other parties was either not possible, as with SBC-CA's models because of the size of the preprocessed database, or dependent upon agreement over the handling of proprietary information, as with the TNS database where SBC-CA chose not to pursue its motion to compel access to the source code.

Ultimately, we find both models lack transparency because the vast amounts of preprocessed data limit the Commission's ability to run scenarios and test various input assumptions. Neither HM 5.3 nor SBC-CA's models allowed us the ability to fully understand or replicate their preprocessing steps, and therefore, both models have aspects that could be considered "black boxes."

Despite his lack of access to the clustering algorithm source code, SBC-CA's witness Dippon identified what SBC-CA characterizes as several errors in the clustering process that cause the clusters to bear no resemblance to real-world customer groupings or actual customer locations in SBC-CA's serving area. (SBC-CA, 2/7/03, p. 31; SBC-CA/Dippon, 2/7/03, pp. 2-3.) Dippon lists numerous examples where the clustering process places customers or equipment in locations SBC-CA contends do not match reality. First, Dippon takes issue with how TNS determined the cluster "centroid," which HM 5.3 uses to locate the Serving Area Interface (SAI) equipment. (SBC-CA/Dippon, 2/7/03, p. 36.) Second, Dippon describes how the clustering "clumps" customers in downtown areas into unrealistic high-rise buildings. For example, HM 5.3 produces a 1,020-story building and understates the amount of distribution plant to serve such a tall building. (SBC-CA, 2/7/03, p. 47.) Third, SBC-CA again raises the criticism that when constructing a real-world network, geographic impediments and man-made obstructions must be considered.

We find these criticisms are somewhat ironic given SBC-CA's modeling approach that does not locate customers at all, assumes they are all uniformly dispersed in a ring around the central office, and makes no effort to model high-density customer locations or multiple dwelling units. Both models have made many simplifying assumptions in order to model a network. Some of these assumptions are more far-fetched than others. We agree with JA's assessment that HM 5.3 is not an engineering model, but a cost model that locates current customers and determines the cost of plant to reach those customers, plus room for reasonable growth, without determining the actual locations where plant will be placed.

We find that the clustering assumptions that form the basis of HM 5.3 are no worse than the loop input assumptions used by SBC-CA in its preprocessor, including SBC-CA's approximated loop length based on a "design point." In fact, we find the HM 5.3 model more reasonable in its loop design because it clusters customers based on actual customer locations, which means that HM 5.3 creates distribution areas based on the realities of where customers are grouped today. In contrast, SBC-CA's model presumes that all of the customer groupings in its network today are forward-looking and efficient, and does not allow the user to regroup customers into more logical groupings based on current population characteristics. SBC-CA then models loop plant to serve these existing groupings based on the "design point" concept and its resulting approximation of loop length. HM 5.3, by comparison, starts by locating all customers where they are today, and recognizes dense groupings of customers given the high proportion of multiple dwelling units in California.41 We find that HM 5.3 provides a more granular approach to designing a distribution network than SBC-CA. Therefore, SBC-CA's criticism that customer locations are not accurate rings hollow, particularly when its own model does not accurately locate customers either.

There is one area of the HM 5.3 loop design process we find deserving of criticism. After the TNS process has created clusters based on actual customer locations, HM 5.3 essentially wipes the slate clean by subdividing the distribution area into equal-size lots and then laying out distribution plant in a grid. Thus, even though actual customer locations are used to determine customer groupings, or clusters, they are completely ignored when modeling the distribution plant. JA defend this by suggesting it is too difficult to model plant to actual customer locations. We recognize all models contain limitations in their ability to mirror reality. In fact, SBC-CA's own model does not attempt to accurately locate existing customers and similarly assumes they are all evenly dispersed throughout the distribution area. Therefore, both models can be faulted for the accuracy of their modeling of customer locations.

With regard to SBC-CA's specific criticisms, JA counter that HM 5.3 may not reflect the physical realities of SBC-CA's network, but it is not intended to mimic the exact locations of SBC-CA's plant. Indeed, SBC-CA's model does not do this, and neither does the FCC's Synthesis Model. (JA, 3/12/03, p. 53, JA/Murray, 3/12/03, p. 13.) We agree with JA that TELRIC allows reconstruction of the network using existing wire centers, and does not require a model to use existing facility routes. In defining TELRIC, the FCC rejected cost approaches based entirely on a new network design or based entirely on existing network design. (First Report and Order, paras. 683 and 684.) Instead, the FCC found that a cost methodology that was based on the most efficient technology deployed in the incumbent LEC's current wire center locations "encourages facilities-based competition to the extent that new entrants, by designing more efficient network configurations, are able to provide the same service at a lower cost than the incumbent LEC." (Id., para. 685.) (Emphasis added.) The FCC therefore concluded that "the forward-looking pricing methodology for interconnection and unbundled network elements should be based on costs that assume that wire centers will be placed at the incumbent LEC's current wire center locations, but that the reconstructed local network will employ the most efficient technology for reasonably foreseeable capacity requirements." (Id.) (Emphasis added.)

We acknowledge that certain elements of the real-world network are fixed, such as terrain, roads, and customer locations. Nevertheless, a TELRIC model recognizes that the design of the current network may not represent the most efficient, forward-looking design because it may reflect choices made at a time when different technology options existed or when a different cost structure for equipment and labor drove decision-making. Fundamental to TELRIC cost modeling is the understanding that it is not merely an engineering cost estimate for actual re-construction of the existing network. Rather, a TELRIC model estimates costs based on the location of existing wire centers coupled with forward-looking network assumptions that in the aggregate are reasonable.42 Thus, we do not agree with SBC-CA that it is necessary for HM 5.3 to locate outside plant, such as SAI's, in the exact location that they are today.

Regarding the clumping of customers into unrealistic high-rise buildings, JA explain that HM 5.3 had to make simplifying assumptions about customers with the same address where it did not know the square footage "footprint" of the building. In other words, when HM 5.3 sees a high concentration of lines at one address, it does not know if this is a large shopping mall or a high-rise building. The model has been set up to treat many lines at one address as a high-rise, but it only includes distribution cable to serve 50 floors, recognizing that buildings seldom exceed this height. HM 5.3 includes this 50 floors of distribution cable even though such intra-building cable may not be part of the local exchange network, but property of the building owner. Therefore, JA admit that a 1,020-story building is unrealistic, but it is simply a result of simplifying modeling assumptions where HM 5.3 does not know the exact building square footage. Further, HM 5.3 conservatively overestimates costs by including distribution cable to serve these high-rises. (Workshop Tr., 6/25/03, pp. 658-661. See also JA/Mercer, 3/12/03, paras. 186-190.)
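
A hypothetical sketch of the simplification JA describe: when many lines share a single address, the model treats the address as a high-rise but includes intra-building distribution cable for at most 50 floors. The lines-per-floor and cable-per-floor values below are invented solely to show how an implausibly tall "building" can emerge from the data while the cabled amount remains capped.

```python
import math

# Hypothetical illustration of the high-rise simplification JA describe; the
# lines-per-floor and cable-per-floor values are invented for this example only.
ASSUMED_LINES_PER_FLOOR = 5
MAX_FLOORS_CABLED = 50            # JA state cable is included for no more than 50 floors
ASSUMED_CABLE_FT_PER_FLOOR = 200.0

def implied_floors(lines_at_address):
    """The 'building height' implied by treating all lines at one address as a high-rise."""
    return math.ceil(lines_at_address / ASSUMED_LINES_PER_FLOOR)

def cabled_floors(lines_at_address):
    """Floors actually provisioned with distribution cable, capped at 50."""
    return min(implied_floors(lines_at_address), MAX_FLOORS_CABLED)

lines = 5_100                     # hypothetical concentration of lines at one address
print(implied_floors(lines),      # 1,020 implied "floors"
      cabled_floors(lines),       # cable modeled for only 50 of them
      cabled_floors(lines) * ASSUMED_CABLE_FT_PER_FLOOR)
```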

JA state that the criticisms levied by SBC-CA, if corrected, would only serve to lower the cost estimates produced by HM 5.3 by modeling the network with greater exactitude. (JA, 3/12/03, p. 54.) We find that the HM 5.3 approach is reasonable in that it determines a logical customer cluster and builds plant to reach customers in that cluster. While the routes to build plant may not match SBC-CA's existing routes and may inadvertently hit a geographic or man-made obstacle, the right-angle routing assumed throughout, rather than straight-line routes, attempts to compensate for this and is more realistic than assuming a network would follow straight-line routes.

Therefore, we are not persuaded that SBC-CA's criticisms of the accuracy of the geocoding and customer location process indicate fatal flaws in HM 5.3 or are worse than the methods used in LoopCAT to configure the loop network. We find that the method used by HM 5.3 to model customer locations and the costs of reconstructing SBC-CA's network, given its existing wire centers, falls reasonably within TELRIC guidelines and, for the most part, uses logical assumptions. On the other hand, HM 5.3 ultimately ignores customer locations to model loop plant. While this is somewhat troubling, we find this no worse than simplifying assumptions made in the SBC-CA models to assume an average loop length and a uniform distribution of customers within the distribution area. Both HM 5.3 and LoopCAT can be faulted for the accuracy of their customer locations. On the whole, while we do not agree with all of the inputs used in HM 5.3, the concept of creating customer groupings based on today's actual customer locations, and calculating the cost of building a distribution network to connect them is reasonable, even if the reconstructed network does not follow today's exact outside plant routes.

SBC-CA contends that the clustering process is flawed because when HM 5.3 is re-run after re-clustering the customer location data into smaller clusters, the results show minimal impacts on total loop cost estimates. (SBC-CA, 2/7/03, p. 37.) Specifically, SBC-CA's witness Dippon ran a scenario of HM 5.3 where he reduced the cluster sizes from a maximum of 6,451 lines per cluster to 1,800, thereby increasing the number of clusters in HM 5.3 by 200%. (SBC-CA/Dippon, 2/7/03, p. 43.) Dippon's run with smaller cluster sizes resulted in only a slight decrease in loop cost. Dippon contends this result is illogical because if cluster sizes are reduced such that HM 5.3 has to build a network to reach more clusters of smaller size, loop costs should rise. (Id. p. 44.)

In response, JA maintain that the results of Dippon's "1,800 run" are not illogical when one considers the tradeoff between feeder and distribution costs that results from creating a network with more clusters that serve a smaller number of lines per cluster. As JA's witness Mercer explains, Dippon's "1,800 run" shows an increase in feeder investment to penetrate more deeply into the network and closer to customers, offset by a decrease in distribution investment because smaller cables are less expensive. (JA/Mercer, 3/12/03, pp. 24-25.) JA witness Donovan contends that Dippon's "1,800 run" shows that HM 5.3 is operating correctly by installing more feeder and DLC equipment to serve a larger number of smaller clusters, offset by less investment in distribution cable. (JA/Donovan, 3/12/03, pp. 52-53.)
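
A purely hypothetical sketch may make this tradeoff concrete. None of the unit costs, demand figures, or function names below comes from HM 5.3 or the record; they are illustrative assumptions only:

    def loop_plant_cost(total_lines, max_lines_per_cluster,
                        cost_per_cluster=90_000.0,   # hypothetical DLC site plus feeder cost per cluster
                        dist_cost_per_line=180.0):   # hypothetical distribution cost per line
        # Ceiling division: how many clusters are needed to serve all lines.
        clusters = -(-total_lines // max_lines_per_cluster)
        feeder_and_dlc = clusters * cost_per_cluster
        distribution = total_lines * dist_cost_per_line
        return feeder_and_dlc + distribution

    # Hypothetical statewide demand of 1,000,000 lines. Smaller clusters require
    # more DLC sites and more feeder investment, but permit smaller, cheaper
    # distribution cables (represented here as a lower per-line distribution cost).
    print(loop_plant_cost(1_000_000, 6451, dist_cost_per_line=180.0))  # larger clusters
    print(loop_plant_cost(1_000_000, 1800, dist_cost_per_line=140.0))  # smaller clusters

With these illustrative figures, the additional feeder and DLC investment required by smaller clusters is largely offset by the reduction in distribution investment, so the total moves only modestly; the direction of the net change depends entirely on the unit-cost assumptions, which is consistent with the small movement Dippon observed.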

We find JA's explanation on this point reasonable and we do not agree with SBC-CA that Dippon's "1,800 run" proves HM 5.3 is flawed. Nevertheless, we were unable to run our own clustering scenarios to examine differing model results. If we could have done this, we might have run a scenario somewhere between Dippon's "1,800 run" and the HM 5.3 default clustering which assumes a maximum of 6,451 lines. Therefore, we are not able to determine whether HM 5.3 appropriately handles the tradeoff between feeder and distribution investment because we were not able to run alternative scenarios.

SBC-CA contends that the abstract clustering algorithm used by TNS to create the customer location database ignores existing industry standards and creates unrealistic and inefficiently large clusters of 6,451 lines rather than a maximum of 1,800 lines. (SBC-CA, 2/7/03, p. 43; SBC-CA/McNeill, 2/7/03, p. 3.) According to SBC-CA, JA have abandoned prior modeling techniques that limited distribution areas (DAs) to between 200 and 600 households. (SBC-CA, 2/7/03, p. 43, n. 144.) SBC-CA argues that this results in DAs that are too large, and that no carrier has ever built a network this way. For comparison, SBC-CA states that its current network comprises over 60,000 DAs, whereas HM 5.3 produces only 7,679 main clusters for SBC-CA's entire California serving area. (SBC-CA/Murphy, 2/7/03, p. 39.)

According to SBC-CA, standard engineering principles recognize that because feeder facilities cost less per unit of length than distribution facilities, the objective is to minimize the size of the DA and achieve a reasonable fill of the feeder facilities. (SBC-CA/McNeil, 2/7/03, p. 16) Further, SBC-CA contends that HM 5.3 artificially lowers loop costs per line by assuming extraordinarily large underground controlled environmental vaults (CEVs) that spread the higher installation costs of a CEV over a larger number of lines. (SBC-CA/Tardiff, 2/7/03, pp. 40-41.)

We are somewhat troubled by JA's assumption that distribution areas can be sized up to 6,451 lines, which is much larger than the distribution areas in SBC-CA's current network. As stated above, we would have preferred to run our own scenario with a smaller maximum line size per cluster. Nevertheless, JA show that incumbent carriers are currently purchasing equipment that will serve distribution areas as large or larger than those modeled in HM 5.3. (JA/Donovan, 3/12/03, paras. 97-100.) SBC-CA witness Murphy also confirms that equipment to serve up to 7,200 pairs is readily available. (SBC-CA/Murphy, 2/7/03, pp. 40-41.)

We do not entirely agree with SBC-CA's approach either. SBC-CA appears to advocate that clusters should serve a maximum of 1,800 lines, based on a guideline of serving 200 to 600 households. SBC-CA's principal criticism of large clusters seems to be that they differ from historic practices. SBC-CA models distribution areas to serve a maximum of 200 to 600 households based on standards that date back at least 25 years, before the advent of fiber optics and equipment sized to serve a greater concentration of lines. (JA, 3/12/03, p. 53; JA/Donovan, 3/12/03, paras. 90-96.) Furthermore, SBC-CA's witness McNeil admitted that SBC-CA currently attempts "to establish large footprints so that the remote terminal can serve as many DAs as possible." (SBC-CA/McNeil, 2/7/03, para. 30.) He goes on to state, "[I]t is efficient and cheaper to place as few remote terminals as possible." (Id.) For these reasons, it is reasonable to conclude that a forward-looking network configuration might recognize today's dense customer groupings and the availability of larger equipment in order to size DAs larger than SBC-CA has done in the past.

Nevertheless, we agree with SBC-CA that HM 5.3 might have relied on too many large DA configurations, more than it is reasonable to assume would happen in the real-world network. The clusters used as an input in HM 5.3 are also based on a maximum number of lines per cluster of 6,451, which is larger than the CEVs SBC-CA normally uses. While CEVs do exist large enough to accommodate this number of lines, we find it inappropriate to assume that all distribution areas could accommodate a CEV of that size.

We would have preferred to take a middle ground and rely on clustering assumptions that did not assume the largest equipment could automatically be placed everywhere. JA have adequately defended the use of distribution areas sized larger than SBC-CA's outdated guideline of a maximum of 200 to 600 households. But neither is it reasonable to conclude that every DA could accommodate the equipment to serve 6,451 lines. Thus, our principal criticism of HM 5.3 in this area is that we cannot modify the clustering process ourselves to re-run it with a more moderately sized clustering assumption.43

Overall, we find that while we do not agree with all aspects of HM 5.3's customer location and loop modeling, it is no more a "black box" than SBC-CA's own preprocessor and input modeling assumptions related to the design point. Both HM 5.3 and SBC-CA's LoopCAT lack transparency, limit the Commission's ability to test various scenarios, and can be faulted for the accuracy of their customer location process. HM 5.3 is based on a detailed examination of current customer locations, and makes simplifying assumptions not unlike the assumptions underlying SBC-CA's LoopCAT. The HM 5.3 model ultimately ignores customer locations when modeling loop plant. As a result, although HM 5.3 starts with current customer location data, it does not model outside plant in the exact locations in which it exists today. Nevertheless, HM 5.3 has one advantage over LoopCAT because it starts with actual customer locations to cluster customers into efficient groupings, whereas LoopCAT makes no attempt to determine efficient customer groupings based on current population density characteristics. We find that the method used by HM 5.3 to model customer locations, create forward-looking customer clusters, and estimate the costs of reconstructing SBC-CA's loop network falls reasonably within TELRIC guidelines, even if the reconstructed network does not follow today's exact outside plant routes.

This does not mean there are not other valid criticisms of the clustering process underlying HM 5.3. Significantly, we were unable to run our own sensitivity analyses to test how HM 5.3 responds to different clustering inputs. We would have preferred to test the results of different cluster sizes. At the same time, our inability to run sensitivity analyses of cluster sizes is not unlike our inability to run sensitivity analyses of SBC-CA's preprocessor and design point assumptions. In other words, both models involved extensive preprocessing of data that, for various reasons, the Commission had to use as given. Thus, we find that both models contain aspects of their loop modeling that we were unable to modify to our satisfaction.

SBC-CA contends that HM 5.3 relies excessively on unsupported "expert judgment" for inputs that relate to such items as network design, install times for engineering and placement of cable, support structure, DLC equipment, labor loadings, and material costs. According to SBC-CA, many of the inputs to HM 5.3 are based on opinions with little or no analysis or backup documentation to support their derivation or reasonableness. In some cases, JA have selectively relied on vendor quote information to produce low UNE cost estimates, without revealing supporting documentation for these vendor quotes, or they have used information from around the country rather than using cost data supplied by SBC-CA. (SBC-CA, 2/7/03, pp. 32-33.) For example, JA have selected prices for switching based on extremely short run considerations, assuming that selected prices in a particular contract would somehow be available for all switching purchases. (SBC-CA/Tardiff, 3/12 /03, p. 13.)

We agree with SBC-CA that many of HM 5.3's inputs deserve a second look and may not be appropriate. HM 5.3 uses many inputs that are based on expert judgment (such as plant mix and structure sharing percentages), or derived from national data rather than California specific numbers (such as labor loading data). It is also true that HM 5.3 relies on vendor quotes that are not always documented.44 In many areas throughout HM 5.3, we have been successful in modifying these inputs, and in most cases we have substituted inputs or data from SBC-CA's models instead.

Furthermore, SBC-CA criticizes the use of "expert judgments," but its own model relies on judgments of its engineers and other unnamed subject matter experts for its "design point" assumptions, ACFs, and inputs to its switching (SICAT) and interoffice (SPICE) models. Both HM 5.3 and the SBC-CA models rely heavily on expert judgments. While SBC-CA often criticized many of the HM 5.3 inputs, its reply and rebuttal comments did not provide an assessment based on SBC-CA's own experience of the correct value for many of these disputed inputs. In many cases, we find that SBC-CA offers its own model with inputs so aggregated that we could not make "apples to apples" comparisons of the disputed inputs. Thus, we do not find that disputes over inputs selected through "expert judgment" are themselves a basis to abandon HM 5.3. Rather, the question is the reasonableness of the input assumptions themselves.

Finally, SBC-CA argues that HM 5.3 inputs do not match SBC-CA's actual costs. However, the Supreme Court dismissed comparisons of this sort, noting that costing methods that rely on historical costs can pass on inefficiencies caused by poor management or poor investment strategies. (Verizon, 122 S. Ct. 1646, 1673.) The Court further noted that:


Contrary to assertions by some [incumbents], regulation does not and should not guarantee full recovery of their embedded costs. Such a guarantee would exceed the assurances that [the FCC] or the states have provided in the past. (Verizon at 1681.)

Although we find HM 5.3's use of expert judgment usually can be corrected with input changes, there were several instances where corrections were not possible. Notably, we wanted to use the fully loaded hourly labor rate proposed by SBC-CA as an input rather than the lower rate assumed by JA. We were not able to change HM 5.3's labor rate assumptions in all instances because they were embedded with material and other assumptions such that we could not determine what portion of a total cost involved hourly labor. For example, many of HM 5.3's inputs include components for material costs, labor rates, task completion time, and crew size, which are joined into one input cost figure. Commission staff was unable to isolate hourly wages from this conglomeration of labor and material inputs in order to adjust hourly wage rates to SBC-CA's fully loaded levels, particularly for labor cost inputs relating to SAI investment, terminal and splice investments, buried drop installation, and riser cable investment. Because we were not able to change labor rate assumptions in all places within HM 5.3, we find the model flawed in this area.

In comments on the Proposed Decision, both MCI/WorldCom and SBC-CA suggest new approaches to modify the labor rate embedded in HM 5.3 inputs. (MCI/WorldCom, 6/1/04, p. 8; SBC-CA, 6/1/04, p. 4.) MCI/WorldCom suggests the Commission assume, based on HM 5.3 inputs, that labor costs are approximately 50% of terminal and splice investments. Using this information, the Commission can increase that portion of terminal and splice investment to match SBC-CA's hourly loaded labor rate. Alternatively, SBC-CA suggests the Commission increase labor amounts embedded in HM 5.3 inputs by the same factor used to increase understated DLC installation costs. (See Section VI.D.) We find the approach proposed by MCI/WorldCom is more reasonable than the approach offered by SBC-CA. It is not reasonable to assume that the factor derived from actual cost data for DLC installations bears any relationship to the labor costs in other areas. We will adjust HM 5.3 inputs for terminal, splice and SAI investments based on information from the record indicating that approximately 50% of these investments relate to labor costs and can be increased to match SBC-CA's hourly loaded labor rate. We will not modify labor rates for trenching and buried drop installation because HM 5.3 inputs are based on price quotes from contractors and do not assume installation by SBC-CA personnel. SBC-CA admits it uses outside contractors for these activities. (SBC-CA/McNeill, 2/7/03, para. 83, n. 79.)
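
The mechanics of the adjustment we adopt can be sketched as follows. Only the approximately 50% labor share comes from the record; the dollar amount and the hourly rates in the example are hypothetical placeholders:

    def adjust_for_labor_rate(investment, labor_share, model_rate, adopted_rate):
        # Split a composite investment input into labor and non-labor portions,
        # then scale only the labor portion by the ratio of the adopted fully
        # loaded labor rate to the rate assumed in the model.
        labor = investment * labor_share
        non_labor = investment - labor
        return non_labor + labor * (adopted_rate / model_rate)

    # Hypothetical figures: a $1,000 splice investment, half of which is labor,
    # where the model assumes $45 per hour and the adopted loaded rate is $60 per hour.
    print(adjust_for_labor_rate(1000.0, 0.50, 45.0, 60.0))  # roughly 1166.67

The same arithmetic applies to each terminal, splice, and SAI investment input: the non-labor half is left untouched, and the labor half is scaled up to the adopted loaded rate.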

These adjustments to labor rates mitigate the flaws in HM 5.3 labor rate assumptions and help alleviate concerns that HM 5.3 costs are understated. Furthermore, we find that other inputs we have chosen to run in HM 5.3 are conservative and may offset any underestimate from the labor inputs we were unable to modify. For example, the use of right angle routing may overestimate loop lengths, the use of a conservative 7.4% equity risk premium exceeds other reputable forecasts, and we have adopted the use of SBC-CA's proposed hourly loaded labor rate, despite the showing by JA that it is above nationwide labor loading figures, particularly for overtime and benefits. These and other input choices most likely counteract, to a great extent, any underestimate of labor costs in HM 5.3.

SBC-CA contends that the switching and interoffice portions of HM 5.3 do not accurately account for the actual demand generated by today's SBC-CA customers. On the switching side, HM 5.3 does not properly account for customers' peak period usage or the unique characteristics of individual switches in SBC-CA's California network. (SBC-CA, 2/7/03, p. 65.) On the interoffice side, HM 5.3 does not model the actual volume of trunk facilities ordered by SBC-CA customers, and therefore fails to construct an interoffice network with sufficient capacity to support the total demand handled by SBC-CA's network. (Id., p. 64.) SBC-CA alleges that the transport and fiber fill factors proposed by JA are unrealistically high given actual carrier operating fill levels. (SBC-CA, 3/12/03, p. 76.)

In addition, SBC-CA contends that HM 5.3 models a network that would be unable to provision all of the high-speed services that are available today because it omits key electronic "optical interface" equipment necessary to connect DS-1 facilities to the interoffice network. (SBC-CA, 2/7/03, p. 66.) This omission underestimates the facilities and equipment needed to provision DS-1 and DS-3 loops. (Id., p. 50.) Finally, SBC-CA contends that the HM 5.3 Transport module is flawed because it is insensitive to both the demand it considers and the costs for fiber cable and electronics, calling into question what the model optimizes when it is run with differing input assumptions. (SBC-CA/Tardiff, 3/12/03, p. 12.)

In response, JA maintain that HM 5.3 conservatively overestimates the number of trunks required in the switching and interoffice facilities (IOF) networks, and that the way in which HM 5.3 models demand is far superior to SBC-CA's SPICE model, which JA claim does not include all trunks and fails to incorporate demand data. (JA, 3/12/03, p. 70, n. 259.) According to JA, HM 5.3 models the known total amount of switched traffic carried by the SBC-CA network based on SBC-CA "dial equipment minutes" (DEM) traffic data and provides sufficient circuits to carry that traffic. Thus, in their opinion, HM 5.3 appropriately engineers a switching network to accommodate the requirements of a typical switch, its typical busy hour, and all required trunking. (Id., p. 72; JA/Mercer, 3/12/03, paras. 80-82.) JA claim that if additional traffic for interconnection and switched access trunks were modeled in HM 5.3, any additional economies of scale would likely only lower the resulting per unit dedicated circuit UNE cost. (JA/Mercer, 3/12/03, para. 78.)
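
For context only, busy-hour trunk engineering of the kind the witnesses describe generally starts from an offered load and a blocking objective. The sketch below uses the standard Erlang-B formula to show the general concept; it is not the algorithm implemented in HM 5.3, and the traffic level and blocking objective shown are hypothetical:

    def erlang_b(offered_erlangs, trunks):
        # Standard Erlang-B recursion: blocking probability when `offered_erlangs`
        # of busy-hour traffic is offered to `trunks` circuits.
        b = 1.0
        for n in range(1, trunks + 1):
            b = (offered_erlangs * b) / (n + offered_erlangs * b)
        return b

    def trunks_required(offered_erlangs, blocking_objective=0.01):
        # Smallest trunk group size that meets the blocking objective.
        n = 0
        while erlang_b(offered_erlangs, n) > blocking_objective:
            n += 1
        return n

    # Hypothetical busy-hour load of 100 erlangs engineered to a 1% blocking objective.
    print(trunks_required(100.0))

The parties' dispute is largely over which traffic and trunk requirements are counted in calculations of this kind, rather than over the underlying engineering arithmetic.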

Further, JA contend that HM 5.3 does not ignore demand; rather, it simply has not fully configured network components to serve high capacity services, such as OC-level service and DS-1 on fiber, because these UNEs are not at issue in this proceeding. (Id., paras. 89-90.) Nevertheless, JA witness Mercer contends that HM 5.3 specifically provides fiber capacity for high capacity loops, even if it does not model the terminal equipment necessary for these services. (Id., paras. 89-92.)

First, we agree with SBC-CA that HM 5.3 does not model the characteristics of individual switches. However, we do not consider this a flaw because we note that SBC-CA's SICAT model does not do this either. Indeed, both HM 5.3 and SICAT appear to have taken a similar modeling approach that looks at aggregate switching requirements. The models differ primarily in their input assumptions for the amount of new, growth, and replacement lines, and switch fill factors. On this issue, we find that SBC-CA's criticisms appear to hold HM 5.3 to standards higher than its own SICAT model.

Second, we agree with SBC-CA that HM 5.3 appears to underestimate demand on the interoffice network. JA admit that they did not configure the interoffice network to handle all high capacity demand, claiming that these costs are not at issue in this proceeding. Yet the FCC's definition of TELRIC describes the forward-looking cost over the long run of the total quantity of facilities and functions that are directly attributable to an element, "taking as a given the incumbent LEC's provision of other elements." (47 C.F.R. Section 51.505(b).) (Emphasis added.) We find that HM 5.3 does not include all of SBC-CA's current interoffice demand and therefore, does not model an interoffice network to accommodate all of SBC-CA's current interoffice traffic. Unfortunately, because of the flaws we have already noted with SBC-CA's SPICE model, it is unclear how we would modify HM 5.3 to remedy this flaw. We have already discussed how we are unable to determine the demand level that SBC-CA's SPICE model is designed to serve. Thus, we are unable to take SBC-CA inputs and place them into the HM 5.3 model.

Third, we cannot determine whether HM 5.3 adequately incorporates optical interface equipment. JA maintain that the IOF network modeled by HM 5.3 includes the cost of all equipment necessary for optical trunking and therefore, would function properly. (JA, 3/12/03, p. 75.) According to JA witness Mercer, "the model includes in each wire center investment for a digital cross connect system of sufficient capacity to meet the circuit requirements of that wire center." (JA/Mercer, 3/12/03, p. 76.) Mercer goes on to state, "It is possible, then, to use the Titan 5500 to replace the switch interface to the OC-48 [synchronous optical network (SONET)] ADMs used by the Model..." (Id.) Based on our own reading of Mercer's rebuttal, we find it unclear whether and to what extent DCS investment, or the Titan 5500, was incorporated in HM 5.3 and properly allocated between all the services that might use it. Thus, we find that HM 5.3 might not allow provisioning of the high capacity services SBC-CA provides today.

Finally, JA witness Mercer attempts to respond to criticism that the HM 5.3 Transport module is insensitive to demand. Mercer describes some minor modifications to HM 5.3 to address SBC-CA's concerns regarding understatement in certain interoffice equipment investment. (Id., paras. 195-198.) Although Mercer provides these corrections to the Transport Module, it is not clear that he has entirely addressed the SBC-CA criticism. We do not find that JA have adequately addressed this criticism of how HM 5.3 derives its SONET ring structure and the resulting interoffice transport rates. Therefore, we are unwilling to rely solely on the results of HM 5.3 to establish interoffice transport rates.

In comments on the Proposed Decision, MCI/WorldCom argues that any alleged omission of demand in the HM 5.3 interoffice model only serves to overstate transport costs because higher demand lowers per unit costs. (MCI/WorldCom, 6/1/04, p.7.) We cannot agree with this simplistic assertion because higher demand could require modeling of more or different equipment, which would not necessarily lower per unit costs. The fact that HM 5.3 appeared insensitive to demand changes also contradicts this assertion. Further, MCI/WorldCom overlooks that the Commission finds fault with HM 5.3 for failing to model the equipment necessary for high capacity services and failing to adequately address how HM 5.3 derives its SONET ring structure. These are important criticisms that JA did not adequately address. Therefore, we make no changes to our conclusions regarding the HM 5.3 interoffice transport module.

SBC-CA contends that HM 5.3 does not model sufficient spare capacity and models a network that would require new orders to wait while new lines are installed. This could impact service quality and potentially lead to service disruptions. According to SBC-CA, although JA acknowledge that 1.5 to 2 pairs per living unit is the minimum design standard for distribution plant, HM 5.3 does not allocate this minimum number of cable pairs to each potential residence or business. (SBC-CA, 2/7/03, p. 41.) In addition, SBC-CA contends that HM 5.3 understates the cable lengths needed to serve potential demand by not designing plant to reach potential customer locations. (Id., p. 42.)

We disagree with SBC-CA's contention that HM 5.3 is seriously flawed with regard to how it handles spare capacity. From our own review, we know that HM 5.3 allows the user to adjust inputs throughout in order to achieve varying levels of spare capacity. This is discussed at length in Sections VI.E and VI.J.5 below, where we address the various fill factor inputs that vary the amount of network investment devoted to spare capacity. Essentially, SBC-CA's spare capacity arguments can be reduced to a dispute over whether the model should assume 1.5 to 2 loop pairs per living unit, as JA propose, or 2.25 pairs per living unit, as SBC-CA proposes. We will address this dispute in our fill factor discussion below. For now, we find that SBC-CA's position on spare capacity does not represent a fatal flaw in HM 5.3.

Furthermore, we do not agree with SBC-CA's argument that HM 5.3 underestimates cable length by not considering the loops required to serve future customers. We have already discussed why it is improper to model to "ultimate demand" given guidance from the FCC. We have concluded that the demand assumptions for loops in HM 5.3 accommodate a reasonably foreseeable level of growth. We have also discussed why SBC-CA's loop lengths unreasonably include ultimate demand and cannot be modified to counteract this. Therefore, we do not find that HM 5.3 is critically flawed on the issue of spare capacity. Rather, we find that HM 5.3 can be modified to incorporate varying assumptions for spare capacity, as needed.

SBC-CA contends that HM 5.3's approach to calculating expenses is flawed and produces expenses that are only one-quarter of SBC-CA's current levels. (SBC-CA, 2/7/03, p. 73.) First, SBC-CA claims that HM 5.3 incorrectly uses expense to investment ratios, or "E/I" ratios, based on SBC-CA's current costs of network equipment, and uses these ratios with HM 5.3 investment levels that are considerably lower. (Id., p. 73.) Second, SBC-CA criticizes HM 5.3's use of Verizon California as a benchmark for efficient operation in California based solely on its proposed lower expense levels, without exploring other factors that may explain the difference in expenses between the two companies. (Id., p. 74.) SBC-CA notes that Verizon has a significantly higher investment per line than does SBC-CA, and that overall efficiency of a company is related to both investment and expense decisions. Looking at expenses in isolation, as JA have done, reveals little about overall company efficiency. (Id. ) SBC-CA maintains that using Verizon as a benchmark for E/I ratios results in absurdly low expense levels in HM 5.3.

We find that HM 5.3's use of E/I ratios is reasonable, and not unlike the factors that SBC-CA uses in its own model. Indeed, both JA and SBC-CA have used a similar approach of adjusting investments to current cost before calculating ratios and applying them to estimate current expense levels. (JA/Brand-Menko, 10/18/02, p. 6; SBC-CA/Cohen 10/18/02, p. 5-6.)
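
Schematically, the E/I approach works as follows; the dollar figures are hypothetical and serve only to show the mechanics:

    def forward_looking_expense(current_expense, current_cost_investment, model_investment):
        # Derive an expense-to-investment (E/I) ratio from current expenses and
        # current-cost-adjusted investment, then apply that ratio to the
        # investment produced by the cost model.
        e_i_ratio = current_expense / current_cost_investment
        return e_i_ratio * model_investment

    # Hypothetical figures: $200 million of annual expense against $1 billion of
    # current-cost investment yields an E/I ratio of 0.20, which is then applied
    # to $800 million of modeled investment.
    print(forward_looking_expense(200e6, 1.0e9, 800e6))  # 160,000,000.0

The example also shows why the dispute matters: because the ratio is applied to modeled investment, the resulting expense estimate scales directly with the investment level the model produces.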

We agree with SBC-CA that HM 5.3 improperly uses Verizon California as a benchmark for expense estimation purposes. SBC-CA has presented convincing arguments that JA have overlooked Verizon's higher investments per line and have overlooked other factors that could explain expense differences between the two companies. We prefer to use recent data from SBC-CA's reported ARMIS expenses to estimate forward-looking expenses. This was the starting point for HM 5.3's expenses before adjustments to benchmark expenses to Verizon. Therefore, we will back out these Verizon "benchmarking" adjustments from the HM 5.3 model.

In comments on the Proposed Decision, AT&T claims that the Commission improperly implemented its attempt to back out the Verizon expense levels by overlooking changes to the network operations E/I factor. (AT&T, 6/1/04, p. 19.) We agree this was an oversight and the final run of HM 5.3 has been corrected in this regard.

SBC-CA criticizes HM 5.3 for not providing any internal or external demonstration of the validity of its cost estimates. (SBC-CA, 2/7/03, p. 24.) SBC-CA performs its own comparison of the investments and expenses produced by HM 5.3 to what SBC-CA currently incurs based on 2001 ARMIS data. SBC-CA maintains that the "sanity check" it performed shows HM 5.3 investments and expenses are only one-quarter of SBC-CA's current levels, and that HM 5.3 has not depicted loop routes of proper length, accurately priced network components, or included sufficient ongoing expenses to pay for the labor force needed to run the network. (SBC-CA/Tardiff, 2/7/03, pp. 3-4 and 20.) SBC-CA claims that any deviation between HM 5.3 and reality implies inaccuracies in the model, not inefficiencies in the current network. (Id., p. 23.)

Once again, JA contend that TELRIC calculations cannot be validated by comparison to a carrier's embedded costs. Thus, SBC-CA's "validation tests" are irrelevant. (JA/Klick, 3/12/03, p. 7.) According to JA, the FCC has already rejected cost methodologies that base forward-looking costs on the existing ILEC infrastructure, adjusted only for depreciation and inflation. (Id., p. 7, citing First Report and Order, para. 683-685.) Further, JA allege that SBC-CA's ARMIS-based estimates of reproduction costs are overstated because they include Project Pronto costs for transitioning SBC-CA to a DSL-capable network. (JA/Klick, 3/12/03, p. 8.) Finally, JA point out that SBC-CA never did any "validation test" of its own model results, presumably because this type of analysis would be impossible given that SBC-CA's cost studies do not provide total investment or expense results to compare to ARMIS data. (Id., pp. 4-5.)

We conclude that it would be unreasonable to reject the use of HM 5.3 merely because its results are lower than SBC-CA's current costs, as shown by comparisons to 2001 ARMIS data. We agree with JA that such comparisons are of limited value given that it is unclear to what extent we can rely on SBC-CA's current costs as forward-looking.45 Much of SBC-CA's criticism of HM 5.3 involves the inputs that it uses. It makes more sense to vary these inputs to levels that we consider more appropriate before deciding which model to rely on.

Interestingly, ORA/TURN performed such an analysis for us. ORA/TURN's witness Roycroft used the FCC's Synthesis Model (SynMod) to test for potential structural bias in both the HM 5.3 model and the SBC-CA models.46 Roycroft did a side-by-side comparison of HM 5.3, the SBC-CA models, and SynMod after applying a uniform platform of loop-related and general input values taken from SynMod. From this comparison, he concludes that HM 5.3 is forward-looking and not structurally biased because it produced higher costs than SynMod when both models were run with SynMod's default inputs. (ORA/TURN, 2/7/03, p. 11; Roycroft Decl., 2/7/03, p. 63.) If HM 5.3 were biased, it would have generated lower costs than SynMod. Based on these results, ORA/TURN suggests adjustment of HM 5.3 inputs to the default levels used in SynMod. (ORA/TURN/Roycroft, 2/7/03, p. 63.) SBC-CA opposes ORA/TURN's recommendation to use SynMod inputs to set UNE rates for SBC-CA because these inputs are five years old and are based on broad national averages. (SBC-CA/Tardiff, 3/12/03, p. 37.)

With regard to SBC-CA's models, Roycroft concludes they do not comply with forward-looking principles because when the SBC-CA models are run with similar inputs to HM 5.3 and SynMod, the SBC-CA models generate consistently higher costs than HM 5.3 or SynMod. Roycroft suggests this is because LoopCAT has fewer user-adjustable inputs and does not allow variation in cable sizing. (ORA/TURN, 2/7/03, p. 11.) Roycroft presumes it might be possible to alter LoopCAT to generate outputs more consistent with TELRIC principles, but the effort required to make such modifications would be large and is not necessary given the availability of other models. (ORA/TURN/Roycroft, 2/7/03, p. 54.) XO supports ORA/TURN's analysis and conclusion that the costs calculated by the SBC-CA models are clearly outliers and unreasonable. (XO, 3/12/03, p. 3.)

JA also performed a sensitivity analysis of HM 5.3 to demonstrate that it was not structurally biased. JA changed eight categories of inputs in HM 5.3 to the values proposed by SBC-CA. This yielded a significantly higher loop rate, closer to the level proposed by SBC-CA and its models.47 According to JA, this analysis refutes SBC-CA's claim that HM 5.3 is incapable of producing reasonable cost estimates.

We find that taken together, ORA/TURN's analysis using SynMod and JA's own sensitivity analysis varying eight inputs show that HM 5.3 is not structurally biased to produce unrealistically low results. The ORA/TURN analysis also corroborates our own findings that it is difficult to change many inputs within the SBC-CA models. We decline to adopt ORA/TURN's proposal to use all of SynMod's input values throughout HM 5.3 because we agree with SBC-CA that many of these inputs are dated or based on national averages.

Finally, we note that JA provided their own comparison of HM 5.3 with the FCC's Synthesis Model (SynMod). JA's witness Klick modified SynMod inputs to reflect HM 5.3 inputs, and then ran SynMod to estimate UNE rates in SBC-CA's territory. The results indicate SynMod loop investments 11% lower than those produced by HM 5.3. From these results, Klick concludes that HM 5.3 is an effective tool for estimating forward-looking costs because it produces similar investments and overall costs as SynMod, when run with comparable inputs. (JA/Klick, 10/18/02, pp. 14-15.) SBC-CA responds that Klick's analysis does not validate HM 5.3 because the similarity of outcomes of the two models, when run with the same inputs, merely shows that the inputs themselves are an important determinant of investment. SBC-CA contends HM 5.3's inputs are so low they produce invalid outputs for both models. (SBC-CA/Tardiff, 2/7/03, p. 35.)

Overall, we tend to agree with SBC-CA that Klick's analysis merely shows that HM 5.3 inputs produce very low results when run through SynMod. We agree with SBC-CA that the modeling inputs appear to be the more important drivers of model results. Therefore, we will not rely on or rule out either model based on these comparisons or validity tests provided by the various parties. Instead, we will turn to an analysis of the appropriate inputs to use in our model runs.

Finally, SBC-CA says numerous state and federal regulatory agencies have rejected HM 5.3 assumptions. (SBC-CA, 2/7/03, p. 76; SBC-CA/Tardiff, 3/12/03, pp. 6-7.) SBC-CA cites to various decisions in other states that have found earlier iterations of the HAI model unreliable. Much of this criticism has been directed at the use of unidentified experts and unidentifiable sources to substantiate modeling assumptions and input choices. In response, JA cite to several states, including Arizona, Minnesota, Nevada, Colorado and West Virginia, that have either adopted or used results from earlier versions of HM 5.3 to calculate UNE costs. (JA, 3/12/03, p. 74.)

Given our ability to modify many of the HM 5.3 inputs that SBC-CA contests, we are not troubled by this criticism. Moreover, the SBC-CA models that we are examining in this proceeding are of very recent vintage, and we do not believe they have been reviewed or adopted by any other states either. This proceeding may very well be the first time that this particular version of HM is compared directly with SBC-CA's newest models. Therefore, the findings of other state commissions that may have examined earlier versions of HM 5.3 or SBC-CA's models are of little value to us.

In summary, we find that HM 5.3 can be modified to overcome many of its alleged flaws. Specifically, the model can be modified to use different input and engineering design assumptions, spare capacity can be increased, labor rates can be increased, and expense assumptions can be modified to increase expense levels. Nevertheless, we were unable to modify assumptions with regard to the customer location and clustering process in order to overcome all of the model's criticisms. In addition, we could not overcome criticisms of the HM 5.3 interoffice transport module that it underestimates demand, may not adequately incorporate optical interface equipment, and is insensitive to demand changes.

Even if we could have modified the customer location and clustering process and the interoffice transport module, it is unclear what effect these changes would have on rates. In comments on the Proposed Decision, MCI/WorldCom argues that there is no support for stating that HM 5.3 underestimates forward-looking costs and that if higher interoffice demand were modeled, it would lower costs. (MCI/WorldCom, 6/1/04, p. 7.) Further, MCI/WorldCom contends that SBC-CA's own witness suggested that smaller customer clusters minimize costs, hence the larger clusters used in HM 5.3 likely overstate costs. (Id.) The record before us does not provide sufficient information to resolve this question because we were unable to run our own clustering scenarios or make complete modifications to the interoffice module.

Furthermore, comments on the Proposed Decision suggested that further modifications to labor inputs were appropriate. These labor input modifications, coupled with conservative assumptions for many other inputs in HM 5.3, offset concerns that HM 5.3 underestimates UNE costs.

It should come as no surprise, after the extensive analysis described in the preceding sections, that because we have found both models to be flawed, we do not consider either model to have adequately fulfilled the cost modeling criteria set forth in the June 2002 Scoping Memo. These criteria required that the cost studies and models allow the user to reasonably understand how costs are derived, generally replicate the model results, and modify inputs and assumptions. Without belaboring the point, we found that both HM 5.3 and the SBC-CA Models failed one or more of these criteria.

The principal failure of HM 5.3 was its use of a customer location database provided by a third party, TNS, as an input. We have already described how we would have preferred to cluster the geocoded customers into smaller distribution areas, but we were not able to perform these modifications ourselves. This criticism is mitigated by the fact that SBC-CA was able to modify the clustering and produce new HM 5.3 results. Nevertheless, we would have preferred to test various scenarios ourselves. Secondarily, HM 5.3 failed the modeling criteria because we were not able to modify all labor inputs. In our attempts to modify the HM 5.3 assumed labor rate to the level proposed by SBC-CA, we found that labor costs were embedded with other assumptions such that we were not able to disaggregate labor costs or assumptions and modify them alone. Ultimately, we were able to use information from the record to make assumptions regarding labor costs and modify these inputs.

With regard to the SBC-CA Models, we find that they failed the cost modeling criteria because we were not able to reasonably understand many of the input assumptions, which led to our inability to easily modify them based on comparison to public data or inputs used in HM 5.3. Specifically, we could not identify or make meaningful modifications to many of the SBC-CA model inputs because we could not extract individual inputs from aggregated data, or compare and verify inputs to public information. This was particularly evident with linear loading factors in LoopCAT and annual cost factors throughout the SBC-CA models. For example, we could not identify what structure sharing assumptions were embedded in the SBC-CA factors. Without knowing what structure sharing assumptions were used, it was impossible to modify them. Similarly, we could not adjust loop configuration assumptions such as the design point or cabling characteristics, and we could not segregate expenses for SBC-CA's shared and common costs or Project Pronto expenditures from its calculations of per unit expense levels for UNEs.

On the subject of DLC installation costs, we were unable to understand the underlying assumptions SBC-CA made when creating factors in this area. JA contend that SBC-CA's witness appeared unable to explain how the factors relating to these costs were derived. (JA, 2/7/03, pp. 32-33.) Our own review supported this allegation. Eventually, we delved into actual DLC installation costs to compare these to the factors. Again, SBC-CA's witness was unable to explain any linkage between actual costs and the factors used in the SBC-CA models. (Hearing Tr., 4/15/03, pp. 573, 586.) Essentially, SBC-CA's witnesses were unable to support the modeling inputs adequately in this area. Finally, we were unable to determine how to modify the SPICE model to test varying demand levels. In our final modeling runs following comments on the Proposed and Alternate Decisions, we found that input modifications to the SBC-CA models, particularly inputs related to factors and expenses, required time-intensive manual manipulation that was prone to error and produced varying results that could not easily be replicated. This is discussed in detail in Section V.D below. In sum, we found that both models failed one or more of our cost modeling criteria.

The analysis above describes why we have concluded that both HM 5.3 and the SBC-CA Models contain flaws that we cannot correct completely. Initially, we were unwilling to rely solely on the results of either model.

The SBC-CA models contain many inputs and assumptions that we conclude are not forward-looking -- such as loop configuration, cable inventory, structure sharing percentages, ACFs, SPICE demand assumptions, potentially duplicative shared and common expenses, and Project Pronto expenses. We are unable to modify these inputs for a variety of reasons. In some cases, the inherent structure of SBC-CA's models aggregates these inputs with other information to the point that we cannot isolate inputs for modification. In other cases, the record convinces us that the inputs may be overstated or not specific to the provision of UNEs, but the record has not provided us with adequate information for a replacement number. Also, we find the loop configuration modeled by SBC-CA is not forward looking because it uses the design point concept and embedded cable inventories, which most likely overstates loop costs.

In contrast, even though we disagree with many of the input assumptions used in HM 5.3 - such as the cost of capital, asset lives, the copper/fiber crossover point, structure sharing, plant mix, DLC costs, and switching assumptions - we can change many of these inputs and assumptions. In many areas, we have incorporated inputs from the SBC-CA models into HM 5.3, particularly in areas such as labor rates, plant mix, and switching investment information. Despite these efforts, we could not cure all of the flaws we found in HM 5.3. We find that we cannot perform sensitivity analyses on the clustering process that builds the initial estimates of outside plant, we cannot modify all inputs related to labor costs, and we cannot overcome flaws in the interoffice transport module that underestimate demand and may not adequately incorporate all necessary equipment.

In the Proposed and Alternate Decisions we circulated for comments, we stated that given the flaws of both models and our unwillingness to rely on either as the sole estimate of forward-looking UNE rates, we would instead use the two models to create a zone within which we will adopt new UNE rates. We found it unreasonable to throw out both models and have the parties start over, or have the parties try to remedy the flaws we noted in these models because starting over would take valuable time, might not produce good results, and would undoubtedly lead to further contention over the proposed remedies.

The Proposed Decision found that a fair and reasonable, albeit not perfect, solution, given the amount of time that had already passed in this proceeding, was to use both models as endpoints for our ratesetting exercise. Both models were run with common inputs, to the extent possible. The results of the HM 5.3 run generally provided what appeared to be a reasonable lower boundary of a range for ratesetting purposes, and the SBC-CA model results generally provided what appeared to be a reasonable corresponding ceiling. In a few cases, such as rates for DS-1 ports and the DS-3 entrance facility without equipment, HM 5.3 actually resulted in a higher rate than the SBC-CA models.

Following comments on the Proposed and Alternate Decisions, we made appropriate adjustments to both modeling approaches and input assumptions to reflect parties' directions to resolve modeling difficulties. After doing so, we conclude we must now abandon the midpoint approach because we find that although both models are flawed, the SBC-CA models fail our modeling criteria to such a significant extent that we cannot reasonably rely on them to set UNE rates. After correcting what we agree are errors pointed out by parties in our runs of both HM 5.3 and the SBC-CA models, and making further modifications to inputs based on the comments, we find the flaws in the SBC-CA models are overwhelming and produce erratic and sometimes counterintuitive results.

Specifically, the final corrections and other modeling input changes we made to the SBC-CA models in response to comments were:48

· Corrected asset lives to ensure correct data used in all columns. (Section VI.A)

· Corrected copper distribution and feeder fill factors to ensure the adopted fill factor is used for both material and installation costs (Section VI.E.1-2)

· Corrected DLC common equipment fill to 62% (Section VI.E.4)

· Corrected average switch size in SICAT to use SBC average for Nortel switches (Section V.D.2, Other Switching and Port Model Changes.)

· Modified NID inputs to use a 2 pair NID, one hour NID installation time, and adjusted residential premise termination fill factor to 53.4% (Section VI.E.7)

· Modified cost of capital to 9.44% based on an 11.78% cost of equity and a 6.34% debt rate (Section VI.B)

· Modified expense levels and cost factors related to non-regulated expenses, affiliate expenses, TBO expenses, and land and building factors (Section V.A.4.c)

· Adjusted fill factors in SPICE to 85% for SONET and common equipment, and to 54% for fiber (Section V.A.3)

· Modified split of new and growth switching lines (Section VI.J.2)

· Adjusted port cost calculation to correct labor costs and concentration ratio, as suggested by SBC-CA (Section V.D.2, Other Switching and Port Model Changes)

The final corrections and other modeling changes we made to HM 5.3 in response to comments were:

· Corrected DLC Plug-In fill factor to 75% (Section VI.E.5)

· Adjusted DLC common equipment fill to 62% (Section VI.E.4)

· Removed FCC cable prices and used HM 5.3 default values instead (Section V.D.2, Cable Prices)

· Modified plant mix inputs to match SBC-CA models (Section VI.G)

· Adjusted switching investment per line calculation (Section V.D.2, Switch Vendors)

· Corrected Verizon best in class expenses (Section V.B.6)

· Corrected BRI and trunk port factors (Section V.D.2, Other Switching and Port Model Changes)

· Modified cost of capital to 9.44% (Section VI.B)

· Adjusted DS-1 and DS-3 loop costs to account for missing equipment (Section V.D.2, DS-1 and DS-3 Loops)

· Calculated deaveraged rates for 4-wire, coin, PBX and ISDN loops (Section V.D.2, Deaveraged Rates)

· Modified NID installation time to one hour (Section VI.E.7)

· Removed additional splice crew to return to original splice crew proposals (Section VI.H)

· Changed maximum copper length to 12,000 feet (Section VI.I)

· Modified split of new and growth switching lines (Section VI.J.2)

· Modified interoffice fiber fill factor to 54% to match run of SPICE (Section V.D.2, Interoffice Rates)

· Modified PBX loop option to include investment for PBX line card (Section V.D.2, PBX loops)

· Adjusted asset lives to match SBC-CA proposal (Section VI.A)

· Increased labor costs for terminal, splice and SAI investments (Section V.B.2)

When we made these changes to the SBC-CA and HM 5.3 models, it became clear that the SBC-CA models were unreasonably difficult to operate and modify because they required extremely time-intensive efforts to make modifications, the models were prone to input errors due to the extraordinarily complex input modification requirements, and the results were erratic and difficult to replicate.

The majority of the problems we experienced can be traced to our manipulation of the SBC-CA cost factor module, which is an integral component used in all other SBC-CA cost modules. In response to comments on the Proposed Decision, we found it necessary to modify expenses and cost factors related to affiliate transactions, unregulated businesses, the building factor, retiree expenses, asset lives, and the cost of capital. We engaged in a time-intensive process to identify which inputs to modify. Each of our modifications required approximately 100 manual input changes to flow the results of the SBC-CA factor model into the other eleven SBC-CA UNE cost modules. As we made the various input changes, the model results varied substantially, and certainly more than we considered reasonable. On more than one occasion, we were unable to predict the direction or amount of change that would result from a given input change. For example, one of our SBC-CA model runs to make the changes we list above resulted in a basic loop rate of $9.14, more than $4 below the loop rate in the Proposed Decision, and below the current interim loop rate. This result was also a dollar below the $10.16 basic loop rate resulting from HM 5.3 when run with similar inputs. This ran counter to our previous findings, and led us to lose confidence in the operation and outputs of the SBC-CA models. Several times we ran the SBC-CA models with what we thought were the exact same inputs, but achieved different results.49 We surmise this may be due to input errors in some of the 100 manual input changes, but we are not certain.

In our view, it is unduly burdensome and therefore not reasonable to continue using a model that requires such extensive and time-consuming manual manipulation, is prone to error even in such basic steps as modifying a single input, and produces such widely varying results. As we ran and re-ran the SBC-CA models, we found we could not replicate, with a reasonable level of certainty and in a reasonable time, an error-free result. The amount of time required to pursue all potential modifications is not reasonable, let alone the time involved to investigate why the results cannot be easily replicated or why they unreasonably deviate from prior results. In sum, even with extraordinary efforts, the SBC-CA models do not provide us the ability to derive UNE rates with an acceptable level of confidence that is a basic requirement of modeling, and they fail to produce results upon which we can reasonably rely.

The problems we experienced modifying and running the SBC-CA models in response to comments echo comments made early in this proceeding by JA and ORA/TURN. JA had complained early on that SBC-CA's ACF model involved a complex series of algorithms that operate within and across 67 detailed worksheets. (JA, 2/7/03, p. 29.) They explained that because SBC-CA's models involved at least six separate cost studies that were not integrated, it was not possible to change critical assumptions in any one of the models and have those changes "flow through" to the others. (Id., p. 40.) We now see through our own experience the wisdom in JA's initial statement that:


...the use of separate, unconnected studies for each service and the use of unlinked files to develop inputs to those studies also create a potential for errors and inconsistencies in assumptions that should be consistent across each service, and makes auditing for and correcting of these errors unduly burdensome. (JA, 2/7/03, p. 29.)

In addition, ORA/TURN provided additional insightful criticism that in hindsight we find accurately describes what we have now experienced first-hand.


[SBC-CA's] cost modeling process is not "user friendly." Adjusting key inputs that have a significant affect on calculating costs, such as cost of capital, is a difficult and complicated process. [SBC-CA] admits that there is no fail safe mechanism to ensure that a change to a key input, which should be common to all models, made in one model would be automatically flowed through to all of [SBC-CA's] models. It would be up to the user of the models to identify where those changes would need to be made, and the user would then have to make the changes manually. A user would have to be intimately familiar with all of the [SBC-CA] models to ensure that he/she did not forget to make the same change in every location in every model. Thus, it is difficult to audit the model outputs because the models are not integrated. This makes it very difficult, if not impossible, for the user to generally replicate the cost study calculations, and to modify crucial assumptions with the certainty that it has been done consistently throughout each model. Thus, [SBC-CA's] cost models have not met the criteria set forth in the Commission's Scoping Memos. (ORA/TURN, 2/7/03, pp. 8-9.) (Emphasis in original; citations omitted.)

Now, with the experience we have gained attempting to modify the SBC-CA models and replicate our own modified work, we concur in the criticisms of JA and ORA/TURN. In our first model runs of the SBC-CA models supporting the Proposed Decision, we made few, if any, adjustments to inputs related to the SBC-CA factors. But now, in response to comments identifying errors and other modeling changes that we agree are valid, particularly related to expenses and cost factors, we conclude that the lack of flow through from one model to the other makes the SBC-CA models extremely challenging to manipulate, excessively prone to errors when modifying inputs, and exceptionally difficult to replicate. Our inability to replicate results in a reasonable time frame, after repeated attempts to do so, leaves us with little confidence in the models' results and it is not reasonable to rely on models that produce such varying results.

In contrast, we did not experience the same degree of difficulty in modifying and correcting our runs of HM 5.3 or verifying its results. In general, we were able to understand how to make the necessary modifications, implement them quickly and, after making them, we could easily and consistently replicate our results in a reasonable time frame and with a high degree of certainty. We have confidence that we have run HM 5.3 correctly with our chosen input modifications, and that the HM 5.3 results are reliable in this regard.

We will adopt HM 5.3 model results for SBC-CA's permanent UNE rates, despite the flaws we have identified in the HM 5.3 model. We conclude this approach is reasonable given the enormous complexity involved in TELRIC modeling exercises. As the FCC has recognized in its recent rulemaking reviewing the TELRIC methodology, UNE cost proceedings are "extremely complex," involving conflicting cost models and hundreds of inputs to those models, supported by the testimony of expert witnesses. State pricing proceedings are thus "extremely complicated."50 Given this acknowledged complexity, it is reasonable to use a model with some flaws when the alternative is another model that is not only significantly flawed, but is also unreasonably difficult to operate and produces varying results.

Our use of HM 5.3, even though we find it flawed, is also supported by the D.C. Circuit's discussion of the difficulty in pinpointing TELRIC rates with exactitude. In a 2001 decision upholding FCC findings that UNE rates in Kansas were cost-based, the D.C. Circuit concluded that ratemaking is not an exact science but involves a "zone of reasonableness." As part of its discussion, the court cited to a prior case where it stated:


This argument, however, assumes that ratemaking is an exact science and that there is only one level at which a wholesale rate can be said to be just and reasonable.... However, there is no single cost-recovery rate, but a [wide] zone of reasonableness.... (Sprint Communications Company v. FCC, 274 F.3d 549, 555 (D.C. Cir. Dec. 28, 2001), citing Conway, 426 U.S. at 278.)

As a result, the court declined to find fault with the FCC "for approving the Kansas Commission's compromise resolution of an issue that the parties' behavior had left a muddle." (Id. at 559.) We find the SBC-CA models leave us in more of a muddle than HM 5.3 because we are able to reasonably modify HM 5.3, replicate our results, and do this in a reasonable length of time.

Interestingly, despite fierce criticisms of both models by the parties, we find that loop assumptions embedded in both models have surprising similarities. Earlier versions of the HM 5.3 model have been criticized by this Commission and others for assumptions regarding uniform dispersion of customers throughout the serving area. Indeed, HM 5.3 makes efforts to overcome this prior criticism by precisely locating today's customers through the geocoding process. However, after the geocoded customers are clustered into distribution areas, HM 5.3 does not use the geocoded locations to build a distribution network. Instead, the cluster is split into equal-sized lots and customers are uniformly distributed throughout the distribution area. Likewise, LoopCAT makes the simplifying assumption to approximate loop lengths based on the design point. As SBC-CA's witness Smallwood explains, "SBC-CA makes the reasonable assumption that customers will be distributed throughout a distribution area." (SBC-CA/Smallwood, 3/12/03, pp. 66-67.) By its own admission, SBC-CA is uniformly distributing customers throughout the serving area even though it has criticized prior versions of HM 5.3 for this same assumption.

Finally, we do not find that one model's set of assumptions is more accurate than the other. First, both models include a mixture of loop modeling assumptions that are somewhat reality-based and somewhat hypothetical. HM 5.3 uses today's customer locations, but clusters them differently than SBC-CA's existing network. LoopCAT uses some existing plant routes, but combines that information with estimates of future customer locations. By using the design point approximation technique, LoopCAT does not locate any customers where they are today. Second, HM 5.3 uses a minimum spanning tree theory to build plant to connect customers. By definition, any theory based on "minimums" would produce the lowest possible results. In contrast, LoopCAT uses embedded cable records that we find produce higher results than if cable-sizing guidelines were used to configure a rebuilt network. Third, HM 5.3 relies on the TNS customer location process and its clustering assumptions, while LoopCAT relies on SBC-CA's preprocessor and its assumptions regarding the design point. Both the TNS process and SBC-CA's preprocessor are presented to us as inputs that we cannot adjust, and we are asked to rely on the underlying assumptions without questioning or modifying them.
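
For readers unfamiliar with the concept, a minimum spanning tree connects a set of points using the least possible total length of connecting segments. The sketch below is a generic illustration of Prim's algorithm applied to right-angle distances; the coordinates are hypothetical and this is not the HM 5.3 implementation:

    def rectilinear(p, q):
        # Right-angle distance between two customer locations.
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def minimum_spanning_tree_length(points):
        # Prim's algorithm: start from one point and repeatedly connect the
        # nearest unconnected point, which minimizes total connecting length.
        connected = {0}
        total = 0.0
        while len(connected) < len(points):
            dist, nearest = min((rectilinear(points[i], points[j]), j)
                                for i in connected
                                for j in range(len(points)) if j not in connected)
            total += dist
            connected.add(nearest)
        return total

    # Hypothetical cluster of five customer locations (coordinates in feet).
    print(minimum_spanning_tree_length([(0, 0), (300, 400), (600, 100), (900, 500), (200, 800)]))

Because the spanning tree is, by construction, the shortest way to connect the points, plant lengths derived from it will tend toward the low end of what an actual route network would require.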

Thus, we now conclude we should not rely upon the SBC-CA models to set permanent UNE rates. While the HM 5.3 model is not perfect and does not meet 100% of our modeling criteria, we now see that, on top of their basic modeling flaws, the SBC-CA models are unduly burdensome and difficult to operate, and they produce varying results that cannot easily be replicated with a reasonable level of certainty, even after our staff has spent over a year working with them. The SBC-CA models do not allow the user to derive UNE rates with an acceptable level of confidence. Therefore, we will abandon use of the SBC-CA models and use only HM 5.3 to set permanent UNE rates for SBC-CA.

Our decision to use HM 5.3 to set permanent UNE rates for SBC-CA is based on runs of the HM 5.3 and SBC-CA models where we have set as many inputs as possible at the same levels. The reasoning behind our chosen input levels is described at length in the Modeling Inputs Section VI below. Here, we will briefly summarize which inputs were used for the two model runs that ultimately led to our decision to rely on HM 5.3 for ratesetting purposes. The inputs that we varied for our runs are the following:

Cost of Capital: We modified both models to use an input assumption of a 9.44% cost of capital. Also, we modified the tax rate in HM 5.3 to 40.75% to match the SBC-CA models.

Asset Lives: HM 5.3 was adjusted to match the asset lives proposed by SBC-CA.

IDLC/UDLC: We adjusted both models to assume a configuration of 60% IDLC and 40% UDLC.

Structure Sharing: In the HM 5.3 model, we used structure sharing levels from the FCC Inputs Order, and we assumed 55% sharing of the distribution and feeder network. In the SBC-CA models, we were not able to modify SBC-CA's proposed structure sharing percentages because we could not determine the percentages assumed by SBC-CA.

Plant Mix: We modified HM 5.3 to use SBC-CA's plant mix assumptions. In response to comments on the Proposed Decision, we modified our method for translating SBC-CA's plant mix into HM 5.3 using pair feet rather than sheath feet, as suggested by SBC-CA. (SBC-CA, 6/1/04, pp. 9-10; Workshop Transcript, 6/14/04, pp. 1000-1001.)

Labor Rates:

a) HM 5.3 is adjusted where possible to use the proprietary loaded labor rate from SBC-CA's models. This rate applies to the Copper and Fiber OSP Technician rates, the Engineering Labor rate, and EF&I per hour. Adjustments were made to labor rates for wire center terminal investment, customer premises fixed investment, pole labor, NID labor, copper cable manhole investment, fiber pullbox investment, and aerial drop placement.

b) Crew sizes in HM 5.3 were adjusted for cable placing, where possible, to add one person (i.e., a crew of one was increased to two, and a crew of two was increased to three).

c) Terminal, splice and SAI investments were adjusted based on information from the record indicating that approximately 50% of these investments relate to labor costs. Therefore, half of these investments were increased to match SBC-CA's hourly loaded labor rate, as illustrated in the sketch following this list.

d) There were no changes to the labor rates assumed in the SBC-CA models.
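To illustrate the adjustment described in item c) above, the sketch below applies a 50% labor share and rescales the labor portion to a higher loaded labor rate. All dollar figures and both labor rates are hypothetical placeholders, since the actual loaded labor rate is proprietary to SBC-CA.

    # Illustrative only; the investments and labor rates below are hypothetical,
    # not record values. Half of each investment is treated as labor and rescaled
    # to the higher loaded labor rate.
    hm_labor_rate = 45.00        # assumed HM 5.3 default loaded labor rate ($/hour)
    sbc_labor_rate = 60.00       # assumed SBC-CA proprietary loaded labor rate ($/hour)
    labor_share = 0.50           # record indicates roughly half of these investments is labor

    investments = {"terminal": 120.00, "splice": 80.00, "SAI": 950.00}  # hypothetical $

    for item, cost in investments.items():
        labor = cost * labor_share * (sbc_labor_rate / hm_labor_rate)
        material = cost * (1 - labor_share)
        print(f"{item}: adjusted investment = ${labor + material:,.2f}")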

Fill Factors: In the SBC-CA models, achieved fill factors were adjusted to the levels in the table below. In HM 5.3, the relevant fill inputs (e.g., cable sizing factors for distribution plant) were adjusted to produce the following "achieved fill"51 levels (see the illustrative sketch following this item):

a) Loop

   Copper Distribution:             51.6%
   Fiber Feeder:                    79.6%
   Copper Feeder:                   76%
   DLC Common Equipment:            62%
   DLC Plug-In Equipment:           75%
   Residential Premise Termination: 53.4%
   SAI:                             67.8%

b) Switching: We modified fill levels in the SBC-CA model to assume an 82% achieved fill for both analog and digital switches. HM 5.3 was likewise modified to an 82% achieved fill for digital and analog switches.
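For illustration of the achieved-fill relationship applied above, the sketch below backs installed capacity out of a target fill and, conversely, computes the achieved fill implied by a given installed capacity. The line counts are assumed values, not record data.

    # Minimal sketch of the "achieved fill" relationship; line counts are hypothetical.
    working_lines = 258          # demand served in a distribution area (assumed)
    target_fill = 0.516          # adopted copper distribution achieved fill

    required_capacity = working_lines / target_fill   # pairs the model must install
    print(f"Installed pairs implied by a {target_fill:.1%} fill: {required_capacity:,.0f}")

    # Conversely, a model run reports achieved fill as working lines over capacity.
    installed_pairs = 500                              # assumed installed capacity
    print(f"Achieved fill: {working_lines / installed_pairs:.1%}")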

Crossover Point: HM 5.3 was adjusted to assume a fiber/copper crossover point of 12,000 feet for analog loops. In addition, we limit the maximum copper loop length in HM 5.3 to 12,000 feet. There were no changes to the crossover assumptions in the SBC-CA models.

DLC costs: SBC-CA's LoopCAT model was adjusted to lower the EF&I for DLC installation to the average levels derived from a recent sample of 50 SBC-CA DLC installations.52 HM 5.3 does not use an EF&I factor, so instead, we used an average of actual Remote Terminal and CEV installation costs from the same sample of 50 SBC-CA installations.

Pole Spacing: We modified HM 5.3 to assume pole spacing of 150 feet for all density zones of the distribution network. (See SBC-CA/McNeil, 2/7/03, p. 38.)

Drop Terminal Investment: We modified HM 5.3 to assume 85% buried drop terminals, and 15% aerial, to match the percentages of buried and aerial drops in SBC-CA's models. (See SBC-CA/Tardiff, 2/7/03, p. 76.)

Cable Prices: Initially, we modified HM 5.3 to use copper and fiber cable prices used by the FCC, based on criticisms by SBC-CA witness Tardiff. (SBC-CA/Tardiff, 2/7/03, p. 39.) JA provided these cable prices in documents supporting HM 5.3. (See JA/Klick Declaration, 10/18/02, Attachment JCK-2 pp. 10-12.) Following comments on the proposed decision, we removed this modification based on comments by AT&T that the FCC cable prices include both material and installation and result in double-counting of installation costs. (AT&T, 6/1/04, p. 10.) The copper and fiber cable prices were not modified in the SBC-CA models.

PBX Loops: In response to comments of SBC-CA, we added investment to HM 5.3 for PBX line cards based on assumptions from SBC-CA's LoopCAT. (SBC-CA, 6/1/04, p. 29.)

Switch Vendors: We modified HM 5.3 to base the switching investment per line on a weighted average of Lucent and Nortel prices only, based on information from SBC-CA's SICAT on the percent of lines purchased from those two vendors.53 Siemens was removed from the switch vendor mix assumed in HM 5.3, as explained in Section VI.J.1 below. There were no changes to the SBC-CA models in this area.
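As an illustration of the weighted-average calculation described above, the sketch below blends two per-line prices by vendor shares. The prices and purchase shares shown are hypothetical, since the actual shares are proprietary to SBC-CA.53

    # Hypothetical per-line prices and vendor shares (the actual shares are
    # proprietary to SBC-CA); shown only to illustrate the weighted average.
    price_per_line = {"Lucent": 95.00, "Nortel": 110.00}   # assumed $ per line
    share_of_lines = {"Lucent": 0.65, "Nortel": 0.35}      # assumed purchase mix

    weighted_investment = sum(price_per_line[v] * share_of_lines[v] for v in price_per_line)
    print(f"Weighted switching investment per line: ${weighted_investment:.2f}")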

Based on comments on the Proposed Decision, we corrected our calculation of switching investment per line in HM 5.3. SBC-CA contends we should use its SICAT model to calculate switching investment per line in HM 5.3, while JA contend we should use the formula provided by JA witness Pitts, with a correction provided in their comments. (SBC-CA, 6/1/04, p. 10; AT&T, 6/7/04, p. 9, n. 68.) We will use the formula provided by JA witness Pitts, as corrected, because we agree with AT&T that it is inappropriate to use SICAT to calculate the switching investment per line in HM 5.3.

New vs. Growth: Initially, we adjusted both models to assume 40% of lines are purchased at the "new" line price, and 60% at the "growth" line price. This matches the mix of new and growth lines that was used in the prior OANAD proceeding. We also removed "other replacement costs" from SBC-CA's SICAT model. After reviewing comments on the Proposed Decision, we agree with MCI/WorldCom that it is inconsistent to use the percentages from the prior OANAD proceeding when they assume a higher percentage of higher-priced growth line purchases than SBC-CA's own current data. (MCI/WorldCom, 6/1/04, p. 20.) Therefore, we modify our modeling assumptions to use the weighting of new and growth lines reflected in SBC-CA's SICAT model.54
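The new/growth weighting works the same way. The sketch below blends hypothetical new and growth line prices by an assumed purchase mix; the adopted weighting itself is drawn from SBC-CA's proprietary SICAT demand data and is not reproduced here.

    # Illustrative only; prices and percentages are hypothetical placeholders.
    new_price, growth_price = 70.00, 130.00      # assumed $ per line
    new_share = 0.25                             # assumed share of lines bought "new"
    growth_share = 1 - new_share

    blended_price = new_price * new_share + growth_price * growth_share
    print(f"Blended per-line switch price: ${blended_price:.2f}")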

Switch Rate Structure: We ran both models assuming a flat rate for switching as proposed by JA. This means that 100% of switching costs are allocated to the port and there are no usage rates.55 We also calculated a usage-based rate for reciprocal compensation purposes, based on a 70%/30% split of traffic sensitive and non-traffic sensitive costs. In comments on the Proposed Decision, SBC-CA contends a rate element that was separately identified in the prior OANAD proceeding for tandem trunk port termination is missing. (SBC-CA, 6/1/04, p. 29.) MCI/WorldCom responds that the connection cost is already included in HM 5.3 tandem per minute costs and addition of a separate rate element would result in double-counting. (MCI/WorldCom, 9/1/04, p. 8.) Our own review indicates that MCI/WorldCom's position is correct, and there is no need to separately state tandem trunk port termination costs since they are included in usage charges.
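For illustration of this rate structure, the sketch below allocates a monthly switching cost entirely to the port and derives a per-minute reciprocal compensation rate from the 70%/30% split. The monthly cost and minutes of use shown are hypothetical placeholders, not record values.

    # Hedged sketch of the adopted switch rate structure; figures are hypothetical.
    monthly_switching_cost = 4.00    # assumed total switching cost per line ($/month)
    traffic_sensitive_share = 0.70   # adopted 70%/30% split
    minutes_per_line = 1500          # assumed minutes of use per line per month

    flat_port_rate = monthly_switching_cost          # 100% of cost recovered in the port
    recip_comp_rate = (monthly_switching_cost * traffic_sensitive_share) / minutes_per_line
    print(f"Flat port rate: ${flat_port_rate:.2f}/month")
    print(f"Reciprocal compensation usage rate: ${recip_comp_rate:.6f}/minute")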

Other Switching and Port Model Changes: We deleted per month white page listing expenses from the SBC-CA port cost study, based on statements by SBC-CA witnesses Lundy and Silver that this should be removed. (SBC-CA/Lundy, 3/12/03, p. 46; SBC-CA/Silver, 3/12/03, p. 4.) Also, we adjusted the concentration ratio of lines/trunk from 2:1 to 4:1 in both HM 5.3 and SICAT, to be consistent with loop modeling assumptions.

In response to comments on the Proposed Decision, we (1) modified our assumption regarding switch sizes based on the average switch size in SICAT (AT&T, 6/1/04, p. 8.), (2) modified our port cost calculations based on comments of SBC-CA that we ignored labor costs for switch installation and that we inappropriately included the concentration ratio in our port cost calculations56 (SBC-CA, 6/1/04, p. 28.), and (3) recalculated our BRI and trunk port calculations to correspond to our other changes in switch investment inputs. (AT&T, 6/1/04 p. 20.)

Vertical Switch Features: We modified the SBC-CA Model to include any identified feature hardware costs in the port rate. Using SBC-CA's support materials, we calculated total hardware costs for nine features. We then assumed that an average customer would use three of these nine features, so we added one third of this total cost to the port cost. There were no changes to HM 5.3 regarding feature costs. In comments on the Proposed Decision, SBC-CA disputes this methodology and contends we should add its total per line feature cost to the port price. We make no change to our approach because we do not agree with how SBC-CA has calculated its total per line feature cost and we believe it may include double-counting of feature hardware and software costs that are already included in per line switching costs.
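For illustration of this calculation, the sketch below sums nine per-feature hardware costs and adds one third of the total to the port cost. The per-feature figures are hypothetical stand-ins for the proprietary amounts in SBC-CA's support materials.

    # Illustrative arithmetic only; the nine per-feature hardware costs below are
    # hypothetical placeholders.
    feature_hardware_costs = [0.12, 0.08, 0.30, 0.05, 0.22, 0.10, 0.18, 0.07, 0.15]  # $/line
    assert len(feature_hardware_costs) == 9

    total_feature_cost = sum(feature_hardware_costs)
    features_per_customer = 3                     # assumed average use of 3 of the 9 features
    port_adder = total_feature_cost * features_per_customer / len(feature_hardware_costs)
    print(f"Feature hardware added to the port rate: ${port_adder:.3f}")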

Expenses: HM 5.3 was adjusted to remove the presumption that SBC-CA expenses would track those of Verizon California. In other words, we used SBC-CA's 2001 current E:I ratio without adjustments based on comparisons with Verizon. In the SBC-CA models, we removed the inflation adjustment to expenses, under the assumption that productivity increases offset inflation adjustments. Following comments on the Proposed Decision, we modified the SBC-CA models related to non-regulated expenses, affiliate expenses, TBO expenses, and land and building factors.
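For illustration of how an expense-to-investment (E:I) ratio translates investment into annual expense, the sketch below applies an assumed factor to an assumed per-loop investment. Neither figure is SBC-CA's actual 2001 value.

    # Minimal sketch of applying an E:I factor; both inputs are hypothetical.
    loop_investment = 800.00     # assumed forward-looking investment per loop ($)
    ei_factor = 0.12             # assumed annual expense per dollar of investment

    annual_expense = loop_investment * ei_factor
    print(f"Annual expense attributed to the loop: ${annual_expense:.2f}")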

Interoffice Rates: We adjusted the SONET and common equipment fill factor in SBC-CA's SPICE model to 85%, as proposed by JA. We adjusted the fiber fill factor to 54%. Then, we ensured that these same interoffice fill factors were used in our runs of HM 5.3. SBC-CA proposed fill factors for SPICE based on its current utilization levels, which SBC-CA contends are forward-looking. SBC-CA's proposed fill factors are significantly below the levels used in HM 5.3 and used by the FCC in its own modeling. (JA/Mercer-Murphy, 2/7/03, paras. 68-72; See also Inputs Order, para. 208.) The variable per-mile rate elements for unbundled dedicated transport, as well as SS7 links, are rounded to the nearest penny to match the rate design in the prior OANAD proceedings.

DS-1 and DS-3 Loops: In comments on the Proposed Decision, SBC-CA contends that DS-1 loop rates are incorrect because costs of critical pieces of equipment are missing or incorrectly applied and DS-3 loop costs for critical equipment are incorrectly calculated. (SBC-CA, 6/1/04, p. 7.) These omissions were noted by SBC-CA during the course of the proceeding and JA fixed these omissions and errors in their cost filings in the Verizon UNE Phase of R.93-04-003. (Id., see also SBC-CA/Murphy, 2/7/03, p. 63.) AT&T responds that parties may define the DS-1 loop in different ways and it would not object to an additional UNE to cover the costs noted by SBC-CA. AT&T also admits an error in its DS-3 loop cost calculations. (AT&T, 6/7/04, p. 7.)

We conclude that because AT&T admits errors or omissions in the HM 5.3 DS-1 and DS-3 loop cost calculations, we should fix them. We take official notice of DS-1 and DS-3 loop costs proposed by AT&T and MCI/WorldCom in the Verizon UNE phase of R.93-04-003 and use them as suggested by both SBC-CA and AT&T to amend DS-1 and DS-3 loop cost calculations.57

As described in Section VII below, we limit DS-3 related UNE rates to the levels proposed by SBC-CA.

Shared and Common Cost Markup: Both models include a 21% markup, as adopted in D.02-09-049.
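For illustration, the sketch below applies the adopted 21% markup to a direct unit cost; the direct cost shown is a hypothetical placeholder, not an adopted rate.

    # Simple illustration of the 21% shared and common cost markup.
    direct_unit_cost = 8.00          # assumed direct TELRIC cost of a UNE ($/month)
    markup = 0.21                    # adopted in D.02-09-049

    rate = direct_unit_cost * (1 + markup)
    print(f"UNE rate including shared and common costs: ${rate:.2f}/month")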

Deaveraged Rates: In comments on the Proposed Decision, SBC-CA claims the Commission fails to adopt deaveraged rates for several UNEs that have previously been deaveraged by the Commission in D.02-02-047. (SBC-CA, 6/1/04, p. 29.) We agree this was an oversight and we modify the decision to adopt deaveraged rates for 4-wire, coin, PBX, and ISDN loops based on the relationship between current statewide average and deaveraged rates for these UNEs.
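For illustration of this approach, the sketch below scales existing deaveraged rates by the ratio of a new statewide average rate to the old statewide average. All dollar values are hypothetical placeholders, not the adopted rates.

    # Hedged sketch of carrying an existing deaveraging relationship onto a new
    # statewide average rate; all figures are hypothetical.
    old_statewide_avg = 14.00
    old_deaveraged = {"Zone 1": 10.50, "Zone 2": 14.70, "Zone 3": 22.40}  # assumed $
    new_statewide_avg = 11.00                                             # assumed new average

    new_deaveraged = {
        zone: new_statewide_avg * (rate / old_statewide_avg)
        for zone, rate in old_deaveraged.items()
    }
    for zone, rate in new_deaveraged.items():
        print(f"{zone}: ${rate:.2f}")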

Appendix A shows the results of our run of the HM 5.3 model with our chosen inputs. The column in Appendix A showing the results of the HM 5.3 Model runs indicates the permanent UNE rates for SBC-CA that we adopt in this order.58

22 Despite claiming the SBC-CA models do not permit ready adjustment, JA provide a detailed restatement of them through several hundred pages of detailed adjustments to loop engineering assumptions and investment inputs, cost factors, and expense assumptions. SBC-CA disputes the results of JA's restatement, stating that the restated results defy common sense and are inconsistent with cost estimates produced by HM 5.3. SBC-CA disparages JA's restatement because it produces a monthly loop rate of $2.25, or "about the same price as a large cup of coffee at Starbucks." (SBC-CA/Tardiff, 3/12/03, p. 24.) While it is clear that JA devoted considerable resources to restating the SBC-CA models, the Commission must also devote considerable resources to reviewing this work and SBC-CA's rebuttal. We cannot accept the restatements at face value without our own reasonable scrutiny. In many cases, we do not find JA have adequately or convincingly supported their restated results.
23 For example, basic loop rates varied in our final LoopCAT runs from $9.14 to $10.34.
24 See Section V.A.3 below.
25 See Sections V.A.1.a, V.A.3, and V.A.4 below for a detailed discussion of these input problems.
26 See JA/Donovan-Pitkin-Turner, 2/7/03, a 224-page declaration containing over 70 subheadings with proposed modifications to LoopCAT, and JA/Brand-Menko, 2/7/03, a 109-page declaration containing 21 categories of alleged flaws in SBC-CA's expense modeling.
27 ARMIS refers to the FCC's "Automated Reporting Management Information System" that was initiated in 1987 for collecting financial and operational data from the largest carriers and is described further at http://www.fcc.gov/wcb/armis.
28 "Structure sharing" generally refers to the percentage of poles and conduit that are shared with other utilities, or between different portions of SBC-CA's network.
29 See Federal-State Joint Board on Universal Service (CC Docket No. 96-45), Tenth Report and Order, FCC 99-304, 14 FCC Rcd 20156 (rel. Nov. 2, 1999) ("Inputs Order").
30 See, e.g., JA, 2/7/03, pp. 26-27 and 29; JA/Declaration of Donovan/Pitkin/Turner, 2/7/03, pp. 65-67.
31 As we discuss in Section V.B.7 below, we recognize that the FCC uses its Synthesis Model for universal service purposes, but it also relies on it for cross-state comparisons of forward-looking UNE costs. Thus, we find it reasonable to look to the FCC's Synthesis Model and the Inputs Order for guidance on some modeling inputs.
32 Specifically, JA cite to factors used by SBC in Nevada, Connecticut, and Wisconsin. (JA/Mercer-Murphy, 2/7/03, p. 46.)
33 See, generally, declarations of SBC-CA witnesses Cohen, Henrichs, and Makarewicz, 3/12/03.
34 We note that JA filed a separate application, A.04-03-031, on March 12, 2004 nominating the shared and common cost markup for review in 2004. Also, the Ninth Circuit Court of Appeals recently granted an appeal of AT&T and WorldCom with respect to the Commission's markup calculation. (AT&T Communications of California Inc., et al., v. Pacific Bell Telephone Company, et al., No. 02-16818, 375 F.3d 894 (9th Cir. July 14, 2004).) Thus, issues surrounding the shared and common cost markup will be addressed by the Commission either in a later phase of this proceeding along with true-up payment issues, or separately, as determined by the Commission.
35 Project Pronto refers to SBC-CA's capital expenditures to add loop plant, circuit equipment, and other facilities to provision advanced data services like DSL, which are provided by SBC-CA's unregulated affiliate, SBC Advanced Services Inc. (ASI).
36 TBO refers to the accrual for post-retirement benefit expenses for SBC-CA's retirees. Effective in 1991, the rules for accounting for post-retirement benefits changed due to Statement of Financial Accounting Standards (SFAS) No. 106, Employers' Accounting for Post-retirement Benefits Other than Pensions. SBC-CA adopted SFAS 106 for regulatory purposes on January 1, 1993. The TBO was established to account for the anticipated future retiree medical costs already earned as of that date, but not yet paid. (See SBC-CA/Cohen Declaration, 3/12/03, p. 15.)
37 According to SBC-CA, the TPI is obtained from C.A. Turner Utility Reports. (SBC-CA/Cohen, 10/18/02, p. 6.) The CPI-W is defined as the Consumer Price Index for Urban Wage Earners and Clerical Workers. (SBC-CA/Cohen, 3/12, p. 29.)
38 See Section VI.D for a complete discussion of the DLC inputs used in the Commission's model runs.
39 The limitation of 6,541 lines is based on a maximum underground vault, or "CEV," sized to hold 8,064 lines, of which 20% is reserved for growth.
40 See also ALJ's Ruling on Joint Applicants' and SBC Pacific's Motions to Strike, 5/21/03, regarding SBC-CA's request to strike rebuttal testimony of JA witness Landis regarding TNS and clustering issues because JA did not respond fully and completely to discovery requests for the clustering source code. The ALJ denied SBC-CA's motion to strike Landis' testimony because she had granted SBC-CA access to Landis in response to SBC-CA's motion to compel and SBC-CA never further pursued greater access per the procedure the ALJ outlined. (5/21/03 Ruling, pp. 11-12.)
41 We note that, similar to the SBC-CA models, HM 5.3 can also be criticized for how it handles multiple dwelling units. Although HM 5.3 clusters customers based on current population density characteristics, it does not necessarily model sufficient equipment to serve high density locations. This is discussed in detail in Section VI.E.7 where we address the fill factor for premises termination equipment.
42 The FCC has itself noted, in the context of its own cost modeling for universal service purposes, that:
43 Of course, we could have asked JA to re-run its clusters with our assumptions, but this would have required a reopening of the record and an opportunity for all parties to comment on the new model runs. Given the other flaws we identified in HM 5.3 and the SBC-CA Models, we did not consider this a valuable use of time.
44 For example, the HM 5.3 "Inputs Portfolio" lists numerous investment inputs relating to line cards that are selected based on "vendor documentation." (See JA/Mercer, 10/18/02, RAM-5, pp. 85-90.)
45 Indeed, SBC-CA contends that HM 5.3 has not depicted proper loop lengths, but it is unclear how SBC-CA can know this for sure since its own model does not use actual loop lengths and its data sources do not appear to provide this information.
46 The FCC uses SynMod for universal service support purposes and for cross-state comparisons of forward-looking UNE costs. For example, the FCC used SynMod with its default input values to assess the reasonableness of UNE prices when considering SBC's 271 application in Kansas and Oklahoma in 2001 and California in 2002. (See Joint Application of SBC Communications Inc., Southwestern Bell Telephone Company and Southwestern Bell Communications Services, Inc. d/b/a Southwestern Bell Long Distance for Provision of In-region, InterLATA Services in Kansas and Oklahoma (CC Docket 00-217), Memorandum Opinion and Order, FCC 01-29 (rel. Jan. 21, 2001), paras. 83-84 ("Kansas 271"); see also Application by SBC Communications Inc., Pacific Bell Telephone Company, and Southwestern Bell Services Inc., for Authorization to Provide In-Region InterLATA Services in California (WC Docket 02-306), Memorandum Opinion and Order, FCC 02-330 (rel. Dec. 19, 2002), para. 64 ("SBC California 271 Order").)
47 JA witness Bryant modified eight inputs, which were (1) copper cable installed investment, (2) DLC equipment, (3) protection block/NID, (4) outside plant maintenance factors, (5) depreciation rates, (6) cost of capital, (7) maximum copper cable distance, and (8) switch investment. (JA/Bryant, 3/12/03, p. 5.)
48 For convenience, each correction or modeling change is followed by the Section number where the change is discussed.
49 For example, basic loop rates varied in one set of seemingly identical SBC-CA model runs from $9.14 to $10.34.
50 Review of the Commission's Rules Regarding the Pricing of Unbundled Network Elements and the Resale of Service by Incumbent Local Exchange Carriers, WC Docket No. 03-173, Notice of Proposed Rulemaking, FCC 03-224 (rel. Sept. 15, 2003), para. 6 ("TELRIC NPRM").
51 Achieved fill is defined in Section VI.E.1 below.
52 The actual DLC costs and resulting factors are proprietary to SBC-CA, but contained in JA, 8/1/03, Exhibit C-4, p. 1 and C-5, p. 1.
53 These percentages are proprietary to SBC-CA, but can be found in SBC-CA's 10/18/02 filing of its SICAT model under the "Input-Cost Drivers" Tab, cells B32 and B37.
54 The weighting, which is based on proprietary information supplied by SBC-CA, is calculated based on SICAT's aggregated demand for Lucent and Nortel lines found in the respective "Input-Demand" tabs of SBC-CA's 10/18/02 SICAT filing.
55 During the process of calculating a flat monthly port rate, both models exhibited extraneous investment of less than 10 cents, which was manually added to the port rate. (See Appendix A, note 1.)
56 We corrected these items by using the annual cost factor that includes switch installation and by recalculating port investment to correct the concentration ratio.
57 See R.93-04-003/I.93-04-002, AT&T/MCI-WorldCom Opening Comments, 11/3/03, Mercer Declaration, Exh. RAM-5, pp. 45-48; see also AT&T, 6/7/04, p. 7, n. 59 for AT&T's description of how to amend HM 5.3 related to these costs.
58 We do not provide the results of our final runs of the SBC-CA models because we abandon their use in setting UNE rates. As mentioned previously, we had difficulty replicating results in the SBC-CA models and it is unclear which of our model runs we can rely on to compare to HM 5.3 rates.
