In comments, workshops, and hearings during the course of this proceeding, JA and SBC-CA have made numerous criticisms of each other's models, alleging flaws in HM 5.3 and the SBC-CA Models. These criticisms can be quickly summarized.
The essential criticism of the HM 5.3 model is that it ignores generally accepted engineering and network design standards to instantly construct a brand new, fully functioning network at a single moment in time on a featureless plain devoid of regulatory requirements. SBC-CA contends that, through the use of unrealistic and unsupported inputs, HM 5.3 drastically understates the size of the network, minimizes the costs to maintain it, and lacks the capability to provide all the services that are provided over SBC-CA's network today.
Specifically, SBC-CA contends that HM 5.3 does not adequately represent customer locations, does not comport with how an engineer designs plant, and relies too heavily on subjective judgment for the prices of purchasing and installing network facilities. (SBC-CA/Tariff Decl., 2/7/03, p. 4.) SBC-CA asserts that HM 5.3 does not account for all the costs required to build and maintain the network - particularly those costs that arise from the topological, economic, and regulatory realities of construction in California. SBC alleges that HM 5.3 produces clusters and equipment cabinet sizes that would never obtain the needed local permits. In addition, SBC-CA claims that HM 5.3 relies on unrealistic labor assumptions to construct a network and that such a network could not handle all of SBC-CA's customer demand. According to SBC-CA, HM 5.3 fails to account for the substantial costs that carriers incur to accommodate growth and respond to demand changes. As a result, SBC-CA maintains that HM 5.3 provides a "static view" of a network that assumes a level of efficiency that no real carrier can possibly achieve and does not reflect how real-world telecommunications firms operate. Moreover, SBC-CA contends that HM 5.3 fails the cost modeling criteria set forth in the June 2002 Scoping Memo. In particular, SBC-CA alleges that it was not given sufficient access to the intricacies of the customer location process used in HM 5.3.
In contrast, JA contend that SBC-CA's cost models are deeply flawed and do not adhere to TELRIC standards because they rely almost exclusively on embedded data from SBC-CA's legacy network rather than forward-looking network configurations. Further, JA maintain that the SBC-CA Models do not meet the Commission's cost study criteria and do not permit ready adjustment to eliminate these inherent flaws.24
JA allege that SBC-CA's models suffer from structural flaws stemming from a basic misconception of the purpose of competition. JA claim that:
The purpose of local competition is not to ensure that [SBC-CA] is "made whole," or somehow recovers every penny it spends no matter how foolishly. Rather, one of the purposes of competition is to force entrenched incumbents such as [SBC-CA] to become more efficient. In a competitive market, there is no guarantee that a company will recover every dollar it spends. That lack of a guarantee is exactly what forces companies to spend wisely and operate efficiently. (JA, 2/7/03, p. 43.)
JA argue that, in some ways, forward-looking costs overcompensate incumbent carriers because much of the investment in the network to provide UNEs was incurred years ago and the loop plant has long since been fully depreciated. According to JA, "SBC-CA does not incur any incremental investment cost to allow competitors to use that loop plant. Nonetheless, under TELRIC, [SBC-CA] is entitled to recover investment costs for such loop plant as if [SBC-CA] had to install it all over again." (JA, 2/7/03, p. 41.)

We find that both models are flawed and do not allow us complete flexibility to modify inputs and test various outcomes.
We find the loop modeling and customer location process in HM 5.3 lacks transparency, limits the Commission's ability to test various scenarios, and raises questions about the accuracy of its customer locations. Even if we could modify the cluster process used in HM 5.3, we are unsure what effect this would have on its final cost results. In addition, HM 5.3 contains myriad inputs that are at the low end of what we consider reasonable. While we can modify most of these inputs, we were not able to modify all input assumptions to our satisfaction, particularly certain inputs related to labor costs. We are also not able to modify the interoffice transport module of HM 5.3 to overcome the criticisms that it underestimates demand for interoffice transport, may not adequately incorporate optical interface equipment, and is insensitive to demand changes. If we could satisfactorily modify HM 5.3's labor inputs, these changes would most likely increase cost inputs in HM 5.3. If we could modify interoffice inputs related to demand and equipment, we are uncertain what effect this would have on rates. Therefore, the only conclusion we can draw from these areas that we cannot modify is that HM 5.3 may underestimate some labor-related, forward-looking UNE costs.
In contrast, the SBC-CA models contain numerous inputs based on the characteristics of SBC-CA's current network operations. SBC-CA claims, "[T]he key to a proper, TELRIC-compliant, long run analysis is to permit all facilities and characteristics to be variable and to assume replacement or change only where it is shown efficient to do so." (SBC-CA, 3/12/03, p. 8.) SBC-CA's approach essentially assumes that the current network design and current costs are at an efficient equilibrium, and that unless it can be shown that a change produces an outcome more efficient than that status quo, then these current costs are TELRIC compliant. Although this approach may be plausible, FCC requirements place the burden on incumbent LECs to demonstrate that their costs do not exceed forward-looking levels. Many of SBC-CA's modeling inputs, which include loop investment and design characteristics, expense levels, and labor inputs, have not been sufficiently justified as forward-looking, nor can we fully characterize them as the costs of a currently efficient network.
Some of these inputs can be modified to what we consider forward-looking levels, but many cannot. The inputs we are unable to modify include SBC-CA's loop length assumptions, loop cabling inputs, and numerous inputs embedded in annual cost factors such as structure sharing percentages and labor installation assumptions. Further, we are unable to modify SBC-CA's expense assumptions to remove potential shared and common costs, and Project Pronto expenses. Although we make limited adjustments to the SBC-CA models for expenses related to unregulated services, affiliate transactions, and retiree costs, it is unclear if our adjustments adequately remove the overestimates in these areas.
Finally, we are unable to modify demand assumptions and other factor inputs in SBC-CA's interoffice transport model. Most of the input modifications that we would make to SBC-CA's models would move input assumptions from levels that we believe reflect historic costs to levels that we consider forward-looking. If we could properly modify loop lengths, loop cabling inputs, structure sharing percentages, labor installation factors, interoffice demand, and Project Pronto and shared and common costs, these changes would most likely lower the SBC-CA model results. Therefore, we find that the SBC-CA models over-estimate forward-looking UNE costs.
Thus, although we have undertaken the time-consuming and exhaustive task of modifying many of the inputs used in both models to levels that we conclude are reasonable, there are significant flaws in both models that we are unable to modify, and we are not satisfied that the changes we are able to make completely resolve the structural flaws we have identified.
Initially, we determined that because we could not rely on the results of either model in its entirety, the logical solution was to average the results of both models. We ran both models with our preferred inputs and used the results to create a "zone" of reasonable UNE rates. Because the two sets of results converged to a much narrower range, we determined that reasonable UNE rates lie somewhere within the zone created by the two models' results. The ALJ issued a Proposed Decision that treated the results of each model as an endpoint and adopted the midpoint of the two models' results as SBC-CA's permanent UNE rates. Commissioner Wood issued an Alternate Decision that mirrored this methodology but differed in only two modeling inputs.
Parties then filed comments on the Proposed Decision and Alternate Decision. They identified errors made during the Commission's modeling runs and disputed several of the chosen inputs. In addition, the comments suggested that the Commission should reconsider modeling and input changes that were suggested in the parties' filings and apparently overlooked in the modeling runs supporting the Proposed and Alternate Decisions. After reviewing the comments, correcting what we agree are errors, and making valid modeling changes, we find the SBC-CA models proved extremely difficult to change and, with the exception of the California-specific model of local loop costs, provide very little additional value for our efforts.
We now conclude that it would not be reasonable to use either the UNE-L endpoint produced by HM 5.3 or by SBC-CA to set this rate.25 On the other hand, we conclude that a reasonable UNE-L rate lies between these two endpoints, and we adopt the midpoint for our new, permanent UNE-L rates because the midpoint reasonably mitigates the flaws in both models. For all other UNE rate elements, we rely on HM 5.3 solely.
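For illustration only, the sketch below shows the arithmetic of the midpoint approach described above; the endpoint values are hypothetical placeholders, not the actual HM 5.3 or SBC-CA model results.

```python
# Minimal sketch of the midpoint approach. The endpoint values are
# hypothetical placeholders, not the actual model results from the record.
hm53_une_l = 9.50   # hypothetical HM 5.3 UNE-L result, dollars per month
sbc_une_l = 14.50   # hypothetical SBC-CA model UNE-L result, dollars per month

low, high = sorted((hm53_une_l, sbc_une_l))
midpoint = (low + high) / 2  # the adopted UNE-L rate under this approach

print(f"Zone of reasonable UNE-L rates: ${low:.2f} to ${high:.2f}")
print(f"Adopted midpoint rate: ${midpoint:.2f}")
```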
We conclude that this approach is reasonable. The bulk of the problems with the SBC-CA models appeared when staff attempted to make changes to SBC-CA's annual cost factor module in response to comments from both SBC-CA and JA. These changes related to the cost of capital, affiliate transaction expenses, non-regulated expenses, and building factors. The SBC-CA models required extremely significant effort to pinpoint which inputs to modify and time-intensive manual manipulations to change them, and the modeling results were prone to error. Moreover, aside from the costs related to basic loops, we found little difference between the results produced by the SBC-CA and HM 5.3 models.
We find it is unduly burdensome and unreasonable to continue using a model that requires such extensive and time-consuming manual manipulation except in those areas where it adds clear value to our analysis. We find that the SBC-CA model clearly adds such value only in the estimation of loop-related costs. We believe that this value arises for two reasons: 1) SBC-CA's LoopCAT better reflects California topological, regulatory, and construction realities than HM 5.3; 2) HM 5.3's greatest weaknesses are in its modeling of loop costs, where the results it yields lead to the design of a network that could not be built in California either today or in the future.
Therefore, we will abandon the approach used in the Proposed Decision and Alternate Decision. We use the SBC-CA and HM 5.3 model results as endpoints for a zone of reasonable UNE-L rates. We will use the HM 5.3 model to set permanent rates for all other UNEs offered by SBC-CA.
In the pages that follow, we will describe in further detail the key flaws that we found with HM 5.3 and the SBC-CA models. We will focus our discussion on the major structural flaws identified by the parties, and our conclusions regarding these alleged flaws based on our own staff analysis of the two models. For the most part, this discussion will pertain to those portions of the models that are not easily changed by modifying the inputs. In a separate section, we will discuss the various disputes over modeling inputs and which inputs we have chosen to use in our own modeling runs to determine final UNE rates for SBC-CA.
Fundamentally, Joint Applicants and other parties contend that the SBC-CA Models fail the TELRIC standards set by the FCC. (See JA, 2/7/03, p. 40, ORA/TURN, 2/7/03, p. 9.) The TELRIC methodology is intended to replicate the pricing that would occur in a competitive market if an existing firm had to match the prices offered by a new entrant who would build facilities using the lowest-cost, most efficient technology and network configuration available, assuming the location of existing wire centers. (47 C.F.R. Section 51.505(b).) The FCC TELRIC regulations, as upheld by the U.S. Supreme Court, explicitly state that embedded, or historical, costs shall not be considered when calculating forward-looking UNE costs. (47 C.F.R. Section 51.505(d).)
Generally, we agree with the criticism that SBC-CA's models rely too heavily on SBC-CA's embedded network, both for network configuration and costs. JA contend, and our own analysis shows, that SBC-CA's cost models are replete with embedded inputs and assumptions that are not readily modified to reflect forward-looking costs or configurations. We will discuss in detail in Sections V.A.1 and V.A.3-5 below examples of the faulty network assumptions that we found. In addition, TELRIC requires the calculation of the forward-looking cost over the long run of the total quantity of the facilities and functions attributable to a UNE. (47 C.F.R. Section 51.505(b).) JA claim that SBC-CA's studies "fail to put the 'T' in TELRIC." (JA, 2/7/03, p. 48.) Indeed, SBC-CA admits that "we don't develop a TELRIC on a total basis." (Workshop Transcript (TR.), 12/5/02, p. 408.) We found that in some portions of SBC-CA's models, particularly the model for interoffice transport, it was either difficult or impossible to determine and/or modify the total quantity of the facilities or functions upon which the cost modeling was based, as required by TELRIC.26
As we will discuss below, the SBC-CA models replicate to a great extent SBC-CA's existing architecture based on historical network design. This approach has both strengths and weaknesses. To some extent, this reflects the real topography of California's hills, streams, and freeways. On the other hand, we found that we could not make meaningful modifications to many of the SBC-CA model inputs because we could not extract individual inputs from aggregated data, or compare and verify inputs to public information. This prevented us from modifying many of SBC-CA's cost and configuration assumptions, such as loop input assumptions in SBC-CA's loop module known as "LoopCAT," demand assumptions in SBC-CA's interoffice model, and expenses calculated by annual cost factors.27 Although we could modify some of SBC-CA's model inputs, we eventually came to many "dead-ends" and found that we were unable to modify important model inputs to our satisfaction. The inability to perform sensitivity analysis on any model decreases our confidence in using it as the sole input in the determination of reasonable prices.
While JA provided a detailed "restatement" of the SBC-CA models containing suggestions for modifications to cost factors and engineering assumptions,28 SBC-CA disputed this restatement. We find that JA have pointed out many areas in the SBC-CA models that warrant scrutiny. Indeed, review by Commission staff in many of these areas led to further questions regarding the SBC-CA modeling inputs and assumptions. On the other hand, it is not reasonable for us to accept the JA restatement without resolving the underlying disagreements over modeling inputs and engineering assumptions. The corrections suggested by JA in their restatement are numerous, unclear, and often unsupported. It is not possible in the time allotted to examine each of the almost 100 categories of corrections proposed by JA in over 300 pages of declarations, particularly when the significance or priority of each of these numerous corrections is unknown. We cannot accept the restatements of the SBC-CA models without substantial further review that it is not reasonable to undertake. Instead, the Commission's analysis focused on what it considered key flaws and modeling inputs rather than all of the areas outlined by the parties. In a few limited areas, we did attempt to apply these additional corrections, particularly with regard to expenses in the SBC-CA models. Ultimately, many of these suggested changes to the SBC-CA models became moot when we decided to limit the use of the SBC-CA models to set UNE rates.
Overall, we find SBC-CA's models estimate the cost to rebuild the network SBC-CA has in place today, with some changes for forward-looking technology. This approach reasonably integrates real-world topological and regulatory constraints into SBC-CA's models, but it does not always reflect the lowest cost network configuration.
In reviewing SBC-CA's LoopCAT module, we found we agreed with many of the parties' criticisms that it does not fully reflect forward-looking costs based, in part, on the lowest cost network configuration. Below, we discuss these criticisms, which principally relate to LoopCAT's reliance on current network data, its design point calculation, and the lack of integration of loop models.
There is no dispute that LoopCAT relies extensively, if not exclusively, on costs and facilities derived from SBC-CA's current network. SBC-CA's witness Sneed gives an overview of SBC-CA's modeling approach and describes how "[t]he investments and network characteristics are based on the actual network in place necessary to serve [SBC-CA's] customers, modified where needed to incorporate forward-looking technology." (SBC-CA/Sneed Decl., 10/18/02, p. 4.) Sneed describes how LoopCAT uses annual cost factors to convert investments into annual costs. As Sneed states, "These factors are based on the costs that [SBC-CA] actually incurs, as these are the best indicator of the forward-looking costs that will be experienced in a network serving California." (Id. pp. 4-5.)
JA criticize LoopCAT's reliance on embedded data, including outside feeder plant routes, plant mix, unit costs of construction, cable sizing, fill factors, and installation costs. According to JA:
The use of embedded data ensures that [SBC-CA] will not model an efficient network, as prescribed by TELRIC, but rather will propose substantially inflated costs. For example, [SBC-CA's] reliance on embedded data for unit costs of construction ignores the economies of scale inherent in the TELRIC "total demand" approach, thereby significantly overstating costs. Similarly, [SBC-CA's] reliance on embedded data causes the inclusion of many undersized pieces of equipment in the network, rather than recognizing that today's demand can be served by far fewer, larger sizes of cable, DLC terminals and FDIs. Thus, again, [SBC-CA] ignores economies of scale that would be inherent in a TELRIC-compliant calculation. (JA, 2/7/03, p. 72.) (Footnotes omitted.)
For example, JA and ORA/TURN contend that LoopCAT's embedded cabling characteristics reflect an aggregation of incremental loop construction over many years, rather than a forward-looking design with cable sized to meet total demand. JA claim that LoopCAT models two 100-pair cables where an engineer would place one 200-pair cable at a lower cost if she were rebuilding the network today to serve current demand. (JA/Donovan-Pitkin-Turner Decl., 2/7/03, para. 25-27.) Thus, JA claim that LoopCAT fails to reflect the fact that today's demand can be served more efficiently and with greater economy of scale through the use of larger equipment. (Id.)
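To make the economies-of-scale point concrete, the sketch below compares per-pair cost under the two sizing approaches; the unit costs are hypothetical illustrations and are not values from the record.

```python
# Illustrative sketch of the cable-sizing argument above. All costs are
# hypothetical; they are not record values from either party.
def per_pair_cost(cables):
    """cables: list of (pair_count, installed_cost) tuples for a route."""
    total_pairs = sum(pairs for pairs, _ in cables)
    total_cost = sum(cost for _, cost in cables)
    return total_cost / total_pairs

embedded_design = [(100, 1200.0), (100, 1200.0)]  # two 100-pair cables placed over time
forward_looking = [(200, 1800.0)]                 # one 200-pair cable sized to total demand

print(per_pair_cost(embedded_design))   # 12.0 per pair
print(per_pair_cost(forward_looking))   # 9.0 per pair, reflecting scale economies
```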
Similarly, TURN's witness Roycroft explains that engineering cost models typically use cable sizing guidelines to identify the capacity of cables needed to provide an efficient network design and a reasonable level of spare capacity. LoopCAT, however, does not use cable sizing conventions that would permit the model to optimize the design of its network. (ORA/TURN/Roycroft Decl., 2/7/03, pp. 27-29.) Instead, LoopCAT relies on a mix of embedded outside plant design and hypothetical plant design, neither of which reflects a forward-looking approach. Roycroft alleges that even though users can adjust LoopCAT's fill factors, this will not modify the inventory of cables deployed, and one is asked to assume that SBC-CA's existing network cabling reflects optimum design. (Id., p. 29.) In other words, LoopCAT is structurally incapable of modeling an efficient, forward-looking network and users cannot modify the model's assumptions to alter fundamental design assumptions. (ORA/TURN, 2/7/03, p. 8.) ORA/TURN contend that:
[SBC-CA] is attempting to turn the world on its head by claiming that a cost model that is based on embedded costs is a forward looking model, and urging the Commission to reject the [HM 5.3] model because it does not employ an embedded costing approach specifically rejected by the FCC. (ORA/TURN, 3/12/03, p. 5.)
Moreover, JA contend that SBC-CA's annual cost factors, or "linear loading factors," which are used throughout LoopCAT to calculate investment costs to engineer, furnish, and install (EF&I) loop facilities, violate TELRIC because they are based on installation activities related to SBC-CA's embedded equipment and embedded network design, and they cannot be properly audited. (JA, 2/7/03, p. 77.) According to JA:
...it is impossible to identify the costs associated with a particular piece of equipment because the linear loading factor is a purported average relationship between embedded installation cost and embedded material cost derived from overly broad categories of equipment. (Id., pp. 77-78.) (Footnote omitted.)
JA maintain that loading factors based on historic data can be problematic because the relationship between material investment and installation activities from historic data may not reflect forward-looking practices. (JA/Donovan-Pitkin-Turner, 2/7/03, paras. 97-98.) Moreover, loading factors can distort installation cost differences based on material prices. In other words, loading factors can make it appear that installation costs rise as material prices rise. (Id. para. 100.)
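A simple numerical sketch of this distortion follows; the loading factor and prices are hypothetical and are not SBC-CA's actual factors.

```python
# Hypothetical illustration of the loading-factor distortion described above.
# A linear loading factor derives installed (EF&I) cost as a fixed multiple of
# material cost, so a material price increase mechanically inflates the implied
# installation cost even if the labor required to install is unchanged.
loading_factor = 1.40  # hypothetical EF&I loading factor

for material_cost in (1000.0, 1500.0):
    installed_cost = material_cost * loading_factor
    implied_installation = installed_cost - material_cost
    print(material_cost, installed_cost, implied_installation)

# material 1000.0 -> implied installation 400.0
# material 1500.0 -> implied installation 600.0 (50% more "installation" cost,
# even though nothing about the installation work itself has changed)
```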
Our review of LoopCAT confirms that it contains data that SBC-CA derived from its current network experience. Implicitly, LoopCAT is based on the modeling assumption that the future will look a lot like the present. This assumption is largely reasonable, but there are clearly areas where this is not the case. Unfortunately, it is not possible to modify many aspects of LoopCAT to test forward-looking assumptions or differing network configurations.
First, we find that LoopCAT uses embedded cabling characteristics rather than cable sizing conventions. As ORA/TURN point out, the inventory of cables is a fixed input built on the assumption that existing cabling is optimal. SBC-CA has not met its burden of proving that its existing cable inventory, which reflects incremental growth in the network over many years, is optimal if the network were rebuilt today to meet current demand and reasonably foreseeable growth. Although SBC-CA's approach is likely reasonable in the main, it is also likely that in some areas its forecasts are or were wrong, and that, with what is known today, network planners could be more efficient.
We agree with ORA/TURN and the Joint Applicants that the FCC has made clear that it rejects embedded cost approaches to modeling. In defining TELRIC, the FCC spoke of "designing more efficient network configurations" and a forward-looking cost methodology wherein a "reconstructed local network will employ the most efficient technology." (First Report and Order, para. 685.) In the FCC's brief defending TELRIC to the Supreme Court, the FCC stated:
The incumbents appear to be proposing a methodology based on "actual" cost in today's market, of duplicating "actual" existing networks in all physical particulars - or stated differently, the "application of up-to-date prices to out-of-date properties." Economists, including those upon whom the incumbents rely, uniformly agree that such a measurement is "economically meaningless." The FCC considered, but rejected, such an approach as "essentially an embedded [i.e., historical] cost methodology," which would produce "prices for interconnection and unbundled network elements that reflect inefficient or obsolete network design and technology." (Reply Brief of the Petitioners United States and the FCC, Verizon v. FCC, July 2001, pp. 6-7, (citations omitted); as cited by ORA/TURN/Roycroft, 3/12/03, p. 9.)
On the other hand, although LoopCAT relies on many current costs and assumes a future much like the present, we do not find that LoopCAT is an embedded cost model. Nevertheless, LoopCAT's flaws frequently arise from its heavy reliance on embedded cable characteristics, and the lack of cable sizing conventions to optimize network design likely introduces an upward bias that overestimates forward-looking costs. We believe that this model falls short of FCC guidance that the cost model should assume reconstruction of the network, based on existing wire centers, in a least-cost configuration.
In comments on the Proposed Decision, SBC-CA contends the Commission could modify the cable inventory inputs to LoopCAT. (SBC-CA, 6/1/04, p. 19.) While it is true that the Commission could tinker with the cabling inputs, the lack of an optimization feature is more troubling. We conclude that even if modifications were made to cable inputs, there is no evidence that LoopCAT will generate optimal sizing for a least-cost reconstructed network. Thus, although we can clearly lower costs, we cannot readily determine when they are optimal.
Second, we find that LoopCAT's extensive use of factors prevents us from making modifications to LoopCAT to test varying input assumptions. Specifically, we could not extract individual inputs from LoopCAT's aggregated annual cost and linear loading factors, or compare and verify individual inputs to public information. While SBC-CA's filings and workpapers traced input costs to SBC-CA's internal accounting codes, we could not match this internal accounting data to SBC-CA's publicly available cost data, i.e., ARMIS29 filings. Thus, we are asked to rely on SBC-CA's historical accounting information without any ability to compare it to public information and verify its accuracy.
In certain cases, the aggregation of inputs into factors, which are used liberally throughout LoopCAT, means we are not able to dissect the various factors into component pieces to isolate, for example, installation times, crew sizes, or material prices. Hence, we cannot fully understand how SBC-CA derived its investment costs or make meaningful modifications to these factors. For example, LoopCAT uses EF&I factors for pole, conduit, and cable installation, which are critical elements in modeling the loop network. Indeed, SBC criticizes HM 5.3 for its various inputs relating to pole, conduit, and cable installation. Despite criticizing the HM 5.3 model inputs, SBC cannot show how the inputs in LoopCAT compare to those in HM 5.3, particularly for installation times, crew sizes, and material prices. Ultimately, we are asked to accept the factors that SBC-CA has created from its actual data, without knowing the assumptions embedded in them. Further, without knowing the assumptions embedded in the factors, we cannot test the sensitivity of the model with a changed input.
Another example where we disagreed with SBC-CA's input assumptions involves structure sharing percentages.30 Specifically, we wanted to modify LoopCAT's structure-sharing percentages to match those used by the FCC in its Universal Service Inputs Order.31 We found that it was not possible to isolate and modify the structure sharing rates that SBC-CA had built into its loading factors for conduit and cable investment. Despite criticism of its input assumptions, SBC-CA states that:
[SBC-CA's] structure sharing factors capture the efficient amount of structure sharing taking place in [SBC-CA] California's network today. [SBC-CA] properly assumes that the current rate of facilities sharing will continue into the future and be equivalent to the rate of sharing in a forward-looking environment." (SBC-CA, 3/12/03, p. 40.)
Noticeably absent from this rebuttal is any indication of how to determine the structure sharing percentages that are embedded in SBC-CA's models. Essentially, we are asked to accept LoopCAT's structure sharing percentages without knowing what they are, or being able to modify them. Our inability to know these percentages or to modify them diminishes the confidence we can have in the costs that this model generates.
Even though SBC-CA faced criticism from other parties for its failure to identify key input assumptions,32 it did not provide assistance in its rebuttal comments to decipher its various factors and inputs. Instead, it repeated its assertions that its input assumptions regarding network characteristics represent the most sensible measure of forward-looking network characteristics. (SBC-CA, 3/12/03, p. 9. See also Id., pp. 40 and 42.) SBC-CA makes this assertion without delving into an explanation of how to decipher its input assumptions to identify crew size, installation time, or material prices. Consequently, we cannot compare, for example, installation crew sizes in the model to what SBC-CA uses today because the data is too aggregated and SBC-CA does not offer information on current practices. Thus, the SBC-CA data has been aggregated to such an extent that we are unable to isolate discrete inputs and determine their validity.
SBC-CA argues that the Commission should accept its modeling approach based on actual costs and factors because its current network is forward-looking. SBC-CA claims that because it has been operating under incentive regulation for over ten years, it has a strong incentive to make economically efficient choices throughout its network, such as in the amounts of spare capacity in its network. (SBC-CA/Tardiff, 2/7/03, p. 9.) SBC-CA contends that when designing a forward-looking network, it is far better to use a model that reflects actual customer locations, actual cable placements, actual employee needs and work times, and the actual size and capacity of the network. (SBC-CA, 2/7/03, p. 7.)
Although what SBC says is plausible, we cannot find this argument convincing for several reasons. First, as we have just described, the parties and Commission staff were unable to decipher the SBC-CA's various factors to understand what SBC-CA used for its "actual" inputs. Indeed, the FCC noted that ILECs have asymmetric access to cost data and therefore put the burden on ILECs to prove their UNE rate proposals are both reasonable and forward-looking. Because we cannot decipher SBC's actual cost inputs, we cannot firmly reach the conclusion that its rate proposals are reasonable, or that other rate proposals are not more reasonable. For example, although SBC-CA heavily criticized the inputs in HM 5.3 regarding installation times, crew sizes, and material prices, we cannot compare the HM 5.3 inputs to what SBC-CA assumed for these same items because they are aggregated into cost factors. This means that we cannot test the sensitivity of the model with a changed input and we cannot easily compare or replace SBC-CA's inputs with other public information, such as the information used as inputs in HM 5.3.
Second, we find it too simplistic for SBC-CA to assert that the current network has already achieved all efficiencies that are possible, particularly when it did not provide examples so that we can compare actual install times or material costs from its current operations with those built into the SBC-CA models. SBC-CA aggregates current network information into large bundles of inputs and then claims that these input bundles must be correct because they are based on actual current costs. SBC-CA's witness Tardiff makes high-level comparisons between SBC-CA's current operating costs and HM 5.3 results to attempt to show that SBC-CA's current costs are far different from what HM 5.3 has modeled. (SBC-CA/Tardiff, 2/7/03, p. 19.) We find these comparisons difficult to interpret because we cannot make direct comparisons between SBC-CA's inputs and those used in HM 5.3.
In comments on the Proposed Decision, SBC-CA reiterates its argument that structure sharing percentages and other factors used in its model can be modified. (SBC-CA, 6/1/04, p. 19.) MCI/WorldCom comments that JA provided detailed restatements of many of SBC-CA's modeling inputs that were ignored by the Commission. (MCI/WorldCom, 6/1/04, p. 16.)
We agree with both SBC-CA and MCI/WorldCom that it is technically possible to modify the factors underlying the SBC-CA models, and indeed, JA provided a detailed restatement of LoopCAT that purportedly accomplished that goal. What neither party mentions is the degree of dispute over how to change these factors and the lack of meaningful information from SBC-CA explaining the components of its various factors. SBC disputed the JA restatement at length and, as we have already explained, it is not reasonable, given the public interest in reaching a reasonable conclusion to this proceeding, to devote the considerable additional time required to resolve the disputes underlying each of the JA's detailed restatement suggestions.
Furthermore, while SBC-CA provided guidance to staff on certain factor changes through a data request response, other parties have not had an opportunity to review these proposals. It is simply not possible for the Commission to modify the various factors in the SBC-CA models with any degree of confidence given the level of dispute over the modeling inputs and engineering assumptions.
We find that LoopCAT's loop network configuration is reasonably forward-looking because it combines existing feeder lengths with an approximated loop distribution length. SBC-CA claims that "actual lengths of loops in the networks are used to calculate Loop TELRICs," because loop information is pulled from SBC-CA's Loop Engineering Information System (LEIS) database containing 17 million records, and "LEIS captures the true distances between a customer's premises and Pacific Bell's central offices..." (SBC-CA, Smallwood, 10/18/02, p. 10.)
We agree with ORA/TURN and JA that SBC-CA's claim that loop lengths are based on actual distances is misleading. In actuality, LoopCAT does not model loops equivalent to actual loop lengths that exist today, but approximates one distribution loop length for each distribution area based on an engineering concept known as the "design point." Essentially, LoopCAT assumes all loops in a distribution area are one-half the length of the longest distribution loop segment that might be built in the next twenty years.
SBC-CA explains the design point concept with the following brief explanation:
[SBC-CA] estimates its distribution length based on the actual design point information that is contained in its database. The design point reflects the longest possible distribution length in a distribution area. [SBC-CA] makes the reasonable assumption that customers will be distributed throughout a distribution area, and based on that assumption, uses half of the design point length as an estimate of the average distribution length in the area. (SBC-CA/Smallwood, 3/12/03, pp. 66-67.)
At technical workshops in June 2003, SBC-CA further explained the design point and how it was used to approximate loop lengths. In response to questions from Commission staff during the workshops, SBC-CA's witness Smallwood explained that loop lengths in LoopCAT were estimated based on adding actual feeder lengths and one-half of the design point distance. (Workshop Tr., 6/26/03, p. 809.) SBC-CA's loop planning guidelines define design point as "The longest loop in any plant segment, expressed in feet from the CO." (SBC-CA Errata, 5/1/03, LROPP guidelines, p. 103.) These same guidelines explain up front that "[p]lans should be based on growth expectations for the next 20 years." (Id., p. 3.) SBC-CA witness McNeill clarified that the "design point is that existing or potential customer location in the distribution area that's the furthest away from the serving-area interface." (Workshop Tr., 6/26/03, p. 811, emphasis added.) According to McNeill, potential customer locations are projected by SBC-CA engineers based on building permits or discussions with planning commissions and developers. (Workshop Tr., 6/26/03, p. 812.) McNeill hypothesized that 75% of the loops used in the design point calculation are actual customers, and 25% are potential customers. (Workshop Tr., 6/26/03, p. 838.) In other words, LoopCAT assumes all loops in each distribution area are the same length -- i.e., one-half of the maximum projected loop distribution segment -- based on a 20-year growth forecast of the longest potential loop. Thus, LoopCAT models all customers in a given distribution area as if they are uniformly distributed over the distribution area. This is clearly a simplifying assumption, and it would build confidence if one could determine the extent to which this assumption matches real-world experience.
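As we read the workshop testimony, the loop-length calculation can be summarized as follows; the distances below are hypothetical, and this sketch reflects our reading of the record rather than LoopCAT's actual implementation.

```python
# Sketch of the loop-length estimate described in the workshop testimony:
# modeled loop length = actual feeder length + one-half of the design point,
# where the design point is the longest existing or potential distribution
# length projected over a 20-year horizon. All distances are hypothetical.
feeder_length_ft = 12_000   # actual feeder length for the distribution area
design_point_ft = 9_000     # longest existing or potential distribution loop

avg_distribution_ft = design_point_ft / 2
modeled_loop_length_ft = feeder_length_ft + avg_distribution_ft

# Every loop in the distribution area is modeled at this single length.
print(modeled_loop_length_ft)  # 16500.0 feet
```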
There are three weaknesses with SBC-CA's use of the design point to calculate loop lengths. First, the use of the design point means that loop lengths in LoopCAT are not based exclusively on actual loop lengths, but on an engineer's view of possible future loop lengths based on a 20-year growth forecast. Clearly, this leaves room for subjectivity.
Second, we agree with JA that LoopCAT may not correctly determine cable gauge because its calculations are not based on the longest loop served, but on half that distance. (JA/Donovan-Pitkin-Turner, 2/7/03, p. 38, n. 45.) Because the cable gauge is based on an average length that will be shorter than the length of some actual loops, the cable might not provide adequate service to customers with loops longer than the average. A related criticism is that because LoopCAT uses embedded locations and distances for remote terminals, coupled with a hypothetical "design point" distribution length, the model does not have any logic to recognize that some loops exceed the 18,000 foot restriction on copper length for forward-looking loops. Loops that have copper lengths exceeding 18,000 feet will not work without additional equipment such as load coils, which have not been incorporated into the model. (JA, 2/7/03, p. 74.) Indeed, JA claim, and SBC-CA does not dispute, that approximately 100,000 of the loops modeled in LoopCAT will not operate within SBC-CA's own design principles because they are longer than 18,000 feet. (Id., JA 2/7, p. 74; see also Workshop Tr., 6/26/03, p. 819.) However, both of these weaknesses -- using cables of inadequate gauge and permitting copper loops beyond 18,000 feet -- suggest that LoopCAT underestimates these loop costs.
Finally, we are unable to modify the design point in LoopCAT because we have no record-based information on actual loop lengths, and it is uncertain how we would determine the portion of the design point distance that is based on potential future customers. The design point distance and loop length calculations are part of the "preprocessor" to LoopCAT, which Commission staff is not able to run or modify on its own. In other words, we cannot run our own version to determine how sensitive the model is to assumptions concerning the loop lengths of customers.
As a result, we find that SBC-CA's use of the design point to calculate loop lengths provides a potentially reasonable forecast of costs, but one which we cannot fully investigate and which contains flaws.
In comments on the Proposed Decision, SBC-CA contends the Commission could modify the design point. (SBC-CA, 6/1/04, p. 19.) Again, it is true that the Commission could pick a different length for the design point and re-run the preprocessor and LoopCAT. However, SBC-CA fails to acknowledge that any modifications the Commission would make to the design point would be highly arbitrary since we have no record basis for separating actual loop lengths from potential ones. Thus, even though we know that approximately 25% of the loop lengths upon which the design point is based are not actual loops in place today, we do not know the length by which the design point is inflated. It is simply not possible to make any meaningful modification to the design point distance because there are no facts in the record to help us discern SBC-CA's actual loop lengths today. In summary, we find that although the results of LoopCAT are driven by uncertain forecasts, on balance, we do not find that the use of LoopCAT results in an unreasonable forecast of costs. Any potential for overestimating costs as discussed above appears to be offset by the potential for underestimating costs, also discussed above.
We find that LoopCAT is also flawed because it does not adequately attempt to model multiple dwelling units (MDUs). We agree with JA that LoopCAT inappropriately inflates costs for residential loops by installing network interface device (NID) and drop equipment to terminate six lines for every residence served by SBC-CA in California, rather than modeling the appropriate premise termination equipment for multiple-dwelling units that make up a large percentage of households served in California. By not including the appropriate equipment for MDUs, the SBC-CA model inflates loop costs by assuming each residence requires termination for six lines and that each customer account requires a separate drop. (JA, 2/7/03, p. 17.)
In comments on the Proposed Decision, MCI/WorldCom maintains that JA proposed a modification for this problem. (JA/Donovan-Pitkin-Turner, 2/7/03, paras. 222 through 232.) SBC-CA agrees that a fix is possible and proposes its own unique methodology. (SBC-CA, 6/1/04, p. 20.) We will not make a modification to LoopCAT to account for MDUs because we find it illogical to make LoopCAT more exact in this area than HM 5.3. Our review of HM 5.3 shows that it does not model premise termination equipment for MDUs either - it makes exactly the same modeling assumption. Since our initial aim was to run both models with nearly identical inputs and assumptions, we disagree with the concept of modifying the SBC-CA models to account for MDUs while leaving HM 5.3 unchanged.
JA contend that SBC-CA's cost models are not integrated for 2-wire, DS-1, and DSL loops. Instead, SBC-CA's models calculate costs for these loops on a stand-alone basis. JA contend that this lack of integration distorts costs and violates TELRIC and CCPs 8 and 9, which require consistent treatment of costs across all services and elements. By artificially segmenting its cost studies for basic loops, DS-1 loops and DS-3 loops, SBC-CA ignores the efficiencies of sharing facilities that TELRIC requires. (JA, 2/7/03, p. 76.) For example, JA contend that 2-wire loops and DS-1 loops share the same structure, such as poles, conduits, and trenches. Similarly, 2-wire loops, DS-1 loops, and DSL loops share the same DLC systems, and all services share the same central office facilities. (Id., pp. 75-76.)
We find that SBC-CA's failure to integrate all of its services in its cost studies overstates forward-looking costs by ignoring the fact that several services share much of the same network infrastructure. Through this failure, SBC-CA's models do not reflect the full effect of the economies of scope and scale within SBC-CA's network. We agree with JA that SBC-CA's failure to integrate its various loop models and capture the network effects of this total demand inflates the true per-unit cost of these UNEs.
Joint Applicants criticize SBC-CA's switching investment cost module known as "SICAT," contending it is not forward-looking because it uses a short run approach to determine the amount of switching investment. Specifically, JA contend SICAT is based on average purchases over a five-year period (1998 through 2002), which, in most cases, involves the higher cost to add a growth line to an existing switch. (JA/Ankum Decl., 2/7/03, paras. 112-117.) According to JA, the SICAT model produces a higher short run average cost for switching investment, which is then applied to the capacity to serve the entire network. In this fashion, SICAT overstates long run switching costs. (JA, 2/7/03, p. 42.)
In addition, JA contend that SICAT is not based on California-specific switching information, but is instead based predominantly on switching cost investments from the other states in which SBC-CA operates. (Id., pp. 87-88.) Specifically, SICAT develops per line costs based on recent purchases in other states. Thus, in JA's view, SICAT is not sufficiently based on California demand, nor does it attempt to identify the number or type of switches necessary to serve California. (JA/Ankum, 2/7/03, paras. 11 and 121-127.) Given SICAT's short-term purchasing period and its use of information from other states, JA maintain that SICAT does not model a network designed to meet total demand, but simply calculates a per-line average cost of switches based on non-California data and then improperly uses that average to calculate switch investment.
We find that these two principal disputes with SICAT can be addressed by modifying SICAT inputs. Principally, in our runs of SICAT we have changed the input assumptions regarding the percentage of new and growth lines that are purchased over the modeling period. This should address JA's concern that SICAT uses too high a percentage of higher priced growth lines. We are less concerned that SICAT is not based exclusively on California switching information. Our own review shows that SICAT contains a mix of California switching data and pricing information from SBC's multi-state switching contract. In persuading the Commission to reexamine UNE switching rates, JA argued that the multi-state switching contract allows SBC-CA to obtain a better price for its switching purchases than if SBC-CA negotiated and purchased for its California network alone. (A.01-02-024, 2/21/01, p. 8.) We find this a reasonable assumption and we are not persuaded that SICAT is fatally flawed because it incorporates some non-California switching information. Nevertheless, we find that SICAT provides little of unique value for the effort required to correct its weaknesses. For this reason, we do not rely on this module to estimate the UNE costs of switching related services. Instead, we rely on HM 5.3.
SBC-CA uses the "SBC Program for Interoffice and Circuit Equipment" (SPICE) to identify UNE rates for dedicated transport and SS7 links. JA claim that SPICE violates TELRIC because it relies entirely on SBC-CA's embedded network rather than a forward-looking one, and is not constructed based on a determination of total network demand. (JA, 2/7/03, p. 98.) Rather, SPICE determines investment based on a database of the existing circuits in SBC-CA's current network without demonstrating that this embedded network reflects the total demand for each service and all UNEs supported by the network. (Id.)
As a result, JA maintain that SPICE produces flawed results because it proposes costs significantly higher than the prior OANAD rates, without sufficient explanation or justification, during a period of productivity gains in telecommunications technology. (Id., p. 94.) Further, JA contend that SPICE limits the ability of the parties to propose changes to inputs and assumptions in order to modify costs. For example, there is no way to ensure the SPICE model has considered all possible routes that could be the "least-cost" path or to modify the structure sharing assumptions embedded in SPICE. (Id., pp. 96-97.) Nevertheless, JA witnesses Mercer and Murphy propose modifications to a few of the major SPICE inputs in an attempt to restate its results. (JA/Mercer-Murphy, 2/7/03, p. 40-51.)
SBC-CA responds that SPICE is based on SBC-CA's current total network demand for interoffice transport circuits, and SPICE assumes that a forward-looking interoffice network would mirror SBC-CA's existing network. (SBC-CA, 3/12/03, p. 74.) SBC-CA counters JA's allegations that costs are declining for interoffice transport by noting that per circuit investments have actually increased slightly from 1998 to 2001. (Id., p. 75.) SBC-CA contends that the "least cost path function" in SPICE reconfigures circuit paths to choose the least cost route. (Id.) Moreover, SBC-CA disputes JA's proposed modifications to the SPICE model, arguing the proposed modifications lack support. (SBC-CA/Cass, 3/12/03, pp. 37-40.)
In our own review of SPICE, we were unable to determine the level of demand that it is designed to serve so that we could vary it and check the model's sensitivity. Essentially, to borrow a phrase coined by JA, SPICE "fails to put the 'T' in TELRIC." At the technical workshops, SBC-CA's witness Cass was questioned extensively on how one could determine the total investment modeled in SPICE and the apportionment of that investment based on demand for certain services. Cass admitted that the SPICE model is not based on total investment. (Workshop Tr., 12/5/02, pp. 439-441.) Cass stated that it was not possible to pull a total investment figure out of SPICE without making demand assumptions because SPICE starts with the network in place today to serve all SBC-CA customers and calculates a per unit "node investment." (Id., 12/5/02, pp. 437-439.) Cass responded that the only way to determine total investment was to make assumptions about demand. (Id., p. 438.) Commission staff also inquired how to segment the interoffice demand between voice services and other advanced and unregulated services that use the interoffice network. SBC-CA's witness stated that it was not possible to segment demand in this manner. (Workshop Tr., 6/24/03, pp. 557-558.) We find it unreasonable that we cannot determine the total investment modeled by SPICE or the demand SPICE is intended to serve.
Essentially, SBC-CA asserts that its embedded network is a priori a forward-looking efficient network. (JA/Mercer-Murphy Decl., 2/7/03, para. 7.) When describing inputs to the SPICE model, SBC-CA states that actual data is used "because the facilities utilization of an efficient firm today is the best estimate available of the facilities utilization that an efficient firm will have in the forward-looking environment." (SBC-CA/Cass Decl., 10/18/02, p. 11.) In other words, SBC-CA claims that the characteristics of its existing network, including its current utilization level and the demand it is designed to serve, are automatically forward-looking, without giving us the ability to know what that demand level might be. We do not accept the unsupported assertion that SBC-CA's current network is automatically forward-looking, particularly when we cannot determine the demand SPICE serves in order to test differing assumptions.
Finally, we agree with JA that SPICE contains other inputs that are difficult to understand or modify. For example, SPICE uses historic structure sharing levels that it does not identify and that cannot easily be modified without knowing the assumptions embedded into SBC-CA's factors. (JA/Mercer-Murphy, 2/7/03, paras. 21-23.) In addition, SPICE uses pole and conduit factors derived from SBC-CA's embedded network that correlate cable investment and structure investment (i.e., the more expensive the cable, the more expensive the structure) without evidence to support this correlation. (Id., paras. 24-27.) Further, EF&I factors in SPICE are based on historical network data without showing a direct causal relationship between equipment costs and installation costs. In other words, SBC-CA's EF&I factors assume that more expensive equipment is automatically more expensive to install. (Id., para. 78.) We find these characteristics of SPICE are problematic. Similar to our discussion of the flaws in LoopCAT, we find that the use of factors in SPICE aggregates inputs into bundles that we cannot dissect in order to understand the underlying inputs, compare them to other public information or the inputs SBC-CA criticizes in HM 5.3, or test the effect of different input assumptions.
We find merit to portions of the suggested restatement of SPICE provided by JA, which are limited to changes to a few key inputs. In our final modeling runs of SPICE before abandoning this module, we made further changes to SPICE so that our two interoffice transport models, HM 5.3 and SPICE, ran with similar inputs. In their analysis, JA witnesses Mercer and Murphy modified two key fill factors in SPICE and two EF&I factors. In our model runs of SPICE, we find it reasonable to incorporate some of these changes in order to model forward-looking network utilization. Namely, we agree with JA to modify the SONET and common equipment fill factor from 58% to 85%. We find this higher percentage is reasonable given that an even higher fill factor was used by SBC in TELRIC modeling in other states and by the FCC in its modeling. (JA/Mercer-Murphy, 2/7/03, p. 41-43.) We modified the fiber fill factor in SPICE to 54%, which is an average of the fill factors used in other SBC states, as provided by JA. (Id., p. 43-44.)
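To illustrate why the fill factor assumption matters, the sketch below shows how per-unit investment scales inversely with the assumed fill; the investment figure is hypothetical and is not a record value.

```python
# Hypothetical illustration of how the fill factor assumption scales the
# per-unit interoffice investment: a lower assumed fill spreads the cost of
# installed capacity over fewer working units, raising the per-unit cost.
investment_per_installed_unit = 100.0  # hypothetical dollars of installed capacity

for fill in (0.58, 0.85):
    cost_per_working_unit = investment_per_installed_unit / fill
    print(f"fill {fill:.0%}: ${cost_per_working_unit:.2f} per working unit")

# fill 58%: $172.41 per working unit
# fill 85%: $117.65 per working unit
```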
With regard to EF&I factors in SPICE, we agree with JA that the circuit equipment EF&I factor should be lowered because SBC-CA's factor is out of line with factors it has proposed in other states. It is reasonable to run SPICE with a circuit equipment EF&I factor of 2.6, which is an average of factors modeled in other SBC states.33 We did not ultimately implement this change because we decided to abandon this module first. As the discussion below makes clear, we cannot reasonably conclude a priori that using this module either alone or in conjunction with HM 5.3 would provide a better estimate of UNE costs than would reliance on HM 5.3 alone.
As we have already discussed, SBC-CA uses Annual Cost Factors (ACFs) to convert the investments in its models into annual costs and expenses. In simplest terms, ACFs are ratios of capital costs and operating expenses per dollar of plant investment, built on the assumption that capital costs and expenses have a direct relationship with investments. (SBC-CA/Cohen, 10/18/02, p. 2.) There are four types of ACFs in SBC-CA's cost studies: (1) capital cost factors, (2) operating expense factors, (3) investment factors, and (4) inflation factors. JA and other parties provide numerous criticisms of the ACFs and expense calculations in the SBC-CA models, which we now describe. This module then affects the costs in the other three modules.
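In schematic terms, the ACF mechanics can be sketched as follows; the factor values and the investment amount are hypothetical and are not SBC-CA's actual factors.

```python
# Schematic sketch of how annual cost factors (ACFs) convert a modeled plant
# investment into annual costs. All values are hypothetical illustrations.
plant_investment = 1_000_000.0  # modeled investment for a UNE, in dollars

# Investment and inflation factors (the other two ACF types) would first
# adjust the investment base; they are omitted here for simplicity.
acfs = {
    "capital cost":      0.15,  # depreciation, cost of money, taxes
    "operating expense": 0.08,  # maintenance and other operating expense
}

annual_cost = sum(plant_investment * factor for factor in acfs.values())
print(annual_cost)  # 230000.0 dollars per year under these hypothetical factors
```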
JA criticize SBC-CA's cost model for its use of ACFs to calculate the expense portion of UNE costs. JA claim that these ACFs contain numerous computational errors and incorrectly assume that SBC-CA's 2001 ARMIS expense data, on which the ACFs are based, is efficient and forward-looking. (JA, 2/7/03, p. 101.) JA allege that SBC-CA failed to make forward-looking adjustments to its historical expense data to reflect future savings from potential technological innovations and cost savings from corporate mergers. (Id., p. 105.)
JA witnesses Brand and Menko provide a 100-page declaration detailing their allegations of eighteen categories of methodological errors in the development of SBC-CA's ACFs. They suggest twelve categories of corrections, and an additional six categories where corrections are not possible because of insufficient data. (JA/Brand-Menko, 2/7/03, p. 24.) According to Brand and Menko, if all of their suggested corrections are made, SBC-CA's ACFs should be reduced by 44.9%. (Id., p. 101.) XO joins JA in criticizing SBC-CA's modeling of expense factors related to support assets, building expenses, and shared and common costs. (XO, 2/7/03.)
Generally, SBC-CA responds that it is reasonable to assume that its baseline current expense and investment data reflect those of an efficient provider given the discipline imposed on SBC-CA by regulatory, shareholder, and competitive pressures. (SBC-CA, 3/12/03, p. 28.) SBC-CA disputes the eighteen categories of corrections proposed by JA's Brand and Menko and contends that any adjustments to current expenses would be speculative.34
Both models use factors to estimate expense levels based on investments. We do not find that SBC-CA's use of a factor approach is, by itself, a flaw. We are not as troubled by SBC-CA's use of embedded ARMIS data to calculate expenses as we are by the fact that the factors do not allow us to isolate and understand individual input assumptions, or to compare and verify inputs against public information such as ARMIS or against the inputs that SBC-CA criticizes in HM 5.3. Once again, as in LoopCAT and SPICE, we find ourselves having to rely on SBC-CA's aggregation of its historical accounting information into factors without understanding individual input assumptions. Therefore, we reiterate our finding that the ACFs SBC-CA uses to estimate expenses cannot be disaggregated to understand the underlying inputs or to compare them to other public information. As before, this failing reduces our confidence in the cost estimates produced by this model.
While JA propose detailed corrections to SBC-CA's ACF model, it is not feasible to review each of the eighteen disputed expense categories in the time allotted, particularly when the proposed adjustments themselves are contested. As with JA's detailed restatement of LoopCAT, the Commission will focus on the most significant expense categories where the suggested corrections are reasonably explained and where modifications can be easily implemented. In the sections below, we address the most significant of the categories raised by JA and XO and either modify the SBC-CA ACF model or describe why that is not possible.
JA and XO maintain that because SBC-CA has abandoned the methodology used to derive costs in the prior OANAD proceeding, the newly derived costs are not coordinated with the prior cost study. In particular, certain expense categories from the prior OANAD proceeding are now used to develop SBC-CA's annual cost factors, even though those same categories were used there to derive the 21% shared and common cost markup percentage. (JA/Brand-Menko, 2/7/03, pp. 40-44.) According to JA, SBC-CA's witness Cohen confirmed that no adjustments were made to SBC-CA's ACFs to remove shared and common costs. (Id., p. 42.) Thus, JA and XO contend that SBC-CA's current cost studies include some portion of shared and common costs, and therefore, double counting occurs when the 21% shared and common cost markup is added to these new UNE costs. (JA, 2/7/03, p. 53; XO, 2/7/03, p. 42.) XO contends that if the Commission employs SBC-CA's cost models to set UNE prices, it cannot use the existing 21% shared and common cost markup, but must undertake a review of the markup separately. (XO, 2/7/03, p. 46.)
In response, SBC-CA confirmed that "[SBC-CA] employed its standard approach for deriving ACFs without explicitly analyzing revised [shared and common] costs because those are not at issue in this proceeding." (SBC-CA/Makarewicz, 3/12/03, p. 22.) SBC-CA maintains that no party has performed an analysis to confirm that shared and common costs are included in SBC-CA's ACFs. SBC-CA contends it is equally possible that costs not recovered by the shared and common cost markup were inadvertently excluded from the ACF analysis and are not recovered anywhere. (Id.)
We agree with JA and XO that the current cost studies, which use a different methodology than the prior OANAD cost studies, may incorporate shared and common expenses that are already accounted for in the 21% markup. There is no dispute that SBC-CA has used a different cost methodology in this proceeding as compared to the prior OANAD proceeding, and SBC-CA confirms that it did not attempt to reconcile the shared and common costs currently collected through the 21% markup with the direct UNE costs calculated through the ACF study it proposes here. During a deposition, SBC-CA witness Smallwood testified that SBC-CA's multi-state cost study was designed to comply with FCC directives and recover as much of a UNE's direct incremental cost as possible to reduce common costs. JA are correct that this is a different approach than was used in the prior OANAD proceeding, where the Commission noted that the OANAD treatment of shared and common costs was contrary to the FCC directive. (JA/Murray Decl., 10/18/02, pp. 34-35, citing D.98-02-206, mimeo. at 18, n. 24.) This means that SBC-CA's proposed UNE costs may now categorize some costs as direct UNE costs that had been considered shared and common costs when the 21% markup was determined in the prior OANAD proceeding. Smallwood admits that SBC-CA made no adjustments to the annual cost factors in its new cost studies to recognize costs that are included in the existing shared and common cost markup. (XO, 2/7/03, p. 42.) Thus, it is reasonable to conclude that SBC-CA's ACFs contain some portion of shared and common costs.
XO contends that if the Commission adds a 21% markup to the UNE costs resulting from SBC-CA's models, it should either adjust the ACFs to mitigate the impact of this double counting, or undertake a new markup calculation. We have stated several times that we would not review the 21% markup in this limited proceeding. If we were to make any adjustments to SBC-CA's ACFs to remove shared and common costs, they would be highly speculative.
In comments on the Proposed Decision, JA argue that the Commission ignores suggested adjustments to the SBC-CA models to remove double counting of shared and common costs. JA witnesses Brand and Menko describe how they reduced SBC-CA factors by 5.2% to eliminate costs they contend are already recovered in the 21% markup. (JA/Brand-Menko, 2/7/03, p. 43 and p. 101.) XO comments that the Commission ignores its detailed recalculation of the markup factor to 7.9%. (See XO/Montgomery, 2/7/03, pp. 41-46.)
We find Brand and Menko's description unclear as to which expenses they have removed. We decline to adjust the SBC-CA factors as proposed by JA on this topic because the basis for the changes is unclear. Moreover, we note that JA filed a separate application, A.04-03-031, on March 12, 2004 nominating the shared and common cost markup for review in 2004. Also, the Ninth Circuit Court of Appeals recently granted the appeal of AT&T and WorldCom with respect to the Commission's markup calculation. (AT&T Communications of California Inc., et al., v. Pacific Bell Telephone Company, et al., No. 02-16818, _____ F3d _____, (Ninth Circuit 2004) (July 14, 2004).) Thus, we expect that the issues surrounding the shared and common cost markup will be addressed elsewhere.
JA contend that SBC-CA has not eliminated certain expenses from its ACF cost study such as i) non-regulated expenses unrelated to UNEs, ii) affiliate transaction expenses, iii) DSL-related Project Pronto35 expenses, and iv) annual amortization of post-retirement benefits, known as the "Transitional Benefit Obligation" (TBO).36 JA maintain that all of these expenses are inappropriate to include in a study of forward-looking recurring UNE costs because they are either not current operating costs or are costs related to unregulated activities. (JA, 2/7/03, p. 104.) In addition, JA contend that SBC-CA overstates expenses related to central office building space.
JA contend that SBC-CA included investments and expenses related to its non-regulated activities when it developed its ACFs. These non-regulated activities include services related to customer premises equipment, inside wire maintenance plans, and billing and processing of third-party customer bill payments. (JA/Brand-Menko, 2/7/03, pp. 25-26.) According to JA, they verified that these unregulated expenses are included by comparing ARMIS reports for regulated and unregulated expenses with the inputs used in SBC-CA's ACF cost study. (Id.) JA propose removal of these expenses, as identified through ARMIS reports, which reduces SBC-CA's cost factors by 6.7%. (JA/Brand-Menko, 2/7/03, pp. 25-27 and p. 100.)
SBC-CA responds that it appropriately relied on total expenses and investments when calculating per unit expense factors. (SBC-CA/Makarewicz, 3/12/03, p. 9.) For example, SBC-CA claims that its cable maintenance cost per unit will remain the same regardless of what service is using the cable. SBC-CA provides the analogy of a salesperson who has a company car that is used both for business (regulated) and personal (non-regulated) purposes. SBC-CA reasons as follows:
Accurate per mile maintenance expenses for the vehicle are calculated using data for its entire use rather than the "arbitrary" distinction between business use and personal use. Likewise, it is appropriate for [SBC-CA] to use total account balances for plant expenses and investments to calculate its ACFs, and to apply the same ACFs to measure the costs for any services/elements whose provisioning relies on plant investment and expense. (Id., p. 10.)
We disagree with SBC-CA's reasoning. We doubt that the company in SBC-CA's example, which issues a car for business use, would happily pay higher maintenance costs if the salesperson used the company car for excessive personal use. Likewise, we do not agree that expenses SBC-CA incurs for its unregulated businesses, such as inside wire maintenance, or billing services to third parties, should be considered when determining the expenses related to its UNE operations. SBC-CA admits that it has included expenses related to unregulated activities in developing its ACFs. We will adopt the modification proposed by JA to SBC-CA's ACF model to eliminate expenses for non-regulated activities, which is based on a comparison of regulated and non-regulated ARMIS data.
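For illustration only, the following sketch shows the arithmetic of the kind of adjustment adopted here: the unregulated share of booked expenses, as identified from ARMIS, is removed before the expense factor is computed. All figures are hypothetical placeholders, not values from JA's workpapers or SBC-CA's study.

```python
# Hypothetical illustration of restating an expense factor after removing
# the unregulated portion of booked expenses. All figures are placeholders,
# not record data from this proceeding.

total_expense = 500.0        # total booked expense, $ millions (hypothetical)
unregulated_expense = 30.0   # unregulated portion identified from ARMIS (hypothetical)
investment_base = 4_000.0    # plant investment base, $ millions (hypothetical)

original_factor = total_expense / investment_base
adjusted_factor = (total_expense - unregulated_expense) / investment_base
reduction = 1 - adjusted_factor / original_factor

print(f"factor falls from {original_factor:.4f} to {adjusted_factor:.4f} "
      f"(a {reduction:.1%} reduction)")
```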
JA contend that SBC-CA inappropriately includes expenses related to transactions with its affiliated companies in its ACFs, both for services sold by SBC-CA to its affiliates and for services purchased by SBC-CA from its affiliates. JA note that expenses related to affiliate transactions have increased four-fold since the merger of Pacific Telesis and SBC Communications in 1997. (JA/Brand-Menko, 2/7/03, pp. 28-30.) JA maintain that $1.1 billion in expenses related to providing services to SBC affiliates should not be attributed to California UNEs. (Id., p. 32.) Further, they propose that costs for services purchased from affiliates be adjusted to forward-looking levels because SBC-CA pays for its affiliate purchases at "fully distributed cost," which is an embedded cost methodology. (Id., p. 34.) JA suggest a 30% disallowance for purchases from affiliates to adjust for what JA contend are inflated transaction costs. (Id.)
SBC-CA does not deny that expenses related to affiliate transactions are included in its 2001 expense information that was used to develop its ACFs. SBC-CA contends no adjustment is needed for services purchased from affiliates, such as payroll processing, procurement, fleet operations, and information technology, because SBC-CA pays the lower of fully distributed cost or fair market value for its purchases. (SBC-CA/Henrichs, 3/12/03, p. 15.) We find that SBC-CA has provided sufficient justification that no adjustment is required to its ACFs for services purchased from affiliates.
SBC-CA does not respond to the JA argument that expenses for items sold to affiliates are not related to provisioning UNEs. We find that it would be unreasonable for SBC-CA's ACFs to include expenses related to services SBC-CA has performed on behalf of its affiliates and which do not relate to the provisioning of UNEs. SBC-CA has not met its burden of proof to assure us that these costs have been adequately removed from its ACFs. In our final runs of the SBC-CA models, we removed approximately $301 million in affiliate transaction expenses from SBC-CA's ACF study. This differs from the $1.1 billion cited by JA because upon review, the workpapers of their witnesses Brand and Menko did not support removal of any more than $301 million. (JA/Brand-Menko, 2/7/03, Ex. TLB-AM-REP-2.)
With regard to Project Pronto, JA and XO claim that SBC-CA's ACFs include a portion of Project Pronto costs incurred in 2001 that relate to development of SBC-CA's broadband network and are not required for provisioning of UNEs. (JA/Brand-Menko, 2/7/03, pp. 58-62; XO, 2/7/03, p. 17.) According to JA and XO, Project Pronto is a major technological upgrade, and SBC-CA is incurring higher than normal costs in the early years of this upgrade, with the expectation of cost savings later. JA admit that a portion of Project Pronto costs are for overall network efficiency and should be included in ACFs, but they contend this amount is overstated because SBC-CA made no adjustment to 2001 expense levels to acknowledge high start-up expenses and account for future cost reductions from Project Pronto. (JA/Brand-Menko, 2/7/03, para. 125.) XO contends that the expense data SBC-CA uses in its cost studies reflects those early years of Project Pronto implementation, but fails to consider future Project Pronto savings. JA claim that Project Pronto costs are inextricably melded into SBC-CA's 2001 expense information, and SBC-CA admits it cannot identify separate Project Pronto costs.
SBC-CA does not deny that 2001 expenses used for ACFs include Project Pronto expenses, and that it cannot separate these costs from the expenses it used for its ACFs. Rather, SBC-CA makes the argument that the same equipment used for Project Pronto can also provide voice service. Thus, expense levels for this equipment are the same whether it serves voice or DSL service. (SBC-CA/Makarewicz, 3/12/03, p. 18.) On the charge that future Project Pronto savings are not incorporated, SBC-CA says that it has already incorporated Project Pronto savings by modeling fiber loop plant and lower maintenance factors associated with fiber. (Id., pp. 17-20.)
We agree with JA that some Project Pronto costs likely contribute to overall network efficiency, but not necessarily all of them. SBC-CA offers us no ability to allocate the expenses between its voice and broadband services, and we do not agree with SBC-CA's assumption that all Project Pronto costs should be allocated to UNEs. SBC-CA argues that future efficiencies for Project Pronto are accounted for in its models through lower fiber maintenance ACFs. However, without an allocation of Project Pronto expenses between voice and broadband services, we cannot be assured that SBC-CA's fiber maintenance expenses would not be lower still if some were properly attributed to the broadband network. SBC-CA has therefore not met its burden of justifying the entirety of these expenses as forward-looking. Because we are unable to adjust the ACFs to remove potentially inflated Project Pronto expenses, we find this element of SBC-CA's model flawed.
This particular debate is about an accrual for an accounting change that took place over a decade ago. As SBC-CA explains, the TBO accrual is a "liability for future retiree medical costs already earned by current and former employees [as of 1/1/93] but not yet paid...." (SBC-CA/Cohen, 3/12/03, p. 15.) In other words, it is an amortization of expenses SBC-CA would have shown on its books long ago if SFAS No. 106 had been in effect prior to 1/1/93.
JA contend that the TBO represents the amortization of an embedded cost resulting from past activities that are not forward-looking, and that the TBO recorded in 1993 has nothing to do with the costs of an efficient carrier today. (JA/Brand-Menko, 2/7/03, pp. 44-47.) SBC-CA contends these are forward-looking expenses that are appropriately treated as shared and common costs. According to SBC-CA, it removed approximately one third of its estimate of total TBO expenses from its ACF study, explaining that it did not remove the full amount because it is amortized over 20 years. (SBC-CA/Cohen, 3/12/03, pp. 19-20.) JA contend that SBC-CA should have removed over $80 million, and that the amount SBC-CA admits removing is far below this. (JA/Brand-Menko, 2/7/03, p. 46.)
We agree with JA that this particular TBO accrual is not a current operations cost and should not be included in SBC-CA's ACFs. From the meager record on this issue, we conclude that the adjustment SBC-CA made was too small because it removed only a portion of the full TBO amount. For our SBC-CA model runs, we removed the total TBO amount identified by SBC-CA. (See SBC-CA/Cohen, 3/12/03, p. 20.) We did not use the amount suggested by JA because, as SBC-CA explains, it includes expenses from many accounts that are not included in the cost factors. (SBC-CA/Cohen, 3/12/03, p. 19.) In addition, we are not reviewing the shared and common cost markup in this proceeding so we will not address whether TBO costs are appropriate to include in the shared and common cost markup.
XO claims that SBC-CA's building expense factor is exceptionally high and shows a 120% increase from 1999 to 2001, while ARMIS data does not show an acceleration of investment in buildings by SBC-CA. (XO, 2/7/03, p. 37.) JA contend that SBC-CA inappropriately assumes that all of its embedded building space for central offices and other buildings should be assigned to UNEs, and that it ignores its OANAD admission that forward-looking central office buildings require less space than SBC-CA's historical building requirements. (JA/Brand-Menko, 2/7/03, pp. 48-49.) JA add that SBC-CA ignores the fact that much of its embedded central office space is already being paid for as part of collocation charges. (Id., p. 48.)
SBC-CA maintains that, with regard to land and buildings, it cannot assume a constant stream of collocation revenue in the future, so it would be inappropriate to include this in the model. (SBC-CA/Makarewicz, 3/12/03, p. 23.)
XO and JA both raise convincing arguments that SBC-CA's land and building expense factors are not forward-looking. We find it reasonable to adjust historical land and building expenses to incorporate the forward-looking space requirements SBC-CA proposed in the prior OANAD proceeding, and an allocation of revenues from collocation. In our SBC-CA model runs, we reduced the land factor by 2.14% to account for collocation revenues and the building expense factor by 80%, in line with findings from the prior OANAD proceeding.
Finally, JA protest SBC-CA's inflation adjustments to its operating expenses and capital investments. SBC-CA incorporated inflation into its cost modeling using an inflation factor for capital investments based on the Telephone Plant Index (TPI), and an inflation factor for its operating expenses based on the Consumer Price Index-W (CPI-W).37 (SBC-CA/Cohen, 10/18/02, p. 17.)
JA criticize SBC-CA's inflation adjustments based on the TPI and CPI-W as neither specific to SBC-CA nor closely related to the types of costs SBC-CA experiences. (JA, 2/7/03, p. 103.) Further, JA contend that SBC-CA has failed to reflect future productivity improvements and expense savings from such sources as technological innovation and mergers. According to JA, there is extensive regulatory precedent for the concept that inflation should not be incorporated into a cost study unless there is a corresponding offset for productivity. (JA/Brand-Menko, 2/7/03, pp. 86-88.) Specifically, JA note that this Commission's own "New Regulatory Framework" (NRF) decision incorporates both inflation and productivity adjustments to rates. (Id., p. 87, citing D.89-10-031.) JA contend there is significant evidence that any inflation SBC-CA experiences in its costs will be offset by productivity estimates that exceed inflation. (Id., pp. 93-95.) According to JA's witness Flappan, BLS data for similar telephone utilities shows that worker productivity exceeded inflation price increases by 3.8% per year on average from 1996 through 2000. When productivity exceeds inflation, costs per labor hour decrease even if nominal wages increase. (JA/Flappan, 2/7/03, p. 30.) Thus, Flappan contends it is unreasonable to assume inflation in labor costs without corresponding adjustments for productivity. (Id.) JA recommend that the Commission either incorporate a negative inflation factor, based on their conclusion that productivity will exceed inflation on a forward-looking basis, or remove inflation factors entirely because SBC-CA did not include any productivity assumptions.
Similarly, XO criticizes SBC-CA's inflation assumptions, noting that SBC-CA's actual operating expense data indicates expenses have fallen rather than risen in line with retail prices in the overall economy. (XO, 2/7/03, pp. 9 and 19.) XO objects to SBC-CA's assumption that as telecommunications equipment prices decrease, operating expenses rise. XO says this assumption is contradicted by SBC-CA's actual operating expense data per access line, which indicates declines of 4.2% per year from 1996-2000. (Id., p. 19.)
SBC-CA responds that its use of the TPI and the CPI-W as inflation factors is appropriate because they conservatively estimate the inflation SBC-CA will face in its costs. SBC-CA justifies using the CPI-W by stating that it measures inflation in wages, which are a large portion of SBC-CA's expenses. (SBC-CA/Cohen, 3/12/03, p. 29.) Further, SBC-CA disputes JA's contention that it has left productivity out of its cost studies. SBC-CA contends that it has incorporated productivity in its cost studies by assuming placement of only new technology and applying maintenance factors associated with forward-looking technology. SBC-CA also contends that by using its latest expense data for its operating expense factors, it has already incorporated the latest gains in personnel productivity. (Id., p. 33.)
We agree with JA and XO that it is improper to include inflation adjustments in the expense data without a corresponding adjustment for productivity. We do not agree with SBC-CA's assertion that it has already factored in productivity simply by using forward-looking technology and by using SBC-CA's 2001 expense information. Investment in equipment of the latest technology does not by itself account for all the productivity gains that the company could achieve in the future. Similarly, the use of 2001 expenses, by definition, does not include future productivity potential. Rather, we find merit in Flappan's arguments regarding productivity. Flappan provides BLS data indicating that worker productivity has exceeded inflation price increases for several years. (JA/Flappan, 2/7/03, p. 30.) We do not find it reasonable that SBC-CA has included inflation price increases in its labor rates but has included no corresponding productivity assumptions. As JA have pointed out, other states faced with this same issue, namely Texas, Missouri, and Kansas, have removed the inflation increases under the assumption that productivity offsets inflation. (Id., p. 29.) As in other states, we will make the conservative assumption that productivity will at least equal inflation even though it has actually outstripped it according to recent BLS data.
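For illustration only, the following sketch shows the arithmetic behind this conclusion: when productivity growth meets or exceeds wage inflation, the labor cost per unit of output does not rise even though nominal wages do. The growth rates are hypothetical placeholders, not figures from the record.

```python
# Hypothetical illustration of the inflation/productivity offset. The growth
# rates below are placeholders, not record evidence from this proceeding.

wage_inflation = 0.03        # nominal wages assumed to rise 3% per year
productivity_growth = 0.05   # output per labor hour assumed to rise 5% per year

unit_labor_cost_change = (1 + wage_inflation) / (1 + productivity_growth) - 1
print(f"unit labor cost changes by {unit_labor_cost_change:.2%} per year")
# prints roughly -1.90%: cost per unit of output falls despite higher wages
```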
Therefore, in our runs of the SBC-CA models, we have removed the inflation component of SBC-CA's capital and operating expense factors, which essentially means that we assume inflation and productivity offset each other. This is consistent with the Commission's assumptions regarding inflation and productivity in our NRF decisions.
In summary, we have found several problems with SBC-CA's ACF cost study. We agree with JA and XO that SBC-CA's expense calculations need to be revised to remove or reduce several categories of expenses that are not appropriate for a forward-looking cost study. We have modified SBC-CA's ACF study with regard to non-regulated expenses, affiliate transaction expenses, retiree costs, building expenses, and inflation assumptions. We were unable to modify the SBC-CA models in two significant areas, namely Project Pronto expenses and shared and common costs. We anticipate that we will address the issue of shared and common costs in a separate proceeding. The model's inadequacy in handling Project Pronto expenses remains a flaw. Because these Annual Cost Factors affect all of the modules in this model, this flaw, along with others, prevents us from relying on the SBC-CA model exclusively even for loop costs, which LoopCAT otherwise models well.
In conclusion, SBC-CA's LoopCAT model relies extensively on current data that we are not confident fully reflect going-forward efficiencies. In addition, we are unable to modify the model to our satisfaction. In particular, cabling requirements, loop length forecasts, structure sharing assumptions, and labor installation times and crew sizes are embedded in various factors and not readily modified. LoopCAT's extensive use of annual cost factors, or "linear loading factors," prevents us from making meaningful modifications to LoopCAT to test varying input assumptions because we cannot extract individual inputs from LoopCAT's aggregated factors, or compare and verify individual inputs against public information. In addition, we know that the annual cost factor model suffers from flaws.
For switching costs, the flaws noted by JA are not fatal because SBC-CA's SICAT model can be modified through input modifications. On the other hand, we find that interoffice rates calculated by SBC-CA's SPICE model are not based on total demand. Although we made some modifications to SPICE's fill factors, we are unable to modify other embedded network assumptions within SPICE such as demand levels, structure sharing, and pole and conduit factors.
With regard to expenses, we find SBC-CA's ACFs are based on historical information that is aggregated into bundles that we cannot dissect in order to understand the underlying inputs, compare them to other public information or the inputs SBC-CA criticizes in HM 5.3, or test the effect of different input assumptions. We made modifications to SBC-CA's ACFs in a few key areas, but we were unable to modify the ACFs with regard to Project Pronto expenses and shared and common costs. While JA propose many other adjustments to SBC-CA's cost factors and expenses, and some of these may be reasonable, it is simply not reasonable to resolve each and every disputed area. Instead, we have focused on what we consider key inputs and modeling assumptions.
Overall, our inability to modify SBC-CA's models in critical areas, including, but not limited to, network configuration, cost factors, interoffice network demand, and expense levels, leads us to conclude that the SBC-CA models contain weaknesses that preclude us from relying solely on SBC-CA's models to develop TELRIC prices. As we will see below, our access to a competing model leads us to restrict our use of the SBC-CA model to forecasting UNE-L costs. We make this decision cognizant of the model's flaws, but mindful that the competing model, HM 5.3, is also flawed, particularly in its modeling of local loop costs.
Overall, SBC-CA criticizes HM 5.3 as understating SBC-CA's forward-looking cost of providing UNEs because it does not reasonably estimate the quantities or prices of network facilities required to build a competing local exchange network. SBC-CA alleges that HM 5.3 is "results-driven" and "manipulated to produce the lowest UNE cost estimates possible." (SBC-CA, 2/7/03, p. 11.) Moreover, SBC-CA claims that HM 5.3 does not overcome the flaws the Commission found with its predecessor, HM 2.2.2, namely the earlier model's faulty representation of distribution plant in low-density areas. Thus, SBC-CA claims that despite its new customer location and clustering process, HM 5.3 models an inadequate amount of loop plant. (SBC-CA/Tardiff, 2/7/03, pp. 70-71.)
SBC-CA's criticisms of HM 5.3 can be grouped into seven categories. Essentially, SBC-CA contends that HM 5.3: (1) ignores accepted engineering and network design standards, (2) is based on a flawed customer location process, (3) relies excessively on unverifiable "expert judgment," (4) ignores actual demand in its switching and interoffice models and would be unable to provision high speed services, (5) does not provision enough spare capacity, (6) includes unrealistically low expense levels, and (7) fails to provide a test of validity. SBC-CA alleges that as a result of these flaws, HM 5.3 produces a network that is unrealistic and unreasonable because it has far less outside plant than SBC-CA's actual network today (i.e., fewer distribution areas, fewer distribution pairs, less fiber equipment, fewer trunks, and less interoffice network equipment). (SBC-CA, 2/7/03, p. 20.)
We will address each of these criticisms in turn. As an overview, we find merit to many of SBC-CA's criticisms of HM 5.3, but many of them can be addressed by input modifications to the model. Where we can modify inputs, we do not agree that SBC-CA's criticisms are insurmountable. Although SBC-CA is critical of inputs relating to engineering and design standards, spare capacity, and expense levels, these are all inputs that we can modify. However, this is not true in other areas. We agree with SBC-CA that some elements of the customer location process are flawed, and we do not agree with all of the assumptions built into the HM 5.3 geocoding and customer location process. As we found with SBC-CA's LoopCAT model, it is not possible to modify this area of HM 5.3 and test various scenarios. In this regard, loop modeling by HM 5.3 lacks transparency, limits our ability to test scenarios, and can be faulted for the accuracy of customer locations. As a result, it would not be reasonable to rely solely on HM 5.3 to estimate loop costs.
In addition, we agree with SBC-CA that many of the "expert judgments" used as inputs to HM 5.3 are questionable and appear biased to produce low results. We have found that many of these expert judgments can be replaced with assumptions and inputs used by SBC-CA in its own model. But this is not the case in one important area that affects costs throughout HM 5.3: labor costs. We explain this more fully below. Finally, we find that we cannot overcome criticisms of the HM 5.3 Transport module that it underestimates demand for interoffice transport, may not adequately incorporate optical interface equipment for the provisioning of high capacity services, and is insensitive to demand changes. We now turn to a detailed review of our findings with regard to HM 5.3.
In general, SBC-CA maintains that HM 5.3 is incapable of producing accurate TELRIC estimates because it ignores widely accepted engineering and network design standards, and instead relies upon a series of erroneous engineering assumptions, unrealistic input values, and inappropriate estimating methodologies. As a result, SBC-CA argues, HM 5.3 understates the amount of facilities required to provide service. According to SBC-CA, HM 5.3's principal flaw is its assumption that a brand-new, fully functioning, optimal network could be instantly constructed at a single moment in time. (SBC-CA, 2/7/03, p. 13.) SBC-CA maintains that HM 5.3 is based on a fiction that a competitive firm could enter and instantly size and build all of its facilities to accommodate a known snapshot of demand, when in reality, networks are built and have evolved over time to accommodate demand as it grows and shifts. (SBC-CA/Tardiff, 3/12/03, p. 17.) SBC-CA maintains that, by building an abstract network divorced from reality, HM 5.3 focuses only on existing lines and does not account for vacant parcels of land or vacant homes. Therefore, SBC-CA argues, HM 5.3 does not build to the "ultimate demand" that a real-world carrier would serve. (SBC-CA, 2/7/03, p. 40.) In addition, SBC-CA contends that this assumption of an instantaneous network fails to match the other assumptions in HM 5.3, particularly the relatively long depreciation lives and low cost of capital assumed by JA. (SBC-CA/Tardiff, 2/7/03, p. 15.)
Further, SBC-CA criticizes the "right angle routing" that HM 5.3 uses to connect customer locations. Rather than connecting customer locations by the shortest straight-line distance, or "as the crow flies," HM 5.3 assumes that customer connections form the two sides of a right triangle, hence the term "right angle" routing. (JA/Mercer, 10/18/02, Attachment RAM-4, p. 36, n. 33.) SBC-CA contends that this routing assumption constructs an outside plant design that is purely hypothetical and fails to reflect SBC-CA's operating realities, where carriers cannot ignore geographic impediments and man-made obstacles such as rivers, lakes, mountains, rights-of-way, and easements. (SBC-CA, 2/7/03, p. 13.) According to SBC-CA, "right angle routing" causes HM 5.3 to understate the amount of plant necessary to serve customers by ignoring natural and man-made obstacles. (SBC-CA, 2/7/03, p. 48; SBC-CA/Murphy 2/7/03, p. 53.)
On the whole, SBC-CA alleges that HM 5.3 designs a hypothetical network that only satisfies existing demand at existing locations, excludes the real-world costs of fluctuations in demand from customer growth and churn, and results in a model that produces unrealistic investment levels compared with SBC-CA's actuals. SBC-CA contrasts HM 5.3 results to SBC-CA actual operating results and highlights the following:
· HM 5.3 calculates total investment of $9 billion, but SBC-CA spent $9.6 billion on plant additions alone from 1998 to 2001. (SBC-CA, 2/7/03, p. 7.)
· HM 5.3 assumes a network that can be maintained for $0.7 billion, while SBC-CA spent $2.7 billion on maintenance in 2001. (Id.)
· HM 5.3 models 32 million distribution pairs while SBC-CA has almost double this number in its actual network. (Id., p. 20.)
· HM 5.3 creates a network with 11,661 distribution areas whereas SBC-CA has more than 5 times this number in its serving area. (Id.)
We find that SBC-CA's criticisms of HM 5.3 principally highlight questionable inputs that JA have used in HM 5.3. While these questionable inputs may not cause HM 5.3 to violate TELRIC in all respects, they undercut our confidence in the model's ability to track loop costs.
SBC-CA takes issue with how HM 5.3 applies TELRIC to build a network instantaneously to meet current demand. While we agree that it may be unrealistic to assume a network can be constructed overnight, we find that HM 5.3 for the most part follows well-established TELRIC guidance. Many of SBC-CA's criticisms center on quarrels with the inputs that are used in the model. We can modify many of the inputs and assumptions in HM 5.3 to address these criticisms. For example, we can ensure that the assumed fill factors provide reasonable spare capacity for growth, we can assume that a carrier will incur a higher cost to install sufficient switching investments because it cannot buy all the lines it will need at the steeply discounted "new" switch price, and we can change labor rates and task times to reflect more realistic equipment installation assumptions and expenses.
We agree with JA that based on established TELRIC rules, HM 5.3 should not build to "ultimate demand." (JA/Donovan, 3/12/03, paras. 191-194.) In its own modeling for federal universal service purposes, the FCC has stated that model inputs should reflect current demand, which it defines to include a "reasonable amount of excess capacity to accommodate short term growth." (Inputs Order, para. 190.) The FCC has explicitly rejected the notion of modeling based on "ultimate demand," because it is highly speculative (Id., para. 201). The FCC stated that "correctly forecasting ultimate demand is a speculative exercise, especially because of rapid technological advances in telecommunications." (Id., para. 200.) JA claim that HM 5.3 includes extra pairs to accommodate additional lines, maintenance and administrative needs, and therefore provides the same level of service as SBC-CA's current network. (JA, 3/12/03, p. 46.) We find that if we run HM 5.3 with an appropriate number of pairs per household and using appropriate fill factors, HM 5.3 accounts for a reasonable level of growth, and sizes the network to provide appropriate service quality and reach potential customers. As the FCC has stated, predicting ultimate demand is a speculative exercise, particularly in today's environment of rapidly changing technologies and demand levels, which SBC-CA acknowledges. (See e.g., SBC-CA, 3/12/03, p. 70, n. 278.)
We agree with SBC-CA that the right-angle routing used by HM 5.3 is divorced from the considerations of California topography. SBC-CA relies on the opinion of its expert witness that outside plant is underestimated in HM 5.3. It has not, however, provided empirical evidence that its actual route distances are greater than those modeled by HM 5.3, or any comparison of the distances modeled in the SBC-CA Models to those in HM 5.3. JA contend that right-angle routing conservatively overestimates loop plant because it uses right-angle rather than straight-line connections. It is logical that loop lengths based on right-angle connections will be longer than straight-line connections because, mathematically, the two sides of a right triangle, when added together, are longer than its hypotenuse. Thus, we find that the right-angle routing used in HM 5.3 is reasonable. Although right angles may not match SBC-CA's actual network routes, it is more realistic to assume right angles than to assume a carrier could build all routes along straight lines. The problem with HM 5.3 is not the right-angle modeling assumption; it is the use of a model that assumes that California is a featureless plain that can be served by large clusters wherein customers are reached by neat right-angle distribution routes (this issue is discussed in the next section).
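For illustration only, the sketch below demonstrates the geometric point: for any pair of locations, a right-angle route is at least as long as the straight-line route. The coordinates are arbitrary placeholders, not locations from either model.

```python
# Minimal sketch of the geometric point: the right-angle route (two legs of
# the triangle) is never shorter than the straight-line route (hypotenuse).
# The coordinates below are arbitrary and purely illustrative.

import math

def straight_line(dx: float, dy: float) -> float:
    """Shortest 'as the crow flies' connection (the hypotenuse)."""
    return math.hypot(dx, dy)

def right_angle(dx: float, dy: float) -> float:
    """Right-angle routing, as assumed in HM 5.3 (the two legs)."""
    return abs(dx) + abs(dy)

dx, dy = 3000.0, 4000.0  # hypothetical offsets in feet between two customer locations
print(straight_line(dx, dy), right_angle(dx, dy))
# 5000.0 7000.0: the right-angle route is longer than the straight line
```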
While we agree that the use of SBC-CA's actual rights-of-way and plant routes would be a superior modeling technique, neither model has been able to achieve this level of reality. While SBC-CA models existing feeder routes, this is not true for the distribution portion of the network, where SBC-CA has used the design point to approximate distribution loop lengths. Further, the FCC's TELRIC rules do not mandate the use of existing outside plant routes, and specifically allow a "reconstructed local network." (First Report and Order, para. 685.) Therefore, we find that although JA's simplifying assumption of right-angle routing is not based on today's outside plant routes, it most likely increases costs in the model by using a longer route than if customers were connected by straight lines, but it likely underestimates the costs that a real world of freeways, mountains, streams, and hills imposes on the routing of infrastructure.
We do not agree with SBC-CA that HM 5.3 is automatically flawed because its proposed costs are lower than SBC-CA's actual costs, but the huge discrepancy between forecast costs and real-world costs diminishes our confidence that HM 5.3 has accurately captured real-world constraints that raise costs. SBC-CA makes generic statements that the characteristics of its current network best reflect an efficient forward-looking network because SBC-CA has years of experience running a network and has been operating under incentive regulation designed to make its network competitive. SBC-CA's actual costs may be skewed by unusual one-time expenses from that year, or may not be forward-looking because they reflect the cost of running a network based on embedded choices that a new carrier would not make. Nevertheless, they are likely in the "ballpark" of the costs that an efficient network relying heavily on the traditional technology of twisted copper wire would incur. Finally, it is difficult to determine whether this discrepancy results from flaws in the HM 5.3 model, or simply from wrong inputs.
While SBC-CA criticizes numerous inputs to the HM 5.3 model as highly unrealistic and biased too low, it does not provide specifics on what a more realistic input should be given its own network experience. SBC-CA's main response is that the Commission should use its model instead, rather than amend the inputs in HM 5.3. For example, SBC-CA's witness McNeil criticizes HM 5.3 for what he considers unrealistic assumptions about how fast a crew can place and splice fiber and cable, but he does not provide actual placement and splicing times, only the vague suggestion that the crew size should be larger. (SBC-CA/McNeil, 2/7/03, pp. 46-50.) In contrast, JA witness Donovan defends his input assumptions, and notes that his estimates are actually higher than data from SBC-CA's job cost estimate database. (JA/Donovan, 3/12/03, pp. 24-30.)
A second example involves DLC installation costs. While SBC-CA criticizes HM 5.3's DLC cost assumptions, it cannot justify its own inputs in this area. SBC-CA's witness Palmer states that estimates of DLC installation costs by JA are too low because they are based on Project Pronto estimates and "[t]here is absolutely no relationship between the actual costs incurred today by SBC-CA California to install this equipment and the high-level estimates used in 1999 for business case purposes." (SBC-CA/Palmer, 3/12/03, p. 13.) When questioned at the hearings about SBC-CA's actual DLC installation costs, neither Palmer nor SBC-CA's other loop witness, Ms. Bash, could provide an answer or explain how SBC-CA knew that JA's DLC installation assumptions were too low if they did not know SBC-CA's actual costs. (Hearing Tr., 4/15/03, pp. 572-575.) Later, at a continuation hearing, Bash provided information on actual SBC-CA DLC installations from a sample she chose of 8 installations. (Proprietary Hearing Exhibit (PHE) 109.) SBC-CA admits that the actual costs from Bash's sample are lower than the factors for DLC installation used in LoopCAT. (SBC-CA, 8/1/03, p. 21.) A further sample of 50 installations chosen by SBC-CA and JA also indicates costs from actual DLC installations that are lower than the DLC installation factors used by SBC-CA in its own model. (JA, 8/1/03, Exhibits C-4, C-5.) We give little weight to criticisms of HM 5.3 assumptions when witnesses are unable to provide specifics from SBC-CA's own experiences or explain why modeling inputs differ from actual costs.38
HM 5.3 uses detailed customer location information supplied by SBC-CA to identify SBC-CA's current customer locations and cluster them into distribution areas. This is the foundation for the network that HM 5.3 models and is used to determine the lengths of facilities' routes, how much feeder plant is needed, and the types and amounts of copper cable and support structure. (SBC-CA/Tardiff, 2/7/03, p. 17.) Essentially, SBC-CA contends there are numerous flaws with the geocoding process and customer location database used as an input to HM 5.3, and these flaws violate the Commission's cost modeling criteria and result in loop costs that are too low and loop plant that is not constructed using standard engineering practices.
To understand SBC-CA's criticisms, it is helpful to review the geocoding and clustering process used in HM 5.3. (See JA/Mercer Decl., 10/18/02, Attachment RAM-4, Model Description, Section 5.2 - 5.3.) JA contracted with an outside vendor, TNS, to take SBC-CA's customer location data and "geocode" it by assigning each customer location a precise longitude and latitude. Where SBC-CA's data was incomplete or unreliable, "surrogate" geocoded locations were assigned. TNS then used its proprietary algorithms to group these geocoded customer locations into logical serving areas, or "clusters," based on the category of service appropriate for that customer. The clustering algorithm imposed three critical engineering restrictions to ensure that (1) no point in the cluster may be more than 17,000 feet from the center of the cluster, (2) no cluster may exceed 6,451 lines,39 and (3) no point in the cluster may be farther than two miles from its nearest neighbor in the cluster. (Id., RAM-4, section 5.3.2.)
The clustering process produces irregularly shaped groupings of customers in each wire center that JA term "convex hulls," or "clusters." TNS then determines the "centroid" of the cluster, which is the midpoint of the line connecting the two farthest points in the cluster. (Id., RAM-4, Section 4.5, n. 26.) HM 5.3 uses the cluster centroid as the location for the feeder termination, or serving area interface (SAI). In addition, TNS calculates a "strand distance" which is a measurement of the route mileage required to connect all customer locations to each other. The strand distance is based on a "minimum spanning tree" (MST) theory and assumes "right-angle" routing between customer points. (Id., RAM-4, Section 8.4, n. 47.)
Next, TNS takes the convex hull clusters and transforms each into a rectangle of the same total area as the original convex hull, so that a distribution network can be laid out over the cluster. JA describe this as "rectangularization." Finally, TNS uses demographic data to assign demographic characteristics such as terrain, housing profiles, and line density zone characteristics, to the clusters. (Id., RAM-4, Section 6.1.)
The customer location database is now complete and ready to be used as an input to HM 5.3. HM 5.3 takes the rectangular clusters in the database and subdivides them into lots of equal size in order to lay out a distribution network over the cluster to reach each of the lots, which are uniformly dispersed over the area of the cluster. (Id., RAM-4, Section 8.1.) HM 5.3 compares the total distance of this distribution network to the "MST" strand distance, or route mileage, calculated by TNS and allows the user to adjust the route mileage to this MST distance when calculating the cost of the distribution network. (Id., RAM-4, Section 8.4.)
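For illustration only, the sketch below approximates two of the steps described above, the cluster "centroid" and the MST-based strand distance, using hypothetical coordinates. TNS's proprietary algorithms are not in the record, so the specific computations here are our own simplifications, not the TNS code.

```python
# Illustrative sketch only: approximates the described "centroid" (midpoint of
# the two farthest-apart points) and a strand distance built from a minimum
# spanning tree over right-angle distances (a simple Prim-style construction).
# The coordinates are hypothetical; the distance metric used to pick the two
# farthest points is our assumption, as the record does not specify it.

from itertools import combinations

def right_angle_dist(a, b):
    """Right-angle distance between two (x, y) points."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def cluster_centroid(points):
    """Midpoint of the line joining the two farthest-apart points."""
    a, b = max(combinations(points, 2), key=lambda pair: right_angle_dist(*pair))
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def strand_distance(points):
    """Total route length of a minimum spanning tree over the points."""
    points = list(points)
    in_tree, remaining, total = {points[0]}, set(points[1:]), 0.0
    while remaining:
        dist, nxt = min((right_angle_dist(p, q), q)
                        for p in in_tree for q in remaining)
        total += dist
        in_tree.add(nxt)
        remaining.remove(nxt)
    return total

locations = [(0, 0), (2000, 500), (1500, 3000), (4000, 1000)]  # feet, hypothetical
print(cluster_centroid(locations), strand_distance(locations))
```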
SBC-CA contends that the customer location database resulting from the TNS geocoding process is a black box that cannot be verified because JA have not provided the proprietary source code used by TNS for the geocoding and clustering process. SBC-CA also contends that the clustering process produces distribution areas that are too large and do not represent actual customer locations in SBC-CA's serving area. (SBC-CA/Dippon, 2/7/03, pp. 4-5.) We now describe and discuss these criticisms.
SBC-CA charges that the TNS customer location database and clustering process is not sufficiently open because it does not allow parties access to the database's underlying data, calculations, and assumptions. This inhibits SBC-CA's ability to examine and modify HM 5.3 customer location engineering principles. According to SBC-CA, JA never provided access to the source code of the algorithm used by TNS to cluster SBC-CA's customer information data. Without the source code, SBC-CA claims it cannot review, test, or modify how the model clusters customer locations. (SBC-CA, 2/7/03, p. 30.)40 SBC-CA's witness Dippon claims that the clustering description provided by JA does not match what TNS appears to have done. (SBC-CA/Dippon, 2/7/03, pp. 27-30.) In response, JA contend that SBC-CA was given everything it needed to review, understand, and test the TNS clustering process. (JA, 3/12/03, p. 51.)
We agree with JA that they provided sufficient access to the clustering process to allow SBC-CA's witness Dippon to run his own clustering scenario in which he reduced the maximum lines per cluster from 6,451 to 1,800. (SBC-CA/Dippon, 2/7/03, p. 42.) SBC-CA's claims that this access was insufficient are contradicted by the modifications it made to the clustering process and the detailed criticisms it provided in its comments.
On the other hand, Commission staff reviewed this area extensively and we agree with SBC-CA that the clustering process lacks full transparency. JA's description of the geocoding and clustering process is far from clear and laden with technical concepts such as "rectangularize," "centroid," and "convex hull" that are not commonly used. Ultimately, Commission staff was unable to run its own version of the TNS clustering process to test the effects of different assumptions because it would have required extensive computer equipment that the Commission does not have available. In this regard, SBC-CA was able to accomplish what we could not.
Overall, we find that the entire debate over transparency and access to the clustering process must be viewed relative to the transparency, access and ability to understand, review, and modify the preprocessed data SBC-CA used for its own models. The clustering algorithm provided by TNS as an input to HM 5.3 is comparable to SBC-CA's preprocessing of its loop records before they were input to LoopCAT. In other words, both parties had to "preprocess" vast amounts of data for input to the actual UNE cost models, and there are aspects of both the TNS and the LoopCAT preprocessing work that outside parties and Commission staff are not able to replicate or scrutinize for various reasons. It appears that both JA and SBC-CA attempted to give sufficient access to other parties so they could review, replicate, and modify the preprocessing steps. In both cases, complete access to the full extent requested by other parties was either not possible, as with SBC-CA's models because of the size of the preprocessed database, or dependent upon agreement over the handling of proprietary information, as with the TNS database where SBC-CA chose not to pursue its motion to compel access to the source code.
Ultimately, we find both models lack sufficient transparency because the vast amounts of preprocessed data limit the Commission's ability to run scenarios and test various input assumptions. Neither HM 5.3 nor SBC-CA's models allowed us the ability to fully understand or replicate their preprocessing steps, and therefore, both models have aspects that could be considered "black boxes." This, of course, undermines our confidence in either model.
Despite his lack of access to the clustering algorithm source code, SBC-CA's witness Dippon identified several errors in the clustering process that cause the clusters to bear no resemblance to real world customer groupings or actual customer locations in SBC-CA's serving area. (SBC-CA, 2/7/03, p. 31; SBC-CA/Dippon, 2/7/03, pp. 2-3.) Dippon lists numerous examples where the clustering process places customers or equipment in locations SBC-CA contends do not match reality. For example, Dippon takes issue with how TNS determined the cluster "centroid," which HM 5.3 uses to locate the Serving Area Interface (SAI) equipment. (SBC-CA/Dippon, 2/7/03, p. 36.) Second, Dippon describes how the clustering "clumps" customers in downtown areas into unrealistic high-rise buildings. For example, HM 5.3 produces a 1,020-story building and understates the amount of distribution plant to serve such a tall building. (SBC-CA, 2/7/03, p. 47.) Third, SBC-CA again raises the criticism that when constructing a real world network, geographic impediments and man-made obstructions must be considered.
We find that both models have made many simplifying assumptions in order to model a network. Some of these assumptions are more far-fetched than others. We agree with JA's assessment that HM 5.3 is not an engineering model, but a cost model that locates current customers and determines the cost of plant to reach those customers, plus room for reasonable growth, without determining the actual locations where plant will be placed. Still, when all customers are located into a single high rise, it is difficult to trust that the resulting costs replicate those that would arise from customers dispersed over a real area.
We find that the clustering assumptions that form the basis of HM 5.3 are not that different from the loop input assumptions used by SBC-CA in its preprocessor, including SBC-CA's approximated loop length based on a "design point." In fact, we find the HM 5.3 model's loop design clusters customers based on actual customer locations, which means that HM 5.3 creates distribution areas based on the realities of where customers are grouped today. In contrast, SBC-CA's model presumes that all of the customer groupings in its network today are forward-looking and efficient, and does not allow the user to regroup customers into more logical groupings based on current population characteristics. SBC-CA then models loop plant to serve these existing groupings based on the "design point" concept and its resulting approximation of loop length. HM 5.3, by comparison, starts by locating all customers where they are today, and recognizes dense groupings of customers given the high proportion of multiple dwelling units in California.41 We find that HM 5.3 provides a more granular approach to designing a distribution network than SBC-CA's model. Therefore, SBC-CA's criticisms that customer locations are not accurate are less convincing, particularly when its own model does not accurately locate customers either.
There is one area of the HM 5.3 loop design process that we find deserving of criticism. After the TNS process has created clusters based on actual customer locations, HM 5.3 essentially wipes the slate clean by subdividing each distribution area into equal-size lots and then laying out distribution plant in a grid. Thus, even though actual customer locations are used to determine customer groupings, or clusters, they are completely ignored when modeling the distribution plant. JA defend this by suggesting it is too difficult to model plant to actual customer locations. We recognize all models contain limitations in their ability to mirror reality. In fact, SBC-CA's own model does not attempt to accurately locate existing customers and similarly assumes they are all evenly dispersed throughout the distribution area. Therefore, both models can be faulted for the accuracy of their modeling of customer locations.
With regard to SBC-CA's specific criticisms, JA counter that HM 5.3 may not reflect the physical realities of SBC-CA's network, but it is not intended to mimic the exact locations of SBC-CA's plant. Indeed, SBC-CA's model does not do this, and neither does the FCC's Synthesis Model. (JA, 3/12/03, p. 53, JA/Murray, 3/12/03, p. 13.) We agree with JA that TELRIC allows reconstruction of the network using existing wire centers, and does not require a model to use existing facility routes. In defining TELRIC, the FCC rejected cost approaches based entirely on a new network design or based entirely on existing network design. (First Report and Order, paras. 683 and 684.) Instead, the FCC found that a cost methodology that was based on the most efficient technology deployed in the incumbent LEC's current wire center locations "encourages facilities-based competition to the extent that new entrants, by designing more efficient network configurations, are able to provide the same service at a lower cost than the incumbent LEC." (Id., para. 685.) (Emphasis added.) The FCC therefore concluded that "the forward-looking pricing methodology for interconnection and unbundled network elements should be based on costs that assume that wire centers will be placed at the incumbent LEC's current wire center locations, but that the reconstructed local network will employ the most efficient technology for reasonably foreseeable capacity requirements." (Id.) (Emphasis added.)
We acknowledge that certain elements of the real-world network are fixed, such as terrain, roads, and customer locations. Nevertheless, a TELRIC model recognizes that the design of the current network may not represent the most efficient, forward-looking design because it may reflect choices made at a time when different technology options existed or when a different cost structure for equipment and labor drove decision-making. Fundamental to TELRIC cost modeling is the understanding that it is not merely an engineering cost estimate for actual re-construction of the existing network. Rather, a TELRIC model estimates costs based on the location of existing wire centers coupled with forward-looking network assumptions that in the aggregate are reasonable.42 Thus, we do not agree with SBC-CA that it is necessary for HM 5.3 to locate outside plant, such as SAIs, in the exact locations where it sits today.
Regarding the clumping of customers into unrealistic high-rise buildings, JA explain that HM 5.3 had to make simplifying assumptions about customers with the same address where it did not know the square footage "footprint" of the building. In other words, when HM 5.3 sees a high concentration of lines at one address, it does not know if this is a large shopping mall or a high-rise building. The model has been set up to treat many lines at one address as a high-rise, but only includes distribution cable to serve 50 floors, recognizing that buildings seldom exceed this height. HM 5.3 includes this 50 floors of distribution cable even though such intra-building cable may not be part of the local exchange network, but property of the building owner. Therefore, JA admit that a 1,020-story building is unrealistic, but explain that it is simply a result of simplifying modeling assumptions where HM 5.3 does not know the exact building square footage. Further, JA state that HM 5.3 conservatively overestimates costs by including distribution cable to serve these high-rises. (Workshop Tr., 6/25/03, pp. 658-661. See also JA/Mercer, 3/12/03, paras. 186-190.)
JA state that the criticisms levied by SBC-CA, if corrected, would only serve to lower the cost estimates produced by HM 5.3 by modeling the network with greater exactitude. (JA, 3/12/03, p. 54.) We find that implausible, particularly because the large cluster sizes exploit scale economies that smaller clusters cannot.
Nevertheless, we are not persuaded that SBC-CA's criticisms of the accuracy of the geocoding and customer location process indicate fatal flaws in HM 5.3 or are worse than the methods used in LoopCAT to configure the loop network. We find that the method used by HM 5.3 to model customer locations and the costs of reconstructing SBC-CA's network, given its existing wire centers, is reasonable and, for the most part, uses logical assumptions.
On the other hand, HM 5.3 ultimately ignores customer locations to model loop plant. While this is somewhat troubling, we find this no worse than simplifying assumptions made in the SBC-CA models to assume an average loop length and a uniform distribution of customers within the distribution area. Both HM 5.3 and LoopCAT can be faulted for the accuracy of their customer locations. On the whole, while we do not agree with all of the inputs used in HM 5.3, the concept of creating customer groupings based on today's actual customer locations, and calculating the cost of building a distribution network to connect them is reasonable, even if the reconstructed network does not follow today's exact outside plant routes.
SBC-CA contends that the clustering process is flawed because when HM 5.3 is re-run after re-clustering the customer location data into smaller clusters, the results show minimal impacts on total loop cost estimates. (SBC-CA, 2/7/03, p. 37.) Specifically, SBC-CA's witness Dippon ran a scenario of HM 5.3 where he reduced the cluster sizes from a maximum of 6,451 lines per cluster to 1,800, thereby increasing the number of clusters in HM 5.3 by 200%. (SBC-CA/Dippon, 2/7/03, p. 43.) Dippon's run with smaller cluster sizes resulted in only a slight decrease in loop cost. Dippon contends this result is illogical because if cluster sizes are reduced such that HM 5.3 has to build a network to reach more clusters of smaller size, loop costs should rise. (Id. p. 44.)
In response, JA maintain that the results of Dippon's "1,800 run" are not illogical when one considers the tradeoff between feeder and distribution costs that results from creating a network with more clusters that serve a smaller number of lines per cluster. As JA's witness Mercer explains, Dippon's "1,800 run" shows an increase in feeder investment to penetrate more deeply into the network and closer to customers, offset by a decrease in distribution investment because smaller cables are less expensive. (JA/Mercer, 3/12/03, pp. 24-25.) JA witness Donovan contends that Dippon's "1,800 run" shows that HM 5.3 is operating correctly by installing more feeder and DLC equipment to serve a larger number of smaller clusters, offset by less investment in distribution cable. (JA/Donovan, 3/12/03, pp. 52-53.)
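The feeder/distribution tradeoff JA describe can be illustrated with a toy calculation. All unit costs and scaling assumptions below are hypothetical and are not HM 5.3 inputs; the sketch only shows how added feeder and DLC investment from smaller clusters can be largely offset by cheaper distribution cable, so the net direction of any change depends entirely on the parameters assumed.

```python
# Purely illustrative arithmetic; every value here is hypothetical and is not
# an HM 5.3 input. The point is only the direction of the tradeoff.
import math

TOTAL_LINES = 645_100              # hypothetical serving area
FEEDER_COST_PER_SITE = 250_000.0   # hypothetical feeder + DLC cost per cluster site
DIST_COST_PER_LINE_AT_1800 = 120.0 # hypothetical distribution cost per line for an
                                   # 1,800-line cluster; assumed to scale with radius

def cost_per_line(max_lines_per_cluster: int) -> float:
    clusters = math.ceil(TOTAL_LINES / max_lines_per_cluster)
    # Distribution cable per line grows roughly with cluster radius, i.e. with
    # the square root of cluster size (served area scales with line count).
    dist_per_line = DIST_COST_PER_LINE_AT_1800 * math.sqrt(max_lines_per_cluster / 1_800)
    feeder_and_dlc = clusters * FEEDER_COST_PER_SITE
    distribution = TOTAL_LINES * dist_per_line
    return (feeder_and_dlc + distribution) / TOTAL_LINES

for size in (6_451, 1_800):
    print(size, round(cost_per_line(size), 2))
# Larger clusters: fewer DLC sites but longer, costlier distribution runs.
# Smaller clusters: more feeder/DLC investment, offset by cheaper distribution.
```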
We find JA's explanation on this point implausible. We agree with SBC-CA that Dippon's "1,800 run" proves HM 5.3 is flawed. Any model that produces distribution level costs insensitive to the design of the network lacks credibility and has serious flaws.
In addition, we were unable to run our own clustering scenarios to examine differing model results. If we could have done this, we might have run a scenario somewhere between Dippon's "1,800 run" and the HM 5.3 default clustering which assumes a maximum of 6,451 lines. Therefore, we are not able to determine whether HM 5.3 appropriately handles the tradeoff between feeder and distribution investment because we were not able to run alternative scenarios.
SBC-CA contends that the abstract clustering algorithm used by TNS to create the customer location database ignores existing industry standards and creates unrealistic and inefficiently large clusters of up to 6,451 lines rather than a maximum of 1,800 lines. (SBC-CA, 2/7/03, p. 43, SBC-CA/McNeill, 2/7/03, p. 3.) According to SBC-CA, JA have abandoned prior modeling techniques that limited distribution areas (DAs) to between 200 and 600 households. (SBC-CA, 2/7/03, p. 43, n. 144.) According to SBC-CA, this results in DAs that are too large, and no carrier has ever built a network configured this way. For comparison, SBC-CA states that its current network comprises over 60,000 DAs, whereas HM 5.3 produces only 7,679 main clusters for SBC-CA's entire California serving area. (SBC-CA/Murphy, 2/7/03, p. 39.)
According to SBC-CA, standard engineering principles recognize that because feeder facilities cost less per unit of length than distribution facilities, the objective is to minimize the size of the DA and achieve a reasonable fill of the feeder facilities. (SBC-CA/McNeil, 2/7/03, p. 16) Further, SBC-CA contends that HM 5.3 artificially lowers loop costs per line by assuming extraordinarily large underground controlled environmental vaults (CEVs) that spread the higher installation costs of a CEV over a larger number of lines. (SBC-CA/Tardiff, 2/7/03, pp. 40-41.)
We are troubled by JA's assumption that distribution areas can be sized up to 6,451 lines, which is much larger than the distribution areas in SBC-CA's current network. This is a very serious flaw, for it suggests that HM 5.3's loop model is inadequately constrained by local topology and regulatory constraints, such as zoning laws. As stated above, we would have preferred to run our own scenario with a smaller maximum line size per cluster. Although JA show that incumbent carriers are currently purchasing equipment that will serve distribution areas as large as or larger than those modeled in HM 5.3 (JA/Donovan, 3/12/03, paras. 97-100), we do not believe that equipment of this size and street footprint could be deployed in many California cities.
We find SBC-CA's approach more compelling, but also flawed. SBC-CA appears to advocate that clusters should serve a maximum of 1,800 lines, based on a guideline of serving 200 to 600 households. SBC-CA models distribution areas to serve a maximum of 200 to 600 households based on standards that date back at least 25 years, before the advent of fiber optics and equipment sized to serve a greater concentration of lines. (JA, 3/12/03, p. 53; JA/Donovan, 3/12/03, paras. 90-96.) Furthermore, SBC-CA's witness McNeil admitted that SBC-CA currently attempts "to establish large footprints so that the remote terminal can serve as many DAs as possible." (SBC-CA/McNeil, 2/7/03, para. 30.) He goes on to state, "[I]t is efficient and cheaper to place as few remote terminals as possible." (Id.) For these reasons, it is reasonable to conclude that a forward-looking network configuration might recognize today's dense customer groupings and the availability of larger equipment in order to size DA's larger than SBC-CA has done in the past.
Nevertheless, we agree with SBC-CA that HM 5.3 relied on too many large DA configurations, more than it is reasonable to assume would happen in the real-world network. The clusters used as an input in HM 5.3 are also based on a maximum number of lines per cluster of 6,451, which is larger than the CEVs SBC-CA normally uses. While CEVs do exist large enough to accommodate this number of lines, we find it inappropriate to assume that all distribution areas could accommodate a CEV of that size.
We would have preferred to take a middle ground and rely on clustering assumptions that did not assume the largest equipment could automatically be placed everywhere. JA have adequately defended the use of distribution areas sized larger than SBC-CA's outdated guideline of a maximum of 200 to 600 households. But neither is it reasonable to conclude that every DA could accommodate the equipment to serve 6,451 lines. Thus, our principal criticisms of HM 5.3 in this area are that we cannot modify the clustering process ourselves to re-run it with a more moderately sized clustering assumption, that the cluster size chosen is not plausible for California, and that the apparent insensitivity of costs to cluster size is implausible.43 We therefore believe that the HM 5.3 model may not accurately estimate UNE-L costs.
Overall, we find that while we do not agree with all aspects of HM 5.3's customer location and loop modeling, it is no more a "black box" than SBC-CA's own preprocessor and input modeling assumptions related to the design point. Both HM 5.3 and SBC-CA's LoopCAT lack transparency, limit the Commission's ability to test various scenarios, and can be faulted for the accuracy of their customer location process. HM 5.3 is based on a detailed examination of current customer locations, and makes simplifying assumptions not unlike the assumptions underlying SBC-CA's LoopCAT. The HM 5.3 model ultimately ignores customer locations when modeling loop plant. As a result, although HM 5.3 starts with current customer location data, it does not model outside plant in either the exact locations in which it exists today or in a plausible alternative way. Nevertheless, HM 5.3 has one advantage over LoopCAT because it starts with actual customer locations to cluster customers into efficient groupings, whereas LoopCAT makes no attempt to determine efficient customer groupings based on current population density characteristics. We find that the method used by HM 5.3 to model customer locations, create forward-looking customer clusters, and estimate the costs of reconstructing SBC-CA's loop network falls reasonably within TELRIC guidelines, even if the reconstructed network does not follow today's exact outside plant routes.
This does not mean there are not other valid criticisms of the clustering process underlying HM 5.3. We find that LoopCAT is superior to HM 5.3 in that the sizes of its "local serving areas" are much more plausible than the huge agglomerations that characterize HM 5.3, and we believe that HM 5.3 systematically understates loop costs.
In summary, we find that both models contain aspects of their loop modeling that we were unable to modify to our satisfaction and which undercut our confidence in the results each model yields. Moreover, the cluster sizes used by HM 5.3 and the insensitivity of costs to cluster size undermine our confidence that it produces an accurate estimate of loop costs. We conclude that it would be unwise to rely on either model to price UNE-L's.
SBC-CA contends that HM 5.3 relies excessively on unsupported "expert judgment" for inputs that relate to such items as network design, install times for engineering and placement of cable, support structure, DLC equipment, labor loadings, and material costs. According to SBC-CA, many of the inputs to HM 5.3 are based on opinions with little or no analysis or backup documentation to support their derivation or reasonableness. In some cases, JA have selectively relied on vendor quote information to produce low UNE cost estimates, without revealing supporting documentation for these vendor quotes, or they have used information from around the country rather than using cost data supplied by SBC-CA. (SBC-CA, 2/7/03, pp. 32-33.) For example, JA have selected prices for switching based on extremely short-run considerations, assuming that selected prices in a particular contract would somehow be available for all switching purchases. (SBC-CA/Tardiff, 3/12/03, p. 13.)
We agree with SBC-CA that many of HM 5.3's inputs may not be appropriate. HM 5.3 uses many inputs that are based on expert judgment (such as plant mix and structure sharing percentages), or derived from national data rather than California specific numbers (such as labor loading data). It is also true that HM 5.3 relies on vendor quotes that are not always documented.44 In many areas throughout HM 5.3, we have been successful in modifying these inputs, and in most cases we have substituted inputs or data from SBC-CA's models instead.
While SBC-CA often criticized many of the HM 5.3 inputs, its reply and rebuttal comments did not provide an assessment, based on SBC-CA's own experience, of the correct value for many of these disputed inputs. Thus, although SBC-CA has cast doubt on the costs that HM 5.3 yields, it has not provided a means to correct them.
Finally, SBC-CA argues that HM 5.3 inputs do not match SBC-CA's actual costs. However, the Supreme Court dismissed comparisons of this sort, noting that costing methods that rely on historical costs can pass on inefficiencies caused by poor management or poor investment strategies. (Verizon, 122 S. Ct. 1646, 1673.) The Court further noted that:
Contrary to assertions by some [incumbents], regulation does not and should not guarantee full recovery of their embedded costs. Such a guarantee would exceed the assurances that [the FCC] or the states have provided in the past. (Verizon at 1681.)
Although we find HM 5.3's use of expert judgment usually can be corrected with input changes, there were several instances where corrections were not possible. Notably, we wanted to use SBC-CA's hourly wage rate as an input rather than the lower rate assumed by JA. We were not able to change HM 5.3's labor rate assumptions in all instances because they were embedded with material and other assumptions such that we could not determine what portion of a total cost involved hourly labor. For example, many of HM 5.3's inputs include components for material costs, labor rates, task completion time, and crew size, which are joined into one input cost figure. Commission staff was unable to isolate hourly wages from this conglomeration of labor and material inputs in order to adjust hourly wage rates to SBC-CA levels, particularly for labor cost inputs relating to SAI investment, terminal and splice investments, buried drop installation, and riser cable investment. Because we were not able to change labor rate assumptions in all places within HM 5.3, we find the model flawed in this area.
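The structure of these composite inputs can be sketched as follows. The function and every value in the example are hypothetical, chosen only to show why a single collapsed figure does not allow the labor rate to be isolated and replaced.

```python
# Illustrative only: the input names and values below are hypothetical and are
# not actual HM 5.3 inputs; the point is how a composite figure hides the labor
# rate once the components are collapsed into one number.

def composite_input(material: float, labor_rate: float,
                    hours: float, crew_size: int) -> float:
    """Collapse material and labor components into a single installed-cost input,
    as described for inputs such as SAI, terminal/splice, buried drop, and riser
    cable investment."""
    return material + labor_rate * hours * crew_size

# A reviewer who sees only the composite 1,130.0 cannot recover the 45.0 labor
# rate (and so cannot substitute SBC-CA's higher rate) without also knowing the
# material cost, task time, and crew size that were assumed.
sai_installed_cost = composite_input(material=500.0, labor_rate=45.0,
                                     hours=7.0, crew_size=2)
print(sai_installed_cost)  # 1130.0
```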
In comments on the Proposed Decision, both MCI/WorldCom and SBC-CA suggest new approaches to modify the labor rate embedded in HM 5.3 inputs. (MCI/WorldCom, 6/1/04, p. 8; SBC-CA, 6/1/04, p. 4.) We use the method suggested by MCI/WorldCom to adjust some of the labor rates in HM 5.3 that we were unable to modify directly, namely aerial and buried terminal, splice and SAI investment.
SBC-CA contends that the switching and interoffice portions of HM 5.3 do not accurately account for the actual demand generated by today's SBC-CA customers. On the switching side, HM 5.3 does not properly account for customers' peak period usage or the unique characteristics of individual switches in SBC-CA's California network. (SBC-CA, 2/7/03, p. 65.) On the interoffice side, HM 5.3 does not model the actual volume of trunk facilities ordered by SBC-CA customers, and therefore fails to construct an interoffice network with sufficient capacity to support the total demand handled by SBC-CA's network. (Id., p. 64.) SBC-CA alleges that the transport and fiber fill factors proposed by JA are unrealistically high given actual carrier operating fill levels. (SBC-CA, 3/12/03, p. 76.)
In addition, SBC-CA contends that HM 5.3 models a network that would be unable to provision all of the high-speed services that are available today because it omits key electronic "optical interface" equipment necessary to connect DS-1 facilities to the interoffice network. (SBC-CA, 2/7/03, p. 66.) This omission underestimates the facilities and equipment needed to provision DS-1 and DS-3 loops. (Id., p. 50.) Finally, SBC-CA contends that the HM 5.3 Transport module is flawed because it is insensitive to both the demand it considers and the costs for fiber cable and electronics, calling into question what the model optimizes when it is run with differing input assumptions. (SBC-CA/Tardiff, 3/12/03, p. 12.)
In response, JA maintain that HM 5.3 conservatively overestimates the number of trunks required in the switching and interoffice facilities (IOF) networks, and that the way in which HM 5.3 models demand is far superior to SBC-CA's SPICE model, which JA claim does not include all trunks and fails to incorporate demand data. (JA, 3/12/03, p. 70, n. 259.) According to JA, HM 5.3 models the known total amount of switched traffic carried by the SBC-CA network based on SBC-CA "dial equipment minutes" (DEM) traffic data and provides sufficient circuits to carry that traffic. Thus, in their opinion, HM 5.3 appropriately engineers a switching network to accommodate the requirements of a typical switch, its typical busy hour, and all required trunking. (Id., p. 72; JA/Mercer, 3/12/03, paras. 80-82.) JA claim that if additional traffic for interconnection and switched access trunks were modeled in HM 5.3, any additional economies of scale would likely only lower the resulting per unit dedicated circuit UNE cost. (JA/Mercer, 3/12/03, para. 78.)
Further, JA contend that HM 5.3 does not ignore demand, it simply has not fully configured network components to serve high capacity services, such as OC-level service and DS-1 on fiber, because these UNEs are not at issue in this proceeding. (Id., paras. 89-90.) Nevertheless, JA witness Mercer contends that HM 5.3 specifically provides fiber capacity for high capacity loops, even if it does not model the terminal equipment necessary for these services. (Id., paras. 89-92.)
First, we agree with SBC-CA that HM 5.3 does not model the characteristics of individual switches. However, we do not consider this a flaw because we note that SBC-CA's SICAT model does not do this either. Indeed, both HM 5.3 and SICAT appear to have taken a similar modeling approach that looks at aggregate switching requirements. The models differ primarily in their input assumptions for the amount of new, growth, and replacement lines, and switch fill factors. On this issue, we find that SBC-CA's criticisms appear to hold HM 5.3 to standards higher than its own SICAT model.
Second, we agree with SBC-CA that HM 5.3 appears to underestimate demand on the interoffice network. JA admit that they did not configure the interoffice network to handle all high capacity demand, claiming that these costs are not at issue in this proceeding. Yet the FCC's definition of TELRIC describes the forward-looking cost over the long run of the total quantity of facilities and functions that are directly attributable to an element, "taking as a given the incumbent LEC's provision of other elements." (47 C.F.R. Section 51.505(b).) (Emphasis added.) We find that HM 5.3 does not include all of SBC-CA's current interoffice demand and therefore, does not model an interoffice network to accommodate all of SBC-CA's current interoffice traffic. Unfortunately, because of the flaws we have already noted with SBC-CA's SPICE model, it is unclear how we would modify HM 5.3 to remedy this flaw. We have already discussed how we are unable to determine the demand level that SBC-CA's SPICE model is designed to serve. Thus, we are unable to take SBC-CA inputs and place them into the HM 5.3 model.
Third, we cannot determine whether HM 5.3 adequately incorporates optical interface equipment. JA maintain that the IOF network modeled by HM 5.3 includes the cost of all equipment necessary for optical trunking and therefore, would function properly. (JA, 3/12/03, p. 75.) According to JA witness Mercer, "the model includes in each wire center investment for a digital cross connect system of sufficient capacity to meet the circuit requirements of that wire center." (JA/Mercer, 3/12/03, p. 76.) Mercer goes on to state, "It is possible, then, to use the Titan 5500 to replace the switch interface to the OC-48 [synchronous optical network (SONET)] ADMs used by the Model..." (Id.) Based on our own reading of Mercer's rebuttal, we find it unclear whether and to what extent DCS investment, or the Titan 5500, was incorporated in HM 5.3 and properly allocated between all the services that might use it. Thus, we find that HM 5.3 might not allow provisioning of the high capacity services SBC-CA provides today.
Finally, JA witness Mercer attempts to respond to criticism that the HM 5.3 Transport module is insensitive to demand. Mercer describes some minor modifications to HM 5.3 to address SBC-CA's concerns regarding understatement in certain interoffice equipment investment. (Id., paras. 195-198.) Although Mercer provides these corrections to the Transport Module, it is not clear that he has entirely addressed the SBC-CA criticism. We do not find that JA have adequately addressed this criticism of how HM 5.3 derives its SONET ring structure and the resulting interoffice transport rates. Therefore, we are unwilling to rely solely on the results of HM 5.3 to establish interoffice transport rates.
In comments on the Proposed Decision, MCI/WorldCom argues that any alleged omission of demand in the HM 5.3 interoffice model only serves to overstate transport costs because higher demand lowers per unit costs. (MCI/WorldCom, 6/1/04, p.7.) We cannot agree with this simplistic assertion because higher demand could require modeling of more or different equipment, which would not necessarily lower per unit costs. The fact that HM 5.3 appeared insensitive to demand changes also contradicts this assertion. Further, MCI/WorldCom overlooks that the Commission finds fault with HM 5.3 for failing to model the equipment necessary for high capacity services and failing to adequately address how HM 5.3 derives its SONET ring structure. These are important criticisms that JA did not adequately address. Therefore, we make no changes to our conclusions regarding the HM 5.3 interoffice transport module.
SBC-CA contends that HM 5.3 does not model sufficient spare capacity and models a network that would require new orders to wait while new lines are installed. This could impact service quality and potentially lead to service disruptions. According to SBC-CA, although JA acknowledge that 1.5 to 2 pairs per living unit is the minimum design standard for distribution plant, HM 5.3 does not allocate this minimum number of cable pairs to each potential residence or business. (SBC-CA, 2/7/03, p. 41.) In addition, SBC-CA contends that HM 5.3 understates the cable lengths needed to serve potential demand by not designing plant to reach potential customer locations. (Id., p. 42.)
We disagree with SBC-CA's contention that HM 5.3 is seriously flawed with regard to how it handles spare capacity. From our own review, we know that HM 5.3 allows the user to adjust inputs throughout the model in order to achieve varying levels of spare capacity. This is addressed at length in Sections VI.E and VI.J.5 below, where we discuss the various fill factor inputs that vary the amount of network investment devoted to spare capacity. Essentially, SBC-CA's spare capacity arguments reduce to a dispute over whether the model should assume 1.5 to 2 loop pairs per living unit, as JA propose, or 2.25 pairs per living unit, as SBC-CA proposes. We will address this dispute in our fill factor discussion below. For now, we find that SBC-CA's position on spare capacity does not represent a fatal flaw in HM 5.3.
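The practical effect of that dispute can be seen with simple arithmetic. The sketch below assumes, purely for illustration, one working line per living unit; that assumption is not a record figure, and the sketch only shows how the pairs-per-living-unit standard translates into spare capacity and an implied distribution fill.

```python
# Illustrative arithmetic only; the single working line per living unit is a
# simplifying assumption for this sketch, not a record figure.

LIVING_UNITS = 10_000

for pairs_per_unit in (1.5, 2.0, 2.25):   # JA's 1.5-2 range vs. SBC-CA's 2.25
    pairs_built = LIVING_UNITS * pairs_per_unit
    implied_fill = LIVING_UNITS / pairs_built   # working pairs / pairs built
    print(pairs_per_unit, int(pairs_built), f"{implied_fill:.1%}")
# 1.5   15000  66.7%
# 2.0   20000  50.0%
# 2.25  22500  44.4%
```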
Furthermore, we do not agree with SBC-CA's argument that HM 5.3 underestimates cable length by not considering the loops required to serve future customers. We have already discussed why it is improper to model to "ultimate demand" given guidance from the FCC. We have concluded that the demand assumptions for loops in HM 5.3 accommodate a reasonably foreseeable level of growth. We have also discussed why SBC-CA's loop lengths unreasonably include ultimate demand and cannot be modified to counteract this. Therefore, we do not find that HM 5.3 is critically flawed on the issue of spare capacity. Rather, we find that HM 5.3 can be modified to incorporate varying assumptions for spare capacity, as needed.
SBC-CA contends that HM 5.3's approach to calculating expenses is flawed and produces expenses that are only one-quarter of SBC-CA's current levels. (SBC-CA, 2/7/03, p. 73.) First, SBC-CA claims that HM 5.3 incorrectly uses expense to investment ratios, or "E/I" ratios, based on SBC-CA's current costs of network equipment, and uses these ratios with HM 5.3 investment levels that are considerably lower. (Id., p. 73.) Second, SBC-CA criticizes HM 5.3's use of Verizon California as a benchmark for efficient operation in California based solely on its proposed lower expense levels, without exploring other factors that may explain the difference in expenses between the two companies. (Id., p. 74.) SBC-CA notes that Verizon has a significantly higher investment per line than does SBC-CA, and that overall efficiency of a company is related to both investment and expense decisions. Looking at expenses in isolation, as JA have done, reveals little about overall company efficiency. (Id. ) SBC-CA maintains that using Verizon as a benchmark for E/I ratios results in absurdly low expense levels in HM 5.3.
We find that HM 5.3's use of E/I ratios is reasonable, and not unlike the factors that SBC-CA uses in its own model. Indeed, both JA and SBC-CA have used a similar approach of adjusting investments to current cost before calculating ratios and applying them to estimate current expense levels. (JA/Brand-Menko, 10/18/02, p. 6; SBC-CA/Cohen 10/18/02, p. 5-6.)
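Schematically, the E/I approach both parties describe estimates forward-looking expenses by applying a current expense-to-investment ratio to the investment a model produces:

$$\text{Forward-looking expense} \;=\; \frac{\text{Current expense}}{\text{Current-cost investment}} \times \text{Model investment}$$

where the first term is the E/I ratio. Because estimated expenses scale directly with modeled investment under this formula, the choice of benchmark company used to derive the ratio also matters, which is the subject of the Verizon dispute addressed next.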
We agree with SBC-CA that HM 5.3 improperly uses Verizon California as a benchmark for expense estimation purposes. SBC-CA has presented convincing arguments that JA have overlooked Verizon's higher investments per line and have overlooked other factors that could explain expense differences between the two companies. We prefer to use recent data from SBC-CA's reported ARMIS expenses to estimate forward-looking expenses. This was the starting point for HM 5.3's expenses before adjustments to benchmark expenses to Verizon. Therefore, we will back out these Verizon "benchmarking" adjustments from the HM 5.3 model.
In comments on the Proposed Decision, AT&T claims that the Commission improperly implemented its attempt to back out the Verizon expense levels by overlooking changes to the network operations E/I factor. (AT&T, 6/1/04, p. 19.) We agree this was an oversight and the final run of HM 5.3 has been corrected in this regard.
SBC-CA criticizes HM 5.3 for not providing any internal or external demonstration of the validity of its cost estimates. (SBC-CA, 2/7/03, p. 24.) SBC-CA performs its own comparison of the investments and expenses produced by HM 5.3 to what SBC-CA currently incurs based on 2001 ARMIS data. SBC-CA maintains that the "sanity check" it performed shows HM 5.3 investments and expenses are only one-quarter of SBC-CA's current levels, and that HM 5.3 has not depicted loop routes of proper length, has not accurately priced network components, and has not included sufficient ongoing expenses to pay for the labor force needed to run the network. (SBC-CA/Tardiff, 2/7/03, pp. 3-4 and 20.) SBC-CA claims that any deviation between HM 5.3 and reality implies inaccuracies in the model, not inefficiencies in the current network. (Id., p. 23.)
Once again, JA contend that TELRIC calculations cannot be validated by comparison to a carrier's embedded costs, thus SBC-CA's "validation tests" are irrelevant. (JA/Klick, 3/12/03, p. 7.) According to JA, the FCC has already rejected cost methodologies that base forward-looking costs on the existing ILEC infrastructure, adjusted only for depreciation and inflation. (Id., p. 7, citing First Report and Order, para. 683-685.) Further, JA allege that SBC-CA's ARMIS-based estimates of reproduction costs are overstated because they include Project Pronto costs for transitioning SBC-CA to a DSL-capable network. (JA/Klick, 3/12/03, p. 8.) Finally, JA point out that SBC-CA never did any "validation test" of its own model results, presumably because this type of analysis would be impossible given that SBC-CA's cost studies do not provide total investment or expense results to compare to ARMIS data. (Id., pp. 4-5.)
We conclude that it would be unreasonable to reject the use of HM 5.3 merely because its results are lower than SBC-CA's current costs, as shown by comparisons to 2001 ARMIS data. We agree with JA that such comparisons are of limited value given that it is unclear to what extent we can rely on SBC-CA's current costs as forward-looking.45 Much of SBC-CA's criticism of HM 5.3 involves the inputs that it uses. It makes more sense to vary these inputs to levels that we consider more appropriate before deciding which model to rely on.
Interestingly, ORA/TURN performed such an analysis for us. ORA/TURN witness Roycroft used the FCC's Synthesis Model (SynMod) to test for potential structural bias in both the HM 5.3 model and SBC-CA models.46 Roycroft did a side-by-side comparison of HM 5.3, the SBC-CA models, and SynMod after applying a uniform platform of loop-related and general input values taken from SynMod. From this comparison, he concludes that HM 5.3 is forward-looking and not structurally biased because it produced higher costs than SynMod when both models were run with SynMod's default inputs. (ORA/TURN, 2/7/03, p. 11; Roycroft Decl., 2/7/03, p. 63.) If HM 5.3 were biased, he argues, it would have generated lower costs than SynMod. Based on these results, ORA/TURN suggests adjustment of HM 5.3 inputs to the default levels used in SynMod. (ORA/TURN/Roycroft, 2/7/03, p. 63.) SBC-CA opposes ORA/TURN's recommendation to use SynMod inputs to set UNE rates for SBC-CA because these inputs are five years old and are based on broad national averages. (SBC-CA/Tardiff, 3/12/03, p. 37.)
With regard to SBC-CA's models, Roycroft concludes they do not comply with forward-looking principles because when the SBC-CA models are run with similar inputs to HM 5.3 and SynMod, the SBC-CA models generate consistently higher costs than HM 5.3 or SynMod. Roycroft suggests this is because LoopCAT has fewer user-adjustable inputs and does not allow variation in cable sizing. (ORA/TURN, 2/7/03, p. 11.) Roycroft presumes it might be possible to alter LoopCAT to generate outputs more consistent with TELRIC principles, but the effort required to make such modifications would be large and is not necessary given the availability of other models. (ORA/TURN/Roycroft, 2/7/03, p. 54.) XO supports ORA/TURN's analysis and conclusion that the costs calculated by the SBC-CA models are clearly outliers and unreasonable. (XO, 3/12/03, p. 3.)
JA also performed a sensitivity analysis of HM 5.3 to demonstrate that it was not structurally biased. JA changed eight categories of inputs in HM 5.3 to the values proposed by SBC-CA. This yielded a significantly higher loop rate, closer to but still below the level proposed by SBC-CA and its models.47 According to JA, this analysis refutes SBC-CA's claim that HM 5.3 is incapable of producing reasonable cost estimates.
We find that, taken together, ORA/TURN's analysis using SynMod and JA's own sensitivity analysis varying eight inputs show that HM 5.3 is not so structurally biased as to produce unusably low results, but we are also not inclined to adopt SynMod as the standard for determining the validity of UNE costs. The ORA/TURN analysis also corroborates our own findings that it is difficult to change many inputs within the SBC-CA models. We decline to adopt ORA/TURN's proposal to use all of SynMod's input values throughout HM 5.3 because we agree with SBC-CA that many of these inputs are dated or based on national averages.
Finally, we note that JA provided their own comparison of HM 5.3 with the FCC's Synthesis Model (SynMod). JA's witness Klick modified SynMod inputs to reflect HM 5.3 inputs, and then ran SynMod to estimate UNE rates in SBC-CA's territory. The results indicate SynMod loop investments 11% lower than those produced by HM 5.3. From these results, Klick concludes that HM 5.3 is an effective tool for estimating forward-looking costs because it produces similar investments and overall costs as SynMod, when run with comparable inputs. (JA/Klick, 10/18/02, pp. 14-15.) SBC-CA responds that Klick's analysis does not validate HM 5.3 because the similarity of outcomes of the two models, when run with the same inputs, merely shows that the inputs themselves are an important determinant of investment. SBC-CA contends HM 5.3's inputs are so low they produce invalid outputs for both models. (SBC-CA/Tardiff, 2/7/03, p. 35.)
Overall, we tend to agree with SBC-CA that Klick's analysis merely shows that HM 5.3 inputs provide very low results when input into SynMod. We agree with SBC-CA that the modeling inputs appear to be the more important drivers of model results. Therefore, we will not rely on or rule out either model based on these comparisons or validity tests provided by the various parties. Instead, we will turn to an analysis of the appropriate inputs to use in our model runs.
Finally, SBC-CA says numerous state and federal regulatory agencies have rejected HM 5.3 assumptions. (SBC-CA, 2/7/03, p. 76; SBC-CA/Tardiff, 3/12/03, pp. 6-7.) SBC-CA cites to various decisions in other states that have found earlier iterations of the HAI model unreliable. Much of this criticism has been directed at the use of unidentified experts and unidentifiable sources to substantiate modeling assumptions and input choices. In response, JA cite to several states, including Arizona, Minnesota, Nevada, Colorado and West Virginia, that have either adopted or used results from earlier versions of HM 5.3 to calculate UNE costs. (JA, 3/12/03, p. 74.)
Given our ability to modify many of the HM 5.3 inputs that SBC-CA contests, we are not troubled by this criticism. Moreover, the SBC-CA models that we are examining in this proceeding are of very recent vintage, and we do not believe they have been reviewed or adopted by any other states either. This proceeding may very well be the first time that this particular version of HM is compared directly with SBC-CA's newest models. Therefore, the findings of other state commissions that may have examined earlier versions of HM 5.3 or SBC-CA's models are not dispositive.
In summary, we find that HM 5.3 can be modified to overcome many of its alleged flaws, with the exception of its modeling of the local loop. Specifically, the model can be modified to use different input and engineering design assumptions, spare capacity can be increased, and expense assumptions can be modified to increase expense levels. Nevertheless, we were unable to modify assumptions with regard to the customer location and clustering process and certain labor inputs in order to overcome all of the model's criticisms. In addition, we could not overcome criticisms of the HM 5.3 interoffice transport module that it underestimates demand, may not adequately incorporate optical interface equipment, and is insensitive to demand changes.
The record before us does not provide sufficient information to resolve this question because we were unable to run our own clustering scenarios or make complete modifications to the interoffice module. Thus, we conclude that HM 5.3 has flaws, and that the flaws in its modeling of UNE-L would make it unreasonable to rely solely on this model to price local loops. For other UNE's however, it is possible to overcome the major modeling flaws of HM 5.3 through changing input assumptions.
After the extensive analysis described in the preceding sections, it should come as no surprise that, having found both models to be flawed, we do not consider either model to have adequately fulfilled the cost modeling criteria set forth in the June 2002 Scoping Memo. These criteria required that the cost studies and models allow the user to reasonably understand how costs are derived, generally replicate the model results, and modify inputs and assumptions. Without belaboring the point, we found that both HM 5.3 and the SBC-CA Models failed one or more of these criteria.
The principal failure of HM 5.3 was its use of a customer location database provided by a third party, TNS, as an input. We have already described how we would have preferred to cluster the geocoded customers into smaller distribution areas, but we were not able to perform these modifications ourselves. Although SBC-CA was able to modify the clustering and produce new HM 5.3 results, we would have preferred to test various scenarios ourselves. Secondarily, HM 5.3 failed the modeling criteria because we were not able to modify all labor inputs. In our attempts to modify the HM 5.3 assumed labor rate to the level proposed by SBC-CA, we found that labor costs were embedded with other assumptions such that we were not able to disaggregate labor costs or assumptions and modify them alone. Although we were able to develop some "work arounds," it is clear that the model remains seriously flawed in this dimension.
With regard to the SBC-CA Models, we find that we were not able to understand many of the input assumptions, which led to our inability to easily modify them based on comparison to public data or inputs used in HM 5.3. Specifically, we could not identify or make meaningful modifications to many of the SBC-CA model inputs because we could not extract individual inputs from aggregated data, or compare and verify inputs to public information. This was particularly evident with linear loading factors, an element of LoopCAT, and annual cost factors, which affected all the SBC-CA modules. For example, we could not identify what structure sharing assumptions were embedded in the SBC-CA factors. Without knowing what structure sharing assumptions were used, it was impossible to modify them. Similarly, we could not adjust loop configuration assumptions such as the design point or cabling characteristics, and we could not segregate expenses for SBC-CA's shared and common costs or Project Pronto expenditures from its calculations of per unit expense levels for UNEs.
On the subject of DLC installation costs, the underlying assumptions SBC-CA made when creating factors in this area are unclear. JA contend that SBC-CA's witness appeared unable to explain how the factors relating to these costs were derived. (JA, 2/7/03, pp. 32-33.) Our own review supported this allegation. Eventually, we delved into actual DLC installation costs to compare these to the factors. Again, SBC-CA's witness was unable to explain any linkage between actual costs and the factors used in the SBC-CA models. (Hearing Tr., 4/15/03, pp. 573, 586.) Essentially, SBC-CA's witnesses were unable to support the modeling inputs adequately in this area. Finally, in the SPICE model, we were unable to determine how to modify it to test varying demand levels. In our final modeling runs following comments on the Proposed and Alternate Decisions, we found that input modifications to the SBC-CA models, particularly inputs related to factors and expenses, required time-intensive manual manipulation that was prone to error. On the other hand, those commenting on our efforts were readily able to identify our errors and propose corrections. In sum, we found that both models failed one or more of our cost modeling criteria.
The analysis above describes why we have concluded that both HM 5.3 and the SBC-CA Models contain flaws that we cannot correct completely.
The SBC-CA models contain many inputs and assumptions that we conclude did not fully reflect forward looking costs -- such as loop configuration, cable inventory, structure sharing percentages, ACFs, SPICE demand assumptions, potentially duplicative shared and common expenses, and Project Pronto expenses. We are unable to modify these inputs for a variety of reasons. In some cases, the inherent structure of SBC-CA's models aggregates these inputs with other information to the point that we cannot isolate inputs for modification. In other cases, the record convinces us that the inputs may be overstated or not specific to the provision of UNEs, but the record has not provided us with adequate information for a replacement number.
In contrast, even though we disagree with many of the input assumptions used in HM 5.3 - such as the cost of capital, the copper/fiber crossover point, structure sharing, plant mix, DLC costs, and switching assumptions - we can change many of these inputs and assumptions. In many areas, we have incorporated inputs from the SBC-CA models into HM 5.3, particularly in areas such as labor rates, plant mix, and switching investment information.
Nevertheless, despite these efforts, we could not cure all of the flaws we found in HM 5.3. We find that we cannot perform sensitivity analyses on the clustering process that builds the initial estimates of outside plant nor can we have confidence that such large clusters accurately reflect the inherent costs that arise from California topographical and urban realities. In addition, we cannot modify all inputs related to labor costs, and we cannot overcome flaws in the interoffice transport module that underestimate demand and may not adequately incorporate all necessary equipment.
In the Proposed and Alternate Decisions we circulated for comments, we stated that, given the flaws of both models and our unwillingness to rely on either as the sole estimate of forward-looking UNE rates, we would instead use the two models to create a zone within which we would adopt new UNE rates. We found it unreasonable to throw out both models and have the parties start over, or to have the parties try to remedy the flaws we noted in these models, because starting over would take valuable time, might not produce good results, and would undoubtedly lead to further contention over the proposed remedies.
The Proposed Decision found that a fair and reasonable, albeit not perfect, solution, given the amount of time that had already passed in this proceeding, was to use both models as endpoints for our ratesetting exercise. Both models were run with common inputs, to the extent possible. The results of the HM 5.3 run generally appeared reasonable as the lower boundary of a range for ratesetting purposes, and the SBC-CA model results generally appeared reasonable as the corresponding ceiling. In a few cases, such as rates for DS-1 ports and the DS-3 entrance facility without equipment, HM 5.3 actually resulted in a higher rate than the SBC-CA models.
Following comments on the Proposed and Alternate Decisions, we made appropriate adjustments to both modeling approaches and input assumptions to reflect parties' directions to resolve modeling difficulties. After doing so, we conclude we must now abandon the use of the SBC-CA model in the determination of non-UNE-L costs. After correcting what we agree are errors pointed out by parties in our runs of both HM 5.3 and the SBC-CA models, and making further modifications to inputs based on the comments, we find the SBC-CA models are overwhelmingly complex and it is not reasonable to use them unless they capture cost elements that HM 5.3 clearly fails to model.
When we made a large series of changes to the SBC-CA and HM 5.3 models based on the comments, it became clear that the SBC-CA models were unreasonably difficult to operate and modify: they required extremely time-intensive efforts to make modifications and were prone to input errors because of the extraordinarily complex input modification requirements. The majority of the problems we experienced can be traced to our manipulation of the SBC-CA cost factor module, which is an integral component used in all other SBC-CA cost modules. In response to comments on the Proposed Decision, we found it necessary to modify expenses and cost factors related to affiliate transactions, unregulated businesses, the building factor, retiree expenses, asset lives, and the cost of capital. We engaged in a time-intensive process to identify which inputs to modify. Each of our modifications required approximately 100 manual input changes to flow the results of the SBC-CA factor model into the other eleven SBC-CA UNE cost modules. With changes of this magnitude, errors inevitably arise.
In our view, it is unduly burdensome, and therefore not reasonable, to continue using a model that requires extensive and time-consuming manual manipulation unless it clearly contributes to producing a better estimate of costs. As we ran and re-ran the SBC-CA models, we found that the amount of time required to pursue all potential modifications is not reasonable, let alone the time involved in investigating why the results cannot be easily replicated or why they deviate from prior results.
The problems we experienced modifying and running the SBC-CA models in response to comments echo comments made early in this proceeding by JA and ORA/TURN. JA had complained early on that SBC-CA's ACF model involved a complex series of algorithms that operate within and across 67 detailed worksheets. (JA, 2/7/03, p. 29.) They explained that because SBC-CA's models involved at least six separate cost studies that were not integrated, it was not possible to change critical assumptions in any one of the models and have those changes "flow through" to the others. (Id., p. 40.) We agree with JA's initial statement that:
...the use of separate, unconnected studies for each service and the use of unlinked files to develop inputs to those studies also create a potential for errors and inconsistencies in assumptions that should be consistent across each service, and makes auditing for and correcting of these errors unduly burdensome. (JA, 2/7/03, p. 29.)
In addition, ORA/TURN provided additional insightful criticism that we find accurately describes what we have now experienced first-hand.
[SBC-CA's] cost modeling process is not "user friendly." Adjusting key inputs that have a significant affect on calculating costs, such as cost of capital, is a difficult and complicated process. [SBC-CA] admits that there is no fail safe mechanism to ensure that a change to a key input, which should be common to all models, made in one model would be automatically flowed through to all of [SBC-CA's] models. It would be up to the user of the models to identify where those changes would need to be made, and the user would then have to make the changes manually. A user would have to be intimately familiar with all of the [SBC-CA] models to ensure that he/she did not forget to make the same change in every location in every model. Thus, it is difficult to audit the model outputs because the models are not integrated. This makes it very difficult, if not impossible, for the user to generally replicate the cost study calculations, and to modify crucial assumptions with the certainty that it has been done consistently throughout each model. Thus, [SBC-CA's] cost models have not met the criteria set forth in the Commission's Scoping Memos. (ORA/TURN, 2/7/03, pp. 8-9.) (Emphasis in original; citations omitted.)
Now, with the experience we have gained attempting to modify the SBC-CA models and replicate our own modified work, we concur in the criticisms of JA and ORA/TURN. In our first model runs of the SBC-CA models supporting the Proposed Decision, we made few, if any, adjustments to inputs related to the SBC-CA factors. But now, in response to comments identifying errors and other modeling changes that we agree are valid, particularly related to expenses and cost factors, we conclude that the lack of flow-through from one model to the other makes the SBC-CA models extremely challenging to manipulate and prone to errors.
In contrast, we did not experience the same degree of difficulty in modifying and correcting our runs of HM 5.3 or verifying its results. In general, we were able to understand how to make the necessary modifications, implement them quickly and, after making them, we could easily and consistently replicate our results in a reasonable time frame.
We will adopt HM 5.3 model results for all of SBC-CA's permanent UNE rates with the exception of the UNE-L rates. Because of the serious flaws that we have identified in the HM 5.3 modeling of local loop costs - particularly the clustering module - we believe it would be unreasonable to rely on HM 5.3 alone to set the rate of these elements. For these elements, we take the extra step of averaging the results yielded by HM 5.3 and SBC-CA's model to develop UNE-L costs.
We conclude this approach is reasonable given the enormous complexity involved in TELRIC modeling exercises. As the FCC has recognized in its recent rulemaking reviewing the TELRIC methodology, UNE cost proceedings are "extremely complex," involving conflicting cost models and hundreds of inputs to those models, each supported by the testimony of expert witnesses. State pricing proceedings are thus "extremely complicated."48 It is not unreasonable to use a model with some flaws when the alternative is another model that is not only significantly flawed as well, but is also unreasonably difficult to operate and produces varying results.
Our use of HM 5.3, even though we find it flawed, is also supported by the D.C. Circuit's discussion of the difficulty in pinpointing TELRIC rates with exactitude. In a 2001 decision upholding FCC findings that UNE rates in Kansas were cost-based, the D.C. Circuit concluded that ratemaking is not an exact science but involves a "zone of reasonableness." As part of its discussion, the court cited to a prior case where it stated:
This argument, however, assumes that ratemaking is an exact science and that there is only one level at which a wholesale rate can be said to be just and reasonable.... However, there is no single cost-recovery rate, but a [wide] zone of reasonableness.... (Sprint Communications Company v. FCC, 274 F.3d 549, 555 (D.C. Cir. Dec. 28, 2001), citing Conway, 426 U.S. at 278.)
As a result, the court declined to find fault with the FCC "for approving the Kansas Commission's compromise resolution of an issue that the parties' behavior had left a muddle." (Id. at 559.)
Interestingly, even though we do not use the SBC-CA models to establish all permanent UNE rate elements, our final runs of the SBC-CA models before abandoning them indicate results not that different from HM 5.3. This was also the case for local loops. Nevertheless, we feel that the richness of the SBC-CA LoopCAT module justifies its use and we continue to use the SBC-CA model as part of our determination of UNE-L prices.
Also, despite fierce criticism of both models by the parties, we find that loop assumptions embedded in both models have surprising similarities. Earlier versions of the HM 5.3 model have been criticized by this Commission and others for assumptions regarding uniform dispersion of customers throughout the serving area. Indeed, HM 5.3 makes efforts to overcome this prior criticism by precisely locating today's customers through the geocoding process. However, after the geocoded customers are clustered into distribution areas, HM 5.3 does not use the geocoded locations to build a distribution network. Instead, the cluster is split into equal-sized lots and customers are uniformly distributed throughout the distribution area. Likewise, LoopCAT makes the simplifying assumption to approximate loop lengths based on the design point. As SBC-CA's witness Smallwood explains, "SBC-CA makes the reasonable assumption that customers will be distributed throughout a distribution area." (SBC-CA/Smallwood, 3/12/03, pp. 66-67.) By its own admission, SBC-CA is uniformly distributing customers throughout the serving area even though it has criticized prior versions of HM 5.3 for this same assumption.
Finally, as noted before, we believe that LoopCAT's modeling better reflects California realities. Both models include a mixture of loop modeling assumptions that are somewhat reality-based and somewhat hypothetical. First, HM 5.3 uses today's customer locations, but clusters them differently than SBC-CA's existing network, while LoopCAT uses some existing plant routes, but combines that information with estimates of future customer locations. By using the design point approximation technique, LoopCAT does not locate any customers where they are today.
Second, HM 5.3 uses a minimum spanning tree approach to build plant to connect customers. By definition, an approach based on "minimums" produces the lowest possible results. In contrast, LoopCAT uses embedded cable records that we find produce higher results than if cable-sizing guidelines were used to configure a rebuilt network. Third, HM 5.3 relies on the TNS customer location process and its clustering assumptions, while LoopCAT relies on SBC-CA's preprocessor and its assumptions regarding the design point. Both the TNS process and SBC-CA's preprocessor are presented to us as inputs that we cannot adjust, and we are asked to rely on the underlying assumptions without questioning or modifying them. As noted before, HM 5.3's large cluster sizes and its process of modeling all demand at a single urban address as a single tower fail to reflect California topographic and political realities.
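For context, a minimum spanning tree connects a set of points with the shortest possible total edge length. The sketch below is a minimal illustration of that concept (Prim's algorithm over hypothetical coordinates); it is not HM 5.3's actual routing code, which applies engineering rules on top of the spanning-tree framework.

```python
# Minimal sketch of a minimum spanning tree (Prim's algorithm) over hypothetical
# customer coordinates; illustrates why a "minimum" tree yields the shortest
# total connecting distance. Not HM 5.3's actual routing logic.
import math

def prim_mst_length(points: list[tuple[float, float]]) -> float:
    """Total edge length of a minimum spanning tree connecting the points."""
    if len(points) < 2:
        return 0.0
    # Distance from the growing tree (seeded with point 0) to each other point.
    best = {i: math.dist(points[0], points[i]) for i in range(1, len(points))}
    total = 0.0
    while best:
        nxt = min(best, key=best.get)          # cheapest point to attach next
        total += best.pop(nxt)
        for i in best:                         # relax distances via the new point
            best[i] = min(best[i], math.dist(points[nxt], points[i]))
    return total

# Hypothetical customer coordinates within one cluster (units arbitrary).
cluster = [(0, 0), (0, 3), (4, 0), (4, 3), (2, 1)]
print(round(prim_mst_length(cluster), 2))
```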
Thus, we now conclude we should not rely solely on the HM 5.3 model to set local loop rates, but can rely on HM 5.3, as modified, to set other permanent UNE rates. While the HM 5.3 model is not perfect and does not meet 100% of our modeling criteria, we now see that, on top of the SBC-CA models' basic modeling flaws, those models are unduly burdensome and difficult to operate. Aside from the modeling of UNE-L rates, where the SBC-CA models better capture California realities, it is simply not reasonable to invest the time and energy required to use them.
In summary, after our review of the comments and further deliberation, we made the final corrections and other modeling input changes to both models. For the SBC-CA model, the changes we made in response to comments were:49
· Corrected asset lives to ensure correct data used in all columns and to use proposed lives of SBC-CA. (Section VI.A)
· Corrected copper distribution and feeder fill factors to ensure the adopted fill factor is used for both material and installation costs (Section VI.E.1-2)
· Modified fill factor for copper distribution to 41.7%, copper feeder to 66.2%, DLC common equipment to 47.4% and DLC plug-in equipment to 53.1% (Sections VI.E.1,2,4 and 5)
· Corrected average switch size in SICAT to use SBC average for Nortel switches (Section V.D.2, Other Switching and Port Model Changes.)
· Modified NID inputs to use a 2 pair NID, one hour NID installation time, and adjusted residential premise termination fill factor to 53.4% (Section VI.E.7)
· Modified cost of capital to 10.37% based on an 11.78% cost of equity and a 6.34% debt rate, and a capital structure of 74% equity and 26% debt (see the illustrative computation following this list). (Section VI.B)
· Modified expense levels and cost factors related to non-regulated expenses, affiliate expenses, TBO expenses, and land and building factors (Section V.A.4.c)
· Adjusted fill factors in SPICE for SONET and common equipment fill to 85%, and fiber fill 54% (Section V.A.3)
· Modified split of new and growth switching lines (Section VI.J.2)
· Adjusted port cost calculation to correct labor costs and concentration ratio, as suggested by SBC-CA (Section V.D.2, Other Switching and Port Model Changes)
· Modified IDLC/UDLC inputs to 95% UDLC and 5% IDLC (Section VI.C.)
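For reference, the 10.37% cost of capital listed above is consistent with a straightforward weighted average of the adopted equity and debt rates at the adopted capital structure; the computation below is illustrative only and is not a substitute for the derivation in Section VI.B:

$$0.74 \times 11.78\% \;+\; 0.26 \times 6.34\% \;=\; 8.72\% + 1.65\% \;\approx\; 10.37\%$$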
The final corrections and other modeling changes we made to HM 5.3 in response to comments were:
· Modified fill rates for copper distribution, copper feeder, DLC common equipment, and DLC plug-in equipment to match SBC-CA proposals. (Section VI.E.1,2, 4 and 5)
· Removed FCC cable prices and used HM 5.3 default values instead (Section V.D.2, Cable Prices)
· Modified plant mix inputs to match SBC-CA models (Section VI.G)
· Adjusted switching investment per line calculation (Section V.D.2, Switch Vendors)
· Corrected Verizon best in class expenses (Section V.B.6)
· Corrected BRI and trunk port factors (Section V.D.2, Other Switching and Port Model Changes)
· Modified cost of capital to 10.37% (Section VI.B)
· Adjusted DS-1 and DS-3 loop costs to account for missing equipment (Section V.D.2, DS-1 and DS-3 Loops)
· Calculated deaveraged rates for 4-wire, coin, PBX and ISDN loops (Section V.D.2, Deaveraged Rates)
· Modified NID installation time to one hour (Section VI.E.7)
· Removed additional splice crew to return to original splice crew proposals (Section VI.H)
· Modified split of new and growth switching lines (Section VI.J.2)
· Modified interoffice fiber fill factor to 54% to match run of SPICE (Section V.D.2, Interoffice Rates)
· Modified PBX loop option to include investment for PBX line card (Section V.D.2, PBX loops)
· Adjusted asset lives to match SBC-CA proposal (Section VI.A)
· Increased labor costs for terminal, splice and SAI investments (Section V.B.2)
· Modified IDLC/UDLC inputs to 95% UDLC/5% IDLC (Section VI.C)
Our decision to use HM 5.3 to set SBC-CA's permanent UNE rates, aside from the local loops, is based on runs of the HM 5.3 and SBC-CA models in which we have set as many inputs as possible at the same levels. The reasoning behind our chosen input levels is described at length in the Modeling Inputs Section VI below. Here, we briefly summarize which inputs were used for the two model runs that ultimately led to our decision to rely on HM 5.3 for ratesetting purposes. The inputs that we varied for our runs are the following:
Cost of Capital: We modified both models to use an input assumption of a 10.37% cost of capital. Also, we modified the tax rate in HM 5.3 to 40.75% to match the SBC-CA models.
Asset Lives: We used the asset lives proposed by SBC-CA in both the SBC-CA models and in HM 5.3.
IDLC/UDLC: We adjusted both models to assume a configuration of 5% IDLC, and 95% UDLC.
Structure Sharing: In the HM 5.3 model, we used structure sharing levels from the FCC Inputs Order, and we assumed 55% sharing of the distribution and feeder network. In the SBC-CA models, we were not able to modify SBC-CA's proposed structure sharing percentages because we could not determine the percentages assumed by SBC-CA.
Plant Mix: We modified HM 5.3 to use SBC-CA's plant mix assumptions. In response to comments on the Proposed Decision, we modified our method for translating SBC-CA's plant mix into HM 5.3 using pair feet rather than sheath feet, as suggested by SBC-CA. (SBC-CA, 6/1/04, p. 9-10; Workshop Transcript, 6/14/04, p. 1000-1001.)
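To illustrate why the weighting basis matters, the sketch below computes a plant mix two ways, once weighted by sheath feet and once by pair feet. The segment data are hypothetical and are not drawn from the record; the sketch only shows the mechanical difference between the two weightings.

```python
# Illustrative only: hypothetical cable segments showing how the plant mix
# shifts when weighted by pair feet rather than sheath feet.
segments = [
    # (structure type, sheath feet, pairs in sheath)
    ("aerial", 1000, 100),
    ("buried", 2000, 400),
    ("underground", 500, 1200),
]

def plant_mix(weight):
    """Return each structure type's share of the total, under a given weighting."""
    total = sum(weight(s) for s in segments)
    return {s[0]: weight(s) / total for s in segments}

sheath_feet_mix = plant_mix(lambda s: s[1])        # weight each segment by sheath feet
pair_feet_mix = plant_mix(lambda s: s[1] * s[2])   # weight each segment by pair feet

print("sheath-foot weighting:", {k: f"{v:.1%}" for k, v in sheath_feet_mix.items()})
print("pair-foot weighting:  ", {k: f"{v:.1%}" for k, v in pair_feet_mix.items()})
```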
Labor Rates:
a) HM 5.3 is adjusted where possible to use the proprietary loaded labor rate from SBC-CA's models. This rate applies to the Copper and Fiber OSP Technician rates, the Engineering Labor rate, and EF&I per hour. Adjustments were made to labor rates for wire center terminal investment, customer premises fixed investment, pole labor, NID labor, copper cable manhole investment, fiber pullbox investment, and aerial drop placement. In addition, we used the MCI-proposed workaround to modify labor costs for terminal, splice, and SAI investment.
b) Crew sizes in HM 5.3 were adjusted for cable placing, where possible, to add one person (i.e., a crew of one was increased to two, and a crew of two to three).
c) There were no changes to the labor rates assumed in the SBC-CA models.
Fill Factors: In the SBC-CA models, achieved fill factors were adjusted to the levels in the table below. In HM 5.3, the relevant fill inputs (e.g., cable sizing factors for distribution plant) were adjusted to produce achieved fills matching the following levels:
a) Loop
Copper Distribution | 41.7%
Fiber Feeder | 79.6%
Copper Feeder | 66.2%
DLC Common Equipment | 47.4%
DLC Plug-In Equipment | 53.1%
Residential Premise Termination | 53.4%
SAI | 67.8%
d) Switching: We modified fill levels in the SBC-CA model to assume an 82% achieved fill for both analog and digital switches. HM 5.3 was likewise modified to produce an 82% achieved fill for digital and analog switches.
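As context for how these targets affect modeled investment, an achieved fill factor is the ratio of working units to installed capacity, so a lower fill requires more installed plant per working line. The sketch below is illustrative only; the demand level and the comparison fill are hypothetical, not record values.

```python
# Illustrative only: how an achieved fill factor drives installed capacity.
def required_capacity(working_units: float, achieved_fill: float) -> float:
    """Installed capacity needed so that working units / installed capacity equals the fill."""
    return working_units / achieved_fill

demand = 1000                    # hypothetical working copper distribution pairs
for fill in (0.417, 0.75):       # the adopted 41.7% versus a higher hypothetical fill
    print(f"fill {fill:.1%}: {required_capacity(demand, fill):.0f} installed pairs")
```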
Crossover Point: HM 5.3 was adjusted to assume a fiber/copper crossover point of 12,000 feet for analog loops. There were no changes to the crossover assumptions in the SBC-CA models.
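As a simplified illustration of how a crossover point operates in a loop cost model, the sketch below assigns copper feeder to loops at or below 12,000 feet and fiber-fed DLC beyond that distance. This is a sketch of the general concept, not a restatement of the HM 5.3 algorithm.

```python
# Illustrative only: a feeder technology choice driven by a 12,000-foot crossover.
CROSSOVER_FEET = 12_000

def feeder_technology(loop_length_feet: float) -> str:
    """Copper feeder up to the crossover distance, fiber-fed DLC beyond it (illustrative)."""
    return "copper" if loop_length_feet <= CROSSOVER_FEET else "fiber/DLC"

for length in (8_000, 12_000, 18_000):
    print(f"{length} ft loop: {feeder_technology(length)}")
```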
DLC costs: SBC-CA's LoopCAT model was adjusted to lower the EF&I for DLC installation to the average levels shown in a recent sample of 50 SBC-CA DLC installations.50 HM 5.3 does not use an EF&I factor, so instead we used an average of actual Remote Terminal and CEV installation costs from the same sample of 50 SBC-CA installations.
Pole Spacing: We modified HM 5.3 to assume pole spacing of 150 feet for all density zones of the distribution network. (See SBC-CA/McNeil, 2/7/03, p. 38.)
Drop Terminal Investment: We modified HM 5.3 to assume 85% buried drop terminals and 15% aerial, to match the percentages of buried and aerial drops in SBC-CA's models. (See SBC-CA/Tardiff, 2/7/03, p. 76.)
Cable Prices: Initially, we modified HM 5.3 to use copper and fiber cable prices used by the FCC, based on criticisms by SBC-CA witness Tardiff. (SBC-CA/Tardiff, 2/7/03, p. 39.) JA provided these cable prices in documents supporting HM 5.3. (See JA/Klick Declaration, 10/18/02, Attachment JCK-2 pp. 10-12.) Following comments on the proposed decision, we removed this modification based on comments by AT&T that the FCC cable prices include both material and installation and result in double-counting of installation costs. (AT&T, 6/1/04, p. 10.) The copper and fiber cable prices were not modified in the SBC-CA models.
PBX Loops: In response to comments of SBC-CA, we added investment to HM 5.3 for PBX line cards based on assumptions from SBC-CA's LoopCAT. (SBC-CA, 6/1/04, p. 29.)
Switch Vendors: We modified HM 5.3 to base the switching investment per line on a weighted average of Lucent and Nortel prices only, based on information from SBC-CA's SICAT on the percent of lines purchased from those two vendors.51 Siemens was removed from the switch vendor mix assumed in HM 5.3, as explained in Section VI.J.1 below. There were no changes to the SBC-CA models in this area.
Based on comments on the Proposed Decision, we corrected our calculation of switching investment per line in HM 5.3. SBC-CA contends we should use its SICAT model to calculate switching investment per line in HM 5.3, while JA contend we should use the formula provided by JA witness Pitts, with a correction provided in their comments. (SBC-CA, 6/1/04, p. 10; AT&T, 6/7/04, p. 9, n. 68.) We will use the formula provided by JA witness Pitts, as corrected, to calculate switching investment per line in HM 5.3 because we agree with AT&T that it is inappropriate to use SICAT for this purpose.
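For illustration only, the sketch below shows the vendor-weighting step described above using hypothetical line shares and per-line prices; it does not reproduce the proprietary SICAT data or the formula provided by JA witness Pitts.

```python
# Illustrative vendor weighting for switching investment per line.
# The shares and prices are hypothetical placeholders, not record values.
vendor_mix = {
    "Lucent": {"line_share": 0.60, "price_per_line": 120.0},
    "Nortel": {"line_share": 0.40, "price_per_line": 110.0},
}  # Siemens excluded from the mix, per Section VI.J.1

weighted_investment = sum(
    v["line_share"] * v["price_per_line"] for v in vendor_mix.values()
)
print(f"weighted switching investment per line: ${weighted_investment:.2f}")
```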
New vs. Growth: Initially, we adjusted both models to assume 40% of lines are purchased at the "new" line price and 60% at the "growth" line price. This matches the mix of new and growth lines that was used in the prior OANAD proceeding. We also removed "other replacement costs" from SBC-CA's SICAT model. After reviewing comments on the Proposed Decision, we conclude that it is inconsistent to use the percentages from the prior OANAD proceeding. We therefore have modified our modeling assumptions, which now average the forecasts of SBC-CA and JA on the split between new and growth. This yields a modeling assumption of 70.2% new and 29.8% growth.
Switch Rate Structure: We ran both models assuming a flat rate for switching, as proposed by JA. This means that 100% of switching costs are allocated to the port and there are no usage rates.52 We also calculated a usage-based rate for reciprocal compensation purposes, based on a 70%/30% split of traffic-sensitive and non-traffic-sensitive costs.
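The arithmetic behind these two choices is simple, as the sketch below illustrates; the party forecasts and the per-line switching cost shown are hypothetical placeholders, not record values.

```python
# Illustrative only: hypothetical party forecasts of the "new" line share.
sbc_new_share = 0.65
ja_new_share = 0.75
new_share = (sbc_new_share + ja_new_share) / 2
print(f"new/growth split: {new_share:.1%} new / {1 - new_share:.1%} growth")

# Flat-rate switching structure: 100% of switching cost is recovered via the port.
monthly_switching_cost = 5.00   # hypothetical per-line switching cost
port_rate = monthly_switching_cost
usage_rate = 0.0                # no usage rates under the flat-rate structure
print(f"port rate: ${port_rate:.2f}; usage rate: ${usage_rate:.2f}")
```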
Other Switching and Port Model Changes: We deleted per month white page listing expenses from the SBC-CA port cost study, based on statements by SBC-CA witnesses Lundy and Silver that this should be removed. (SBC-CA/Lundy, 3/12/03, p. 46; SBC-CA/Silver, 3/12/03, p. 4.) Also, we adjusted the concentration ratio of lines/trunk from 2:1 to 4:1 in both HM 5.3 and SICAT, to be consistent with loop modeling assumptions.
In response to comments on the Proposed Decision, we (1) modified our assumption regarding switch sizes based on the average switch size in SICAT (AT&T, 6/1/04, p. 8.), (2) modified our port cost calculations based on comments of SBC-CA that we ignored labor costs for switch installation and that we inappropriately included the concentration ratio in our port cost calculations53 (SBC-CA, 6/1/04, p. 28.), and (3) recalculated our BRI and trunk port calculations to correspond to our other changes in switch investment inputs. (AT&T, 6/1/04 p. 20.)
Vertical Switch Features: We modified the SBC-CA Model to include any identified feature hardware costs in the port rate. Using SBC-CA's support materials, we calculated total hardware costs for nine features. We then assumed that an average customer would use three of these nine features, so we added one third of this total cost to the port cost. There were no changes to HM 5.3 regarding feature costs. In comments on the Proposed Decision, SBC-CA disputes this methodology and contends we should add its total per line feature cost to the port price. We make no change to our approach because we do not agree with how SBC-CA has calculated its total per line feature cost and we believe it may include double-counting of feature hardware and software costs that are already included in per line switching costs.
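The allocation described above is a simple one-third apportionment, as the sketch below illustrates; the per-feature hardware costs shown are hypothetical because the record figures are proprietary.

```python
# Illustrative only: hypothetical hardware costs for the nine identified features.
feature_hardware_costs = [0.10, 0.05, 0.20, 0.15, 0.08, 0.12, 0.06, 0.09, 0.05]
assert len(feature_hardware_costs) == 9

total_hardware_cost = sum(feature_hardware_costs)
# Assume the average customer uses three of the nine features,
# so one third of the total hardware cost is added to the port cost.
port_cost_adder = total_hardware_cost / 3
print(f"feature hardware adder to port cost: ${port_cost_adder:.2f}")
```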
Expenses: HM 5.3 was adjusted to remove the presumption that SBC-CA expenses would track those of Verizon California. In other words, we used SBC-CA's 2001 current E:I ratio without adjustments based on comparisons with Verizon. In the SBC-CA models, we removed the inflation adjustment to expenses, under the assumption that productivity increases offset inflation adjustments. Following comments on the Proposed Decision, we modified the SBC-CA models related to non-regulated expenses, affiliate expenses, TBO expenses, and land and building factors.
Interoffice Rates: We adjusted the SONET and common equipment fill factor in SBC-CA's SPICE model to 85%, as proposed by JA. We adjusted the fiber fill factor to 54%. Then, we ensured that these same interoffice fill factors were used in our runs of HM 5.3. SBC-CA proposed fill factors for SPICE based on its current utilization levels, which SBC-CA contends are forward looking. SBC-CA's proposed fill factors are significantly below the levels used in HM 5.3 and used by the FCC in its own modeling. (JA/Mercer-Murphy, 2/7/03, paras. 68-72; See also Inputs Order, para. 208.)
DS-1 and DS-3 Loops: In comments on the Proposed Decision, SBC-CA contends that DS-1 loop rates are incorrect because costs of critical pieces of equipment are missing or incorrectly applied and DS-3 loop costs for critical equipment are incorrectly calculated. (SBC-CA, 6/1/04, p. 7.) These omissions were noted by SBC-CA during the course of the proceeding, and JA fixed these omissions and errors in their cost filings in the Verizon UNE Phase of R.93-04-003. (Id., see also SBC-CA/Murphy, 2/7/03, p. 63.) AT&T responds that parties may define the DS-1 loop in different ways and that it would not object to an additional UNE to cover the costs noted by SBC-CA. AT&T also admits an error in its DS-3 loop cost calculations. (AT&T, 6/7/04, p. 7.)
We conclude that because AT&T admits errors or omissions in the HM 5.3 DS-1 and DS-3 loop cost calculations, we should fix them. We take official notice of DS-1 and DS-3 loop costs proposed by AT&T and MCI/WorldCom in the Verizon UNE phase of R.93-04-003 and use them as suggested by both SBC-CA and AT&T to amend DS-1 and DS-3 loop cost calculations.54
Shared and Common Cost Markup: Both models include a 21% markup, as adopted in D.02-09-049.
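As a minimal illustration, assuming the markup is applied multiplicatively to the direct TELRIC cost of each element (the direct cost shown is hypothetical):

```python
# Illustrative application of the 21% shared and common cost markup.
direct_cost = 10.00   # hypothetical direct TELRIC cost per element
markup = 0.21         # markup adopted in D.02-09-049
rate = direct_cost * (1 + markup)
print(f"rate with markup: ${rate:.2f}")   # $12.10
```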
Deaveraged Rates: In comments on the Proposed Decision, SBC-CA claims the Commission fails to adopt deaveraged rates for several UNEs that have previously been deaveraged by the Commission in D.02-02-047. (SBC-CA, 6/1/04, p. 29.) We agree this was an oversight and we modify the decision to adopt deaveraged rates for 4-wire, coin, PBX, and ISDN loops based on the relationship between current statewide average and deaveraged rates for these UNEs.
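One plausible reading of scaling "based on the relationship between current statewide average and deaveraged rates" is a simple ratio adjustment, illustrated below with hypothetical rates; this sketch is not a restatement of the actual calculation in the record.

```python
# Illustrative ratio scaling for deaveraging; all dollar figures are hypothetical.
current_statewide_avg = 12.00                                   # hypothetical current average rate
current_deaveraged = {"Zone 1": 9.00, "Zone 2": 12.00, "Zone 3": 18.00}
new_statewide_avg = 10.00                                       # hypothetical newly adopted average

for zone, old_rate in current_deaveraged.items():
    # Preserve each zone's existing relationship to the statewide average.
    new_rate = new_statewide_avg * (old_rate / current_statewide_avg)
    print(f"{zone}: ${new_rate:.2f}")
```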
Appendix A shows the results of our run of the HM 5.3 model with our chosen inputs and the results of our runs for the SBC-CA UNE-L. The column in Appendix A showing the results of the HM 5.3 Model runs indicates the permanent UNE rates for SBC-CA that we adopt in this order for rate elements other than UNE-L.
24 Despite claiming the SBC-CA models do not permit ready adjustment, JA provide a detailed restatement of them through several hundred pages of adjustments to loop engineering assumptions and investment inputs, cost factors, and expense assumptions. SBC-CA disputes the results of JA's restatement, stating that the restated results defy common sense and are inconsistent with cost estimates produced by HM 5.3. SBC-CA disparages JA's restatement because it produces a monthly loop rate of $2.25, or "about the same price as a large cup of coffee at Starbucks." (SBC-CA/Tardiff, 3/12/03, p. 24.) While it is clear that JA devoted considerable resources to restating the SBC-CA models, the Commission must also devote considerable resources to reviewing this work and SBC-CA's rebuttal. We cannot accept the restatements at face value without our own reasonable scrutiny. In many cases, we do not find JA have adequately or convincingly supported their restated results.
25 As noted previously, we use the expression "UNE-L" as a shorthand for those UNEs whose costs are shaped by loop costs. This includes 2-wire loop, 4-wire loop, coin, PBX, and ISDN loops, but does not extend to DS-1 and DS-3, whose electronic elements require different modeling.
26 See Section V.A.3 below.
27 See Sections V.A.1.a, V.A.3, and V.A.4 below for a detailed discussion of these input problems.
28 See JA/Donovan-Pitkin-Turner, 2/7/03, a 224-page declaration containing over 70 subheadings with proposed modifications to LoopCAT, and JA/Brand-Menko, 2/7/03, a 109-page declaration containing 21 categories of alleged flaws in SBC-CA's expense modeling.
29 ARMIS refers to the FCC's "Automated Reporting Management Information System" that was initiated in 1987 for collecting financial and operational data from the largest carriers and is described further at http://www.fcc.gov/wcb/armis.
30 "Structure sharing" generally refers to the percentage of poles and conduit that are shared with other utilities, or between different portions of SBC-CA's network.
31 See Federal-State Joint Board on Universal Service (CC Docket No. 96-45), Tenth Report and Order, FCC 99-304, 14 Rcd 20156 (rel. Nov. 2, 1999) ("Inputs Order").
32 See, e.g., JA, 2/7/03, pp. 26-27 and 29; JA/Declaration of Donovan/Pitkin/Turner, 2/7/03, pp. 65-67.
33 Specifically, JA cite to factors used by SBC in Nevada, Connecticut, and Wisconsin. (JA/Mercer-Murphy, 2/7/03, p. 46.)
34 See, generally, declarations of SBC-CA witnesses Cohen, Henrichs, and Makarewicz, 3/12/03.
35 Project Pronto refers to SBC-CA's capital expenditures to add loop plant, circuit equipment, and other facilities to provision advanced data services like DSL, which are provided by SBC-CA's unregulated affiliate, SBC Advanced Services Inc. (ASI).
36 TBO refers to the accrual for post-retirement benefit expenses for SBC-CA's retirees. Effective in 1991, the rules for accounting for post-retirement benefits changed due to Statement of Financial Accounting Standards (SFAS) No. 106, Employers Accounting for Post-retirement Benefits Other than Pensions. SBC-CA adopted SFAS 106 for regulatory purposes on January 1, 1993. The TBO was established to account for the anticipated future retiree medical costs already earned as of that date, but not yet paid. (See SBC-CA/Cohen Declaration, 3/12/03, p. 15.)
37 According to SBC-CA, the TPI is obtained from C.A. Turner Utility Reports. (SBC-CA/Cohen, 10/18/02, p. 6.) The CPI-W is defined as the Consumer Price Index for Urban Wage Earners and Clerical Workers. (SBC-CA/Cohen, 3/12, p. 29.)
38 See Section VI.D for a complete discussion of the DLC inputs used in the Commission's model runs.
39 The limitation of 6,541 lines is based on a maximum underground vault, or "CEV," sized to hold 8,064 lines, of which 20% is reserved for growth.
40 See also ALJ's Ruling on Joint Applicants' and SBC Pacific's Motions to Strike, 5/21/03, regarding SBC-CA's request to strike rebuttal testimony of JA witness Landis regarding TNS and clustering issues because JA did not respond fully and completely to discovery requests for the clustering source code. The ALJ denied SBC-CA's motion to strike Landis' testimony because she had granted SBC-CA access to Landis in response to SBC-CA's motion to compel and SBC-CA never further pursued greater access per the procedure the ALJ outlined. (5/21/03 Ruling, pp. 11-12.)
41 We note that, similar to the SBC-CA models, HM 5.3 can also be criticized for how it handles multiple dwelling units. Although HM 5.3 clusters customers based on current population density characteristics, it does not necessarily model sufficient equipment to serve high-density locations. This is discussed in detail in Section VI.E.7, where we address the fill factor for premises termination equipment.
42 The FCC has itself noted, in the context of its own cost modeling for universal service purposes, that: