V. Both HM 5.3 and the SBC-CA Models Are Flawed

In comments, workshops, and hearings during the course of this proceeding, Joint Applicants (JA) and SBC-CA have lobbed numerous criticisms at each other regarding alleged flaws in HM 5.3 and the SBC-CA Models. These criticisms can be summarized briefly.

The essential criticism of the HM 5.3 model is that it ignores generally accepted engineering and network design standards to instantly construct a brand new, fully functioning network at a single moment in time. SBC-CA contends that, through the use of unrealistic and unsupported inputs, HM 5.3 drastically understates the size of the network, minimizes the costs to maintain it, and lacks the capability to provide all the services that are provided over SBC-CA's network today.

Specifically, SBC-CA contends that HM 5.3 does not adequately represent customer locations, does not comport with how an engineer designs plant, and relies too heavily on subjective judgment for the prices of purchasing and installing network facilities. (SBC-CA/Tardiff Decl., 2/7/03, p. 4.) SBC-CA asserts that HM 5.3 does not account for all the costs required to build and maintain the network. According to SBC-CA, HM 5.3 fails to account for the substantial costs that carriers incur to accommodate growth and respond to demand changes. As a result, SBC-CA maintains that HM 5.3 provides a "static view" of a network that assumes a level of efficiency that no real carrier can achieve and does not reflect how real-world telecommunications firms operate. In addition, SBC-CA contends that HM 5.3 fails to meet the cost modeling criteria set forth in the June 2002 Scoping Memo. In particular, SBC-CA alleges that it was not given sufficient access to the intricacies of the customer location process used in HM 5.3. Finally, SBC-CA claims that HM 5.3 relies on unrealistic labor assumptions to construct a non-functional network that cannot handle all of SBC-CA's customer demand.

In contrast, JA contend that SBC-CA's cost models are deeply flawed and do not adhere to TELRIC standards because they rely almost exclusively on embedded data from SBC-CA's legacy network rather than forward-looking network configurations. Further, JA maintain that the SBC-CA Models do not meet the Commission's cost study criteria and do not permit ready adjustment to eliminate these inherent flaws.

JA allege that SBC-CA's models suffer from structural flaws stemming from a basic misconception of the purpose of competition. JA claim that:


The purpose of local competition is not to ensure that [SBC-CA] is "made whole," or somehow recovers every penny it spends no matter how foolishly. Rather, one of the purposes of competition is to force entrenched incumbents such as [SBC-CA] to become more efficient. In a competitive market, there is no guarantee that a company will recover every dollar it spends. That lack of a guarantee is exactly what forces companies to spend wisely and operate efficiently. (JA, 2/7/03, p. 43.)

JA argue that, in some ways, forward-looking costs overcompensate incumbent carriers because much of the investment in the network to provide UNEs was incurred years ago and the loop plant has long since been fully depreciated. According to JA, "SBC-CA does not incur any incremental investment cost to allow competitors to use that loop plant. Nonetheless, under TELRIC, [SBC-CA] is entitled to recover investment costs for such loop plant as if [SBC-CA] had to install it all over again." (JA, 2/7/03, p. 41.)

We find that both models are flawed and do not allow us sufficient flexibility to modify inputs and test various outcomes. HM 5.3 uses a customer location database as an input, and this database is built on a set of assumptions we do not necessarily agree with and are unable to modify. In addition, HM 5.3 contains myriad inputs that are at the low end of what we consider reasonable. While we can modify most of these inputs, we could not modify all input assumptions to our satisfaction, particularly those related to labor costs. We are also unable to modify the interoffice transport module of HM 5.3 to overcome the criticisms that it underestimates demand for interoffice transport, may not adequately incorporate optical interface equipment, and is insensitive to demand changes. If we could modify HM 5.3's labor inputs and its interoffice demand and equipment inputs, the resulting cost estimates would increase. Therefore, given these areas that we cannot modify, we find that HM 5.3 underestimates forward-looking UNE costs.

In contrast, the SBC-CA models contain numerous inputs based entirely on the characteristics of SBC-CA's current network operations. SBC-CA claims that it should model a change from its current network only where a change is shown to be efficient. SBC-CA's approach essentially challenges other parties to prove that its embedded network is not forward-looking. However, this claim runs counter to FCC requirements that an incumbent LEC bears the burden of proving that its costs do not exceed forward-looking levels. (See 47 C.F.R. Section 51.505(e).) By using the current network as the starting point, SBC-CA's models run contrary to the definition of TELRIC. These inputs, which include loop investment and design characteristics, expense levels, and labor inputs, have not been sufficiently justified as forward-looking. Some of these inputs can be modified to what we consider forward-looking levels, but many cannot. The inputs we are unable to modify include SBC-CA's loop length assumptions, loop cabling inputs, numerous inputs embedded in annual cost factors such as structure sharing percentages and labor installation assumptions, and SBC-CA's assumed link between utilization levels and maintenance expenses. Further, we are unable to modify SBC-CA's expense assumptions to remove potential shared and common costs, and expenses related to unregulated services, affiliate transactions, retiree costs, and Project Pronto. Finally, we are unable to modify demand assumptions and other factor inputs in SBC-CA's interoffice transport model. Most of the input modifications that we would make to SBC-CA's models would decrease input assumptions from historical levels to what we consider forward-looking. Therefore, we find that the SBC-CA models overestimate forward-looking UNE costs.

Thus, although we have undertaken the time-consuming and exhaustive task of modifying many of the inputs used in both models to levels that we conclude are reasonable, we are unable to modify key structural elements of both models. In short, while we can modify and test portions of both models, and see how these changes affect model outputs, we cannot rely on either model in its entirety. Nevertheless, we can run both models with our preferred inputs and use the results to create a "zone" of reasonable UNE rates. After running both models with our chosen inputs and finding that the results from the models converge to a much narrower range, we determine that reasonable UNE rates lie somewhere within the zone created by the two models' results. It would not be reasonable to use either endpoint of this zone to set UNE rates because of the flaws in both models that cannot be corrected and which are discussed in detail in the sections that follow. On the other hand, we conclude that reasonable UNE rates lie between these two endpoints, and we adopt the midpoint of the zone for our new, permanent UNE rates because the midpoint reasonably mitigates the flaws in both models. Thus, while we cannot rely on the results of either model to produce reasonable or accurate UNE rates, the results of HM 5.3 set the lower boundary for our rate zone, and the results of the SBC-CA models set the upper boundary.
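
To illustrate the mechanics of the ratesetting zone described above, the following sketch computes the midpoint of a zone bounded by two model results. The function name and the rate figures are purely illustrative placeholders and are not drawn from the record or from either model.

```python
# Minimal sketch of the "zone of reasonableness" approach described above.
# The rate values are hypothetical placeholders, not figures from the record.

def rate_zone_midpoint(hm53_result: float, sbc_result: float) -> float:
    """Return the midpoint of the zone bounded by the two models' results."""
    lower = min(hm53_result, sbc_result)  # HM 5.3 sets the lower boundary
    upper = max(hm53_result, sbc_result)  # the SBC-CA models set the upper boundary
    return (lower + upper) / 2.0

# Example with illustrative monthly loop rates (dollars):
print(rate_zone_midpoint(hm53_result=10.00, sbc_result=16.00))  # -> 13.0
```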

In the pages that follow, we will describe in further detail the key flaws that we found with HM 5.3 and the SBC-CA models. We will focus our discussion on the major structural flaws identified by the parties, and our conclusions regarding these alleged flaws based on our own staff analysis of the two models. For the most part, this discussion will pertain to those portions of the models that are not easily changed by modifying the inputs. In a separate section, we will discuss the various disputes over modeling inputs and which inputs we have chosen to use in our own modeling runs that set the endpoints for our ratesetting zone.

Fundamentally, Joint Applicants and other parties contend that the SBC-CA Models fail the TELRIC standards set by the FCC. (See JA, 2/7/03, p. 40, ORA/TURN, 2/7/03, p. 9.) The TELRIC methodology is intended to replicate the pricing that would occur in a competitive market if an existing firm had to match the prices offered by a new entrant who would build facilities using the lowest-cost, most efficient technology and network configuration available, assuming the location of existing wire centers. (47 C.F.R. Section 51.505(b).) The FCC TELRIC regulations, as upheld by the U.S. Supreme Court, explicitly state that embedded, or historical, costs shall not be considered when calculating forward-looking UNE costs. (47 C.F.R. Section 51.505(d).)

Generally, we agree with the criticism that SBC-CA's models rely too heavily on SBC-CA's embedded network, both for network configuration and costs. JA contend, and our own analysis shows, that SBC-CA's cost models are replete with embedded inputs and assumptions that are not readily modified to reflect forward-looking costs or configurations. In Sections V.A.1 and V.A.3-5 below, we discuss in detail examples of the embedded network assumptions that we found. In addition, TELRIC requires the calculation of the forward-looking cost over the long run of the total quantity of the facilities and functions attributable to a UNE. (47 C.F.R. Section 51.505(b).) JA claim that SBC-CA's studies "fail to put the 'T' in TELRIC." (JA, 2/7/03, p. 48.) Indeed, SBC-CA admits that "we don't develop a TELRIC on a total basis." (Workshop Transcript (Tr.), 12/5/02, p. 408.) We found that in some portions of SBC-CA's models, particularly the model for interoffice transport, it was either difficult or impossible to determine and/or modify the total quantity of the facilities or functions upon which the cost modeling was based, as required by TELRIC.20

As we will discuss below, the SBC-CA models merely replicate to a great extent SBC-CA's existing architecture based on historical network design. Overall, we found that we could not make meaningful modifications to many of the SBC-CA model inputs because we could not extract individual inputs from aggregated data, or compare and verify inputs to public information. This prevented us from modifying many of SBC-CA's embedded cost and configuration assumptions, such as loop input assumptions in SBC-CA's loop module known as "LoopCAT", demand assumptions in SBC-CA's interoffice model, and expenses calculated by annual cost factors.21 Although we could modify some of SBC-CA's model inputs, we eventually came to many "dead-ends" and found that we were unable to modify important model inputs to our satisfaction.

As a result, SBC-CA's models estimate the cost to rebuild the network SBC-CA has in place today, with some changes for forward-looking technology, but not necessarily with the lowest cost network configuration. In short, we conclude that the SBC-CA models do not meet the FCC's TELRIC standard and the structural problems inherent in the models do not allow sufficient modification to overcome these flaws. We will now discuss the specific problems that we encountered in each of the SBC-CA models.

In reviewing SBC-CA's LoopCAT module, we found we agreed with many of the parties' criticisms that it does not conform to TELRIC requirements to reflect forward-looking costs based, in part, on the lowest cost network configuration. Below, we discuss these criticisms, which principally relate to LoopCAT's reliance on embedded network data, its design point calculation, and additional modeling characteristics that we were unable to modify relating to multiple dwelling units, maintenance expenses, and model integration.

There is no dispute that LoopCAT relies extensively, if not exclusively, on costs and facilities derived from SBC-CA's current network. SBC-CA's witness Sneed gives an overview of SBC-CA's modeling approach and describes how "[t]he investments and network characteristics are based on the actual network in place necessary to serve [SBC-CA's] customers, modified where needed to incorporate forward-looking technology." (SBC-CA/Sneed Decl., 10/18/02, p. 4.) Sneed describes how LoopCAT uses annual cost factors to convert investments into annual costs. As Sneed states, "These factors are based on the costs that [SBC-CA] actually incurs, as these are the best indicator of the forward-looking costs that will be experienced in a network serving California." (Id., pp. 4-5.)

JA criticize LoopCAT's reliance on embedded data, including outside feeder plant routes, plant mix, unit costs of construction, cable sizing, fill factors, and installation costs. According to JA:

The use of embedded data ensures that [SBC-CA] will not model an efficient network, as prescribed by TELRIC, but rather will propose substantially inflated costs. For example, [SBC-CA's] reliance on embedded data for unit costs of construction ignores the economies of scale inherent in the TELRIC "total demand" approach, thereby significantly overstating costs. Similarly, [SBC-CA's] reliance on embedded data causes the inclusion of many undersized pieces of equipment in the network, rather than recognizing that today's demand can be served by far fewer, larger sizes of cable, DLC terminals and FDIs. Thus, again, [SBC-CA] ignores economies of scale that would be inherent in a TELRIC-compliant calculation. (JA, 2/7/03, p. 72.) (Footnotes omitted.)

For example, JA and ORA/TURN contend that LoopCAT's embedded cabling characteristics reflect an aggregation of incremental loop construction over many years, rather than a forward-looking design with cable sized to meet total demand. JA claim that LoopCAT models two 100-pair cables where an engineer would place one 200-pair cable at a lower cost if she were rebuilding the network today to serve current demand. (JA/Donovan-Pitkin-Turner Decl., 2/7/03, para. 25-27.) Thus, JA claim that LoopCAT fails to reflect the fact that today's demand can be served more efficiently and with greater economy of scale through the use of larger equipment. (Id.)

Similarly, ORA/TURN's witness Roycroft explains that engineering cost models typically use cable sizing guidelines to identify the capacity of cables needed to provide an efficient network design and a reasonable level of spare capacity. LoopCAT, however, does not use cable sizing conventions that would permit the model to optimize the design of its network. (ORA/TURN/Roycroft Decl., 2/7/03, pp. 27-29.) Instead, LoopCAT relies on a mix of embedded outside plant design and hypothetical plant design, neither of which reflects a forward-looking approach. Roycroft alleges that even though users can adjust LoopCAT's fill factors, doing so does not modify the inventory of cables deployed, and the user is asked to assume that SBC-CA's existing network cabling reflects optimum design. (Id., p. 29.) In other words, LoopCAT is structurally incapable of modeling an efficient, forward-looking network, and users cannot modify the model's assumptions to alter fundamental design assumptions. (ORA/TURN, 2/7/03, p. 8.) ORA/TURN contend that:

[SBC-CA] is attempting to turn the world on its head by claiming that a cost model that is based on embedded costs is a forward looking model, and urging the Commission to reject the [HM 5.3] model because it does not employ an embedded costing approach specifically rejected by the FCC. (ORA/TURN, 3/12/03, p. 5.)

Moreover, JA contend that SBC-CA's annual cost factors, or "linear loading factors," which are used throughout LoopCAT to calculate investment costs to engineer, furnish, and install (EF&I) loop facilities, violate TELRIC because they are based on installation activities related to SBC-CA's embedded equipment and embedded network design, and they cannot be properly audited. (JA, 2/7/03, p. 77.) According to JA:

...it is impossible to identify the costs associated with a particular piece of equipment because the linear loading factor is a purported average relationship between embedded installation cost and embedded material cost derived from overly broad categories of equipment. (Id., pp. 77-78.) (Footnote omitted.)

JA maintain that loading factors based on historic data can be problematic because the relationship between material investment and installation activities from historic data may not reflect forward-looking practices. (JA/Donovan-Pitkin-Turner, 2/7/03, paras. 97-98.) Moreover, loading factors can distort installation cost differences based on material prices. In other words, loading factors can make it appear that installation costs rise as material prices rise. (Id. para. 100.)
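
The distortion JA describe can be illustrated with a simple sketch. The 1.5 loading factor, the function names, and the material prices below are illustrative assumptions, not values from SBC-CA's studies; the point is only that when installed cost is derived as a fixed multiple of material cost, the implied installation cost rises whenever material prices rise, even if the installation work itself is unchanged.

```python
# Sketch of how a linear loading factor ties estimated installation cost to
# material price. The factor and prices are illustrative only.

LOADING_FACTOR = 1.5  # hypothetical ratio of installed (EF&I) cost to material cost

def installed_investment(material_price: float) -> float:
    """EF&I-style estimate: installed cost = material cost x loading factor."""
    return material_price * LOADING_FACTOR

# The same physical installation task, priced with cheaper vs. costlier cable:
for price in (100.0, 140.0):
    total = installed_investment(price)
    implied_install = total - price  # portion attributed to installation labor
    print(f"material ${price:.2f} -> implied installation ${implied_install:.2f}")
# Implied installation cost rises from $50 to $70 solely because the material
# price rose, which is the distortion described above.
```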

Our review of LoopCAT confirms that it contains embedded data that SBC-CA derived from its current network experience and it is not possible to modify many aspects of LoopCAT to test forward-looking assumptions or differing network configurations.

First, we find that LoopCAT uses embedded cabling characteristics rather than cable sizing conventions. As ORA/TURN point out, the inventory of cables is a fixed input built on the assumption that existing cabling is optimal. SBC-CA has not met its burden of proving that its existing cable inventory, which reflects incremental growth in the network over many years, is optimal if the network were rebuilt today to meet current demand and reasonably foreseeable growth.

We agree with ORA/TURN and the Joint Applicants that the FCC has made clear it rejects embedded cost approaches to modeling. In defining TELRIC, the FCC spoke of "designing more efficient network configurations" and a forward-looking cost methodology wherein a "reconstructed local network will employ the most efficient technology." (First Report and Order, para. 685.) In the FCC's brief defending TELRIC to the Supreme Court, the FCC stated:

The incumbents appear to be proposing a methodology based on "actual" cost in today's market, of duplicating "actual" existing networks in all physical particulars - or, stated differently, the "application of up-to-date prices to out-of-date properties." Economists, including those upon whom the incumbents rely, uniformly agree that such a measurement is "economically meaningless." The FCC considered, but rejected, such an approach as "essentially an embedded [i.e., historical] cost methodology," which would produce "prices for interconnection and unbundled network elements that reflect inefficient or obsolete network design and technology." (Reply Brief of the Petitioners United States and the FCC, Verizon v. FCC, July 2001, pp. 6-7 (citations omitted), as cited by ORA/TURN/Roycroft, 3/12/03, p. 9.)

We find that LoopCAT's reliance on embedded cable characteristics, and the user's inability to modify this information because it is embedded in the "preprocessor" files to LoopCAT, render the model incapable of adequately estimating forward-looking costs and directly contradict FCC guidance that TELRIC should assume reconstruction of the network, based on existing wire centers, in a least-cost configuration.

Second, we find that LoopCAT's extensive use of factors prevents us from making meaningful modifications to LoopCAT to test varying input assumptions. Specifically, we could not extract individual inputs from LoopCAT's aggregated annual cost and linear loading factors, or compare and verify individual inputs to public information. While SBC-CA's filings and workpapers traced input costs to SBC-CA's internal accounting codes, we could not match this internal accounting data to SBC-CA's publicly available cost data, i.e. ARMIS22 filings. Thus, we are asked to rely on SBC-CA's historical accounting information without any ability to compare it to public information to verify its reasonableness.

In certain cases, the aggregation of inputs into factors, which are used liberally throughout LoopCAT, means we are not able to dissect the various factors into component pieces to isolate, for example, installation times, crew sizes, or material prices. Hence, we cannot fully understand how SBC-CA derived its investment costs or make meaningful modifications to these factors. For example, LoopCAT uses EF&I factors for pole, conduit, and cable installation, which are critical elements in modeling the loop network. Indeed, SBC-CA criticizes HM 5.3 for its various inputs relating to pole, conduit, and cable installation. Despite criticizing the HM 5.3 model inputs, SBC-CA cannot show how the inputs in LoopCAT compare to those in HM 5.3, particularly for installation times, crew sizes, and material prices. Ultimately, we are asked to accept the factors that SBC-CA has created from its actual data without knowing the assumptions embedded in them, and without that knowledge we cannot test the sensitivity of the model with a changed input.

Another example where we disagreed with SBC-CA's input assumptions involves structure sharing percentages.23 Specifically, we wanted to modify LoopCAT's structure sharing percentages to match those used by the FCC in its Universal Service Inputs Order.24 We found that it was not possible to isolate and modify the structure sharing rates that SBC-CA had built into its loading factors for conduit and cable investment. Despite criticism of its input assumptions, SBC-CA states that:

[SBC-CA's] structure sharing factors capture the efficient amount of structure sharing taking place in [SBC-CA] California's network today. [SBC-CA] properly assumes that the current rate of facilities sharing will continue into the future and be equivalent to the rate of sharing in a forward-looking environment. (SBC-CA, 3/12/03, p. 40.)

Noticeably absent from this rebuttal is any indication of how to determine the structure sharing percentages that are embedded in SBC-CA's models. Essentially, we are asked to accept LoopCAT's structure sharing percentages without knowing what they are, or being able to modify them.

Even though SBC-CA faced criticism from other parties for its failure to identify key input assumptions,25 it did not provide assistance in its rebuttal comments to decipher its various factors and inputs. Instead, it repeated its assertions that its input assumptions regarding network characteristics represent the most sensible measure of forward-looking network characteristics. (SBC-CA, 3/12/03, p. 9. See also Id., p. 40 and 42.) SBC-CA makes this assertion without delving into an explanation of how to decipher its input assumptions to identify crew size, installation time, or material prices. Consequently, we cannot compare, for example, installation crew sizes in the model to what SBC-CA uses today because the data is too aggregated and SBC-CA does not offer information on current practices. Thus, the SBC-CA data has been aggregated to such an extent that we are unable to isolate discrete inputs and determine their validity.

SBC-CA argues that the Commission should accept its modeling approach based on actual costs and factors because its current network is forward-looking. SBC-CA claims that because it has been operating under incentive regulation for over ten years, it has a strong incentive to make economically efficient choices throughout its network, such as in the amounts of spare capacity in its network. (SBC-CA/Tardiff, 2/7/03, p. 9.) SBC-CA contends that when designing a forward-looking network, it is far better to use a model that reflects actual customer locations, actual cable placements, actual employee needs and work times, and the actual size and capacity of the network. (SBC-CA, 2/7/03, p. 7.)

We do not find this argument convincing for several reasons. First, as we have just described, the parties and Commission staff were unable to decipher SBC-CA's various factors to understand what SBC-CA used for its "actual" inputs. Although SBC-CA heavily criticized the inputs in HM 5.3 regarding installation times, crew sizes, and material prices, we cannot compare the HM 5.3 inputs to what SBC-CA assumed for these same items because they are aggregated into cost factors. This means that we cannot test the sensitivity of the model with a changed input and we cannot compare or replace SBC-CA's inputs with other public information.

Second, we find it too simplistic for SBC-CA to assert that the current network has already achieved all efficiencies that are possible, particularly when it did not provide examples so that we can compare actual install times or material costs from its current operations with those built into the SBC-CA models. SBC-CA aggregates current network information into large bundles of inputs and then claims that these input bundles must be correct because they are based on actuals. SBC-CA's witness Tardiff makes high-level comparisons between SBC-CA's current operating costs and HM 5.3 results to attempt to show that SBC-CA's current costs are far different from what HM 5.3 has modeled. (SBC-CA/Tardiff, 2/7/03, p. 19.) We find these comparisons meaningless because we cannot make direct comparisons between SBC-CA's inputs and those used in HM 5.3.

Furthermore, we find that LoopCAT's loop network configuration is not forward-looking because it combines existing feeder lengths with an approximated loop distribution length. SBC-CA claims that "actual lengths of loops in the networks are used to calculate Loop TELRICs," because loop information is pulled from SBC-CA's Loop Engineering Information System (LEIS) database containing 17 million records, and "LEIS captures the true distances between a customer's premises and Pacific Bell's central offices..." (SBC-CA/Smallwood, 10/18/02, p. 10.) We agree with ORA/TURN and JA that SBC-CA's claim that loop lengths are based on actual distances is misleading. In actuality, LoopCAT does not model loops equivalent to actual loop lengths that exist today, but approximates one distribution loop length for each distribution area based on an engineering concept known as the "design point." Essentially, LoopCAT assumes all loops in a distribution area are one-half the length of the longest distribution loop segment that might be built in the next twenty years.

SBC-CA does not explain in its opening comments how it uses the design point to estimate loop lengths. It was only after JA criticized the design point estimation technique that SBC-CA offered the following brief explanation of the design point concept:

[SBC-CA] estimates its distribution length based on the actual design point information that is contained in its database. The design point reflects the longest possible distribution length in a distribution area. [SBC-CA] makes the reasonable assumption that customers will be distributed throughout a distribution area, and based on that assumption, uses half of the design point length as an estimate of the average distribution length in the area. (SBC-CA/Smallwood, 3/12/03, pp. 66-67.)

At technical workshops in June 2003, SBC-CA further explained the design point and how it was used to approximate loop lengths. In response to questions from Commission staff during the workshops, SBC-CA's witness Smallwood explained that loop lengths in LoopCAT were estimated based on adding actual feeder lengths and one-half of the design point distance. (Workshop Tr., 6/26/03, p. 809.) SBC-CA's loop planning guidelines define design point as "The longest loop in any plant segment, expressed in feet from the CO." (SBC-CA Errata, 5/1/03, LROPP guidelines, p. 103.) These same guidelines explain up front that "[p]lans should be based on growth expectations for the next 20 years." (Id., p. 3.) SBC-CA witness McNeill clarified that the "design point is that existing or potential customer location in the distribution area that's the furthest away from the serving-area interface." (Workshop Tr., 6/26/03, p. 811, emphasis added.) According to McNeill, potential customer locations are projected by SBC-CA engineers based on building permits or discussions with planning commissions and developers. (Workshop Tr., 6/26/03, p. 812.) McNeill hypothesized that 75% of the loops used in the design point calculation are actual customers, and 25% are potential customers. (Workshop Tr., 6/26/03, p. 838.) In other words, LoopCAT assumes all loops in each distribution area are the same length, i.e., one-half of the maximum projected loop distribution segment, based on a twenty-year growth forecast of the longest potential loop. Thus, LoopCAT models all customers in a given distribution area as if they are all exactly the same distance from the central office, and does not employ any weighting or other criteria to assume a varied distribution of loop lengths.
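
For clarity, the estimation approach described by SBC-CA's witnesses can be reduced to a short sketch. The function name and the distances below are hypothetical and are used only to show that every loop in a distribution area receives a single modeled length equal to the feeder length plus one-half of the design point distance.

```python
# Sketch of the loop-length approximation described at the workshops:
# modeled loop length = actual feeder length + one-half of the design point
# distance for the distribution area. All distances are illustrative.

def modeled_loop_length_ft(feeder_length_ft: float, design_point_ft: float) -> float:
    """Single estimated length assigned to every loop in the distribution area."""
    return feeder_length_ft + 0.5 * design_point_ft

# Hypothetical distribution area: 9,000 ft of feeder and a 12,000 ft design
# point (the longest existing or potential distribution loop over a 20-year plan).
print(modeled_loop_length_ft(9_000, 12_000))  # -> 15000.0 ft for every customer
```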

There are three major problems with SBC-CA's use of the design point to calculate loop lengths. First, the use of the design point means that loop lengths in LoopCAT are not based exclusively on actual loop lengths, but on an undisclosed engineer's view of possible future loop lengths based on a twenty-year growth forecast. We find that a forecast period of twenty years is too long for the purposes of this TELRIC costing exercise and is not reasonable. A twenty-year forecast cannot be construed as "reasonably foreseeable short term growth," which is the standard the FCC has used in its own modeling efforts. (Inputs Order, para. 200.)26

Second, we agree with JA that LoopCAT may not correctly determine cable gauge because its calculations are not based on the longest loop served, but on half that distance. (JA/Donovan-Pitkin-Turner, 2/7/03, p. 38, n. 45.) Because the cable gauge is based on an average length that will be shorter than the length of some actual loops, the cable might not provide adequate service to customers with loops longer than the average. A related criticism is that because LoopCAT uses embedded locations and distances for remote terminals, coupled with a hypothetical "design point" distribution length, the model does not have any logic to recognize that some loops exceed the 18,000-foot restriction on copper length for forward-looking loops. Loops that have copper lengths exceeding 18,000 feet will not work without additional equipment such as load coils, which have not been incorporated into the model. (JA, 2/7/03, p. 74.) Indeed, JA claim, and SBC-CA does not dispute, that approximately 100,000 of the loops modeled in LoopCAT will not operate within SBC-CA's own design principles because they are longer than 18,000 feet. (Id.; see also Workshop Tr., 6/26/03, p. 819.)
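
A short sketch shows the kind of design-rule check that the parties say LoopCAT lacks. The sample loop lengths below are hypothetical; the 18,000-foot copper limit is the design restriction discussed above.

```python
# Sketch of the design-rule check discussed above: copper loops longer than
# 18,000 feet need additional treatment (e.g., load coils) that the parties
# say LoopCAT does not model. The sample loop lengths are hypothetical.

COPPER_LIMIT_FT = 18_000

def exceeds_copper_limit(copper_length_ft: float) -> bool:
    return copper_length_ft > COPPER_LIMIT_FT

sample_loops_ft = [12_500, 17_900, 18_600, 21_300]
too_long = [length for length in sample_loops_ft if exceeds_copper_limit(length)]
print(f"{len(too_long)} of {len(sample_loops_ft)} sample loops exceed the limit")
```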

Finally, we are unable to modify the design point in LoopCAT because we have no record-based information on actual loop lengths, and it is uncertain how we would determine the portion of the design point distance that is based on potential future customers. The design point distance and loop length calculations are part of the "preprocessor" to LoopCAT, which Commission staff is not able to run or modify on its own. In other words, even though we disagree with SBC-CA's design point calculation, we cannot run our own version to eliminate the portion of the loop based on "potential" customers and use only the loop lengths of actual customers today.

As a result, we find that SBC-CA's use of the design point to calculate loop lengths results in a loop network design that is not forward-looking and does not use the lowest-cost network configuration.

We find that LoopCAT does not mirror a forward-looking network because it does not attempt to model multiple dwelling units (MDUs). We agree with JA that LoopCAT inappropriately inflates costs for residential loops by installing network interface device (NID) and drop equipment to terminate six lines for every residence served by SBC-CA in California, rather than modeling the appropriate premise termination equipment for multiple-dwelling units that make up a large percentage of households served in California. By not including the appropriate equipment for MDUs, the SBC-CA model inflates loop costs by assuming each residence requires termination for six lines and that each customer account requires a separate drop. (JA, 2/7/03, p. 17.)

An important feature of calculating total loop cost is the "fill factor," or utilization level that is assumed. Fill factors are discussed in greater detail in the Modeling Inputs Section VI.E below. For now, we note that JA and XO contend that SBC-CA's models inappropriately link fill factors and maintenance expenses, such that whenever a higher fill factor is assumed in the model, maintenance expenses automatically increase as well. (JA/Donovan-Pitkin-Turner, 2/7/03, pp. 208-09.) XO contends the linkage of fill factors and maintenance expenses is a structural flaw in SBC-CA's model that cannot be easily removed because the only way to eliminate this feature requires access to SBC-CA's "Preprocessor" program. (XO, 2/7/03, pp. 10 and 28.)

In Section VI.E.8 below, we evaluate this linkage and determine that it is not appropriate to link fill factors and maintenance expenses. We agree that this is a flaw in SBC-CA's model that we cannot overcome. Although we have run the SBC-CA models with various fill levels, we do not know how to de-link fill factors and maintenance expenses. Therefore, we do not have confidence that the results from our modeling runs with higher fill levels are reasonable, because the higher fill automatically produces higher maintenance expenses.
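
The structural issue can be illustrated with a sketch that contrasts the linked treatment the parties describe with the de-linked treatment we find appropriate. The function names, baseline values, and the proportional linkage below are illustrative assumptions, not the model's actual formulas.

```python
# Sketch contrasting linked vs. de-linked maintenance expense. The baseline
# fill, the expense level, and the proportional linkage are illustrative
# assumptions, not formulas taken from the SBC-CA models.

BASE_FILL = 0.50            # hypothetical baseline utilization
BASE_MAINTENANCE = 1_000.0  # hypothetical annual maintenance at baseline fill

def linked_maintenance(fill: float) -> float:
    """Maintenance scales with fill (the linkage the parties criticize)."""
    return BASE_MAINTENANCE * (fill / BASE_FILL)

def delinked_maintenance(fill: float) -> float:
    """Maintenance independent of fill (the fill argument is intentionally ignored)."""
    return BASE_MAINTENANCE

for fill in (0.50, 0.65, 0.80):
    print(fill, linked_maintenance(fill), delinked_maintenance(fill))
# Under the linked treatment, assuming a higher fill automatically raises the
# modeled maintenance expense, offsetting part of the benefit of higher fill.
```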

JA contend that SBC-CA's cost models are not integrated for 2-wire, DS-1, and DSL loops. Instead, SBC-CA's models calculate costs for these loops on a stand-alone basis. JA contend that this lack of integration distorts costs and violates TELRIC and CCPs 8 and 9, which require consistent treatment of costs across all services and elements. By artificially segmenting its cost studies for basic loops, DS-1 loops and DS-3 loops, SBC-CA ignores the efficiencies of sharing facilities that TELRIC requires. (JA, 2/7/03, p. 76.) For example, JA contend that 2-wire loops and DS-1 loops share the same structure, such as poles, conduits, and trenches. Similarly, 2-wire loops, DS-1 loops, and DSL loops share the same DLC systems, and all services share the same central office facilities. (Id., pp. 75-76.)

We find that SBC-CA's failure to integrate all of its services in its cost studies overstates forward-looking cost by ignoring the fact that several services share much of the same network infrastructure. Through this failure, SBC-CA's models do not reflect the full effect of the economies of scope and scale within SBC-CA's network. We agree with JA that SBC-CA's failure to integrate its various loop models and capture the network effects of this total demand inflates the true per-unit cost of these UNEs and is an impermissible departure from TELRIC principles.

Joint Applicants criticize SBC-CA's switching investment cost module known as "SICAT," contending it is not forward-looking because it uses a short run approach to determine the amount of switching investment. Specifically, JA contend SICAT is based on average purchases over a five year period (1998 through 2002), which, in most cases, involves the higher cost to add a growth line to an existing switch. (JA/Ankum Decl., 2/7/03, para. 112-117.) According to JA, the SICAT model produces a higher short run average cost for switching investment, which is then applied to the capacity to serve the entire network. In this fashion, SICAT overstates long run switching costs. (JA, 2/7/03, p. 42.)

In addition, JA contend that SICAT is not based on California specific switching information, but is instead based predominantly on switching cost investments from the other states in which SBC-CA operates. (Id., pp. 87-88.) Specifically, SICAT develops per line costs based on recent purchases in other states. Thus, in JA's view, SICAT is not sufficiently based on California demand nor does it attempt to identify the number or type of switches necessary to serve California. (JA/Ankum, 2/7/03, paras. 11 and 121-127.) Given SICAT's short term purchasing period and its use of information from other states, JA maintain that SICAT does not model a network designed to meet total demand, but simply calculates a per-line average cost of switches based on non-California data and then improperly uses that average to calculate switch investment.

We find that these two principal disputes with SICAT can be addressed by modifying SICAT inputs. Specifically, in our runs of SICAT we have changed the input assumptions regarding the percentage of new and growth lines that are purchased over the modeling period. This should address JA's concern that SICAT uses too high a percentage of higher-priced growth lines. We are less concerned that SICAT is not based exclusively on California switching information. Our own review shows that SICAT contains a mix of California switching data and pricing information from SBC's multi-state switching contract. In persuading the Commission to reexamine UNE switching rates, JA argued that the multi-state switching contract allows SBC-CA to obtain a better price for its switching purchases than if SBC-CA negotiated and purchased for its California network alone. (Application 01-02-024, 2/21/01, p. 8.) We find this a reasonable assumption and we are not persuaded that SICAT is fatally flawed because it incorporates some non-California switching information.
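
The effect of the new-line/growth-line mix on a per-line switching cost can be seen in a simple weighted average. The per-line prices, shares, and function name below are illustrative placeholders, not SICAT inputs or the values we adopted.

```python
# Sketch of why the new/growth line mix matters: growth lines added to an
# existing switch typically carry a higher per-line price than lines on a
# newly purchased switch. Prices and shares are illustrative only.

def avg_per_line_cost(new_share: float, new_price: float, growth_price: float) -> float:
    """Weighted-average per-line switching investment."""
    return new_share * new_price + (1.0 - new_share) * growth_price

NEW_PRICE, GROWTH_PRICE = 80.0, 160.0  # hypothetical per-line investments

# A mix weighted toward growth lines vs. one weighted toward new lines:
print(avg_per_line_cost(0.20, NEW_PRICE, GROWTH_PRICE))  # -> 144.0
print(avg_per_line_cost(0.80, NEW_PRICE, GROWTH_PRICE))  # -> 96.0
```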

SBC-CA uses the "SBC Program for Interoffice and Circuit Equipment" (SPICE) to identify UNE rates for dedicated transport and SS7 links. JA claim that SPICE violates TELRIC because it relies entirely on SBC-CA's embedded network rather than a forward-looking one, and is not constructed based on a determination of total network demand. (JA, 2/7/03, p. 98.) Rather, SPICE determines investment based on a database of the existing circuits in SBC-CA's current network without demonstrating that this embedded network reflects the total demand for each service and all UNEs supported by the network. (Id.)

As a result, JA maintain that SPICE produces flawed results because it proposes costs significantly higher than the prior OANAD rates, without sufficient explanation or justification, during a period of productivity gains in telecommunications technology. (Id., p. 94.) Further, JA contend that SPICE limits the ability of the parties to propose changes to inputs and assumptions in order to modify costs. For example, there is no way to ensure the SPICE model has considered all possible routes that could be the "least-cost" path or to modify the structure sharing assumptions embedded in SPICE. (Id., pp. 96-97.)

SBC-CA responds that SPICE is based on SBC-CA's current total network demand for interoffice transport circuits, and SPICE assumes that a forward-looking interoffice network would mirror SBC-CA's existing network. (SBC-CA, 3/12/03, p. 74.) SBC-CA counters JA's allegation that costs are declining for interoffice transport by noting that per-circuit investments have actually increased slightly from 1998 to 2001. (Id., p. 75.) SBC-CA contends that the "least cost path function" in SPICE reconfigures circuit paths to choose the least-cost route. (Id.)

We agree with JA that SPICE does not meet TELRIC requirements. In our own review of SPICE, we were unable to determine the level of demand that it is designed to serve so that we could vary it and check the model's sensitivity. Essentially, to borrow a phrase coined by JA, SPICE "fails to put the 'T' in TELRIC." At the technical workshops, SBC-CA's witness Cass was questioned extensively on how one could determine the total investment modeled in SPICE and the apportionment of that investment based on demand for certain services. Cass admitted that the SPICE model is not based on total investment. (Workshop Tr., 12/5/02, pp. 439-441.) Cass stated that it was not possible to pull a total investment figure out of SPICE without making demand assumptions because SPICE starts with the network in place today to serve all SBC-CA customers and calculates a per-unit "node investment." (Id., 12/5/02, pp. 437-439.) Cass reiterated that the only way to determine total investment was to make assumptions about demand. (Id., p. 438.) Commission staff also inquired how to segment the interoffice demand between voice services and other advanced and unregulated services that use the interoffice network. SBC-CA's witness stated that it was not possible to segment demand in this manner. (Workshop Tr., 6/24/03, pp. 557-558.) We find it unreasonable that we cannot determine the total investment modeled by SPICE or the demand SPICE is intended to serve.

Essentially, SBC-CA asserts that its embedded network is a priori a forward-looking efficient network. (JA/Mercer-Murphy Decl., 2/7/03, para. 7.) When describing inputs to the SPICE model, SBC-CA states that actual data is used "because the facilities utilization of an efficient firm today is the best estimate available of the facilities utilization that an efficient firm will have in the forward-looking environment." (SBC-CA/Cass Decl., 10/18/02, p. 11.) In other words, SBC-CA claims that the characteristics of its existing network, including its current utilization level and the demand it is designed to serve, are automatically forward-looking, without giving us the ability to know what that demand level might be. We do not accept the unsupported assertion that SBC-CA's current network is automatically forward-looking, particularly when we cannot determine the demand SPICE serves in order to test differing assumptions.

Finally, we agree with JA that SPICE contains other inputs that are difficult to understand or modify. For example, SPICE uses historic structure sharing levels that it does not identify and that cannot be modified. (JA/Mercer-Murphy, 2/7/02, para. 21-23.) In addition, SPICE uses pole and conduit factors derived from SBC-CA's embedded network that correlate cable investment and structure investment (i.e. the more expensive the cable, the more expensive the structure) without evidence to support this correlation. (Id., para. 24-26, and 27.) Further, EF&I factors in SPICE are based on historical network data without showing a direct causal relationship between equipment costs and installation costs. In other words, SBC-CA's EF&I factors assume that more expensive equipment is automatically more expensive to install. (Id., para. 78.) We find these characteristics of SPICE are problematic. Similar to our discussion of the flaws in LoopCAT, we find that the use of factors in SPICE aggregates inputs into bundles that we cannot dissect in order to understand the underlying inputs, compare them to other public information or the inputs SBC-CA criticizes in HM 5.3, or test the effect of different input assumptions.

One area of SBC's SPICE model that we were able to modify was its fiber fill factor. In our model runs of SPICE, we have incorporated an 85% fiber fill factor, as proposed by JA. (Id., p. 43.) JA contend that SBC's default fiber fill factors in SPICE are not forward-looking because they are based on current utilization levels, which are far below the 90% fiber fill level used by the FCC in its modeling and below the levels modeled in other SBC states. (Id., pp. 41-43.) We adopt the 85% fiber fill factor, noting that the FCC modeled an even higher fill level as forward-looking.

As we have already discussed, SBC-CA uses Annual Cost Factors (ACFs) to convert the investments in its models into annual costs and expenses. In simplest terms, ACFs are ratios of capital costs and operating expenses per dollar of plant investment, built on the assumption that capital costs and expenses have a direct relationship with investments. (SBC-CA/Cohen, 10/18/02, p. 2.) There are four types of ACFs in SBC-CA's cost studies: 1) capital cost factors, 2) operating expense factors, 3) investment factors, and 4) inflation factors. JA and other parties provide numerous criticisms of the ACFs and expense calculations in the SBC-CA models, which we now describe.
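
The basic mechanics of an ACF can be stated in a few lines. The function name, the factor values, and the investment amount below are illustrative assumptions, not SBC-CA's actual factors.

```python
# Sketch of how an annual cost factor (ACF) converts plant investment into an
# annual cost: annual cost = investment x ACF, where each ACF is a ratio of
# capital cost or operating expense per dollar of investment. Values are
# illustrative, not SBC-CA's.

def annual_cost(investment: float, capital_acf: float, expense_acf: float) -> float:
    """Annual capital cost plus operating expense implied by the factors."""
    return investment * (capital_acf + expense_acf)

# Hypothetical example: $1,000 of cable investment, a 12% capital factor,
# and an 8% operating expense factor.
print(annual_cost(1_000.0, capital_acf=0.12, expense_acf=0.08))  # -> 200.0
```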

JA criticize SBC-CA's cost model for its use of ACFs to calculate the expense portion of UNE costs. JA claim that these ACFs contain numerous computational errors and incorrectly assume that SBC-CA's 2001 ARMIS expense data, on which the ACFs are based, is efficient and forward-looking. (JA, 2/7/03, p. 101.) JA allege that SBC-CA failed to make forward-looking adjustments to its historical expense data to reflect future savings from potential technological innovations and cost savings from corporate mergers. (Id., p. 105.)

XO joins JA in describing numerous criticisms of SBC-CA's modeling factors. As an example, XO claims that SBC-CA's building expense factor is exceptionally high and shows a 120% increase from 1999 to 2001, while ARMIS data does not show an acceleration of investment in buildings by SBC-CA. (XO, 2/7/03, p. 37.) JA contend that SBC-CA inappropriately assumes that all of its embedded building space for central offices and other buildings should be assigned to UNEs and ignores its OANAD admission that forward-looking central office buildings require less space than SBC-CA's historical building requirements. (JA/Brand-Menko, 2/7/03, pp. 48-49.) SBC-CA also ignores the fact that much of SBC-CA's embedded central office space is being paid for as part of collocation charges. (Id., p. 48.)

Generally, SBC-CA responds that it is reasonable to assume that its baseline current expense and investment data reflect those of an efficient provider given the discipline imposed on SBC-CA by regulatory, shareholder, and competitive pressures. (SBC-CA, 3/12/03, p. 28.) Thus, any adjustments to current expenses would be speculative. Further, SBC-CA maintains that with regard to land and buildings, it cannot assume a constant stream of collocation revenue in the future, so it would be inappropriate to include this in the model. (SBC-CA/Makarewicz, 3/12/03, p. 23.)

Both models use factors to estimate expense levels based on investments. We do not find that SBC-CA's use of a factor approach is, by itself, a flaw. We are not as troubled by SBC-CA's use of embedded ARMIS data to calculate expenses as we are by the fact that the factors do not allow us to isolate and understand individual input assumptions, or compare and verify inputs to public information such as ARMIS, or to the inputs that SBC-CA criticizes in HM 5.3. Once again, as in LoopCAT and SPICE, we find ourselves having to rely on SBC-CA's aggregation of its historical accounting information into factors without understanding individual input assumptions. For example, XO and JA both raise serious doubts about SBC-CA's land and building expense factor. We find it reasonable to adjust historical land and building expenses to incorporate forward-looking space requirements and an allocation of revenues from collocation. However, we were unable to make these adjustments to SBC-CA's land and building expense factor because we could not compare SBC-CA's internal accounting information to publicly available ARMIS data to aid in these adjustments. Therefore, we reiterate our finding that the ACFs SBC-CA uses to estimate expenses cannot be disaggregated to understand the underlying inputs, to compare them to other public information, or to modify them to test the effect of different assumptions.

JA and XO maintain that because SBC-CA has abandoned the methodology used to derive costs in the prior OANAD proceeding, the newly derived costs are not coordinated with the prior cost study. In particular, certain expense categories from the prior OANAD proceeding are now used to develop SBC-CA's annual cost factors, even though these same expense categories were used to derive the 21% shared and common cost mark-up percentage in the prior OANAD proceeding. (JA/Brand-Menko, 2/7/03, pp. 40-44.) According to JA, SBC-CA's witness Cohen confirmed that no adjustments were made to SBC-CA's ACFs to remove shared and common costs. (Id., p. 42.) Thus, JA and XO contend that SBC-CA's current cost studies include some portion of shared and common costs, and therefore, double counting occurs when the 21% shared and common cost markup is added to these new UNE costs. (JA, 2/7/03, p. 53; XO, 2/7/03, p. 42.) XO contends that if the Commission employs SBC-CA's cost models to set UNE prices, it cannot use the existing 21% shared and common cost markup, but must undertake a review of the markup separately. (XO, 2/7/03, p. 46.)

In response, SBC-CA confirmed that "[SBC-CA] employed its standard approach for deriving ACFs without explicitly analyzing revised [shared and common] costs because those are not at issue in this proceeding." (SBC-CA/Makarewicz, 3/12/03, p. 22.) SBC-CA maintains that no parties have done an analysis to confirm that shared and common costs are included in SBC-CA's ACFs. SBC-CA contends it is equally possible that costs not recovered by the shared and common cost markup were inadvertently excluded from the ACF analysis and are not recovered anywhere. (Id.)

We agree with JA and XO that there is reason to be concerned whether the current cost studies, which use a different methodology than the prior OANAD cost studies, may incorporate shared and common expenses that are already accounted for in the 21% markup. There is no dispute that SBC-CA has used a different cost methodology in this proceeding as compared to the prior OANAD proceeding, and SBC-CA confirms that it did not attempt to reconcile the shared and common costs currently collected through the 21% markup with the direct UNE costs calculated through the ACF study it proposes here. A lack of analysis and hard evidence of double-counting does not mean that the potential for it does not exist. During a deposition, SBC-CA witness Smallwood testified that SBC-CA's multi-state cost study was designed to comply with FCC directives and recover as much of a UNE's direct incremental cost as possible to reduce common costs. JA are correct that this is a different approach than was used in the prior OANAD proceeding, where the Commission noted that the OANAD treatment of shared and common costs was contrary to the FCC directive. (JA/Murray Decl., 10/18/02, pp. 34-35, citing D.98-02-206, mimeo., at 18, n. 24.) This means that SBC-CA's proposed UNE costs may now categorize some costs as direct UNE costs that had been considered shared and common costs when the 21% markup was determined in the prior OANAD proceeding. Smallwood admits that SBC-CA made no adjustments to the annual cost factors in its new cost studies to recognize costs that are included in the existing shared and common cost markup. (XO, 2/7/03, p. 42.) Thus, it is reasonable to conclude that SBC-CA's ACFs contain some portion of shared and common costs.
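
The double-counting concern can be illustrated with hypothetical figures. The direct cost and the amount of shared cost assumed to remain in the ACFs below are illustrative; the sketch shows only that any shared and common dollars left in the direct cost are recovered again when the 21% markup is applied.

```python
# Sketch of the double-counting concern: shared and common dollars left in the
# direct UNE cost are recovered once directly and again via the 21% markup.
# The direct cost and embedded shared-cost amount are illustrative assumptions.

MARKUP = 0.21  # the existing shared and common cost markup

def priced_une(direct_cost: float) -> float:
    return direct_cost * (1.0 + MARKUP)

truly_direct = 10.00    # hypothetical direct UNE cost with no shared costs
embedded_shared = 0.50  # hypothetical shared/common dollars left in the ACFs

clean = priced_une(truly_direct)                           # -> 12.10
contaminated = priced_une(truly_direct + embedded_shared)  # -> 12.705
print(round(clean, 3), round(contaminated, 3))
# The $0.50 of shared cost raises the price directly and is marked up as well,
# even though the 21% markup is already intended to recover such costs.
```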

XO contends that if the Commission adds a 21% markup to the UNE costs resulting from SBC-CA's models, it should either adjust the ACFs to mitigate the impact of this double counting, or undertake a new markup calculation. We have stated several times that we would not review the 21% markup in this limited proceeding.27 If we were to make any adjustments to SBC-CA's ACFs to remove shared and common costs, they would be highly speculative. We find that this is yet another area of the SBC-CA models that we are not able to modify to our satisfaction in order to derive reasonable UNE costs.

Moreover, JA contend that SBC-CA has not eliminated certain expenses from its ACF cost study such as i) non-regulated expenses unrelated to UNEs, ii) affiliate transaction expenses, iii) DSL-related Project Pronto28 expenses, and iv) annual amortization of post-retirement benefits, known as the "Transitional Benefit Obligation" (TBO).29 JA maintain that all of these expenses are inappropriate to include in a study of forward-looking recurring UNE costs because they are either not current operating costs or are costs related to unregulated activities. (JA, 2/7/03, p. 104.)

JA contend that SBC-CA included investments and expenses related to its non-regulated activities when it developed its ACFs. These non-regulated activities include services related to customer premises equipment, inside wire maintenance plans, and billing and processing of third-party customer bill payments. (JA/Brand-Menko, 2/7/03, pp. 25-26.) JA state that they verified these unregulated expenses are included by comparing ARMIS reports for regulated and unregulated expenses with the inputs used in SBC-CA's ACF cost study. (Id.)

SBC-CA responds that it appropriately relied on total expenses and investments when calculating per-unit expense factors. (SBC-CA/Makarewicz, 3/12/03, p. 9.) For example, SBC-CA claims that its cable maintenance cost per unit will remain the same regardless of what service is using the cable. SBC-CA provides the analogy of a salesperson who has a company car that is used both for business (regulated) and personal (non-regulated) purposes. SBC-CA reasons as follows:

Accurate per mile maintenance expenses for the vehicle are calculated using data for its entire use rather than the "arbitrary" distinction between business use and personal use. Likewise, it is appropriate for [SBC-CA] to use total account balances for plant expenses and investments to calculate its ACFs, and to apply the same ACFs to measure the costs for any services/elements whose provisioning relies on plant investment and expense. (Id., p. 10.)

We disagree with SBC-CA's reasoning. We doubt that the company in SBC-CA's example, which issues a car for business use, would happily pay higher maintenance costs if the salesperson used the company car for excessive personal use. Likewise, we do not agree that expenses SBC-CA incurs for its unregulated businesses, such as inside wire maintenance, or billing services to third parties, should be considered when determining the expenses related to its UNE operations. SBC-CA admits that it has included expenses related to unregulated activities in developing its ACFs. We have no way to adjust the ACFs to remove these inappropriate expenses.

JA contend that SBC-CA inappropriately includes expenses related to transactions with its affiliated companies in its ACFs, both for services sold by SBC-CA to its affiliates, and for purchases SBC-CA made from its affiliates. JA note that expenses related to affiliate transactions have increased four-fold since the merger of Pacific Telesis and SBC Communications in 1997. (JA/Brand-Menko, 2/7/03, pp. 28-30.) JA maintain that expenses related to providing services to SBC affiliates should not be attributed to California UNEs. (Id., p. 32.) Further, costs for services purchased from affiliates need to be adjusted to forward-looking levels because SBC-CA pays for what it purchases from affiliates at "fully distributed cost" which is an embedded cost methodology. (Id. p. 34.)

SBC-CA does not deny that expenses related to affiliate transactions are included in its 2001 expense information that was used to develop its ACFs. SBC-CA contends no adjustment is needed for services purchased from affiliates, such as payroll processing, procurement, fleet operations, and information technology, because SBC-CA pays the lower of fully distributed cost or fair market value for its purchases. (SBC-CA/Henrichs, 3/12/03, p. 15.) SBC-CA does not respond to the JA argument that expenses for items sold to affiliates are not related to provisioning UNEs. We find that it would be unreasonable for SBC-CA's ACFs to include expenses related to services SBC-CA has performed on behalf of its affiliates and which do not relate to the provisioning of UNEs. SBC-CA has not met its burden of proof to assure us that these costs have been adequately removed from its ACFs. Further, we have no means of adjusting SBC-CA's ACF cost study to make this adjustment.

With regard to Project Pronto, JA and XO claim that SBC-CA's ACFs include a portion of Project Pronto costs incurred in 2001 that relate to development of SBC-CA's broadband network. These costs are not required for provisioning of UNEs. (JA/Brand-Menko, 2/7/03, pp. 58-62; XO, 2/7/03, p. 17.) According to JA and XO, Project Pronto is a major technological upgrade, and SBC-CA is incurring higher-than-normal costs in the early years of this upgrade, with the expectation of cost savings later. JA admit that a portion of Project Pronto costs are for overall network efficiency and should be included in ACFs, but they contend this amount is overstated because SBC-CA made no adjustment to 2001 expense levels to acknowledge high start-up expenses and account for future cost reductions from Project Pronto. (JA/Brand-Menko, 2/7/03, para. 125.) XO contends that the expense data SBC-CA uses in its cost studies reflects those early years of Project Pronto implementation, but fails to consider future Project Pronto savings. JA claim that Project Pronto costs are inextricably melded into SBC-CA's 2001 expense information, and SBC-CA admits it cannot identify separate Project Pronto costs.

SBC-CA does not deny that 2001 expenses used for ACFs include Project Pronto expenses, and that it cannot separate these costs from the expenses it used for its ACFs. Rather, SBC-CA makes the argument that the same equipment used for Project Pronto can also provide voice service. Thus, expense levels for this equipment are the same whether it serves voice or DSL service. (SBC-CA/Makarewicz, 3/12/03, p. 18.) On the charge that future Project Pronto savings are not incorporated, SBC-CA says that it has already incorporated Project Pronto savings by modeling fiber loop plant and lower maintenance factors associated with fiber. (Id., pp. 17-20.)

We find that SBC-CA has not met its burden to show that Project Pronto expenses were appropriately accounted for when developing its ACFs. SBC-CA admits it cannot separate Project Pronto expenses from its total 2001 expense information used to develop ACFs. We agree with JA that some Project Pronto costs likely contribute to overall network efficiency, but not necessarily all of them. SBC-CA offers us no ability to allocate the expenses between its voice and broadband services. We do not agree with SBC-CA's apparent assumption that all Project Pronto costs should be allocated to UNEs. SBC-CA argues that future efficiencies for Project Pronto are accounted for in its models through lower fiber maintenance ACFs. We do not find this argument convincing because, without an allocation of Project Pronto expenses between voice and broadband services, we cannot be assured that SBC-CA's fiber maintenance expenses would not be lower still if some expenses were properly attributed to the broadband network. Thus, SBC-CA has not met its burden of justifying these expenses as forward-looking. We are unable to adjust ACFs to remove potentially inflated Project Pronto expenses.

This particular debate is about an accrual for an accounting change that took place over a decade ago. As SBC-CA explains, the TBO accrual is a "liability for future retiree medical costs already earned by current and former employees [as of 1/1/93] but not yet paid...." (SBC-CA/Cohen, 3/12/03, p. 15.) In other words, it is an amortization of expenses SBC-CA would have shown on its books long ago if SFAS No. 106 had been in effect prior to 1/1/93.

JA contend that the TBO represents the amortization of an embedded cost resulting from past activities that are not forward-looking. According to JA, the TBO that was recorded in 1993 has nothing to do with the costs of an efficient carrier today. (JA/Brand-Menko, 2/7/03, pp. 44-47.) SBC-CA contends these are forward-looking expenses that are appropriately treated as shared and common costs. According to SBC-CA, it removed some TBO expenses from its ACF study, but it admits that the proper amount should have been almost three times what it actually removed. (SBC-CA/Cohen, 3/12/03, pp. 19-20.) JA contend that SBC-CA should have removed over $80 million, and the amount SBC-CA admits removing is far below this. (JA/Brand-Menko, 2/7/03, p. 46.)

We agree with JA that this particular TBO accrual is not a cost of current operations, since it essentially catches up for a failure to account for this expense when it was incurred. TBO costs should not be included in SBC-CA's ACFs. The meager record on this issue does not allow us to determine the amount that should be removed from SBC-CA's ACF calculations, because it appears that SBC-CA's TBO amortization for the 1993 accounting change may be commingled with the accruals it makes today for current employees. In addition, because we are not reviewing the shared and common cost markup in this proceeding, we will not address whether TBO costs are appropriate to include in that markup. What we do know is that SBC-CA admits the amount it removed was understated.

In summary, we have found several problems with SBC-CA's ACF cost study. We agree with JA and XO that, in an ideal world, SBC-CA's cost model expense calculations would be revised to remove the expenses described above. When we tried to make these adjustments ourselves, we found that the current record does not allow us to isolate and remove expenses for non-regulated operations, affiliate transactions, Project Pronto, and the TBO.

Finally, JA protest SBC-CA's inflation adjustments to its operating expenses and capital investments. SBC-CA incorporated inflation into its cost modeling using an inflation factor for capital investments based on the Telephone Plant Index (TPI), and an inflation factor for its operating expenses based on the Consumer Price Index-W (CPI-W).30 (SBC-CA/Cohen, 10/18/02, p. 17.)

JA criticize SBC-CA's inflation adjustments based on the TPI and CPI-W as neither specific to SBC-CA nor closely related to the types of costs SBC-CA experiences. (JA, 2/7/03, p. 103.) Further, JA contend that SBC-CA has failed to reflect future productivity improvements and expense savings from such sources as technological innovation and mergers. According to JA, there is extensive regulatory precedent for the concept that inflation should not be incorporated into a cost study unless there is a corresponding offset for productivity. (JA/Brand-Menko, 2/7/03, pp. 86-88.) Specifically, JA note that this Commission's own "New Regulatory Framework" (NRF) decision incorporates both inflation and productivity adjustments to rates. (Id., p. 87, citing D.89-10-031.) JA contend there is significant evidence that any inflation SBC-CA experiences in its costs will be offset by productivity gains that exceed inflation. (Id., pp. 93-95.) According to JA's witness Flappan, BLS data for similar telephone utilities show that worker productivity has exceeded inflation price increases by 3.8% per year on average from 1996 through 2000. When productivity exceeds inflation, costs per labor hour decrease even if nominal wages increase. (JA/Flappan, 2/7/03, p. 30.) Thus, Flappan contends it is unreasonable to assume inflation in labor costs without corresponding adjustments for productivity. (Id.) JA recommend that the Commission either incorporate a negative inflation factor, based on their conclusion that productivity will exceed inflation on a forward-looking basis, or remove inflation factors entirely because SBC-CA did not include any productivity assumptions.

Similarly, XO criticizes SBC-CA's inflation assumptions, noting that SBC-CA's actual operating expense data indicate expenses have fallen rather than risen in line with retail prices in the overall economy. (XO, 2/7/03, pp. 9, 19.) XO objects to SBC-CA's assumption that as telecommunications equipment prices decrease, operating expenses rise. XO says this assumption is contradicted by SBC-CA's actual operating expense data per access line, which show declines of 4.2% per year from 1996 to 2000. (Id., p. 19.)

SBC-CA responds that its use of the TPI and the CPI-W as inflation factors is appropriate because they conservatively estimate the inflation SBC-CA will face in its costs. SBC-CA justifies using the CPI-W by stating that it measures inflation in wages, which are a large portion of SBC-CA's expenses. (SBC-CA/Cohen, 3/12/03, p. 29.) Further, SBC-CA disputes JA's contention that it has left productivity out of its cost studies. SBC-CA contends that it has incorporated productivity in its cost studies by assuming placement of only new technology and applying maintenance factors associated with forward-looking technology. SBC-CA also contends that, by using its latest expense data for its operating expense factors, it has already incorporated the latest gains in personnel productivity. (Id., p. 33.)

We agree with JA and XO that it is improper to include inflation adjustments in the expense data without a corresponding adjustment for productivity. We do not agree with SBC-CA's assertion that it has already factored in productivity simply by using forward-looking technology and by using SBC-CA's 2001 expense information. Investment in equipment of the latest technology does not by itself account for all the productivity gains that the company could achieve in the future. Similarly, the use of 2001 expenses, by definition, does not include future productivity potential. Rather, we find merit in Flappan's arguments regarding productivity. Flappan provides BLS data indicating that worker productivity has exceeded inflation price increases for several years. (JA/Flappan, 2/7/03, p. 30.) We do not find it reasonable that SBC-CA has included inflation price increases in its labor rates, but no corresponding productivity assumptions. As JA have pointed out, other states faced with this same issue, namely Texas, Missouri, and Kansas, have removed the inflation increases under the assumption that productivity offsets inflation. (Id., p. 29.) As in other states, we will make the conservative assumption that productivity will at least equal inflation, even though recent BLS data indicate that productivity has actually outstripped inflation.

Therefore, in our runs of the SBC-CA models, we have removed the inflation component of SBC-CA's capital and operating expense factors, which essentially means that we assume inflation and productivity offset each other. This is consistent with the Commission's assumptions regarding inflation and productivity in our NRF decisions.
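For illustration only, the following sketch uses hypothetical figures of our own choosing (not values from the record) to show why removing the inflation factors is arithmetically equivalent to assuming that productivity gains fully offset inflation:

```python
# Hypothetical illustration: escalating an expense by inflation alone versus
# applying an equal, offsetting productivity adjustment. The rates below are
# assumptions for this sketch only, not values from the record.
base_expense = 100.0      # hypothetical annual expense
inflation = 0.025         # assumed inflation rate (a CPI-W-style factor)
productivity = 0.025      # assumed productivity gain offsetting inflation

inflated_only = base_expense * (1 + inflation)                         # 102.50
offset_adjusted = base_expense * (1 + inflation) / (1 + productivity)  # 100.00

print(inflated_only, offset_adjusted)
```

If productivity in fact exceeds inflation, as the BLS data cited by Flappan suggest, the adjusted figure would fall below the unescalated base, which is why treating the two as equal remains a conservative assumption.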

In conclusion, SBC-CA's LoopCAT model relies too extensively on embedded data that we are unable to modify to our satisfaction. This information includes cabling requirements, loop length forecasts, structure sharing assumptions, and labor installation times and crew sizes which are embedded in various factors. LoopCAT's extensive use of annual cost factors, or "linear loading factors," prevents us from making meaningful modifications to LoopCAT to test varying input assumptions because we cannot extract individual inputs from LoopCAT's aggregated factors, or compare and verify individual inputs to public information.

For switching costs, the flaws noted by JA are not fatal because SBC-CA's SICAT model can be corrected through input changes. On the other hand, we find that interoffice rates calculated by SBC-CA's SPICE model are not based on total demand, and we are unable to make modifications to the SPICE model to remove embedded network assumptions. With regard to expenses, we cannot modify the SBC-CA models sufficiently to remove several categories of expenses. Further, SBC-CA's ACFs are based on historical information that is aggregated into bundles that we cannot dissect in order to understand the underlying inputs, compare them to other public information or the inputs SBC-CA criticizes in HM 5.3, or test the effect of different input assumptions.

Overall, our inability to modify SBC-CA's models in these areas leads us to conclude that the SBC-CA models do not comply with TELRIC because they are merely rebuilding the network that exists today, with some technology upgrades, rather than building a forward-looking network configuration.

Overall, SBC-CA criticizes HM 5.3 as understating SBC-CA's forward-looking cost of providing UNEs because it does not reasonably estimate the quantities or prices of the network facilities needed to build a competing local exchange network. SBC-CA alleges that HM 5.3 is "results-driven" and "manipulated to produce the lowest UNE cost estimates possible." (SBC-CA, 2/7/03, p. 11.) Moreover, SBC-CA claims that HM 5.3 does not overcome the flaws the Commission found with its predecessor, HM 2.2.2, namely the earlier model's faulty representation of distribution plant in low-density areas. Thus, SBC-CA claims that, despite its new customer location and clustering process, HM 5.3 models an inadequate amount of loop plant. (SBC-CA/Tardiff, 2/7/03, pp. 70-71.)

SBC-CA's criticisms of HM 5.3 can be grouped into seven categories. Essentially, SBC-CA contends that HM 5.3: 1) ignores accepted engineering and network design standards, 2) is based on a flawed customer location process, 3) relies excessively on unverifiable "expert judgment," 4) ignores actual demand in its switching and interoffice models and would be unable to provision high speed services, 5) does not provision enough spare capacity, 6) includes unrealistically low expense levels, and 7) fails to provide a test of validity. SBC-CA alleges that, as a result of these flaws, HM 5.3 produces a network that is unrealistic and unreasonable because it has far less outside plant than SBC-CA's actual network today (i.e., fewer distribution areas, fewer distribution pairs, less fiber equipment, fewer trunks, and less interoffice network equipment). (SBC-CA, 2/7/03, p. 20.)

We will address each of these criticisms in turn. As an overview, we find merit in some of SBC-CA's criticisms of HM 5.3, but not in all of them. We find that many of SBC-CA's criticisms can be addressed by input modifications to the model. Where we can modify inputs, we do not agree that SBC-CA's criticisms are insurmountable. Although SBC-CA is critical of inputs relating to engineering and design standards, spare capacity, and expense levels, these are all inputs that we can modify. However, this is not true in other areas. While we do not agree with SBC-CA that the entire customer location process is flawed, neither do we agree with all of the assumptions built into the HM 5.3 geocoding and customer location process. We find that it is not possible to modify this area of HM 5.3 and test various scenarios. In addition, we agree with SBC-CA that many of the "expert judgments" used as inputs to HM 5.3 are questionable, and appear biased to produce low results. We have found that many of these expert judgments can be replaced with assumptions and inputs used by SBC-CA in its own model. But this is not the case in one important area--labor costs. We explain this more fully below. Finally, we find that we cannot overcome criticisms of the HM 5.3 Transport module that it underestimates demand for interoffice transport, may not adequately incorporate optical interface equipment for the provisioning of high capacity services, and is insensitive to demand changes. We now turn to a detailed review of our findings with regard to HM 5.3.

In general, SBC-CA maintains that HM 5.3 is incapable of producing accurate TELRIC estimates because it ignores widely accepted engineering and network design standards, and instead relies upon a series of erroneous engineering assumptions, unrealistic input values, and inappropriate estimating methodologies. As a result, SBC-CA argues, HM 5.3 understates the amount of facilities required to provide service. According to SBC-CA, HM 5.3's principal flaw is its assumption that a brand new, fully functioning, optimal network could be instantly constructed at a single moment in time. (SBC-CA, 2/7/03, p. 13.) SBC-CA maintains that HM 5.3 is based on a fiction that a competitive firm could enter and instantly size and build all of its facilities to accommodate a known snapshot of demand, when in reality, networks are built and have evolved over time to accommodate demand as it grows and shifts. (SBC-CA/Tardiff, 3/12/03, p. 17.) SBC-CA maintains that, by building an abstract network divorced from reality, HM 5.3 focuses only on existing lines and does not account for vacant parcels of land or vacant homes. Therefore, in SBC-CA's view, HM 5.3 does not build to the "ultimate demand" that a real-world carrier would serve. (SBC-CA, 2/7/03, p. 40.) In addition, SBC-CA contends that this assumption of an instantaneous network fails to match the other assumptions in HM 5.3, particularly the relatively long depreciation lives and a low cost of capital assumed by JA. (SBC-CA/Tardiff, 2/7/03, p. 15.)

Further, SBC-CA criticizes the "right angle routing" that HM 5.3 uses to connect customer locations. Rather than connecting customer locations by the shortest straight-line distance, or "as the crow flies," HM 5.3 assumes that customer connections form the two legs of a right triangle, hence the term "right angle" routing. (JA/Mercer, 10/18/02, Attachment RAM-4, p. 36, n. 33.) SBC-CA contends that this routing assumption constructs an outside plant design that is purely hypothetical and fails to reflect SBC-CA's operating realities, where carriers cannot ignore geographic impediments and man-made obstacles such as rivers, lakes, mountains, rights-of-way, and easements. (SBC-CA, 2/7/03, p. 13.) According to SBC-CA, "right angle routing" causes HM 5.3 to understate the amount of plant necessary to serve customers by ignoring geographic and man-made obstacles. (SBC-CA, 2/7/03, p. 48; SBC-CA/Murphy, 2/7/03, p. 53.)

On the whole, SBC-CA alleges that HM 5.3 designs a hypothetical network that only satisfies existing demand at existing locations, excludes the real-world costs of fluctuations in demand from customer growth and churn, and results in a model that produces unrealistic investment levels compared with SBC-CA's actuals. SBC-CA contrasts HM 5.3 results to SBC-CA actual operating results and highlights the following:

· HM 5.3 calculates total investment of $9 billion, but SBC-CA spent $9.6 billion just on plant additions from 1998 to 2001. (SBC-CA, 2/7/03, p. 7.)

· HM 5.3 assumes a network that can be maintained for $0.7 billion, while SBC-CA spent $2.7 billion on maintenance in 2001. (Id.)

· HM 5.3 models 32 million distribution pairs while SBC-CA has almost double this number in its actual network. (Id., p. 20.)

· HM 5.3 creates a network with 11,661 distribution areas whereas SBC-CA has more than 5 times this number in its serving area. (Id.)

We find that SBC-CA's criticisms of HM 5.3 principally highlight questionable inputs that JA have used in HM 5.3, but we do not agree that HM 5.3 violates TELRIC requirements overall. SBC-CA takes issue with how HM 5.3 applies TELRIC to build a network instantaneously to meet current demand. While we agree that it may be unrealistic to assume a network can be constructed overnight, we find that HM 5.3 for the most part follows well-established TELRIC guidance and SBC-CA's criticisms center largely around quarrels with the inputs that are used in the model. We can modify many of the inputs and assumptions in HM 5.3 to address these criticisms. For example, we can ensure that the assumed fill factors provide reasonable spare capacity for growth, we can assume that a carrier will incur a higher cost to install sufficient switching investments because it cannot buy all the lines it will need at the steeply discounted "new" switch price, and we can change labor rates and task times to reflect more realistic equipment installation assumptions and expenses.

We agree with JA that based on established TELRIC rules, HM 5.3 should not build to "ultimate demand." (JA/Donovan, 3/12/03, paras. 191-194.) In its own modeling for federal universal service purposes, the FCC has stated that model inputs should reflect current demand, which it defines to include a "reasonable amount of excess capacity to accommodate short term growth." (Inputs Order, para. 190.) The FCC has explicitly rejected the notion of modeling based on "ultimate demand," because it is highly speculative (Id., para. 201). The FCC stated that "correctly forecasting ultimate demand is a speculative exercise, especially because of rapid technological advances in telecommunications." (Id., para. 200.) JA claim that HM 5.3 includes extra pairs to accommodate additional lines, maintenance and administrative needs, and therefore provides the same level of service as SBC-CA's current network. (JA, 3/12/03, p. 46.) We find that if we run HM 5.3 with an appropriate number of pairs per household and using appropriate fill factors, HM 5.3 accounts for a reasonable level of growth, and sizes the network to provide appropriate service quality and reach potential customers. As the FCC has stated, predicting ultimate demand is a speculative exercise, particularly in today's environment of rapidly changing technologies and demand levels, which SBC-CA acknowledges. (See e.g., SBC-CA, 3/12/03, p. 70, n. 278.)

We do not agree with SBC-CA that the right-angle routing used by HM 5.3 necessarily understates loop plant. SBC-CA relies on the opinion of its expert witness that outside plant is underestimated in HM 5.3, but it has not provided empirical evidence that its actual route distances are greater than those modeled by HM 5.3, or any comparison of the distances modeled in the SBC-CA Models to those in HM 5.3. JA contend that right-angle routing conservatively overestimates loop plant because it uses right-angle rather than straight-line connections. It is logical that loop lengths based on right-angle connections will be longer than straight-line connections because, mathematically, the two legs of a right triangle, taken together, are longer than the hypotenuse. Thus, we find that the right-angle routing used in HM 5.3 is reasonable. Although right angles may not match SBC-CA's actual network routes, it is more realistic to assume right angles than to assume a carrier could build all routes along straight lines.
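This geometric point can be illustrated with a simple sketch using hypothetical coordinates; for any two points, the right-angle connection is at least as long as the straight line between them:

```python
import math

# Minimal sketch (hypothetical coordinates, for illustration only) of the
# routing comparison: the right-angle connection between two points is never
# shorter than the straight line, so right-angle routing cannot shorten the
# modeled route relative to "as the crow flies" routing.
def straight_line_ft(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])    # the hypotenuse

def right_angle_ft(p, q):
    return abs(q[0] - p[0]) + abs(q[1] - p[1])     # the two legs

customer = (0.0, 0.0)
terminal = (3_000.0, 4_000.0)    # hypothetical coordinates, in feet
print(straight_line_ft(customer, terminal))   # 5000.0
print(right_angle_ft(customer, terminal))     # 7000.0, always >= straight line
```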

Indeed, for the distribution portion of the loop, the loops SBC-CA models in LoopCAT do not follow existing routes either. We have more confidence in the loop lengths modeled by HM 5.3, which begin with actual customer locations and use right angles to connect customers within a cluster, than we have in SBC-CA's LoopCAT, which is based on half of the longest distribution loop segment that might be built in the next twenty years. Neither model follows existing routes or places all loop facilities in today's locations, and HM 5.3 makes conservative right-angle assumptions to connect existing customers rather than assuming all loops in a distribution area are one length.

SBC-CA argues that CCP 6 requires the modeling of SBC-CA's "existing or planned" outside plant facilities. Yet, the language SBC-CA quotes has been superseded by the FCC's TELRIC requirements adopted in 1996, describes assumptions for a TSLRIC analysis, and is contradicted by CCP 7, which mandates that costs be forward-looking and "shall not reflect a company's embedded base of facilities." (D.95-12-016, Appendix C, p. 5.) While we agree that the use of SBC-CA's actual right-of-way and plant routes would be a superior modeling technique, neither model has been able to achieve this level of reality. SBC-CA's LoopCAT does not follow CCP 6 either. While SBC-CA models existing feeder routes, this is not true for the distribution portion of the network, where SBC-CA has used the design point to approximate distribution loop lengths. Further, the FCC's TELRIC rules, which were issued after the Commission's CCPs, do not mandate the use of existing outside plant routes, and specifically allow a "reconstructed local network." (First Report and Order, para. 685.) Therefore, we find that, although JA's simplifying assumption of right-angle routing is not based on today's outside plant routes, it most likely increases costs in the model by using longer routes than if customers were connected by straight lines.

We do not agree with SBC-CA that HM 5.3 is automatically flawed because its proposed costs are lower than SBC-CA's actual costs. SBC-CA makes generic statements that the characteristics of its current network best reflect an efficient forward-looking network because SBC-CA has years of experience running a network and has been operating under incentive regulation designed to make its network competitive. SBC-CA's actual costs may not be forward-looking, may be skewed by unusual one-time expenses from that year, or may simply reflect the cost of running a network based on embedded choices that a new carrier would not make. In many ways, we consider SBC-CA's comparisons of model results to its actual network experience irrelevant because its actual costs may not be forward-looking. Further, we find these comparisons less useful because they are often made at a very aggregate level and do not allow us to compare discrete modeling results in an "apples to apples" fashion.

SBC-CA's attempt to argue that HM 5.3 results are unrealistic when compared to SBC-CA's current operations appears to echo the unsuccessful arguments that ILECs presented to the U.S. Supreme Court. The Supreme Court recognized that "the problem with a method that relies in any part on historical cost, the cost incumbents say they actually incur, is that it will pass on to lessees the difference between most-efficient cost and embedded cost." (Verizon, 122 S. Ct. at 1673.) The court flatly rejected the idea of basing UNE costs on costs from SBC-CA's network today.

While SBC-CA criticizes numerous inputs to the HM 5.3 model as highly unrealistic and biased too low, it does not provide specifics on what a more realistic input should be given its own network experience. SBC-CA's main response is that the Commission should use its model instead, rather than amend the inputs in HM 5.3. For example, SBC-CA's witness McNeil criticizes HM 5.3 for what he considers unrealistic assumptions about how fast a crew can place and splice fiber and cable, but he does not provide actual placement and splicing times, only the vague suggestion that the crew size should be larger. (SBC-CA/McNeil, 2/7/03, pp. 46-50.) In contrast, JA witness Donovan defends his input assumptions, and notes that his estimates are actually higher than data from SBC-CA's job cost estimate database. (JA/Donovan, 3/12/03, pp. 24-30.)

A second example involves DLC installation costs. While SBC-CA criticizes HM 5.3's DLC cost assumptions, it cannot justify its own inputs in this area. SBC-CA's witness Palmer states that estimates of DLC installation costs by JA are too low because they are based on Project Pronto estimates and "[t]here is absolutely no relationship between the actual costs incurred today by SBC-CA California to install this equipment and the high-level estimates used in 1999 for business case purposes." (SBC-CA/Palmer, 3/12/03, p. 13.) When questioned at the hearings about SBC-CA's actual DLC installation costs, neither Palmer nor SBC-CA's other loop witness, Ms. Bash, could provide an answer or explain how SBC-CA knew that JA's DLC installation assumptions were too low if they did not know SBC-CA's actual costs. (Hearing Tr., 4/15/03, pp. 572-575.) Later, at a continuation hearing, Bash provided information on actual SBC-CA DLC installations from a sample of eight installations that she selected. (Proprietary Hearing Exhibit (PHE) 109.) SBC-CA admits that the actual costs from Bash's sample are lower than the factors for DLC installation used in LoopCAT. (SBC-CA, 8/1/03, p. 21.) A further sample of 50 installations chosen by SBC-CA and JA also indicates costs from actual DLC installations that are lower than the DLC installation factors used by SBC-CA in its own model. (JA, 8/1/03, Exhibits C-4, C-5.) We give little weight to criticisms of HM 5.3 assumptions when witnesses are unable to provide specifics from SBC-CA's own experiences or explain why modeling inputs differ from actual costs.31

HM 5.3 uses detailed customer location information supplied by SBC-CA to identify SBC-CA's current customer locations and cluster them into distribution areas. This is the foundation for the network that HM 5.3 models and is used to determine the lengths of facilities' routes, how much feeder plant is needed, and the types and amounts of copper cable and support structure. (SBC-CA/Tardiff, 2/7/03, p. 17.) Essentially, SBC-CA contends there are numerous flaws with the geocoding process and customer location database used as an input to HM 5.3, and these flaws violate the Commission's cost modeling criteria and result in loop costs that are too low and loop plant that is not constructed using standard engineering practices.

To understand SBC-CA's criticisms, it is helpful to review the geocoding and clustering process used in HM 5.3. (See JA/Mercer Decl., 10/18/02, Attachment RAM-4, Model Description, Section 5.2 - 5.3.) JA contracted with an outside vendor, TNS, to take SBC-CA's customer location data and "geocode" it by assigning each customer location a precise longitude and latitude. Where SBC-CA's data was incomplete or unreliable, "surrogate" geocoded locations were assigned. TNS then used its proprietary algorithms to group these geocoded customer locations into logical serving areas, or "clusters," based on the category of service appropriate for that customer. The clustering algorithm imposed three critical engineering restrictions: 1) no point in the cluster may be more than 17,000 feet from the center of the cluster, 2) no cluster may exceed 6,451 lines,32 and 3) no point in the cluster may be farther than two miles from its nearest neighbor in the cluster. (Id., RAM-4, section 5.3.2.)

The clustering process produces irregularly shaped groupings of customers in each wire center that JA term "convex hulls," or "clusters." TNS then determines the "centroid" of the cluster, which is the midpoint of the line connecting the two farthest points in the cluster. (Id., RAM-4, Section 4.5, n. 26.) HM 5.3 uses the cluster centroid as the location for the feeder termination, or serving area interface (SAI). In addition, TNS calculates a "strand distance" which is a measurement of the route mileage required to connect all customer locations to each other. The strand distance is based on a "minimum spanning tree" (MST) theory and assumes "right-angle" routing between customer points. (Id., RAM-4, Section 8.4, n. 47.)
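For illustration only, the sketch below restates, with hypothetical geocoded points, the cluster limits and the centroid and strand-distance calculations described above. It is not TNS's proprietary algorithm, which is not in the record; the point values and the simple spanning-tree routine are our own illustrative choices:

```python
import math
from itertools import combinations

# Hedged sketch, using hypothetical points, of the stated cluster limits and
# the two derived quantities: the cluster "centroid" and the MST "strand
# distance" with right-angle routing. Not TNS's proprietary algorithm.
MAX_FEET_FROM_CENTER = 17_000
MAX_LINES_PER_CLUSTER = 6_451
MAX_FEET_TO_NEAREST_NEIGHBOR = 2 * 5_280      # two miles

def right_angle_dist(p, q):
    """Right-angle ("Manhattan") distance assumed for strand routing."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def within_cluster_limits(points_ft, total_lines, center):
    """Check the three clustering restrictions for a candidate cluster."""
    if total_lines > MAX_LINES_PER_CLUSTER:
        return False
    if any(math.dist(p, center) > MAX_FEET_FROM_CENTER for p in points_ft):
        return False
    return all(
        min((math.dist(p, q) for q in points_ft if q != p), default=0.0)
        <= MAX_FEET_TO_NEAREST_NEIGHBOR
        for p in points_ft)

def cluster_centroid(points_ft):
    """Midpoint of the line connecting the two farthest-apart points."""
    a, b = max(combinations(points_ft, 2), key=lambda pair: math.dist(*pair))
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def strand_distance(points_ft):
    """Minimum-spanning-tree route distance using right-angle connections."""
    in_tree, remaining, total = [points_ft[0]], list(points_ft[1:]), 0.0
    while remaining:
        nearest = min(remaining,
                      key=lambda q: min(right_angle_dist(p, q) for p in in_tree))
        total += min(right_angle_dist(p, nearest) for p in in_tree)
        in_tree.append(nearest)
        remaining.remove(nearest)
    return total

pts = [(0, 0), (2_000, 1_000), (5_000, 4_000), (1_000, 6_000)]  # hypothetical, in feet
print(within_cluster_limits(pts, total_lines=1_200, center=cluster_centroid(pts)))  # True
print(cluster_centroid(pts))   # (2500.0, 2000.0)
print(strand_distance(pts))    # 15000 feet of right-angle strand
```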

Next, TNS takes the convex hull clusters and transforms each into a rectangle of the same total area as the original convex hull, so that a distribution network can be laid out over the cluster. JA describe this as "rectangularization." Finally, TNS uses demographic data to assign demographic characteristics such as terrain, housing profiles, and line density zone characteristics, to the clusters. (Id., RAM-4, Section 6.1.)

The customer location database is now complete and ready for input to HM 5.3. HM 5.3 takes the rectangular clusters in the database and subdivides them into lots of equal size in order to lay out a distribution network over the cluster that reaches each of the lots, which are uniformly dispersed over the area of the cluster. (Id., RAM-4, Section 8.1.) HM 5.3 compares the total distance of this distribution network to the "MST" strand distance, or route mileage, calculated by TNS and allows the user to adjust the route mileage to this MST distance when calculating the cost of the distribution network. (Id., RAM-4, Section 8.4.)
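As a rough sketch only, the following illustrates the rectangularization, lot-subdivision, and MST-adjustment steps described above. The 2:1 aspect ratio, the example areas, and the grid-route figure are our own assumptions for illustration; HM 5.3's actual layout logic is more detailed:

```python
import math

# Rough, hedged sketch of the rectangularization and lot-subdivision steps.
# The aspect ratio and all example figures are illustrative assumptions only.
def rectangularize(cluster_area_sqft, aspect_ratio=2.0):
    """Return (width, depth) of a rectangle with the same area as the cluster."""
    depth = math.sqrt(cluster_area_sqft / aspect_ratio)
    return aspect_ratio * depth, depth

def lot_size(cluster_area_sqft, customer_locations):
    """Equal-size lots uniformly dispersed over the rectangle."""
    return cluster_area_sqft / customer_locations

def distribution_route_ft(grid_route_ft, mst_strand_ft, use_mst_adjustment):
    """Route distance used for costing, with the user-selectable MST adjustment."""
    return mst_strand_ft if use_mst_adjustment else grid_route_ft

width, depth = rectangularize(4_000_000)      # hypothetical 4,000,000 sq. ft. cluster
print(round(width), round(depth))             # about 2828 ft by 1414 ft
print(lot_size(4_000_000, 400))               # 10,000 sq. ft. per lot
print(distribution_route_ft(30_000, 26_000, use_mst_adjustment=True))   # 26000
```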

SBC-CA contends that the customer location database resulting from the TNS geocoding process is a black box that cannot be verified because JA have not provided the proprietary source code used by TNS for the geocoding and clustering process. SBC-CA also contends that the clustering process produces distribution areas that are too large and do not represent actual customer locations in SBC-CA's serving area. (SBC-CA/Dippon, 2/7/03, pp. 4-5.) We now describe and discuss these criticisms.

SBC-CA charges that the TNS customer location database and clustering process is not sufficiently open because it does not allow parties access to the database's underlying data, calculations, and assumptions. This inhibits SBC-CA's ability to examine and modify HM 5.3 customer location engineering principles. According to SBC-CA, JA never provided access to the source code of the algorithm used by TNS to cluster SBC-CA's customer information data. Without the source code, SBC-CA claims it cannot review, test, or modify how the model clusters customer locations. (SBC-CA, 2/7/03, p. 30.) SBC-CA's witness Dippon claims that the clustering description provided by JA does not match what TNS appears to have done. (SBC-CA/Dippon, 2/7/03, pp. 27-30.)

In response, JA contend that SBC-CA was given everything it needed to review, understand, and test the TNS clustering process. (JA, 3/12/03, p. 51.) We agree with JA that they provided reasonable access to their clustering process, since SBC-CA's witness Dippon was able to run his own clustering scenario in which he reduced the maximum lines per cluster from 6,451 to 1,800. (SBC-CA/Dippon, 2/7/03, p. 42.) While the clustering was performed by TNS as an outside input to HM 5.3, it is comparable to SBC-CA's preprocessing of its loop records before they were input to LoopCAT. In other words, both parties had to "preprocess" vast amounts of data to prepare it for input to the actual UNE cost models, and there are aspects of both the TNS and the LoopCAT preprocessing work that outside parties and Commission staff are not able to replicate or scrutinize for various reasons. Nevertheless, JA did describe the TNS clustering process in some detail in their filings and through discussions with SBC-CA, and SBC-CA was evidently provided enough information to be able to run its own version to test a different set of clustering criteria.

Nevertheless, we sympathize with SBC-CA's frustration at not being able to examine every detail of the TNS process, as Commission staff spent much time reviewing this area. JA's description of the geocoding and clustering process is far from clear and is overly laden with technical terminology that is difficult to wade through. Indeed, "rectangularize," "centroid," and "convex hull" are not common words. Ultimately, Commission staff was unable to run its own version of the TNS clustering process to see the effects of different assumptions because doing so would have required extensive computer equipment that the Commission does not have available. In this regard, SBC-CA was able to accomplish what we could not. We recognize that neither HM 5.3 nor SBC-CA's models allowed us to fully understand or replicate their preprocessing steps, and therefore, both models have aspects that could be considered "black boxes."

Despite his lack of access to the clustering algorithm source code, SBC-CA's witness Dippon identified what he contends are several errors in the clustering process that cause the clusters to bear no resemblance to real-world customer groupings or actual customer locations in SBC-CA's serving area. (SBC-CA, 2/7/03, p. 31; SBC-CA/Dippon, 2/7/03, pp. 2-3.) Dippon lists numerous examples where the clustering process places customers or equipment in locations SBC-CA contends do not match reality. First, Dippon takes issue with how TNS determined the cluster "centroid," which HM 5.3 uses to locate the Serving Area Interface (SAI) equipment. (SBC-CA/Dippon, 2/7/03, p. 36.) Second, Dippon describes how the clustering process "clumps" customers in downtown areas into unrealistic high-rise buildings; for example, HM 5.3 produces a 1,020-story building and understates the amount of distribution plant needed to serve such a tall building. (SBC-CA, 2/7/03, p. 47.) Third, SBC-CA again raises the criticism that, when constructing a real-world network, geographic impediments and man-made obstructions must be considered.

We find these criticisms are somewhat ironic given SBC-CA's modeling approach that does not locate customers at all, assumes they are all uniformly dispersed in a ring around the central office, and makes no effort to model high-density customer locations or multiple dwelling units. Both models have made many simplifying assumptions in order to model a network. Some of these assumptions are more far-fetched than others. We agree with JA's assessment that HM 5.3 is not an engineering model, but a cost model that locates current customers and determines the cost of plant to reach those customers, plus room for reasonable growth, without determining the actual locations where plant will be placed.

We find that the clustering assumptions that form the basis of HM 5.3 are no worse than the loop input assumptions used by SBC-CA in its preprocessor, including SBC-CA's approximated loop length based on a "design point." In fact, we find the HM 5.3 model more reasonable in its loop design because it is based on actual customer locations and designs plant based on the realities of where customers are grouped today. SBC-CA's model presumes that all of the customer groupings in its network today are forward-looking and efficient, and does not allow the user to regroup customers into more logical groupings. Then, SBC-CA models loop plant to serve these existing groupings based on the "design point" concept and its resulting approximation of loop length. In contrast, HM 5.3 starts by locating all customers where they are today, and recognizes dense groupings of customers given the high proportion of multiple dwelling units in California.33 We find that HM 5.3 provides a more granular approach to designing a distribution network than SBC-CA's model does. Therefore, SBC-CA's criticism that customer locations are not accurate rings hollow, particularly when its own model does not accurately locate customers either.

With regard to SBC-CA's specific criticisms, JA counter that HM 5.3 may not reflect the physical realities of SBC-CA's network, but it is not intended to mimic the exact locations of SBC-CA's plant. Indeed, SBC-CA's model does not do this, and neither does the FCC's Synthesis Model. (JA, 3/12/03, p. 53, JA/Murray, 3/12/03, p. 13.) We agree with JA that TELRIC allows reconstruction of the network using existing wire centers, and does not require a model to use existing facility routes. In defining TELRIC, the FCC rejected cost approaches based entirely on a new network design or based entirely on existing network design. (First Report and Order, paras. 683 and 684.) Instead, the FCC found that a cost methodology that was based on the most efficient technology deployed in the incumbent LEC's current wire center locations "encourages facilities-based competition to the extent that new entrants, by designing more efficient network configurations, are able to provide the same service at a lower cost than the incumbent LEC." (Id., para. 685.) (Emphasis added.) The FCC therefore concluded that "the forward-looking pricing methodology for interconnection and unbundled network elements should be based on costs that assume that wire centers will be placed at the incumbent LEC's current wire center locations, but that the reconstructed local network will employ the most efficient technology for reasonably foreseeable capacity requirements." (Id.) (Emphasis added.)

We acknowledge that certain elements of the real-world network are fixed, such as terrain, roads, and customer locations. Nevertheless, a TELRIC model recognizes that the design of the current network may not represent the most efficient, forward-looking design because it may reflect choices made at a time when different technology options existed or when a different cost structure for equipment and labor drove decision-making. Fundamental to TELRIC cost modeling is the understanding that it is not merely an engineering cost estimate for actual re-construction of the existing network. Rather, a TELRIC model estimates costs based on the location of existing wire centers coupled with forward-looking network assumptions that in the aggregate are reasonable.34 Thus, we do not agree with SBC-CA that it is necessary for HM 5.3 to locate outside plant, such as SAI's, in the exact location that they are today.

Regarding the clumping of customers into unrealistic high-rise buildings, JA explain that HM 5.3 had to make simplifying assumptions about customers with the same address where it did not know the square footage "footprint" of the building. In other words, when HM 5.3 sees a high concentration of lines at one address, it does not know if this is a large shopping mall or a high-rise building. The model has been set up to treat many lines at one address as a high-rise, but it only includes distribution cable to serve 50 floors, recognizing that buildings seldom exceed this height. HM 5.3 includes these 50 floors of distribution cable even though such intra-building cable may not be part of the local exchange network, but property of the building owner. Therefore, JA admit that a 1,020-story building is unrealistic, but it is simply a result of simplifying modeling assumptions where HM 5.3 does not know the exact building square footage. Further, HM 5.3 conservatively overestimates costs by including distribution cable to serve these high-rises. (Workshop Tr., 6/25/03, pp. 658-661. See also JA/Mercer, 3/12/03, paras. 186-190.)
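A purely illustrative sketch of this simplification follows. The lines-per-floor figure is our own hypothetical parameter, chosen only to reproduce the 1,020-story example; it is not an HM 5.3 input:

```python
# Purely illustrative sketch of the high-rise simplification described above.
# LINES_PER_FLOOR is a hypothetical parameter chosen only for illustration;
# the 50-floor cap is the limit described in the record.
LINES_PER_FLOOR = 24
FLOOR_CAP = 50

def implied_floors(lines_at_address):
    """Building height implied when many lines share one address."""
    return -(-lines_at_address // LINES_PER_FLOOR)   # ceiling division

def cabled_floors(lines_at_address):
    """Floors of intra-building distribution cable actually included."""
    return min(implied_floors(lines_at_address), FLOOR_CAP)

print(implied_floors(24_480))   # 1020 "floors" implied by the line count
print(cabled_floors(24_480))    # but only 50 floors of cable are costed
```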

JA state that the criticisms levied by SBC-CA, if corrected, would only serve to lower the cost estimates produced by HM 5.3 by modeling the network with greater exactitude. (JA, 3/12/03, p. 54.) We find that HM 5.3's approach is reasonable in that it determines a logical customer cluster and builds plant to reach customers in that cluster. While the routes to build plant may not match SBC-CA's existing routes and may inadvertently cross a geographic or man-made obstacle, the right-angle routing assumed throughout, rather than straight-line routes, attempts to account for this and is more realistic than assuming a network would follow straight-line routes.

Therefore, we find that the method used by HM 5.3 to model customer locations and the costs of reconstructing SBC-CA's network, given its existing wire centers, falls reasonably within TELRIC guidelines and uses logical assumptions. Again, we do not agree with all of the inputs used in HM 5.3, but the concept of creating customer groupings based on today's actual customer locations, and calculating the cost of building a distribution network to connect them is reasonable, even if the reconstructed network does not follow today's exact outside plant routes.

SBC-CA contends that the clustering process is flawed because, when HM 5.3 is re-run after re-clustering the customer location data into smaller clusters, the results show minimal impacts on total loop cost estimates. (SBC-CA, 2/7/03, p. 37.) Specifically, SBC-CA's witness Dippon ran a scenario of HM 5.3 in which he reduced the cluster sizes from a maximum of 6,451 lines per cluster to 1,800, thereby increasing the number of clusters in HM 5.3 by 200%. (SBC-CA/Dippon, 2/7/03, p. 43.) Dippon's run with smaller cluster sizes resulted in only a slight decrease in loop cost. Dippon contends this result is illogical because, if cluster sizes are reduced such that HM 5.3 has to build a network to reach more clusters of smaller size, loop costs should rise. (Id., p. 44.)

In response, JA maintain that the results of Dippon's "1,800 run" are not illogical when one considers the tradeoff between feeder and distribution costs that results from creating a network with more clusters that serve a smaller number of lines per cluster. As JA's witness Mercer explains, Dippon's "1,800 run" shows an increase in feeder investment to penetrate more deeply into the network and closer to customers, offset by a decrease in distribution investment because smaller cables are less expensive. (JA/Mercer, 3/12/03, pp. 24-25.) JA witness Donovan contends that Dippon's "1,800 run" shows that HM 5.3 is operating correctly by installing more feeder and DLC equipment to serve a larger number of smaller clusters, offset by less investment in distribution cable. (JA/Donovan, 3/12/03, pp. 52-53.)
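To make this tradeoff concrete, the toy calculation below uses figures entirely of our own choosing (not record values) to show how loop cost per line can change only slightly even when the number of clusters triples, because added feeder and DLC investment is offset by cheaper distribution cable:

```python
# Toy figures (ours, not record values) illustrating the feeder/distribution
# tradeoff: more, smaller clusters require more aggregate feeder and DLC
# investment but shorter, cheaper distribution cable per line.
def loop_cost_per_line(clusters, lines_per_cluster,
                       feeder_per_cluster, distribution_per_line):
    total_lines = clusters * lines_per_cluster
    total = clusters * feeder_per_cluster + total_lines * distribution_per_line
    return total / total_lines

# Baseline: fewer, larger clusters.
baseline = loop_cost_per_line(100, 6_000, feeder_per_cluster=300_000,
                              distribution_per_line=120)        # 170.0
# Re-clustered: three times as many clusters, each one-third the size.
reclustered = loop_cost_per_line(300, 2_000, feeder_per_cluster=150_000,
                                 distribution_per_line=90)      # 165.0

print(baseline, reclustered)   # only a slight change in cost per line
```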

We find JA's explanation on this point reasonable, and we do not agree with SBC-CA that Dippon's "1,800 run" proves HM 5.3 is flawed. Nevertheless, we were unable to run our own clustering scenarios to examine differing model results. If we could have done this, we might have run a scenario somewhere between Dippon's "1,800 run" and the HM 5.3 default clustering, which assumes a maximum of 6,451 lines. Therefore, we are not completely satisfied that HM 5.3 appropriately handles the tradeoff between feeder and distribution investment in all scenarios.

SBC-CA contends that the abstract clustering algorithm used by TNS to create the customer location database ignores existing industry standards and creates unrealistic and inefficiently large clusters of 6,451 lines rather than a maximum of 1,800 lines. (SBC-CA, 2/7/03, p. 43; SBC-CA/McNeil, 2/7/03, p. 3.) According to SBC-CA, JA have abandoned prior modeling techniques that limited distribution areas (DAs) to between 200 and 600 households. (SBC-CA, 2/7/03, p. 43, n. 144.) According to SBC-CA, this results in DAs that are too large, and no carrier has ever built a network like this. For comparison, SBC-CA states that its current network is comprised of over 60,000 DAs, whereas HM 5.3 produces only 7,679 main clusters for SBC-CA's entire California serving area. (SBC-CA/Murphy, 2/7/03, p. 39.)

According to SBC-CA, standard engineering principles recognize that, because feeder facilities cost less per unit of length than distribution facilities, the objective is to minimize the size of the DA and achieve a reasonable fill of the feeder facilities. (SBC-CA/McNeil, 2/7/03, p. 16.) Further, SBC-CA contends that HM 5.3 artificially lowers loop costs per line by assuming extraordinarily large underground controlled environmental vaults (CEVs) that spread the higher installation costs of a CEV over a larger number of lines. (SBC-CA/Tardiff, 2/7/03, pp. 40-41.)

We are somewhat troubled by JA's assumption that distribution areas can be sized up to 6,451 lines, which is much larger than the distribution areas in SBC-CA's current network. As stated above, we would have preferred to run our own scenario with a smaller maximum line size per cluster. Nevertheless, JA show that incumbent carriers are currently purchasing equipment that will serve distribution areas as large or larger than those modeled in HM 5.3. (JA/Donovan, 3/12/03, paras. 97-100.) SBC-CA witness Murphy also confirms that equipment to serve up to 7,200 pairs is readily available. (SBC-CA/Murphy, 2/7/03, pp. 40-41.)

We do not entirely agree with SBC-CA's approach either. SBC-CA appears to advocate that clusters should serve a maximum of 1,800 lines, based on a guideline of serving 200 to 600 households. SBC-CA's principal criticism of large clusters seems to be that they differ from historic practices. SBC-CA models distribution areas to serve a maximum of 200 to 600 households based on standards that date back at least 25 years, before the advent of fiber optics and equipment sized to serve a greater concentration of lines. (JA, 3/12/03, p. 53; JA/Donovan, 3/12/03, paras. 90-96.) Furthermore, SBC-CA's witness McNeil admitted that SBC-CA currently attempts "to establish large footprints so that the remote terminal can serve as many DAs as possible." (SBC-CA/McNeil, 2/7/03, para. 30.) He goes on to state, "[I]t is efficient and cheaper to place as few remote terminals as possible." (Id.) For these reasons, it is reasonable to conclude that a forward-looking network configuration might recognize today's dense customer groupings and the availability of larger equipment in order to size DA's larger than SBC-CA has done in the past.

Nevertheless, we agree with SBC-CA that HM 5.3 might have relied on too many large DA configurations, more than it is reasonable to assume would occur in a real-world network. The clusters used as an input to HM 5.3 are also based on a maximum of 6,451 lines per cluster, which exceeds the capacity of the CEVs SBC-CA normally uses. While CEVs large enough to accommodate this number of lines do exist, we find it inappropriate to assume that all distribution areas could accommodate a CEV of that size.

We would have preferred to take a middle ground and rely on clustering assumptions that did not assume the largest equipment could automatically be placed everywhere. JA have adequately defended the use of distribution areas sized larger than SBC-CA's outdated guideline of a maximum of 200 to 600 households. But neither is it reasonable to conclude that every DA could accommodate the equipment to serve 6,451 lines. Thus, our principal criticism of HM 5.3 in this area is that we cannot modify the clustering process ourselves to re-run it with a more moderately sized clustering assumption.35

Overall, we find that while we do not agree with all aspects of HM 5.3's customer location and loop modeling, it is no more a "black box" than SBC-CA's own preprocessor and input modeling assumptions related to the design point. HM 5.3 is based on a detailed examination of current customer locations, and makes simplifying assumptions not unlike the assumptions underlying SBC-CA's LoopCAT. As a result, although HM 5.3 starts with current customer location data, it does not model outside plant in the exact locations in which it exists today. We find that the method used by HM 5.3 to model customer locations and the costs of reconstructing SBC-CA's loop network falls reasonably within TELRIC guidelines, even if the reconstructed network does not follow today's exact outside plant routes.

Regarding the clustering process, we were unable to run our own sensitivity analyses to test HM 5.3's results with different clustering inputs. We would have preferred to test the results of different cluster sizes. At the same time, our inability to run sensitivity analyses of cluster sizes is not unlike our inability to run sensitivity analyses of SBC-CA's preprocessor and design point assumptions. In other words, both models involved extensive preprocessing of data that, for various reasons, the Commission had to take as given. Thus, we find that both models contain aspects of their loop modeling that we were unable to modify to our satisfaction.

SBC-CA contends that HM 5.3 relies excessively on unsupported "expert judgment" for inputs that relate to such items as network design, install times for engineering and placement of cable, support structure, DLC equipment, labor loadings, and material costs. According to SBC-CA, many of the inputs to HM 5.3 are based on opinions with little or no analysis or backup documentation to support their derivation or reasonableness. In some cases, JA have selectively relied on vendor quote information to produce low UNE cost estimates, without revealing supporting documentation for these vendor quotes, or they have used information from around the country rather than using cost data supplied by SBC-CA. (SBC-CA, 2/7/03, pp. 32-33.) For example, JA have selected prices for switching based on extremely short run considerations, assuming that selected prices in a particular contract would somehow be available for all switching purchases. (SBC-CA/Tardiff, 3/12/03, p. 13.)

We agree with SBC-CA that many of HM 5.3's inputs deserve a second look and may not be appropriate. HM 5.3 uses many inputs that are based on expert judgment (such as plant mix and structure sharing percentages), or derived from national data rather than California specific numbers (such as labor loading data). It is also true that HM 5.3 relies on vendor quotes that are not always documented.36 In many areas throughout HM 5.3, we have been successful in modifying these inputs, and in most cases we have substituted inputs or data from SBC-CA's models instead.

Furthermore, SBC-CA criticizes the use of "expert judgments," but its own model relies on judgments of its engineers and other unnamed subject matter experts for its "design point" assumptions, ACFs, and inputs to its switching (SICAT) and interoffice (SPICE) models. Both HM 5.3 and the SBC-CA models rely heavily on expert judgments. While SBC-CA often criticized many of the HM 5.3 inputs, its reply and rebuttal comments did not provide, based on SBC-CA's own experience, an assessment of the correct value for many of these disputed inputs. In many cases, we find that SBC-CA offers its own model with inputs so aggregated that we could not make "apples to apples" comparisons of the disputed inputs. Thus, we do not find that disputes over inputs selected through "expert judgment" are themselves a basis to abandon HM 5.3. Rather, the question is the reasonableness of the input assumptions themselves.

Finally, SBC-CA argues that HM 5.3 inputs do not match SBC-CA's actual costs. However, the Supreme Court dismissed comparisons of this sort, noting that costing methods that rely on historical costs can pass on inefficiencies caused by poor management or poor investment strategies. (Verizon, 122 S. Ct. 1646, 1673.) The Court further noted that:

Contrary to assertions by some [incumbents], regulation does not and should not guarantee full recovery of their embedded costs. Such a guarantee would exceed the assurances that [the FCC] or the states have provided in the past. (Verizon at 1681.)

Although we find HM 5.3's use of expert judgment usually can be corrected with input changes, there were several instances where corrections were not possible. Notably, we wanted to use SBC-CA's hourly wage rate as an input rather than the lower rate assumed by JA. We were not able to change HM 5.3's labor rate assumptions in all instances because they were embedded with material and other assumptions such that we could not determine what portion of a total cost involved hourly labor. For example, many of HM 5.3's inputs include components for material costs, labor rates, task completion time, and crew size, which are joined into one input cost figure. Commission staff was unable to isolate hourly wages from this conglomeration of labor and material inputs in order to adjust hourly wage rates to SBC-CA levels, particularly for labor relating to terminal and splice investments, buried drop installation, and riser cable investment. Because we were not able to change labor rate assumptions in all places within HM 5.3, we find the model flawed in this area.
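The difficulty can be illustrated with a minimal sketch, using hypothetical figures, of a composite input of the kind described above:

```python
# Minimal sketch (hypothetical figures only) of a composite input that bundles
# material cost, hourly wage, task time, and crew size into one dollar figure.
def composite_input(material, hourly_wage, task_hours, crew_size):
    return material + hourly_wage * task_hours * crew_size

# Only the combined figure is exposed as a model input:
splice_investment = composite_input(material=180.0, hourly_wage=38.0,
                                    task_hours=1.5, crew_size=2)     # 294.0

# Given only the 294.0 total, the wage cannot be isolated and replaced with
# SBC-CA's hourly rate, because many different wage, time, and crew
# combinations yield exactly the same composite figure:
print(composite_input(180.0, 38.0, 1.5, 2) ==
      composite_input(180.0, 57.0, 1.0, 2))   # True
```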

SBC-CA contends that the switching and interoffice portions of HM 5.3 do not accurately account for the actual demand generated by today's SBC-CA customers. On the switching side, HM 5.3 does not properly account for customers' peak period usage or the unique characteristics of individual switches in SBC-CA's California network. (SBC-CA, 2/7/03, p. 65.) On the interoffice side, HM 5.3 does not model the actual volume of trunk facilities ordered by SBC-CA customers, and therefore fails to construct an interoffice network with sufficient capacity to support the total demand handled by SBC-CA's network. (Id., p. 64.) SBC-CA alleges that the transport and fiber fill factors proposed by JA are unrealistically high given actual carrier operating fill levels. (SBC-CA, 3/12/03, p. 76.)

In addition, SBC-CA contends that HM 5.3 models a network that would be unable to provision all of the high-speed services that are available today because it omits key electronic "optical interface" equipment necessary to connect DS-1 facilities to the interoffice network. (SBC-CA, 2/7/03, p. 66.) This omission underestimates the facilities and equipment needed to provision DS-1 and DS-3 loops. (Id., p. 50.) Finally, SBC-CA contends that the HM 5.3 Transport module is flawed because it is insensitive to both the demand it considers and the costs for fiber cable and electronics, calling into question what the model optimizes when it is run with differing input assumptions. (SBC-CA/Tardiff, 3/12/03, p. 12.)

In response, JA maintain that HM 5.3 conservatively overestimates the number of trunks required in the switching and interoffice facilities (IOF) networks, and that the way in which HM 5.3 models demand is far superior to SBC-CA's SPICE model, which JA claim does not include all trunks and fails to incorporate demand data. (JA, 3/12/03, p. 70, n. 259.) According to JA, HM 5.3 models the known total amount of switched traffic carried by the SBC-CA network based on SBC-CA "dial equipment minutes" (DEM) traffic data and provides sufficient circuits to carry that traffic. Thus, in their opinion, HM 5.3 appropriately engineers a switching network to accommodate the requirements of a typical switch, its typical busy hour, and all required trunking. (Id., p. 72; JA/Mercer, 3/12/03, paras. 80-82.) JA claim that if additional traffic for interconnection and switched access trunks were modeled in HM 5.3, any additional economies of scale would likely only lower the resulting per unit dedicated circuit UNE cost. (JA/Mercer, 3/12/03, para. 78.)

Further, JA contend that HM 5.3 does not ignore demand, it simply has not fully configured network components to serve high capacity services, such as OC-level service and DS-1 on fiber, because these UNEs are not at issue in this proceeding. (Id., paras. 89-90.) Nevertheless, JA witness Mercer contends that HM 5.3 specifically provides fiber capacity for high capacity loops, even if it does not model the terminal equipment necessary for these services. (Id., paras. 89-92.)

First, we agree with SBC-CA that HM 5.3 does not model the characteristics of individual switches. However, we do not consider this a flaw because SBC-CA's SICAT model does not do this either. Indeed, both HM 5.3 and SICAT appear to have taken a similar modeling approach that looks at aggregate switching requirements. The models differ primarily in their input assumptions for the number of new, growth, and replacement lines, and for switch fill factors. On this issue, we find that SBC-CA's criticisms appear to hold HM 5.3 to standards higher than those of its own SICAT model.

Second, we agree with SBC-CA that HM 5.3 appears to underestimate demand on the interoffice network. JA admit that they did not configure the interoffice network to handle all high capacity demand, claiming that these costs are not at issue in this proceeding. Yet the FCC's definition of TELRIC describes the forward-looking cost over the long run of the total quantity of facilities and functions that are directly attributable to an element, "taking as a given the incumbent LEC's provision of other elements." (47 C.F.R. Section 51.505(b).) (Emphasis added.) We find that HM 5.3 does not include all of SBC-CA's current interoffice demand and therefore, does not model an interoffice network to accommodate all of SBC-CA's current interoffice traffic. Unfortunately, because of the flaws we have already noted with SBC-CA's SPICE model, it is unclear how we would modify HM 5.3 to remedy this flaw. We have already discussed how we are unable to determine the demand level that SBC-CA's SPICE model is designed to serve. Thus, we are unable to take SBC-CA inputs and place them into the HM 5.3 model.

Third, we cannot determine whether HM 5.3 adequately incorporates optical interface equipment. JA maintain that the IOF network modeled by HM 5.3 includes the cost of all equipment necessary for optical trunking and therefore, would function properly. (JA, 3/12/03, p. 75.) According to JA witness Mercer, "the model includes in each wire center investment for a digital cross connect system of sufficient capacity to meet the circuit requirements of that wire center." (JA/Mercer, 3/12/03, p. 76.) Mercer goes on to state, "It is possible, then, to use the Titan 5500 to replace the switch interface to the OC-48 [synchronous optical network (SONET)] ADMs used by the Model..." (Id.) Based on our own reading of Mercer's rebuttal, we find it unclear whether and to what extent DCS investment, or the Titan 5500, was incorporated in HM 5.3 and properly allocated between all the services that might use it. Thus, we find that HM 5.3 might not allow provisioning of the high capacity services SBC-CA provides today.

Finally, JA witness Mercer attempts to respond to the criticism that the HM 5.3 Transport module is insensitive to demand. Mercer describes some minor modifications to HM 5.3 to address SBC-CA's concerns regarding understatement of certain interoffice equipment investment. (Id., paras. 195-198.) Although Mercer provides these corrections to the Transport Module, it is not clear that they fully respond to SBC-CA's criticism of how HM 5.3 derives its SONET ring structure and the resulting interoffice transport rates. Therefore, we are unwilling to rely solely on the results of HM 5.3 to establish interoffice transport rates.

SBC-CA contends that HM 5.3 does not model sufficient spare capacity and models a network that would require new orders to wait while new lines are installed. This could impact service quality and potentially lead to service disruptions. According to SBC-CA, although JA acknowledge that 1.5 to 2 pairs per living unit is the minimum design standard for distribution plant, HM 5.3 does not allocate this minimum number of cable pairs to each potential residence or business. (SBC-CA, 2/7/03, p. 41.) In addition, SBC-CA contends that HM 5.3 understates the cable lengths needed to serve potential demand by not designing plant to reach potential customer locations. (Id., p. 42.)

We disagree with SBC-CA's contention that HM 5.3 is seriously flawed with regard to how it handles spare capacity. From our own review, we know that HM 5.3 allows the user to adjust inputs throughout the model to achieve varying levels of spare capacity. This is discussed at length in Sections VI.E and VI.J.5 below, where we address the various fill factor inputs that vary the amount of network investment devoted to spare capacity. Essentially, SBC-CA's spare capacity arguments reduce to a dispute over whether the model should assume 1.5 to 2 loop pairs per living unit, as JA propose, or 2.25 pairs per living unit, as SBC-CA proposes. We will address this dispute in our fill factor discussion below. For now, we find that SBC-CA's position on spare capacity does not represent a fatal flaw in HM 5.3.

Furthermore, we do not agree with SBC-CA's argument that HM 5.3 underestimates cable length by not considering the loops required to serve future customers. We have already discussed why it is improper to model to "ultimate demand" given guidance from the FCC. We have concluded that the demand assumptions for loops in HM 5.3 accommodate a reasonably foreseeable level of growth. We have also discussed why SBC-CA's loop lengths unreasonably include ultimate demand and cannot be modified to counteract this. Therefore, we do not find that HM 5.3 is critically flawed on the issue of spare capacity. Rather, we find that HM 5.3 can be modified to incorporate varying assumptions for spare capacity, as needed.

SBC-CA contends that HM 5.3's approach to calculating expenses is flawed and produces expenses that are only one-quarter of SBC-CA's current levels. (SBC-CA, 2/7/03, p. 73.) First, SBC-CA claims that HM 5.3 incorrectly uses expense-to-investment, or "E/I," ratios based on SBC-CA's current costs of network equipment, and applies these ratios to HM 5.3 investment levels that are considerably lower. (Id., p. 73.) Second, SBC-CA criticizes HM 5.3's use of Verizon California as a benchmark for efficient operation in California based solely on its proposed lower expense levels, without exploring other factors that may explain the difference in expenses between the two companies. (Id., p. 74.) SBC-CA notes that Verizon has a significantly higher investment per line than does SBC-CA, and that the overall efficiency of a company is related to both investment and expense decisions. In SBC-CA's view, looking at expenses in isolation, as JA have done, reveals little about overall company efficiency. (Id.) SBC-CA maintains that using Verizon as a benchmark for E/I ratios results in absurdly low expense levels in HM 5.3.

We find that HM 5.3's use of E/I ratios is reasonable, and not unlike the factors that SBC-CA uses in its own model. Indeed, both JA and SBC-CA have used a similar approach of adjusting investments to current cost before calculating ratios and applying them to estimate current expense levels. (JA/Brand-Menko, 10/18/02, p. 6; SBC-CA/Cohen 10/18/02, p. 5-6.)
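For illustration only, the following sketch (in Python, using hypothetical dollar figures rather than any party's proprietary data) shows the general expense factor technique both parties describe: booked investment is restated at current cost, an E/I ratio is computed, and that ratio is applied to the investment produced by the cost model:

    # Minimal sketch of the expense-to-investment (E/I) factor approach.
    # All dollar figures below are hypothetical placeholders, not record data.

    def ei_ratio(booked_expense, booked_investment, current_cost_adjustment):
        """Restate booked investment at current cost, then compute E/I."""
        current_cost_investment = booked_investment * current_cost_adjustment
        return booked_expense / current_cost_investment

    def estimated_expense(ratio, model_investment):
        """Apply the E/I ratio to the investment estimated by a cost model."""
        return ratio * model_investment

    booked_expense = 40.0            # annual maintenance expense, $ millions (hypothetical)
    booked_investment = 400.0        # booked investment, $ millions (hypothetical)
    current_cost_adjustment = 1.25   # restates booked investment at current prices (hypothetical)
    model_investment = 300.0         # investment produced by the cost model, $ millions (hypothetical)

    ratio = ei_ratio(booked_expense, booked_investment, current_cost_adjustment)
    print(f"E/I ratio: {ratio:.3f}")
    print(f"Estimated forward-looking expense: ${estimated_expense(ratio, model_investment):.1f} million")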

We agree with SBC-CA that HM 5.3 improperly uses Verizon California as a benchmark for expense estimation purposes. SBC-CA has presented convincing arguments that JA have overlooked Verizon's higher investments per line and have overlooked other factors that could explain expense differences between the two companies. We prefer to use recent data from SBC-CA's reported ARMIS expenses to estimate forward-looking expenses. This was the starting point for HM 5.3's expenses before adjustments to benchmark expenses to Verizon. Therefore, we will back out these Verizon "benchmarking" adjustments from the HM 5.3 model.

SBC-CA criticizes HM 5.3 for not providing any internal or external demonstration of the validity of its cost estimates. (SBC-CA, 2/7/03, p. 24.) SBC-CA performs its own comparison of the investments and expenses produced by HM 5.3 to what SBC-CA currently incurs based on 2001 ARMIS data. SBC-CA maintains that the "sanity check" it performed shows HM 5.3 investments and expenses are only one-quarter of SBC-CA's current levels, and that HM 5.3 has not depicted loop routes of proper length, has not accurately priced network components, and has not included sufficient ongoing expenses to pay for the labor force needed to run the network. (SBC-CA/Tardiff, 2/7/03, pp. 3-4, and 20.) SBC-CA claims that any deviation between HM 5.3 and reality implies inaccuracies in the model, not inefficiencies in the current network. (Id., p. 23.)

Once again, JA contend that TELRIC calculations cannot be validated by comparison to a carrier's embedded costs. Thus, SBC-CA's "validation tests" are irrelevant. (JA/Klick, 3/12/03, p. 7.) According to JA, the FCC has already rejected cost methodologies that base forward-looking costs on the existing ILEC infrastructure, adjusted only for depreciation and inflation. (Id., p. 7, citing First Report and Order, paras. 683-685.) Further, JA allege that SBC-CA's ARMIS-based estimates of reproduction costs are overstated because they include Project Pronto costs for transitioning SBC-CA to a DSL-capable network. (JA/Klick, 3/12/03, p. 8.) Finally, JA point out that SBC-CA never did any "validation test" of its own model results, presumably because this type of analysis would be impossible given that SBC-CA's cost studies do not provide total investment or expense results to compare to ARMIS data. (Id., pp. 4-5.)

We conclude that it would be unreasonable to reject the use of HM 5.3 merely because its results are lower than SBC-CA's current costs, as shown by comparisons to 2001 ARMIS data. We agree with JA that such comparisons are of limited value given that it is unclear to what extent we can rely on SBC-CA's current costs as forward-looking.37 Much of SBC-CA's criticism of HM 5.3 involves the inputs that it uses. It makes more sense to vary these inputs to levels that we consider more appropriate before deciding which model to rely on.

Interestingly, ORA/TURN performed such an analysis for us. ORA/TURN's witness Roycroft used the FCC's Synthesis Model (SynMod) to test for potential structural bias in both the HM 5.3 model and the SBC-CA models.38 Roycroft performed a side-by-side comparison of HM 5.3, the SBC-CA models, and SynMod after applying a uniform platform of loop-related and general input values taken from SynMod. From this comparison, he concludes that HM 5.3 is forward-looking and not structurally biased because it produced higher costs than SynMod when both models were run with SynMod's default inputs. (ORA/TURN, 2/7/03, p. 11; Roycroft Decl., 2/7/03, p. 63.) Had HM 5.3 been structurally biased, it would have generated lower costs than SynMod. Based on these results, ORA/TURN suggests adjusting HM 5.3 inputs to the default levels used in SynMod. (ORA/TURN/Roycroft, 2/7/03, p. 63.) SBC-CA opposes ORA/TURN's recommendation to use SynMod inputs to set UNE rates for SBC-CA because these inputs are five years old and are based on broad national averages. (SBC-CA/Tardiff, 3/12/03, p. 37.)

With regard to SBC-CA's models, Roycroft concludes that they do not comply with forward-looking principles because, when run with inputs similar to those of HM 5.3 and SynMod, the SBC-CA models generate consistently higher costs than either HM 5.3 or SynMod. Roycroft suggests this is because LoopCAT has fewer user-adjustable inputs and does not allow variation in cable sizing. (ORA/TURN, 2/7/03, p. 11.) Roycroft acknowledges that it might be possible to alter LoopCAT to generate outputs more consistent with TELRIC principles, but he concludes that the effort required to make such modifications would be large and is not necessary given the availability of other models. (ORA/TURN/Roycroft, 2/7/03, p. 54.) XO supports ORA/TURN's analysis and its conclusion that the costs calculated by the SBC-CA models are clearly outliers and unreasonable. (XO, 3/12/03, p. 3.)

JA also performed a sensitivity analysis of HM 5.3 to demonstrate that it was not structurally biased. JA changed eight categories of inputs in HM 5.3 to the values proposed by SBC-CA. This yielded a significantly higher loop rate, closer to the level proposed by SBC-CA and its models.39 According to JA, this analysis refutes SBC-CA's claim that HM 5.3 is incapable of producing reasonable cost estimates.

We find that taken together, ORA/TURN's analysis using SynMod and JA's own sensitivity analysis varying eight inputs show that HM 5.3 is not structurally biased to produce unrealistically low results. The ORA/TURN analysis also corroborates our own findings that it is difficult to change many inputs within the SBC-CA models. We decline to adopt ORA/TURN's proposal to use all of SynMod's input values throughout HM 5.3 because we agree with SBC-CA that many of these inputs are dated or based on national averages.

Finally, we note that JA provided their own comparison of HM 5.3 with the FCC's Synthesis Model (SynMod). JA's witness Klick modified SynMod inputs to reflect HM 5.3 inputs, and then ran SynMod to estimate UNE rates in SBC-CA's territory. The results indicate that SynMod produces loop investments 11% lower than those produced by HM 5.3. From these results, Klick concludes that HM 5.3 is an effective tool for estimating forward-looking costs because it produces investments and overall costs similar to SynMod's when run with comparable inputs. (JA/Klick, 10/18/02, pp. 14-15.) SBC-CA responds that Klick's analysis does not validate HM 5.3 because the similarity of outcomes of the two models, when run with the same inputs, merely shows that the inputs themselves are an important determinant of investment. SBC-CA contends HM 5.3's inputs are so low they produce invalid outputs for both models. (SBC-CA/Tardiff, 2/7/03, p. 35.)

Overall, we tend to agree with SBC-CA that Klick's analysis merely shows that HM 5.3 inputs produce very low results when entered into SynMod, and that the modeling inputs appear to be the more important drivers of model results. Therefore, we will not rely on or rule out either model based on these comparisons or validity tests provided by the various parties. Instead, we will turn to an analysis of the appropriate inputs to use in our model runs.

Finally, SBC-CA says numerous state and federal regulatory agencies have rejected HM 5.3 assumptions. (SBC-CA, 2/7/03, p. 76; SBC-CA/Tardiff, 3/12/03, p. 6-7.) SBC-CA cites to various decisions in other states that have found earlier iterations of the HAI model unreliable. Much of this criticism has been directed at the use of unidentified experts and unidentifiable sources to substantiate modeling assumptions and input choices. In response, JA cite to several states, including Arizona, Minnesota, Nevada, Colorado and West Virginia, that have either adopted or used results from earlier versions of HM 5.3 to calculate UNE costs. (JA, 3/12/03, p. 74.)

Given our ability to modify many of the HM 5.3 inputs that SBC-CA contests, we are not troubled by this criticism. Moreover, the SBC-CA models that we are examining in this proceeding are of very recent vintage, and we do not believe they have been reviewed or adopted by any other states either. This proceeding may very well be the first time that this particular version of HM is compared directly with SBC-CA's newest models. Therefore, the findings of other state commissions that may have examined earlier versions of HM 5.3 or SBC-CA's models are of little value to us.

In summary, we find that HM 5.3 can be modified to overcome many of its alleged flaws. Specifically, the model can be modified to use different input and engineering design assumptions, spare capacity can be increased, and expense assumptions can be modified to increase expense levels. Nevertheless, we were unable to modify assumptions with regard to the customer location and clustering process and certain labor inputs in order to overcome all of the model's criticisms. In addition, we could not overcome criticisms of the HM 5.3 interoffice transport module that it underestimates demand, may not adequately incorporate optical interface equipment, and is insensitive to demand changes.

It should come as no surprise, after the extensive analysis described in the preceding sections, that because we have found both models to be flawed, we do not consider either model to have adequately fulfilled the cost modeling criteria set forth in the June 2002 Scoping Memo. These criteria required that the cost studies and models allow the user to reasonably understand how costs are derived, generally replicate the model results, and modify inputs and assumptions. Without belaboring the point, we found that both HM 5.3 and the SBC-CA Models failed one or more of these criteria.

The principal failure of HM 5.3 was its use of a customer location database provided by a third party, TNS, as an input. We have already described how we would have preferred to cluster the geocoded customers into smaller distribution areas, but we were not able to perform these modifications ourselves. This criticism is mitigated by the fact that SBC-CA was able to modify the clustering and produce new HM 5.3 results. Nevertheless, we would have preferred to test various scenarios ourselves. Secondarily, HM 5.3 failed the modeling criteria because we were not able to modify all labor inputs. In our attempts to modify the HM 5.3 assumed labor rate to the level proposed by SBC-CA, we found that labor costs were embedded with other assumptions such that we were not able to disaggregate labor costs or assumptions and modify them alone.

With regard to the SBC-CA Models, we find that they failed the cost modeling criteria because we were not able to reasonably understand many of the input assumptions and we were not able to modify them. Specifically, we could not identify or make meaningful modifications to many of the SBC-CA model inputs because we could not extract individual inputs from aggregated data, or compare and verify inputs to public information. This was particularly evident with linear loading factors in LoopCAT and annual cost factors throughout the SBC-CA models. For example, we could not identify what structure sharing assumptions were embedded in the SBC-CA factors. Without knowing what structure sharing assumptions were used, it was impossible to modify them. Similarly, we could not segregate expenses for SBC-CA's unregulated ventures or Project Pronto expenditures from its calculations of per unit expense levels for UNEs.

On the subject of DLC installation costs, we were unable to understand the underlying assumptions SBC-CA made when creating factors in this area. JA contend that SBC-CA's witness appeared unable to explain how the factors relating to these costs were derived. (JA, 2/7/03, pp. 32-33.) Our own review supported this allegation. Eventually, we delved into actual DLC installation costs to compare these to the factors. Again, SBC-CA's witness was unable to explain any linkage between actual costs and the factors used in the SBC-CA models. (Hearing Tr., 4/15/03, pp. 573, 586.) Essentially, SBC-CA's witnesses were unable to support the modeling inputs adequately in this area. Finally, we were unable to determine how to modify the SPICE model to test varying demand levels.

In sum, we found that both models failed one or more of our cost modeling criteria.

The analysis above describes why we have concluded that both HM 5.3 and the SBC-CA Models contain flaws that we cannot correct completely. We are unwilling to rely solely on the results of either model.

The SBC-CA models contain many inputs and assumptions that we conclude are not forward-looking -- such as loop configuration, cable inventory, structure sharing percentages, ACFs, SPICE demand assumptions, potentially duplicative shared and common expenses, affiliate expenses, and Project Pronto expenses. We are unable to modify these inputs for a variety of reasons. In some cases, the inherent structure of SBC-CA's models aggregates these inputs with other information to the point that we cannot isolate inputs for modification. In other cases, the record convinces us that the inputs may be overstated or not specific to the provision of UNEs, but the record has not provided us with adequate information for a replacement number.

In contrast, even though we disagree with many of the input assumptions used in HM 5.3 - such as the cost of capital, the copper/fiber crossover point, structure sharing, plant mix, DLC costs, and switching assumptions - we can change many of these inputs and assumptions. In many areas, we have incorporated inputs from the SBC-CA models into HM 5.3, particularly in areas such as labor rates, plant mix, and switching investment information. Despite these efforts, we could not cure all of the flaws we found in HM 5.3. We find that we cannot perform sensitivity analyses on the clustering process that builds the initial estimates of outside plant, we cannot modify all inputs related to labor costs, and we cannot overcome flaws in the interoffice transport module that underestimate demand and may not adequately incorporate all necessary equipment.

Given the flaws of both models and our unwillingness to rely on either as the sole estimate of forward-looking UNE rates, we will instead use the two models to create a zone within which we will adopt new UNE rates. While this is an approach we have not used before, our only alternatives are to throw out both models and have the parties start over, or to have the parties try to remedy the flaws we have noted in these models. Neither of these options is workable. Starting over with new models is far too time-consuming and does not assure us that we would get any better results than those we have now. If we asked either or both parties to fix the flaws noted above, this too would take valuable time, might not produce good results, and would undoubtedly lead to further contention over the proposed remedies.

A far better solution, given the amount of time that has already passed in this proceeding, is for us to use both models as endpoints for our ratesetting exercise. We have run both models with common inputs, to the extent possible. The results of the HM 5.3 run generally give us a lower boundary for ratesetting purposes, and the SBC-CA model results generally give us a ceiling. In a few cases, such as rates for DS-1 ports and the DS-3 entrance facility without equipment, HM 5.3 actually results in a higher rate than the SBC-CA models.

While we cannot rely on the results of either model to produce reasonable or accurate UNE rates, we are confident that reasonable rates lie somewhere within the zone created by the two model results. We consider our HM 5.3 model run as a lower boundary for our rate zone because if we could change all the labor rates to our satisfaction, we are fairly confident that HM 5.3 rates would be higher. We view the SBC-CA model results as an upper boundary because if we could modify all inputs to desired levels, we are fairly confident that the SBC-CA model would produce lower rates. For example, we think the SBC-CA model contains expenses for items such as affiliates, unregulated ventures, Project Pronto, retiree costs, and shared and common costs that should be removed. Expenses in the SBC-CA models are also too high because of the linkage to fill factors. If we could modify SBC-CA's structure sharing assumptions to the inputs used by the FCC, this would most likely lower SBC-CA's results as well. Also, we think the loop configuration modeled by SBC-CA is not forward-looking because it uses the design point concept and embedded cable inventories, an approach that most likely overstates loop costs.

Using the results of both models as endpoints, we will take the midpoint of the range as the adopted rate for each UNE or sub-element of that UNE because the midpoint reasonably mitigates the flaws we have identified in both models that we are unable to correct. We conclude that this approach is reasonable given the enormous complexity of attempting to use either model and perfect its results. As the FCC has recognized in its recent rulemaking reviewing the TELRIC methodology, UNE cost proceedings are "extremely complex," involving conflicting cost models and hundreds of inputs to those models supported by the testimony of expert witnesses. State pricing proceedings are thus "extremely complicated."40
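As a purely arithmetic illustration of this rate-zone approach, the following sketch (in Python, using hypothetical per-month rates rather than the adopted rates, which appear in Appendix A) shows how the two model runs bound a zone and how the midpoint is taken for each element:

    # Hypothetical monthly rates for two UNEs from the two model runs.
    # The actual adopted rates appear in Appendix A, not here.
    hm53_results = {"2-wire loop": 9.00, "2-wire port": 1.50}
    sbc_results = {"2-wire loop": 13.00, "2-wire port": 2.10}

    for element in hm53_results:
        # Either model may produce the higher rate for a given element,
        # so sort the two results to define the zone before taking the midpoint.
        low, high = sorted((hm53_results[element], sbc_results[element]))
        midpoint = (low + high) / 2
        print(f"{element}: zone ${low:.2f} to ${high:.2f}, adopted midpoint ${midpoint:.2f}")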

Our use of both models to give us a range for UNE rates is also supported by the D.C. Circuit's discussion of the difficulty in pinpointing TELRIC rates with exactitude. In a 2001 decision upholding FCC findings that UNE rates in Kansas were cost-based, the D.C. Circuit concluded that ratemaking is not an exact science but involves a "zone of reasonableness." As part of its discussion, the court cited to a prior case where it stated:

This argument, however, assumes that ratemaking is an exact science and that there is only one level at which a wholesale rate can be said to be just and reasonable.... However, there is no single cost-recovery rate, but a [wide] zone of reasonableness.... (Sprint Communications Company v. FCC, 274 F.3d 549, 555 (D.C. Cir. Dec. 28, 2001), citing Conway, 426 U.S. at 278.)

As a result, the court declined to find fault with the FCC "for approving the Kansas Commission's compromise resolution of an issue that the parties' behavior had left a muddle." (Id. at 559.)

Interestingly, as we modified the models to run with common inputs in line with Commission precedent, federal requirements, and the additional rationale developed from the record in this case, we found that the cost results of our model runs converged. The range created by our runs of both models is not that wide for many of the rates, particularly 2-wire loops, 2-wire ports, and the combination of UNEs for loop, port, and switching, also known as "UNE-Platform" (UNE-P). The degree of this convergence provides additional evidence of the validity of the rates we adopt today, particularly since the rates are the midpoints of a much-narrowed range.

Also, despite fierce criticisms of both models by the parties, we find that loop assumptions embedded in both models have surprising similarities. Earlier versions of the HM 5.3 model have been criticized by this Commission and others for assumptions regarding uniform dispersion of customers throughout the serving area. Indeed, HM 5.3 makes efforts to overcome this prior criticism by precisely locating today's customers through the geocoding process. However, after the geocoded customers are clustered into distribution areas, HM 5.3 does not use the geocoded locations to build a distribution network. Instead, the cluster is split into equal-sized lots and customers are uniformly distributed throughout the distribution area. Likewise, LoopCAT makes the simplifying assumption to approximate loop lengths based on the design point. As SBC-CA's witness Smallwood explains, "SBC-CA makes the reasonable assumption that customers will be distributed throughout a distribution area." (SBC-CA/Smallwood, 3/12/03, pp. 66-67.) By its own admission, SBC-CA is uniformly distributing customers throughout the serving area even though it has criticized prior versions of HM 5.3 for this same assumption.

Finally, we do not find that one model's set of assumptions is more accurate than the other. First, both models include a mixture of loop modeling assumptions that are somewhat reality-based and somewhat hypothetical. HM 5.3 uses today's customer locations, but clusters them differently than SBC-CA's existing network. LoopCAT uses some existing plant routes, but combines that information with estimates of future customer locations. By using the design point approximation technique, LoopCAT does not locate any customers where they are today. Second, HM 5.3 uses a minimum spanning tree theory to build plant to connect customers. By definition, any theory based on "minimums" would produce the lowest possible results. In contrast, LoopCAT uses embedded cable records that we find produce higher results than if cable-sizing guidelines were used to configure a rebuilt network. Third, HM 5.3 relies on the TNS customer location process and its clustering assumptions, while LoopCAT relies on SBC-CA's preprocessor and its assumptions regarding the design point. Both the TNS process and SBC-CA's preprocessor are presented to us as inputs that we cannot adjust, and we are asked to rely on the underlying assumptions without questioning or modifying them.
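To make the minimum spanning tree reference in the preceding paragraph concrete, the following sketch shows a generic minimum spanning tree computation (Prim's algorithm, in Python) over a handful of hypothetical customer coordinates. This is not HM 5.3's actual routing logic, which layers clustering, rectilinear routing, and engineering rules on top of the basic spanning-tree idea; it simply illustrates how a spanning tree connects a set of points with the least total length:

    # Generic minimum spanning tree sketch (Prim's algorithm) over hypothetical
    # customer coordinates. Illustrative only; not HM 5.3's routing code.
    import math

    customers = [(0, 0), (50, 20), (120, 10), (60, 90), (200, 40)]  # hypothetical (x, y) in feet

    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def minimum_spanning_tree(points):
        """Return MST edges as (i, j) index pairs and the total connecting length."""
        in_tree = {0}
        edges, total = [], 0.0
        while len(in_tree) < len(points):
            # Pick the shortest edge connecting the tree to a point outside it.
            i, j = min(
                ((a, b) for a in in_tree for b in range(len(points)) if b not in in_tree),
                key=lambda e: distance(points[e[0]], points[e[1]]),
            )
            in_tree.add(j)
            edges.append((i, j))
            total += distance(points[i], points[j])
        return edges, total

    edges, total_length = minimum_spanning_tree(customers)
    print("MST edges:", edges)
    print(f"Total connecting length: {total_length:.1f} feet")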

Therefore, we will rely on neither model to set SBC-CA's permanent UNE rates, but we will instead split the difference obtained after we run both models with our chosen inputs.

The compromise resolution we will use in this case is based on runs of the HM 5.3 and SBC-CA models where we have set as many inputs as possible at the same levels. The reasoning behind our chosen input levels is described at length in the Modeling Inputs Section VI below. Here, we will briefly summarize which inputs were used for the two model runs that we use as endpoints for our UNE ratesetting zone. The inputs that we varied for our run are the following:

Cost of Capital: We modified both models to use an input assumption of a 9.9% cost of capital. Also, we modified the tax rate in HM 5.3 to 40.75% to match the SBC-CA models.

Asset Lives: The SBC-CA models were adjusted to match the prescribed asset lives proposed by DOD/FEA and used in HM 5.3.

IDLC/UDLC: We adjusted both models to assume a configuration of 75% IDLC, and 25% UDLC.

Structure Sharing: In the HM 5.3 model, we used structure sharing levels from the FCC Inputs Order, and we assumed 55% sharing of the distribution and feeder network. In the SBC-CA models, we were not able to modify SBC-CA's proposed structure sharing percentages because we could not determine the percentages assumed by SBC-CA.

Plant Mix: We modified HM 5.3 to use SBC-CA's plant mix assumptions.

Labor Rates:

a) HM 5.3 is adjusted where possible to use the proprietary loaded labor rate from SBC-CA's models. This rate applies to Copper and Fiber OSP Technician, Engineering Labor rate, and EF&I per hour. Adjustments were made to labor rates for wire center terminal investment, customer premises fixed investment, pole labor, NID labor, copper cable manhole investment, fiber pullbox investment, and aerial drop placement.

b) Crew sizes in HM 5.3 were adjusted for cable placing and splicing, where possible, to add one person (i.e. a crew of 1 was increased to 2, a crew of 2 increased to 3).

c) There were no changes to the labor rates assumed in the SBC-CA models.

Fill Factors: In the SBC-CA models, achieved fill factors were adjusted to the levels in the table below. In HM 5.3, the relevant fill inputs (e.g., cable sizing factors for distribution plant) were adjusted so that the resulting "achieved fill"41 matches the same levels:

a) Loop:

Copper Distribution: 51.6%
Fiber Feeder: 79.6%
Copper Feeder: 76%
DLC Common Equipment: 70%
DLC Plug-In Equipment: 75%
Premises Termination: No adjustment from levels as filed
SAI: 67.8%

b) Switching: We modified both the SBC-CA model and HM 5.3 to assume an 82% achieved fill for both analog and digital switches.

Crossover Point: HM 5.3 was adjusted to assume a fiber/copper crossover point of 12,000 feet for analog loops. There were no changes to the crossover assumptions in the SBC-CA models.

DLC costs: SBC-CA's LoopCAT model was adjusted to lower the EF&I for DLC installation to the average levels shown from a recent sample of 50 SBC-CA DLC installations.42 HM 5.3 does not use an EF&I factor, so instead, we used an average of actual Remote Terminal and CEV installation costs from the same sample of 50 SBC-CA installations.

Pole Spacing: We modified HM 5.3 to assume pole spacing of 150 feet for all density zones of the distribution network. (See SBC-CA/McNeil, 2/7/03, p. 38.)

Drop Terminal Investment: We modified HM 5.3 to assume 85% buried drop terminals and 15% aerial, to match the percentages of buried and aerial drops in SBC-CA's models. (See SBC-CA/Tardiff, 2/7/03, p. 76.)

Cable Prices: We modified HM 5.3 to use copper and fiber cable prices used by the FCC, based on criticisms by SBC-CA witness Tardiff. (SBC-CA/Tardiff, 2/7/03, p. 39.) JA provided these cable prices in documents supporting HM 5.3. (See JA/Klick Declaration, 10/18/02, Attachment JCK-2 pp. 7-10.) The copper and fiber cable prices were not modified in the SBC-CA models.

Switch Vendors: We modified HM 5.3 to base the switching investment per line on a weighted average of Lucent and Nortel prices only. Siemens was removed from the switch vendor mix assumed in HM 5.3, as explained in Section VI.J.1 below. There were no changes to the SBC-CA models in this area.

New vs. Growth: We adjusted both models to assume 40% of lines are purchased at the "new" line price, and 60% at the "growth" line price. This matches the mix of new and growth lines that was used in the prior OANAD proceeding. (D.98-02-106, Conclusion of Law 32, mimeo at 104.) We also removed "other replacement costs" from SBC-CA's SICAT model.

Switch Rate Structure: We ran both models assuming a flat-rate for switching as proposed by JA. This means that 100% of switching costs are allocated to port and there are no usage rates.43 We also calculated a usage-based rate for reciprocal compensation purposes, based on a 70%/30% split of traffic sensitive and non-traffic sensitive costs.

Other SICAT Changes: We deleted per month white page listing expenses from the port cost study, based on statements by SBC-CA witnesses Lundy and Silver that this should be removed. (SBC-CA/Lundy, 3/12/03 p. 46; SBC-CA/Silver, 3/12/03, p. 4.) Also, we adjusted the concentration rate of lines/trunk from 2:1 to 4:1 so that SICAT was consistent with LoopCAT.

Features: We modified the SBC-CA Model to include any identified feature hardware costs in the port rate. Using SBC-CA's support materials, we calculated total hardware costs for 9 features. We then assumed that an average customer would use 3 of these 9 features, so we added one-third of this total cost to the port cost (see the sketch following this list of inputs). There were no changes to HM 5.3 regarding feature costs.

Expenses: HM 5.3 was adjusted to remove the presumption that SBC-CA expenses would track those of Verizon California. In other words, we used SBC-CA's current (2001) E/I ratios without adjustments based on comparisons with Verizon. In the SBC-CA models, we removed the inflation adjustment to expenses, under the assumption that productivity increases offset inflation adjustments.

Interoffice Rates: We adjusted the fiber fill factor in SBC-CA's SPICE model to 85%, as proposed by JA. SBC-CA proposed fill factors for SPICE based on its current utilization levels, which SBC-CA contends are forward looking. SBC-CA's proposed fill factors are significantly below the levels used in HM 5.3 and used by the FCC in its own modeling. (JA/Mercer-Murphy, 2/7/03, paras. 68-72; See also Inputs Order, para. 208.)

Shared and Common Cost Markup: Both models include a 21% markup, as adopted in D.02-09-049.
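As noted in the Features item above, the feature hardware allocation is a simple calculation. The sketch below (in Python, with hypothetical per-line hardware costs standing in for SBC-CA's proprietary figures) illustrates that arithmetic:

    # Sketch of the feature hardware allocation described in the Features item
    # above. The nine per-line hardware costs below are hypothetical placeholders;
    # the actual costs come from SBC-CA's proprietary support materials.

    feature_hardware_costs = [0.10, 0.05, 0.08, 0.12, 0.03, 0.07, 0.04, 0.06, 0.05]

    ASSUMED_FEATURES_PER_CUSTOMER = 3   # average customer assumed to use 3 of the 9 features

    total_hardware_cost = sum(feature_hardware_costs)
    # Allocate one-third of the total (3 of 9 features) to the monthly port cost.
    port_cost_adder = total_hardware_cost * ASSUMED_FEATURES_PER_CUSTOMER / len(feature_hardware_costs)

    print(f"Total feature hardware cost: ${total_hardware_cost:.2f} per month")
    print(f"Adder to the port cost: ${port_cost_adder:.2f} per month")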

Appendix A shows the results of our run of the SBC-CA Models and HM 5.3 with our chosen inputs, and the resulting average of these two model runs. The column in Appendix A showing the average of the SBC-CA and HM 5.3 Model runs indicates the permanent UNE rates for SBC-CA that we adopt in this order.

20 See Section V.A.3 below.
21 See Sections V.A.1.a, V.A.3, and V.A.4 below for a detailed discussion of these input problems.
22 ARMIS refers to the FCC's "Automated Reporting Management Information System" that was initiated in 1987 for collecting financial and operational data from the largest carriers and is described further at http://www.fcc.gov/wcb/armis.
23 "Structure sharing" generally refers to the percentage of poles and conduit that are shared with other utilities, or between different portions of SBC-CA's network.
24 See Federal-State Joint Board on Universal Service (CC Docket No. 96-45), Tenth Report and Order, FCC 99-304, 14 Rcd 20156 (rel. Nov. 2, 1999) ("Inputs Order").
25 See, e.g., JA, 2/7/03, pp. 26-27 and 29; JA/Declaration of Donovan/Pitkin/Turner, 2/7/03, pp. 65-67.
26 As we discuss in Section V.B.7 below, we recognize that the FCC uses its Synthesis Model for universal service purposes, but it also relies on it for cross-state comparisons of forward-looking UNE costs. Thus, we find it reasonable to look to the FCC's Synthesis Model and the Inputs Order for guidance on some modeling inputs.
27 We note that JA filed a separate application, A.04-03-031, on March 12, 2004, nominating the shared and common cost markup for review in 2004.
28 Project Pronto refers to SBC-CA's capital expenditures to add loop plant, circuit equipment, and other facilities to provision advanced data services like DSL, which are provided by SBC-CA's unregulated affiliate, SBC Advanced Services Inc. (ASI).
29 TBO refers to the accrual for post-retirement benefit expenses for SBC-CA's retirees. Effective in 1991, the rules for accounting for post-retirement benefits changed due to Statement of Financial Accounting Standards (SFAS) No. 106, Employers' Accounting for Post-retirement Benefits Other than Pensions. SBC-CA adopted SFAS 106 for regulatory purposes on January 1, 1993. The TBO was established to account for the anticipated future retiree medical costs already earned as of that date, but not yet paid. (See SBC-CA/Cohen Declaration, 3/12/03, p. 15.)
30 According to SBC-CA, the TPI is obtained from C.A. Turner Utility Reports. (SBC-CA/Cohen, 10/18/02, p. 6.) The CPI-W is defined as the Consumer Price Index for Urban Wage Earners and Clerical Workers. (SBC-CA/Cohen, 3/12, p. 29.)
31 See Section VI.D for a complete discussion of the DLC inputs used in the Commission's model runs.
32 The limitation of 6,541 lines is based on a maximum underground vault, or "CEV," sized to hold 8,064 lines, of which 20% is reserved for growth.
33 We note that, similar to the SBC-CA models, HM 5.3 can also be criticized for how it handles multiple dwelling units. Although HM 5.3 clusters customers based on current population density characteristics, it does not necessarily model sufficient equipment to serve high density locations. This is discussed in detail in Section VI.E.7 where we address the fill factor for premises termination equipment.
34 The FCC has itself noted, in the context of its own cost modeling for universal service purposes, that:
35 Of course, we could have asked JA to re-run its clusters with our assumptions, but this would have required a reopening of the record and an opportunity for all parties to comment on the new model runs. Given the other flaws we identified in HM 5.3 and the SBC-CA Models, we did not consider this a valuable use of time.
36 For example, the HM 5.3 "Inputs Portfolio" lists numerous investment inputs relating to line cards that are selected based on "vendor documentation." (See JA/Mercer, 10/18/02, RAM-5, pp. 85-90.)
37 Indeed, SBC-CA contends that HM 5.3 has not depicted proper loop lengths, but it is unclear how SBC-CA can know this for sure since its own model does not use actual loop lengths and its data sources do not appear to provide this information.
38 The FCC uses SynMod for universal service support purposes and for cross-state comparisons of forward-looking UNE costs. For example, the FCC used SynMod with its default input values to assess the reasonableness of UNE prices when considering SBC's 271 applications in Kansas and Oklahoma in 2001 and California in 2002. (See Joint Application of SBC Communications Inc., Southwestern Bell Telephone Company and Southwestern Bell Communications Services, Inc. d/b/a Southwestern Bell Long Distance for Provision of In-region, InterLATA Services in Kansas and Oklahoma (CC Docket 00-217), Memorandum Opinion and Order, FCC 01-29 (rel. Jan. 21, 2001), paras. 83-84 ("Kansas 271"); see also Application by SBC Communications Inc., Pacific Bell Telephone Company, and Southwestern Bell Services Inc., for Authorization to Provide In-Region InterLATA Services in California (WC Docket 02-306), Memorandum Opinion and Order, FCC 02-330 (rel. Dec. 19, 2002), para. 64 ("SBC California 271 Order").)
39 JA witness Bryant modified eight inputs: 1) copper cable installed investment, 2) DLC equipment, 3) protection block/NID, 4) outside plant maintenance factors, 5) depreciation rates, 6) cost of capital, 7) maximum copper cable distance, and 8) switch investment. (JA/Bryant, 3/12/03, p. 5.)
40 Review of the Commission's Rules Regarding the Pricing of Unbundled Network Elements and the Resale of Service by Incumbent Local Exchange Carriers, WC Docket No. 03-173, Notice of Proposed Rulemaking, FCC 03-224 (rel. Sept. 15, 2003), para. 6 ("TELRIC NPRM").
41 Achieved fill is defined in Section VI.E.1 below.
42 The actual DLC costs and resulting factors are proprietary to SBC-CA, but are contained in JA, 8/1/03, Exhibit C-4, p. 1 and C-5, p. 1.
43 During the process of calculating a flat monthly port rate, both models exhibited extraneous investment of less than 10 cents, which was manually added to the port rate. (See Appendix A, note 1.)
