Cable operators tend to think narrowly about the HFC (hybrid fiber/coax) local access world when it comes to emerging telecom standards and services. Broadband local access is one of the most difficult and expensive issues facing the evolution of the public network. It's also been the most visible concern throughout the 1990s.

The migration battle between the reliable, bandwidth-constrained, interactive network owned by the local exchange carrier and the cable operator's broadband broadcast network (of dubious reputation) has seemingly framed all of the relevant telecommunications issues. Much of this "confrontation" has taken place within a political framework: the justification for either deregulation or re-regulation of a variety of traditionally sacred grounds formerly occupied by monopolists.

Now, other forces are gathering. Driven by both the geometric rise in demand for data communications and the consolidation of some very large network operators into, well, very, very large operators, the whole foundation of the public network is shifting. Neither cable nor the local exchange companies will control this shift. Their future will be determined by their ability to adapt to it.

Now is the time to understand the forces at work that ultimately will define the future of the network.

Carrier convergence

A new company engaged in providing telecommunications carrier services has essentially two options: either lease facilities from established carriers for "resale," or build new facilities. History following the AT&T divestiture demonstrates the difficulty of surviving as a "non-facilities-based" provider. Most of these "resellers" were squeezed out when the large facilities-based operators went to war.

There may be a message here for the current class of ISPs (Internet service providers) that enjoyed success in the public financial markets. Today, there seems to be plenty of money for start-up companies willing to take on the established telecommunications Goliaths.

Optical fiber capacity

Start-up competitive telecom companies often combine with one another and become the very big guys mentioned earlier. WorldCom and ICG are examples of this. Other times, companies start with enough fanfare and capital to become big or influential from the beginning. Level 3 and Qwest are excellent examples of this.

Companies that are building new facilities inevitably stop and consider where the business is going-not where it has been. The current public network has evolved over the past century, driven by the analog voice application. This evolution has led to a switch-based architecture, where bandwidth is the most precious commodity.

Time division multiplexing electronics and large switching networks are utilized to conserve point-to-point bandwidth and to facilitate point-to-point connectivity. The new guys look around and see that the world has changed. The embedded architecture worked well when voice was the predominant application. But today, it is drowning in data packets as Internet, intranet, extranet and "othernet" applications have swelled and overwhelmed public networks. Just as importantly, the nature of the usage has changed, negating the premise upon which the current public switched telephone network was designed.

For the new guys, it's pretty simple. They will design their architecture around current data applications. Because voice represents a relatively low bandwidth requirement, it will be fit into a data-based architecture more easily than if it were the other way around. In fact, voice has been digitally switched and transported for many years.
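A quick back-of-envelope calculation shows why voice fits so easily into a data architecture. The figures below (G.711 coding at 64 kbps, 20 ms of audio per packet, a 40-byte IP/UDP/RTP header stack) are illustrative assumptions for a packetized voice channel, not numbers from this article:

```python
# Bandwidth sketch for one packetized voice channel.
# Assumptions (illustrative): G.711 PCM at 64 kbps, 20 ms of audio
# per packet, 40-byte header stack (IP 20 + UDP 8 + RTP 12).

CODEC_RATE_BPS = 64_000
FRAME_MS = 20
HEADER_BYTES = 40

payload_bytes = CODEC_RATE_BPS / 8 * FRAME_MS / 1000   # bytes of audio per packet
packet_bytes = payload_bytes + HEADER_BYTES            # on-the-wire packet size
packets_per_sec = 1000 / FRAME_MS                      # packet rate
total_bps = packet_bytes * 8 * packets_per_sec         # line-rate requirement

print(f"payload/packet: {payload_bytes:.0f} B, total: {total_bps / 1000:.0f} kbps")
```

Even with header overhead, a voice channel needs on the order of tens of kilobits per second, a rounding error against the multi-megabit flows a data network is built to carry.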

However, the current time division multiplexing (TDM) Sonet transport is much different from the packet- or cell-based architecture envisioned by the new telecom players. It is a difference that fundamentally changes the public network forever.

The local area networking (LAN) companies figured this out years ago. Asynchronous transfer mode (ATM) standards were developed to provide a flexible transport container for voice, data and video applications. ATM is a fixed-length cell that can be operated over a variety of "physical layer" transport systems, including the current Sonet TDM platforms utilized throughout the public network.

ATM was slow to catch on, but continued to develop from a "standards" perspective. Its relationship with the Sonet TDM platform and its ability to provide various guaranteed levels of service are two of the most important attributes that ATM provides today.

Meanwhile, in a parallel universe, the Internet grew from its beginnings as a U.S. government research project into an indispensable tool for business and certain individuals. The Internet developed around a variable-length packet standard called Internet Protocol (IP). Hardware companies, such as Cisco, left the isolated LAN world and developed faster and larger routers to place in what has become a global virtual network running on IP.

So, the new guys do indeed have a problem. The volume of Internet traffic and the boom of Internet-related applications would suggest IP as a core format for the new network. However, Internet Protocol was never designed to provide the prioritization and application-oriented features critical to a public network carrier and its clients.

On the other hand, ATM was designed around a connection-oriented configuration and is intended to operate within the current Sonet point-to-point infrastructure, not the "connectionless" world of abundant bandwidth that the future has promised. Furthermore, ATM is less efficient with its fixed-length cells. The header-to-payload ratio is higher, particularly in cells that are not completely filled with payload. This efficiency problem is exacerbated when IP is encapsulated into ATM for Sonet transport.
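The fixed-cell overhead is easy to quantify: an ATM cell is 53 bytes, 5 of which are header. The sketch below also shows how a partially filled last cell worsens the ratio when a packet is segmented into cells (the 1,500-byte packet size is just an example; AAL5 trailer bytes are ignored for simplicity):

```python
# ATM "cell tax": 53-byte cells carrying 48 bytes of payload each.
# Segmenting a packet into cells pads the last cell, so small or
# oddly sized packets pay a steeper overhead. (AAL5 trailer ignored.)
CELL, HEADER, PAYLOAD = 53, 5, 48

def atm_overhead(packet_bytes: int) -> float:
    """Fraction of transmitted bytes that is not user payload."""
    cells = -(-packet_bytes // PAYLOAD)   # ceiling division
    sent = cells * CELL
    return 1 - packet_bytes / sent

print(f"header alone:  {HEADER / CELL:.1%}")    # ~9.4% best case
print(f"1500 B packet: {atm_overhead(1500):.1%}")
print(f"49 B packet:   {atm_overhead(49):.1%}")
```

A full cell loses about 9.4 percent to the header; a 49-byte packet, which spills one byte into a second, mostly empty cell, loses more than half the transmitted bytes to overhead.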

The current advantages that ATM has over IP in providing defined quality of service (QoS) and its connection-oriented protocol (closer to the current switch-based public network) will erode over time. Standards, protocols and equipment will ultimately develop to meet customer demand.

There are two developing IP standards important for this discussion. Differentiated Services, known as DiffServ, establishes priority classes that are recognized by routers throughout the network. This would seem to provide a simple mechanism to identify time-sensitive, latency-sensitive traffic, such as voice, and route it accordingly. It is neither as complicated nor as complete as ATM's virtual circuit capability, but it's getting there.
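Concretely, DiffServ works by marking a six-bit code point (DSCP) in the IP header that routers can act on hop by hop. On most systems an application can request such a marking itself; this sketch sets the Expedited Forwarding code point on a UDP socket. Whether routers along the path honor it is a network policy decision, and the details here are a general illustration rather than anything specified in the article:

```python
import socket

# DiffServ marks a 6-bit code point (DSCP) in the IP header's DS field.
# Routers that implement DiffServ queue packets per class at each hop.
EF_DSCP = 46                 # Expedited Forwarding: low-loss, low-latency class
TOS_VALUE = EF_DSCP << 2     # DSCP occupies the top 6 bits of the old ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Packets sent on this socket now carry DSCP 46 in their IP headers.
print(f"DS field set to 0x{sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS):02x}")
sock.close()
```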

The second initiative is MPLS, or multi-protocol label switching. This provides identification and routing markers, based on grouping all IP packets within a session (e.g., a single voice communication) into a single flow. MPLS is one way that IP and ATM can be married. Established "flows" will be mapped into ATM virtual circuits or Sonet TDM channels based on the application or level of guaranteed service.
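The flow grouping MPLS performs can be illustrated in miniature: classify packets by their address/port/protocol 5-tuple, assign each new flow a label at the network edge, and forward subsequent packets by label lookup alone. Everything below (field names, label values) is a simplified teaching sketch, not the MPLS wire format:

```python
# Toy label switching: group packets into flows by 5-tuple, then
# forward by label lookup instead of re-examining every header.
from typing import Dict, Tuple

FiveTuple = Tuple[str, str, int, int, str]  # src, dst, sport, dport, proto

class LabelEdgeRouter:
    """Assigns a label to each new flow, loosely modeling what an
    MPLS ingress label edge router does (greatly simplified)."""

    def __init__(self) -> None:
        self.labels: Dict[FiveTuple, int] = {}
        self.next_label = 16          # labels 0-15 are reserved in MPLS

    def classify(self, pkt: FiveTuple) -> int:
        if pkt not in self.labels:    # first packet of a new flow
            self.labels[pkt] = self.next_label
            self.next_label += 1
        return self.labels[pkt]       # later packets: simple lookup

ler = LabelEdgeRouter()
voice = ("10.0.0.1", "10.0.9.9", 5004, 5004, "udp")
web = ("10.0.0.1", "10.0.9.9", 40000, 80, "tcp")

print(ler.classify(voice))   # 16: first flow seen
print(ler.classify(web))     # 17: second flow
print(ler.classify(voice))   # 16 again: same flow, same label
```

Once a flow carries a label, mapping it onto an ATM virtual circuit or a Sonet TDM channel becomes a table operation rather than per-packet header analysis, which is exactly the marriage of IP and ATM described above.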

The development of IP capabilities to support voice, traffic flow and other network management applications is central to the cable operator's desire to provide IP telephony through an integrated data network. This is an extremely worthwhile goal, as it puts the telephony application into the same subsystem as the Internet data business. There will be, of course, additional incremental cost associated with telephony. However, an integrated voice/data platform will change the economics significantly.

Further refinement and standardization at the IP level will also affect cable's ability to deliver a truly interoperable modem platform. Modem interoperability will be much more feasible if cable can focus on its part of the deal, the physical layer, and adapt to the larger forces at work.

In fact, despite the good results and good intentions of the cable industry's recent standards initiatives, the industry will have no choice but to adapt. The frustrating thing for cable operators and the new telecom guys is that standardization and technology development take time, and neither group can wait.

Where do we go from here?

The next two to three years in the world of telecommunications will be fascinating. The cable industry should closely watch the new guys-companies like Qwest and Level 3-and the incumbents like AT&T. Their networks will eventually plug into whatever the cable industry has to offer at the local level. What they do will greatly impact the design and functionality of cable equipment at the home and in the headend. What they do will also impact the software and operational support activities that cable operators must develop.

Today, in a world not quite ready for core IP, there are interim steps being taken. Qwest is taking advantage of the established benefits of Sonet and ATM, along with what some consider one of the two key technology enablers for the global IP networks of the future.

The first enabler is fiber optic bandwidth. The capacity of an individual optical fiber has increased steadily since the early 1980s, initially as a result of advances in electronics and laser technology. Capacity was improved primarily by putting more ones and zeroes into a single stream. Putting more optical carriers into a single fiber, each with larger data streams, has recently led to geometric capacity growth (see Figure 1).
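The two axes of growth multiply: faster serial streams (TDM) times more optical carriers per fiber (WDM). The line rates and channel counts below are illustrative late-1990s values, not figures quoted in the article:

```python
# Aggregate fiber capacity = serial line rate x number of WDM carriers.
# Illustrative values: OC-48/OC-192 line rates, typical DWDM channel counts.

tdm_rates_gbps = [2.5, 10]     # OC-48 and OC-192 line rates
wdm_channels = [1, 16, 40]     # single carrier vs. dense WDM systems

for rate in tdm_rates_gbps:
    for ch in wdm_channels:
        print(f"{ch:>3} carriers x {rate:>4} Gbps = {ch * rate:>5.1f} Gbps per fiber")
```

Moving from one 2.5 Gbps carrier to forty 10 Gbps carriers is a 160-fold jump on the same glass, which is the "geometric capacity growth" the figure illustrates.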

Companies like Ciena pioneered optical wavelength division multiplexing (WDM) to avoid building new point-to-point fiber optic facilities. As optical capacity exploded and the cost-per-bit plummeted, network planners began to consider the architectural implications of this trend in addition to the Band-Aid applications.

Qwest has two problems with the implementation of its IP network today. The first, which involves the QoS and prioritization issues discussed before, can be mitigated, or even eliminated, by mapping IP into established formats with the necessary protocol support. This approach will involve both routing (ATM) and point-to-point (Sonet) platforms. The second problem is the delay or latency created through packet formation and routing.

A large intricate web of routers can be designed to move packets between points. However, each router adds a certain amount of latency, and each time a packet is reformed, even more latency results. Packets swimming their way through five or six routing stations will accumulate more latency than is acceptable for time-sensitive applications, such as voice.
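The accumulation is straightforward to estimate: each hop contributes serialization delay plus routing and queuing delay, and the total scales with hop count. The numbers below (a small voice-sized packet, an OC-3 line rate, an assumed 5 ms of per-hop processing and queuing) are illustrative assumptions only:

```python
# Per-hop delay = serialization + routing/queuing; totals scale with hops.
# All figures are illustrative assumptions, not measured values.

PACKET_BITS = 200 * 8            # a small voice-sized packet
LINK_BPS = 155_000_000           # OC-3 line rate
PER_HOP_PROC_MS = 5.0            # assumed routing + queuing delay per hop

serialize_ms = PACKET_BITS / LINK_BPS * 1000   # time to clock packet onto link

for hops in (2, 6):
    total_ms = hops * (serialize_ms + PER_HOP_PROC_MS)
    print(f"{hops} hops: ~{total_ms:.1f} ms accumulated")
```

Against a commonly cited one-way budget of roughly 150 ms for conversational voice, six busy hops can consume a meaningful share before the traffic even leaves the core, which is precisely why fewer routing stages matter.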

Qwest plans to solve this problem in 1999 by using an optical brute force approach that takes advantage of the fiber optic bandwidth enabler. With a minimum reliance on ATM as a core strategy, Qwest will utilize Sonet TDM to route IP packets on a point-to-point basis. These are point-to-point Sonet platforms that utilize optical WDM and leading-edge lasers/electronics (10 Gbps) to provide massive and fairly granular transport capability.

In essence, Qwest trades less efficient but lower cost optical bandwidth for fewer router stations (less latency). This is really inverse logic compared to the current PSTN where TDM routing and circuit switching are employed to achieve maximum point-to-point transmission efficiency. In the author's view, this is an important distinction separating the new guys from their older brothers.

The long-term evolution of the Qwest and Level 3 networks could follow the path of further IP/ATM/Sonet integration, but probably not for long.

The second technology enabler involves the ability to code intelligence directly onto an optical carrier.

Templex is a small company in Eugene, Ore., that is working with several patents involving a process it calls TASM (temporally-accessed spectral multiplexing). The TASM technology utilizes a form of optical code division multiple access (O-CDMA). This area of optical research is starting to gain attention from a variety of interests, particularly telecommunications transport entities.

Templex can code optical signals through a passive grating and recover the information through a reciprocal process. The implications are staggering in a number of not-so-obvious ways. First, the ability to operate several phase-shifted optical carriers within a single wavelength will significantly advance the first enabler (lower-cost optical bandwidth). Second, the gratings can be developed to compensate for dispersion, with the potential for optical pulse "reshaping" and an all-optical signal regeneration capability.

The third implication is the important one for this discussion. The ability to optically encode intelligence will provide a key bridge to the future global IP network by facilitating many of the control and routing attributes in the optical domain. This will wreak havoc with the traditional OSI network layer model, as the physical transport layer functionality becomes inseparable from the higher network layers.

This separate, layered functionality currently takes place in the electrical domain and essentially defines the need for Sonet and, to some extent, ATM in the public network. Companies like Qwest could adapt overnight to an all-optical IP architecture. In fact, they have already crossed the line with their bandwidth-centric vs. switch-centric view of the architecture.

Templex, or a similar company, will achieve success only if it can produce cost-effective alternatives to the current network models. But this is a huge target, including multimillion-dollar circuit switches, ubiquitous digital cross-connect platforms, TDM platforms, signal regeneration platforms, routers, bridges and more.

Consolidation of the hardware guys

This is no great revelation. The "big guys" in the public network transport and LAN/WAN hardware business have already figured out the direction and have begun to seek their long-term positions. The classic example is the Bay Networks/Nortel deal. Lucent, Tellabs, Alcatel and others have also been active in acquiring new capabilities that provide answers in either the optical bandwidth or packet processing areas.

The global consolidation of network operators continues to get most of the attention, but the consolidation of equipment suppliers may have at least as great an impact on the pace of network and advanced services development from this point.

There is much at stake for the equipment suppliers, especially as the future trend shifts fundamentally away from what they currently sell. But the momentum for such a change is overwhelming. Innovative companies will spring up to offer new potential that cannot be ignored.

The point of this article is not to suggest that cable operators have no say in what will eventually occur in their networks. In fact, they have ultimate control. The HFC platform has extraordinary potential, but it will be realized only to the extent that the cable operator develops it.

In many ways, the cable operator faces the same issues as the telecom new guys. They need to do something in the absence of a clear, long-term engineering plan. Like the new guys, cable operators will deploy interim systems that will be cost-justified, if not optimal. The flexibility and agility of the cable operator will be challenged. The real danger is that the operators-or new players that become cable operators-believe that they have the long-term plan figured out and proceed dogmatically and obliviously down the road. They may get there, look around, and find nobody cares. The two questions that cable operators must ask themselves are:

  1. How can we cost-effectively deploy and profitably operate a particular new service?
  2. How can we keep our technical and operational options open vis-a-vis the rest of the world?

The second question will provide more of a challenge. Up until now, the cable industry has not listened to anything from the outside world, unless forced to. After all, the cable plant was a "closed" system, delivering non-essential entertainment video. Recently, cable has been reborn as far as standards are concerned.

Important success with the MPEG standard led to the cable modem DOCSIS standard, OpenCable, and others that will surely follow. Now, the cable industry is mixing it up with the very large players from the computer and telecom worlds. These companies have a great deal more at stake and have infinitely more experience in this arena that is often as political as it is technical.

Cable must participate as things move forward. There is a balance between leading and adapting that the cable industry must learn. It appears that cable's core strength is format-insensitive bandwidth at the local access level. We cannot forget this. If history is any indicator, customers, particularly business customers, will ultimately define the feature sets that become the winning standards. This is the vortex where the telecom new guys and their hardware suppliers are currently engaged.

Cable must understand this process and focus on the elements that mean the most to it in the long term. I suspect that these are the HFC physical layer and data encoding. Both represent, ultimately, how many bits can be delivered through a given spectrum.

The cable industry enjoys certain key cost advantages vis-a-vis the LECs with respect to the broadband delivery of advanced services to the home. The only thing that should matter is that this advantage is exploited and demonstrated to the world over the next five years.

Cable operators cannot wait for the detailed roadmap, nor can they control what that map will look like. Other entities, characterized by the telecom new guys, will largely define the local access opportunities. There will be some areas that are critical for cable and are likely to be ignored by the telecom entities. This is where cable should focus its efforts, while staying flexible and adaptable regarding everything else.