
Understanding Mobile Backhaul

Mon, 03/31/2008 - 8:40pm
Randy Eisenach, Market Development Director, Fujitsu Network Communications Inc.

Wireless providers have strict service requirements

As wireless networks evolve from voice-only services to broadband data services, there is an increasing need for transport bandwidth to cell sites. Using their widely deployed deep fiber networks, cable operators are uniquely positioned to provide high-capacity transport services for this mobile backhaul application. At the same time, wireless service providers have very specific transport requirements for their services and understanding these requirements is key to choosing the right technology and network for the application.

An understanding of the wireless 2G/3G standards, cell site capacity requirements, and performance metrics (latency, jitter, availability) will ensure cable operators choose the right technology, network, and architecture to implement a successful wireless backhaul business strategy.

Figure 1: Wireless interface standards

Wireless networks have historically relied on TDM transport services for interconnection between cell sites and base station controllers (BSCs) and mobile switching centers (MSCs). That raises two questions: why do wireless operators rely on TDM services, and why haven't they shifted to an Ethernet/IP infrastructure?

The wireless industry developed standard interfaces for interconnecting different functional devices within their networks, such as the base station transceiver (BTS) to BSC to MSC. These interfaces go by an alphabet soup of names including the A, Abis, Gb, and Iu interfaces, as shown in Figure 1.

These Abis, A, Gb, and Iu interfaces have historically defined the backhaul transport requirements, since they specify the entire protocol stack, including the physical layer 1 implementation. The wireless industry’s heavy reliance on T1 transport services is defined by and required by these industry specifications.

While 3G/UMTS specifications have recently added native Ethernet interface support, 3G/UMTS equipment supporting native Ethernet interfaces is not expected to be available and deployed until mid-2008.

Even so, there are approximately 155,000 existing cell sites in the U.S., all with T1 TDM interfaces on their base stations. For the near term, T1 TDM transport to these cell sites is the primary business opportunity for cable operators.

WIRELESS CAPACITY REQUIREMENTS
Given the wireless industry’s historical reliance on T1 circuits, a good understanding of the actual capacity requirements is critical to any mobile backhaul service. Today, most cell towers are serviced by one to four T1s, equivalent to 1.5 Mbps to 6 Mbps. The addition of 2.5G and 3G data services will increase the need for more bandwidth to cell sites, but the requirements are still relatively modest.

Figure 2: Performance metrics [2,3].

The amount of bandwidth required at a cell site is constrained by two factors: 1) amount of wireless spectrum available, and 2) spectral efficiency of the wireless interface.

Wireless frequencies, or spectrum, are allocated and auctioned by the FCC, typically in 10- or 20-MHz blocks. Half of each block is used for transmitting signals and the other half for receiving signals. Frequency blocks are further subdivided into "channels" that are shared across cell areas.

Spectral efficiency is the amount of data (bits/second) that can be transmitted for every Hz of spectrum. Newer technologies, such as EDGE and HSDPA, utilize advanced modulation schemes that achieve higher data rates by squeezing more bits/s into the allotted amount of spectrum. These modulation schemes adjust dynamically depending on the channel conditions between the base station and handset (power, noise, interference, etc.).
There is a natural upper limit on cell bandwidth that is simply the amount of spectrum owned and available at a cell site multiplied by the spectral efficiency of the wireless interface (bits/s/Hz).

As an example, assume we have a three-sector cell site providing 2G GSM voice services over 1.25 MHz of spectrum, which is typical for voice services.

Performing the spectral efficiency and traffic engineering calculations results in approximately 1.2 Mbps required to support this service at the cell site, which translates to roughly a single T1 line.

Similarly, a GSM/EDGE application with 3.5 MHz of spectrum results in approximately the capacity of four T1s. An advanced 3G network based on the latest HSDPA technology with 5 MHz of spectrum requires approximately 13 T1s’ worth of bandwidth.

Most 2G and 2.75G EDGE cell sites are easily serviced by one to four T1s' worth of bandwidth. Even advanced 3G cell sites require only about 20 Mbps, or approximately 12 to 16 T1s, when fully utilized. While bandwidth requirements are increasing, they are still relatively modest.
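The T1 counts above follow directly from the upper-limit formula: available spectrum multiplied by spectral efficiency. A minimal sketch of that arithmetic is below; the per-technology spectral-efficiency figures are illustrative assumptions chosen to reproduce the article's rough numbers, not values taken from the wireless specifications.

```python
# Rough cell-site capacity estimate: spectrum (Hz) x spectral
# efficiency (bits/s/Hz), expressed as equivalent T1 circuits.
# The efficiency figures in `scenarios` are illustrative assumptions.

T1_RATE_BPS = 1_544_000  # one T1 carries 1.544 Mbps


def t1s_needed(spectrum_hz, bits_per_sec_per_hz):
    """Return (upper-limit capacity in bps, whole T1 circuits needed)."""
    capacity_bps = spectrum_hz * bits_per_sec_per_hz
    t1_count = -(-int(capacity_bps) // T1_RATE_BPS)  # ceiling division
    return capacity_bps, t1_count


scenarios = [
    ("2G GSM voice", 1.25e6, 1.0),  # ~1.25 Mbps -> ~1 T1
    ("GSM/EDGE",     3.5e6,  1.7),  # ~6 Mbps    -> ~4 T1s
    ("3G HSDPA",     5.0e6,  4.0),  # ~20 Mbps   -> ~13 T1s
]

for name, hz, eff in scenarios:
    bps, t1s = t1s_needed(hz, eff)
    print(f"{name}: {bps / 1e6:.1f} Mbps ~= {t1s} T1s")
```

The point of the exercise is the shape of the result, not the exact figures: even a fully loaded HSDPA sector with 5 MHz of spectrum lands in the low tens of Mbps, well within reach of a modest T1 bundle or a single Ethernet circuit.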

Another factor that determines the amount of bandwidth required at cell sites is the handsets themselves. Currently, only 15 percent of U.S. handsets are 3G capable [1]. Even the widely popular Apple iPhone operates over the EDGE network (384 Kbps) rather than the faster, more advanced 3G (HSDPA) network.

Bandwidth requirements will remain modest until there is a wider availability of additional spectrum and a wider adoption and deployment of 3G handsets, smart phones, and mobile PC cards, which in turn drive the need for higher capacities and throughputs at cell sites.

LATENCY, JITTER, AVAILABILITY MATTER
Based on industry specifications and capacity requirements, it is very easy to understand why wireless service providers have historically relied on T1 TDM circuits for cell site transport. There are several methods of transporting TDM services in their native format (SONET, CWDM, DWDM) or by converting the TDM services to Ethernet (circuit emulation service, or CES). Even with these newer options, many wireless service providers continue to rely on and require TDM services be carried over a native TDM transport infrastructure due to a number of performance factors. It’s important for cable operators to understand these performance issues when selecting the appropriate transport technology for their wireless backhaul networks.

With the advent of Ethernet and IP data networks supporting T1 CES, the question becomes whether these packet-based networks can offer the same performance levels as their TDM counterparts and whether these networks meet the expectations of wireless service providers. Wireless providers have strict performance metrics for latency, jitter, and availability, as shown in Figure 2.

While the Metro Ethernet Forum (MEF) specifications for these parameters are a bit "loose," it should be noted that many vendors offer carrier-grade Ethernet transport platforms that vastly exceed the generic MEF specifications. The overall message, however, is that TDM transport provides a very robust and capable method of transporting TDM services. Ethernet can accomplish the task, but there are performance trade-offs.

Figure 3: Hybrid TDM/Ethernet transport network.

CIRCUIT EMULATION DILEMMA
T1 circuit emulation provides a method for carrying T1 TDM services over an Ethernet network. For many service providers, T1 CES allows them to transition to all-packet networks, while still supporting legacy services. However, T1 CES has its own set of performance issues, which are not acceptable to many wireless service providers.

Circuit emulation involves a trade-off between latency (delay) and bandwidth efficiency. Delay through the network can be reduced, but at the cost of lower efficiency. Likewise, efficiency can be improved, but with longer delays. The trade-off is based on how many T1 frames are stuffed inside a single Ethernet frame. If a single T1 frame is transported in an Ethernet frame, the delay is very low, but the efficiency is poor because of the CES overhead bytes, Ethernet overhead bytes, preamble bytes, and interframe gap. The alternative is to stuff many T1 frames into a single Ethernet frame. This minimizes the impact of the overhead bytes; however, the latency is much longer because 4, 8, or 16 T1 frames' worth of information must be buffered before transmission.
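The trade-off can be sketched numerically. A T1 frame is 193 bits and arrives every 125 microseconds, so buffering N frames per packet costs N × 125 µs of delay before the packet can even be sent. The per-packet overhead figure below is an illustrative assumption (Ethernet header and FCS, preamble, interframe gap, plus a nominal CES header), not a figure from the article or the MEF specification:

```python
# Latency vs. efficiency trade-off for T1 circuit emulation (CES).
# A T1 frame is 193 bits arriving every 125 us; packing N frames into
# one Ethernet packet means buffering N x 125 us before transmission.
# Overhead sizes are illustrative assumptions: Ethernet header + FCS
# (18 bytes), preamble (8), interframe gap (12), CES header (8).
# Minimum-Ethernet-frame padding is ignored for simplicity.

T1_FRAME_BITS = 193
T1_FRAME_PERIOD_US = 125
OVERHEAD_BITS = 8 * (18 + 8 + 12 + 8)  # assumed per-packet overhead


def ces_tradeoff(frames_per_packet):
    """Return (buffering delay in us, bandwidth efficiency 0..1)."""
    delay_us = frames_per_packet * T1_FRAME_PERIOD_US
    payload_bits = frames_per_packet * T1_FRAME_BITS
    efficiency = payload_bits / (payload_bits + OVERHEAD_BITS)
    return delay_us, efficiency


for n in (1, 4, 8, 16):
    delay, eff = ces_tradeoff(n)
    print(f"{n:2d} frames/packet: {delay:5d} us buffering, {eff:.0%} efficient")
```

Whatever exact overhead figures a given CES implementation carries, the curve has the same shape: one frame per packet keeps delay near zero but wastes most of the wire on headers, while sixteen frames per packet approaches respectable efficiency at the cost of milliseconds of added buffering per hop.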

T1 circuit emulation services are typically 50 percent efficient, due to all of the overhead information transmitted with each Ethernet frame (CES header, Ethernet header, preamble, interframe gap). Many wireless service providers are uncomfortable with the latency, jitter, and efficiency issues related to T1 circuit emulation. For these wireless service providers, their insistence on carrying TDM services in native TDM format is very understandable based on these performance metrics.

THE MULTI-PROTOCOL SOLUTION
The reality is that we live in a multi-protocol world, and the networks that are built must be able to carry a wide array of Ethernet, TDM, and SONET services; in other words, a "fusion" of different end-user services. The debate shouldn't be about packet vs. TDM or Ethernet vs. SONET. None of these technologies is inherently "good" or inherently "bad"; they are simply methods for moving digital bits.

The issue is how to best support the embedded base of legacy services, which are traditionally TDM based, as networks evolve to be much more Ethernet/IP centric. The 2G/2.5G GSM and 3G UMTS networks that are currently deployed will remain an integral part of the wireless infrastructure for the next 15 to 20 years, so their T1 physical interfaces and transport requirements will be present for a very long time.

For wireless service providers who require TDM services to be transported in their native format, due to the latency, jitter, availability, and efficiency issues mentioned previously, a hybrid TDM/Ethernet platform provides the optimal wireless backhaul solution. Figure 3 shows an example of a hybrid TDM/Ethernet platform capable of supporting both TDM and Ethernet services in their native formats. These hybrid platforms allow flexible mixes of TDM and Ethernet services to support existing 2G GSM networks, 2.75G GSM/EDGE networks, and 3G UMTS/HSDPA networks, as well as future 4G base stations. In addition, these platforms allow a seamless evolution from all-TDM, through mixed TDM/Ethernet, to all-Ethernet services.

Many wireless service providers are highly averse to allowing their services to be transported over an Ethernet/IP infrastructure, especially a third-party network and third-party service provider. The advantage of a TDM/Ethernet hybrid architecture is that it can support customers who require a native TDM transport along with those customers who require a TDM/Ethernet mix.

For a cable operator this is critical, since most cell sites host two or three individual wireless carriers on the same tower. Providing a hybrid network architecture allows cable operators to pursue the backhaul business and revenue from all carriers at the cell site, without getting locked out due to the type of network or technology deployed.

References
1) BusinessWeek, "Telecom: Back from the Dead," June 25, 2007.
2) MEF 3, "Circuit Emulation Service Definitions, Framework and Requirements in Metro Ethernet Networks," sections 9.1, 9.2, 9.4, April 13, 2004.
3) GR-496-CORE, "SONET ADM Generic Criteria," section 3.3.5.5, Issue 1, Dec. 1998.
