
Building the headend of the future

Fri, 05/31/1996 - 8:00pm
Michael Lafferty

Computers have changed the way people work, play and even create, and the cable industry is certainly no exception. Yet, as the 20th century careens toward its inevitable conclusion, computer hardware and software applications have finally vaulted over the last bastion of hardwire, creating fundamental changes in headend design and operations.

And cable headends, as well as the people who design and run them, will never be the same.

As a result, cable engineers are faced with a vast array of new issues, technologies and pressures to design and build headends that will work now and a decade from now, when many predict the transition from analog to digital will be largely complete. (Take that, like all other broadband predictions over the last few years, with a handful of salt.) Yet in a technological atmosphere where today's computer advance is literally left in the dust by tomorrow's newest product announcement, many cable pros are initially confused, if not cowed, by the daunting challenge of bringing their headend operations and personnel into the computer age.

Of course, American ingenuity just loves a challenge, and cable professionals have never been accused of being dullards, especially when there's lots of new revenue to be realized.

The Alexandria model

Many believe the headend of the future first coalesced in Alexandria, Va., late last year. That's where Jones Intercable launched the nation's first passive, 750 MHz HFC network - a $35 million, fiber optic cable system with a backbone consisting of 10 fiber loops of counter-rotating signals. The system, which boasts "self-healing" capabilities, features 500 fiber nodes, each monitored 24 hours a day and backed up by its own power supply. Each fiber ring contains between 120 and 180 fibers, which means nearly 3,000 fiber terminations at Jones' showcase headend.

The plant itself covers just 28 square miles and passes 73,000 homes, MDUs and businesses. Nearly 40,000 subscribers are served by the system. Of course, the fact that Alexandria is one of the premier suburbs of the nation's capital and home to any number of the nation's legislators and staff doesn't hurt either.

The Alexandria operation was recently folded into a cluster of area operations and renamed Jones Communications. The new name, says the company, "signifies Jones' emergence as a comprehensive telecommunications company capable of providing a full slate of entertainment, information and communications services to its customers."

Much of Jones' claim to telecommunications fame is being generated from the hands-on experience it's getting through the Alexandria headend.

The headend itself features 88 Barco modulators and related headend equipment, including an expanded version of the company's Remote Control and Diagnostic System (RCDS). The RCDS Open System Architecture (ROSA) gives Jones personnel network-wide monitoring and control capabilities (of Barco and non-Barco network equipment) through a PC interface. The system continuously checks headend performance, generates status information and minor and major alarms, and automatically switches to backup modulators based on preset parameters.
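For illustration only, here is a minimal sketch (in Python) of that kind of threshold-based supervision: poll each modulator, classify the reading against preset limits, and fail over on a major alarm. The names, thresholds and helper functions are hypothetical and are not Barco's actual ROSA interface.

```python
# Hypothetical sketch of preset-parameter monitoring with automatic failover.
from dataclasses import dataclass

@dataclass
class ModulatorStatus:
    channel: str
    rf_output_dbmv: float   # measured RF output level
    fault: bool             # hardware self-test flag

# Preset parameters (assumed for illustration): below "minor" raises a
# warning, below "major" triggers a switch to a standby unit.
MINOR_LOW_DBMV = 55.0
MAJOR_LOW_DBMV = 50.0

def evaluate(status: ModulatorStatus) -> str:
    """Classify one modulator reading against the preset alarm limits."""
    if status.fault or status.rf_output_dbmv < MAJOR_LOW_DBMV:
        return "major"
    if status.rf_output_dbmv < MINOR_LOW_DBMV:
        return "minor"
    return "ok"

def supervise(readings, switch_to_backup, notify):
    """One polling pass: report minor alarms, fail over on major ones."""
    for status in readings:
        severity = evaluate(status)
        if severity == "major":
            switch_to_backup(status.channel)   # route the channel to a spare
            notify(f"MAJOR alarm on {status.channel}: switched to backup")
        elif severity == "minor":
            notify(f"minor alarm on {status.channel}: check output level")
```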

Barco, a relative newcomer to the American market but a 25-year cable veteran in the international marketplace, is a firm believer in microprocessor-controlled headend equipment. George Walter, Barco product group manager, points out that such computer control means, in the case of its modulators, that "everything you can do on the front panel, you can do a hundred miles away on a keyboard and PC."

This remote control capability, which brings automatic backups on-line within milliseconds of a detected error or failure and automatically contacts repair personnel at the same time, all without interruption of service, is crucial to cable success and consumer confidence, says Walter. "For the consumer to really rely upon the cable operator for interactive services, telephony, even security, operators are really going to have to beef up consumer confidence. One of the ways they can do that is to present headends like the Jones facility and say, `Here are all our fail-safe mechanisms.'"

Since the Alexandria facility officially debuted in October of last year, Jones has been steadily making its case to its subscribers. The company is currently providing telephony services, albeit in a limited capacity, through shared-tenant (MDU) services and has filed with state regulators to provide residential phone service as well.

A recently completed cable modem trial, featuring LANcity modems, has "delighted" customers and led to the start of a commercial rollout of the service. The next headend/network hurdle is near-video-on-demand. "Right now," reports Wayne Davis, Jones' senior director of technical operations, "we're in the process of sorting out the digital technology that we'll deploy."

Davis says the past six months have taught everyone involved in the project a lot. "When you convert over to a new network and you've advertised it's a new network and it's the future," says Davis, "man, you'd better do it right the first time. And we've learned that painfully."

He says when designing or rebuilding a headend facility the things you take for granted, the things you've been "doing for a hundred years," are often the things "that are going to kill you." He likens his experience to the ill-fated United Flight 232 where one damaged hydraulic line defeated all the other hydraulics in the aircraft. Being able to recognize such "common points of failure" is absolutely crucial to an operation that's offering lifeline services like telephony, as well as video and data communications.

"We found that in thinking through a system (you have to) find those common points of failure," says Davis. "And, I mean, we designed redundant paths throughout the network. But for us, the common point of failure turned out to be, and we learned a valuable lesson from this, off-air transmission. In order to get the best picture quality, the antennas are at a different location and we transport those (signals) in. It was a point of failure in that there was not a redundant path for those signals."

A transitional mix

The costly transition from analog to digital is putting budgetary pressures on a whole range of products and services (real and proposed) in the cable plant. But the fact that it isn't going to happen overnight has finally settled in after the initial euphoria of the information superhighway hype of just a few years ago.

"The first thing they (customers) tell us is that the headend of the future will have both digital and analog," says Peter van der Gracht, vice president and general manager for Scientific-Atlanta's CATV and Telco Worldwide Analog Headend Products division. "The model that seems to be coming out is that the lower frequency band will be used for the analog broadcast part of cable TV, sort of plain old cable TV, and then the higher frequency components will be used for the digital channels. And together that allows the cable operator to provide service to different types of customers."

"And that's going to go, we think, a long way into the future," chimes in Larry Grunewald, product director for S-A's digital video compression and subscriber products division. "As long as you've got TV sets out there that are cable ready and people who don't necessarily need a high tier, it's going to be hard for cable operators to provide a digital-only type of service. So we think as long as we're going to be around that you're going to have the analog-digital mix."

[Figure 1 - i_9606a1.gif]

As a result, van der Gracht and Grunewald note this is "giving rise really to two headends of the future." (See Figure 1.) They describe the first as a "source" or "traditional" headend, or what others have called a "regional" or "super" headend. These headends consist of antenna farms (or are directly linked to off-site farms), source materials and all the new (i.e., digital) technologies. The second, "hub" headend is further down into the plant, and in S-A's thinking, could serve as many as 20,000 homes.

While Grunewald explains many customers have a growing understanding of the equipment needs in the traditional or super headend, there's some question about what's needed at the hub headends, because, says van der Gracht, "The customers want to be able to operate those hub headends either unattended or at least only 9 to 5, Monday through Friday, rather than 24 hours a day, seven days a week." That means increasing technical capabilities at the super headend and "very sophisticated status monitoring and control capability" at the hub, he continues. The efficiencies of such unmanned, computer-controlled hub sites become even more critical as the pricey deployment of digital technology begins in earnest.
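As a rough sketch of what "unattended but monitored" can mean in practice, the following Python fragment polls hypothetical hub sites from the staffed super headend and pages a technician only when something needs attention. The site names, status fields and polling call are assumptions for illustration; real systems of the era used vendor-specific status-monitoring gear.

```python
# Hypothetical polling loop run from the staffed "super" headend.
HUB_SITES = ["hub-north", "hub-south", "hub-east"]   # assumed site names

def poll_hub(site: str) -> dict:
    """Stand-in for a status query to a hub's monitoring controller."""
    return {"site": site, "temperature_c": 24.0,
            "power_ok": True, "laser_ok": True}

def check_all_hubs(page_technician):
    """One pass over every hub; page a technician only when needed."""
    for site in HUB_SITES:
        status = poll_hub(site)
        if not (status["power_ok"] and status["laser_ok"]):
            page_technician(f"{site}: equipment alarm, dispatch required")
        elif status["temperature_c"] > 35.0:
            page_technician(f"{site}: high-temperature warning")

# Run around the clock from the staffed headend, e.g. once a minute:
# check_all_hubs(send_page)
```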

Yet many, like Clayton Doré, director of sales and marketing for Standard Communications, realize "not every headend in the country is going to be a master headend. And not every headend in the country is going to support 1 GHz worth of bandwidth.

"We actually see probably 60 percent of the systems in this country aren't built past 450 MHz. And they are still going to have to continue to support and upgrade and offer new services out of those facilities."

Vendors like Standard have begun to develop headend equipment that meets those needs - equipment that is self-monitoring, features automated fault detection and alarms, and operates independently of external computer software. The company recently completed release of its Stratum Series, an 80-channel, broadcast-quality distribution system fully housed in a single, six-foot-high rack.

The system utilizes NAM550 modulators, each a self-aligning, slide-out module. The self-healing backup system ensures no down time during transmission and requires no external computer or human intervention. If a failure is detected within a rack, the backplane automatically routes all I/O signals from the faulty modulator to the next back-up modulator on the stand-by daisy chain.
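For illustration, here is a minimal sketch of that daisy-chain idea in Python: each failed slot is patched to the next free standby unit in order. The data structures and names are hypothetical stand-ins, not Standard's actual backplane logic.

```python
# Hypothetical daisy-chain failover assignment.
def assign_backups(active_units, standby_chain, failed):
    """Map each failed unit to the next free standby in the daisy chain.

    active_units  -- list of channel slots currently on air
    standby_chain -- ordered list of spare modulator slots
    failed        -- set of slots that reported a fault
    Returns a dict of {failed_slot: standby_slot} routing assignments.
    """
    routing = {}
    spares = iter(standby_chain)
    for slot in active_units:
        if slot in failed:
            try:
                routing[slot] = next(spares)   # patch I/O to the next spare
            except StopIteration:
                break                          # chain exhausted: alarm only
    return routing

# Example: slots 3 and 7 fail; spares A and B pick them up in order.
print(assign_backups([1, 2, 3, 4, 5, 6, 7, 8], ["A", "B"], {3, 7}))
# -> {3: 'A', 7: 'B'}
```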

The impact of change

While the Alexandria facility is the first of its kind, it's certainly not the last word in headend and network design. Many operators, eschewing the one-size-fits-all mentality, are striking out on their own, or directing individual systems within their organizations, to develop the facilities they need to take them into the future. Along the way, common themes are addressed, while often, surprising difficulties pop up and are overcome.

Dennis Carter, headend engineer at TCI of Louisiana in Baton Rouge, recently went through the process after he was asked to design a headend that would take his 100,000 subscriber system, one of the first to deploy fiber "years and years ago," into the 21st century. Never one to wait around when there's work to be done, he began sketching out ideas in the sand of a Florida beach during his vacation.

At its most basic, the design project began by determining the space required. Given that the technology is a fast-moving target, that the potential services to be offered are at best vaguely outlined by even the most knowledgeable in the industry and that there are budgetary sinkholes at every turn, this was no easy task.

Originally, Carter thought he would double the space that he figured he would need. He continued to look at the problem and weigh conflicting needs. "You don't want to overbuild a headend," says Carter, "because it's going to take so damn much money to keep it cooled and everything. But then you can't undersize it, because if you have to add on to it, you have critical electronics or maybe 10 or 15 fiber patch bays that you can't have all the dust and dirt flying around when you're trying to expand your facility."

In the end, he nearly tripled the space he thought he needed, and he's glad he did. "We came on line with this headend in November with 21 racks of equipment," explains Carter. "And since November we've added seven 23-inch racks of fiber patch bays, two racks of digital equipment, three racks of demod stuff, and I'm adding two more racks of scrambling/interface equipment. And I had to add 180 combined audio/video DAs, plus three racks of digital equipment because we're fixing to digitally tie in another one of TCI's systems about 40 miles away. I had no idea I was going to need all this additional equipment."

No breaks allowed

Carter readily admits that down time, whether it's planned or not, has to be a thing of the past in cable's broadband future. And, while high-tech electronics bring many advantages and efficiencies, he also notes they are less forgiving if their power source is impaired in any way. Calling himself "a back-up kind of guy," he feels his plan for providing "cleaner" power has been successful.

"What I did with the power is that I took my electrical panel coming in and sub-divided it. I put anything that would put harmonics on computers or other equipment, like fluorescent lighting, power switching transformers, air conditioners, etc., on one panel. Then I have another panel for my electronics. That panel goes into a UPS which buffers my incoming.

"Then I said, `OK, that's fine, but we're going to backup both of these panels with two separate generators.' And now, not only do I have the two generators, but I also have a transfer switch between the generators so that if my electronic generator doesn't crank, then my air conditioning generator does and switches over to keep my electronics up. And then it pages me to let me know my generator didn't crank and I need to come in and look at it."

Carter also notes that when dealing with fiber and lasers, grounding becomes a big issue as well. This is doubly true in Louisiana, with its penchant for lightning storms. Carter installed a raised, static-dissipative floor, which disperses static electricity to extensive ground grids he had installed below.

However, Carter reports the raised floor didn't work with a UPS unit he had installed on it. Once the unit settled and compressed the glue bond that had at first acted as a shock absorber, it began vibrating. Situated close to his optical patch bays, this proved to be a problem. While FM was of no concern, he notes, once "you get into AM, those connectors move just the slightest bit and they'll cause all kinds of reflections and distortions." The UPS unit was relocated.

Planning for people

The rising complexity of headend equipment, especially with the growing dominance of computer technology in headend operations and systems management, is having an impact on people too. Steve Pearse, Time Warner Communications' former senior vice president of operations and information services, puts it succinctly. "It's not just simple RF equipment anymore. It's very sophisticated. Very soon the cable industry, as a whole, will be managing the largest and most sophisticated computer and data network in the world, bar none."

Pearse believes for some, "a real painful shift in technical expertise" will take place. He thinks many cable industry professionals are "totally unprepared" for the computer center expertise they'll need in the not-too-distant future. These "new" disciplines include systems management, router management, congestion control, CPU utilization and DASD (an IBM term for direct access storage device, i.e., disk storage) utilization.

And it doesn't stop there. "The critical factors for the headend of the future," says Pearse, "are going to be things like sizing for peak load, (which) is a concept we're not as familiar with today when we engineer a headend. This is a much more difficult science.

"You've got to be able to size your equipment for peak load now based on random and variable demand loads throughout your distribution plant. If everybody starts banging on one movie at the same time, you're going to need a lot bigger digital switching capacity, routing capacity and disk load.

"The same is true for the Internet. If everybody jumps on the Internet at the same time and you find your performance really plummeting in your fiber nodes, then you may have to look at adding more equipment, more lasers and splitting your laser feeds. Plus, you're going to need to look at beefier routers, beefier servers in your headend. This is stuff that is the lifeblood of today's Internet service providers (ISPs) and it's not easy. It's very difficult."

"The complexity is the problem, the issue," continues Pearse. "If you tend to it now, you drive complexity out. If you ignore it and let things grow independently, like silos, you're in big trouble. You're basically in the same trouble RBOCs are in today. Telephone companies today typically have 500 computer support systems running the company. They don't talk to each other. It's a killer."
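To make the peak-load point concrete, here is a back-of-the-envelope sizing sketch in Python. Every number is an illustrative assumption, not a figure from the article; the exercise is simply that peak concurrency times per-stream bit rate drives the switching and server capacity required.

```python
# Illustrative peak-load sizing; all inputs are assumed values.
homes_passed = 20_000          # homes served from one hub headend
take_rate = 0.40               # fraction subscribing to the digital tier
peak_concurrency = 0.10        # fraction watching on-demand at the busy hour
mbps_per_stream = 3.0          # roughly one MPEG-2 standard-definition stream

subscribers = homes_passed * take_rate
peak_streams = subscribers * peak_concurrency
peak_mbps = peak_streams * mbps_per_stream

print(f"{int(peak_streams)} simultaneous streams, "
      f"about {peak_mbps / 1000:.1f} Gbps of switching/server capacity")
# -> 800 simultaneous streams, about 2.4 Gbps of switching/server capacity
```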

The accepted standard interfaces in the computer world are starting to show up in equipment and systems designed for broadband communications, as well. Industry vendors are also employing user-friendly interfaces for the technicians in the headend. The increasing deployment of GUI-based (graphical user interface, i.e., point-and-click or touch-screen icons and pull-down menus) applications in the headend will go a long way in easing the transition to the computerized headend.

TCI's Carter cites himself as a perfect example, saying he's "not a real computer whiz by a long shot." He surrounds himself with those who do have computer "smarts," and he searches out those products that make the high-tech transition and his increasingly complex job easier.

One of the handiest high-tech tools he's found so far is Iris Technologies' Video Commander Visual Routing Software & Module System, a PC-based visual monitor and control system that can be used on site or accessed via modem. Carter claims it's "the heart of the system." The Visual Routing System (See figure above), which is also being used in Jones' Alexandria facility, is so easy to learn, says Carter, that "my wife could be routing in 15 minutes."

A hardwire veteran in good standing, Carter's goal, with the assistance of devices like the Iris product, is to wire something one time and never have to rewire it again. His most dramatic enthusiasm for the user-friendly device comes when he details the time it used to take to act out a cable technician's nightmare: the much-dreaded channel change.

"I remember one year we did a channel change for 16 channels," says Carter. "Stereo insertion, scrambling and everything else. Three of us prewired for three weeks. When we actually did the change, it took all three of us more than 27 hours straight to do it.

"I did a six-channel change this past New Years. It took 30 minutes to program it on the Video Commander. And to implement it, we weren't even there. We told the VC scheduler here's the smart switch, the smart switch is going to show the new channel lineup. I want you to go to this new channel lineup at 3 o'clock in the morning. At five or ten seconds after 3 a.m., our channel change was done."

A headend wish list

As new products are introduced at the various trade shows and roll off the production lines, the growing list of potential new services operators are eyeing for future deployment continues to spur on cable planners and vendors alike. The physical convergence of video, telephony and data communication services in the headend of the future is particularly intriguing.

Stephen Dukes, vice president of technology at TCI Technology Ventures, thinks it's only a matter of time and money before the three core services converge in one switch. "I think our challenge is to try to figure out how we can integrate switching fabric to minimize our cost and to be able to fit it all in the headend. We're contemplating various forms that can be integrated, including potentially using ATM.

"We think there are switching fabrics that can support more than one application. As an example, we think they can probably support video and data. And if the standards groups can ever figure out how to support the voice piece, likely that could be supported as well.

"That's what we really need right now to get started. But on the other hand it's not available. That switching fabric that supports multiple service types is not necessarily available today at a price point we can support."

Dukes, and others as well, believe economics and strategy will also push operators to interconnect their central headends in the future. The cost of multi-million dollar switches, whether they're able to handle one or more types of service, may preclude their purchase by all but the most heavily bankrolled operators. There's a strong economic case to be made for sharing the cost, especially when the expense is used to attract and retain customers and fight mutual competitors.

The cable industry can also expect to undergo the open interface and standards drive the computer industry eventually underwent as a matter of mutual survival. In fact, S-A's Grunewald says the effort has already started. "Our customers are saying, `Well, we've got ATM switches, we've got servers, we've got analog headend equipment, we've got digital headend equipment and each of you guys has a different management system to these things.' There's a lot of pressure on us to build some type of common platform so that it can then go into a master headend or network controller that can monitor even all the way out to the set-tops. And that is something that is still a little ways away, but it is a part of the future that we're going to see."
