Video games drive some people absolutely insane, yours truly included. A game like Tetris–where one tries to build a solid foundation by filling in the gaps with descending geometric shapes–is considered downright infantile by many video game aficionados.

Terry Shaw heads up CableLabs’ bandwidth modeling and management effort.

But like most games, it's something many have to play again and again before they discover they're getting better with each succeeding game.

Terry Shaw, director, network systems at CableLabs, along with member MSOs, is trying to perfect his skills at the newest game in town–bandwidth modeling and management. While they're certainly no slouches when it comes to data network operations, they're still learning (and creating) the rules of a game that has far-reaching implications for broadband communications.

This past spring, CableLabs announced the formation of the Bandwidth Modeling and Management Vendor Forum to help create the "rules" for effective bandwidth management. This new challenge is the result of the industry's success with cable modems and high-speed access, and its pending jump into the voice-over-Internet Protocol (VoIP) void.

Both of these new services have a number of similarities, not the least of which is the fact that no one has really deployed either service on a wide-scale basis before. The board is completely clean in that regard, so broadband service providers are trying to understand network traffic behavior today–so they can plan for tomorrow's new services and the bandwidth demands they will make.

"Our members," says Shaw, "are in the process of building a new type of business–high-speed data for residential services. There just aren't that many tools available for collecting data on those systems, monitoring them in an ongoing manner, and trapping faults–tools that really allow them to get a good handle on that business and manage it to maximum economic efficiency.

"There's a need in the industry for a good set of tools to allow our members to maximize the efficiency (of their networks) with which they can manage these things. That includes everything from traffic monitoring tools to post-collection analysis of data collected from the networks to assess different performance parameters."

CableLabs and its members quickly realized they couldn't master this new game alone. Shaw is CableLabs' point man for the Forum and is charged with recruiting input from member operators and industry vendors alike.

"One of the reasons we put the Forum together," explains Shaw, "is to start drawing out vendors that have similar thoughts, because we're interested in anyone who is developing any sort of modeling capability. We're very much interested in talking with them."

Shaw says that nearly two dozen vendors have already come forward and CableLabs is already "working with some of them on various and sundry things presently." At the same time, he's working with member operators to define requirements for data collection capabilities.

"The members are really focusing on what they would like to see in the ability to collect data," says Shaw. "The analysis of that data is an area for vendor innovation. I think we would probably like to have some standardized reports simply because we'd like to compare apples with apples, and at the same time, throw the pears and oranges out. Or at least be able to convert pears and oranges back into apples.

"There are some very real opportunities for a vendor to develop innovative ways of presenting data and analysis to capture network performance."

The perfect model?

As a trained mathematician, Shaw knows the intrinsic value of effective modeling; in fact, all kinds of modeling. "There are a number of different types of models," says Shaw, "ranging from a beer-soaked napkin on a barroom table to highly sophisticated computer models. And frankly, I've used both, and a number of things in between."

Not too surprisingly, CableLabs and its members have decided to go with the high-end computer model. Working with OPNET Technologies Inc., a network management software developer, they created the industry's first DOCSIS 1.1 model for equipment and network design and planning.

The model allows operators to create virtual representations of proposed cable modem networks as they evaluate the capacity and quality of service (QoS) characteristics of alternative designs. At the same time, the model will enable equipment manufacturers to test different product configurations and architectures before building expensive prototypes.

"Essentially, you go through a multiple-step process," says Shaw. "You go out and measure what you have. Then, based on those measurements, you can then create a model of a new service and see what the impact of that new service will be. Ultimately, you end up with new business planning opportunities."

However, there are several burning questions that come with these new services. What do you measure? How often? What data do you combine and what data stands on its own? All this data and information, says Shaw, has bottom-line value for operators.

"There's a big, big difference between data and information," explains Shaw. "Data is when you're sitting there looking at a bunch of squiggly lines that make nice charts. Information is something like 'our system is overloaded at this place, at this time of day.' The point is to collect information that can be used by our members to optimize the economic value of their networks.

"It all comes down to how well they're able to exploit the capital investment they've made. So, we're working with our members to ensure that's going to be as efficient as possible."
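Shaw's distinction between data and information can be made concrete with a short sketch: given raw five-minute utilization samples per node, distill them into the kind of statement he describes–"our system is overloaded at this place, at this time of day." The node names, hours and 80 percent threshold here are illustrative assumptions, not figures from CableLabs.

```python
# Turn raw utilization "data" into overload "information."
# The 80% threshold and sample values are assumed for illustration.
from collections import defaultdict

OVERLOAD = 0.80  # assumed utilization threshold

def overload_report(samples):
    """samples: list of (node, hour_of_day, utilization 0..1).
    Returns (node, hour, peak utilization) tuples at or above the threshold."""
    worst = defaultdict(float)
    for node, hour, util in samples:
        worst[(node, hour)] = max(worst[(node, hour)], util)
    return sorted((node, hour, u) for (node, hour), u in worst.items()
                  if u >= OVERLOAD)

samples = [
    ("node-12", 9, 0.42), ("node-12", 21, 0.91),   # evening peak
    ("node-07", 21, 0.63), ("node-12", 22, 0.84),
]
for node, hour, util in overload_report(samples):
    print(f"{node} overloaded at {hour:02d}:00 (peak {util:.0%})")
```

The squiggly lines become two actionable sentences: node-12 runs hot at 21:00 and 22:00; everything else has headroom.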

It all adds up

One of the immutable laws of cable is that no two systems are exactly alike. That's also true with operators and the reasons they're hopping on the bandwidth management bandwagon.

"Collecting usage information out of the high-speed data network is not a trivial task," says Cheryl Persinger, applied mathematician at AT&T Broadband Labs. "The reason it's important is because the way people are using the network changes very quickly. It's not just for e-mail anymore.

"There's a lot more streaming content than there used to be. And all this stuff looks very different in the network. So, if we can get a really good handle on what's happening and build up some historical data, then we can do some forecasting and it'll help us all manage our networks better."

She says even effective short-term forecasts will have value at this time. The basic idea, she says, "is just keep our ear to the ground and make sure that when things change, we're listening."
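A short-term forecast of the kind Persinger mentions can be as simple as exponential smoothing over recent peak-usage figures. The smoothing factor and the weekly peaks below are assumptions for illustration, not operator data.

```python
# Minimal short-term forecast sketch: simple exponential smoothing.
# alpha and the sample series are assumed, not measured values.
def smooth_forecast(history, alpha=0.5):
    """Return a next-period forecast from a series of observations."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

weekly_peaks_mbps = [18.0, 19.5, 22.0, 25.0]  # hypothetical weekly peaks
print(f"next-week forecast: {smooth_forecast(weekly_peaks_mbps):.1f} Mbps")
```

Because the series is trending upward, the smoothed forecast lags the latest peak–exactly the "ear to the ground" behavior she describes: it reacts to change without overreacting to a single noisy sample.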

Persinger says that the biggest challenge for creating a functional model is getting real-world input that shows what's really happening. That data is then used to validate a particular model. If the outputs of the real-world data and the simulation model don't match, then the model is essentially useless for any effective planning.
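One common way to apply Persinger's validation test is to compare measured traffic against simulator output and reject the model when the mean absolute percentage error exceeds a tolerance. The 10 percent tolerance and both series here are invented for illustration.

```python
# Model-validation sketch: mean absolute percentage error (MAPE)
# between measured traffic and simulated traffic. Tolerance assumed.
def mape(measured, modeled):
    """Mean absolute percentage error between two equal-length series."""
    return sum(abs(m - s) / m for m, s in zip(measured, modeled)) / len(measured)

measured = [10.0, 14.0, 20.0, 26.0]   # hypothetical Mbps samples
modeled  = [11.0, 13.5, 19.0, 27.0]   # hypothetical simulator output

error = mape(measured, modeled)
print(f"MAPE: {error:.1%}", "-> model usable" if error < 0.10 else "-> model rejected")
```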

She acknowledges that CableLabs isn't the only group trying to get its arms around Internet traffic patterns. "There's lots of work underway all over the world on how to model this," says Persinger. "There is lots of academic work going on with modeling high-speed data in general. I suspect we're all keeping tabs on that, too.

"It's not like modeling a simple telephone network where you've got dedicated calls going on. Mathematically speaking, it's a little bit messier than that."

Putting a plan in place

Louis Williamson, senior director of network traffic engineering at Time Warner Cable, says his newest title and job description are a direct result of his company's focus on bandwidth modeling and management. Things are quickly becoming complicated, he says, as Time Warner looks beyond high-speed data to such services as VoIP, VOD, SVOD, network PVR services, and a variety of business services.

As a result, bandwidth modeling and management, says Williamson, is directly tied to his company's ability to function as a telecommunications provider. "Now, we have these headend costs that are growing with our subscriber base," says Williamson. "Not only are they growing with our subscriber base, they're growing at an undetermined pace.

"Because we do our budgeting every year, it's been extremely difficult to figure out how to budget for what it is, and do the normal return-on-investment calculations. So, we're trying to understand these new businesses and services that we're offering and come up with a model that helps our divisions, engineers in the field and executives plan, forecast and budget."

Working with the Forum is an important part of Time Warner's focus on bandwidth modeling and management, says Williamson. He thinks the Forum effort will go a long way toward bringing consistency to the whole process, which in turn, will assist Time Warner in developing a unified approach to monitoring the network.

"We're trying to centralize it (monitoring) more into an overall uniform approach," says Williamson. "That effort is underway at Road Runner to do consistent monitoring. Right now, there are so many people monitoring the plant and everybody has their own terms for what they're looking at and how they approach it. We don't want to generate unnecessary traffic with different people doing different things depending on who's doing what.

"I've seen Road Runner stuff. It's lots of information, but it's hard to look at it and make intelligent recommendations. Just taking snapshots of it (doesn't work); you really need trending tools and other things on the back end of all that data.

"We're trying to get consistent results so that we know what we're looking at, and more importantly, have a uniform approach to monitoring and managing all the devices. We've simply got to do it."

First things first

Richard Dowling, senior vice president of corporate development for GCI, a telephone and cable provider based in Anchorage, Alaska, believes the Forum will play a pivotal role in eventually providing "a specified means and a baseline set of tools" so that service providers can make critical underlying measurements. In the meantime, the Forum has some important near-term decisions to make.

"It has to make some judgments based on existing network performance and characteristics that lead one to figure (out) what exact tools we need," he says. "We need to specify those in a generic-enough way that as the utilization patterns in the network evolve from an asymmetrical to more of a symmetrical traffic flow, we have the right set of tools.

"As a result, as we migrate from a more data-centric deployment to one that has voice mixed into it, we can accurately assess the impact that those two products are having on each other, as well as on the network.

"This effort should determine the right set of what I call 'primitives' in the measurement space, and then figure out, without telling the suppliers how they have to implement them, things like fundamental rates operators have to be able to collect data at, granularities, and whether we have to collect data by every stream and by every interface. We want to do that without making it economically burdensome."

Drawing from his company's 20 years as a telephone provider, Dowling believes the telephone piece of future networks should be fairly predictable. He thinks IP telephone calls will "have the kind of holding times that calls have had historically," and that call intensities per household will remain essentially the same. He says the fly in the bandwidth ointment, as it were, is data.

"You only have to look at the change from just plain Web browsing maybe five years ago, to people essentially having servers (in their homes) today. This whole peer-to-peer model is radically changing the way the network gets used, and I think we're only seeing the leading edge of that.

"I think people are going to discover more beneficial ways to use peer-to-peer (P2P) technology that aren't entirely based on violating copyright laws. And I think that's going to drive traffic patterns in the network that we can't really anticipate right now."

The P2P effect

The impact of P2P traffic has many operators, both large and small, concerned. One of them is Joe Jensen, CTO at Buckeye Cablevision in Toledo, Ohio. He believes this kind of traffic will have a very real effect on system policy decisions.

Says Jensen, "We've also been watching very carefully what's happening with the peer-to-peer solutions that effectively turn anyone's PC into a server. One test we did showed you could sustain about 100 kbps out of a server in the mode where it would be hanging on the end of our network and sourcing that information, that bandwidth out to other customers.

"With these peer-to-peer networks, you start seeing a lot of bandwidth (usage in the) upstream, and that's where we want to make sure we manage this properly. We can engineer it as long as we know what the demands are (going to be). That also necessitates a policy decision.

"Our customer usage contract says they (individual subscribers) are not allowed to act as a server. So, we have to work through both policy and engineering issues as we pound through this. But, we want to make sure we understand the engineering issues so that we can understand what the impact should be on the policies."
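Jensen's 100 kbps figure makes for a revealing back-of-the-envelope upstream calculation: how many always-on P2P "servers" can one upstream channel absorb? The 2.56 Mbps usable-channel figure and the 50 percent engineering ceiling below are assumptions, not numbers from Buckeye's test.

```python
# Upstream budget sketch using Jensen's ~100 kbps sustained per P2P host.
# Channel capacity and headroom fraction are assumed for illustration.
PER_PEER_KBPS = 100   # sustained upstream per P2P host (from the test)
CHANNEL_KBPS = 2560   # assumed usable DOCSIS 1.0 upstream capacity
HEADROOM = 0.50       # assumed fraction reserved for all other traffic

max_peers = int(CHANNEL_KBPS * HEADROOM // PER_PEER_KBPS)
print(f"P2P hosts before the upstream budget is spent: {max_peers}")
```

Under those assumptions, a dozen file-sharing households consume half an upstream channel–which is why upstream monitoring, and the policy questions that follow, loom so large.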

In the meantime

Data, voice and P2P traffic is also a concern for Mike Giobbi, vice president of new technologies for Armstrong Cable Holdings in Pennsylvania. He says he's only just beginning to figure out the impact each service has on the other.

"Once you add voice traffic," says Giobbi, "voice obviously has to have the priorities so that the latencies stay low. And, you also don't want your best-effort Web surfing guy to slow down to a crawl. So, understanding our traffic patterns today on a (DOCSIS) 1.0 network, with just best-effort Internet traffic, is step one in traffic engineering a combination voice/data network.

"What we're doing with Terry Shaw is providing him with data we collect from traffic on one particular CMTS so that we can trend it for six months or a year, or however long we do this. Terry then is massaging the data and graphically analyzing it to come up with patterns."

Giobbi says his company is committed to engineering traffic correctly and isn't interested in adding "gobs of capacity" by throwing CMTSs at it. He says that "solution" only makes sense if "you've got a tree out back that grows money."

"We don't want to throw bandwidth at it," says Giobbi, "because when you're designing for high day, busy hour, most of the time, your network is going to be under-utilized. That's because it's just (a) best effort day with people talking on the phone normally. That's when you're providing no worse than a five percent grade of service where five calls out of a hundred are being blocked due to lack of capacity. We're designing for one percent, but we don't want to go over five percent."

Meanwhile, Giobbi and his colleagues are using free software to keep tabs on their system's DOCSIS 1.0 traffic. While it isn't perfect, without it, he "wouldn't have a clue about what's going on."

For some time now, Giobbi has used free software called MRTG (Multi-Router Traffic Grapher). "What it does is that every five minutes, it takes a sample of every interface on every device. That includes our switches, routers, and all of our CMTSs."

It takes that information, puts it into a database and creates a PNG-type file that graphs 32 hours of traffic, averaged over five-minute increments. It also takes the five-minute samples and averages them over 30 minutes for a week-long graph. It can also graph monthly, quarterly and yearly usage patterns, based on a day's worth of traffic.

Giobbi says he normally uses the five-minute average graph so he can determine when he needs to add capacity. The MRTG software can also graph traffic going in and out of an interface, as well as CPU utilization. By providing the IP address of the device and the SNMP community string, the software can also discover all the interfaces and allow the user to set up the polling rate. However, one drawback is that he can't determine exactly what kind of traffic is being generated, whether it's P2P or something else.
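The rollup MRTG performs for its week-long graph–collapsing five-minute interface samples into 30-minute averages–can be sketched in a few lines. The traffic values below are hypothetical, not Armstrong's data.

```python
# Sketch of MRTG-style rollup: average consecutive five-minute samples
# in groups of six to get 30-minute data points. Sample values assumed.
def rollup(samples, group=6):
    """Average consecutive samples in fixed-size groups (drops any partial tail)."""
    return [sum(samples[i:i + group]) / group
            for i in range(0, len(samples) - group + 1, group)]

five_min_kbps = [900, 950, 1100, 1050, 980, 1020,    # first half hour
                 1400, 1500, 1450, 1600, 1550, 1500]  # second half hour
print(rollup(five_min_kbps))  # → [1000.0, 1500.0]
```

The same grouping, run with larger windows over longer histories, yields the weekly, monthly and yearly views Giobbi describes–at the cost of smoothing away exactly the short spikes the five-minute graph preserves.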

Working for tomorrow

Giobbi and Armstrong are strong supporters of the Bandwidth Modeling and Management Vendor Forum, not only providing data and information, but also conducting a "full-blown PacketCable trial using a soft switch (and) a call management server," Giobbi says. So far, with the DOCSIS 1.0-based trial, he says, "the quality is just phenomenal without QoS" and he has "full confidence that PacketCable will probably provide as good, if not better, voice quality than people are getting today."

He also believes the present-day efforts of the Forum will have a positive impact on how operators plan, design and operate their networks, and more importantly, serve their customers.

"The goal," says Giobbi, "is to gather this data today and release some specifications so that vendors can build things. That way, we'll have in software a modeling tool that we can (use to) look at capacity needs without the traffic actually occurring.

"In other words, we can see what happens if P2P takes off, or if MPEG-2 streaming media takes off like crazy.

"We can provide them (vendors) input on what we want to monitor. But also, we can have real, live data to compare against a software model to make sure the two jibe.

"(For example), we could add a certain amount of voice traffic (e.g., Mother's Day) in software and model what happens to the network capacity, and where we run out of it. The idea is to predict it far enough ahead of time so that you can add it (bandwidth) before customers suffer any degraded service."
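The "Mother's Day" what-if Giobbi describes amounts to layering a burst of voice calls on top of measured data traffic and checking whether the channel still has headroom. Every figure in this sketch–channel capacity, per-call bandwidth, data load and call counts–is invented for illustration.

```python
# What-if sketch: add hypothetical voice load to measured data traffic
# and report remaining channel headroom. All constants are assumptions.
CHANNEL_MBPS = 27.0      # assumed usable channel capacity
VOICE_CALL_MBPS = 0.1    # assumed per-call bandwidth incl. packet overhead

def headroom(data_mbps, concurrent_calls):
    """Remaining capacity (Mbps) after data plus voice load; negative = exhausted."""
    return CHANNEL_MBPS - (data_mbps + concurrent_calls * VOICE_CALL_MBPS)

for calls in (50, 100, 200):
    h = headroom(data_mbps=12.0, concurrent_calls=calls)
    status = "ok" if h > 0 else "ADD CAPACITY"
    print(f"{calls} calls: {h:+.1f} Mbps headroom ({status})")
```

Sweeping the call count forward in time is the prediction Giobbi wants: the model flags the crossover point early enough to add bandwidth before customers feel it.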

Ready, set, go!

As the bandwidth modeling and management game gets off the ground, operators will be watching its progress with considerable attention. The Forum will play an important role in filling in the bandwidth blanks so that operators can provide the new services–and generate the new revenues–they need to succeed in an increasingly competitive market.