100G, Optical Interconnect, and Power Management: an interview with TE Connectivity's Nathan Tracy
TE Connectivity (TE) has created a line of low-power optical interconnect products that includes transceivers that can be aggregated in 25G increments up to 400G, among them an on-board optical module that moves the traffic chokepoint in routers, switches, and similar equipment off the faceplate, simultaneously alleviating power and thermal issues while enabling increased port density. The company is in the process of rolling out the line, called Coolbit.
Data centers, video headends and hubs, and other facilities are digesting Internet traffic in streams that can measure into terabits per second. Pushing all that data around takes copious amounts of power, and a good portion of it gets dissipated as heat. Google, Facebook, and other companies that handle massive amounts of traffic are known for the remarkable lengths they go to in order to keep their data centers cool. The largest and most crowded video headends have similar cooling challenges.
Power/thermal management has always been a problem, but as traffic rates go from 10G to 100G and beyond, the issue threatens to become a major impediment. Equipment vendors are taking several approaches to solve imminent problems, with technologies that range from the macro scale – advanced cooling systems for entire buildings – to the micro scale – advanced optical interconnect products that run cooler even as they handle more capacity, enabling even greater equipment density.
That’s what TE is doing with its 25G active optics featuring Coolbit optical engines. The company is in the process of introducing a line that includes a CDFP version, the aforementioned mid-board module, and QSFP optical modules and assemblies.
We spoke with Nathan Tracy, technologist for systems architecture at TE Data Communications, about data traffic trends, the issues those trends raise, and TE’s approach to meeting those challenges.
CED: As you increase traffic on a wire, you increase the amount of power drawn and heat dissipated. Gigabit transmission is becoming common. Can you give us an overview of the trends in networking that are pushing the industry to faster and faster transmission rates, and explain what that means for power/thermal management in network systems?
Nathan Tracy: Just think of the way we use our smartphones today and do things that only a couple of years ago were completely unheard of, such as streaming video on this little handheld device you throw in your pocket. And then when you get home, all these over-the-top content providers are truly putting us in control of all our entertainment. There are all these new applications that we’re using in the home and in the business environment that are driving tremendous bandwidth demands on the network.
Whether we’re talking about the access network, or the core network, or the very heartbeat of the network – the data center – all this equipment is working much harder to deliver services we didn’t imagine a few years ago. That translates into the bandwidth capacity per shelf, or per linecard, having to increase. The only way to do that is with higher-bandwidth components. And the cost of bandwidth is higher power consumption when we get to high data rates over the media, whether the media is a printed circuit board or a fiber optic transmission line.
All this heat comes from the power consumption of all the new electronics driving 10 gigabits yesterday, 25 gigabits today, and 50 gigabits in the future. And when you generate all that heat, you consume still more power to cool the equipment – the problem is compounded because you have to work harder to cool it.
Then factor in the complexity of the design of the equipment. The design gets more complex so you can get better thermal management, which is driving even more cost.
So the simple act of watching video on your cell phone is driving the power to get the signals across, the power to keep the equipment cool so it doesn’t fail, and the complexity of the equipment to better manage the heat that’s generated.
CED: How imminent is the problem? Are we getting near points of failure?
NT: When we all get home tonight and start our on-demand movies and catch up with social media, the network is going to work just fine. The problem is the trends, and the trends are loud in terms of the problems the network operators and equipment designers are going to have. The incremental cost you see today, I would argue it’s annoying. But the future trend is power levels beyond annoying. People are having trouble seeing how to do the next generation.
When I talk to equipment designers, for example, someone who has to build a core router, they look at their design as being constrained based on power, and I say, “yeah, I heard power is a big deal in data centers,” and they say “no, no, no, that’s not it. We just can’t get the heat out of the equipment. We’re running into walls trying to get all the heat that’s being generated away from the equipment.”
It all still works today, but it’s the trend that’s causing concern, and the problems are not that far out.
CED: So, it’s routers and switches. Does that extend to other equipment in a data center or a video hub?
NT: The issue is power consumption. There are hot spots. The challenges are in core switches and routers, in data centers, and in network hubs.
At the same time all this is going on, another part of the dynamic is trying to get to higher levels of density – the amount of bandwidth per unit area. The network has to be more cost effective. So as you reduce your footprint, even spaces that didn’t have thermal issues before start to develop them. The problems are growing across the network.
I think the core equipment is where the challenge is biggest. That’s where we’re seeing the highest aggregate data rates. Also the data center, where we’re finding the highest equipment density.
CED: So TE took a look at the trends, and responded by developing the 25G active optics with Coolbit optical engines. Tell us about the development of that technology.
NT: The Coolbit product story is really multifaceted. It is as simple as keeping up with industry trends. We had to develop product that would operate at 25 gigabits per second just to stay competitive.
That’s the starting point. Okay, as long as you’re going to do that, what are your key objectives? We decided to come out of the gate with a lower power consumption approach than we believed others were going to do, and we stayed focused on that during development. We had to be clear with our vision, which meant not compromising our approach.
Another objective was a platform that would support multiple aggregate data rates, so that we would have a functional design we could reuse very easily to support multiple platforms. You can see that with the Coolbit optical engine announcements so far. We have a product at 100 Gbps, one that does 300 Gbps, and one that does 400 Gbps. And it’s all based on the same components in the Coolbit optical engine, used across all the aggregate data rate products.
We also wanted to support what we think will be an evolution in equipment architectures. We’ve talked about the thermal challenge. When you put all your optical transceivers at the faceplate, you create a tremendous cooling challenge there. If we could put our optical transceivers further back onto the PCB, on the linecard, closer to the data source that we’re trying to access with these I/O ports, then we can support future equipment architectures where the industry evolves away from pluggable transceivers.
So the Coolbit product project in our opinion hit a home run on all those objectives. We delivered a 25G per lane solution to the market as fast as we could. It has extremely low power consumption, and that’s a significant difference from comparable products.
Platform support, as I said, covers 100G, 300G, and 400G, and the engine is capable of supporting other data rates in multiples of 25. And then there’s the mid-board optical module that supports the evolution to future architectures.
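The aggregate rates Tracy describes all follow from stacking 25G lanes; a minimal sketch of that arithmetic (the lane counts are simple division, and the form-factor labels match the products named in the interview):

```python
LANE_RATE_GBPS = 25  # per-lane rate of the Coolbit optical engine

def lanes_needed(aggregate_gbps: int) -> int:
    """Number of 25G lanes needed to reach a given aggregate data rate."""
    if aggregate_gbps % LANE_RATE_GBPS != 0:
        raise ValueError("aggregate rate must be a multiple of 25G")
    return aggregate_gbps // LANE_RATE_GBPS

# The three announced products map onto lane counts:
for rate, form_factor in [(100, "QSFP28"), (300, "mid-board module"), (400, "CDFP")]:
    print(f"{rate}G ({form_factor}) -> {lanes_needed(rate)} lanes")
# 100G (QSFP28) -> 4 lanes
# 300G (mid-board module) -> 12 lanes
# 400G (CDFP) -> 16 lanes
```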
CED: What ramifications will optical interconnect have for system design?
NT: When we look at our customers’ equipment, we view the faceplate – the exposed edge of the linecard – as the most valuable piece of real estate they have. That’s where they differentiate their products. How many users or how much aggregate usability can you get on that faceplate? That’s how they deliver value to their customers.
So the 400 gigabit CDFP form factor, the 100 gigabit QSFP28 form factor – those are industry standards.
But with the mid-board optics module – a 300 gigabit product – now there’s an opportunity where the transceivers aren’t at the faceplate, they’re back on the linecard. We’re creating an opportunity for customers to deliver a higher value—through a product that’s not based on industry standards—at least not yet.
CED: Are OEMs ready to move to an architecture that would accommodate the mid-board module?
NT: It’s a fairly new option. This option has existed previously at lower data rates, principally with the lunatic fringe. That’s not to say the designs were crazy – they weren’t. It just wasn’t mainstream. What we see now is that what used to be lunatic fringe is increasingly going to be mainstream. I think every OEM has on their roadmap a point at which they will leverage that architectural change. When? They all have different answers; they’re all serving different applications, so they’re not all going to do it at the same time.
Are the customers ready to embrace it? It all depends on their business model and how they’re delivering perceived value for their customers. This can create a new opportunity for them.
CED: What about the downstream ramifications for environmental cooling?
NT: You hear about Facebook’s data centers, and Google’s – they’re all very aware of their carbon footprints. This is not just a PR thing; they all want to find ways to operate more efficiently. Anything any of us can do to help the industry get there? That’s a big deal. Even people who aren’t in the industry are aware of the power consumption in data centers.
Optics help because they consume less power, but also simply because the diameter of copper cable is larger than the diameter of fiber cable. When you’re trying to cool these multi-terabit switches and routers, you need to get a lot of air into and out of the equipment to cool it down. The actual diameter of the cabling becomes a significant impediment to that airflow.
CED: Will Coolbit products be cost competitive? Can customers expect cost-of-ownership savings down the line?
NT: The general rule is to use copper whenever you can because it’s cost-effective – it’s cheap, and you use fiber optic cabling where you must, because it costs more. I think what we’re seeing, with the continuing innovations in the optical transceiver space, the cost is coming down, and that paradigm is shifting.
It’s inarguable that optics are becoming lower cost, and more effective in the shorter-reach environment. Then you throw all the thermal stuff into the conversation.
There’s just a fundamental shift in which optics is continuing to encroach in the copper cabling space. We’re seeing fiber optics for shorter and shorter reaches.
Everyone’s feeling the same pressure.
As we developed Coolbit products and brought them to market, we wanted them to be competitive on a performance level and on a purchase-cost basis, while still considering total life cost. If a 100 gigabit transceiver is what you’re buying today, it’ll be competitive, and the differentiating factor will be the lower power and lower total life cost.
We believe, based on all the discussions we’ve had, that for every watt you save with device power consumption, there may be as many as 3 watts saved at the end of the day when it comes to managing a data center that’s pushing the envelope on thermal performance. It’s power savings; it’s savings achieved in cooling infrastructure. We haven’t quantified it, but we think there’s a ripple effect.
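Tracy’s rule of thumb can be expressed as a simple back-of-the-envelope calculation. A minimal sketch, where the 3x multiplier is his unquantified estimate and the per-transceiver savings and port count are purely illustrative numbers:

```python
COOLING_MULTIPLIER = 3.0  # Tracy's rough estimate: ~3 W total saved per 1 W of device power saved

def total_savings_watts(device_savings_watts: float) -> float:
    """Estimated facility-level savings: device power plus downstream cooling load."""
    return device_savings_watts * COOLING_MULTIPLIER

# Illustrative only: saving 1.5 W per transceiver across 1,000 ports
per_device_w = 1.5
ports = 1000
print(total_savings_watts(per_device_w) * ports)  # 4500.0 W estimated facility savings
```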
CED: Have Coolbit products been tested?
NT: Coolbit product prototype units are undergoing customer evaluation today with a new generation of equipment. It’s all happening in the labs, and everything is going well.
CED: So when will the products be commercially available?
NT: The QSFP28 transceiver is due to be launched this spring. That will be both the transceiver version as well as the active optical cable assembly version.
The mid-board module will come about the same time, maybe a few weeks later.
Then the CDFP 400 gig version is scheduled to be released this summer. It will all be launched over the next few months.
CED: So when can we expect to see commercial products with Coolbit optical engines integrated?
NT: We may see that sort of activity by the end of this year.