Inefficient monitoring is a costly disadvantage for any service operator.

For some years, quality has been a key selling point for service providers and multiple-system operators seeking to differentiate their brands from the competition, and as viewing habits and delivery platforms become more diversified, the delivered quality of service becomes an ever greater concern for providers.

For consumers of triple-play packages, recurrent problems in service quality quickly lead to disenchantment and the search for an alternative provider. Viewers tend to judge quality of service by the high standards reached by conventional broadcasting networks, but today’s media delivery architectures are much more varied and complex: Multi-screen delivery using IP networks introduces myriad new potential sources of service degradation, and the task of monitoring service quality is now more complex than before.

Part of this complexity arises from the need to deliver to multiple platforms: Media and data sources are carried over a hybrid network to a range of devices that may include the traditional or Web-enabled set-top box, broadband-connected computers, tablets and other mobile devices. And making life even more difficult for the engineers, the quality metrics used for traditional monitoring of audio and video are only of limited use. For hybrid broadcast/IP networks, a much more comprehensive approach is required.

The sheer variety of content delivered over the network further complicates the task. It can come from many sources, and in many forms. Apart from video and audio, the network may be delivering streams of metadata, closed captioning and digital cue tones (SCTE 35) for ad insertion. But it is what happens to video and audio when delivered over an IP network that is the most important contributor to the monitoring challenge.

Compressed into MPEG packets, the data is in turn transmitted within IP packets, and this layering is the source of further difficulty, with stream errors possible at any point along the data path, and in any layer. Because of the multiple layering of the content stream, a problem in one layer may set off an interaction with other layers, and it can be extremely difficult for the provider to unpick these interactions to find the true source of the error.
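The layering described above can be made concrete with a short sketch. It assumes the common mapping of seven 188-byte MPEG transport-stream packets per UDP datagram (1,316 bytes, which fits a standard 1,500-byte Ethernet MTU); the function and constant names are illustrative.

```python
# Sketch: how MPEG transport-stream (TS) packets are typically carried
# inside IP. Assumes the common 7-packets-per-UDP-datagram mapping.

TS_PACKET_SIZE = 188
TS_SYNC_BYTE = 0x47
TS_PER_DATAGRAM = 7

def split_ts_packets(udp_payload: bytes) -> list[bytes]:
    """Split a UDP payload into TS packets, checking each sync byte.
    An error here can originate in either the IP or the MPEG layer --
    exactly the cross-layer ambiguity the article describes."""
    if len(udp_payload) % TS_PACKET_SIZE != 0:
        raise ValueError("payload is not a whole number of TS packets")
    packets = [udp_payload[i:i + TS_PACKET_SIZE]
               for i in range(0, len(udp_payload), TS_PACKET_SIZE)]
    for p in packets:
        if p[0] != TS_SYNC_BYTE:
            raise ValueError("TS sync byte lost")
    return packets

# A healthy datagram: seven packets, each starting with 0x47.
datagram = (bytes([TS_SYNC_BYTE]) + bytes(187)) * TS_PER_DATAGRAM
print(len(split_ts_packets(datagram)))  # 7
```

A monitor sitting at a transit node can run only this outer check; diagnosing whether a failure began as IP packet loss or as an upstream MPEG fault requires the multi-layer analysis discussed later.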

Any of the devices handling the MPEG packets can be responsible for creating errors in the stream, such as dropped packets, PCR jitter, inconsistent or corrupted metadata, video/audio buffer under/overflow, and under-provisioning. To the viewer, degraded data quality may appear in the mildly irritating form of poor lip sync or audio loudness issues, or as a more serious problem such as loss of program guide functionality, break-up in pictures or sound, or the complete loss of video or audio. For the provider, it’s necessary to find a quick and effective resolution, which means quickly sifting through the symptoms to find the root cause.
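One of the errors listed above, PCR jitter, can be illustrated with a minimal check: Program Clock Reference values tick at 27 MHz, so if the spacing between successive PCR values disagrees with the packets' actual arrival times, the stream has accumulated jitter. The threshold-free function and the sample data below are illustrative only.

```python
# Sketch of a PCR-jitter measurement. PCR values are counted on a
# 27 MHz clock; jitter is the disagreement between PCR spacing and
# observed arrival spacing.

PCR_CLOCK_HZ = 27_000_000

def pcr_jitter_us(pcr_values, arrival_times_s):
    """Per-sample jitter, in microseconds, between the spacing implied
    by the PCR values and the spacing actually observed on the wire."""
    jitter = []
    for i in range(1, len(pcr_values)):
        expected = (pcr_values[i] - pcr_values[i - 1]) / PCR_CLOCK_HZ
        observed = arrival_times_s[i] - arrival_times_s[i - 1]
        jitter.append(abs(observed - expected) * 1e6)
    return jitter

pcrs = [0, 27_000_000]      # exactly 1 s apart in PCR time...
arrivals = [0.0, 1.0005]    # ...but 1.0005 s apart on the wire
print(pcr_jitter_us(pcrs, arrivals))  # roughly 500 us of jitter
```

Excess PCR jitter is a good example of a cross-layer symptom: it can be introduced by IP network delay variation yet surfaces as an MPEG-layer timing fault at the decoder.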

Figure 1: Video/MPEG monitoring...

When dealing with all of this complexity, a monitoring architecture based on advanced monitoring software on a resilient router platform yields major benefits over the use of standalone intelligent monitoring units. In-router monitoring integrates well with network management and operations support systems, delivering unobtrusive core network monitoring at the deep transport level, with the added advantage of support for automated switching to backup systems.

The mix of separate IP and video monitoring appliances often deployed by cable operators and other service providers does not scale well across larger networks as operations expand. This approach also leaves parts of the network outside the monitoring umbrella, and this reduces the overall effectiveness of the solution: Providers cannot track the health of the transport stream through the entire network, and it is difficult to identify trends affecting the entire delivery chain.

Basing the monitoring system on router-integrated, multi-layered DTV analysis provides a better result, giving the provider a means of quickly identifying and isolating video quality issues at multiple packet layers in both IPTV and RF video networks. This creates a scalable solution, applicable both at the core and edge networks, which means that there is a common reference model for analysis across those locations.

With continuous remote multi-layer video monitoring, the provider can see into both the networking layer – via the Media Delivery Index (MDI) – and into the application MPEG layers. MDI analysis covers IP performance and integrity, highlighting packet loss, packet delay and similar errors. The MPEG layers are analyzed to isolate errors such as those leading to loss of sync between audio and video, incorrect input configurations, encoding incompatibilities, and corrupted metadata. MDI provides important data where content remains encapsulated – for example, at network transit nodes – while MPEG analysis provides the most useful data at locations where video is manipulated in some way, as in encoding or transcoding. In combination, MPEG and MDI analyses provide a comprehensive picture of data health.
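The two MDI components (defined in RFC 4445) can be sketched as follows: the Delay Factor (DF) measures, in milliseconds, the buffer a receiver would need to absorb arrival jitter, and the Media Loss Rate (MLR) counts lost media packets per second. The virtual-buffer model here is simplified for illustration, and the sample numbers are invented.

```python
# Simplified sketch of the two MDI components, DF and MLR (RFC 4445).

def delay_factor_ms(arrival_times_s, packet_sizes_bytes, media_rate_bps):
    """Delay Factor: span of a virtual buffer that is filled by arriving
    packets and drained at the nominal media rate, in milliseconds."""
    byte_rate = media_rate_bps / 8
    vb, samples, prev = 0.0, [], arrival_times_s[0]
    for t, size in zip(arrival_times_s, packet_sizes_bytes):
        vb = max(vb - (t - prev) * byte_rate, 0.0)  # drained since last packet
        vb += size                                   # packet arrives
        samples.append(vb)
        prev = t
    return (max(samples) - min(samples)) / byte_rate * 1000.0

def media_loss_rate(lost_packets: int, interval_s: float) -> float:
    """MLR: lost media packets per second over the measurement interval."""
    return lost_packets / interval_s

# Three 1316-byte datagrams nominally 10 ms apart, one arriving 2 ms late:
arrivals = [0.0, 0.012, 0.020]
sizes = [1316, 1316, 1316]
rate = 1316 * 8 / 0.010          # nominal stream rate in bits/s
print(round(delay_factor_ms(arrivals, sizes, rate), 1))  # DF of about 2.0 ms
```

Because DF and MLR need only the IP envelope, not decoded video, they suit exactly the transit nodes the article mentions, where content remains encapsulated.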

To sort and prioritize the large volumes of data that such a system will produce, sophisticated customization is required so that the monitoring capability accurately reflects the particular requirements of each provider, and so that alarm-overload fatigue is avoided. For the MPEG layers, real-time validation can be tailored to individual workflows using a rules-based approach. The key monitoring standards underpin this rules engine, with SCTE 142 or DVB TR 101 290 providing the parameters by which any deviations are identified. Stored templates or profiles make it easier for engineering staff to monitor live services in real time, with simultaneous automated comparison against the parameters established in the templates giving filtered alarms ranked in severity. To facilitate cost-effective 24/7 monitoring, alarms can be delivered to engineering staff at any location, via text message or email.
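The template/rules approach can be sketched in a few lines: each rule names a monitored parameter, a limit, and a severity, and live measurements are compared against the stored profile to yield filtered, severity-ranked alarms. The rule names and thresholds below are illustrative, not taken from SCTE 142 or TR 101 290 themselves.

```python
# Sketch of a rules-based alarm engine driven by a stored profile.

from dataclasses import dataclass

@dataclass
class Rule:
    parameter: str
    max_value: float
    severity: int  # 1 = most severe

# Illustrative profile a provider might store as a template.
PROFILE = [
    Rule("continuity_errors_per_min", 0, 1),
    Rule("pcr_jitter_us", 500, 2),
    Rule("audio_level_dbfs_over", 2, 3),
]

def evaluate(measurements: dict) -> list[tuple[int, str]]:
    """Compare live measurements against the profile; return filtered
    alarms as (severity, message), most severe first."""
    alarms = [(r.severity,
               f"{r.parameter}={measurements[r.parameter]} exceeds {r.max_value}")
              for r in PROFILE
              if measurements.get(r.parameter, 0) > r.max_value]
    return sorted(alarms)

print(evaluate({"pcr_jitter_us": 800, "continuity_errors_per_min": 3}))
```

Only parameters that breach the profile generate alarms, which is precisely how the filtering described above keeps engineers from drowning in low-severity noise.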

By integrating the monitoring functionality with the router platform, there is scope for extra streamlining and efficiency of operation. If a backup stream is brought into the router with the primary stream, service-affecting errors in the transport stream that are identified by the monitoring software can initiate automated switchover to the backup feed. This has the dual benefit of minimizing degradation of the service as delivered to the subscriber by switching quickly when errors occur, and also reducing maintenance costs.
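The switchover logic just described can be sketched as a small state machine: when the monitor reports a service-affecting error on the primary feed and the backup is healthy, the router switches outputs to the backup, and it can revert once the primary recovers. The class and method names are hypothetical, not from any particular router platform.

```python
# Sketch of router-integrated automatic switchover between a primary
# and a backup transport-stream feed.

class RedundantInput:
    def __init__(self):
        self.active = "primary"

    def on_monitor_report(self, primary_ok: bool, backup_ok: bool) -> str:
        """Called by the monitoring software after each analysis pass."""
        if self.active == "primary" and not primary_ok and backup_ok:
            self.active = "backup"   # service-affecting error: switch over
        elif self.active == "backup" and primary_ok:
            self.active = "primary"  # primary restored: switch back
        return self.active

feed = RedundantInput()
print(feed.on_monitor_report(True, True))    # primary
print(feed.on_monitor_report(False, True))   # backup
print(feed.on_monitor_report(True, True))    # primary
```

Because the decision is taken inside the router that already carries both feeds, the switch can happen in one hop, which is the source of the fast-recovery benefit claimed above.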

Again, an important element is the use of templates or sets of rules to define the parameters and priorities for monitoring the transport streams flowing through the router. Signal flow should be uninterrupted throughout this ongoing analysis, but the software can trigger any corrective actions required at the network layer, with alarms and alerts sent to the designated staff, the performance monitoring system and the network management system via SNMP trap. Stream data is also recorded to enable historical analysis by operators.
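The historical-analysis capability mentioned above amounts to keeping a rolling record of stream measurements so operators can look back after an incident. A minimal sketch, with an illustrative window size:

```python
# Sketch: a rolling window of stream measurements for historical and
# trend analysis by operators.

from collections import deque

class StreamHistory:
    def __init__(self, max_samples: int):
        # deque(maxlen=...) silently discards the oldest sample
        # once the window is full.
        self.samples = deque(maxlen=max_samples)

    def record(self, timestamp_s: float, metrics: dict) -> None:
        self.samples.append((timestamp_s, metrics))

    def recent(self) -> list:
        return list(self.samples)

history = StreamHistory(max_samples=3)
for t in range(5):
    history.record(float(t), {"mlr": 0, "df_ms": 1.2})
print(len(history.recent()))  # 3 -- only the newest samples are kept
```

A production system would persist this data rather than hold it in memory, but the principle is the same: the monitor records continuously, and trends are reconstructed from the stored samples.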

Inefficient monitoring is a costly disadvantage for any operator. Not only does it soak up engineering time and resources, generating excessive expenditure on truck rolls and failing to deliver economies of scale, but it also results in longer intervals between a fault arising and its eventual resolution. Service operators, fighting for market share when subscribers can easily choose another provider, cannot afford to let their service standards slip.

By implementing a comprehensive monitoring model in place of a patchwork of IP and broadcast devices, providers can manage and simplify their maintenance operation. The single point for resolving transport stream issues that such a system offers makes it cost-effective to scale up; protects existing investment in routers, architecture and OSS/NMS systems; and delivers a greener, lower-energy solution with reduced requirement for capital investment.