A process, according to Messrs. Merriam and Webster, is “a series of actions or operations conducing to an end.” A professor I once studied under was somewhat more graphic and straightforward in his explanation to us nascent engineers: Process is what takes place between sausage meat input and sausage output. As this – ahem – viscerally points out, sometimes the details of a process are best left to the imagination – or, in engineering terms, represented by a black box – but the output is what our customers are concerned with. As engineers, we are charged with designing the process and ensuring that the output is as expected.
How does one know that the output is of the desired quality? To make sure that what comes out of a process is acceptable, we must first define what we expect as an output, have a means to measure it, and then compare the measurement with those expectations. The missing element in the simple process description is feedback. Feedback is what allows the process to correct for errors introduced by various impairments.
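The measure-compare-correct loop described above can be sketched in a few lines. This is a minimal illustration, not anything from an SCTE document; the names `target`, `measure`, and `correct` are placeholders for whatever an operator actually monitors and adjusts.

```python
def run_with_feedback(target, measure, correct, iterations=5):
    """Repeatedly compare the measured output with the target and
    feed the error back as a correction -- the essence of a
    self-correcting process."""
    output = measure()
    for _ in range(iterations):
        error = target - output          # how far off expectation are we?
        output = correct(output, error)  # apply the feedback
    return output

# Toy usage: a process whose output starts low is nudged toward 100.
value = 90.0
result = run_with_feedback(
    target=100.0,
    measure=lambda: value,
    correct=lambda out, err: out + 0.5 * err,  # apply half the error each pass
)
```

After a handful of passes the output converges on the target, which is exactly what the feedback element adds to the otherwise open-loop "black box" picture.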
The process of note here is the transport of programming content to the user. Simply stated, the desired outcome is to have a reproduction of the “input” information appear at the receiving “output” with no perceivable alteration. In terms of triple-play services, impairments to video are the most noticeable and objectionable to the average customer. Small errors in the digital data stream are not so apparent to an online user and are detected and mitigated through error-correction schemes built into cable modems and Internet transport protocols.
Of course, viewers are not trained to detect small video errors, nor should they be expected to act as quality assurance technicians. New high-value products, such as the delivery of 3-D content and future higher-definition TV, demand a minimum of introduced errors. Operators want to deliver the best possible experience at the most cost-effective level of performance. Once a tolerable level of error is defined, they need a way to measure and control the source of the error.
Typical manifestations of impairments include everything from complete loss of frames to tearing, tiling, pixelation, motion freezing, and choppiness. Less noticeable or intermittent video errors may be an indication of an impending failure of some component in the delivery path, or of data errors or other artifacts introduced during the encoding/decoding process. Having a tabulation of the possible error sources and visual effects is the first step in deciding the tolerable effect of impairments for specific QoS and QoE levels.
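Such a tabulation might start as nothing more than a lookup from visible effect to likely source and severity. The entries and severity ranks below are illustrative placeholders, not values taken from any SCTE document.

```python
# Map each visible impairment to an assumed likely source and a
# severity rank (1 = most objectionable). All entries are hypothetical.
IMPAIRMENT_TABLE = {
    "frame loss":    ("transport packet loss",      1),
    "tiling":        ("burst errors in the stream", 2),
    "motion freeze": ("decoder buffer underrun",    2),
    "pixelation":    ("encoder bit starvation",     3),
    "choppiness":    ("jitter in delivery timing",  4),
}

def worst_first(table):
    """Order impairments by severity so the most objectionable effects
    are considered first when setting QoS/QoE tolerance levels."""
    return sorted(table, key=lambda name: table[name][1])

ordered = worst_first(IMPAIRMENT_TABLE)
```

Even a toy table like this makes the next step concrete: for each row, the operator decides how much of that effect a given service tier can tolerate.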
Numerous organizations, to say nothing of university and private research efforts, are active in the pursuit of video quality definition and measurement. The Video Quality Experts Group (VQEG) is an international body of experts that develops subjective and objective quality measurement tools and methods that are used by standards developers and researchers to quantify the meaning of “good video.” And SMPTE has published many papers and standards on the subject of video quality.
The SCTE’s HFC Management Subcommittee (HMS) has produced several standards and recommendations geared toward defining the quality assurance parameters necessary to determine impairments and their location in the network. The first is SCTE 168-4, “Recommended Practice for Transport Stream Verification Metrics,” which provides a “methodology for associating transport stream verification metrics with functional subsystems in cable architectures.” This document, working from a defined reference cable network, breaks the complex system into common functional blocks, then lists known possible impairments by location and type of equipment. Dozens of metrics are defined, as well as the level of severity of impact on the digital signal stream. These metrics can be used to find the root cause of network impairments. The values and limits of the measured quantities are left for operators to determine in their specific circumstance.
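The core idea of SCTE 168-4 – metrics tied to functional blocks, each with a severity, with numeric limits left to the operator – can be sketched as a small data structure. The block names, metric names, and limits below are hypothetical examples, not the metrics actually defined in the standard.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str       # measured quantity (hypothetical example names below)
    block: str      # functional block in the reference network where it is observed
    severity: str   # level of impact on the digital signal stream
    limit: float    # operator-defined threshold for this deployment

def violations(metrics, readings):
    """Return metric names grouped by functional block wherever a reading
    exceeds the operator's limit -- pointing toward the root-cause block."""
    out = {}
    for m in metrics:
        if readings.get(m.name, 0) > m.limit:
            out.setdefault(m.block, []).append(m.name)
    return out

metrics = [
    Metric("cc_errors", "edge QAM", "major", 0),
    Metric("pcr_jitter_ns", "multiplexer", "minor", 500),
]
flagged = violations(metrics, {"cc_errors": 3, "pcr_jitter_ns": 120})
```

The point of the structure is the grouping: when a limit is exceeded, the associated functional block is the first place to look for the root cause, while the `limit` values remain whatever the operator determines for their specific circumstance.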
SCTE 168-6, “Recommended Practice for Monitoring Multimedia Distribution,” provides guidance and recommendations to operators on several topics related to the deployment of multimedia delivery systems, including recommendations for quality and service assurance data acquisition, visualization, reporting, and data export. Integrated and independent system monitoring strategies are also discussed. In addition, SCTE 168-7, “Recommended Practice for Transport Stream Verification in an IP Transport Network,” is a guidance document that discusses the detection of errors in IP transport networks used for the delivery of MPEG data streams.
Current work by HMS is an expansion of these recommendations, providing additional QoS metrics and new QoE metrics in draft document HMS 176. Lastly, and most significantly, the subcommittee is drafting HMS 177, “Visual Compression Artifact Descriptions,” which aims to describe various video defects and aid in subjectively evaluating video quality.
Knowing where and how errors can occur and their subsequent manifestations is the crucial process feedback mechanism that ensures quality and provides network designers with data to build even more robust systems.
For more information on joining the SCTE’s Standards program, visit “How to Join” (http://www.scte.org/standards/How_to_Join.aspx).
E-Mail: firstname.lastname@example.org