Progressive TVs, PCs, tablets and smartphones are here to stay.

In the old days, “video display” used to mean “the TV” – more specifically, an analog-input, standard-definition TV. But that has all changed now. Today's new televisions are digital, and video viewing is becoming more popular on PCs, tablets and smartphones, creating a broader category of video display devices. These new devices differ from the old ones in many ways, but one difference stands out: they’re not designed to support an interlaced (i) video format. The traditional 480i standard-definition programs and the new 1080i high-definition programs (both interlaced formats) just don’t work on these new devices, which all display video in a progressive (p) format.

Progressive displays show the whole picture or frame, while interlaced displays split each frame into an alternating series of lines (fields) and present them in sequence. While interlacing had big benefits for analog video transmission bandwidth, progressive video has a smoother picture and is much better suited for digital video distribution.
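To make the field structure concrete, here is a minimal, purely illustrative Python sketch of how one progressive frame splits into the two interlaced fields described above (the frame representation – a simple list of pixel rows – is an assumption for illustration, not how real video pipelines store frames):

```python
# Illustrative sketch: split a progressive frame into its two interlaced
# fields. A "frame" here is just a list of rows, each row a list of pixels.

def split_fields(frame):
    """Return (top_field, bottom_field): the even rows and the odd rows."""
    top = frame[0::2]     # rows 0, 2, 4, ... (top/even field)
    bottom = frame[1::2]  # rows 1, 3, 5, ... (bottom/odd field)
    return top, bottom

# A tiny 4x4 "frame" whose pixel values encode their row and column.
frame = [[r * 10 + c for c in range(4)] for r in range(4)]
top, bottom = split_fields(frame)
# An interlaced system transmits `top` and `bottom` as two separate
# fields in sequence, halving the lines sent per transmission instant.
```

A progressive display needs the whole `frame` at once; an interlaced chain delivers `top` and `bottom` alternately, which is exactly the bandwidth trick that analog TV exploited.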

As more and more video content becomes available on these new devices, the need for video transcoding products that can ingest content in one format and output content in the video formats used for distribution will grow. Progressive video displays have come to dominate the in-home and mobile landscape, and the ability to effectively de-interlace 1080i (interlaced HD) and 480i (interlaced SD) program streams has shifted from a niche feature required only for “Web video” to a critical, must-have requirement for all TV Everywhere transcoding solutions.

Although several de-interlacing solutions have been available for a few years now, they are either ridiculously expensive platforms designed for post-production studios or super-low-complexity (and lower-quality) software packages designed for desktop and amateur production projects. The fact is we’re living in a multi-screen TV Everywhere world, and we need a better solution for multi-format transcoding – one that provides the professional-grade image quality to attract and retain HD subscribers, as well as the bandwidth efficiency to minimize the load on the IP video network at a cost that doesn’t break the bank when scaled up to hundreds or thousands of multi-profile streams.

Mobile devices, by far, represent the most rapidly growing video display market. Even though PCs, laptops, tablets and smartphones are all progressive (p) devices with no robust support for interlaced video, the vast majority of HD programming is still delivered to service providers as 1080i (interlaced) video. But that’s not all: Take a wander through your local TV retailer, and just try to find a new HD television that supports 1080i but not 1080p. You guessed it: They’re not there. So the need for de-interlacing is clear.

Because mainstream video distribution has historically required an interlaced format, de-interlacing is a relatively new requirement we’re just now coming to understand. There are many differences among de-interlacing technologies and what they mean to us in terms of quality, efficiency and cost. Not all de-interlacing is created equal: bad de-interlacing damages video quality and drives up per-stream bandwidth utilization, so it’s vital to get it right.

[Figures 1 and 2: Combing examples]

To fully understand the need for de-interlacing, we first need to take a brief look at its history, starting with interlaced video display. The detailed historical background and context for interlaced video can be found on Wikipedia, but, in summary, the interlacing method was developed to save on transmission bandwidth in the early days of analog TV. Because so many televisions and converters that only supported interlaced content were purchased and installed over the last 30-plus years, content providers distribute a matching format, and much of the content distributed today is still interlaced. As progressive displays take over the video-viewing landscape, these feeds need to be converted – hence the need for a new and better de-interlacing solution.

One medium that has always been produced in a progressive format is film. Film is typically shot in 24 (progressive) frames per second. In order to adapt film for interlaced screens, encoders used a technique called telecine. Telecine basically means taking 24-progressive-frames-per-second content and turning it into interlaced content at an equivalent of 30 frames per second. Because of the popularity of home movie watching, de-interlacing as applied to telecine film content has been in widespread use for many years, and it essentially reverses the telecine process prior to display – putting objects back into the frame in which they were originally recorded. However, unlike film, native interlaced video content represents a much more difficult challenge to de-interlace because objects in motion may have moved between fields. In other words, you can’t put them back into the original frame because there is no original complete frame. This means that each field may contain unique temporal information (where an object is moving), while simultaneously containing spatial information for static objects.
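The telecine cadence described above can be sketched in a few lines. In NTSC regions this is commonly done with a “3:2 pulldown” pattern, where each group of four film frames is spread across ten fields (five interlaced video frames). The code below is an illustrative toy, not a broadcast implementation:

```python
# Illustrative 3:2 pulldown sketch: 24 fps film -> ~30 fps interlaced video.
# Each group of 4 film frames (A, B, C, D) yields 10 fields / 5 video frames.

PULLDOWN = [2, 3, 2, 3]  # fields emitted per film frame in one cadence group

def telecine(film_frames):
    """Expand progressive film frames into a tagged interlaced field sequence.

    Each field is a (source_frame, parity) pair; parity alternates
    top/bottom as fields are emitted.
    """
    fields = []
    for i, frame in enumerate(film_frames):
        for _ in range(PULLDOWN[i % 4]):
            parity = "top" if len(fields) % 2 == 0 else "bottom"
            fields.append((frame, parity))
    return fields

fields = telecine(["A", "B", "C", "D"])
# 4 film frames -> 10 fields -> 5 interlaced video frames
```

Inverse telecine simply detects this repeating cadence and discards the duplicated fields, which is why film content de-interlaces so cleanly: every field can be traced back to one original progressive frame.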

Early attempts at de-interlacing native video include blind field combining, whereby adjacent fields are simply combined into a single frame. This approach, called “Weave,” works great for spatial details of static content, but it immediately breaks down with content that is in motion between fields. The artifact created is called “combing” or (no joke) “mice-teeth” and can be seen in Figures 1 and 2.
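A toy implementation of the Weave approach makes the combing artifact easy to see: when an object moves between the two fields, the woven frame alternates between its two positions line by line (all names and pixel values here are illustrative):

```python
# "Weave" de-interlacing: blindly interleave two adjacent fields into one
# frame. Perfect for static content; moving content shows combing
# ("mice-teeth") because the two fields captured different moments.

def weave(top_field, bottom_field):
    """Interleave even-row and odd-row fields into a full frame."""
    frame = []
    for t, b in zip(top_field, bottom_field):
        frame.append(t)
        frame.append(b)
    return frame

# An object (value 9) sat at column 0 when the top field was captured,
# then moved to column 1 by the time the bottom field was captured.
top_field = [[9, 0, 0, 0], [9, 0, 0, 0]]
bottom_field = [[0, 9, 0, 0], [0, 9, 0, 0]]
combed = weave(top_field, bottom_field)
# The woven frame's rows alternate between the two positions: combing.
```

For a static scene the two fields agree and `weave` reconstructs the original frame exactly, which is why Weave remains attractive for spatial detail.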

Another de-interlacing approach commonly used to improve on combing artifacts is to perform intra-field processing. Intra-field processing uses line doubling, or line interpolation (commonly called “Bob”), on individual fields. Using Bob improves quality by reducing combing artifacts, but because the alternate field is either thrown out or replaced with a basic interpolation, the resulting frame has reduced vertical resolution and can be overly soft and fuzzy. While softer airbrushed images were popular 40-plus years ago, they’re not exactly what today’s HD consumer has come to expect.
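The two Bob variants described above – line doubling and line interpolation – can be sketched as follows (illustrative code; real de-interlacers operate on full video planes, not toy lists):

```python
# "Bob" de-interlacing: use a single field and synthesize the missing
# lines. No combing is possible, but the vertical detail carried by the
# discarded field is lost, which softens the picture.

def bob_double(field):
    """Line doubling: repeat each field line to fill out the frame."""
    out = []
    for row in field:
        out.append(row)
        out.append(list(row))  # duplicate the line
    return out

def bob_interpolate(field):
    """Linear interpolation: each missing line is the average of its
    neighbors (the last line is repeated at the frame edge)."""
    out = []
    for i, row in enumerate(field):
        out.append(row)
        nxt = field[i + 1] if i + 1 < len(field) else row
        out.append([(a + b) / 2 for a, b in zip(row, nxt)])
    return out
```

Either way, half of the true vertical resolution is gone – the interpolated lines are guesses, which is exactly the softness the article describes.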

The result of a Bob de-interlaced video is an improvement over Weave when objects are in motion, such as players on a field, but it comes with a loss of detail for static content that is not in motion, such as a half-time anchor behind the desk. Furthermore, line doubling or line interpolation schemes of Bob de-interlaced video may also introduce stair-step artifacts along diagonal lines, known as “The Jaggies” (see Figure 3).

In some cases, more sophisticated solutions may combine both techniques and adaptively use Bob or Weave methods either on entire fields or a smaller region of a sub-field. These solutions, however, all suffer in varying degrees from similar artifacting and resolution loss.
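A toy per-pixel adaptive combiner in the spirit of these hybrid solutions might look like the sketch below. Note the heavy simplifications: the motion test here is a purely spatial check against neighboring lines and the threshold is arbitrary, whereas real implementations compare same-parity fields across time:

```python
# Toy adaptive Bob/Weave: weave where a bottom-field pixel agrees with its
# spatial neighbors (static content), interpolate where it disagrees
# (assumed motion). Threshold and layout are illustrative only.

def adaptive_deinterlace(top, bottom, threshold=10):
    height = len(top) + len(bottom)
    width = len(top[0])
    frame = [[0] * width for _ in range(height)]
    # Top field fills the even rows directly.
    for r, row in enumerate(top):
        frame[2 * r] = list(row)
    # Bottom field fills the odd rows, pixel by pixel.
    for r, row in enumerate(bottom):
        for c in range(width):
            above = top[r][c]
            below = top[r + 1][c] if r + 1 < len(top) else above
            spatial = (above + below) / 2
            if abs(row[c] - spatial) <= threshold:
                frame[2 * r + 1][c] = row[c]   # agrees: weave (keep detail)
            else:
                frame[2 * r + 1][c] = spatial  # disagrees: bob (avoid combing)
    return frame

top = [[100, 100], [100, 100]]
bottom = [[200, 100], [100, 100]]  # one "moving" pixel at row 0, col 0
frame = adaptive_deinterlace(top, bottom)
```

The moving pixel gets interpolated away (no combing) while the static pixels are woven through untouched (full detail) – and the same trade-offs the article notes still apply wherever the motion decision is wrong.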

The other drawback of the Bob and Weave de-interlacing methods is that the artifacts they introduce add distortion to the image and drive up the encoding rate required to achieve an acceptable viewing quality. As de-interlacing technology improves, it not only makes the picture look better; it actually reduces the complexity of the signal and makes the encoding job easier and more bit rate-efficient.

Best-of-breed de-interlacing schemes contain two fundamental advancements: They are directionally interpolated, meaning that missing lines are interpolated along the detected edges in the picture rather than straight down a column. Further, they are motion-adaptive, meaning that the interpolation adapts to the presence, speed and trajectory of motion, drawing on temporal information for static areas and spatial information for moving ones.

They are also granular and accurate to a per-pixel level. A pixel-accurate implementation will account for fine detail that less-complex implementations will gloss over. A great example of this sort of picture detail is the scrolling tickers at the bottom of news and financial programs. Without a sophisticated and pixel-accurate de-interlacer, the parentheses and symbols will lurch, rather than glide, across the screen.

Motion-adaptive de-interlacing on a per-pixel level employs significant processing resources to perform motion and edge adaptive techniques. By simultaneously processing at the pixel level and across several fields, the pixel-accurate technique can de-interlace cleanly with minimum artifacting, while also preserving the full available resolution. Extremely advanced techniques go one step further by including directional interpolation in order to eliminate any "jaggy" artifacts that may be present.
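The directional step can be illustrated with a tiny edge-directed line-averaging (“ELA”-style) sketch: for each missing pixel, candidate directions between the lines above and below are compared, and interpolation follows the direction along which the two lines agree best. This is purely illustrative and far simpler than any production algorithm:

```python
# Edge-directed interpolation of one missing line between two known lines.
# For each missing pixel, test three directions (left-diagonal, vertical,
# right-diagonal) and average along the best-matching one, so diagonal
# edges stay smooth instead of turning into stair-steps ("jaggies").

def ela_line(above, below):
    width = len(above)
    out = []
    for c in range(width):
        best = None  # (difference, interpolated value)
        for d in (-1, 0, 1):  # direction: \, |, /
            a, b = c + d, c - d
            if 0 <= a < width and 0 <= b < width:
                diff = abs(above[a] - below[b])
                if best is None or diff < best[0]:
                    best = (diff, (above[a] + below[b]) / 2)
        out.append(best[1])
    return out

# A diagonal edge: plain vertical averaging would smear it to mid-gray,
# while the directional pick keeps it crisp.
row = ela_line([0, 0, 0, 100], [0, 100, 100, 100])
```

Here the edge stays a clean step (`row` contains only 0s and 100s) instead of the halfway values a straight vertical average would produce.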

The Bob and Weave approaches discussed earlier lack these advanced features and cannot adequately compete when it comes to picture quality and compression efficiency. These advanced techniques are incredibly compute-intensive. It’s only been within the last 10 years that motion-adaptive de-interlacing on a per-pixel level has even been possible using hardware acceleration, and it’s still not practical in a software-only implementation. To realize this benefit, you not only need hardware, you need new hardware.

While directionally interpolated, motion-adaptive, pixel-level de-interlacing may have every possible bell and whistle required to produce the best-quality picture for progressive screens, it hasn’t come cheap. High-end, professional noise-filtering and de-interlacing solutions were marketed to studios and post-production facilities in the six-figure range and supported just a single channel in a 2/3RU device. In order to bring this technology to a scaled deployment, it’s been equally important to shrink the size and cost of the solution. Luckily, the growth of the high-end home theater market started to take off in the middle of the last decade. This created a larger and growing market for premium image-processing technology, drove up volumes, and supported the innovation required to meet the market demand for size and cost.

However, even if this technology is now available in the high-end home theater market, the trends for consumer electronics and mobile devices are moving in the opposite direction. Mobile PCs, tablets and smartphones have been under intense price and cost competition, and adding this advanced technology to these devices is just too much for the CE manufacturers to stomach. In most cases, the cost of the per-pixel, motion-adaptive de-interlacing chip is more than the cost of the actual device.

If we want to deliver on the promise of premium video quality to all devices, advanced pixel-processing needs to move into the transcoder. One of Imagine Communications' key areas of research and development is to further explore and understand the de-interlacing dynamics as Imagine's transcoding platforms process more and more video for progressive multi-screen applications.

Let’s fast-forward into a future when all network-side transcoding includes per-pixel, motion-adaptive de-interlacing. What benefits might we expect? Picture quality is at the top of the list. Progressive content has been cleanly de-interlaced; pictures are clean and sharp, edges are seamless and motion is smooth. This is particularly nice for content with heavy motion, like sports. It allows the same picture quality that's being delivered to the big-screen HDTV in the home to be replicated on PCs, tablets and smartphones, providing consistent and continuous high quality of service (QoS) and ensuring that picture quality remains an important tool to attract and retain premium HD subscribers. Secondly, the wired and wireless IP video networks are deploying more services at ever-reduced bit rates and saving valuable network bandwidth, as the unnecessary overhead of compressing interlacing artifacts has been carefully and completely removed. Last, but not least, the technology for enabling this advanced processing has moved into the headend transcoder at a nominal cost, making it practical to deliver hundreds or thousands of channels and profiles that just a few short years ago would have been cost-prohibitive.

Progressive TVs, PCs, tablets and smartphones are here to stay. De-interlacing is a must-have requirement to support them, and advanced network-side, pixel-accurate, motion-adaptive de-interlacing is the only way to serve them while ensuring the best picture quality and achieving network and operational efficiency.