Video on-demand services are expected to keep growing, with the VOD market estimated to hit $87.1 billion by 2025. The new streaming frontier is now opening up to both live and linear streaming. What are the key success factors for streaming services as more offers enter the market? Quality of experience is certainly top of mind. Low latency streams play an important part in viewers’ perceptions, and will likely grow in importance as streaming continues its move onto the traditional main screen.
In this guide to low latency streaming, we’ll cover:
The definitions of latency and low latency
The business case for low latency streaming
UHD HDR streaming in low latency
QoE optimization in low latency DAI workflows
What Is Latency?
The streaming industry has coalesced around the term low latency, and there are now many ultra low latency (ULL) offerings. In this context, latency is the time between a real-time action at a live event and that video appearing on the subscriber’s screen.
What Is Low Latency?
In video streaming, low latency describes a short glass-to-glass delay: the time it takes chunks of data to travel from the source to the subscriber’s screen.
Low latency streaming is sometimes also described as “low delay streaming,” but which timeframes are considered low, and which high?
The latency of live streaming is often compared to broadcast. Traditionally, broadcast video has a delay of around five seconds. You can just barely notice this delay when watching an event broadcast to your TV or when listening to the commentary on an analog radio. Since we rarely hear consumers complain about this five-second delay, is it safe to say it is generally acceptable? Possibly.
What about a 6-, 10- or 15-second delay? Is that still acceptable?
The Business Case for Low Latency Video Streaming
As we will discuss more below, there is a business case for delivering low latency streaming but it first makes sense to provide some context to where we are with video streaming and latency.
Adaptive bitrate (ABR) technology was the critical ingredient that enabled the streaming revolution a decade ago, when over-the-top (OTT) video began to be processed and delivered over the internet rather than via broadcast, satellite, cable and other traditional access infrastructures. Previous approaches, such as RTMP, couldn’t scale or take advantage of the internet’s architecture of caches and content delivery networks (CDNs) designed for data packets.
Enter adaptive bitrate (ABR) encoding. By dynamically adapting the quality and bitrate of video and audio to ensure smooth delivery over the internet, ABR effectively let the internet treat video streams like any other data. This resolved the scalability issue. However, in fixing the scalability challenge, ABR introduced stream delay, because content is buffered to accommodate the internet’s varying capacity.
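As a rough illustration of the idea, here is a minimal sketch (with a hypothetical bitrate ladder, not any real player’s algorithm) of how an ABR client might pick a rendition based on recently measured network throughput:

```python
# Illustrative sketch: how an ABR player might pick a rendition from a
# hypothetical bitrate ladder based on measured network throughput.

LADDER_KBPS = [400, 1200, 2500, 5000, 8000]  # hypothetical ABR ladder

def pick_rendition(measured_throughput_kbps, safety_factor=0.8):
    """Choose the highest bitrate that fits within a safety margin of
    the measured throughput, falling back to the lowest rung."""
    budget = measured_throughput_kbps * safety_factor
    affordable = [b for b in LADDER_KBPS if b <= budget]
    return max(affordable) if affordable else LADDER_KBPS[0]

print(pick_rendition(6000))  # 6000 * 0.8 = 4800 -> 2500 kbps rendition
print(pick_rendition(300))   # below the ladder -> 400 kbps floor
```

In a real player this selection runs continuously, stepping down during congestion (causing visible quality drops) and relying on a buffer of segments to smooth the transitions, which is exactly where the delay comes from.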
There are new ways to approach the stream delay issue, and video technology is evolving with CMAF, community Low-Latency HLS (LHLS) and Apple’s LL-HLS. Today, video streaming capabilities have become reliable enough to carry commercial services such as Netflix, but the latency issue is still a concern, especially for live events.
Let’s take a look at how and where that latency has a role to play in quality of experience.
How Latency Affects the Video Experience
Let’s first look at how some different types of content may be affected by latency.
Non-linear content is only impacted by latency in that this could, in some cases, cause initial buffering before a stream starts.
Re-linearized content is similarly impacted by latency, which can also be akin to channel-change time.
Live news only suffers from latency when it is significant, i.e., over 30 seconds.
Interactive shows have traditionally not required low latency, for example, with the Eurovision song contest that uses telephone, SMS-based voting. However, a new genre of truly interactive shows, where participants interact in real-time based on a stream, will require a latency below two seconds.
Niche live sports will typically only be streamed on a single platform and, therefore, not broadcast simultaneously on different ones. They are not affected by latency, except perhaps if there is also a social media aspect.
Mass audience live sports are the premium market that most of us focus on in this debate. But as with niche sports, most events, even a major event like the European Champions League final, will be carried by a single provider in most markets. Potential competition in terms of latency will be between the same operator’s broadcast and OTT streams. Such a difference isn’t an issue within a household as the event will either be on the main screen or second screens, not both.
The World Cup, which happens once every four years, is a unique case where latency does become an issue. Indeed, the national team’s matches are, in most cases, carried by a public broadcaster that is itself freely available on all platforms.
Betting is a unique use case where low latency is preferable, but the requirement is for a fully controlled and consistent latency.
Inherently real-time video applications like telemedicine, flying a drone, or using a massive screen at a live event are niche applications that require latency in milliseconds, not seconds, and are beyond the scope of this post.
Do I Need Low Latency? Latency Requirements Depend on Use Case
Any operator offering any of the above use cases will eagerly embrace lower delay times for video streaming. However, the business case to spend significant money to reduce delay will depend on the use case.
The one factor that applies to all use cases, however, is social media. A tweet posted from a live event reaches followers with roughly the same five-second delay as a linear broadcast, so matching that delay becomes a key target for live streaming.
Although there are different technical approaches to reducing stream delay, most newer solutions use CMAF (covered below). Explaining these different techniques requires a deep-dive into packaging, chunks, origin servers, video players and CDNs.
Commercial products have been available for over a year, and media processing and delivery providers have been making their case through growing marketing efforts. Some operators like the BBC have been vocal about the issue for several years. Still, in 2021, operators are learning more about the business growth potential gained from lowering delay.
Beyond solving delay issues, expectations on video quality add an additional layer to broadcasters’ business challenges.
UHD HDR Video Delivery and Cost-Effective, Low Latency Solutions
Consumers today are watching videos on a growing number of screens, and video quality expectations are constantly increasing. While HD video is the mainstream viewing experience today, UHD HDR is getting attention — both from consumers and standards organizations.
The reason UHD HDR is attractive is that it provides viewers with an immersive video experience at a resolution that is four times greater than HD. HDR expands the available color space, creating more vivid pictures, and increasing brightness and points of darkness.
The wow factor of UHD HDR hasn’t gone unnoticed. Already, broadcasters have produced and delivered several major sports events in UHD HDR.
The standardization scene for UHD HDR is also busy and several different organizations are currently working on improving HDR standards. As the consumer and market demand for UHD HDR grows, the need for efficient, cost-effective UHD HDR production and delivery solutions has never been greater.
Consumer and Market Demand for UHD HDR Content Is Driving Adoption
Consumer support for UHD HDR is strong. Research has found that about 30% of subscribers around the world have opted for the UHD premium Netflix tier, and global shipments of UHD HDR TVs are rising.
Standardized HDR Versions
Given the consumer demand for UHD HDR, the industry has quickly moved to support the technology. Several different HDR versions are currently being standardized, including HDR10, HLG10 and PQ10. HDR10 is supported by all TV manufacturers, and many industry professionals consider it the de facto standard. Detailed information about each of these standards is outlined in Harmonic’s UHD HDR technical guide.
Who Has HDR Offerings?
On the deployment side, several major service providers have positioned themselves in the HDR space. Amazon, BT, Comcast, DirecTV, Dish Network, NHK, Sky Group and Vodafone all currently have UHD HDR offerings.
Live sports, especially, stand to benefit greatly from the pristine resolution that UHD HDR provides. For example, Fox broadcast the “Big Game” in 4K HDR in 2020, and it is these types of premium events that are likely to boost the adoption of UHD HDR.
4 Steps to Efficient UHD HDR Delivery
UHD HDR video is all set to go mainstream. However, there are a few issues that need to be resolved before this can happen. The industry players need to collaborate to solve kinks around device compatibility, costs associated with production and delivery, and of course, latency.
1. Ensure compatibility for devices
Device compatibility is a challenge for mass UHD HDR adoption. IHS Markit expects that nearly all of the 112 million 4K TVs shipped worldwide will be compatible with HDR, but only 30% will have true HDR performance capabilities.
To solve this issue, UHD HDR content needs to be delivered using a standard that offers stream backward compatibility and display backward compatibility.
Stream backward compatibility is when an SDR decoder can decode an HDR stream.
Display backward compatibility is when an HDR decoder can decode an HDR stream and send it to an SDR display.
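The two compatibility cases above can be summarized in a small sketch. The capability flags and the tone-mapping assumption are illustrative, not drawn from any specific standard’s API:

```python
# Illustrative sketch of the two compatibility cases described above.
# The capability flags are hypothetical, not from a real standard's API.

def can_serve_hdr_stream(decoder_supports_hdr, display_supports_hdr,
                         stream_backward_compatible):
    """Return True if a single HDR stream can serve this device without
    a separate SDR simulcast."""
    if decoder_supports_hdr:
        # Display backward compatibility: an HDR decoder can decode the
        # HDR stream and convert it down for an SDR display.
        return True
    # Stream backward compatibility: an SDR-only decoder can still
    # decode the stream if the HDR format is backward compatible.
    return stream_backward_compatible

# SDR-only device, backward-compatible (HLG-style) stream:
print(can_serve_hdr_stream(False, False, True))   # True
# SDR-only device, non-backward-compatible stream:
print(can_serve_hdr_stream(False, False, False))  # False
```

Every device that returns `False` here is one that would need a separate SDR simulcast, which is exactly the extra production and delivery cost discussed in the next step.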
2. Increase production and delivery efficiency
Technology today for UHD HDR is still evolving, so it’s important to optimize your UHD HDR production and delivery. Producing and delivering higher resolution video can be an investment, in terms of equipment, storage and bandwidth.
Today, all deployed broadcast systems use the HLG10 standard. For unicast, PQ10 Live is an option currently being considered. In a unicast (VOD or live) scheme with HDR10, a simulcast SDR stream must be provided for legacy devices, which increases investment in production, encoding, CDN ingest and CDN storage.
Advancements in IP networking and compression technology, including HEVC encoding and HDR advanced processing, have become central to boosting UHD HDR workflow efficiency. Compared with MPEG-4, HEVC reduces the bitrate by 50% and allows you to deliver better video quality at the same bitrate.
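That 50% figure translates into substantial delivery savings. Here is a back-of-the-envelope sketch, using a hypothetical 32 Mbps AVC baseline for UHD rather than a measured figure:

```python
# Back-of-the-envelope arithmetic based on the ~50% figure cited above.
# The 32 Mbps MPEG-4 AVC baseline for UHD is a hypothetical example.

avc_uhd_bitrate_mbps = 32.0   # hypothetical AVC UHD bitrate
hevc_savings = 0.50           # ~50% reduction with HEVC

hevc_uhd_bitrate_mbps = avc_uhd_bitrate_mbps * (1 - hevc_savings)
print(hevc_uhd_bitrate_mbps)  # 16.0 Mbps for comparable quality

# For a CDN serving 100,000 concurrent viewers, the egress saved:
viewers = 100_000
saved_gbps = (avc_uhd_bitrate_mbps - hevc_uhd_bitrate_mbps) * viewers / 1000
print(saved_gbps)             # 1600.0 Gbps of egress avoided
```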
Choosing a video delivery platform with flexible deployment options is also important to eliminate unnecessary hardware costs. You have options today to deploy your UHD HDR workflows:
On customer-supplied hardware
In the public or private cloud
Or as a SaaS solution like Harmonic’s end-to-end UHD HDR offering
3. Guarantee low latency for UHD HDR
UHD HDR and live sports content make a great combo. With UHD HDR, sports fans get a taste of the action in stunning clarity and detail. However, it’s a challenge to deliver UHD HDR OTT content with the same low latency as broadcast delivery.
By embracing a live streaming platform with innovative technologies like cloud, content-aware encoding (CAE) and the CMAF standard, you can deliver as low as 5 to 6-second end-to-end latency. The industry norm today is around 30 to 35 seconds. The gains are amazing.
How do these optimized video delivery solutions work? CMAF LLC delivers a segment as a series of small chunks of about 200 ms each, shipping them before the full two-second or six-second segment has been encoded.
Data transmission is accelerated across the entire workflow, including in the decoder, which can potentially start decoding before a complete segment is received. CAE further solves latency issues by offering up to a 50% reduction in bandwidth for UHD HDR delivery. Leveraging the mechanics of the human eye, CAE can assess video quality and optimize encoding parameters in real-time.
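A rough latency-budget sketch shows why chunking helps. The buffer counts and overhead figure below are hypothetical, but they illustrate the pattern: with classic segment delivery the player buffers several whole segments, so latency scales with segment size, while chunked delivery lets the player start with far less media buffered:

```python
# Rough latency-budget sketch for segment-based vs. chunked delivery.
# Buffer counts and the fixed overhead are hypothetical illustrations.

def glass_to_glass_s(media_unit_s, buffered_units, fixed_overhead_s=1.0):
    """Approximate latency: media buffered at the player plus a fixed
    budget for encode, packaging and CDN propagation (hypothetical)."""
    return media_unit_s * buffered_units + fixed_overhead_s

# Classic HLS: 6 s segments, 3 buffered at the player
print(glass_to_glass_s(6.0, 3))   # 19.0 seconds

# CMAF chunked: 0.2 s chunks, player holds ~10 chunks (~2 s of media)
print(glass_to_glass_s(0.2, 10))  # 3.0 seconds
```

Real deployments add re-buffering margins and CDN hops on top, which is why the article’s 5 to 6-second end-to-end figure, rather than the 3 seconds of this toy model, is the practical target.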
4. Combine live events and 5G with HDR in the future
Already, major sporting events have been produced and delivered in HDR, bringing high-res, immersive viewing experiences to audiences around the world. In the future, you can expect to see a lot more events delivered in UHD HDR — and not just premium ones. You can cost-effectively produce and deliver events in UHD HDR in the cloud with HEVC compression and a SaaS business model.
As the consumer and market adoption of UHD HDR continues to grow, now is the time for service providers and broadcasters to engage. Delivering UHD HDR channels is an exciting revenue opportunity. It’s a great way to differentiate your offering and give live events like premium sports an extra wow factor. You just need the right solutions.
Harmonic has solutions for UHD HDR production and delivery that are not only efficient but also cost-effective while still guaranteeing low latency. We demonstrated how we can put HDR into action using our VOS®360 platform to deliver a UHD HDR channel featuring content from NASA. As 5G networks are rolled out around the globe, we look forward to the next steps to deliver more UHD HDR, 8K and higher resolutions.
An additional consideration on top of low latency and HDR is return on investment (ROI). Let’s dive into some of the latest technologies related to low latency and dynamic ad insertion (DAI).
4 Expert Tips to Optimize QoE in Low Latency DAI Workflows
Dynamic ad insertion (DAI) is the ability to swap out ads in linear, live and VOD content. DAI workflows enable you to personalize your video streaming services and offer a path to monetizing your content with targeted ads and advanced ad insertion capabilities.
DAI workflows stitch advertising content into your video streams and add more steps to the media processing chain. Whenever additional video processing is added to a workflow, it is important to ensure that the viewer experience remains, at the very least, unchanged, and ideally improves.
The perceived quality of experience can both positively and negatively impact viewer behavior, either keeping subscribers watching your video services for longer periods of time or creating waves of canceled subscriptions. DAI workflows help to create unique experiences that are tailored to subscribers’ needs and expectations.
However, when you leverage a DAI workflow, latency becomes a risk factor for the perceived quality of experience. A noticeable delay between a live video stream and the broadcast is the kind of streaming experience you want to avoid.
Here are four tips to leverage DAI while delivering low latency video.
1. Leverage CMAF for low latency
If you want to deliver low latency video, Common Media Application Format (CMAF) is a solution. CMAF is a media file format that provides a common workflow for the delivery of live content in MPEG DASH and Apple HLS. As a standardized format, CMAF is backed by an entire ecosystem of 60+ companies.
CMAF offers the option to create sub-segment entities called CMAF chunks, often referred to as low latency chunks (LLC). These chunks are typically in the range of a few hundred milliseconds, allowing more progressive delivery of the segments. The capability to deliver video in smaller entities makes it possible to achieve an end-to-end delivery latency of five seconds (typical broadcast latency) without any compromise on video quality.
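A minimal sketch of the chunked-packaging timing, with illustrative numbers rather than anything mandated by the CMAF specification:

```python
# Minimal sketch of the chunked-packaging idea: a 2 s segment is exposed
# as ten 200 ms chunks that become available progressively, instead of
# one blob available only after the whole segment is encoded. Timings
# are illustrative, not taken from the CMAF specification.

SEGMENT_S = 2.0
CHUNK_S = 0.2

def chunk_availability_times(segment_start_s):
    """Wall-clock times at which each chunk of a segment can be sent,
    assuming a chunk ships as soon as its media is encoded."""
    n_chunks = int(SEGMENT_S / CHUNK_S)
    return [round(segment_start_s + CHUNK_S * (i + 1), 2)
            for i in range(n_chunks)]

times = chunk_availability_times(0.0)
print(times[0])   # 0.2  -> first chunk ships 200 ms in
print(times[-1])  # 2.0  -> whole-segment delivery would wait this long
```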
Another important value of using CMAF is the promise it offers toward having a common delivery workflow. You can avoid the cost of duplicate caching for HLS and DASH segments, which is common today. The problem stems from having separate delivery workflows for the different streaming formats, which creates additional costs related to processing, caching and storage. With CMAF, because you only have one set of media files that can be used for HLS and DASH, you don’t need to duplicate caching in the CDNs.
This benefit of CMAF is still a work in progress in terms of being translated into actual production workflows. The industry is currently working on the last details toward using a single ecosystem for the delivery of live content in DASH and HLS, instead of needing two separate ecosystems.
2. Deploy simple blackout management tools
Video streaming services typically reach a wide range of audiences on a wide range of devices. Ensuring compliance with viewing requirements, based on viewers’ location, is critical. This is done using what the industry calls scheduled blackouts.
Blackouts are used mainly for live TV content, especially sports. Service providers also implement blackouts when they do not have the rights to air programs outside of a specific country.
Blackout management solutions can simplify this process, handling event scheduling and ensuring that you are accurately ingesting substitute content and slates. They’ll also provide you with an updated manifest to ensure legal commitments are fulfilled.
To efficiently manage and configure a blackout it’s important to be able to substitute segments dynamically. The dynamic aspect is essential because while ads are planned for a specific time slot they can be switched instantly, based on predefined data to best target the customer.
A DAI solution that supports blackout management through manifest manipulation allows you to change or customize the manifest dynamically. With advanced manifest manipulation you can, for instance, create custom manifests per user, in real-time.
A solution that provides advanced capabilities for blackout management via manifest manipulation will give you the most flexibility to personalize the user experience and deliver the relevant content that viewers crave so much.
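To make the manifest-manipulation idea concrete, here is a toy sketch. The playlist, placeholder URI and per-user ad choice are invented for illustration; real DAI systems operate on full HLS or DASH manifests driven by SCTE-35 markers:

```python
# Toy sketch of per-user manifest manipulation: the ad placeholder in a
# tiny HLS-style media playlist is swapped for a segment chosen for one
# viewer. The playlist and URIs are hypothetical.

TEMPLATE = """#EXTM3U
#EXTINF:6.0,
content_001.ts
#EXTINF:6.0,
AD_SLOT.ts
#EXTINF:6.0,
content_002.ts"""

def personalize(manifest, user_ad_uri):
    """Return a manifest with the ad placeholder replaced for this user."""
    return manifest.replace("AD_SLOT.ts", user_ad_uri)

m = personalize(TEMPLATE, "ads/sports_fan_042.ts")
print("ads/sports_fan_042.ts" in m)  # True: this user's ad is stitched in
print("AD_SLOT.ts" in m)             # False: the placeholder is gone
```

Because the manipulation happens on the manifest rather than the media, the same cached segments serve every viewer; only the lightweight playlist is generated per user.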
3. Deploy a cloud solution for additional flexibility
If you want to achieve the highest possible level of flexibility for the entire DAI workflow, run it in the cloud. Having the entire end-to-end workflow on the cloud eliminates unnecessary deployment complexity and speeds up the launch of video streaming services. A cloud-based DAI approach will give you flexibility to dynamically substitute ads within live, linear, catch-up TV and recorded content.
You can run an entire DAI workflow on the cloud, including content ABR transcoding, ad transcoding, packaging, manifest generation and manipulation.
The flexibility of the cloud also includes agile scaling. Scaling is another key factor that can impact quality of experience. A cloud-based DAI solution enables the scalability you need by serving a manifest per user for as many viewers as necessary. Ultimately, that means you can start selling targeted advertising and monetizing your content faster.
4. Reduce workflow and deployment complexity with a pre-integrated solution
Even after you deploy a DAI solution that supports CMAF, with simple blackout management tools running on the cloud, there are still many different elements involved with delivering targeted advertising to end-users, including integration with ad servers, clients and players.
Leveraging a DAI solution that offers an open architecture and an extensive range of pre-integrated ecosystem partners will help to speed up and uncomplicate that process.
Bringing DAI and Latency Into Focus
By selling targeted advertising, you can significantly boost your video streaming revenue. Global online TV episode and movie revenues are projected to increase by 101%, from $83 billion to $167 billion, between 2019 and 2025, according to Digital TV Research.
However, ads are just one part of the equation that results in happy subscribers that stay on-screen longer. The quality of your streaming and VOD service must offer a pristine viewing experience and low latency that meets the same expectations as live broadcasts.
Get a DAI Solution With Low Latency
Harmonic’s DAI solutions benefit from our close working relationship with the essential vendors in the vast DAI ecosystem. As an active member of multiple standards organizations, including DASH-IF, Streaming Video Alliance and CTA Wave, we are ready, when the ecosystem becomes mature, to integrate advanced capabilities into our VOS360 cloud video streaming and delivery platforms.
Contact us to discuss any questions you may have or to learn more about our industry-leading VOS360 solutions and how to optimize your DAI workflows for low latency.
Patrick Gendron is Director of Innovation at Harmonic for Digital Television Applications. He joined Harmonic with the acquisition of Thomson Video Networks. Patrick recently moved from managing the Harmonic R&D Innovation team to the Marketing Innovation & Evangelism team and is Harmonic’s representative at DASH IF, DVB TM and Streaming Video Alliance. Previously, Patrick held senior program and engineering management positions in the digital television headend domain, with international R&D management activities, at Grass Valley and Nextream. He started his career as a research engineer at the Laboratoires Electronique de Rennes (Thomson CSF) where he developed new technologies for professional video transmission over optical fiber (long-haul, single-mode links). As digital technology was maturing for television applications, he moved to Thomson Broadband Systems in a project management role for a number of first-generation digital TV products such as satellite modulators and contribution MPEG codecs. Patrick is a graduate in Computer Science and Telecommunications from the Ecole Supérieure d’Electricité (Supélec).