With a growing number of OTT service providers counting live sports as an attractive part of their offering, many consumers are now willing to watch live events through their video streaming service. The gap in QoE between a traditional broadcast and OTT is starting to narrow. An important ingredient for delivering exceptional QoE for live sports coverage is low end-to-end latency, so it’s no surprise that this topic is often ranked by users as one of the most critical parameters of a good live OTT service.
Here, we will explore the latest improvements in delivering low latency for live OTT services and expected developments in the coming months.
What is the situation so far?
The two dominant delivery formats, HLS and DASH, both typically incur latencies of between 30 and 45 seconds due to the sequential buffering of multiple media delivery segments. Thankfully, the industry came up with a solution a couple of years ago with the MPEG Common Media Application Format (CMAF). Although this format is designed to support both DASH and HLS, only the DASH ecosystem has been working to define and document a way to deliver an OTT service with an end-to-end latency on par with broadcast (typically between 5 and 10 seconds). Having a good, reliable, standardized low latency solution that only addresses DASH-enabled devices is seen as a limitation by many broadcasters and operators. Many would prefer to delay the deployment of new services until there is a similar solution for the HLS ecosystem.
The wait is over
During its annual developer conference in June, Apple announced an important update to the HLS specification, including a complete solution to address latency issues. This solution is based on CMAF segments (although TS segments are still permitted). With this announcement, the previously mentioned hurdles should disappear.
The new HLS update offers some significant enhancements compared with the HLS specification that is so widely deployed and used today:
Partial segments: The previous HLS specification was rather simple, with delivery segments of typically 6 seconds. Low latency HLS provides tools to deliver media at the live edge of the media playlist, where the media is divided into a larger number of smaller files, such as CMAF chunks of 250-300 ms in duration (a few frames, but not a full GOP). These smaller files are called HLS partial segments. A partial segment has a short duration and can be packaged and added to the media playlist much earlier than its parent segment: the first partial segment might be published only 250 ms after the previous segment. To keep playlists from growing too large, partial segments are automatically removed from the media playlist once they are older than three target durations from the live edge.
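As an illustrative sketch, a low latency media playlist advertising partial segments might look like the following, using tags from Apple’s preliminary specification (the segment names are hypothetical, and the exact tag syntax may change before the final release):

```
#EXTM3U
#EXT-X-VERSION:7
#EXT-X-TARGETDURATION:6
#EXT-X-PART-INF:PART-TARGET=0.3
#EXT-X-MEDIA-SEQUENCE:265
#EXTINF:6.0,
segment265.mp4
#EXTINF:6.0,
segment266.mp4
# Segment 267 is still being produced, but its first parts are
# already published at the live edge:
#EXT-X-PART:DURATION=0.3,URI="segment267.part1.mp4",INDEPENDENT=YES
#EXT-X-PART:DURATION=0.3,URI="segment267.part2.mp4"
```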
HTTP/2 push segments: Traditionally, an HLS player polls the playlist file to check for newly available segments, and then retrieves each media segment with a second HTTP request. When low latency delivery is required, the overhead of these traditional HTTP round trips becomes problematic with regard to latency. Apple’s new specification uses HTTP/2 push to push the shorter media “parts” out in response to a playlist request. The playlist itself, however, still has to be fetched very frequently, up to three or four times per second.
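As a sketch of how such a request might look, Apple’s preliminary specification defines query parameters that let the client name the part it is waiting for and ask for it to be pushed; the path and parameter values below are hypothetical:

```
# Request the playlist that contains part 2 of segment 267, and ask the
# server to push that part over HTTP/2 along with the playlist response:
GET /live/playlist.m3u8?_HLS_msn=267&_HLS_part=2&_HLS_push=1 HTTP/2
```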
Blocking playlist requests: On a media playlist update, the player can add query parameters to specify that it wants the playlist response to contain a future segment. The server can then hold onto the request (block) until a version of the playlist that contains that segment is available. The client can also ask the server to push the indicated segment to the client along with the playlist response. Blocking playlist reload eliminates polling, and server push eliminates request round trips.
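The blocking behavior itself is simple to reason about. The following minimal Python sketch simulates it with an in-process origin instead of a real HTTP server (the class and method names are hypothetical): the client asks for a future media sequence number, and the request is held until a playlist containing that segment has been published.

```python
# Minimal sketch of blocking playlist reload, assuming a hypothetical
# in-process "origin" rather than a real HTTP server: the client asks
# for a future media sequence number (cf. the _HLS_msn query parameter)
# and the request blocks until that segment is available.
import threading

class PlaylistOrigin:
    def __init__(self):
        self._cond = threading.Condition()
        self._latest_msn = 0  # highest media sequence number published

    def publish(self, msn):
        """Packager calls this when segment `msn` is added to the playlist."""
        with self._cond:
            self._latest_msn = max(self._latest_msn, msn)
            self._cond.notify_all()

    def get_playlist(self, want_msn, timeout=5.0):
        """Hold the request until the playlist contains `want_msn`."""
        with self._cond:
            ok = self._cond.wait_for(lambda: self._latest_msn >= want_msn,
                                     timeout=timeout)
            if not ok:
                raise TimeoutError("segment not published in time")
            return f"#EXTM3U playlist up to segment {self._latest_msn}"

origin = PlaylistOrigin()
# A packager thread publishes segment 7 shortly after the request arrives;
# the client's request blocks until then, instead of polling.
threading.Timer(0.1, origin.publish, args=(7,)).start()
print(origin.get_playlist(want_msn=7))
```

This is the sense in which blocking reload eliminates polling: the server answers exactly when the new playlist exists, rather than the client guessing when to re-fetch.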
Playlist delta update: For long running events, the playlist size can become a problem. With the introduction of shorter segments and parts, this is even more critical, as more frequent playlist update requests will be made. In this HLS update, Apple enables a way for “delta” playlists to be generated, which means the playlist only contains some of the segments from the full playlist. This allows players to request the full playlist once, store it and add to it using smaller delta playlists, which only contain the latest few segments along with the low latency “parts” at the head of the playlist.
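A delta update exchange might look like the following sketch, based on the `_HLS_skip` query parameter and `EXT-X-SKIP` tag from the preliminary specification (the segment names and counts are hypothetical):

```
# Request only the recent part of the playlist:
GET /live/playlist.m3u8?_HLS_skip=YES HTTP/2

# The response elides the older segments the player already knows about:
#EXTM3U
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:100
#EXT-X-SKIP:SKIPPED-SEGMENTS=120
#EXTINF:6.0,
segment220.mp4
#EXT-X-PART:DURATION=0.3,URI="segment221.part1.mp4"
```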
Faster bitrate switching: With low latency delivery, the player has less time to react to a change in network conditions. Low latency HLS allows the playlist responses for a particular rendition to contain information about the most recent chunks and segments available in the other renditions. This opens up the possibility of jumping to another rendition without first making a full playlist request for it.
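In the preliminary specification, this cross-rendition information is carried by an `EXT-X-RENDITION-REPORT` tag at the end of each media playlist; the rendition URIs in this sketch are hypothetical:

```
# Appended to the 2 Mb/s rendition's playlist: the live edge position
# of the sibling renditions, so the player can switch without first
# fetching their full playlists:
#EXT-X-RENDITION-REPORT:URI="../4mbps/playlist.m3u8",LAST-MSN=267,LAST-PART=2
#EXT-X-RENDITION-REPORT:URI="../1mbps/playlist.m3u8",LAST-MSN=267,LAST-PART=2
```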
Main implementation challenges of low latency HLS
This new Apple low latency HLS specification offers a lot of improvements. No doubt it will take time to be fully digested by the ecosystem, but keep in mind that Apple came up with both a specification for building the content and a native player that properly supports these new features. This is a big difference from the low latency DASH ecosystem, where reliable players took much longer to appear.
Another challenge that needs to be mentioned is the support of HTTP/2 push on all CDNs, which is not the case today. While there’s good general HTTP/2 coverage on the big name CDNs, push is less widely implemented.
Finally, media files may need to be pushed along with the playlist response, requiring the use of the same edge endpoint for your playlist requests and media requests. This is a big change compared with traditional HLS, where segments and playlists can be managed separately.
Now that we’ve given an overview of the new HLS specification, let’s do a comparative examination of DASH and HLS low latency solutions:
What is common between low latency DASH and low latency HLS?
Use of CMAF for media segments with CMAF chunks as sub-entity markers
Backward-compatible solutions, i.e., legacy players can still play the content, albeit with more latency
Both solutions can use the same DRM
Expected latencies with both the DASH and HLS approaches are in the “broadcast range” of 4 to 10 seconds
What is different between low latency DASH and low latency HLS?
On the delivery format side, low latency DASH relies on HTTP/1.1 chunk transfer encoding while low latency HLS uses HTTP/2 push directives
Low latency HLS needs a playlist refresh for each new chunk, whereas low latency DASH does not require this
Low latency HLS requires that the same server provide the playlist and the media files, whereas these can be separate entities for low latency DASH delivery
How can DASH and low latency HLS coexist?
Since the two specifications use different manifest files and sometimes different encryption, operators will in that case have to store two different copies of the same asset at the edge. On the origin side, a single mezzanine file can be stored and distributed in both formats, thanks to on-the-fly packaging techniques.
Harmonic was involved early on in the MPEG standardization of CMAF and was the first to publicly show a DASH CMAF low latency chunk live demonstration, with the Akamai CDN, at its IBC2017 stand. In addition, Harmonic has been working closely with Apple on low latency HLS and will show a live demonstration at IBC2019. During the demo, you will be able to see our VOS® Video SaaS solution take a live stream and package it in both the HLS and DASH formats. On the delivery side, the origin server is connected to the Akamai CDN. Which one will provide the lowest delay? You will have to wait until IBC for the answer!
Patrick Gendron is Director of Innovation at Harmonic for Digital Television Applications. He joined Harmonic with the acquisition of Thomson Video Networks. Patrick recently moved from managing the Harmonic R&D Innovation team to the Marketing Innovation & Evangelism team and is Harmonic’s representative at DASH IF, DVB TM and Streaming Video Alliance. Previously, Patrick held senior program and engineering management positions in the digital television headend domain, with international R&D management activities, at Grass Valley and Nextream. He started his career as a research engineer at the Laboratoires Electronique de Rennes (Thomson CSF) where he developed new technologies for professional video transmission over optical fiber (long-haul, single-mode links). As digital technology was maturing for television applications, he moved to Thomson Broadband Systems in a project management role for a number of first-generation digital TV products such as satellite modulators and contribution MPEG codecs. Patrick is a graduate in Computer Science and Telecommunications from the Ecole Supérieure d’Electricité (Supélec).