In my previous blog, I talked about the impact of all-IP workflows for production — encompassing everything from uncompressed and compressed video and audio flows to IP-based control and talkback. In this post, I look at some ways to resolve workflow requirements for remote production, where content acquisition takes place thousands of miles away from the staff who do the bulk of the production work.
The context for this example is live events, such as sports, where there may be dozens of event feeds, with raw and/or live production taking place on location. The objective is to allow the event content to be made available to the production staff to create programming — including highlights, promos and full-length programs — without having to be onsite where the event is taking place.
In the simplest case, contribution links could be used to send the event feeds to the production facility, where they are captured and made available to editing and packaging for the various delivery platforms that will air the content. However, there may be cases where some production takes place on location. In this case, having access to the event feeds both on location and at the production facility is important. On-location capture can also provide redundancy in case links to the production facility fail.
In the latter case, storage and storage management are needed on location and at the production facility, along with good connectivity to send file-based content to the production facility. Even with good ingest compression, it may be impractical and cost-prohibitive to send streams or files of high-resolution content for dozens of feeds. Frame-accurate low-resolution proxy content can be used instead. It is common for low-resolution proxy content to be one resolution lower (e.g., SD-resolution proxy for HD content) or the same resolution as the high-resolution content, at a significantly lower bitrate to ease editing.
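To make the bandwidth argument concrete, a back-of-the-envelope comparison helps. The feed count and bitrates below are illustrative assumptions (not figures from any particular codec or vendor), but they show why contributing dozens of high-resolution feeds is often impractical while proxies are not:

```python
# Illustrative comparison of aggregate contribution bandwidth for
# high-resolution ingest versus low-resolution proxy. All bitrates
# below are assumed example figures, not vendor specifications.

def total_bandwidth_mbps(feed_count: int, bitrate_mbps: float) -> float:
    """Aggregate bandwidth needed to stream all feeds concurrently."""
    return feed_count * bitrate_mbps

FEEDS = 24              # "dozens of feeds" at a large event
HIGH_RES_MBPS = 100.0   # e.g., mezzanine-quality HD ingest (assumed)
PROXY_MBPS = 2.0        # frame-accurate low-bitrate proxy (assumed)

high = total_bandwidth_mbps(FEEDS, HIGH_RES_MBPS)
proxy = total_bandwidth_mbps(FEEDS, PROXY_MBPS)

print(f"High-res: {high:.0f} Mb/s, proxy: {proxy:.0f} Mb/s "
      f"({high / proxy:.0f}x reduction)")
# High-res: 2400 Mb/s, proxy: 48 Mb/s (50x reduction)
```

With these example numbers, the proxy contribution fits comfortably on an ordinary wide-area link, while the high-resolution aggregate would demand dedicated capacity.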
The low-resolution proxy and high-resolution ingests take place simultaneously, and since proxy content can be viewed and copied a few seconds after ingest begins (as with high resolution), the proxy can be duplicated on location and in the production facilities with little delay (i.e., seconds vs. minutes). In the production facility, staff can view and make edit decisions during ingest based on the low-resolution proxy streams.
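The key mechanism here is replicating a file that is still growing: each replication pass copies only the bytes appended since the last pass, so the remote copy trails the ingest by seconds rather than waiting for the record to finish. A minimal sketch, with file paths and the simulated ingest standing in for real media storage:

```python
# Minimal sketch of replicating a growing proxy file while ingest is
# still in progress. Each pass appends only the new bytes to the copy.
# Paths and the simulated "ingest" are illustrative stand-ins.

import os
import tempfile

def replicate_growth(src_path: str, dst_path: str) -> int:
    """Append any bytes src has gained since the last pass; return count."""
    copied = os.path.getsize(dst_path) if os.path.exists(dst_path) else 0
    with open(src_path, "rb") as src, open(dst_path, "ab") as dst:
        src.seek(copied)          # skip what the copy already holds
        chunk = src.read()
        dst.write(chunk)
    return len(chunk)

# Simulate an ingest that keeps appending between replication passes.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "proxy.mp4")
    dst = os.path.join(d, "proxy_copy.mp4")
    with open(src, "wb") as f:
        f.write(b"frames-000-099")
    replicate_growth(src, dst)    # first pass: everything recorded so far
    with open(src, "ab") as f:
        f.write(b"|frames-100-199")
    replicate_growth(src, dst)    # later pass: only the new delta
    with open(src, "rb") as a, open(dst, "rb") as b:
        assert a.read() == b.read()
```

In a real deployment this incremental transfer is handled by the storage or media-asset-management layer; the sketch just shows why viewers downstream can open the proxy seconds after recording begins.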
With editing decisions made, there are two practical options for getting the completed high-resolution assets. The edit can be conformed — meaning the edit decision list is sent to the on-location storage where compute resources for rendering complete the edit. It is possible to create a low-resolution version of the conformed edit on location and make it available to the production facility to preview while the high-resolution version is still being transferred. Alternatively, the high-resolution frames from the edit are copied from the on-location storage to the production storage where the edit seat or rendering compute resources create the final edit.
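The conform option works because an edit decision list (EDL) references only frame ranges, so only those frames need to touch the high-resolution sources. A simplified sketch of that idea, where the clip names and in-memory "frame store" are hypothetical stand-ins for real on-location media:

```python
# Hedged sketch of conforming an edit from an edit decision list (EDL).
# Decisions are made against low-res proxies; conform pulls the same
# frame ranges from the high-res sources. Clip IDs and the in-memory
# frame store below are hypothetical stand-ins for real storage.

from dataclasses import dataclass

@dataclass
class Event:
    clip_id: str    # source clip recorded on location
    in_frame: int   # first frame of the cut (inclusive)
    out_frame: int  # last frame of the cut (exclusive)

def conform(edl, sources):
    """Assemble the finished sequence from only the frames the EDL
    references -- never the full source clips."""
    timeline = []
    for ev in edl:
        timeline.extend(sources[ev.clip_id][ev.in_frame:ev.out_frame])
    return timeline

# Two "clips" of numbered frames standing in for high-res media.
sources = {
    "cam1": [f"cam1:{i}" for i in range(100)],
    "cam2": [f"cam2:{i}" for i in range(100)],
}
edl = [Event("cam1", 10, 13), Event("cam2", 50, 52)]
print(conform(edl, sources))
# ['cam1:10', 'cam1:11', 'cam1:12', 'cam2:50', 'cam2:51']
```

This is why conforming on location can be attractive: the EDL itself is tiny, and only the finished sequence's frames (or its low-res preview) need to cross the wide-area link, rather than every source clip in full.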
Does cloud storage have a role to play in these kinds of workflows? Yes, it most certainly can. Rather than duplicating low-resolution content on the on-location and production facility storage systems, it can simply be sent to the cloud. Now a much larger pool of potential users has access to the incoming feeds. The other aspects remain the same. Conforming the finished high-resolution edit can take place in the production facility or on location. Media management tools take care of ensuring that the source content for the edit is in the appropriate location for rendering to take place.
The upshot of these approaches to remote production is cost savings while keeping usability high, since low-resolution proxy media is available to all users within seconds of feed recording beginning. The difference in turnaround time between an all on-location production and one of the distributed approaches above is marginal, so it has little impact on how quickly content is available. Compared with large on-location deployments, broadcasters can realize cost savings for event-based production, fewer logistical issues and a more streamlined overall operation.
In the next installment of this blog series on IP in broadcast studio production, we’ll look at how the use of APIs and other technologies can provide core functionality for these workflows.
On a related topic, we recently co-hosted a webinar with IPV, titled Leveraging the Cloud for High Resolution Editing. In this webinar, we discuss with IPV how to harness the power of a hybrid approach to support the most demanding production workloads while enabling global collaboration and access to content via the cloud. Watch the webcast replay now.
— Andy Warman, Director, Playout Solutions, at Harmonic, and Board Member and Marketing Working Group Chair, AIMS
About Andy Warman
Andy Warman is the Director of playout solutions at Harmonic. He provides business development and strategic direction for Harmonic’s line of playout-enabled solutions for cloud and appliances, including the Spectrum media server, the Polaris automation suite, MediaGrid shared storage solutions and VOS cloud-native media processing. Warman also serves on the board of directors of the Alliance for IP Media Solutions (AIMS) and chairs the trade association’s Marketing Working Group. Warman joined Harmonic after 11 years at Harris Broadcast in product management, where he drove Harris’ channel-in-a-box strategy, server platform and storage consolidation initiatives. With deep domain experience in the production and playout arena, he also has experience in automation, news production, content creation and infrastructure common to broadcast workflows. Andy holds a degree in Electronics and Management Science from the University of Kent at Canterbury.