EVS’ Stellpflug Talks At-Home

When the industry’s frontrunner in IP video transport starts making waves about a new workflow for live production, you tend to pay attention.

In our discussion with James Stellpflug, VP of Product Marketing for EVS, we get a much better picture of what’s behind at-home production and related trends in connected live events, and what broadcasters should be thinking about when it comes to capturing and producing content in volume.

WS: “At-home” production seems to be on everyone’s mind lately. What do you make of this new workflow?

JS: Just about everyone is trying to solve the problem of how to generate more content, meaning doing more live production for sporting events, live entertainment, etc. One of the ways to do that is to bring some of the production staff back home, or to the broadcast center, and try to do more volume with that same crew and also to get more consistent production. That’s a very real solution.

WS: There are a lot of changes driving this, obviously. 

JS: Yeah. I don’t have the exact market data, but if we look out over the last decade or two, we can see an explosion in consumer devices. Traditionally, the only way you could see a live event was to be sitting in front of a TV that was hard-wired through a cable system or receiving off air. Now you see consumers with devices wherever they go. Their ability to access content right at their fingertips has changed the way they think about content. Consumers seem to prefer bite-sized content to the linear form. They’re not only streaming the entire broadcast; they want to consume bite-sized chunks when they have a pause in life.

WS: So how is this all playing out in terms of broadcasting?

JS: We’ve seen this evolution in the last decade: they started out doing a linear broadcast production, then they’d create highlights for the linear storytelling, and then we saw smaller synopses for websites and other outlets. Now that’s picking up steam year over year, so content production isn’t growing linearly; it’s growing exponentially. They’re not making just one piece of content for their website; they’re making multiple renditions. They’re now having to make subsets of the content, which allows consumers to personalize it depending on who is looking at it. If one fan who likes a certain team logs in to the website, he could actually see a set of content that reflects his personal tastes rather than static content groomed by a producer for the entire audience base. You’re just starting to see this hyper-individual content, which can also be monetized in different ways, like linking it to sponsors or paid tiers.

WS: How does audio fit into this? As you know, we’ve long established IP audio in radio, whereas TV broadcast is fairly new to IP.

JS: I’ve been on the fringe of audio for two decades. We’ve seen for a while now that radio has adopted this model where voice-overs are done from home, or where different radio shows take place away from the studio, because radio had adopted IP. It’s just in the last seven to eight years that video has started to catch up to that production trend. If you go one step deeper into the infrastructure itself, radio has embraced IP for moving audio around for a while now. The same problems that industry went through are now manifesting on the video side. We still need a unifying standard similar to AES67.

WS: Do you see standards happening similar to AES67 on the video side? (AES67 is an audio IP transport interoperability standard).

JS: I think so. SMPTE 2022 is probably the closest parallel right now. It’s a first step of making an open interoperable standard that all the manufacturers can adopt to bring us into that realm. But just as it is with AES67, we’re missing certain pieces that are needed to meet all of the functionality of today’s production.

WS: What’s missing in IP video standards?

JS: Well, 2022 basically takes the world from an SDI video domain and encapsulates it holistically into the pipe. That means we’re moving into an IP infrastructure with this standard, and that’s good. But we’re not being as efficient with it as we should be. We’re moving a lot of excess material around that we don’t need at all, or, in other cases, don’t need at all times. If I were to deliver an SMPTE 2022 flow to a Wheatstone audio console, for example, the console probably doesn’t care about the video object, and in that case it’s a lot of wasted bandwidth. So when we look at this realistically, we’re still missing the ability to send separate flows. If I need just a data object, an audio object, or a video object, I shouldn’t have to move all of it in one wrapped-up bundle, and that’s what’s still missing from the standards.
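A rough back-of-the-envelope calculation illustrates the waste Stellpflug describes. The figures below are nominal line rates only, not exact protocol overheads, and the channel count and sample format are illustrative assumptions, not anything specified in the interview:

```python
# Rough bandwidth comparison: a SMPTE 2022-style flow wraps the whole
# SDI signal, so an audio-only device still receives the video payload.
# All figures are nominal, for illustration only.

SDI_3G_BPS = 2.970e9  # nominal 3G-SDI line rate (1080p60), bits per second

def audio_only_bps(channels=16, sample_rate=48_000, bits=24):
    """Bandwidth of just the embedded audio, if it could travel as its own flow."""
    return channels * sample_rate * bits

full_flow = SDI_3G_BPS
audio_flow = audio_only_bps()

print(f"full wrapped flow: {full_flow / 1e6:10.1f} Mb/s")
print(f"audio essence    : {audio_flow / 1e6:10.1f} Mb/s")
print(f"wasted fraction  : {1 - audio_flow / full_flow:.2%}")
```

Under these assumptions the audio console would discard well over 99% of the delivered bits, which is the case for separate per-essence flows.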

WS: Just one final question, this one on 4K. Why are 4K and IP so tightly linked?

JS: For us, 4K is an emerging standard. We call it ultra HD. We’ve been recording it for years now through our production servers using what’s called QUAD-HD: four SDI signals that together accommodate the total payload of an ultra HD 4K signal. We recognize that today it is used as a production tool, and if you want to do 4K today, it’s possible. But we also recognize that moving signals with four wires per signal is not practical. That’s a main driver for why we need IP. IP gives us a mechanism that stays useful whether I’m working in ultra HD or beyond. As long as I can packetize it, my infrastructure doesn’t have to change. That’s the advantage of an IP facility. When we look at it through the lens of an SDI plant, that’s been our problem over the course of the last two or three decades: every time we’ve had a new video standard, we’ve had to replace the video cabling, replace the video patch panels, replace all the touchpoints. Everything the signal touches has to be replaced holistically because of a new standard. When you look at the IP industry and what we can leverage, it means we may not have to replace all the pieces. It might mean we have to replace a couple of chunks because we’re moving more bandwidth, but it doesn’t mean we have to change out each and every piece and the core technology itself.
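The quad-link arithmetic behind this point can be sketched quickly. This is a minimal illustration assuming nominal 3G-SDI line rates; the Ethernet comparison is our own, not a claim from the interview:

```python
# Why quad-link 4K is awkward over SDI: the UHD raster is carried as four
# 1080p quadrants, each needing its own 3G-SDI cable. Rates are nominal.

SDI_3G_BPS = 2.970e9  # one 3G-SDI link carries one 1080p quadrant

def uhd_quad_link_bps(links=4):
    """Total payload of a quad-link (QUAD-HD style) UHD signal."""
    return links * SDI_3G_BPS

total = uhd_quad_link_bps()
print(f"UHD over quad 3G-SDI: {total / 1e9:.2f} Gb/s across 4 separate cables")

# On an IP plant the same payload is just packets, so it can ride a single
# high-speed port instead of four dedicated video cables.
print(f"fits one 25 GbE port: {total < 25e9}")
```

This is the packetization point: once the signal is packets, a faster raster means more bandwidth on the same kind of infrastructure, not a new cable plant.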

WS: Thanks, James.
