WheatNews Feb 2019

WHEAT:NEWS FEBRUARY 2019  Volume 10, Number 2

Radio Sputnik, Live from D.C.

Sleep is about the only thing most of us can do for eight hours straight, and some of us can’t even do that. How, then, does Radio Sputnik produce a live transatlantic broadcast for eight hours every day? There’s no music to speak of, just straight up talk and news from Radio Sputnik’s news bureau in Washington, D.C., to the mothership in Moscow. 

That’s eight solid hours, live between Washington and Moscow, every single day – except for that remote they did from Cuba a while back. And, by the way, how is that even possible: a live remote from Havana by way of Moscow’s Radio Sputnik through the news bureau in Washington, D.C.?

It helps to break it down. Live programming goes from Sputnik’s Washington, D.C., news bureau on K Street, across the Internet and into the Sputnik studio headquarters in Moscow. The live signal is then sent via STL or uplinked to satellite for redistribution to the Sputnik broadcast network, or run out again on the Internet for pickup by listeners wherever they happen to be on the planet. On either side of the Atlantic are WheatNet-IP audio network ecosystems with control surfaces, mic processors and, in the case of the Washington bureau, live camera control. The two studios automatically coordinate around the inevitable latency of the Internet connection, with occasional help from operators who count down to air. 

So it goes, a continual stream of news and talk in the four-plus years since the Kremlin-funded Radio Moscow morphed into Radio Sputnik, which has stations all across the globe (including D.C.’s 105.5 FM and 1390 AM) as well as online content and a Sputnik app for smartphones. Radio Sputnik has studios in Washington, Edinburgh and Moscow, which broadcast in eight-hour shifts around the clock. 

But what happens when you add another dot on the map, as was the case when Radio Sputnik D.C. reporters flew into Havana, Cuba, to cover the funeral of the late Fidel Castro? 

Aside from the challenge of finding adequate Internet access, not much. “We can record from just about anywhere and immediately drop it into the studio and then go live within minutes,” said Sputnik U.S. Editor-In-Chief Mindia Gavasheli. The Washington bureau’s two studios – a main studio with the LXE console surface for live broadcasts and a production studio with an L-8 console surface for pre-recording interviews and shows – are fully IP networked. The main studio is set up for live broadcast with call-ins from listeners as well as daily Facebook Live streams. “Here in the Bureau, we have people working on content for the news wire, for social media and Internet sites, and for radio, but they all share information,” explained Gavasheli. 

Everything is tied together with IP. For those daily Facebook Live feeds, for example, studio cameras are slaved to microphones using camera automation software integrated into the WheatNet-IP audio network. With this, the studio can automatically control camera switching based on whether a mic is on, the mic fader is up, and audio from the mic is coming across as meter data.
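As a rough illustration of that mic-follows-camera logic, here is a minimal sketch in Python. The mic-state fields and the camera-picking helper are hypothetical placeholders, not an actual WheatNet-IP or camera-automation API:

# Minimal sketch of mic-follows-camera switching logic. Names and fields are
# illustrative only, not an actual WheatNet-IP or camera-automation API.
from dataclasses import dataclass

@dataclass
class MicState:
    name: str
    camera: int        # camera preset assigned to this mic position
    is_on: bool        # mic logic "on" state
    fader_up: bool     # channel fader above threshold
    level_dbfs: float  # meter data from the network

def pick_camera(mics, threshold_dbfs=-30.0, fallback=1):
    """Return the camera for the loudest mic that is on, faded up and above threshold."""
    active = [m for m in mics if m.is_on and m.fader_up and m.level_dbfs > threshold_dbfs]
    if not active:
        return fallback   # cut to the wide shot when no one is talking
    return max(active, key=lambda m: m.level_dbfs).camera

mics = [
    MicState("Host", camera=2, is_on=True, fader_up=True, level_dbfs=-18.0),
    MicState("Guest", camera=3, is_on=True, fader_up=True, level_dbfs=-35.0),
]
print(pick_camera(mics))   # -> 2, the host camera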

The news and talk continue to flow out of Sputnik Radio’s U.S. News Bureau, eight hours every day, adding to the more than 10,000 hours of live programming streamed across the Atlantic so far. 

Transitioning to the Cloud in Five Steps

By Dee McVicker

You've got to love the mercurial nature of clouds. No conversation on virtualization is complete without going there. Sometimes the cloud in our scenario is light and fluffy; other times, issues like latency cast a long shadow on this ideal of being able to download programming and pluck whatever else broadcasters need from the ether. 

A few issues need to be ironed out before we can run real-time content and control of devices over an internet connection. But in the meantime, something like a cloud model has begun to form in the studio.

For example, we’re seeing more and more hardware being morphed into software apps. We now have mixer GUIs up on tablets and, in the case of our WheatNet-IP audio network system, virtual mixers inside the network I/O units themselves. EQ, dynamics, and compression, as well as signal monitoring and control, are on tap throughout the WheatNet-IP network. We have complete remote control over devices, workflows, and signal flow in the station studio, at the transmitter site and on remote locations. 

Meanwhile, outside the industry, an estimated 73% of companies run at least one application in a cloud. They are making good use of Software as a Service (SaaS) to centralize their operations, and the supervisory technology is fairly mature for overseeing deployment of these resources. Private and public cloud service providers are popping up in droves, with providers Amazon Web Services (AWS) and Microsoft Azure doubling in business year after year. 

All of this will be important for consolidating stations and workflows, which seems to be progressing in five basic steps. 

Step 1: Virtualizing the functions and control of local hardware. With the development of virtual mixers and the use of virtual development tools like Wheatstone’s ScreenBuilder, we are now controlling more functions and more hardware from one tablet and an IP connection than we could ever dream of controlling from a bank of hardware. 

Step 2: Virtualizing the boxes. Many of the functions now found on hardware can be moved to apps that run on a computer. We’ve been steadily moving functions to the IP audio network realm for some time. We’ve mobilized mixing, silence detection, routing, audio processing – all those functions live as software in the IP audio network and are accessible, scalable and malleable as needed. 
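To make Step 2 concrete, here is a minimal silence-detector sketch in Python; the threshold and hold-time values are illustrative, not tied to any particular product:

import numpy as np

# Minimal silence-detector sketch: raise an alarm when audio stays below a
# threshold for longer than a hold time. Parameter values are illustrative.
class SilenceDetector:
    def __init__(self, threshold_dbfs=-48.0, hold_seconds=15.0, sample_rate=48000):
        self.threshold = 10 ** (threshold_dbfs / 20)
        self.hold_samples = int(hold_seconds * sample_rate)
        self.quiet_run = 0

    def process(self, block):
        """Feed successive blocks of float samples in [-1, 1]; returns True while in alarm."""
        if np.max(np.abs(block)) < self.threshold:
            self.quiet_run += len(block)
        else:
            self.quiet_run = 0
        return self.quiet_run >= self.hold_samples

det = SilenceDetector()
print(det.process(np.zeros(48000)))   # one second of digital silence: not yet an alarm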

Step 3: Move all the local computers to VMs on a local server. Moving computing to local VMs, or Virtual Machines, in the studio building is likely to be the next step. This will allow broadcasters to maintain real-time program generation in the studio, without the latency and bandwidth availability issues that currently make off-site cloud operations difficult to maintain over an internet connection. VMs create the desktop environment in software for running various applications used in the studio. Thin clients, which could be stripped-down application devices or even apps on your laptop, access the apps on the VM, thereby simulating a “cloud” model. Automation companies are already centralizing automation on virtual machines, and manufacturers like Wheatstone are developing ways to increase the redundancy of these systems, similar to what was done with distributed AoIP networking. 

Step 4: Move the VMs from a local server to multiple server farms somewhere else, such as regional studios or colocation facilities. This is one type of cloud. This model requires cloud supervisory technology that manages access to and maintenance of those machines in a way that keeps track of what they are doing. Think of it as a local/virtual switch in the cloud for making sure all the VMs interact the way they would as desktop machines, or in the case of WheatNet-IP, I/O BLADEs on your local network.
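A bare-bones way to picture that supervisory layer: poll each VM's health endpoint and flag anything that needs failover. The hostnames and the /health endpoint below are hypothetical placeholders:

import urllib.request

# Minimal supervisory health-check sketch across VM hosts at two sites.
# Hostnames and the /health endpoint are hypothetical placeholders.
VM_HOSTS = [
    "http://automation-vm.regional-studio.example:8080/health",
    "http://mixer-vm.colo-facility.example:8080/health",
]

def is_up(url, timeout=2.0):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

for url in VM_HOSTS:
    print(url, "UP" if is_up(url) else "DOWN -> spin up the redundant VM")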

Step 5: Move operations to a Cloud Service Provider (CSP). The advantage of cloud service providers like Amazon’s AWS and Microsoft Azure is that they can run virtual machines in several data centers simultaneously, with automatic load balancing and latency monitoring automatically routing users to the closest, fastest data center resource. This is an important cloud principle. When you put something on Dropbox, you’re not putting it on one server. You’re putting it on many servers strategically located all over the country. When you request it again, wherever you are, the server you’ve got best access to will automatically serve you the information. This is the cloud model in its truest form. 
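The "closest, fastest resource" idea can be sketched in a few lines: measure connect time to each candidate data center and pick the lowest. The endpoints below are placeholders, not actual AWS or Azure addresses:

import socket
import time

# Sketch of latency-based routing: measure TCP connect time to each candidate
# data center and serve the user from the fastest one. Endpoints are placeholders.
ENDPOINTS = [
    ("us-east.example.com", 443),
    ("us-west.example.com", 443),
    ("eu-west.example.com", 443),
]

def connect_latency(host, port, timeout=2.0):
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")   # unreachable; never the best choice

best_host, _ = min(ENDPOINTS, key=lambda ep: connect_latency(*ep))
print("Serve this user from:", best_host)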

To learn more about the mercurial nature of IT clouds, be sure to sit in on Dominic Giambo’s presentation, “Components of Cloud-Based Broadcasting, from Content Creation to Delivery,” at the upcoming NAB BEITC, Sunday, April 7, from 10:40 am to 12:00 pm. Dominic is a senior development engineer at Wheatstone. He will cover latency, codecs and other issues broadcasters need to be aware of as they transition the broadcast operation to the cloud. 

Thoughts on AES67

AES67 is another year older. We talked to our Phil Owens about how and where AES67 is being used now that the IP audio network standard is approaching its sixth year.  

Wheatstone: How are broadcasters using AES67 today?  

Phil Owens: AES67 tends to address add-ons at this point, although this will likely change as SMPTE 2110, and its audio component, SMPTE 2110-30, become more prevalent. Currently, the two areas where I see AES67 interest are, one, interfacing WheatNet-IP with a live sound component in a system and, two, interfacing with intercom systems. We have also seen smaller local studio implementations, such as interfacing with AES67-equipped mic preamps. 

Wheatstone: Explain what that means to anyone who might have a WheatNet-IP audio network in the studio and wants to hook into a live sound or intercom system.

Phil Owens: As an example, we are currently working on two university projects that have an auditorium with a live sound board that is tied into their WheatNet-IP via AES67. One of those projects uses a Yamaha live sound board, so AES67 is how we add that to the WheatNet-IP environment. Another project we’re working on will require us to interface WheatNet-IP with an RTS intercom using their Omneo protocol. That likely will be an AES67 interface as well.

Wheatstone: What are some of the milestones we’ll see in 2019? 

Phil Owens: As everyone knows, AES67 doesn’t include discovery, control, and connection management. At this point AES67 is an audio-only interface that has to be set up manually. But it looks like we have a solution for that: the NMOS (Networked Media Open Specification) spec, which is currently being finalized. The IS-04 part of that spec addresses discovery, and the IS-05 part addresses connection management. NMOS will make AES67 much easier to set up and use. 
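For a feel of what IS-04 discovery plus IS-05 connection management look like in practice, here is a rough sketch of the pattern in Python. The registry and node URLs are placeholders and the payload is abbreviated; consult the AMWA NMOS specifications for the exact schemas:

import json
import urllib.request

# Rough sketch of the NMOS pattern: discover senders and receivers through the
# IS-04 Query API, then PATCH a receiver's "staged" endpoint (IS-05) to route a
# stream to it. URLs are placeholders; see the AMWA specs for exact payloads.
REGISTRY = "http://nmos-registry.example:8080/x-nmos/query/v1.2"
NODE_API = "http://receiver-node.example:8080/x-nmos/connection/v1.0"

def get_json(url):
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

senders = get_json(REGISTRY + "/senders")
receivers = get_json(REGISTRY + "/receivers")

# Stage the first discovered sender on the first receiver and activate immediately.
patch = {
    "sender_id": senders[0]["id"],
    "master_enable": True,
    "activation": {"mode": "activate_immediate"},
}
req = urllib.request.Request(
    f"{NODE_API}/single/receivers/{receivers[0]['id']}/staged",
    data=json.dumps(patch).encode(),
    headers={"Content-Type": "application/json"},
    method="PATCH",
)
urllib.request.urlopen(req)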

WTOP Cutover

After more than a year of planning, WTOP, the nation’s top-billing station three years in a row, signed on from its new all-Wheat facility. 

Just before 10 pm on Saturday, Feb. 2, WTOP anchor Sarah Jacobs gave the final temperature check from the old studios at 3400 Idaho Ave., signaling the cutover to WTOP’s new Star Trek-like space on Wisconsin Ave. 

Unique to the facility is a new Glass-Enclosed Nerve Center laid out like a starship’s bridge, with a ring of 35 news workstations featuring virtual audio mixers designed by RadioDNA using ScreenBuilder development tools. 

At the helm is a Wheatstone LXE mixing console, which WTOP reporter and anchor Mike Murillo says “will be able to bring the audience interviews from near and far, and allow us to bring listeners the most extensive coverage of live breaking news in the D.C. region and beyond.” From the LXE, anchors can access live audio from any of the 47 editing stations throughout the facility and control WTOP’s broadcast in real time by changing volume levels, toggling dozens of audio feeds and playing prerecorded ads or news reports. 

WTOP’s new facility is 30,000 square feet of pure tech, right down to the office café and the digital reporting and web development sections that feature workstations with WheatNet-IP audio networked virtual mixers. 

Racking Up a New Processor? Read This.

So, you’ve ordered a new FM-55 (or other audio processor) and you’re camped out at the back door waiting on the delivery truck. While you wait and before you rack up that new unit, here are a few air chain “gotchas” that you’ll want to check.

Set gear input levels for adequate headroom. The recommended practice for setting input levels on digital studio gear is -20 dBFS average, -12 dBFS peak, giving you, on average, 20 dB of headroom before the absolute maximum level of 0 dBFS is reached. This is especially important given today’s overly processed source material. Even if you’re putting in a new FM-55, which is specifically designed to handle the large density variations found in today’s source material, setting studio gear to a standard input level with enough headroom will give the processor more to work with and result in a better sound overall. 
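If you want to sanity-check those levels from a capture, here is a minimal sketch, assuming NumPy and float samples normalized to ±1.0; the targets and the helper name are illustrative:

import numpy as np

# Report average (RMS) and peak levels in dBFS for float samples in [-1, 1].
# The -20/-12 dBFS targets follow the recommended practice above.
def level_check(samples, avg_target=-20.0, peak_target=-12.0):
    rms = np.sqrt(np.mean(samples ** 2))
    peak = np.max(np.abs(samples))
    rms_dbfs = 20 * np.log10(rms) if rms > 0 else float("-inf")
    peak_dbfs = 20 * np.log10(peak) if peak > 0 else float("-inf")
    print(f"Average: {rms_dbfs:.1f} dBFS (target {avg_target}), peak: {peak_dbfs:.1f} dBFS (target {peak_target})")
    return rms_dbfs <= avg_target and peak_dbfs <= peak_target

# Example: a 1 kHz tone at -22 dBFS RMS (about -19 dBFS peak) leaves plenty of headroom.
t = np.arange(48000) / 48000
tone = 10 ** (-22 / 20) * np.sqrt(2) * np.sin(2 * np.pi * 1000 * t)
print(level_check(tone))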

Check for channel imbalances, overdriven DAs, and lossy audio codecs. Your on-air signal could be passing through two or three devices in the air chain that are no longer needed, but that no one took the time to remove or bypass. The less gear the on-air signal has to pass through, the better your new processor can perform. 

One box that’s worth adding to the air chain these days is an HD/FM diversity delay box. You’ll need this to ensure that your HD and FM signals are aligned so there’s no distortion or dropouts in fringe coverage areas. We firmly believe that this alignment is needed, so much so that we’ve included the entire HD/FM diversity delay scheme in some of our audio processors; no additional box needed. 

Optimize STL paths. It’s not always possible to have a linear STL path, but if you’re given the choice, take it. You also will get better results with a composite STL rather than a discrete AES left/right STL, because the stereo generator in a modern audio processor is almost always better than the one built into an exciter. Also, once you get your new processor, plan on sending the complete composite baseband over AES, from the processor to the exciter in full digital form. This gets rid of the AD/DA conversion between the two as well as the noise that comes with an unbalanced analog signal in the transmitter building. The FM-55 and other Wheatstone processors include the baseband192 feature for this purpose. 

Do a quick audio sweep. Check that the frequency response is flat throughout the chain and note what distortion, if any, each piece of equipment adds. Here’s a quick tutorial on how to do Audio Performance Testing on the Cheap by our Jeff Keith that might be helpful. 
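If you don’t have a dedicated analyzer handy, here is a minimal sketch of the analysis half, assuming you’ve already run a 1 kHz test tone through the chain and captured it to a WAV file (the capture.wav filename is a placeholder):

import numpy as np
from scipy.io import wavfile

# Estimate THD of a captured 1 kHz test tone that was passed through the air chain.
rate, data = wavfile.read("capture.wav")   # placeholder filename
if data.ndim > 1:
    data = data[:, 0]                      # analyze one channel
data = data.astype(np.float64)
data /= max(np.max(np.abs(data)), 1e-12)

spectrum = np.abs(np.fft.rfft(data * np.hanning(len(data))))
freqs = np.fft.rfftfreq(len(data), 1 / rate)

def band_power(center, width=20.0):        # sum power within +/- width Hz of center
    mask = (freqs > center - width) & (freqs < center + width)
    return np.sum(spectrum[mask] ** 2)

fundamental = band_power(1000.0)
harmonics = sum(band_power(1000.0 * k) for k in range(2, 6))
print(f"THD (2nd-5th harmonics): {100 * np.sqrt(harmonics / fundamental):.2f}%")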

After you have completed a thorough sound check, you can begin to experiment with your new audio processor. We suggest you use a good reference radio you’re familiar with and that you start with a conservative preset. Once you lock in your sound, you can then use your processor to solve a few problems. For example, there are multipath-mitigating tools in our other processors that drastically reduce the magnitude of multipath by tweaking the L+R/L-R ratio. 

IP QA

Q: We’re adding a new talent team for one of our stations. Most of our talent use VoxPro for call-ins, and we’d like to add one or two more VoxPros for our new talent team. How easy will it be to network them?

A: Very. You can network together all of your VoxPro digital audio recorder/editor units, or just a select few, to make it easier to share files and collaborate on shows. In fact, you can network VoxPro into your WheatNet-IP audio network to bring in sources and route final cuts to the console. Integrating VoxPro into the studio network gives you more tools at your fingertips during editing and recording, including control for routing, salvos and playback tallies with end warning flash.

You can be selective in determining which remote VoxPro workstations will be allowed to share accounts with specific computers. For example, clusters of VoxPro workstations belonging to one station can be logically separated from those belonging to a different station while keeping everyone on the same network. VoxPro has several functions for collaboration purposes and tools for protecting files during multiple user access so there is no danger of file corruption. And, as is standard in all Wheatstone networked systems, network setup is easy. Simply put two or more computers running VoxPro on the network, and they automatically find each other, swap information and connect.

Making Sense of the Virtual Studio
SMART STRATEGIES AND VIRTUAL TOOLS FOR ADAPTING TO CHANGE

Curious about how the modern studio has evolved in an IP world? Virtualization of the studio is WAY more than tossing a control surface on a touch screen. With today's tools, you can virtualize control over almost ANYTHING you want to do with your audio network. This free e-book illustrates what real-world engineers and radio studios are doing. Pretty amazing stuff.

Advancing AoIP for Broadcast
TAKING ADVANTAGE OF EMERGING STANDARDS SUCH AS AES67 VIA AUDIO OVER IP TO GET THE MOST OUT OF YOUR BROADCAST FACILITY

Putting together a new studio? Updating an existing studio? This collection of articles, white papers, and brand new material can help you get the most out of your venture. Best of all, it's FREE to download!

IP Audio for TV Production and Beyond

WHAT YOU NEED TO KNOW ABOUT MANAGING MORE CHANNELS, MORE MIXES, AND MORE REMOTE VENUES

For this FREE e-book download, we've put together fresh info along with some of the articles, white papers, and news we've authored for our website, diving into some of the cool stuff you can do with a modern AoIP network like Wheatstone's WheatNet-IP. 

Stay up to date on the world of broadcast radio / television.
Click here to subscribe to our monthly newsletter.

Got feedback or questions? Click my name below to send us an e-mail. You can also use the links at the top or bottom of the page to follow us on popular social networking sites and the tabs will take you to our most often visited pages.

-- Scott Johnson, Editor
