
WHEAT:NEWS December 2022 Volume 13, Number 12


Scripting Rushes In

The problem: Too many physical monitors in the studio.

The solution: Scripting LXE or GSX console surface buttons and OLED screens to display what you'd otherwise put on a monitor screen.

OLEDs on the console surface itself are the first place Inrush Broadcast Services integrators Mike Dorris and Brian Sapp look to reduce the number of monitors in the studios. One or two OLED displays can provide cue countdowns, dynamics data, bussing information and much more at the turn of a knob or the push of a button. Much of this can be done through the LXE and GSX ConsoleBuilder™ app, but for creating more complex customized macros across facilities, Dorris and Sapp rely on Wheatstone's script developer for writing, debugging, testing and implementing routing, logic and control routines.
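Wheatstone's scripting language itself is proprietary, so as a purely illustrative model, here is a short Python sketch of the kind of button-driven macro described above: one button press fires a routing change and pushes new status text to a console OLED. All names here (Surface, CODEC1-RETURN, OLED1) are invented for illustration.

```python
# Hypothetical model of a console button macro: route a shared codec's
# return feed to a studio and display the assignment on an OLED screen.
# Not Wheatstone's actual scripting API.

class Surface:
    def __init__(self):
        self.routes = {}      # destination -> source
        self.oled_text = {}   # oled id -> displayed string

    def take(self, source, destination):
        """Take a route: connect a source signal to a destination."""
        self.routes[destination] = source

    def show(self, oled, text):
        """Write a status line to one of the console OLEDs."""
        self.oled_text[oled] = text

def codec_button_macro(surface, studio):
    """Macro bound to a surface button: route this studio's mix-minus to
    the shared codec's return and show the assignment on the OLED."""
    surface.take(source=f"{studio}-MXM", destination="CODEC1-RETURN")
    surface.show("OLED1", f"Codec 1 -> {studio}")
```

Pressing the equivalent button in a second studio would simply re-run the macro with a different studio name, retargeting the same shared codec.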

OLED Screens

“A huge design criterion for our larger customers especially is to minimize the number of physical monitors in the studio, so we spend time up front figuring out how to get a limited number of buttons and OLEDs on the console to do all the things we would otherwise do on the screen,” said Dorris, whose company started with smaller projects three years ago and is now working on one of the largest WheatNet studio projects to date.

“The biggest challenge is tying together facilities and things like sharing codecs in a way that we can track where bus-minuses are coming from and all the routing associated with a particular talent or studio,” he explained.

Log on to our Scripters Forum for ideas and starter scripts for your Wheatstone consoles or screens.



WTOP, D.C., is making news again, this time as a radio-via-ATSC 3.0 ancillary audio service on Sinclair's TV station in Washington, D.C.

As many of you know, WTOP News is the top-billing radio station in the U.S. We've featured WTOP here on several occasions because of its futuristic glass-enclosed news nerve center, among other notable WheatNet-IP studio features.

Sinclair-owned television station WIAV in D.C. is now streaming WTOP News as part of its ATSC 3.0, or NextGen TV transmission. WTOP and other audio services are available to anyone in the area who has the app, which is part of the NextGen TV interactive layer. 

If it catches on, ATSC 3.0 audio services could open the door for additional content beyond television and could even offer another way to access local radio. Through one radio in the car, ATSC 3.0 can provide entertainment (radio and TV), e-GPS, file delivery and a host of other new services.

By combining television with radio on one platform such as this, the hope is to deliver a much more simplified, unified consumer experience. 

ATSC 3.0, or NextGen TV, is a suite of standards combining over-the-air television with IP. It is being rolled out across the nation, with projected availability in 82 percent of the major U.S. markets by year's end. Elsewhere, South Korea launched 4K ATSC 3.0 broadcasts in May 2017 that now reach more than 70 percent of the South Korean population. Jamaica has launched ATSC 3.0 services, and India is exploring ATSC 3.0 for direct-to-mobile services and broadcast traffic offload, while Brazil is planning to use select ATSC 3.0 technologies for its "TV 3.0" project. In Canada, Humber College is building its own ATSC 3.0/5G lab, and Mexico is focused on distance education use cases.

Wheatstone is a proud WTOP News technology partner. For more information on how we built, staged, and tested WTOP's studios from our factory in New Bern before shipping out the system in its entirety, click to Inside WTOP; WTOP News Room, Technical Build; or WTOP Cutover.


By Jeff Keith

Jeff Keith Layers Diagram

It's impossible to accurately forecast the weather, let alone the future of audio processing. But we can say with certainty that the future of processing for on-air and streaming will involve servers, software and maybe even a cloud or two.

We're seeing this major shift because of a few advancements in enterprise technology.

We used to rely on DSPs, dedicated silicon designed just to do math very quickly. And while DSPs were, and still are, useful for mixing and processing audio, Moore's Law has further increased the power of generic CPUs. The continuous evolution of CPUs, especially server-grade CPUs, has made it possible to move audio and streaming functions onto a server rather than having a dedicated DSP do the work.

It just so happens that audio processing designed to run in DSP can now live in software and be easily ported to run on commodity servers, as is the case with our Layers software suite. For example, once ported to Linux, the algorithms running on DSPs in our hardware Wheatstone audio processors can run just as readily on Dell or Hewlett Packard servers. 

A standard Dell or Hewlett Packard server can now do all those things we relied on a purpose-built audio processor to do. The difference is that just one server can run multiple Layers FM audio processing instances, all with full MPX to the transmitter, and at the same time send provisioning and metadata for multiple streams out to a CDN provider.

This, combined with increased connectivity and the availability of ever greater bandwidth, offers increasingly attractive options for broadcasters consolidating operations, and not only in the geographical sense but also with regard to cloud-based systems, whether that system is a giant server farm run by a large broadcast group or a true cloud-based system running well off-site on a third-party platform such as Amazon Web Services.

Porting what were once dedicated hardware-based products such as standalone audio processors to software instances or apps that can run virtually anywhere gives us almost unlimited capability and availability. You can essentially run multiple instances of on-air processing for several transmitter sites from one host server, all running side by side, in real time, further lowering the cost of facility consolidation.

It helps to think of this new capability in terms of "layers" of the various kinds of equipment in the broadcast operation today. Studios have mixers; the rack room has processors dedicated to streaming, on-air, and other ancillary uses; there are codecs of various kinds assigned to a multitude of uses; and let's not forget specialized gear like audio watermarking equipment for ratings measurement. This "layering" of broadcast requirements, all working in concert to create the on-air listener experience, is in fact why we named our server software suite for consolidating these functions Layers.

In our case, we're layering in audio mixing, processing, codecs, watermarking, etc., with everything in the system capable of operating remotely from anywhere. The components of our Layers system are software apps designed to run singly or together on a dedicated server. They run specifically on the Linux operating system, so there are none of the infamous unexpected, usually destructive OS updates.

Adding a component to the system is as easy as starting another instance of whatever app might be needed. For example, adding a new streaming channel for the holiday season is just a matter of spinning that up on the server rather than commissioning it from a fixed hardware unit. Another benefit is the ease of being able to use stream-specific AGC and limiting that minimize the effects of processing by the codec. It allows us to more specifically apply on-air techniques that we might not have been able to spend the DSP cycles on in a purpose-built unit. It also makes it much easier to adapt to new developments, such as adding Nielsen watermarking, something available in our Layers streaming on a license-per-stream basis. 

What you end up with, in the case of our Layers, is a full-featured FM+HD audio processor with all the bells and whistles of a top-of-the-line hardware box: multiband AGC and leveling, EQ, stereo width enhancement, advanced bass enhancement, FM stereo multiplex encoder, and RDS. The efficiency of CPUs also gave us more MIPS for running completely new algorithms for managing the behavior of the multiband gain stages, technology with no resemblance to prior methods and which serves to minimize the audibility of processing while enhancing program dynamics and loudness.

A server for the purposes of audio processing, whether for backup or for feeding several transmitters, can be implemented today in a typical broadcast facility. This gives you the many benefits of the cloud without risking everything on a cloud provider you have no control over, while using the same containerization methods and management you would use with a dedicated third-party provider.

When and if the time comes to offload processing to a third-party cloud provider, you'll already have most of the server technology in place to do so.

This article by Jeff Keith, our Senior Product Development Engineer for all things audio processing, appeared in a recent Radio World ebook. 


Blades Cover

Here's your annual reminder that there is no such thing as a slacker I/O Blade. These I/O units that make up the WheatNet-IP audio network can, in fact, pick up the slack in a number of ways. These are just a few.

They’re unstoppable, those Blades. By connecting a WheatNet-IP Blade I/O unit to each end of an IP wireless audio STL, you can extend IP audio from the studio to the transmitter site. IP radios connect to the switches on each end, which can connect to Blades already in use for managing audio and any devices hanging off the network. What’s more, if the IP radio should lose connection, a Blade I/O access unit will not only detect silence, it can trigger the startup of playback audio stored on the Blade unit itself.
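The silence-detect-and-failover behavior described above can be sketched in a few lines. This is a hypothetical model, not the Blade's actual (proprietary) implementation; the threshold and timeout values are invented for illustration.

```python
# Toy silence watcher: measure the RMS level of incoming STL audio blocks,
# and after a sustained run of silence, trigger local playback as failover.

import math

SILENCE_THRESHOLD_DBFS = -50.0   # below this level, audio counts as silence
SILENCE_TIMEOUT_S = 10.0         # seconds of silence before failover fires

def rms_dbfs(samples):
    """RMS level of a block of float samples (-1.0..1.0) in dBFS."""
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

class SilenceWatcher:
    def __init__(self, block_seconds=1.0):
        self.block_seconds = block_seconds   # duration of each audio block
        self.silent_for = 0.0
        self.failed_over = False

    def feed(self, samples):
        """Feed one block of STL audio; returns True once failover triggers."""
        if rms_dbfs(samples) < SILENCE_THRESHOLD_DBFS:
            self.silent_for += self.block_seconds
        else:
            self.silent_for = 0.0
            self.failed_over = False    # link is back; resume normal feed
        if self.silent_for >= SILENCE_TIMEOUT_S:
            self.failed_over = True     # a real unit would start playback here
        return self.failed_over
```

The point of the sketch is simply that the detector lives at the transmitter end, so it can act even when the studio link is gone.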


“Automagic” mix-minus for fast-paced talk shows. For shared resources like a codec, Blade I/O access units will ‘automagically’ give the proper return feed to the codec based on its destination. So, if you pull up the codec in Studio One, the mix-minus from Studio One will automatically and magically be routed to the return feed. Then, minutes later, when someone else calls up the same codec in Studio Two, the Studio Two mix-minus will be routed to that codec. How useful is that for those fast-paced call-in and live talk shows?
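Conceptually, a mix-minus is just the program mix with one source removed: the codec hears everything in the studio except itself. As a minimal sketch (invented names, not Wheatstone's actual routing code):

```python
# Toy model of "automagic" mix-minus assignment: whenever a shared codec is
# pulled up in a studio, its return feed becomes that studio's program mix
# minus the codec's own contribution.

def mix_minus(program_sources, exclude):
    """The full program mix with one source (the codec itself) removed."""
    return [s for s in program_sources if s != exclude]

class Router:
    def __init__(self):
        self.codec_return = {}   # codec name -> list of backfeed sources

    def assign_codec(self, codec, studio_sources):
        """Pull the codec up in a studio; its return feed follows it there."""
        self.codec_return[codec] = mix_minus(studio_sources, codec)
```

Reassigning the same codec to a second studio simply recomputes the return feed from that studio's sources, which is the behavior the paragraph describes.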


“Automagic” source attributes, too. Each source signal can be assigned attributes (corresponding back-feed, audio processing, and more) that are automatically activated on whatever destination the source is routed to in the Blade 4.


Routable audio processing. Each Blade I/O unit includes a multiband processor useful for processing incoming audio from callers, remotes, codecs, satellite feeds and microphones. You can also use it to process output audio for headphones, web streams, pre-processors, IFB, or for level protection for STL applications. This is routable audio processing that includes 4-band parametric equalizer, 3-way crossover, 3 compressors, 3 limiters, and final look-ahead limiter.
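To make the dynamics stages concrete, here is an illustrative sketch of the static gain curves a compressor and a limiter apply, of the kind each band in such a processor uses. The threshold, ratio, and ceiling values are made up; the Blade's actual algorithms are not public.

```python
# Static gain curves for one band of a simple dynamics chain.
# Gains are in dB; negative values mean gain reduction.

def compressor_gain_db(level_db, threshold_db=-20.0, ratio=3.0):
    """Below threshold the signal passes unchanged; above it, output level
    rises at 1/ratio the rate of the input, so gain reduction grows."""
    if level_db <= threshold_db:
        return 0.0
    over = level_db - threshold_db
    return over / ratio - over   # e.g. 6 dB over at 3:1 -> -4 dB of gain

def limiter_gain_db(level_db, ceiling_db=-1.0):
    """Hard limiter: apply exactly enough reduction to stay at the ceiling."""
    return min(0.0, ceiling_db - level_db)
```

A real processor would split the audio with the crossover, apply per-band curves like these with attack/release smoothing, then sum the bands into the look-ahead limiter.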


Doing the automation mix-down: Each Blade has two stereo 8 x 2 utility mixers that can be used to mix down multiple channels to a single output. Shown is a Blade utility mixer being used to mix down multiple RCS automation channels to a stereo output, which can then be programmed as the automatic failover source in an emergency. This is also useful as a way to bypass the studio, so that with the push of a button or a command from the automation system, this output can feed the transmitter and free up the on-air studio for production or voice tracking, for example.
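The mix-down itself is just a weighted sum per output channel. A minimal model of one of those stereo 8 x 2 utility mixers (illustrative only; gains here are linear rather than dB):

```python
# Toy 8x2 utility mixer: up to eight stereo inputs with per-input fader
# gains, summed to a single stereo output.

def mix_8x2(inputs, gains):
    """inputs: list of up to 8 (left, right) sample pairs for one instant;
    gains: matching linear fader gains. Returns the summed (L, R) output."""
    left = sum(g * l for (l, _), g in zip(inputs, gains))
    right = sum(g * r for (_, r), g in zip(inputs, gains))
    return (left, right)
```

In the automation mix-down case, each input pair would be one RCS playback channel and the summed output is what gets programmed as the failover or studio-bypass source.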


Mixers for mic groupings, talkback. One of those two stereo 8 x 2 utility mixers in the Blade can become a source or input to the system. This is useful for grouping several mics to a single output. You can use the output of each mixer as a talkback source.

Mixers for panning mic and caller feeds. And because the two stereo 8 x 2 mixers in the Blade are independent of each other, they can feed audio to each other or to another Blade. The output of mixer #1 can be brought up on a fader in mixer #2, for example. With balance control on each fader, this can be useful for recording a telephone mix with the callers on the left channel and the announcers on the right channel. The output of the mixer feeds the recording device.


Dual audio codecs. Each Blade 4 can now support two channels of encode and two channels of decode capability for home or other remote locations. The codecs use the Opus compression algorithm and can also support SRT for enhanced security and reliability. Also for those remote connections: Blade 4 offers individually adjustable buffering to compensate for jitter.
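The adjustable buffering mentioned above trades a little latency for resilience against network jitter. As a sketch of the idea only (real Opus-over-SRT implementations are far more involved):

```python
# Minimal jitter buffer: hold a few packets, reorder by sequence number,
# and release them in order. Deeper buffers ride out more jitter at the
# cost of added delay; the depth here stands in for the Blade 4's
# adjustable buffering setting.

import heapq

class JitterBuffer:
    def __init__(self, depth=3):
        self.depth = depth   # packets held before playout begins
        self.heap = []       # min-heap of (sequence_number, payload)

    def push(self, seq, payload):
        """Accept a packet in whatever order the network delivers it."""
        heapq.heappush(self.heap, (seq, payload))

    def pop(self):
        """Release the oldest buffered packet once the buffer is full
        enough; returns None while still filling."""
        if len(self.heap) >= self.depth:
            return heapq.heappop(self.heap)
        return None
```

Packets that arrive out of order within the buffer depth come out correctly sequenced; only gaps longer than the buffer cause an audible dropout.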


Dual audio clip players. While Blades have had an optional clip player, Blade 4 now provides a second clip player, both with full remote control. When used with a hardware button panel (or a virtual button in ScreenBuilder), these players can be used in real time as effects/sounder decks in addition to executing song playlists. Because Blade 4 has increased memory, you can store more clips (it can now play MP3 files for even more storage capacity). The clip player features level controls and meters, and its status display shows elapsed play time and clip metadata. Clip players can even run files from a front panel USB stick.


Dual NIC from the Blade 4 rear panel. This provides for network redundancy or separate LAN/WAN connections, and is especially useful when using the built-in audio codecs. NICs now provide for DHCP addressing.



Built-in software apps and scripting tools. Run navigation, metering, and other software apps directly from the Blade 4 without the need for separate hardware devices or PCs. Blade 4 devices can run user scripts directly on the Blade.


Anything’s possible with CPU inside: You’ll probably never see it, but you’ll definitely know there’s a powerful CPU complete with operating system inside each Blade I/O unit. Which is why there’s no PC running the show, and why these guys can think for themselves. Plug in a Blade, and it knows exactly what to do, where it’s at in the network hierarchy, and what to do should a failure be detected.


Oh, and don’t forget the SNMP: Each Blade includes SNMP for centralized monitoring of all Blades in a large distributed network. SNMP is a standard defined by the Internet Engineering Task Force (IETF) and is useful for monitoring network-attached devices like Blades. You can use it to configure alarms and set thresholds so you're notified when a problem occurs, whether by email, SMS, or traps, or by triggering custom scripts for quick corrective action.
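The alarm-threshold pattern that SNMP monitoring enables can be sketched generically. This is not an SNMP implementation; the metric names and limits are invented for illustration.

```python
# Generic threshold check of the kind a monitoring system performs on
# values polled from network-attached devices: compare each reading
# against its configured limit and collect the ones that should alarm.

def check_thresholds(readings, limits):
    """readings: metric -> current value; limits: metric -> max allowed.
    Returns the (metric, value) pairs that should raise an alarm."""
    return [(m, v) for m, v in readings.items()
            if m in limits and v > limits[m]]
```

In a real deployment, the readings would come from SNMP polls or traps, and each returned pair would be dispatched to email, SMS, or a corrective script.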


Enhanced compliance for AES67 and SMPTE 2110. 1 ms packet timing available for all Blade 4 signals, true 1 ms mono and surround stream capability, 0.125 ms receive capability, automatic failover and recovery on redundant networks, basic PTP master clock for simple systems, and NMOS stream visibility and exposure for third-party routing control. It’s all in the Blade 4.


Wheatstone Christmas 2022

Stay up to date on the world of broadcast.
Subscribe to our monthly newsletter.

The Wheatstone online store is now open! You can purchase demo units, spare cards, subassemblies, modules, and other discontinued or out-of-production components for Wheatstone, Audioarts, and VoxPro products online. You can also call Wheatstone customer support at 252-638-7000 or contact the Wheatstone technical support team online as usual.

The store is another convenience at wheatstone.com, where you can access product manuals, white papers, and tutorials as well as technical and discussion forums such as our AoIP Scripters Forum.

Compare All of Wheatstone's Remote Solutions

REMIX

We've got remote solutions for virtually every networkable console we've built in the last 20 years or so. For basic volume, on/off, bus assign, and logic, it's as easy as running an app, either locally over a good VPN or back at the studio via a remote-access app.

Remote Solutions Video Demonstrations

Jay Tyler recently completed a series of videos demonstrating the various solutions Wheatstone offers for remote broadcasting.

Click for a Comparison Chart of All Wheatstone Remote Software Solutions


Curious about how the modern studio has evolved in an IP world? Virtualization of the studio is WAY more than tossing a control surface on a touch screen. With today's tools, you can virtualize control over almost ANYTHING you want to do with your audio network. This free e-book illustrates what real-world engineers and radio studios are doing. Pretty amazing stuff.

Advancing AoIP for Broadcast

Putting together a new studio? Updating an existing studio? This collection of articles, white papers, and brand new material can help you get the most out of your venture. Best of all, it's FREE to download!


IP Audio for TV Production and Beyond


For this FREE e-book download, we've put together fresh info and some of the articles we've authored for our website, white papers, and news that dive into some of the cool stuff you can do with a modern AoIP network like Wheatstone's WheatNet-IP.

Got feedback or questions? Click my name below to send us an e-mail. You can also use the links at the top or bottom of the page to follow us on popular social networking sites and the tabs will take you to our most often visited pages.

-- Uncle Wheat, Editor
