Gigabit Ethernet. Just the Facts.

Numbers don't lie. That's what your friendly police officer will tell you when he clocks you going 70 in a 35 mph zone. But that isn't entirely true when it comes to the speed of Gigabit Ethernet networks.

Most of us assume that Gigabit Ethernet links transfer data at one gigabit per second, or 10 times faster than 100 Mbps Fast Ethernet.

But, in fact, a Gigabit Ethernet cable contains four twisted pairs of wires, each clocked at 125 megasymbols per second and each carrying 250 Mbps of data. What the "Gigabit" actually means is that a gigabit of information (data payload plus overhead) can travel across the cable in one second. Because of the efficiency of the modulation scheme and because all four pairs are used in both directions at once, instead of one pair each way as in Fast Ethernet, Gigabit Ethernet is effectively 10 times faster than 100BaseT (Fast Ethernet).
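
If you like to see the arithmetic, here's a quick back-of-the-envelope sketch in Python using the nominal 1000BASE-T line parameters; it's only the nominal math, not measured throughput.

```python
# Back-of-the-envelope: how 1000BASE-T reaches 1 Gbps over four pairs.
PAIRS = 4                  # twisted pairs in the cable, all used in both directions
SYMBOL_RATE_HZ = 125e6     # 125 megasymbols per second on each pair
BITS_PER_SYMBOL = 2        # PAM-5 modulation carries 2 data bits per symbol

per_pair_bps = SYMBOL_RATE_HZ * BITS_PER_SYMBOL   # 250 Mbps per pair
total_bps = PAIRS * per_pair_bps                  # 1,000 Mbps = 1 Gbps

print(f"Per pair: {per_pair_bps / 1e6:.0f} Mbps")
print(f"Total:    {total_bps / 1e9:.0f} Gbps")
```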

With an order-of-magnitude improvement over Fast Ethernet, Gigabit Ethernet lets the audio network deliver many more packets, much faster, and that helps with two of the big issues in IP audio: latency and capacity.

The Gig on Latency

Take latency. Latency in an IP audio network is the delay between when audio enters the system and when it comes out. Every audio network has some latency because it takes a small but measurable amount of time to take analog audio in, convert it to digital, construct the AoIP packets, transmit them across the network and then reverse the process at the other end. In any IP system, the transit time across the network of an individual piece of data is not guaranteed or predictable. Ethernet networks are designed to avoid data collisions (which happen when different bits of information try to occupy a wire at exactly the same time) by interleaving packets between other packets in a multiplexing process controlled by the network switches. You just don't know when "your" packet is going to get there.

The IP audio network deals with this by using temporary storage in buffers on each end. It fills up a pool of data on the transmit side so there is a ready source of data whenever the switch is ready to send a packet. Likewise, it fills up a pool of data on the receive side so there is enough data to carry you over the gaps when the network is busy sending someone else's packets.

As long as the transmit and receive buffers fill and drain at the same rate, there is no interruption in final data delivery. The buffers absorb the variance in packet delivery. The catch is that for this scheme to work, the buffers are designed to run half full on average, and to be deep enough that they never run dry or overflow during the worst-case variance in packet timing. That means the receive side can't start playing audio out until its buffer is half full, and the time it takes to fill that buffer to the half-full point is a major component of latency.
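
As a rough illustration (not a description of any particular product's internals), here's a small Python sketch of how packet size and receive-buffer depth translate into startup latency. The 48 kHz sample rate is typical for professional audio; the packet sizes and buffer depths are hypothetical.

```python
# Illustrative only: how packet size and receive-buffer depth become latency.
# The packet sizes and buffer depths here are hypothetical examples, not the
# parameters of any particular AoIP product.

SAMPLE_RATE_HZ = 48_000

def buffering_latency_ms(samples_per_packet: int, buffer_depth_packets: int) -> float:
    """Time to fill the receive buffer to its half-full operating point."""
    packet_period_ms = samples_per_packet / SAMPLE_RATE_HZ * 1000.0
    return (buffer_depth_packets / 2) * packet_period_ms

# The audio can't start playing until the buffer reaches half full, so deeper
# buffers and bigger packets ride out more timing variance at the cost of delay.
for samples, depth in [(12, 8), (64, 8), (256, 8)]:
    print(f"{samples:3d} samples/packet, {depth}-packet buffer -> "
          f"{buffering_latency_ms(samples, depth):6.2f} ms of startup latency")
```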

What does this have to do with Gigabit Ethernet, you might ask? Just about everything, actually.

Because a gigabit link is 10 times faster, with 10 times the throughput of Fast Ethernet, packets get to their destinations faster. Furthermore, the larger capacity of the link lets many more packets traverse the network without the congestion, collisions and queuing delays that happen when switches have to hunt for an opening on the wire. With less concern about congestion, packets can be made smaller and sent more frequently, so buffers can be smaller and latency drops. On the flip side, a lower-capacity link often forces larger data payloads, which eases congestion in lower-bandwidth environments but at the unfortunate expense of increased latency.
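
Here's a hedged sketch of that tradeoff in Python, using the usual Ethernet/IP/UDP/RTP header sizes and some hypothetical packet sizes for a single 24-bit stereo stream. Smaller packets cut the time between packets, but spend a bigger share of the link on headers.

```python
# Illustrative only: the payload-size tradeoff for one 24-bit stereo stream.
# The overhead figure is the usual per-packet cost on the wire: preamble +
# Ethernet header + FCS + interframe gap + IPv4 + UDP + RTP, roughly 78 bytes.

SAMPLE_RATE_HZ = 48_000
BYTES_PER_SAMPLE = 3        # 24-bit audio
CHANNELS = 2                # one stereo stream
OVERHEAD_BYTES = 78         # approximate per-packet overhead on the wire

for samples_per_packet in (12, 48, 240):
    payload = samples_per_packet * CHANNELS * BYTES_PER_SAMPLE
    packets_per_sec = SAMPLE_RATE_HZ / samples_per_packet
    wire_mbps = packets_per_sec * (payload + OVERHEAD_BYTES) * 8 / 1e6
    packet_period_ms = samples_per_packet / SAMPLE_RATE_HZ * 1000.0
    print(f"{samples_per_packet:3d} samples/packet: {wire_mbps:4.1f} Mbps on the wire, "
          f"{packet_period_ms:4.2f} ms between packets")
```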

Big Capacity

From the system perspective, the capacity of a link is all-important. As advertised, Gigabit Ethernet can reasonably handle 10 times the capacity of Fast Ethernet. For example, whereas you might push the upper limit of a Fast Ethernet link at 16 stereo audio channels, a Gigabit Ethernet link can easily carry 160 stereo audio channels.
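
As a rough sanity check, using the roughly 4.8 Mbps per low-latency stereo stream from the sketch above (an assumption, not a measurement), the aggregate numbers line up with those channel counts:

```python
# Illustrative only: aggregate bandwidth for low-latency stereo streams.
# ~4.8 Mbps per stream assumes the small-packet case from the sketch above.
PER_STREAM_MBPS = 4.8

for streams, link_mbps, name in [(16, 100, "Fast Ethernet"), (160, 1000, "Gigabit Ethernet")]:
    total = streams * PER_STREAM_MBPS
    print(f"{streams:3d} stereo streams ~ {total:5.1f} Mbps, "
          f"{total / link_mbps:.0%} of a {name} link")
```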

One hundred sixty audio channels might seem like overkill in your studio; however, it doesn't take long for signals to add up. The more you ask of your audio network, the more capacity it will need to handle busses and foldbacks, backup sources, mixes, and headphone streams, not to mention control and monitoring signals. If you want to automatically switch between live assist and dayparts, for example, that takes something like a utility mixer (which is part of our WheatNet-IP BLADEs) to switch them at the right time and level, plus the capacity to handle that switching. Put a few I/O devices in a studio and pipe their audio over a link to your rack room, and the channel count goes up quickly.

Chances are you will need to run more than 16 audio channels through a link at one time. Any time you add capability to the system beyond a basic input or output channel, that's when you need capacity. It's also nice to have enough of it available for when you want to add something like an audio clip player or multiband audio processing to a network I/O unit (which we did recently with the introduction of our new BLADE-3 I/O units). Having the available channel capacity allows us to add in new features and functions that enhance the power and flexibility of the system without running out of network resources.

There’s also the flip side of capacity, or what happens when you run out.

As you add more channels to a link, the likelihood of dropouts grows until they become commonplace and you hear them routinely. The relationship isn't linear: dropouts stay rare while the link has headroom, then climb steeply as the link approaches saturation and falls off the final cliff.
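
A textbook single-queue (M/M/1) model is a big simplification of real switch behavior, but it illustrates why the trouble ramps up so sharply near saturation:

```python
# Illustrative only: a textbook M/M/1 queue is a crude stand-in for a switch
# port, but it shows how waiting time blows up as the link nears saturation.
def queueing_delay_factor(utilization: float) -> float:
    """Average time spent queued, in units of one packet's transmission time."""
    return utilization / (1.0 - utilization)

for load in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"{load:4.0%} load -> roughly {queueing_delay_factor(load):5.1f} packet-times in the queue")
```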

In fact, there's a lot at play in the audio network that affects the quality of the end result. IP audio networks are highly stressed, often running much more traffic than was initially expected. That's why it makes sense to use a technology (Gigabit Ethernet) that is more tolerant of the workload IP audio puts on it.

For example, the bigger the switch capacity, usually referred to as the switch fabric, the more packets the switch can move. Just as with the Ethernet link itself, IP audio network switches should be sized and configured to handle the amount of traffic you're going to throw at them, both today and five to 10 years from now when you'll ask your system to handle new features we haven't even dreamed about yet.
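
As a rough guide, a common rule of thumb says a non-blocking switch needs a fabric of at least twice the sum of its port speeds, so every port can send and receive at line rate at the same time. Here's that check in Python, with hypothetical port counts:

```python
# Illustrative only: the common "non-blocking" rule of thumb for sizing a
# switch fabric: at least 2 x the sum of port speeds, so every port can send
# and receive at line rate at the same time.
def required_fabric_gbps(port_count: int, port_speed_gbps: float) -> float:
    return 2 * port_count * port_speed_gbps

for ports in (24, 48):
    print(f"{ports}-port gigabit switch: non-blocking needs "
          f">= {required_fabric_gbps(ports, 1.0):.0f} Gbps of switch fabric")
```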

By using Gigabit Ethernet links and switches, you'll have the highest-capacity, lowest-latency, most future-proof system available today.
