Here’s the Answer to Truly Converging AV/IT Networks

In order to truly converge AV and IT networks, the capacity of the data network needs to be upgraded. 10 Gbps infrastructure is the solution.

CI Staff

Today’s 1 gigabit per second networks are insufficient to enable “convergence.”

There, I said it.

Our industry has known for years that “convergence” is coming. IT systems and AV systems will, one day in the distant future, magically meld into one another seamlessly. 

No one really knows when or how, but everyone seems to agree that it hasn’t happened yet.

Why not?

In the past, we were held back by the fact that IT systems could not meet the performance requirements of AV users.

Then control systems moved to the cloud, network audio slowly started gaining traction, and finally a few approaches have been used to carry video through Ethernet hardware. 

The holdup is no longer bandwidth.  It is shared bandwidth. 

The promise of convergence is AV and IT systems coexisting on one infrastructure.  What today’s 1 Gb solutions offer is AV and IT systems coexisting beside one another, on two infrastructures that merely look similar.

In its native form, HDMI video consumes a ton of bandwidth – from around 2 Gbps to around 18 Gbps, depending on resolution, frame rate, color depth, and how you count bandwidth. (HDMI adds a lot of overhead of its own to the actual video data – overhead you might not need when moving through a network.) 

To fit this into a 1 Gbps pipe, you need compression.  A lot of compression.
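The back-of-the-envelope math makes the point. Here is a minimal sketch of that calculation; the resolutions, 24-bit color depth, and 80% "usable link" ceiling are illustrative assumptions, not figures from any standard.

```python
# Rough, illustrative math: raw (uncompressed) video payload rate, and the
# compression ratio needed to squeeze one stream into part of a 1 Gbps link.
def raw_video_gbps(width, height, fps, bits_per_pixel=24):
    """Raw pixel data rate in Gbps (video payload only, no HDMI overhead)."""
    return width * height * fps * bits_per_pixel / 1e9

def compression_needed(raw_gbps, link_gbps=1.0, usable_fraction=0.8):
    """Approximate compression ratio to fit one stream into a link.
    usable_fraction is an assumed practical ceiling, not a standard figure."""
    return raw_gbps / (link_gbps * usable_fraction)

for name, (w, h, fps) in {"1080p60": (1920, 1080, 60),
                          "4K60": (3840, 2160, 60)}.items():
    raw = raw_video_gbps(w, h, fps)
    ratio = compression_needed(raw)
    print(f"{name}: ~{raw:.1f} Gbps raw, needs ~{ratio:.0f}:1 to fit 1 Gbps")
```

Even plain 1080p60 needs roughly 4:1 compression before it fits alongside anything else on a gigabit link; 4K60 needs around 15:1, and that is before HDMI's own overhead is counted.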

Latency vs. Bandwidth vs. Quality

When thinking about compression codecs, there are three features being traded against one another: latency, bandwidth, and quality. 

Achieving lower bandwidth without significantly impacting quality requires more sophisticated algorithms – those take longer to run, so latency goes up.  Keeping latency low and quality high means compressing less – so bandwidth goes up.

Two basic approaches to this compression have been developed: MPEG-style and JPEG-style.  MPEG (H.264 and its relatives) applies extremely sophisticated arithmetic across multiple frames of video to achieve very high compression ratios with reasonable quality, but at the cost of significant latency. 

The benefit is bandwidth low enough that the video can pass over wireless links to mobile devices, which can also tolerate some image degradation thanks to their relatively small screens.  JPEG-style codecs, by contrast, compress each frame independently, trading higher bandwidth for lower latency. 

Although this latency and quality level is acceptable in some use cases, other use cases demand better performance.

In order to increase quality and decrease latency, we need to turn down the compression and therefore consume more bandwidth.  Here is where we have a problem with 1 Gbps networking gear. 

The very best sub-1 Gb compression products have noticeable (though typically small) defects in video quality, create a few frames of latency, and still consume hundreds of megabits per stream. 

Even if we could live with the quality and latency issues, those hundreds of megabits matter when it comes time to converge the networks.

Parallel Networks Aren’t the Answer

Find an IT guy with a working 1 Gbps data network.  Now ask him if it’s okay for you to add dozens of video transmitters consuming hundreds of megabits each to his system. 

He won’t say yes, because he knows that can’t work. 

The users of the data network expect that nearly all of the 1 Gbps bandwidth is available to them for file downloads, collaboration, and even PC and mobile video streaming. 
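The arithmetic behind that refusal is simple enough to sketch. The stream counts and per-stream rates below are illustrative assumptions in line with the figures above, not measurements of any particular product.

```python
# How much of a shared 1 Gbps link a set of AV streams would consume.
LINK_GBPS = 1.0

def av_load_fraction(num_streams, mbps_per_stream):
    """Fraction of the 1 Gbps link consumed by the AV streams."""
    return (num_streams * mbps_per_stream / 1000) / LINK_GBPS

# Even a modest deployment dominates or overwhelms the link:
print(av_load_fraction(2, 400))   # prints 0.8  (80% of the link gone)
print(av_load_fraction(12, 400))  # prints 4.8  (480%: wildly oversubscribed)
```

Two streams leave the data users fighting over scraps; a dozen streams cannot fit at all.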


If the AV system starts consuming the majority of that bandwidth, then the data user’s experience is significantly compromised.

Integrators, IT system administrators, and even 1 Gbps video product manufacturers have already realized this, and so standard practice is to create two parallel networks.

One is a 1 Gbps data network (likely preexisting); alongside it runs a physically independent 1 Gbps Ethernet network dedicated solely to the AV system.  What’s the point of that? 

If you want to build two infrastructures, just use HDBaseT and don’t pay any penalty for video quality or latency.

True AV/IT Convergence = 10 GbE

To truly converge these networks, we need to upgrade the capacity of the data network. 

10 Gbps infrastructure is here, and it’s cheap. 

Using this class of network, HDMI video can be transmitted with little to no compression, and a full 1 Gbps (or more) can be set aside for the data network users.  Their experience is completely unaffected by the behavior of the AV equipment. 

On top of that, no compression means essentially no added latency and no compromise of image quality.  And the 10 Gbps equipment is an easy expansion of the 1 Gbps network – the two can be interconnected, so your existing 1 Gbps infrastructure can function as it always has, while the 10 Gbps upgrade is needed only where you want to add video capacity.
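The provisioning claim above can be sketched as a simple feasibility check. The ~3 Gbps per-stream figure for near-uncompressed 1080p60 and the 1 Gbps data reservation are illustrative assumptions.

```python
# Does a set of AV streams fit a link while a fixed data-network
# allocation remains untouched? All figures are illustrative.
def fits(link_gbps, reserved_data_gbps, stream_gbps_list):
    """True if the streams plus the reserved data bandwidth fit the link."""
    return sum(stream_gbps_list) + reserved_data_gbps <= link_gbps

# Two near-uncompressed 1080p60 streams (~3 Gbps each) plus 1 Gbps for data:
print(fits(10.0, 1.0, [3.0, 3.0]))  # prints True on a 10 Gbps link
print(fits(1.0, 1.0, [3.0, 3.0]))   # prints False on 1 Gbps hardware
```

The same deployment that is impossible on gigabit gear fits on 10 GbE with room to spare, which is the whole argument in one comparison.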

In the compression tradeoff mentioned earlier – latency vs. quality vs. bandwidth – bandwidth stands alone. No one can create more time, so latency budgets will never loosen. 

Video quality demands keep climbing (4K, 8K, HDR). Of the three, only bandwidth is getting cheaper. Make the right tradeoff!

Author Justin Kennington is the director of strategic and technical marketing for AptoVision, a Montreal-based technology company that provides its advanced BlueRiver chipsets for AV-KVM signal extension, matrix switching, IP-based switching, video wall and multi-view applications.
