How Video Production and Broadcast Markets will Benefit from AI, According to Futuresource Study

The research company Futuresource says artificial intelligence (AI) will create more opportunities in the worlds of broadcast and video production.


A study from Futuresource estimates that AI and machine learning will make video production more efficient.

As technologies such as artificial intelligence (AI) and machine learning continue to mature, research company Futuresource estimates they could help minimize common problems faced in the broadcast and video production markets.

Futuresource says the fields of broadcast and production are fraught with challenges such as:

  • delivering content at reduced bit rates
  • applying precise, accurately timed subtitles
  • maintaining consistent quality throughout the content

“Machine learning is beginning to impact on the video market, unlocking a range of opportunities for the industry,” says Simon Forrest, principal technology analyst at Futuresource Consulting.

“One of the most notable, but perhaps lesser-known, areas is video encoding technology.”

“Machine-learning techniques allow encoders to optimize video encode parameters on a scene-by-scene basis. Meanwhile, the results are fed back into the system to enhance future video encoding; this feedback loop ensures machine learning applies better encode parameters in subsequent sessions.”
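The feedback loop Forrest describes can be sketched in a few lines. This is a hypothetical illustration, not Futuresource's or any encoder's actual method: the quality model, the CRF-style parameter, and the adjustment rule are all invented assumptions, but they show how remembered results steer later encode sessions toward better parameters.

```python
# Hypothetical per-scene encoding feedback loop: try a compression setting,
# measure the result, and remember what worked for similar scenes.
# The quality model and step sizes are illustrative assumptions only.

def measure_quality(scene_complexity, crf):
    # Stand-in quality metric: simpler scenes tolerate a higher CRF
    # (stronger compression) before perceived quality drops.
    return 100 - crf * (1 + scene_complexity)

def choose_crf(history, scene_complexity, default=28):
    # Reuse the best-known CRF for scenes of similar complexity.
    return history.get(round(scene_complexity, 1), default)

def encode_session(scenes, history):
    for complexity in scenes:
        crf = choose_crf(history, complexity)
        quality = measure_quality(complexity, crf)
        key = round(complexity, 1)
        # Feedback: compress harder if quality headroom remains,
        # back off if quality fell short, otherwise keep the setting.
        if quality > 90:
            history[key] = crf + 4
        elif quality < 80:
            history[key] = max(crf - 4, 0)
        else:
            history[key] = crf
    return history

history = {}
for _ in range(6):  # repeated sessions refine the settings
    history = encode_session([0.2, 0.5, 0.8], history)
print(history)  # → {0.2: 16, 0.5: 12, 0.8: 8}
```

After a few sessions the loop settles on stronger compression for simple scenes and gentler compression for complex ones, which is the "accelerates towards the optimum" behaviour described below.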

Building upon this accumulated “experience,” he says, these encoding processes will eventually deliver cost savings to video professionals.

“Over time, the system accelerates towards the optimum compression for a given scene,” says Forrest.

“This leads to significant cost savings in network bandwidth and delivery, and the more efficiently a broadcaster or OTT [over the top] service provider uses bandwidth, the more profit becomes available to them.”
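A back-of-envelope calculation shows how bitrate efficiency maps to delivery cost. Every figure here is hypothetical (viewer-hours, bitrate, CDN rate, and the 15% saving are invented for illustration), but the arithmetic is the mechanism Forrest is pointing at.

```python
# Hypothetical figures: how a modest bitrate reduction from smarter
# encoding translates into monthly CDN delivery savings.
hours_streamed = 1_000_000   # viewer-hours per month (assumed)
bitrate_mbps = 5.0           # average stream bitrate (assumed)
cdn_cost_per_gb = 0.02       # USD per GB delivered (assumed)

def monthly_cost(bitrate_mbps):
    # Mbps -> MB/s (/8), -> MB/hour (*3600), -> total GB (/1000)
    gb_delivered = bitrate_mbps / 8 * 3600 * hours_streamed / 1000
    return gb_delivered * cdn_cost_per_gb

baseline = monthly_cost(bitrate_mbps)
optimized = monthly_cost(bitrate_mbps * 0.85)  # 15% bitrate saving
print(f"baseline ${baseline:,.0f}, optimized ${optimized:,.0f}, "
      f"saved ${baseline - optimized:,.0f}")
# → baseline $45,000, optimized $38,250, saved $6,750
```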

Ongoing Impact of AI and Machine Learning on Video Production

Forrest foresees these technologies having a positive impact on closed captioning.

Using algorithms trained on large language databanks, speech can be transcribed into text in real time and automatically applied to broadcast content.
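The step that follows speech recognition can be sketched as follows: given timestamped text segments (the kind of output an ASR model would produce), format them as SubRip (SRT) cues for overlay on broadcast content. The segment data here is invented for illustration; the timestamp format is standard SRT.

```python
# Minimal sketch: turn timestamped transcript segments into SRT cues.
# The transcript content below is invented for illustration.

def fmt_ts(seconds):
    # SRT timestamps use the form HH:MM:SS,mmm
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def to_srt(segments):
    cues = []
    for i, (start, end, text) in enumerate(segments, 1):
        cues.append(f"{i}\n{fmt_ts(start)} --> {fmt_ts(end)}\n{text}\n")
    return "\n".join(cues)

segments = [
    (0.0, 2.4, "Good evening and welcome."),
    (2.4, 5.1, "Tonight: machine learning in broadcast."),
]
print(to_srt(segments))
```

In a live pipeline the same formatting step would run continuously as the recognizer emits each segment, which is what makes real-time captioning possible.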

Additionally, he notes AI can be engineered to identify the context of speech to some level.

“It’s clear that machine learning and AI technology is being actively pursued in the video industry and companies are already reaping the benefits,” says Forrest.

“AI is still a nascent technology with many developments yet to come. Machine learning and AI have been limited to running on servers within the cloud. This is beginning to change, with semiconductor vendors now building neural network accelerators [NNAs] into silicon chips.

“This will allow elements of AI to run at the edge, on consumer electronics devices, and it will be possible to identify users locally, either vocally through voice fingerprint or visually via camera.

“No data needs to leave the device itself, leading to improved security and increased privacy for the consumer. Companies like Synaptics are already developing the next generation of set-top box [STB] chips, capable of AI at the edge, so the future is already beginning to reveal itself.”
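On-device identification by "voice fingerprint" typically means comparing a fixed-length speaker embedding (which an on-chip neural network would compute from audio) against locally enrolled profiles. The sketch below assumes such embeddings already exist; the vectors, names, and threshold are invented, and this is an illustration of the general technique, not Synaptics' implementation. Nothing leaves the device: enrollment and matching are both local.

```python
# Hedged sketch of local speaker identification via cosine similarity
# between a query embedding and enrolled profiles. Embedding values,
# usernames, and the 0.85 threshold are illustrative assumptions.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(embedding, enrolled, threshold=0.85):
    # Return the best-matching enrolled user, or None if no profile
    # scores above the threshold (unknown speaker).
    best_user, best_score = None, threshold
    for user, profile in enrolled.items():
        score = cosine(embedding, profile)
        if score > best_score:
            best_user, best_score = user, score
    return best_user

enrolled = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.9, 0.3]}
print(identify([0.88, 0.15, 0.18], enrolled))  # → alice
```

The threshold is the design knob here: raising it reduces false accepts (a stranger identified as an enrolled user) at the cost of more false rejects.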