AES Show Notes: Broadcast Audio Gets Its Due

The 137th AES Show, which took place last week in Los Angeles, was widely considered a success. The exhibition component performed solidly, drawing 15,403 registered attendees and 307 exhibiting companies, an increase over the last West Coast show, in 2012, though short of last year's New York event, which recorded 18,453 registrants, a five-year high for the show.

However, the loudest accolades were reserved for the show's expanded program of panels and presentations, the largest and most diverse yet, which included several broadcast-audio sessions. "Broadcast and Streaming Media: Audio Issues for 4K and 8K Television" examined the tools and practical experience needed to create a high-resolution visual-aural experience, as well as the standardization considerations of bonding the two into a single production-to-consumer format. Implementation of compression formats, audio scalability, and backward compatibility were among the topics discussed.

David McIntyre, president, corporate strategy and development, DTS, pointed out that all the experience that went into formulating standards for LPCM audio in 5.1 surround sound will have to be reassessed for the object-based audio that's expected to accompany 4K video at some point. "Object-based audio is a big opportunity," he told the audience. "But it's also a chance to revisit all the mistakes we made with surround. We came to agree what LPCM is for surround; now we have to agree what LPCM is for objects."

Thomas Lund, CTO, broadcast and production, TC Electronic, said that lossy codecs for hi-res audio were unacceptable and that loudness normalization would have to be predictable and easy to apply.

Tim Carroll, CTO, Linear Acoustic/Telos, recounted that the shift from stereo to surround was “jarring. Still,” he continued, “that was nothing compared to [going to] 4K and 8K. It’s about to get a lot more complicated.”

Jeff Reidmiller, senior director, Sound Group, Dolby, echoed that sentiment, noting that a typical cinema mix using Dolby's Atmos system can carry as many as 100 objects, and that the audio for a 90-minute movie at 48-kHz/24-bit resolution can create a file as large as 77 GB. However, he added, mezzanine-type compression can help manage such massive files.
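That file-size figure can be sanity-checked with simple arithmetic: uncompressed LPCM size is sample rate × bytes per sample × duration × number of objects. A quick sketch of that calculation (the helper name is ours, not anything from the panel):

```python
# Sanity-check the uncompressed size of object-based movie audio.
# Assumes plain LPCM with one audio stream per object (an illustrative
# simplification; real Atmos delivery formats are more involved).

def lpcm_size_bytes(sample_rate_hz: int, bit_depth: int,
                    seconds: int, streams: int) -> int:
    """Uncompressed LPCM payload size in bytes."""
    return sample_rate_hz * (bit_depth // 8) * seconds * streams

# Figures from the panel: 100 objects, 90-minute movie, 48 kHz / 24-bit.
size = lpcm_size_bytes(48_000, 24, 90 * 60, 100)
print(f"{size / 1e9:.1f} GB")  # prints "77.8 GB"
```

The result, roughly 78 GB uncompressed, illustrates why mezzanine compression is needed to move such mixes through a production chain.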

Other panels included “Understanding Audio Processing” with representatives from Dolby, DTS, Orban, Telos Alliance, and Wheatstone; “Audio Issues for Live Television,” featuring presentations from mixers, including Michael Abbott, Kevin Cleary (ESPN), and Ed Greene; and an extensive networked-audio track offering presentations on AVB, Dante, and AES67 protocols.

In another area of hi-res audio, a panel led by the Digital Entertainment Group noted that the organization had formulated a definition of high-resolution audio: "lossless audio that is capable of reproducing the full range of sound from recordings that have been mastered from better–than–CD-quality music sources," with a minimum resolution of 48 kHz/20 bits. (By comparison, CDs are 44.1 kHz/16 bits.)
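The gap between those two resolutions is easy to quantify: the DEG minimum carries roughly a third more raw data per second than CD audio. A small sketch of the comparison (the function name is ours, for illustration):

```python
# Compare the raw per-stream data rates of CD audio and the DEG
# hi-res minimum, assuming uncompressed stereo LPCM.

def raw_bitrate_kbps(sample_rate_hz: int, bit_depth: int,
                     channels: int = 2) -> float:
    """Uncompressed LPCM bitrate in kilobits per second."""
    return sample_rate_hz * bit_depth * channels / 1000

cd = raw_bitrate_kbps(44_100, 16)     # CD: 44.1 kHz / 16-bit stereo
hires = raw_bitrate_kbps(48_000, 20)  # DEG minimum: 48 kHz / 20-bit stereo
print(f"CD: {cd:.1f} kbps, hi-res minimum: {hires:.1f} kbps")
# prints "CD: 1411.2 kbps, hi-res minimum: 1920.0 kbps"
```

Note that the DEG definition hinges on the mastering source as much as on the numbers; the bitrate comparison only captures the delivery-format side.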

On the same panel, representatives from the three remaining major record labels — Universal, Sony Music, and Warner — indicated that they each have between 1,200 and 1,300 high-resolution titles available for download or streaming. The category is still a niche — even vinyl dwarfs those numbers — but, given the higher prices that consumers seem willing to pay for hi-res sound, the hope is that it will prove a robust niche, a sentiment that extends to broadcast as well.

DTV Audio Group
The DTV Audio Group’s annual AES seminar offered such presentations as “This Is Not Your Father’s MVPD,” a look into how the transition to IP infrastructure and streamed content is transforming cable and how this facilitates advanced audio codecs, and “So Long, SDI and MADI,” a look at IP infrastructure for audio and video contribution within the broadcast plant and in the field. A panel on the distribution of next-generation audio services looked in depth at object-based mixing and revealed that, while cinema is moving forward with systems like Atmos, the process is still nascent for broadcast, up to and including agreement on the terminology: more than a few attendees recommended that “objects” be referred to more intuitively as “elements,” a term already in common use in mixing for broadcast.

The same complexity is expected as broadcast transitions to an IP-based infrastructure. One presenter noted that “cloud-based distribution will be challenging,” with issues relating to fault tolerance and switches. Another point that emerged was that, whatever the plant infrastructure of the future is, it must also be format-agnostic. One speaker observed that Japan is expected to implement 8K broadcasting by 2020, even as U.S. broadcasters are still ramping up for 4K. He said, “We’ll need a common infrastructure for services for new media formats that will use the same components for each new format, such as switches.”

It became apparent during the event that the language of IT was becoming ever more integrated into the broadcast conversation, as was the looming presence of the Millennial generation. One cable-network audio executive said that “Millennials want their content when they want it and where they want it, and personalization [in the form of object-based mixing] is the way to get it to them. How we do it is the real question.”

However, another cable-network audio executive countered, “How we’re going to pay for it is the question, if the consumer doesn’t want to pay more for television.” To which another cable exec responded that, regardless of costs and processes, when it comes to moving broadcast audio into the next IP/object-based plane, “We can’t not do it.”