September 28, 2018 by Kirsten Nelson

Photos by Matt Benton

Beneath many layers of electronic enhancement and distortion, the sound of Justin Vernon’s voice captivates. Surrounded by four musicians in this show’s configuration of the ever-evolving “indie-folk” band Bon Iver, his placid tone is at the center of a spellbinding reverie that compels the audience into stillness.

As the sold-out crowd of 4,500 Santa Barbara Bowl concert-goers experiences a collective closeness to tears, I hear a guy behind me say a little too loudly, “Intense!”

He’s right. What he’s feeling is intense, and it is utterly unique, because this is the last show of Bon Iver’s summer 2018 North American tour, and the band is conducting a gigantic experiment. To a performance already stacked with lushly calibrated video and lighting, they’re adding the ultimate visual element: that original analog optical effect of connecting the sight of a musical performance with its sound.

The bridge between these two senses emanates from an unusual array of speakers that hangs above the stage. This is a site-specific implementation of “Hyperreal Sound” by L-ISA from L-Acoustics, which combines art and science to overcome the historic limitations of stereo and deliver spatialized audio to the wide majority of the audience.

This is a particularly powerful finale to the summer of immersive sound, and many members of the audience appear transfixed, staring at the stage instead of their smartphones. Afterward, numerous fans and critics will boldly declare that it was the best, most emotional concert they’ve ever witnessed.

That seems like an appropriate response to an event that was actually six years in the making. When L-ISA was still in its early conceptual phases, L-Acoustics’ Head of U.S. Touring Support Scott Sugden thought specifically of Bon Iver as a band built for the system’s immersive capabilities. With more than 100 inputs going into the mix even when there’s just a five-piece version of the band on stage, the technology's object-based audio capabilities allow for a spacious separation and distinction between sonic elements. An audio object is assigned a location within the 3D image of a mix, and then, using spatial processing, FOH engineers can select from a variety of sensory shape-shifting options.

So basically, the width and distance of specific pieces of the mix can be adjusted for maximum effect across the multiple arrays suspended across the stage. As each audio object is placed in a unique location, it might emanate from one speaker array or a combination of many. This spatial thought process in mixing is an intriguing one — you can make things sound close, or far away. You can add width, so the vocals are huge, while perhaps a rhythm guitar can be tucked into the back of a mix, just riding along. And of course, there’s some fun panning action that can happen.
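The idea of placing each audio object at its own position, width, and distance can be sketched in code. This is a hypothetical illustration of object-based spatial mixing in general, not L-ISA’s actual processing; the array positions, Gaussian spread model, and inverse-distance attenuation are all simplifying assumptions.

```python
import math

# Hypothetical: five speaker arrays spread across the stage,
# at normalized left-to-right positions 0.0 .. 1.0.
ARRAY_POSITIONS = [0.0, 0.25, 0.5, 0.75, 1.0]

def object_gains(pan, width, distance):
    """Per-array gains for one audio object.

    pan:      0.0 (stage left) .. 1.0 (stage right)
    width:    spread of the object across arrays (small = pinpoint)
    distance: perceived distance; farther objects are attenuated
    """
    # A Gaussian spread around the pan position models object width.
    raw = [math.exp(-((p - pan) ** 2) / (2 * width ** 2))
           for p in ARRAY_POSITIONS]
    # Normalize to constant power so moving an object across arrays
    # does not change its overall loudness.
    norm = math.sqrt(sum(g * g for g in raw))
    level = 1.0 / max(distance, 1.0)  # simple inverse-distance attenuation
    return [level * g / norm for g in raw]

# A centered lead vocal, narrow and close: dominated by the center array.
vocal = object_gains(pan=0.5, width=0.1, distance=1.0)
# A rhythm guitar tucked wide and far back: spread out and quieter.
guitar = object_gains(pan=0.8, width=0.4, distance=3.0)
```

Widening the vocal just means increasing its `width` so more arrays carry it; pushing the guitar back means raising its `distance` so it rides quietly behind the mix, which is the spatial thought process described above.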

All this invisible arrangement of sound somehow intensifies the visual element of what the audience is seeing on stage. “We’re trying to re-harmonize and reconnect the visual and auditory senses,” Sugden explains. When audience members are able to better perceive where sound is coming from on stage, it creates a realistic distinction between instruments and voices, localizing each to their space within a performance. And that makes the show feel more intimate, he says, which is desirable for audiences and artists alike.

The sensation is something new, even for PA provider Clearwing Productions, which has more experience with the technology than most, as it also deployed the system for an ODESZA show at the Santa Barbara Bowl earlier this year.

Asked to describe the audience perception at the Bon Iver show, Steve Harvey, Key Accounts Executive with Clearwing Productions said, “Wherever you were, the sound was coming from Justin Vernon’s mouth, not just from a radiating mass of speakers in the air. And that’s where the connection to the audience comes, because they’re not hearing it from their left or their right. They’re hearing it in the center. It’s hitting them in the eyes. You actually ‘hear’ it with your eyes, because you see the sound coming from his mouth, rather than the PA. That’s the takeaway. The instruments sound like where they are on stage.”

And it’s not just the people at FOH who reap the benefits. Several sound humans who walked the venue throughout the show confirmed just how many audience members enjoyed full immersion. Even standing at the far right, front lip of the stage, the mix was as complete and spatial as it would be in the center, rather than sounding like it was coming from the right hang of a traditional concert setup.

In fact, according to the spatial audio software, something like 90 percent of the audience was in the “L-ISA zone,” notes Jamie Earle, Director of Engineering (Production) for Clearwing. That’s compared to the meager 20-30 percent of seats that get any meaningful stereo effect at a regular concert.

After the concert, with the “life-changing show” comments rolling in across social media and music critics’ reviews, it appeared that the experiment was a success. And in the weeks since, it’s been announced that several new tours are hitting the road with the system, along with a few permanent installs, including Aerosmith’s residency in Las Vegas.

Clearwing’s Earle notes that the uptick in L-ISA use is intriguing proof that it really is different, especially given the production impact of seven speaker hangs versus the usual two. More time and more consideration are required from everyone, including video and lighting teams. “But the fact that these tours are willing to take it on must say something about how much of an improvement it is — how even a person that’s not an audio tech could perceive a big difference,” he observes.

As for me, in my status as an audio-aware, regular human, I noticed something pretty major among my fellow audience members. People weren’t looking at their phones. They were actually watching the show. The sensation as I experienced it was that if you looked down for even a second, you might miss something. Every note, every breath across the reed of the saxophone, felt like it was delivered in its most evocative form, directly from the stage. It made me love Bon Iver again in a huge new way, which might be exactly the best thing about immersive audio. It’s good for artists and engineers and audiences alike.

About Kirsten Nelson

Kirsten Nelson has written about audio, video and experience design in all its permutations for almost 20 years. As a writer and content developer for AVIXA, Kirsten connects stories, people and technology through a variety of media. She also directs program content for InfoComm Center Stage. Kirsten was the editor of SCN magazine for 17 years, and has written for numerous industry publications and the InfoComm Daily. In addition to technology, she also writes about motorcycles, which provides a professional outlet for her obsession with MotoGP racing.