Learning from SXSW Part 1: Designing Sound For The Future

So if you didn’t make it out to Austin this past week, you may have missed out on a great few days — especially if you are an audiophile. We know you’re out there! One of our favorite talks was Designing the Sound of the Future.

Among the panelists were some heavyweights in the audio industry: Steve Milton represented Listen, a sound design studio in Brooklyn; Uwe Cremering represented Sennheiser Ambeo; Terence Caulkins represented the ARUP SoundLab, a 3D audio research group; and Leslie Ann Jones, a Grammy award-winning director of music and scoring, represented Skywalker Sound.

If you didn’t make it out to the panel, fear not. We’ve put together a breakdown of all that was talked about. Let’s dive in.

Sonic identities will be even more important in the immersive product landscape

Milton from Listen opened the conversation by explaining how sonic identities will continue to play an important role in defining future products and brands, just as much as the brand logo or the design of the product itself.

Listen Studio developed the sonic identity for Microsoft's HoloLens, a groundbreaking augmented reality headset. Their approach was to identify the places where sound would play a key role in the interactive experience: from the environment of the AR world, to the physical touch of the human interface, and finally to the ambience of the operating system.

The sound designer's role will expand dramatically

Sound will continue to play an ever-increasing role in immersion and emotional response. Cremering from Sennheiser explained how sound provides context for visual elements, especially in VR environments; sound makes the unbelievable seem more believable. In order to ground the user in reality, sound designers will need to take far more information into account, including the visual context, to create deeper immersive experiences.

Channel-based + object-based audio is key to successful immersion

For years, channel-based audio playback has provided a sense of space and realism in music, film, and games. But while channel-based audio does well at creating a sound environment, it misses out on bringing the experience closer to the listener; it is like painting a picture with only background elements.

Object-based sound allows the designer to place a sound anywhere in space. By adding object-based audio, you essentially add the middle ground and foreground of the audio landscape, providing the additional depth that brings an extra level of intimacy and realism to an acoustic experience. Paired with a compelling channel-based mix, the two together can increase the overall immersion of an audio experience.
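To make the distinction concrete, here is a minimal sketch of what rendering an audio "object" can mean: the source stays a mono signal plus position metadata, and the gains are computed at playback time. The function name, coordinate convention, and the simple pan and distance rules below are our own illustrative assumptions, not how any particular renderer or engine implements it.

```python
import numpy as np

def render_object(mono, position):
    """Very rough object renderer: inverse-distance attenuation plus a
    constant-power stereo pan. The 'object' is just the mono signal and
    its position; everything else happens at playback time."""
    x, y, z = position                      # listener at origin, x = right, y = front, z = up
    distance = max(np.sqrt(x**2 + y**2 + z**2), 1.0)
    gain = 1.0 / distance                   # simple inverse-distance attenuation

    azimuth = np.arctan2(x, y)              # 0 = straight ahead, positive = to the right
    pan = (azimuth / np.pi + 1.0) / 2.0     # map [-pi, pi] to [0, 1], 0 = hard left
    left_gain = np.cos(pan * np.pi / 2.0)   # constant-power pan law
    right_gain = np.sin(pan * np.pi / 2.0)

    return np.stack([mono * gain * left_gain,
                     mono * gain * right_gain], axis=-1)

# A one-second 440 Hz tone placed in front of the listener and slightly to the right.
t = np.linspace(0, 1, 48000, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
stereo = render_object(tone, position=(1.0, 2.0, 0.0))   # shape (48000, 2)
```

Because the renderer, not the mix, decides the final speaker or headphone feeds, the same object metadata can later be rendered binaurally or to a full speaker array.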

Ambisonics are a future format

One format that we are heavily researching ourselves at OSSIC is Ambisonics, and it came up repeatedly during the talk for its role in the future of sound design. For those of you not familiar, Ambisonics is an open audio format that can represent the spatial acoustics of a recorded environment over a full sphere, including sounds above and below the listener.

The power of Ambisonics is that it gives both the listener and the sound designer more flexibility in recording and mixing, and Caulkins recommended that sound designers use a hybrid approach, combining mono mic recordings with full Ambisonic captures within their mixes.
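For readers curious about what a B-format signal actually contains, below is a rough sketch of first-order Ambisonic encoding of a mono source, using the traditional FuMa W/X/Y/Z convention (W carries a roughly -3 dB factor). The function and its parameters are illustrative assumptions; in practice you would capture B-format with an Ambisonic microphone or encode it with a spatial audio plugin rather than by hand.

```python
import numpy as np

def encode_b_format(mono, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order Ambisonic B-format (W, X, Y, Z),
    FuMa convention. Decoding to speakers or to binaural happens later,
    which is where the format's flexibility comes from."""
    az = np.radians(azimuth_deg)        # 0 = front, positive = counter-clockwise (to the left)
    el = np.radians(elevation_deg)      # positive = above the listener

    w = mono / np.sqrt(2.0)             # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)  # front-back figure-of-eight
    y = mono * np.sin(az) * np.cos(el)  # left-right figure-of-eight
    z = mono * np.sin(el)               # up-down figure-of-eight

    return np.stack([w, x, y, z], axis=-1)

# Place a one-second 440 Hz tone 30 degrees to the left and 20 degrees above the listener.
t = np.linspace(0, 1, 48000, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
b_format = encode_b_format(tone, azimuth_deg=30, elevation_deg=20)   # shape (48000, 4)
```

Because the encoded channels describe the sound field rather than specific speakers, the same four channels can be rotated as the listener turns their head and then decoded to whatever playback system is available.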

Example of Ambisonic B-format capture. Credit: Creative Field Recording

Consumers will have more freedom in how they experience sound

Future content formats will allow for an ever-increasing amount of user interaction. This means that people will be passive listeners less and less; instead, they will actively manipulate, move, and adjust their perspective within an experience. As with VR visuals, this calls for audio capture methods and sound design methodologies that plan for a user who is actively engaging with the content they're experiencing.

The future of audio is interactive

Similar to the point above, Caulkins from the ARUP SoundLab reiterated how the audio experiences of the future will be increasingly interactive and even social. Much of Caulkins' work involves using sound not only to provide an experience for one person, but to scale that immersion and interactivity beyond the single listener, and even to consider how one mix can play accurately across a variety of different environments (a concert hall, a car, or a pair of headphones) without the need for a new mix.

Sound is still very much an emotional sense

Industry veteran and Grammy award winner Jones from Skywalker Sound is a well-known champion of using sound to tell stories. Through her work at Skywalker, she is more than well versed in how music and composition have the power to move people in ways that visuals cannot.

As a bit of grounding for the session, Jones reminded everyone that while we're considering the future of audio, we can't forget that at the end of the day it comes down to how the sound, and the music, makes you feel. In her words, it is much easier to tell a story and evoke an emotion with just audio and no visuals than it is to achieve the same effect watching a moving picture with no sound at all.

What are your predictions for the future of sound? Let us know in the comments.