Augmented Reality Audio App for Bose Frames at MIT Media Lab


Our team came together at MIT Media Lab to explore completely new UX/UI paradigms in audio AR with Bose AR glasses! The digitally augmented world is not just visual! Things are looking up! (pun intended)
Augmenting the real world with contextual audio.
Location: MIT Media Lab, 6th Floor main hall.

Design Challenge

Our team came together with a curiosity and desire to explore the idea of augmented reality through audio. Given the nascent technology and the 3-day time constraint, our research plan was a sprint that merged lean entrepreneurship with design. We began with a conversation about health, entertainment, and public space, then mixed value proposition design with the double-diamond design-thinking process. We also benefited from the accessibility expertise of our teammate Sunish Gupta, who is vision impaired and helped the team understand his use of mobility technology. From our research, the following questions surfaced.

  • How might we make immersive experiences accessible (abilities, price)?
  • How might we reduce injuries from mobile phone distraction?
  • How might we invite people to enjoy public space?
  • How might we give local residents an opportunity to tell their story?

Inspiration

Cities are “smart” and getting smarter, we’re told. As information processing becomes embedded within and distributed throughout ever broader regions of urban space, we humans arrive at an opportune moment to take back the streets from cars and reimagine the pedestrian experience with augmented reality. There is no higher fidelity than the real world. By augmenting our surroundings with audio, we can preserve a connection with reality while contextualizing it with human stories, critical information, and new paradigms in user experience and interfaces.

What it does

World Whisper allows users to take the scenic route and enjoy the sights and sounds of reality, with an additional layer of audio augmentation that tells stories and provides contextual information about their surroundings.

How we built it

Using the Bose AR Frames’ projected audio and positional tracking, our team created scenes, menus, and audio pins in Unity and on Apple iOS with the Bose SDK.
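
To make the audio-pin idea concrete, here is a minimal, hypothetical sketch of the iOS side. It deliberately avoids Bose SDK calls (which we are not reproducing here) and instead uses Apple’s AVAudioEngine and AVAudioEnvironmentNode to place a mono clip at a 3D position and steer the listener from head-tracking yaw/pitch/roll; the class name AudioPinPlayer and the values shown are illustrative assumptions, not our actual project code.

```swift
import AVFoundation

// Minimal sketch of an "audio pin": a mono clip placed at a 3D position,
// spatialized relative to the listener's head orientation.
// Illustrative only; the hackathon build used the Bose SDK with Unity/iOS,
// whose exact APIs are not shown here.
final class AudioPinPlayer {
    private let engine = AVAudioEngine()
    private let environment = AVAudioEnvironmentNode()
    private let player = AVAudioPlayerNode()

    init() {
        engine.attach(environment)
        engine.attach(player)

        // Mono input is required for 3D spatialization.
        let monoFormat = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)
        engine.connect(player, to: environment, format: monoFormat)
        engine.connect(environment, to: engine.mainMixerNode, format: nil)
    }

    // Drop a pin: position the source at a point in the listener's space.
    func playPin(fileURL: URL, at position: AVAudio3DPoint) throws {
        let file = try AVAudioFile(forReading: fileURL)
        player.position = position
        player.scheduleFile(file, at: nil, completionHandler: nil)
        try engine.start()
        player.play()
    }

    // Feed head orientation (degrees) from whatever head tracker is available.
    func updateListener(yaw: Float, pitch: Float, roll: Float) {
        environment.listenerAngularOrientation =
            AVAudio3DAngularOrientation(yaw: yaw, pitch: pitch, roll: roll)
    }
}
```

In use, playPin(fileURL: url, at: AVAudio3DPoint(x: 2, y: 0, z: -3)) would place a story clip a couple of meters ahead and to the right of the listener, and feeding the glasses’ rotation data into updateListener keeps the pin anchored in the world as the head turns.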

Challenges we ran into

With new hardware that had been released to the public only weeks before the hackathon, our team ventured into the unknown, both on the software side and in exploring new UX paradigms for audio-only interaction.

  • No documentation on how to implement audio in the SDK.
  • Had to implement our own gesture controls; no library was available.
  • Calibrating the device to keep a stable center point of view (see the recentering sketch after this list).
  • Implementing location services.
  • Emerging paradigms in audio UX / UI.
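
The calibration bullet above boils down to capturing a reference yaw when the user faces “forward” and re-expressing later readings relative to it. The sketch below illustrates that idea only; the type and names are hypothetical and no Bose SDK calls are shown.

```swift
import Foundation

// Sketch of "recentering": store a reference yaw when the user looks straight
// ahead, then report all later yaw readings relative to that reference,
// wrapped into the range [-180, 180] degrees. Names are hypothetical;
// the real build fed yaw from the glasses' rotation sensor.
struct YawRecenterer {
    private var referenceYaw: Double = 0

    // Call when the user confirms they are facing "forward".
    mutating func recenter(currentYaw: Double) {
        referenceYaw = currentYaw
    }

    // Relative yaw: 0 means "looking at the calibrated center".
    func relativeYaw(currentYaw: Double) -> Double {
        var delta = (currentYaw - referenceYaw).truncatingRemainder(dividingBy: 360)
        if delta >= 180 { delta -= 360 }
        if delta < -180 { delta += 360 }
        return delta
    }
}
```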

Accomplishments that we’re proud of

Coding the head gestures ourselves was quite a challenge. Designing the audio AR experience for comfort and creating intuitive interactions also led us to dig deep into the human experience for cues that would start a conversation between the technology and the user.
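
As a rough illustration of what the head-gesture work involved (a sketch, not our actual hackathon code), a nod can be recognized from raw pitch samples as a quick downward swing past a threshold followed by a return toward center within a short window; the thresholds below are guesses.

```swift
import Foundation

// Simplified nod detector over a stream of pitch samples (degrees, positive = down).
// A "nod" is detected when pitch dips past a threshold and comes back up
// within a short time window. Thresholds here are illustrative, not tuned values.
final class NodDetector {
    private let downThreshold: Double = 15     // degrees of downward pitch
    private let returnThreshold: Double = 5    // considered "back to center"
    private let maxDuration: TimeInterval = 0.8

    private var nodStart: Date?

    // Feed one pitch sample; returns true when a complete nod is recognized.
    func process(pitch: Double, at time: Date = Date()) -> Bool {
        if nodStart == nil, pitch > downThreshold {
            nodStart = time                       // head started dipping
        } else if let start = nodStart {
            if time.timeIntervalSince(start) > maxDuration {
                nodStart = nil                    // too slow, discard
            } else if pitch < returnThreshold {
                nodStart = nil                    // head returned: nod complete
                return true
            }
        }
        return false
    }
}
```

A head shake can be recognized the same way on the yaw axis, looking for alternating left and right swings.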

What we learned

Immersive technology should not be exclusive to visual media. By exploring audio AR, our team learned that it is possible to create menus, icons, and conversations with users without any graphical visualization.

What’s next for World Whispers

After further testing of our build, our team is eager to improve how we start a conversation with users and engage them in experiences that leverage their surroundings. We will also compile a survey of the UX challenges we faced in audio AR and how we solved them.

Update! I was selected to pitch some more use cases at SXSW and won a Developer Award Grant from Bose Corporation and Capital Factory!

Follow the story on LinkedIn!