Key Spatial Principle: Contextual Environments
Experiment Duration: 8 Weeks
Reading Time: 3-4 Mins
How can we use spatial computing to transform our environment into a canvas for creation? Our curiosity drove us to conduct our next experiment.
We were keen to explore how the environment within spatial experiences could be used for something other than gaming (which we had already touched on in a previous experiment). This felt like the perfect opportunity to test how far we could push contextual spaces of a different kind.
Instead of a visible reaction, what about an audible one? We set out to create a prototype built around environmentally-impacted audio. Strings acts as a mixed reality musical instrument that users can place and play wherever they like in their space. Dropped into the real-world environment, groups of virtual strings vary in tonal value depending on their physical placement - so users can position them in different areas of their room to generate different types of sound.
We designed this experiment for Meta’s Quest headsets as well as Apple Vision Pro - not only were we keen to develop one of our first apps for the Vision Pro, but this approach also served as a useful exercise in understanding the capabilities of each headset and how we could approach parallel workstreams.
We used room mesh to allow users to anchor the virtual strings to their environment, and hand gestures to enable them to pluck the strings and generate sounds. The audio is procedurally generated based on the length of the strings themselves, as well as how the user chooses to interact with them - a light pluck generates a short note, while pinching and pulling the string back further before letting go generates a larger audible reaction with a more varied note range. We also introduced an open palm gesture to allow the strings to be treated more like a theremin - controlled without physical contact.
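To make that mapping concrete, here's a minimal sketch of how pitch and envelope could be derived from a string's length and the pull-back distance of a pluck - assuming an ideal-string model and illustrative constants, not the prototype's actual tuning or parameter names:

```python
import math

# A minimal sketch (not the production code) of deriving a note from a
# string's physical length and the user's pull-back distance.
# The constants here are illustrative assumptions.

WAVE_SPEED = 220.0  # assumed propagation-speed constant, tuned for a pleasant range (m/s)

def note_frequency(string_length_m: float) -> float:
    """Ideal-string model: pitch falls as the string gets longer (f = v / 2L)."""
    return WAVE_SPEED / (2.0 * string_length_m)

def pluck_envelope(pull_back_m: float) -> tuple:
    """Map how far the string was pulled back to loudness and note duration.

    A light pluck -> quiet, short note; a deep pull -> louder, longer note.
    """
    strength = min(pull_back_m / 0.15, 1.0)   # normalise against an assumed 15 cm max pull
    amplitude = 0.2 + 0.8 * strength          # linear loudness ramp
    duration_s = 0.3 + 1.2 * strength         # longer decay for harder plucks
    return amplitude, duration_s

# Example: a 40 cm string pulled back 10 cm
freq = note_frequency(0.40)        # ~275 Hz
amp, dur = pluck_envelope(0.10)
print(f"{freq:.0f} Hz, amplitude {amp:.2f}, {dur:.2f} s")
```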
It was crucial that our mixed reality instrument behaved the way users would expect in order to deliver the most realistic experience possible, including making sure it sounded consistent and satisfying throughout - for instance, as with any stringed instrument, it only makes a sound once a string is plucked and released. Our team learnt a lot about music theory in the process. Once again, we used Bezi for early prototyping - even though it doesn't have specific functionality for integrating sound, it did help us validate that we needed to keep our string interactions close to real-world expectations, e.g. pluck, pull, grab. We created a separate application to test what kind of audio reactions we could generate from our input parameters in order to get it just right.
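As an illustration of that pluck-and-release rule, a small per-string state machine along these lines would enforce it - the class and event names here are hypothetical, not taken from the prototype:

```python
from enum import Enum, auto
from typing import Optional

# Illustrative sketch of the "no sound until release" behaviour described
# above, assuming one state machine per virtual string.

class StringState(Enum):
    IDLE = auto()
    GRABBED = auto()   # user is pinching / pulling the string

class VirtualString:
    def __init__(self):
        self.state = StringState.IDLE
        self.pull_back = 0.0

    def on_pinch(self, pull_back_m: float) -> None:
        # Pulling the string back is silent; we only track how far it travelled.
        self.state = StringState.GRABBED
        self.pull_back = pull_back_m

    def on_release(self) -> Optional[float]:
        # The note fires only on release, mirroring a real stringed instrument.
        if self.state is not StringState.GRABBED:
            return None
        self.state = StringState.IDLE
        released = self.pull_back
        self.pull_back = 0.0
        return released   # feed this into the amplitude/duration mapping above
```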
Applying spatially mapped VFX to the experience allowed us to visualise the audio output as users interacted, with rippling currents generated from each interaction. These ripples are mapped to the room mesh, revealing a user's room geometry in a fun and immersive way. It also worked as a nice counterpart to the physical environment shaping the audio - in turn, the mixed reality audio visualisation transformed the appearance of the physical environment.
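On the headset this kind of effect would most likely live in a shader, but the underlying falloff maths might look roughly like this - an expanding ring whose brightness is evaluated per mesh vertex (the mesh data and constants below are placeholders, not values from the prototype):

```python
import numpy as np

# Rough sketch of the room-mesh ripple idea: an interaction point emits an
# expanding ring, and each mesh vertex lights up as the ring front passes it.

RIPPLE_SPEED = 2.0   # assumed expansion speed of the ring (m/s)
RING_WIDTH = 0.25    # how "thick" the visible wavefront is (m)

def ripple_intensity(vertices: np.ndarray, origin: np.ndarray, t: float) -> np.ndarray:
    """Per-vertex brightness for a ripple that started at `origin` t seconds ago."""
    dist = np.linalg.norm(vertices - origin, axis=1)   # distance of each vertex from the pluck
    front = RIPPLE_SPEED * t                           # how far the ring has travelled so far
    # Vertices near the current ring front glow; everything else stays dark.
    return np.clip(1.0 - np.abs(dist - front) / RING_WIDTH, 0.0, 1.0)

# Example: three mesh vertices, ripple origin at the plucked string, 0.5 s in
verts = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 2.0], [1.5, 0.0, 0.0]])
print(ripple_intensity(verts, origin=np.array([0.0, 0.0, 0.0]), t=0.5))
```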
We loved the idea that we could use the size and complexity of the room mesh to impact the resonance of the tones as well - for instance, a long hallway would generate more echo and reverb compared to a small, cube-shaped room like a living room. We were able to apply this to the Quest build, although the Apple Vision Pro (at the time of prototyping) did not support this feature.
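As a hedged sketch of how that could work, Sabine's classic reverberation formula (RT60 = 0.161 * V / (S * a)) turns room-mesh volume and surface area into a decay time; treating a busier, more furnished mesh as more absorptive is our illustrative assumption here, not the Quest build's documented behaviour:

```python
# Sketch of mapping room-mesh geometry to a reverb tail via Sabine's formula.
# Absorption coefficients are illustrative guesses, not measured values.

def reverb_time_rt60(volume_m3: float, surface_area_m2: float,
                     avg_absorption: float) -> float:
    """Seconds for sound to decay by 60 dB, estimated from room geometry."""
    return 0.161 * volume_m3 / (surface_area_m2 * avg_absorption)

# Long, bare hallway (hard surfaces, low absorption) vs. a furnished living room
hallway = reverb_time_rt60(volume_m3=2.5 * 2.5 * 15.0,
                           surface_area_m2=2 * (2.5 * 2.5 + 2 * 2.5 * 15.0),
                           avg_absorption=0.08)
living_room = reverb_time_rt60(volume_m3=4.0 * 4.0 * 2.5,
                               surface_area_m2=2 * (4.0 * 4.0 + 2 * 4.0 * 2.5),
                               avg_absorption=0.35)
print(f"hallway RT60 ~ {hallway:.2f} s, living room RT60 ~ {living_room:.2f} s")
```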
Understanding how our experience of sound can be transformed through spatial computing is incredibly exciting. The new mechanisms we’ve developed could evolve into a whole new way of creating music or unlocking other forms of creativity, all linked to an individual’s space. This idea of physical space as a canvas for creation is powerful - we can’t wait to explore other opportunities across the full range of the creative spectrum.