The ‘how’ and the ‘what’

Last week I spent my evenings creating a digital “musical instrument” for a unique and rare person I met a while ago. My objective was to build an instrument around her passion for chemistry, music and gaming, so I decided to create something that could turn as many materials and fluids as possible into sound. After some thought, I came up with the CatheBoard: a plastic box containing a Makey Makey board which, through a Max patch, triggers sounds when you touch any electroconductive material connected to it. It can also be used as a game controller.
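To give a flavour of the triggering logic, here is a minimal Python sketch of it. The real CatheBoard patch runs in Max, which is visual rather than textual, so this is only an illustration: it assumes the Makey Makey's default keyboard mapping (arrow keys and space) and a few hypothetical sample files.

    import pygame

    # Hypothetical mapping from the Makey Makey's default key outputs to sample files.
    # The actual CatheBoard patch in Max does the equivalent of this lookup.
    KEY_TO_SAMPLE = {
        pygame.K_UP: "beaker.wav",
        pygame.K_DOWN: "pipette.wav",
        pygame.K_LEFT: "flask.wav",
        pygame.K_RIGHT: "stirrer.wav",
        pygame.K_SPACE: "burette.wav",
    }

    def main():
        pygame.init()
        pygame.mixer.init()
        pygame.display.set_mode((200, 200))  # a window is needed to receive key events
        sounds = {key: pygame.mixer.Sound(path) for key, path in KEY_TO_SAMPLE.items()}

        running = True
        while running:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    running = False
                elif event.type == pygame.KEYDOWN and event.key in sounds:
                    # Touching a conductive material closes a circuit on the Makey Makey,
                    # which sends a key press; here that key press triggers a sample.
                    sounds[event.key].play()
        pygame.quit()

    if __name__ == "__main__":
        main()

Because the Makey Makey simply behaves as a USB keyboard, the same touches can drive a game instead of (or alongside) the sampler, which is how the box doubles as a game controller.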
While building the CatheBoard, I had a few thoughts about interaction design, learnability, usability in different practical and social contexts, and creative possibilities. Most of these questions can be answered by thinking about how we do things and how we interact with them (Dourish, 2014), while others require thinking about what we want to interact with.

Before getting to the point, it is worth saying that everything in this blog post is based on my own knowledge and experience of interacting with the real and virtual worlds, so if you interact with or think about them differently, please post a comment below!

The how

How we interact with things is something I started exploring with two works on sound interaction design in mixed realities (work 1, work 2). In both works, I used paper and water as “objects” to interact with. The interaction design is informed by how we grab, throw, or crumple paper, and how we handle a glass of water, in the real world. Taking that into account, I tried to design and develop a system that replicates, within a virtual environment, the same auditory feedback obtainable through the same interaction performed in the real world.

After analysing the gestural interaction in the real world, I tried to generate similar audio feedback through a similar gestural interaction within a virtual environment.
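As an illustration of the first step, here is a rough Python sketch of how a gesture's energy might be estimated from raw sensor data. The accelerometer assumption and the numbers are mine, not what the actual Myo or XTH Sense patches do; it only shows the kind of feature that can then drive the audio feedback.

    import math

    def gesture_intensity(accel_samples):
        """Estimate how energetic a gesture is from a short window of
        accelerometer readings, given as (x, y, z) tuples in g.

        Illustrative only: the real patches extract features from whatever
        signals the Myo or XTH Sense expose, not necessarily acceleration.
        """
        if not accel_samples:
            return 0.0
        # Mean deviation of the acceleration magnitude from gravity (1 g)
        # is used here as a rough measure of gestural energy.
        magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in accel_samples]
        energy = sum(abs(m - 1.0) for m in magnitudes) / len(magnitudes)
        return min(1.0, energy)  # normalised to 0..1 for the audio mapping stage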

The what

The three videos above were realised with three different hardware and software combinations: the first experiment used the Myo armband and Integra Live; the second the XTH Sense and Pd; and the third the Makey Makey and Max. Fascinating results emerged from these three experiments:

  • The same auditory feedback can be experienced through the same gestural interaction, even when the cited technologies are used in different combinations.
  • The algorithm used to generate the audio feedback followed the same principles in all three experiments.
  • The gestural interaction remained directly linked to the auditory feedback in every case, even when the technologies were combined differently.
  • The only difference between the three experiments was the audio file used to feed the algorithm (see the sketch after this list).
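
A minimal sketch of that last point, with hypothetical names and file paths: the gesture-to-sound mapping stays the same, and each experiment only swaps the sample it is fed.

    def make_instrument(sample_path):
        """Bind one audio file to a fixed gesture-to-sound mapping.

        Hypothetical illustration: the mapping below never changes between
        experiments; only sample_path does.
        """
        def on_gesture(intensity):
            intensity = max(0.0, min(1.0, intensity))
            return {
                "sample": sample_path,
                "amplitude": intensity,            # stronger gesture -> louder feedback
                "playback_rate": 0.5 + intensity,  # stronger gesture -> faster playback
            }
        return on_gesture

    # The three experiments would differ only in the sample each instrument is fed,
    # e.g. paper sounds for the Myo + Integra Live setup and water sounds for the
    # XTH Sense + Pd setup (file names are placeholders).
    paper_instrument = make_instrument("paper_crumple.wav")
    water_instrument = make_instrument("water_pour.wav")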

The outcomes of these three experiments lead me to say that, in some cases, the how and the what may be the key to sound interaction design, rather than the technology through which we try to realise it.
