This seminar will focus on the application of Human-Computer Interaction principles to music, so as to encourage a natural and embodied interaction with interactive audiovisual software during performance. The lecture will present different commercial technologies commonly used in industrial, gaming and VR applications, and show how these have been transformed into musical instruments through a User-Centred Design process and by taking advantage of Interactive Machine Learning.
Audio Mostly is a conference which brings together musicians, sound designers, and technologists to discuss their latest findings. This year the theme of the conference was Augmented and Participatory Sound and Music Experiences. This theme was explored through eight oral sessions, three poster and demo sessions, four workshops, two concerts and an unforgettable dinner on the river Thames.
The conference opened with a fascinating talk by Luca Turchet about the Hyper-Mandolin, an augmented mandolin that enhances live-electronics sound manipulation through sensing technology applied to the mandolin’s body.
On the second day, Rebecca Fiebrink gave a talk on how machine learning can support human musical practices. Her talk raised numerous questions and sparked an interesting debate about the advantages and disadvantages of using machine learning.
The demo and poster sessions showed the incredible advances in music and sonic interaction design made by the research community. Particularly remarkable was the demo presentation of the Mixed Reality MIDI Keyboard by John Desnoyers-Stewart, David Gerhard and Megan Smith. Gestural interaction was a topic widely explored during the conference. SoundThimble, a real-time gesture sonification framework, was presented by Grigore Burloiu, Stefan Damian, Valentin Mihai and Bogdan Golumbeanu. These two works were remarkable; however, I believe they struggle to support artistic practice because of the limited affordability of the technology involved. I also had a very interesting discussion on embodied interaction with SoundThimble’s authors.
On the third day, Eleanor Turner and I performed The Wood And The Water at Oxford House in Bethnal Green. The concert was shared with four other performances (see program), which reflected the application of the themes touched on during the conference.
This has been a long, but very fruitful academic year!
I've been working on the development of MyoSpat and Myo Mapper. Music and dance performances and workshops have been realised across Europe, and they will soon be presented at the International Computer Music Conference (ICMC) in Shanghai and the Audio Developer Conference (ADC) in London.
Moreover, I've been working on the HarpCI project, which made possible the realisation of The Wood And The Water. This has been performed across the UK and will soon be performed at the Electronic Music Week at the Shanghai Conservatoire. More details about this project will be published soon in the Contemporary Music Review journal, Special Issue on the 21st Century Harp (Taylor & Francis).
However, my funds are running low, and I need your help to finish my current projects and start new ones. I have therefore decided to launch a Patreon page to ask for your help!
Please support me in creating new interactive systems for artistic performances, sound art installations and for teaching.
The workshop was aimed at exploring "traditional" MIDI devices and the latest interactive gestural devices. We looked at their usability during musical performance, their limitations, and how these devices enhance expressivity and audience engagement during performance.
Integra Live was adopted as the audio engine for quickly designing and implementing audio interactions. Hardware included Korg's nanoKONTROL2, ROLI's BLOCKS, mobile phones, the Leap Motion and the Myo armband. After exploring direct mapping techniques using Integra Live, we explored techniques to easily implement machine learning algorithms using Max and ml.lib. During the following days, MyoSpat was used to extend and apply the concepts presented during the first day. MyoSpat is a hand-gesture-controlled system for manipulating sounds and lighting projections.
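The direct mapping explored in Integra Live boils down to linearly rescaling a sensor reading into a synthesis-parameter range. As a rough illustration only (not the actual Integra Live patch; the parameter names and ranges here are made up), the idea can be sketched in Python:

```python
def direct_map(value, in_min, in_max, out_min, out_max):
    """Linearly rescale a sensor reading into a synthesis-parameter range,
    clamping out-of-range input (the usual behaviour of scale-style objects)."""
    value = max(in_min, min(in_max, value))
    normalised = (value - in_min) / (in_max - in_min)
    return out_min + normalised * (out_max - out_min)

# Illustrative: map a normalised EMG envelope (0.0-1.0) to a filter cutoff in Hz
cutoff = direct_map(0.5, 0.0, 1.0, 200.0, 8000.0)
print(cutoff)  # 4100.0
```

Machine learning techniques such as those in ml.lib replace this hand-written function with a mapping learned from example gesture/parameter pairs.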
Thanks to the support of Berklee College of Music - Valencia and Birmingham Conservatoire, I experienced three amazing days in the city of Valencia and encountered an exciting and innovative music environment.
This weekend Eleanor Turner and I ran the HarpCI workshops at Integra Lab and Cardiff Metropolitan University for the Camac Harp Weekend 2017. It has been an incredible and inspiring weekend with a wide range of harpists, both kids and adults. Eleanor explored "traditional" ways to manipulate sound using guitar pedals. I then introduced the MyoSpat system, a gesture-controlled interactive system for sound and light manipulation. We let attendees practise with both systems. They expressed positive and constructive comments concerning the audiovisual feedback and the usability of the system. Interestingly, the topic of collaborative performance also came up, in which performers can manipulate each other's music.
We are currently looking for beta testers to test and evaluate MyoSpat, a creative sound spatialisation system controllable through hand gestures, developed by our PhD student Balandino Di Donato. The system aims to facilitate the manipulation of spatial sound and light projections. MyoSpat is developed using the Myo armband, Myo Mapper, machine learning implemented using ml.lib, and Pure Data. More info about the MyoSpat system HERE.
We are delighted to invite you to become a beta tester and take part in a user study which aims to evaluate MyoSpat. If you are a musician, you are more than welcome to bring your instrument to explore the system during live performance.
Sonia Sabri has established an international reputation for presenting Kathak dance in a contemporary context. She creates work that is relevant to modern audiences, inspired by Indian and British culture and the rich possibilities that arise when they meet. She was recently awarded the Maker Monday Commission, which will provide support for the creation of an interactive performance. J. Dooley and B. Di Donato are working closely with Sonia on the development of an interactive system which allows a dancer to control sound, visuals and light projections through hand gestures.
The system developed by James and Balandino aims to extend Sonia’s artistic ideas and project them to the audience through auditory and visual elements. Sound, visuals and light projection are controlled through gestures tracked using the Myo armband, Myo Mapper, the rIoT board and Pd. Gestural data are sent to Wekinator, which performs gesture recognition using machine learning. When a gesture is detected, a message is sent to Pure Data, where audio, visuals and light projection are processed.
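The last step of that pipeline, turning a classifier's output vector into a message for Pure Data, can be sketched as follows. This is only an illustration of the idea (the message address, label names and probability values are hypothetical, not the project's actual protocol):

```python
def gesture_to_pd_message(class_probs, labels):
    """Pick the most likely gesture from a classifier output vector
    (e.g. the floats Wekinator emits) and build the message Pd would
    receive: (OSC-style address, gesture label, confidence)."""
    best = max(range(len(class_probs)), key=lambda i: class_probs[i])
    return ("/gesture/detected", labels[best], class_probs[best])

# Illustrative gesture classes from the workshop description
msg = gesture_to_pd_message([0.1, 0.8, 0.1], ["footwork", "leg", "hand"])
print(msg)  # ('/gesture/detected', 'leg', 0.8)
```

In the real system this message would be sent over OSC; here the function simply returns it so the routing logic is visible.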
Insights from a two-day workshop conducted with Sonia at Parkside MediaHouse showed the system to be flexible, allowing the developers to assist Sonia at every step of the creative process. The first day focused on exploring interaction design using machine learning techniques to recognise footwork, leg movements and hand gestures. During the second day, the mapping of gestural data to sound generation and elaboration, visuals and light projections was developed. In addition to proving the stability and usability of the developed system, this two-day workshop generated a new creative workflow to be considered as a method for working on interactive dance performances.
Conference: Electronic Visualisation and the Arts
Last week I spent my evenings creating a digital “musical instrument” for a unique and rare person I met a while ago. My objective was to create a musical instrument based on her passion for chemistry, music and gaming. So I decided to create something which could transform as many materials and fluids as possible into sound. After some thought, I came up with the idea of building the CatheBoard. It is a plastic box containing a Makey Makey board which, through a Max patch, triggers sounds when any electroconductive material connected to it is touched. At the same time, it can also be used as a game controller.
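Because the Makey Makey registers touches as ordinary key presses, the Max patch only has to route key events to sample players. A minimal Python sketch of that routing (the key bindings and sample file names here are purely illustrative, not the actual CatheBoard patch):

```python
# Hypothetical key-to-sample routing: a Makey Makey touch arrives as a key press
KEY_TO_SAMPLE = {
    "w": "bubbling.wav",
    "a": "fizz.wav",
    "s": "metal_ping.wav",
    "d": "glass_tap.wav",
}

def trigger(key):
    """Return the sample a key press should play, or None for unmapped keys."""
    return KEY_TO_SAMPLE.get(key)

print(trigger("w"))  # bubbling.wav
```

The same table is what lets the box double as a game controller: the keys can just as well be routed to game inputs instead of samples.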
While building the CatheBoard, I had a few thoughts about interaction design, learnability, usability in different practical and social contexts, and creative possibilities. Most of these questions can be answered by thinking about how we do things and how we interact with them (Dourish P., 2014), while others can be answered by taking into account what we want to interact with.
Before getting to the point, it is worth knowing that everything written in this blog post is based on my own knowledge and experience of interacting with the real and virtual worlds; so if you interact with, or think differently about, any of this, please post a comment below!
The how we interact with things is something I started exploring with two works on sound interaction design in mixed realities (work 1, work 2). In these two works, I used paper and water as “objects” to interact with. The interaction design is informed by how we grab, throw, or crumple paper, and how we interact with a glass of water in the real world. Taking that into account, I tried to design and develop a system that replicates, within a virtual environment, the same auditory feedback obtainable through the same interaction performed in the real world.
After analysing the gesture interaction in the real world, I tried to generate similar audio feedback through a similar gestural interaction within a virtual environment.
The three videos above were realised using three different hardware and software combinations. The first experiment was accomplished using the Myo armband and Integra Live; the second, the XTH Sense and Pd; and the third, the Makey Makey and Max. Fascinating results emerged from these three experiments:
- It is possible to experience the same auditory feedback through the same gestural interaction, even when using the cited technologies in different combinations.
- The algorithm used to generate the audio feedback in the three experiments followed the same principles.
- The gestural interaction remained directly linked to the auditory feedback in all cases, even when combining the cited technologies differently.
- The only difference between the three experiments is the audio file used to feed the algorithm.
The outcomes of these three different experiments lead me to say that, in some cases, the how and the what may be the key to sound interaction design, rather than the technology through which we try to realise it.