Performing at Audio Mostly 2017

Posted in Performance, Research

Just back from a fantastic week in London attending the Audio Mostly conference, held at Queen Mary University of London by the Centre for Digital Music.

Audio Mostly is a conference that brings together musicians, sound designers and technologists to share their latest work. This year the theme of the conference was Augmented and Participatory Sound and Music Experiences, explored through eight oral sessions, three poster and demo sessions, four workshops, two concerts and an unforgettable dinner on the River Thames.

The conference opened with a fascinating talk by Luca Turchet about the Hyper-Mandolin, an augmented mandolin that enhances live electronic sound manipulation through sensing technology applied to the instrument's body.

On the second day, Rebecca Fiebrink gave a talk on how machine learning can support human musical practices. Her talk raised numerous questions and sparked an interesting debate about the advantages and disadvantages of using machine learning.

The demo and poster sessions showed the incredible advances in music and sonic interaction design made by the research community. Particularly remarkable was the demo of the Mixed Reality MIDI Keyboard by John Desnoyers-Stewart, David Gerhard and Megan Smith. Gestural interaction was a topic widely explored during the conference: SoundThimble, a real-time gesture sonification framework, was presented by Grigore Burloiu, Stefan Damian, Valentin Mihai and Bogdan Golumbeanu. Both works were remarkable, although I believe they struggle to support artistic practice because of the poor affordability of the technology they employ. Also very interesting was my discussion on embodied interaction with the SoundThimble authors.

On the third day, Eleanor Turner and I performed The Wood And The Water at Oxford House in Bethnal Green. The concert was shared with four other performances (see programme), which reflected applications of the themes explored during the conference.

Patreon page launch!

Posted in Education, Installations, Performance, Publications, Research, Software Development, Sound Engineering

This has been a long, but very fruitful academic year!

I've been working on the development of MyoSpat and Myo Mapper. Music and dance performances and workshops have been realised across Europe, and this work will soon be presented at the International Computer Music Conference (ICMC) in Shanghai and the Audio Developer Conference (ADC) in London.

 

Moreover, I've been working on the HarpCI project, which made possible the realisation of The Wood And The Water. The piece has been performed across the UK and will soon be performed at the Electronic Music Week at the Shanghai Conservatoire. More details about this project will be published soon in the Contemporary Music Review journal, Special Issue on the 21st Century Harp (Taylor & Francis).

However, my funds are running low, and I need your help to finish my current projects and start new ones. I have therefore decided to launch a Patreon page to ask for your support!

Please support me in creating new interactive systems for artistic performances, sound art installations and teaching.

Music Interaction Design (MiXD) workshop

Posted in Education, Publications

Just back from Berklee College of Music - Valencia, where I delivered a workshop focused on Music Interaction Design using Integra Live.

The workshop aimed to explore both "traditional" MIDI controllers and the latest interactive gestural devices. We looked at their usability during musical performance, their limitations, and how these devices enhance expressivity and the level of engagement with the audience during performance.

Integra Live was adopted as the audio engine for quickly designing and implementing audio interactions. Hardware included Korg's nanoKONTROL2, ROLI's BLOCKS, mobile phones, the Leap Motion and the Myo armband. After covering direct mapping techniques in Integra Live, we moved on to techniques for easily implementing machine learning algorithms using Max and ml.lib. During the following days, MyoSpat was used to extend and apply the concepts presented during the first day. MyoSpat is a hand-gesture-controlled system for manipulating sound and light projections.
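To give a flavour of what such a direct mapping looks like outside of a visual patching environment, here is a minimal Python sketch using the python-osc library: an incoming sensor value is scaled and forwarded to an audio engine as a parameter. The OSC addresses, ports and parameter ranges are illustrative assumptions, not Myo Mapper or Integra Live defaults.

```python
# Minimal direct-mapping sketch: gestural data arriving over OSC is scaled and
# forwarded to an audio engine. Addresses and ports are illustrative assumptions.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

audio_engine = SimpleUDPClient("127.0.0.1", 9001)  # hypothetical audio engine port

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map an incoming sensor value to a parameter range."""
    value = min(max(value, in_lo), in_hi)
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def on_acceleration(address, x, y, z):
    # Direct mapping: vertical acceleration controls a filter cutoff (assumed address).
    cutoff = scale(y, -1.0, 1.0, 200.0, 8000.0)
    audio_engine.send_message("/filter/cutoff", cutoff)

dispatcher = Dispatcher()
dispatcher.map("/myo/acc", on_acceleration)  # assumed incoming address

BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()
```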

Thanks to the support of Berklee College of Music - Valencia and Birmingham Conservatoire, I experienced three amazing days in the city of Valencia and got to know an exciting and innovative music environment.

HarpCI Workshops

Posted in Education, Research

This weekend Eleanor Turner and I ran the HarpCI workshops at Integra Lab and Cardiff Metropolitan University for the Camac Harp Weekend 2017. It has been an incredible and inspiring weekend with a wide range of harpists, both children and adults. Eleanor explored "traditional" ways to manipulate sound using guitar pedals. I then introduced the MyoSpat system, a gesture-controlled interactive system for sound and light manipulation. We let attendees practise with both systems, and they expressed positive and constructive comments concerning the audiovisual feedback and the usability of the system. Interestingly, the topic of collaborative performance, in which performers can manipulate each other's music, also came up.

 

The Wood And The Water

Posted in Performance
After four months of hard work, the main outcome of the HarpCI project sees the light.
 
The Wood And The Water, for harp and electronics, was composed by Eleanor Turner using MyoSpat. The performance explores principles of human-computer interaction in the field of musical performance. The performer, Eleanor Turner, elaborates the auditory and visual feedback through hand gestures. These elaborations allow both her and the audience to experience the acoustic space, and the sounds living in it, as tangible.
 
Although the video is mastered for stereophonic reproduction, live performances of the piece take place using a quadraphonic audio system. Thanks to the more accurate spatialisation, the audience can enjoy a fully immersive experience made of light and sound projections.
  
More videos of previous live concerts will be released soon!
 

 

Call for beta testers

Posted in Research

We are currently looking for beta testers to evaluate MyoSpat, a creative sound spatialisation system controllable through hand gestures, developed by our PhD student Balandino Di Donato. The system aims to facilitate the manipulation of spatial sound and light projections. MyoSpat is developed using the Myo armband, Myo Mapper, machine learning implemented with ml.lib, and Pure Data. More info about the MyoSpat system is available HERE.

We are delighted to invite you to become a beta tester and take part in a user study which aims to evaluate MyoSpat. If you are a musician, you are more than welcome to bring your instrument to explore the system during live performance.

This user study is approved by the Birmingham Conservatoire Assessment Unit and supported by Integra Lab, Birmingham Conservatoire, and Birmingham City University.

BECOME A BETA TESTER

Please contact Balandino here to become a beta tester!

 

 

Eleanor Turner performing using MyoSpat

Dancing with Sound and Light

Posted in Performance, Research, Software Development

Integra Lab is happy to announce a new project! James Dooley and Balandino Di Donato are currently collaborating with Sonia Sabri on an interactive audiovisual dance performance.

Sonia Sabri has established an international reputation for presenting Kathak dance in a contemporary context. She creates work relevant to modern audiences, inspired by Indian and British culture and the rich possibilities that arise when they meet. She was recently awarded the Maker Monday Commission, which will support the creation of an interactive performance. James and Balandino are working closely with Sonia on the development of an interactive system that allows a dancer to control sound, visual and light projections through hand gestures.

The system developed by James and Balandino aims to extend Sonia's artistic ideas and project them to the audience using auditory and visual elements. Sound, visual and light projections are controlled through gestures tracked using the Myo armband, Myo Mapper, the rIoT board and Pd. Gestural data are sent to Wekinator, which performs gesture recognition using machine learning. As each gesture is detected, a message is sent to Pure Data, where the audio, visual and light projections are processed.
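For those curious about the plumbing, the sketch below shows roughly how such a pipeline can be wired up in Python with python-osc: incoming gesture features are forwarded to Wekinator, and the recognised gesture class is relayed to Pure Data. Wekinator's documented defaults are assumed (inputs on port 6448 at /wek/inputs, outputs on port 12000 at /wek/outputs); the feature address and the Pd port are hypothetical, and this is only a sketch of the idea, not the actual system.

```python
# Rough sketch of the gesture-recognition pipeline described above.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

wekinator = SimpleUDPClient("127.0.0.1", 6448)   # Wekinator input port (default)
pure_data = SimpleUDPClient("127.0.0.1", 9002)   # hypothetical Pd [netreceive] port

def on_features(address, *features):
    # Forward a feature vector (e.g. orientation and acceleration) to Wekinator.
    wekinator.send_message("/wek/inputs", list(features))

def on_wekinator_output(address, *outputs):
    # Treat Wekinator's first output as a gesture class and pass it on to Pd,
    # which maps it to audio, visual and light processing.
    pure_data.send_message("/gesture", int(round(outputs[0])))

dispatcher = Dispatcher()
dispatcher.map("/myo/features", on_features)     # assumed incoming address
dispatcher.map("/wek/outputs", on_wekinator_output)
BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()
```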

Findings from a two-day workshop conducted with Sonia at Parkside MediaHouse showed the good flexibility of the system, which allowed the developers to assist Sonia at every step of the creative process. The first day focused on exploring interaction design using machine learning techniques to recognise footwork, leg movements and hand gestures. During the second day, the mapping of gestural data to sound generation and elaboration, visual and light projections was developed. In addition to proving the stability and usability of the system, this two-day workshop generated a new creative workflow to be considered as a method for working on interactive dance performances.

Source: http://integra.io/projects/dancing-with-sound-and-light/

Approaches to Visualising the Spatial Position of ‘Sound-objects’

Posted in Publications, Research
Title: Approaches to Visualising the Spatial Position of ‘Sound-objects’
Authors: Jamie Bullock & Balandino Di Donato
DOI: 10.14236/ewic/EVA2016.4 
Conference: Electronic Visualisation and the Arts

Abstract

In this paper we present the rationale and design for two systems (developed by the Integra Lab research group at Birmingham Conservatoire) implementing a common approach to interactive visualisation of the spatial position of ‘sound-objects’. The first system forms part of the AHRC-funded project ‘Transforming Transformation: 3D Models for Interactive Sound Design’, which entails the development of a new interaction model for audio processing whereby sound can be manipulated through grasp as if it were an invisible 3D object. The second system concerns the spatial manipulation of ‘beatboxer’ vocal sound using handheld mobile devices through already-learned physical movement. In both cases a means to visualise the spatial position of multiple sound sources within a 3D ‘stereo image’ is central to the system design, so a common model for this task was therefore developed. This paper describes the ways in which sound and spatial information are implemented to meet the practical demands of these systems, whilst relating this to the wider context of extant, and potential future methods for spatial audio visualisation.

DEMOS

The ‘how’ and the ‘what’

Posted in Research

Last week I spent my evenings creating a digital "musical instrument" for a unique and rare person I met a while ago. My objective was to create a musical instrument based on her passion for chemistry, music and gaming, so I decided to create something which could transform as many materials and fluids as possible into sound. After a few thoughts, I came up with the idea of building the CatheBoard: a plastic box containing a Makey Makey board which, through a Max patch, allows sounds to be triggered by touching any electroconductive material connected to it. At the same time, it can also be used as a game controller.
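For the curious, here is a minimal sketch of the CatheBoard idea, written in Python with pygame rather than Max: the Makey Makey is assumed to send its default key presses, and each key triggers a sound sample. The key bindings and sample filenames are hypothetical, and this is only an illustration of the principle, not the actual patch.

```python
# Minimal CatheBoard-style sketch: conductive objects wired to a Makey Makey send
# ordinary key presses, and each key triggers a sound sample.
import pygame

pygame.init()
pygame.display.set_mode((200, 100))   # a window is needed to receive key events
pygame.mixer.init()

# The Makey Makey's default outputs include the arrow keys and the space bar.
sounds = {
    pygame.K_UP: pygame.mixer.Sound("water.wav"),      # hypothetical sample files
    pygame.K_DOWN: pygame.mixer.Sound("metal.wav"),
    pygame.K_SPACE: pygame.mixer.Sound("fruit.wav"),
}

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN and event.key in sounds:
            sounds[event.key].play()   # touching a connected object triggers its sound

pygame.quit()
```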
While building the CatheBoard, I had a few thoughts about interaction design, learnability, usability in different practical and social contexts, and creative possibilities. Most of these questions can be answered by thinking about how we do things and how we interact with them (Dourish P., 2014), while others require taking into account what we want to interact with.

Before getting to the point, it is worth saying that everything written in this blog post is based on my knowledge and experience of interacting with the real and virtual world; if you interact with, or think about, these things differently, please post a comment below!

The how

How we interact with things is something I started exploring with two works on sound interaction design in mixed realities (work 1, work 2). In these two works, I used paper and water as "objects" to interact with. The interaction design is informed by how we grab, throw or crumple paper, and how we interact with water in a glass, or with a glass of water, in the real world. Taking that into account, I tried to design and develop a system that replicates within a virtual environment the same auditory feedback obtainable through the same interaction performed in the real world.

After analysing the gestural interaction in the real world, I tried to generate similar audio feedback through a similar gestural interaction within a virtual environment.

The what

The three videos above were realised using three different hardware and software combinations. The first experiment was accomplished using the Myo armband and Integra Live; the second, the XTH Sense and Pd; and the third, the Makey Makey and Max. From these three experiments, fascinating results emerged:

  • It is possible to obtain the same auditory feedback through the same gestural interaction, even when using the cited technologies in different combinations.
  • The algorithm used to generate the audio feedback in the three experiments followed the same principles.
  • The gestural interaction was directly linked to the auditory feedback in all cases, even when combining the cited technologies differently.
  • The only difference between the three experiments was the audio file used to feed the algorithm.

The outcomes from these three experiments lead me to say that, in some cases, the how and the what may be the key to sound interaction design, rather than the technology through which we try to realise it.

Towards grabbing and throwing sounds away

Posted in Research, Software Development

Over the last year, I've been working on the development of an interactive audio spatialiser designed to move a vocal sound during musical performance. I approached this by empowering the user to position the sound within an acoustic space by orienting the arm towards that position (read more about it). This was just a starting point towards the design and development of an interactive audio spatialiser for moving and interacting with sound, here called a sound object, through typical human physical behaviour. Contrary to P. Schaeffer's definition of the sound object, for me the analogy with the sound-producing object is of particular importance, because the nature of the sound, and the most common human interactions and behaviours with the sound-producing object, inform the sound interaction design.

My current objective is to allow users to grab a sound object, move it around, and drop or throw it away within a virtual acoustic space. The grab-and-throw metaphor within a 3D virtual environment has been explored by Mine, M.R. (1995) and Robinett, W. and Holloway, R. (1992). I was particularly inspired by the select-grab-manipulate-release process described in Mine, M.R. et al. (1997) and Steinicke and Hinrichs (2006), and also applied in the DigiTrans project, to realise a system for grabbing, moving, and dropping or throwing sound objects away. To achieve my objective, I proceeded in the following way:

First, I established the target hand poses for the grabbing, throwing and dropping gestures. The hand pose for the grabbing gesture is a fist, and the hand pose for throwing and dropping is spread fingers.

Grabbing hand pose.
Dropping/throwing hand pose.

These hand poses have been chosen for two main reasons. The most significant is that these are the poses our hand is most likely to assume when we grab, throw or drop an object with one hand only. This makes the system easily discoverable and efficiently learnable (Vatavu R. and Zaiti I., 2013; Omata M. et al., 2000). The second reason is related to the technology I use in my research project to map gestures: the Myo armband. The Myo armband is perfectly able to track these two hand poses, but, as you can imagine, knowing the hand pose alone is not enough to recognise every nuance of a grab, drop or throw gesture when trying to replicate the auditory feedback of a grabbed and then dropped or thrown sound object, taking into account its main features such as trajectory speed, direction and the influence of gravitational force. Thus, I created a model for a Support Vector Machine (SVM) machine learning system fed with the EMG channels' mean absolute value, and the orientation and acceleration deviation of the arm within 3D space.
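As a rough illustration of this kind of classifier, the Python sketch below builds feature vectors from the EMG mean absolute value, orientation and acceleration deviation, and trains a Support Vector Machine with scikit-learn. The feature extraction details and the placeholder training data are illustrative assumptions, not the actual model or dataset used in my system.

```python
# Sketch of an SVM gesture classifier fed with EMG MAV, orientation and
# acceleration deviation. Training data here are random placeholders.
import numpy as np
from sklearn.svm import SVC

def features(emg_window, orientation, acceleration):
    """Build one feature vector from a window of Myo data."""
    mav = np.mean(np.abs(emg_window), axis=0)    # mean absolute value per EMG channel
    acc_dev = np.std(acceleration, axis=0)       # deviation of acceleration over the window
    return np.concatenate([mav, orientation, acc_dev])

# X: one feature vector per labelled gesture example; y: gesture labels.
X = np.array([features(np.random.randn(50, 8),   # 8 EMG channels, placeholder data
                       np.random.randn(4),       # quaternion orientation
                       np.random.randn(50, 3))   # 3-axis acceleration
              for _ in range(30)])
y = np.random.choice(["grab", "throw", "drop"], size=30)

model = SVC(kernel="rbf")
model.fit(X, y)

# At run time, the same feature extraction would be applied to live Myo data:
print(model.predict(X[:1]))
```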

Once I was able to track all gestures properly, I developed the audio part of the system. It consists of a stereo spatialiser which draws trajectories within the auditory scene. Trajectories are established by mapping the machine learning output, orientation and acceleration onto the envelope properties (attack, decay, sustain and release) of the sound object for each of the output audio channels.

Envelope properties
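A much simplified sketch of this mapping is given below: an ADSR envelope is computed per output channel, with the pan position derived from arm orientation and the release time from the speed of the throwing gesture. The mapping functions and ranges are illustrative assumptions, not the actual spatialiser implementation.

```python
# Simplified per-channel ADSR mapping for a stereo trajectory.
import numpy as np

def adsr(attack, decay, sustain, release, length, sr=44100):
    """Return an ADSR amplitude envelope as a numpy array (times in seconds)."""
    a = np.linspace(0.0, 1.0, int(attack * sr), endpoint=False)
    d = np.linspace(1.0, sustain, int(decay * sr), endpoint=False)
    s = np.full(max(length - len(a) - len(d) - int(release * sr), 0), sustain)
    r = np.linspace(sustain, 0.0, int(release * sr))
    return np.concatenate([a, d, s, r])[:length]

def stereo_envelopes(yaw, throw_speed, length):
    """Derive left/right envelopes from arm yaw (-1..1) and gesture speed (0..1)."""
    pan = (yaw + 1.0) / 2.0                  # 0 = hard left, 1 = hard right
    attack = 0.01
    release = max(0.05, 1.0 - throw_speed)   # faster throws die away more quickly
    left = adsr(attack, 0.1, 1.0 - pan, release, length)
    right = adsr(attack, 0.1, pan, release, length)
    return left, right

left, right = stereo_envelopes(yaw=0.3, throw_speed=0.8, length=44100)
```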

 

The video below shows the throwing part only; I will update it once I get the chance to make a new video 😉