Brain Computer Music Interfaces

A brief overview of Brain Computer Interfaces in relation to music. Human brainwaves were first measured by Hans Berger in 1924. His results were initially ignored until Adrian and Matthews confirmed them in their 1934 paper ‘The Berger Rhythm: Potential Changes from the Occipital Lobes in Man’, published in the journal Brain. The first piece of music to use EEG was composed by Alvin Lucier (working with Edmond Dewan) in 1965: ‘Music for Solo Performer’.

Over the years a number of composers have used brainwaves to control music. Richard Teitelbaum, a member of the Italian electronic music group Musica Elettronica Viva, used biological signals (EEG and EKG) to control electronic synthesizers.

In the early 1970s David Rosenboom founded the Laboratory of Experimental Aesthetics at York University in Toronto, where they explored the relationship between aesthetic experience and brainwave activity, producing musical realisations. Many artists and musicians visited and worked there during that time, including John Cage, David Behrman, La Monte Young and Marian Zazeela. Rosenboom produced his album Brainwave Music and published the results of his experiments in ‘Biofeedback and the Arts’ in 1976. He wrote a second book called ‘Extended Musical Interface with the Human Nervous System’.

Roger Lafosse and Pierre Henry used a live performance system called Corticalart on a number of recordings.


Jacques Vidal, a computer science researcher at UCLA, began developing the first direct brain-computer interface using a batch-processing IBM computer. He published a paper in 1973 called ‘Toward Direct Brain-Computer Communication’. A paper from 1998 called ‘Cyberspace Bionics’ provides an overview of the technology and its impact on human life.

In the 1970s Pierre Droste, Andrew Culver and Charles de Mestral formed a Montreal group called Sonde, performing a number of improvisational brainwave concerts.

Between 1990 and 1992, Benjamin Knapp and Hugh Lusted developed the BioMuse, an 8-channel ‘biocontroller’ that analyses muscle movement (EMG), eye movement (EOG), heart (EKG) or brainwave (EEG) signals using non-invasive transdermal electrodes.

Atau Tanaka has used the BioMuse for compositions and performances. Together with Zbigniew Karkowski and Edwin van der Heide, he established Sensorband, a sensor instrument ensemble. In 1996, Scientific American published an article written by Knapp and Lusted about the BioMuse called ‘Controlling Computers with Neural Signals’.

The IBVA system provides ‘interactive control from brainwave activity’ allowing the user to trigger audio, images, software and other hardware devices.

The IBVA inhales brainwaves but exhales a brain-computer interface. Your brainwaves can control everything from sounds that go ping to almost any electronically addressable device.

Head of IBVA Luciana Haill uses the system to control pitch and velocity, like ‘playing a Theremin with your brain’. Other notable musicians that use the IBVA system are Paras Kaul and Towa Tei.

CalArts student Adam Overton used SuperCollider, a custom EEG/EKG device, and a respiration harness and sensor for his project Sitting.Breathing.Beating.[Not]Thinking. The equipment analyses breaths, heartbeats and brainwaves while Adam meditates. The software plays its own data files, which results in noisy, chaotic textures; the signals and movements are then used to manipulate the sound.

The EyeTap Personal Imaging Lab is a ‘computer vision and intelligent image processing research lab focused on the area of personal imaging, mediated reality and wearable computers’, set up in 1998 by Steve Mann. In 2003 they started projects combining music and brainwaves. James Fung‘s Regenerative Brain Wave Music Project explores ‘new physiological interfaces for musical instruments’. At DECONcert1, 48 people were wired up with EEG sensors which were then used to control the sound. By playing back the music created in realtime, a biofeedback loop was formed, with the audience reacting to the sound they were creating. At REGEN3, jazz musicians played music ‘driven and altered by the brainwaves of the audience’. An interesting idea to unite the performer and the audience.

Andrew Brouse used Max/MSP and OpenSoundControl to create InterHarmonium. The project aims to generate music using human brainwaves and then transmit the music over to another location via the internet. You can find a more detailed explanation in the project thesis report ‘The InterHarmonium : An Investigation into Networked Musical Applications and Brainwaves’.

Brouse, working with Eduardo Miranda, who runs the Neuromusic department at the University of Plymouth, developed the BCMI-PIANO. Matlab and Simulink are used to perform power spectrum and Hjorth analyses of EEG signals in realtime to control music.

In order to have greater control over this system, we are developing methods to train subjects to achieve specific EEG patterns in order to play the BCMI-Piano system. We have initial evidence that this can be made possible using a technique commonly known as biofeedback.

The BCMI-Piano is mentioned in a paper by Brouse and Miranda titled ‘Toward Direct Brain-Computer Musical Interfaces‘ from NIME.
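The Hjorth analysis mentioned above reduces an EEG signal to three descriptive parameters: activity (the signal's variance), mobility and complexity. A minimal sketch of that calculation, assuming nothing about the BCMI-Piano's actual code (the function names and test signal here are invented for illustration):

```python
import math

def hjorth(signal):
    """Compute the three Hjorth parameters of a 1-D signal:
    activity (variance), mobility, and complexity."""
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    def diff(xs):
        # First difference, a discrete stand-in for the derivative.
        return [b - a for a, b in zip(xs, xs[1:])]

    d1 = diff(signal)
    d2 = diff(d1)
    activity = variance(signal)
    mobility = math.sqrt(variance(d1) / activity)
    complexity = math.sqrt(variance(d2) / variance(d1)) / mobility
    return activity, mobility, complexity

# For a pure sine wave, complexity is close to 1: by Hjorth's
# definition a sine is the "simplest" possible oscillation.
wave = [math.sin(2 * math.pi * 10 * t / 256) for t in range(256)]
activity, mobility, complexity = hjorth(wave)
```

A real system would compute these over short sliding windows of each EEG channel and map the resulting values to musical parameters.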

A number of homebrew BCI devices are currently in progress; one such example is by Mick Grierson. It is still in the early stages, but there is a short video demo on YouTube.

I didn’t intend to write so much about this subject! Developments in this field are sure to continue advancing rapidly as researchers, companies and hobbyists seek to explore new ways of interaction, whether it’s for music, gaming or general use. Perhaps the key issues to tackle, as Miranda and Brouse point out, are the ‘task of interpreting the meaning of the EEG’ and, secondly, how to create equipment that is more comfortable and portable. It will be interesting to see how the projects develop.


Brainloop, a Brain Computer Interface system, allows a user to control devices by thinking specific motor commands. The system detects sensorimotor electroencephalography (EEG) rhythms when the motor commands are imagined, so a user can navigate Google Earth, and select and manipulate tracks, without physically moving. A detailed paper on the project can be found in the Computational Intelligence and Neuroscience journal.
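Sensorimotor-rhythm BCIs of this kind typically watch for a drop in mu-band (roughly 8–12 Hz) power over the motor cortex when a movement is imagined, a phenomenon called event-related desynchronisation. A minimal sketch of such a detector; the band limits, threshold and function names are illustrative assumptions, not Brainloop's actual algorithm:

```python
import math

def band_power(signal, fs, lo, hi):
    """Power of `signal` in the [lo, hi] Hz band via a naive DFT."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n)
                     for i, x in enumerate(signal))
            im = sum(-x * math.sin(2 * math.pi * k * i / n)
                     for i, x in enumerate(signal))
            power += (re * re + im * im) / (n * n)
    return power

def motor_imagery_detected(window, baseline_power, fs=128, threshold=0.5):
    """Flag imagined movement when mu-band (8-12 Hz) power drops well
    below the resting baseline (event-related desynchronisation)."""
    return band_power(window, fs, 8, 12) < threshold * baseline_power

# Synthetic example: a strong 10 Hz "resting" rhythm vs. a suppressed one.
fs = 128
rest = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
imagined = [0.2 * x for x in rest]  # mu rhythm attenuated during imagery
baseline = band_power(rest, fs, 8, 12)
```

In a working system, a detection like this would be debounced over several windows and then mapped to a discrete command such as "pan left" or "select track".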

Magnetic Musical Training

There are two projects currently under development at MIT’s Hyperinstruments group testing Magnetic Musical Training. The systems provide the user with ‘a kinesthetic preview’, to help them learn the gestures required to play the musical instrument. The project aims to find out whether motor functions can be learnt at a faster and more efficient rate using this system compared to traditional methods.

Graham Grindlay’s project FielDrum uses a drum fitted with electromagnets and permanent magnets which control the pushing and pulling forces on a drumstick. Currently the system has only two states (attract or repel), although they hope to introduce more. Check out the simple video demonstration of the FielDrum in action.

Craig Lewiston’s Trainer Technology project has two streams of development, the Trainer Piano and the Trainer Prototype, both using magnets to control the movement of the user. The Trainer Piano uses an upright piano together with a computer screen which displays visual feedback. The Trainer Prototype uses a glove with embedded magnets to control finger movements. I’m looking forward to reading the results of the tests.


Jens Wunderling developed loopArena, a touchscreen generative music interface. The user has the ability to control up to nine MIDI instruments.

Swept RF Tagging

The Swept RF Tagging device developed by Kai-yuh Hsiao, ‘detects proximity of magnetically resonant tagged objects in field’. The system has been optimised to function as a musical instrument. The following video shows John Zorn using the system in April 2004.

Emonator & The MATRIX

The Emonator was developed jointly by Dan Overholt and Paul Nemirovsky. The project split in two: Paul Nemirovsky continues to work on the project under the name Emonator, whilst Dan Overholt is now developing The MATRIX. Why the split? The MATRIX focuses more on interface design and the development of musical synthesis, whereas the Emonator is used as a gestural controller for the Emonic Environment.

It offers a 3-dimensional interface using a square grid of pushable rods that measure the movement of the hand.


The Squeezables are a set of balls that can be squeezed, stretched and moved in order to produce music. The project was developed by Seum-Lim Gan and Gil Weinberg.

The balls are positioned on a table surface, and each ball contains a sensor block with five force-sensing resistors. The sensor block is also connected to a variable resistor slider located underneath the table surface, which measures the amount of movement when pulling the balls away from the surface.
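A sketch of how readings like these might be mapped to sound: average squeeze pressure across the five resistors could drive loudness, and the pull slider could drive pitch bend. The mapping here is invented for illustration; the actual Squeezables used their own scheme:

```python
def squeeze_to_params(fsr_values, pull):
    """Map the five force-sensing-resistor readings (each 0.0-1.0) and
    the pull-slider position (0.0-1.0) of one ball to two synthesis
    parameters. This mapping is a hypothetical example, not the
    Squeezables' original design.
    """
    pressure = sum(fsr_values) / len(fsr_values)
    amplitude = min(1.0, pressure)      # average squeeze -> volume
    pitch_bend = (pull - 0.5) * 2.0     # centred slider = no bend
    return amplitude, pitch_bend

# A half-squeezed ball resting on the table: moderate volume, no bend.
amp, bend = squeeze_to_params([0.5, 0.5, 0.5, 0.5, 0.5], 0.5)
```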

Magnetic Levitation Haptic Interfaces

An interesting project by Peter Berkelman and Ralph Hollis from 1998. The project is no longer active, but you can check out the current projects from the Microdynamic Systems Laboratory. The Psychophysics of Haptic Interaction studies the way in which users interact with real and virtual haptic worlds.

Here is a simple overview of the Magnetic Levitation Haptic Interface and a more detailed video.


The FlexiGesture dates from 2003 and is a device created to explore the relationship between input gesture and output sound.

For the majority of traditional acoustic instruments, the relationship or mapping between the input gesture and output is predominantly fixed. The advent of digital technology has allowed for the separation between the input and output, and with it the potential of defining new mapping systems.


Murat Konar developed loopqoob, an interactive performance system which uses sensor-equipped cubes to produce sound. A unique music loop is mapped to each side, so the orientation of the cubes determines the music played.

A similar project is Neel Joshi‘s music_blocks, which consists of four wooden blocks, each containing a two-axis photointerrupter tilt sensor and a speaker. Three notes and silence are mapped to the sides of each block, so positioning them in certain ways creates different sound output. The system is controlled via Max/MSP.
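The core of a tilt-cube interface like this is a small lookup: the two-axis tilt reading picks which face is up, and each face triggers a note or silence. A minimal sketch, with the face names, note choices and sensor encoding all invented for illustration:

```python
def face_up(tilt_x, tilt_y):
    """Map a two-axis tilt reading (-1, 0, or +1 per axis, as a
    hypothetical encoding of the photointerrupter output) to a face."""
    if tilt_x == 1:
        return "east"
    if tilt_x == -1:
        return "west"
    if tilt_y == 1:
        return "north"
    if tilt_y == -1:
        return "south"
    return "top"  # both axes level: block resting flat

# Three notes and silence per block, as in music_blocks.
FACE_TO_NOTE = {
    "top": None,    # silence
    "north": 60,    # MIDI middle C
    "south": 64,    # E
    "east": 67,     # G
    "west": None,   # silence
}

def note_for(tilt_x, tilt_y):
    """Return the MIDI note for the current orientation, or None."""
    return FACE_TO_NOTE[face_up(tilt_x, tilt_y)]
```

In the actual system this mapping lives in a Max/MSP patch rather than code like this, but the principle is the same: orientation is quantised to a face, and the face indexes into a fixed set of notes or loops.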
