The DIMI synthesizers were designed by Finnish electronic music pioneer Erkki Kurenniemi in 1970. He created a number of early electronic instruments using original control methods. The DIMI-A, the first in the range, stood for ‘Digital Music Instrument – Associative Memory’ and was essentially an early sampler. The DIMI-O, or ‘Optical Organ’, displayed musical notes on a screen and also had a video camera that could be used to convert movements into sound. The DIMI-S, or ‘Sexophone’, was an instrument played by four performers, each wearing handcuffs and wires. Musical tones were generated as the players touched each other: the electrical resistance between them was measured, and ‘with increasing skin moisture and contact area, the intensity of the music increased’!
The DIMI-T, or ‘Electroencephalophone’, measured EEG signals from the user’s earlobes. The signal was ‘amplified, band-pass filtered and used to frequency modulate a voltage-controlled oscillator’.
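The signal chain described above — band-pass filtering followed by frequency modulation of an oscillator — can be sketched in a few lines of Python. This is a minimal software analogue, not Kurenniemi’s hardware: the cutoff frequencies, base pitch and modulation depth are illustrative assumptions.

```python
import math

def bandpass(signal, sample_rate, low_hz=1.0, high_hz=30.0):
    """Crude band-pass: a one-pole low-pass at high_hz minus one at low_hz."""
    def lowpass(sig, cutoff):
        alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / sample_rate)
        out, y = [], 0.0
        for x in sig:
            y += alpha * (x - y)  # one-pole smoothing
            out.append(y)
        return out
    hi = lowpass(signal, high_hz)
    lo = lowpass(signal, low_hz)
    return [h - l for h, l in zip(hi, lo)]

def fm_oscillator(control, sample_rate, base_hz=440.0, depth_hz=200.0):
    """Frequency-modulate a sine oscillator with a control signal."""
    phase, out = 0.0, []
    for c in control:
        freq = base_hz + depth_hz * c  # instantaneous frequency tracks the EEG
        phase += 2.0 * math.pi * freq / sample_rate
        out.append(math.sin(phase))
    return out
```

Feeding the filtered ‘EEG’ into `fm_oscillator` makes the pitch rise and fall with the slow brainwave activity, which is the basic idea behind the Electroencephalophone.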
The original idea was to build four of these instruments and let the musicians go to sleep while hearing each other’s generated sounds. During sleep, slow high-amplitude delta waves and short-duration ‘sleep spindles’ appear in the EEG. Would the brain waves of the sleeping players become synchronised? The test was never carried out.
In the early 1970s David Rosenboom founded the Laboratory of Experimental Aesthetics at York University in Toronto, where the relationship between aesthetic experience and musical realisation was explored. Many artists and musicians visited and worked there during that time, including John Cage, David Behrman, La Monte Young and Marian Zazeela. Rosenboom produced his album Brainwave Music and published the results of his experiments in ‘Biofeedback and the Arts’ in 1976. He later wrote a second book called ‘Extended Musical Interface with the Human Nervous System’.
In the 1970s Pierre Droste, Andrew Culver and Charles de Mestral formed a Montreal group called Sonde, performing a number of improvisational brainwave concerts.
Between 1990 and 1992, Benjamin Knapp and Hugh Lusted developed the BioMuse, an 8-channel ‘biocontroller’ that analyses muscle movement (EMG), eye movement (EOG), heart activity (EKG) or brainwave signals (EEG) using non-invasive transdermal electrodes.
The IBVA system provides ‘interactive control from brainwave activity’ allowing the user to trigger audio, images, software and other hardware devices.
The IBVA inhales brainwaves but exhales a brain-computer interface. Your brainwaves can control everything from sounds that go ping to almost any electronically addressable device.
Head of IBVA Luciana Haill uses the system to control pitch and velocity, like ‘playing a Theremin with your brain’. Other notable musicians who use the IBVA system are Paras Kaul and Towa Tei.
CalArts student Adam Overton used SuperCollider, a custom EEG/EKG device, and a respiration harness and sensor for his project Sitting.Breathing.Beating.[Not]Thinking. The equipment analyses breaths, heartbeats and brainwaves whilst Adam meditates. The software plays its own data files, resulting in noisy, chaotic textures; the signals and movements are then used to manipulate the sound.
Brouse, working with Eduardo Miranda, who runs the Neuromusic department at the University of Plymouth, developed the BCMI-PIANO. Matlab and Simulink are used to perform power spectrum and Hjorth analyses of EEG signals in real time to control music.
In order to have greater control over this system, we are developing methods to train subjects to achieve specific EEG patterns in order to play the BCMI-Piano system. We have initial evidence that this can be made possible using a technique commonly known as biofeedback.
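The Hjorth analysis mentioned above reduces an EEG signal to three time-domain descriptors: activity (the signal variance), mobility and complexity, computed from the variances of the signal and its successive differences. A minimal pure-Python sketch of the calculation (the BCMI-PIANO itself uses Matlab and Simulink; this is only an illustration of the technique):

```python
import math

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def hjorth(signal):
    """Return the Hjorth parameters (activity, mobility, complexity)."""
    d1 = [b - a for a, b in zip(signal, signal[1:])]  # first difference
    d2 = [b - a for a, b in zip(d1, d1[1:])]          # second difference
    activity = variance(signal)
    mobility = math.sqrt(variance(d1) / activity)
    complexity = math.sqrt(variance(d2) / variance(d1)) / mobility
    return activity, mobility, complexity
```

For a pure sine wave the complexity comes out close to 1, and it grows as the signal becomes less regular — which is what makes these parameters useful as compact descriptors of ongoing EEG.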
A number of homebrew BCI devices are currently in progress; one such example is by Mick Grierson. It is still in the early stages, but there is a short video demo on YouTube.
I didn’t intend to write so much about this subject! Developments in this field are sure to continue advancing rapidly as researchers, companies and hobbyists seek to explore new ways of interaction, whether it’s for music, gaming or general use. Perhaps the key issues to tackle, as Miranda and Brouse point out, are firstly the ‘task of interpreting the meaning of the EEG’ and secondly how to create equipment that is more comfortable and portable. It will be interesting to see how these projects develop.
There are two projects currently under development at MIT’s Hyperinstruments group testing Magnetic Musical Training. The systems provide the user with ‘a kinesthetic preview’, to help them learn the gestures required to play the musical instrument. The project aims to find out whether motor functions can be learnt at a faster and more efficient rate using this system compared to traditional methods.
Graham Grindlay’s project called FielDrum uses a drum fitted with electromagnets and permanent magnets which control the pushing and pulling forces of a drumstick. Currently the system only has two states (attract or repel) although they are hoping to introduce more. Check the simple video demonstration of the FielDrum in action.
Craig Lewiston’s Trainer Technology project has two streams of development, the Trainer Piano and the Trainer Prototype, both using magnets to control the movement of the user. The Trainer Piano uses an upright piano together with a computer screen which displays visual feedback. The Trainer Prototype uses a glove with embedded magnets to control finger movements. I’m looking forward to reading the results of the tests.
A new paper supports the theory that the area of a bird’s brain that controls movement is the same region that controls singing and learning to sing. It is the first study to use molecular mapping to examine the areas of a bird’s forebrain that control movement. Erich Jarvis suggests that ‘spoken language areas evolved out of pre-existing motor pathways’. Perhaps this is one reason why humans gesture with their hands as they speak. It is believed that amniotes, the common ancestors of reptiles, birds and mammals, shared similar motor pathways.
Cerebral systems that control vocal learning in distantly related animals evolved as specializations of a pre-existing motor system inherited from their common ancestor that controls movement, and perhaps motor learning.
The results back up claims that gestural language came before spoken language. Even now children are seen to gesture before they learn how to talk. ‘Gesturing is something that goes along naturally with speech. The brain areas used for gesturing may have been co-opted and used for speech’ says Erich Jarvis.
Musical Furnishings has released a customisable musical table. The table is made up of modules which you can swap around to create a unique playing surface. Check out the old Musical Furnishings website, which has more examples.
The Swept RF Tagging device developed by Kai-yuh Hsiao, ‘detects proximity of magnetically resonant tagged objects in field’. The system has been optimised to function as a musical instrument. The following video shows John Zorn using the system in April 2004.
The Emonator was developed jointly by Dan Overholt and Paul Nemirovsky. The project has since split in two: Paul Nemirovsky continues to work on it under the name Emonator, whilst Dan Overholt is now developing The MATRIX. Why the split? The MATRIX focuses more on interface design and the development of musical synthesis, whereas the Emonator is used as a gestural controller for the Emonic Environment.
It offers a 3-dimensional interface using a square set of pushable rods that measure the movement of the hand.
The Squeezables are a set of balls that can be squeezed, stretched and moved in order to produce music. The project was developed by Seum-Lim Gan and Gil Weinberg.
The balls are positioned on a table surface and each ball contains a sensor block with five force-sensing resistors. The sensor block is also connected to a variable resistor slider located underneath the table surface, which measures the amount of movement when the balls are pulled away from the surface.
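A rough sketch of how such a sensor block might be read and mapped to control values — the 10-bit ADC range, the averaging of the five FSRs and the MIDI-style 0–127 scaling are my assumptions for illustration, not details from the Squeezables design:

```python
def normalise(raw, lo=0, hi=1023):
    """Clamp and scale a raw 10-bit ADC reading into 0.0-1.0."""
    return max(0.0, min(1.0, (raw - lo) / float(hi - lo)))

def squeeze_to_controls(fsr_readings, slider_reading):
    """Map five FSR readings and one slider reading to two control values."""
    # Average the five FSRs into a single squeeze-pressure value.
    pressure = sum(normalise(r) for r in fsr_readings) / len(fsr_readings)
    # The slider under the table measures how far the ball is pulled away.
    pull = normalise(slider_reading)
    # Scale both to MIDI-style 0-127 controller values.
    return {"pressure_cc": int(pressure * 127), "pull_cc": int(pull * 127)}
```

Squeezing harder raises `pressure_cc`, pulling the ball from the table raises `pull_cc`, and a synthesizer can listen to both to shape the sound.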