DIMI Synthesizers

The DIMI synthesizers were designed by Finnish electronic music pioneer Erkki Kurenniemi, beginning in 1970. He created a number of early electronic instruments using original control methods. The DIMI-A, the first in the range, stood for ‘Digital Music Instrument – Associative Memory’ and was essentially an early sampler. The DIMI-O, or ‘Optical Organ’, displayed musical notes on a screen and also had a video camera that could be used to convert movement into sound. The DIMI-S, or ‘Sexophone’, was an instrument played by four people, each wearing handcuffs and wires; musical tones were generated as the players touched each other. The electrical resistance between the players was measured, and ‘with increasing skin moisture and contact area, the intensity of the music increased’!

The DIMI-T, or ‘Electroencephalophone’, measured EEG signals from the user’s earlobes. The signal was ‘amplified, band-pass filtered and used to frequency modulate a voltage-controlled oscillator’.

The original idea was to build four of these instruments and let the musicians go to sleep while hearing each other’s generated sounds. During sleep, slow high-amplitude delta waves and short-duration ‘sleep spindles’ appear in the EEG. Would the brainwaves of the sleeping players become synchronized? The experiment was never carried out.
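
Not Kurenniemi’s circuit, of course, but that signal chain is easy to sketch digitally. Here is a minimal Python example, assuming a sampled EEG array and inventing the sample rates, band edges, carrier and modulation depth:

```python
import numpy as np
from scipy.signal import butter, sosfilt

EEG_FS = 256      # assumed EEG sample rate (Hz)
AUDIO_FS = 44100  # audio output rate (Hz)

def dimi_t_sketch(eeg):
    """Band-pass an EEG signal and use it to frequency modulate a sine 'VCO'."""
    # Band-pass roughly around the alpha band (8-13 Hz); band edges assumed
    sos = butter(4, [8, 13], btype="bandpass", fs=EEG_FS, output="sos")
    control = sosfilt(sos, eeg)

    # Upsample the slow control signal to audio rate by linear interpolation
    n_audio = int(len(eeg) * AUDIO_FS / EEG_FS)
    t_eeg = np.arange(len(control)) / EEG_FS
    t_audio = np.arange(n_audio) / AUDIO_FS
    control = np.interp(t_audio, t_eeg, control)

    # Map the control signal to a frequency deviation around a 440 Hz carrier
    peak = np.abs(control).max() or 1.0
    freq = 440.0 + 200.0 * control / peak
    phase = 2 * np.pi * np.cumsum(freq) / AUDIO_FS
    return np.sin(phase)

# Example: ten seconds of noise standing in for an EEG recording
audio = dimi_t_sketch(np.random.randn(10 * EEG_FS))
```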

The last in the series was the DIMI-6000, an analog voltage-controlled synthesizer built around an Intel 8008 microprocessor. Kurenniemi wrote an article, ‘History of Dimi Instruments’, which was released alongside a DVD called ‘The Dawn of DIMI’, and there are some short video demonstrations online.

Brain Computer Music Interfaces

A brief overview of Brain Computer Interfaces in relation to music: human brainwaves were first measured by Hans Berger in 1924. Initially the results were ignored, until Adrian and Matthews backed up Berger’s findings in their 1934 paper ‘The Berger Rhythm: Potential Changes from the Occipital Lobes in Man’, published in the journal Brain. The first piece of music to use EEG was composed by Alvin Lucier (working with Edmond Dewan) in 1965: ‘Music for Solo Performer’.

Over the years a number of composers have used brainwaves to control music. Richard Teitelbaum, a member of the Rome-based live electronic group Musica Elettronica Viva, used biological signals (EEG and EKG) to control electronic synthesizers.

In the early 1970s David Rosenboom founded the Laboratory of Experimental Aesthetics at York University in Toronto, where they explored the relationship between aesthetic experience and biofeedback, producing musical realisations. Many artists and musicians visited and worked there during that time, including John Cage, David Behrman, La Monte Young and Marian Zazeela. Rosenboom produced his album Brainwave Music and published the results of his experiments in ‘Biofeedback and the Arts’ in 1976. He later wrote a second book, ‘Extended Musical Interface with the Human Nervous System’.

Roger Lafosse and Pierre Henry used a live performance system called Corticalart on a number of recordings.

Jacques Vidal, a computer science researcher at UCLA, began developing the first direct brain-computer interface using a batch-processing IBM computer. He published a paper in 1973 called ‘Toward Direct Brain-Computer Communication [pdf]’. A later paper from 1998 called ‘Cyberspace Bionics’ provides an overview of the technology and its impact on human life.

In the 1970s Pierre Droste, Andrew Culver and Charles de Mestral formed a Montreal group called Sonde, performing a number of improvisational brainwave concerts.

Between 1990 and 1992, Benjamin Knapp and Hugh Lusted developed the BioMuse, an eight-channel ‘biocontroller’ that analyses muscle movement (EMG), eye movement (EOG), heart (EKG) or brainwave (EEG) signals using non-invasive transdermal electrodes.

Atau Tanaka has used the BioMuse for compositions and performances. Together with Zbigniew Karkowski and Edwin van der Heide, he established Sensorband, a sensor-instrument ensemble. In 1996, Scientific American published an article by Knapp and Lusted about the BioMuse called ‘Controlling Computers with Neural Signals [pdf]’.

The IBVA system provides ‘interactive control from brainwave activity’ allowing the user to trigger audio, images, software and other hardware devices.

The IBVA inhales brainwaves but exhales a brain-computer interface. Your brainwaves can control everything from sounds that go ping to almost any electronically addressable device.

Luciana Haill, head of IBVA, uses the system to control pitch and velocity, like ‘playing a Theremin with your brain’. Other notable musicians who use the IBVA system include Paras Kaul and Towa Tei.
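
A pitch-and-velocity mapping of that kind might look something like the following sketch, which turns a normalised brainwave amplitude into a MIDI note using the mido library. The scaling and note range are invented for illustration, not taken from the IBVA software:

```python
import mido

def brainwave_to_note(amplitude, floor=0.0, ceil=1.0):
    """Map a normalised brainwave amplitude to a MIDI note-on message.
    The note range (C2-C7) and linear scaling are illustrative choices."""
    x = max(0.0, min(1.0, (amplitude - floor) / (ceil - floor)))
    note = int(36 + x * 60)       # pitch: C2 (36) up to C7 (96)
    velocity = int(20 + x * 100)  # louder as the signal grows
    return mido.Message("note_on", note=note, velocity=velocity)

# Sending requires a MIDI backend (e.g. python-rtmidi):
# out = mido.open_output()
# out.send(brainwave_to_note(0.7))
```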

CalArts student Adam Overton used SuperCollider, a custom EEG/EKG device and a respiration harness and sensor for his project Sitting.Breathing.Beating.[Not]Thinking. The equipment analyses his breaths, heartbeats and brainwaves while he meditates. The software plays back its own data files, producing noisy, chaotic textures, and the signals and movements are then used to manipulate the sound.

The EyeTap Personal Imaging Lab is a ‘computer vision and intelligent image processing research lab focused on the area of personal imaging, mediated reality and wearable computers’, set up in 1998 by Steve Mann. In 2003 they started projects combining music and brainwaves. James Fung’s Regenerative Brain Wave Music Project explores ‘new physiological interfaces for musical instruments’. At DECONcert1, 48 people were wired up with EEG sensors which were then used to control the sound; by playing back the music created in real time, a biofeedback loop was formed, with the audience reacting to the sound they were creating. At REGEN3, jazz musicians played music ‘driven and altered by the brainwaves of the audience’. It is an interesting idea, uniting the performer and the audience.

Andrew Brouse used Max/MSP and OpenSoundControl to create InterHarmonium. The project aims to generate music using human brainwaves and then transmit it to another location over the internet. You can find a more detailed explanation in the project thesis report ‘The InterHarmonium: An Investigation into Networked Musical Applications and Brainwaves’.
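
The networking side of such a system is easy to sketch with OpenSoundControl. A minimal example using the python-osc package rather than Brouse’s Max/MSP setup; the address pattern, remote host and the idea of sending a single band-power value are all assumptions, not his actual protocol:

```python
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.0.2.1", 9000)  # hypothetical remote performer

def send_alpha_power(eeg_window, fs=256):
    """Compute alpha-band (8-13 Hz) power for one EEG window and send it via OSC."""
    freqs = np.fft.rfftfreq(len(eeg_window), 1 / fs)
    psd = np.abs(np.fft.rfft(eeg_window)) ** 2
    alpha = psd[(freqs >= 8) & (freqs <= 13)].mean()
    client.send_message("/interharmonium/alpha", float(alpha))
```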

Brouse, working with Eduardo Miranda, who runs the Neuromusic department at the University of Plymouth, developed the BCMI-Piano. Matlab and Simulink are used to perform power-spectrum and Hjorth analyses of EEG signals in real time to control the music.

In order to have greater control over this system, we are developing methods to train subjects to achieve specific EEG patterns in order to play the BCMI-Piano system. We have initial evidence that this can be made possible using a technique commonly known as biofeedback.

The BCMI-Piano is described in a paper by Brouse and Miranda titled ‘Toward Direct Brain-Computer Musical Interfaces‘, presented at NIME.
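
The Hjorth analysis mentioned above reduces an EEG window to three standard descriptors: activity (the signal variance), mobility and complexity. The definitions are standard, so a minimal sketch in Python rather than Matlab (window length and sample rate here are arbitrary choices) looks like this:

```python
import numpy as np

def hjorth(y):
    """Hjorth parameters of a 1-D signal: activity, mobility, complexity."""
    dy = np.diff(y)    # first derivative (discrete difference)
    ddy = np.diff(dy)  # second derivative
    activity = np.var(y)
    mobility = np.sqrt(np.var(dy) / np.var(y))
    complexity = np.sqrt(np.var(ddy) / np.var(dy)) / mobility
    return activity, mobility, complexity

# Example: analyse one two-second window sampled at 256 Hz
window = np.random.randn(512)
activity, mobility, complexity = hjorth(window)
```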

A number of homebrew BCI devices are currently in progress; one such example is by Mick Grierson. It is still in the early stages, but there is a short video demo on YouTube.

I didn’t intend to write so much about this subject! Developments in this field are sure to continue advancing rapidly as researchers, companies and hobbyists explore new ways of interaction, whether for music, gaming or general use. Perhaps the key issues to tackle, as Miranda and Brouse point out, are firstly the ‘task of interpreting the meaning of the EEG’ and secondly making the equipment more comfortable and portable. It will be interesting to see how these projects develop.

Brainloop

A Brain Computer Interface system, Brainloop allows a user to control devices by imagining specific motor commands. The system detects the sensorimotor electroencephalography (EEG) rhythms produced when the motor commands are imagined, so a user can navigate Google Earth, and select and manipulate tracks, without physically moving. A detailed paper on the project can be found in the journal Computational Intelligence and Neuroscience.
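
The usual marker for imagined movement is event-related desynchronisation: power in the sensorimotor mu band (roughly 8-12 Hz) drops when a motor command is imagined. A toy detector along those lines, where the band edges, sample rate and threshold are assumptions rather than Brainloop’s published parameters, might be:

```python
import numpy as np

def mu_band_power(window, fs=256):
    """Mean power in the sensorimotor mu band (8-12 Hz)."""
    freqs = np.fft.rfftfreq(len(window), 1 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2
    return psd[(freqs >= 8) & (freqs <= 12)].mean()

def motor_imagery_detected(window, baseline, fs=256, ratio=0.7):
    """Flag imagined movement when mu power drops well below the resting baseline."""
    return mu_band_power(window, fs) < ratio * baseline
```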

Magnetic Musical Training

There are two projects currently under development at MIT’s Hyperinstruments group testing Magnetic Musical Training. The systems provide the user with ‘a kinesthetic preview’ to help them learn the gestures required to play a musical instrument. The project aims to find out whether motor skills can be learnt faster and more efficiently with this system than with traditional methods.

Graham Grindlay’s project, FielDrum, uses a drum fitted with electromagnets and permanent magnets which control the pushing and pulling forces on a drumstick. Currently the system has only two states (attract or repel), although they are hoping to introduce more. Check out the simple video demonstration of the FielDrum in action.
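
With only attract and repel available, driving the drum amounts to scheduling a binary pattern onto the coil. A trivial sketch of that control loop, where the set_coil callback is a hypothetical stand-in for whatever hardware interface the real system uses:

```python
import time

def play_pattern(pattern, set_coil, bpm=120):
    """Step through a binary rhythm: 1 = repel (push the stick down), 0 = attract (lift)."""
    step = 60.0 / bpm
    for state in pattern:
        set_coil("repel" if state else "attract")
        time.sleep(step)

# Example: a simple four-beat figure, printing instead of driving real hardware
play_pattern([1, 0, 1, 1], set_coil=print)
```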

Craig Lewiston’s Trainer Technology project has two streams of development, the Trainer Piano and the Trainer Prototype, both using magnets to control the movement of the user. The Trainer Piano uses an upright piano together with a computer screen which displays visual feedback. The Trainer Prototype uses a glove with embedded magnets to control finger movements. I’m looking forward to reading the results of the tests.

Vocal Learning in Bird Brains

A new paper advances the theory that the area of a bird’s brain that controls movement is the same region that controls singing and learning to sing. It is the first study to use molecular mapping to examine the areas of a bird’s forebrain that control movement. Erich Jarvis suggests that ‘spoken language areas evolved out of pre-existing motor pathways’. Perhaps this is one reason why humans gesture with their hands as they speak. It is believed that the common ancestor of reptiles, birds and mammals, the amniotes, shared similar motor pathways.

Cerebral systems that control vocal learning in distantly related animals evolved as specializations of a pre-existing motor system inherited from their common ancestor that controls movement, and perhaps motor learning.

The results back up claims that gestural language came before spoken language. Even now children are seen to gesture before they learn how to talk. ‘Gesturing is something that goes along naturally with speech. The brain areas used for gesturing may have been co-opted and used for speech’ says Erich Jarvis.

You can view and download the paper “Molecular Mapping of Movement-Associated Areas in the Avian Brain: A Motor Theory for Vocal Learning Origin” from the PLoS ONE website.

Musical Rumba Table

Musical Furnishings has released a customisable musical table. The table is made up of modules which you can swap around to create a unique playing surface. Check out the old Musical Furnishings website, which has more examples.

loopArena

Jens Wunderling developed loopArena, a touchscreen generative music interface. The user can control up to nine MIDI instruments.

Swept RF Tagging

The Swept RF Tagging device, developed by Kai-yuh Hsiao, ‘detects proximity of magnetically resonant tagged objects in field’. The system has been optimised to function as a musical instrument. The following video shows John Zorn using the system in April 2004.

Emonator & The MATRIX

The Emonator was developed jointly by Dan Overholt and Paul Nemirovsky. The project has since split in two: Paul Nemirovsky continues to work on it under the name Emonator, whilst Dan Overholt is now developing The MATRIX. Why the split? The MATRIX focuses more on interface design and the development of musical synthesis, whereas the Emonator is used as a gestural controller for the Emonic Environment.

It offers a 3-dimensional interface using a square set of pushable rods that measure the movement of the hand.

Squeezables

The Squeezables are a set of balls that can be squeezed, stretched and moved in order to produce music. The project was developed by Seum-Lim Gan and Gil Weinberg.

The balls are positioned on a table surface, and each ball contains a sensor block with five force-sensing resistors. The sensor block is also connected to a variable-resistor slider located underneath the table surface, which measures the amount of movement when the balls are pulled away from the surface.
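
Reading a sensor block like that from a host computer is straightforward if the hardware streams values over a serial port. A hypothetical sketch with pyserial, assuming each line carries the five FSR readings plus the slider value as comma-separated 10-bit integers; the port name and wire format are invented, not taken from the Squeezables hardware:

```python
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # hypothetical serial port

def read_squeezable(ser):
    """Parse one line of 'fsr1,fsr2,fsr3,fsr4,fsr5,slider' into control values."""
    fields = ser.readline().decode("ascii").strip().split(",")
    *fsrs, slider = (int(v) for v in fields)
    squeeze = sum(fsrs) / (5 * 1023.0)  # overall squeeze pressure, 0..1
    pull = slider / 1023.0              # how far the ball is pulled off the table
    return squeeze, pull

with serial.Serial(PORT, 9600, timeout=1) as ser:
    squeeze, pull = read_squeezable(ser)
```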
