Composing Music with Brainwaves

Introduction

This is a project I did for Yahoo HackU 2010. They come to Georgia Tech every year, and I thought it would be fun to compete. I remembered seeing the Star Wars Force Trainer when it came out and I really wanted to do something with it. A little work had already been done on reverse engineering the hardware. The headpiece uses three contacts against the head to take an EEG, then sends the data wirelessly over a regular RF connection somewhere in the 2.4 GHz range. But most importantly, Zibri discovered that there are header pins left over from testing, so you can get sensor data out over RS232, which makes it really easy to interface with. I thought about playing certain songs based on your mood: if you felt happy, play upbeat music; if you felt sad, play sad music. But I was afraid that once you started listening to a song, you would get stuck in that mood and wouldn't change. And then I realized that it would be way cooler if you could create music.

The Force Trainer

First, a word on what the Star Wars Force Trainer is. In case you don't know, it's a toy that comes with a base station and a headpiece. Depending on how hard you concentrate, a fan spins faster or slower, blowing a ping pong ball up and down a tube. Here's a video of it in action.


In a real EEG used for research or medicine, there are many, many more contact points. There are only three here, so it's not terribly sensitive. Muscle movements create small electric fields that the headpiece can pick up, so even small eye movements can have an effect. Overall, though, you can get a surprising amount of control over the ball's height with a little practice.

Hacking the Hardware

I wasn't completely sure how I would make everything work since I didn't know what the serial port would send me. I took it apart, found the header pins, and plugged it into an FT232RL IC so I could see the data over USB. It turns out that it sends a three-dimensional data vector. The first component is the "Attention" number, which comes from the EEG, and the second is the "Meditation" number, which also comes from the EEG. Each of those is a particular type of brainwave (alpha and beta brainwaves, respectively). The third number is the connection quality: if the sensor isn't against your skin or getting a good reading, that number goes to 200. The first two numbers can range from 0 to 100, and the last one from 0 to 200 (as far as I can tell). Not every value actually shows up in the first two columns; the output seems to favor certain numbers, which I think may be an artifact of the FFT (fast Fourier transform) being performed. It outputs this data roughly 1-2 times per second (that is, 1-2 rows of the three columns of numbers). The rate isn't rock solid, though: if you hold the same brainwave level for a while, the sensor stops transmitting, so it won't send a new update for several seconds if you're able to keep the ball steady in the tube.
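
As a rough illustration of the parsing, here is a minimal Processing sketch of how that stream could be read. A few things in it are assumptions you would want to check against your own setup: that the FT232RL shows up as the first port in Serial.list(), that the baud rate is 9600, and that each update arrives as one text line of three whitespace-separated integers.

```
import processing.serial.*;

Serial port;

void setup() {
  // Assumption: the FT232RL is the first serial port and runs at 9600 baud.
  println(Serial.list());
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');   // call serialEvent() once per complete line
}

void draw() {
  // nothing to draw yet; all the work happens in serialEvent()
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  // Assumption: each row looks like "attention meditation quality", e.g. "62 48 0".
  int[] vals = int(splitTokens(trim(line)));
  if (vals.length != 3) return;            // skip malformed rows
  int attention  = vals[0];                // 0-100
  int meditation = vals[1];                // 0-100
  int quality    = vals[2];                // 200 means a bad connection
  if (quality < 200) {
    println("attention=" + attention + "  meditation=" + meditation);
  }
}
```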

[Photo: orange LEDs I installed beneath the fan]

Software

To actually make the sounds, I used Processing. If you've ever done any work with either Processing or computer music, you know that this was a TERRIBLE idea. I agree. I chose Processing because I wanted to eventually have a cool visualization to go along with the sounds, and it can easily interface with an Arduino (although I ended up not using this feature). I also didn't know about the existence of SuperCollider or any other languages specifically meant for this purpose. Anyway, I basically read in the serial data, do some formatting to get three separate numbers, and then take the average of the last eight "Attention" numbers and "Meditation" numbers. Based on those averages, certain samples are played from a sound library: the "Attention" average plays a melodic sample and the "Meditation" average plays a background, ambient sample. The samples can be played on top of each other, so you end up with several of these ambient-sounding samples playing at the same time. They are all pentatonic, so they work in any order. I later wrote a second version that plays guitar instead of ambient sounds. I recorded myself playing four bars of several chords and rearranged the code a bit to try to make it sound a little better. You can play lots of ambient sounds simultaneously and it won't sound bad, but that isn't always the case for the guitar or more melodic stuff. It never sounds dissonant, but it can get interesting sometimes. Most of the time I find it downright pleasant.
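
To make the averaging and sample selection concrete, here is a rough sketch of that logic. It assumes the Minim library for playback (any sound library would do), made-up sample filenames, and an onReading() function standing in for wherever the parsed attention/meditation pair comes from (for example, the serialEvent() handler sketched above).

```
import ddf.minim.*;

Minim minim;
AudioSample[] melodySamples;           // chosen by the "Attention" average
AudioSample[] ambientSamples;          // chosen by the "Meditation" average

int[] attentionHistory  = new int[8];  // last 8 readings
int[] meditationHistory = new int[8];
int historyIndex = 0;

void setup() {
  minim = new Minim(this);
  melodySamples  = new AudioSample[5];
  ambientSamples = new AudioSample[5];
  for (int i = 0; i < 5; i++) {
    // Hypothetical filenames; five samples per layer is also just an example.
    melodySamples[i]  = minim.loadSample("melody"  + i + ".wav");
    ambientSamples[i] = minim.loadSample("ambient" + i + ".wav");
  }
}

void draw() { }

// Call this with each new (attention, meditation) pair from the headset.
void onReading(int attention, int meditation) {
  attentionHistory[historyIndex]  = attention;
  meditationHistory[historyIndex] = meditation;
  historyIndex = (historyIndex + 1) % 8;

  float attAvg = average(attentionHistory);
  float medAvg = average(meditationHistory);

  // Map the 0-100 averages onto the samples. AudioSample.trigger() lets
  // playback overlap, which is what produces the layered, ambient texture.
  int melodyIndex  = constrain(int(map(attAvg, 0, 100, 0, 4)), 0, 4);
  int ambientIndex = constrain(int(map(medAvg, 0, 100, 0, 4)), 0, 4);
  melodySamples[melodyIndex].trigger();
  ambientSamples[ambientIndex].trigger();
}

float average(int[] values) {
  float sum = 0;
  for (int v : values) sum += v;
  return sum / values.length;
}
```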

Right now, the code is extremely simple, but it got the job done considering it's the fruit of a sleepless night spent hacking together a demo. You can download my code here:

https://github.com/blueintegral/Mental-Note

UPDATE: ClockworkRobot made another revision of this code that includes a visualization; you can download it from his website. He's also got more pictures and information about the hack.

UPDATE 2: Yeah, Processing sucks for this kind of application. I tried to implement a more complex algorithm in Processing, and it's practically impossible. I'm planning on moving to SuperCollider or another language meant for sound generation. I've also been reading a lot of books on computer music composition, so I'll be able to do more original and creative things.

UPDATE 3: I talked to some graduate students from the Georgia Tech GVU Center. They are doing some amazing things there, and one of those things happens to be composing music using data from an EEG! They're taking a slightly different approach to figuring out what the person wants to hear. Here's their paper [pdf]. They had better sensors, and they were also using SuperCollider. I'm working on something else, but I don't want to go into detail yet. The GVU grad students are approaching the problem from a different angle than I am, so they haven't implemented my main idea.

Demos

These sound clips were recorded while I was using the Force Trainer. They give you an idea of what kind of music can be created with Mental Note.

Guitar version:

Ambient version: 

It's fun to try doing different things and seeing what kind of music you make: doing calculus, listening to other music (like jazz or death metal), or just sitting and trying to relax.

Future Work

Since I did this whole thing in a single night, there's plenty of room for improvement. I'd like to change the algorithm to depend on the previous sounds played. I'd also like to experiment with more sounds, and perhaps use a MIDI library to play individual notes (basically controlling an entire piano rather than just triggering samples). I've got a friend who's a student at the Eastman School of Music who I'm hoping can help me write a better algorithm that creates better-sounding and more original music. I'm also going to get an FT232RL and put it inside the shell so it looks nicer; it can be powered by USB, and I'll just install a panel-style USB port in the case. I'd also like to add support for using multiple Force Trainers at once, so one person can compose the melody while another composes the harmony or countermelody.
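
As a rough sketch of the MIDI idea, here is one way it could look using The MidiBus library (just one possible library, not necessarily what I'll end up with): the rolling "Attention" average gets quantized to a pentatonic scale and sent as individual notes instead of triggering pre-recorded samples.

```
import themidibus.*;

MidiBus midi;
// C major pentatonic over two octaves (MIDI note numbers).
int[] pentatonic = {60, 62, 64, 67, 69, 72, 74, 76};
int lastNote = -1;

void setup() {
  MidiBus.list();                  // print available MIDI devices
  // Device indices depend on your machine; pick an output synth from the list.
  midi = new MidiBus(this, 0, 1);
}

void draw() { }

// Call this with each new rolling "Attention" average (0-100).
void playNoteFor(float attentionAvg) {
  int index = constrain(int(map(attentionAvg, 0, 100, 0, pentatonic.length - 1)),
                        0, pentatonic.length - 1);
  if (lastNote >= 0) midi.sendNoteOff(0, lastNote, 0);  // release the previous note
  lastNote = pentatonic[index];
  midi.sendNoteOn(0, lastNote, 100);                    // channel 0, velocity 100
}
```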

Press

This project was featured over at Hackaday, the MAKE Magazine blog, the German Engadget, and even by NeuroSky, the company that makes the electronics in the Force Trainer!
