The internet is great, isn’t it? With the right idea and some technological know-how, a person can tap into billions of people’s minds and unearth their opinions, expertise, or talents. Sometimes this ends up with cool citizen science projects like SETI@Home, sometimes it ends up with really stupid memes, but it’s always fascinating to see what vast pools of the population will create.
I recently found a couple of these crazy crowdsourced projects applied to music. I don’t know about you, but when someone says “crowdsourced music,” I immediately think of MTV or Clear Channel, networks that pump out the tunes the majority of people are supposed to like. The majority rarely likes good music, it seems. These two projects take unique approaches to choosing music based on the masses, though. I think you’ll find them kind of cool.
The Listening Machine
What is the sound of all of Twitter tweeting? Terrible, probably. But against all odds, a team in London led by researcher Daniel Jones and cellist Peter Gregson managed to make something like that not sound half bad.
The Listening Machine is a piece of software that monitors the Twitter activity of 500 specially selected people (and a handful selected at random) from around the UK. Essentially, the software takes each tweet these people post, analyzes it for certain elements, and uses those elements to create real music played by real musicians (that is, snippets recorded by real musicians beforehand. Suck it, MIDI!).
And here’s where it gets cool: instead of simply playing these snippets of melody one after another in a pasted-together cacophony, the program synthesizes certain qualities of the tweets to create a piece that actually gives the listener an overall impression of what’s going on on Twitter at that moment. It does this by determining the overall sentiment (as positive, negative, or neutral, based on words like “awesome” and “terrible”) of the participants’ collective tweets and adjusting the musical mode and tonal style of the piece. On top of that, the density of the notes adjusts according to how often or infrequently the participants tweet, and from what I can gather, the choice of instruments changes depending on the topic being discussed.
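To make that mapping concrete, here's a toy sketch of the kind of tweet-to-music logic described above. The word lists, thresholds, and mode choices are all my own illustrative guesses, not anything from the actual Listening Machine software:

```python
# Toy sketch of the sentiment-to-music mapping described above.
# Word lists and mode choices here are illustrative, not from the project.

POSITIVE = {"awesome", "great", "love", "happy"}
NEGATIVE = {"terrible", "awful", "hate", "sad"}

def sentiment(tweets):
    """Score a batch of tweets: +1 per positive word, -1 per negative."""
    score = 0
    for tweet in tweets:
        for word in tweet.lower().split():
            word = word.strip("!.,?")
            if word in POSITIVE:
                score += 1
            elif word in NEGATIVE:
                score -= 1
    return score

def musical_parameters(tweets, window_seconds=60):
    """Pick a musical mode from sentiment and a note density from tweet rate."""
    score = sentiment(tweets)
    if score > 0:
        mode = "major"      # upbeat tweets -> brighter tonality
    elif score < 0:
        mode = "minor"      # gloomy tweets -> darker tonality
    else:
        mode = "dorian"     # neutral: something in between
    # Busier feed -> denser music
    notes_per_minute = 60 * len(tweets) / window_seconds
    return mode, notes_per_minute

mode, density = musical_parameters(
    ["This gig is awesome!", "What a terrible commute", "Love this weather"]
)
```

The real system adds a lot more on top (instrument choice by topic, recorded snippets instead of synthesized notes), but the core idea of reducing a stream of text to a handful of musical parameters looks something like this.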
Cool, huh? Still, the music the Listening Machine generates is a little more random and computer-y than I’d prefer to listen to. The next project is entirely designed (as it were) to formulate music that people like.
DarwinTunes
You heard that correctly. Darwin. Tunes. British biologists Robert MacCallum and Armand Leroi decided to shift the theory of evolution away from living things and apply it to music.
Here’s how it worked: People who visited the DarwinTunes website encountered a series of short sound loops, which they were asked to rate on a 1 to 5 scale. The lowest-rated loops were removed from the site, while the highest-rated ones went on to the next round. Before that happened, though, the scientists made simple auditory tweaks to each survivor to represent the gene mutations that take place during evolution, then combined the loop’s code with that of another surviving loop to produce a musical “baby.” That baby is what went on to the next round, where users rated what they heard once again, and the cycle continued.
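The cycle described above is a classic genetic algorithm: rate, cull, mutate, crossbreed, repeat. Here's a minimal sketch of that loop. The real project evolved audio; here a "loop" is just a list of numbers, and listener ratings are simulated by a made-up fitness function, so every detail below is illustrative rather than DarwinTunes' actual code:

```python
import random

# Minimal genetic-algorithm sketch of the DarwinTunes selection cycle.
# A "loop" is a list of numbers; rate() stands in for listener ratings.
random.seed(0)

def rate(loop):
    """Fake listener rating: prefers loops whose values sit near 5."""
    return -sum(abs(x - 5) for x in loop)

def mutate(loop, chance=0.2):
    """Small random tweaks, like the project's 'gene mutations'."""
    return [x + random.choice([-1, 1]) if random.random() < chance else x
            for x in loop]

def crossover(a, b):
    """Combine two surviving loops to produce a musical 'baby'."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def generation(population):
    """Cull the lowest-rated half, then breed mutated babies from survivors."""
    survivors = sorted(population, key=rate, reverse=True)[:len(population) // 2]
    babies = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
              for _ in range(len(population) - len(survivors))]
    return survivors + babies

population = [[random.randrange(10) for _ in range(8)] for _ in range(20)]
for _ in range(100):
    population = generation(population)
best = max(population, key=rate)
```

Run the loop for enough generations and the population's ratings climb steadily, which is exactly the effect you hear when you compare the project's early generations with the later ones.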
The playlist above contains the results of this project: each track is a musical collage of all of the surviving loops combined. It’s fascinating to listen to the differences between the first generation and, say, the 400th generation. What was once a mishmash of discordant sounds suddenly becomes something that sounds closer to music, with rhythm, pleasant harmony, and evident organization.
The team has published the results of this project in the Proceedings of the National Academy of Sciences, but isn’t stopping there. They’re starting a whole new project that uses this same technique, but applies it to rhythm. Go on and give it a try!
Featured image by Flickr user canonsnapper.