Please introduce Meta-eX – how you formed it and developed its way of making music.
Meta-eX is a live coding band: we improvise with code to weave immutable data structures into ephemeral sounds. Computer code is everywhere: running, executing, controlling, all hidden from sight. We wanted to take that code and turn it into music, but instead of hiding it, we wanted to expose it to our audiences. For us, the code is our interface, our instrument, our composition and performance, and we play that code live. As we riff with our code, live in front of the audience, the music responds instantly to our whim, much like a guitar screeches or a drum booms. In the same way that an audience can appreciate the virtuosity of a rock band because they are familiar with guitars and drums, we aim to build an appreciation of code as an instrument.
Our development as a band has been driven by a desire to build a system that allows real-time, collaborative improvisation of electronic music. Our journey has been experimental and iterative, fusing together the arts of improvisation, synthesis design, studio engineering and software development. We develop and evolve our instrument from each live performance to the next, formalising new ideas, techniques and abstractions into our system. Serving as our foundation is Overtone, an open source toolkit that focusses on collaborative, programmable music. Overtone allows us to build a system that we can simultaneously manipulate and control. For example, one of us might be modifying the guts of the instrument from the inside whilst the other attempts to control and direct it. Without sufficient practice, chaotic behaviour can ensue.
However, this ability to fundamentally change the instrument during a performance gives us immense power. By manipulating live running processes we can inject sonic interventions, ranging from tiny, accurate timbral movements to bold, new compositional directions. We have the control to improvise and interact with our environment, our audience, and with each other.
How have your own backgrounds and your own histories in using computers helped to shape Meta-eX?
Sam Aaron is a programmer and researcher at the University of Cambridge Computer Laboratory, and has been actively developing Overtone for over four years. He felt the system was ready for real performance use, but he needed to break out of the traditional software development cycle and become a user of Overtone in addition to a developer. Sam therefore sought a non-programmer to collaborate with, to ensure all conversations, work and progress were primarily music-related.
Chatting over drinks in Summer 2012, Jonathan Graham saw the potential of the live coding approach to push the boundaries of what is possible within a performance setting for electronic music, and so Meta-eX was born. Despite having no prior programming experience, Jonathan applies the creative, open-minded, problem-solving approach he uses as a research scientist, enabling him to work with Sam to continuously develop the live-coding environment that Meta-eX use.
How have you chosen your tools, particularly the use of Clojure and Emacs?
From a software development perspective, we wanted to ensure that the tools we used gave us the most power to express ourselves as artists. We therefore had very specific requirements regarding our interaction with the systems we were building, and the tools that the programming language Clojure provided were ideal for our purposes. For example, it was very clear to us that we wanted to be able to modify and change the program whilst it was running. Without this ability, we'd have no way of controlling the music during our performances, and it was important for us to be able to react to our ideas, mood and the audience in the moment. Clojure supports hot-swapping of code to allow us to seamlessly inject and replace the functionality of our system whilst it is running. Clojure's state-of-the-art concurrency semantics allowed us to explore the possibility of both of us simultaneously modifying and controlling the same code. Could we build new instruments of such complexity and sophistication that they would require multiple people to play them? Where would this take us?
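To make the idea of hot-swapping concrete, here is a minimal Clojure sketch of our own (an illustrative example, not Meta-eX's actual performance code). Because calls made through a var always see its current root binding, re-evaluating a `defn` changes the behaviour of an already-running system without restarting it:

```clojure
;; Minimal sketch of Clojure hot-swapping (illustrative, not Meta-eX code).
;; play-bar calls melody through its var, so it always sees the
;; latest definition; a sequencer loop would call it every bar.

(defn melody []
  [60 64 67])          ; hypothetical pattern: a C major arpeggio (MIDI notes)

(defn play-bar []
  (melody))

(play-bar)             ; returns [60 64 67]

;; Live, mid-performance, re-evaluate a new definition:
(defn melody []
  [60 63 67])          ; swap the major third for a minor third

(play-bar)             ; returns [60 63 67]; the next bar picks it up
```

In a live set the same mechanism replaces whole synth routings and compositional structures, not just note patterns, which is why no restart ever interrupts the music.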
To control our software, we use a variety of approaches, opting for the most expressive wherever possible. Sometimes that means traditional approaches, such as manipulating a physical slider or rotating a potentiometer. However, physical controllers are merely a convenient extension to the code. Our main interface is a full code window into the internals of the system. This gives us complete power over all aspects of the system and enables us to modify and change anything at any time. We currently use a text editor called Emacs for this code window, which, despite being over 30 years old, is still one of the most powerful programming environments in the world. Emacs is highly malleable and has allowed us to carefully mould it into a highly-tuned performance instrument. We have released our modifications to Emacs as an open source project, Emacs Live, and are intrigued that a large number of professional developers have chosen to use it for their daily industry work.
Machine Run (live session)
What were the reasons behind making your code available on GitHub and, more generally, taking an open source approach to your work?
We believe that most value in the world comes through sharing, and that music is no exception. One of the benefits of sharing is that it makes learning much easier. For example, we often hear amazing sounds on albums and wonder how they were made. However, musicians rarely share their production methods. We release all the software we write, which includes our synthesiser designs, interaction abstractions, and compositional structures, under an open source license, and we distribute it to everyone through GitHub. This makes it free and easy for anybody to copy and recreate any aspect of our work. For example, when one of our fans is interested in the sound of one of the synths we're using, they can go to GitHub and download the synth design as text. They can email it to their friends, read it, study it, copy it, modify it and run it to recreate the exact sound.
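A synth design shared this way really is just a few lines of text. The sketch below is our own illustrative example, not one of Meta-eX's published synths; `defsynth`, `sin-osc`, `pan2` and `out` are part of Overtone's API, and running it requires Overtone and its bundled SuperCollider server:

```clojure
;; Illustrative Overtone synth definition (a sketch, not a Meta-eX synth).
;; Anyone can paste this text into a running Overtone session,
;; read it, tweak the parameters and recreate the sound exactly.
(use 'overtone.live)   ; boots the SuperCollider synthesis server

(defsynth soft-sine [freq 440 amp 0.3]
  ;; a sine oscillator, scaled by amp, panned to centre, sent to output 0
  (out 0 (pan2 (* amp (sin-osc freq)))))

(soft-sine 220)        ; trigger it an octave below concert A
```

Because the design is plain text, studying it, emailing it or modifying it is no harder than sharing any other snippet of code.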
In addition, any pre-recorded samples we use are available under a creative commons license and distributed through Freesound.org. We don't use any proprietary software or sounds in any of our performances.
By making our code freely available, people can explore what we do in their own way and replicate/mash-up our ideas and work themselves. We want to inspire and enable a new generation of live coders by providing the tools for them to get started. And with an open source culture, we in turn will be able to learn from others, and so accelerate both our own development and musical ideas, as well as those of the general live coding community.
What are the reasons, in your view, behind the recent increase in interest in Algorave, and do you feel part of an "Algorave scene"?
The Algorave concept is something that clearly grabs people's imagination and paints programming in a new light. The idea of coding music live in clubs is something that people can start to understand and is a very visible application of creative programming. It's part of a whole new wave of more tangible creative uses of programming that is more understandable to people than traditional "business software apps" or websites.
The Algorave scene is nascent, and given the small number of live coding bands that exist today it helps to strengthen our community, and gives us the opportunity to learn from and inspire each other. It also gives emerging live coders a platform to perform to an open-minded and inquisitive audience, and gives people outside of the live coding world an interesting perspective with which to understand aspects of what we do. Most of our previous performances, from intimate ambient sets to pumping club nights, have not been as part of an Algorave, but the Algorave nights are always a lot of fun and it's a great feeling being part of a larger community.
What are the challenges of live coding, from the perspectives of both the performer(s) and the audience?
One of our major challenges is not to be seen as a magic act: we want our virtuosity to be in our performances and for the audience to understand as much as possible about our approach.
We understand that one of the major barriers to live coding music is coding itself. Most people have never seen real code and often have no clear way to perceive the virtuosity inherent in live coding. This is why we firmly believe that coding should be properly taught in schools and are actively working to provide support for this. Sam Aaron has worked closely with the Raspberry Pi Foundation to build Sonic Pi, software that turns the Raspberry Pi computer into a music synthesiser where the interface is a programming language. This environment has been used in classrooms to teach introductory Computer Science and a new project is underway to use it to teach Music as well. We also use Sonic Pi to run Meta-eX workshops for new programmers of all ages, teaching the basics of programming through live coding music.
Where do you see Meta-eX going?
There is a huge appetite for live music. Meta-eX is offering an alternative live experience, and we believe that as understanding of the potential to improvise electronic music through code grows, so will the demand for artists and groups such as Meta-eX. We want to be at the forefront of this new wave in music. We want to unlock the door to the power of live-programmable music and form part of the foundations on which a new musical culture grows. Despite the huge progress that we have made in the relatively short period of time since forming Meta-eX, we are still very much at the beginning of our journey. The power, flexibility and control of code is essentially limitless, and we are only just starting to unlock its potential.