The theoretical foundations of my project


These can also be found in my Medium posts.

Layers of creation in music

The last couple of decades have been full of new tools in the area of music creation, and the knowledge of electronic music making is now largely knowledge of music making technologies. Albeit obvious, I find this idea quite important, because unlike the older foundations of human hearing, harmony and acoustics, the factor of music technology is determined by deliberate actions. Behind every technology there is a person and an intention (regardless of whether that intention was accomplished by the technology or not). So now the sequencer knows how to play, and you don’t have to perform a TB-303 bass line yourself, but in order to do this you must purchase the piece (not unlike traditional instruments) and you must know how to use the technology. This brought the once innocent consequence that the performer is now designed by an industrial counterpart, and not by the composer. The brick maker becomes more influential than the architect when it comes to architectural design.

Many different decisions in electronic music instrument making have heavily shaped the path of music making, so much so that the earliest grooveboxes and rhythm machines pretty much dictated the creation of new styles, whose characteristics were based on the instrument’s architecture and sound. If the most direct rhythm structure to make on a drum machine is 4/4, divided into 16 steps, suddenly the whole resolution of electronic music becomes 4/4, 16 steps. It didn’t matter that there was a way of making patterns in a 3/4 time signature. This has been great fun, because there has been some sort of ecosystem where designers and engineers create a stage on which musicians and creators experiment and arrive at musical outcomes. If the designers provide a low-cut knob on the synth, electronic music then starts expressing itself through that knob.
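To make the point concrete, here is a minimal sketch (in Python, and not tied to any particular machine’s data format) of how a 16-step grid quietly presupposes 4/4, while 3/4 has to be forced onto it:

```python
# A minimal sketch (not any particular machine's format) of how a classic
# step-sequencer grid presupposes the meter: 4/4 falls naturally into 16 steps.
STEPS_PER_BAR = 16  # 4 beats x 4 sixteenth notes

four_four_kick = [1 if step % 4 == 0 else 0 for step in range(STEPS_PER_BAR)]
# -> the "obvious" pattern the grid invites: a kick on every beat

# A 3/4 bar would need 12 steps; on a fixed 16-step grid you have to work
# against the interface (e.g. shorten the pattern length) to get there.
three_four_kick = [1 if step % 4 == 0 else 0 for step in range(12)]

def render(pattern):
    """Print the pattern as the familiar row of step buttons."""
    return "".join("X" if hit else "." for hit in pattern)

print(render(four_four_kick))   # X...X...X...X...
print(render(three_four_kick))  # X...X...X...
```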

Current state of electronic music performance instruments

It has been 34 years since the TB-303, and I believe that electronic music instruments are much less naive than they used to be. Each musical instrument maker offers its own universe of possibilities and has an interaction statement. For instance, Ableton aims at a very computer-based performance, and pretty much tries to blur the limits between production and performance by producing hardware that turns a DAW into a performance instrument (Push). Reaper is trying to do this as well with the Nektar Panorama.

Native Instruments is after a very DJ-based performance: they come from DJ culture with Traktor, and Maschine tries to decompose music making into single tracks and samples. The DJing software Traktor is moving towards merging with live production by introducing the Stems concept (splitting music tracks into separate stems so DJs can get more creative with them) and by introducing looping features that are completely new to the idea of DJing. On the other side, Maschine achieves a very good groovebox by presenting a controller for their own Maschine DAW, one with very limited possibilities, exposing the most common music making parameters in a piece of hardware. Limiting possibilities is not a bad strategy for a performance instrument, as I will explain later. Pioneer has now launched its own live electronic music performance instrument, the Toraiz, which intends to mix the two worlds of the groovebox and DJing. I personally think that the Toraiz needs a bit of maturing, but it will be a great tool a few software updates from now.

Korg has also long been making grooveboxes, aiming for a more analogue feel: the EMX-1 even has a little window that leaves a couple of tube amplifiers exposed to view. Korg has its own synth culture, and the newest Electribe versions, the EMX-2 and ESX-2, manage to keep some of this heritage even though they are purely digital machines: the sequencer is very limited, the synths are simple, and all the timbres and effects are Korg’s version of classic oscillators and filters. The way these Electribes are designed to be used in a performance is pretty much by launching pre-sequenced patterns and making minor tweaks to the sound while on stage.

An instrument should not be able to do everything

The idea of putting limitations on an electronic music performance instrument is to make it more inspiring. The Electribe ESX-2 is probably the most limited sequencer I have used (I am used to DAWs and digital tools), and it has been the most inspiring one. One possible misconception about music making is that an author first has an idea and then puts this idea into sound through his tools. While this may be true in some cases, most electronic music is made by exploring synthesizers; when something interesting comes out, it is recorded or saved, then arranged into a song.

Recent years have been full of new instruments with a strong identity: they give up on designing widely usable tools and aim instead for originality. Or, in Seth Godin’s words: remarkability. Teenage Engineering has been really good at recognizing the fact that music making is an exploratory process. Starting with the OP-1, which won’t offer parameters such as filter cutoff or FM level, but rather an arbitrary abstract shape. This suggests that parameter clarity is unimportant, and often detrimental, to a musical creative process. It makes sense: an improvising violinist doesn’t apply a certain fingering technique because it produces a low pass; he applies it because he remembers how it sounds.

Musical expression instruments are often intentionally designed in a limited way. This is what I will call the affordance chop: you don’t let the instrument’s interaction afford everything the underlying hardware could afford, because the musician prefers something that makes his process into music easier over an instrument that allows everything and is too complicated to use. I have heard people cite Stockhausen saying that one must make sounds from what is possible and available (I have never run into the quote myself). This means that music is not only made by musicians, but also in great part by musical instrument designers.

When it comes to live performance, the affordance chop is crucial, because the player must generate musically sensible changes in real time, while the music is playing; after all, the ideal instrument that affords every sonic possibility would just be a pile of spare parts. The drawback is that we trade away musical possibilities for a better experience of making music. One example: whereas a guitarist can change the chord of the arpeggio he is playing just by changing his finger position, someone with a sequencer will need to go note by note, and while he edits, the not-yet-ready pattern keeps looping. This means that a whole range of important musical transformations can’t take place during a live electronic music performance. Every music performance instrument implements its own specific means of making these musical changes. They were thought out and set by the instrument designers, and many other musical possibilities were left out.
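The difference can be stated in code. Below is a hedged sketch (hypothetical data, plain Python lists rather than any real groovebox format) of the same chord change done as one atomic gesture versus as a step-by-step edit of a pattern that keeps looping:

```python
# A hypothetical arpeggio pattern as a list of MIDI note numbers per step.
# (Illustrative only; real grooveboxes store this in their own formats.)
a_minor_arp = [57, 60, 64, 60, 57, 60, 64, 60]   # A, C, E ...

def transpose(pattern, semitones):
    """The 'guitarist' move: move the whole chord in one gesture."""
    return [note + semitones for note in pattern]

d_minor_arp = transpose(a_minor_arp, 5)  # A minor -> D minor, applied atomically

# The 'sequencer' move: edit one step at a time while the pattern keeps looping,
# so the audience hears every half-edited intermediate state along the way.
pattern = list(a_minor_arp)
for i, note in enumerate(d_minor_arp):
    pattern[i] = note   # during this pass, the loop plays a mix of both chords
```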

The truth of live electronic music

I have been working for some years now on making my own live electronic music performance. When I started, I had a misconception about how electronic music is played on stage. I thought that the music was created live on the stage and thus could follow any stream of improvisation from the musician. Maybe this is true in some cases, but in most cases I have discovered that playing follows a very rigid, predefined set, because setups are too static and don’t allow a creative flow during the live show. Above all, musicians want to feel safe on stage. We fall into a paradox where live electronic music making becomes a rite without practical meaning: the live musician could bring everything pre-recorded to the stage and have the same effect; and indeed, in many cases that is what happens. In general this is accepted, because the musician is the one who created the music anyway, and the performance is like a gathering to celebrate that music.

Now, as we discussed in the first and second parts of this series: if it is the instrument that brings so much of the expressive character to music making, and at the same time the performance can potentially be just pressing play, what then is the role of the musician? Certainly the idea of building a performance, and shaping how the whole music happens during the performance.

The instruments that are brought to the stage allow a certain limited flexibility, and the musician decides which of these options to deploy on stage. For instance, since the Electribe can really only be performed by selecting prepared sequences, the musician becomes some sort of performance designer: you set which tracks start muted in a pattern, and you have some idea of how to introduce the sounds over time. In a DJ gig, you have some idea of what goes well with what, and so on. Musicians will usually connect many of these electronic music instruments to a mixer and switch between them, obtaining a broader scope of possibilities.
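As a rough illustration (the names and structure here are hypothetical, not any instrument’s actual session format), such a performance ends up looking less like improvisation and more like a pre-written schedule of patterns and mute states:

```python
# A hedged sketch of "performance design" on a pattern-based groovebox:
# the show is largely decided beforehand as a schedule of which pattern
# plays and which tracks are unmuted at each point in time.
performance_plan = [
    {"pattern": "A1", "bars": 16, "unmuted": ["kick"]},
    {"pattern": "A1", "bars": 16, "unmuted": ["kick", "hats", "bass"]},
    {"pattern": "B2", "bars": 32, "unmuted": ["kick", "hats", "bass", "lead"]},
    {"pattern": "B2", "bars": 16, "unmuted": ["bass", "lead"]},  # breakdown
]

for section in performance_plan:
    print(f'{section["pattern"]} for {section["bars"]} bars: '
          + ", ".join(section["unmuted"]))
```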

But this orchestration of instruments has a big problem of segregation: after all, by orchestrating many grooveboxes behind a mixer, we don’t obtain a more complex groovebox, just a plurality of disconnected grooveboxes. One good example of this segregation is that a sequence recorded in one groovebox doesn’t transfer easily to a different groovebox or synth. This effect can be seen and heard in the performance by Octave (video below). Although it works well, you can’t hear any real integration between patterns. You don’t see a musical theme taken from one instrument to another and then developed. Instead, you can only hear some instruments fading in while others fade out, in the same way as when mixing records on turntables. Live music development happens as an audio mix rather than as a development of rhythms, harmonies or themes.
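One way to see what is missing is to imagine a theme written in a neutral form that any instrument could receive. The sketch below is purely hypothetical (it is not how any of the devices mentioned above store or exchange their sequences), but it shows the kind of portability that the segregated setup lacks:

```python
# A theme expressed neutrally as (step, MIDI note, velocity) tuples.
# In principle such a pattern could be routed to any instrument, e.g. over
# MIDI; in practice each groovebox keeps its sequences inside its own engine.
theme = [
    (0, 57, 100),   # step, note, velocity
    (2, 60, 90),
    (4, 64, 100),
    (6, 60, 90),
]

def send_to(instrument_name, pattern):
    """Placeholder for routing the same theme to different hardware;
    here it only prints what would be sent."""
    for step, note, velocity in pattern:
        print(f"{instrument_name}: step {step} -> note {note} vel {velocity}")

send_to("groovebox A", theme)
send_to("synth B", theme)   # same theme, developed on a different voice
```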