On this edition of Beat Connection we dive into the macro idea of this thing called generative music. Some of you may not know it, but in some fashion you're experiencing (or have quietly experienced) its rise. From apps like Endel to YouTube algorithms generating playlists for you, we're living in a world where computers are starting to encroach on territory we believed sentient beings had a monopoly over. And like many mysterious things, learning what they actually do, and what potential lies in them, can open up our own musical world to what we can do with them.
This first bit will walk us through what generative music is. Stay tuned for the follow-up explaining what tools you can use to create your own generative music.
"Feed the recording back out of the medium." – Brian Eno and Peter Schmidt, from Oblique Strategies
As I get older, I realize that I value music more if it sounds a bit more organic. This isn't to say that I want to give up my synths and head out with my dulcimer to hike the Appalachian Trail. Going "organic," to me, means looking for music that sounds more naturally conceived. That word "natural" may sound like an oxymoron when the subject of this post is something as unnatural-sounding (and unnaturally created) as generative music.
In nature lies the key, though. Nature is a self-generating system; unexpected results have created much of what we see outside today. What we're exploring here is technology that creates music outside of your control, so that it plays out the way things do in nature.
What is Generative Music?
For those who haven't encountered the term generative music before, a quick definition is necessary. Imagine coding an application that captures the whole essence of wind chimes. First, though, think back to how wind chimes work.
Wind chimes function on the mechanics of chance. One day, before many a porch-lover hung one on a cord, someone conceived that this device — made up of various poles of different lengths — could generate all sorts of “musical” melodies simply by reacting to the push of a soft breeze.
Wind itself functions as a randomizer: it has no set pattern, speed, or direction. Which rod strikes which is uncontrollable. Wind chimes leave up in the air (no pun intended) how many notes will be struck and how long they will resonate freely.
In theory, constructing such an object gives you a musical instrument that can generate an immeasurable number of melodies and melodic patterns. The thing about contemporary music and music recordings is that music, once recorded, is utterly predictable. Once you've heard a recording, you can come back to it knowing exactly where everything will land.
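The wind-chime idea is easy to sketch in code. Here's a minimal simulation, with every name and number my own illustrative choice rather than a model of any real chime: wind arrives as random gusts, each gust strikes some random subset of rods, and each strike rings for a random duration.

```python
import random

# Five rods tuned to a pentatonic scale (note names chosen for illustration).
RODS = ["C4", "D4", "E4", "G4", "A4"]

def gust_strikes(rods, strike_prob=0.4, rng=random):
    """One gust of wind: each rod is struck independently with some probability."""
    return [rod for rod in rods if rng.random() < strike_prob]

def chime(num_gusts=8, seed=None):
    """One 'performance': a list of (notes struck, ring time in seconds)."""
    rng = random.Random(seed)
    performance = []
    for _ in range(num_gusts):
        struck = gust_strikes(RODS, rng=rng)
        ring_time = round(rng.uniform(0.5, 3.0), 2)  # how long the rods resonate
        performance.append((struck, ring_time))
    return performance

for notes, ring in chime(seed=42):
    print(notes, f"ring for {ring}s")
```

Run it twice with different seeds and you get two different "performances" — the wind (here, the random number generator) is the composer, and the builder of the chime only set the constraints.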
Classical music, like classical architecture, like many other classical forms, specifies an entity in advance and then builds it. Generative music doesn’t do that, it specifies a set of rules and then lets them make the thing. In the words of Kevin Kelly’s great book, generative music is out of control, classical music is under control.– Brian Eno, On “Generative Music”
Predictability is built into written notation. Once you specify a tempo, pick a time signature, and put notes on paper, you've chosen a musical framework. Once you've performed this arrangement and recorded it on a medium, you've created a musical statue: there is no way this music will ever change for those hearing it. Generative music upends this flow.
The Roots of Generative Music Systems
In the early '60s, as seen in fellow zZounds blogger Connor's post on early tape music, composers began to toy with the idea of letting audio machines play with the building blocks of music.
Arguably, the idea goes back even further, to Mozart's musical dice games (musikalisches Würfelspiel) and, later, Arnold Schoenberg's 12-tone technique. The first used rolls of dice to "compose" a final piece from a table of available musical bars; the second employed mathematically derived atonality, freeing music from "classical" linearity, from having to travel from point A to point B.
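Mozart's dice game is itself a tiny algorithm, easy to sketch. In the historical versions, two dice are rolled for each measure of a 16-measure waltz, and the total (2–12) indexes a lookup table of pre-written bars. The table below is a stand-in with invented bar IDs, not Mozart's actual table:

```python
import random

NUM_MEASURES = 16

def roll_two_dice(rng):
    return rng.randint(1, 6) + rng.randint(1, 6)

# Stand-in lookup table: for each measure, dice totals 2..12 map to a
# pre-written bar ID. (Mozart's real tables differ; these IDs are invented.)
TABLE = [{total: f"bar_{measure}_{total}" for total in range(2, 13)}
         for measure in range(NUM_MEASURES)]

def compose_waltz(seed=None):
    """Roll the dice once per measure and read the resulting bars off the table."""
    rng = random.Random(seed)
    return [TABLE[m][roll_two_dice(rng)] for m in range(NUM_MEASURES)]

print(compose_waltz(seed=7))
```

Even this toy version can emit 11**16 distinct bar sequences: the "composer" wrote a few dozen bars and a rule, and chance does the rest.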
Tom and Jerry, of all things, in the music of composer Scott Bradley, offered an early glimpse of the kind of music you can create by mimicking the sound of actual slapstick. It's music created with the freneticism of uncertainty, inspired by that uncertainty and performing it back. You never quite know how it will turn out, and it feels "organic" because it is, in essence, life-like: it seems to have no fixed final point.
Taking a simple recording, like the same spoken phrase, and playing it back at different speeds yielded a mathematical result: you'd never hear the same thing twice. The playback system took on the role of creator. Steve Reich's "It's Gonna Rain" stands as a monument to letting things get out of your control. Later ideas would carry it into tonal music, as heard above, and Robert Ashley married the musical part with phonetics and vocalization.
As things play back without a fixed structure, our brains try to build a framework to understand such compositions. In Terry Riley's In C, much as in the work of other composers such as John Cage and minimalists like La Monte Young, the instructions written around and across the score ensure you'll never hear the piece played the same way twice.
Players move through the 53 available bars in order, each repeating a bar as many times as they want before moving on. At any moment, others might still be on a different bar, building up (or down) the complexity. This was a very "rudimentary," community-based generative system.
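Those rules sketch naturally as code. This is a toy model, not Riley's score: repeat counts here are random, where in performance each player chooses them; what matters is that every simulated player staggers through the same 53 patterns differently.

```python
import random

NUM_PATTERNS = 53  # In C has 53 numbered patterns, always taken in order.

def player_timeline(max_repeats=8, seed=None):
    """One player's path through the piece: each pattern, repeated some
    number of times before moving on (repeat counts are invented here)."""
    rng = random.Random(seed)
    timeline = []
    for pattern in range(1, NUM_PATTERNS + 1):
        repeats = rng.randint(1, max_repeats)
        timeline.extend([pattern] * repeats)
    return timeline

# An 'ensemble' is just independent players; their staggered overlap is the piece.
ensemble = [player_timeline(seed=s) for s in range(4)]
for i, t in enumerate(ensemble):
    print(f"player {i}: first 10 steps -> {t[:10]}")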
Music For Airports By Tape Machines
Brian Eno, in his iconic album Music For Airports, tried to take this idea in another direction while further removing the human control of such a system.
Using spools of audio tape, he created long tape loops, wrapping them around tubular aluminum chairs, in effect tinkering with time simply by never letting the loops play in sync.
As this "generative system" played, differences in loop lengths and tape speeds generated a new melodic piece every time; you'd never encounter the same composition twice. The system let anyone with access to it slot in their own loops and let the contraption (or at least the theoretical idea found in the build) feed an entirely new musical creation. Music had opened itself to architectural design.
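The mathematics behind those desynchronized loops is simple: loops of different lengths only line up again after the least common multiple of their lengths. The loop lengths below are invented whole-second values just to make the math visible; real tape loops would rarely divide evenly, pushing the realignment point out even further.

```python
from math import lcm  # math.lcm: Python 3.9+

# Invented loop lengths, in seconds (not Eno's actual loops).
loop_lengths = [17, 21, 25, 18]

# All loops return to their simultaneous starting point only after the
# least common multiple of their lengths.
cycle = lcm(*loop_lengths)
print(f"All loops realign only every {cycle} seconds (~{cycle / 3600:.1f} hours)")
```

Four short loops, none longer than half a minute, and the combined pattern doesn't repeat for the better part of a day. Pick lengths that share no common factors and the horizon stretches further still; until then, every overlap is a moment the listener has never heard before.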
A Musical Mouse
In the mid-'80s, American electroacoustic composer and software engineer Laurie Spiegel started to bridge the gap. What if we took the ideas of a wind chime and added granular control to them?
If one could sculpt the amount of wind, the breadth of its pitch, and the pressure of the atmosphere, one could (in theory) have some modicum of control over the "generative" part of the wind chime. Writ large, if one thought of music itself as a system of modules, one could take those bits apart and create the starting points from which generative ideas grow.
Spiegel's Music Mouse let computer composers control, via MIDI, things like whether a section should arpeggiate, move chordally, or cycle through musical patterns that followed built-in harmonic rules, and much more. It was an "intelligent" instrument that, at the wave of a mouse, composed for you, capable of a seemingly infinite number of ideas. On her own "Appalachian Grove" you hear those ideas come to life.
Even now, a system like that lets a musician from that era create something that still sounds of this epoch (as heard on Vito Ricci's A Symphony For Amiga album). A browser-based version of Music Mouse now exists, putting a taste of Laurie's original achievement at your beck and call.
A Musical Koan
You see, that possibility found in Eno's music was but a small seed. With the introduction of digital software hosted on personal computers capable of treating algorithmic ideas as a starting point, whole generative musical systems could be created "virtually," expanding on that idea even further.
Eno initially worked with the brotherly duo of Pete and Tim Cole on software dubbed the "SSEYO Koan Interactive Audio Platform." Using that early generative music software, he "composed" Generative Music 1 with SSEYO Koan Software and released it on floppy disks. Whoever owned the disks and the software could hear snippets of the album as Eno conceived them at the time, and then (more importantly) let the software take it from there, expanding the original idea into a much longer, never-before-realized composition.
The building blocks of the compositions – sounds, meter, note durations, and all sorts of compositional minutiae – were all in there. The difference was that now you could alter parameters affecting how these musical modules interacted with each other.
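That shift, from writing notes to writing rules, can be sketched generically. Nothing below is SSEYO's actual engine; it only shows what "altering parameters" of a generative module looks like: you set the ranges and probabilities, and the system chooses the notes.

```python
import random

def generate_phrase(scale, rest_prob=0.2, durations=(0.25, 0.5, 1.0),
                    length=16, seed=None):
    """A rule set, not a score: each run yields a different phrase of
    (MIDI note or 'rest', duration in beats). All parameters are illustrative."""
    rng = random.Random(seed)
    phrase = []
    for _ in range(length):
        if rng.random() < rest_prob:
            phrase.append(("rest", rng.choice(durations)))
        else:
            phrase.append((rng.choice(scale), rng.choice(durations)))
    return phrase

# Tweaking a parameter changes the music's *behavior*, not one fixed piece:
sparse = generate_phrase(scale=[60, 62, 64, 67, 69], rest_prob=0.6, seed=3)
dense = generate_phrase(scale=[60, 62, 64, 67, 69], rest_prob=0.0, seed=3)
```

Ship the rule set instead of a recording and every listener's playback is a new realization, which is exactly the move Generative Music 1 made by shipping on floppy disk rather than CD.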
What you hear composed with such programs presents a real-life koan: whose authorship is this? Does this music belong to the software's creator? To you? Or to the software itself? Paired with other generative systems, like the procedural soundtrack of EA's Spore, generative music seems tailor-made for a time when it feels like we've already exhausted all our musical ideas.
In the future, we’ll dive deeper into what tools out there exist to dip your toes in generative music and go a bit further: how you can integrate them into your own setup.
Images by author (with generative assist from Kate Compton’s “Flowers”).