Words by Pascal Wyse
Zaphod Beeblebrox, ex-Galactic President, owner of two heads and an enlarged ego, is the only person – according to his creator, the late Douglas Adams – to have survived the Total Perspective Vortex. The entry covering it in the Hitch Hiker’s Guide to the Galaxy describes the Vortex as one of the most horrific experiences a sentient being can go through:
“For when you are put in the Vortex, you are given just one momentary glimpse of the size of the entire unimaginable infinity of Creation along with a tiny little marker saying ‘You are here’.” Seeing the whole of infinity and your vanishingly small place within it is enough to pickle the brain, so the idea goes.
Now this, frankly, is what I feel like when I open up a blank music session in a DAW, terabytes of stored sound lurking in the background. Talking to composers and musicians about how they use machines to create music, the common theme is always the prevention of paralysis in the face of infinite possibilities. Like spaceships desperately struggling against the event horizon, we stare down the black hole and try to resist falling in.
Humans don’t get these hardware upgrades. We just struggle to remember passwords.
In this struggle, computers have had Moore’s Law on their side. Strictly an observation that transistor counts double roughly every two years, in practice it has meant machines that keep getting faster, presumably ticking off the calendar days before the Singularity – a sort of computer version of Christmas when machines develop consciousness and force us to hand over the keys to the planet. Humans don’t get these hardware upgrades. We just struggle to remember passwords.
“I dream of instruments obedient to my thought and which with their contribution of a whole new world of unsuspected sounds, will lend themselves to the exigencies of my inner rhythm,” said composer Edgard Varèse in 1937. A decade later, that world of unsuspected sounds was emerging with the work of Pierre Schaeffer, Stockhausen, Raymond Scott, Daphne Oram and Varèse himself – but there’s something about that word “obedient” that looks forward to the digital world, which was just around the corner.
Computer music (nestled somewhere inside “electronic music”) has grown from a distinct niche to a flapping great category whose definition is a confusion, since so much music passes through a computer at some stage. Historically the map may have had lines drawn between tape pieces, musique concrète, all manner of electrical signal generation with oscillators, Ondes Martenots and ring modulators, early Moogs and so on, and the beginnings of digital computing from around the 1950s. Now, all those sounds can be created, or at least pretty well emulated, by computer.
There is still a distinction between computerised music – which potentially utilises MIDI and virtual instruments to realise an idea – and computer music, which has its own lineage of sound celebrating the voice of the machine. When that distinction gets blurred, alarm bells ring for some – including Jaron Lanier, philosopher, computer scientist and musician. His book You Are Not a Gadget, published in 2010, uses MIDI as a cautionary tale for what he envisaged happening to many of our dealings online.
“Before MIDI, a musical note was a bottomless idea that transcended absolute definition…”
“One day in the 1980s, a music synthesizer designer named Dave Smith casually made up a way to represent musical notes,” says Lanier, explaining that despite some Herculean efforts over the years, that system has become the norm – “locked in” as he would put it.
His problem with that? “Before MIDI, a musical note was a bottomless idea that transcended absolute definition … After MIDI, a musical note was no longer just an idea, but a rigid, mandatory structure you couldn’t avoid in the aspects of life that had gone digital … When Dave made MIDI, I was thrilled … We felt so free – but we should have been more thoughtful … We have narrowed what we expect from the most commonplace forms of musical sound in order to make the technology adequate. It wasn’t Dave’s fault. How could he have known?”
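Lanier’s “rigid, mandatory structure” is visible right down at the byte level: a MIDI note-on message carries pitch as a single 7-bit integer, so anything between two adjacent note numbers simply has no representation. A minimal sketch in Python (the byte layout follows the MIDI 1.0 specification; the function name is our own):

```python
def note_on(channel, pitch, velocity):
    """Build a raw MIDI note-on message: one status byte
    followed by two 7-bit data bytes."""
    assert 0 <= channel < 16 and 0 <= pitch < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, pitch, velocity])

# Middle C at moderate loudness on channel 0:
msg = note_on(0, 60, 64)
# Pitch must be one of 128 integers -- a quarter-tone between
# C (60) and C sharp (61) has no note number of its own.
```

Pitch bend and, much later, MPE were bolted on to soften this grid, but the note itself remains an integer.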
Agree with that or not, it might bring to mind time spent making sure a MIDI line sounds more convincing. To err is human – so one of the strange developments after years of “perfecting” sounds in the digital world is that we now need to reintroduce error, distortion, irregularity, fuzz, grit and wonkiness. Humanity, basically.
Spitfire’s Christian Henson believes it is best to engineer in that humanity from the off: “My feeling is computers are very boring things that should be used to do the repetitive, dull stuff. What some people tend to do is get as pristine a signal as possible then get computers to recreate reality, with reverbs and so on, but what computers are great at is harnessing reality and controlling it – removing unwanted sound, for instance.”
Part of that engineered humanity is the so-called round robin – relieving us from the robotic giveaway caused by the shotgun repetition of the same sound. Composer Jeff Rona, whose sounds can also be heard forming the basis for the Orbit instrument for Kontakt, believes exact repetition of sound does something peculiar to the brain – and there is still a place for it.
“Here’s my theory. Before the invention of the drum machine, nobody had heard the same sound twice. I could hit a snare drum a million times and if you really analysed it, each one would be different. But Prince or Michael Jackson puts out a hit song, and for the first time you have sound that repeats itself on a molecular level. It goes back to Giorgio Moroder and those guys.
“So when you sing over it, your voice is the only truly surprising element over time. Our mental apparatus for sound is so acute. I mean the ability to decode speech is fucking remarkable! Rolling back a few eons our ears helped us to avoid being eaten, and here we are talking about it – uneaten! We are so tuned in to such minute variations in air pressure. The minute we’ve decided something is safe, we can divert our attention. If it is neither growing nor shrinking etc, we kind of dismiss it and turn our attention to something that needs it. It gives you permission to do something else. For us musicians that’s amazing. We can create energy, but allow the listener to relax out of it.”
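The round robin itself is mechanically simple: a rotating pointer over a handful of alternative recordings of the same hit, so consecutive triggers never replay the identical waveform. A hypothetical sketch (the class and sample file names are invented for illustration):

```python
class RoundRobinSampler:
    """Cycle through recorded variations of one hit so that no two
    consecutive triggers play back the identical sample."""

    def __init__(self, samples):
        self.samples = list(samples)
        self.index = 0

    def trigger(self):
        sample = self.samples[self.index]
        self.index = (self.index + 1) % len(self.samples)  # advance, wrap around
        return sample

snare = RoundRobinSampler(["snare_a.wav", "snare_b.wav", "snare_c.wav"])
hits = [snare.trigger() for _ in range(4)]
# -> ["snare_a.wav", "snare_b.wav", "snare_c.wav", "snare_a.wav"]
```

Real sample libraries layer this with velocity switching and slight randomisation of timing and tuning, but the cycling principle is the same.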
“In electronics we have an infinite array of instruments and an infinite way of combining them. That concept is nothing short of debilitating.”
Meanwhile, back to that black hole we were parked next to, engines at full-throttle reverse. Never before has limitation felt like such an important word for musicians. “Obviously with electronics there is that terrifying thing of ‘all things are possible’,” says Rona. “An orchestra is a finite set of instruments with an infinite way of combining them. In electronics we have an infinite array of instruments and an infinite way of combining them. That concept is nothing short of debilitating. For me it’s all about setting limits, and setting limits before you write a single note. If you don’t do that, you may as well go right back to bed. I mean, just talk to someone trying to choose a date on Tinder.”
Some like to imagine a virtual band of players. Once you’ve created the band you stick to it, maybe bringing in the odd guest soloist. For Rona, that process goes one stage further back, as he likes to create his own instrument first: “In an electronic music score, I build an instrument, learn how to play the instrument, then learn how to write for it – then write.”
Even before you get the band together, says Christian, concentrate on the composition. Solve the problems with music, not instruments. “I’ve often found people creating emotion in listeners by scale and not by composition. So instead of a moving cadence, something that sounds as good on a piano as played by an octet or a symphony orchestra, it’s just like shock and awe.”
Being led by the instrument rather than the music is not new, says Will Gregory of Goldfrapp: “It’s like when the people with the big budgets, the rich folk, were able to get the latest synths. They would cherry-pick the sounds and brand them as their own, so that all of us who followed on behind felt like we were just copying them because we could only afford that instrument second hand and five years later.”
Limiting yourself as a way of triggering ingenuity is one thing, deliberately dumbing down is another. “I find in modern mainstream pop there is this contrivance of dumbing down that’s based on hip-hop, but without the same motivation,” says Christian. “With hip-hop, tension was created by people coming from musical families where the instruments had been sold off, but there were these great record collections, and decks, which became the instruments. There was a limit to what you could do with them so it became all about invention. But I find what people are doing these days sounds contrived – this dumbing down at a time when technology allows you to do practically anything.”
One way to limit yourself is by returning to older equipment. Will Gregory still has a fondness for the luxurious four seconds of sampling time his Akai S900 offered. “Storage has got cheaper and cheaper, sample recordings can be longer and longer, but I actually think some of those old libraries are still valid since they had to work so hard with the space they had. They can have a lot more character. These days it can be like a crazy dressing-up shop. Do you like it this way? Or this way? All on banjos? Sure. What’s really fruitful I think is when you start combining things. You hear it in Italian 1960s film music, with organs behind string lines, or putting traditional classical instruments through a Moog filter.”
In the Total Perspective Vortex, holding on to yourself in the face of infinity is the challenge. We are always trying to write something that is “us”, but in time-starved production cycles, it is easy for the basic building block of music to grow. Hence the phrase library, or the “working lunch” preset, which is not just a sound, it is a miniature piece of music – and one that you didn’t write. When creating Orbit, Jeff Rona knew he had to walk a line with the core DNA of the instrument, which basically allows four different sounds to circle each other at different speeds and with different filtering and treatments:
“These aren’t just sounds, they are really performances, because each one is me doing a sort of performance. Any less and it is just a waveform, any more and it is a composition, then I’ve done the work for you. What’s the fun in that? I like sounds that have character but which don’t supplant the need for composing. I could create a sound that is so complex that it sounds like a piece of music, but then you’re basically giving birth to an adult.”
In the middle of this ocean of sound, we are all told, our ears have to be the raft to cling to. Follow what you think sounds good, and ultimately that has to be at least true to you. The problem is, the ears can get fooled. When Thomas Edison was demonstrating his phonographs at the beginning of the 20th century, he employed singers to perform the material that was on the discs, but then got them to switch to miming, and eventually stopping, while the music miraculously continued on the phonograph. It’s hard to believe people felt the sounds of the phonograph were the same as those of a real singer in the room, but they did. Just as I thought the sampled piano sound wasn’t going to get better than the Korg M1 in 1989. Oops.
Zaphod Beeblebrox had no problem holding on to himself. He didn’t know it, but he actually survived the Total Perspective Vortex because of some complexity over parallel universes. In his mind, though, he survived because he was amazing. “It just told me what I knew all the time. I’m a really great guy. Didn’t I tell you baby? I’m Zaphod Beeblebrox”. If in doubt, just say that to the executive producer on the next gig.