Paul Taylor is the co-owner of Mode 7, an indie dev studio based in the UK (@mode7games)
Music for games is in a really exciting place at the moment. Artists like Eirik Suhrke, Danny Baranowsky and Kettel have delivered some awesome indie game scores in recent years, with excellent musicians like BigGiantCircles injecting some life into the AAA domain.
A lot of indie devs are doing their own art these days; I'd love to see some start making their own tunes!
So, how do you go about learning to make electronic music? Starting off can be pretty daunting but I heartily recommend it to anyone with even a modicum of melodic or rhythmic competence! I’ve written three game soundtracks to date under my nervous_testpilot alias (Determinance, Frozen Synapse and Frozen Endzone) and I’ve been making music for around 15 years. I still feel like I’m getting to grips with the technical side of things but the reward for that is hearing my tracks stand up against those by musicians I admire.
I’ll be going through the absolute basics with an emphasis on the practical rather than the theoretical, then moving on to some more advanced thoughts. I will, at times, wantonly throw my own opinions around as if they are facts: you will just have to deal with this.
If you’re not a complete beginner there is some detail on my own setup and some more general thoughts at the end: skip to the gloriously narcissistic section entitled “Let’s Talk About Me”...
Choose Your Weapon
The first thing you will (almost certainly) need is a computer. It is still possible to create entire tracks using hardware sequencers and samplers etc. but there’s very little good reason why anyone would ever want to do that.
This is what your computer should have, as well as a couple of other things you’ll need:
- A very fast processor
Processors are extremely boring and there are lots of boring websites where you can find out which ones are currently good.
- Lots of RAM
With 64-bit OSes these days you can’t really have enough RAM. I have 64GB in this machine and yes, people have laughed at me about this. However, when I run a session with over 120 channels, many of which contain extremely memory-intensive sample-based instruments, with absolutely no issues or hint of any performance hit, I also find myself laughing.
- An audio interface
This is also deeply boring but it’s a decision you’ll have to make early on when you don’t know anything.
Consumer level sound cards are usually butt. You’ll need to ignore these and go for a proper audio interface intended for music.
There are two facets to choosing an interface: what input/output options you need and how good the interface is in general (A/D conversion, latency and other boring blah stuff that isn’t important yet).
If you have any equipment - like hardware synthesizers or guitars - that you want to get talking to your computer you’ll probably need an interface that has a couple of audio inputs as well as possibly MIDI in and out. MIDI is mostly just a system for sending musical note data (ie not sound, just the information about what note is being played and for how long) between devices. You can send other messages via MIDI as well (for example most hardware synthesizers these days allow you to send any parameter as MIDI). You can store and manipulate MIDI in the software you’re going to be using to make music, so it’s a handy protocol for getting equipment to do what you want.
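To make the “just note data” point concrete, here’s what a note-on message actually looks like on the wire: three bytes, no audio in sight. A minimal sketch in Python (the helper functions are mine, purely for illustration; in practice a MIDI library or your DAW handles this for you):

```python
# A MIDI note-on message is three bytes: a status byte (0x90 plus the
# channel number, 0-15), the note number (middle C = 60) and the velocity
# (how hard the key was hit, 0-127).
def note_on(channel, note, velocity):
    """Build the raw bytes for a MIDI note-on message."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    """A velocity-0 note-on is commonly used as a note-off."""
    return bytes([0x90 | channel, note, 0])

msg = note_on(0, 60, 100)   # middle C on channel 1, velocity 100
print(msg.hex())            # → "903c64"
```

The note length you see in your DAW isn’t stored anywhere in this message: it’s just the gap between the note-on and the matching note-off.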
There are a ridiculous number of audio interfaces on the market, from bananas PRISM magical million dollar A/D converting monstrosities down to $10 stuff that belongs on a market stall. I recommend reading some reviews on sites like MusicRadar or in magazines like Sound on Sound, looking at handy threads like this...
...or simply buying what I have, which is a Native Instruments Komplete Audio 6. I love this interface: it’s never given me any problems which is completely remarkable as audio interfaces in general have an ineluctable deviant passion for causing mischief.
- A MIDI keyboard of some description
You could get an old keyboard / synth and connect its MIDI OUT port to the MIDI IN of your audio interface (that’s what I do) or you could buy a USB MIDI keyboard and just plug that in. Controller keyboards are much of a muchness - just buy one you like. A good number of octaves is essential for me but some people like playing on tiny things that look like a toy for the under-5’s. Personal preference rules here.
- Monitors
Monitors are a special type of speaker, engineered to reproduce sound accurately, unlike hi-fi speakers. You need to hear what is actually going on in your tracks to have any chance of making something that will translate well on a variety of different sound systems. Of course, you can (and should) check your tracks on a normal hi-fi setup as well, but mixing on one will be a total disaster.
There’s no real way to choose monitors other than by listening to them. Having said that, I really recommend Mackie HR824s or their smaller cousins the HR624s for dance and electronic music - I’ve used mine for around seven years and I love them.
Monitor stands are ideal if you can fit them in: putting your monitors on your desk can cause some strange things to happen to the sound. If they must be on a desk then use a product like Auralex Mopads to “decouple” them from its surface. Also, you want to position your monitors in an equilateral triangle with your head and not too close to the corners of your room if possible. The tweeters should be at the same height as your ears as well.
Don’t worry about room acoustics and monitor placement too much at the start though: I once saw a video tutorial from a really accomplished drum ‘n bass producer who had speakers precariously balanced on his kitchen sink, so it’s possible to make good music in almost any situation.
I really do feel like it’s a good idea to stay away from headphones: not only are they a lot worse for mixing in general but they can be very dangerous to your hearing if used for long periods.
The DAWs of Perception
Right, now you’ve got the basic hardware together you need software called a DAW, which stands, ludicrously, for “Digital Audio Workstation”. This is just a pretentious term for what used to be called a “sequencer”. Some examples are Logic (which is Mac-only, even though it wasn’t always that way: thanks a lot for that one Apple); Ableton Live; Cubase; FL Studio and new up-and-comer Bitwig.
The DAW is the main “host” where your music will take place. You’ll use a lot of plugins (sometimes called VSTs on PC or AUs (Audio Units) on Mac) to generate your sounds and process them, so in a lot of situations the actual DAW itself becomes fairly moot.
A side-note: some people think that there is a difference in “sound quality” between DAWs: these people are either idiots or operating on a level that nobody really needs to concern themselves with. There really isn’t any meaningful difference in “quality” between the ones I’ve mentioned.
There’s tonnes of stuff on the internet about each one and you can try demos, which I recommend, but here are my completely unfounded opinions about them:
Logic
This is probably the best “standard” sequencer. Really powerful editing; unbelievably great internal plugins; super sampler; you can learn loads of shortcuts that seem to make everything happen instantly. A really good Logic user can probably do a lot of things faster than someone who is fluent in another DAW.
Ableton Live
I use this and love it. Most people get put off by the fact that it starts up in the weirdo “Session View” that’s intended for live performance and “jamming” (whatever that is). Ignore that and go straight for the “Arrangement View”: this is just laid out like a standard sequencer.
Live has a good coherent internal logic and is very flexible. A lot of the internal plugins and effects are good (particularly the reverbs and delays) and Ableton have a phenomenal relationship with their users. They take a lot of feedback and continue to progressively refine and enhance Live. MIDI editing is worse than Logic but that doesn’t really matter. I’d say it’s easier to work with loops and samples in Live but I’m sure some Logic users would disagree with me. You can’t go wrong with Live.
Cubase
I started out with this and tried it again recently; unfortunately I found it fiddly and annoying then quickly went back to Live. It doesn’t seem anywhere near as popular these days but quite a few pro musicians still use it.
FL Studio
FL Studio (used to be called Fruity Loops) is a bit of an oddball: the workflow seems different from most other sequencers. It does, however, have some really cool internal plugins now. A lot of younger producers seem to like it; I have a feeling that might be something to do with how easy it is to get on sites like the one that rhymes with The Pirate Hay. Give it a try if it seems to appeal to you: personally I don’t get it.
What Happens Next
From here, I’d recommend doing all of the tutorials that come with your DAW. Live in particular has a really comprehensive set that will take you through a lot of the basics.
Definitely have a play around with the built-in instruments as well: that’ll help to teach you about some different methods of generating sound. Here are some really basic notes that may help as well:
Analogue Synthesis
When you hear words like “warm”, “fat” and (possibly) “squidgy”, you have either found your way to one of the less salubrious parts of the internet or you’re talking about analogue synthesis.
Obvious examples: Moog hardware synths; “virtual analogue” software synths like the Novation V-Station
Digital Synthesis
Digital synthesis stereotypically sounds harsh, sharp or brittle. It often has a metallic quality, so a lot of “bell” type sounds come from digital synths. However FM (“frequency modulation” - one type of digital synthesis) is very versatile: it can even sound warm like an analogue synth. Finally, a lot of those huge drum ‘n bass / dubstep “reese” basslines are actually made with digital synths running through a lot of processing; Native Instruments FM8 is a particular favourite with producers like Skrillex and Noisia.
Obvious examples: Yamaha DX7 hardware synth; FM8 software synth
Wavetable Synthesis
Wavetable synths tend to sound quite 80’s and “soundtracky”: they are often capable of producing quite complex “cluster”-type sounds.
Obvious examples: PPG Wave hardware and software synth; Korg Wavestation hardware and software synth
Other Synthy Stuff!
Quite a few synths (like Native Instruments Massive or the infamous Access Virus) offer a lot of hybrid potential between different synthesis types. This kind of synth is capable of making some really huge sounds.
There are also other types of synthesis around (such as granular synthesis). Synthesis types that differ from the above tend to be better for very specific tasks: personally I wouldn’t worry about them too much until you’re familiar with the more standard types above.
Samplers are devices that, as you might expect, can take a small chunk of audio and manipulate it. Traditionally, they would then change the pitch of that bit of audio to match whatever note was being played into them via a MIDI keyboard. They’d also allow you to loop sounds, process them or chop them up into smaller bits and rearrange them.
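As an aside on how that traditional repitching works: classic samplers simply play the audio back faster or slower, which is why repitched vocals go “chipmunk”. The arithmetic is just the equal-tempered semitone ratio, sketched here in Python:

```python
# Classic samplers repitch audio by changing playback rate: each semitone
# up multiplies the rate by the twelfth root of two (equal temperament).
def playback_rate(semitones):
    """Playback speed multiplier for a pitch shift in semitones."""
    return 2.0 ** (semitones / 12.0)

print(round(playback_rate(12), 3))   # octave up → 2.0 (double speed)
print(round(playback_rate(-12), 3))  # octave down → 0.5 (half speed)
print(round(playback_rate(7), 3))    # a fifth up → 1.498
```

This is also why a sample played far from its original note sounds shorter and brighter, or longer and murkier: speed and pitch are locked together unless the sampler does fancier time-stretching.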
One useful thing you could do with a sampler plugin would be to import some drum sounds into it.
You could have a kick drum assigned to one note on your MIDI keyboard, then a hi-hat and snare assigned to two other notes, allowing you to tap in a basic rhythm on the keys.
In practice, samplers these days can do everything from allowing you to play an incredibly realistic sampled piano made of thousands and thousands of separate audio clips to making classic “cheap” 90’s house vocal effects. My advice is to just have a play with one, then see what else you can do by sticking audio clips straight into audio tracks within your DAW - you’ll soon figure out which method of manipulating audio suits which basic task.
For example, I use Live’s “Simpler” plugin for pretty much all of my drum sounds, whereas I tend to drag and drop loops into audio channels and then edit them there.
ROMplers (music tech is full of stupid words - I won’t even bother explaining this one) are basically big chunky libraries of sampled instruments. They tend to offer a fast route to getting good-but-conventional sounds. Spectrasonics make fantastic ROMplers; their Trilian bass instrument in particular is very handy.
ROMplers don’t usually offer the same editing potential as synthesizers but these days quite a few of them allow you to really tweak the preset sounds so as to make them almost completely unrecognisable. Spectrasonics Omnisphere, which basically is a combination ROMpler and synthesizer, actually exposes all of the raw sample material and the entire synthesis engine to the user, effectively giving it infinite potential.
Obvious examples: reFX Nexus; Spectrasonics Omnisphere
Modular synths are essentially just synthesizers where any component can be connected to any other component, irrespective of common sense. They’re good for creating weird and wonderful sequences where parameters get constantly buffeted and tweaked by each other.
Hardware or Software
A lot of people still seem very preoccupied with whether hardware or software is “better”. This attitude largely comes from a time when software synthesizers were rubbish and computers weren’t powerful enough to model complex analogue circuitry with any degree of plausibility, let alone authenticity.
These days, software analogue emulation in particular is really good: you have to be a pretty experienced audio person to tell the difference between a good emulation and the original hardware. However if you’re working at the absolute highest level there is still a good argument for using hardware in quite a few situations.
As a beginner, though, you really shouldn’t be worrying about this kind of thing: if you want to use hardware then go ahead; if you want to work entirely on your computer then ignore the crazy hardware snobs who talk endlessly about the “unpredictable” nature of hardware and the magical saturation of valves.
Twisting a real knob (OOH MRS WIGGINS! SPIN MY TOP HAT!) or plugging in a patch cord is always going to feel different from clicking a number with a mouse. Hardware synths in particular do compel you to work in a different way. Again, this difference can be quite subtle and is something to think about more when you get over the initial hump of figuring things out.
From my point of view, I started off with all hardware out of necessity (computer only doing MIDI!), then moved to an entirely software-based setup for a long time. I’ve done every soundtrack since Determinance entirely in software. Now, I’m starting to get back into hardware again: I currently have an Access Virus, MicroKorg, DX100 and Roland D550 in my studio. I like the immediacy and occasional weirdness of hardware; also, people often forget to model some of the weirder sonic characteristics of various synths (the horrible noise on the DX100 output, or the crazy aliasing on some of its sounds, for example). Ultimately, when you get to the point of thinking about your workflow, I think a mixture of hardware and software is ideal for most musicians: just an opinion.
A Critical Juncture
So you’ve got a basic setup and a way of making sounds, plus you have started to figure out your DAW. Now is a really good time to make your first basic tunes and play them to some sympathetic friends! The act of finishing a track for the first time is really thrilling, so don’t deny yourself the fun of playing around out of a fear that you don’t know what you’re doing, or that nobody will want to hear it: just go for it!
If you’re not a great keyboard player, don’t worry. You may suit a “tracker” style of production (where you manually enter note data by typing it in) better, so check out something like Renoise. If that’s not the issue, you may take comfort in the fact that a lot of electronic musicians are pretty bad at the piano: I’m certainly quite poor in comparison to most people who know what they’re doing. I would recommend just sitting down with a nice piano sound and playing whatever you want for thirty minutes or so every day. Figure out which notes played with the right hand go with which chords played with the left; look up some simple chord sequences and learn how to play those. I learned pretty much all of the useful bits of keyboard playing for composition by just improvising for hours and hours when I was supposed to be doing school work: I don’t think there’s any better way!
I have a friend who is getting into music: after learning the basics he started struggling with how to get his ideas into a workable shape. I wrote him some advice which I will regurgitate here; although it was intended to address his specific issues it may also help others who are trying to figure out structure and workflow…
Let's talk about structure.
Unlike a lot of electronic musicians, I tend to aim for roughly a "verse / chorus" structure. I actually tend to find that this is generally superior to a more freeform thing, not least because it allows you to write melodies that people remember, but also because it gives you some context for everything.
The ultimate proponents of verse/chorus were probably the KLF (who are entirely awesome). Their book The Manual is hilarious and describes the most brute-force, cynical approach to song-writing I've ever encountered. You should get it - it's really fun: http://www.amazon.co.uk/The-Manual-Have-Number-Easy/dp/1899858652
Here's their recipe for the Ultimate Number 1 Hit which I've adapted a bit:
- It has to have a compelling drum and bass groove running all the way through
- It has to have the following song structure
Intro (which contains an echo of the chorus, usually played on a softer or more basic sound)
Verse
Chorus
Verse
Chorus
Breakdown
Double-length chorus (sometimes you can switch up the main melody during this but you don't have to)
Outro (maybe echo the chorus again?)
- It has to have a melody you can remember with a 24-hour gap from the last time you listened to the track. You also have to be able to whistle it in the shower!
If you want to keep things simple you can ditch the second chorus, or even ditch the second chorus and breakdown! All my _ensnare_ chiptune tracks, apart from two or three, are based around that kind of structure.
Dance music, on the other hand, tends to do this:
Beat intro with bass
Beat outro with bass
That kind of thing can be useful for evolving electronic tracks. There's a great dance producer called Ben Gold - I'd recommend giving some of his stuff a listen - his use of structure and the way he builds things up are both fantastic. I like a track called Where Life Takes Us which you can get from Beatport.
A lot of electronic stuff is about challenging structure, of course, but I feel like doing a track with one of these paradigms will actually help you initially and at least help you work out what you're battling *against*, if that's what you want to do!
Bearing that in mind, here's how I tend to approach writing (timings are very, very approximate - it mostly doesn’t ever go this smoothly; “Day 1” could sometimes take a week!):
DAY 1
1. Write an 8- or 16-bar chord sequence for the "chorus" or "main riff" section, usually on a piano or pad sound
2. Make an attempt at a main melody or riff - both musical pattern and sound design - I usually won't end up using this for the final version
3. Start working on drum sounds - kick, snare/clap and main hi-hat (usually an open hi-hat); loops
4. Do a drum pattern that I'll be using mostly throughout the whole track
5. Stick this drum pattern under the melody and chord sequence to see how it sounds
6. Bass for the chorus - pretty much always the root note of the chord in dance music (it helps to figure out what chord you're playing for this - more on this later if needed)
7. Complete the main riff section
That'll usually be at least a day's work and makes a convenient place to stop.
DAY 2
1. Listen to my main riff or chorus again and decide it's rubbish - ignore it for a while
2. Take my drums and bass and start working on an 8 or 16 bar verse section with them
3. Either use the same chord sequence for this, stick on the first chord of the progression I've written, or write something else based around that first chord
4. Write a new verse melody if the track needs one; some dance tracks are just kind of random noises in the verse...
A note on verses: the KLF once postulated that it absolutely literally does not matter at all what happens in a verse as long as the chorus is good. There is a pop record called "Bring Me Edelweiss" which was a huge hit in the late 80’s based on this principle. If I remember correctly, the verse consists entirely of atonal yodelling and chainsaw noises, then the chorus (which was a total Abba rip-off of, I think, "S.O.S.") was a kind of tight, repetitive pop thing! It was still a hit despite the daft verses...it really doesn't matter what you do in a verse.
5. Try and get everything to stick together - listen to the verse and chorus to make sure everything sounds good
6. Get annoyed with my original chorus and re-write it loads of times, change the main riff sound, get annoyed again, listen to other tracks, give up in frustration
I really agonise over main melodies, particularly in dance stuff. I don't know anyone else who has this difficulty but I do get consistently praised for my melodies and I like to think that's because of the amount of blood I sweat creating them!
DAY 3
1. Try and finish that damn main melody - if the chorus hasn't stuck in my head, be brutal and replace it.
2. Write the breakdown / build up section
3. "Track out" the track - put everything in the right places so it can be played start to finish with the right structure
4. Write an outro and intro - frilly bits, basically
5. Stick some FX in so it sounds roughly like a proper record
STUFF I DO AFTER THAT
From then on it's all about mixing and flow. I spend a lot of time trying to get the mix right and trying to give the track momentum all the way through: making sure things like build-up sections work well; replacing any crappy sounds, especially the FX I've used already; and doing any "full-mix" FX, like filtering the whole thing down, or adding a little phaser on the mix in fill sections (usually only in dance records).
Basically I would say try and do the following:
- Get your "chorus" (doesn't have to be a super catchy pop thing, can just be the "main bit") musically perfect first; you don't even have to write this in your DAW - I have written a lot of chord sequences and main melodies just on a piano nowhere near a computer - this can be a great way of doing it
- Write the verse
- Get the groove going
- Figure out how it fits together
- Do the other stuff
Hooray, you can make a song! Maybe you even made something that someone else said was good, or you can hum the chorus to yourself in the shower. Excellent.
This is the point where you will most likely think one of the following things:
- My song doesn’t sound as loud as other people’s songs
- My song sounds muffled, or weak, or weird when I compare it to other things
Resolving those problems is what mixing is all about: making every element of your track do its job, creating a pleasant overall sonic picture.
Mixing is very difficult: learning to mix competently is, in my opinion, much harder than learning to play an instrument competently. CONTROVERSIAL. As with most difficult things, some people grasp it faster than others, so don’t worry if you struggle at times: I don’t know a single musician who finds mixing tracks a total breeze.
One important point before we go any further: mixing is highly subjective. You might love the subtle tones of an indie rock record compared to the ear-bleeding, shrieking slamathon of a Skrillex wig-out; that’s fine. What matters is that you have the ability to mix your tracks in a way which is appropriate for them.
Here are some basic mixing tips…
Mixing starts with the sounds you choose. If your lead sound is a thin, high-pitched and reedy string sample, no amount of EQ or compression will make it sound like a big chunky fat synth tone.
Get a professional track you like and run it through a spectrum analyser plugin. Don’t worry about trying to understand what is going on particularly: just look at the shape; a well-mixed track will cover a wide distribution of the entire frequency spectrum.
You need to choose sounds that fill specific areas and complement each other. This takes experience, but listening carefully to the kinds of sounds that are used in tracks you like is also very important.
Most really “big” sounds are made up of layers, especially in the case of things like dubstep basslines. These basslines usually combine a fairly quiet super-low “sub” bass, typically a sine wave or filtered sawtooth wave, with a very loud, compressed, harsh mid-range bass tone. So when people say “massive bass”, they actually often mean “massive mid-range”!
A slight tangent there but my main point remains: pick good sounds to start with and mixing will be a lot easier.
Once you have achieved sounds you like for each part being played in your track, you then need to mix them. You need to start this process by setting their levels relative to each other using the mixer section of your DAW.
Every sound you put in your track should be on a separate channel, allowing you to control its volume independently. The sounds will all, eventually, be “summed” into the “master channel” and there will usually be some kind of meter there to show you the realtime overall peak volume level of your track. This will be expressed as a negative value in decibels below 0dB: the point at which your sound card would be overloaded by excessive volume.
For reasons that will become apparent later, it’s a good idea not to let this level exceed -3dB. In the digital domain, making tracks that “peak” at -6dB is also completely fine, and this is what I tend to do just so that I absolutely don’t have to worry about the signal getting too loud. The gap between the peak of your mix and 0dB is called “headroom” and it’s basically space you can use later on during the final stages of getting your track ready for other people to hear it (“mastering”).
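If decibels feel abstract: a dB figure is just 20 times the log of an amplitude ratio, so headroom numbers convert to plain amplitudes with one line of arithmetic. A quick sketch (this is the standard convention, nothing DAW-specific):

```python
import math

# Decibels are 20 * log10 of an amplitude ratio, relative to full scale (0dB).
def db_to_linear(db):
    return 10.0 ** (db / 20.0)

def linear_to_db(amplitude):
    return 20.0 * math.log10(amplitude)

# Peaking at -6dB means your mix only reaches about half of full-scale
# amplitude - the other half is your headroom for mastering:
print(round(db_to_linear(-6.0), 3))   # → 0.501
print(round(db_to_linear(-12.0), 3))  # → 0.251
```

This is also why -6dB is often casually called “half volume” in the amplitude sense, even though it doesn’t sound half as loud to your ears.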
In my tracks, I tend to have the kick and snare drums hitting at around -12dB and then mix everything else around that. Some people would consider that a bit low, but it absolutely doesn’t matter; once you’ve left enough headroom, then it’s all about how the mix sounds and the relative levels of everything.
As well as setting the level of each sound, you’ll need to think about panning and how to space things out in the stereo field.
A good rule of thumb is that lower frequency sounds should have less stereo content and be panned to the middle, whereas higher frequency stuff, like hi-hats or bright pad sounds, can be panned more extremely or have stereo spread effects applied.
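For what it’s worth, here’s roughly what a pan knob is doing under the hood. This sketches the common “equal-power” pan law; real DAWs differ in the exact law they use (some let you choose), so treat it purely as an illustration:

```python
import math

# Equal-power panning: pan position runs from -1 (hard left) to +1 (hard
# right). Using cos/sin keeps the total acoustic power roughly constant
# as a sound moves across the stereo field, avoiding a volume dip or bump.
def pan_gains(position):
    """Return (left_gain, right_gain) for a pan position in [-1, 1]."""
    angle = (position + 1.0) * math.pi / 4.0   # maps to 0 … pi/2
    return math.cos(angle), math.sin(angle)

print(tuple(round(g, 3) for g in pan_gains(0.0)))  # centre → (0.707, 0.707)
print(tuple(round(g, 3) for g in pan_gains(1.0)))  # hard right → (0.0, 1.0)
```

Note the centre position is about 0.707 (-3dB) per side rather than 1.0: that’s the “pan law” compensation you sometimes see mentioned in DAW preferences.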
Stereo spread can be a bit fiddly: it essentially revolves around delaying one half of the stereo signal, for example making the right channel’s audio play a bit later than the left. This effectively tricks your brain’s ability to locate sound.
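That delay trick (often called the Haas effect) is easy to sketch: offset one channel by a handful of milliseconds. A toy version operating on a plain list of samples, with hypothetical numbers:

```python
# Haas-style stereo spread: delay one channel by a few milliseconds.
# The brain reads the tiny inter-channel delay as width and position.
def spread(mono, sample_rate=44100, delay_ms=15.0):
    """Return (left, right) where the right channel lags behind the left."""
    delay_samples = int(sample_rate * delay_ms / 1000.0)
    left = list(mono) + [0.0] * delay_samples   # pad so lengths match
    right = [0.0] * delay_samples + list(mono)  # delayed copy
    return left, right

# Tiny numbers so the effect is visible (real audio would be 44100 Hz):
left, right = spread([1.0, 0.5, 0.25], sample_rate=1000, delay_ms=2.0)
print(right)  # → [0.0, 0.0, 1.0, 0.5, 0.25]
```

One caveat worth knowing: delay-based spread can cause comb-filtering if the track is later summed to mono, which is one reason to use it sparingly on important sounds.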
Two good tools for experimenting with stereo spread are Waves Doubler and the stereo section of iZotope Ozone. Both are quite expensive, but unless you have Logic’s handy built-in Spread tool you can find yourself a bit stuck for options: a lot of the free tools for stereo spread seem to be totally rubbish, and I’ve never been sure why that is!
After sound selection and setting your levels, I’d argue that EQ is the most vital thing you will be doing.
EQ is about tweaking the frequency composition of a sound, boosting or removing existing frequencies. There’s a lot to say about this but here is probably the most important mixing tip of the lot:
- For every sound that isn’t bass, roll off the low-end frequencies below about 100-120Hz
If you don’t do this, your track will sound “muddy”, as the low end will be cluttered up with all the weird low bits of other instruments that you don’t need. Keep the low part of the track clear and you can use targeted bass and kick drum sounds to do the “heavy lifting” with much more ease.
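That roll-off is just a highpass filter, which every DAW EQ has built in. To demystify it, here’s the crudest possible version in Python: smooth the signal (a lowpass), then keep what’s left over (a highpass). Real EQ roll-offs are much steeper than this one-pole toy, so it’s only to show the idea:

```python
# The simplest lowpass is a one-pole smoother; subtracting the smoothed
# signal from the original leaves the high frequencies - a crude highpass.
def one_pole_split(samples, coeff=0.2):
    """Split a signal into a smoothed low band and the leftover high band."""
    low, high, state = [], [], 0.0
    for x in samples:
        state += coeff * (x - state)   # smoothed copy = low band
        low.append(state)
        high.append(x - state)         # leftover = high band
    return low, high

# A constant ("all low frequency") signal ends up almost entirely in the
# low band, so the highpass output dies away towards zero:
low, high = one_pole_split([1.0] * 20)
print(round(low[-1], 3), round(high[-1], 3))
```

The sum of the two bands always reconstructs the original signal exactly, which is why cutting the low band from a non-bass sound doesn’t damage anything you wanted to keep.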
You’ll find a lot of frequency charts around the internet, purporting to explain all of the different frequency regions. Such charts are fine as a general guide, but here’s a much better way of figuring stuff out:
- Get a track you like
- Turn the volume of that track down a bit so the output of your sound card isn’t “in the red”, “clipping” or “distorting” when you do the next steps
- Put an EQ plugin on it
- Make a loud boost (say 6db) with a fairly narrow “Q” or “frequency range”
- Sweep that boost around
- Pay attention to how the boost sounds at different frequencies
This will really give you a feel for which frequencies do what in a sound. As is traditional with discussion of EQ, let’s end with some of my misguided subjective opinions on the various frequency ranges you will encounter:
General tip - boost wide frequency ranges; cut narrow frequency ranges. This is a super tip that has helped me a lot.
60Hz - I tend not to mess with this too much - this kind of area is very subby and will only really show up on bigger sound systems
120Hz-200Hz - This is the “magic frequency” area for the low end of snares. Putting a tom sound with a snare and then boosting somewhere in this area can create quite a “heavy” or “punchy” feel.
400Hz - A great (and very funny) engineer I used to work with described this region as “gravy”. “Ooh, we need some gravy” he would growl, apparently at random. This area can really warm up a sound if you boost it gently.
1kHz - This is nice for “presence” - it kind of pushes a sound forwards without the harshness of higher frequencies. Too much here though and it’ll sound tinny. Some sounds have an annoying weird spike around this area that you can take out with some surgical EQ.
8kHz - Quite a harsh area - you have to be careful around here. Don’t just always chop it out though as your sounds will lose their edge.
10kHz+ - The “air” area - don’t boost too much or things will start getting fluffy
20kHz - You often don’t need much up here as you start to get into “beyond common sense and the range of most people’s hearing” so there might be some cuts you can make to avoid cluttering up your bandwidth with pointless frequencies
As well as my “EQ sweep” method described earlier, you could try using a highpass filter to completely chop out low frequencies from existing tracks, so you can listen to the mid-range and treble in isolation. Then, try a lowpass filter and listen to just the low-down bass and sub region - you’ll soon figure out what is doing what!
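Incidentally, the EQ bands you’ve been boosting and sweeping are, under the hood, usually tiny recursive filters. Here’s a sketch of the standard “peaking” band from Robert Bristow-Johnson’s widely used Audio EQ Cookbook (a generic reference design, not any particular plugin’s implementation), checking that a 6dB boost really does measure as +6dB at the centre frequency:

```python
import cmath, math

# RBJ "Audio EQ Cookbook" peaking-band coefficients. Most DAW EQ bands
# are some variant of this two-pole, two-zero ("biquad") filter.
def peaking_eq(f0, gain_db, q, sample_rate):
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / sample_rate
    alpha = math.sin(w0) / (2.0 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]  # feedforward
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]  # feedback
    return b, a

def gain_at(f, b, a, sample_rate):
    """Magnitude response of the biquad at frequency f (z = e^{-jw})."""
    z = cmath.exp(-2j * math.pi * f / sample_rate)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)

# A 6dB boost with a fairly narrow Q at 1kHz: +6dB right at the centre,
# falling back towards unity gain away from it.
b, a = peaking_eq(1000.0, 6.0, 4.0, 44100.0)
print(round(20 * math.log10(gain_at(1000.0, b, a, 44100.0)), 2))  # → 6.0
```

You don’t need any of this maths to use an EQ, of course, but it explains why the “Q” control exists: it sets how narrow the bump around the centre frequency is.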
Compression is a bit weird and hard to understand.
Firstly, the term “compression” when used in audio engineering normally means “dynamics processing” (eg running a sound through a device designed to mess around with its amplitude over time) and not “data compression” (eg making an MP3).
Try this: take an audio sample, like a snare drum, and look at the waveform. You’ll probably see a big spike near the start, usually referred to as a “transient”, then a longer section that tapers down as the sound gets quieter.
Let’s say we wanted to make this snare drum sound louder. Well, the most obvious thing to do is to turn up the volume. If we did that however, the initial transient might get too loud: there’s only a finite amount of amplitude that a sound can have before it starts to distort or “clip”, so that big spike could easily take us over the top for a moment, which we wouldn’t want as it might result in a loud, harsh click instead of the nice snare sound we’re after.
So, here’s the thing - before we turn up the volume, we want to make that really loud part of the sound quieter - we want to “compress” it. Once that’s done and then we do our volume boost (often referred to in this context as “makeup gain”, “gain” being just one of many music-techy words for “volume” or “amplitude”), we will have a sound which is, on average, much louder than the one we had before. It’ll hit with more impact: rather than just being a loud snap with a quick decrease in volume right after, it’ll be one solid block. Think of how a speaker cone moves when very loud sound is passing through it: a short sharp sound moves the speaker just for a second, but a sound which is very heavily compressed would cause the cone to give the air in front of it a heftier shove.
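That “squash the peak, then boost everything” process can be sketched in a few lines. This toy version ignores attack and release times (which real compressors use to smooth the gain changes over time), so it’s only for building intuition; the threshold, ratio and makeup-gain numbers are arbitrary:

```python
# Toy compressor: any sample whose level is above the threshold gets pulled
# back towards it by the ratio, then the whole signal gets makeup gain.
def compress(samples, threshold=0.5, ratio=4.0, makeup_gain=1.6):
    out = []
    for x in samples:
        level = abs(x)
        if level > threshold:
            level = threshold + (level - threshold) / ratio
        out.append(makeup_gain * level * (1 if x >= 0 else -1))
    return out

snare = [1.0, 0.6, 0.4, 0.2]   # big transient, then a quick taper
print([round(v, 3) for v in compress(snare)])  # → [1.0, 0.84, 0.64, 0.32]
```

Notice the shape: the transient barely grows (1.0 → 1.0) while the quiet tail gets a lot louder (0.2 → 0.32), which is exactly the “solid block” effect described above.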
I won’t go into the depths of how to control a compressor here - there are plenty of tutorials for that - I will just say one thing: pretty much all music you listen to (other than classical music) will make very significant use of compression; you are so used to hearing compression everywhere that a track where it’s incorrectly implemented (or which lacks compression entirely) will just sound very thin and weak.
Effects like reverb (basically the sound of spaces: imagine singing in a big empty cathedral and the effect that would have on your voice) and delay (what we traditionally would call “echo echo echo echo”) are really common and often fairly easy to use. My one tip here is to think really carefully about when you use reverb in particular: the amount of reverb you use can really define a mix. Effects like these tend to make the overall “space” of a track sound bigger but diminish its “punch” and impact, so think about that trade-off when you’re using them.
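Delay in particular is simple enough that a toy version fits in a few lines. Here's a Python sketch of a basic feedback delay, where each repeat is quieter than the last; real delay plugins add filtering, stereo spread and tempo sync on top, and the parameter names here are my own invention.

```python
def delay(samples, delay_samples, feedback=0.5, tail_echoes=4):
    # Feedback delay: the output is the dry signal plus copies of itself
    # spaced delay_samples apart, each one quieter by the feedback
    # factor ("echo echo echo echo").
    out = list(samples) + [0.0] * (delay_samples * tail_echoes)
    for i in range(delay_samples, len(out)):
        out[i] += feedback * out[i - delay_samples]
    return out
```

Feed in a single click and you get repeats at 0.5, 0.25, 0.125... of the original level, each one spaced `delay_samples` apart.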
As I hope I’ve shown, 99% of the sound of your track is down to how you mix it. It’s mostly about the sounds you choose, your skill at combining them and then your deftness with various tweaking methods.
Mastering is the process of taking a “mix” (an audio file where all of the individual channels are set at the correct level and have had effects applied to them) and turning it into a “master” (a production-ready audio file which is ready to be burned to a CD, turned into an MP3 or simply distributed as a WAV).
A lot of people still believe that mastering can make any track sound great. The first question they will ask an artist will be “how do you master your stuff?” The assumption is false: tracks only sound good because they are *mixed well*; these people really need to be asking questions about mixing instead.
So what is mastering? For some artists these days, mastering is literally just about applying a limiter or “maximiser” to the master channel of the track, then creating the final audio file. That’s it! A limiter is basically a very extreme kind of compressor (in fact, it’s technically a compressor with infinite ratio...but that’s not very interesting): it’s intended for taking a complex sound and making it generally seem a lot louder.
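That “infinite ratio” idea can be sketched in a few lines of Python. This naive version just clamps samples at the threshold and applies makeup gain so the ceiling sits at 0 dBFS; a real limiter like the L2 uses look-ahead and release curves to avoid the distortion this hard clipping would cause, so treat it purely as an illustration of the maths.

```python
def db_to_linear(db):
    # Convert decibels to a linear amplitude factor.
    return 10.0 ** (db / 20.0)

def limit(samples, threshold_db=-6.0):
    # Brickwall limiting as an infinite-ratio compressor: no sample may
    # exceed the threshold, then makeup gain brings the ceiling up to 0 dBFS.
    threshold = db_to_linear(threshold_db)
    clamped = [max(-threshold, min(threshold, x)) for x in samples]
    makeup = 1.0 / threshold
    return [makeup * x for x in clamped]
```

Everything above the threshold is flattened, and everything below it gets boosted by the makeup gain: the whole mix seems a lot louder.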
Sometimes a special kind of EQ is applied to the whole mix during mastering; sometimes a little bit of compression is added before limiting to give the entire mix a certain character.
Don’t worry about this yet: I think those kinds of things can do more harm than good when you’re a beginner. I recommend just using something simple like the Waves L2 limiter, bringing the threshold control down until you achieve a gain reduction of somewhere between 3 and 6 dB: this will enable you to hear the effect of “brickwall” limiting and give you a context for future work. If you can’t afford Waves L2 then use this (http://www.yohng.com/software/w1limit.html) which is free. Quite a few dance producers use Izotope’s Ozone: this is a very powerful suite of tools but I’d probably steer beginners away from it as you can do quite a bit of damage to your track if you don’t know what you’re doing!
I would also recommend taking your mixes (without a limiter on them) to a professional mastering engineer at least once early on in your musical career and talking to them about what they are doing. Mastering engineers have some of the best ears in the audio world and will have heard every kind of track going: they will be able to give you pointers on your mix as well as letting you hear it on an amazing sound system intended to reveal every possible mistake you have made. You will find it easy to make friends with your mastering engineer as you will have left 3 to 6 dB of headroom on your mix: mastering engineers love this because it gives them “space to work”! I unreservedly recommend Finyl Tweek if you’re in the UK: they mastered the Frozen Synapse soundtrack as well as some of my earlier dance stuff.
Some people will make their entire track with nothing on the master channel, then add the limiter at the very end when everything is done. Others will mix “into the limiter”, having it on almost from the start of production and enabling them to hear a much closer representation of the final track as they go. This is down to personal preference. I’ve done both at various different points: I’d say that mixing into a limiter leads to more bad habits overall; I’ve probably made more mistakes mixing into a limiter than I have mixing without one.
Time for a Break
There’s a huge amount of technicality to absorb there, so don’t worry about taking it all in to start with.
The most important things with music in general are perseverance and honesty. That’s why a lot of really annoying musicians will say stuff like “it doesn’t matter what equipment you use” when asked technical questions.
A lot of what you do when you start will be rubbish; when you think you’ve got quite good, unfortunately you will probably get knocked back a lot and have to calibrate your expectations.
Always play your music to people and get their feedback; keep comparing your mixes to other mixes you like and asking questions. If your bass sounds weak compared to the bass in another track, there will be a good reason for it. 9 times out of 10 the reason will be sound selection, as I mentioned before, but sometimes there will be a technique involved that you can pick up.
Let’s Talk About Me
I’ve tried to keep this as general as possible but I do still get asked very frequently about my own setup, so here’s a kit list:
Audio Interface
Native Instruments Komplete Audio 6
Synths / ROMplers etc.
Korg Legacy Collection
DCAM Synth Squad (especially Amber)
Almost every East West sample library ROMpler
Voices of Passion (this does almost every vocal sound on the Frozen Synapse soundtrack)
Soulsby Synths Atmegatron (soon!)
Effects and Stuff I Use a Lot
Waves Renaissance Bass
Izotope Ozone 5
Arts Acoustic Reverb
Fabfilter Collection, particularly the EQ
Sample Pack Producers I Like
Freshly Squeezed Samples
The Atom Hub Kontakt stuff is crazy awesome as well
For nervous_testpilot, I wanted to find a sound that combined a load of the things I liked: abstract late-90s electronica like Plaid, Boards of Canada and Ulrich Schnauss; the more abrasive or dissonant sounds of Autechre, Venetian Snares and Amon Tobin; the emotional impact of uplifting trance. If that sounds like a bit of a tall order then it definitely is! I see it as a huge challenge to even slightly live up to any of my influences.
Sometimes I write genre music, like my _ensnare_ chiptune house side-project or my more traditional trance releases (for example Holding Fire from the Frozen Endzone soundtrack). This kind of stuff is more like trying to solve a very specific puzzle than it is about self-expression in some ways: the music emerges from a set of quite strict rules.
At the moment, I’m moving further away from that to write stuff that doesn’t necessarily conform that well to any particular standards. It’s about aiming for a sound, structure and general mix that I enjoy listening to, really exploring my own personal preferences apart from the expectations of other people.
It’s really important to hold on to what motivates you as a musician, even if you’re working in a commercial environment. So many times we get told what we can and can’t do; some of that has a good basis but ultimately every great musician is something of a rule-breaker. Pushing your own peculiar brand of music is the most interesting thing you will ever do creatively so I think it’s worth pursuing.
I’ll be holding on to the complex layering and melodic content of my earlier work: I don’t really see that changing. I do like a lot of music which is a bit more “chordal” in nature rather than being based around more hooky single-note melodies, and I might do some stuff a little bit more like that in the future, but at the moment I’m sticking with the tunes!
In terms of technology and the way I’m using it, as I mentioned before I’m getting a little bit more into hardware and programming my own sounds again. I find that I want to hear things that are slightly awkward and unusual; dance music in particular has really got itself into a serious rut with the current crop of big room house tracks that use an extremely limited palette, so I’m listening to older electronica again and trying to figure out how to make that kind of sound work for today’s listeners.
I’ve found the response to all my projects massively encouraging: with the Frozen Endzone soundtrack being fully completed and released soon, as well as a new _ensnare_ album out next week and another nervous_testpilot album just started, it’s a really good time for me.
I hope you’ve found this information useful and also that you’ll check out my music if you haven’t before.
@mode7games on Twitter.
You can also ask me questions here ask.fm/PaulMode7
Now, good luck with your stuff!