Spider Robinson once wrote that we live in an amazing time, when the reproduction of music—with perfect fidelity—is possible. Well, it’s possible, but it’s a hell of a lot of work.
I was riding back from rehearsal this afternoon, when it occurred to me that a lot of what goes on in the studio is, to musicians, a black box: In goes music, out comes a beautiful recording.
Those of you who are here looking for an editor for your book? I’m taking off my “editor” hat for this article, and putting on my “sound guy” hat. (I’m actually wearing a green bandana and headphones while writing this.)
I’m not trying to talk down to anyone in this article; I’m just keeping it accessible, and assuming very little. Nevertheless, if you’re a musician with a hair-trigger sensibility, feel free to skim past the parts that are “beneath” you. (And please mention this when you call to schedule a session so I can give you my special “discount” rates.)
If someone mentions recording the basic tracks, they’re probably talking about tracking. All those videos you’ve seen where the singer makes funny faces in front of an impressive-looking microphone, while the rest of the band is playing in air-conditioned glass cubicles behind him? This is probably called tracking because the musicians are laying down the basic tracks of the song. Audio engineers can be depressingly literal.
The goal is to get a clear, clean signal. In comparison to the shaped audio you’ll hear later on, the raw tracks can sound a little dull, maybe even lifeless; that’s by design. We want the audio to be representative of the sound source, and it should be recorded at a good volume, but not so loud that it distorts or clips. (Translation: “sounds like crap” because the singer was singing too close to the mic, or because the dumb engineer put the microphone too close to the singer.)
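If you’re curious what clipping actually is, here’s the whole idea in a few lines of toy Python (made-up sample values, not real audio): digital samples live in a fixed range, and anything louder gets flattened at the ceiling, permanently.

```python
def record(samples, ceiling=1.0):
    """Simulate a digital recorder: anything beyond the ceiling gets clipped flat."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

quiet_take = [0.2, -0.3, 0.5]   # healthy level: survives intact
hot_take   = [0.8, 1.7, -2.4]   # singer ate the mic: peaks get squared off

print(record(quiet_take))  # unchanged
print(record(hot_take))    # [0.8, 1.0, -1.0] -- distortion you can't undo later
```

That’s why we record at a good level but leave headroom: once a peak is squared off, no amount of mixing brings it back.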
This is the stage where you should spend time like 2009 money. Get it all done right now: Place those microphones at the correct distances. Select the right kind of microphone, even. (They make an enormous number of types: Some are good at vocals, some are good at sitting in front of a screaming guitar amp, some will pick up a gentle acoustic guitar—or a sniffle in the next room.)
Guitarists: Tune those guitars. Singers: Drink a glass of water (or whatever) to keep your throat hydrated. Drummers, do… what you do.
Seriously, spend most of your time here. Listen to what you recorded, and maybe do another take or two. Save those takes, even the ones with mistakes. That data may come in handy later on, and storage is cheap.
In the olden days, musicians would show up in a studio and play into a microphone. You’d make the standup bass louder by having the musician scoot closer to the microphone. The trumpet player is too loud? Dude, back up a little bit and play facing the wall or something. (Go back even earlier, and they recorded the entire band straight to vinyl. No room for mistakes!)
Multitrack audio allows us to record separate parts at different times—say, drums, bass, guitar, and vocals—and then mix them all together later. What if you need to add a harmonica? A piano? A kazoo? Rude noises?
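The whole trick of multitrack recording fits in a few lines: each track is its own stream of samples, and the mix is just a weighted sum, so you can change one instrument’s level without touching the others. A toy sketch (invented numbers, nothing like a real DAW):

```python
def mix(tracks, gains):
    """Sum several tracks sample-by-sample, each scaled by its own gain."""
    return [sum(g * t[i] for t, g in zip(tracks, gains))
            for i in range(len(tracks[0]))]

drums = [0.5, -0.5, 0.5]
bass  = [0.2,  0.2, -0.2]
vocal = [0.1,  0.3,  0.1]

# Vocal too quiet? Turn up just that one gain and re-mix. Nobody re-records anything.
mixed = mix([drums, bass, vocal], gains=[1.0, 1.0, 2.0])
```

Adding the kazoo later is just one more list in the pile.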
In an ideal world, every musician has a band. And it’s a band with enough musicians to play all the parts live in the studio—perfectly, every time. (They also have the patience of saints, and infinite time for rehearsing. Maybe all those musicians have their own bands as well.)
Overdubbing can be part of the tracking stage, where you lay down a vocal harmony. But you’ll want a specific dubbing session to, for example, add a string orchestra or maybe get a good guitar solo.
Maybe you planned the arrangement that way. Maybe you came up with a great idea the next day after the session, sitting on the toilet at work. (Remember, in an ideal world, musicians only have stimulating day jobs.)
Say the bass player played a C# that just sounds wrong. If there’s a single bad note in an otherwise awesome take, it’s possible to snip out a good note from somewhere else and paste it in, obliterating that horrible, horribly played C#. (Nobody listens to the bass track, anyway, but we’re perfectionists here.) It’s sometimes quicker than getting the absolutely perfect take.
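Digital editing really is cut-and-paste on sample data. Here’s the splice as a toy Python sketch (pretend each number is one note’s worth of audio):

```python
good_take = [0.1, 0.2, 0.3, 0.4]
bad_take  = [0.1, 9.9, 0.3, 0.4]   # position 1 is the horrible C#

# Paste the good note over the bad one; everything around it stays put.
fixed = bad_take[:1] + good_take[1:2] + bad_take[2:]
```

In a real session the “positions” are sample ranges you find by ear and by waveform, but the operation is exactly this.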
This can be a minor part of mixing, or maybe it’s a step of its own if you have a lot of edits. (If you record everything live in the studio, that makes it harder to do this, but it’s still possible.)
The vocal’s too quiet? The drums sound terrible? The keyboard has this weird spacy noise going down? Now’s the time to fix this stuff. (It’s all the tracking engineer’s fault, of course, and nothing to do with that singer who eats the microphone.)
But we’re using multitrack audio, so we can fix that. Sorta. (If not, we may be able to keep the listener from noticing the problem.) We can turn up the vocal track, add some bottom end to the bass drum (like the kids say, boom boom fucking thump); maybe run the keyboard track through a de-space filter.
Let’s maybe add a little reverb on the vocal, and just a smidge on the other instruments, enough to make them all sound like they’re in the same room.
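At its simplest, reverb is a pile of delayed, quieter copies of the signal stacked on top of itself. A single delayed copy—a bare echo—looks like this in toy Python (a real reverb plugin layers thousands of these):

```python
def echo(samples, delay, decay):
    """Add one delayed, quieter copy of the signal onto itself."""
    out = list(samples) + [0.0] * delay   # leave room for the tail
    for i, s in enumerate(samples):
        out[i + delay] += s * decay
    return out

dry = [1.0, 0.0, 0.0, 0.0]                # a single "clap"
wet = echo(dry, delay=2, decay=0.5)       # [1.0, 0.0, 0.5, 0.0, 0.0, 0.0]
```

Give every track a smidge of the same echo and they start to sound like they share a room.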
Now things are sounding good. So what’s left to do?
You’ve got eight mixed songs, and they all sound good in the studio. But they’re, well, quieter than the other songs on your iPod. They’re also not working with each other. What’s going on?
A good mastering engineer will take those tracks and make them sound good together. Making them “louder” is part of the process. Adding some EQ so they go together, sonically. Maybe some echo?
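The “louder” part, in its very crudest form, is normalization: find the loudest peak in the song and scale the whole thing up until that peak hits the ceiling. (Real mastering leans on compression, limiting, and loudness metering; this toy Python sketch is just the core idea.)

```python
def normalize(samples, ceiling=1.0):
    """Scale the whole track so its loudest peak sits right at the ceiling."""
    peak = max(abs(s) for s in samples)
    return [s * ceiling / peak for s in samples]

quiet_mix = [0.25, -0.5, 0.1]
loud_master = normalize(quiet_mix)   # peaks now reach 1.0: [0.5, -1.0, 0.2]
```

Every song on the record gets brought into the same neighborhood, which is a big part of why they suddenly “go together.”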
This is hideously oversimplified, and a lot of these stages get blurred together, but these are the basic steps. I’m going to keep this page going as a living document, for use when someone asks me a question.