Our current analogy for how things work is that we have a bunch of guys running around in the circuit, electron guys. They can run down hallways—wires. We can narrow the hallway or put junk in it, which gets in the way and makes it harder for them to pass through a certain point in the circuit, and we call that a Resistor. We can add a Capacitor, which is like a room into which the guys run, get stuck, and then run out while the room on the other side of a partition fills up. We have an Inductor, which is a very crowded hallway that the electron guys have trouble getting through because it's full of other electron guys—kind of the opposite of a capacitor.
Do you notice that all of these components get in the way and impede the travel of the electron guys? Keep that in mind, and that word "impede."
We have to amend our analogy a bit. Guys running around makes it seem like the electrons are whipping up and down empty hallways. Not really. It's more like they're packed together shoulder-to-shoulder, and they give each other a push or a poke. They basically stay still, but they push each other. One guy pushes the next guy that pushes the next. That push travels through the electrons at just below the speed of light. That push is like a message or a signal. The electrons are all basically playing "telephone," transmitting a signal around the circuit.
The push they give each other can be weak, like a little love tap, or strong, like a punch in the head. We can think of this as voltage. The potential strength of the signal (push) is Voltage.
Picture that the power source sends out the initial push, and let's say it is strong. Now, it might go through the electrons and keep its strength, but things could also happen to weaken that push, or perhaps add power to that push. The power source might send a low voltage push, too. There are all sorts of possible voltages, and voltage is what sets the force of the electrons pushing each other.
But there is something else to consider. Voltage is how powerful a push could be, but it also affects the number of guys involved in the pushing. High voltage tends to get a lot of guys pushing. Lower voltages tend to get fewer guys pushing. But we can also have a lot of guys pushing, or very few.
The number of guys pushing is called Current. Do you see how current and voltage are different things, but very related?
What electrical circuits basically do is dick around with the force of the push and the number of guys pushing... or, said more maturely, electric circuits manipulate the signal by changing voltage and current.
So, the next question is, how do they change voltage and current?
A big changer of current and voltage is resistance.
Resistance does two things: it limits how many guys can push, and it makes it harder for them to push. So when the signal runs into resistance, the push gets weaker and fewer guys are pushing.
Long wires, like long audio cables running from amplifiers to speakers, offer more and more resistance as the wire gets longer. The push loses energy. Where does that energy go? It turns into heat.
Some materials are better at passing the push along, and they might even have more electrons available to push. Copper is good at this, silver even better. Wood sucks. Things that transmit the push along well are good Conductors. Things that don't transmit the push at all are called Insulators.
Size also makes a difference. Thick, good conductors—like the cables you see on telephone poles—have lots of guys transmitting a strong push. But try to squeeze that strong push through something thin that adds a lot of resistance, and you'll turn that signal into heat—this is how a heater works. And lightbulbs.
So, now we have Voltage, Current, and Resistance. That's the big three. And they all affect each other according to a formula called Ohm's Law. Uh oh! MATH!!!
V means Voltage (the strength of the push). I means Current (the number of guys pushing). R means Resistance (what gets in the way of the guys pushing).
SO...
The push is equal to how many guys are pushing, times what gets in the way of the guys pushing.
V = I x R
And we can flip this formula around:
The number of guys pushing equals how strong the push is, divided by how hard it is to push.
I = V/R
How hard it is to push equals how strong the push is, divided by how many guys are pushing.
R = V/I
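If you like seeing actual numbers, here's Ohm's Law as a quick Python sketch. The battery and resistor values are made up for illustration; plug in your own.

```python
# Ohm's Law three ways. V in volts, I in amps, R in ohms.
def voltage(i, r):
    return i * r      # V = I x R

def current(v, r):
    return v / r      # I = V / R

def resistance(v, i):
    return v / i      # R = V / I

# A 9V battery pushing through a 450 ohm resistor:
print(current(9, 450))   # 0.02 amps -- 20 milliamps of guys pushing
# Double the resistance and half as many guys get through:
print(current(9, 900))   # 0.01 amps
```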
NOW... all of this works just fine if the pushes are happening in one direction—if all the electron guys are facing in the same direction. It's easy. You just tap or punch the guy in front of you on the back, and then he does that to the guy in front of him, etc. This works fine with Direct Current, or DC. DC is when the push goes in one direction.
But audio signals switch direction. They're AC, Alternating Current. Current is the number of guys running around. Direct means they all go in one direction. Alternating means they switch direction.
So, now these guys basically have to stop pushing in one direction, turn and push in the other direction, and then turn back again, and if we want to hear that, they have to do it between 20 and 20,000 times a second. It's like an insane dance that keeps varying in terms of strength (the push, or Voltage), the number of dancers (Current), how hard it is to push (Resistance) and now, dancers getting confused and mixed up, and not knowing which way to push.
Imagine what a mess this could be: you're dancing along and suddenly everyone in the room turns in a new direction and you do too, but some guys can't turn fast enough before they have to turn back, because as frequency goes up, the changes of direction happen faster, and you end up slamming into that guy and you're both pushing on each other's face AND... you're also dancing in something that doesn't conduct well, like peanut butter, so it is harder to move, and maybe there's less room for dancers and fewer dancers available, or more.
Do you see how that confusion caused by time, which is frequency, adds to the difficulty?
Remember the word "impede" we used at the start? Do you see how Resistance and these timing issues sort of blend together? That's called Impedance. It's resistance but it takes into account the issues caused by Alternating Current and Frequency.
Now we have a different formula, still Ohm's law.
V = I x Z
Z is impedance.
It's like resistance but it's affected by frequency.
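To put numbers on "affected by frequency": the standard engineering formulas (outside our analogy, but worth seeing once) are Xc = 1/(2πfC) for a capacitor and XL = 2πfL for an inductor. Here's a quick Python sketch with made-up component values:

```python
import math

def cap_reactance(freq_hz, farads):
    """How much a capacitor impedes AC, in ohms. Falls as frequency rises."""
    return 1 / (2 * math.pi * freq_hz * farads)

def ind_reactance(freq_hz, henries):
    """How much an inductor impedes AC, in ohms. Rises with frequency."""
    return 2 * math.pi * freq_hz * henries

C = 1e-6  # a 1 microfarad capacitor (example value)
L = 0.1   # a 100 millihenry inductor (example value)
for f in (50, 500, 5000):
    print(f, "Hz: capacitor", round(cap_reactance(f, C), 1),
          "ohms, inductor", round(ind_reactance(f, L), 1), "ohms")
# At 50 Hz the capacitor fights hard (3183 ohms) and the inductor barely does (31 ohms).
# At 5000 Hz they've traded places. Same parts, different frequency, different Z.
```

Hang onto those two formulas; they're the whole story behind the filters we're about to build.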
That's more than enough for now. I hope you're seeing that this stuff isn't as hard to understand as you might have thought.
We're talking about EQ circuits for the next few weeks—call this audio tech talk for beginners.
We'll start with passive EQs and then we'll move to active and then digital emulations, but we need to start before all of that with an understanding of Capacitors and Inductors.
Let’s start with a circuit.

We have electrons flowing from a power source through a wire. We stick a resistor in the circuit.

Now, when the electrons come to it, it gets in their way a bit—picture a bunch of guys running down a hall who come to a doorway and have to squeeze through. The resistor is the doorway. It’s like the hallway gets narrower.
Now let’s stick a capacitor in the circuit.
A capacitor is like a room. The guys running down the hallway pile into the room—but there’s a wall across the middle with another door on the far side. They can’t cross that wall. They can only go back out the way they came.

Physically, a capacitor is two metal plates separated by a dielectric—an insulator. The “room” the electron guys get stuck in is one plate or the other, and the wall is the dielectric.
If the electrons in the circuit always move in one direction (DC, or Direct Current), they fill up one side of the “room” and have nowhere to go. The current stops—everything backs up. Electron constipation. That’s different from the resistor, which just makes the hallway narrower but still lets everyone through.
But audio signals are AC—Alternating Current. They flow one direction through the circuit and then the other, because an audio waveform is a series of peaks and valleys—or as the English in the early 1960s might say, peaks and troughs. Ups and downs.
How often does the current alternate? Well, at 60 Hz, it completes 60 back-and-forth cycles in a second. Which means the electron guys run into one room, then leave and run into the other room 60 times in one second. At 2850 Hz, they’re in-out, in-out 2850 times per second.
So, a capacitor is a room with a divider, and a bunch of guys fill up one side of the room, leave, and then fill up the other side, each side entering and exiting through the only door they have access to. Electrically, the electrons accumulate on one plate, discharge, then fill up the other plate.
So, you’re wondering why this is important—or handy...
Back to our electron guys. We stuff them in the room, the room fills up, the flow of electron guys stops. Then we reverse the flow, the guys have to leave the room, and the other side fills up, which stops the flow of guys in the opposite direction. If the frequency is low, the guys have more time to stuff themselves into the room, the room gets more filled up. It’s like there’s a guard at the door saying, “Get in, plenty of room, keep coming, get in, get in…” And then when the flow reverses, the guard says, “Ok, get out, come on, keep moving, get out, get out…”
If we slow things down—fill up the room fully, empty it, then refill it fully from the other side—we gum up the works a bit. Things back up. There are electrons waiting around while the room fills up. The guys (electrons) can’t flow easily. So, there’s less movement, less “power” in the circuit.
But if the frequency is higher, the guard says, “Get in, get in—oh wait, we’re changing directions—get out, get out!” The room doesn’t fill up as much, which means it can empty faster. Not as many electrons collect on the plate, so there aren’t as many that have to leave the plate. The result is more movement in the circuit, which means more power.
So... as the frequency goes up, the capacitor makes it easier for electrons to flow through the circuit. As the frequency goes down, the capacitor makes it harder for electrons to move. When it’s harder for electrons to move, the power goes down—which means the low frequencies have less power. They roll off.
You’ve just made a high-pass filter.
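Want to know where the rolloff lands? For a simple one-resistor, one-capacitor high-pass, the textbook cutoff formula is fc = 1/(2πRC). A quick sketch, with component values picked just for the example:

```python
import math

def highpass_cutoff(r_ohms, c_farads):
    """Cutoff frequency of a simple RC high-pass filter, in Hz."""
    return 1 / (2 * math.pi * r_ohms * c_farads)

# A 10k resistor with a 0.22 uF capacitor:
print(round(highpass_cutoff(10_000, 0.22e-6)))   # ~72 Hz: lows below this roll off
# Shrink the capacitor (a smaller room) and the cutoff climbs:
print(round(highpass_cutoff(10_000, 0.022e-6)))  # ~723 Hz
```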

Now we need a low-pass filter.
How do we do that?
Well, if a capacitor gives us a high-pass, then we need something like a backwards capacitor to make a low-pass. Makes sense, right?
What the hell is a backwards capacitor?
Well... if a capacitor is a room with a divider, and a bunch of guys fill up one side of the room, then the other, it seems to me that a backwards capacitor would be like a huge hallway that is always full of guys. No one leaving. No emptiness. Tons of guys.
So, if our electrons are running around the circuit and suddenly they get into a huge hallway packed with other electrons, that screws them up. How do they get through? “Excuse me, pardon me, pardon me, excuse me, sorry, I didn’t mean to touch your butt, pardon me…”
This gums up the works differently, doesn’t it? It’s kind of the same thing as a capacitor—we’re still messing with the flow of electrons—but in a different way.
Let’s bring frequency into this.
First of all, let’s give this hallway stuffed with extra guys a name. We’ll call it an inductor.
We have our running electron guys, and let’s say they’re changing directions at 100 Hz—one hundred times a second. That gives them 1/100 of a second to get through the inductor, which is a hallway packed with guys. So there’s time for them to go, “Excuse me, pardon me, pardon, whoops, sorry, pardon me…” But if we increase the frequency, we start giving them less time to get through the hallway, so it’s more like, “Pardon me, excuse—oh, I have to go the other way—excuse me, you again, pardon—oh! I have to go the other way, damn… me again, sorry, sorry…”
Do you see that as the frequency goes up, the electron guys kind of get caught in the inductor? At low frequencies, the electrons have time to get in and out, but as frequency goes up, they have less time, and they get stuck in a swamp of other guys. When we gum up the works and inhibit movement, we decrease the power. Now we're gumming up the works at higher frequencies. The high frequencies have less power. We're rolling off the high frequencies. We have a low-pass filter.

Inductors pass the lows and roll off the highs.
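Same deal as the capacitor, mirrored: a simple resistor-inductor low-pass has a cutoff of fc = R/(2πL). Another sketch with example values:

```python
import math

def lowpass_cutoff(r_ohms, l_henries):
    """Cutoff frequency of a simple RL low-pass filter, in Hz."""
    return r_ohms / (2 * math.pi * l_henries)

# A 1k resistor with a 50 millihenry inductor:
print(round(lowpass_cutoff(1000, 0.05)))  # ~3183 Hz: highs above this roll off
# A bigger inductor (a more crowded hallway) pulls the cutoff down:
print(round(lowpass_cutoff(1000, 0.5)))   # ~318 Hz
```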
What is an inductor physically like? Since we need space for lots of electrons, we need something big. Like a fatass chunk of wire. Like a coil of wire. And when we coil the wire, it’s sort of like taking the hallway and bending it a bunch, which makes it even harder for the electron guys to get through. Picture you’re at a long, bendy nightclub full of people and you’re trying to get through. The more time you have, the more able you are to make it to the other side. But at high frequencies, you come to the door of the club and think, “Screw this. I’m not even going in.”
We can control the frequency response of the inductor by making it from different types of materials, thicker wire, thinner wire, more coils, fewer coils, wrapping the wire around something, etc.
What if we stick a capacitor AND an inductor in the circuit at the same time?
Hmmm... so like this?

Well, if we stick this:

With this:

We get this:

And by adjusting capacitors and inductors, we can move the hump up and down the frequency spectrum, and make the hump narrower or wider. This is a bandpass filter.
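The center of that hump falls out of another one-liner: a capacitor and an inductor together resonate at f0 = 1/(2π√(LC)), so moving the hump is just a matter of changing L or C. Example values again:

```python
import math

def bandpass_center(l_henries, c_farads):
    """Resonant (center) frequency of an LC bandpass, in Hz."""
    return 1 / (2 * math.pi * math.sqrt(l_henries * c_farads))

# 100 mH with 0.22 uF puts the hump around 1 kHz:
print(round(bandpass_center(0.1, 0.22e-6)))  # ~1073 Hz
# Quadruple the capacitor and the hump drops a full octave:
print(round(bandpass_center(0.1, 0.88e-6)))  # ~537 Hz
```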
The problem is that we're reducing gain only. We're rolling off the lows with the capacitor and rolling off the highs with the inductor, so to make that hump louder compared to the highs and lows, we have to amplify the signal after that. We need to add gain. So we stick an amplifier after, and now we have something like a Pultec, a passive EQ.
Why passive? Because the tone shaping uses passive components. Resistors, capacitors, and inductors are passive: they work without additional electricity fed into them, and they can't amplify a signal.
A passive EQ means the signal is being reduced overall, because we're cutting things out of it, and then we amplify the whole thing, which brings the stuff we reduced back up to nominal and the things we passed get louder.
Kind of like what happens with a compressor, right? We selectively reduce the signal and then overall amplify it.
My circuit diagrams are ridiculously simplified—they don't have an input or an output—but the point is that you understand what's going on and start learning the symbols. It isn't hard to read a circuit diagram. You're already doing it. True, it's "Run, Spot! Run" for now, but eventually it will be, "The only completely stationary object in the room was an enormous couch, on which two young women were buoyed up as though upon an anchored balloon."
I also simplified a TON of electrical engineering stuff. Like electrons and what’s actually happening as things flow through wires and components. It’s not quite guys running around. And there are more words to learn, more jargon. But it’s a useful way to think of things at this stage. We’ll add some more ideas next week.
You don’t need to know this theory perfectly, but you’ll learn enough in the next few weeks that it will improve your engineering.
Distorted drums are a thing these days. Really, they've always been a thing, but generally by accident. These days it's on purpose.
Motown Records... check out the overloads on the tom fills... really, the whole record is grainy and saturated.
A lot of this was accidental—happily accidental. It happens to sound good, but back then that was a bug, not a feature.
It’s easy to overload tape. The VU meters used were designed to accurately track the envelopes of voices and entire ensembles. The faster the transient, the less accurate the meters. Recording drums with the VU meter reading 0VU meant that you were actually hitting the tape closer to +12dB, well over its headroom. Couple that with a shortage of tracks. When you only had eight or sixteen tracks to work with, you couldn’t burn a bunch of them on the drums, you had to smush all that together to a few tracks to conserve space. Smushing things together = saturation, especially if you’re overloading some things, or if you’re bouncing tracks together and adding another generation of noise and distortion.
The sound that really grabbed me was Radiohead’s 'Planet Telex'. Of course, the whole thing is distorted, but there’s an intentionality to the sheer amount of distortion on The Bends. It wasn’t a happy accident.
Distorted drums are now a signature thing for bands like Tame Impala.
Kevin Parker (he’s Tame Impala) uses a variety of drum sounds, from lo-fi mic’d up kits to samplers run through processors. Let’s take a look at how to get some of those sounds.
Lo-Fi
Check out Tame Impala drums here. You’ll notice they’re in mono, roomy, and quite boxy-sounding. This sounds closer to Motown than Radiohead.
For a lo-fi sound, use lo-fi mics. Lo-fi micing means fewer mics, and cheaper ones, too.
What would you find in the basement where a punk band records their demos? A couple of dynamics—like a bunch of SM-57s. And you’d probably find a badly tuned drum kit, heads covered with old duct tape.
Try using a dynamic mic for an overhead, and another on the kick. You can add something to the snare as well, just make sure you check the phase against the overhead. Maybe mic the snare on the side of the shell or some such. Something goofy. Stick the mic on the beater side of the kick so it’s picking up the snare as well. Use a box for a kick. Hit the snare with brushes. Do a Tchad Blake and stuff a mic down a tube and stick that into the middle of the kit somewhere.
Feel free to overload everything a bit, but beware: if you’re micing with dynamics and clipping mic preamps, you’re losing a lot of transient response. Some of that loss is the charm of the sound, but you might not want that. Listen to the Tame Impala clip: those drums are lost and don’t punch through the mix. No transient, no punch.
I have always preferred to track the sound I wanted decisively and not invent it in the mix. That means letting the tracks you record have some say in the way the song is built. If you missed with the drum sound, you might have had to overdub something else—someone stomping the floor or hitting a tabletop. You might have lost the highs of the cymbals, so you added a tambourine. Change the bass part. Add claps. Run the drums through some strange EQ setup. Or compress the hell out of it.
The point is there were all sorts of creative solutions that came into existence because of a problem you unintentionally made for yourself. I set levels for a kit based on the drummer using brushes and playing lightly. I totally forgot that for the second half of the song he switched over to sticks and beat the crap out of things. It was my arrangement—damn, I’m a moron! But it sounded great so we kept it.
But that isn’t what happens these days. So, let’s figure out an in-the-box approach.
Faux Fi
There are many plug-ins that mangle drums, because setting things “wrong” is how you get cool sounds. Assistants would often ask me in sessions, “How do I get things to sound cool?” I would reply, “Record them badly.”
But if you’re working with samples, loops or drum machines, chances are things are recorded really well. Enter our Shure Level-Loc plug-in.
The Korneff Audio Shure Level-Loc is a precise emulation of the classic hardware piece. Shure designed the original to even out microphones on a PA system. It is a merciless limiter that makes every signal passing through it the same level. Use it on a microphone at a school board meeting, and the quiet teenager is the same volume as the pissed-off dad. Use it on a drum set, and the kick, the snare, the hihat, and the room ambience are now all at the same level. Smashed flat. And distorted to all hell, because the circuit uses inexpensive transformers. Transformers overload, saturate, and clip, adding in those messy harmonics that result in crunch and character.
Start out by routing all your clean drums to a stereo bus and instantiate a Level-Loc on the bus insert. You can also route kick, snare and room mics and leave out everything else—whatever you want to affect, route it to the bus and through the Level-Loc. Press play.

Mouse over to the Korneff nameplate and click to flip things over and access additional controls. Find the OUTPUT IMPEDANCE switch just left of center and set it to LOW. This changes the transformer configuration and lets more of the body of the drums get through.
The MICROPHONE LOAD will change the frequency response of things. Play with it and see what you get.

Play with the RELEASE TIME controls by changing resistor and capacitor values. I tend to like things on the faster side when it comes to lo-fi drum sounds.
To the far right is POWER SUPPLY. The default setting is BATTERY. Click on the very handsome Engstrom 9v and turn it counterclockwise. Things should get fun and distorted at about 5v. The lower you set that voltage, the more fabulously awful things will sound. This control, by the way, emulates what happens when you use dead batteries. In the old days, engineers would deliberately use old batteries to get certain sounds. Imagine traveling to gigs with a bag full of semi-dead batteries.

If you pop back around to the front, you’ll discover that INPUT LEVEL now has a much more pronounced effect on the signal. Set it as you want it.
Two last things to play with: Use the PRE-FILTERS to remove some of what feeds into the compressor and transformers. The Level-Loc makes the bottom end HUGE and boomy, and you might not want that. Cut some of it out to focus the action on the snare. Or, if you want that kick isolated more, roll off the highs.
Experiment with the POST-FILTERS as well. These roll off the signal after it is plundered and pillaged. The difference is that when you filter things out before the compression, the circuit generates fewer harmonics. You change tone, but you also change the distortion characteristics. When you use the post filters, all the distortion is present, but you’re changing the overall tonality of the combined signal. You can, of course, use both.

It is very easy, and lots of fun, to really get a wild drum sound with our Level-Loc and then drop it into the mix and find out it doesn’t work as well as you thought it would. Rather than change the way the Level-Loc sounds, grab the DRY/WET balance and turn it counterclockwise to let in some of the unprocessed drums. This allows you to keep the natural punch and energy of the original sounds.
You can use the Level-Loc on individual drums as well, or on any track in your recording.
Kevin Parker tracks things using a Level-Loc. It is a great piece of hardware. Be advised, though, that our plug-in version gives you access to controls that are inaccessible on the hardware piece, unless you’re handy with a soldering iron.
Last thing: the Godfather of distorted drums is the inimitable Tchad Blake. He’s the reason the Level-Loc is a thing in modern audio production and not relegated to the dumpster out behind the high school. He’s used a Level-Loc somewhere on everything he’s recorded since 1989—the year he bought his first Level-Loc for $5 at a swap meet in California. Here’s a track from the Los Lobos album Kiko, a wonderfully inventive, musical record. There’s Level-Loc all over this, from the drums to the guitars. In fact, the Level-Loc used on it is inside of our plug-in: Tchad let us sample and measure it.
Have fun making some noise!
If you went back to a recording studio 30+ years ago, you wouldn’t encounter a piece of audio hardware called a Saturator. That wouldn’t be in any of the racks, nor would there be a knob or a switch labeled Saturation anywhere on the console. Fast forward to right now, saturators are everywhere, tucked into everything from compressors to EQs to dedicated saturators. All of our plug-ins have some sort of saturation capability, in some cases quite a bit.
Why no saturators back then? Why so many now?
The answer is that there was a TON of saturation happening back in the good old days, but it was spread out across the entire signal chain, from microphones to the speakers.
To recap: analog circuits have an operational “sweet spot.” It can be thought of as the signal level at which the waveform feeding out of the circuit is identical to the signal feeding into the circuit. This is called linearity. The sweet spot is where the circuit is at its most linear.
What does that look like? It means the waveform feeding in has identical peaks and valleys to the waveform feeding out.
But if you were to zoom in on that output waveform, you might see some very small differences compared to the input waveform. The peaks and valleys might have a slightly different curvature. The waveform might look a little bit distorted.
Which is because it is. Because no analog circuit is so perfect that it doesn’t leave its imprint on the waveform as it passes through it—all circuits add distortion. Now, that distortion might affect the envelope of the wave passing through, changing the distance between peaks and valleys, or it might change the phase of things ever so slightly, or it might add additional waveforms to the original waveform. These additional waveforms are usually incredibly low in power, inaudible. But they’re there, and they’re called Harmonic Distortion.
%THD and ears
When you look at equipment manuals and spec sheets, harmonic distortion is often listed as a percentage. %THD refers to the Total Harmonic Distortion being added by the circuit. Generally, the lower the THD, the better the piece of gear is, and the more expensive. Getting the %THD low requires carefully designed and optimized circuits, precisely made components, and very high production quality during manufacturing, and all of that adds up to a higher price.
What %THD can a person hear? It depends on the frequency range, the person, and the distortion. A trained ear might be able to hear as low as 0.3% THD, but that might vary depending on the math of the harmonics being added—odd harmonics tend to be easier to detect than even (odd harmonics tend to be unpleasant). It’s also easier to hear harmonic distortion on high-frequency signals.
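If you're curious how that percentage is computed, it's the RMS sum of the harmonics' levels relative to the fundamental's level. A quick sketch, with invented voltage numbers:

```python
import math

def thd_percent(fundamental, harmonics):
    """Total Harmonic Distortion: the RMS sum of the harmonic levels
    divided by the fundamental level, expressed as a percentage."""
    return 100 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

# A 1.0 V fundamental carrying small 2nd, 3rd, and 4th harmonics:
print(round(thd_percent(1.0, [0.02, 0.01, 0.005]), 2))  # ~2.29 %THD
```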
Gear gets too clean?
As integrated circuits came into use and manufacturing processes improved, the average %THD went down. Equipment went from colorful to very clean indeed by the mid-80s. SSL consoles had total harmonic distortion down around 0.02% THD. Compare that to Shure's Level-Loc, released in 1969, which had 3% THD—a figure Shure touted at the time as incredibly low. UREI’s 1176 compressor, released in 1967, has 0.5% THD—lower than the Level-Loc but still noticeable to the average ear.
However, with an all-analog signal path, even very low amounts of harmonic distortion add up. The mic, the pre-amp, the outboard, the analog tape deck, the console, more outboard, all of it sprinkled heavily with transformers and transistors and tubes, all of which add a tiny bit of harmonic distortion. Multiply this by a bunch of channels. The result is really quite a bit of harmonic distortion, but it is spread out all over the studio and not concentrated in one or two pieces of equipment. Quite a bit of harmonic distortion is another name for saturation.
And, of course, our ears like that saturation. It sounds “warm” and “sparkly” and all sorts of other adjectives.
But consoles disappeared, magnetic tape disappeared, everything went into the box, and suddenly the only things adding saturation to the recording were outboard mic pres, whatever outboard compressors or other processors might be in the home studio, and the analog side of the AD/DA converters involved. Things got too clean, so everyone started adding saturation as an option or as a specific product.
In the good old days, saturation was a bug. These days, it's a feature.
Saturation in our Plug-ins
All of our plug-ins have a saturation component. In most cases, it’s baked in. We carefully model analog circuits. Analog circuits add harmonic distortion. If you crank things up a bit, the circuit starts becoming non-linear and that’s saturation. Crank it up more and it's plain-old distortion.
The Pawn Shop Comp has multiple elements that add harmonic distortion/saturation: there’s a tube preamp, transformers, an FET-style compressor (the 1176, with its 0.5% THD, is an FET unit) and the Operating Level knob on the back. The Pawn Shop was designed to replicate a vintage tube analog console channel with a compressor strapped across it.
The Amplified Instrument Processor’s Proprietary Signal Processing feature is there to add three different flavors of harmonic distortion/saturation, depending on its settings. And if you crank up the input a bit, the AIP will saturate in a very analog circuit kind of way. And then there’s the EQ on it, which is emulating a tube equalizer off a 1950s German film mixing console, so some transformers and tubes in the signal path, too.
The Talkback Limiter and the Echoleffe Tape Delay both add lots of saturation if you hit them hard. The TBL adds a FET kind of thing—an analog console vibe, while the ETD does the tube and tape thing.
The Puff Puff mixPass is hard to describe, but what it basically does is add in harmonics using math in a process called Waveshaping. If one reshapes the wave, one is, by definition, distorting it, so you can think of the Puff Puff as a saturator, albeit a special one. Waveshaping is also used on the El Juan Limiter—flip around to the back panel. Finally, the Pumpkin Spice Latte can be thought of as a saturating compressor with additional reverb and delay features.
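To make “waveshaping” concrete, here's a generic soft-clip waveshaper in Python. To be clear, this is a textbook tanh curve, not the Puff Puff's (or any of our plug-ins') actual math:

```python
import numpy as np

def soft_clip(signal, drive=2.0):
    """A generic tanh waveshaper. Reshaping the wave = adding harmonics.
    Illustrative only -- not the curve inside any Korneff plug-in."""
    return np.tanh(drive * signal)

sr = 48000
t = np.arange(sr) / sr
sine = np.sin(2 * np.pi * 100 * t)    # one second of a clean 100 Hz tone
shaped = soft_clip(sine, drive=4.0)   # same tone with its peaks rounded off

# The spectrum now shows energy at 300, 500, 700 Hz... odd harmonics,
# because tanh is a symmetrical transfer curve.
spectrum = np.abs(np.fft.rfft(shaped))
print(spectrum[[100, 300, 500, 700]].round(1))  # bins line up with Hz here
```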
Lesson: An analog approach to Saturation
An analog approach is to use minimal amounts of saturation all over the mix, turning your DAW into a virtual model of an analog console.
What might this look like?
Set up a rough mix with nothing on the channels or the buses and do the following:
Start out by adding a mix of Pawn Shop Comps and Talkback Limiters spread across all your different channels. Set thresholds high so these things aren’t applying any compression but signal is still passing through the modeled circuit. On the Pawn Shop, perhaps play around with the different tubes and transformer combinations. Don’t be surprised if you really don’t hear much of a difference. We’re adding saturation in a cumulative process.
I tend to use Talkback Limiters on drum tracks and guitar tracks, and PSCs on bass, vocals, etc. Different kinds of saturation.
Quick note: set all of these things at unity gain so you’re not goosing any of the channels louder.
On an analog console, individual channels feed into submix buses and then into the main stereo mix bus. To simulate this, put Amplified Instrument Processors (AIPs) on the submix buses. The AIP is designed to work across a stereo input. Click on the Proprietary Signal Processor and go around to the back panel and select one of the three different settings, yielding three different types of saturation.
Now, on the master bus, normally we throw on the El Juan and a Puff Puff, but for the purpose of this lesson, put on an AIP—to simulate a stereo bus summing circuit—and follow that with an Echoleffe Tape Delay, but switch the ETD to Tape Saturation Mode, to emulate the saturation caused by mixing down to an analog two-track.
Listen for a bit, then switch off all of the plug-ins and listen. A/B compare a few times. The difference is going to be subtle but it will be there.
In fact, we’re used to hearing artificially high amounts of saturation on songs these days. People are overdoing it, resulting in records that are harsh and chiffy-sounding.
Why not just one Saturator at the end?
Good question: why not just throw a saturator over the stereo master and light the sucker up? You can do that, but you have to beware of Intermodulation Distortion.
I wrote a full article on this here. A short take: saturation adds harmonic distortion, which means additional waveforms are added in, using math, to the original waveform. When you start cramming a lot of different waveforms from different instruments through a saturator, they mix with each other and generate yet more waveforms, some of which aren’t mathematically correlated and sound off and out of tune to our ears. Bad math.
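You can watch the bad math happen with just two tones. Here's a sketch; squaring the signal is a crude stand-in for a slammed saturator's nonlinearity, but the sum and difference products it spits out are the real phenomenon:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
# Two notes a major third apart -- an A (440 Hz) and a C# (554 Hz):
mix = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 554 * t)

# A crude nonlinearity standing in for a slammed saturator:
distorted = mix + 0.5 * mix**2

# New energy appears at 554-440 = 114 Hz and 554+440 = 994 Hz.
# Neither is a harmonic of either note: that's intermodulation distortion.
spectrum = np.abs(np.fft.rfft(distorted))
for f in (440, 554, 114, 994):
    print(f, "Hz:", round(float(spectrum[f]), 1))
```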
Intermodulation distortion is typically nasty sounding to our ears and also fatiguing. Our ears tire of it very quickly. Go into a restaurant with a shitty sound system and try to eat a burger when there’s a lot of intermodulation distortion.
Using a lot of saturation at the end can sound awful. It might be an effect you want, and I have no doubt a lot of mixers get it to work, but this article is about a more nuanced, analog approach. More than that, it’s about you getting a deeper knowledge of what is going on, and developing audio engineering skills that set you apart from guys who press buttons without really knowing what they’re doing.
We want you to really know what you’re doing.
Feel free to write us at theguys@.... if you have questions. We’ll always try to help you out.
While you might not like the drum sounds on Metallica records, or the way the bass is mixed, the guitar sounds are killer, especially the rhythm parts courtesy of James Hetfield. Big. Chunky. Articulate.
Of course, all instrument sounds are about the player, the instrument, the pickup—in that order—and then the amp, the cabinet, the mics and the processing. There are plenty of videos covering these variables. We’ll cover that briefly and then get to a weird, secret sauce contribution to this iconic guitar sound, which is The WOW Thing. Which we just happen to have available as a plug-in for $19.99.
So, the guitars have humbuckers—you won’t get this sound with a lipstick tube or a Telecaster neck pickup—and the amps are a Mesa/Boogie IIC+ and a Diezel VH4. These are both high-gain tube amps using 12AX7s in the preamp stage. The Boogie uses 6L6s in the power amp, the Diezel uses KT77s. Both amps have low end equalization centered at 80Hz, so if you don’t own either of these or have a model, boosting around 80Hz will move you into the neighborhood a bit more. The presence region of the Boogie is centered around 2200Hz; the Diezel is at 4000Hz. That’s a big clue as to how you might EQ to cheat yourself closer to these particular amp sounds.
Mics: Hetfield’s 4x12s were mic’ed up with everything—from SM57s to U-87s to Royer ribbons, with everything you can throw on a cabinet in between. I’m always looking to capture complementary ranges of sound: a punchy dynamic mic, which rolls off some of the lows and highs but has a pushed midrange (as well as a gentle clipping of transients, because of the heavier mic diaphragm), combined with the deeper low end and glossy top of a condenser or a ribbon (and their faster transient response). Then in the mix, I can favor the condenser or ribbon to get clarity and articulation, or favor the dynamic for that all-important midrange and roughness. I think of these two sounds as working together, as complementary layers to blend.
Preamps: If you’ve got a bunch, try a bunch. The main thing with preamps is that their character comes from how they’re amplifying the mic signal, using tubes or something solid state, and how hard you’re pushing them, and thus getting additional harmonic distortion. I find that when I’m already slamming a guitar through a high-gain amp, the last thing it needs is the additional character (and harmonic distortion) of a clipping mic pre. Metallica guitar sounds are very very articulate—the rhythm of the playing is essential to the sound and the groove and slopping that up by further mangling transients isn’t going to get you that sound. Tube preamps don’t help this either, as they do tend to be smeary on transients. Remember, too: heavy metal guitars are distorted guitars recorded cleanly through modern solid-state circuits. Vintage guitar sounds (50s, 60s and early 70s) are kind of the opposite: cleaner-sounding guitar tones recorded through pushed tube or early technology mic preamps.
Compressors: Depending on the engineer, there could be compression going in as well as coming out. 1176s? DBX160s? SSL channel compressors? Korneff Audio Pawn Shop Comp? I might compress a part going to tape, to either even out a loopy performance (a player who was inconsistent with his right hand) or to get some needed transient activity from a guitar setup that seemed mumbly. To even things out, you want a lower ratio (under 4:1) and a longer release (400ms+) to keep that compressor always on the signal and riding it. For additional punch, enough attack to let that transient through and a faster release so that the compressor fully cycles before the next incoming transient. As a rule of thumb, a release around 200ms generally works, but speed it up as the tempo gets faster.
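If you want to put rough numbers to “speed it up as the tempo gets faster,” here's one way to ballpark it. This is a made-up rule of thumb for illustration, not a spec from anywhere: aim to have the compressor reset within about a quarter of a beat.

```python
def release_ms(bpm, beat_fraction=0.25):
    """Rough ballpark (a made-up rule of thumb): release within about
    a quarter of a beat so the compressor resets before the next hit."""
    beat_ms = 60_000 / bpm
    return beat_ms * beat_fraction

print(round(release_ms(75)))   # ~200 ms at a slow 75 BPM
print(round(release_ms(150)))  # ~100 ms at double the tempo
```

Your ears outrank the calculator, obviously.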
I also tend to compress things a touch on mixdown, so in addition to an overall mixbus compressor, individual channels are likely to be compressed. I love compressors and if I had a choice of which to use, compressors or EQs, I’d pick compressors (especially compressors with EQs snuck in... did I mention our Pawn Shop Comp yet?) but I don’t like the pumping and squashed sounding artifacts of compression, so my philosophy has always been 'compress a little bit often'. Spread the word around.
The WOW Thing - Randy Staub's Secret Guitar Box
The secret sauce on the Black Album was the SRS WOW Thing, a cheap plastic box that used delay and phase to make signals sound wider. It wasn’t designed for use with guitars or in the studio; it was designed for home computers. Black Album engineer Randy Staub found that it worked great on guitars—another one of those quirky misuses of gear, like Tchad Blake’s experiments with the Shure Level-Loc (coincidentally, we have that too as a plug-in) that resulted in pure magic.
So, put The WOW Thing across either a stereo bus of the rhythm guitars, or across a pair of guitar tracks panned out left and right. The WOW Thing is designed to work on stereo material. It doesn’t do mono. Turn up the WOW knob to about 1 or 2 o’clock.

You’ll, of course, notice the guitars step out beyond the speakers a bit, but you’ll also hear them get a little bit brighter and the lower mids sort of scoop away a touch. This is part of The WOW Thing's magic—it lifts the highs for an increase in presence and cuts a chunk out to clear the midrange. And this sort of sonic sculpting is very much in line with what’s typically done on heavy records: an emphasis on highs and lows and a thinning out of the mids. This scoop moves up and down depending on the TrueBass frequency. More about that lower on the page.

TrueBass
The emphasis on the lows from The WOW Thing comes from its TrueBass circuit. This adds back the bass that’s removed by the WOW process. TrueBass is a bit of a misnomer, though: the circuit doesn’t so much restore the bass as reinvent it in its own mad image.
Turn up TrueBass to hear it. Now, at upper right is a drop-down from which you can select the emphasis frequency around which the TrueBass will operate. There are choices from 40 to 400Hz—it skips the 80Hz band I mentioned above. Careful setting this too low: 40 and 60Hz are where the kick and bass live, so pushing guitars out down there will definitely clutter up the bottom end. For heavy guitars, 100Hz or 150Hz will be your best bet. Now, if you’re throwing The WOW Thing on a guitar solo, depending on the note range of that solo, you might want to go a bit higher to separate that guitar from the rhythm parts and also to keep the sound from thinning out as the player climbs the neck.

If you’re using The WOW Thing on two different rhythm parts—a rhythm and a double—I’d tend to use two different WOW Things, one on each, and then stagger the TrueBass frequency, 100Hz for one part, 150Hz for the other, to provide a bit more frequency separation, but that is me. I like hearing everything individuated—a charcuterie board as opposed to a pudding.
There’s a switch for TrueBass Dynamics. It basically pops in a frequency-dependent gain reduction—a compressor—which gives you more control down there. Slower attacks and faster releases can eke out a little more punch and articulation on the guitars.

Finally, there’s a DRY/WET blend. I tend to tweak this during mixes—I usually have things too wide at the start and need to dial it back. A fun trick is to automate this: pull it a little bit towards DRY to tighten things up during the verses and then open it up during choruses to subtly differentiate these sections in the mix. Another trick, automating either this or the WOW knob: open up the width of the rhythm guitars a little bit too much during solos to de-clutter the center of the mix and make some room in the middle for that solo to breathe.
In Conclusion
The WOW Thing is a handy little devil of a plug-in. It’s not just for heavy guitars; it works well on all sorts of mixing issues. I’ll have some more ideas for its use in the near future.
In 1984, Echo and the Bunnymen released the critically acclaimed album Ocean Rain. In 1986, their record label began clamoring for a follow-up, but they had something specific in mind. The label chief dragged them into his office, sat them down and played them Peter Gabriel’s So, a fantastic record for sure, but nothing like Echo and the Bunnymen. The band were not happy.
And they proceeded to have a basically unhappy experience recording Echo and the Bunnymen, their 5th album. Sessions started out disorganized, going through studios, drummers and producers before winding up in Cologne, Germany, at Conny Plank’s studio (I wrote a bit about Conny Plank here).
The album was produced by Laurie Latham, who had previously worked with the Bunnymen on the single 'Bring on the Dancing Horses'. However, Latham and the band were very much at odds over the direction of the album from the start. The band was looking to knock the record out in a few months, similar to how they recorded Ocean Rain. Laurie Latham wanted to do one song at a time... spending months on one song before moving on to another. It drove the band up the wall. They hated the experience and were really unhappy with the album. And reviews were fairly "meh" as well.
The Bunnymen had some great songs, but they do tend to write the same song often—a common occurrence during the 80s, or really anytime in music history. But I love 'Lips Like Sugar', as both a song and an amazing bit of recording and production.
'Lips Like Sugar' was mixed at Amazon Studios in Liverpool by Bruce Lampcov. My guess is it was a 24-track recording, based on what I can find on the sessions, the gear at Amazon at the time, and also from listening to it. My initial thoughts were that this is a dense recording, but it's not: it's well arranged.
At the end I'll take a guess as to the track sheet!
Lips Like Sugar
Apple Music
Amazon Music
Spotify
Tidal
YouTube
Drums
Sounds like a real kit in a room: floor tom to the left, snare center, single rack tom, and I think two cymbals. The one on the left seems to cut off really fast throughout the song. Not so much someone grabbing it, more like the channel is muted. It sounds like white noise cutting off. Maybe they overdubbed these? Sounds like a single room mic more than a pair of room tracks. It has no width to the ambience. It could also be all artificial reverb.
Hihat... this is fun. The hat is off to the right, but there's this other thing which sounds like a hihat or a shaker off to the left but on the upbeat. It's hard to hear exactly what is going on here. Kinda sounds like the "Ch Ch" of a shaker and each "ch" is panned wide right and left, so the part is jumping between the speakers. Tears for Fears did a similar thing on 'Everybody Wants to Rule the World'. Except the hihat on the right seems consistently a hihat.
I think the drums are real sounds and not samples. There's just enough variation on the tone and feel. Pete de Freitas was nothing if not very very consistent. Good player, good feel.
There's also some percussion—a bell tree kind of sounding thing at the top and now and then for color.
Bass
The bass, courtesy of Les Pattinson, is locked up on the kick and drives the song along. It's simple for the most part, really a huge pedal tone thing, but there's a lovely figure during the pre-choruses, moving opposite to the vocals. The part also loosens up and moves in and out on the guitar solo. What can I say—it works perfectly with the song.
Guitars
Will Sergeant lays down a guitar orchestra of textures and colors. These are wonderful, inventive parts and they fit together and play off of each other perfectly. It sounds very complex when you first hear it, but I'm hearing five main things going on. Now, these could be five tracks, each cut as one performance, or multiple parts overdubbed and then panned and mixed. I don't know. If it's a 24-track master, it's five tracks. If it's 48-track, then it's more. Let's break it down.
There's a lead guitar, center at times and off to the right. We'll call this GTR-Right. Then on the left is an almost country-sounding guitar with a sputtering delay doing a 5-note burst, then turning into a two-note figure. We'll call this GTR-Left. I think these are the two main guitar performances or tracks for the song. The lead guitar at the top, I think, becomes the main guitar we hear on the right once someone clicks the overdrive off. This part is drenched in reverb and a delay, playing descending figures, little snippets of notes, chord strikes. GTR-Right is more chaotic than GTR-Left. I think this guitar, GTR-Right, also plays the solo.
These two parts drop out on the pre-chorus, then come back in hitting chords for the choruses. Then they're back for the verses again, but note that they're not doing the same thing each verse. We keep coming back to "Diverse in Unity, Unified in Diversity" as a production maxim.
On the pre-chorus, a thin-sounding guitar with a fast chorus effect on it comes in. We'll call it GTR-Chorus. It plays a very hooky, downward-flowing melody that fits around the vocal and moves in opposition to the bass, which is playing a figure that goes up.
Then, during the verses and every now and again, there's a very wet guitar pulled way back into the mix, blended in with all the keyboard tracks. We'll call this GTR-Accent. It is hard to hear, and sometimes I don't know whether it actually exists or if it's just the harmonics from everything blending together.
To simplify all of this in your head: there are single-note guitar parts left and right on the verses; they drop out on the pre-chorus and for the guitar solo section, and then come back in for the choruses. There's a thick chorus'd guitar that comes in for the pre-chorus and drops out after the chorus. And then this other part, GTR-Accent, drifts in and out.
And at the very top is an acoustic guitar strumming off to the left that never comes back. Figure 5 tracks of guitars.
Keyboards
Keyboards... these could also be little guitar parts; sometimes I can't tell, but I think keyboards. They're mainly used for color, not pads.
The intro establishes them off to the right and back. This is a flangy sounding, pizzicato part, wet with reverb—the record overall is very wet. At the top, this keyboard part is echoing a series of picked guitar harmonics. We'll call this part KEYS-Pizz. It goes throughout the song, working around and with GTR-Right.
In the pre-chorus, moving around the center, there's something which sounds like a polyphonic theremin. Call it THEREMIN on a track sheet. The sound of someone rubbing the rim of a wineglass. It comes in right before the pre-chorus starts. It's nearly vocal-sounding and blends into the vocals. On the chorus it's doing whole note arpeggios, octaves. At 1:17, squint for it—it sounds like the voice on the closing music credits of the original Star Trek TV show. It's a really cool part if you can hear it. You almost have to try to not hear it, and suddenly you'll hear it.
At the start of the solo, there's a keyboard bend off to center right, its echo panned to the left. While the GTR-Right plays a solo, that KEYS-Pizz pans left and dries up a little bit, stepping closer to us. Then again, this could be someone smacking guitar harmonics or hitting the strings above the nut, but I think it's that same keyboard part. It makes sense to move it to the left to make room for the solo on the right. I also think the solo is doubled by a keyboard. Listen for this on the right, behind the solo. High flute/string-like sound. A synth patch. 2nd half of the solo.
At the end of the solo, KEYS-Pizz plays a hit on the left and then goes back to the right, opening space for GTR-Left to return to the mix, playing yet another rhythmic part.
At 3:44 on the left there's a beeping sound, like a high pitched electronic baby frog part, blended into the guitar. Call it BABY FROGS.
Into the vamp out, THEREMIN is quite apparent, moving from left to right. Maybe autopanned slowly?
Vocals
Ian McCulloch... a very distinctive singer with a lot of range and vocal color. Sexy voice.
There's a main lead vocal. Call it L-VOX on the track sheet. Close up on a condenser. A touch sibilant, but not awfully so. There's a faint whiff of a small room on it. That could be a booth or reverb. There's a harmony double on it—call it VOX-2. But when we get to the "Lips Like Sugar" lyric in that opening, there's a new track: a vocal cut very quietly and very close to the mic. It could be L-VOX still, but it's much drier. He could be closing up on the mic for it, but to me it sounds like another track. Call it VOX-Close. Listen at around 42 seconds in and you can hear all three of these vocals together. The verse continues with this sort of pattern, these three tracks swapping in and out.
Think of the Verse Vocals like this: L-VOX is in the middle, VOX-2 is wetter and further back, VOX-Close is dry, quiet and up front. They're all panned dead center.
I think there's a separate vocal for the pre-chorus. Call it VOX-Pre. Drier sound, less warmth to it. Sounds like it's in its own little space.
The chorus vocals are doubled with delays feeding to the left and right on quarter notes. Call them VOX-Chor.
The second verse is similar to the first, L-VOX, VOX-2 and VOX-Close alternating so the distance and the size of the vocal keeps changing. VOX-Pre comes in for the pre-chorus.
Second chorus, the VOX-Chor comes back, but the delay patterns are reversed, now going right to left.
Third verse. This very strange vocal comes in at 3:30. Way close to the mic and strained sounding. Again, I think they're just bringing those three verse vocals in and out in interesting ways, but when you've got such an expressive singer, it's a wonderful effect.
Other Thoughts
I think they're sneaking the master fader up and down a bit for increased drama and dynamics—listen for a considerable drop in volume after each chorus. The whole thing is beautifully dynamic and invisibly compressed. The feel of 'Lips Like Sugar' is kind of rollicking and very human. The drums are tight but with a lot of groove. Played to a click but not quantized. Wonderful vocals. He gets a ton of mileage out of his voice. And the guitar parts are sublime. Really an outstanding bit of work.
My Track Sheet Guess
Ok... I don't think any of the guitars are recorded to stereo. The whole thing sounds like lots of panned mono tracks to me—the positioning is very tight and precise. Stereo recordings are usually slushier than that. So... 5 guitar tracks, 6 vocal tracks, 2 or 3 keyboard tracks (probably not MIDI at this time), then... say 2 bass tracks—an amp and a DI, and then maybe kick, snare, rack tom, floor tom, hihat, 2 overheads, maybe 1 room mic and a percussion track... 24 or 25 tracks... lose 2 tracks, one for the timecode and one guard track to keep it from bleeding. The acoustic guitar track at the beginning could have easily been tucked onto a vocal track or one of the keyboard tracks, likewise the percussion. Very doable on a 24-track tape with an automated console.
I laid the sheet out based on conventions of the time and my own way of doing things. Typically, you wouldn't find important things on track 1 or 24, because they could get edge track damage. Of course, people put SMPTE there and if that was damaged, you might have huge problems, but... In thousands of sessions, I never damaged a tape.
I tend to group things visually: if something is supposed to be on the left of the speakers, I want it in an odd-numbered track sheet box, on an odd-numbered fader, and under my left hand. Left odd, right even. I always put bass on 15 and 16, and usually grouped instruments 9-16 and vocals 17 and up. This was mainly so things were consistent from session to session and so that if I was setting up a rough mix I could predict where things were without the console being labeled. Basically, everything was about not making mistakes while working as fast as possible.

Back in the good old days, when consoles were fourteen feet long and you had to wear headphones if you wanted to hear things in stereo at either end, you couldn't put a different reverb unit on every channel.
Actually, unless you had a console with dynamics on every channel, you probably had six or ten compressors total, and maybe two or four reverbs at the most.
Three reverbs. That is what I usually worked with. Two patches off a Lexicon 480L and maybe an actual plate reverb if the studio had one and it still worked—once digital reverbs came out, old mechanical plates and springs became dusty boxes down in the basement.
I also brought a reverb with me in one of my racks, an Alesis Midiverb 2. I loved that thing.
Anyway, enough going down memory lane, the point is many of your favorite records—many of the best records ever made—were mixed with one reverb unit, maybe two. Beatles albums generally had one reverb chamber on them and a tape delay.
We don't need a separate, different reverb on every channel, making our mixes gloppy and tasteless, like dumping a clot of ketchup, mayo, mustard, Hoisin sauce, Sambal Oelek, and Worcestershire sauce onto a perfectly delicious sandwich.
Let's be smart with our condiments. Let's taste the actual stuff in the sandwich.
Let's just use one reverb, instantiated three times, and on sends and returns, not inline on individual channels.

The first reverb is a small room. This 'verb brings things together and blends. The Micro Digital Reverberator has wonderful, natural-sounding small rooms. Machine 2 preset 02 Small and Bright, with a decay time of 0.2 seconds, is a favorite. If it is a little too close-sounding, I put maybe 35ms of pre-delay on it to give it a tiny sense of slap. Return this into your mix in stereo.
You can apply Reverb 1 to many elements of the mix to pull them together and put them into the same room. This is especially useful if you're using drum samples and guitar modeling rather than real rooms and mics. A little on the snare and the drum overheads, a little on the backing vocals, a touch on the guitars and maybe some keyboard parts, will add a bit of commonality to your mix and pull it together into a common space. You don't need to crank this up: even small, quiet amounts of reverb can psychoacoustically affect the listener, add realism, and, dare I use an overused term, glue.
Reverb 2 is the attention getter: MDR Machine 1 preset 4, which is a medium, bright plate reverb.

Plates are dense and slightly artificial-sounding, and they tend to "catch" our ears. While Reverb 1 blends things, Reverb 2 pops things out. Use Reverb 2 to bring out the lead vocal, perhaps the snare, or anything that needs a little bit of extra attention from the listener.
Not everything in a mix deserves all of our attention. We want to lead our listeners in, and direct them to hear what we, the mixer, decide is important. Democracies don't produce good mixes. Be a benevolent despot or a film director when you mix.
Reverb 3 is for emphasis and effect. Something huge, deep and dark. I like Machine 2 preset 25 Large Warm with a decay time of 2.8 seconds. This is a chamber-sounding preset. It's way long and big. There are a couple of ways you can use it.
You can feed things into it very very slightly—a tiny touch on vocals and drums, or really anything. This reverb hangs back in the mix, barely there, but it adds a front-to-back distance to things. Sometimes I set it so it can only be noticed when the mix thins out, during breaks and bridges, as an example.
There are many uses for Reverb 3. Have it in the mix during verses, and then mute it for the choruses, which subtly differentiates these parts from each other. Automate the lead vocal's send to it, varying the reverb on the lead vocal depending on the emotions or the density of the lyrics. Crank it up a bit to emphasize a note. Or turn it all the way down, along with Reverb 1, to dry the lead vocal up for an in-your-face vibe.
Crank up Reverb 3 to increase the sustain of notes—this is especially effective at the ending of solos.
Automate sending to Reverb 3 to add something special to every other snare hit. Better yet, add another snare track, remove every other snare hit, send that to the reverb and then take the second snare out of the mix. Less automation!
Actually, since I hate automating things, I do this all the time. If I want to add reverb, or delay, on, say, a vocal, I make a copy of the vocal snippet that I want to echo and put it on its own track, and then add reverb to that. No automation.
One last fun thing to do. Use this reverb to fade away, not fade out.
If you're doing a fade out, automate the master fader to execute the fade, and then dump the reverb as the fade starts. Dump as in turn it way up, either by cranking the sends or pushing up the return, or both. The result sounds more like things are going away than getting quiet. Distance is a more interesting thing than volume changes.
A last trick: press the Korneff Nameplate to flip Reverb 1 around to the back. Find the width control and turn it up (clockwise) a bit to push the reverb'd signals out beyond the speakers. This gives extended space around the entire mix.
I could write post after post on how to get bass sounds, but let’s start here, with a quick discussion of speakers and overtones.
The low E on a bass is at 41Hz. A typical bass part might get up to the B on the D string—123Hz. Let’s get fancy! The part covers two octaves, all the way up to the E at the 9th fret of the G string—165Hz!
An NS-10 speaker, everyone’s favorite studio monitor (not mine), starts dropping off significantly below 100Hz. Modern small speakers, like IK Multimedia’s iLoud Micro Monitors, are much better these days; they can get down to 45Hz.
Please note that none of these numbers means that the speaker just cuts off abruptly at 45Hz. Instead, there’s a large drop in energy below 45Hz. How much of a drop-off? 3dB? 6dB? 10dB? It depends on how speaker manufacturers measure things. I’ll have to write an extended article on lying for you all.
The point is that the fundamental (the loudest frequency of the note that gives the note its pitch) of a note on a bass typically lives down below 150Hz, and speakers, especially small speakers, can’t get down there. Cheap speakers can’t get down there, either.
But, thanks to physics and biology, the fundamental isn’t the only part of a sound that gives our ear a sense of the pitch of it. All instruments generate a bunch of additional frequencies that are less powerful than the fundamental. These are called overtones or harmonics. If our ear can’t hear the fundamental, but it can hear the harmonics, our brain extrapolates the pitch and we “hear” it, not in our ear but in our head.
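If you want to watch that math happen, here’s a throwaway Python sketch. The 100Hz cutoff is just an assumed stand-in for where a small speaker gives up, not a spec for any particular monitor.

```python
# Harmonic series of a bass low E. The cutoff is a rough stand-in for
# where a small speaker starts rolling off, not a real spec.
fundamental = 41.2   # Hz, low E on a 4-string bass
cutoff = 100.0       # Hz, assumed small-speaker roll-off point

for n in range(1, 9):
    f = n * fundamental
    status = "speaker can reproduce it" if f >= cutoff else "mostly lost on a small speaker"
    print(f"harmonic {n}: {f:6.1f} Hz -> {status}")

# The fundamental (41.2Hz) and 2nd harmonic (82.4Hz) fall below the
# cutoff, but your brain rebuilds the 41.2Hz pitch from the survivors.
```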
In the studio, we can add EQ and boost the harmonics above the fundamental, and that will give us a bit more presence to the bass sound. Still... it doesn’t really work all that well, especially on small speakers, and lots of mids tend to make the bass sound thin and annoying.
(By the way, this small speaker issue also affects kick drums or low synth parts. Luckily, the fix works for bass as well as kick, or anything down there.)
The idea is to generate MORE harmonics from the fundamental. Technically said: we generate overtones that are mathematically related to the fundamental but higher in frequency range. These harmonics will make the bass “bigger” on a small speaker. Bigger on a big speaker too.
How does one do that, you ask?
Harmonic distortion. We put something on the bass (or the kick), deliberately get some distortion, and voilà! We have more harmonics to play with. Technically said: we use saturation.
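Here’s a minimal numpy sketch of the idea, if you like seeing it in code. The tanh curve is a generic stand-in for a saturating tube or tape stage, not the actual Pawn Shop Comp circuit, and because tanh is symmetric it only generates odd harmonics; real tubes and transformers add even ones too.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 41.2 * t)    # a clean low E: one lonely fundamental

y = np.tanh(4.0 * x)                # soft clipping = saturation (generic stand-in)

# The saturated signal sprouts harmonics at 3x, 5x, 7x the fundamental.
spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / sr)
for n in (1, 3, 5, 7):
    idx = np.argmin(np.abs(freqs - n * 41.2))
    print(f"{n * 41.2:6.1f} Hz: {20 * np.log10(spectrum[idx] + 1e-12):6.1f} dB")
```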
The Pawn Shop Comp is a go-to for this. Here’s what you do:
1) Copy your bass track. Label the original BASS, the copy BASS FIX. Put an iteration of the Pawn Shop Comp on each.
2) Compress BASS for a big fat sound. We prefer a medium attack for punch and a medium-fast release for some low-end bloom, but play with it. Start the ratio at 4:1 but it’s ok to go higher or lower. If the playing is pretty consistent, ratio can be on the lower side. If the playing is erratic, feel free to go way up. It can almost be a little doofy and too much. Watch that meter! Musical looking = musical sounding.

3) Compress BASS FIX to get a bit of "click" on the track: medium attack, medium release. 8:1 or higher. Higher ratios tend to be clickier. Don’t be surprised if the compressor settings for the two channels are similar—they are, after all, the same thing.

4) Still on BASS FIX, flip around to the back of the Pawn Shop Comp. Dial up the PREAMP to get some distortion. Crank up BIAS a bit to make the effect more pronounced. Try out different tubes and transformers. You’re looking to get a rich, bright sound. It should sound a little buzzy when you hear it by itself. That buzz is harmonic distortion, and that's what will make that bass sound like something on a small speaker. Don’t let it get too buzzy. It shouldn’t sound like it’s going through a fuzzbox. Also, play around with the EQ back there! Here are some settings I used.

5) Start out with the channel labeled BASS and bring it up in your mix. Bring up BASS FIX until the overall bass sound has more clarity and presence. Switch your monitor speakers between bigs and smalls and tweak the settings and the levels until you can hear the bass on the small speakers.
Quick small speaker trick: I switch over to my Mac Mini’s internal speaker, which sounds awful. Laptop speakers also have no bottom end, so they’re good to use as a small speaker.
I did a quick screen grab of the frequency response of the BASS channel—just compression, and the BASS FIX, which has compression and saturation. BASS FIX is on the bottom. You’ll see there is more going on from about 300Hz up—denser. That’s the saturation.

Some notes:
No, the bass on the small speakers won’t sound like the bass on the big speakers. This isn’t a miracle. You’re looking to get the bass to do the same JOB on the small speaker that it does on the big speaker. So, if the bass is carrying a lot of the song, like this, then it should do that same thing on either speaker.
Some of you might be thinking, “I’ll add a high-pass filter to that BASS FIX track, because we don’t need all that extra bottom end, we really need the higher stuff!” Awesome! You’re thinking correctly, but... if you roll off the lows BEFORE saturation happens, the fundamentals won’t be there for the saturation to dig into and make them sexy overtones. So, by all means roll off the lows, but AFTER the saturation, so after the PSC, not before.
If you’re in a roll-off filter mood, you can low-pass the BASS channel: try like 24dB/Octave at maybe 200Hz. This gets rid of all the high stuff. The resulting sound should be like you’re smothering a bass amp with a pillow. Then, high-pass the BASS FIX channel: 24dB/octave at 200Hz. Experiment! I’m guessing at these settings. I will write another whole thing on fixing bass, and another, and another... there are so many ways to fix low-end issues.
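If you want proof that the order of operations matters, here’s a quick numpy/scipy sketch of the wrong way and the right way. The tanh saturator and the 200Hz, 24dB/octave high-pass are stand-ins matching the guessed settings above, nothing more.

```python
import numpy as np
from scipy.signal import butter, sosfilt

sr = 48000
t = np.arange(sr) / sr
bass = np.sin(2 * np.pi * 41.2 * t)              # the fundamental

hpf = butter(4, 200, btype="highpass", fs=sr, output="sos")  # ~24dB/oct
saturate = lambda s: np.tanh(4.0 * s)                        # stand-in saturator

wrong = saturate(sosfilt(hpf, bass))   # HPF first: the fundamental is gone
                                       # before the saturation can chew on it
right = sosfilt(hpf, saturate(bass))   # saturate first, THEN roll off the lows

print("HPF -> saturate:", np.sqrt(np.mean(wrong ** 2)))   # almost nothing left
print("saturate -> HPF:", np.sqrt(np.mean(right ** 2)))   # the new overtones survive
```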
My background, if it isn’t obvious by now, is very analog: big studios, big tape decks, big consoles, mics all over the place and the band playing live. My philosophy back then was to record decisively. Get the sound you wanted (or needed) onto the tape, build the sonics of the song by making choices, so that the mix was just that, a mix, rather than what we seem to have today, a fix. Bear that in mind as you read through this collection of thoughts on recording guitars.
Some of you are experts and know this stuff. Some of you have big budgets and guitar techs and amp collections. Many of you don’t. Hopefully there’s a bit of wisdom in here for everyone.
Tuning
I wrote a bunch on harmonic distortion here, and how it affects whether or not something might be perceived as being on pitch. You know what else makes things seem on pitch? Properly tuned instruments. You know what else? Properly intonated instruments.
Before a studio lockout began, I would gather up all of the instruments—guitars and basses—that we were thinking of using and drop them with a guitar tech who would do a setup on everything and, more importantly, intonate every instrument to itself and make sure all the instruments played in good tuning with each other. I can’t stress enough how important this is. Doubling guitars that are out of intonation won’t necessarily make a record sound out of tune, but things will tend to sound thinner or buzzy as the upper harmonics clash.
Other things that affect tuning: the amps. Yes, solid state amps and tube amps have different overtone series, and they can clash with each other. Again, not so much out of tune, but things sound thinner rather than thicker. You all are probably using amp sims so this might not be as much of an issue, but do listen for this stuff. It builds up.
Some people change strings every song. I change when things are starting to die a bit—you’ll notice a loss of clarity and brightness. And of course, if a string breaks, change the whole set.
Instruments aren’t perfect, and the tuning and intonation change as one travels up and down the neck, or up and down the keyboard. Retune the instrument if parts are played in different registers. In other words, you might need to tune for parts played lower than the 5th fret, and then re-tune when parts are played further up the neck. Acoustic guitars are especially a pain in the neck for this stuff.
Always have a tuner set up on your console or DAW and leave it up the whole time, even the vocal sessions.
Guitars, Amps, Cabs, Mics
I like a lot of separation; I want to be able to hear each little part, so I always mixed up guitar sounds on a recording, using different guitars, different amps, different mics, and in many cases different rooms and even tunings. I never wanted redundancy; I ALWAYS wanted something different.
I’d typically close mic amps with an SM-57 and an AKG 414, because these two mics sounded very different from each other. The SM-57 had a bubble in the midrange, while the 414 had more bass and pronounced upper mids and highs. To my ear they fit together like two puzzle pieces. I thought the same way for guitars—a humbucker to the left, a single coil to the right; and for amps—a cranked up high gain amp here, a lower gain, cleaner amp there; and for cabinets—a 4x12 closed back cabinet on one side, a single 10” open back on the other.
Typically, I’d try to use different guitar and amp combinations for rhythm parts as well as solos. I wanted the solo to stand out, so why use the same processing as the rhythm parts?
Usually, any pedals were added as we went down to tape. I don’t think I ever tracked a guitar part with a sense of, “Hmmm... I wonder what I might do to this in the mix?” Seriously, I just wanted to mix it in the mix.
I mic’d amps as far up from the floor as possible, meaning using the upper speakers of a 4x12 cabinet, or putting a combo amp on a table or a chair, or on another cabinet, to get it off the ground. This was for phase reasons. You don’t want to record parts that sound like a stuck phaser... unless you want the sound of a stuck phaser.
By the way, each speaker in a cabinet sounds different. Pick the one you like best. Also, experiment with micing the sides of cabinets, the inside of a combo amp, etc. Chances are you won’t get something usable, but it can be fun and sometimes you get a sound that works.
A cool trick is to mic a solid body guitar’s body. You get a thin, flinky sound, but it can add articulation to a part, and a strummed mic’d electric body is a very cool “acoustic” guitar sound. You can also do this by taping a set of headphones to the guitar’s body and then running the headphone’s cable into a mic pre.
Processing
Beware hitting preamps too hard and driving them into saturation. Yes, it can sound cool and add interesting distortion, but clipping removes attack and articulation. If you need clicky guitar parts, give them headroom. Same thing with recording to tape or using tape emulation. You won’t have nice note separation and articulate parts if you get rid of transients.
If you need click, don’t clip.
I never recorded guitars with noise gates. I hate the sound of gates. There are other ways to get rid of hum and noise without screwing up the decay of notes, especially these days.
Guitars don’t have particularly fast transients, so compression is more about the overall sound of the circuit and the gain reduction needed. I preferred dbx160s, especially the units with the dial VU meter. In the box, it’s our Pawn Shop Comp or the VCA-based simulation on the Korneff AIP. Something about a VCA compression circuit on a guitar always feels right to me.
Don’t compress guitars too much to tape, or DAW. But also don’t be afraid to compress everything while tracking, and then compress again in the mix. If you’re shaving a little bit of dynamic range down here and there all over the mix—a dB or two here, 3dB there—the net result is a louder sounding recording that doesn’t sound compressed. I love compressors, but if I hear the artifacts of compression I usually think it’s cruddy engineering. So I’m looking to avoid lots of pumping, loss of brightness, and that “grind” sound compressors can impart.
Recording
I used to track bass and guitar overdubs in the studio, with the player sitting across from me, both of us behind the console. With my right hand, I would operate the tape deck via its remote, punching in and punching out as needed. I’d use my left hand to do whatever had to be done when the guitar player ran out of fingers—muting strings, sometimes fretting notes, sometimes squeezing notes into tune behind the nut. This is actually really commonly done in studios, and if you’re tracking parts without lending a helping hand, you’re doing the music a disservice.
I have a good sense of time and an even better sense of groove. Often, if a player couldn’t get the feel of a part, I would tap my foot on their foot to impart both the tempo and the feel. Some people can’t tap their foot on time. I can’t imagine it but it used to happen.
Squeaky strings? Have the guitar player rub their fingertips on their forehead or sides of their nose to get a little oil to lubricate the strings. This usually gets rid of squeaks.
Put the amp in the bathroom. Turn off the water to the toilet and flush it. Stick an SM-57 down in the bowl and move it around til you get a cool sound. Obviously, don’t put it in the water. Toiletverb.
Toilet paper tubes, paper towel rolls, pieces of PVC piping, these are all fun to stick mics down and then point at an amp, or an acoustic guitar. Studios used to have water coolers with those big replaceable bottles. These are fun to drop a mic down as well. Just make sure they’re dry.
You can also stick mics down orchestral instruments, like tubas and such, and point them at amps for cool sounds. This actually works well for vocals too. Get a tuba. Have someone sing into the bell, stick the mic by the mouthpiece.
Experiment with pulling patch cables half out of pedals. Usually, it sounds awful, but I had a few fuzz pedals that would turn into oscillators, the pitch adjustable with the knobs. Got some pretty out guitar sounds doing this.
Overdub strummy guitar parts with open-tuned guitar parts, and then pull them back in the mix, or use the open guitar part just to feed the reverb and have none of its dry signal in the mix.
Alligator clips—those little pieces of wire with a grippy jaw on the end of them that you find in the workshop area of a studio—clip these on guitar strings for a very strange sound. It will fret a note, but the overtones are really different.
If you want a guitar part to sound further away, mic it from further away.
You can make a talk box by sticking a small speaker, like an earbud, into someone’s mouth and then putting them in front of a mic. Actually, you can run an entire mix through a pair of earbuds in someone’s mouth, and then mic their face, and then track that down into the DAW and use it in the mix somehow. Use cheap ones. They will get all spitty.
I am sure I will think of a bunch of bizarre things I did once this gets published, but for now... sayonara and happy recording.
Let’s look at the relationship of timbre to distortion, because the two are cousins if not siblings. We’ll also compare clipping to compression, differentiate the sound of the two, and get that firmly in your ear.
Four or Five Characteristics
Sounds have four main characteristics: pitch, timbre, loudness, and envelope. And duration, but we don't need that right now.
Pitch we know about. Loudness we know about.
Timbre is a mix of pitch and loudness. All instruments—indeed, all objects—have a timbre when you give them a good klonk or whatever it is that needs to be done to get the thing to make noise. Ping a glass with your fingernail and there is a distinctive "glass" sort of sound. Get two of the same glasses, ping them both and you'll notice that they don't sound exactly the same. This is because each glass has a slightly different overtone series—a slightly different set of harmonics, which are frequencies above the loudest frequency, the fundamental, that gives the note its pitch name.
The overtone series of every instrument is different. We call a sound with a lot of high overtones "bright," with lower overtones "warm." Depending on the math of it all, overtones can also make things sound harsh or smooth, or even in or out of tune. I wrote more in-depth on this stuff here.
Timbre is Overtones
So, you have a note at 329.63 Hz, and you'd like to make it brighter, so you put an EQ on it and turn it up, but you don't have the frequency of the EQ at 329.63 Hz, do you? You have it at like 8kHz or something. A shelf at 15kHz. That EQ is turning up the overtones of the note, right? It gets brighter because you're amplifying the overtones.
For some of you, this is "Duh." For others of you, this is, "Oh really...? Hmmmm..."
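For the "Oh really" crowd, here's a throwaway bit of Python arithmetic. The 6-10kHz window is an assumed stand-in for the bell of an 8kHz boost:

```python
f0 = 329.63  # the E above middle C
for n in range(1, 31):
    f = n * f0
    if 6000 < f < 10000:  # a rough window around an 8kHz bell boost
        print(f"harmonic {n}: {f:7.1f} Hz")
# An "8kHz" boost on this note is really turning up its 19th-30th harmonics.
```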
What if, instead of amplifying the overtones of an instrument, we added more overtones into the picture? Say we generate some additional overtones that are consonant and harmonious with the fundamental and add them into the sound. It would be brighter, right? It would be subtle, not as noticeable as an EQ boost, but it would make an audible difference.
That, campers, is what happens when you drive a signal into tape or a circuit a little too hard and start clipping it. It generates additional overtones—harmonic distortion. This is what happens when you saturate tape, or saturate a transformer, or overload a circuit on a preamp. Heck, just passing a signal through a compressor with no compression happening causes some additional harmonic distortion to happen, which changes the timbre of the signal. This is what people are talking about when they say, "I'm adding this not for the compression, but for the color."
Instruments sound the way they sound in part because of their timbre. Equipment sounds the way it sounds in part because of its harmonic distortion. These are the same thing, really.
Timbre and Harmonic Distortion Fall in Love at an All-Inclusive Resort
So, some instruments naturally sound better with some pieces of gear because the timbre and the harmonic distortion are complementary. And things can also sound bad because of the relationship of these two things. I found out pretty early in my career, when I was recording guitars through distorted amps, that sometimes, if I doubled a part with two different amps, it might actually sound a little thinner when mixed together, or buzzy and harsh, and in some cases, out of tune. It was the overtones and the timbres not lining up.
It's dumb luck that the harmonic distortion caused by tape compression/saturation generally enhances the tonality of most instruments. Same thing for circuits using tubes. The same thing for transformers. Rather than EQing a vocal to get it to sit better, we can smush it a little bit into tape and it gains a bit of presence and "bite." We can get a bass or a kick to have more authority on small speakers by pushing it a little bit harder through some transformers, which tend to generate harmonic distortion that is lower in frequency than most of the stuff generated by compression/saturation/distortion.
Slamming cymbals through things often sounds like ass—really nippy and harsh. Too much harmonic activity. Higher voices and higher-pitched keyboard parts can get really nasty with too much extra harmonics up there. Danger Will Robinson!
Don't forget, ALL analog gear and all ANALOG MODELED digital audio adds some harmonic distortion, and things change timbre due to this as levels go up and down. At Korneff, we spend MONTHS on the modeling to get all the distortion and harmonics behaving in an authentic, analog way. It's easy to make a plug-in that does something. Relatively speaking. It isn't as easy as, oh, making toast. But it's much more difficult to make a plug-in that really captures the analog inspiration.
Harmonic distortion ain't the only thing that happens when you club a baby seal of an audio signal with a 600 pound tape deck. Or a feather-light Echoleffe Tape Delay. You also change the waveform.
Clipping Made Easy
Any sound has an envelope. This is how the sound varies in loudness or power on a micro level. My easy way to think of it: there's a distance between the loudest bit of the sound, usually the attack, but not always, and the quieter bits of the sound—the way the note dies off, the resonances of the body of the instrument (or of a speaker cabinet or a room). The little rattles and noises and squeaks things make.
Compression changes the distance between the loud bit and the quiet bit.
Tape compression and saturation squash the signal (compress it) with an instantaneous attack that definitely clips the transient a bit. This is true of ANY signal that you slam into clipping: you lose some attack. But saturation has a very, very fast release. Like, slightly less than instantaneous. Actually, for our purposes, it's instantaneous.
So, when you squash something into tape, not only do you add harmonics, you lop off the loud bits and smush them down, and because you're increasing the level to do the smushing, you're also bringing up the quiet stuff.
Now take a snare. If we smash it into tape, it gets a little bit THICKER, because we're adding harmonics, and a little bit LONGER ACROSS TIME, because we're changing the relationship between the loud and the quiet. You understand that if we bring up the quiet stuff, the sound will appear to last longer, right?
And you realize that lengthening a snare will change the groove, right? It will sound more "behind the beat" if you squash it into tape.
Now, crushing guitars into tape adds a very nice set of overtones that give them a little more brightness, but the transients are getting slightly clipped, and they get a little bit less distinct and less punchy. You lose the "click" of things. Same thing with pianos or any sound with a fast attack. Slam it into tape or a tube or a preamp, clipping it, and you'll lose that transient a bit. Same with vocals. When I was recording rap stuff, I would cut the vocals a little bit lower so as to not lose articulation and wind up with it sounding mumbly. I would cut punk vocals hot, clipping into tape, to deliberately get them a little less articulate and at the same time bring up the spittiness and the mouth noises (that's quiet stuff) so the whole thing sounded more "in yer face."
If you think about it, if someone was in your face screaming at you, you'd hear all the mouth noises. You might even taste the mouth noises.
Homework
So, set up a mix. Route everything to a stereo subgroup. Label this CLEAN. Add two pre-fader sends from this and send each to a different stereo subgroup.
Put the Echoleffe on one subgroup and set it to Tape Emulation mode. This is the group that's going to clip everything using tape saturation. Label this one CLIP. Everything going through this will lose transients but gain harmonics and gain length (the quiet stuff will get louder).
Put the Pawn Shop Comp on the other subgroup. You can use the default setting. Set the ratio to like 6:1 and drop the threshold until the meter is bouncing musically. We want to compress, not limit. This compressor will bring out the transients and push down the quiet stuff a bit. Label this COMPRESS.
Route all three subgroups—CLEAN, CLIP and COMPRESS—to your stereo master.
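If it helps to see the homework as signal flow, here's a rough numpy sketch of the three buses. The clipper and the static gain curve are crude stand-ins for the Echoleffe and the Pawn Shop Comp (no attack or release modeled); it's the parallel routing that matters.

```python
import numpy as np

def clip_bus(x, drive=3.0):
    # tape-style stand-in: instant attack/release, lops transients, adds harmonics
    return np.tanh(drive * x) / np.tanh(drive)

def compress_bus(x, threshold=0.3, ratio=6.0):
    # crude static-curve compressor stand-in (no attack/release envelope)
    gain = np.ones_like(x)
    hot = np.abs(x) > threshold
    gain[hot] = (threshold + (np.abs(x[hot]) - threshold) / ratio) / np.abs(x[hot])
    return x * gain

def master_bus(clean, clean_fader=1.0, clip_fader=0.7, comp_fader=0.7):
    # Pre-fader sends: CLIP and COMPRESS always receive the full signal,
    # no matter where the CLEAN fader sits. That's the whole point.
    return (clean_fader * clean
            + clip_fader * clip_bus(clean)
            + comp_fader * compress_bus(clean))

# Pull CLEAN all the way down and the parallel buses keep playing:
mix = master_bus(0.5 * np.random.default_rng(0).standard_normal(48000), clean_fader=0.0)
```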

Playtime! Experiment! Pull down COMPRESS and leave up CLEAN and CLIP. Pull down CLIP, push up COMPRESS. Listen for the differences.
How does it sound with all three up? How does it sound if you pull down CLEAN all the way? Did you set the sends to pre-fader? If you didn’t, you're about to find out why they need to be set that way.
Throw a LUFS meter on it and mess around with things. Can you get something like a -8 LUFS-S reading without it sounding like utter ass? And without driving things over like -1dB true peak?
Play some more. Maybe route some sounds just to CLIP, and others just to COMPRESS. What works best where?
By the way, if you don't know, what you're doing is parallel compression and parallel saturation. I know most of you know this, but there are a lot of beginners reading this, too.
You will learn tons if you do this.
I’ve always disliked tracking anything with headphones, especially vocals. Some vocalists have pitch problems on headphones. Some are just uninspired. If you’re recording yourself, it can be a pain in the head to keep switching your set-up.
The simple solution is to record without using headphones and instead monitor with speakers. There are a few ways to do this. I’ll go over all of them briefly, then show you a way to do it that works amazingly well.
First things first: remember that most of the leakage a directional microphone picks up is reflected sound coming in the FRONT of the mic. Remember that the back, or sides, of a directional mic are designed to reject sound, and overall that works pretty well. I’m assuming you’re not cutting a vocal with an omnidirectional, especially if your goal is to cut down on leakage.
A moving coil mic is generally going to pick up less of the stuff you don’t want, which is the room reflections coming over the shoulder of the person singing. Why? Because moving coil mics are less sensitive overall—a big heavy diaphragm attached to a big heavy coil of wire has more inertia. Condenser mics will generally pick up more of everything, but I’ve cut tons of vocals with speakers for monitoring using condensers and it usually works out fine.
Just Record and the Hell with Leakage
You can set up a mic in front of the speakers and cut the vocals and just ignore the leakage. Depending on your room and the volume you’re working at, leakage might not be an issue. Obviously, use a directional mic with a cardioid or hypercardioid pattern, experiment with where you place it—you might get less leakage if you put it right in front of one speaker rather than in between the two (the polar pattern will affect this a lot).
In the mix, you’ll have to gate things or edit out the leakage. However, there can be problems because the leakage on the vocal track, when the gate is open or when the vocal is playing, can mix with the music on your tracks, and you might hear a change to the snare or the low end whenever the vocal comes in and out of the mix. This is evident on the Chris Isaak song Wicked Game. My quick fix is to ride the vocal level (with automation) rather than gate or edit it, so I can control how much of the overall sound of the track changes.
Using a moving coil rather than a condenser is recommended if you’re doing things this way.
If you’re sitting down, think about throwing a piece of acoustic foam over the keyboard/work surface and on the monitor. Close reflections suck for a vocal.
If the leakage is a problem, then you’ll probably have to start playing around with phase.
Put the Speakers Out of Phase
I think this is an awful solution to a problem, but I’ll explain it anyway, provided you promise not to use it. I tried it once and it was a waste of my time.
Set up the vocal mic so that it is in the exact sweet spot of the speakers, then reverse the phase of one of the speakers and THEORETICALLY the resulting phase cancellation will result in far less leakage, and because the singer is not exactly in the sweet spot — the spot of maximum cancellation — they’ll still be able to hear the music well enough. Engineers also have done things like putting out-of-phase speakers to either side of the singer, equidistant, pointing at the mic.
Why this sucks #1: It sounds awful
This sounds awful. Out-of-phase speakers sound awful. Mostly you’ll kill a ton of bass, so the music won’t be exciting (there's nothing quite like an unenergized and uninspired singer), and the net sound will be phasey and plain old weird. If the singer shifts or moves, they’ll hear all sorts of swooshing, and if the phase issue is bad enough, they might get nauseated. Did you know huge weird phase shifts play ear games and cause something akin to motion sickness? Ever cut tracks with a singer who wants to vomit?
Why this sucks #2: It works like ass
Because most of the leakage that comes in a mic is coming in the front, and is predominantly indirect sound, chances are the speakers out of phase trick isn’t going to buy you much. The out-of-phase sound that goes bouncing around the room comes back as in-phase leakage.
If you want to cancel using phase, you have to flip the phase AT THE MIC, not in the air.
Use Two Mics
Get two identical mics. Flip the phase on one of them. Put them very close together, displaced vertically rather than horizontally (one over the other rather than side by side). Have the singer sing into the in-phase mic, combine them into one track or do it in the mix.
This basically sucks too. In a live situation, this might be workable, but in a studio situation, unless the singer is working really close to the in-phase mic, this is going to be all over the place. Little movements will change the frequency response; the out-of-phase mic is picking up the singer's chest so things could wind up overly warm or thinned out, depending on how the phase cancellation affects frequency response. The singer has to put a lot of effort into staying still and in one place, and that usually results in a stiff, bad vocal.
Here’s the best way to do it.
Record an Out-of-Phase Leakage Track
The first time I tried this was tracking a jazz choir and I didn’t have enough headphones. I put them out into a room and set up a pair of big, loud monitors, and put out two Crown PZM mics, each taped to a music stand. The speakers blasted into the choir; the mics were spread about 8’ apart, roughly 6’ from the first rank of the choir. We cut a good take. I had the choir shuffle their positions around and recorded another take. I played back the four tracks and it was a big leaky mess until I reversed the phase of the second take. Leakage GONE. Vocals untouched.
This technique works jaw-droppingly well. Here’s how you do it:
First of all, set up the mic so the singer is really comfortable and loves how the speaker monitor mix sounds. I usually did this right in the control room, dead center in front of the console, but it can be anywhere. Once you get the singer happy, tape the mic stand down, tape the mic to the stand — whatever you have to do to make sure that microphone doesn’t move at all.
You also can’t move the speakers, and you have to do all the recordings using the same speakers. And you should tape down the monitor level: it has to be the same for every take. Ideally, you do all the vocal tracks in one day.
So, set up your vocal chain (mine was usually a Quad Eight mic pre followed by a Summit TLA-100 and then an Aphex 551). Get it to sound dandy good. Put the vocalist in position, then have them sing. As many takes as you need. Punch in, etc. Don’t do any comping yet!
Once you get a good take, have the singer stand there in front of the mic silently and record just the leakage — exactly what the singer was hearing when they did their vocals, but no vocals. Now the magic: play back the vocal track, reverse the phase of the silent track, and bring it up in the mix until the leakage disappears. And disappear it will. This works like magic. Comp the vocal til the cows come home; this will still work. You can even bounce the vocal track and the leakage track together — just make sure the leakage track is phase reversed.
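If that seems too good to be true, here’s the whole trick in a few lines of numpy, with a sine standing in for the singer and noise standing in for the monitor bleed. It also shows why nothing can move: the cancellation only works because the leakage in both takes is identical.

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 48000
vocal = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # stand-in for the singer
leakage = 0.3 * rng.standard_normal(sr)               # stand-in for monitor bleed

vocal_take = vocal + leakage       # take 1: singer plus monitors
leakage_take = leakage             # take 2: monitors only, singer silent

clean = vocal_take - leakage_take  # polarity flip + sum: leakage gone, vocal untouched

print("bleed before:", np.sqrt(np.mean((vocal_take - vocal) ** 2)))  # ~0.3
print("bleed after: ", np.sqrt(np.mean((clean - vocal) ** 2)))       # 0.0
```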
In the old days before unlimited tracks, if I was going to double the vocal, I would cut the double and reverse the phase of that — I have yet to have a singer cut a double so close that any of the vocals canceled, but I suppose it’s possible. I also don’t see the point of cutting a double so tight that you can’t tell it’s a double — just bring up the original track by 3dB and get it over with.
This technique works amazingly well. It’s how Chris Cornell cut a lot of his vocals. I’ve used it on hundreds of sessions. This method will also work with an omnidirectional or a bidirectional mic, and it works like a charm with condensers as well.
The first thing you can do is stop thinking of reverb as a ramification of physics.
Don't think of using reverb as an acoustic effect. Think about using it as an emotional effect, or a narrative effect. Lyrically, is the song set in the present or the past, or is it perhaps in both? Can reverb be used to differentiate the past from the present, or the present from the future? What does the past sound like? Is the future wet or dry? When the singer is in their head, what does that sound like? What is the reverb of thoughts?
Is the character in different spaces during the song? Is there a bedroom, or a kitchen? Is the character in one place during the verse and another place during the chorus? This might be something you decide that's not based on the lyrics. It can just be a decision you make.
Control the sense of space and intimacy. Reverb is distance. Want a vocal part to sound like it's in the listener's ear? Dry it up and pan it hard. Control the depth of the soundstage by putting some things farther away than others. What's in the back of the room? What's in your face? Make decisions, damnit!
The listener probably will never go, "Ah, the singer is in the bedroom in their past, then dreams they're in a canyon, then they yell in a bathroom." And honestly, you don't want your listener noticing all that stuff, that would be like watching an old Godzilla movie hoping to see the strings moving everything. But you do want your listener to "lose" themself in the song, and you do that with small, well-thought out decisions. It's like when you eat food prepared by an excellent chef. You don't know what little tricks they're up to. You're not thinking, "Ah, this butter has had the solids removed so it is actually closer to ghee." You're just thinking, "Man, this is delicious."
We want people to hear the results of our work, not retrace our exact steps. We're not making records for other engineers to like.
Put two different reverbs on two channels and pan them so that one reverb is on the left, the other on the right. Then, feed the signal that you want reverb on to both channels, or more to one than the other, whatever you want. The more different the two reverbs are, the weirder this effect gets. A short decay time on the left and a long decay time on the right will move the reverb across the speakers from left to right. There are all sorts of things you can do with this set-up.
Set really long pre-delay times. Pre-delay corresponds to how close the nearest wall is in a space. In a small space, the nearest wall is only a few feet away, but in an aircraft hangar, the nearest wall might be hundreds of feet away, so there will be a long pause between the direct sound and the start of the reverberant sound. Our ear makes decisions about the kind of space it is in based on when it hears the initial return of the room, i.e., the pre-delay. So, a small room with a huge pre-delay sounds very unnatural, as does a huge room with a very short pre-delay. This is a fun thing to experiment with; it adds a bit of "acoustic confusion" into the mix. Perhaps tie the use of it into the lyrical or emotional content of the song. Like, the singer is expressing doubt or confusion in a section, and to heighten that, add a short reverb with a long pre-delay, which not only pops the lyrics out but also gives the listener a hint of confusion.
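The napkin math on pre-delay is simple, assuming the crude model that the first reflection runs to the nearest wall and back:

```python
# Crude pre-delay model: sound runs to the nearest wall and back (~343 m/s).
def predelay_ms(wall_distance_m, c=343.0):
    return 2 * wall_distance_m / c * 1000

for d in (1, 5, 40):  # bedroom wall, live-room wall, aircraft-hangar wall
    print(f"nearest wall {d:2d} m away -> ~{predelay_ms(d):5.1f} ms of pre-delay")
```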
Compress your reverb returns. Stick a compressor on the insert of the return channel and squash that stuff. Play with the attack and release. Can you get the reverb to "breathe" along with the tempo of the track? Long attacks will increase the "punch" of the reverb. Short attacks and releases can lend an almost backwards sound to the reverb. Experiment with putting the compressor both before and after the reverb in the insert — you'll get wildly different results.
Duck your reverb returns. Put a compressor on the return (you pick the spot in the insert chain) and then key that compressor to duck the reverb. If you key the compressor off, say, the vocal you're putting reverb on, you'll get a very clear vocal with reverb blossoming whenever the singing stops, and there's no automation needed. What about ducking the backing vocal reverb with the lead vocal, especially if there is an alternating quality to the two parts? You can also key reverbs off percussion so that the kick or the snare stops the reverb for a moment, which can give you all sorts of rhythmic effects in addition to giving your mix clarity. Remember, reverb tends to muddy things up, so if you're ducking during busier sections of the song, you're going to increase clarity in those sections, and defer the effect of the reverb 'til a moment after, so the track will be clean but still have an overall wet quality.
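For the curious, here's what a ducker actually computes, as a bare-bones Python sketch. The depth, release, and the 0.5 "hot" level are arbitrary numbers I picked for illustration.

```python
import numpy as np

def duck(reverb_return, key, sr, depth_db=9.0, release_ms=200.0):
    # Bare-bones ducker: follow the key signal's envelope and push the
    # reverb down by up to depth_db whenever the key is hot.
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(reverb_return)
    for i, k in enumerate(key):
        env = max(abs(k), rel * env)               # instant attack, slow release
        gain_db = -depth_db * min(env / 0.5, 1.0)  # 0.5 = assumed "hot" key level
        out[i] = reverb_return[i] * 10 ** (gain_db / 20)
    return out

# e.g. duck(vocal_reverb, lead_vocal, 48000): the reverb gets out of the
# way while the singer sings, and blossoms the moment they stop.
```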
Gate and Key your reverb returns. Gated reverb is a staple effect on drums from the late 70s, to the point where it is a cliché, but gating a reverb and then keying it from another sound source is still a fun thing with which to experiment. Key percussion with itself to get a classic gated reverb effect on something other than a snare. Gate the reverb of a tambourine with the snare so there's a huge wet noise on the snare that isn't the snare. Gently expand (a gate that only reduces output by a few dB, such that when the gate opens there's only a slight volume increase) the tails and decays of pads with the rhythm instruments to extend the feel of the groove into other aspects of the sonic landscape.
Modulate your reverbs. It's amazing how cool a little chorus or phase sounds on reverb, and how seldom people think to do something so simple and effective. In the old days when hardware units were the only option, it was hard to sacrifice something like a rackmount flanger to a reverb, but nowadays, just throw a plug-in on it. Just experiment. Like a lot of effects, modulated reverb is best used sparingly, to heighten a specific moment of a song, rather than having it on all the time. But rules and suggestions are made for breaking and ignoring, so feel free to slop modulation all over the place, but perhaps control it with ducking? Modulated reverb on strings, keyboard pads and chorussy vocals can add an otherworldly effect to things, and you can rein it in using keying and automation.
Goof Around in Fadeouts. Good fadeouts are an art form. I love fadeouts that have a little something in them to catch your ear and pull your attention back into the song. An amazing, fast guitar run, a spectacular vocal moment, someone talking, etc. Doing wacky things with the reverb in a fadeout is always fun. Crank the reverbs up so that things sound like they're going farther away as they fade, or dry things up totally so that the fade makes things sound like they're getting smaller. Roll off the bass gradually and pan things tighter to accentuate the smallness.
A last bit of advice: you’ve got more power in your laptop than anyone in the studio biz has ever had. By next year, that will probably double. Put that power to use in the search for something new, different and yours. Experiment and play. Don’t let AI have all the fun.
We usually think of harmonics as being pleasant things to hear. They give an instrument its timbre, and they provide brightness and clarity.
Don’t know what harmonics are? Go here and read.
Usually the harmonics that our ears like to hear are mathematically related to the fundamental based on whole numbers. Whole numbers: ones and twos and threes. Octaves are multiples of 2: 2, 4, 8, etc. Harmonics can be based on even numbers, but also odd numbers, and harmonics based on 3 or 5 or 7 might sound a little wooly, but they don’t sound plain old bad. Also, keep in mind that sometimes the math on these things isn’t perfect. It might not be a perfect multiple of 3 but something close, like 2.98, but generally this is good enough.
Inharmonicity
However, there can also be harmonics generated that don’t have any whole number relationship to the fundamental, and these harmonics are usually unpleasant to hear. This is called Inharmonicity — when the harmonics don’t make whole number sense mathematically.
Strike Tones
On many instruments, inharmonicity happens in the strike or the initial attack of the note. Bowed and wind instruments—violins and flutes, for example—don’t have much inharmonicity because they don’t have a fast transient attack. Brass instruments typically have slower attacks as well.
Fast transient attacks, on the other hand, generate a lot of “inharmonic” stuff—lots of non-whole number overtones. On a piano, the initial strike of the hammer generates a lot of inharmonicity, and that strike is basically pitch-less for a split second. It’s only once the string resonates for a moment that we get a sense of the note. The same thing is true of guitars, bells, and especially drums. That initial strike is basically out of tune, and it is the resonance after the strike that conveys a solid sense of pitch.
The faster the attack, the more inharmonicity is generated in that moment. And, by the way, the transient is typically the brightest moment of a note, because it is so rich with harmonics both good and bad.
Actually, the strike of a note is usually very out of tune! Plug a bass into a tuner and watch how the tuner behaves when you slap a note versus using a softer attack with your finger.
Bells are a great example of the inharmonicity of a strike tone. Listen to Hells Bells by AC/DC: the opening bells sound out of tune until they resonate. This has to do with their strike tone. I found a great video that explains this, and while most of you won’t ever record church bells, this is fascinating stuff and it will help get the concept of inharmonicity firmly in your mind.
SO.... instruments have inharmonicity in the attack, the strike. But what about gear? Compressors? Amps? Plug-ins?
Intermodulation Distortion
The way equipment and devices, whether analog or digital, create inharmonicity is through Intermodulation Distortion.
Intermodulation distortion is overtones that are way out mathematically from the fundamental. They typically occur when multiple fundamentals mix together in ways that generate, well... non-whole number math. Harmonics are generated that don’t have whole number relationships to the fundamental. Some of these new harmonics might be undertones that happen below the fundamental, and others above. In some cases the products of intermodulation distortion sound good, but the more complex the sounds get, the hairier things get, and quickly.
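You can watch this happen with two sine waves and a lopsided clipping curve in numpy. The bias term makes the tanh asymmetric so both even- and odd-order products show up; note that none of the new frequencies is a whole-number multiple of either tone.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
f1, f2 = 440.0, 620.0   # two unrelated tones
x = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

y = np.tanh(4.0 * x + 0.5)   # asymmetric (biased) clipper stand-in

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / sr)

# Harmonic distortion lands at 2x, 3x of each tone. Intermodulation lands
# at sums and differences, which line up with neither fundamental:
for f in (f2 - f1, f1 + f2, 2 * f1 - f2, 2 * f2 - f1):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:7.1f} Hz: {20 * np.log10(spectrum[idx] + 1e-12):6.1f} dB")
```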
Remember that an instrument, unless it’s like a flute or something with a very simple timbre, already has a lot of overtones to it. A human voice has an incredibly complex series of overtones, so complex that virtually every person has a unique set, which is why we can recognize someone’s voice even if they just clear their throat. So there’s this ton of harmonic activity, then there’s harmonic distortion added to it, and all of those fundamentals AND harmonics have additional harmonics added to them, and then intermodulation distortion kicks in, and ALL those fundamentals AND harmonics AND additional harmonics start negatively reacting with each other adding in yet more harmonics that have bad math going on.
This is the distortion you hear when you crank up guitar amps, or slam things through the mix bus and drive it into clipping.
Here’s a nice, non-technical video on it that makes a lot of sense. You’ll hear why intermodulation distortion can be a huge issue.
Quick Takeaways
Some things to take away from all this.
- Strike tones are out of tune and bright.
- Intermodulation Distortion gets worse and more noticeable as the sounds interacting with each other become more complex. It’s hard to get a flute to exhibit any intermodulation distortion. It’s easy to get a full mix to sound awful with even a little intermodulation distortion.
A very common technique in the old days was the Mixback. Basically, engineers would print whatever was going through the master bus (the stereo bus) to two open tracks on the 24-track master. Can you believe there was a time when 24 tracks was too many? Actually, you can find track sheets from 8-track and 16-track recordings with lots of open tracks.
The mixback tracks were a running record of whatever was in the master bus during the session. They gave the engineer a good-sounding mix just by pushing up two faders.
If you needed to do overdubs, you’d just bring up the two mixback tracks and there was a headphone mix ready to go. Need more of something like the lead vocal? Bring up the lead vocal track a little bit and now the mix has more lead vocal. Need less bass? Piece of cake: reverse the phase on the individual bass track, slide up the fader and the phase cancellation lowers the volume of the bass in the mix! How cool is that? And yes, it really does work!
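If the phase trick sounds like voodoo, here's a toy numpy version. Because the mixback is literally the sum of the tracks, adding a polarity-flipped copy of a track just subtracts it back out, and the fader sets how much.

```python
import numpy as np

rng = np.random.default_rng(1)
bass = rng.standard_normal(48000)        # stand-in for the bass track
the_rest = rng.standard_normal(48000)    # stand-in for everything else
mixback = bass + the_rest                # the printed 2-track mix

# "Less bass": push up the bass track, polarity flipped, under the mixback.
# Flipped at half gain (-6dB) cancels exactly half the bass.
less_bass = mixback - 0.5 * bass

# "More lead vocal" works the obvious way: just add the track on top.
print(np.allclose(less_bass, 0.5 * bass + the_rest))   # True
```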
A better mixback trick: you could “punch in” the mix. If you didn’t have automation, with a mixback you could work on each individual section, punching in and out to do all sorts of difficult mix moves. And if the record company wanted changes to the mix, that was easy to do — add tracks in or lower them using the phase trick.
Automation ended the Mixback, or did it? With a DAW, bouncing rough mixes and then bringing them back into the session is very useful. It makes fixing latency issues a breeze: bring up the mixback mix, mute all the individual tracks and turn off any plugins on the mix bus. You can tweak the mixback by bringing up individual tracks, reversing the phase if you need to lower the volume of something, and then bounce that and bring it back into the session.
With a bit of ingenuity and enough ins and outs on your interface, you can even do a real mixback: route the master bus output to two tracks of the DAW (make sure you mute those tracks to avoid feedback), and then you can punch in and out of your mix just like the good old days. I mix this way all the time, punching the mix in section by section. And because it’s digital, there’s no generational loss or hiss build-up.
Schönen Montag!
That’s German for “have a nice Monday.”
I’ve been bopping around Montreal’s subway system listening to Krautrock. It’s perfect music for trains and tunnels and feeling odd and alienated.
I went down a Krautrock rabbit hole. And this is your invitation to join me down there!
Krautrock... that is a terrible name, coined by British music journalists. I hope any German readers don’t find it offensive, and please feel free to correct or add to anything in this Neuer Montag.
It’s also ridiculously reductive. It’s applied to a variety of music recorded from about 1968 into the 1980s that stylistically ranges from psychedelic jams to synth-based minimalism to prog rock with embarrassingly bad lyrics to punk to free jazz. As a genre, Krautrock is all over the place.
While a lot of it has a mechanical 4/4 beat known as “motorik,” the only commonalities seem to be a tendency towards experimentation and noise, and that it doesn’t have blues as a basis for chord structure or improvisation.
Whoops! Another commonality in Krautrock is a brilliant engineer/producer named...
Conny Plank
By brilliant, I mean Bill Putnam or Al Schmitt or Tom Dowd brilliant. An engineer’s engineer. Someone who not only knew where to put the mics, but also how to build the console. BRILLIANT. Conny Plank should really be much better known.
Konrad “Conny” Plank recorded or produced practically every group associated with Krautrock at one time or another. He also recorded albums for Scorpions, Eurythmics, Devo, Ultravox, Killing Joke, Brian Eno, and even Duke Ellington! Plank turned down working with David Bowie on the album that eventually became Low (too many drugs, he thought). He also turned down working on U2’s Joshua Tree (too much Bono).
The center of his studio near Cologne was a 56-channel custom console, built to his own specifications and recording style with Michael Zahl, who now makes 500 series EQs and such.
Here is some vintage console porn. I love this stuff.
Plank also developed a recall system that used a camera suspended over the console to take a picture of the knobs. To recall this “snapshot” of the console, the film would be projected back through the camera onto the console and an assistant would turn the knobs to match the image projected onto them. Brilliant.
He also solved the problem of listening to finished mixes in the car: he built an illegal radio station in the studio. He and the clients would pile into his car, tune into the studio’s station, and wait for the assistant engineer to include the mix in a playlist of similar records. Again, brilliant.
And, of course, he was a virtuoso engineer, a huge believer in mic placement and working room acoustics. His recordings of percussion-based experimental jazz are fabulous, capturing everything with amazing clarity and precise stereo placement. He was also a master of tape manipulation, blessed with a fantastic memory that allowed him to edit long, free-form jam sessions into cohesive songs, linking forward and reversed bits of tape and noise without keeping detailed notes. He just sort of “did it."
Conny died young, at 47 in 1987, of cancer. His console is now in England, still making records for artists like Franz Ferdinand. The Motorik goes on!
Some Things to Hear
Here’s a curated list of some things Krautrock and Conny Plank.
This is a wonderful extension to everything I’ve written above, going into a bit more detail on the recordings and linked to listening examples.
https://thevinylfactory.com/features/10-essential-conny-plank-records/
Can - this is out there stuff.
https://www.youtube.com/watch?v=2dZbAFmnRVA
Kraftwerk - the Beatles of Krautrock. They’re still around. The album Autobahn was the last record they did that was engineered by Plank, as they became more successful and ever more electronic. Here’s a playlist of vids. Great background music. Occasionally look up and see Germans dressed as robots.
https://www.youtube.com/watch?v=OQIYEPe6DWY&list=RDEMlS0N2Gz3BIH0JY8Cyyrimw&start_radio=1
Neu! - Loose, jam-oriented stuff. The recording below is an 8-track, engineered by Conny Plank. The drums here can’t be on more than two tracks, but there’s astonishing clarity and stereo perspective on everything. My guess is he did this with just two mics in exactly the right place.
https://www.youtube.com/watch?v=zndpi8tNZyQ&list=RDzndpi8tNZyQ&start_radio=1
Niagara - who says a percussion-only album can’t be amazing? Breathtaking engineering by Conny Plank, and amazing playing by a bunch of killer drummers.
https://www.youtube.com/watch?v=4T5R4nFIBgg
La Düsseldorf - a spinoff made up of members of Kraftwerk and Neu! Industrial before industrial?
https://www.youtube.com/watch?v=dz9q9UZS4M0&list=PL4384B64D44A0F11C
Cluster - electronic and minimal, occasionally with Brian Eno.
https://www.youtube.com/watch?v=l50cmJOiHv0&list=OLAK5uy_kNoI0SQPzDD4EFa5MY1gKk-dOyd202orU
Tangerine Dream - dramatic electronic Krautrock, or the basis of every sci-fi movie soundtrack since 1980.
https://www.youtube.com/watch?v=cdFHE73aOMI&list=RDEMflsAy-eLxQ2-oszlwebZ4g&start_radio=1
Faust - Krautrock as noise. Or punk ska. I have no idea what this stuff is.
https://www.youtube.com/watch?v=menuXx3oq80&list=OLAK5uy_lJ5UzdPN6Dj1C7D3oYhQDSc_6Zjc3KgPI
Not Krautrock, but Conny Plank...
Eurythmics - Belinda. Recorded by Conny Plank. This is when the band was way more rock.
Eno - Ambient 1: Music for Airports. This is the record that started ambient music. Plank was very much involved.
Ultravox - Vienna. A huge early new wave hit. High romance and electronic noise, Conny Plank at the faders.
Devo - Q: Are we not men? A: We are Devo. Produced by Eno, engineered by Plank. Insane stuff.
Scorpions - Love Drive. Conny Plank and early metal. Great sounds overall.
A Percussion Recording Tip
I used to use a stereo tube mic most often, but when I was dealing with a percussionist who was playing a lot of different things all over the place, like congas, and then some bell tree thing, and then a rik, and then a talking drum, and on and on, it became impossible to mic all of it with a pair and get good capture, or mic things individually and not get tons of phase issues.
The solution was a t-shirt and a PZM.
A PZM is a flat plate of a mic that has a hemispherical pattern — it picks up everything across 180 degrees. Not a cheap mic: they were originally made by Crown and were like $800 each. However, you could get a Crown PZM for $60 if you went to RadioShack, because the Realistic PZM was in fact a Crown mic. For $120 you could get a pair of great-sounding, albeit unbalanced, wreck-around mics.
SO... I taped a PZM to the center of a t-shirt with gaffers tape and had the percussionist put the shirt on, the mic facing out from his chest, perfectly positioned for percussion pick-up. Since he was naturally balancing levels as he moved from instrument to instrument, I didn’t even have to ride gain. For the mix, I’d split the one track to a bunch of channels and then automate them with whatever EQ or effects were needed to get the best sound out of that particular bit of percussion. When dealing with expensive studio musicians, producers often wanted me to go fast with them in the studio and then work more on the mix because, well, my time was cheaper!
Thanks for coming down the rabbit hole. Tschüss til next week!

Double compression is an awesome technique that totally upped my engineering chops once I mastered it.
It's basically using two compressors in series (one after the other) on a sound source. I mainly use it while tracking, but it is handy to use mixing as well, and in this blog post I'll give you ideas and settings for both applications.
A lot of this post will be centered around vocals, but the technique can be used for anything, although I use it religiously on vocal and bass. Religiously. I don't track either of those sources without double compression on them.
I was shown double compression by an engineer named Fred. He had been at Media Sound in NYC in the 70s, which is where he learned it. I was working in his studio, tracking a crappy bassist. Fred came in, put 2 DBX 160a's on the channel, tweaked a few knobs and lo and behold, suddenly it seemed the guy could actually play.
Why Two Compressors?
Recording on analog tape was really an exercise in minimizing tape hiss, and the most important thing you could do was record your tracks at as high a level as possible to get as high a signal to noise ratio as possible. Yes, you wanted a signal to have dynamics to it, that interesting up and down of volume and intensity that conveyed emotion, but you didn't want to overload things so much that you heard distortion, and you didn't want things so quiet that they "fell into the mud,” down there in the hiss.
Ideally, say with a rock vocal, you wanted to restrict that singer’s output level to about a 9dB swing on the VU meter, with the quietest stuff down around -7 and the loudest about +2, just moving out of the red — call this Maximum Meter Swing. Usually, though, for the majority of the vocal, you want a much tighter meter swing.

You could, of course, cut the vocal higher than that, especially if the singer was a screamer, because they were already reducing their dynamic range by screaming. Having a vocal hit the tape a little too hard on high, loud notes sounded good, too, adding a little extra grit and mojo.
With a singer with good mic technique, cutting a vocal was easy; you'd throw a DBX 160 or an 1176 on it, or if you were at a better studio an LA-2a (or if you were really lucky a BA-6a or, gulp, a Fairchild), and you were done.
But if the singer was all over the place, you'd need a lot of compression to get it on tape correctly, and lots of compression sounded bad — pumpy, with a loss of high end. Again, sometimes you wanted that if the genre called for it, but not usually.
3 to 6dB of gain reduction on a vocal was usually inaudible, but if that climbed up into the 10 to 15dB range, it sounded terrible to my ears.
Double compression solves the issue by splitting the amount of gain reduction needed across two compressors. The first compressor handles the first 6dB of compression; the second compressor takes care of anything above that. Think sometimes none, often one, sometimes both. Also, because the waveform is "pre-compressed" when it hits the second compressor, the second doesn't impart as many negative artifacts to the signal. In fact, you can really squash hard with the second compressor without it sounding awful.
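If you like to think in code, here is a minimal sketch of the idea: two simple feed-forward compressors run in series, the first softer and slower, the second harder and faster. All the numbers are illustrative assumptions (and the knee is ignored for simplicity), not settings from any particular unit:

```python
import numpy as np

def compress(x, sr, thresh_db, ratio, attack_ms, release_ms):
    """Bare-bones feed-forward compressor with a one-pole envelope follower."""
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        coeff = att if level > env else rel  # track up fast, fall back slow
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = level_db - thresh_db
        gr_db = over * (1.0 - 1.0 / ratio) if over > 0.0 else 0.0  # gain reduction in dB
        out[i] = s * 10.0 ** (-gr_db / 20.0)
    return out

sr = 44100
t = np.linspace(0.0, 1.0, sr, endpoint=False)
vocal = np.sin(2 * np.pi * 220.0 * t) * (0.1 + 0.9 * (t > 0.5))  # quiet half, then loud half

# compressor 1: softer and slower. compressor 2: harder and faster.
stage1 = compress(vocal, sr, thresh_db=-18.0, ratio=3.0, attack_ms=15.0, release_ms=200.0)
stage2 = compress(stage1, sr, thresh_db=-10.0, ratio=10.0, attack_ms=1.0, release_ms=50.0)
```

Compare the loud halves of stage1 and stage2: the first stage gently shaves off the first several dB, and the second stage only works on what gets past it, which is why you can set it so aggressively.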
These days, y'all don't worry about signal to noise ratio that much, but mastering double compression means much less work in the mix automating and fixing things because levels are a mess.
Double compression changes the way you track dynamically active signals. I was able to lay these gorgeous vocals on tape that required almost nothing in the mix in terms of automation (I hated using automation - another thing to write about). I started using double compression on bass, acoustic guitars, sometimes on percussion—anything that was all over the place on the meters got double compressed while tracking it down to tape.
Tracking/Analog Settings
I went through a bunch of different compressor setups back in the day, and sometimes I was limited to what was in the studio, but my usual setup was/is a Summit TLA-100 followed by an Aphex 551 Expressor.
Base settings were generally: compressor 1 is "softer and slower" and compressor 2 is "harder and faster."
Compressor 1:
I usually use a soft knee compressor with a ratio under 4:1. I want a fairly slow attack and a longer release. Ideally, the attack is long enough to let some of the transient through so there is some punch, and the release is long enough so that the output is consistent.
Compressor 2:
I use a hard knee setting and a ratio above 8:1. I want the attack fast so that it catches fast transients coming through — sort of like a limiter — and the release fast as well.
When setting these two, of course use your ear, but you also need to watch and interpret the meters. I prefer VU meters, because I'm so used to them. The output level meter to watch is that of the second compressor, or the input meter of the channel. Again, I prefer swinging arm meters over bars that light up.
When things are very quiet, I don't want to see any movement on the gain reduction meters and I'm looking for output levels to be around -5dB or so, certainly not much less than that.
As signals get louder, I want to see the first compressor meter moving, but not by much, and no activity on the second compressor's meter.
Once the signal hits its "average performance" level, I'm looking for the first compressor to be in steadily, with the gain reduction meter swinging down to around -2 to -5dB. The meter movement, as I often write, should look like the way the signal sounds.
The second compressor's meter should be very twitchy and jumpy, moving a lot but not by very much. The output meter of the second compressor should hang around 0, the VU meter on the console matching it.
When things get loud, both compressors should be in a lot, the first for 6dB to 8dB of gain reduction, the second for, well, basically whatever is needed to keep the output meter from getting above about +3dB.
There are also times when the second compressor might kick in for a split second and the first doesn't do anything — this should happen when a really fast transient goes by, like a slapped bass note.
Now, the levels on your digital input... this could be argued over and discussed til everyone is dead. I try to keep the max peaks under -6dBFS, with things averaging around -15dBFS. This correlates well with my experiences using analog tape, giving me about the same headroom. But there are no firm rules here, and everyone does this differently. And please note these are levels for TRACKING into an individual track. These are not not not suggested mix bus or group bus levels.
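If you want a quick sanity check on a take, peak and average levels are simple arithmetic. A little sketch, assuming a mono float signal normalized to ±1.0 full scale (the "take" here is synthesized just for the demo):

```python
import numpy as np

def track_levels(x):
    """Peak and RMS in dBFS for a float signal in the range -1 to 1."""
    peak_dbfs = 20.0 * np.log10(np.max(np.abs(x)) + 1e-12)
    rms_dbfs = 20.0 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)
    return peak_dbfs, rms_dbfs

sr = 44100
t = np.linspace(0.0, 1.0, sr, endpoint=False)
take = 0.4 * np.sin(2 * np.pi * 220.0 * t)  # stand-in for a recorded take

peak, avg = track_levels(take)
print(f"peak {peak:.1f} dBFS, average {avg:.1f} dBFS")  # aim: peaks under -6, average around -15
```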
Mixing/Digital Settings
You modern guys don't have tape hiss as an issue, and I'll bet a lot of you are tracking with a mic through an interface, riding bareback with no hardware compressor, and then compressing on the mix side. Here are some settings for you, using the Pawn Shop Comp. Two instantiations on one channel are PERFECT for this sort of application, perhaps even better than perfect.
The basic idea is the same as tracking: we want the first one soft and slow, the second hard and fast. Here are some visuals along with settings from an actual session fixing a vocal.


The critical setting is going to be the threshold. On Compressor 1, watch the meter and listen. You want a sluggish, sort of musical movement to it. On compressor 2, the movement should be twitchy and fast. That meter shouldn't "lock up" until that signal is loud.
The rear panel controls of the PSC offer you a ton of extra options. Here are some ideas:
On the first compressor, use the tone controls to "push into" the second compressor in different ways. For example, if something is a bit too warm, like a chesty vocal, cut a little at 171Hz and see how that affects the overall functioning of both compressors. Remember, you can always restore stuff using the tone controls on the second compressor.
I tend to boost highs on the second compressor, rather than the first. It just seems to work better. +3dB @ 2.4kHz is a nice touch.
To get a more aggressive sound, use the preamp on the first compressor to add some saturation.
To get a more classic late-60s/70s soul music vocal sound, set the preamp of the second compressor so that when it gets hit hard there's a bit of grit on the vocal. There's also the OPERATING LEVEL control on both compressors to play with.
Man, if I had two hardware Pawn Shop Comps, I would be in tracking heaven. If you’ve not played around with the PSC yet, get a demo installer and use it.
So there you have it, a bunch of settings and a bit of backstory.
And now, as Fred used to growl at interns in the studio, "Go cut that f**king track."
*Do you all realize how spoiled you are when it comes to compressors, these days?
Additional Notes:
Most of the time, back in the day, compression was used to restrict dynamic range, not to give something character. The whole "character piece" thing... I don't recall that from my years in the studio that much. Gear either sounded good or it didn't, you either liked it or you didn't. I was usually looking for things to not sound compressed.
The Summit TLA-100 is a monster. It's a tube compressor but it doesn't sound or work like a typical opto or vari-mu unit. It's really versatile (it has switchable attack and release times) and can be used on literally anything. My Desert Island compressor... other than the Pawn Shop Comp.
The Aphex 551... why this compressor never achieved huge fame is beyond me. It is very clean — probably a little too uncolored for most people — and it has adjustable EVERYTHING: attack, release, ratio, knee, upward expansion of the high end, keying, etc. Probably too many controls for most people as well. But man, you could make this thing sound punchy or as utterly invisible as required.
Korneff Audio released our Talkback Limiter plug-in on March 25th, 2020. The road to the plug-in, though, began over a decade before I started to even ponder how to program the DSP for it.
It was late 2008, during one of those unforgettable tech visits at our famed studio, House of Loud. Ernie Fortunato, a tech wizard, was with us to work on some console repairs and maintenance. I was assisting him, absorbing every bit of knowledge I could. We were working on our 4056G+ console, and Ernie was meticulously going through the center section, pulling out circuit cards one by one. When he pulled out the 82E33, it caught my eye. I was struck by how simple the circuitry was, and it immediately piqued my interest.

The SSL 82E33 is a limiter circuit that was designed to amplify and level out the musicians’ talkback system mics that are commonly found in the studio area of a recording facility. It’s a simple circuit with a simple task: hard limit anything that goes above threshold and do it fast.
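For the code-minded, that "hard limit anything above threshold and do it fast" behavior sketches out roughly like this. To be clear, this is a toy model of the concept, not the 82E33 circuit or the Talkback Limiter DSP:

```python
import numpy as np

def hard_limit(x, sr, thresh=0.3, release_ms=300.0):
    """Toy peak limiter: instant attack, exponential release back to unity gain."""
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    gain = 1.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        needed = min(1.0, thresh / max(abs(s), 1e-9))  # gain that pins this sample at threshold
        if needed < gain:
            gain = needed                      # grab instantly
        else:
            gain = rel * gain + (1.0 - rel)    # let go slowly
        out[i] = s * gain
    return out

sr = 44100
talkback = np.random.default_rng(0).normal(0.0, 0.2, sr)  # noisy stand-in for a talkback mic
squashed = hard_limit(talkback, sr)
```

The instant grab and slow recovery is a big part of why this kind of limiting pumps so audibly.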
As a drummer and producer, I knew the legend of the SSL Listen Mic Compressor, famously used by Phil Collins and engineer Hugh Padgham. First used accidentally by Padgham while engineering Peter Gabriel sessions at The Townhouse, its most well-known use is on Phil Collins' In The Air Tonight.
This piece of gear isn't just a tool; it's a cornerstone of a sound that defined an era. Combined with a noise gate, it is the sound of drums in the 80s.
Despite its storied history, I had never used one firsthand. Retrofitting House of Loud to access the one in the console was impractical, and I had tried SSL's free plugin version, but it never quite captured the magic described in all those interviews. Finally seeing the little beast in the flesh, I decided the simplicity of the 82E33 made it the perfect candidate for a DIY project. Even better, I had almost all the parts I needed right in the shop, except for the transformer. Since I planned to make it into a rack-mounted unit used only at line level, I decided to substitute a balancing chip for the transformer.
One thing about me is that once I get something in my head, I don’t stop until it’s done. That night, I stayed up late, consumed by the project. I designed and etched a circuit board and then built up the circuit. The process was both exhilarating and nerve-wracking. When the moment of truth came... it didn’t work. Oof.
Frustration set in, but I was determined to figure out what went wrong. I spent the entire night troubleshooting (I still do this, although now it’s code and not capacitors), staring at my octopus-looking disaster of a circuit, but I just couldn’t see the problem.
Luckily, Ernie was scheduled to return in the morning. When he arrived, I showed him my creation, and he started poking and prodding. After a few minutes, he asked, “Where’s R40?”
I had completely missed a resistor that was essential for sending bias voltage to the sidechain. After a quick dab of solder to place the missing resistor, the unit sprang to life. The sound that came out was monstrous, completely overdriven, and over the top. I was over the moon.
Up until that point, my go-to for parallel drum compression was the ADR Compex F760, which was hard to beat. The Compex is a one-trick pony, but it's a great trick. If you're not familiar with the name, you've heard the sound on "When the Levee Breaks" and a bunch of other records. The F760 had been my trusty companion, giving my drum tracks that punch and presence I loved. But this new creation had a character all its own. Ever since then, it's been a staple in my drum sound. It adds snap to snares and kicks and toms. Sometimes I use it as my main parallel drum bus, and other times I run it alongside the Compex as a second parallel bus.
Once I had my own 82E33, I started to experiment with it, and found that it worked well on a lot more than drums. People don’t believe it when I tell them it’s my main vocal compressor, but it is. I ended up making myself a rack of these things so I could scatter it all over my mixes. This DIY journey not only expanded my technical skills but also expanded my creative palette, giving me a unique tool that ended up becoming an integral part of my sound.
Years later, I needed still more of them, so I figured out how to recreate the 82E33 using DSP, and that little DIY coding project became the Talkback Limiter, the second plug-in released by Korneff Audio.
Looking back on the genesis of the TBL, I realize it was more than just building a piece of gear. It was about the joy of discovery, the thrill of problem-solving, and the satisfaction of creating something that truly enhances my music. Whether I'm laying down a new drum track or tweaking a mix, this little piece of history, reborn through my hands, continues to inspire and push me to explore new sonic territories. We hope the software version of it inspires you on your creative journey.
We are in the age of digital audio, but your ears haven’t gotten the memo. Nor has the air, or those moving membranes that push the air that we call speakers. These things remain analog.
To that end, here are some very analog tips and tricks for mixing with those analog ears of yours, dealing with the analog physics of sound.
Get multiple speakers
This is obvious, but it can’t be stressed enough. You want to get a bunch of different speakers, especially different-sized speakers, because your mix will sound different on each of them. You want at least one pair that is accurate (whatever that means), and you want some that sound awful or at least more like the speakers people have knocking about the house, the car, etc.
I used to use the bigs (the big monitors soffit-mounted in the studio), the bridges (nearfields on the console bridge), a boombox (I had a Panasonic boombox with RCA inputs that I could feed from the console via an adaptor), and a pair of really good headphones (Grado open-backs). This gave me a good representation of how things were going to sound in the real world. Everything but the car! For that I had to drive around in the car.
While big soffit-mounted speakers might be hard to get, there are so many great-sounding nearfield monitors, as well as cheap, awful Bluetooth things out there, that you should be able to cobble together a bunch of different speakers to mix on.
Multiple speakers are essential to getting the most mileage out of the next trick:
Barely crack the volume
Turn the volume all the way down so there's no music coming from your speakers. Then pick one of your sets of speakers and slowly turn up the volume just a crack, until you can just hear something. What's the first thing you hear? The snare? The vocals? Keep turning up the volume bit by bit until you can hear everything in the mix - but don't turn it up loud: keep it overall as quiet as you can. Then turn the volume down all the way, switch to a different set of speakers, and slowly crack the volume up again. What comes in first this time? The snare? The vocals? The bass? Again, slowly increase the volume until you can just hear all the elements of the mix.
Do this little test with each set of your speakers and compare. The experience of what comes in first, second, third, etc., should be consistent, but it also might not be, and that might be a cause for concern. Obviously, bass is different on a bigger speaker, but if the bassline is important and it shows up early on the big speakers but later on something smaller, that’s a problem, especially if the bass is a hook element. How about the sit of the vocals? Are they in the same spot on each set, or is it different? What about the feel and the groove? Does it work on the bigs but not on the smalls? How are reverb and ambiance functioning on each set? Consistent or all over the place?
The ONSET of when you hear an element come in is a huge clue as to what needs work.
Resetting your ears
In the previous trick/hack/thing I emphasized that you want to keep the volume low. The why of that has to do with how your ears work.
Two things: 1) Loudness changes the frequency response of your ears and 2) Your ears get used to how things sound at a particular volume very quickly.
This means that if you’re listening to something at one volume, and then lower the volume, the response will sound really different to you. If you increase the volume a bit, your ears will almost instantly get used to that new volume and that becomes the new “normal” for your ears. Also, generally, things appear to sound better to us as volume increases (up to a point).
So, if you’re constantly changing the volume around as you mix, you’re shooting yourself in the foot. Start with the volume comfortably low and leave it there. Resist the urge to touch that knob—tape it off.
There’s a good chance that as you work, the volume you're mixing at will creep up, and you’ll also have to listen to your mix at louder volumes because it won’t only be heard quietly. However, once you turn it up, you really can’t go back to a lower volume unless you take a break from the mix and give your ears a bit of time to reset. I needed about half an hour early in the mix, but my reset time would get longer as the mixing process continued.
The point here is this: plan on mixing in chunks, starting at a low volume, checking things at a higher volume at the end of that particular chunk, and then taking a break for a bit to reset your ears.
To reset your ears, sit somewhere quiet. Don't watch TV. Go outside.
If you’re mixing on your own schedule in your own studio situation, that’s the best. I had to mix while watching the clock to stay within budget, and it sucked.
Working opposites
It's very tempting to mix the low things on bigger speakers or on headphones (good headphones have good bass response, usually), but the clarity of things in the low end is also about their overtones. Remember, a low E on a bass is 41Hz, and that's a struggle for most speakers to reproduce. What you're really hearing on most speakers of that low E are its octave overtones at 82Hz, 164Hz, etc., and other overtones such as the fifth (123Hz, 246Hz, etc.) and the major third (205Hz). There are also much higher overtones and sounds from the bass that give it articulation, and of course, all this changes on a note-by-note basis as the bassline changes.
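That overtone arithmetic is easy to check for yourself; harmonics are just integer multiples of the fundamental (the precise low E is 41.2Hz):

```python
fundamental = 41.2  # low E on a bass, in Hz

for n in range(1, 7):
    print(f"harmonic {n}: {fundamental * n:.1f} Hz")

# harmonics 2 and 4 are octaves (82.4, 164.8), 3 and 6 are fifths
# (123.6, 247.2), and 5 is the major third two octaves up (206.0)
```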
The point? Mix low-end stuff on smaller speakers. Now, if you're doing EDM or something that's going to be heard mainly in a club, the upper articulation of the low end won't be much of a concern, but if you're doing pop stuff that's going to be heard on typical home setups, the problems in your low end won't be in your low end; they'll actually be higher up in the mids.
Your mix’s overall upper mids and high-end can also benefit from this sort of opposite thinking. Little speakers might appear to sound bright with lots of high-end, but what they really have, especially if they’re cheap, is a lot of presence, which means they push out a lot of 2kHz to 8kHz. If you’re trying to get cymbals and hi-hats right (whatever that means) on smaller speakers, or on cheap speakers, that might result in a bizarrely dull mix.
Headphones in the picture
As I wrote earlier, headphones can have accurate bass (or perhaps too much bass if they’re things like Beats), but where headphones are all over the place is placement and panning. Things panned to center are generally a bit quieter on headphones, unless the headphones are compensating for that, so vocals might sound low, bass and snares and kicks might sound low, etc. Left and right placement extremes are also weird on headphones. Headphones definitely increase spatial drama—reverbs and ambiance are much more apparent on headphones. There’s not much you can do to adjust to this, but you do need to take it into account in your thinking.
Don’t destroy your ears
My dudes, I can’t emphasize this enough: protect your ears.
Those ears of yours are statistically the most accurate of your five senses.
They’re also wired into your brain differently from vision, touch, smell, etc. They hook into your emotions. Into your limbic system. Into your fight or flight mechanism.
Without hearing, human communication is seriously impaired. Visually impaired people have it much better socially than hearing impaired people.
You don’t want hearing issues. YOU DO NOT WANT TO DEAL WITH TINNITUS. Believe me, you don’t want to deal with tinnitus.
Sorry to end on a bummer, but hearing loss and tinnitus are a bummer of an ending. Take care of yourself.
Recently, I had the incredible opportunity to record a cover of "Karma Police" with the band Pierce the Veil at Signature Sound Studio in San Diego.
As an audio engineer, capturing the perfect drum sound is a pivotal part of creating a track that resonates, and on the original song the drums were one of the highlights, shaping the overall vibe. Here's a behind-the-scenes look at the recording techniques we used to achieve this dynamic and impactful drum sound.
Our session took place in Studio A, renowned for its fully loaded 32-channel API 1608 console and a spacious live room measuring 31′ x 27′ with a 17′ ceiling. The old '80s recording studio aesthetic of faux brick walls and an old parquet floor not only set the mood but also provided the perfect room acoustic properties for this track. Navigating a new room can be a challenge, but with the assistance of award-winning engineer and mixer, Christian Cummings, we were up and running in no time. His knowledge of the room was invaluable, guiding me on the best placement for the drum kit in the live room to exploit its natural reverb and warm characteristics.

The essence of "Karma Police" demanded a layered approach to capturing drum sounds, targeting three types of ambiance: close, mid, and far. I wanted to make sure each part of the drum kit could shine through in the mix, not only providing depth but also a sense of space. A basic assortment of microphones was used for close miking positions. Carefully placed gobos and an area rug helped keep the close mics dry. Instead of close miking the cymbals, I opted for an "overall" drum sound using a pair of Bock 251 overheads. They were placed slightly higher than usual to take advantage of Lonnie's consistent drumming, which practically pre-mixed the drum sound with his performance. For the mid ambiance, Coles 4038 ribbon mics were positioned about six feet in front of the kit in a spaced pair configuration. These microphones offer a smooth, warm sound and have the ability to capture high-frequency detail without harshness. This is what I built the entire drum sound around. All of the close mics needed to reinforce the room mics, especially the Coles. For the expansive room sound needed for the song's explosive ending, Beyer M88 mics were placed about 15 feet back in an XY configuration.
The API console did a wonderful job of making the drums punchy and full, but the overall vibe was missing a little bit of that "magic." I knew exactly what it needed: a little love from the El Juan Limiter. Giving the Coles a healthy dose of limiting, along with input shape set to Punchy, really brought them to life. A nice lift in the bottom end from the Tone Shaping finished it off nicely by adding a satisfying heft to the entire kit. Everyone was like "damn, these drums sound sick." The prototype for Puff Puff mixPass also made an appearance on guitars and bass, but that's for another story.


A lesson I learned early in my career is to commit your sounds to 'tape' during the recording stage. Why wait until the mixing phase? We printed the room sounds through the El Juan Limiter, ensuring that the drum sound we fell in love with was captured exactly as we wanted, forever.
Recording at Signature Sound with Pierce the Veil was not just about utilizing the studio's top-tier equipment; it was about creating an environment where technology meets creativity to capture a sound that truly stands out. This session was a testament to the power of experience, technique, and a little bit of studio magic.
Dan recently worked on Pierce the Veil’s latest release Karma Police, their cover of a modern classic. We have an article about that here.
We thought it would be interesting to take a quick look at the original Karma Police.
Radiohead has always managed to combine a penchant for noise and experimentation with a surprising pop sensibility. Thom Yorke and Co. make interesting and catchy weird records. When The Bends came out in 1995, it was being played in every control room that was setting up for a rock session. A great record, and to my mind a better offering than OK Computer, which followed in 1997.
OK Computer was amazing sonically. More experimental than The Bends, OK Computer was a prickly, challenging listen. The big single on it was Karma Police, and it's one of the more restrained recordings on OK Computer. It's "Beatle-esque," with harmony vocals, a bass-and-drums combo that sounds and feels like Paul and Ringo, and a piano part in the chorus that's a sweet bite of Sexy Sadie off The White Album.
The vocal performance... this too is a Beatle thing. John Lennon often used to record vocals very close to the mic and sing very quietly. The same thing is happening on Karma Police — the chorus is practically whispered, and it’s not until the vamp out at the end that Mr Yorke opens up and sings with a bit more power.
Quick idea to steal: The vocal on the vamp out has reverb on it, and the ‘verb itself has some additional effects on it. Love this idea - don’t effect the vocal, keep it clean and effect the effect.
I found two demo versions of Karma Police... and of course I ran them through the Puff Puff mixPass and the El Juan!
This first cut sounds like vocals, a guitar, and drums working in a rehearsal space. The song has a different structure and lyrics and it sounds like a very early workout of the tune.
This version is Thom Yorke singing with an acoustic guitar, and the song is basically all there — he’s even figured out the ending vamp.
Enough demos, let’s talk about the drums on the original recording.
Three Overheads = HUGE
Producer/engineer Nigel Godrich worked with Radiohead on Karma Police, and he tended to mic them using a spaced pair of overheads with a third mic right in the middle of the kit as well. Three Overheads.
Here's a picture of drummer Phil Selway during the OK Computer sessions. They recorded in a mansion in England (actress Jane Seymour's house), with big rooms, high ceilings, and lots of stone and glass. Very reflective spaces.
The kit is mic’d with a spaced pair of what look to be vintage AKG C-12s and a Neumann M-49 in the middle. There are also some close mics on the toms.

Throw some compression on it and there you are: insta huge. Of course, there will be phase issues galore, but oh man! Crush that center mic with a limiter and you’ll get a heck of a huge drum sound.
The overheads to the left and right provide a little bit of left-right movement, and the mic in the center pins the whole thing down. And that is the sound of Karma Police, 1997. A simple, clever setup.
What are the sheets or drapes around the kit doing? Not a whole hell of a lot. Probably just getting some of the high-end shizz off of things, but the mids to lows are going through them like a rhino through a petunia patch.
VERY IMPORTANT: notice how they took pieces off the kit. One crash cymbal. No ride cymbal. One rack tom. This is a SUPER TIP.
Remember, toms and all drums resonate, and cymbals are highly reflective — they’re big metal plates. Want to clean up a drum set for a good sound? Take all the extraneous stuff off of it. If it’s not getting hit during that session, it shouldn’t be on the drum set.
This mic setup is similar to late-1960s, early-70s drum setups, like the "Glyn Johns setup."
Glyn Johns = super influential engineer/producer.
His setup = A mic on the kick, another mic sort of low to the floor tom, and then a third mic somewhat higher and over the snare, but, and this is a BIG BUT, those last two mics have to be equidistant from the snare to keep the snare in phase.
This is the basic sound of Led Zeppelin, the Stones, the Who, etc.
People like to experiment with the Glyn Johns setup but it is hard to get a modern sound from it. First of all, Glyn was typically recording with great players in great spaces, and everything is easier when you have someone like John Bonham on the drum and you’re in the great hall of a mansion. Also, there wasn’t the fastidiousness that modern audio recording seems to wallow in. There was a time when being slightly flat was ok. Ahhh... the good old days.
Often, the Glyn Johns setup has to be augmented with more mics, because the hi-hat and snare are out of balance, the rack tom sounds thin, etc. Eventually, this method becomes basically multi-micing the drum set.
If you try this and you’re using two different mics for the overheads, remember to measure from where the diaphragm is and not from where the grill cover is, so you’ll cheat that measurement a bit. The times I tried (and then subsequently abandoned the Glyn Johns setup) I used a piece of string to get the distances right. You can also use a mic cable. I’ve seen people use tape measures and I think that’s ridiculous. Dude, it’s a drum set with gaffer tape all over it: we’re hitting it with sticks. We’re not trying to calculate the radar return of a stealth fighter.
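If you're curious why those few centimeters matter, the arithmetic is simple: a path-length mismatch between the two mics becomes a time delay, and the phase error that delay causes grows with frequency. A rough sketch:

```python
SPEED_OF_SOUND = 343.0  # meters per second in room-temperature air

def phase_shift_deg(mismatch_m, freq_hz):
    """Phase offset between two mics whose distances to the snare differ by mismatch_m."""
    delay_s = mismatch_m / SPEED_OF_SOUND
    return 360.0 * freq_hz * delay_s

# a 5 cm mismatch is only about 0.15 ms of delay, but watch the high end:
for f in (200, 1000, 4000):
    print(f"{f} Hz: {phase_shift_deg(0.05, f):.0f} degrees")
# roughly 10 degrees at 200 Hz (harmless) and 210 degrees at 4 kHz
# (comb filtering right where the snare crack lives)
```

Which is why a piece of string beats eyeballing it, and why the tape measure is still overkill.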
Back to the Radiohead track, this is a lovely, huge drum sound and the technique used to get it would translate to virtually any room—even a cruddy sounding space in a basement.
As always, we love hearing from you all, we love hearing your thoughts and ideas.
I sort of hate Christmas songs, and I sort of love them, too.
In a lot of ways, Slade’s Merry Xmas Everybody is the best of them, though. At least it frickin’ rocks.
Slade were huge hit-makers in the UK and most of the world in the early 1970s, but they didn’t do as well in the US. In the US we know the covers done by Quiet Riot better than we know the original recordings.
In August of 1973, they recorded a song in New York City at The Record Plant that became, in the long term, their biggest hit. And it is one of the songs that started the whole "Christmas Record" trend that we still suffer from today (I'm looking at you, Mariah Carey!).
Merry Xmas Everybody has been trotted out every December since its release in November of 1973. This season, the old girl is 50 years old and still going strong. But there's something else about it that caused me to decide to write about it for you all today.
First, a bit about Slade:
Slade were a goofy bunch. They formed in the mid-60s and went through a number of stylistic changes before they stumbled upon the sort of "Country Elves from Outer Space With Spelling Problems" glam rock identity that broke them through.
Lots of hits with misspelled titles: Look Wot You Dun, Coz I Luv You, Take Me Bak ‘Ome, Mama Weer All Crazee Now, Cum on Feel the Noize, Skweeze Me, Pleeze Me...
Gimmicky, and not all of the titles were dopey, but the formula worked and Slade in the early 70s was unstoppable. It helped that lead singer Noddy Holder and bassist/multi-instrumentalist Jim Lea were excellent writers, and that the whole band could deliver the musical goods live and in the studio.
Guitarist Dave Hill was a solid player, a natural showman, and continues to rock the single worst haircut in music history. Their drummer was a guy named Don Powell.

Accident and Aftermath
On July 4th, 1973, Powell and his fiancée, Angela Morris, were in a severe car accident. Both of them were flung from the vehicle when it hit a stone wall. Powell fractured his skull, broke both ankles and a bunch of ribs, and was in a coma for six days. He came out of it with traumatic brain injuries that plague him to this day. He can remember everything pre-car accident, but his short-term memory is shot full of holes, and he can forget something within minutes of it happening.
Miss Morris fared worse. Only twenty years old, she was killed.
The best therapy for Don Powell was to get working again, which meant re-learning how to play the drums (brain damage sucks) and heading back into the studio with the band.
In the Studio
Pre-accident, Slade in the studio worked fast, dropping songs to tape live in a few takes and then overdubbing a thing or two. It helped that their producer at the time was Chas Chandler, the guy who discovered Jimi Hendrix and brought him to England. Chandler's whole idea of making records was get it the hell over with fast and save money. Hendrix evolved into a studio-centric creative and split with Chandler because of this. Chandler found Slade and another payday.
Post-accident, Don Powell could play the old hits, but couldn’t remember new drum parts for more than a few minutes, let alone play an entire song. Post-accident, Slade had to record songs in bits based on what Don Powell could retain in his brain, and then edit the bits together and overdub onto that. This ain’t cut and paste with a DAW: this is sections of 2” tape scattered about the studio, written on with grease pencil, and spliced together with a razor blade and sticky tape. It is a TON of work, and incredibly frustrating.
The first song they did after Don's accident was recorded in August of 1973 at The Record Plant, in a sweltering New York summer: a fucking Christmas song. Section by section. As fast as their damaged drummer would let them.
Chords
The basic progression for the verses is I vi iii V, so in G that's G, Em, Bm, D. That's not a very common progression in pop, and the Bm to D sort of makes the verse sound unresolved and "open." It doesn't end, but our ear wants it to. It feels like the song should just keep going and going.
The chorus is very cool: G to Bm to an unexpected A# (or a Bb, if you like) and back to D. That would be I-iii-#II-V. There are a lot of ways to think about the A# to D, like as a... sharp 5 of 5 substitute. Whatever, it's cool. Write a song with it.
The chorus ends on a D; the bridge starts on a Dm, which immediately shifts the mood of things and goes well with the slow down in tempo. The bridge serves as a cool-down, but it doesn’t last. It resolves out IV to V (a C to D) and the band rocks out til the end, vamping over that unresolving chorus.
Lyrics
The lyrics... I love these lyrics. The opening verse is sexual innuendos and nods to drinking:
Are you hanging up your stocking on your wall?
It's the time when every Santa has a ball
Does he ride a red-nosed reindeer?
Does a ton-up on his sleigh?
Do the fairies keep him sober for a day?
Ton-up might be a reference to the car accident: it’s English biker gang slang for driving really really fast.
The second verse is down-to-earth and domestic.
Are you waiting for the family to arrive?
Are you sure you got the room to spare inside?
Does your granny always tell ya
That the old songs are the best?
Then she's up and rock 'n' rolling with the rest
But it could also be read that Granny is dead and up in heaven with Hendrix, Jim Morrison, Brian Jones, et al.
The third verse is perhaps childhood memories?
Are you hanging up your stocking on your wall?
Are you hoping that the snow will start to fall?
Do you ride on down the hillside
In a boggy you have made?
When you land upon your head
Then you been slayed
Clever boys - they name-check themselves!
The chorus is wonderful and unabashedly joyous:
So here it is, Merry Christmas
Everybody's having fun
Look to the future now
It's only just begun
That’s every bit as uplifting as I Can See Clearly Now the Rain Has Gone.
The Times
In 1973, England was dealing with an economy going down the loo. The US was still stuck in Vietnam. But the drummer was alive, even if in rehab and mourning, and Slade had a major hit.
Their last major hit. The band's fortunes changed as the decade wore on. The rise of punk strangled glam; Slade had some hits through the 80s but eventually broke up, never to return to the original incarnation of the band. There are various versions of Slade still around, fronted by various former members. Dave recently fired Don by email. That really makes me sad. 60 years of friendship gone?
Whatever, whatever. Slade is best at a party, like in the video, shown 50 years ago on Top of the Pops.
Happy holidays, everyone. Look to the future now, it’s only just begun.
This guy keeps coming up in things I write and in my thinking about production. He's probably a bigger influence on me than I give him credit for.
I wasn't really a Doors fan. In high school there was a brief phase where a Jim Morrison bio came out and everything was The Doors, The Doors, The Doors, but really, we were 10th graders just hoping to hear the long version of 'Light My Fire' on the radio. Nowadays I prefer to hear the short version on the radio. But 'The End' was cool, the whole 'Morrison Hotel/Hard Rock Cafe' album was good, and 'LA Woman' was a great song to drive to - still is.
The whole album is great, and a perfect listen for a grey Sunday.
One thing I always thought was great on Doors records was the drumming and the drum sounds. In terms of recordings in the mid-1960s, which band had better, and by that I mean more modern, drum sounds than The Doors? Maybe some of the Beatles stuff? Certainly not Cream - they totally lost Ginger Baker in those recordings. The Who? Nope. The Stones? Nope.
The Doors records always had a great, natural snare sound, beautifully recorded cymbals with tons of articulation, and HUGE tom sounds. I still think The Doors drum sounds stack up against just about anything, and given the time, they're remarkable.
Bruce Botnick Rules!
The Doors records were all engineered by a guy named Bruce Botnick, and it's a pity his name isn't tossed around in audio circles with the same reverence as Al Schmitt's or Bruce Swedien's. His discography is amazing, stretching way beyond The Doors, out into film mixing, and a bunch of hit records for Eddie Money. He did a fantastic record for the band Love when he was 22 years old — he was a wunderkind. Look him up. He's a monster.
The Doors basically cut their albums live at Sunset Sound, initially to four-track and then eight-track. Their last album, 'LA Woman', was recorded in their basement rehearsal space, rather than a studio, at Bruce Botnick's suggestion. He set up a control room in their business office and ran mic cables and a talkback system down the stairs. 'LA Woman' sounds great. Hard to believe anyone could get such a clear, powerful recording out of a basement, maybe 8 mics and two or three compressors.
Botnick's recording set-up for Densmore's drums was usually mono, using very few microphones. He would put a condenser roughly at Densmore's head level but over the kit, and another one under the snare, flipping the phase of that—he would have had to adjust these two mics a lot to minimize phase shifts. A single dynamic mic on the kick. This is similar to Glyn Johns' drum setup from around the same time. There are pictures of Densmore in the studio with an additional mic or two on the kit, but really, it's just three mics in roughly the configuration described.
Densmore took off the bottom heads and NEVER changed skins (heads). I mean NEVER.
Of course, most of the sound of the drums on a Doors recording is the way Densmore played. He was really a jazzer at heart, and you can hear this in his amazing cymbal work and in the economy of his fills. He's also very interesting and inventive as a player. Bear in mind, most of these recordings were banged out, everyone playing at once, a vocal cut live as well, the whole band listening to each other and basically arranging things on the spot. They were a much better bunch of players than they get credit for, and Jim Morrison a much more capable singer than what is suggested by his reputation.
LA Woman
So, 'LA Woman': The Doors are cutting basically a live blues album in a basement with two extra players, guitarist Marc Benno and bassist Jerry Scheff. They did 10 sessions across about seven days (unbelievable pace - these days just the damn drums take a month), cutting songs in five or six takes. Who records like this these days???
There are a lot of things to hear in Densmore's playing.
He follows the singer. Listen to his fills and you'll notice he's always working around the singer. Actually, scratch that: he's always following whatever instrument is leading the track at that moment. If it's vocals, he's doing something around the vocals; if it's a guitar run or a keyboard flourish, he works off of that. There's a wonderful sense of "handing off" in The Doors' musical arrangement.
Think of "The Gate"
I think of the different sections of a song as being separated by a fence with a gate. So, there's a verse butted up against a chorus, and separating the two is this narrow gate. How the band proceeds through that gate is a huge part of arranging. Sometimes all the instruments walk through the gate together (all playing the same thing) and sometimes one instrument goes through—a guitar lick—and then the rest follow. And with shitty bands, everyone just sort of slams through the gate in a big fucking catastrophe.
Bands that arrange well and listen to each other well, like The Doors, sort of line-up and go through the gate one after the other, no one stepping on someone else's feet, nothing clumsy, everything clean and interesting. And you hear this all over 'LA Woman', where one idea follows another, follows another, and you can literally hear the "handing off" of the attention, the position, as they move through the gate.
Wacky sort of explanation, but listen for it, and of course, try to apply it to your own work.
Finally, Densmore plays with a sense of where he is in the song and where the song is heading to. 'LA Woman', one of the greatest driving songs ever recorded, is a good example of this. Densmore's playing is slightly different at any point in the record. Not only can you listen to just the drums and know, "Ok, this is a verse," you can listen and hear that it's the second verse. There's something slightly different about the playing. It's hard to describe but easy to hear, I think. Densmore also, somehow, plays in such a way that you know the song is in its final stages, that it's ending. Somehow the rhythm is triumphant, or slightly looser. The ending of 'LA Woman' has always sounded triumphant to me. There's a musical narrative to that song. It starts tight and almost "careful," falls apart into drunkenness in the breakdown, the bridge, then somehow finds its way out of that mess and into the sun and new hope, as the band goes speeding off into the sunset down Sunset Boulevard, and out onto the highway. It simply feels great.
Ok. Enough of me. Have a listen to 'LA Woman' while driving. Don't blame me for the speeding ticket.
I think things will get a bit more concrete in the next few weeks. I'll give you some ideas that are more ready to use and are less artsie fartsie. As always, I appreciate all your comments. They make me think, and thinking is good.
I’ve been watching the French-made TV show Lupin, and the song I Can See Clearly Now by Johnny Nash is used in the first bunch of episodes.
I remember when this song came out. I was nine. I had a crummy AM radio that picked up three stations, one of which was WABC in New York City. And they played this song a lot.
I didn’t know music from mudpies at nine, but I Can See Clearly Now was clearly a great song. It had a fabulous hook, a really interesting arrangement, and there was something about it that felt so good.
Johnny Nash was an American singer from Texas. He had some minor hits in the late 1950s, and in the early 60s he had his own record label. But by 1970, his career was pretty much over. He moved down to Jamaica and stumbled into the Reggae scene there. He wound up mentoring a young Bob Marley, and the two co-wrote songs together. Nash loved Reggae, and its mood, rhythms and instrumentation quite literally changed his life.
I Can See Clearly Now was written solely by Johnny Nash and he himself produced the recording at AIR studios in London in 1971. He used a group of studio musicians called The Fabulous Five and probably some other players, but a lot of the details are lost to history.
I Can See Clearly Now is often credited as the first Reggae hit, the song that introduced Reggae to the Western World, blah blah blah. I wouldn't say it's Reggae. The rhythm is actually straightforward and doesn't have the offbeat feel that defines Reggae until the choruses, but it certainly has a strong Reggae influence, and it was a hugely influential record. It was a GIANT hit. It was inescapable on the radio, used in commercials, covered by hundreds of other artists, and more than 50 years later it still gets placed in key moments in movies and TV.
It’s a perfect pop tune. And it’s a recording full of surprises. Have a listen:
1971 at AIR... it’s probably a 16 track recording tracked and mixed on a custom Neve console. Typical of the time, there’s no effort to fill up all the tracks. Engineers back then were only five years from doing everything live to 4 track, and that “resourceful” mentality played a big part in recording technique at the time. Why burn a bunch of tracks when we can stick everybody on two, knock through the mix and hit the pub?
Drums and percussion are mono down the center, probably all recorded at once along with bass, piano and what sounds like an accordion. I hear Johnny’s SUBLIME lead vocal, just a touch off to the left, with what sounds like a reverb chamber on it. The loping bass line is just off to the right. On choruses, two harmony vocals come in panned hard left and right, and they’re dead dry. There’s something about the way those harmony tracks pop in and out that makes me think they’re gated—maybe an Allison Research Kepex, which was about the first noise gate on the market. Sounds like Johnny Nash sang all the vocal parts.
Chord-wise, the verse is a rote I-IV-V progression with a bVII thrown in on the chorus. It hints at a key change and this subtly sets up the bridge.
Now, about this bridge... this has got to be one of the greatest bridges in recorded music history. The tonal center shifts down a whole step. Majors are substituted for minors, there’s all sorts of half-step motion, and it’s wonderfully cinematic, like a film score that is minor for scenes of a storm and then breaks into major as the clouds part and the sun comes out. Which is exactly what the lyrics are like at that moment:
Look all around, there's nothin' but blue skies
Look straight ahead, nothin' but blue skies
The vocals in the bridge are phenomenal, like a choir coming out of heaven but, really, I think all they did was crank up the reverb sends. Whatever - great trick. Sounds great.
There are a bunch of overdubs on the bridge. It sounds like horns... No, sounds like a guitar with a fuzz box... No... sounds like early use of synths. AIR had a MOOG at the time and there is some info out there that a synth was overdubbed. I think it's a synth that was overdubbed a bunch. There are parts that sound like saxes, bells, strange fuzzy pads, little squirps and burps. There’s tons to listen to in there. Very impressive for 1971.
Pay particular attention to how the tempo sags on the bridge, which helps with its triumphant feel, and then how the tempo coming out of the bridge is slightly faster than it's been throughout the song. There’s also... a sense of cadence to the drums and percussion, a feeling that the players know the song is coming to an end, and they somehow get that across to the listener. It’s a very hard to describe thing. John Densmore of The Doors is an absolute master of this: put on a Doors recording and you can tell where you are in the song just by the feel of the drums. It’s uncanny.
The song fades on a vamped chorus, with some further synth noodling. So frickin’ great.
It’s a perfect arrangement. So perfect that when singer Jimmy Cliff covered it, and got a major hit, he basically did exactly what Johnny Nash did.
Nash was an incredible singer, with a beautiful, clear voice that was effortlessly expressive. I would DIE to sing like that guy. Sounds like they plopped him down in front of a U67 and he knocked out all the vocals in 20 minutes.
Lyrically, it’s very simple.
I can see clearly now the rain has gone.
I can see all obstacles in my way.
Gone are the dark clouds that had me blind
It's gonna be a bright (Bright), bright (Bright)
Sun-shiny day
There are idiots on the internet claiming that Nash wrote the song after having cataract surgery! I say bullshit and I say who cares? The song is clearly about what it is about, and it needs no further interpretation.
You don’t figure out a song as great as this with your head; you figure it out with your heart.
All in all, the perfect song for a wet day in Montreal, where the sun might not shine much for months, and some good coffee and music is what gets you through til there’s nothing but blue skies.
Happy Sunday.
Hi! Another audio thing to read and think about on a lazy Sunday.
Thanks to everyone that wrote me about last week's blog post. Much appreciated. Please feel free to shoot me things either on FB or IG, or even right to the website. Ask questions, share your own experiences, tell me I'm stupid and wrong, argue, whatever. It's all fine and welcome.
We actually, really, truly have a new plugin soon, by the way!
These Sunday things will deal less with the technical aspects of production and more with the creative, artsie fart side of making records and music. That was always my strong point in the studio, anyway.
This morning I was walking around Montreal, and the song "Season of the Witch" by the singer Donovan popped into my head. Maybe it was the tempo I was walking at - that is my theory on why songs pop into your head: something about the rhythm in that moment is a trigger.
Season of the Witch was on Donovan's Sunshine Superman album, which was his best-charting effort. One of the first truly psychedelic albums, Sunshine Superman was very influenced by the times and also highly influential on the music that came after it, including Sgt. Pepper's by the Beatles and the first Jimi Hendrix record.
There is a lot going on with this deceptively simple song.
The Recording
Recorded in the spring of 1966 at Columbia Studios in Hollywood, Season of the Witch is probably a four-track recording, but it could be an eight. Remember, though, at that time it was very common to cut things mostly live and to only use as many tracks as needed, so the recording might effectively be a five or a six-track. It’s got that fun, hard panning of tracks, with nothing recorded in stereo (see last week here). You can hear the parts very clearly, especially on headphones.
Donovan himself starts the song off, playing the very simple progression on a very chunky, chinky-sounding Telecaster. There's no riff, per se, just the chords played in a very rhythmic way - they sound more like drums than a guitar. And when the drums come in, the hi-hat plays off the guitar wonderfully and there's a great ticky-tappy sort of groove. Listen for it. The song was cut mostly live, with an overdubbed lead vocal and perhaps an additional guitar.
Huge Bass, for 1966
The bass is HUGE, especially for 1966. Huge bass was just starting to become a thing, started by The Beatles in 1964 and '65 with songs like I Feel Fine and Ticket to Ride, and of course, then Rain and Paperback Writer. Donovan was friends with the Beatles and probably heard test pressings of Rain before he went to California to record Sunshine Superman.
The engineers at Columbia at first wouldn’t cut the bass the way producer Mickie Most wanted it — hot to tape and pushing the VU meters over 0dB nominal and “into the red.” They were afraid of breaking equipment. Most had to threaten their jobs (he had a lot of industry clout at the time), and they finally gave in. The resulting sound is compressed and round and fat — it’s a great bass sound and a simple, memorable bass line by a session player named Bobby Ray.
Quick thought on studio procedures back then. Studios were commercial enterprises and run really tightly. Often much of the gear in the studio was built by the people who operated it (hence the term “engineer”) and if something blew up, it could put the studio down for days. There also wasn’t a lot of spare equipment in the studio at that time. If the compressor wasn’t working, that could be one of two or three in the entire complex. Blowing up stuff was a big problem back then.
Let the tempo breathe
Pay attention to the ebb and flow of the tempo. Too many records are cut these days so fucking tight, quantized to death against a click. It's boring and the song doesn't "breathe" along with the structure and the lyrics. On this recording, the tempo tracks the journey of the song. Drummer "Fast Eddie" Hoh, who was a major studio player at the time, lays back on the verses, starts picking up the tempo on the pre-chorus and pushes through the chorus itself. But note at the end of the chorus: he does a fill and slightly increases the spacing between each snare hit to bring the song back in tempo such that the verse is again laid back. This is something that's hard to program. Listen for it and you'll see how well it works. Stewart Copeland does similar things on Police records. In fact, any good band with a good drummer would do such a thing. Eddie Hoh was a monster that no one remembers.
At about 3:15, as the song rolls into the chorus yet again, someone dubbed in a really loose, totally sloppy loud guitar part. It lasts for maybe four measures and then vanishes and never comes back! Did they use an entire track on it??
How to sing when you’re not a great singer
Donovan doesn't have a particularly powerful or wide-ranging voice, but he uses it well and gets the maximum mileage and character out of it. This is a great vocal. He switches between a clipped, spoken delivery in the verses and into his barely stable upper range on choruses. His voice gets thin and wheezy up there with a very plaintive quality. The desperation and angst in it, especially in the choruses, is wonderful. Notice too how he ends every single vocal line slightly differently. Sometimes he cuts off the word "witch" to pop out the CH, and it fits in with the clicks of the guitars. At other times he elides it and makes it longer. The important thing here is he doesn't strive to make everything the same each time. It's like how Jeff Beck plays: he never does the same thing twice. It's so much more interesting than picking up a part and moving it all over the place using cut and paste. Donovan might have cut the vocal in a take or two, and it might have been while he laid down his guitar part. A great vocal doesn't have to be labored and picked over.
On lyrics
The opening lyric...
When I look out my window
Many sights to see
...was somehow perfect for walking around the neighborhood, passing strangers and people in restaurant windows.
The next lyric...
And when I look in my window
So many different people to be
...interesting. Donovan is setting up an external world/internal world sort of thing. In fact, through the entire song, he’s really singing about himself, but there isn’t a lot of I I I Me Me Me I feel I feel I feel I feel crap to the song. It’s not self-absorbed.
A lot of the imagery is just plain old strange:
You’ve got to pick up every stitch
Two rabbits running in the ditch.
Wheat mix out to make things rich.
You've got to pick up every stitch
The rabbits running in the ditch
Beatniks are out to make it rich
Oh no must be the season of the witch
It does strike me as being just a bunch of stuff that rhymes, which I usually hate, but against the music it works. It's really about the repeated "Chuh Chuh Chuh" sound of the CH at the end of each phrase, and how that fits in with the guitar. Wheat mix... I think this is a reference to Weetabix?
Often, good lyrics don’t have to make a lot of sense. What they have to do is serve as a container into which a person can flow their own ideas. Think of the lyrics as a cup and the emptiness of the cup is what is valuable, really. What good is a cup you can’t drink from? The value is in what you can put into the cup. There’s a lot of space in these lyrics. For me, the song was about walking through the neighborhood and strangers. The reality of what Donovan was up to is entirely different.
Season of the Witch isn't Halloween season and pumpkin spice: it's about Donovan looking out of his window in 1966 and seeing drug dealers moving into his neighborhood. Hard drugs were starting to infiltrate the rock and folk scenes in England and in the US. Things were moving beyond pot and into heroin, and it was ominous to him. It did not bode well. The police started cracking down on any sort of drug offense. Six months after Season of the Witch was released, Donovan himself was busted for marijuana possession and it was in all the papers.
In 1966, getting busted for pot would be the press equivalent these days to being caught naked with a member of Congress. It would be the kind of thing that could cost a musician their career. There were a bunch of drug busts in the mid-60s – Donovan, Keith Richards, John Lennon. Now there are dispensaries all over the place and it’s basically legalized. Times have changed.
In 1966, he’d have been censored by the record company for lyrics like “drug dealers are in my neighborhood” and the album wouldn’t have ever gotten released. So he encoded his ideas, and the fun for us in decoding the lyrics is that we find our own things. That’s how art is supposed to work. Of course today, one can write a song called Wet Ass Pussy. Times have changed.
Remakes and spin-offs
Season of the Witch wasn’t a hit, but it has been remade a number of times, most notably on the album Super Session by organist Al Kooper and guitarist Stephen Stills. Some excellent wah-wah guitar on this particular remake - tons of things to steal.
And of course, there is a Nic Cage movie with the same title as the song, and it sorta sucks. Season of the Witch is a fantastic phrase.
Bruce: Maury - I have a great title: Season of the Witch. Whadda ya think?
Maury: I love it! It has potential! I’m seeing... a hot girl turning into an ugly witch!
Bruce: I love it. Subliminal. It’s about marriage.
Maury: Yes! But with swords! Quick! Call Nic Cage’s manager!
So, have a listen on headphones. What ideas can you steal? What do the lyrics pull up for you?
And more: do you want these things delivered earlier in the morning? Something to read over coffee and a croissant, in keeping with a lazy Sunday?
Have a great week. Hopefully you get your butt into the studio a bit.
I was listening to some records engineered by the late great Al Schmitt. Damn, his stuff sounded great at any stage in his career, regardless of the size of the console or the number of tracks.
While listening, I started pondering how he used stereo and panning and that got me thinking about “stereo” as a concept in the studio.
Very often, stereo really isn’t stereo. You might be listening with two speakers, or a pair of headphones, and things might be panned around and all wide sounding, but the reality is, generally, very few things on a recording are actually true stereo. Instead, they are Point Source Mono.
If you stick a microphone on a guitar cabinet and record it to one track, and then play that track back and pan it, oh, 60% to the left, that isn’t stereo. That’s a panned mono track - Point Source Mono, panned off-center to the left.
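If you like seeing things spelled out, here’s a minimal sketch of what “panning a mono track” actually does. I’m using a constant-power pan law, which is a common convention - your DAW’s exact law may differ, and the function names here are just mine:

```python
import numpy as np

# A mono source is ONE signal. "Panning" just feeds scaled copies of that
# same signal to the left and right outputs. No new spatial information
# is created - that's why it's still Point Source Mono.
# Constant-power (sin/cos) pan law assumed; DAWs vary.

def pan_mono(mono, pan):
    """pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right."""
    angle = (pan + 1.0) * np.pi / 4.0      # map [-1, 1] onto [0, pi/2]
    return mono * np.cos(angle), mono * np.sin(angle)

sr = 44100
t = np.arange(sr) / sr
guitar = np.sin(2 * np.pi * 440 * t)       # stand-in for the mic'd cabinet
left, right = pan_mono(guitar, -0.6)       # panned 60% to the left

# Both channels are the identical waveform at different levels:
print(np.allclose(left / np.max(left), right / np.max(right)))  # True
```

Both outputs carry the same wave, just louder on one side. That’s all “panned mono” is.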
If you take two microphones and stick them way up close to two speakers of a guitar cabinet, record each mic to its own track, and then play those two tracks back panned opposite each other, one to the left, one to the right, that is Point Source Mono with two mono point sources. It might sound wider than using one mic and it might approximate stereo, but it is still panned mono—Point Source Mono.
You can even do the AC/DC guitar recording thang, which is to put one mic down the throat of a speaker on the guitar cabinet and then another a few feet away to catch more of the sound of the cabinet in the room, and then record each to its own track and pan them wide on playback and... STILL Point Source Mono! Sounds great, but it’s not true stereo.
So, when is it actually stereo?
It’s actually stereo when you record something with two microphones set up as a stereo pair, using either a spaced pair arrangement, a coincident pair arrangement (XY, MS), or a near-coincident pair arrangement (ORTF). Set up the mics in a proper stereo configuration, record each to its own track, play them back panned wide, and now you have actual stereo. You can also use a dummy head (binaural) if you have one around.
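Of those arrangements, MS is the easiest one to show with a little code, because the decode is pure arithmetic. A minimal sketch, assuming you’ve recorded the mid mic and the figure-8 side mic to their own tracks (the function and variable names are mine):

```python
import numpy as np

# Mid-Side (MS) decode: you don't pan the raw M and S tracks.
# You matrix them into left and right:
#
#     L = M + S
#     R = M - S
#
# The "width" control is just the level of S going into the matrix.

def ms_decode(mid, side, width=1.0):
    left = mid + width * side
    right = mid - width * side
    return left, right

mid = np.array([0.50, 0.70, 0.20])     # toy sample values
side = np.array([0.10, -0.20, 0.05])

L, R = ms_decode(mid, side)            # width = 1.0: full stereo image
print(L, R)

L, R = ms_decode(mid, side, width=0)   # width = 0: dual mono (M on both sides)
print(L, R)
```

Notice that with the side mic turned off you’re right back to Point Source Mono - the stereo lives entirely in that S signal.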
What about stereo keyboard samples? The keyboard has two outputs; you run it through two channels. Stereo or not?
Depends. If the samples were made using a stereo mic’ing setup, then it might be. If the samples were recorded as mono tracks and then electronically panned to give the listener the feel of someone playing a piano from low to high, that is again Point Source Mono.... with 88 little point sources.
Does any of this matter?????
Maybe, maybe not. But it’s always nice to know what you’re doing.
Probably 90% of the time when you’re making records, you’re working with Point Source Mono sounds. The overall recording might be a stereo experience, but most of the parts of the recording are panned point source mono sources. I admit this is a bit like knowing the difference between frying and sautéing when you’re cooking, but great chefs know the difference; if you want to be a great chef, so should you.
This is not to say you should walk around the studio saying, “Let us record this in glorious Point Source Mono and then pan it wherever we desire in the final mix.” Let’s not do that. But, let’s know the difference.
And let’s know why point source mono is probably better for most of what you’re doing anyway.
Getting something recorded in true stereo to sound good can be hard, and it might not be that useful.
If you stick a crossed pair of mics in front of a singer, a few inches from their mouth and record that, and then play the tracks back panned wide, the first thing you’re going to notice is that the image of the singer is unstable. If the vocalist moves even a tiny bit while close to a stereo pair of microphones, the image is going to jump from speaker to speaker. That will either be cool if it happens every now and again, or distracting as hell if it happens a lot.
The same thing can happen on any instrument or source you record in stereo if you get the microphones close. Of course, you can tighten up the panning a bit to minimize the jumping... but then you should have just recorded things in mono, right? I suppose a stereo recording that has been tightened might have more perceived... size? Width? Maybe the recording will make the source sound physically bigger? Maybe. You can try it.
Rather than putting the mics up close, what if you pull them back and record?
Unless you’re dealing with something actually wide, like a string ensemble or a drum kit, most instruments are physically narrow and that is how we hear them: basically as a point source. How wide is an acoustic guitar? Two feet? The “stereo” experience of listening to an acoustic guitar is really a mono experience of the sound coming directly out of the guitar, the position of the guitar relative to the right/left of your ears, and the sound bouncing around the room.
A lot of the sound of stereo is the sound of the room, more so as you get further from the sound source. If you have a shitty sounding room and you stick your stereo pair far enough away to get a stable stereo image, then you’re going to be recording shitty room in stereo. Perhaps go close, go mono and add reverb later in the mix. Getting rid of shitty room on a recording is really hard.
Of course, sometimes shitty room sounds are amazing, so remember... often what is cool is what you like.
Most of the time, when you’re using two or more mics to record an instrument, you’re not doing a stereo recording. You’re trying to capture more of the totality of the instrument, or get a certain effect from it. Recording an acoustic guitar with two mics, one near the bridge, the other near the neck where it meets the body, isn’t stereo. It’s two point source mono recordings of the same instrument from two separate places. Think of it as a recording of the neck and a recording of the bridge. Pan it wide if you want a huge wide guitar. That might be cool. Might be dumb or distracting. Try it! See what you think.
Ditto for sticking two mics into a piano right over the strings near the hammers, or one mic near the hammers and the other further away over some resonant area of the sound board. This isn’t stereo. And if you pan it wide across the speakers it certainly isn’t a true-to-life stereo listening experience unless you normally stick your head in a piano while people play it. Which is dumb and will destroy your ears. But.... it might be awesome in a recording, to have a huge piano eating the listener's head.
A piano recorded ten feet away with a stereo pair, if you’re in a good room, might sound amazing. It also might get swallowed up by everything else in the mix, and you might have a lot of trouble getting it to sit correctly. Again, most of the stereo component is going to be the sound of the room—the piano itself is sort of “wide mono” when you get ten feet out. In mixes, room sounds tend to get masked quite a bit.
Many people put a spaced pair of microphones over a drum kit—this is a very common recording technique. I would argue that this isn’t really true stereo. You’re recording the left half and the right half of the kit—again, two point source mono recordings from two different locations. If you put a coincident pair above the drum kit, then you’re much closer to getting a true stereo recording of the drumset.
I occasionally recorded drummers using an AKG C-24—which is a stereo tube mic—right over their heads as they played. It sounded A LOT like what they were hearing. However, on the speakers, it wasn’t really all that dramatic. When the drummer went around the toms, you could sense the locations of the toms, but they weren’t pinpointed—nowhere near as much as if you panned individual tom mics. It worked well for jazz sessions, less so for metal. Modern drum recording is a mic on everything, down to individual cymbals, and the top and bottom of snares, toms, three mics on the kick, etc. And in the mix that all gets panned around and reassembled into an aural “picture” of the drums. It’s a stereo experience made up of mono sources.
Al Schmitt was, like most engineers who came up through the 60s and analog tape, a minimalist. One thing that I always get from anything recorded by Al Schmitt: you can’t beat the right mic on the right sound source played by the right player.
Speaking of, have a listen to This Masquerade by George Benson. This recording blew my mind when I first heard it as a little kid. It still does. A lot of point source mono to hear. God, this is sexy stuff.
Back in the day when I was just discovering the magic of creating music, one thing was clear - the recording studio experience was something I couldn’t live without. The shimmer of a guitar that's just been re-strung, perfectly placed drum fills, or that exhilarating rush when you push the faders on the console: It’s what I live for. But here's a truth I've come to realize along the way: it's never just about the tools or the room. It's about that one person, watching, teaching, guiding: The mentor. These are the real rockstars, the unsung heroes in the shadows. Finding the right mentor can be pivotal to your professional growth. Here are five points to think about when seeking your guiding light.
1 - Understanding Your Needs:
As you begin your quest to find the right mentor, it’s easy to get tunnel vision, zeroing in on learning a specific technique or piece of gear. But mentorship’s real value extends well beyond pushing buttons and faders. Look at the full range of skills and wisdom a mentor can show you. Beyond their technical expertise, the deeper lessons about the music industry’s landscape, the nuances of working with others, and even personal evolution can be just as impactful, if not MORE impactful. Put your whole heart into the process, stay open to new ideas and experiences, and allow mentorship to shed light on both the intricacies of modern gear and the rhythm of the industry at large.
2 - Industry Experience & Relevance:
One often overlooked element, when seeking a mentor, is the advantage of finding someone whose path touches many corners of the industry. Why is this crucial? Because it grants you a multi-layered perspective of the entire music business. I was fortunate to learn from someone who was a successful A&R transitioning into a producer role. As much as I wanted to learn about sound and gear, it turned into a more important lesson on navigating the balance between artistic creativity and the pressures of the industry. Look beyond the technical skills and find someone whose journey has traveled a broader landscape.
3 - Accessibility & Commitment:
Let's face it, the audio recording world is fast-paced and most industry vets who’d make great mentors are swamped. Most of these people don't have the time to fully commit to every detail of your needs. Real mentorship isn’t just about shadowing someone and hoping they spill their secrets during a chat in the control room. It's about being a sponge, soaking up wisdom from every possible scenario. I remember the times I ran pointless errands or was the unofficial coffee getter. Trust me, I learned a lot doing that seemingly trivial stuff. Being the guy who remembers how someone likes their coffee might just pave your way from errand runner to someone they trust with more significant audio responsibilities.
4 - Hands-On Opportunities:
One of the most invaluable benefits you can seek out is the chance to get hands-on. One of the first hands-on tasks I had at my internship was hitting record on the 2-track analog mixdown deck and then adding leader tape between the mixes. It might sound basic, but those tasks were pivotal. The engineer wasn't testing my technical knowhow but rather my reaction to responsibility and my general trustworthiness. You shouldn't expect to immediately plop down at the SSL in the control room. Those small tasks are the foundation of it all. Whether you're doing simple digital edits, taking care of basic hospitality requests, or learning session flow, these tasks might seem trivial but they're key to the big picture. Over time, your responsibilities will grow.
5 - Networking & Relationships:
“It’s all about who you know” isn’t just a saying. Networking might be just as essential as mastering the console. Building and investing in these relationships can be game-changers. I had the privilege of interning at the renowned Water Music facility in Hoboken, NJ. Every new producer and project that walked through the door was like a fresh mentorship opportunity, giving me the incredible chance to learn alongside industry giants.
But, decades later, the studio landscape has shifted. Those huge facilities which were a networking mecca have become a rarity. Now, more than ever, you need to actively reach out, make connections, attend industry gatherings, and engage in online communities. Remember, every relationship could be the gateway to your next learning experience or even a big break in your career.
Mastering audio recording isn't just about knobs and buttons; it's a journey filled with highs, lows, twists, and turns. Having the right mentor makes all the difference. Beyond teaching the technical side of things, a mentor can give you life lessons, insights into the industry, and introduce you to invaluable contacts. When looking for mentorship, be open and eager, but also keep your expectations realistic.
It's not just about the equipment - it's the experiences, stories, and connections that will truly shape your career.
I’ve had great mentors throughout my life. They’ve all been different and they’ve always fitted perfectly with the lessons I needed to learn or the help that I required. I’ve been truly lucky.
Getting My Ass Kicked
I was a very cocky kid with a voracious ability to read and learn from books, a tremendous memory, great hearing, and a big wise-ass talkative mouth, and I was not quite smart enough to know when I was being stupid.
Joel Fink: He was my freshman acting teacher at Purdue University. Toward the end of my first semester, everyone in the class met with Joel in his office to receive our grade and get some guidance on what to do next. I was expecting an A. Joel gave me a B. I remember the conversation really well:
Luke: A B??!! Why a B?
Joel: Because you’re an asshole. You’re always being so funny and clever and making jokes and talking all the time in class. You’re a disruptive pain in the ass. Think about growing up.
Welcome to college in 1982!
Joel totally got his point across. I got an A the next semester and all of them after that. I started to better control my mouth. Joel and I are still friends.
Rick Thomas: Rick is still the head of the Theatre Sound Design program at Purdue. Rick basically invented teaching sound design for theatre, and I was, I suppose, one of his earliest students. He found me my sophomore year and let me loose in recording studios, gave me responsibility, put me in charge of people to force me to develop leadership skills, and a plethora of other gifts that really shaped me. Rick too, kicked my ass. Worse than Joel.
In a nutshell, he caught me and another student lying. We were writing and recording music for a play, and were exhausted and couldn’t complete something that was due the next day - we would be a day late. For some dumb “We’re 21 and our brains aren’t fully formed” reason we told the director a tape deck had broken. I can't stress enough that THERE WAS NO REASON FOR THIS LIE. She wouldn’t have been mad at all.
Of course, Rick ran into her in the hall one morning and small talk turned into “Is the tape deck fixed yet?”
Uh oh...
Later that afternoon I bounced into Rick’s office to say hi to him, because we really were good friends (and still are). And he expressed deep, deep disappointment in me. I can’t remember much of this conversation because the room started spinning, but there were a few key phrases like, “I thought you were honest and better than that.” The killing blow was when he ended his monologue with, “Now get out of my sight, you make me sick.”
I walked home crying. This remains the worst day of my life. And of course, the show must go on, so I had to apologize to the director (more tears) and see Rick in the theatre and in the recording studio complex and... my dudes, it was fucking torture. But the lesson was really clear: Your integrity is EVERYTHING.
And then, Rick taught me yet another lesson: He let the incident pass. It never came up again. He never reminded me of it. Our friendship remained intact, he remained my mentor.
Saying, “Get out of my sight, you make me sick,” is something you should never say. That said, I survived it and grew from it. And I’m glad Rick said it.
Mentor as Cheerleader
I met another very important mentor in my early 20’s. He wasn’t much older than me. We had been in a band together for a brief time when I was fifteen. He was a fantastic songwriter and singer, and was in college! Impressive stuff for a kid.
We reconnected years later. Richard was now a music business attorney. More than that, he became perhaps my biggest supporter. He got me gigs, he introduced me to people. He gave me a 1957 Fender Bandmaster 3x10 combo that his dad found at a garage sale. Most importantly, he was endlessly positive about my abilities and potential. We traveled all over the place, from mixing concerts in Moscow to clubs all over NYC. We missed the first New York appearance of the Smashing Pumpkins because I was bored waiting for them to go on stage and... well, we went to get Chinese food. All my fault. My career went up, and then it went way down, when tinnitus hit me. Sometimes I feel like I let Richard down.
Richard is STILL in my corner, ever endlessly positive, ever endlessly supportive, endlessly my older brother from another mother. And I support him back, because mentorship is a friendship, and as you get older, mentoring flows both ways. Richard still writes and sings, in addition to being a fabulous photographer, and I support him with endless positivity and ideas on how to record things better, gear and plugin recommendations (he does have a fondness for all things Korneff).
It helps also to have a lawyer as a cheerleader. It’s important to seek out people smarter than yourself, with knowledge that complements yours. You don’t need redundancy.
Being Your Own Mentor
I was really unlucky after college in that I didn’t start working in a big studio. I was in bands and I did a lot of home recording, and gradually I started producing records, and since I wasn’t usually at very good studios at this point, I sort of moved the engineer out of his seat and played with the knobs myself.
This is a lovely turn of events for a 20 something with ambition and a big ego, but there was so, so, so much I had to either teach myself or figure out on my own because no one was really there to show me anything. I spent a lot of time being my own mentor.
I picked up tricks and tips in studios from other engineers, and I stole ideas from everyone, and I experimented and read books and magazines and listened to records for endless hours.
Hopefully, you’ll find this helpful: I bought a big notebook and started writing down everything I learned from books, all my settings from recording sessions, any ideas I saw, my notes on songs I listened to. I wrote in that book (eventually there were 5 of them, I think) and referred back to ideas often. I stamped that info deeply into my head. This became my standard approach to learning anything, and I still do it whenever I want to get good at something new: I buy a notebook and start reading and writing.
Perhaps the smartest thing I did early on in my production career: A friend loaned me a copy of The Complete Beatles Recording Sessions by Mark Lewisohn. This is a detailed account of literally every recording session The Beatles had from 1962 to 1969: every song, every track recorded, every idea, every session player involved, track sheets, lyric sheets, pictures... It’s an amazing book.
I bought every Beatles CD and laid on the floor in front of my speakers, one to the left, one to the right, and I read that book from cover to cover and listened. At this time, Beatles CDs were in stereo, the individual tracks were generally spread hard left and right. I listened to a song until I could hear EVERYTHING mentioned in the book, until I could hear George vs John in the harmonies. I sucked in idea after idea after idea, and when I was done, I had a library in my head. Multiply this by all the other albums I listened to and tore apart. Bowie. Aerosmith. Steely Dan. Nirvana. Alice In Chains. Lou Reed. Elvis Costello. Everything.
I can’t emphasize enough that you can’t make records without HEARING them in a very deep way. Listening to the Beatles so purposefully was about the best education I could have gotten.
Get the Beatles Book Here.
While I’m recommending books: Bobby Owsinski’s stuff is great.
There’s an interview with Dan in this one: The Mixing Engineer’s Handbook.
Partnership
In 2019 three big things happened to me.
First thing: I retired from teaching the gifted and talented multidisciplinary arts program that had been my main gig for fifteen years. It was a great gig, but you have to know when it’s time to go. Retirement is like finishing college or moving to a new city. I left looking for a new adventure, not to sit my fat ass in a lawn chair.
Second thing: My mom died in my arms, throwing up in my face. It was a scene from a zombie movie, and I have PTSD from it. The PTSD brought my tinnitus screaming back. I have had tinnitus for nearly 30 years and I learned to live with it and even forget it for the most part but PTSD decided retirement wasn’t adventure enough, and that now I needed a chronic health condition.
“Oh my God,” I think very often, “I am totally out of my mind.” Retirement, thus far, has been the hardest stage of my life.
Third thing: Dan Korneff showed up at my mom’s funeral. I spotted him and his quiet smile while I was delivering the eulogy. I hadn’t seen him in probably five years, and there he was.
A few weeks later Dan and I had breakfast at Thomas’ Ham and Eggery and a few months after that we delivered the baby, the Pawn Shop Comp, into the plug-in world. And here we all are.
25 years before, I gave Dan a push to get him on his way, and my repayment was him pulling me along and lifting me up. Dan is my latest mentor. Heck, Dan reset my friggin’ life.
I’ve heard him say on occasion, “Luke taught me all I know,” which is TOTAL bullshit. I taught him all I know. He knows more than I’ll ever know about audio. And more than that. Dan’s as smart as he is nice, and he’s incredibly nice.
And patient. He’s been nothing but patient with me. When weird shit happens at Korneff — like e-mail screw-ups, etc. — that's usually me. And he’s been patient with me when my tinnitus has the upper hand.
But at our age, mentorship has become partnership. We both look for the best in the other. We both cover the other’s ass. We each respect the other’s skills and strengths, and we let each other have our individual flaws without judgment. And most importantly, we nudge each other towards being better people. That might be the most important thing of all.
So, those are my thoughts. Find someone to help you address your faults (kick your ass) and someone to be your cheerleader. Learn to be your own mentor and teacher because you’ll be stuck with yourself forever. Find people who focus on what’s great about you, not what is problematic about you. Find people that forgive you.
Find great people. Find people who know more than you. If you’re the smartest person you know then you don’t know enough people.
Find people you can talk to and you can listen to.
Give of yourself. Share what you know. You’ll be paid back when you least expect it but when you most need it.
Never stop learning.
I'll never forget the first person who took me under their wing when I started college.
Making the long trek from New Jersey, I found myself in a situation I was entirely inexperienced in—the traffic chaos of Long Island. Hours after the college had closed - I was trying to enroll and didn't realize the school had different hours before the semester started - I arrived to find a janitor standing outside the main entrance, accordion in hand. This seemingly random encounter was the starting point of a mentorship that would shape my career in the audio industry.
Frank, the janitor, led me into the main recording studio of the college. There was a guy in his early thirties behind a huge SSL 4000 console, working on a mix. He heard me walk in but he barely turned around to look at me. He said, “You wanna come here? Be an audio student? What’s your name?” I said Dan, and he said, "...Dan – hand me one of those patch cables over there." I did. Then he gestured at a chair near his. I sat. This was my introduction to... The Professor.
After intensely focusing on the mix for a few moments, the Professor stood, went to the patch bay and started replugging patch cables like an octopus. He glanced at me: “Turn down the master fader.”
I had a home studio. I knew something about consoles. This wasn't my first rodeo. But I got up and stared blankly at the SSL because I had no idea where the master fader was. He saw I was struggling so he pointed and said, "It's in the center of the console. Nope, not there. Over, down, left, right. Yes, right there. Pull that fader down." Once he was done patching gear he pointed to the seat again and said, "Sit."
At this point I noticed that he had a little dog with him. Cute little thing. The dog climbed out from under the console, walked over next to me, sniffed... and took a shit on the floor. The Professor looked at the dog, then looked at me and said, "Are you gonna clean that up or what?" So I grabbed a tissue and picked up the turd. And then I went to find the janitor to get some cleaning supplies.
After the floor was done, the Professor asked me if I knew what a Sony DASH was, which I didn't. He pointed to the tape machine in the corner and said, "It's that big thing right there, and this is the remote for it. Every time I stick my finger in the air, I want you to rewind to the beginning and hit play." So I did.
The first semester of school started about a week after that. And I wasn't more than 20 minutes into my first class of the day, when that audio Professor popped his head into the class, looked at me and gestured me out to the hallway. He said, "Dan! Let's go to the studio. We have a session." And that was it.
That was the beginning of a mentorship that taught me some of the most valuable information I know about audio recording. Although my initial encounter with my mentor was far from conventional, I was thrust into situations where I had to adapt and learn on the spot. This experience taught me humility and the value of being open to unexpected opportunities, no matter how unusual they may seem.
Over the next couple years, we spent a lot of time together working on his sessions at the school and at outside studios. The guy was busy. He showed me the ins and outs of working on a session from the ground up. He told me what to do, how to do it and why we did it. And this was everything from how to coil a cable to the appropriate way to label a DAT tape to dealing with musicians. And he wouldn't be shy about telling me what I did wrong... for hours and hours. I think he enjoyed it. And it wasn't without reason. He wanted to make sure that I didn't repeat any of those mistakes in the future, or repeat his mistakes.
One thing I really appreciated about The Professor was his ability to relate technically advanced concepts using normal, everyday examples. For instance, when he was explaining what RMS meant, he used a flickering light bulb as an analogy. He said the RMS value of a fluctuating signal is the steady voltage that would make the bulb glow with the same average brightness as the flickering does. Match the brightness, and you’ve found the signal’s equivalent steady voltage. It was analogies like that that really helped me grasp these complex ideas.
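If it helps, here’s that same analogy as a few lines of Python - my sketch, not The Professor’s:

```python
import numpy as np

# RMS = Root Mean Square: square the signal, average it, square-root it.
# It's the steady voltage that would deliver the same average power
# (the same average bulb brightness) as the fluctuating signal.

def rms(signal):
    return np.sqrt(np.mean(np.square(signal)))

sr = 44100
t = np.arange(sr) / sr
flicker = np.sin(2 * np.pi * 60 * t)   # a 60 Hz "flickering" voltage

print(rms(flicker))   # ~0.707: a steady 0.707 V lights the bulb as
                      # brightly, on average, as this +/-1 V flicker
```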
Another aspect of his wisdom that I held in high regard was his consistent emphasis on the idea that music and recording are a form of art. You're creating art with sound. It's not always 100% technical. There's feeling and mood and all sorts of things that have nothing to do with compressors or equalizers.
During one session, he informed me he was going to be a little bit late and I should start without him. So we were recording two acoustic guitars and a vocal, and I had thought that it was a good idea to have them separated for bleed purposes; they were pretty far away from each other. I wanted complete isolation on their microphones, as far as vocals and guitars go, as well as between the two players. So they were set kind of far apart with headphones on. Everything was going smoothly as far as I could tell.
The Professor finally made it to the session and he sat back and listened to what we had. He then commented that there was no vibe—these players couldn't connect with each other very well. They weren't playing off of each other and it was affecting the performance. “Forget about the isolation, forget about keeping things separate and neat and perfect. Put these two guys together and let them play together. Fuck the bleed.”
And he was right on the money. Being able to interact with each other in a comfortable manner affected their mood and the physical closeness allowed them to play off of each other, creating an intimate and special performance. This experience exemplified his belief that, while technical aspects are vital, they are mere tools; the soul of music is crafted through emotions, mood, and ambiance. His insistence on focusing on the feel, not just the gear and technical aspects, taught me that exceptional audio recordings capture emotions and stories on a profound level.
After a couple of years in school, it dawned on The Professor that I had to start my own career. He set me up with an internship at Water Music, this massive commercial studio in Hoboken, NJ. That recommendation was like getting a backstage pass to a music wonderland.
My schedule during this phase of my life was a total circus act: classes running till 3:00 PM, then hopping onto a train that zipped me to Jersey in an hour and a half. From there, it was straight into the studio grind till the crack of dawn. Rinse, repeat. That was my life for a full year. Sleep was nowhere to be found. This gig was no joke. The initiation? A chunky manual on being an Assistant Engineer. Seriously, it could've doubled as a doorstop. I tackled that monster in just two weeks though, and guess what? I got the golden ticket to the control room.
But let's get real here, it wasn't all rainbows and mixing boards. The initiation didn't involve heroic knob-twisting or epic audio stunts. It was more about plunging toilets, scrubbing dishes, and mopping. These jobs might sound like bottom-tier stuff, but looking back, they were a ticket to character development. They taught me to stay humble, own up to responsibilities, and showed me the ropes – even the less glamorous ones. The Water Music internship was my gateway to a wild, enlightening ride. But let's face it, I wouldn't have nailed it without my mentor's guidance.
He was a treasure trove of sound wisdom, and he didn't hold back in passing on the building blocks of my creative process. But let's save those mind-blowing tips and tricks—like using a gated sine wave generator to pump up the sub-bass, jazzing up snares with a gated white noise generator, and his golden rule of not recording transient material near the time code track—for another blog post, shall we?
In hindsight, this isn't just about my first mentor—it's a tribute to the power of mentorship itself. It's a testament to the unexpected beginnings that lead to profound growth, to the mentors who help us sculpt our paths, and to the value of embracing challenges and learning from them. My journey started with an accordion-playing janitor who let me into a college after hours, and a generous, masterful engineer who immediately put me to work. It took me to where I am today.
Finding the right mentor will completely change the trajectory of your career and life. And who knows, maybe one day you'll start a plug-in company with them.
A few weeks ago I tossed around the concept of Poke and Hang. The more I think about this, the more important it becomes as a fundamental audio concept, especially in mixing, and especially when working with compression and saturation.
This is the video I wish I had made a few weeks ago, when I first trotted out Poke and Hang, because the idea makes the best sense in the context of a mix, and the masking caused by all the different parts of a mix.
For this video, I’m using pink noise as a stand-in for all of the other elements of the mix, and I’m processing a single acoustic guitar track with the Pawn Shop Comp. This lets you focus on hearing what the compressor is doing to the guitar in relation to the pink noise. I think it will make the concept clearer for you.
So, this post is just a video. Minimal jokes and snarkery this week.
Again, thanks for all the supportive email. You guys are great. And thank you for putting up with me as I figure out how to better present myself on video.
We are going to deal with transformers this week.
Where to start...
Transformers are incredibly common in audio gear, stuffed into everything from power supplies to microphones. Vintage EQs and consoles are loaded with transformers - four or more per channel, in many cases. And, of course, transformers have a huge effect on the way a piece of equipment sounds. A big part of “That vintage thing” is really “That transformer thing.”
Transformers are also sources of non-linearity and harmonic distortion. In fact, “That transformer thing” is basically non-linearity, albeit a nice non-linearity that our ears tend to like.
Let’s make a transformer.
How They Work
When electricity runs through a piece of wire, it causes some magnetism to happen. A way to think about this is electricity has current, and magnetism has flux. So, an electrical current running through a piece of wire creates magnetic flux. I don’t really have a good analogy for you on this... maybe... if you fill a balloon with air, the balloon will bulge - the air is the electricity and the bulging is the flux. That kinda works.
If a wire is coiled up, then a lot more magnetic flux is produced. And if that coil of wire is wrapped around a chunk of ferrous metal - like steel or iron - then even more magnetic flux is produced.
We can reverse the process, though. If we have magnetic flux changing somewhere, and we put wire near it, then electrical current starts flowing through the wire. To use my shitty balloon analogy: if we bulge the balloon, it sucks in air.
To make a transformer, we take a chunk of a ferrous metal, like iron, and make a kind of donut shape out of it. And let’s call that donut “the Core.” We wrap a bunch of wire around one part of the core and we feed electrical current into that coil of wire. We’ll call that first coil “the Primary.” We can also call a coil of wire “the windings.” Then, we wrap another coil of wire around the other half of the core - we’ll call that second winding “the Secondary.”
And here is what happens: We feed electrical current into the Primary, which causes magnetic flux in the core. The magnetic flux in the core causes electrical current in the Secondary. SO... we feed electricity in, it gets converted into flux, and then re-converted to electricity. My balloon analogy completely falls apart here... like... we stuff air into a balloon and it bulges and it’s touching another balloon and causes it to bulge and the whole thing is vaguely sexual and let's not go there.
Why Do We Need to Do This?
There is a point to all this. First of all, there’s isolation. The current flowing through the primary doesn’t directly connect to the current flowing through the secondary, and in some instances, noise and crap on the primary side of the transformer won’t make it through to the secondary side of the transformer. Obviously, when a transformer is used this way, it’s called an Isolation Transformer, and that’s pretty useful in audio.
There’s more it can do. If we use more wire on one side of the transformer than the other, such as less wire in the secondary coil, then we start changing the properties of the electricity in the secondary coil. We can change current, voltage and impedance, thus transforming the signal flowing into the primary side of the transformer so that it is very similar yet different in some useful way, when it flows out of the secondary side of the transformer. See where the name “transformer" comes from?
Depending on how the primary and secondary coil relate to each other mathematically, we can use transformers to increase or decrease voltage (Step Up or Step Down Transformers), or change impedance (Impedance Matching Transformers), etc.
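The arithmetic behind all of this is simple enough to sketch. This assumes an ideal, lossless transformer - real iron is messier, which is what the next section is about:

```python
# Ideal transformer relationships (Np, Ns = turns on primary/secondary):
#
#   Vs = Vp * (Ns / Np)          voltage scales with the turns ratio
#   Is = Ip * (Np / Ns)          current scales the opposite way
#   Zp = Zs * (Np / Ns) ** 2     impedance scales with the ratio SQUARED

def transform(v_primary, z_load, n_primary, n_secondary):
    ratio = n_secondary / n_primary
    v_secondary = v_primary * ratio          # step up or step down
    z_seen_at_primary = z_load / ratio**2    # what the source "sees"
    return v_secondary, z_seen_at_primary

# A 10:1 step-down: 100 V in becomes 10 V out, and a 600 ohm load on
# the secondary looks like 60,000 ohms from the primary side.
print(transform(100.0, 600.0, n_primary=10, n_secondary=1))  # (10.0, 60000.0)
```

That squared term is why impedance matching transformers are so handy: a modest turns ratio buys you a big impedance change.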
Look at that! A bunch of audio terms that you’ve probably heard around the studio, and now you sort of know what they mean. Sort of like, “This Impedance Matching Transformer changes something which I totally do not understand, but it is useful and I run into impedance all over the studio... what the fuck is impedance?”
IT DOESN’T MATTER if you don’t know what impedance is for the most part. No one has ever put on a record and hummed along to the impedance. “My girlfriend loves the impedance on this record. And it reminds her of her now dead cat.” People don’t say this.
But transformers definitely affect your audio engineering job, and the sound of your recordings, so for now, let’s deal with that. If you find this stuff cool, then please go research it more. Knowing stuff is its own reward. For now, don’t worry about how it works as much as how to take advantage of it when recording.
We do not need to know how to make tomato sauce to know the frickin' spaghetti needs some frickin’ red shit on it, godammit.
Why Transformers Have “That Transformer Thing"
People describe transformers as sounding “Warm” or “Round.” Some people say they add, “Glue” or “Air.”
Jeez. Ok.
I find them “Smushy" and “Mooey.” You might find them “Flatulent” and “Anachronistic.” Whatever. Here’s what is happening:
Audio is an up and down waveform; that is to say, it swings from positive to negative. So, when an audio signal induces magnetic flux in the core, the flux swings from positive to negative analogous to the audio. But the core sort of likes to stay magnetized a little bit. Meaning that when the flux in the core is positive, it changes to negative a teeny weenie bit slower than the electricity changes. Now, this doesn’t sound like delay, but it tends to cause harmonic distortion that sits in the lower frequencies. If you think back to previous blog posts, whenever we discussed harmonic distortion, it happened in the higher frequencies. But for transformers, even if the signal is low in power, the effect tends to center down lower. So, call this warm, call this fat, call this mooey... or call this the ramifications of magnetic hysteresis.
What the Hell Is Hysteresis?
You’ll see this word crop up on occasion, with transformers and often with noise gates. Basically, hysteresis as a concept means that the effect of the change happens slower than the thing that caused the change.
It’s like farts.
A fart happens, and a moment later everyone smells it and is grossed out*. The fart happens faster than the effect of the fart. And then, the fart, whilst finished as an event, lingers for a bit.
Fart hysteresis.
*Unless it is your own fart, in which case, you feel like sticking a flag in it and claiming it for the glory of Spain.
Now, transformers also Saturate, in the true sense of the word saturate.
The core of a transformer has a limit to the amount of flux it can handle, and when you reach that limit, the core becomes saturated. In other words, it can only handle so much magnetism before it can’t handle any more. Once all the molecules in the core are “fluxed up,” no more flux can be added. As I described in previous blog posts, distortion is caused by a loss of ability. When the core is saturated, it no longer has the ability to deal with peaks of waveforms that push beyond its capacity. Full up! No more! So, the peaks of the waveform get clipped, and a clipped waveform causes harmonic distortion. And, in the case of a saturated (clipping) transformer, the distortion is more like normal, clipped-wave distortion — it affects the high frequencies.
So, transformers have an inherent low frequency component that happens all the time. In many cases, though, it is REALLY REALLY subtle. Subtle like, “I can taste the difference in the water when I drink from one side of this glass, compared to when I drink from the other side of this glass.” But it can definitely be heard, too.
As you slam more power through a transformer, you get more typical harmonic distortion.
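If you want to see the clipping-creates-harmonics thing with your own eyes, here’s a sketch. I’m using tanh() as a generic stand-in for a saturating core - a common approximation, not the transfer curve of any particular transformer or of the Pawn Shop Comp:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
sine = np.sin(2 * np.pi * 100 * t)          # a clean 100 Hz tone

for drive in (0.1, 1.0, 5.0):               # gentle, medium, slammed
    saturated = np.tanh(drive * sine)       # soft saturation stand-in
    spectrum = np.abs(np.fft.rfft(saturated))
    # With 1 second at 48 kHz, each FFT bin is 1 Hz wide, so the
    # fundamental sits in bin 100 and the odd harmonics at 300, 500...
    h3 = spectrum[300] / spectrum[100]
    h5 = spectrum[500] / spectrum[100]
    print(f"drive {drive}: 3rd harmonic {h3:.4f}, 5th harmonic {h5:.4f}")
```

Run it and watch the odd harmonics climb as the drive goes up - that’s saturation distortion in numbers.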
Now, there is also... a sort of “slowness” to a transformer. I chalk this up to the hysteresis thing, but... to my ear, gear with transformers in it often sounds slower and less precise. Less crisp. I use the word “smushy.” Smushy as in smeared and goopy. Like someone stepped on a cupcake. For that low-end kind of boost that transformers cause, I describe that as “mooey.” Mooey as in MOOO like a cow.
Transformers in Action
I wanted to make a video, but the reality is that it can be VERY hard to hear any effect of a transformer through YouTube after who knows what processing and stuff a video goes through. So, here is how you can experiment with transformers on your own.
Get a Korneff Audio Pawn Shop Comp, if you don’t already have one, by downloading a demo here.
After you install it, bring up a bass track on your DAW and add the Pawn Shop Comp as an insert.
Press the Korneff Audio nameplate to go around to the other side of the PSC.
The transformers are located at the top in the center. You can switch between Iron, Steel and Nickel - these are the metals used in the transformer core. They have an effect on the sound because.... can you guess why? We’ve talked about it....
Magnetic Hysteresis!
Each metal has different magnetic properties, so the hysteresis of each is different, and the effect on the sound will be different. In general, Nickel is affected the least by hysteresis, so it will have the purest sound. Iron tends to sound a tiny bit bright, perhaps. Steel has the most pronounced low end bump and also seems the most “smushy.”
Try switching between Steel and Nickel to catch the flavor of what makes the two different.
Now, hearing the transformer clip is going to be difficult, because there isn’t a dedicated “transformer clipping adjustment.” You can get it to clip by turning up the preamp gain, but there’s a good chance that you’ll also clip the tube preamp at the same time. That is actually how it works in real analog life.
You see, back in the good old days, when people designed audio gear with transformers, they weren’t thinking, “Well, this will clip nice on a kick drum.” Nope. They were using transformers because that is what they had to use. And when they designed circuits, they were designing things to simply sound pleasant, not to necessarily clip a certain way. In fact, if you were to go back in time to talk to these guys and you said, “How does this sound if I clip it?” you’d probably get the reply, “Don’t clip it, you idiot. Why would you want to clip it? You want to blow it out? What the hell is wrong with you?”
It’s kind of like if you went to a remote indigenous village in the Amazon rain forest, and there were grass huts everywhere. And you said to one of the natives, “I like the grass texture effect on your hut. Tell me, what made you choose grass as a material for that?” They would look at you like you’re a freekin’ moron right before they shot your lumpy ass with blowgun darts envenomed with poisonous tree frog slime.
Remember that it isn’t just the transformer that makes things sound as they might sound. There’s also the stuff around the transformer that affects things.
Remember, too, that the effect of transformers is typically subtle. Half the time I can’t really hear it. Now, I might like how something sounds going through a particular piece of gear, but I’m not thinking to myself, “Ah, the mellifluous dulcet warmth of a Carnhill VTB9049... sigh..."
Don’t worry if you can’t really hear it. No one ever says, “Turn it up! This is my jam, baby! Dig that transformer Hysteresis!”
Have fun. Write good songs. Make cool records.
I received an email from JC, who wanted some sort of exercise to work on some of the concepts from the last few weeks’ posts on Distortion and Saturation and all that. And I thought this was a pretty good idea. So, today’s post has an exercise and a video, but I’m going to start off with a way of thinking about mixing that ties into the exercise and the video.
First of all, let’s not think about the elements of the mix in terms of frequency or dynamic range. Let’s think of things in terms of Hang and Poke. These were two terms I came up with when I was figuring out mixing for myself, and later were useful to teach mixing.
HANG and POKE
Hang is short for “hang time.” An instrument or a part with hang has a lot of sustain. It’s very steady state. It takes up a lot of space in a mix in terms of time. Instruments with a lot of hang are things like strings and keyboard pads, cymbals, tambourines, held vocal notes. These things “hang around” in a mix. More Hang = more detail = big.
Poke is short for “poke through.” Parts with poke are short and punchy. Kicks and snares tend to have a lot of poke. Anything that is percussive has a lot of poke. Parts that come in for a split second - like an orchestra hit or a keyboard stab - have a lot of poke. They poke through the other instruments and sounds of a mix.
Most instruments and parts have a bit of both Hang and Poke. A chugging guitar part has a lot of hang to it, especially with distortion, but there’s also the “click” of the pick and the more rhythmic component of the part, which provides Poke. A kick drum has a lot of poke, but if it's resonant with a lot of sustain, it could also have a lot of hang to it. If we add reverb to that kick, or we’re picking up room sound from it, then that will increase hang time as well.
ADSR and ENVELOPES
Some of you are thinking, “Wait, this is just a simplified acoustic envelope.” Yes. Basically, the envelope is split in two: there’s the attack portion, and then there’s everything else after the attack. And unless you’re working with synthesizers, that is typically all you really need to know - the attack, and everything else.
Think of Poke and Hang as a see-saw or a teeter-totter: if one side goes down, the other side goes up. So, when you increase Poke, you decrease Hang, and when you decrease Poke, you increase Hang.
Problems at the Surface of the Mix
A mix can be thought of like the ocean. There are elements of the mix, like reverb and maybe pads, that are way down in the depths, and other stuff, like vocals and lead lines, that are floating on the surface. And things like drums and more percussive sorts of parts sort of break through the surface and go back down into the depths.
Let’s say you’re working with an element of the mix, and you just can’t get it to sit correctly. If you bring it up in the mix so that you can hear it, it seems too big, but when you pull it down a little, it seems to disappear under everything.
This is the sort of situation in which the element needs more Poke and less Hang. It needs more Poke so it will sort of punch through the density of the mix, but less Hang so it doesn’t last as long and sounds less big. This is where you use a compressor with an attack time set long enough to allow the Poke to get through, but the release set long enough so that the hang time is decreased, which pushes the resonance and whatnot down under the surface of the mix.
You can sense you have to increase Poke and Decrease Hang when the element you are working with seems too big when you think it is at the level it should be at, but when you lower it in the mix, it seems to disappear in the depths of the overall mix.
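Here’s a bare-bones sketch of that attack/release idea - a toy envelope-follower compressor, not the Pawn Shop Comp or any particular unit, and the numbers are just illustrative:

```python
import numpy as np

def compress(x, sr, thresh=0.3, ratio=4.0, attack_ms=20.0, release_ms=300.0):
    # One-pole envelope follower: rises at the attack rate, falls at
    # the release rate. Gain reduction is applied above the threshold.
    a_att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env, out = 0.0, np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        coeff = a_att if level > env else a_rel
        env = coeff * env + (1.0 - coeff) * level
        gain = (thresh + (env - thresh) / ratio) / env if env > thresh else 1.0
        out[i] = s * gain
    return out

# A toy "drum hit": a sharp transient with a long ringing tail.
sr = 44100
t = np.arange(sr) / sr
hit = np.exp(-t * 4) * np.sin(2 * np.pi * 80 * t)
squeezed = compress(hit, sr)

print(np.max(np.abs(hit[:sr // 50])) / np.max(np.abs(squeezed[:sr // 50])))  # ~1: Poke sneaks through
print(np.max(np.abs(hit[sr // 4:])) / np.max(np.abs(squeezed[sr // 4:])))    # >1: Hang pushed down
```

The 20 ms attack lets the transient through nearly untouched; the slow release keeps the gain clamped while the tail rings, so the Hang drops under the surface.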
Another example: Let’s say you’re working with an element, and it seems small and lost, or lacking detail. It might sound great by itself, but when you start adding other things, it gets swallowed up. If you raise the fader it gets really loud before it sounds big or detailed. This is a situation in which the element has too much Poke and it doesn’t Hang around long enough.
This is fixed with a compressor set to a very fast attack and release, but what works even better is Saturation.
Remember that when you Saturate a signal, you’re basically clipping the signal with an infinitely fast attack and release, and this is perfect for increasing the Hang by shaving off the Poke. And this is the reason that drums sound fat on analog tape - the tape compression pushes down the Poke and brings up the Hang.
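And here’s the saturation side of the see-saw as a sketch - a plain hard clipper standing in for tape or tube saturation, which is crude but shows the idea:

```python
import numpy as np

# Hard clipping = a leveler with zero attack and zero release: the
# transient peaks (Poke) get shaved instantly, which raises the sustain
# (Hang) relative to the peaks.

def clip(x, ceiling=0.4):
    return np.clip(x, -ceiling, ceiling)

def crest(x):
    # Crest factor = peak / RMS. Lower crest = less Poke, more Hang.
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

sr = 44100
t = np.arange(sr) / sr
hit = np.exp(-t * 4) * np.sin(2 * np.pi * 80 * t)   # same toy drum hit

print(crest(hit), crest(clip(hit)))   # crest factor drops after clipping
```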
You can sense you have to increase Hang when an element vanishes in the mix, or seems to only have any presence and size when it's really loud.
Here’s an Exercise
I made a quick video... actually, I made a hugely long awful video that the lovely and talented Raquel edited down... and in it is an exercise that will help you to hear Poke and Hang, and also hear the subtle difference between compression caused by a compressor, and compression caused by saturation.
Go to https://korneffaudio.com/pawn-shop-comp-2-0/ and download a Pawn Shop Comp demo for this. The Pawn Shop Comp is perfect for this exercise because it has a compressor and a tube preamp that can do saturation, so you can hear both effects using one plug-in and one signal chain.
Here’s the video below:
And that’s it for today. Thank you JC for the suggestion. Feel free to hit us up with questions. New plug-in in a few weeks, and some other developments!
I didn’t manage to get my video setup working this week. Problems. Which sucks, but problems are a part of, I guess... virtually everything. Basically the only problem I don’t run into is having problems with having problems. Having problems seems to work fine, constantly.
I curse a lot, even on good days, but it sorta goes through the roof when I run into problems in the studio.
I started thinking of how I solve problems (and how much cursing is involved). I recall using the first version of Pro Tools that was released. This was back in, like, 1991? It was brand new, and it sucked. It crashed constantly, like an overzealous Kamikaze pilot, and had so many totally counterintuitive bullshit functions... there was an overall level control in the EQ section and its default position was all the way down. So you’d switch in the EQ and it would cut the channel out completely. And the knob was tiny, so looking at the EQ, you thought it was a filter or some such. And who in their right mind adds a level control IN THE FRICKIN’ EQ????
After god knows how many crashes, I took the manual - it was in a ring binder - and threw it out the studio window. The rings opened up and the pages blew in the night air and rained down over the parking lot like Nazi propaganda dropped from a plane. The assistant was a lovely girl named Yoshimi. She started crying. It was because I was cursing so much, I think, but on the bright side she learned a lot of new English vocabulary that session. I am still not fond of Pro Tools.
Problems happen. You must get through them. It does suck. Especially with clients there and the clock running. You will not be feeling good about yourself, so you need to find the problem fast and fix it fast. Here are some ideas and thoughts on the matter.
Gear Doesn’t Usually Break
I’ll tell ya right now, broken gear is almost NEVER the problem, because gear seldom blows up. Now, if you’re working with old tube stuff or vintage equipment, yes, this stuff does break, but generally analog stuff gets noisy or crackles, or the switches and pots are intermittent, and it’s really clear that something is wrong with the piece of gear.
I have thousands of hours in the studio. Blown gear has been the issue like five times. Every other time it was me or someone else (usually me) being a fucking moron and setting something wrong, or patching something wrong. If your gear is modern stuff, it doesn't blow up. The smart money is on you making a mistake.
Check if stuff is plugged in and switched on. I cannot begin to tell you how often this is the problem.
Mics Don’t Usually Break
Rarely do mics just stop working. They’re very reliable, even my vintage 1961 C-24 condenser. I've NEVER stopped a session because it had a problem.
If a mic is going to break, or is going to get broken, it will probably be a condenser or a high end, vintage ribbon, but again, this almost never happens in a good studio. Yes, I have been in shitty studios that had broken mics in the mic cabinet, which is utter bullshit, and if you’re putting stuff out in the studio that you know is broken or intermittent, like broken mics and cables, then you deserve to go out of business.
Dynamic mics are practically bombproof, the exception being AKG 225E’s, which will break if you look at them funny. Chances are you’ll never run into one of them in the studio, though, because they’ve not been made in years and they’re all broken.
Yes, mics can break, but it is very uncommon. Again, usually the issue is dumb stuff by the engineering staff.
What About Cables?
Cables are definitely a point of failure. But at a well-run studio, there shouldn’t be ANY broken or intermittent mic cables, patch cords, guitar cables, speaker cables, etc. Because a well-run studio staff pulls broken cables out of use IMMEDIATELY and either fixes them or tosses them.
Seriously, there is no excuse for a session to grind to a halt because the absolute cheapest thing is broken. What the fuck is that about? Fix it or chuck it.
If, during a session, a cable seems dicey, pull it out of use, tie a knot in it, and either throw it into the shop area, or into a box, or in the corner where no one will use it. After the session, test it thoroughly. Wiggle the wire near the connectors, etc. If it makes noise or cuts out, fix it or toss it. But do NOT put it back on the cable rack or in the cable closet. I was at a shitty studio for a few days that kept returning broken cables to the rack after we packed up. So, when I found a bad one, I’d cut a connector off with a pair of wire cutters I had with me. Fuck ‘em.
I was never a house engineer; I was a freelancer. And I would work in studios ranging from the best in the world to the worst in the neighborhood. I had a cable tester in my “producer’s toolbox,” and one of the first things I would do at a session in a strange studio was test a bunch of cables. If I found a bad one, then I learned something about the way the studio was run. But more importantly, I now had a pool of cables that I knew worked, so those would be the cables I would use for the duration of the session.
I integrated this testing into my workflow. Generally, I started projects tracking the basic tracks live in the studio. So, that initial session might have 20 or more microphones and cables in use. The perfect time to test everything was during that initial set-up.
Cables do break, but yet again, it’s usually a person doing something dumb that causes the problem. Get a cable tester. Use it a lot.
KISS ASS
This is an acronym: KISS means “Keep it Simple, Stupid,” and ASS means “Always Something Stupid” or “Always Something Simple."
KISS is naturally how you want to proceed in general in the studio, because if you’re keeping things simple, you can typically work faster. One mic is a lot faster to set up than eight, one mic will have no phase problems compared to having eight, etc. And eight mics plugged in means eight mics to break, eight cables to break, eight mic preamps to be set wrong, eight phantom power switches you forgot to press down. Tons of points of failure.
But you can’t always keep it simple; things get complex. Complex is fun, but it does have higher screw-up potential.
ASS.... always something stupid/simple. 90% of the time when I think something is broken in the studio, it’s because I patched it wrong. Or it's set to Line when it should be set to Mic. Or it's bypassed. Or it’s plugged into input #10 and I’m bringing up the fader on input #11. Or someone is singing into the wrong side of the microphone (unbelievably, this has happened. More than once!). Or it's turned off.
Start by looking for something simple and stupid: a human mistake.
But First - TURN THE VOLUME DOWN
Before you try to fix anything, drop the monitor volume way way down, so you don’t damage speakers or ears by throwing switches and pulling patches. There’s nothing that says idiot more than breaking more things while troubleshooting.
It’s nice if you can mute things entirely and go by the meters, but if I have my head behind a rack and I can’t see the meters, then I don’t know what the hell is working or not. So, at least, make sure you lower the speakers way down, but leave them up enough that you can hear something.
Of course, if what is going through all your gear sounds like a square wave, then keep everything down. Loud, continuous, distorted digital noise is to be avoided. Use. Your. Peanut.
Look at the Lights and Meters
Make a noise at the source and look at the meters. Does the preamp light up? Does the compressor meter kick to the left? Wherever the lights and meters stop working first in the signal chain is where your problem lies.
Check the Buttons, Switches, Knobs
Check all these on all the gear in the signal chain. Chances are the problem is a Mic/Line switch, or something is bypassed, or the volume is down, etc. Is there a trim turned down? Is there a knob that has a switch in it, and you have to lift the knob or press it down - this was a common SSL problem. Phantom power on? Hidden level control in the goddamn plug-in panel?
Check Your Patching
It’s so easy to fuck up a patch. On a dense patchbay, where you can barely read anything and there are cables going everywhere and the whole thing looks like worms at a swinger party, patching really requires paying attention. Don’t be embarrassed if you screw up a patch.
Double check your patching. Make sure output goes to input. Make sure you’re not off by one row or point in the patchbay. Make sure you’ve pushed the cable down all the way. Make sure some damn assistant didn’t slip in a phase-reversed cable because he wants to “test you” (true story.... bastard).
Be Anal about Patching
If you want to be anal, which is not a bad thing in the studio, patch things one at a time. This is a really good idea if you’re working with unfamiliar gear or in a new studio or setup.
Start with the source, get it to the output, and confirm that you’ve got sound. Then add the compressor, and confirm you have sound. Then add the EQ, or whatever, and confirm you’ve got sound. This is a slow way to go, but it is an excellent way to avoid problems and learn. I find I do this a lot, especially when I’m interfacing my DAW with analog outboard. It helps me to find and fix latency issues, which, coming from my analog background, is sort of an “unnatural” thing for me.
Another trick: say what you’re doing out loud. I still do this: Preamp out to compressor in.... compressor out to EQ in... It keeps mistakes from happening, especially when the patchbay gets crowded. Saying it out loud also slows you down a bit. Yes, engineering should happen quickly, but taking a slow moment while setting up allows for faster overall speed later. And NOTHING in the world feels worse than trying to find a problem while the clients are watching you, waiting for you to figure it out.
Confirm What Is Working
Well, now you’re sweating, because you’ve checked all the obvious stuff and it still ain't working. So, you might have a genuine gear issue on your hands... I still would bet you patched shit wrong and just missed seeing it, but I digress...
Take a deep breath, or two, and start swapping out one element of the signal chain at a time.
So, if you’re using a microphone, a cable, and a preamp, after checking all the switches and the patching, because ASS, swap the cable and see if you get sound. If swapping the cable doesn’t change anything, then swap the mic. Still no change? Swap the preamp. Don’t swap more than one thing at a time, because then you don’t know what is causing the problem.
What is very handy is having two fairly identical signal chains to test against each other. If you’re in a drum session and you can’t hear the kick mic but you can hear the snare, unplug the cable from the kick mic and plug it into the snare mic. If you can’t hear the snare, then the problem is on the kick’s signal chain. If you can hear the snare, then the issue is with the kick mic.
Simplify the Signal Chain
If you have a mic plugged into a preamp, plugged into a gate, plugged into a compressor, plugged into an EQ, plugged into an interface, that’s a lot of points of failure.
Find the problem by simplifying the signal chain. Speak into the mic. Is the preamp lighting up? If it is, patch the output of the preamp right into the interface and see if you get sound. If you do, you now know the problem is somewhere from the gate through the EQ. So, patch the preamp output into the gate, and patch the gate output into the interface. Can you hear it? Repeat this process, cursing liberally, until the problem is solved.
If you’re in a digital situation, do the same thing. Figure out a way to get the source directly to the output, confirm the source is working, and go from there.
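If it helps to see the logic spelled out, the simplify-and-swap routine is just a half-split search over the chain. Here’s a minimal sketch in Python - the stage names and the signal_passes check are made up, it’s only the shape of the idea:

```python
# Hypothetical sketch: find the dead stage in a signal chain by halving
# the chain, the same way you'd bisect a patch in the studio.
def find_broken_stage(chain, signal_passes):
    """chain: stage names in signal-flow order.
    signal_passes(stages): True if audio makes it through that sub-chain."""
    lo, hi = 0, len(chain)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if signal_passes(chain[:mid]):
            lo = mid   # front of the chain works; the fault is later
        else:
            hi = mid   # the fault is somewhere in the front half
    return chain[lo]

chain = ["mic", "preamp", "gate", "compressor", "eq", "interface"]
works = lambda stages: "gate" not in stages   # pretend the gate is dead
print(find_broken_stage(chain, works))        # -> gate
```

In a real room you rarely get to re-patch that surgically, which is why the swap-one-thing-at-a-time version above is the practical equivalent.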
Avoid Problems in the First Place
The easiest way to solve problems is to not have them. Keep your gear in good shape. Fix, or toss, bad cables immediately.
Dust everything and vacuum often. Dust screws up everything. Make sure your control room chair isn’t running over cables.
If a channel, or a preamp, or a piece of outboard dies, either pull it from the rack or put a piece of artist’s tape on it. Now, use your peanut here. I was in a session and we found a blown channel on the console, so I told the assistant, who was a dumbass, to mark it, so he put a HUGE piece of tape over the channel and wrote BLOWN!!! and drew flames and fire all over it. Yeah, that’s EXACTLY what we want the client to think, that the studio is shitty and on fire. What a fucking idiot. I put a piece of tape on the fader and another over the mute button.
I’ll have to compile my best moron assistant stories, one of these days.
Have great sessions and make great music.
Oh my, we're jumping back to DISTORTION, for a bit, and looking at what happens when you push a signal up, run it out of headroom, and generate harmonic distortion.
Isn't it cool that, if you've been following this series of posts, you can now understand everything I just wrote? It's also cool if you already knew all this stuff. Everything is cool. Even distortion is cool... if it sounds good.
You may have read, or heard, engineers say things like: "Compression is distortion, distortion is compression, saturation is distortion, saturation is compression yada yada yada" and now all of these terms are mixed in your head and it's confusing. So, let's straighten this out and give you some mental tools so you can get this crap under control.
DISTORTION and COMPRESSION
As you know (and if you don't, go here), as we crank up the signal through a piece of gear and run it out of headroom, the gear loses its ability to reproduce the signal and the wave clips. That is, the peaks of it - the waves that are very high in power - are rounded off a bit. And if you're knocking off the high peaks of a signal, you are compressing the dynamic range of the signal. So, a side product of pushing a signal into the distortion point is some compression.
You've probably heard this whenever someone overdrives a guitar amp. You'll notice that there's not a lot of volume difference between the softly played parts and the loudly played parts. Contrast that to a guitar amp that isn't overdriven: the quiet parts can be very quiet, and the loud parts really loud. Try this with a Fender Twin - you'll hear the loudest, utterly painful clear guitar parts, and you'll have to squint for the quiet stuff.
Compression occurs early on, as you use up headroom, and it doesn't necessarily generate that much harmonic distortion. It will produce some, but it might be inaudible at first.
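If you want to see the numbers behind “clipping is compression,” here’s a minimal sketch (Python with numpy, all levels invented): shear the peaks off the loud passage and the gap between loud and quiet shrinks, which is exactly a loss of dynamic range.

```python
# Sketch: hard-clip the loud passage, measure the loud/quiet gap shrink.
import numpy as np

sr = 48000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
quiet = 0.1 * tone                      # softly played passage
loud = 1.5 * tone                       # loud passage, past our headroom

clipped = np.clip(loud, -1.0, 1.0)      # gear runs out of ability at 1.0

peak_db = lambda x: 20 * np.log10(np.max(np.abs(x)))
rms_db = lambda x: 20 * np.log10(np.sqrt(np.mean(x ** 2)))

print("peak gap before: %.1f dB" % (peak_db(loud) - peak_db(quiet)))     # 23.5
print("peak gap after:  %.1f dB" % (peak_db(clipped) - peak_db(quiet)))  # 20.0
print("rms gap after:   %.1f dB" % (rms_db(clipped) - rms_db(quiet)))    # ~21.5
```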
DISTORTION is NOT a COMPRESSOR
So, there is a compression of dynamic range when you have distortion, but it isn't the same type of compression that you typically get from a dedicated compressor.
Typically, a compressor has a bit of lag from when it senses a signal over threshold to when the gain reduction circuit kicks in. That lag is called "Attack Time", and sometimes it's fixed, sometimes it's adjustable, sometimes it's short, sometimes it's long, but in any event, that "lag" is pretty much the reason why a compressor sounds punchy: it lets the transient get through... the transient "punches" through - is a good way to remember this.
But when you slam a signal into a tube, or a FET, or into analog tape, and cause clipping, there is no lag. The transient doesn't get through, it is immediately squashed at the speed of not enough electrons. There's also a very fast release when you're getting this sort of effect.
So, this type of compression is very different from that caused by a compressor. It can be very useful, actually, and you're very used to hearing it, especially on records from the '50s, '60s and '70s.
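Here’s a toy sketch of that difference, with made-up numbers: a compressor whose gain reduction eases in over a 5 ms attack, against a clipper with no lag at all, both hit with a 1 ms transient.

```python
# Toy comparison: instant clipping vs. a compressor with an attack time.
import numpy as np

sr = 48000
env = np.full(sr // 10, 0.3)   # the body of the signal
env[:48] = 1.5                 # a 1 ms transient, way over our ceiling of 1.0

clipped = np.minimum(env, 1.0)           # clipper: no lag, squashed instantly

attack = np.exp(-1.0 / (0.005 * sr))     # one-pole smoothing, ~5 ms attack
gain, out = 1.0, np.empty_like(env)
for i, x in enumerate(env):
    target = min(1.0, 1.0 / x)           # gain that would bring x under 1.0
    gain = attack * gain + (1 - attack) * target
    out[i] = x * gain

print("peak through clipper:    %.2f" % clipped.max())  # 1.00 - flattened
print("peak through compressor: %.2f" % out.max())      # ~1.50 - punches through
```

Same ceiling, very different feel: the lag is the punch.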
SATURATION
Saturation is a term that describes a physical phenomenon: if you record very hot to tape, the magnetic particles can't move any further, and that is called "tape saturation". Think back to 8th grade science class and making "saturated solutions" with that asshole Mr Frank, who always favored the lacrosse players over nerdy fucking musicians like me. Uh... I digress.
Saturation is also what happens to transformers, when a lot of signal is pushed through them and they become "saturated”. Here’s a topic for another post, I guess.
If compression is what happens as we start pushing a signal into clipping, saturation is what happens if we keep going: the signal gets squashed a bit more, and Harmonic Distortion starts to increase.
Increasing harmonic distortion adds upper harmonics, so, a signal moving into saturation tends to get brighter, and the more you push in, the brighter it gets. And this is the big use of saturation and "saturators" these days, to make things a bit more present by adding brightness and... COMPRESSION, right? Because using up headroom and generating harmonic distortion adds compression. But not "compressor compression", right? It adds compression that's not punchy.
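Here’s a sketch of that push-harder-get-brighter behavior, using tanh as a stand-in for a saturating stage (real tape and tubes curve differently, but the trend is the same):

```python
# Drive a sine into tanh "saturation" and watch the upper harmonics climb.
import numpy as np

sr, f0 = 48000, 100
tone = np.sin(2 * np.pi * f0 * np.arange(sr) / sr)   # 1 second at 100 Hz

def harmonic_db(drive, n):
    """Level of the nth harmonic relative to the fundamental, in dB."""
    spec = np.abs(np.fft.rfft(np.tanh(drive * tone)))
    return 20 * np.log10(spec[n * f0] / spec[f0])    # 1 s long, so bin = Hz

for drive in (0.5, 2.0, 5.0):
    print("drive %.1f: 3rd harmonic %6.1f dB, 5th %6.1f dB re: fundamental"
          % (drive, harmonic_db(drive, 3), harmonic_db(drive, 5)))
```

As the drive goes up, the harmonics rise toward the fundamental: more high-frequency energy, i.e., brighter.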
DISTORTION
If you keep increasing the level, you'll keep increasing harmonic distortion, and eventually your ear will recognize things as sounding distorted. There isn't some spot where audio engineers agree: "Oh, that's gone from saturation to distortion". A classical engineer will hear ANY compression and saturation and call it distortion, whereas someone using saturator plug-ins might be drawing lines here or there. Someone like me, an old-school analog engineer, will probably just record stuff and get it to where they think it should be and not give a squirrel's ass about what it's called.
In other words, the words are arbitrary. What's happening is this: as you turn things up, you reduce the dynamic range and add upper harmonics. That's what it all is.
WHEN DO YOU USE THIS STUFF?
All the time, I guess. I usually pushed drums into analog tape, while recording, to tame the attacks a little bit and "lengthen" the hits (more on that later). I would, typically, cut the kick kinda on the lower side, because I wanted as much of the punch of that thing as possible, but snare I would usually smush in quite a bit, and cymbals too. Hi hats... if I wanted them crisp - meaning lots of nice ticky ticky transients - then I would cut them on the low side. If I wanted to make them more sloppy (squash the transient a bit) then I would:
a) cut them higher
b) cut them lower
If you answered a), you understand tape compression.
A basic way to think of using saturation/tape compression (or whatever this sort of thing might be called) is: Do I need this instrument to sound brighter? Do I need more punch out of it? Is it too punchy?
Realize that making it brighter, by generating more distortion, will typically nip off transients a bit. You're going to notice the loss of transients on faster things, not so much on slower things like vocals or guitars. As I wrote in last week's blog post, I used to always smush guitars into tape, and that was usually done to get rid of some of the transient activity, so things weren't so pingy and whistle-like (the Insufferable Midrange Filter on the AIP hadn't yet been invented).
And that is it for this week. I had hoped to make you all a video, but my tinnitus is bad this week so it wasn’t meant to be.
Yes, I have tinnitus. I got it years ago, from a week of sessions that was a little too long and a little too loud.
Tinnitus, if you’re in audio, is a bit like getting in a car accident while driving. You might be very careful, and take all precautions, and you can still get hit. If you’re on the road, you can get hit. Honestly, with tinnitus, you can be miles from the road up in the mountains and suddenly a car can drop out of the sky on your fucking head.
Someday I’ll write a bunch of things on tinnitus, but for now I’ll say this:
1) Wear hearing protection around drum sets, horn sections, PA systems and guitar stacks. And on subway trains.
2) Don’t go to ANY live gigs without hearing protection. It could be a concert of ants picking their noses. If it’s being mic’d, it’s too loud.
3) Get an SPL meter app for your phone and measure your environment. Note whenever you’re in a place that gets consistently above 80dB-SPL. Try to avoid those places, and if you’re stuck in one of them, leave as soon as you can - like within an hour. If it is louder, leave sooner. If it is above 100 dB-SPL, question why you are there in the first place.
4) Avoid earbuds like the plague. Never wear them on a train or in a car. This is like playing Russian roulette with a lawn mower.
If you have tinnitus... I feel ya. Most likely it isn’t your fault, and beating yourself up won’t help. Feel free to write me - Luke @ Korneff Audio dot com. Remove the spaces and make the dot a dot. You’re not alone and there are some things you can do so life doesn’t suck.
Here we are - the end of the line for this series of posts on levels, noise, distortion, etc.
Gain staging... from all the talk in online forums and people saying, “Well, you really need to watch your gain staging,” you’d think there's some sort of mystical science magic to it, but it’s really simple.
Gain staging is making sure that each piece of equipment in your signal chain has the best possible signal-to-noise ratio and enough headroom to prevent unintentional distortion.
We have to cover two concepts really quickly, then I’ll tell you how to gain stage things, and we’ll finish off with some tips (rules, suggestions) that make this even easier.
UNITY GAIN
What this means is that the level flowing into the piece of gear is the same as the level flowing out of the piece of gear. Think of a wire. If you feed a signal into a piece of wire, and the wire isn’t really tiny or tremendously long, the amount of power feeding in is the same as the amount of power feeding out.
If we stick a bunch of amplifier circuits and EQ circuits and processor circuits between the input and the output, unity gain is still what we want to have happening.
Now, there’s usually something to control Input Level, we sometimes call this a TRIM, and there’s usually something to control Output Level, and this can be called Output Trim, or Output, or it can be a fader, or, in the case of a compressor, it might be called Make-Up gain, or it can have a Make-Up Gain AND an Output level, but the basic idea is the same: There’s something to control the level of what feeds in, and the level of what feeds out.
Now, most equipment has some sort of meter - ranging from a couple of LEDs to a mechanical VU meter, and that meter is usually located after the Output level somewhere, but sometimes it is switchable, which is nice, because then you can see what your input level is before it processes things, compare it to the output level, etc.
SO, you’re always aiming for Unity Gain with each piece of gear, and what we want is the input level set so that the meter reads nominal, and the output level feeding out is at nominal. To do this, we set the level control knobs at the position that gives us Unity Gain, and that position is usually marked with a ZERO or some such.
Set Things to Unity Gain
This is easy. Grab your OUTPUT LEVEL knobber and set it to 0. So, if it’s a fader on a console you slide it up to 0, the output knob is at 0, etc. What if the Output Knob is labeled from 0 to 10? Set it to 8, or set it to 10, it depends on the circuit and we’re not going into that here.
Next, feed signal into the input, turn up the input gain until the METER is hanging around 0, which indicates nominal level. Now you’ve got something really close to Unity Gain happening for that piece of gear. Will the meter go up and down? Yes. But you’re not chasing the meter. You’re looking to get the meter hanging around 0, or nominal level. Don’t be too fussy. Just get it close.
Remember, the Unity position on a knob or a fader is at ZERO. 0. When you set it to Unity, that’s where it goes.
The next step is to feed the Output of one piece of equipment into the Input of the next piece of equipment. NOW... this might get a bit tricky, so we have to cover Operating Level quickly.
OPERATING LEVEL
Simply put, Operating Level is the amount of power a piece of equipment wants to see at its input and output. This is what you’ll usually run into:
Mic Level is the level of power coming out of a microphone and it’s REALLY LOW. How low? Like -50dBu. What does that mean? It means really low. Don’t worry about it. Mic Level is so low that you can’t do anything with it until you bring it up to Line Level. That’s what a Mic Preamp does - it brings a Mic Level signal up to Line Level.
Instrument Level is the amount of power that comes out of a bass or a guitar with a passive pickup. It’s also really low, and in my mind it's basically the same as a mic level signal. For those advanced campers, I’m ignoring impedance today. If you don’t understand that previous sentence, that’s fine. You'll get there eventually.
Line Level is the level of power flowing through gear - consoles, tape decks, compressors, coming out of synths and keyboards, etc. There are three possible line levels: Consumer, Pro Audio and Broadcast.
Consumer line level is -10dBV. This is the line level of home stereo equipment and also the output level of a lot of synths and keyboards. What does -10dBV mean? Well, it means if the thing is set to unity you have -10dBV feeding in and -10dBV feeding out, and that’s all you need to know. -10dBV is a LOT more powerful than -50dBu. Ignore all the V’s and u’s for now. -50dB is less than -10dB, right? Close enough for rock and roll today.
Pro Line Level is +4dBu. This is hopefully what the majority of equipment is at in your studio. Can’t tell? Pro equipment uses bigger, heavier, tougher connectors. Consumer stuff uses shitty little connectors. With pro line level stuff, if the meter is at 0 and gain is at unity, you have +4dBu feeding in and out. And it’s got a lot more power than -10dBV consumer stuff. Again, ignore the V’s and u’s and just look at the numbers for now. +4 is more than -10 and a lot more than -50.
Broadcast Level is +8dBu. I don’t even know how common this is anymore as I don’t do work in radio stations or TV, but it is 4dB hotter than Pro Line Level. You can probably ignore this.
Speaker Level is what comes out of a power amp and plugs into a speaker. It’s like a SUPER BOOSTED line level. Line level is too weak to move the diaphragm of a speaker, so a power amp is needed to crank shit up. A dumb idea is to plug the output of a power amplifier into anything other than a speaker. Poofsky.
Again, hopefully your equipment is all +4. It won’t be - you’ll have some guitars and keyboards and, of course, mics, and they won’t be at +4, but that’s why you have preamps. Plug the mic level and instrument level and consumer level stuff into a preamp, and add gain to get it to read 0 on the meter. Now, going out of your DAW or mixer, you might be feeding into a pair of “consumer level” active monitors. Usually there’s a switch so you can match the Pro Level output gain to the consumer level input gain. If you’re thinking the switch knocks off about 14dB of gain, you’re close - 14 is what you get by just subtracting the numbers, but once you account for the different reference voltages hiding behind the u’s and V’s, the real gap is about 12dB. See the sketch below.
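For the curious, here’s the unit math I just told you to ignore, as a quick sketch: dBu is referenced to 0.7746 volts and dBV to 1 volt, which is why the real pro/consumer gap comes out near 12 dB rather than a flat 14.

```python
# Converting operating levels to actual voltages.
import math

def dbu_to_volts(dbu):
    return 0.7746 * 10 ** (dbu / 20)   # dBu reference: 0.7746 V

def dbv_to_volts(dbv):
    return 1.0 * 10 ** (dbv / 20)      # dBV reference: 1 V

pro = dbu_to_volts(4)          # +4 dBu pro line level
consumer = dbv_to_volts(-10)   # -10 dBV consumer line level
mic = dbu_to_volts(-50)        # ballpark mic level

print("pro line:  %.3f V" % pro)        # ~1.228 V
print("consumer:  %.3f V" % consumer)   # ~0.316 V
print("mic level: %.5f V" % mic)        # ~0.00245 V
print("pro vs consumer gap: %.1f dB" % (20 * math.log10(pro / consumer)))  # ~11.8
```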
GAIN STAGING WHEN TRACKING
Ok, here we go.
Starting with a mic preamp: Turn the input gain all the way down. Set the output gain to Unity. Plug in the mic. Have the singer or musician play, and turn up the Input Gain until the meter is reading 0, or nominal. Done. If the meter has slow ballistics and you’ve got fast drum transients, run the meter a little lower, like -10 or -15. Slow transients? You can run it a little hotter and increase your S/N ratio. But really, unless you’ve got slow meters and fast transients, park it around 0 on the meter and move on.
Plug the output of the mic preamp into whatever is next - a compressor, an EQ, etc. Set the EQ flat, set the compressor threshold all the way up, etc. If there’s an output level control or makeup gain, set that to Unity, that is, to 0 or to 8 or whatever. Watch the meter. If there’s no input to adjust, it should hang out around 0. If there’s input gain, then set that to Unity or play with it until the meter is at 0. Now, as you adjust the EQ or the compressor to change the signal, the gain will change, so you’ll have to adjust the Output level perhaps, or the input level - it depends on how crazy the gain change might be.
You keep going until you reach your final stage, either an analog tape deck or a digital tape deck, or a DAW, or perhaps a live mix console... whatever.
Hit analog tape at 0 on the meters, unless it is drums, in which case hit it a little lower unless you want distortion. Hit digital tape decks, like ADATS and DATS and Sony DASH machines as hard as you can without going over.
Hit DAWs at around -18 to -12dBFS. Yes, you can hit it harder, but for now, you want things bouncing around in that -18 to -12 area.
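To see why parking each stage at nominal matters, here’s a toy model with invented noise figures - each stage adds its own hiss on top of whatever noise it’s handed, so a starved stage costs you S/N you never get back:

```python
# Toy gain-staging model: every stage amplifies the incoming noise AND
# adds its own hiss (noise powers add, so sum them in the linear domain).
import math

def through_stage(sig_db, noise_db, self_noise_db, gain_db):
    out_sig = sig_db + gain_db
    out_noise = 10 * math.log10(10 ** ((noise_db + gain_db) / 10)
                                + 10 ** (self_noise_db / 10))
    return out_sig, out_noise

sig, noise = -50.0, -130.0   # mic-level signal over a very quiet source
for name, self_noise, gain in (("preamp", -100.0, 54.0),
                               ("line stage", -90.0, 0.0)):
    sig, noise = through_stage(sig, noise, self_noise, gain)
    print("%-10s out: signal %5.1f dB, S/N %5.1f dB" % (name, sig, sig - noise))
```

Run it again with the preamp starved (say 30 dB of gain) and made up later, and the later stage’s hiss takes a bite out of the final S/N. That’s the whole argument for setting levels at the earliest point in the chain.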
ADJUSTING LEVELS
Now, where do you adjust the level if things are hot at the tape deck or the DAW? Well, the best place is the Mic Preamp input. Yes, it will screw up your compressor settings a bit, but that’s life and engineering and you’re paid to tweak things. The mic preamp is doing almost all of the work here, so that is where you adjust it. When tracking, get in the habit of setting levels at the earliest spot in the signal chain, at the preamp. And NEVER (and I mean this almost absolutely) use a fader to fine tune your gain. The exception: if you’re riding levels while tracking, then use the fader. Other than that, leave it at Unity. Have I made this clear?
Always do this. It will save your ass.
Always set your output levels when tracking to Unity. Especially on a console. When you’re tracking, all of the faders should be at the 0 mark on things. DO this RELIGIOUSLY. Here’s why.
Faders get bumped during sessions because that happens. If you always set them to Unity, then if they get bumped you just set them back to 0 (Unity). You need to pull a mic down quickly? Pull it down. When you bring it back up, place it where it always should be, at Unity.
True story. Was live tracking a band and we had about 27 mics going into the console. Took HOURS to get levels. Irate girlfriend of lead singer came in, caused a huge ruckus, running around the room screaming, and she ran to the console and moved all the faders around! “There,” she said. “I fucked up your mix.” I think I yelled at her. She stormed out of the room. Band was very upset. “Oh no! She wrecked our levels that took HOURS to set,” cried the guitar player. “Luke, I am so sorry...” said the lead singer.
I laughed. Slid all the faders back up to... WHERE THEY ALWAYS SHOULD BE WHEN TRACKING. Unity. 0. Band loved me and bought me a pony after that. Named the pony Unity.
MISMATCHED OPERATING LEVELS
When you’re feeding something low level into something higher level, you want to adjust things at the INPUT STAGE of the higher level piece of gear. So, with a low level mic going into a preamp, you tweak the gain of the preamp. If you’re plugging some strange shitty consumer -10 compressor you bought into a +4 thing, add gain using the +4 device’s input trim.
What if you feed +4 into -10? Well, turn DOWN the output of the +4 device by about 12dB (call it 14 if you’re just subtracting the numbers) so you don’t overload the -10 device.
FINITO
AND... there you have it. Gain Staging. It’s easy. This blog post is done. What follows below is a bunch of common sense hints that are worth following.
See ya next week.
COMMON SENSE HINTS
1) Nominal is nominal is nominal. If the operating level of each piece of gear is the same, then setting everything to nominal will work. When in doubt... NOMINAL.
2) Set levels as hot as possible without getting distortion. You’re always trying to maximize the s/n ratio.
3) Use the hottest mic possible. It’s really hard to overload a modern condenser, let alone blow it out.
4) Preamps generally have a lot of headroom, so they can usually be run pretty hot. But LISTEN. Some preamps overload in a nice way, others crack and snap. And this sounds like shit. When in doubt, back it down a bit. You can always add distortion later, but you’ll never get rid of it once you have it.
5) Most mechanical (dial) meters are VU and have slow ballistics. Run your level lower on these when it’s percussive stuff, and at nominal for everything else, including entire songs. You can run your level higher on VU meters when the transients of the input signal are slow.
6) LED meters might appear to be fast peak type meters, but in my experience they usually have similar ballistics to a VU meter. Run some drums through it, run some vocals through it, watch how the meter responds. Or look in the goddamn manual.
7) Want to calibrate everything in your signal chain? Stick a guitar amp in the room without a guitar plugged into it, crank it up so it hisses (white noise). Throw a mic in front of it. Plug the mic into a preamp with the output at Unity and adjust the input gain to get it to 0 on the meter. Feed that through each piece of gear in your signal chain until you get to tape or DAW. No guitar amp? Mic the fridge. Or water running in the sink. Don’t get the mic wet.
8) The above is too much work? Turn your mic preamp input all the way down. Set the output of everything to Unity. Set the input gain of the rest of the signal chain to Unity. Provided everything is at +4 operating level you’re done.
9) You’ll make WAY fewer mistakes when patching things if you always think OUTPUT feeds the INPUT, and always plug stuff in that way - the patch cable goes into the OUTPUT first, then you plug it into the INPUT. If I’m doing a complex patch or I’m using a strange patchbay (and I am almost 59 and my brain is turning to shit so most patchbays are strange to me these days), I say in my head or even out loud, “The Micpre output goes into the TLA-50 input.” Then I grab the next patch cable: “The TLA-50 output goes into the Pultec input...” I have always done this, even when I was young and smart and fast. It reinforces the signal flow in your head, it eliminates almost all patching errors, and it keeps you from looking like a fucking moron during a recording session.
10) When patching in STEREO, put the patch cord for the LEFT signal in your LEFT hand and the RIGHT signal in your RIGHT hand, and then do the above: “The preamp outputs feed the compressor inputs...” Always put left in left and right in right and you’ll reduce the chances of cross patching something to like 0. I don’t know why schools don’t teach this shit. It will save your ass.
11) Another stereo hint. I always put Left side signals on Odd numbers and Right side signals on Even numbers. And I always put them beside each other. SO, if I have a stereo pair of mics as overheads, the left is plugged into 9 and the right into 10, as an example. I NEVER break this rule. Live mixing too. If something’s on the Left side of the stage I want it on the Left side of the console so I can grab it with my Left hand. It keeps everything straight in your head. Of course, if something happens to one of my hands, like it gets bitten off by a pony, then it’s mono for me.
12) Tape stuff down if you don’t want it bumped.
When last our heroes met they were discussing Dynamic Range and Nominal Level.
Dynamic Range is the space from the Noise Floor - the spot where the signal is covered up by noise (very very quiet) - to the Distortion Point, which is the spot where harmonic distortion becomes very noticeable.
Nominal Level is a semi-arbitrary spot within the dynamic range of a piece of equipment that the manufacturer has decided gives you a high Signal to Noise ratio and enough Headroom. It’s based on their knowledge of the design of their equipment and conventions in the audio industry. What’s problematic about Nominal Level is that what is actually being measured can be different on each piece of equipment.
Manufacturers put meters on their products, and now is the time to understand how meters relate to nominal level.
SPEEDOMETERS and the VU METER
If you’re in a car in New York in the US, and you’re driving at the speed limit, your speedometer would look like this:
If you’re out west in Wyoming, the speed limit is higher, and the speedometer might look like this:
If you’re in Germany on the Autobahn, the speedometer might look like this:
But you! You’re a crazy travelin’ bastard! Bit confusing if you’re driving in Wyoming at 70 mph, then NY at 55 mph, then you go to Germany and it’s not even miles per hour, it’s now kilometers per hour, then suddenly you’re in NY, speeding around on the highway and a Scorpions tune comes on and you have a flashback to Germany and suddenly you’re going 130 mph. A cop pulls you over, tasers your stumpy ass, etc. True story.
So, let’s say we invent the UNIVERSAL SPEEDOMETER. And it looks like this:
It doesn’t show how fast you’re going in some specific unit, it just shows you how far you are away from the speed limit, and it’s calibrated to wherever you’re driving. In NY, we set the Universal Speedometer to “55” and if we put the needle at 0 we’re at the speed limit. In Wyoming, 0 means we’re going 70mph, in Germany 0 means we’re going 130kph. So, now, no matter where we go to drive, with the Universal Speedometer, we get the car up to the 0 and we’re fine. Who cares about the exact number in mph or kph because we know WE ARE AT NOMINAL.
If we need to pass someone, we can speed up, the meter goes up, and we use up some of our HEADROOM to get better performance and speed and get around some car in the way. And if the meter is really low, we know we’re close to the NOISE FLOOR and driving too damn slow.
The Universal Speedometer is a VU meter. It doesn’t tell you what the nominal level is, it tells you something much more important, which is: Are you at nominal level or not. And as long as we slap a VU meter on all of our gear, we now know exactly where to park the level at: 0. 0 is nominal level.
And it really doesn’t matter what the meter looks like, if it’s LEDs or LCD or mechanical or virtual, nominal is nominal.
VU Meters and Average vs. Peak
Now, even though I sort of implied that all meters are the same, they aren’t. It’s audio. There’s always something to fuck up the simplicity.
Meters have a speed of response, and it can be different from meter to meter. Some meters are fast and others are slow. Some meters measure peak energy, some meters measure average energy.
So, let’s say I’m in a room with the lights off. The room is dark. Let’s say I flip the lights on for a split second and then click them off again. The room is bright for a moment, but then it’s dark again. So, it’s dark on average, but there was a “peak” moment of light. If I start flashing the lights on and off quickly, you might start perceiving the room as being “bright” rather than “dark,” because the AVERAGE light in the room across time is higher.
SO, if you are “set” to notice the average brightness of the room, you’ll respond one way, and if you’re set to notice “peaks” you’ll notice something else.
So, how a meter responds depends on if it’s “noticing” the average or the peaks, or some sort of in-between.
VU meters are set to notice the average power of a signal. So, if you run something with a slow attack through it, like a violin or a guitar or a voice, or an entire finished song, the meter gives you a good idea as to where your signal level is at. But with an instrument with fast transients, like a drum, the attack happens too quickly for the meter to notice. And by the time the meter responds, the transient has already gotten through, and it could be WAY above nominal and actually causing distortion, but a VU meter doesn’t tell you that.
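Here’s a sketch of that, numbers invented: a sustained tone and a fast drum-like burst with similar peaks read wildly differently once you average over time.

```python
# Peak vs. average: same-ish peaks, very different "VU-style" readings.
import numpy as np

sr = 48000
t = np.arange(sr // 3) / sr                              # ~330 ms window
tone = np.sin(2 * np.pi * 440 * t)                       # sustained note
drum = np.sin(2 * np.pi * 200 * t) * np.exp(-t / 0.01)   # 10 ms decaying burst

peak_db = lambda x: 20 * np.log10(np.max(np.abs(x)))
avg_db = lambda x: 20 * np.log10(np.sqrt(np.mean(x ** 2)))

for name, x in (("tone", tone), ("drum", drum)):
    print("%s: peak %6.1f dB, average %6.1f dB" % (name, peak_db(x), avg_db(x)))
# Similar peaks, but the drum's average reads ~18 dB lower - an averaging
# meter sits way down while the transient sails over nominal.
```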
Setting Levels to Analog Tape
A skill you had to have in the analog days was how to read the meters to get good levels on tape. As discussed, with slow transients, meters were more accurate as to level than with fast transients. So, I learned to cut drums to tape on the low side, knowing that the signal hitting the tape was actually +15dB or more above the meter reading. Vocals I would cut right at about nominal, because the vocalists I was working with were usually pretty consistent. I would cut bass right around nominal or a little higher, but if the bassist was slapping, the meter wouldn’t respond fast enough, so I’d set the levels a little lower than nominal.
With a softer song, I would actually cut the vocals hotter to tape - burn up some headroom to increase the distance from the hiss. And if I was working with a very unpredictable singer on a loud rock track, I might cut the vocals low on the meter to buy me some more headroom, unless I wanted distortion.
One thing I always did was smash heavy guitar parts into the tape. On playback they would sound huge and crunchy, and very solid - due to the tape compression. One time I was running everything so hot that the studio manager shut down my session because he thought I was damaging equipment. He brought the studio's tech in to lecture me on proper levels (I kid you not) and the tech proceeded to laugh at the studio manager.
If it sounds good, it is good.
Meter Ballistics
Fast responding peak meters are tracking the transients of signals and are basically telling you how much headroom you’re using up. This is really useful information, but it’s different than what a slower meter is telling you. Both types of meters are really useful, especially together, which is why so often a VU meter has a peak light on it.
Meter Ballistics is how fast the meter responds. You can get an idea of this by looking at the meter. If it’s a mechanical meter and it has to swing a little needle around, even if it’s really fast it’s never as fast as an LED meter can be. But an LED meter might be electronically slowed down to respond like a slower mechanical meter - virtual meters on your DAW might be set to respond to average rather than peak, too. You can also look in the manual and find out if the meter is measuring peak or something more average (look for the letters RMS, which basically means "average.”).
Often, there are two meters within a meter set, and one is measuring the average power, and the other is measuring peak. When you're metering violins, which have slow attacks, you’ll notice that the two meters read very close to each other, whereas if you’re metering drums, there will be a much bigger difference between the two.
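And a sketch of the ballistics themselves: model the slow meter as a one-pole averager that takes roughly 300 ms to swing up (the classic VU figure - an assumption here, check your meter’s spec), then hit it with a sustained signal versus a 1 ms transient.

```python
# A slow meter is basically a smoothing filter on the rectified signal.
import numpy as np

sr = 48000
tau = 0.300 / 4.6                      # one-pole constant: ~99% rise in 300 ms
coef = np.exp(-1.0 / (tau * sr))

def meter_reading(x):
    m, highest = 0.0, 0.0
    for v in np.abs(x):
        m = coef * m + (1 - coef) * v  # the needle easing toward the level
        highest = max(highest, m)
    return highest

step = np.ones(sr)                       # sustained full-scale signal
burst = np.zeros(sr); burst[:48] = 1.0   # a 1 ms full-scale transient

print("sustained signal reads: %.2f" % meter_reading(step))   # ~1.00
print("1 ms transient reads:   %.3f" % meter_reading(burst))  # ~0.015
# Same level hits the meter; the slow needle never sees the transient.
```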
SO... now you know pretty much exactly what that meter is doing, and you know what nominal level is, and how this all fits together. Next week, we’ll talk gain staging and setting levels.
Thanks for all the good feedback. Much appreciated.
Let’s put the whole thing together today. How Noise, Distortion and signal level all fit together. How it all works.
DYNAMIC RANGE
All devices in audio - from a human voice to a mic to a preamp to a converter to a console to a power amp to a speaker to a human ear - have a lower limit and an upper limit.
The lower limit is self-noise, the noise floor.
The upper limit is the distortion point, which is the spot that harmonic distortion becomes a big problem. By the way, the manufacturer decides what is unacceptable harmonic distortion.
So, that is the playing field in audio - from the Noise Floor to the Distortion Point. And we call that area the DYNAMIC RANGE.
Dynamic range can be huge. Your ear has a dynamic range of maybe 140 dB. You can hear from an ant picking its nose to something as loud as a gunshot about a foot from your head. Truly, though, if you’re listening to things ON PURPOSE and without HEARING PROTECTION louder than 112 dB you’re crazy. We will, of course, talk about dB later... much later...
Mics have dynamic ranges around 120 dB, which is about the same as a human ear on average. Mic preamps have dynamic ranges all over the place, from as high as 130 dB to 90 dB or even less. Digital audio recordings can have dynamic ranges well over 100 dB, depending on how they’re designed. Analog tape sorta sucks - lotssss of hisssss - dynamic range can be in the 70’s down to the 60’s even with noise reduction. Radio stations barely hit 50 dB of dynamic range.
HOW TO SET LEVELS BADLY
Let’s learn how to be a shitty engineer quickly...
Our signal chain is a loud band (150 dB D/R), into a good mic (120 dB D/R), into an ok preamp (90 dB D/R) onto a tape track (70 dB D/R) and then out through a radio station (50 dB D/R).
Now, common sense would suggest you set the levels as high as possible. Especially when analog recording, the idea was to hit the tape very hard, making sure most of your signal was way above the noise floor, so the only time you’d hear hiss was when the song was very quiet, like at the beginning or the ending. So, let’s just do that, set everything right below distortion:
Notice the noise floor going up? Congratulations, shitty engineer! You’ve lost all the quiet stuff in the noise! By the time it hits the radio you can’t even hear the fadeout of the song and the hiss and noise has gotten really loud. Get fired by the band!
Let’s do the opposite. Let’s set the levels so that we DON’T lose all the quiet stuff. We'll keep our signal as far above noise floor as possible...
Now you see the dynamic range squashing down and clipping the waveform, adding harmonic distortion. You lose again! Now your recording is distorted from almost the moment things begin, and it just gets worse and worse... Shitty engineer, nicely done. Fired by band. Work for uncle loading boxes.
HEADROOM
We need to find a place within the dynamic range to set our levels so we avoid being a shitty engineer. Let’s reason this out.
Ok, we do want levels as high as possible, because noise sucks. But what if something unexpected happens? If we set a mic preamp level to right under distortion, and then the vocalist moves in a little closer to the mic, or sings a tiny bit louder, the increase in power can clip the mic preamp, and you’ll hear distortion. So, we need a little bit of safety margin up there so we have some room in case something gets unexpectedly loud. That’s HEADROOM.
What are typical headroom figures? It’s all over the place. On analog tape decks we were usually recording to give ourselves about 9 dB of headroom on the tape. Mic preamps usually have very good headroom - from 18 dB to 26 dB or even higher. Like dynamic range, it’s variable and depends on the type of gear and the manufacturer, and the engineer.
NOMINAL LEVEL
We want to set our levels as high as possible to keep our S/N (Signal to Noise) ratio as high as possible. And we don’t want to clip, so we’re going to give ourselves a little room on top - headroom. We call this level NOMINAL LEVEL.
What usually happens is we have the musician play or sing, and we watch the meters and listen, and we set the level so that we have some headroom just in case. It’s sort of an average, pretty high level. It’s different for different types of gear, and it's usually determined by the manufacturer. The signal won’t sit exactly at nominal level the whole time, because when recording or live mixing, the signal (the band, the vocal, the drums, etc.) will go up and down, depending on the dynamics of the player and the song. But there is pretty much a consistency to everything, right? Shit’s not usually really loud and then really quiet unless the players suck or it’s some sort of avant-garde weird-ass thing happening musically.
So, here is what it all looks like:
Dynamic Range is from Noise Floor to Distortion Point.
Nominal Level is a High Average level setting.
Signal to Noise Ratio is from Noise floor to Nominal.
Headroom is from Nominal to Distortion Point.
Signal to Noise Ratio + Headroom = Dynamic Range.
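To pin that last line down with numbers (invented, but typical):

```python
noise_floor = -90.0       # dB, where the hiss lives
distortion_point = 24.0   # dB, where clipping starts
nominal = 4.0             # dB, where we park the signal

dynamic_range = distortion_point - noise_floor   # 114 dB
s_n_ratio = nominal - noise_floor                # 94 dB
headroom = distortion_point - nominal            # 20 dB

assert s_n_ratio + headroom == dynamic_range     # 94 + 20 = 114
```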
What are typical nominal level figures? It depends. It depends on the type of gear you’re working with. The nominal level of an analog tape deck is measured one way, while the nominal level of a mic preamp is measured another way, while the levels on your DAW are measured yet a different way.
If you are thinking, “Wait. The nominal level is basically different all over the signal chain. Manufacturers decide where it is, engineers decide where it is, the type of gear affects it. Jeez Louise, how do I set levels so everything sounds rockin’ good?”
You use meters and common sense. And experience.
HOW TO SET LEVELS CORRECTLY
To not be a shitty engineer, you set your levels differently for each piece of gear, adjusting to take into account the dynamic range of each piece of gear. In other words, the nominal level changes, and you have to do things to control your dynamic range. Like this:
Notice that we’re reducing the dynamic range from both the top and the bottom. Instead of letting our signals go beyond the distortion point or below the noise floor, we’re controlling things. We’re controlling the dynamic range of the signal across the signal chain. That sounds like compression doesn’t it? And yes, that is certainly part of what is going on. But there is also recording technique involved to make sure all the pieces of gear fit together in the best way possible for the signal.
That’s GAIN STAGING. More on that at a later date!
OK! It’s been a pretty long slog through this stuff, but hopefully you’re a bit clearer on it all. I can be confusing, and usually when I explain it I can wave my arms around and demonstrate stuff and it makes more sense and I look like a nut.
I can’t emphasize enough how important knowing that diagram - Dynamic Range with Nominal in the Middle - is. If you can hold that diagram in your head while you’re setting up gear and getting your levels, your recordings will improve immensely. I want you all to be great engineers.
Previous posts have talked about what happens when audio signals get too powerful, too loud. Distortion is what happens. That ain’t the same pork chop is what happens. For a refresher go here.
This week, let us look at kinda the opposite. If distortion is what we hear when things are too much, what is at the other end, the quiet, weak side of things?
Noise is at the other end.
NOISE and SIGNAL
Noise is anything that you’d rather not hear, basically. And Signal is the thing that you actually do want to hear.
- Watching sports on TV and hearing the announcer clearly = signal
- Spouse/Significant Other/Toddler w/Poopie Diaper/Pet Cat in Heat = noise
When we like how noise sounds, it isn’t noise anymore. It becomes signal.
Example: you’re recording drums. The snare is leaking into the tom mics. The snare leakage is noise. So, you put a bunch of gates on the tom mics and spend 45 minutes getting rid of all the snare leaking into the toms.
Cue the band, cue the drummer. Do the count in 1 2 3 4...
And it sounds like shit. The snare sounds like you mic’d up this monkey:
Because the leakage into the tom mics was actually HELPING the snare and the whole drum set. So, you pull off the gates, and now that leakage, previously noise, has become part of the signal.
At a live show, the audience is noise, the sound of the band is signal. In your car, the radio is signal and the sound of the engine, the wheels on the road, the wind rushing past the car is the noise. And suddenly, you hear a “pop” and then a flapping sound outside the car, and now the radio becomes the noise, so you turn it down to hear if you have a flat tire, because the road sounds are now the signal.
Please note that when Noise gets in the way of hearing the Signal there is a Problem.
SELF-NOISE and the NOISE FLOOR
Self-noise is the noise that a device makes when it’s turned on and power is running through it. If you aren’t running a signal through your console or your interface, and you turn up the speakers, you’ll hear hiss. Hopefully, the hiss will be very quiet, and you won’t hear hum along with it.
Hiss is the sound of the device working, the sound of electrons running around the circuit. This hiss is self-noise. All devices that have power flowing through them make noise. Your body generates self-noise, unless you’re dead.
At night, if it's really quiet, you might hear a whooshing in your ears and perhaps a very very quiet whining sound. If you put a cup or a shell to your ear you’ll easily hear the whooshing — remember as a kid when you put a shell to your ear and could hear the ocean? It’s not the ocean. It’s blood flowing through your ear, reflected back into it by the shell. You’ll hear the same whooshing if you put a coffee cup up to your head - and no, it’s not a barista yelling or a tractor on a coffee plantation in Guatemala.
The whoosh is your blood flowing. The whine is your nervous system working. This is really quiet stuff, about the quietest things you can hear. We call this the Threshold of Hearing. This is like the sound of an ant picking its nose.
Now, you don’t normally hear this stuff in your day-to-day life because everything around you is noisier. Noise causes masking when the signal gets too quiet and falls below the noise. The limit to how quiet a signal you can have is how low the noise is. You can’t really go below the noise, so that bottom limit is called the Noise Floor. You can’t get lower than the floor, right?
The noise floor of a piece of audio equipment is typically really low. Guitar amps have more noise — how often have you heard a sustained guitar note decay away into the hiss of a guitar amp? It goes below the noise floor and then you can’t hear it anymore.
The noise floor is a shifting thing. When you’re mixing live, is the hiss through a PA system really an issue? It might be during the sound check when the venue is empty. But once it fills up with people, the noise floor of the audience is considerably higher than the hiss of the PA and effectively masks it. And if your PA hiss is heard above the audience... jeez, you suck, you stumpy bastard.
SIGNAL to NOISE RATIO
You’re in the coffee shop talking to a friend. The friend who is talking is the SIGNAL — the thing you want to hear, and the background chatter, espresso machine sounds, etc., are the NOISE — the things you don’t want to hear. The louder the coffee shop gets, the louder your friend will have to be such that you can hear their signal over the noise.
Signal over Noise... let’s call this the Signal to Noise Ratio. S/N ratio. If this is a low number, the noise is loud and it's intruding on the signal. If this number is high, the noise is quiet compared to the signal. So, now you understand this bit more:
You still might not fully understand decibels, but we’ll get to that.
The S/N ratio is different for different types of equipment. It’s comparatively huge for microphones and really good preamps, and much less so for cheaper equipment, guitar amps, PA systems, etc.
Noise builds up. When recording, the ambient sound of the studio feeds into, oh, say a condenser mic, which adds some hiss, and then into a preamp, which adds a little more hiss, and then into various converters and devices, all of which add hiss. And all of this noise adds up, and that’s the noise floor. Then someone whacks a snare out in the studio, and that goes slamming through everything and it’s much louder than the noise. High S/N ratio. The snare rings out for a moment, then decays into the ambience of the room. And once it decays to a certain level, we’ll notice the noise again. S/N ratio is a fluid thing.
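In numbers, S/N is just the dB gap between the two. A quick sketch with invented levels:

```python
import math

signal_rms = 0.5     # invented: the snare hit coming through the chain
noise_rms = 0.0005   # invented: summed hiss of room + mic + preamp

print("S/N: %.0f dB" % (20 * math.log10(signal_rms / noise_rms)))  # 60 dB
# As the snare decays toward the noise level, that number shrinks toward
# 0 dB and the hiss takes over - the fluid S/N ratio described above.
```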
CAPTAIN OBVIOUS
This is frickin’ obvious but it must be said: you usually hear noise when things are quiet, when the signal is low and the S/N ratio is small.
Another frickin’ obvious thing that must be said: analog recording techniques were mostly developed to compensate for noise, especially tape hiss.
Tape hiss... the sound a piece of magnetic tape makes as it slithers over the heads of a tape recorder. The more tracks you have, the more tape hiss you’d get. Dolby, DBX noise reduction, noise gates, etc., were all developed to control tape hiss.
Digital recording was developed to totally get rid of tape hiss.
I can’t tell ya how much time I spent in my engineering career trying to get rid of noise. Automating mutes. Gates. Yada yada yada. I never really used DBX systems because I thought they sounded terrible, and if I was recording at a nice high level to really good tape, and was careful with muting, I could make a virtually hiss free record.
You can’t hear hiss when the band is cranking.
I cannot understand why anyone would make a plug-in that adds “authentic analog noise” to the signal chain. Restaurants are allowed to have a very low percentage of cockroach bits and rat crap in the food. Would you add cockroach bits and rat shit when you cook at home to get that “authentic restaurant taste?” Fuck no.
Next week we’ll put all of this together and figure out dynamic range and metering.
Be well. Stay safe.
Distortion, in the simplest sense, is when what comes out is different than what goes in. Think about eating dinner and what happens six hours later.... that ain’t the same pork chop, is it?
Something in the process, in the piece of equipment, is changing the signal.
Usually, what happens is that the piece of equipment runs out of ability to accurately reproduce the input signal. But what the heck does this mean, actually? Let me give you a few examples. If you get this clear in your head, so many things will suddenly make sense.
Let’s Look at a Speaker
A simple speaker is a cone of paper that’s being pushed forward and backward by an electromagnet (the coil). There’s a flexible springy area around the cone of paper called the surround, and the base of the cone is attached to another springy thing called a spider. The surround and the spider are attached to a frame called the basket. The spider and the surround allow the cone to move forward and back while supporting it in the basket. When the cone moves forward and back it pushes air forward and back. The coil is what causes the cone to move - pushing it forward and back, depending on the signal that’s fed into it. Like this diagram:
If you feed in a low frequency signal, the cone moves back and forth slowly, and as the pitch goes up, the cone moves back and forth faster and faster. If you feed a weak signal in, the cone moves back and forth over a small distance.
If you crank the power up (the volume) the cone moves back and forth and covers a longer distance.
However, the cone can’t move an infinite distance back and forth. There will come a point when the surround and the spider are completely stretched and the cone can’t move any further. The speaker has run out of ability. Does that make sense?
When the cone has ability to move, it does so, and it can accurately track the up and down of the waveform. When the surround and spider run out of stretch, however, the cone can’t track the waveform. It moves as far as it can, can’t go any further, so it essentially jams - it stays still. And the waveform that comes out of it is now different than the waveform that went into it. And if you look at the waveform the speaker is putting out, it’s clipped — it’s squared.
Remember last week, when we mixed odd order harmonics in with the fundamental and caused a square wave? This is exactly what’s happening with the speaker, but in reverse: its movement is “jammed,” and it squares and generates a bunch of distorted crap — harmonic distortion crap. Oh my, that doesn’t look like the original pork chop, does it?
So, a speaker has a certain amount of ability to move and reproduce a waveform in a linear manner. If we put in too much power, we run the speaker out of ability, and the result is distortion.
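Here’s that odd-harmonics business as a sketch you can run: sum odd harmonics at 1/n amplitude and the waveform flattens toward a square, which is exactly the shape a jammed cone puts out.

```python
# Build a square-ish wave from a fundamental plus odd harmonics.
import numpy as np

sr, f0 = 48000, 100
t = np.arange(sr // f0) / sr             # one cycle of a 100 Hz wave
wave = np.zeros_like(t)
for n in (1, 3, 5, 7, 9, 11):            # odd harmonics only, 1/n amplitude
    wave += np.sin(2 * np.pi * f0 * n * t) / n

# The top flattens toward the ideal square-wave plateau of pi/4.
print("plateau level ~ %.2f (pi/4 = %.2f)" % (np.median(wave[wave > 0]),
                                              np.pi / 4))
```

Run the logic the other way and you get the speaker story: flattening (clipping) a sine is the same thing as adding those odd harmonics.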
How much ability does a speaker have?
It depends on things, but to look at it very simply, if a speaker is rated to 150 watts, it has 150 watts worth of ability.
Let’s Look at an Amplifier
Ok, so a speaker is rated to 150 watts, so that means an amplifier which is rated to 150 watts... hmmm... that means the amp has 150 watts worth of ability to reproduce the signal, right?
EXACTLY!!! That is exactly right. Amps - and not just power amps or guitar amps, but the little tiny amplifiers stuffed into the circuit boards of your recording console - have only so much ability. They run out of ability to reproduce a signal, and when that happens, the result is distorted output, non-linear output.
As a signal feeds in, the amplifier uses power to reproduce it. As we turn up the input signal, the amp needs more power to track the waveform in a linear manner. But there isn’t infinite power. The amp isn't connected directly to the sun. Eventually, the amplifier cannot draw any more power, and it loses its ability to track the waveform, and it squares the wave, just like a speaker that runs out of springiness.
Amplifiers use power to reproduce signal, and if they don’t have enough power, they generate harmonic distortion. A simple way to look at it, but a very useful one.
Everything Runs Out of Ability
A singer can only get so loud before their vocal cords can no longer move — they physically slam into each other in the voice box. The vocal cords run out of ability. The resulting vocal has a growl to it — distortion. Harmonic distortion. And if the singer keeps doing this, they start losing their voice, and if they do it enough, they can do permanent damage, just like you can blow a speaker out, or blow up an amplifier.
Your ears. Your eardrum can only move so far. The little bones in your ear (there are three little bones in each) can only move so far. The little hairs in your cochlea which turn sound waves into nerve impulses can only move so far. They run out of ability to move, to track the waveform as it gets loud, and the result is distortion. And you can hear this distortion, and you can feel it. And if you consistently run your ears out of ability you’ll get tinnitus. Or, if the waveform is loud enough, you can blow your eardrum out — literally tear it apart.
Stuff certain mics into a kick drum and one good hit can break the diaphragm in a split second, and if it doesn’t break it, the mic will clip the waveform as it runs out of ability to move and starts generating harmonic distortion.
Do digital processors run out of ability? Yes. Digital processors do math, and you can basically use up all of the processor’s ability to perform mathematical calculations. The result, however, isn’t harmonic distortion. It’s a loud click or static "scratching" sound, and if you feed that through a speaker, the speaker runs out of ability to reproduce it almost immediately, which is why it sounds awful and is really bad for your speakers. And your ears.
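A related flavor of digital running-out-of-ability, easy to demo: fixed-point samples pushed past full scale wrap around instead of rounding off. This isn’t the CPU-overload click described above - it’s numeric overflow - but it makes the same point: digital doesn’t fail gracefully.

```python
# Push a rising waveform peak past the 16-bit ceiling of 32767 and the
# samples wrap around to large negative values.
import numpy as np

peak = np.array([30000, 32000, 34000, 36000], dtype=np.int64)
wrapped = peak.astype(np.int16)   # cast wraps modulo 2**16 on overflow

print(wrapped)   # [ 30000  32000 -31536 -29536]
# An analog stage would round those peaks off; here the sample slams from
# near +max to near -max in one step, which plays back as a harsh click.
```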
Everything runs out of ability, and when it does, you get that unrecognizable pork chop.
A short post this week, but an important one if this is stuff you’re trying to wrap your head around. Hit us up on Facebook or Discord if you’ve got a question.
Last week I wrote about Bias, and how if an amplifier or an audio component isn’t biased correctly it might not work or it might cause a lot of harmonic distortion.
This week: what the heck is harmonic distortion, and what the heck is a harmonic?
What’s a Harmonic?
So, first of all, what is a harmonic?
If you take a note, like a C, and play it on a guitar or a piano, because of the physics involved, not only do you hear the note C, you also hear, very quietly, other notes that are mathematically related to the C you're playing. Like, you might hear a C an octave higher, and then another octave above that, and you might hear an E and G mixed in there as well. It's actually quite a bit more complex than that, but the point is that if you play a note on virtually any instrument, you get more than the single note that defines the perceived pitch. That other stuff is the harmonics.
I found this video, which is an ok explanation - it could be clearer, but if you want to take a moment, a quick watch might help you understand some of the physics involved.
The harmonics of a note are caused by the physics of vibrations, and by the construction of an instrument, or of a person's face if we're talking about the harmonics of a sung note. And, in fact, the harmonics of an instrument are a huge factor in why an instrument sounds the way it does. A guitar with steel strings has a different set of harmonics than a guitar with nylon strings. The two types of guitars have a lot in common in terms of harmonics — you can tell they're both guitars — but the steel string is typically brighter and more metallic, and that's because of its harmonics.
The harmonics have a mathematical relationship to that C you played (we can call that the fundamental), and the particular pattern of harmonics is what makes an instrument recognizable as an instrument. And some patterns of harmonics sound better to our ears than other patterns of harmonics.
Also worth noting: harmonics tend to be high frequency information, and we will see why that is important in a bit.
What’s Harmonic Distortion?
Harmonic distortion is when harmonics are added to a sound, a signal, that aren’t there in the original signal.
Back to playing a C. If we played a C on a very simple instrument, like a flute, you would get a very pure sounding C — it wouldn’t have a lot of extra harmonics happening, unlike a guitar, for instance. The complex body and physics of a guitar actually add harmonics to the C. It’s a bizarre way to think of it, but you can consider a guitar a generator of harmonic distortion. So is a piano, a trombone, a human voice, etc. These all are sort of "harmonic distortion generators". But we want that particular harmonic distortion - it’s how those instruments sound.
Electronic components (amplifiers, etc.) also add harmonics to a signal. Usually a well-designed circuit adds a very, very tiny amount of harmonics, and we really can’t hear it because it's such a small amount. That is also harmonic distortion. A badly designed circuit can add enough harmonic distortion that one can really hear it. There are amounts of harmonic distortion that can be very noticeable, and certain patterns of harmonics are more noticeable, and some patterns sound good, and some sound like shit.
Harmonic Distortion = Sonic Fingerprint
All the elements in an audio recording signal chain add some amount of harmonic distortion. Microphones, speakers, preamps, compressors, power amps, guitar amps, effects pedals — all of these things add harmonic distortion. Some are designed to add as little as possible, and others are designed to add huge amounts. Microphones sound different from each other, in part, due to the harmonic distortion they add, as do speakers, mic preamps, etc.
As mentioned earlier, some patterns of harmonics our ears like better than others. Tubes, whether in compressors or guitar amps, tend to have harmonic distortion that our ears like. Tubes are often described as sounding "warm." That warmth is the particular mathematical pattern of the harmonics a tube circuit adds.
Solid state equipment also has distinctive harmonic patterns that it adds to a signal. That's part of the reason a Neve sounds like a Neve, and a Mackie sounds like a Mackie.
THD
THD stands for Total Harmonic Distortion, and it's a measurement of the amount of harmonics a piece of equipment adds to a signal passing through it. The manufacturer of the equipment will usually specify this as a percentage at certain frequencies, something like, "Less than 0.5% THD from 20 Hz to 20 kHz at full rated power." Some manufacturers specify it in much looser terms: "Less than 1% THD." Generally, the better the gear, the lower the percentage of harmonic distortion, and the more specific the manufacturer will be about it.
What's a lot of harmonic distortion, and what's a little? Depends. 0.5% is pretty good for a tube component, but pretty awful for a solid state component. A really high end solid state device can have incredibly low harmonic distortion - like 0.002%.
Tube mics are typically in the 1% THD neck of the woods. 1176 Limiters have around 0.5%. A Neve 5211 is down around 0.0015%. Obviously, guitar amps designed for distortion have much higher amounts of THD. And also obviously, the more you turn stuff up (increase the power), the more you increase harmonic distortion.
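If you're curious about the math behind that percentage, it's simple: combine the levels of all the added harmonics RMS-style and divide by the level of the fundamental. Here's a minimal Python sketch, with made-up harmonic levels:

```python
import numpy as np

def thd_percent(levels):
    # levels[0] is the fundamental; the rest are the 2nd, 3rd, 4th...
    # harmonics, all measured in the same units.
    fundamental = levels[0]
    harmonics = np.asarray(levels[1:])
    return 100 * np.sqrt(np.sum(harmonics ** 2)) / fundamental

# A made-up tube-ish device: strong 2nd harmonic, weaker 3rd and 4th.
print(f"{thd_percent([1.0, 0.008, 0.004, 0.002]):.2f}% THD")   # ~0.92%
```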
But, THD is really only a small part of the harmonic distortion story. There's also the "sound" of the harmonics added - the math of their pattern - and that makes a huge difference.
Even vs. Odd Harmonics
I made a video about this next bit, so you can watch the video and skip ahead, or watch it and then read so you understand it all that much better.
Quickly, let's look at the way a string vibrates.
A vibrating string is very complex. Back to our C: if you fret and pluck a C on a guitar, you'll get a nice loud fundamental, vibrating at 261.63 Hz. Let's round that to 262 to make the math easier.
So, we have a string vibrating at 262 Hz, but it's also vibrating at twice that - 524 Hz. It isn't vibrating with as much power, though, so this 2nd harmonic is much quieter than the fundamental.
There's also a harmonic vibrating four times as fast as the fundamental — the 4th harmonic, at 1048 Hz.
When these vibrations all happen on one string, the result is a much more complex waveform than any fundamental or harmonic by itself.
There are also other math things happening there. There's an E in there - the major third - which sits at a frequency ratio of 5:4 to the fundamental, about 1.25 times its frequency.
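Here's the whole pile of math in one tiny Python sketch - the harmonic series of our rounded-off 262 Hz C, with approximate note names (the 7th harmonic lands noticeably flat of a true Bb, which is part of why the high odd harmonics sound sour):

```python
fundamental = 262   # our C, rounded from 261.63 Hz

names = ["C (the fundamental)", "C, an octave up", "G", "C, two octaves up",
         "E", "G", "flat-ish Bb", "C, three octaves up"]
for n, name in enumerate(names, start=1):
    print(f"harmonic {n}: {n * fundamental:5d} Hz   ~ {name}")
```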
These harmonic relationships that sound good to our ears tend to be even number multiples, often called even order harmonics. Our ears tend not to like odd number multiples - 3, 5, 7, etc. These odd harmonics sound kinda ugly - the 7x is especially dislikable - and they tend to square the wave off...
In general, our ears think even harmonics sound better than odd. In general, tube equipment generates a lot of even harmonics. Does that explain to a large extent why everyone likes the sound of tube amps?
Many Things Explained
Understanding some of the math of harmonics also explains why distortion seems to make something sound brighter: because what you're adding is harmonics ABOVE the fundamental, and those harmonics stack up and increase the apparent high frequency tonality of a sound. It also explains why too much harmonic distortion can sound harsh and painful — it's causing a lot of high frequency activity, and our ears don't like that very much.
Now, some of you might be thinking, "Even on a really good day people can only hear up to 20kHz. If I have something at 8kHz, then its harmonics are at 16kHz and 32kHz and other frequencies, all high, and most of them beyond the range of hearing. How can this possibly affect what we hear?"
The answer is that we can sense frequencies we can't clearly hear, and very high frequencies - over 20kHz - can affect the equipment we are using, especially digital stuff, and we can hear that effect.
Some stuff you just have to take on faith, you stumpy bastard.
SO... download a PSC demo or buy one, flip it around to the backside, and turn up the PREAMP gain until you hear some distortion. Then swap around the three different sets of tubes we've thoughtfully included, and you'll hear the quality and frequency response of the distortion change. This is because we've modeled the different harmonic distortion characteristics of each into the PSC.
On the AIP, you can do this: switch it on and turn up the input gain until you hear distortion.
Go around to the back panel and click between TUBES, TAPE and SOLID STATE to hear three different variations of harmonic distortion. Turn up the INPUT TRIM to make the effect more easily heard. Don't forget that turning up the trim can add a lot of gain and make things louder. Use the OUTPUT TRIM to readjust output gain down.
Next week, we'll talk about why cranking things up causes an increase in harmonic distortion, and we'll start talking about some recording techniques that take advantage of the physics involved.
We’ve gotten really good feedback on our blogs, and we're glad a lot of you have been finding them helpful.
But in much of the feedback, people ask questions, usually about technical terms or issues. I try to write things such that they are "self-explanatory," and you don't need to google terms, but there are some concepts that require going deeper. And our plug-ins offer more possibilities and better performance if you understand what's going on under the hood - or, in the case of our plug-ins, around on the other side.
So, for the next few weeks, I'm going to address some of these technical concepts in an easy to understand way. There will be some details I'll gloss over, and a few things I'll simplify, but conceptually, everything I'll write will be useful and applicable. The technical stuff is important to know and apply - it's the reason we call ourselves Audio Engineers, because it's engineering.
LET'S START WITH BIAS
Bias. There are reasons I want to start here, rather than something more elementary like dynamic range or “what is a dB” or some such. If you understand bias, you’ll understand a lot of other concepts, and things like dynamic range and harmonic distortion will actually make more sense when we get to them. And if you understand bias, our plug-ins will make more sense to you, especially since almost all of them have a tweakable bias control on them.
LOTS OF AMPLIFIERS
Analog recording equipment is made up of a bunch of components, things like tubes and transistors and transformers, etc. And digital plug-ins are all simulating the characteristics of those analog components.
Generally, in a piece of analog gear, no matter if it is an EQ or a compressor or a mic preamp, the heart of it, the thing that makes it work, is some sort of amplifier. So, for the rest of this article, when I write amplifier or component, I am NOT referring to a guitar amp, or a mic preamp or a stereo power amp; I'm referring to a little circuit thing stuffed down in all the analog gear you will ever run into. A recording console has literally thousands of amplifiers in it.
Amplifiers in equipment can be based on tubes, or on solid state components like transistors or op amps, or some sort of combination. Obviously, if you've got a bunch of amplifiers in a device they're going to contribute a lot to the sound and character of the device, which is why tube EQs and compressors sound "tubey" and Neve EQs sound "Nevey." The amplifiers inside the gear impart a particular sound.
AUDIO CIRCUITS HAVE A SWEET SPOT
Amplifier circuits of any type — tube or solid state — actually don't want to work properly. In some cases they don't want to work at all. They are very particular about the amount of input fed into them, and they can be very particular about power in general. And unless power is handled just right, a component might not work, or work like ass, or work inefficiently and burn out quickly. They have a sweet spot.
If you feed in too little power, you’ll be below the sweet spot, and for a lot of components, they simply won't work, or if they do work they're very quiet, or really noisy. If you feed in too much power, you’ll be above the sweet spot and while the component will work, it might be distorted or otherwise bizarre sounding.
Weird shit happens outside of the sweet spot. It’s like frying eggs. If you set the frying pan’s temperature too low, your eggs are going to be sitting in oil getting all disgusting without getting cooked. Nice. Oily raw eggs. If you have the frying pan crazy hot, when you drop in the egg, the oil will come splattering out, making a mess, burning the egg and your face off (if you decided to lean over the pan like an idiot). The sweet spot of the pan is the right temperature, such that the egg cooks just fast enough that you have control, and you get the egg that you want.
LINEARITY AND NON-LINEARITY
For many amplifiers, the "sweet spot" is when the response of it is LINEAR. You've probably heard this term a lot. Basically, when a circuit is linear, the signal that comes out of it is the same as the signal that feeds into it. Now, if it's an amplifier, the signal coming out might be more powerful (louder), but if the amp is linear, the frequency response of the output closely matches the frequency response of the input. In simplest terms, shit sounds the same going in as it does coming out.
If the level of power you feed in is BELOW the sweet spot area, the response is NON-LINEAR, if the device even works and passes signal. If you go OVER the sweet spot, the response is also non-linear, and what comes out of the component isn't the same as what went in.
How is the output different if the component is non-linear? Well, there can be a lot of things different about the two signals, from changes in the frequency response to changes in the envelope, but the thing engineers are usually looking at when they want to discuss linear/non-linear is Harmonic Distortion.
We’re going to spend a lot of time on harmonic distortion, but not today. For now, all you need to know is that if a device is behaving in a non-linear manner, harmonic distortion typically increases.
Recap:
Linear: what comes out is the same as what goes in
Non-Linear: what comes out has been changed, and is different from what goes in.
AMPLIFIERS ARE LAZY
Now, here's the problem, and this is true for many of the components in a piece of audio equipment. They only behave in a linear way across a small range of power. In many cases, this range is TINY. Outside of this range, the component is non-linear. So, the big trick to designing an analog preamp or a compressor is to make sure all of the different amplifiers are getting a power level that makes them linear, and that level might be different for many of the components involved. Again, if you're below that tight power range, the component might not even work, and if you're above it, the component will add distortion.
So, for most amplifiers, there needs to be a BIAS signal added to it, and this makes the amplifier play nice with the audio signal. The type of bias signal can be very different depending on the component, and the circuitry involved can be different, but in general, all bias signals push an amplifier or a component towards efficient, linear performance.
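Here's a toy model of the idea in Python - not any real circuit, just an imaginary component that's dead below a turn-on point, linear for a stretch above it, and then out of headroom. The turn-on point and bias amounts are invented numbers. Add the right amount of bias and the audio rides the linear stretch; no bias and the signal never wakes the thing up; too much and it slams into the ceiling:

```python
import numpy as np

def component(v):
    # An imaginary amplifying part: dead below 0.6, linear for a while
    # above that, then it flattens out (runs out of ability).
    return np.clip(v - 0.6, 0.0, 1.0)

t = np.linspace(0, 1, 1000)
audio = 0.3 * np.sin(2 * np.pi * 5 * t)    # a small signal

tests = {
    "no bias":   component(audio),         # below turn-on: nothing comes out
    "good bias": component(audio + 1.1),   # centered in the linear region
    "too hot":   component(audio + 1.7),   # shoved into the ceiling: clipped
}
for name, out in tests.items():
    ac = out - np.mean(out)                # throw away the DC offset
    print(f"{name:10s} worst-case error vs. input: {np.max(np.abs(ac - audio)):.3f}")
```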
BIAS: THE GUN TO THE HEAD
Bias for some amplifiers or components is basically a gun to the head. As an example, to get an analog tape deck to record, a super high pitched and very powerful bias signal is mixed in with the much weaker audio signal and actually printed to tape. This bias signal is so strong that it forces the magnetic particles on the tape to actually record. Some types of transistor based amplifiers also need to have a bias signal mixed in with the audio input signal, and then the bias signal, which you don’t want to hear, is filtered out.
In this case, bias is like... going to a birthday party place when you're a little kid and you want to go in the Ball Pit or use the Bouncy Castle or something and there's a height requirement, and you're too short. Your head needs to come up to a certain line by the door, and if it doesn't, no Ball Pit for you, you stumpy little bastard.
But, you have special Bias Shoes, that add a few inches to your height (they add power). You put them on, and now you appear tall (powerful) enough to get into the Ball Pit (linear amplifier performance).
BIAS: SETTING A CAR IDLE
Other types of amps use bias differently. In this case, the bias is sort of an efficiency adjustment. A device might work with a wide range of bias settings, but again, there is a sweet spot where it works best.
A way to think of this is to think about a car idle. When your foot is off the gas, with a normal gas powered car, the engine runs but it doesn't put out so much power that you can't control the car by just holding down the brake. In fact, if it is set right, you should be able to drive the car, albeit very slowly, just with the brake. If the idle is set too high, when you lift your foot off the brake the car jumps forward. You can set the idle so high that the car can't be stopped even with the brake slammed down. The idle can also be set so low that the car coughs and stalls, or dies when you step on the gas. If you set the idle just right, the car purrs like a kitten, can be controlled by the brake at really low speeds, and when you punch the gas it takes off and there's plenty of power to drive with.
On the Pawn Shop Comp, the preamp BIAS is on the back panel. Adjust BIAS to increase or decrease the amount of preamp distortion. And you won’t be damaging any tubes by doing this!
A bad idle setting is hard on the engine, hard on the transmission, burns more gas and makes the car really hard to drive. Likewise, for some types of amps, like a tube amp, if the tubes are biased correctly the amp is quiet, has plenty of power when you need to rock out, and the tubes have a long life. Set the bias wrong and you’ll burn out tubes, cause excessive distortion, or the output might sound dull and lifeless.
ADJUSTING BIAS
First of all, what you SHOULDN'T do is open up your mic preamp or your vintage tube compressor, locate the trim potentiometer that adjusts bias and then dick around with it. In physical audio gear, bias is generally set at the factory and it's not something the average person should deal with. Now, as gear ages, bias settings can drift, and as they do, the performance of the piece of gear will change. In some cases, the drift might make things sound better, and in other cases, it might make it sound worse.
But with virtual equipment, like Korneff Audio's Talkback Limiter, or Pawn Shop Comp, there's a bias potentiometer that you can adjust. And as you might think, if you turn the bias counterclockwise, the circuit's performance changes one way, and if you turn it the other way, it changes in yet another way.
On the Talkback Limiter, the BIAS trim pot on the back panel, to the right, sets the performance of the FET compressor circuit. It is preset at an optimal point that strikes a balance between low distortion and high output. If you increase BIAS, the gain and compression effect increase, but harmonic distortion will increase, too. Turning it down will lower gain and distortion, but the compression circuit will work unpredictably, which is kinda cool.
What you're doing, in analog terms, is adjusting the overall output and linearity of the circuit. With one of our plug-ins, you're adjusting values in a computer algorithm that will change the harmonic distortion characteristics of the signal. Depending on the plug-in and the audio signal you're feeding it, you might even get changes to the envelope of the sound, the attack and release of the compressor, etc.
Dan especially loves tweaking bias on his vintage analog equipment and analog gear he makes. And he wanted to give you guys an experience of what that might be like, and how it might affect audio signals. All without the danger of blowing things up or getting electrocuted. In virtual Korneff Audio World, by all means click the Korneff nameplate, go around back and tweak away.
So that is BIAS. It’s a signal that makes an analog audio device work efficiently and have a linear output. If it's not set right, things will either not work at all or sound distorted or like ass in general.
Bias in a nutshell. Now, go be the damn audio genius I know that you can be.
If you have questions, feel free to post them up on Facebook or use the contact form up top and send us an email.
Most recordists use digital reverb these days, but a lot of the programming of a digital reverb is based on either environmental reverb or mechanical reverb simulators.
Environmental Reverb - sound waves bouncing around a hall or a room, or a reverb/echo chamber.
Mechanical Reverb Simulators - using metal and speakers and pickups to “mimic” naturally occurring acoustic reverb.
This article covers Live Room, Chamber, Plate and Spring reverb — how they sound, how they work, where you might find them useful in a recording, etc. There are some musical examples to listen to, and we've made a "cheat sheet" for our Micro Digital Reverberator that will help you when you select programs on it.
But first...
Quick Explanation of Reverb
A sound travels out from a source, moving at the speed of sound, which is about 1'/ms (one foot per millisecond). It strikes a surface, like a wall or a cliff, bounces off of it and comes back to our ear, still moving at the speed of sound. If the wall was 20' away, it would take about 20ms for the sound to travel to the wall, and then another 20ms to travel back. The total time of the echo would be 40ms. If the wall absorbed a bunch of the sound's high end, the echo would sound less bright.
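Since sound moves at roughly a foot per millisecond, the arithmetic is almost insultingly easy. A two-line Python sketch, using the same 1'/ms approximation as above:

```python
def echo_time_ms(distance_ft):
    # Round trip: out to the surface and back, at ~1 foot per millisecond.
    return 2 * distance_ft

print(echo_time_ms(20))    # 40 ms: the wall from the example above
print(echo_time_ms(550))   # 1100 ms: a distant cliff, a full-blown echo
```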
Reverb isn’t one sound wave bouncing off one surface. It’s sound waves bouncing off floors, ceilings, walls, tables, chairs, people, etc. Reverb is thousands, even millions of echoes that happen over a period of time. Rather than hearing a distinct, clear echo, we hear a wash of sound that gradually decays over time as the sound waves are absorbed by the surfaces off of which they bounce. The frequency response of the reverb is caused by the absorption of different frequencies by the surfaces of a space. A big wood room with carpets will absorb fairly evenly, with the highs being absorbed the most. A small tiled room will tend to sound bright because more high frequencies are reflected by the tile rather than absorbed.
You can download our Cheat Sheet here. And here's a quick thing to try: Get a vocal track, put an MDR on it, and cycle through the programs marked chamber, and then listen for the qualities that are common to all the chamber presets. Do the same for plates, and then for springs, then for live spaces. You'll teach yourself to hear the differences, and then hearing reverb types on recordings, and making choices for your own mixes, will be a lot easier.
Alright, enough of that. Onward.
Halls and Rooms - Natural Reverb
Big concert halls and large rooms work well with sounds that have slow transients, like strings and orchestras. Big rooms can make drums and percussive sorts of parts sound confused and muddy. As room sizes get smaller, they become useful for adding character or thickening. Small rooms can also sound very weird and kinda ugly.
The earliest type of reverberation on recordings was caused by the space in which the recording was made. If an orchestra was recorded in a concert hall, the sound of the reverb of that hall would get recorded as well. Ditto for recording anything in live studio space, a little vocal booth, a stairwell or a bathroom, etc.
Concert halls tend to be warm sounding, without an emphasis on highs, and have decay times over 2 seconds. As a natural space gets smaller, it tends to get brighter and a bit wonky sounding. Concert halls are designed with certain frequency response and decay characteristics, while stairways and bathrooms aren’t designed with any thought of acoustics.
To my ears, most real spaces - halls, live rooms - have a logarithmic decay. That is, the reverb's energy drops off drastically and then slowly tapers away. It sounds like this diagram looks:
There is almost always a hint of the room on any recording made with a microphone, and sometimes that hint is quite pleasant, and sometimes it sucks. A little bit of room on an instrument or vocal can add a subtle doubling effect and make the instrument sound bigger (check out this article here). A room with a lot of character — a bathroom, a hallway, can make a part stand out.
Here’s an orchestral recording of some Iron Maiden. Note that the drums are boxed in with plexiglass. Live drums in a highly reverberant concert hall would be rather unintelligible.
Another thing to listen to: this Cowboy Junkies track was recorded basically live in a church around a stereo mic.
Reverb Chambers
Chambers are usually bright and have a rhythmic, repetitive quality to the decay. Chamber reverb is a classic sound on vocals, and putting chambers all over a recording will impart a vintage quality.
In 1947, Bill Putnam put a speaker and microphones in a large bathroom at Universal Recording Studios. He fed some of the session he was working on (The Harmonicats' "Peg O' My Heart") into the speaker, picked up the echoes and reflections bouncing around the bathroom with the microphones, and fed that back into his mix. Other studios followed, and soon many studio complexes were converting storage rooms or adding on spaces to make a reverb chamber.
Reverb chambers sound somewhat like a concert hall, but much less natural because they're typically much, much smaller. In order to get a longer decay time, reverb chambers were treated to be very reflective, which resulted in longer decay times, but with an unnaturally bright sound and a strange decay quality. Chambers sound like they're decaying in sections. Think of "chunks" of reverb that get quieter over time, sort of like echoes, but reverb. To me, chamber reverbs have a "cannoning" sound to them — the reverb is thumpy.
Reverb/Echo chambers seem to have a cyclical, sort of “stop and start” decay curve. Visualize it this way:
Motown, the Beatles, everything coming out of Capitol Studios in Los Angeles, and just about everything recorded by a major studio through the 1950s into the early '70s, has reverb from a chamber on it.
Chambers sound lush and articulate, and are great for vocals and adding mood. They can sound big like a hall, but have better definition. A hint of a chamber on a part can add a tangible sense of space. A lot of chamber reverb sounds otherworldly and imparts a lot of mood.
Here’s a great example of chamber reverb: The Flamingos' I Only Have Eyes for You. This recording, done in 1959 live in the studio at Bell Sound Studios in NYC, is groundbreaking in so many ways: it’s one of the first times a record was produced to deliberately have a mood and not just be a documentation of a performance.
Reverb chambers, though, have problems. Often studios built them in basements, and that required constantly running dehumidifiers to keep them from filling up with water. And depending on the studio's location, having a big, basically empty room was stupid from a real estate perspective. What studio in a major city can afford to pay for the square foot cost of a reverb chamber these days? As real estate prices went up, many chambers wound up converted into studios or office spaces.
Luckily, in the late 1950s, some smaller solutions to the problem of reverb became available.
Plate Reverb
Plate reverb is luscious and rich. It's very smooth, with a very even, almost linear decay. It's the sound of vocals from the mid 1970s right up through now, but it is best used sparingly, to highlight elements of your mix. Too much everywhere makes a mess.
Plate reverbs were usually made from a large plate of steel. A driver (basically a speaker) was screwed in somewhere around the center of the plate, and then two pickups (basically microphones) were screwed in towards the edges, on the left and right sides of the plate (stereo!). Instead of the sound waves bouncing around a room, they bounce around the plate, and the result sounds like reverb. The decay time is controlled by damping the plate (imagine holding a huge pillow against it).
The first commercial plate reverb was developed by a German company, Elektromesstechnik. They released the EMT-140 in 1957. It was a monster—8 feet long and 600 pounds, and it wasn't cheap... but it was smaller than a reverb chamber and MUCH cheaper than building a room.
Plates became increasingly common through the 1960’s and into the '70s, and it wasn’t really until the advent of digital reverb units in the 1980’s that plate reverb began to fall out of favor. If a record was made from 1966 'til 1985, there’s a good chance there’s a real plate reverb on it.
Plate reverb sounds thick, lush, and has a very even decay time. The frequency response is similar to that of a chamber — a little unnaturally bright—but without the repetitive, segmented decay of a chamber.
Try to visualize a plate reverb in a way similar to this diagram: the reverb trails with energy distributed more or less evenly across its decay:
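If it helps to see the three shapes side by side, here's a crude numeric sketch of the decay curves described above - an exponential drop for halls, the same drop in stop-start "chunks" for chambers, and a more or less straight line for plates. The numbers are invented; real rooms and real plates are messier:

```python
import numpy as np

t = np.linspace(0, 2.0, 9)                        # seconds into a 2-second tail

hall    = np.exp(-3 * t)                          # big early drop, long quiet taper
chamber = np.exp(-3 * np.floor(t / 0.4) * 0.4)    # same drop, but in "chunks"
plate   = np.clip(1 - t / 2.0, 0, None)           # energy spread evenly to the end

for name, env in [("hall", hall), ("chamber", chamber), ("plate", plate)]:
    print(f"{name:8s}", " ".join(f"{x:.2f}" for x in env))
```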
Plates are often used on vocals, especially lead vocals that need to pop out in the mix. When I was coming up through studios in the '80s and '90s it was common to put a plate reverb on the snare, even if the drums were cut in a big live room. The even decay and “thickness" of a plate reverb is very flattering.
Plate reverbs, too, have their problems. They are big and get in the way — even a small plate reverb is as big as a folding table on its side. You couldn't have a plate reverb in a control room, not only because of its size, but also because it could start to feed back during the recording session! Studios put plates in the basement, or some other isolated room, and there had to be a remote control, yada yada, but it was sort of a pain in the ass anyway. Elektromesstechnik (EMT) developed the EMT-240, which was much smaller, and used a small piece of gold foil as a plate. About as big as a large PC, the EMT-240 didn't require isolation to prevent feedback, and had a warmer character to its sound. Plates are mechanical, and mechanical things wear out and break, and once digital reverb units came onto the scene, plates faded. A few companies still make new plate units, but most of what is available today is either three decades old or a plug-in.
This record of Sister Golden Hair by America has a really nice, sparse use of plate reverb on it. Most of the recording is dry, but you can hear a lot of plate on the slide guitar and backing vocals, and just a touch of it on the lead vocals. This is an impeccable production, by George Martin, of a simply terrific song.
Spring Reverb
Spring reverb is bright and artificial sounding. It’s more of an effect than ambience. Top of the line spring reverbs can be quite beautiful sounding; cheaper units sound strange and “boingy.” When you don’t quite know what something needs, put a spring reverb on it.
Spring reverbs use a mechanical system similar to that used in a plate reverb, but instead of a big piece of steel, there are springs. A spring unit is smaller than a plate, and much cheaper, although really good studio quality spring reverb systems, like an AKG BX-20, were, and still are, pretty pricey.
Bell Labs originally patented the spring reverb as a way of simulating delays that would occur over telephone lines. The first musical application was in the 1930's, when spring reverbs began appearing in Hammond organs. Spring reverbs can be made cheap and small, and have been built into guitar amps since the 1950's. Spring reverb on a guitar is an utterly recognizable sound to the point of cliché.
Top quality spring reverbs, like the aforementioned AKG BX-20, are found in studios, but they didn’t replace plates, although they do have similar sonic qualities. An expensive, well-designed spring has a similar frequency response to a plate, but a more jumpy, inconsistent decay. Good spring reverbs impart a “halo” to an instrument. On a vocal, a spring doesn’t really sound like reverb, but rather, it sounds like an effect. I tend to think of spring reverb more as an effect than a means of adding ambience.
Amy Winehouse records used a lot of spring reverb sounds, to evoke a vintage, early 1960s sort of vibe. I picked her recording of 'Round Midnight to give you a good example of a spring reverb on a voice. Notice the shimmering halo that surrounds the lead vocal — that's a spring reverb. And listen for the level of the reverb changing during the mix, accentuating certain parts of the song.
Cheap spring reverbs sound boingy, but even that can be a useful effect. Rather than presenting some sort of cliché surf guitar as a reference for this sound, here’s a bit of madness from King Tubby. This is crazy stuff, with spring reverbs on drums and vocals, runaway tape delay all over the place, noises, distortion and slams, weird EQ’ing and filtering. If you’ve not listened to King Tubby.... jeez! Go listen to King Tubby!
We Have Reached the End of the Decay Time
So, some ideas, some patches, some tech stuff. I'll cover what digital reverbs do in the future (really, what digital reverbs usually do is simulate all the reverbs I've described above). Remember, everything written above (except for the facts of how the various types of reverbs are constructed) is just a guideline. There are no commandments here. Use your ear, listen to recordings, and experiment.
Since we released the AIP, we've been getting the same question over and over again: Should I put the Compressor before or after the EQ?
This question goes waaaaaay back in time, to when engineers first started patching in multiple processors on a channel while beating a mammoth to death with a stick. And the answer now is the same as it was back then: It depends.
But it’s a really useless answer, isn’t it? You can answer ANY question with “It depends.” Do you like sex? It depends. Do you have five bucks I can borrow? It depends. Does this sound like a hit to you? It depends.
Today, you’ll get an actual answer to the question, "Should I put the Compressor before or after the EQ?”
Usually the Compressor is Before the Equalizer When You’re Tracking
Close to 90% of the time, when you’re tracking, the compressor will be before the equalizer. When in doubt, the compressor goes first.
Why? Three reasons:
1) Because it will be less work for you
If the compressor is first, when you change its controls, it won't affect the settings of the EQ much if at all. More gain feeding into an EQ doesn't affect the way its knobs work. But a compressor's main adjustment is threshold, and the input gain feeding it will always affect how the threshold setting behaves.
If you put the EQ before the compressor, then whenever you adjust the gain of a particular band of the EQ, it results in a change in the output of the EQ, which means more or less signal feeds into the compressor, and that will affect the threshold setting. If you are constantly tweaking an EQ, you'll be constantly adjusting the compressor threshold to compensate.
With the compressor first in the signal flow, you set its threshold and whatever other controls the compressor might have, and you leave it alone basically. And then you can screw around with the EQ all you want and you won’t have to touch the compressor.
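Here's a bare-bones Python sketch of why that is (the threshold and the 6 dB boost are arbitrary numbers). All it does is count how much of the signal crosses the compressor's threshold when a boost happens before versus after the compressor:

```python
import numpy as np

threshold = 0.5
signal = 0.7 * np.sin(2 * np.pi * np.linspace(0, 1, 1000))   # peaks at 0.7
boost = 2.0   # a pretend EQ move: roughly +6 dB

# Boost AFTER the compressor: the compressor still sees the exact same signal.
print("comp first:", np.sum(np.abs(signal) > threshold), "samples over threshold")

# Boost BEFORE the compressor: far more signal suddenly crosses the threshold,
# and you're back to re-tweaking the compressor to compensate.
print("EQ first:  ", np.sum(np.abs(signal * boost) > threshold), "samples over threshold")
```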
2) Compressors can lessen the need for EQ
Let's say you're working on a kick drum, and the sound is missing some attack and thud. It's missing that "cut." The kick's transient has a lot of frequency content, much of it happening somewhere in the upper midrange anywhere from 2kHz to 8kHz, and the thud - that "dead body falling off a balcony onto a carpet" sound - is down anywhere from 50Hz to 150Hz. Yes, you could sweep around with two bands of EQ and dial in some attack and thud... or you can run the kick through a compressor (might we recommend the Korneff Audio Pawn Shop Comp for this...) and get the attack and thud, and some added punch, just by setting the compressor right. If it still isn't what you're looking for, then you can throw an EQ on after the compressor, and fart around a bit until you have the sound you're looking for.
The same goes for guitars, vocals, bass, etc. Usually the compressor first will even the sound out, fix a few issues, and the net result is less need for equalization.
3) Because you can compensate for the frequency response of the compressor
Compressors tend to change the frequency response of the signal a bit. Mash something pretty hard with a compressor and you’ll lose some high and low end typically, but even patching a signal through an 1176 that’s in bypass will do something to the sound. With the EQ after the compressor, you can adjust for the changes in frequency response caused by the compressor.
So, when you’re tracking, you probably want the compressor first. Unless you want it last when tracking... because... it depends.
EQ First to Fix Big Problems
You're in the studio, recording a bass player, and his C on the 3rd string 3rd fret is really loud for some reason—crappy bass, neck resonances, crappy bass player with crappy technique, etc. When he plays, it sounds like a a a a g# g# g# e e C C C C a a a. Damn that resonant C to hell!
You put a compressor on it, drop the threshold down, get a nice bit of click to bring out the attack, and it evens the dude's playing out until he hits that damn resonant C. And then the compressor smashes the crap out of things, because that one note is so much louder than every other note. And if you set the threshold higher so it doesn't hit the C that hard, then the compressor does next to nothing on all the other notes. Damn that resonant C to hell!
What you have to do is bring down that loud ass damn resonant C using an EQ first, and THEN run the signal through a compressor. Patch in a parametric EQ, set it to a narrow bandwidth (say 1/4 or 1/8 of an octave), set the frequency to 65Hz, and cut by 6dB or so. Patch the EQ into the compressor, and now the compressor will respond to the signal much more consistently.
Where did the 65Hz number come from? That is the frequency of a C, 3rd string 3rd fret, on a four string bass that is tuned to A 440Hz.
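If you ever need to find a problem note's frequency yourself, the equal temperament formula gets you there. A tiny Python sketch (MIDI note 36 is that C; 69 is A 440):

```python
def note_freq(midi_note, a4=440.0):
    # Equal temperament: every semitone is a factor of 2**(1/12).
    return a4 * 2 ** ((midi_note - 69) / 12)

print(f"{note_freq(36):.2f} Hz")   # C2, 3rd string 3rd fret on a bass: 65.41 Hz
print(f"{note_freq(60):.2f} Hz")   # C4, middle C: 261.63 Hz
```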
So, if there is something in the frequency response of a signal that is excessive, then an EQ first is handy to nip out the crap before compressing it.
Cleaning up problematic sounds before compression is also handy for getting control over woofy sounding kick drums, spiky sounding cymbals and hi-hats, and midrange heavy vocals. You'll find very often with vocals that high pass filtering them, or cutting, say, everything below 300 Hz with a shelving EQ (like 2 or 3dB worth of cut - doesn't need to be a lot) will actually help the compressor work more effectively across the rest of the signal.
Compressor Last on the Stereo Bus, Sub Groups, or when Mastering
Probably all of you put a compressor across the stereo bus, or the master bus, depending on what you call it, for your final mixes. You might also be putting a compressor across each of your sub mix busses (the guitars bus, the keyboard pad bus, the vocals bus, etc.). This is actually a common application for the AIP.
In these applications, the compressor last in the effects chain seems to work better. Perhaps it has to do with how the EQ "pushes in" to the compressor. There's a whole bunch of vague things I could write here, but the point is this: compressor last sounds better.
When the compressor is last, the separation between the elements of the mix is clearer. I notice more details and overall I can "see" into the mix a bit better. With the EQ last in the mix chain, I've noticed that the whole mix is thicker, but more smushed together and sonically homogenized.
Compressor last in the mix bus chain also seems to tighten up the mix rhythmically - the "glue" people talk about. This is because the main rhythmic elements - typically the kick and snare - are the loudest elements of the overall mix, so they hit the compressor hardest and "drive" it a bit. The overall mix dynamics change in time with the kick and snare pushing the compressor, and the whole mix sounds tighter. Imagine if you grabbed the master fader and moved it a tiny bit on the beat—you'd get a rhythmically tighter mix. See how that works? You can even play with your EQ settings a bit to make a particular frequency range sort of "lead" the compressor.
This is a good thing to experiment with, whether you're mixing in the box or working hybrid. Flip the compressor and EQ around in the bus chain and see what sounds best to you. My rule of thumb, though, for overall bus processing, is to put the compressor at the end of things.
Filters -> Compressor -> EQ
Often things can get recorded that are beyond the range of speakers to reproduce, and often beyond the range of ears to hear. Low end thumps, perhaps caused by a vocalist taking a step while singing, or a resonant rumble caused by an air-conditioner, can get recorded and can be really loud, but go basically unheard while you're working on the track because your speakers just can't quite get down there. But even though those low sounds can't be heard, they still travel through your signal chain, and power and dynamic range are used up as equipment tries to reproduce a basically inaudible signal. This results in moments of distortion and overload that cause problems in audible frequency areas. Similarly, loud high end signals can do the same thing.
High and Low pass filters were originally put on console channels to deal with this sort of problem, and you should be using them to clean up crap that doesn't belong. Reaching into the bottom end with a High Pass filter and getting rid of excessive lows - especially on instruments that simply do not have significant information way down there - will often tighten up the low end and make room for the instruments that do need authority down there. And the same thing goes for the high end: Low Pass off instruments and sounds that don't extend meaningfully into the highs.
In fact, the old school way of doing things, which is still a good idea (and exactly how Dan Korneff, me, and loads of engineers approach a mix, incidentally), is to start by setting up pass filters on every channel and getting rid of what isn’t needed. You might be thinking that you need all the lows and highs of every instrument, and if you were to listen to individual channels solo’d out that might be the case. But in a mix, it all blends together, and space has to be shared. Bright keyboards with lots of high end will clash with the highs of vocals. Decide which part deserves the space and cut accordingly.
Once you get rid of the crap, run things into the compressor to even out the performance and perhaps add a bit of attack, then give it some polish with EQ last in the signal chain.
Depends = Adult Diapers
Remember that in all creative things, the main rule is that there are no rules. In audio, what sounds best is best. If you always put your EQ first, compress after and it sounds great, then excellent love sandwiches for you. I write these things mainly to give you ideas and inform your thinking, never to pin you down with rules and dogma.
So it does depend.... but usually the compressor goes first! Or last!
If you've been using our Pawn Shop Comp, you might be using it backwards. And if you haven't got the PSC yet, click here, get the demo and follow along with this blog post: you'll learn some good stuff.
A few days ago, Dan and I were chatting about audio (whatever), and he described in detail his approach to using the Pawn Shop Comp. It's completely opposite to the way most engineers probably use it. And since Dan built the PSC, it certainly makes sense that he knows it better than anyone. He also uses it a lot — typically it's on 40 to 70% of the channels in his mixes.
So, this is Dan's approach, 180 degrees in the other direction, and I'll tell you exactly what he does.
1. Flip it Around to the Other Side
The first thing Dan does is hit the nameplate and flip the Pawn Shop around to the back. He starts off completely ignoring the "comp" of the Pawn Shop Comp. Instead, he starts by adjusting the back, thinking of the PSC as a channel strip rather than a compressor. And that makes sense, because the preamp and most of the back panel goodies are pre-compressor in the signal flow.
2. Goof Around with the Resistors
On the back panel to the right, there are switchable resistors. Just by swapping in different resistors, you can adjust the high end frequency response and some of the saturation characteristics of the PSC. Dan and I both have an "old time" engineering philosophy, which is, "Start by getting rid of what you don't like." The resistors allow you to subtly tailor the high end of your track before you even touch an EQ.
Metal Film resistors are modern components, and have the brightest, least colored sound. Switch to these when you need highs with a lot of sheen, such as on vocals, or ride cymbals, strings, pianos, etc.
Carbon resistors are darker, with Carbon Composite being the darkest. Use these when you want to round off the highs. They work well to tame nasty cymbals and high hats, smooth out vocals that were cut on cheaper condenser microphones, which can often make them sound spitty, take some of the high end "chiff" off electric and acoustic guitars, etc. You'll also tend to get a different flavor to the saturation — more on this as you read...
3. Play with the Preamp
Off to the left is the PREAMP GAIN. You'll notice that it is already turned up a little bit even before you start to adjust it. Take this as an invitation to adjust it some more.
As you turn it up, you'll start to overdrive the preamp a bit. Depending on the signal you're passing through the PSC, you might not hear much of a difference, but the more clockwise you go, the more you'll hear it, as you push the preamp into saturation and eventually distortion.
Quickly explained, when you turn up the gain too much on something like a tube or a transistor, you generate harmonic distortion. And to our ears, a little harmonic distortion sounds good - we'll call this saturation. And, sometimes, a hell of a lot of harmonic distortion, like when you overload a guitar amp, sounds good. We typically call this distortion... uh... distortion.
Think of it like toast: getting the bread a golden brown is saturation, and burning it a bit is distortion. In audio, we usually prefer the toast a little brown — it's better than white bread.
What the saturation is adding is, essentially, high harmonics that are mathematically related to the signal passing through the preamp. An easier way to think of it: saturation is kind of an upper midrange to super high end equalizer. SO... turning up the PREAMP gain is like making the toast golden brown by adding high end.
Whatever. Saturation of vocals gives them a beautiful sweetness, or a nasty ass snarl, depending on your settings. On things like drums, saturation sort of acts like a compressor with an infinitely fast attack, and it rounds out the transients. Cymbals will go from a sharp "ting" sound to a smoother "pwish" sort of sound. Same thing happens on guitars and snare drums. Careful with a lot of saturation on a kick drum — too much and it will lose some of its punch in the mix.
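Here's a little NumPy sketch of where those added harmonics come from (the drive amounts are arbitrary). A symmetric squash, like tanh, adds only odd harmonics; skew it so one half of the wave gets squashed harder - roughly what a single-ended tube stage does - and even harmonics, including that flattering 2nd, show up too:

```python
import numpy as np

t = np.arange(48000) / 48000
x = np.sin(2 * np.pi * 100 * t)                    # a clean 100 Hz tone

symmetric  = np.tanh(3 * x)                        # both halves squashed alike
asymmetric = np.tanh(3 * x + 0.5) - np.tanh(0.5)   # one half squashed harder

for name, y in [("symmetric ", symmetric), ("asymmetric", asymmetric)]:
    spec = np.abs(np.fft.rfft(y)) / len(y)
    # With a 1-second buffer, bin 100*n is exactly the nth harmonic.
    print(name, " ".join(f"H{n}={spec[100 * n]:.4f}" for n in range(1, 6)))
```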
Now, the PREAMP BIAS control... this is sort of like "What if we plugged the toaster directly into the power grid." Not really, but, kind of.
Bias, simply put, adjusts how well something works. If you make something work harder than it is designed to work, you'll get a lot of power out of it, but the results can be unpredictable, and in the real world, you'll burn it out.
With the PSC, playing with PREAMP BIAS won't blow anything out, but if you crank it up you can get insane amounts of distortion. Use PREAMP BIAS to get fuzz on basses, or add additional distortion to guitar tracks, or make a vocal sound like Satan backed a monster truck over the singer's face. Whatever - play with it! It's fun.
Don't be afraid to add saturation to any and all of your channels. The BIG SECRET to those amazing sounding vintage records that you love is that there is saturation ALL OVER THE PLACE. I used to hit the 24 track tape super hard when tracking, basically adding tape compression to everything (tape compression = fancy word for saturation), and then the individual channels might be driven a bit too hard (if it sounded good). Some audio channels click and sound awful when you push them too hard. (I loved Trident consoles, but if you overdrove the mix bus even a little it would sound like shit. The SSLs and Neves, not so much.) And then there would be more damage done by compressors, mastering, etc. There's a reason Dan puts the PSC on so many channels, and this is it.
4. Tweak Them Tubes!
As he messes with the PREAMP, Dan also plays with the tubes — there are three different models to choose from. Each has its own gain structure and saturation characteristics.
The 12AX7's are the default, and they have a nice distortion to them, which is why they're often used on guitar amps. The ECC83's have a lot more gain, and they respond to an instrument's frequency response very differently than the 12AX7's. Switch between the two and see what you like better. The differences in sound will become much more pronounced the more gain you have.
The 5751 tubes have much less gain, are much rounder in the high end, and sort of smear the transients out. Switching to these will lower the gain through the PSC and give it an overall more vintage sort of vibe. Think vocals that need a bit of taming, synths that are harsh and remind you of your mom yelling — 5751, mom!
5. Transformer Time!
Transformers are a HUGE part of the sound of a piece of analog equipment. It's not uncommon for vintage mic preamps to have loads of them — my Quad Eight MM61 mic pre's have a whopping EIGHT transformers per channel, and those transformers are intrinsic to the sound of them.
Without going into a lot of detail, transformers can saturate like a preamp can, but the saturation is very different. The harmonic distortion added is at lower frequencies, and the more you clobber a transformer.... this is hard to describe... it sort of makes the signal kind of slower and mushy? I can't describe it really, but you can definitely hear it.
You can't directly overload a transformer on the PSC - and you couldn't on vintage gear, really, either. If you're running a lot of gain through the system, the transformers will overload. But the overload/saturation characteristics are very frequency dependent. On the PSC, as you play with preamp gain, you'll automatically affect the transformers.
Dan uses the transformers to contour the bass response of the PSC. Now, depending on the amount of gain you have happening and the type of instrument you're processing, you might find it very difficult to hear the effect of the transformers. I always switch them around, and sometimes it makes a difference, and sometimes I'm just flipping shit around doing nothing. It's always worth a try, though.
NICKEL - this is the most modern sounding of the transformer types, the least colored. This, with Metal Film resistors and the preamp set as low as possible, will give you a very clean, wide sound. On things that you don't want colored, Nickel is your choice.
STEEL - steel transformers pull warmth out of the signal, and if you overload it, it tends to tighten things up and make things a bit more.... forward? Bright? Again, hard to describe. I switch to Steel when I want things to cut a bit more in the mix. Flubby kick? Try Steel. Shitty drummer? Fire him!
IRON - Iron is probably the easiest to hear and has the most pronounced effect. I hear it as a lift to the bass and a thickening of the lower mids. Bass is a natural use, as well as on guitars and vocals.
So far, we've done all sorts of processing without touching an EQ or a compressor. In effect, we are "custom building" a channel to fit our signal by switching around components and adjusting gain, very much the way a console designer would develop the sonic signature of a recording console, or a preamp. The PSC backside lets you pretend you're Rupert Neve or some guy like that. Now, to be clear, you aren't Rupert Neve, and the PSC gives you a lot of control, but not the control that an actual console designer might have. However, in terms of what you can do within your DAW and without getting electrocuted, the PSC is amazing.
6. EQ EQ
This blog post is getting too long - I'll have to make a CliffsNotes version of it - but we are almost done.
The PSC has two bands of EQ built into the PREAMP. Both EQs are wide bandwidth peaking EQs, with response curves similar to console EQs from the late '60s and early '70s. They're very smooth, they don't have a huge amount of gain, and they sound kind of like a cross between an old Neve EQ (like a 1073) and a Quad Eight or a Sphere or an Electrodyne EQ — or something from the '70s, made on the West Coast of the US.
Dan and I both think of them more as level controls than EQs. What I mean by this is that if you turn up WEIGHT, you'll lift up a pretty large area of the bottom end. You can't use WEIGHT to really pick out the thud of a kick, but if the entire kick sound is anemic and weak down there, WEIGHT will add... uh, weight. Cutting it gets rid of mud. There are two frequencies to choose from. We usually switch between them and go with what sounds best.
As your mix builds up, remember that you can go to WEIGHT on specific channels and pull some bass out of things to keep the low end from getting flabby — I'm looking at you, guitars and tom toms and drum room sounds.
FOCUS is a midrange lift. Now, the area that it covers, Dan has noticed, is an area that a lot of engineers are scared to EQ. And rightly so; it's dead in the middle of things and too much in there sounds honky and stupid. But FOCUS is very smooth, doesn't have a lot of gain available, and it works really well to sort of push a track out in the mix or pull it back. Again, we think of it as a level control, and not as an EQ.
WEIGHT and FOCUS are really well named. Dan's idea.
AND WE ARE DONE
Quick recap — the CliffsNotes version: Switch around to the back, try different resistors, play with the preamp and the tubes, experiment with the transformers, dial in the WEIGHT and FOCUS. Get it sounding good and then...
Switch around to the front and mess around with the COMPRESSOR!!!!
GAHHH!!!! More controls!! Time for a bath!
Nailing the reverb and ambience on lead vocals can be really tricky. This week, we're going to show you a method for doing vocal 'verb that's easy, basically foolproof, and will work for music of any genre. AND we're going to show you a nifty vocal reverb trick that you can use to highlight a specific section of a song.
Three Reverbs on a Vocal
The basis of this method of getting vocal reverb is similar to the one we use on snare - read Dan's Snare Trick blog post from last week if you missed it. We will be using three different reverbs. The first will add thickness and presence to the vocal, the second will place the vocal in an acoustic space, and the third reverb is a special, which you can use to highlight the vocal in specific sections of the song.
ONE: Thick and Present
First, instantiate a Micro Digital Reverberator on the vocal channel’s insert, after all the other processing you’ve got happening (EQ, compression, etc.). Regardless of what MDR program you use, turn DRY fully clockwise and lower WET to around 50%. You’ll adjust WET more later.
For a program, you're looking for a small room that will add texture and thickness to the vocal without really sounding like reverb.
These are the programs we like to use. We tend to choose one that is the opposite of the voice — if it is a dark, bass voice, we choose a brighter program. For a bright or higher voice, try one of the darker settings. If you don't know what to pick, just use Machine 1 Small 1; it always works great for this.
Machine 1 Small 1
Machine 2 01 Small Bright .1 SEC
Machine 2 02 Small Bright .2 SEC
Machine 2 03 Small Bright .3 SEC
Machine 2 05 Medium Bright .6 SEC
Machine 2 09 Medium Dark .5 SEC
With reverb times above 300 ms (.3 seconds or higher), beware of setting WET too high - it can sound like a bathroom. Typically, we wind up using Machine 1 Small 1 or Machine 2 09.
Most vocal tracks are recorded in mono, but at this stage, switch the channel’s output to stereo—you’ll see why in a moment.
Press the Korneff nameplate to pivot around to the back of the MDR, find the WIDTH trimpot and set it to 50% or lower. Because you’ve switched your mono vocal track into stereo, the stereo width control will have the effect of widening the voice a little bit. You can crank it all the way up to 200%, but depending on the song this might be a bit distracting. This is one of the settings you’ll be messing with later in your mix as you add in instruments, etc.
So, now your vocal should be a little bit bigger and commanding more attention in your mix, but it won’t be louder or processed sounding.
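For the curious, a width trimpot like this is usually a mid/side trick under the hood. This is a generic sketch, not the MDR's actual code. Note that it only does anything when left and right differ, which is why it works on the stereo reverb tail, and why switching the channel to stereo matters:

```python
# Generic mid/side width control, sketched. Not the MDR's implementation.
import numpy as np

def stereo_width(left, right, width=0.5):
    """Width scaling: 0.0 collapses to mono, 1.0 leaves the image unchanged, 2.0 is the 200% position."""
    mid = 0.5 * (left + right)     # what both channels share
    side = 0.5 * (left - right)    # the differences that make the image wide
    return mid + width * side, mid - width * side
```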
Quick trick here: crank up the INPUT gain to drive the MDR a little bit. This will get you a cool, slightly grainy saturation. Be sure you turn the OUTPUT gain down otherwise you’ll digitally clip the channel, and that will sound like ass.
TWO: Reverb and Ambience
The vocal, at this stage, is probably too dry to sound polished and professional, so we want to add a reverb effect that we can really hear and recognize as reverb. We’ll do this using an effects send.
Set up a send from the vocal channel, and put an MDR on the insert of the Return. Set the DRY to 0% and the WET to 100%—this is the way the MDR initially loads in, and you don't want any dry signal on an effects return.
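Here's why the knobs flip between the two setups. On a channel insert, the plug-in has to pass the vocal itself, so DRY stays up and WET gets blended in; on a return, the dry vocal already reaches the mix through its own channel, so the return runs wet-only. A sketch, with a simple feedback comb standing in for the MDR:

```python
# Insert vs. return routing, sketched. The comb filter is just a toy
# stand-in for the MDR; the routing is the point.
import numpy as np

def toy_reverb(x, fs=48000, delay_ms=80, decay=0.5):
    """Toy stand-in for the MDR: a feedback comb that outputs only wet signal."""
    d = int(fs * delay_ms / 1000)
    y = np.zeros(len(x))
    for n in range(d, len(x)):
        y[n] = x[n - d] + decay * y[n - d]
    return y

def on_insert(vocal, wet=0.5):
    return vocal + wet * toy_reverb(vocal)   # DRY 100%, WET around 50%

def on_return(vocal, send=0.5):
    return toy_reverb(send * vocal)          # DRY 0%, WET 100%
```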
There are a TON of possible reverb programs on the MDR to choose from at this point, so a lot of what you pick will come down to taste. We typically use medium and smaller sounding rooms on vocals when songs are fast, and bigger, large rooms and plates and halls when songs are slower. These are the settings that we keep going back to all the time:
Machine 2 13 Large Warm 1.1 SEC is a gorgeous reverb and generally where I start. If it is too bright, I look for something darker; if it is too big, I look for something smaller; etc. This particular program blends really well within a full mix, and it adds polish without making things sound "reverby" like a record from the early '60s.
Machine 2 22 Large Warm 1.75 SEC sounds like a vintage echo chamber and is very musical and rhythmic on a vocal. Great for ballads and things like that.
Machine 1 Large 1 always sounds good, but it might be too much for some music.
Machine 1 Small 5 works great on vocals with lots of short words when intelligibility is needed.
Machine 1 Small 6 isn't a room; it's a smooth, dark plate reverb effect, and it can be way too much. We like to use it but throw it way back in the mix so you only really hear it in the gaps between the other instruments.
THREE: A Special
During your mix, you’ll probably have some moments where you want the vocal to jump out and really call attention to itself. For that, we’re going to use a special.
Sibilance - your enemy, your little pal...
Generally, on vocals, we try to get control of sibilance. Sibilant frequencies live in the 5 kHz area, and they add intelligibility to speech and singing. They are the frequencies generated by consonants. As people get older, these frequencies get harder to hear, which is why you have to repeat yourself and speak very clearly around grandma and grandpa (especially if they were in a punk band when they were younger). Too much sibilance on a record, though, sounds hissy and spitty. It's caused by sounds like S and T overloading a microphone or a preamp somewhere. Usually, we don't want sibilance. However, this trick is all about generating sibilance.
Set up yet another effects send from your vocal channel. Crank up the send level a bit. On the return channel, add channel EQ or a High Pass Filter, and follow it with yet another MDR on that insert. Set it to 100% WET, 0% DRY.
Dan loves the Vocal Whisper preset on the Lexicon 480L unit, so we’re going to sort of rip that sound off a bit.
On the channel EQ you've got before the MDR, put a high pass filter at about 10 kHz and roll off everything below it. This will prevent almost anything other than high-pitched vocal sounds from getting to the MDR.
On the MDR, set it to Machine 2 50 Multitap Reverse. Flip around to the back panel of the MDR and set DAMPING to -1.6dB, LPF to 13.2kHz and WIDTH to 170% (you can go higher).
As you play your mix, you’ll notice that S’s and T’s, and other sibilant consonants, will jump out and kind of sound like ghostly whispers. Adjust the High Pass Filter of the EQ to get more or less of the effect. Dan likes to use this in relatively open areas of the song to create a scary, unsettling mood.
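Here's the routing of the special, sketched out, reusing the toy_reverb stand-in from the insert-versus-return sketch above. The real patch is the MDR's reverse multitap, which this toy doesn't attempt; the point is the steep high pass in front, so only sibilance ever reaches the reverb:

```python
# Signal chain for the "special": steep HPF, then a wet-only reverb return.
# Filter order and the toy_reverb stand-in are assumptions, not the MDR's code.
from scipy.signal import butter, sosfilt

def sibilance_special(vocal, fs=48000, cutoff=10000.0, send=0.7):
    """High-pass around 10 kHz into a wet-only reverb: only the S's and T's get verb'd."""
    sos = butter(4, cutoff, btype="highpass", fs=fs, output="sos")
    esses = sosfilt(sos, vocal)              # the body of the voice never makes it through
    return toy_reverb(send * esses, fs=fs)   # the return stays 100% WET, 0% DRY
```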
A good way to figure out where you want to use this effect is to put it on the vocal somewhere in the middle of your mix process and listen to the entire mix with it a few times. Generally, there will be certain spots where the effect jumps out. I often just leave stuff like this on all the time so there is a random element happening in the mix to give me ideas.
Some Other Ideas
There are all sorts of fun variations on this you can try. As an example, rather than setting a high pass filter, set a low pass around 400Hz and send all that dark, warm low-end gunk into the MDR. Set the MDR to Machine 2 34 Slow Gate 450 MSEC. If you listen to the effect all by itself, it sounds like a moron singing in the shower, but in the mix it gives the vocal subtle movement and texture, and makes it seem wetter than it actually is. When I do this, I set my other vocal reverbs to something bright so things don't get muddy.
And of course, try this trick on guitars, synths, etc.
And that is it for this week. Let us know how this all works for you on our Discord or Facebook.
The other day on our Discord channel, Dan Korneff shared a snare drum trick that uses the Korneff Audio Micro Digital Reverberator.
Dan is well-known for his hard hitting, articulate drum and snare sounds. Not many producers/engineers have snare sounds that are considered iconic, but Dan does—search "Paramore Riot snare” on Gearspace - multiple threads come up.
His MDR snare trick is easy and gives you a lot of control over the size of the snare and how it sits in the stereo field.
What you're going to do is set up two different MDRs. One we’ll call the Fatter, the other we’ll call the Wider. Together, they’ll make your snare fatter and wider. Think of it as drinking beer and sitting around with no exercise all day for the snare.
Make the Snare Fatter
Start by dropping a Micro Digital Reverberator instance on the snare channel, or on the snare bus if you are combining multiple snare sources together. Do this by placing the MDR in an insert location, after any other processing on the channel.
Switch the MDR over to Machine 2, and bring up program 03 Small Bright 0.3 Sec. Set DRY all the way to the right, and bring down WET to around 2:00. You want to pass all of the snare through the MDR, and then add the effect back in.
03 Small Bright 0.3 Sec has a timbre similar to that of a tight drum booth or a small, highly reflective room. The very short time of this patch means that it will sound more like a doubling than a discrete reverb. It will thicken the snare up and make it last a little longer.
Press the Korneff nameplate to switch the MDR to the back side so you can tweak the circuit. Locate the blue trimpot to the left—it's labeled WIDTH. Turn this all the way counterclockwise to 0%. This will make the wet signal mono, so even if you've got a stereo snare track (for some reason), the effect will be confined tightly. This reinforces the snare's solidity in the sound field.
Make the Snare Wider
To widen out the ambience of the snare, you’re going to set up another instance of the MDR, only rather than using it via a channel insert, you’re going to feed it via an effects send.
Set up a new send on the channel and turn it up to 50% for starters. Find the return, and put the MDR into an insert slot on it. The MDR instance will load and the DRY should be all the way to the left (fully counterclockwise) and the WET should be all the way to the right (fully clockwise). You don’t want any dry signal in an effects return.
A fresh instance on the MDR will come up set to Machine 1, program Large 1. This is not a coincidence—the plug-in’s default is the patch Dan uses the most, and it’s perfect for this application. Large 1 is dark and has a decay time of around .6 seconds and sounds like a big studio live room that has been damped down to control flutter and ring. It has a noticeable slap to it. To my ear, it has a “cannon" sort of effect on a snare.
Flip around to the back of the MDR and set the WIDTH trimpot to 100%. We want this as wide as possible to wrap that ambience around us a bit.
Adjust Adjust Adjust!
Depending on the genre of music and the density of your mix, you’ll have to adjust the settings a bit, and this is where the PRE-DELAY control on the front comes in really handy.
PRE-DELAY delays the onset of the effect. At low settings it is hard to hear, and as you turn it up you'll increase the separation between the dry signal and the wet signal. Below 40ms, the dry and the wet will sound cohesive, but once you get higher than 40ms, the two signals will separate and your ear will hear them as two clearly different events. Very high settings will give a slapback-like effect and start to add an additional rhythmic component to your mix. You may or may not want that.
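Mechanically, pre-delay is nothing exotic; it just pushes the wet signal later in time before it's mixed back in. A minimal sketch (the 40 ms fusion point is the rule of thumb from above, not a hard constant):

```python
# Pre-delay, sketched: shift the wet signal later in time.
import numpy as np

def predelay(wet, fs=48000, ms=30.0):
    """Delay the onset of the wet signal by `ms` milliseconds."""
    n = int(fs * ms / 1000.0)                    # below ~40 ms, dry and wet fuse into one event
    return np.concatenate([np.zeros(n), wet])[: len(wet)]
```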
On the Fatter MDR (the one set to Machine 2), as you adjust the PRE-DELAY the sound of the snare will brighten. Turn up the WET control to increase the size of the snare. To my ear, turning up WET seems to increase how hard the drummer is hitting. Seriously, even if you're not using the Wider portion of this trick, it's a good thing to slap an MDR with this patch on your snare always. In a weird way it is doing the work of both a compressor and an EQ, but in a much subtler way.
PRE-DELAY and the level of the return on the Wider MDR (the one set to Machine 1) have a lot of effect on the location of the snare from front to back in your stereo space. By messing with level and PRE-DELAY you can "move" the snare back or forward in relation to the rest of the drum set. Too much effect can make the snare sound like it was recorded in a completely different space than the other drums, and this might be an effect you're going for. I usually want my drums to sound integrated and cohesive, so I'll set Wider so that the snare plays nicely with the rest of the mix.
Feel free to use different programs on the MDR and experiment a bit. In general, though, the trick works best with smaller spaces on the Fatter, and larger spaces on the Wider.
Some Extra Ideas
I usually feed a little of the hi-hat into the Wider using the effects send. Huge, distant snares with small, dry hi-hats sound goofy to me. I like these two instruments appearing like they at least know each other and not like they’re meeting each other for the first time on a Tinder date.
You might want to think about automating the Wider return. In general, you should be automating your reverb returns, almost "riding" them like any other instrument in your mix. In spots where you want the snare to be prominent, ride the return up; in places where things need to be tightened and tucked in a bit, pull the return down. I especially like bringing reverbs up during fade-outs, so it sounds like the song is fading away into space rather than simply getting quieter.
See You On Discord
This blog was inspired by a question that came up on our Discord channel. Thanks to Mistawalk! Dan and I monitor our Discord, and we definitely answer questions and give out ideas and tricks all the time on it, so hit up our Discord. And our Facebook. We’ll help you out however we can.
Fall is here! Leaves! Halloween! And I’m already sick to hell of pumpkin spice.
I must admit, I do have pumpkin spice lattes in the fall, and I enjoy them muchly. I like an occasional pumpkin spice milkshake. And some pumpkin spice beer is ok... not too disgusting. But after that it starts getting ridiculous, and suddenly pumpkin spice is seemingly in everything, from lasagna to eyedrops.
Really, it is best in pie. Pumpkin pie.
But, it is pumpkin spice season, so we at Korneff Audio are jumping on the bandwagon with our Pumpkin Spice Compressor, or PSC.
The PSC (also known as the Pawn Shop Comp) is a super versatile plug-in. It combines the punch of a FET-style compressor with a tube preamp section. And then on top of that, it has switchable tubes, resistors, transformers and transistors. The net result is something far beyond the simple sweetness of pumpkin spice, or the functionality of just a compressor. The PSC is like an entire spice cabinet of colors and flavors, and you can use it all over your tracks and mixes. Unlike a lot of plug-ins, there isn't one thing it is particularly designed to do. It does everything, although we've not tested it with lasagna.
So, here are three “recipes” from the Korneff Audio Kitchen, for applying the PSC to your recordings. They’ll give you some insight into the ways the Pawn Shop can add magic to tracks, and hopefully stimulate your own thinking and creativity.
1) Use a PSC to Help Track Things Quickly
When I'm tracking parts to develop ideas, I want to get things recorded as quickly as possible. However, if I'm moving fast, chances are I'm playing or singing a bit on the sloppy side, mic positions and technique are a bit loose, and instrument sounds aren't fully worked out. Consequently, levels can be all over the place and the sonics can be off enough that things get lost in the mix, or become too dominant and distracting. I don't want to slow my workflow down by adding EQs and compressors and then fiddling with settings, but I do need quick control over things. Enter the PSC.
As I track, each new channel has a PSC instance on it. If I need a little compression, I press Auto Makeup Gain, turn up the Ratio a smidge and then pull down the Threshold ’til the meter bounces a bit. The initial settings for Attack and Release are usually fine. If a track needs bottom or brightening, or if it’s muddy, I go to the Back Panel and use the Focus and Weight controls to get it to fit into the mix so I can evaluate how the part works. These are not full featured EQs, but their frequencies are carefully chosen and effective for making the sorts of fast changes I want in this situation. The last thing I want to do is sweep through frequencies, dick around with bandwidth, etc. With the PSC, there’s just two gain knobs and four frequencies, and that’s enough for quick fixes.
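For anyone who wants the math behind those three moves: the PSC's detector internals aren't published, but Threshold, Ratio, and Auto Makeup Gain on most compressors map onto the textbook static curve below. The numbers are illustrative, not the plug-in's defaults.

```python
# Textbook static gain computer, sketched. An assumption about how these
# controls usually relate, not the PSC's actual detector.
def gain_reduction_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Hard-knee static curve: above threshold, output rises only 1 dB per `ratio` dB of input."""
    if level_db <= threshold_db:
        return 0.0                       # below threshold: untouched
    over = level_db - threshold_db
    return over / ratio - over           # negative value = dB of gain reduction

def auto_makeup_db(threshold_db=-18.0, ratio=4.0):
    """One common auto-makeup rule: add back what a 0 dB signal would lose."""
    return -gain_reduction_db(0.0, threshold_db, ratio)
```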
The PSC also has considerable saturation capabilities, so if I want to experiment with distorted, overdriven vocals, or see what fuzz on the bass might sound like, I don’t have to add a saturation plug-in, I just mess around with the Pawn Shop’s Preamp Gain and Bias. With just those two controls, I can dial in anything from some subtle overtones to full out stomp box.
Once I finish my “idea” tracking, I can re-record things more carefully, or, if a track is close enough, I can add other plug-ins to more precisely take the sound to where it needs to be. In many cases, though, the PSC stays in the signal path because it adds character and a subtle vintage “something” to any signal you pass through it.
2) Operating Level Control = Secret Spice Mojo
This is my favorite knob on the PSC. It's sort of a limiter that overloads and adds some saturation and harmonics, but who cares how it does its Mojo; the point is using that Mojo.
At low settings, from 2 to 6dB, the Operating Level Control pulls whatever you send through it forwards in the mix: things get a little louder, and seem to sit a bit more securely. I use it strategically to focus attention on things in the mix that have to stand out—lead vocals, instrumental solos, key melodic ideas, etc. I tend to add a few dB of it on snares, to give them more “size” in the mix.
At high settings, Operating Level absolutely squashes things, and because it has a very fast release, it adds a strange “pumping” sort of distortion that sounds like bad radio reception or something.
I use Operating Level SPARINGLY—I don’t slop it all over my tracks like, well, the way people slop pumpkin spice all over during the fall. Another way to approach it: listen to your mix. Are you losing any one particular instrument or part? Use Operating Level on that. In fact, I’ve added an instance or two of the PSC and ONLY used the Operating Level Control on a few occasions. 3dB of it adds so much.
3) Divide and Conquer the Bass Recipe
Getting bass to sit right in a mix is often a pain in the ass. You typically want bass big on the bottom, but it can interfere with the kick, and it needs midrange articulation otherwise it gets muddy, but you don’t want it to sound all clicky and “fingery.” It has to sound good on big speakers, and it also has to be present on an iPhone speaker. And bass is difficult to record well, especially when you’re not dealing with a great player who has a great instrument.
Here’s a bass fix recipe that almost always works.
A: Copy the bass track to another channel, or if you’re using multiple tracks, set things up so that you have the same bass parts in two subgroups. You basically need two bass tracks in parallel to make this work.
B: Stick a Low Pass filter on the first bass channel and roll off everything above 300Hz. I use a pretty steep slope, like 24dB/octave. On the second bass track, use a High Pass filter and roll off everything under 300Hz with the same steep slope. So, now you have the lows of the bass on one channel, and the mids and highs on another. (There's a quick code sketch of this crossover after the recipe.)
C: Put a PSC on each bass channel after the filter. Now you can process each frequency range independently of the other.
D: On the low bass channel, I use the PSC’s compressor to control the overall boominess and low end sustain on the bass. I set the ratio really high - like 20:1. The compressor’s release is the critical control here. Longer releases will keep the bass more under control and cut down how long it hangs around in the mix, while faster settings can give lows a lot of blossom and sustain. On the back panel, I usually switch the Transformer to Iron, which sounds slow and laggy to me, and that accentuates the bottom. On the lows channel, I’m looking to get a thick, soggy sound—like the goop in a pie mixing with the crust to turn it into a kind of sweet pudding.
E: On the mid-highs channel, I set the ratio to around 8:1 and experiment with attack and release to get the amount of articulation I want. Generally, this bass channel will have a slower attack setting on the PSC to bring out the initial "pluck" of a note, and a fairly fast release to keep things lively and jumpy. I set the Transformer to Nickel on this channel, as it imparts a nice crispness to the transients. On the mid-high channel I want the sound to have a bite, like when you chomp into a nice, fresh apple.
Because I have independent control of the lows and mid-highs, I can add saturation using the PSC’s preamp controls to the mid-high channel to get additional character and harmonics, while not losing the tightness of the lows. I can also add effects like flanging or reverb to the mid-highs without making the low end loose and unfocused. If I want huge, thick low end I can get that too without ever losing the articulation and “cut” of the bass in the mix. In the final mix I can also ride the two different faders to easily adjust the way the bass sits in the mix.
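Here's the crossover from steps B and C as code, for anyone who wants to see the filters. A 24 dB/octave slope is a 4th-order filter; this sketch uses plain Butterworths via scipy, which is an assumption on my part, not necessarily what your DAW's filters do.

```python
# The 300 Hz split from the recipe, with 24 dB/octave (4th-order) slopes.
from scipy.signal import butter, sosfilt

def split_bass(bass, fs=48000, xover=300.0, order=4):
    """Split the bass at the crossover; process each half on its own, then sum them back."""
    lows = sosfilt(butter(order, xover, btype="lowpass", fs=fs, output="sos"), bass)
    mids = sosfilt(butter(order, xover, btype="highpass", fs=fs, output="sos"), bass)
    return lows, mids
```

If the recombined halves sound slightly bumped right at the crossover point, a Linkwitz-Riley pair (two cascaded 2nd-order Butterworths per side) sums flatter, but for a parallel trick like this, plain filters get you there.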
And, of course, this technique isn't limited to electric bass; it works on synth patches, as well as guitars and vocals.
Not Just for the Fall
I never know how to end these things... but in summation, the PSC, unlike pumpkin spice, is not just reserved for this one season. And it’s not just useful for one application. It can be an integral part of your workflow throughout the entire year, helping you to work fast and still get great sounding recordings.
Thoughts on Reverb by Dan Korneff
Reverb is probably the most often used effect in modern recordings. When you think about it, the ability to transform the space in which our instruments exist with a couple clicks of a button is pretty mind blowing. As with any element in the recording process, the use (or misuse) of reverb is completely subjective, and your only limitation is your imagination.
In my world, reverb serves two completely different, and equally important purposes. The first use is a very practical approach. Whether you realize it or not, EVERYTHING you hear exists in some kind of space. When you’re having a conversation with someone, their voice sounds different in a hallway than it does in a closet. Even if the closet is really small, you still hear some type of ambiance. You not only hear the direct sound of a source, but you also experience the ambiance of the environment.
Since most modern engineers spend a good amount of time isolating instruments (close mics with tons of baffles) and removing the environment from their tracks (ever use a reflection filter on your vocal mic?), the very first thing I do, especially on vocals, is insert a reverb on the channel and create an ambient "space". I'm not talking about slapping a 3 second reverb on everything. It's going to be something short — .1 to .3 seconds. Just a touch of something to make the track sound like it's not hovering in the center of an anechoic chamber. Since I'm trying to recreate a natural space, it only seems fitting to use a more "natural" sounding reverb. One of my favorite settings is the MDR Machine 2 on Preset 1. It's small and bright, and a tiny bit of it fits right in for me.
Give these shorter reverbs a try on some tracks. You might just be surprised by how quickly your mix starts becoming bigger and better sounding.
The second approach is way less practical, and a LOT more fun! I grew up in the '80s, when EVERYTHING was bigger. It wasn't just the size of your AquaNet-soaked hair at school, it was reverb too! Everything was drenched in it. You'd have to send a rescue dog on a daily basis to help find your favorite singer at the bottom of a well. Every snare sounded like a punching bag, exploding from the speakers. What's not fun about that??
The impractical use of unnatural sounding spaces can lead to some really unique sounds that are so odd you’ll want to hear them over and over again. Using a setting like Machine 1 – Large 7 on a sparse guitar performance, or a short percussive vocal hook might just be that over-the-top decay you need to make the track stand out. Exaggerate your snare with the exploding ambiance of Machine 1 – Large 3. Add a little texture to a vocal with Machine 2 – Program 49.
If you’ve been behind the console for 25 years like me, reverb is not a new concept. Using these units for their practical purpose can be a thankless job, but necessary to bring extra realism to your music. But don’t deny yourself the fun of creating wildly unnatural or super exaggerated moments in your tracks whenever you can. Reverb might just become fun again for you.
Ok, so, you’re writing songs and they’re the same old, boring thing. Or you’re working with a band, you're in pre-production, and the bunch of you are reviewing songs.... and they’re the same old thing: boring.
I wrote a song this morning and recognized that the turnaround was a melody from another song I had written a few weeks ago. Again, the same old thing. And boring.
There’s a reason why you keep writing the same old thing, and it isn’t really your fault. It’s because you are you, and you will tend to think of the things that you usually think of. You’ll tend to do the things you usually do. Take breakfast. Pretty consistent, right? Eggs. Coffee. Toast. Or maybe you’re thinking, “I’m a cereal person.”
It’s evolution. Your brain likes to travel in patterns and grooves. It’s energy-efficient. Your brain wants to not think that much as a means of conserving energy. Ever been SUPER hungry after a really creative recording session or a creative collaboration with someone? Your brain, when it’s going full bore, guzzles some serious gas. But, for the species to survive, best not think all that much otherwise you would have to eat a lot, so your brain likes to skimp and copy and do the same old thing.
And so we are each stuck with ourselves and our same old shit.
What if, instead of writing a song as you, you write it as someone else? You write as a character. Think Ziggy Stardust. Think Thin White Duke. Think of all the characters across David Bowie’s career and all the songs he wrote as that character, in the voice and thoughts of that character. All the songs Madonna and Co. cranked out as she went from a Virgin (?) to Papa Don’t Preach to the Material Girl to whatever she was up to recently with the eye patch. Gerard Way—what is he up to? Characters, inspired by comic books and history.
Writing as a character, as someone other than yourself, opens you up to think about things from a point of view other than your own. Writing a breakup song? Write it from the point of view of the person you are breaking up with. Writing a song about a shitty childhood? Write it from the point of view of the parents involved. Writing about politics in the US? Think about the point of view of a Native American, or George Washington in his grave and he’s reading shit on Facebook.
I’ve written a bunch of different things in the past month by popping in and out of characters. I’m 57, married with two kids, but I can find a way into the head of a girl that runs away from home and goes on a date to a funeral with a guy that wants to impress his family and his dead mother. I can find a way into the head of a 7th-grade kid at a school dance who’s gay. I can write a drunk who’s trying to get to Red Hook, Brooklyn (great place! Seriously, you want to visit if you can). I’m none of those things... but all of us are close enough.
I don’t particularly have a formula for how to do this. Start writing something and flip the point of view.
I've done this writing exercise with bands, and invariably someone says, "But I'm not expressing myself." Bullshit. Of course you're expressing yourself. Who else is there for you to express? You don't actually become someone else. What you'll come up with will be "of you," but it will be different enough to be out of the same old groove. And that is what you want.
Examples: Bowie
Bowie, on his album Lodger, jumps through several characters. Johnny, in the song "Repetition," is a domestic abuser. The song takes the form of a weird interview, with Bowie swapping between the narrator and Johnny. On the same album Bowie takes the guise of a DJ and... oh, just listen to the whole damn record.
10CC
10cc were one of the more interesting pop groups of the early '70s. They're best known for their hit "I'm Not In Love," which was a groundbreaking recording in addition to being one hooky devil of a song. They released an INSANE album in 1974 called Sheet Music. The album abounds with characters. Check out "Clockwork Creep." It features verses sung by a bomb, and by the 747 jumbo jet the bomb is upon. The bomb is paranoid, the jet is arrogant. It's crazy. The whole album is all over the place and great. Listen to it!
Speaking of great: Nina Simone
Nina Simone was so brilliant that she couldn’t do one thing, so she did EVERYTHING, from jazz to soul to classical to blues to R&B and pop. And she was a civil rights activist. She was signed to a record label in the late 1950s, and as part of the deal, she had complete creative control of her output, which was unheard of at the time. She actually started her career as a character—she adopted the name Nina Simone so her parents wouldn’t know what she was up to. Nina Simone is the BOMB.
In 1966, she wrote and recorded "Four Women". In it, she writes from the viewpoint of four different African American women. Way ahead of its time for 1966. It’s a blues groove with Nina throwing classical piano licks over it. At the end, she screams “Peaches” like the word is ripping her throat out. So damn good.
Give it a try. Flip your viewpoint. Find a character. Write a song. See how it comes out.
In the fall of 1991, I was mixing some big ass rock recording on an SSL console somewhere. I think I blacked out a lot of the details of this particular experience.
It went something like this: I’m mixing along, all is good, and in walks one of the musicians from the band. He leans over the desk and listens... and he remarks something like, “That hi-hat... it’s not quite right..."
Hi-hat hi-hat hi-hat... the more I listened, the worse it sounded. Like someone hitting a spaghetti colander with a metal spoon. EQ EQ EQ EQ EQ... now it sounds like ass wrapped in aluminum foil... EQ EQ EQ gate compress EQ EQ. Fader up, fader down... check the overhead tracks... GAH!!!! It sounds like shit there, too! What has happened? Who stole the beautifully recorded hi-hat that was on this tape and substituted this thing that sounded like metal baby turds??? GAH!!! Throw out the whole mix! Quit the career! Lock me in the vocal booth and suck out the air with a straw and leave me to die like a gasping fish...
I did a dumb thing. I focused on one element of a mix, and the more I worked on it, the more I mangled it and the rest of the mix. The damn hi-hat became the CENTERPIECE of the mix. I got obsessed with it.
I quit mixing for the night, made a quick cassette, and split for home.
In the morning, I listened to the cassette. The mix was ok. The hi-hat went back to being just a hi-hat. I went back to the studio, finished the mix off, and started on the next song.
I still don’t know exactly how it happened. I guess I was tired—probably 8 hours in on a mix or something. It's really awful to sweat through a mix over a fricken’ hi-hat. To spend an HOUR dicking around with a HI-HAT. What a dumb ass move. But I learned from it.
And I evolved a series of rules to keep me from falling down an obsessive hole again, to get better mixes in general, and, probably most importantly, to have a better experience mixing.
Now, the rules can be broken—sometimes you have to break them. They're more suggestions than commandments, but try a few and see what happens.
How To Avoid This
1) Automate your mutes first thing if there is a firm arrangement. Then push up all the faders, and with NO SOLOING (and no effects or EQ), get your initial mix. Run the song from beginning to end. Mix EVERYTHING—every instrument and every section. Don’t touch an effect or a processor or an EQ (or solo anything) until you get something cohesive that makes sense.
2) Mix from beginning to end and not specific sections. I used to loop the tape deck and set it to autoplay, so I would work the whole song, have a pause as the deck wound back, and then work the whole song again. This keeps you from spending any more time than the length of maybe a verse working on any one element of the mix. You’re working on that guitar solo and OOPS! Over! You’ll just have to wait 'til it comes back 'round.
3) Set a time limit. Give yourself 20 seconds to work on a part or instrument, then move on. Go fast. Rule #2 generally makes spending too much time on any one element hard, but reinforce this by setting a limit. Set your phone to beep every 20 seconds. When it beeps, move on.
4) Try not to solo things, which is impossible (let’s be real), but do try to solo things in pairs or small groups. Working on the bass? Solo it, the kick, that low drone keyboard part and maybe the room tracks, and then perhaps work on that whole group, jumping from channel to channel, a little touch here, a little touch there. Think of groupings of sounds and instruments that occupy the same frequency range, and solo that group. Working on vocals? Make sure you have the hi-hat in there, any acoustic guitars, any keyboard pads. Instruments in the same frequency range will affect each other, so think of them and treat them as groups.
5) Round-robin your monitoring. Work for a minute on the near fields, switch over to something that sounds like a shitty iPhone, and then to a headphone, and then to the big monitors. Do this as the song plays from beginning to end. Break up the order of switching as you do this. Switch in pairs—listen on the iPhone, then on the bigs. Of course, don’t do this for the whole mix because you’ll drive yourself crazy. Maybe at the end of an hour, lean back and have a listen. Have the assistant randomize the switching of the monitors for you—they’ll feel important!
6) Fuck bleed. It probably isn't a problem. Headphone leakage on a vocal is almost never a problem unless you decide to do an a cappella vocal thing by muting out all the other channels of the mix, which is unlikely. Drum bleed is usually a non-issue unless you're doing some extreme EQ'ing and processing on something like a tom track, and somehow that is affecting the snare. Bring up the faders of all the tracks of the kit and balance it out so it sounds good. Once you chuck in all the other stuff of the mix, there's going to be so much masking you won't hear any bleed. Not always, but in hundreds of mixes, bleed was never an issue unless I made it an issue in my own dumb head. Go listen to a great live album like Rock 'n' Roll Animal or Donny Hathaway Live. Tons of bleed on these records that you'll never notice, and they sound amazing. Don't get hung up on bleed.
7) Take a break every hour. Get out of the studio, and if you can, go outside. Let nature reset your hearing a bit. Breathe some air. Fart. Have some water. Remember, caffeine and sugar screw with your blood pressure, and that screws with your hearing.
8) Be REALLY CAREFUL who is in the studio with you while you’re mixing. Choose your company wisely. Get the band out of the room, or out of the studio totally, if possible. I can’t emphasize how important this is. To do a great mix, you have to be in a really good headspace. The wrong comment at the wrong time can totally fuck you up. I used to tell my assistants that they weren’t allowed to say ANYTHING unless there was a technical issue or an emergency. And if I said anything positive—“This sounds good”—that they had to agree even if they thought it sounded like shit. AND if I said anything negative—“This sounds like shit”—they had to either shut up or say, “I dunno, I kinda like it.”
The band... it really is for everyone's good that the band isn't at the mix. The Beatles used to go to the pub around the corner from Abbey Road. Encourage the band to go to the pub. I used to work out of a studio that was adjacent to a strip club. It was a great setup. Give the band a bunch of singles and get that mix done!
9) If you find yourself obsessing, or spending too much time on an element, STOP. Stop right there. If you can get away for a few hours, or even a day, then do that. If you’re stuck working because of a schedule, take a good break. When you return, if you’re still getting caught by the problematic track, mute it out—even if it’s the lead vocal or the kick or the bass (or the hi-hat) and work on something else for a bit. Let your focus widen to the rest of the song. Maybe tell your assistant (they can be handy, these assistants) to sneak the problematic track back in when they get a sense that you’re somewhat normal again.
There is no tenth rule.
Thanks for reading. Feel free to send a message.
Float above your mix like a cloud. Don’t fall into it like a raindrop.
Truth be told, the first concert I ever experienced was Vanilla Ice at the ripe age of 11.

It was a Philadelphia 76ers halftime show in 1989, thanks to a raffle my mother won at work. At this point in time, I hadn't formulated a musical opinion of my own. I was subject to whatever my parents would listen to in the car. My mom would usually dominate the radio, and I enjoyed what I heard. Richard Marx, Simply Red, Billy Joel, Gloria Estefan, Debbie Gibson… All the hits. I could appreciate the melody and sing along to all the songs. As fun as that was, it didn't really hit that sweet spot with me, musically speaking. My dad had been jamming this album Hold Your Fire by RUSH for about 2 years nonstop. It wasn't happy like all the other music my mom was listening to. There were these HUGE synths and drums. It was weird and dark. So close to what I was looking for.
A few months later, my brother gets this new thing called a CD player. They had been around for a couple years at that point, but didn't really take off until "anti-skip" technology was developed in 1990. The first album he got was the brand new Megadeth release, Rust In Peace. It just so happened that they were on tour and coming to Philly, so my brother asked if we could go.
ENTER: my first real concert experience. The tour was called Clash of the Titans. I wasn't really sure what I was in for, but I agreed to go cause I had heard some songs from the opening band, Alice in Chains. The venue was The Spectrum in Philly, the same place as the Vanilla Ice concert. The arena was nice enough to set up folding chairs for the concertgoers on the main floor to sit in (yes, you read that correctly). Luckily, we were nestled safely in the nosebleed section. Alice in Chains really rocked. Who doesn't like the song Man in the Box?? Apparently, the people with the Slayer shirts next to me. They didn't. I heard all sorts of funny comments being screamed at the band, like "isn't it past your bedtime?" and "time for a diaper change!". I dunno… they were pretty good to me. Next up was Anthrax. Not my cup of tea. I don't think my brother or dad liked it either, cause we spent the whole set getting hot dogs and using the restroom. The restroom at a metal concert, when you're 12 years old, can be a traumatizing experience. I had never seen someone shit in a sink before. There were no urinals. Just a big tub that people were pissing in. Jjjeezzzzusss…
We got back just in time for Megadeth to hit the stage. As the set started, I felt a tap on my shoulder. There was an elderly couple sitting behind me. They were easily in their 70s! The lady said, "That's my grandson," pointing to Nick Menza (RIP). Their set was quite enjoyable. I recognized some of the songs from my brother's CD. I remember they played the song Dawn Patrol. It's just drums and bass. I was like, "You can do that?" Mind blown. Hangar 18 sealed the deal for me. I was a fan.
I don't think I ever gave my brother back his CD after I saw this show!
After they were done, there was one band left… Slayer. I had never heard them before, and it would be pretty hard to top what I had just seen. The lights dimmed. The crowd screamed. The set opened with a song called Hell Awaits. Those thoughtfully placed folding chairs took flight and the floor turned into a no-holds-barred WWE match. People were suffering severe trauma. Those chairs got piled up in the back of the floor area and were promptly set on fire. People started fighting. Punching. Kicking. After the show, I had learned that they were doing a dance called "Mosh". Who knew??
My jaw dropped. The sound this band was making. I had never heard anything like it in my life! It was….. it was… god awful! I couldn’t understand WHY a band would take perfectly good instruments and make songs like that. We suffered thru 6 songs. By the time they played Jesus Saves, we had enough. Time to leave. We didn’t like the music, and my dad was happy to leave early and beat the traffic. That’s such a dad thing to do.
About a week later, my brother gets a new CD. Seasons in the Abyss, by that god awful band Slayer that we just skipped out on. After a couple listens, my opinion started to change. My hatred for something I didn’t understand turned into a curiosity. What I previously heard as “noise” turned into an intricate musical composition. Since that day, Slayer has been one of my favorite bands ever. It made me realize something important. Music that I understood and enjoyed from first listen is also the same music that I grew tired of quickly. The music that I didn’t understand at first, and took a while to get, ended up being the musical experience that lived with me forever.
A couple months later, Metallica released their self-titled album, I got my first Tama drum kit, and my life changed forever. You never really know how long music has been a part of your life. It's just always there. Thanks to a post from my friend Mark Lewis, I now know that my musical independence started 30 years ago today.
The next bunch of posts are going to be on production, but not about techniques or gear. The real magic of making records isn’t gear, it isn’t presets, it isn’t sound libraries—it isn’t any of that.
The real magic is found in ideas and dealing with people. And no one seems to talk about this stuff, the magic stuff. So, let’s talk about that for the next few weeks.
The Path to The Same Old Shit.
Where it all starts out is the song, and the lyrics. And this is also where most of the problems start, because writing songs is hard and writing good lyrics doubly so.
It is really easy to fall into the rut of writing the same song over and over again. There are some songs I love by U2, but overall they have about four or five songs they re-write again and again. This is true for just about every band and songwriter, and it gets worse as an artist gains success, and as an artist ages.
The more you do the same thing over and over again, the more your brain creates a path that is easy to go down. That's the Path to The Same Old Shit. Most of what I write in the next few weeks will be about breaking out of that rut, getting off that path. I'll give you a bunch of tricks and ideas, all of which absolutely work. And there will be things you can listen to, and hopefully you can steal some ideas.
Oh baby oh baby oh baby! What a fucking waste.
Lyrics are really important to me. Some people don't care, but I do. If lyrics suck, I generally think the song sucks. It pisses me off that a star artist, able to reach millions of people and positively influence lives, releases songs with "Oh baby oh baby oh baby!" sorta lyrics. Or lyrics that are a self-pity party. What a fucking waste.
But let’s start there. Let’s start with bullshit lyrics, and how to make them less bullshitty, or at least more interesting. Let’s start with true garbage: dumb love songs.
Try this:
Eliminate I and Me in your lyrics.
So many lyrics are: I love you, I’m screwed up, I’m angry, I’m heartbroken, I miss you, Do you love me, Do you miss me, I wanna rock, I’m sad.... I I I me me me. If you were hanging out with a friend, and they talked like most lyrics are written—I I I me me me—you’d be bored and think your friend is self-centered.
Yes, songwriting and lyrics can be personal, and it is self-expression, but 8lbs 4oz Baby Jesus, try not to make it all about you.
As an experiment, chuck out all the I and me stuff. Don’t use I and me. Write something in the second or third person. You/he/she/they/we/us. Or take a thing you’ve already written, swap around the pronouns, and see what you get.
You might find it a bit more difficult to express your ideas, which is good. Difficult = not in the rut. It will force you to think. And somehow, when you take yourself out of the lyrical equation, you’ll find your range of ideas opens up. You start to treat yourself as a character in a story, and so you move beyond the bounds of your own sad-ass life.
As an example of how this might work, let’s take the trite, crappy idea that is the basis of almost every shitty pop tune ever and see what happens if we swap out all the I I I me me me diarrhea for second or third person.
She left me and I loved her. <-- Yuck.
She left him and he loved her <-- I’m already more interested in the story here.
You left her and he loved her <-- hmmm.... this is suggesting a love triangle.
A love triangle. If I start with that basic idea, You left her and he loved her... I had an experience like that at the end of college. Take that and sort of flip it around:
You left her
He loved her
Sitting here across the room from him
He gets up
He walks up
Now he’s sitting across the table from him
He looks up
Doesn’t recognize him
Smiles
Do I know you, friend?
So this is turning into a country song perhaps, but right now it’s just words. It can go anywhere.
Where can this go? Well, the one guy can stab the other guy in the hand, as I would have liked to have stabbed that fucker Randy in the hand back in 1986.
Another angle: because it's hard to keep track of who is who when it's all he or him, maybe the character is talking to himself. Is a guy remembering who he was when he was younger?
Keeping the love song thing, is the old self telling the younger self he'll get over it? That he should go find her? Is it a scene from the movie "Looper" or "Good Will Hunting"? All sorts of possibilities beyond "I loved her." Heck, she isn't even in the lyrical picture anymore.
Write something without I and me. See what you get.
In each of these posts I’ll include a song or songs to listen to, so you can get some ideas. Actually, I want you to steal ideas.
Remember, new ideas are old ideas in different pants. The best thing you can do is listen to different music, read a lot, watch movies, and in general add LEGO blocks to your brain for your imagination to play with. So, listen to this brilliant country love song and steal steal steal.
George Jones is 8lbs 4oz baby Jesus
This is such a great song: "He Stopped Loving Her Today," written by Bobby Braddock and Curly Putman. George Jones recorded it as basically a broken man, and at the time he thought it sucked. It won a CMA and a Grammy. It's considered one of the greatest country music songs of all time. In fact, there's a whole damn book about it.
It's written mainly in the third person, until it switches about halfway through, and in that moment the "secret" of the song is revealed.
There’s so much to learn here. There are wonderful, little, descriptive details in the lyrics—the letter in which he underlined "I love you" in red—so killer. Steal that idea. Rather than telling us, “he died,” the lyrics inform us that “they placed a wreath upon his door.” So, not only is there a death image there, but there’s a Christmas image. Is he happy now that he’s dead? Is there a celebration somewhere?
Structurally, the chords are the same thing over and over again, but listen towards the end, when she visits him, and there’s a background vocal that sounds ghostly... did she visit him in body, or in spirit? Is she dead too? Were they married? What the hell happened? What’s the story?
That ambiguousness is key. When you can’t quite figure something out, you tend to keep thinking about it. You’ll notice almost always that what stays in your head are thoughts which are incomplete. You remember questions more than you remember answers.
Next week we’ll look at lyrics and songwriting from another angle, with another trick to try.
Speaking of questions, feel free to get in touch and ask them.
Luke 9/18/20
The masterfader is the most expressive fader on the console. Use it to boost the dynamics of your records and bring drama and excitement to things. Less stupid sex jokes than the first of the series, but a good trick involving tape on the masterfader, and some sneaky ideas.
What else will 2020 bring us???
In a rare moment, Dan Korneff has managed to pull himself out from under all of the projects he’s working on, including NEW PLUGINS, and make a quick video about using the Talkback Limiter as a Bass Amp.
It’s fun, there’s beer, and you get to watch a master engineer at work. All in less than five minutes!
Check out our Talkback Limiter here.
There’s an art to a great, long fadeout on a song. Here’s a video with a bunch of ideas on what makes a great fadeout, and things you can do to clean up your masterfading... and yes, there are stupid, pseudo-sexual jokes involved.
Tropical Storm Isaias, formerly known as Hurricane Isaias, took out my power and internet for 4 days, hence no blog for last week.
But a few experiences I had last week reminded me of something I've wanted to write to you all about, and this is very business oriented, and not just audio business oriented. Life oriented.
So, I couldn’t start my generator. Tried everything, no luck. I did some googling and found a guy named Peter, from Franklin Square Mower Repair. I called him.
Predictably, he was really busy fixing generators, and didn’t know how soon he could get to mine. And then an hour later he called me and said he was in my neighborhood and would pick the generator up, and that he might be able to fix it by sometime tomorrow, but he was really busy.
He came by, was a super nice fella, we loaded the generator into his truck and off he went.
He called me two hours later. “I fixed it. It was a broken spring in the carburetor diaphragm. I’m dropping it off in 20 minutes.”
He charged me $97. I gave him $125.
Now THAT’S how you run a business. I’m hoping I break something else just so I can pay Peter to fix it. Maybe he fixes vintage AKG D224e microphones I broke 25 years ago?
And that is exactly how you want to run your studio, or your freelance production career, what have you. You want to OVER DELIVER. You want to surprise and delight people by giving them a great experience. You want to make people feel special and well-cared for. Do this, and you’ll stick in a sweet spot in their brains. They will be your fans. If Peter called me and said, “I need help burying a body,” I would reply, “Let me get a shovel and a tarp. Wanna get a beer after?”
On the other side of behavior is my 17 year-old son.
He’s actually a really good kid overall, but there are days when his attitude about helping out around the house is so fucking poisonous that I feel like calling Peter and saying, “Remember that shovel and tarp we hid in your crawl space? Can I pick them up in 20 minutes?"
It isn’t any one thing my son does; it is a combination of a lot of little things. Complaining constantly. Moving really slowly when I ask him to get a tool. Replying sarcastically to just about anything...
Me: We need to pour a concrete slab for the studio air conditioner.
Him: Fun.
Me: We have to cut up these tree branches.
Him: Fun.
He, of course, says, “Fun” in the flattest, most unenthusiastic way possible. If there is a way to make something that is already miserable—cleaning out a flooded basement, perhaps—even more miserable, my son is the man. The Steve Vai of teen bullshit.
Have an attitude like that, and no one will want to work with you. Which is, of course, my son’s point—he’d rather not work. But if you’re an engineer, or a producer, or a musician, you’ll kill your career dead with a shit attitude.
I have worked with a lot of assistants and engineers. The ones who were gung ho, willing to do anything, never complained, and always had a smile, are the ones who have careers. The arrogant ones, who sighed whenever they were asked to coil mic cables, who thought they were better musicians than anyone in the band, who were always on their phones during sessions... they’re not in the audio business anymore. I’ve never met a single world class engineer who wasn’t pleasant to be around. Your experiences might vary, but I stand by my statement.
Your attitude is more important than your abilities. You're more valuable knowing nothing and having a great attitude than you are knowing everything and being a shit about it.
Remember, all businesses are people businesses. Audio is a people business. If people don’t want to be around you, you won’t be in it.
We hear a lot these days about brands and branding, about “personal brands.” You might have a studio or a business, and you have a logo, and think that’s your brand. It’s not.
Your logo isn’t your brand. A logo is merely a symbol, a trigger.
Your brand is what people think of you, when they think of you. Your logo triggers that thinking.
You want your brand to be what Peter from Franklin Square Mower Repair has going on. Have a great attitude. Overdeliver. Delight people.
Ponder what people think about you and your work. How do you make that thinking even more positive?
Luke
8/12/2020
Last week, I managed to screw up our email automation and bombarded a bunch of people with lots of emails. A minor screw-up, really. But it got me thinking about mistakes made in the studio and the best way to handle them.
You Will Screw Up
I actually managed to fall onto a tape deck and break it
You will definitely screw up in the studio. You'll erase something, or record it at the wrong rate and resolution, or spill a drink into the console, or say something dumb to a client, or knock over a valuable musical instrument, or snap the head off a microphone, or step on a mic clip, lose files, tapes, or backup drives, or completely forget a session and show up late or not at all, or record something like shit, or fall onto a tape deck... that's a partial list of dumb stuff I've managed to do.
How you respond to your screw-ups is critical to your career. It’s critical to your business and personal relationships. Some of you are young and starting out; some of you are older and established. Screwing up and dealing with the ramifications never stops being an issue in life. I’m 57. I haven’t screwed up today yet, but it’s still before noon.
If You’re Responsible, Take Responsibility
If you’re the lead engineer in the session, any mistake that happens is your fault. You’re in charge. Your job is to catch everything.
I was producing and engineering a session, and Steve, the assistant, rewound the tape after a great, final take of a song—it took hours to get this particular take—and then recorded a new song over it. He discovered the mistake about halfway through. He slid his chair over to mine, face bright red and tears welling in his eyes, and said, “I’m so sorry. I think I just recorded over the last song."
He went to hit the STOP button, but I grabbed his hand. “No, we’ll let them finish. Maybe they’ll get a good take of the next song,” I whispered to him.
The band didn’t get a good take. And regardless, we had to tell them we erased an entire song.
“Guys. I’m sorry. We made a mistake. We recorded over “No Reason Why.” We’ll have to do it again. It’s my fault."
The band’s collective jaws dropped open and all their eyes went wide. Then, a remarkable thing happened. Steve, the assistant, spoke up:
“It was my fault. I was running the tape deck." A tear ran down his cheek.
The band was quiet. Then Mike, the lead singer, blurted out in his southern drawl, “Well, hell, son. I guess we’re gonna have to fire your ass.”
And then the whole band burst out laughing—we all started laughing. Hugs all around. Steve didn’t get fired. We redid both songs and put out an EP a month later.
Take responsibility if you screw up. Just doing that alone will defuse any sort of anger or tension. Not always, but it works better than the next option:
Mistakes are Forgivable. Lying Isn't
When you screw up, you’re a screw-up. When you lie, you’re untrustworthy and dishonorable. How would you like to be labeled?
Lying might seem like a good idea in some situations. Maybe it's a good idea to lie if the truth would really hurt someone's feelings...
A: What do you think of my haircut?
Choose one or the other:
1) Looks great!
2) Makes your head look small.
I’d usually go with #1.
However, in situations involving your behavior and things you’ve done, lying is almost never a good idea. In fact, what happens is you get caught and the lie becomes a bigger problem than the mistake ever was.
My junior year of college I was the darling boy of the theatre sound department at Purdue University. I was de facto #2 to the department chair, Rick, as an undergrad, and was doing all sorts of cool work. I was also 20 and a dumb kid.
I was collaborating with another student on music for a play and we were behind in getting it done. I was also acting in the play and was exhausted from days of classes, nights of rehearsal, working in the studio until 5am, and then repeating that endlessly for weeks.
We were supposed to deliver a set of rough effects cues and music for rehearsal the next day, and we just couldn't get it done. So, rather than just tell the director that—and she was nice and she would have totally understood—we decided to tell her the tape deck broke.
Dumbest lie in the world. The director, of course, ran into the department chair the next day, mentioned the broken tape deck to him...
Later that afternoon, I bopped into his office just to say hi. Rick was sitting at his desk. He looked up at me and said very, very quietly, “You lied to Marsha. A dumb lie, too. And after all the times we’ve talked about honesty... I’m so disappointed in you. Get out of my sight."
This remains one of the worst days of my life. I stumbled home to my apartment crying. I was devastated. Rick was/is perhaps the most important mentor I’ve ever had. To lose standing with Rick was awful. I still cry thinking about this almost 40 years later.
I had to apologize to Marsha and Rick, and regain their trust. I had to walk around campus, live life, and complete the rehearsal and run of the play, all while feeling simply awful about myself. Looking back, I still can’t understand why I lied, and older me says to younger me, “Dude, get over it.”
The only reason I can say that now is because I eventually did get over it. I got over it, Rick got over it. Marsha got over it. But the lesson stuck. I’ll NEVER get over that lesson.
Forgive Your Dumb Ass.
It's important to forgive yourself after a screw-up. We often hold ourselves to much higher standards than someone else would, and we beat ourselves up. We relive the mistake or the problem, play it over and over again in a tape loop in our heads, giving it all sorts of energy and power. Meanwhile, the “victim” of the screw-up—the band, the singer, the boss, our partner—has forgiven us and has already moved on.
But we tend to hang onto the mistake. Thanks to that, we’re off our game, we make more mistakes, we feel worse and worse, and the shit becomes a death spiral.
So, damn. Forgive yourself. If you’ve been forgiven, then let yourself be forgiven. If you haven’t been forgiven, but you’ve taken responsibility, then you’ve done all you can. Move on. That’s it.
Easy to write about, but hard to do. I have no formula here—I still walk around for a day kicking myself sometimes. But, perhaps the next thing might be a help...
Mistakes = Growth
Making no mistakes means you’re doing the same old thing a lot.
You seldom screw up commonplace things you do every day. Most people don’t take a dump and somehow manage to break the toilet and flood the bathroom, right? Or make a peanut butter and jelly sandwich and stab themselves in the eye. There can be accidents in common situations, but generally not screw-ups of thinking or behavior.
So, that means most of the time when you are screwing up, you’re in a new and unfamiliar situation, and that’s good. It means you’re getting somewhere. Perhaps you’re using a new piece of equipment or software, or you’re in a totally new studio environment. Maybe you’re working at the edge of your performance envelope—you’re working at a level you’ve not worked at before. Screwing up in a new situation is evidence of growth. It’s good to make mistakes upward. Try to have new and harder problems. That’s growth.
A toddler learns to poop. One problem solved; on to how to mic a drum set. Forward ho!
If you’ve not done something before, there is a much better chance you’ll do it incorrectly. In many cases, the only way to learn the correct way to do something is to screw it up a few times and learn. Anyone with experience has made mistakes to get that experience, and anyone who tells you, “Don’t do that, do this instead,” has, in fact, done exactly the thing they’re telling you not to do.
A long time ago I was recording vocal tracks at a studio in Queens. It was one of my first times as the lead engineer, and one of my first times working with an assistant. She was doing the punch-ins—they were, for the most part, easy. I noticed she was resting her finger on the RECORD button, and for this particular tape deck, I knew this was a bad idea. From experience, I learned that even the slightest twitch on that button could pop the deck into record, so I made it a habit to never get my finger near that damn button until I was close to the punch point.
I said to the assistant, “Don’t do that. You’ll twitch and put the deck in record and erase something important. Put your finger somewhere else."
She was a new assistant, and sometimes people new to things don’t fucking listen. And sometimes people have something to prove... two minutes later she twitched the deck into record and took out two seconds of a vocal part that was really, really hard for the singer to nail. And I was PISSED. I took over the deck remote, banished the assistant from the control room, and made her clean the bathrooms, vacuum the lounge—all the stuff an intern would do.
She handled it wrong. She should have listened to me, obviously. But also, she was new to being an assistant. She didn’t have enough experience with that deck. She hadn't screwed up a punch until my session.
And I handled it wrong. I should have communicated more clearly. I was new to assistants. I didn’t know quite how to talk to them, how to keep an eye on them without keeping an eye on them, how to exercise my authority.
But there we both were, making new mistakes we hadn’t made before. She was assisting on a session, finally. I was engineering the first album I’d ever produce. We were both moving up and screwing up at new and better things.
And for me, the screwing up continued throughout my producing and engineering days. Now I’m screwing up on new and different things, from parenting to email campaigns.
So, keep screwing up. Take responsibility when you do. Learn from it then move on to the next new screw-up. That is what I’ll be doing for the rest of the day! Week! Month! Year! Life!
Luke 7/29/2020
Last of this series on compressors. Next week we move onto something new... Who knows what it might be!
The Release is the hardest parameter on a compressor to set. It can be hard to hear the effect of release—depending on the other settings of a compressor, it can be almost impossible. It’s also really difficult to explain what it is and how to set it. In fact, you might want to just skip all this written stuff and go to the video I made here. That might be more helpful. This was a hard post to write.
But it’s an important one because once you understand release and have an idea of how to find good settings for it, it becomes your bestest buddy in dynamic processor land.
Understanding Release
Another name for release is recovery time. Another way to think of it: how long it takes the compressor to recover to zero gain reduction.
Release is how quickly the compressor stops compressing once the signal falls below threshold. Think of attack as delaying when the gain reduction STARTS and think of release as delaying when the gain reduction STOPS. Attack lets the transient get through BEFORE gain reduction happens. Release keeps the gain reduction in longer—PAST when it should have stopped.
Yet another way to think of it: the signal goes OVER threshold and the compressor kicks in, the signal goes BELOW threshold and the compressor kicks out. Release can make the compressor stay kicked in even though the signal is below threshold.
Another way—back to the dog analogy. You decide that when the dog goes past 10 feet, you’re going to pull him back. A short release would mean that once the dog returned to 10 feet, you would stop pulling on him. A long release means that even though the dog has returned to 10 feet, you still keep pulling on him.
Maybe this will help...



With a fast release, the compressor stops the moment the signal goes below threshold. With a slow release, it keeps compressing for a period of time even though it’s below threshold, then gradually stops. How long does it keep compressing, you ask? However long the release time is set for.
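If you happen to think in code, here’s a toy sketch of what’s going on under the hood. This isn’t the circuit of any real compressor, and all the names are mine; it’s just the standard textbook one-pole smoothing idea: the gain reduction chases a target, fast on the way in (attack) and slow on the way out (release). It assumes you already have the signal’s level envelope in dB.

```python
import numpy as np

def gain_reduction_db(level_db, threshold_db, ratio, attack_ms, release_ms, sr=44100):
    """Smoothed gain reduction for a signal's level envelope (all in dB)."""
    # Static curve: how much we'd LIKE to turn down at every instant
    over = np.maximum(level_db - threshold_db, 0.0)
    target = over * (1.0 - 1.0 / ratio)

    # One-pole smoothing coefficients from the attack/release time constants
    a_att = np.exp(-1000.0 / (sr * max(attack_ms, 1e-3)))
    a_rel = np.exp(-1000.0 / (sr * max(release_ms, 1e-3)))

    gr = np.zeros_like(target)
    for n in range(1, len(target)):
        # More reduction needed? Chase it at the attack speed.
        # Less needed (signal fell below threshold)? Recover at the release speed.
        coeff = a_att if target[n] > gr[n - 1] else a_rel
        gr[n] = coeff * gr[n - 1] + (1.0 - coeff) * target[n]
    return gr
```

A long release_ms makes the gain reduction hang around after the signal drops below threshold, which is exactly the "keep pulling on the dog" behavior above.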
How Long Do You Set the Release For??
Many compressors have automatic releases—the LA-2a, dbx 160s, etc. A very basic explanation of auto release: the more powerful (louder) the signal and the faster its transient, the shorter the release will be on an auto release compressor. This isn’t a bad way to think when you’re manually setting release times. If you’re dealing with fast transients, you’ll tend to set the release shorter. With slow transients, you’ll tend to set the release longer.
Compressing drums, you’ll set release to a short time. Compressing vocals, probably a bit longer. But it isn’t that simple.
Depending on how you set the release, you can bring out the little details of a signal—such as the breaths of a singer between phrases, the ring and resonance of drums, the resonances of a guitar or bass, or the reverb and echos of a room.
All audio signals are a mix of loud stuff and quiet stuff. The quiet stuff is usually covered up by the loud stuff, and when the loud stuff goes away, the quiet stuff has a better chance of being heard.
If we whack a snare drum in a room, the echo and reverb of the room are much quieter than the initial hit of the snare. If we compress the hit of the snare and we have a fast release set, the compressor pushes down the loud snare hit and then the overall signal is brought up by the makeup gain, so essentially, we have made the quiet things louder.
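Here’s the arithmetic with made-up numbers (the dB values are just for illustration). With a fast release, the gain reduction is already gone by the time the quiet room tail comes through, so the tail only gets the makeup gain:

```python
snare_hit_db = -6.0     # the loud hit
room_tail_db = -30.0    # the quiet reverb tail after it

gain_reduction_db = 10.0  # fast release: only the hit gets pushed down
makeup_db = 10.0          # then the whole track comes back up

print(snare_hit_db - gain_reduction_db + makeup_db)  # -6.0: the hit ends up where it started
print(room_tail_db + makeup_db)                      # -20.0: the room is now 10 dB louder
```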



This works on everything. Shorter releases bring up the quiet stuff. Now, it might not be apparent when you’re using a short release on a compressor on the stereo bus because the waveform is so complex. You probably won’t be able to do much with a short release on things like strings and keyboard pads.
Setting the release longer pushes those quiet things down and really long settings tend to make the entire track get quieter and less lively. On vocals, and on instruments that you want to have a more natural quality, you’ll tend to use longer release times.


I made a video showing exactly how release time affects the quiet stuff on a recording, as well as how to use the Pawn Shop Comp on vocals (and how to fake an LA-2a type sound).
Some Ideas on Setting Release
On drums and things of a percussive nature, setting release is pretty simple, and with something like our Talkback Limiter, the release is fixed at "short as hell," which makes the plugin rather perfect for working with drums.
On sounds that are not as percussive, release can be a lot trickier to set. It can be hard to hear the effect it is having on the signal, especially as things get more and more complex.
A way to nail the release time consistently
I tend to sweep around with the release a bit, often overshooting with too long and too short release times to “acclimate” my ear to the effect the release is having, and then hone in on a setting that works.
Watch your meter when you’re setting release. Its movement should correlate to what you’re hearing. Fast, percussive music should have that meter jumping. The meter should move much more slowly on slower, less percussive tracks and music.
The video below is the same as the one I linked to up top. It's long, but it’s really thorough. I cover release on a bunch of different instruments and then on a compressor across a mix.
Lots of compression with short releases will always sound very “effecty,” like a Black Keys or a Radiohead record, and this is easy to do. Getting a compressor set so that it’s invisible in the track is much, much harder.
This post has covered a lot of ground. The key to release is to control that quiet stuff after the main part of the signal, and to watch that meter!
We’re spending the next few weeks looking at compressors...
Punchy punchy punchy! This compressor is punchy! That compressor is punchy! Compressor X will make your mix punchy! Compressor Y adds that vintage punch! Yada yada yada!
Today, we dissect PUNCHY, how compressors make things punchy, and what you can do with a compressor to control punch. So, a bunch of theory, not a lot of history, some videos, and a thing to try towards the end.
What is Punch
Punch, as we hear it, is sort of a physical quality to a sound or instrument. It tends to cut through the mix and it tends to be bright. There might be almost an audible “click” to the sound. Punchy basses and kick drums kind of hit you in the chest at louder volumes. Punchy tracks are energetic.... I can’t describe this... argh!!
Punch = Transients
Punch is connected to and comes from the transient of a signal.
A transient is the initial attack of a waveform. The attack, or transient, is the area of a wave envelope when the sound of the instrument goes from no signal to as loud and as powerful as it will ever be.
Big words. Don’t worry about it. You’ve seen waveform envelopes on your DAW, they look like this:

The attack, or transient, is that highlighted spot at the beginning.
The faster and louder the transient, the more punch a signal has. Drum and percussion transients are very fast and loud, so drums are usually punchy.
Bowed instruments have very slow attacks, which is why you’ll never hear an engineer say, “Wow! What punchy strings!” Bass and guitars have a very variable attack depending on whether they’re played with a pick (pretty fast) or fingers (pretty slow). A slapped bass, however, has a very fast transient.
Vocals tend to have slower transients. However, certain consonants have fast transients—T, D, B, K, P. Consonants in general have shorter attacks than vowels, which are typically slow. Rap and hip-hop vocals typically sound punchy because there is a lot of consonant activity. Vocal parts that are sung more have more vowel activity and thus less transient activity.
Think of it like this: the more it’s like a drum, the faster the transient. The more it is like a violin, the slower the transient.


If there are a lot of fast transients, the instrument or sound will sound punchy. If there are slow transients, it won’t.
How Compressors Make Things Punchier
To go back to our dog on a leash analogy from last week: If we are going to stop the dog from running past 10 feet, the attack would be how fast we pull back on the leash once the dog has gone 10 feet. If we pull it back immediately, that is a fast attack. If we wait a moment and then pull the dog back, that is a slower attack.
Compressors make things punchier by reshaping the waveform a bit. The transient is, basically, made bigger, which makes the sound subjectively punchier.
As you know (and if you don’t know you’re about to find out) a compressor kicks in and starts to work when the signal goes over the threshold you’ve set. Now, the very first part of the signal that goes over the threshold is..... the transient. If the compressor kicks in immediately, then it will start to reduce the gain beginning with the transient.
But what if the compressor doesn’t kick in immediately? In other words, the signal goes over the threshold and the compressor kicks in a fraction of a second after. The transient gets through untouched, the gain comes down after it, and the waveform is reshaped. It looks like this:


This is the basic way a compressor adds or creates punch: it lets the transient through.
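Here’s a crude cartoon of that in code, with numbers I made up: a short burst where the first 5 ms is the loud "click." If the attack time is longer than the transient, the click passes untouched before the gain comes down:

```python
import numpy as np

sr = 44100
t = np.arange(int(0.05 * sr)) / sr
env = np.where(t < 0.005, 1.0, 0.5)        # 5 ms transient, then the body
sig = env * np.sin(2 * np.pi * 1000 * t)   # a 1 kHz tone shaped by that envelope

attack_s = 0.010                           # 10 ms attack: slower than the transient
gain = np.where(t < attack_s, 1.0, 0.5)    # gain reduction only lands after the attack

out = sig * gain
print(np.abs(out[t < 0.005]).max())   # ~1.0 : the transient sneaks through untouched
print(np.abs(out[t > 0.02]).max())    # 0.25 : the body behind it gets pulled down
```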
The attack time of a compressor is a big determinant of whether or not it is punchy. Some compressors have fixed attack times, others have program dependent attack times or manually adjustable attack times, and some have a combination of program dependent and manually adjustable. But there are other factors.
The ratio can affect punch. Low ratios typically result in less punch—the difference between the transient and the signal following it is less. Higher ratios tend to have more punch. But attack time can affect this: a high ratio with a very, very fast attack time will not be punchy at all—in fact, it will sound dull and dead if it is active on the signal for a long period of time.
Knee is another factor that affects punch. Knee... how to describe knee....
When a signal goes over threshold, the compressor applies gain reduction, which is set by the ratio. If it applies that ratio all at once, that is a hard knee. So, if the ratio is set to 8:1 and the signal goes over threshold, the compressor clamps down at full power, 8:1.
Soft knee compressors or settings gradually apply gain reduction in proportion to how far the signal goes over threshold. So, if a soft knee compressor is set to 8:1, and the signal goes a little over threshold, the compressor clamps down a little, like 1.5:1. As the signal goes up, the compressor hits harder, so the gain reduction ratio increases... 2:1, 3:1, 4:1, etc., until it hits 8:1. Another way to think of it is the compressor is trying to “ride” the gain like you might, with your hand on the fader, pulling the fader down further as the signal gets louder.
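For the code-minded, here’s a sketch of the static curve using the common textbook soft-knee formula. This is a generic illustration, not how any particular unit is calibrated, and the knee width parameter (in dB) is my own name for it:

```python
def output_db(in_db, threshold_db, ratio, knee_db=0.0):
    """Static compressor curve. knee_db = 0 is a hard knee; bigger values
    fade the ratio in gradually around the threshold (soft knee)."""
    if knee_db > 0 and abs(in_db - threshold_db) <= knee_db / 2:
        # Inside the knee: gain reduction ramps up smoothly toward the full ratio
        x = in_db - threshold_db + knee_db / 2
        return in_db + (1.0 / ratio - 1.0) * x * x / (2.0 * knee_db)
    if in_db > threshold_db:
        # Past the knee (or hard knee): the full ratio, all at once
        return threshold_db + (in_db - threshold_db) / ratio
    return in_db  # below threshold: untouched

# Threshold -20 dB, ratio 8:1. The hard knee does nothing, then clamps at
# full strength; the 10 dB soft knee starts bending early and only reaches
# the full 8:1 at the top of the knee.
for x in (-26, -23, -20, -17, -14):
    print(x, round(output_db(x, -20.0, 8.0), 2),
             round(output_db(x, -20.0, 8.0, knee_db=10.0), 2))
```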
Guess which tends to sound more punchy: Soft knee or hard knee?
Hard knee sounds more punchy, because it more radically reshapes the waveform. Soft knee compressors tend to have very, very fast attack times, and Opto compressors like the LA-2a have almost instantaneous attack times, as well as a very soft knee. That is why they sound good on vocals and bass: they “ride” the gain well. But if you put them on drums, they typically dull things up (that doesn’t mean you shouldn’t put them on drums and see what they sound like, though).
Making a Compressor Sound Punchy
First of all, some compressors won’t ever sound particularly punchy, like the LA-2a or any of the dbx compressors that are considered “Over Easy,” which is dbx’s term for soft knee. Original dbx 160s are plenty punchy and sound great on drums. If you have a compressor with a switchable knee, setting it to hard knee will usually get you more punch.
Some compressors are punchy no matter what you do. SSL channel strip compressors and bus compressors are wonderfully punchy. Most compressors that are labeled as FET (Field Effect Transistor) are punchy. Our Talkback Limiter is FET, 100:1 ratio, fixed attack time, and is a punch monster.
Compressors that are really good at riding gain and making things sound even are usually not all that punchy. However, if the compressor has an adjustable attack, then it is quite possible to set the compressor to even things out AND increase punch. This explains a lot of the versatility of our Pawn Shop Comp. It is FET and has a really wide range of attack and release settings, which makes it adjustable for almost any sort of compression task... and if you set it right it is very punchy... or slappy ; )
Set It Punchy
I hesitate to give exact numbers because setting compressors right isn’t the same as Neo seeing The Matrix for what it truly is.
In audio, everything depends on what you want and the gear you have at hand, so these are just guidelines to get you into the right neck of the woods. USE YOUR EARS!!
Set your RATIO to at least 4:1, or even higher. 2:1 will never be all that punchy, unless you have an ADR Compex, in which case it will always be punchy no matter what you do to it. Feel free to adjust the ratio up or down as you zero in on the sound you want. In some cases, a tiny ratio adjustment can make a big difference.
ATTACK is the key setting here. Ideally, you end up setting it just after the transient gets through. What you’ll find is you can set it fast for things with fast transients, like drums, but as the transient gets slower, you’ll need to set the attack slower. If you think about this for a moment, it should make sense.
Here is my usual way of finding the right attack for punchy sounds.
I set the ratio up there—6:1, 8:1, something like that. I set the release on the fast side so it isn’t interfering with the cycling of the compressor (I’ll cover release time next week).
I set the attack to its fastest speed and then, as I play the track, I gradually set it slower. There comes a point where the instrument or the mix sort of “jumps” out at me, and there it is.
I made three videos—one for drums, one for guitars, and one for the entire mix:
Good lord! Too many words again! I try to write shorter but I want you to know more, I want you to see this stuff in your head.
Anyway—that is it for now and for next week... compressor release times! I am excited about that! It is a weird thing to be excited about, but not if you love making records.
The next few weeks of Working Studio are going to be about compressors and limiters. It's appropriate—we just released the Pawn Shop Comp 2.0 as well as the Talkback Limiter a month ago.
By the end of this series, you will have a much better idea of how to apply compressors and limiters on your recordings. What you want to develop is a framework for how to think about dynamics and dynamic processors, so your thinking on using them is clear.
This is a long one and it has recording techniques and some HOW TO videos in it. Forward ho!
A compressor... a limiter... the differences between the two... This is all hard to define. I learned in school that a limiter was a compressor with a compression ratio of 10:1 or higher, and that a limiter had a fast attack and release. But then, in the real studio, engineers were always saying, “Compress it,” and then patching in a piece of gear that had the word Limiter in its name. What the hell?
I think of compression and limiting as verbs, not nouns. Not as things or gear. Compression, compressing, limiting—these are things you do to an audio signal.
Time for an analogy.
Let’s talk about walking a dog.

You’re walking your dog and he runs around like a f***ing maniac. He knocks over people, trips you up, almost gets hit by cars. It sucks. Obviously, you have to keep the dog contained somehow.
You get a piece of strong rope about 20 feet long. You tie it to the dog and off the two of you go. He runs around all he wants until he gets 20 feet away from you and BAM! The rope kicks in, and if you’re strong enough, the dog stops at the end of it. This is LIMITING.
If you are really strong, or you tie the rope to a tree, and the dog runs 20 feet, then he is effectively stopped cold at 20 feet—like hitting a brick wall. If you’re not really strong, when the dog runs 20 feet and hits the end of the rope, you’ll be yanked a bit, until you dig in your heels and fight the pull of the dog.
The threshold is 20 feet. The ratio is how strong you are. Big strong guy is 100:1. The dog is stopped with the force of a brick wall. Old grandpa fella is 2:1. The dog goes 20 feet and keeps going, slowed down somewhat by the dragging body of the old guy tangled up in his walker.
You decide that almost breaking the dog’s neck whenever he gets 20 feet away is cruel. What you want is the dog to stay about 10 feet from you. He can come closer, he can go further, like when there is something interesting for him to sniff, but for the most part he bounces around about 10 feet away from you. Sometimes 7 feet, sometimes 13 feet, 2 feet, 16 feet, etc. He feels a fairly constant, gentle pressure. You chuck the rope, get a bungie cord, cut it about 10 feet long, attach it to the dog, and off you go.
When the dog is within 10 feet, the cord doesn’t apply any pressure. If the dog goes out past 10 feet, the cord applies some resistance to restrict the dog's movement. Can the dog go out to 20 feet and beyond? Sure can—but there will be pressure on the bungie pulling back on him. This is COMPRESSING.
The threshold is 10 feet. The ratio is how much resistance the bungie cord puts up. A pretty weak bungie cord is 2:1. For each 2 feet the dog wants to go, he only gets to go 1 foot. 5:1 bungie cord? For every 5 feet the dog wants to travel, he only gets to go 1 foot.
If the dog wanders away from you from 5 feet to 15 feet or so, you can see he has bungie pressure on him a lot, gently restricting his movement. The bungie only works when the dog goes further than 10 feet from you—only when it passes the threshold. The stronger the bungie, the less gentle the pressure is. At 2:1, the dog has to have double the energy to go one foot. At 5:1, the dog needs 5 times the energy to go 1 foot.
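In numbers (the dB values are made up for illustration), the ratio is just dividing whatever goes past the threshold:

```python
def overshoot_after_db(input_db, threshold_db, ratio):
    """How far past the threshold the signal still gets, after compression."""
    over = max(input_db - threshold_db, 0.0)
    return over / ratio

# A peak 8 dB over a -20 dB threshold:
print(overshoot_after_db(-12.0, -20.0, 4.0))    # 2.0  : 4:1 bungie, gentle but firm
print(overshoot_after_db(-12.0, -20.0, 100.0))  # 0.08 : 100:1 rope, a brick wall
```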
Do you see how limiting and compressing are similar and how they’re different, as well as how there is a bit of blur between the two?
Limiting is when you keep a signal from going past a certain point.
Compressing is when you restrict the overall movement of a signal.
A signal that is limited occasionally goes over the threshold.
A signal that is compressed is over the threshold often.
Limiting is applying a lot of restriction to the signal occasionally.
Compressing is applying some restriction to the signal often.
So let’s apply this to some audio problems.
Problem: Drummer is doing lots of fancy ghost notes and they sound really cool, but in the mix you can’t really hear them.
Ok, let’s think about this for a moment. Here is what the signal looks like to me:

And when we add the rest of the mix, it covers up the ghost notes.

SO... do we want the compressor on a lot, or a little?
Only a little, right? We want it to restrict the hard hits so that when we use makeup gain, the level of the ghost notes comes up.
Do we want the signal usually over the threshold or occasionally over the threshold?
We want it occasionally over the threshold—when the snare is hit really hard.
So this application is LIMITING. The gain reduction happens occasionally for maybe a split second—like a rope. I visualize the whole thing happening like this:



I made a video on this, using our Talkback Limiter.
Problem: Bass player is really inconsistent in volume and the part doesn’t sit right in the mix.
Think about it... it’s sort of like the dog is running around too much, and we just want to restrict that movement. Do we need a rope or a bungie cord? Do we want the signal occasionally going over the threshold or often going over the threshold?


This application is COMPRESSING. The gain reduction happens often, almost continuously, like a bungie. Again, I see it in my head like this:




I made a video for this, too, using our Pawn Shop Comp 2.0.
Questions and Closing Thoughts
You might be thinking, “Ok, so this vocal track is all over the place, and I can’t control it at 6:1 no matter how low I set the threshold, but I can at 12:1. Is that compressing or limiting?"
I’d probably call that compressing, but who cares? What is important is that you understand what you have to do to get it under control and you have a thought process, you’re not just turning knobs until it sounds good.
You might be thinking, “Ok, so I set the Pawn Shop Comp to 2:1, and the threshold really high, and the meter jumps a little bit every now and again. Is this limiting or compression?”
I’d argue that you’re really not doing much of anything, but if it sounds good then it is good. I’d probably call that limiting. Usually if the unit isn’t kicking in often, I think of it as limiting.
Is the Korneff Audio Talkback Limiter a limiter? Sure is—100:1 ratio and a super fast attack and release. Can you compress with it? Sure. If you run a drum set through it and you drop the threshold and pin the meter you’re definitely compressing it.
Can the Korneff Audio Pawn Shop Comp do limiting? Sure can. Set it with a fast attack and release, set the ratio high and the threshold so that it occasionally kicks in and you’re limiting. The PSC is so adjustable just on the front alone that you can do almost anything with it.
Some other things to think about
A compressor with a limiter also on it is like walking a dog with a 10 foot bungie AND a 20 foot rope. Understand?
Knee? That’s like the more the dog goes over threshold, the stiffer the bungie gets. So, a Soft Knee means that there is little pressure at 10 feet, but by the time the dog is at 20 feet it might be 50:1.
Some of you might know most of this. That’s awesome. Some of you might think I’m glossing over a lot. Yep.
Some of you are thinking what about Attack? What about Release? What about gluing my mix?? All these questions.... we will get to that stuff in the next few weeks.
For now, listen to the track, the instrument, the vocal, whatever it is that needs fixing, and think about if you need to change it, (or control it) all the time, or only occasionally in certain moments.
Update from Last Week
Last week I wrote about labeling stuff.
Yesterday, my Mac decided to act up and crash out of Logic whenever I tried to open the program. Pain in my a**. I rebooted a few times, ran a disk scan, and yada yada yada. No luck.
I decided to unplug all my peripherals to see if something external was causing the problem. I suppose all of us have a lot of peripherals. What a mess! It’s like the snakes invited the worms over and served spaghetti.

And it struck me that now would be a WONDERFUL TIME TO PROPERLY LABEL EVERYTHING. Which is exactly what I did.

Much better.
The What: Learn to write in block capitals and label everything.
You should get in the habit of labeling everything with a piece of artist’s tape and writing in block capital letters with a Sharpie. It will make your sessions run faster and cut down on the number of time-wasting (and session-destroying) mistakes.
All your documentation should be written in block capitals: track sheets or track listings (if you hand write them or still use them), session notes, etc. Label your outboard gear so you know which track or channel is plugged into which compressor, eq, etc. Label anything that needs to be made clear and obvious.

Block capital letters are big and easy to read. When you first start writing them you might find they're a bit inconsistent, but within a session or two you'll develop muscle memory and your handwriting will get very uniform and even. If you have assistants in your studio, insist they write everything in block capitals. Your sessions will go faster when you're not trying to figure out someone's shitty handwriting. Your sessions will look tighter and more professional. Clients will be less inclined to dick around with your gear when it is clearly labeled.
As long as your letters are clear and simple, you don’t have to be religious about everything being capitalized. I write with a mix of caps and lowercase, but all my letters are simple and clearly written. If you can write everything in capital letters, that’s awesome, as long as you strive for neat and consistent.
Having everything labeled becomes really important if you're running your studio as a business, and especially if you're charging a block fee or a flat rate for your time, in which case you make more money if your sessions run fast and smoothly.
Ever sat at a session adjusting an eq and not hearing any effect, only to realize you’re adjusting the wrong damn eq, which is patched into a totally different channel, and you’ve totally screwed up all the work you did an hour ago? I have.
A session with a bunch of musicians, perhaps a producer and an assistant, can be a really distracting thing. Especially when the session is long, the night is late, and everyone is tired from listening to music for hours on end. If everything is neatly labeled in easy to read handwriting, you eliminate those little "Wait, what is this again?" pauses that happen when you have to switch your brain over from one activity to another. You also drastically cut down on mistakes that cost time, such as adjusting the wrong gain or threshold on a piece of outboard.

These days we have DAWs, and it would be dumb to put artist’s tape on your monitor. However, labeling things, such as which vocal is plugged into which preamp, which track is running through which compressor, is still a really good idea. It makes for fewer mistakes and a faster session.
And if all your documentation is on the computer, then strive to be clear in what you name things on digital scribble strips, how you processed a particular track, chord changes, lyrics, whatever.
Another good habit to get into is having a Sharpie and tape with you all the time during a session. When I was freelancing in NYC I was always in a t-shirt, so I would clip a Sharpie to the collar (I stole this idea from super engineer Tchad Blake). I'd leave rolls of artist tape around the studio, the control room, etc., so I could label something or write a note in any location in the studio. I made my assistants always carry a Sharpie. These days I still walk around with a pen clipped to my collar. Who wants to waste time looking for a damn pen?

A War Story or Two
I learned the lesson about writing clearly the hard way. I have awful handwriting because I have really loose joints in my thumbs. As a young engineer, back in the days of tape, my track sheets were basically unreadable. I would get bored and doodle all over the artists tape on the console scribble strip. My sessions went well, recordings sounded good, but overall, things looked like ass.
I had an overdub session one day at a studio that I was freelancing at for the first time. I was setting up the console, the 2" tape was on the deck, and there was a track sheet laying out. Another freelancer working in the complex walked in for some reason, looked at the track sheet, laughed, looked at me and said, "You write like a fucking moron, dude," and walked out.
In that moment I realized my engineering ability wasn’t being judged by the sonics of my recordings; it was being judged by my handwriting. A total stranger thought I was a moron because of my handwriting. Some other engineer might open up a tape box to remix something I had worked on, look at the track sheet, and assume everything on it was engineered like crap by a moron. And my name was on it.
No way was shitty handwriting going to shoot my career in the foot. I rewrote the track sheet and tore up the old one. I relabeled the console in big neat letters. Block capitals.
And after that... I found my sessions went faster. I made fewer mistakes. Labeling became especially important on long, 10+ hour sessions. I would tape out and label EVERYTHING, and more than once this kept me from erasing a track by accident. Clean handwriting and labeling made a huge difference.
A few years later I was serving as the house engineer at a studio for a guest engineer, who was a Grammy winner earlier in his career. An assistant was putting on the master tape the guy had brought - 24 track analog 2” Ampex 456. The test tones on the tape, essential to good tape deck operation in those days, were recorded in a half-assed manner. The assistant was struggling to get the deck properly calibrated. I looked at the track sheet. Really hard to read. Like a chicken trying to wipe dog crap off its feet. I said aloud, to no one in particular, "This is shitty engineering." I turned around and there was Mr Grammy, leaning over the SSL, adjusting something. He gave me a look. Whatever. I pulled out my Sharpie and made a nice, new, neat track sheet, and printed my name under his.
I have to write another column about maintaining situational awareness in the studio.
Anyway, for now, write in block capitals, label everything, and clip a Sharpie to your collar.
Luke 7/1/2020
Shoot us a message if there’s anything you want to read in a blog post.
We have a cool surprise for you all in a few days!
Drum bus compression has really become a "thing." Hit an online forum like Gearslutz and there's tens of thousands of posts and just as many opinions on which hardware or plugin to use, what settings, VCA vs. FET, SSL or API and on and on and on. People drop thousands on a vintage ADR Compex, and it sits idle in the rack until mixdown, and then it does the only thing it will do on the record: squash the drum bus. It's ridiculous.
But it's also really cool. Ridiculous and yet really cool: that's audio engineering. $18k on a vocal mic, and then turn the track into Cheez Whiz by running it through Auto-Tune so it sounds like someone wired up a baby duck to a sequencer? Excellent!
I love this stuff so much. So silly. So cool. Sigh.
Anyway.
Get the drum bus compression right and the kit kicks ass. Do it badly, and the whole record sounds like ass. The following is a combination of history, opinion, things to listen to, some production ideas and WTF. Here we go.
My Virgin Bus Compression Experience
First time I saw someone bus compress the drums was in the mid-eighties in NYC at some studio (Sorcerer?). I was a dumb kid at the time who wanted to be helpful but mainly was the fastest mic cable coiler in the world. The engineer had a mix going. He assigned the kick, snare and the overheads to a bus, patched that into a lone UREI 1176 that looked like it spent a long weekend with Madonna, and then routed that back into the console in mono, panned down the center. First time I ever saw Parallel Compression. Then he pressed down all four ratio buttons of the 1176 (first time I saw that, too), cranked up the input, pinning the gain reduction meter, and brought it up in the mix slowly until suddenly the drums were THERE, you know? BOOM! Instant awesome.
Of course, if you solo'd out the 1176's channel it sounded just awful. Like your mom is Lars Ulrich in drag and yelling at you. Whatever. But in the mix, it was sublime.
I don’t remember the song or the group, but the sound was very similar to Don’t Fear the Reaper, and I suspect that there’s an 1176 with drums down the center of this recording (in addition to more cowbell). I had a chat with the drummer, Albert Bouchard, about this years ago, and he said some things to indicate this was the case. If you listen, the drums are strangely mono, and especially in the middle break, when he plays a fast hi-hat figure, it sounds like the pumping of an 1176 to me.
So there's a thing to try
Squash the crap out of the drums, bring them back in mono up the center of the mix.
So, bus your drums, or a few of them, to an open bus, strap a compressor across it, and then bring it back into the mix in mono, panned down the center (or back in stereo if you wish; that’s fine, too).
This is parallel compressing, which is when you run an effected signal at the same time as an un-effected signal. In the old days it would use up faders and channels. Nowadays, faders and channels are basically unlimited, so parallel processing of all sorts is rampant. It gives you a lot of control and expression. For instance, you can automate the parallel track, and just bring it up during drum fills, during a break, etc. You can crush the drums, eq them weirdly, and then bring them up in the mix just to make a moment more interesting, all sorts of fun. Works great on vocals and solo instruments.... really, just about anything.
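As routing, the blend itself is nothing more than a sum. Here’s a minimal sketch, assuming dry and crushed are same-length float audio buffers and crushed is your bussed copy coming back from whatever compressor you strapped across it (the names are mine):

```python
import numpy as np

def parallel_mix(dry, crushed, crushed_level_db=-6.0):
    """New York style parallel compression: the untouched track plus a
    crushed copy tucked underneath it. Automate crushed_level_db to push
    the effect up for fills and breaks."""
    g = 10.0 ** (crushed_level_db / 20.0)   # fader level for the crushed bus
    return np.asarray(dry) + g * np.asarray(crushed)
```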
What is with the 70's???
The 70's either have the greatest drum sounds or the absolute worst, depending on your viewpoint and whether or not you like your drum sounds huge and roomy or dull and reminiscent of someone hitting a couch with a broom.
The dead room thing is all over records from California in the 1970's. There are some great songs, but the drums are noise gated and recorded in a dead little drum booth. And while the song is amazing, the drums on Life's Been Good...?
Dead drums are probably more about the advent of noise gates as a viable technology in the early 70’s than anything else. The industry tends to adopt a trend, milk the hell out of it, and then abandon it for whatever cool thing comes next.
One band never succumbed to the whole dry drum thing, and that's Led Zeppelin. Those guys always recorded drums in live rooms with minimal mics, and those sounds have stood the test of time. The archetypal track is When The Levee Breaks - good lord, what a drum sound!
Early Zep records were recorded in houses and other non-studio situations. The consoles used were typically Helios desks, which were amazing and mainly custom made. Only 50 were ever built. When the Levee Breaks is an 8 track recording, so the drum "bus" compression is really a drum track compression: stereo drums squashed through a Helios board compressor. Or not! Helios compressors were made by ADR, so what is happening there is bus compression through an ADR Compex, which, like the 1176, uses an FET. Seeing a pattern?
The Compex is a one trick pony, but it is a great trick. There is no real way to get a Compex to ever sound unnoticeable. Even with the most minimal of settings it imparts a lot of character. Picture a chef who has basically one recipe, which is take whatever it is, add bacon and fry it. Yes, delicious, but for soup? Salad? Ice cream? That's a Compex.
So there's a thing to try
Bus your drum tracks and feed them through our Pawn Shop Comp. Set the RATIO to at least 10:1, the ATTACK to around 7ms, the RELEASE to about 80ms, click AUTO for make-up gain, and then turn the threshold down until the meter shows at least 5dB of gain reduction on a steady basis. Instant Compex. And much cheaper than $2k, AND you can use the Pawn Shop Comp on just about anything.
Now, so far, all of these compressors are based on a FET. The Compex, the 1176, the Pawn Shop Comp, the Talkback Limiter - all use a FET style of compression. SO... what’s special about a FET compressor?
FET (Field Effect Transistor) compressors have a very distinctive vibe, especially on drums. FETs were one of the first ways engineers made a solid state compressor (as opposed to using tubes), and the basic circuit design and sound has been the same for decades. These suckers are punchy, and with fast material and quick release settings, they have a "mermaid gasping for air after you accidentally harpooned her" sort of thing going on. Quirky, but usually awesome sounding on drums.
Peter, Hugh, Phil, the 80's and MAKE IT GO AWAY
I was 16 in 1980. A friend had just bought the latest Peter Gabriel record, which was called Peter Gabriel. His first four albums were all called Peter Gabriel. The 1980 record is nicknamed "Melt" because the cover image is a picture of the man himself with his face half melted.
The first song on the album is a real toe tapper called "Intruder," and sing along kids! It's about a home invasion from the point of view of the invader.
The thing about Intruder, though, is the drums. They had a quality and sound we hadn't heard before. Electronic yet acoustic. Huge but yet squashed and contained. We had no idea what was going on.
And then Peter's former Genesis bandmate Phil Collins released a track called In the Air Tonight that had one of the most iconic drum sounds ever heard... and it sounded a lot like that Intruder song.
The things the two tracks had in common was engineer Hugh Padgham, and his accidental invention, gated reverb.
Big commercial studios typically had (or still have) a microphone or two hanging from the ceiling so that the staff in the control room can hear activity in the studio; musicians can simply speak or shout a bit to be heard by the engineer, etc. Solid State Logic added a dedicated listen mic system to their SL4000 E console, and the circuit included a limiter. It had two purposes: 1) Amplify the quietest signals in the room so even someone speaking in a normal voice in the studio could be easily heard in the control room, and 2) Protect the control room speakers and the engineer's ears from loud noises or bangs or enraged lead singers by severely limiting the signal.
The SSL listen mic limiter had a fixed ratio of 100:1, almost instantaneous attack and release, and huge amounts of gain. It was buried down deep and hard wired into the console and wasn't designed to be adjusted. And... it used FETs... is the pattern clear now?
Engineer Hugh Padgham put a noise gate across its output, ostensibly because he didn't want to hear any quiet background noise that can be distracting during a session.
So, Peter and Hugh (and Phil Collins on the drums) are working on Intruder, and Phil is bashing around the kit, and it sounds amazing over the listen mic, because the talkback limiter was applying huge amounts of gain, which amplified the sound of the room, then crushed the hell out of it, and then the noise gate chopped off the signal abruptly, eliminating the natural tail of the room's decay.
Hugh Padgham figured out the routing to get the signal to tape, and the sound of the 80's was born, big hair and all. After Phil Collins' In the Air Tonight became a megahit, the gated reverb sound spread everywhere. Michael Jackson, Prince, Madonna, INXS, The Cure, New Order and on and on... it was everywhere. There was no escaping that big ass dumb drum sound.
I was never a huge fan of it, so I was glad it kind of died out. But, like many things, it is back and has become increasingly common again. It is being used with more subtlety (and perhaps taste) than it was during the '80's, but the '80's were never about subtlety. The 1975 have been pilfering a lot of sounds and ideas, including some gated reverb.
So there's a thing to try
Put the Korneff Audio Talkback Limiter across a snare or a kick track and put a noise gate after it. On the Talkback Limiter, turn LISTEN MIC all the way over to the right and have WET/DRY all the way over to the right as well. On the noise gate, set the gain reduction as high as possible, like -100dB, and the hysteresis to around -3dB. Set the threshold such that the gate clicks open for just the kick or snare hit and doesn't trigger on any leakage. Set the attack of the gate as fast as possible, the release to around 100ms, and then adjust the hold for how long you want to hear the effect. Tweak the Talkback Limiter to get distortion or more or less compression, and adjust the gate, especially HOLD, for the effect's duration. You can get a more subtle effect by setting the gate's gain reduction to something around -20dB and backing off on the Talkback Limiter's WET/DRY control.
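If it helps to see how those gate controls interact, here's a toy gate in code. It's a sketch, not any particular plugin: the attack is instant, hysteresis is left out for brevity, the parameter names are mine, and it assumes a float audio buffer.

```python
import numpy as np

def noise_gate(x, threshold_db=-30.0, hold_ms=150.0, release_ms=100.0,
               floor_db=-100.0, sr=44100):
    """Toy gate: snaps open when the signal crosses threshold, stays open
    for the hold time, then fades down to floor_db at the release rate."""
    thr = 10.0 ** (threshold_db / 20.0)
    floor = 10.0 ** (floor_db / 20.0)
    hold = int(sr * hold_ms / 1000.0)
    rel = np.exp(-1000.0 / (sr * release_ms))

    gain, timer = floor, 0
    out = np.zeros_like(x)
    for n, s in enumerate(x):
        if abs(s) > thr:
            gain, timer = 1.0, hold       # open instantly, restart the hold
        elif timer > 0:
            timer -= 1                    # held open: this is the HOLD knob
        else:
            gain = max(gain * rel, floor) # fade out: this is the RELEASE knob
        out[n] = s * gain
    return out
```

The HOLD is what decides how much of the crushed room you hear before the gate chops it, and the -100dB floor is the "gain reduction as high as possible" setting above.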
Gating a drum bus or gating room mics on a kit is a little more involved, and this is where bringing the gated effect back in parallel might be useful.
Send the tracks you want to affect to a stereo bus, and then put the Talkback Limiter and a noise gate on the bus inserts. You'll rough out your sound as described above, but then you'll probably need to use the sidechain filtering of the noise gate to keep the effect from chattering on and off on hi-hats or whatever else might trigger it. I generally set the gate's filtering to bandpass, and then set the high cutoff to around 1kHz and the lows to around 100Hz. You can control the overall amount of the gated reverb effect in the mix by bringing up the bus's level in parallel to the rest of the drum mix.
It might take a bit of experimenting to get the sound you're looking for, especially if you're compressing and gating the room tracks and there's high hat leakage. Another trick to clean things up is to key the noise gate off of the kick, snare and tom tracks, such that the gate opens up and lets the crushed room sound through just for those instruments. Describing that sort of set up is long and detailed, and this whole column is dragging on at this point.
Of course, just running the room tracks through the Talkback Limiter instantly gets you a sound that is a lot like gated reverb. Increase the gain and you can pretty much match the drum sounds on Radiohead's The Bends album, which are simply fabulous: huge, trashy and percussive.
It is really worth your time to get your drum bus compressor situation suss'd out, whether you use plugins - my fave is the Pawn Shop Comp for this, but our Talkback Limiter is great when I want more raunch out of things - or hardware - I use a pair of Compex2's (like a Compex but with VCAs instead of FETs) when drum bus compressing in the material world. Dan Korneff, of course, favors the Pawn Shop Comp and the Talkback Limiter (he did build them for his own use originally), and he favors the original stereo Compex (which has FETs) when he wants hardware compression.
Dedicated hardware bus compressors might be beyond your budget, and that’s fine. They might not be all that cost effective. My Compex2’s basically sit in the rack until mix time, forlorn and lonely, like the boyfriend of the mermaid we harpooned earlier. Every now and again I try them on something, like a guitar, and they promptly strangle it. Sigh. Back to a tube thing.
Well, that is it for today, kiddos. Some history, some ideas, some bizarre analogies, a little WTF.
Until next time, make great music, make great records.
Luke D. 6/17 in the year of the plague
I was around 14 when I decided that I wanted to produce music. It was because of The Beatles' Revolver album - a cliché but true.
But what's a kid who wants to make records to do in 1977? This is before Porta-Studios, way before iPads with GarageBand. The only multitrack recorder I might even kind of have been able to afford was a TEAC four track, and they were selling for $5000. Ha! That was as much as a car!
I had a Strat knockoff, a portable cassette deck, a cheap Peavey PA system, and some microphones from Radio Shack. My basement was not The Power Station.
But what I did have was a book. I forgot where I bought it - probably a bookshop in Huntington, NY. I bought it in 1978 for maybe $20?

The book was Home Recording for Musicians, by Craig Anderton.
Oh my god! The cover alone made me want to die! I started saving up egg cartons to tape on my walls.
This book... I read it cover to cover in a few days, including a long section on how to build your own mixer, complete with schematics and parts lists. And then I read it again. I read it almost every night before I fell asleep for YEARS.
I learned about Sel-Sync, which made multitrack recording possible. I learned about microphone polar patterns and transducers, and direct boxes, and equalization. I read about overdubbing, and effects sends, and reverb and tape echo. Everything.
Home Recording for Musicians showed me how to bounce between two tape decks. I got my grubby little hands on a four input mixer from RadioShack and a bunch of molded patch cables, and me and a friend, a great drummer named Tom, made, what was for me, my first multitrack recording. It was a shitty blues tune but it didn't matter. It was about the possibilities.
And the possibilities were endless. There was the dream of me someday being in a real recording studio instead of my basement, making a record that sounded as wild as Axis: Bold as Love. Working with rock bands. Making music.
This paperback book was like a ticket, a map to my dreams. I kid not.
There was the battle to win with my parents, who were conservative just enough and uninformed just enough to not have any clue about what I wanted to do. It was mostly a battle of attrition - a war of inches until they agreed I could go to school and study music or theatre or something similarly stupid.
My first time in a real recording studio was at college, at Purdue University. It had a TEAC four track and a TEAC mixer, and because I had memorized Home Recording for Musicians, I could work it all from the moment I walked into the control room. I could do things in there that baffled the grad assistants who ran the place, and it was because I had memorized Craig Anderton's book. All the concepts and ideas came spilling out. I was like an idiot savant.
I managed to get myself into professional studios in New York after I got out of college. And I managed to produce and engineer records, and I graduated up from RadioShack mics and mixers to Neumann's and SSL's.
And I got to make a total ass of myself when I met Craig Anderton.
It was at an AES Convention in New York, I think, in the early 90's. I was walking around, checking out the toys, when I spotted his distinctive profile across the room. It was like... seeing God, or one of the Beatles. He wasn't wearing the seizure inducing shirt he wore on the cover of Home Recording for Musicians, but he had the glasses and his sharp features, and he was tall and skinny and every inch the nerd.
I ran up to him, gushing like a schoolgirl. "Mr. Anderton, your book changed my life," and blah blah blah and on and on and on...
He looked at me like I was a maniac, and then he politely but definitively started backing up and away from me. I think he shook my hand. He sure didn't want to talk, and it wasn't the warm fuzzy experience that I think I earned after reading that damn book 1000 times.
It's not that he wasn't nice. It's just... I don't think he was ready for SuperFan. Who would ever be the fan of someone that wrote an audio book??!!!
I was. Didn't matter if he basically ran away from me and hid behind the Yamaha booth. I remain a fan of his, and a fan of that darn book.
I continue to be a reader. I read every audio book I could find for years, even after I was established as an engineer. I read books on classical recording technique, studio memoirs, every audio textbook written, equipment manuals - everything. I devoured Sherman Keene's Practical Techniques for the Recording Engineer: A Streamlined Technique for Speed, Accuracy and Documentation (get this if you can), which was full of ideas and concepts. I learned how to run a session from this book, and also the best phasing/flanging trick imaginable. When The Complete Beatles Recording Sessions: The Official Story of the Abbey Road years 1962-1970 came out in 1988, I sat down with that book and listened to the albums and songs on CDs until I could hear every nuance and overdub. I think that more than anything taught me how to produce a record.
So, I encourage you all to read. Read manuals, read spec sheets, read biographies, go online and Google up the histories of old recording studios, how great records were made, vintage and long gone equipment. Read about guys like Tom Dowd. Read about how mastering lathes worked. It all applies. There is so much to learn, so much to make you better at what you want to do, so many great ideas to borrow and steal and re-invent.
And I encourage you to read before you go to bed. Read before you go to sleep. And let that good stuff you read climb into your dreams. And then wake up and dream into action.
Luke DeLalio 2/25/2020
Luke DeLalio 2/12/2020
In the early 1990's I was freelance producing rock records. There were still big studios and big consoles. Digital recording was taking off but there was still plenty of nice fat analog tape. New, great sounding equipment was being released all the time, and you could still find vintage stuff gear with a bit of poking around. It was a great time to be an engineer.
And the big thing was drum sounds. Everyone was mic'ing the room and gating everything, and triggering samples and doing drum replacement... it was really cool. The Red Hot Chili Peppers released "Blood Sugar Sex Magik" and GASP! They were feeding drum samples into the room and then recording the room sound! Mind blown! And then Steve Albini was making records and Nirvana's "In Utero" sounded like a wonderful live mess. Drum sounds were the holy grail in the early 90's.
I was doing a lot of work out of a studio in Hoboken, New Jersey, called Water Music. It had two rooms: The A room was nicknamed "Heaven." It was HUGE, with a Neve 8088 console in it. The other room was affectionately called "Hell." It was much smaller, and at that time had a ramshackle bunch of mismatched equipment, not even a proper console. Heck, Hell didn't even have a control room - everything was stuffed into one space. You'd set the band up, take a guess with the mics and settings, record a bit, and then play it back to see what you got. It took a bit, but you could get great sounds. I loved working in Hell. And Hell was a lot cheaper than Heaven. Typically, smaller budget/Indie projects worked in Hell, and the big money sessions worked in Heaven.
One day I came in with a band - I think it was a punk album I was working on - and everyone at Water Music was excited because in Heaven, cutting an album with some band, was Eddie Kramer.
If you don't know who Eddie Kramer is... he recorded Jimi Hendrix and Led Zeppelin - enough said, right? In the late 60's, using maybe four microphones, a pair of compressors and whatever EQ was on the console at that time - we are not talking about sweepable parametric eq's or anything like that - think high and low shelving and that's it - Eddie Kramer managed to invent rock drum sounds. And rock guitar sounds.
SO... the God of Rock Drum Sounds was in Heaven cutting an album... and he locked the doors and wouldn't let anyone in. The word got around that he didn't want anyone to see his drum mic set-up. It was super secret. Even the main studio assistant, Jim, wasn't allowed into the main live room where the drums were.
I would see Eddie in the morning walking in a courtyard between the studios - he wasn't talkative but he would always flash a friendly smile. He made the assistants sweep the courtyard constantly. The whole thing was a big, weird mystery.
What the hell was he up to in there????
It really didn't take much to stay late one night and wait until the lights were out in the residence part of the studio complex. Water Music was residential, with rooms and suites and a kitchen for artists working in the studios. I once recorded an all girl band in Hell and we all slept together in one huge bed stuffed in a single room. It was platonic.
SO... Jim and I waited until Eddie's lights were out, and we got the master keys for the studio and burgled our way into Heaven to see the top secret Eddie Kramer drum set-up...
It seemed pretty typical. 421's on the toms, top and bottom, an SM-57 on the snare top, I think a Neumann KM-84 on the bottom. A Neumann U-47 FET - often called a FET47 - stuffed into the kick, and then there was an AKG D-12 outside of it. The overheads were U-87's or U-67's. He had KM-84s out in the room maybe twenty-five feet away from the kit, but he had them tucked behind gobos - big absorptive panels. This was a cool trick - the gobos kept the mics from getting any direct sound from the drums. This was an idea I took.
So far, though, the big secret set-up was nothing special. But there was one really weird thing...
Five feet out from the drum set, about chest high, centered on the kick and pointing towards the snare, was a Shure VP-88 stereo microphone. I remember it being a VP-88, but I could be mistaken. It was definitely a stereo condenser mic.

Throwing a stereo mic in front of a drum kit was nothing new. I had inherited a little money and blew most of it on a vintage AKG C-24 and used it all the time as a stereo drum mic. But what Eddie Kramer was doing with the VP-88 was something different.
A VP-88 is an MS stereo microphone rather than an XY. MS (Mid-Side) stereo mic'ing is really awesome and someday I'll write a whole thing about it, but basically, the VP-88 uses two capsules: one set to cardioid that picks up the center (the mid), and the other set to figure 8, picking up the left and right (the sides). The two signals are combined by sum and difference - the left channel is mid plus side, the right channel is mid minus side - lots of phase cancellation ensues, and the net result is really nice stereo with a strong, clear center. It's a very useful technique and I think it sounds better than XY.
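If you want to see the mid-side math in action, here's a quick sketch in Python - my own toy code, nothing to do with the VP-88's internals. Notice that when you sum left and right to mono, the sides cancel completely and you're left with pure mid. That's where the strong, clear center comes from.

```python
import numpy as np

def ms_decode(mid, side, side_gain=1.0):
    """Decode mid-side signals to left/right stereo.

    mid:  signal from the forward-facing cardioid capsule
    side: signal from the sideways figure-8 capsule
    side_gain: scaling the side signal widens or narrows the stereo image
    """
    left = mid + side_gain * side
    right = mid - side_gain * side
    return left, right

# Toy example: a transient in the mid, room reflections in the side.
mid = np.array([0.0, 1.0, 0.5, 0.0])
side = np.array([0.0, 0.2, -0.1, 0.0])
left, right = ms_decode(mid, side)
# left + right == 2 * mid: the sides cancel in mono.
```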
So, Eddie Kramer had an MS stereo mic in front of the drum set.
That still isn't weird.
What was really weird was that he had the left and right sides of the stereo mic oriented vertically - down towards the floor and up towards the ceiling - rather than to the left and right of the drum set. Picture rotating a stereo mic 90 degrees, so that the left side picks up the ceiling and the right side picks up the floor.
It made no sense. Jim and I had no idea what the hell was up with the VP-88 pointed at the floor and the ceiling. How would you pan that signal in the mix?? I experimented with it on a few subsequent sessions, turning my C-24 up and down rather than left and right, and it always sounded like ass. There were all sorts of weird cancellations caused by things bouncing off the floor and the ceiling. In a small room it was dreadful. Really, it seemed to me to be an awful idea.
In hindsight, maybe it was a red herring. Maybe the VP-88 was plugged in but not even routed anywhere. Maybe Eddie Kramer came up with that doofy mic set-up just to fuck with anyone who snuck in to steal his secrets - and there was really no secret other than use good mics, use a good drum set, and most of all, record a great drummer. Like Mitch Mitchell or John Bonham.
Or maybe the secret was to have access to a huge room with great acoustics, and a giant console fourteen feet long, and a two-inch 24 track Studer tape deck.
Maybe the secret... is to claim there is a secret! After all, it worked for Eddie Kramer.
We’ve been getting a lot of feedback on the Pawn Shop Comp, and often people remark that it’s really versatile and does a lot more than simply compress. One user wrote, “It’s a preamp, it’s a compressor, it’s a fuzz box, it dices and slices. It’s my go-to Swiss Army Knife. What were you thinking when you made it??”
Good question.
Actually, I wasn’t really thinking when I made it. The Pawn Shop Comp wasn’t planned out. It evolved into what it is across three years and hundreds of recording sessions.
It started out as a simple compressor. At my studio, Sonic Debris in New York, I have a bunch of vintage tube limiters - LA-2’s, Gates Sta-Levels, an RCA BA-2 - and I love them. I love them so much I want to put them on every track. And that’s the problem. I’m recording and mixing things with 200+ tracks. How am I going to get my hands on 200+ tube limiters? How am I going to air condition the studio - can you imagine the heat pumped out by 200 tube limiters???
The solution was to analyze a few of my favorites, work out a bit of code, and build myself a tube compressor plugin that I could use anywhere and everywhere. So the Pawn Shop Comp was born: a simple tube style compressor with two knobs, ratio and threshold, and auto make-up gain.
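For the curious, here's roughly what that kind of code boils down to - a minimal sketch in Python of a two-knob gain computer. This is my illustration of the general idea, definitely not the actual Pawn Shop Comp source. Threshold and ratio set the static curve; auto make-up gain pushes the output back up by however much gain the curve takes away at a reference level.

```python
import numpy as np

def compress(x, threshold_db=-20.0, ratio=4.0, ref_db=0.0, eps=1e-10):
    """Static compression curve with automatic make-up gain.

    Above the threshold, output level rises only 1/ratio dB per dB
    of input. Make-up gain restores the level lost at ref_db.
    A real compressor smooths the detector with attack and release
    time constants; this sketch reacts instantaneously.
    """
    level_db = 20.0 * np.log10(np.abs(x) + eps)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)            # gain reduction
    makeup_db = (ref_db - threshold_db) * (1.0 - 1.0 / ratio)
    return x * 10.0 ** ((gain_db + makeup_db) / 20.0)
```

With a -20 dB threshold and a 4:1 ratio, a signal peaking at 0 dB gets knocked down 15 dB, and the make-up gain adds those 15 dB right back - so the output peaks where the input did, just denser.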
But often solutions lead to other problems. I found my plugin sometimes didn’t have the punch I wanted. I added attack and release controls, but that didn’t do it.
Hardware tube compressors and digital tube compressor emulations typically use a circuit based on an Opto-Isolator. The result is a very smooth, gentle compression. When solid-state compressors came out in the early 1970’s, they were built around transistors called FET’s, and these units had a more noticeable compression action. These were compressors designed to impart a character to a recording, not merely keep levels under control.
So, I decided to add an FET compression circuit to my tube compressor plugin. Yow! Much better! Now I found I could get many different compressor sounds with just one plugin.
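Most of the difference between the opto feel and the FET feel comes down to how fast the level detector moves. Here's a sketch of a simple one-pole envelope follower - the time constants are ballpark guesses for illustration, not measurements of any particular unit.

```python
import numpy as np

def envelope(x, fs, attack_ms, release_ms):
    """One-pole envelope follower.

    Slow time constants give a smooth, opto-like response;
    fast ones grab individual transients like a FET design.
    """
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros_like(x)
    e = 0.0
    for i, v in enumerate(np.abs(x)):
        coeff = a_att if v > e else a_rel   # rising vs. falling level
        e = coeff * e + (1.0 - coeff) * v
        env[i] = e
    return env

fs = 48000
# Opto-style: slow and forgiving (tens of ms attack, long release)
#   opto_env = envelope(x, fs, attack_ms=20.0, release_ms=500.0)
# FET-style: fast enough to bite into transients
#   fet_env = envelope(x, fs, attack_ms=0.5, release_ms=100.0)
```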
But that didn’t stop me adding features. It seemed every time I felt I was missing something on a mix, I tacked it onto the Pawn Shop Comp.
An old trick in the studio is to overload a channel a little bit to generate some distortion. The distortion changes the harmonic structure of the signal, adding some high end and making the sound “jump” out in the mix a little bit. Also, it sounds cool. So, I added some controls to overload the tube preamp, and while I was at it added a bias adjustment in case I wanted to completely blow the signal to hell and turn it into a distorted mess.
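In a plugin, that kind of overload is usually modeled with a waveshaper - a curve that stays roughly linear for small signals and rounds off the peaks as you drive into it. This is a generic stand-in for the idea, not the Pawn Shop Comp's actual tube stage. Note how the bias offset makes the clipping asymmetric, which adds even harmonics and gets you to "distorted mess" that much faster.

```python
import numpy as np

def tube_overload(x, drive=2.0, bias=0.0):
    """Soft-clip waveshaper as a stand-in for an overloaded tube stage.

    drive: input gain into the nonlinearity; more drive = more distortion
    bias:  offset into the curve; asymmetric clipping adds even harmonics
    """
    y = np.tanh(drive * x + bias) - np.tanh(bias)  # subtract the DC the bias adds
    return y / np.tanh(drive)                      # rough output level compensation
```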
The tone controls... I was working on a record on a strange console and found I was adding a little bit of low end and high end to almost every channel... I just didn’t like that console. One night after a session I added tone controls to my plugin - a slight boost in the bass and a gentle rise on the top. Problem solved. On later sessions I tweaked the response curves. The result is that the Pawn Shop Comp has a very musical-sounding, simple equalizer on it that is sometimes all a signal needs to sound great.
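A slight boost in the bass and a gentle rise on the top is just a pair of shelving filters. Here's a low shelf built from the standard Audio EQ Cookbook (RBJ) biquad formulas - the frequency and gain are placeholder values, not the Pawn Shop Comp's actual curves, and the high shelf is the same idea flipped to the top end.

```python
import numpy as np
from scipy.signal import lfilter

def low_shelf(x, fs, f0=120.0, gain_db=3.0, S=1.0):
    """Low-shelf biquad using the RBJ Audio EQ Cookbook formulas."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2.0 * np.sqrt((A + 1.0 / A) * (1.0 / S - 1.0) + 2.0)
    cosw = np.cos(w0)
    sqA2a = 2.0 * np.sqrt(A) * alpha
    b = [A * ((A + 1) - (A - 1) * cosw + sqA2a),
         2 * A * ((A - 1) - (A + 1) * cosw),
         A * ((A + 1) - (A - 1) * cosw - sqA2a)]
    a = [(A + 1) + (A - 1) * cosw + sqA2a,
         -2 * ((A - 1) + (A + 1) * cosw),
         (A + 1) + (A - 1) * cosw - sqA2a]
    return lfilter(np.array(b) / a[0], np.array(a) / a[0], x)
```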
I usually mix on a vintage SSL, and its design makes it very easy to parallel process a signal. Things a little too much? Route the unprocessed signal to the small fader, send it to the mix bus, and “bleed” a little of it back into the mix. The mix control on the back of the Pawn Shop Comp is basically this. One of my favorite uses is to compress the hell out of the drum bus, and then use the mix control to add back some of the high end and lighten up the entire effect.
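Under the hood, a mix control is just a crossfade between the dry and processed signals - the same move as bleeding the small fader back into the mix bus. A minimal sketch (a fancier version would use an equal-power curve and compensate for any latency in the wet path):

```python
def parallel_mix(dry, wet, mix=0.3):
    """Blend unprocessed (dry) and compressed (wet) signals.

    mix = 0.0 -> all dry, mix = 1.0 -> all wet. Heavy compression with
    a low mix keeps the dry signal's transients and top end intact.
    """
    return (1.0 - mix) * dry + mix * wet
```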
The Operating Level control started off as a weird experiment. Traditionally, audio gear was designed to operate at one of three nominal levels: Broadcast (+8 dBu), Professional (+4 dBu) and Consumer (-10 dBV). If operating levels between different pieces of gear weren't properly matched or compensated for, there could be problems, such as additional noise and hiss or ridiculous amounts of distortion. Nowadays operating level isn't really an issue, but as an assistant in big studios when I was a kid, I remember some of the bizarre sorts of sounds that would come out of a mismatched preamp or compressor... and I also remember the sudden smell of smoke as the input circuit of something got toasted. The Operating Level Control was my attempt to replicate some of the overloads and noise without the smells and repair bills. I almost pulled this feature off the Pawn Shop Comp, but a couple of our beta testers loved some of the sounds they could get with it, so I left it in. I tend to use the Operating Level Control when a sound just isn't quite right and I'm not sure what to do with it. I guess my thinking is, "I wonder how it will sound if I blow it up?"
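The mismatch is easy to put numbers on, because the two scales are referenced to different voltages: dBu to 0.775 volts, dBV to 1 volt. Run the arithmetic and a consumer -10 dBV signal arrives at a professional +4 dBu input almost 12 dB low - and a pro signal slams a consumer input almost 12 dB hot:

```python
import math

def dbu_to_volts(dbu):
    return 0.775 * 10.0 ** (dbu / 20.0)   # dBu: 0 dB = 0.775 V RMS

def dbv_to_volts(dbv):
    return 1.0 * 10.0 ** (dbv / 20.0)     # dBV: 0 dB = 1 V RMS

pro = dbu_to_volts(4.0)          # ~1.228 V
consumer = dbv_to_volts(-10.0)   # ~0.316 V
mismatch_db = 20.0 * math.log10(pro / consumer)  # ~11.8 dB
```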
I added in resistor switching because my original Pawn Shop Comp circuit had a smooth, round high end that some of the beta testers described as "dull sounding." So, now you can switch it over to metal film resistors and the sound is modern and bright. Switch it to carbon resistors and the sound is warmer with a little less sparkle.
The FET switching... again, back in my days as an assistant, I remember these bizarre Roger Mayer compressors in the studio that generally sounded terrible on everything... except when they sounded simply amazing on something, and suddenly they were the best sounding compressors ever made. Later in my career I met Roger Mayer, who is a friendly and generous man, and we discussed these particular compressors at length. The FET switch basically gives you the choice of an 1176-sounding compressor response curve, or a Roger Mayer-sounding response curve. The end result, though, is unpredictable, and a lot of it depends on how hard you compress the signal, just like the original Roger Mayer compressor I have such fond memories of using.
The last thing I added to the Pawn Shop Comp was the Input and Output Controls, which are really trim controls that let you compensate for signal strength and gain changes. I suppose these controls are a bit of a throwback to my days in the big studio when getting everything properly gain staged was critical. They are useful in getting things clean in the signal flow. The Voltage Indicator is a bit of eye candy. If you take apart a hardware compressor and run signals through it, you want to be able to measure the control voltages so you can properly tweak the circuit. On the Pawn Shop Comp, when you're "in the back" and adjusting things, the Voltage Indicator lets you see when the compressor is active and by how much.
So, that is the story of how the Pawn Shop Comp evolved from a simple, one-shot compressor plugin to what is arguably one of the most versatile and adjustable vintage compressor emulations on the market. Three years of tweaks and refinements are stuffed into one compressor that many users say is the first plugin they reach for when making a record. I hope you find it useful, and a bit magical as well.





[Figure: The lovely and versatile PSC...]
[Figure: ADSR is too complex. Poke and Hang - just two pieces to think about.]
[Figure: Mix as ocean. Most of the difficulties happen at the surface.]
[Figure: Fix its ass with more poke.]
[Figure: Less Poke means more Hang]
[Figure: Round thing with an arrow through it means potentiometer, or a knob. Look at you! Reading signal flow like a goddamn boss!]
[Figure: Can you figure out the symbol for a switch?]
[Figure: This is what Dynamic Range looks like, kids.]
[Figure: Headroom. If you bump your head there’s clipping...]
[Figure: Memorize this.]
[Figure: Lars, you’re dragging again...]
[Figure: Plenty of power for linear reproduction]
[Figure: Not enough power = generation of harmonic distortion]
[Figure: KAAAAAHNNNN! Compress the bass!]
[Figure: Filters, EQ, a Compressor... the AIP is your huckleberry.]
[Figure: So much to tweak, so little time!]
[Figure: Settings for Thick and Present vocal reverb]
[Figure: Some vocal reverb choices]
[Figure: EQ and Reverb settings for the Vocal Whispers effect.]
[Figure: Settings for FATTER]
[Figure: Settings for WIDER]