Setting levels on your audio interface and through your DAW isn’t hard. Neither is setting levels on plug-ins, especially ours. But there are a few things to know, so let’s get going.

On the Input

The signal coming out of a mic, into a preamp, and through the preamp into an interface is actual electricity. This is an analog signal, and it's subject to the rules of setting analog gain.


The basic rule of setting analog gain is to set it as high as possible and leave a little headroom for natural variations in signal level. We want it as high as possible because almost all the noise (the hiss, the hum, etc.) of whatever you’re recording is going to be introduced into your recording at this point, and we want that stuff as quiet as possible. Some people think hiss and hum are part of the whole “analog vibe” and are charming, but they’re wrong. Back in the old days, we spent a ton of time trying to get rid of that stuff. It’s crap. It’s like the cook's hair on your pizza.
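If you like seeing the arithmetic, here's a quick sketch of why recording hot matters. The numbers are invented for illustration, but the logic is just dB subtraction:

```python
# Say the noise floor of the analog chain sits around -80 dB (an invented
# but plausible figure). Signal-to-noise ratio is signal level minus noise level.
noise_floor_db = -80

hot_take_db = -6     # gain set nice and high
timid_take_db = -26  # gain set 20 dB too low

snr_hot = hot_take_db - noise_floor_db      # 74 dB of signal above the hiss
snr_timid = timid_take_db - noise_floor_db  # only 54 dB

# Boosting the timid take later won't help: the hiss comes up with it,
# so the 20 dB difference in SNR is baked in forever.
print(snr_hot, snr_timid)  # 74 54
```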

We want to leave some headroom because musicians tend to play or sing a little louder in actual performance—often the performance is 3dB louder than the rehearsal—and we don't want to add harmonic distortion or, even worse, intermodulation distortion to our signal unless we're deliberately going for that sort of sound.

So, where do we set things? It depends on the meter on the analog device, but generally there is a 0 point on the meter, be it a VU or an LED bar display, and you want the signal around that zero point. It can go over occasionally. If there's a Peak indicator light, it can occasionally flash red—this is fine. Most meters show green and transition to red when your signal is in the headroom area. Keep the signal high in the green and just licking the red.

It is that easy. As you know more and learn more you can make it more complicated for yourself, but it is that easy. Just because it's easy, though, doesn't mean you should fluff your way through it without paying attention. If you do this step badly you'll make a lot of problems for yourself. Take the time to set the levels correctly.

So, your properly set signal flows through your analog gear and into your interface, where it gets converted into numbers—math—and feeds into your DAW, and now we have to think of it a little differently.

In the DAW

Most modern DAWs process signals using floating point math. What does this mean? It means that it is virtually impossible to crank things up too high in the DAW. You can basically set your levels anywhere and it won't cause "bad math." However, it doesn't mean that you can do whatever the heck you want.

Most plug-ins, ours included, are designed to emulate analog circuits, and that means they will emulate distortion and potentially sound like ass if you hit them too hard. Hit them too hard with what? There's no actual electricity feeding through the DAW like there is through an analog recording console. What could possibly hit them too hard? Numbers?

Uhm... kind of. If a plug-in is designed to emulate an analog circuit, then it will have a pre-programmed spot in it that emulates the “sweet spot” of an analog circuit, and if the signal is too low or too high for that sweet spot, the plug-in will do some math to add distortion and other non-linearities to your signal. You can’t really overload a plug-in, but you can force it to emulate an overload. This is something you can choose to do, but you need to understand something to actually choose it.

So, how do you know if you're overloading a plug-in or operating it out of the sweet spot? Well, if it sounds all distorted and, as far as you can tell, it isn't supposed to sound all distorted, that's a pretty good indicator. The other thing to do is to look at the damn meters on the plug-in.

Korneff Plug-in Metering

Our plug-ins have two different meters. The PSC, TBL, AIP, MDR, ETD, and Puff Puff all have a meter that looks something like this:


We will be changing this meter on these plug-ins soon, but to set the level using these, you want that meter as close to the 0dB line as possible without going over it very often. You're basically setting it the same way you'd set a piece of analog equipment, which makes sense because it's designed to emulate analog equipment. If you set it too low you might hear some weirdness. If you set it too high you might hear some distortion.

Our newer plug-ins, the WOW Thing and the Pumpkin Spice, have a different meter:


It’s similar but with the addition of a numerical display that shows you how far your signal is from hitting 0dBFS, or Zero Decibels Full Scale.

What the heck is 0dBFS? That’s a really good question, because it’s rather arbitrary. Basically, it indicates when you’ve used up all of the bits available for doing mathy stuff in your DAW, but because DAWs use floating point math, there are always extra bits lying around. Kind of.

Argh! What’s important about 0dBFS isn’t what it is, but how far away you are from hitting it.
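To see what those "extra bits lying around" buy you, here's a rough illustration in Python. A signal pushed 24dB past full scale survives intact inside floating point math, but gets clipped the moment it hits a fixed-scale output. The numbers here are invented for illustration:

```python
import math

# A 440 Hz sine "recorded" 24 dB too hot: peaks near 15.8 instead of 1.0
# (1.0 being 0 dBFS, full scale).
gain = 10 ** (24 / 20)
samples = [math.sin(2 * math.pi * 440 * n / 48000) * gain for n in range(4800)]

# Inside the DAW's floating point math nothing was lost -- just turn it down:
recovered_peak = max(abs(s) for s in samples) / gain
print(round(recovered_peak, 2))  # back to ~1.0, no harm done

# But at a converter or a fixed-point file, everything past 1.0 is gone for good:
clipped_peak = max(min(abs(s), 1.0) for s in samples)
print(clipped_peak)  # pinned at 1.0 -- the waveform is squared off
```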

Because digital recording gear does have to integrate with analog recording gear, and because digital recording technique is built upon analog recording technique, the "industry" has decided that keeping digital levels somewhere between -20dBFS and -12dBFS is where you want to set things. -18dBFS is sometimes quoted as an exact value. And this isn't a bad idea at all, especially once you learn the details behind that decision.

DAW makers, interface designers, and plug-in makers all use this number. -18dBFS is the level that puts things in “the sweet spot.” If you go above it, you could cause the plug-in to emulate analog distortion.
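If you're curious what -18dBFS means as an actual sample value, the conversion is simple. A quick sketch:

```python
def dbfs_to_amplitude(dbfs):
    # 0 dBFS is a full-scale sample value of 1.0; everything below is a ratio.
    return 10 ** (dbfs / 20)

for level in (-20, -18, -12):
    print(f"{level} dBFS is a peak sample value of {dbfs_to_amplitude(level):.3f}")
# -18 dBFS works out to about 0.126 -- lots of room left below full scale
```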

So, that number on the meter that keeps changing as the signal changes is telling you if you're hanging around in that sweet spot, from about -20dBFS to -12dBFS. Our plug-ins are designed to behave like a nice, happy analog circuit in that area. If you go too high, it might distort. If it sounds bad, turn the Input Trim down.

Use the Output Trim to make sure you’re not slamming too much “digital gain” into the next plug-in. But even if you are, chances are the next plug-in in the signal chain also has a trim control which you can turn down to get the signal into the sweet spot for that plug-in.

I have heard of people sticking meters in between individual plug-ins to make sure things are really close to that -18dBFS value. You don't have to do this. The whole thing isn't that anal, and for you, making a record, it isn't rocket science. The rocket science is happening in the DAW and in the plug-ins, so relax.

On the Output

You’re mixing and your individual channels are flowing into submasters most likely, and then into your master bus, and then from there out into a file of some sort—like a .wav or .mp3—and into your monitoring system and into your speakers. And this is another spot where you can cause yourself problems. What kind of problems? Finished recordings that are too quiet because... have you heard? There’s a loudness war going on. And finished recordings that are distorted.

But this is a long discussion and it deserves its own post, which I’m in the process of writing.

 

The first thing you can do is stop thinking of reverb as a ramification of physics.

Don't think of using reverb as an acoustic effect. Think about using it as an emotional effect, or a narrative effect. Lyrically, is the song set in the present or the past, or is it perhaps in both? Can reverb be used to differentiate the past from the present, or the present from the future? What does the past sound like? Is the future wet or dry? When the singer is in their head, what does that sound like? What is the reverb of thoughts?

Is the character in different spaces during the song? Is there a bedroom, or a kitchen? Is the character in one place during the verse and another place during the chorus? This might be something you decide that's not based on the lyrics. It can just be a decision you make.

Control the sense of space and intimacy. Reverb is distance. Want a vocal part to sound like it's in the listener's ear? Dry it up and pan it hard. Control the depth of the soundstage by putting some things farther away than others. What's in the back of the room? What's in your face? Make decisions, damnit!

The listener probably will never go, "Ah, the singer is in the bedroom in their past, then dreams they're in a canyon, then they yell in a bathroom." And honestly, you don't want your listener noticing all that stuff; that would be like watching an old Godzilla movie hoping to see the strings moving everything. But you do want your listener to "lose" themself in the song, and you do that with small, well-thought-out decisions. It's like when you eat food prepared by an excellent chef. You don't know what little tricks they're up to. You're not thinking, "Ah, this butter has had the solids removed so it is actually closer to ghee." You're just thinking, "Man, this is delicious."

We want people to hear the results of our work, not retrace our exact steps. We're not making records for other engineers to like.

Put two different reverbs on two channels and pan them so that one reverb is on the left, the other on the right. Then feed the signal you want reverb on to both channels equally, or more to one than the other - whatever you want. The more different the two reverbs are, the weirder this effect gets. A short decay time on the left and a long decay time on the right will move the reverb across the speakers from left to right. There are all sorts of things you can do with this set-up.

Set really long pre-delay times. Pre-delay corresponds to how close the nearest wall is in a space. In a small space, the nearest wall is only a few feet away, but in an aircraft hangar, the nearest wall might be hundreds of feet away, so there will be a long pause between the direct sound and the start of the reverberant sound. Our ear makes decisions about the kind of space it is in based on when it hears the initial return of the room, i.e., the pre-delay. So, a small room with a huge pre-delay sounds very unnatural, as does a huge room with a very short pre-delay. This is a fun thing to experiment with; it adds a bit of "acoustic confusion" into the mix. Perhaps tie the use of it into the lyrical or emotional content of the song. Like, the singer is expressing doubt or confusion in a section, and to heighten that, add a short reverb with a long pre-delay, which not only pops the lyrics out but also gives the listener a hint of confusion.
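If you want to sanity-check a pre-delay setting against a real space, the math is just distance over the speed of sound. A rough sketch, treating the first reflection's extra travel as twice the wall distance (which ignores the geometry but gets you in the ballpark):

```python
SPEED_OF_SOUND = 343.0  # meters per second, at room temperature

def predelay_ms(wall_distance_m):
    # The first reflection travels out to the nearest wall and back, so it
    # arrives roughly 2 * distance / c after the direct sound.
    return 2 * wall_distance_m / SPEED_OF_SOUND * 1000

print(f"bedroom wall, 1.5 m away: {predelay_ms(1.5):.0f} ms")
print(f"hangar wall, 60 m away: {predelay_ms(60):.0f} ms")
```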

Compress your reverb returns. Stick a compressor on the insert of the return channel and squash that stuff. Play with the attack and release. Can you get the reverb to "breathe" along with the tempo of the track? Long attacks will increase the "punch" of the reverb. Short attacks and releases can lend an almost backwards sound to the reverb. Experiment with putting the compressor both before and after the reverb in the insert — you'll get wildly different results.

Duck your reverb returns. Put a compressor on the return (you pick the spot in the insert chain) and then key that compressor to duck the reverb. If you key the compressor off, say, the vocal you're putting reverb on, you'll get a very clear vocal with reverb blossoming whenever the singing stops, and there's no automation needed. What about ducking the backing vocal reverb with the lead vocal, especially if there is an alternating quality to the two parts? You can also key reverbs off percussion so that the kick or the snare stops the reverb for a moment, which can give you all sorts of rhythmic effects in addition to giving your mix clarity. Remember, reverb tends to muddy things up, so if you're ducking during busier sections of the song, you're going to increase clarity in those sections, and differ the effect of the reverb 'til a moment after, so the track will be clean but still have an overall wet quality.
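For the curious, the guts of a ducker are simple: follow the key signal's level, and drop the reverb's gain while the key is loud. A toy sketch in Python (all names and numbers invented; a real compressor eases the gain changes in and out rather than switching abruptly):

```python
def duck_reverb(reverb, key, threshold=0.1, depth=0.75, release=0.999):
    # Follow the key signal's envelope; while it's above threshold,
    # pull the reverb down by `depth`. When the singing stops, the
    # envelope decays and the reverb blossoms back up.
    out, env = [], 0.0
    for wet, k in zip(reverb, key):
        env = max(abs(k), env * release)  # crude peak follower
        gain = (1.0 - depth) if env > threshold else 1.0
        out.append(wet * gain)
    return out

key = [1.0] * 5000 + [0.0] * 5000  # "vocal" sings, then stops
wet = [1.0] * 10000                # pretend constant reverb tail
ducked = duck_reverb(wet, key)
print(ducked[100], ducked[-1])  # 0.25 1.0 -- ducked while singing, back up after
```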

Gate and Key your reverb returns. Gated reverb is a staple effect on drums from the late 70s, to the point where it is a cliché, but gating a reverb and then keying it from another sound source is still a fun thing with which to experiment. Key percussion with itself to get a classic gated reverb effect on something other than a snare. Gate the reverb of a tambourine with the snare so there's a huge wet noise on the snare that isn't the snare. Gently expand (a gate that only reduces output by a few dB, such that when the gate opens there's only a slight volume increase) the tails and decays of pads with the rhythm instruments to extend the feel of the groove into other aspects of the sonic landscape.

Modulate your reverbs. It's amazing how cool a little chorus or phase sounds on reverb, and how people seldom think to do something so simple and effective. In the old days when hardware units were the only option, it was hard to sacrifice something like a rackmount flanger to a reverb, but nowadays, just throw a plug-in on it. Just experiment. Like a lot of effects, modulated reverb is best used sparingly, to heighten a specific moment of a song, rather than having it on all the time. But rules and suggestions are made for breaking and ignoring, so feel free to slop modulation all over the place, but perhaps control it with ducking? Modulated reverb on strings, keyboard pads and chorussy vocals can add an otherworldly effect to things, and you can rein it in using keying and automation.

Goof Around in Fadeouts. Good fadeouts are an art form. I love fadeouts that have a little something in them to catch your ear and pull your attention back into the song. An amazing, fast guitar run, a spectacular vocal moment, someone talking, etc. Doing wacky things with the reverb in a fadeout is always fun. Crank the reverbs up so that things sound like they're going farther away as they fade, or dry things up totally so that the fade makes things sound like they're getting smaller. Roll off the bass gradually and pan things tighter to accentuate the smallness.

A last bit of advice: you’ve got more power in your laptop than anyone in the studio biz has ever had. By next year, that will probably double. Put that power to use in the search for something new, different and yours. Experiment and play. Don’t let AI have all the fun.

 

We usually think of harmonics as being pleasant things to hear. They give an instrument its timbre, they provide brightness and clarity.

Don’t know what harmonics are? Go here and read.

Usually the harmonics our ears like to hear are mathematically related to the fundamental by whole numbers. Whole numbers: ones and twos and threes. Octaves are multiples of 2: 2, 4, 8, and so on. Harmonics can be based on even numbers, but also odd numbers, and harmonics based on 3 or 5 or 7 might sound a little wooly, but they don’t sound plain old bad. Also, keep in mind that sometimes the math on these things isn’t perfect. It might not be a perfect multiple of 3 but something close, like 2.98, but generally this is good enough.
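That whole-number relationship is the entire trick. Here it is spelled out for an open A string:

```python
fundamental = 110.0  # an open A string, in Hz

# The harmonics our ears like are whole-number multiples of the fundamental:
harmonics = [fundamental * n for n in range(1, 9)]
print(harmonics)  # [110.0, 220.0, 330.0, 440.0, 550.0, 660.0, 770.0, 880.0]
```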

Inharmonicity

However, there can also be harmonics generated that don’t have any whole number relationship to the fundamental, and these harmonics are usually unpleasant to hear. This is called Inharmonicity — when the harmonics don’t make whole number sense mathematically.

Strike Tones

On many instruments, inharmonicity happens in the strike or the initial attack of the note. Bowed and wind instruments (violins and flutes, for example) don’t have much inharmonicity because they don’t have a fast transient attack. Brass instruments typically have slower attacks as well.

Fast transient attacks, on the other hand, generate a lot of “inharmonic” stuff—lots of non-whole number overtones. On a piano, the initial strike of the hammer generates a lot of inharmonicity, and that strike is basically pitch-less for a split second. It’s only once the string resonates for a moment that we get a sense of the note. The same thing is true of guitars, bells, and especially drums. That initial strike is basically out of tune, and it is the resonance after the strike that conveys a solid sense of pitch.

The faster the attack, the more inharmonicity is generated in that moment. And, by the way, the transient is typically the brightest moment of a note, because it is so rich with harmonics both good and bad.

Actually, the strike of a note is usually very out of tune! Plug a bass into a tuner and watch how the tuner behaves when you slap a note versus using a softer attack with your finger.
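By the way, strike transients aren't the only source of inharmonicity in strings: string stiffness pushes the upper partials slightly sharp even during the sustain. Here's a sketch using the textbook stiff-string formula; the inharmonicity coefficient B below is an invented value for illustration:

```python
import math

def stiff_string_partial(f0, n, b):
    # Textbook stiff-string formula: f_n = n * f0 * sqrt(1 + B * n^2).
    # With B = 0 you get the ideal whole-number harmonic series.
    return n * f0 * math.sqrt(1 + b * n * n)

for n in (1, 2, 4, 8):
    ideal = 110.0 * n
    sharp = stiff_string_partial(110.0, n, b=0.0005)
    print(f"partial {n}: ideal {ideal:.1f} Hz, stiff string {sharp:.1f} Hz")
```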

Bells are a great example of the inharmonicity of a strike tone. Listen to "Hells Bells" by AC/DC: the opening bells sound out of tune until they resonate. This has to do with their strike tone. I found a great video that explains this, and while most of you won’t ever record church bells, this is fascinating stuff and it will help get the concept of inharmonicity firmly in your mind.

SO.... instruments have inharmonicity in the attack, the strike. But what about gear? Compressors? Amps? Plug-ins?

Intermodulation Distortion

The way equipment and devices, whether analog or digital, create inharmonicity is through Intermodulation Distortion.

Intermodulation distortion consists of overtones that are mathematically way out from the fundamental. They typically occur when multiple fundamentals mix together in ways that generate, well... non-whole-number math. Some of these new harmonics might be undertones that happen below the fundamental, and others above. In some cases the products of intermodulation distortion sound good, but as the sounds get more complex, things get really hairy quickly.
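You can list the troublemakers yourself. For two input tones, the low-order intermodulation products are the sums and differences of their multiples. A quick sketch:

```python
def imd_products(f1, f2, order=3):
    # Sum and difference tones: f1±f2, 2f1±f2, 2f2±f1, and so on.
    # None of these land on whole-number multiples of either fundamental.
    products = set()
    for a in range(1, order):
        for b in range(1, order):
            if a + b <= order:
                products.add(a * f1 + b * f2)
                products.add(abs(a * f1 - b * f2))
    return sorted(products)

# Two tones, A4 (440 Hz) and C5 (523.25 Hz):
print(imd_products(440.0, 523.25))
# [83.25, 356.75, 606.5, 963.25, 1403.25, 1486.5]
```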

Remember that an instrument, unless it’s like a flute or something with a very simple timbre, already has a lot of overtones to it. A human voice has an incredibly complex series of overtones, so complex that virtually every person has a unique set, which is why we can recognize someone’s voice even if they just clear their throat. So there’s this ton of harmonic activity, then there’s harmonic distortion added to it, and all of those fundamentals AND harmonics have additional harmonics added to them, and then intermodulation distortion kicks in, and ALL those fundamentals AND harmonics AND additional harmonics start negatively reacting with each other adding in yet more harmonics that have bad math going on.

This is the distortion you hear when you crank up guitar amps, or slam things through the mix bus and drive it into clipping.

Here’s a nice, non-technical video on it that makes a lot of sense. You’ll hear why intermodulation distortion can be a huge issue.

Quick Takeaways

Some things to take away from all this.

  1. Strike tones are out of tune and bright.
  2. Intermodulation Distortion gets worse and more noticeable as the sounds interacting with each other become more complex. It’s hard to get a flute to exhibit any intermodulation distortion. It’s easy to get a full mix to sound awful with even a little intermodulation distortion.

 

 

Reverb has been a studio staple effect since the 1950s, and traditionally, it’s been an expensive proposition. As an acoustic phenomenon, reverb is complex, and re-creating it initially required dedicated spaces—reverb chambers. Real estate ain’t cheap. Later, mechanical reverb simulators, like plate and spring reverbs, were developed, but they were still costly. Digital reverbs started appearing in studios in the 80s, sounding great but, again, rather expensive.

In the early 90s, the price barrier was broken and digital reverb units became affordable enough to find homes in smaller studios, home set-ups, and in musicians’ live setups. Soon artists were replicating their live and home studio sounds in the big studio by using these little cheap reverb units.

More expensive digital reverb units used a process called convolution to simulate reverb and other delay effects. Convolution required a lot of processing power, which made for an expensive and large unit.

An innovative designer named Keith Barr, trying to skirt this issue, developed a different means of generating reverb in part inspired by an older analog delay technology called “bucket brigade.”

Picture a bucket of water being handed from one person to another to another to another. That handoff takes a moment of time. The more people in the bucket brigade, the longer it takes the bucket to travel from start to finish.

This is roughly how a bucket brigade circuit works: a signal is passed from location to location within a circuit, or within a chipset. In the case of an analog bucket brigade, the signal quality degrades as it goes from location to location — think of water splashing out of the bucket as it’s passed.

Mr Barr did a similar thing but in the digital realm, using a computational loop. Think of the bucket perhaps being passed in a circle. The result wasn’t necessarily realistic, but it had a unique character, and for certain types of effects it was better sounding than convolution. Most importantly, it could be accomplished using less computational power, which resulted in low-cost, physically smaller devices. Instead of needing a dedicated room or a three-rack space box, you could get high-quality reverb out of a guitar pedal.
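The recirculating-loop idea is easy to sketch. This toy version (a plain feedback delay line, not Mr Barr's actual design) shows how a single impulse turns into a train of decaying echoes:

```python
def feedback_delay(signal, delay_samples, feedback=0.6):
    # Samples circulate through a fixed-length buffer (the "buckets");
    # a portion of what comes out is fed back around the loop, so one
    # sound builds up a crude tail of recirculating echoes.
    buf = [0.0] * delay_samples
    pos, out = 0, []
    for x in signal:
        y = x + buf[pos] * feedback
        buf[pos] = y                      # write back into the loop
        pos = (pos + 1) % delay_samples
        out.append(y)
    return out

# One impulse in, a decaying echo every 5 samples out:
tail = feedback_delay([1.0] + [0.0] * 20, delay_samples=5)
print([round(v, 3) for v in tail])
```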

Mr Barr’s designs found their way into all sorts of processors and into the hands of musicians and engineers. The sound of many genres, such as Shoegaze and Trance, is built around these little, low-cost reverbs.

Our Micro Digital Reverberator is faithful to the sound of these units if not to the technology. Computational power is now cheap. Our MDR is built from carefully sampled impulse responses taken from our own collection of vintage hardware that we used in our home studios decades ago. It’s an interesting twist of fate that an inexpensive process, created to mimic an expensive process, is now itself being mimicked by the expensive process, which isn’t expensive anymore!

Whatever. The MDR has the sonic character of the original units and the fast, easy interfaces that made working with these things such a snap in the studio.

We use them the way we used them 30 years ago: slapping them onto a send and return, picking out a preset, and moving forward on the session with minimal fuss. Sometimes we’ll swap in something different in the final mix but more often than not, the unprepossessing Micro Digital Reverberator winds up being the reverb we use across the entire project. Fast, simple, and inexpensive has always been a winning formula.

Keith Barr died in 2010 at a relatively young 61. He was an innovator and a pioneer.

Happy Monday, Summer Campers!

This popped up on the Instagram feed of super-engineer Tchad Blake last week:

"There's no "best" eq for anything. All you can really say is what's your favourite at any given time. The (Korneff Audio) AlP has quickly become my favourite eq, ever. Analog or digital. Every time I use it on anything I think I'm hearing something new. It fits my ear/brain chain better than anything l've used before. Wtf...right?? How did these guys do this?? I'd love to know if anyone else out there is hearing just how cool this thing is or even, tell me how it's not. I've been using it every day for over a month and I'm still jacked up about it. "

This is a tremendous compliment, coming from this guy. Tchad Blake is king cheese, bacon from heaven. He's awesome.

Forget the credit list and awards: Tchad Blake makes really interesting recordings. He's always experimenting and inventive. He records drums in the smallest room possible using a Binaural Dummy Head as an overhead mic, doesn't use much reverb, loves to compress and distort, pans things strangely, and in general makes super cool sounds. Listen:

American Music Club "Mercury" on Apple Music

On Spotify

On YouTube

Tchad asked, "How did these guys do this??"

This is how we do it

The EQ on the AIP is a 4-band fully parametric EQ that is especially sweet sounding. It has a weird interface that is based on its inspiration, the Klangfilm RZ062B.

From the name, one can guess that Klangfilm was German and involved in sound for film. Formed in 1928 as a partnership between Siemens and AEG (Telefunken), Klangfilm made top-notch amplification, speakers, preamps, EQs, and home entertainment equipment. By WW2, Klangfilm was wholly owned by Siemens, and the names Klangfilm and Siemens are often used interchangeably. Klangfilm stuff from the 50s and 60s is especially coveted.

The RZ062 was a tube EQ built for film mixing consoles. It was a three-band passive EQ with high and low shelving, plus either a midrange tilt EQ (the 062a) or a presence EQ (meaning upper midrange) with 4 selectable frequencies (1.4kHz, 2kHz, 2.8kHz, 4kHz) and up to 5dB of gain at the selected frequency (the 062b).

The 062 has some similarities to the REDD 37 console used by the Beatles on Rubber Soul, Revolver, Sgt. Pepper's, and The White Album: the REDD 37 preamps were made by Siemens.

The RZ062 is a remarkably smooth, gorgeous-sounding EQ, but it's very limited in choices of frequencies, bandwidth, and overall versatility. Another common complaint is that most of the gain controls work in 2dB increments, and often a setting is either too little gain or too much.

What Dan loved about it, aside from the overall character, was the presence EQ on the 062b that worked perfectly for electric guitars.

So, Dan got his hands on the schematics and basically built the circuit digitally.

This is the usual way we make plugins—we model things at a resistor, capacitor, transformer, transistor, diode level. But what we also do is figure out what we can do with that circuit in the digital realm that would be impossible or, at the very least, difficult to do in the analog realm.

Frankenklangfilm

In the case of the RZ062, Dan decided to take a passive EQ and make it fully parametric. This makes the AIP 4-band incredibly versatile, with the sonics of the original expressed in a modern way. The AIP EQ can do anything a digital parametric EQ can do, from narrow deep cuts to ultrawide boosts, making it useful for anything from getting rid of hum and notching out vocals to finding the exact sweet spot on a snare to gentle "airband" style enhancements. The gain is adjustable out to a ridiculous 36dB of boost and cut, and we've even modeled some EQ curve goofiness that can happen with vintage passive equalizers.

Is it an exact recreation of an RZ062b? No, but at certain settings it can precisely replicate the response curves of the original. We consider it more the Klangfilm's mutant cousin. Frankenklangfilm.

One thing that hasn't changed from the original, however, is the tube/transformer input and output stages, which are a big reason the AIP EQ is so sweet sounding. The original circuit design tends to saturate the transformers a bit. The result is that the input signal is harmonically enhanced feeding into the equalizing circuitry, and then the EQ'd signal is rounded off a bit by the output stage.

So, that's the quick version of what's going on with the EQ on the AIP. If you want to grab an AIP Demo, click here.

Amazing Interview

Gearspace did an interview with Tchad a few years ago. It's detailed, funny, and he gives away the store and the secrets.

I have been sick with a summer cold and tinnitus all week, and I'm behind on answering a bunch of you that wrote in. I'll get back to you all this week. It's always a delight writing New Monday and hearing from you guys.

Next week I think we need to do a survey about how I can make New Monday better and more useful for you.

Warm regards,
Luke@KorneffAudio.com

Here we are - the end of the line for this series of posts on levels, noise, distortion, etc.

Gain staging... from all the talk in online forums and people saying, “Well, you really need to watch your gain staging,” you’d think there's some sort of mystical science magic to it, but it’s really simple.

Gain staging is making sure that each piece of equipment in your signal chain has the best possible signal-to-noise ratio and enough headroom to prevent unintentional distortion.

We have to cover two concepts really quickly, then I’ll tell you how to gain stage things, and we’ll finish off with some tips (rules, suggestions) that make this even easier.

UNITY GAIN

What this means is that the level flowing into the piece of gear is the same as the level flowing out of the piece of gear. Think of a wire. If you feed a signal into a piece of wire, and the wire isn’t really tiny or tremendously long, the amount of power feeding in is the same as the amount of power feeding out.


If you don’t understand this diagram you should give up.

If we stick a bunch of amplifier circuits and EQ circuits and processor circuits between the input and the output, unity gain is still what we want to have happening.

The triangle thinger means Amplifier.

Now, there’s usually something to control Input Level, we sometimes call this a TRIM, and there’s usually something to control Output Level, and this can be called Output Trim, or Output, or it can be a fader, or, in the case of a compressor, it might be called Make-Up Gain, or it can have a Make-Up Gain AND an Output level, but the basic idea is the same: there’s something to control the level of what feeds in, and the level of what feeds out.

Round thing with an arrow through it means potentiometer, or a knob. Look at you! Reading signal flow like a goddamn boss!

Now, most equipment has some sort of meter - ranging from a couple of LEDs to a mechanical VU meter - and that meter is usually located somewhere after the Output level, but sometimes it is switchable, which is nice, because then you can see what your input level is before processing, compare it to the output level, etc.

Can you figure out the symbol for a switch?

SO, you’re always aiming for Unity Gain with each piece of gear, and what we want is the input level set so that the meter reads nominal, and the output level feeding out is at nominal. To do this, we set the level control knobs at the position that gives us Unity Gain, and that position is usually marked with a ZERO or some such.
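Since decibel gains simply add, a whole gain stage can be modeled in one line. A toy sketch (the names here are invented):

```python
def stage_output_db(input_db, input_trim_db=0.0, output_trim_db=0.0):
    # One piece of gear, boiled down: gains in dB just add.
    # With both trims at 0 (unity), what goes in is what comes out.
    return input_db + input_trim_db + output_trim_db

print(stage_output_db(0.0))                        # 0.0 -- unity: in == out
print(stage_output_db(-10.0, input_trim_db=10.0))  # 0.0 -- quiet source trimmed up to nominal
```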

The many faces of UNITY.

Set Things to Unity Gain

This is easy. Grab your OUTPUT LEVEL knobber and set it to 0. So, if it’s a fader on a console you slide it up to 0, the output knob goes to 0, etc. What if the Output Knob is labeled from 0 to 10? Set it to 8, or set it to 10 - it depends on the circuit, and we’re not going into that here.

Next, feed signal into the input, turn up the input gain until the METER is hanging around 0, which indicates nominal level. Now you’ve got something really close to Unity Gain happening for that piece of gear. Will the meter go up and down? Yes. But you’re not chasing the meter. You’re looking to get the meter hanging around 0, or nominal level. Don’t be too fussy. Just get it close.

Remember, the Unity position on a knob or a fader is at ZERO. 0. When you set it to Unity, that’s where it goes.

The next step is to feed the Output of one piece of equipment into the Input of the next piece of equipment. NOW... this might get a bit tricky, so we have to cover Operating Level quickly.

OPERATING LEVEL

Simply put, Operating Level is the amount of power a piece of equipment wants to see at its input and output. This is what you’ll usually run into:

Mic Level is the level of power coming out of a microphone and it’s REALLY LOW. How low? Like -50dBu. What does that mean? It means really low. Don’t worry about it. Mic Level is so low that you can’t do anything with it until you bring it up to Line Level. That’s what a Mic Preamp does - it brings a Mic Level signal up to Line Level.

Instrument Level is the amount of power that comes out of a bass or a guitar with a passive pickup. It’s also really low, and in my mind it's basically the same as a mic level signal. For those advanced campers, I’m ignoring impedance today. If you don’t understand that previous sentence, that’s fine. You'll get there eventually.

Line Level is the level of power flowing through gear - consoles, tape decks, compressors, coming out of synths and keyboards, etc. There are three possible line levels: Consumer, Pro Audio and Broadcast.

Consumer line level is -10dBV. This is the line level of home stereo equipment and also the output level of a lot of synths and keyboards. What does -10dBV mean? Well, it means if the thing is set to unity you have -10dBV feeding in and -10dBV feeding out, and that’s all you need to know. -10dBV is a LOT more powerful than -50dBu. Ignore all the V’s and u’s for now. -50dB is less than -10dB, right? Close enough for rock and roll today.

Pro Line Level is +4dBu. This is hopefully what the majority of equipment is at in your studio. Can’t tell? Pro equipment uses bigger, heavier, tougher connectors. Consumer stuff uses shitty little connectors. With pro line level stuff, if the meter is at 0 and gain is at unity, you have +4dBu feeding in and out. And it’s got a lot more power than -10dBV consumer stuff. Again, ignore the V’s and u’s and just look at the numbers for now. +4 is more than -10 and a lot more than -50.

Broadcast Level is +8dBu. I don’t even know how common this is anymore as I don’t do work in radio stations or TV, but it is 4dB hotter than Pro Line Level. You can probably ignore this.

Speaker Level is what comes out of a power amp and plugs into a speaker. It’s like a SUPER BOOSTED line level. Line level is too weak to move the diaphragm of a speaker, so a power amp is needed to crank shit up. A dumb idea is to plug the output of a power amplifier into anything other than a speaker. Poofsky.
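All of those named levels are really just reference voltages. If you like seeing the math, here’s a little Python sketch - the dBu/dBV formulas are standard (0 dBu = 0.7746 V RMS, 0 dBV = 1 V RMS), but the helper names are mine:

```python
def dbu_to_volts(dbu):
    # 0 dBu is defined as 0.7746 V RMS
    return 0.7746 * 10 ** (dbu / 20)

def dbv_to_volts(dbv):
    # 0 dBV is defined as 1.000 V RMS
    return 1.0 * 10 ** (dbv / 20)

# rough RMS voltages for the levels above
mic_level      = dbu_to_volts(-50)   # ~0.0024 V - really low
consumer_level = dbv_to_volts(-10)   # ~0.316 V
pro_level      = dbu_to_volts(4)     # ~1.228 V
broadcast      = dbu_to_volts(8)     # ~1.946 V
```

Same story as the text: mic level is tiny, consumer is way hotter, pro is hotter still, broadcast hotter yet.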

Again, hopefully your equipment is all +4. It won’t be - you’ll have some guitars and keyboards and, of course, mics, and they won’t be at +4, but that’s why you have preamps. Plug the mic level and instrument level and consumer level stuff into a preamp, and add gain to get it to read 0 on the meter. Now, going out of your DAW or mixer, you might be feeding into a pair of “consumer level” active monitors. Usually there’s a switch so you can match the Pro Level output gain to the consumer level input gain. If you’re thinking the switch knocks off about 12dB of gain (the naive math says 14, but dBu and dBV use different reference voltages), you’re right.

GAIN STAGING WHEN TRACKING

Ok, here we go.

Starting with a mic preamp: Turn the input gain all the way down. Set the output gain to Unity. Plug in the mic. Have the singer or musician play, and turn up the Input Gain until the meter is reading 0, or nominal. Done. If the meter has slow ballistics and you’ve got fast transients like drums, run the meter a little lower, like -10 or -15. Slow transients? You can run it a little hotter and increase your S/N ratio. But really, unless you’ve got slow meters and fast transients, park it around 0 on the meter and move on.

Plug the output of the mic preamp into whatever is next - a compressor, an EQ, etc. Set the EQ flat, set the compressor threshold all the way up, etc. If there’s an output level control or makeup gain, set that to Unity - that is, to 0 or to 8 or whatever. Watch the meter. If there’s no input gain to adjust, the meter should hang out around 0. If there is input gain, then set that to Unity or play with it until the meter is at 0. Now, as you adjust the EQ or the compressor to change the signal, the gain will change, so you’ll have to adjust the Output level perhaps, or the input level - it depends on how crazy the gain change might be.

You keep going until you reach whatever your final stage is - an analog tape deck or a digital tape deck, or a DAW, or perhaps a live mix console... whatever.

Hit analog tape at 0 on the meters, unless it is drums, in which case hit it a little lower unless you want distortion. Hit digital tape decks, like ADATs and DATs and Sony DASH machines, as hard as you can without going over.

Hit DAWs at around -18 to -12dBFS. Yes, you can hit it harder, but for now, you want things bouncing around in that -18 to -12 area.
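If you want to sanity-check that -18 to -12dBFS target in code, here’s a minimal sketch - the dBFS math is standard (full scale = 1.0), but the sample values are made up for illustration:

```python
import math

def peak_dbfs(samples):
    # dBFS of the loudest sample, where full scale = 1.0
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak)

# a fake, well-behaved tracking level: peaks around 0.2 of full scale
samples = [0.05, -0.12, 0.2, -0.18, 0.1]
level = peak_dbfs(samples)     # ~ -14 dBFS
print(-18 <= level <= -12)     # True - in the sweet spot
```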

ADJUSTING LEVELS

Now, where do you adjust the level if things are hot at the tape deck or the DAW? Well, the best place is the Mic Preamp input. Yes, it will screw up your compressor settings a bit, but that’s life and engineering and you’re paid to tweak things. The mic preamp is doing almost all of the work here, so that is where you adjust it. When tracking, get in the habit of setting levels at the earliest spot in the signal chain, at the preamp. And NEVER (and I mean this almost absolutely) use a fader to fine-tune your gain. The exception: if you’re riding levels while tracking, then use the fader. Other than that, leave it at Unity. Have I made this clear?

Always do this. It will save your ass.

Always set your output levels when tracking to Unity. Especially on a console. When you’re tracking, all of the faders should be at the 0 mark. Do this RELIGIOUSLY. Here’s why.

Faders get bumped during sessions because that happens. If you always set them to Unity, then if they get bumped you just set them back to 0 (Unity). You need to pull a mic down quickly? Pull it down. When you bring it back up, place it where it always should be, at Unity.

True story. I was live tracking a band and we had about 27 mics going into the console. Took HOURS to get levels. The irate girlfriend of the lead singer came in, caused a huge ruckus, running around the room screaming, and she ran to the console and moved all the faders around! “There,” she said. “I fucked up your mix.” I think I yelled at her. She stormed out of the room. The band was very upset. “Oh no! She wrecked our levels that took HOURS to set,” cried the guitar player. “Luke, I am so sorry...” said the lead singer.

I laughed. Slid all the faders back up to... WHERE THEY ALWAYS SHOULD BE WHEN TRACKING. Unity. 0. Band loved me and bought me a pony after that. Named the pony Unity.

MISMATCHED OPERATING LEVELS

When you’re feeding something low level into something higher level, you want to adjust things at the INPUT STAGE of the higher level piece of gear. So, with a low level mic going into a preamp, you tweak the gain of the preamp. If you’re plugging some strange shitty consumer -10 compressor you bought into a +4 thing, add gain using the +4 device’s input trim.

What if you feed +4 into -10? Well, turn DOWN the output of the +4 device by about 12dB so you don’t overload the -10 device.
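If you’re curious where that pad number actually comes from: dBu and dBV have different reference voltages (0.7746 V vs. 1 V), so the gap between +4dBu and -10dBV isn’t the naive 14dB. A quick Python sketch (helper names are mine):

```python
import math

DBU_REF = 0.7746   # volts RMS at 0 dBu
DBV_REF = 1.0      # volts RMS at 0 dBV

def level_difference_db(dbu, dbv):
    # difference in dB between a dBu level and a dBV level,
    # computed through their actual voltages
    v1 = DBU_REF * 10 ** (dbu / 20)
    v2 = DBV_REF * 10 ** (dbv / 20)
    return 20 * math.log10(v1 / v2)

print(round(level_difference_db(4, -10), 1))  # 11.8 - not the naive 14
```

Close enough for rock and roll: call it about 12dB.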

FINITO

AND... there you have it. Gain Staging. It’s easy. This blog post is done. What follows below is a bunch of common sense hints that are worth following.

See ya next week.

COMMON SENSE HINTS

1) Nominal is nominal is nominal. If the operating level of each piece of gear is the same, then setting everything to nominal will work. When in doubt... NOMINAL.

2) Set levels as hot as possible without getting distortion. You’re always trying to maximize the s/n ratio.

3) Use the hottest mic possible. It’s really hard to overload a modern condenser, let alone blow it out.

4) Preamps generally have a lot of headroom, so they can usually be run pretty hot. But LISTEN. Some preamps overload in a nice way, others crack and snap. And this sounds like shit. When in doubt, back it down a bit. You can always add distortion later, but you’ll never get rid of it once you have it.

5) Most mechanical (dial) meters are VU and have slow ballistics. Run your level lower on these when it’s percussive stuff, and at nominal for everything else, including entire songs. You can run your level higher on VU meters when the transients of the input signal are slow.

6) LED meters might appear to be fast peak type meters, but in my experience they usually have similar ballistics to a VU meter. Run some drums through it, run some vocals through it, watch how the meter responds. Or look in the goddamn manual.

7) Want to calibrate everything in your signal chain? Stick a guitar amp in the room without a guitar plugged into it, crank it up so it hisses (white noise). Throw a mic in front of it. Plug the mic into a preamp with the output at Unity and adjust the input gain to get it to 0 on the meter. Feed that through each piece of gear in your signal chain until you get to tape or DAW. No guitar amp? Mic the fridge. Or water running in the sink. Don’t get the mic wet.

8) The above is too much work? Turn your mic preamp input all the way down. Set the output of everything to Unity. Set the input gain of the rest of the signal chain to Unity. Provided everything is at +4 operating level you’re done.

9) You’ll make WAY fewer mistakes when patching things if you always think OUTPUT feeds the INPUT, and always plug stuff in that way - the patch cable goes into the OUTPUT first, then you plug it into the INPUT. If I’m doing a complex patch or I’m using a strange patchbay (and I am almost 59 and my brain is turning to shit, so most patchbays are strange to me these days), I say in my head or even out loud, “The Micpre output goes into the TLA-50 input.” Then I grab the next patch cable: “The TLA-50 output goes into the Pultec input...” I have always done this, even when I was young and smart and fast. It reinforces the signal flow in your head, it eliminates almost all patching errors, and it keeps you from looking like a fucking moron during a recording session.

10) When patching in STEREO, put the patch cord for the LEFT signal in your LEFT hand and the RIGHT signal in your RIGHT hand, and then do the above: “The preamp outputs feed the compressor inputs...” Always put left in left and right in right and you’ll reduce the chances of cross-patching something to like 0. I don’t know why schools don’t teach this shit. It will save your ass.

11) Another stereo hint. I always put Left side signals on Odd numbers and Right side signals on Even numbers. And I always put them beside each other. SO, if I have a stereo pair of mics as overheads, the left is plugged into 9 and the right into 10, as an example. I NEVER break this rule. Live mixing too. If something’s on the Left side of the stage, I want it on the Left side of the console so I can grab it with my Left hand. It keeps everything straight in your head. Of course, if something happens to one of my hands, like it gets bitten off by a pony, then it’s mono for me.

12) Tape stuff down if you don’t want it bumped.

When last our heroes met they were discussing Dynamic Range and Nominal Level.

Dynamic Range is the space from the Noise Floor - the spot where the signal is covered up by noise (very very quiet) - to the Distortion Point, which is the spot where harmonic distortion becomes very noticeable.

Nominal Level is a semi-arbitrary spot within the dynamic range of a piece of equipment that the manufacturer has decided gives you a high Signal to Noise ratio and enough Headroom. It’s based on their knowledge of the design of their equipment and conventions in the audio industry. What’s problematic about Nominal Level is that what is actually being measured can be different on each piece of equipment.

Manufacturers put meters on their products, and now is the time to understand how meters relate to nominal level.

SPEEDOMETERS and the VU METER

If you’re in a car in New York in the US, and you’re driving at the speed limit, your speedometer would look like this:

55nyer

If you’re out west in Wyoming, the speed limit is higher, and the speedometer might look like this:

70 wy

If you’re in Germany on the Autobahn, the speedometer might look like this:

130 bahn

But you! You’re a crazy travelin’ bastard! Bit confusing if you’re driving in Wyoming at 70 mph, then NY at 55 mph, then you go to Germany and it’s not even miles per hour, it’s now kilometers per hour, then suddenly you’re back in NY, speeding around on the highway, and a Scorpions tune comes on and you have a flashback to Germany and suddenly you’re going 130 mph. A cop pulls you over, tasers your stumpy ass, etc. True story.

So, let’s say we invent the UNIVERSAL SPEEDOMETER. And it looks like this:

universal

It doesn’t show how fast you’re going in some specific unit, it just shows you how far you are away from the speed limit, and it’s calibrated to wherever you’re driving. In NY, we set the Universal Speedometer to “55” and if we put the needle at 0 we’re at the speed limit. In Wyoming, 0 means we’re going 70mph; in Germany, 0 means we’re going 130kph. So, now, no matter where we go to drive, with the Universal Speedometer, we get the car up to 0 and we’re fine. Who cares about the exact number in mph or kph, because we know WE ARE AT NOMINAL.

uniiversal states

If we need to pass someone, we can speed up, the meter goes up, and we use up some of our HEADROOM to get better performance and speed and get around some car in the way. And if the meter is really low, we know we’re close to the NOISE FLOOR and driving too damn slow.

passing

The Universal Speedometer is a VU meter. It doesn’t tell you what the nominal level is; it tells you something much more important, which is: are you at nominal level or not. And as long as we slap a VU meter on all of our gear, we now know exactly where to park the level: at 0. 0 is nominal level.

vu meters

And it really doesn’t matter what the meter looks like, if it’s LEDs or LCD or mechanical or virtual, nominal is nominal.

VU Meters and Average vs. Peak

Now, even though I sort of implied that all meters are the same they aren’t. It’s audio. There’s always something to fuck up the simplicity.

Meters have a speed of response, and it can be different from meter to meter. Some meters are fast and others are slow. Some meters measure peak energy, some meters measure average energy.

So, let’s say I’m in a room with the lights off. The room is dark. Let’s say I flip the light on for a split second and then click them off again. The room is bright for a moment, but then it’s dark again. So, it’s dark on average, but there was a “peak” moment of light. If I start flashing the lights on and off quickly, you might start perceiving the room as being “bright” rather than “dark,” because the AVERAGE light in the room across time is higher.

SO, if you are “set” to notice the average brightness of the room, you’ll respond one way, and if you’re set to notice “peaks,” you’ll notice something else.

So, how a meter responds depends on if it’s “noticing” the average or the peaks, or some sort of in-between.

VU meters are set to notice the average power of a signal. So, if you run something with a slow attack through one, like a violin or a guitar or a voice, or an entire finished song, the meter gives you a good idea as to where your signal level is at. But with an instrument with fast transients, like a drum, the attack happens too quickly to be noticed by the meter. And by the time the meter responds, the transient has already gotten through, and it could be WAY above nominal and actually causing distortion, but a VU meter won’t tell you that.

Setting Levels to Analog Tape

A skill you had to have in the analog days was how to read the meters to get good levels on tape. As discussed, with slow transients, meters were more accurate as to level than with fast transients. So, I learned to cut drums to tape on the low side, knowing that the signal hitting the tape was actually +15dB or more above the meter reading. Vocals I would cut right at about nominal, because the vocalists I was working with were usually pretty consistent. I would cut bass right around nominal or a little higher, but if the bassist was slapping, the meter wouldn’t respond fast enough, so I’d set the levels a little lower than nominal.

vu meters copy

With a softer song, I would actually cut the vocals hotter to tape - burn up some headroom to increase the distance from the hiss. And if I was working with a very unpredictable singer on a loud rock track, I might cut the vocals low on the meter to buy me some more headroom, unless I wanted distortion.

One thing I always did was smash heavy guitar parts into the tape. On playback they would sound huge and crunchy, and very solid - due to the tape compression. One time I was running everything so hot that the studio manager shut down my session because he thought I was damaging equipment. He brought the studio's tech in to lecture me on proper levels (I kid you not) and the tech proceeded to laugh at the studio manager.

special levels

If it sounds good, it is good.

Meter Ballistics

Fast responding peak meters are tracking the transients of signals and are basically telling you how much headroom you’re using up. This is really useful information, but it’s different than what a slower meter is telling you. Both types of meters are really useful, especially together, which is why so often a VU meter has a peak light on it.

Meter Ballistics is how fast the meter responds. You can get an idea of this by looking at the meter. If it’s a mechanical meter and it has to swing a little needle around, even if it’s really fast it’s never as fast as an LED meter can be. But an LED meter might be electronically slowed down to respond like a slower mechanical meter - virtual meters on your DAW might be set to respond to average rather than peak, too. You can also look in the manual and find out if the meter is measuring peak or something more average (look for the letters RMS, which basically means “average”).

Often, there are two meters within a meter set, and one is measuring the average power, and the other is measuring peak. When you're metering violins, which have slow attacks, you’ll notice that the two meters read very close to each other, whereas if you’re metering drums, there will be a much bigger difference between the two.
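You can see that average-vs-peak split in a few lines of Python. The signals here are toy stand-ins (a steady sine for the violin, one spike in near-silence for the drum), but the RMS and peak math is the real thing:

```python
import math

def peak_db(samples):
    # what a fast peak meter notices: the single loudest sample
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_db(samples):
    # what a VU-style average meter notices: RMS power over time
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# steady tone: peak and average read close together
tone = [math.sin(2 * math.pi * i / 100) for i in range(1000)]
# transient: one big spike in a quiet signal, like a drum hit
drum = [0.01] * 999 + [1.0]

print(round(peak_db(tone) - rms_db(tone), 1))  # ~3 dB apart
print(round(peak_db(drum) - rms_db(drum), 1))  # roughly 30 dB apart
```

Same peak level in both cases, wildly different average - which is exactly why a VU meter lies to you about drums.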

SO... now you know pretty much exactly what that meter is doing, and you know what nominal level is, and how this all fits together. Next week, we’ll talk gain staging and setting levels.

Thanks for all the good feedback. Much appreciated.

Let’s put the whole thing together today. How Noise, Distortion and signal level all fit together. How it all works.

DYNAMIC RANGE

Everything in audio - from a human voice to a mic to a preamp to a converter to a console to a power amp to a speaker to a human ear - has a lower limit and an upper limit.

The lower limit is self-noise, the noise floor.

The upper limit is the distortion point, which is the spot that harmonic distortion becomes a big problem. By the way, the manufacturer decides what is unacceptable harmonic distortion.

So, that is the playing field in audio - from the Noise Floor to the Distortion Point. And we call that area the DYNAMIC RANGE.

This is what Dynamic Range looks like, kids.

Dynamic range can be huge. Your ear has a dynamic range of somewhere around 120 to 140 dB. You can hear everything from an ant picking its nose to something as loud as a gunshot about a foot from your head. Truly, though, if you’re listening to things ON PURPOSE and without HEARING PROTECTION louder than 112 dB, you’re crazy. We will, of course, talk about dB later... much later...

Mics have dynamic ranges around 120 dB, which is about the same as a human ear on average. Mic preamps have dynamic ranges all over the place, from as high as 130 dB down to 90 dB or even less. Digital audio recordings can have dynamic ranges well over 100 dB, depending on how they’re designed. Analog tape sorta sucks - lotssss of hisssss - dynamic range can be down in the 60s and 70s without noise reduction. Radio stations barely hit 50 dB of dynamic range.

HOW TO SET LEVELS BADLY

Let’s learn how to be a shitty engineer quickly...

Our signal chain is a loud band (150 dB D/R), into a good mic (120 dB D/R), into an ok preamp (90 dB D/R) onto a tape track (70 dB D/R) and then out through a radio station (50 dB D/R).
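If you want the punchline in code: a chain can never be wider than its narrowest link. A toy sketch using the numbers above:

```python
# dynamic range, in dB, of each stage in the chain above
chain = {
    "band": 150,
    "mic": 120,
    "preamp": 90,
    "tape": 70,
    "radio": 50,
}

# the whole chain can never deliver more dynamic range
# than its narrowest stage - everything squeezes through it
bottleneck = min(chain, key=chain.get)
print(bottleneck, chain[bottleneck])  # radio 50
```

Gain staging is the art of deciding *which* part of each stage’s range your signal lives in as it squeezes through.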

Now, common sense would suggest you set the levels as high as possible. Especially with analog recording, the idea was to hit the tape very hard, making sure most of your signal was way above the noise floor, so the only time you’d hear hiss was when the song was very quiet, like at the beginning or the ending. So, let’s just do that - set everything right below distortion:

to distortion point

Notice the noise floor going up? Congratulations, shitty engineer! You’ve lost all the quiet stuff in the noise! By the time it hits the radio you can’t even hear the fadeout of the song and the hiss and noise has gotten really loud. Get fired by the band!

Let’s do the opposite. Let’s set the levels so that we DON’T lose all the quiet stuff. We’ll keep our signal as far above the noise floor as possible...

to noise

Now you see the dynamic range squashing down and clipping the wave form, adding harmonic distortion. You lose again! Now your recording is distorted from almost the moment things begin, and it just gets worse and worse... Shitty engineer, nicely done. Fired by band. Work for uncle loading boxes.

HEADROOM

We need to find a place within the dynamic range to set our levels so we avoid being a shitty engineer. Let’s reason this out.

Ok, we do want levels as high as possible, because noise sucks. But what if something unexpected happens? If we set a mic preamp level to right under distortion, and then the vocalist moves in a little closer to the mic, or sings a tiny bit louder, the increase in power can clip the mic preamp, and you’ll hear distortion. So, we need a little bit of safety margin up there so we have some room in case something gets unexpectedly loud. That’s HEADROOM.

Headroom. If you bump your head there’s clipping...

What are typical headroom figures? It’s all over the place. On analog tape decks we were usually recording to give ourselves about 9 dB of headroom on the tape. Mic preamps usually have very good headroom - from 18 dB to 26 dB or even higher. Like dynamic range, it’s variable and depends on the type of gear and the manufacturer, and the engineer.

NOMINAL LEVEL

We want to set our levels as high as possible to keep our S/N (Signal to Noise) ratio as high as possible. And we don’t want to clip, so we’re going to give ourselves a little room on top - headroom. We call this level NOMINAL LEVEL.

What usually happens is we have the musician play or sing, and we watch the meters and listen, and we set the level so that we have some headroom just in case. It’s sort of an average, pretty high level. It’s different for different types of gear, and it’s usually determined by the manufacturer. The signal won’t sit exactly at nominal level the whole time, because when recording or live mixing, the signal (the band, the vocal, the drums, etc.) will go up and down, depending on the dynamics of the player and the song. But there is pretty much a consistency to everything, right? Shit’s not usually really loud and then really quiet unless the players suck or it’s some sort of avant-garde weird ass thing happening musically.

So, here is what it all looks like:

Memorize this.

Dynamic Range is from Noise Floor to Distortion Point.

Nominal Level is a High Average level setting.

Signal to Noise Ratio is from Noise floor to Nominal.

Headroom is from Nominal to Distortion Point.

Signal to Noise Ratio + Headroom = Dynamic Range.
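That last line is just addition, and it’s worth internalizing. A tiny sketch - the preamp numbers are made up but plausible:

```python
# Dynamic Range is from Noise Floor to Distortion Point, and
# S/N + Headroom = Dynamic Range, so any one number falls out
# of the other two.
def s_n_ratio(dynamic_range_db, headroom_db):
    return dynamic_range_db - headroom_db

def headroom(dynamic_range_db, s_n_ratio_db):
    return dynamic_range_db - s_n_ratio_db

# hypothetical preamp: 110 dB of dynamic range, with nominal
# parked to leave 20 dB of headroom above it
print(s_n_ratio(110, 20))   # 90 dB of signal above the noise floor
```

Move nominal up and you buy S/N at the cost of headroom; move it down and you buy headroom at the cost of hiss. That’s the whole trade.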

What are typical nominal level figures? It depends. It depends on the type of gear you’re working with. The nominal level of an analog tape deck is measured one way, while the nominal level of a mic preamp is measured another way, while the levels on your DAW are measured yet a different way.

If you are thinking, “Wait. The nominal level is basically different all over the signal chain. Manufacturers decide where it is, engineers decide where it is, the type of gear affects it. Jeez Louise, how do I set levels so everything sounds rockin’ good?”

You use meters and common sense. And experience.

HOW TO SET LEVELS CORRECTLY

To not be a shitty engineer, you set your levels differently for each piece of gear, adjusting to take into account the dynamic range of each piece of gear. In other words, the nominal level changes, and you have to do things to control your dynamic range. Like this:

proper gainstaging
proper gainstaging 2

Notice that we’re reducing the dynamic range from both the top and the bottom. Instead of letting our signals go beyond the distortion point or below the noise floor, we’re controlling things. We’re controlling the dynamic range of the signal across the signal chain. That sounds like compression doesn’t it? And yes, that is certainly part of what is going on. But there is also recording technique involved to make sure all the pieces of gear fit together in the best way possible for the signal.

That’s GAIN STAGING. More on that at a later date!

OK! It’s been a pretty long slog through this stuff, but hopefully you’re a bit clearer on it all. It can be confusing, and usually when I explain it I can wave my arms around and demonstrate stuff, and it makes more sense and I look like a nut.

I can’t emphasize enough how important knowing that diagram - Dynamic Range with Nominal in the Middle - is. If you can hold that diagram in your head while you’re setting up gear and getting your levels, your recordings will improve immensely. I want you all to be great engineers.

 

Previous posts have talked about what happens when audio signals get too powerful, too loud. Distortion is what happens. That ain’t the same pork chop is what happens. For a refresher go here.

This week, let us look at kinda the opposite. If distortion is what we hear when things are too much, what is at the other end, the quiet, weak side of things?

Noise is at the other end.

NOISE and SIGNAL

Noise is anything that you’d rather not hear, basically. And Signal is the thing that you actually do want to hear.

  • Watching sports on TV and hearing the announcer clearly = signal
  • Spouse/Significant Other/Toddler w/Poopie Diaper/Pet Cat in Heat = noise

When we like how noise sounds, it isn’t noise anymore. It becomes signal.

Example: you’re recording drums. The snare is leaking into the tom mics. The snare leakage is noise. So, you put a bunch of gates on the tom mics and spend 45 minutes getting rid of all the snare leaking into the toms.

Cue the band, cue the drummer. Do the count in 1 2 3 4...

And it sounds like shit. The snare sounds like you mic’d up this monkey:

Lars, you’re dragging again...

Because the leakage into the tom mics was actually HELPING the snare and the whole drum set. So, you pull off the gates, and now that leakage, previously noise, has become part of the signal.

At a live show, the audience is noise, the sound of the band is signal. In your car, the radio is signal and the sound of the engine, the wheels on the road, the wind rushing past the car is the noise. And suddenly, you hear a “pop” and then a flapping sound outside the car, and now the radio becomes the noise, so you turn it down to hear if you have a flat tire, because the road sounds are now the signal.

Please note that when Noise gets in the way of hearing the Signal there is a Problem.

SELF-NOISE and the NOISE FLOOR

Self-noise is the noise that a device makes when it’s turned on and power is running through it. If you aren’t running a signal through your console or your interface, and you turn up the speakers, you’ll hear hiss. Hopefully, the hiss will be very quiet, and you won’t hear hum along with it.

Hiss is the sound of the device working, the sound of electrons running around the circuit. This hiss is self-noise. All devices that have power flowing through them make noise. Your body generates self-noise, unless you’re dead.

At night, if it’s really quiet, you might hear a whooshing in your ears and perhaps a very very quiet whining sound. If you put a cup or a shell to your ear you’ll easily hear the whooshing — remember as a kid when you put a shell to your ear and could hear the ocean? It’s not the ocean. It’s blood flowing through your ear, reflected back into it by the shell. You’ll hear the same whooshing if you put a coffee cup up to your head, rather than a barista yelling or a tractor on a coffee plantation in Guatemala.

The whoosh is your blood flowing. The whine is your nervous system working. This is really quiet stuff, about the quietest thing you can hear. We call this the Threshold of Hearing. This is like the sound of an ant picking its nose.

Now, you don’t normally hear this stuff in your day-to-day life because everything around you is noisier. Noise masks the signal when the signal gets too quiet and falls below the noise. The limit to how quiet a signal you can have is how low the noise is. You can’t really go below the noise, so that bottom limit is called the Noise Floor. You can’t get lower than the floor, right?

The noise floor of a piece of audio equipment is typically really low. Guitar amps have more noise — how often have you heard a sustained guitar note decay away into the hiss of a guitar amp? It goes below the noise floor and then you can’t hear it anymore.

The noise floor is a shifting thing. When you’re mixing live, is the hiss through a PA system really an issue? It might be during the sound check when the venue is empty. But once it fills up with people, the noise floor of the audience is considerably higher than the hiss of the PA and effectively masks it. And if your PA hiss is heard above the audience... jeez, you suck, you stumpy bastard.

SIGNAL to NOISE RATIO

You’re in the coffee shop talking to a friend. The friend who is talking is the SIGNAL — the thing you want to hear, and the background chatter, espresso machine sounds, etc., are the NOISE — the things you don’t want to hear. The louder the coffee shop gets, the louder your friend will have to be for you to hear their signal over the noise.

Signal over Noise... let’s call this the Signal to Noise Ratio. S/N ratio. If this is a low number, the noise is loud and it's intruding on the signal. If this number is high, the noise is quiet compared to the signal. So, now you understand this bit more:

1176 spec

First distortion, now noise... soon you’ll be able to just read this stuff.
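The arithmetic behind a spec line like that is just subtraction on a shared dB scale. A sketch of the coffee-shop example - the SPL figures are invented, but in the right ballpark:

```python
def s_n_ratio_db(signal_db, noise_db):
    # both levels measured on the same dB scale;
    # bigger result = cleaner signal
    return signal_db - noise_db

# made-up but plausible SPL figures:
# a friend talking over a quiet cafe - easy listening
print(s_n_ratio_db(65, 40))   # 25 dB - comfortable
# the same friend over the espresso-machine rush - trouble
print(s_n_ratio_db(65, 80))   # -15 dB - buried in the noise
```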

You still might not fully understand decibels, but we’ll get to that.

The S/N ratio is different for different types of equipment. It’s comparatively huge for microphones and really good preamps, and much less so for cheaper equipment, guitar amps, PA systems, etc.

sn01
sn 2

Noise builds up. When recording, the ambient sound of the studio feeds into, oh, say, a condenser mic, which adds some hiss, and then into a preamp, which adds a little more hiss, and then into various converters and devices, all of which add hiss. And all of this noise adds up, and that’s the noise floor. Then someone whacks a snare out in the studio, and that goes slamming through everything and it’s much louder than the noise. High S/N ratio. The snare rings out for a moment, then decays into the ambience of the room. And once it decays to a certain level, we’ll notice the noise again. S/N ratio is a fluid thing.
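“Noise adds up” is literal: uncorrelated noise sources sum as power, not as dB. A quick sketch - the per-stage noise floors here are invented for illustration:

```python
import math

def sum_noise_floors(levels_db):
    # uncorrelated noise adds as power: convert each level to
    # linear power, sum the powers, convert back to dB
    total_power = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_power)

# invented noise floors on one common dB scale:
# room ambience, mic self-noise, preamp hiss, converter noise
floors = [-70, -72, -75, -85]
print(round(sum_noise_floors(floors), 1))  # ~ -67 dB - higher than any one source
```

Two equal noise sources combine to 3 dB louder than either one alone, which is why every extra hissy box in the chain nudges the floor upward.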

CAPTAIN OBVIOUS

This is frickin’ obvious but it must be said: you usually hear noise when things are quiet, when the signal is low and the S/N ratio is small.

Another frickin’ obvious thing that must be said: analog recording techniques were mostly developed to compensate for noise, especially tape hiss.

Tape hiss... the sound a piece of magnetic tape makes as it slithers over the heads of a tape recorder. The more tracks you have, the more tape hiss you’d get. Dolby, DBX noise reduction, noise gates, etc., were all developed to control tape hiss.

Digital recording was developed to totally get rid of tape hiss.

I can’t tell ya how much time I spent in my engineering career trying to get rid of noise. Automating mutes. Gates. Yada yada yada. I never really used DBX systems because I thought they sounded terrible, and if I was recording at a nice high level to really good tape, and was careful with muting, I could make a virtually hiss free record.

You can’t hear hiss when the band is cranking.

I cannot understand why anyone would make a plug-in that adds “authentic analog noise” to the signal chain. Restaurants are allowed to have a very low percentage of cockroach bits and rat crap in the food. Would you add cockroach bits and rat shit when you cook at home to get that “authentic restaurant taste?” Fuck no.

Next week we’ll put all of this together and figure out dynamic range and metering.

Be well. Stay safe.

Distortion, in the simplest sense, is when what comes out is different than what goes in. Think about eating dinner and what happens six hours later.... that ain’t the same pork chop, is it?

Something in the process, in the piece of equipment, is changing the signal.

Usually, what happens is that the piece of equipment runs out of ability to accurately reproduce the input signal. But what the heck does this mean, actually? Let me give you a few examples. If you get this clear in your head, so many things will suddenly make sense.

Let’s Look at a Speaker

A simple speaker is a cone of paper that’s being pushed forward and backward by an electromagnet (the coil). There’s a flexible springy area around the cone of paper called the surround, and the base of the cone is attached to another springy thing called a spider. The surround and the spider are attached to a frame called the basket. The spider and the surround allow the cone to move forward and back while supporting it in the basket. When the cone moves forward and back it pushes air forward and back. The coil is what causes the cone to move - pushing it forward and back, depending on the signal that’s fed into it. Like this diagram:

simple speaker lg

A simplified speaker

If you feed in a low frequency signal, the cone moves back and forth slowly, and as the pitch goes up, the cone moves back and forth faster and faster. If you feed a weak signal in, the cone moves back and forth over a small distance.

speaker excursion lg

Linear reproduction of an input signal

If you crank the power up (the volume) the cone moves back and forth and covers a longer distance.

more excursion lg

Louder but still linear...

However, the cone can’t move an infinite distance back and forth. There will come a point when the surround and the spider are completely stretched and the cone can’t move any further. The speaker has run out of ability. Does that make sense?

clipped output lg

Speaker can’t move enough and the output is clipped

When the cone has the ability to move, it does so, and it can accurately track the up and down of the waveform. When the surround and spider run out of stretch, however, the cone can’t track the waveform. It moves as far as it can, can’t go any further, so it essentially jams - it stays still. And the waveform that comes out of it is now different than the waveform that went into it. And if you look at the waveform the speaker is emitting, it’s clipped — it’s squared.

Remember last week, when we mixed odd order harmonics in with the fundamental and caused a square wave? This is exactly what’s happening with the speaker, but in reverse: its movement is “jammed”: it squares and generates a bunch of distorted crap — harmonic distortion crap. Oh my, that doesn’t look like the original pork chop, does it?

So, a speaker has a certain amount of ability to move and reproduce a waveform in a linear manner. If we put in too much power, we run the speaker out of ability, and the result is distortion.
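You can fake the "ran out of ability" thing with a few lines of Python: a sine wave passing through a stage that can only swing so far in either direction. This is a crude hard-clip sketch, not a model of any real speaker:

```python
import math

def clipped_sine(amplitude, limit, n=8):
    """One cycle of a sine wave through a stage that can only move +/- limit."""
    return [max(-limit, min(limit, amplitude * math.sin(2 * math.pi * i / n)))
            for i in range(n)]

# Plenty of ability: the wave passes through unchanged
print(clipped_sine(1.0, 2.0))
# Pushed past the limit: the peaks flatten out - the wave gets "squared"
print(clipped_sine(3.0, 2.0))
```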

How much ability does a speaker have?

It depends on things, but to look at it very simply, if a speaker is rated to 150 watts, it has 150 watts worth of ability.

Let’s Look at an Amplifier

Ok, so a speaker is rated to 150 watts, so that means an amplifier which is rated to 150 watts... hmmm... that means the amp has 150 watts worth of ability to reproduce the signal, right?

EXACTLY!!! That is exactly right. Amps - and not just power amps or guitar amps, but the little tiny amplifiers stuffed into the circuit boards of your recording console - have only so much ability. They run out of ability to reproduce a signal, and when that happens, the result is distorted output, non-linear output.

As a signal feeds in, the amplifier uses power to reproduce it. As we turn up the input signal, the amp needs more power to track the waveform in a linear manner. But there isn’t infinite power. The amp isn't connected directly to the sun. Eventually, the amplifier cannot draw any more power, it loses its ability to track the waveform, and it squares the wave, just like a speaker that runs out of springiness.

amp linear

Plenty of power for linear reproduction

Amplifiers use power to reproduce signal, and if they don’t have enough power, they generate harmonic distortion. A simple way to look at it, but a very useful one.

amp non linear

Not enough power = generation of harmonic distortion

Everything Runs Out of Ability

A singer can only get so loud before their vocal cords can no longer move — they physically slam into each other in the voice box. The vocal cords run out of ability. The resulting vocal has a growl to it — distortion. Harmonic distortion. And if the singer keeps doing this, they start losing their voice, and if they do it enough, they can do permanent damage, just like you can blow a speaker out, or blow up an amplifier.

Your ears. Your eardrum can only move so far. The little bones in your ear (there are three little bones in each) can only move so far. The little hairs in your cochlea which turn sound waves into nerve impulses can only move so far. They run out of ability to move, to track the waveform as it gets loud, and the result is distortion. And you can hear this distortion, and you can feel it. And if you consistently run your ears out of ability you’ll get tinnitus. Or, if the waveform is loud enough, you can blow your eardrum out — literally tear it apart.

Stuff certain mics into a kick drum and one good hit can break the diaphragm in a split second, and if it doesn’t break it, the mic will clip the waveform as it runs out of ability to move and starts generating harmonic distortion.

Do digital processors run out of ability? Yes. Digital processors do math, and you can basically use up all of the processor’s ability to perform mathematical calculations. The result, however, isn’t harmonic distortion. It’s a loud click or static "scratching" sound, and if you feed that through a speaker, the speaker runs out of ability to reproduce it almost immediately, which is why it sounds awful and is really bad for your speakers. And your ears.

Everything runs out of ability, and when it does, you get that unrecognizable pork chop.

A short post this week, but an important one if this is stuff you’re trying to wrap your head around. Hit us up on Facebook or Discord if you’ve got a question.

 

Last week I wrote about Bias, and how if an amplifier or an audio component isn’t biased correctly it might not work or it might cause a lot of harmonic distortion.

This week: what the heck is harmonic distortion, and what the heck is a harmonic?

What’s a Harmonic?

So, first of all, what is a harmonic.

If you take a note, like a C, and play it on a guitar or a piano, because of the physics involved, not only do you hear the note C, you also hear, very quietly, other notes that are mathematically related to the C you’re playing. Like, you might hear a C an octave higher, and then another octave above that, and you might hear an E and G mixed in there as well. It’s actually quite a bit more complex than that, but the point is that if you play a note on virtually any instrument, you get more than the single note that defines the perceived pitch. Those other notes are the harmonics.

I found this video, which is an ok explanation - it could be clearer, but if you want to take a moment, a quick watch might help you understand some of the physics involved.

The harmonics of a note are caused by the physics of vibrations, and by the construction of an instrument, or of a person’s face if we’re talking about the harmonics of a sung note. And, in fact, the harmonics of an instrument are a huge factor in why an instrument sounds the way it does. A guitar with steel strings has a different set of harmonics than a guitar with nylon strings. The two types of guitars have a lot in common in terms of harmonics — you can tell they’re both guitars — but the steel string is typically brighter and more metallic, and that’s because of its harmonics.

The harmonics have a mathematical relationship to that C you played (we can call that the fundamental), and the particular pattern of harmonics is what makes an instrument recognizable as an instrument. And some patterns of harmonics sound better to our ears than other patterns of harmonics.

In fact, harmonics do tend to be high frequency information, and we will see why that is important in a bit.

What’s Harmonic Distortion?

Harmonic distortion is when harmonics are added to a sound, a signal, that aren’t there in the original signal.

Back to playing a C. If we played a C on a very simple instrument, like a flute, you would get a very pure sounding C — it wouldn’t have a lot of extra harmonics happening, unlike a guitar, for instance. The complex body and physics of a guitar actually add harmonics to the C. It’s a bizarre way to think of it, but you can consider a guitar a generator of harmonic distortion. So is a piano, a trombone, a human voice, etc. These all are sort of "harmonic distortion generators". But we want that particular harmonic distortion - it’s how those instruments sound.

Electronic components (amplifiers, etc.) also add harmonics to a signal. Usually a well-designed circuit adds a very, very tiny amount of harmonics, and we really can’t hear it because it's such a small amount. That is also harmonic distortion. A badly designed circuit can add enough harmonic distortion that one can really hear it. There are amounts of harmonic distortion that can be very noticeable, and certain patterns of harmonics are more noticeable, and some patterns sound good, and some sound like shit.

Harmonic Distortion = Sonic Finger Print

All the elements in an audio recording signal chain add some amount of harmonic distortion. Microphones, speakers, preamps, compressors, power amps, guitar amps, effects pedals — all of these things add harmonic distortion. Some are designed to add as little as possible, and others are designed to add huge amounts. Microphones sound different from each other, in part, due to the harmonic distortion they add, as do speakers, mic preamps, etc.

As mentioned earlier, some patterns of harmonics our ears like better than others. Tubes, whether in compressors or guitar amps, tend to have harmonic distortion that our ears like. Tubes are often described as sounding “warm.” That’s the mathematical relationship of the harmonic distortion (the harmonics added) of a tube circuit.

Solid state equipment also has distinctive harmonics patterns that it adds to a signal. That’s part of the reason Neve sounds like a Neve, and a Mackie sounds like a Mackie.

THD

THD stands for Total Harmonic Distortion, and it’s a measurement of the amount of harmonics a piece of equipment adds to a signal passing through it. The manufacturer of the equipment will usually specify this as a percentage at certain frequencies, something like, “Less than 0.5% THD from 20 Hz to 20 kHz at full rated power.” Some manufacturers specify it in much looser terms: “Less than 1% THD.” Generally, the better the gear, the lower the % of harmonic distortion, and the more specific the manufacturer will be about it.
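For the curious: THD is basically the energy of all the added harmonics compared to the fundamental, expressed as a percentage. Here's a simplified sketch of the arithmetic in Python (real THD measurement involves test tones and analyzers; this is just the math):

```python
import math

def thd_percent(fundamental_rms, harmonic_rms_levels):
    """THD as a percentage: RMS sum of the added harmonics over the fundamental."""
    return 100 * math.sqrt(sum(v ** 2 for v in harmonic_rms_levels)) / fundamental_rms

# Made-up example: a 1.0 V fundamental plus tiny 2nd and 3rd harmonics
print(thd_percent(1.0, [0.004, 0.003]))  # about 0.5 (percent)
```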

neve shelford

Specifications from a Neve Shelford channel

What’s a lot of harmonic distortion, and what’s a little? Depends. 0.5% is pretty good for a tube component, but pretty awful for a solid state component. A really high end solid state device can have incredibly low harmonic distortion - like 0.002%.

Tube mics are typically in the 1% THD neck of the woods. 1176 Limiters have around 0.5%. A Neve 5211 is down around 0.0015%. Obviously, guitar amps designed for distortion have much higher amounts of THD. And, also obviously, the more you turn stuff up (increase the power), the more you increase harmonic distortion.

But, THD is really only a small part of the harmonic distortion story. There’s also the “sound” of the harmonics added, the math of their pattern, that make a huge difference.

Even vs. Odd Harmonics

I made a video about this next bit, so you can watch the video and skip ahead, or watch it and then read so you understand it all that much better.

Quickly, let's look at the way a string vibrates.

A vibrating string is very complex. Back to our C, if you fret and pluck a C on a guitar, you'll get a nice loud fundamental, vibrating at 261.63hz. Let's round that to 262 to make the math easier.

sine wave 300

Sine wave fundamental.

So, we have a string vibrating at 262hz, but it's also vibrating at twice that - 524hz. But it isn't vibrating with as much power, so this 2nd harmonic (the fundamental counts as the 1st) is much quieter than the fundamental.

sine wave 600

Sine wave 2x fundamental

There's also a harmonic vibrating four times as fast as the fundamental — 1048hz.

sine wave 1200

Harmonic 4 times the fundamental

When these vibrations all happen on one string, the result is a much more complex waveform than any fundamental or harmonic by itself.

sine300 600 1200

A complex waveform

There are also other math things happening there. There's an E, which is the major third, and which vibrates at around 1.25 times the frequency of the fundamental (a 5:4 ratio).

sine and 3rds

Fundamental and the third - the basis of a major chord

These harmonic relationships that sound good to our ears tend to be even number multiples, often called even order harmonics. Our ears tend to not like odd number multiples - 3, 5, 7, etc. These particular odd harmonics sound kinda ugly to our ears — the 7x is especially dislikable, and they tend to square the wave off...

sime 300 900

Fundamental and an odd number (x3) harmonic - notice it’s a square wave

In general, our ears think even harmonics sound better than odd. In general, tube equipment generates a lot of even harmonics. Does that explain, to a large extent, why everyone likes the sound of tube amps?
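If you want to see the "odd harmonics square the wave off" idea in plain numbers, here's a toy Python sketch. Stack a fundamental with odd harmonics (3x, 5x, 7x...) at decreasing levels, which happens to be the classic square wave recipe, and the crest of the wave flattens toward a plateau:

```python
import math

def odd_harmonic_sum(x, n_harmonics):
    """Fundamental plus odd harmonics (3x, 5x, ...) at 1/k level."""
    return sum(math.sin(k * x) / k for k in range(1, 2 * n_harmonics, 2))

# Sample the crest of the wave: with more odd harmonics mixed in,
# the peak settles toward a flat top (a value of pi/4, about 0.785)
for n in (1, 3, 50):
    print(n, round(odd_harmonic_sum(math.pi / 2, n), 3))
```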

Many Things Explained

Understanding some of the math of harmonics also explains why distortion seems to make something sound brighter: because what you're adding is harmonics ABOVE the fundamental, and those harmonics stack up and increase the apparent high frequency tonality of a sound. It also explains why too much harmonic distortion can sound harsh and painful — it's causing a lot of high frequency activity, and our ears don't like that very much.

Now, some of you might be thinking, "Even on a really good day people can only hear up to 20kHz. If I have something at 8kHz, then its harmonics are at 16kHz and 32kHz and other frequencies, all high, and most of them beyond the range of hearing. How can this possibly affect what we hear?"

The answer is that we can sense frequencies we can't clearly hear, and very high, over 20kHz frequencies can affect the equipment we are using, especially digital stuff, and we can hear that effect.

Some stuff you just have to take on faith, you stumpy bastard.

SO... download a PSC demo or buy one, flip it around to the backside, turn up the PREAMP gain until you hear some distortion. Then swap around the three different sets of tubes we've thoughtfully included, and you'll hear the quality and frequency response of the distortion change. This is because we've modeled the different harmonic distortion characteristics of them into the PSC.

psc gain

Adjustments to hear harmonic distortion on the PSC.

On the AIP, you can do this: Switch on the PSP and turn up the input gain until you hear distortion.

aip front

Front panel: switch in the PSP at the center

Go around to the back panel and click between TUBES, TAPE and SOLID STATE to hear three different variations of harmonic distortion. Turn up the INPUT TRIM to make the effect more easily heard. Don’t forget that turning up the trim can add a lot of gain and make things louder. Use the OUTPUT TRIM to readjust output gain down.

aip back

Next week, we'll talk about why cranking things up causes an increase in harmonic distortion, and we'll start talking about some recording techniques that take advantage of the physics involved.

Most recordists use digital reverb these days, but a lot of the programming of a digital reverb is based on either environmental reverb or mechanical reverb simulators.

Environmental Reverb - sound waves bouncing around a hall or a room, or a reverb/echo chamber.

Mechanical Reverb Simulators - using metal and speakers and pickups to “mimic” naturally occurring acoustic reverb.

This article covers Live room, Chamber, Plate and Spring reverb — how they sound, how they work, where you might find them useful in a recording, etc. There are some musical examples to listen to, and we've made a “cheat sheet” for our Micro Digital Reverberator that will help you when you select programs on it.

But first...

Quick Explanation of Reverb

reverb

Highly technical and scientific diagram...

A sound travels out from a source, moving at the speed of sound, which is about 1’/ms (one foot per millisecond). It strikes a surface, like a wall or a cliff, bounces off of it and comes back to our ear, still moving at the speed of sound. If the wall was 20’ away, it would take about 20ms for the sound to travel to the wall, and then another 20ms to travel back. The total time of the echo would be 40ms. If the wall absorbed a bunch of the sound’s high end, the echo would sound less bright.
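That arithmetic is simple enough to sketch, using the rough 1 foot per millisecond rule of thumb from above (the real speed of sound is a bit faster, around 1.1 feet per millisecond, but the round number is easier to do in your head):

```python
SPEED_FT_PER_MS = 1.0  # rough rule of thumb: sound travels ~1 foot per millisecond

def echo_delay_ms(distance_to_wall_ft):
    """Round-trip time for an echo: out to the surface and back."""
    return 2 * distance_to_wall_ft / SPEED_FT_PER_MS

print(echo_delay_ms(20))  # the 20-foot wall from above: 40 ms
```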

Reverb isn’t one sound wave bouncing off one surface. It’s sound waves bouncing off floors, ceilings, walls, tables, chairs, people, etc. Reverb is thousands, even millions of echoes that happen over a period of time. Rather than hearing a distinct, clear echo, we hear a wash of sound that gradually decays over time as the sound waves are absorbed by the surfaces off of which they bounce. The frequency response of the reverb is caused by the absorption of different frequencies by the surfaces of a space. A big wood room with carpets will absorb fairly evenly, with the highs being absorbed the most. A small tiled room will tend to sound bright because more high frequencies are reflected by the tile rather than absorbed.

You can download our Cheat Sheet here. And here’s a quick thing to try: Get a vocal track, put an MDR on it, and cycle through the programs marked Chamber, then listen for the qualities that are common to all the chamber presets. Do the same for plates, and then for springs, then for live spaces. You'll teach yourself to hear the differences, and then hearing reverb types on recordings, and making choices for your own mixes, will be a lot easier.

Alright, enough of that. Onward.

Halls and Rooms - natural reverb

Big concert halls and large rooms work well with sounds that have slow transients, like strings and orchestras. Big rooms can make drums and percussive sorts of parts sound confused and muddy. As room sizes get smaller, they become useful for adding character or thickening. Small rooms can also sound very weird and kinda ugly.

live studio

Studio in the early '70s. Note they’re setting up a mic to pick up the whole room.

The earliest type of reverberation on recordings was caused by the space in which the recording was made. If an orchestra was recorded in a concert hall, the sound of the reverb of that hall would get recorded as well. Ditto for recording anything in live studio space, a little vocal booth, a stairwell or a bathroom, etc.

Concert halls tend to be warm sounding, without an emphasis on highs, and have decay times over 2 seconds. As a natural space gets smaller, it tends to get brighter and a bit wonky sounding. Concert halls are designed with certain frequency response and decay characteristics, while stairways and bathrooms aren’t designed with any thought of acoustics.

To my ears, most real spaces - halls, live rooms - have a logarithmic decay. That is, the reverb’s energy drops off drastically and then slowly tapers away. It sounds like this diagram looks:

logarithmic decay
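To put rough numbers on that shape: an idealized reverb tail decays exponentially in amplitude, which is a straight line in decibels, and that gives you exactly the "drops fast, then lingers" curve in the diagram. Here's a toy Python sketch using the RT60 convention (the time it takes the reverb to fall 60 dB); the numbers are made up:

```python
def decay_db(t_ms, rt60_ms):
    """Level of an idealized reverb tail in dB: falls 60 dB over RT60."""
    return -60.0 * t_ms / rt60_ms

def decay_amplitude(t_ms, rt60_ms):
    """Same tail as a linear amplitude (1.0 = the initial level)."""
    return 10 ** (decay_db(t_ms, rt60_ms) / 20)

# A hall with a 2-second decay: the level plunges early, then trails off
print(decay_amplitude(200, 2000))   # about 0.5 after just 200 ms
print(decay_amplitude(1000, 2000))  # about 0.03 halfway through the tail
```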

There is almost always a hint of the room on any recording made with a microphone, and sometimes that hint is quite pleasant, and sometimes it sucks. A little bit of room on an instrument or vocal can add a subtle doubling effect and make the instrument sound bigger (check out this article here). A room with a lot of character — a bathroom, a hallway, can make a part stand out.

Here’s an orchestral recording of some Iron Maiden. Note that the drums are boxed in with plexiglass. Live drums in a highly reverberant concert hall would be rather unintelligible.

Another thing to listen to: this Cowboy Junkies’ track was recorded basically live in a church around a stereo mic.

Reverb Chambers

Chambers are usually bright and have a rhythmic, repetitive quality to the decay. Chamber reverb is a classic sound on vocals, and putting chambers all over a recording will impart a vintage quality.

abbey road echo chamber

Abbey Road reverb chamber. Note that it’s pretty gross and ventilation and plumbing runs through it. Where’s that dehumidifier?? Where’s Ringo?

In 1947, Bill Putnam put a speaker and microphones in a large bathroom at Universal Recording Studios. He fed some of the session he was working on (the Harmonicats’ “Peg O’ My Heart”) into the speaker, picked up the echoes and reflections bouncing around the bathroom with the microphones, and fed that back into his mix. Other studios followed, and soon many studio complexes were converting storage rooms or adding on spaces to make reverb chambers.

Reverb chambers sound somewhat like a concert hall, but much less natural because they’re typically much, much smaller. To get a longer decay time, reverb chambers were treated to be very reflective, which resulted in decays that are unnaturally bright and have a strange quality. Chambers sound like they’re decaying in sections. Think of “chunks” of reverb that get quieter over time, sort of like echoes, but reverb. To me, chamber reverbs have a “cannoning” sound to them — the reverb is thumpy.

Reverb/Echo chambers seem to have a cyclical, sort of “stop and start” decay curve. Visualize it this way:

cyclical decay

Motown, the Beatles, everything coming out of Capitol Studios in Los Angeles, and just about everything recorded by a major studio through the 1950s into the early '70s, has reverb from a chamber on it.

Chambers sound lush and articulate, and are great for vocals and adding mood. They can sound big like a hall, but have better definition. A hint of a chamber on a part can add a tangible sense of space. A lot of chamber reverb sounds otherworldly and imparts a lot of mood.

Here’s a great example of chamber reverb: The Flamingos' I Only Have Eyes for You. This recording, done in 1959 live in the studio at Bell Sound Studios in NYC, is groundbreaking in so many ways: it’s one of the first times a record was produced to deliberately have a mood and not just be a documentation of a performance.

Reverb chambers, though, have problems. Often studios built them in basements, and that required constantly running dehumidifiers to keep them from filling up with water. And depending on the studio’s location, having a big, basically empty room was stupid from a real estate perspective. What studio in a major city can afford the square foot cost of a reverb chamber these days? As real estate prices went up, many chambers wound up converted into studios or office spaces.

Luckily, in the late 1950’s, some smaller solutions to the problem of reverb became available.

Plate Reverb

Plate reverb is luscious and rich. It’s very smooth, with a very even, almost linear decay. It’s the sound of vocals from the mid 1970's right up through now, but it is best used sparingly, to highlight elements of your mix. Too much everywhere makes a mess.

plate reverb

The guts of an EMT plate reverb.

Plate reverbs were usually made from a large plate of steel. A driver (basically a speaker) was screwed in somewhere around the center of the plate, and then two pickups (basically microphones) were screwed into the left and right sides of the plate (stereo!) towards the edges. Instead of the sound waves bouncing around a room, they bounce around the plate, and the result sounds like reverb. The decay time is controlled by damping the plate (imagine holding a huge pillow against it).

The first commercial plate reverb was developed by a German company, Elektromesstechnik. They released the EMT-140 in 1957. It was a monster—8 feet long and 600 pounds, and it wasn’t cheap.... but it was smaller than a reverb chamber and MUCH cheaper than building a room.

Plates became increasingly common through the 1960’s and into the '70s, and it wasn’t really until the advent of digital reverb units in the 1980’s that plate reverb began to fall out of favor. If a record was made from 1966 'til 1985, there’s a good chance there’s a real plate reverb on it.

Plate reverb sounds thick, lush, and has a very even decay time. The frequency response is similar to that of a chamber — a little unnaturally bright—but without the repetitive, segmented decay of a chamber.

Try to visualize a plate reverb in a way similar to this diagram: the reverb trails with energy distributed more or less evenly across its decay:

linear decay

Plates are often used on vocals, especially lead vocals that need to pop out in the mix. When I was coming up through studios in the '80s and '90s it was common to put a plate reverb on the snare, even if the drums were cut in a big live room. The even decay and “thickness” of a plate reverb is very flattering.

Plate reverbs, too, have their problems. They are big and get in the way — even a small plate reverb is as big as a folding table on its side. You couldn’t have a plate reverb in a control room, not only because of its size, but also because it could start to feed back during the recording session! Studios put plates in the basement, or some other isolated room, and there had to be a remote control, yada yada, but it was sort of a pain in the ass anyway. Elektromesstechnik (EMT) developed the EMT-240, which was much smaller, and used a small piece of gold as a plate. About as big as a large PC, the EMT-240 didn’t require isolation to prevent feedback, and had a warmer character to its sound. Plates are mechanical, and mechanical things wear out and break, and once digital reverb units came onto the scene, plates faded. A few companies still make new plate units, but most of what is available today is either three decades old or a plug-in.

This record of Sister Golden Hair by America has a really nice, sparse use of plate reverb on it. Most of the recording is dry, but you can hear a lot of plate on the slide guitar and backing vocals, and just a touch of it on the lead vocals. This is an impeccable production, by George Martin, of a simply terrific song.

Spring Reverb

Spring reverb is bright and artificial sounding. It’s more of an effect than ambience. Top of the line spring reverbs can be quite beautiful sounding; cheaper units sound strange and “boingy.” When you don’t quite know what something needs, put a spring reverb on it.

spring reverb

The guts of a typical spring reverb.

Spring reverbs use a mechanical system similar to that used in a plate reverb, but instead of a big piece of steel, there are springs. A spring unit is smaller than a plate, and much cheaper, although really good studio quality spring reverb systems, like an AKG BX-20, were, and still are, pretty pricey.

Bell Labs originally patented the spring reverb as a way of simulating delays that would occur over telephone lines. The first musical application was in the 1930's, when spring reverbs began appearing in Hammond organs. Spring reverbs can be made cheap and small, and have been built into guitar amps since the 1950's. Spring reverb on a guitar is an utterly recognizable sound to the point of cliché.

Top quality spring reverbs, like the aforementioned AKG BX-20, are found in studios, but they didn’t replace plates, although they do have similar sonic qualities. An expensive, well-designed spring has a similar frequency response to a plate, but a more jumpy, inconsistent decay. Good spring reverbs impart a “halo” to an instrument. On a vocal, a spring doesn’t really sound like reverb, but rather, it sounds like an effect. I tend to think of spring reverb more as an effect than a means of adding ambience.

Amy Winehouse records used a lot of spring reverb sounds, to evoke a vintage, early 1960’s sort of vibe. I picked her recording of Round Midnight to give you a good example of a spring reverb on a voice. Notice the shimmering halo that surrounds the lead vocal— that’s a spring reverb. And listen for the level of the reverb changing during the mix, accentuating certain parts of the song.

Cheap spring reverbs sound boingy, but even that can be a useful effect. Rather than presenting some sort of cliché surf guitar as a reference for this sound, here’s a bit of madness from King Tubby. This is crazy stuff, with spring reverbs on drums and vocals, runaway tape delay all over the place, noises, distortion and slams, weird EQ’ing and filtering. If you’ve not listened to King Tubby.... jeez! Go listen to King Tubby!

We Have Reached the End of the Decay Time

So, some ideas, some patches, some tech stuff. I’ll cover what digital reverbs do in the future (really, what digital reverbs usually do is simulate all the reverbs I've described above). Remember, everything written above (except for the facts of how the various types of reverbs are constructed), is just a guideline. There are no commandments here. Use your ear, listen to recordings, and experiment.

Nailing the reverb and ambience on lead vocals can be really tricky. This week, we’re going to show you a method for doing vocal ‘verb that’s easy, basically foolproof, and will work for music of any genre. AND we’re going to show you a nifty vocal reverb trick that you can use to highlight a specific section of a song.

Three Reverbs on a Vocal

The basis of this method of getting vocal reverb is similar to that which we use on snare - read Dan’s Snare Trick blog post from last week if you missed it. We will be using three different reverbs. The first will add thickness and presence to the vocal, the second will place the vocal in an acoustic space, and the third is a special reverb, which you can use to highlight the vocal in specific sections of the song.

ONE: Thick and Present

First, instantiate a Micro Digital Reverberator on the vocal channel’s insert, after all the other processing you’ve got happening (EQ, compression, etc.). Regardless of what MDR program you use, turn DRY fully clockwise and lower WET to around 50%. You’ll adjust WET more later.

vocal presense

Settings for Thick and Present vocal reverb

For a program, you’re looking for a small room that will add texture and thickness to the vocal but not really reverb.

These are the programs we like to use. We tend to choose one that is the opposite of the voice — if it is a dark, bass voice we choose a brighter program. For a bright or higher voice, try one of the darker settings. If you don’t know what to pick, just use Machine 1 Small 1 — it always works great for this.

Machine 1 Small 1
Machine 2 01 Small Bright .1 SEC
Machine 2 02 Small Bright .2 SEC
Machine 2 03 Small Bright .3 SEC
Machine 2 05 Medium Bright .6 SEC
Machine 2 09 Medium Dark .5 SEC

With reverb times above 300 ms (.3 seconds or higher), beware of setting WET too high — it can sound like a bathroom. Typically, we wind up using Machine 1 Small 1 or Machine 2 09.

Most vocal tracks are recorded in mono, but at this stage, switch the channel’s output to stereo—you’ll see why in a moment.

Press the Korneff nameplate to pivot around to the back of the MDR, find the WIDTH trimpot and set it to 50% or lower. Because you’ve switched your mono vocal track into stereo, the stereo width control will have the effect of widening the voice a little bit. You can crank it all the way up to 200%, but depending on the song this might be a bit distracting. This is one of the settings you’ll be messing with later in your mix as you add in instruments, etc.

So, now your vocal should be a little bit bigger and commanding more attention in your mix, but it won’t be louder or processed sounding.

Quick trick here: crank up the INPUT gain to drive the MDR a little bit. This will get you a cool, slightly grainy saturation. Be sure you turn the OUTPUT gain down otherwise you’ll digitally clip the channel, and that will sound like ass.

TWO: Reverb and Ambience

The vocal, at this stage, is probably too dry to sound polished and professional, so we want to add a reverb effect that we can really hear and recognize as reverb. We’ll do this using an effects send.

Set up a send from the vocal channel, and put an MDR on the insert of the Return. Set the DRY to 0% and the WET to 100%—this is the way the MDR initially loads in.

Some vocal reverb choices

There are a TON of possible reverb programs on the MDR to choose from at this point, so a lot of what you pick will come down to taste. We typically use medium and smaller-sounding rooms on vocals when songs are fast, and bigger rooms, plates, and halls when songs are slower. These are the settings that we keep going back to all the time:

Machine 2 13 Large Warm 1.1 SEC is a gorgeous reverb and generally where I start. If it is too bright, I look for something darker; if it is too big, I look for something smaller, etc. This particular program blends really well within a full mix, and it adds polish without making things sound “reverby” like a record from the early ’60s.

Machine 2 22 Large Warm 1.75 SEC sounds like a vintage echo chamber and is very musical and rhythmic on a vocal. Great for ballads and things like that.

Machine 1 Large 1 always sounds good, but it might be too much for some music.

Machine 1 Small 5 works great on vocals with lots of short words when intelligibility is needed.

Machine 1 Small 6 This isn’t a room, it is a smooth and dark plate reverb effect, and it can be way too much. We like to use this but throw it way back in the mix so you only really hear it in the gaps of the other instruments.

THREE: A Special

During your mix, you’ll probably have some moments where you want the vocal to jump out and really call attention to itself. For that, we’re going to use a special.

Sibilance - your enemy, your little pal...

Generally, on vocals, we try to get control of sibilance. Sibilant frequencies are in the 5kHz area, and they add intelligibility to speech and singing. They are the frequencies generated by consonants. As people get older, these frequencies get harder to hear, which is why you have to repeat yourself and speak very clearly around grandma and grandpa (especially if they were in a punk band when they were younger). Too much sibilance on a record, though, sounds hissy and spitty. It’s caused by sounds like S and T overloading a microphone or a preamp somewhere. Usually, we don’t want sibilance. However, this trick is all about generating sibilance.

Set up yet another effects send from your vocal channel. Crank up the send level a bit. On the return channel, add channel EQ or a High Pass Filter, and follow it with yet another MDR on that insert. Set it to 100% WET, 0% DRY.

Dan loves the Vocal Whisper preset on the Lexicon 480L unit, so we’re going to sort of rip that sound off a bit.

EQ and reverb settings for the Vocal Whispers effect.

On the channel EQ you’ve got before the MDR, put a high pass filter at about 10kHz and roll off everything below it. This will prevent almost anything other than high-pitched vocal sounds from getting to the MDR.
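If you’re curious what a filter like that actually lets through, here’s a quick sketch (generic Python/SciPy, not the plug-in’s or your DAW EQ’s actual filter, and the test signals are made up):

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48_000  # sample rate in Hz

# A generic 4th-order Butterworth high-pass at 10 kHz -- roughly the job the
# channel EQ is doing in front of the MDR here.
sos = butter(4, 10_000, btype="highpass", fs=fs, output="sos")

t = np.arange(fs) / fs
vowel = np.sin(2 * np.pi * 200 * t)              # loud, low "vocal" energy
sibilant = 0.1 * np.sin(2 * np.pi * 12_000 * t)  # quiet "S"-like energy

out = sosfilt(sos, vowel + sibilant)

# After the filter, the 200 Hz vowel is essentially gone; only the sibilant
# energy is left to feed the reverb.
rms = np.sqrt(np.mean(out[fs // 2:] ** 2))
print(rms)
```

Only the sibilant component survives, which is why the reverb ends up reacting to S’s and T’s and pretty much nothing else.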

On the MDR, set it to Machine 2 50 Multitap Reverse. Flip around to the back panel of the MDR and set DAMPING to -1.6dB, LPF to 13.2kHz and WIDTH to 170% (you can go higher).

As you play your mix, you’ll notice that S’s and T’s, and other sibilant consonants, will jump out and kind of sound like ghostly whispers. Adjust the High Pass Filter of the EQ to get more or less of the effect. Dan likes to use this in relatively open areas of the song to create a scary, unsettling mood.

A good way to figure out where you want to use this effect is to put it on the vocal somewhere in the middle of your mix process and listen to the entire mix with it a few times. Generally, there will be certain spots where the effect jumps out. I often just leave stuff like this on all the time so there is a random element happening in the mix to give me ideas.

Some Other Ideas

There are all sorts of fun variations on this you can try. As an example, rather than setting a high pass filter, set a low pass around 400Hz and send all that dark, warm low-end gunk into the MDR. Set the MDR to Machine 2 34 Slow Gate 450 MSEC. If you listen to the effect all by itself, it sounds like a moron singing in the shower, but in the mix it gives the vocal subtle movement and texture, and makes it seem wetter than it actually is. When I do this, I set my other vocal reverbs to something bright so things don’t get muddy.

And of course, try this trick on guitars, synths, etc.

And that is it for this week. Let us know how this all works for you on our Discord or Facebook.

The other day on our Discord channel, Dan Korneff shared a snare drum trick that uses the Korneff Audio Micro Digital Reverberator.

Dan is well-known for his hard-hitting, articulate drum and snare sounds. Not many producers/engineers have snare sounds that are considered iconic, but Dan does—search “Paramore Riot snare” on Gearspace and multiple threads come up.

His MDR snare trick is easy and gives you a lot of control over the size of the snare and how it sits in the stereo field.

What you're going to do is set up two different MDRs. One we’ll call the Fatter, the other we’ll call the Wider. Together, they’ll make your snare fatter and wider. Think of it as drinking beer and sitting around with no exercise all day for the snare.

Make the Snare Fatter

Start by dropping a Micro Digital Reverberator instance on the snare channel, or on the snare bus if you are combining multiple snare sources together. Do this by placing the MDR in an insert location, after any other processing on the channel.

Switch the MDR over to Machine 2, and bring up program 03 Small Bright 0.3 Sec. Set DRY all the way to the right, and bring down WET to around 2:00. You want to pass all of the snare through the MDR, and then add the effect back in.

03 Small Bright 0.3 Sec has a timbre similar to that of a tight drum booth or a small, highly reflective room. The very short time of this patch means that it will sound more like a doubling than a discrete reverb. It will thicken the snare up and make it last a little longer.

Press the Korneff nameplate to switch the MDR to the back side so you can tweak the circuit. Locate the blue trimpot to the left—it’s labeled WIDTH. Turn this all the way counterclockwise to 0%. This will make the wet signal mono, so even if you’ve got a stereo snare track (for some reason), the effect will be confined tightly. This reinforces the snare’s solidity in the sound field.

Settings for FATTER

Make the Snare Wider

To widen out the ambience of the snare, you’re going to set up another instance of the MDR, only rather than using it via a channel insert, you’re going to feed it via an effects send.

Set up a new send on the channel and turn it up to 50% for starters. Find the return, and put the MDR into an insert slot on it. The MDR instance will load and the DRY should be all the way to the left (fully counterclockwise) and the WET should be all the way to the right (fully clockwise). You don’t want any dry signal in an effects return.

A fresh instance of the MDR will come up set to Machine 1, program Large 1. This is not a coincidence—the plug-in’s default is the patch Dan uses the most, and it’s perfect for this application. Large 1 is dark, has a decay time of around .6 seconds, and sounds like a big studio live room that has been damped down to control flutter and ring. It has a noticeable slap to it. To my ear, it has a “cannon” sort of effect on a snare.

Flip around to the back of the MDR and set the WIDTH trimpot to 100%. We want this as wide as possible to wrap that ambience around us a bit.

Settings for WIDER

Adjust Adjust Adjust!

Depending on the genre of music and the density of your mix, you’ll have to adjust the settings a bit, and this is where the PRE-DELAY control on the front comes in really handy.

PRE-DELAY delays the onset of the effect. At low settings it is hard to hear, and as you turn it up you’ll increase the separation between the dry signal and the wet signal. Below 40ms, the dry and the wet will sound cohesive, but once you get higher than 40ms, the two signals will separate and your ear will hear them as two clearly different events. Very high settings will give a slapback-like effect and start to add an additional rhythmic component to your mix. You may or may not want that.

On the Fatter MDR, the one set to Machine 2, as you adjust the PRE-DELAY the sound of the snare will brighten. Turn up the WET control to increase the size of the snare. To my ear, turning up WET seems to increase how hard the drummer is hitting. Seriously, even if you’re not using the Wider portion of this trick, it’s worth slapping an MDR with this patch on your snare every time. In a weird way it is doing the work of both a compressor and an EQ, but in a much subtler way.

PRE-DELAY and the level of the return on the Wider MDR (the one set to Machine 1) have a lot of effect on the location of the snare from front to back in your stereo space. By messing with level and PRE-DELAY you can “move” the snare back or forward in relation to the rest of the drum set. Too much effect can make the snare sound like it was recorded in a completely different space than the other drums, which might be an effect you’re going for. I usually want my drums to sound integrated and cohesive, so I’ll set Wider so that the snare plays nicely with the rest of the mix.

Feel free to use different programs on the MDR and experiment a bit. In general, though, the trick works best with smaller spaces on the Fatter, and larger spaces on the Wider.

Some Extra Ideas

I usually feed a little of the hi-hat into the Wider using the effects send. Huge, distant snares with small, dry hi-hats sound goofy to me. I like these two instruments appearing like they at least know each other and not like they’re meeting each other for the first time on a Tinder date.

You might want to think about automating the Wider return. In general, you should be automating your reverb returns, almost “riding” them like any other instrument in your mix. In spots where you want the snare to be prominent, ride the return up, in places where things need to be tightened and tucked in a bit, pull the return down. I especially like bringing reverbs up during fade-outs, so it sounds like the song is fading away into space rather than simply getting quieter.

See You On Discord

This blog was inspired by a question that came up on our Discord channel. Thanks to Mistawalk! Dan and I monitor our Discord, and we definitely answer questions and give out ideas and tricks all the time on it, so hit up our Discord. And our Facebook. We’ll help you out however we can.

 

Thoughts on Reverb by Dan Korneff

Reverb is probably the most often used effect in modern recordings. When you think about it, the ability to transform the space in which our instruments exist with a couple clicks of a button is pretty mind blowing. As with any element in the recording process, the use (or misuse) of reverb is completely subjective, and your only limitation is your imagination.

In my world, reverb serves two completely different, and equally important, purposes. The first use is a very practical approach. Whether you realize it or not, EVERYTHING you hear exists in some kind of space. When you’re having a conversation with someone, their voice sounds different in a hallway than it does in a closet. Even if the closet is really small, you still hear some type of ambience. You not only hear the direct sound of a source, but you also experience the ambience of the environment.

Since most modern engineers spend a good amount of time isolating instruments (close mics with tons of baffles) and removing the environment from their tracks (ever use a reflection filter on your vocal mic?), the very first thing I do, especially on vocals, is insert a reverb on the channel and create an ambient “space”. I’m not talking about slapping a 3 second reverb on everything. It’s going to be something short — .1 to .3 seconds. Just a touch of something to make the track sound like it’s not hovering in the center of an anechoic chamber. Since I’m trying to recreate a natural space, it only seems fitting to use a more “natural” sounding reverb. One of my favorite settings is the MDR Machine 2 on Preset 1. It’s small and bright, and a tiny bit just fits right in for me.

small bright 1
Dan’s fave setting for adding a little bit of natural space to anything.

Give these shorter reverbs a try on some tracks. You might just be surprised by how quickly your mix starts becoming bigger and better sounding.

The second approach is way less practical, and a LOT more fun! I grew up in the ’80s, when EVERYTHING was bigger. It wasn’t just the size of your AquaNet-soaked hair at school, it was reverb too! Everything was drenched in it. You’d have to send a rescue dog on a daily basis to help find your favorite singer at the bottom of a well. Every snare sounded like a punching bag, exploding from the speakers. What's not fun about that??

The impractical use of unnatural sounding spaces can lead to some really unique sounds that are so odd you’ll want to hear them over and over again. Using a setting like Machine 1 – Large 7 on a sparse guitar performance, or a short percussive vocal hook might just be that over-the-top decay you need to make the track stand out. Exaggerate your snare with the exploding ambiance of Machine 1 – Large 3. Add a little texture to a vocal with Machine 2 – Program 49.

settings for hooks
Handy settings for textural guitar parts or short, percussive vocal parts.
settings for snares
Try this for a huge snare sound reminiscent of the 80s.
Add life and texture to a vocal with this setting. Adjust Pre-Delay to exaggerate the effect.

If you’ve been behind the console for 25 years like me, reverb is not a new concept. Using these units for their practical purpose can be a thankless job, but it’s necessary to bring extra realism to your music. But don’t deny yourself the fun of creating wildly unnatural or super exaggerated moments in your tracks whenever you can. Reverb might just become fun again for you.

Last of this series on compressors. Next week we move onto something new... Who knows what it might be!

The Release is the hardest parameter on a compressor to set. It can be hard to hear the effect of release—depending on the other settings of a compressor it can almost be impossible. It’s also really difficult to explain what it is and how to set it. In fact, you might want to just skip all this written stuff and go to the video I made here. That might be more helpful. This was a hard post to write.

But it’s an important one because once you understand release and have an idea of how to find good settings for it, it becomes your bestest buddy in dynamic processor land.

Understanding Release

Another name for release is recovery time. Another way to think of it: how long it takes the compressor to recover to zero gain reduction.

Release is how quickly the compressor stops compressing once the signal falls below threshold. Think of attack as delaying when the gain reduction STARTS and think of release as delaying when the gain reduction STOPS. Attack lets the transient get through BEFORE gain reduction happens. Release keeps the gain reduction in longer—PAST when it should have stopped.

Yet another way to think of it: the signal goes OVER threshold and the compressor kicks in, the signal goes BELOW threshold and the compressor kicks out. Release can make the compressor stay kicked in even though the signal is below threshold.

Another way—back to the dog analogy. You decide that when the dog goes past 10 feet, you’re going to pull him back. A short release would mean that once the dog returned to 10 feet, you would stop pulling on him. A long release means that even though the dog has returned to 10 feet, you still keep pulling on him.

Maybe this will help...

Release 1
Release 2
Release 3

With a fast release, the compressor stops the moment the signal goes below threshold. With a slow release, it keeps compressing for a period of time even though the signal is below threshold, then gradually stops. How long does it keep compressing, you ask? However long the release time is set for.
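If you like seeing that as math, here’s a toy sketch of the recovery (a hypothetical one-pole release curve; real compressors vary in their exact shapes and these numbers are made up):

```python
import math

def gain_reduction_db(t_since_below, release_s, gr_db=6.0):
    """Gain reduction still applied t_since_below seconds after the signal
    falls below threshold, starting from 6 dB of reduction, for a given
    release time. Exponential recovery toward 0 dB (no reduction)."""
    return gr_db * math.exp(-t_since_below / release_s)

# 50 ms after the signal drops below threshold:
fast = gain_reduction_db(0.050, release_s=0.050)  # fast release: mostly recovered
slow = gain_reduction_db(0.050, release_s=0.500)  # slow release: still squashing

print(round(fast, 2), round(slow, 2))  # → 2.21 5.43
```

Same moment in time, same signal: the fast release has already let go of most of its 6 dB of gain reduction, while the slow release is still pulling the signal down almost as hard as it was at the start.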

How Long Do You Set the Release For??

Many compressors have automatic releases—the LA-2A, dbx 160s, etc. A very basic explanation of auto release: the louder the signal and the faster its transient, the shorter the release will be on an auto release compressor. This isn’t a bad way to think when you’re manually setting release times. If you’re dealing with fast transients, you’ll tend to set the release shorter. When it's a slow transient, you’ll tend to set the release longer.

Compressing drums, you’ll set release to a short time. Compressing vocals, probably a bit longer. But it isn’t that simple.

Depending on how you set the release, you can bring out the little details of a signal—such as the breaths of a singer between phrases, the ring and resonance of drums, the resonances of a guitar or bass, or the reverb and echos of a room.

All audio signals are a mix of loud stuff and quiet stuff. The quiet stuff is usually covered up by the loud stuff, and when the loud stuff goes away, the quiet stuff has a better chance of being heard.

If we whack a snare drum in a room, the echo and reverb of the room are much quieter than the initial hit of the snare. If we compress the hit of the snare and we have a fast release set, the compressor pushes down the loud snare hit and then the overall signal is brought up by the makeup gain, so essentially, we have made the quiet things louder.

Release 4

 

Release 5

 

Release 6

 

This works on everything. Shorter releases bring up the quiet stuff. Now, it might not be apparent when you’re using a short release on a compressor on the stereo bus because the waveform is so complex. You probably won’t be able to do much with a short release on things like strings and keyboard pads.

Setting the release longer pushes those quiet things down and really long settings tend to make the entire track get quieter and less lively. On vocals, and on instruments that you want to have a more natural quality, you’ll tend to use longer release times.

Release 7

 

Release 8

 

Release 9

I made a video showing exactly how release time affects the quiet stuff on a recording, as well as using the Pawn Shop Comp on vocals (and how to fake an LA-2A type sound).

Some Ideas on Setting Release

On drums and things of a percussive nature, setting release is pretty simple, and with something like our Talkback Limiter, the release is fixed at "short as hell," which makes the plugin rather perfect for working with drums.

On sounds that are not as percussive, release can be a lot trickier to set. It can be hard to hear the effect it is having on the signal, especially as things get more and more complex.

A way to nail the release time consistently

I tend to sweep around with the release a bit, often overshooting with too-long and too-short release times to “acclimate” my ear to the effect the release is having, and then home in on a setting that works.

Watch your meter when you’re setting release. Its movement should correlate to what you’re hearing. Fast, percussive music should have that meter jumping. The meter should move much more slowly on slower, less percussive tracks and music.

The video below is the same as the one I linked to up top. It's long, but it’s really thorough. I cover release on a bunch of different instruments and then on a compressor across a mix.

Lots of compression with short releases will always sound very “effecty,” like a Black Keys or a Radiohead record, and this is easy to do. Getting a compressor set so that they’re invisible in the track is much, much harder.

This post has covered a lot of ground. The key to release is to control that quiet stuff after the main part of the signal, and to watch that meter!

The next few weeks of Working Studio are going to be about compressors and limiters. It's appropriate—we just released the Pawn Shop Comp 2.0 as well as the Talkback Limiter a month ago.

By the end of this series, you will have a much better idea of how to apply compressors and limiters on your recordings. What you want to develop is a framework for how to think about dynamics and dynamic processors, so your thinking on using them is clear.

This is a long one and it has recording techniques and some HOW TO videos in it. Forward ho!

A compressor... a limiter... the differences between the two... This is all hard to define. I learned in school that a limiter was a compressor with a compression ratio at 10:1 or higher, that a limiter had a fast attack and release. But then, in the real studio, engineers were always saying, “Compress it,” and then patching in a piece of gear that had the word Limiter in its name. What the hell?

I think of compression and limiting as verbs, not nouns. Not as things or gear. Compression, compressing, limiting—these are things you do to an audio signal.

Time for an analogy.

Let’s talk about walking a dog.

Dog

You’re walking your dog and he runs around like a f***ing maniac. He knocks over people, trips you up, almost gets hit by cars, it sucks. Obviously, you have to keep the dog contained somehow.

You get a piece of strong rope about 20 feet long. You tie it to the dog and off the two of you go. He runs around all he wants until he gets 20 feet away from you and BAM! The rope kicks in, and if you’re strong enough, the dog stops at the end of it. This is LIMITING.

If you are really strong, or you tie the rope to a tree, and the dog runs 20 feet, then he is effectively stopped cold at 20 feet—like hitting a brick wall. If you’re not really strong, when the dog runs 20 feet and hits the end of the rope, you’ll be yanked a bit until you dig in your heels and fight the pull of the dog.

The threshold is 20 feet. The ratio is how strong you are. Big strong guy is 100:1. The dog is stopped with the force of a brick wall. Old grandpa fella is 2:1. The dog goes 20 feet and keeps going, slowed down somewhat by the dragging body of the old guy tangled up in his walker.

You decide that almost breaking the dog’s neck whenever he gets 20 feet away is cruel. What you want is the dog to stay about 10 feet from you. He can come closer, he can go further, like when there is something interesting for him to sniff, but for the most part he bounces around about 10 feet away from you. Sometimes 7 feet, sometimes 13 feet, 2 feet, 16 feet, etc. He feels a fairly constant, gentle pressure. You chuck the rope, get a bungee cord, cut it about 10 feet long, attach it to the dog, and off you go.

When the dog is within 10 feet, the cord doesn’t apply any pressure. If the dog goes out past 10 feet, the cord applies some resistance to restrict the dog's movement. Can the dog go out to 20 feet and beyond? Sure can—but there will be pressure on the bungee pulling back on him. This is COMPRESSING.

The threshold is 10 feet. The ratio is how much resistance the bungee cord puts up. A pretty weak bungee cord is 2:1. For each 2 feet the dog wants to go, he only gets to go 1 foot. 5:1 bungee cord? For every 5 feet the dog wants to travel, he only gets to go 1 foot.

If the dog wanders away from you from 5 feet to 15 feet or so, you can see he has bungee pressure on him a lot, gently restricting his movement. The bungee only works when the dog goes further than 10 feet from you—only when he passes the threshold. The stronger the bungee, the less gentle the pressure is. At 2:1, the dog has to have double the energy to go one foot. At 5:1, the dog needs 5 times the energy to go 1 foot.
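The bungee math above, sketched in code (a hypothetical hard-knee curve, with dB standing in for feet; the -20 dB threshold is just an example number):

```python
def compressed_level_db(in_db, threshold_db, ratio):
    """Static compression curve: below threshold the signal passes untouched;
    above it, every `ratio` dB of input past threshold comes out as 1 dB."""
    if in_db <= threshold_db:
        return in_db
    return threshold_db + (in_db - threshold_db) / ratio

# The "dog" tries to go 10 dB past a -20 dB threshold:
print(compressed_level_db(-10.0, threshold_db=-20.0, ratio=2))    # weak bungee: -15.0
print(compressed_level_db(-10.0, threshold_db=-20.0, ratio=100))  # brick wall: -19.9
print(compressed_level_db(-30.0, threshold_db=-20.0, ratio=100))  # under threshold: -30.0
```

At 2:1 the signal still gets 5 dB past the threshold; at 100:1 it barely gets anywhere; and below the threshold, nothing happens at all.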

Do you see how limiting and compressing are similar and how they’re different, as well as how there is a bit of blur between the two?

Limiting is when you keep a signal from going past a certain point.

Compressing is when you restrict the overall movement of a signal.

A signal that is limited occasionally goes over the threshold.

A signal that is compressed is over the threshold often.

Limiting is applying a lot of restriction to the signal occasionally.

Compressing is applying some restriction to the signal often.

So let’s apply this to some audio problems.

Problem: Drummer is doing lots of fancy ghost notes and they sound really cool, but in the mix you can’t really hear them.

Ok, let’s think about this for a moment. Here is what the signal looks like to me:

Ghost 1

And when we add the rest of the mix, it covers up the ghost notes.

Ghost 1

SO... do we want the compressor on a lot, or a little?

Only a little, right? We want it to restrict the hard hits so that when we use makeup gain, the level of the ghost notes comes up.

Do we want the signal usually over the threshold or occasionally over the threshold?

We want it occasionally over the threshold—when the snare is hit really hard.

So this application is LIMITING. The gain reduction happens occasionally for maybe a split second—like a rope. I visualize the whole thing happening like this:

Ghost 1

Ghost 1

Ghost 1

I made a video on this, using our Talkback Limiter.


Problem: Bass player is really inconsistent in volume and the part doesn’t sit right in the mix.

Think about it... it’s sort of like the dog is running around too much, and we just want to restrict that movement. Do we need a rope or a bungie cord? Do we want the signal occasionally going over the threshold or often going over the threshold?

Bass 1

Bass 1

This application is COMPRESSING. The gain reduction happens often, almost continuously, like a bungee. Again, I see it in my head like this:

Bass 1


Bass 1

Bass 1


Bass 1

I made a video for this, too, using our Pawn Shop Comp 2.0.

Questions and Closing Thoughts

You might be thinking, “Ok, so this vocal track is all over the place, and I can’t control it at 6:1 no matter how low I set the threshold, but I can at 12:1. Is that compressing or limiting?”

I’d probably call that compressing, but who cares? What is important is that you understand what you have to do to get it under control and you have a thought process, you’re not just turning knobs until it sounds good.

You might be thinking, “Ok, so I set the Pawn Shop Comp to 2:1, and the threshold really high, and the meter jumps a little bit every now and again. Is this limiting or compression?”

I’d argue that you’re really not doing much of anything, but if it sounds good then it is good. I’d probably call that limiting. Usually if the unit isn’t kicking in often, I think of it as limiting.

Is the Korneff Audio Talkback Limiter a limiter? Sure is—100:1 ratio and a super fast attack and release. Can you compress with it? Sure. If you run a drum set through it and you drop the threshold and pin the meter you’re definitely compressing it.

Can the Korneff Audio Pawn Shop Comp do limiting? Sure can. Set it with a fast attack and release, set the ratio high and the threshold so that it occasionally kicks in and you’re limiting. The PSC is so adjustable just on the front alone that you can do almost anything with it.

Some other things to think about

A compressor with a limiter also on it is like walking a dog with a 10-foot bungee AND a 20-foot rope. Understand?

Knee? That’s like this: the further the dog goes past the threshold, the stiffer the bungee gets. So, a Soft Knee means that there is little pressure at 10 feet, but by the time the dog is at 20 feet it might be 50:1.
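If you want the knee idea in code, here’s one common textbook-style formulation of a soft knee (a quadratic knee; not any particular unit’s actual curve, and the threshold, ratio, and knee width are example numbers):

```python
def soft_knee_db(x, threshold=-20.0, ratio=4.0, knee=10.0):
    """Static curve with a quadratic soft knee `knee` dB wide, centered on
    the threshold. Compression fades in gradually instead of snapping on."""
    over = x - threshold
    if over <= -knee / 2:      # well under threshold: untouched
        return x
    if over >= knee / 2:       # well over threshold: full ratio applies
        return threshold + over / ratio
    # inside the knee: the effective ratio blends smoothly from 1:1 to 4:1
    return x + (1 / ratio - 1) * (over + knee / 2) ** 2 / (2 * knee)

print(soft_knee_db(-40.0))  # far below the knee: -40.0, unchanged
print(soft_knee_db(-20.0))  # right at threshold: -20.9375, barely squeezed
print(soft_knee_db(0.0))    # far above the knee: -15.0, full 4:1
```

Right at the threshold the dog feels only a gentle tug; it’s not until he’s well past it that the full ratio is leaning on him.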

Some of you might know most of this. That’s awesome. Some of you might think I’m glossing over a lot. Yep.

Some of you are thinking what about Attack? What about Release? What about gluing my mix?? All these questions.... we will get to that stuff in the next few weeks.

For now, listen to the track, the instrument, the vocal, whatever it is that needs fixing, and think about whether you need to change it (or control it) all the time, or only occasionally in certain moments.

Update from Last Week

Last week I wrote about labeling stuff.

Yesterday, my Mac decided to act up and crash out of Logic whenever I tried to open the program. Pain in my a**. I rebooted a few times, ran a disk scan, and yada yada yada. No luck.

I decided to unplug all my peripherals to see if something external was causing the problem. I suppose all of us have a lot of peripherals. What a mess! It’s like the snakes invited the worms over and served spaghetti.

Messy

And it struck me that now would be a WONDERFUL TIME TO PROPERLY LABEL EVERYTHING. Which is exactly what I did.

Messy

Much better.