
Quick review: a passive EQ uses resistors, capacitors, and inductors to change the frequency response of a signal—a filter. Filtering causes an overall loss of level, so an amplifier follows the passive filter to bring things back up.

What would an Active EQ be? A circuit that doesn’t require the boost of gain after the filter—instead, some sort of amplification is built into the filter itself.

First, we need to know what an amplifier is.

An amplifier takes a small signal and makes it bigger. Picture a bunch of cars on a highway. I’m driving on another road. When I speed up or slow down or wiggle my car, ALL the cars on the highway do the same thing as my car. My one car isn’t powering the other cars, but it is controlling the other cars—modulating them. If I “connect” my car to a larger highway, I can then control more cars.

How does my car control the other cars on the highway? The road my car is on changes the properties of the highway the other cars are on. If I drive fast, my road makes the highway faster, if I go right, my road bends right and makes the highway bend right.

Let’s call the road my car is on the Control Road. And let’s call the highway The Power Road. Control Road modulates and changes the Power Road.

If I route the car through an obstacle course and keep changing the way the car is facing and turning, speeding up and slowing down, etc, and I put that car on the Control Road, the cars on the Power Road will do the same things my car is doing.

That obstacle course is a filter. If I run a signal through a filter and change it, then put that on the Control Road, then the Power Road behaves in accordance with the physics of the filter.

The roads are paths for cars to follow. The cars are signals. So now we have a Signal coming out of the filter and going into the control, which varies the power.

So, the basic construction of an Amplifier is that there’s going to be a path that we can run a lot of cars down—energy. There’s got to be a way to get on that path—we’ll call it the input, and a way to get off of it—we’ll call that the output. And there’s a place to stick a control signal in.

The control signal modulates the output of the amplifier. The control signal doesn’t necessarily directly interact with the power, or the output; what it does is change the properties of the amplifier, which varies the output. Does that make sense to you?

All amplifiers are going to work this way: a smaller signal controls a larger one. What will be different about amplifiers is the technology used to get a bunch of power in a situation where it can be varied (modulated) by a control signal.

In audio, the two technologies most used are Vacuum Tubes and Transistors. They both do the same thing in terms of the end result, but they go about doing it a bit differently.

Vacuum Tubes

These are really easy to understand. Inside a tube are a Cathode and a Plate. They aren’t physically touching each other, but electrons form on the cathode and jump across to the plate, creating an electric field and a magnetic field. Call it an EM field. In between the cathode and the plate, we stick something called a Grid. The control signal feeds the grid, and the grid interferes with the electric field such that the field now “looks” like the control signal, only it’s a lot more powerful.

How does the grid interfere?

Remember, when we have a current, we generate an electric field—wires transmit using an electric field. A grid is basically a bunch of wires, and when we put signal through it, we get an EM field. The EM field passing from the cathode to the plate interacts with the EM field around the grid. Depending on how they interact, the energy flowing from the cathode to the plate varies.

To get back to cars, imagine the grid is a network of stoplights and cops, all dictating the flow of traffic.

Tubes, however, don't really work well in active EQ situations. They're heavy, hot, unreliable, and noisy, and they require a lot of calibrating and tweaking. But understanding how tubes work will make understanding transistors easier when we get to that next week.

Plug-in Errata

Our Amplified Instrument Processor, the AIP, has a four-band parametric EQ in it. It's based on a Siemens tube equalizer from the 1950s, the RZ062.

Like virtually all tube equalizers, the RZ062 was passive, using inductors and capacitors to create filters and then boosting their outputs back up, similar to how a Pultec works. But the EQ on the AIP is fully parametric. What's going on there?

Dan figured out the math to make an RZ062, and then he experimented and rewrote the equations to make a digital version of what would be incredibly difficult to do: make a continuously adjustable parametric EQ out of capacitors and inductors. To give you an example of how difficult this would be: a two-band passive equalizer would require a specific inductor and capacitor set for each band. How many different inductor/capacitor sets would be needed to make an EQ sweeping from 20Hz to 20kHz? 19,980? Something insane like that?

The AIP's EQ has the frequency response curves of an RZ062, and the saturation characteristics of it, without requiring four air-conditioned barns—one per band—each stuffed with thousands of components.

The AIP EQ, by the way, is gorgeous. It has about the sweetest high-end band of any EQ we've worked with.

Quick question, and feel free to write me, would homework help you to better understand this stuff?

Cut-Off Frequencies

We can make a circuit that lets higher frequencies through using a capacitor. The result is a high-pass shelf—an EQ curve where the low end drops off and the highs stay level.

By adjusting the value of the capacitor and the other elements in the circuit, we can control the frequency where the low end begins to fall away. We call this the cut-off frequency—the point where the shelf starts dipping.

We can also make a circuit that lets lower frequencies through using an inductor. This produces a low-pass shelf, a mirror image of the capacitor version. The lows stay level, and the highs roll off.

And by adjusting the inductor and the components around it, we control the frequency where the high end begins to fall away. This is also called the cut-off frequency—the point where that upper shelf starts dipping.

If we combine those two circuits together, we get one that shelves away the lows and shelves away the highs, leaving only the frequencies in the middle. That is called a band-pass filter.

The distance between the high-pass cut-off frequency (where the lows get removed) and the low-pass cut-off frequency (where the highs get removed) is called the bandwidth. The wider the distance between those two cutoffs, the wider the band of frequencies that passes through. The closer they are, the narrower the band.

A band-pass filter doesn’t really have one cutoff frequency — it has two. There’s a cutoff at the low end, where the high-pass section starts allowing frequencies through, and a cutoff at the high end, where the low-pass section starts blocking them.

The frequency right in the middle of those two cutoffs is the center frequency—that’s the one the band-pass filter passes the strongest.

All of these circuits above work by getting rid of frequencies. No matter which one we use, the audio signal feeding in loses power. Using these circuits by themselves, we can cut the bass, cut the highs, or cut both the bass and the highs, leaving only the middle.
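To put rough numbers on those cut-off frequencies, here's a small sketch using the standard first-order formulas—fc = 1/(2πRC) for the capacitor-based high-pass and fc = R/(2πL) for the inductor-based low-pass. The component values are invented purely for illustration:

```python
import math

def rc_highpass_cutoff(r_ohms, c_farads):
    # Standard first-order formula: fc = 1 / (2 * pi * R * C)
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def rl_lowpass_cutoff(r_ohms, l_henries):
    # Standard first-order formula: fc = R / (2 * pi * L)
    return r_ohms / (2.0 * math.pi * l_henries)

# Hypothetical component values, chosen only for illustration
f_low = rc_highpass_cutoff(10_000, 80e-9)   # ~199 Hz: lows below this fall away
f_high = rl_lowpass_cutoff(10_000, 0.8)     # ~1989 Hz: highs above this fall away

bandwidth = f_high - f_low                  # distance between the two cutoffs
center = (f_low + f_high) / 2.0             # the frequency passed the strongest

print(f"high-pass cutoff: {f_low:.0f} Hz")
print(f"low-pass cutoff:  {f_high:.0f} Hz")
print(f"bandwidth:        {bandwidth:.0f} Hz, centered near {center:.0f} Hz")
```

Change R, C, or L and both cutoffs slide around—which is exactly what the knobs on an EQ are doing under the hood.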

Boosting with a Passive EQ

Now… if we want to make the highs louder — like a high-shelf boost — we can’t do it directly with passive parts, because passive circuits can only cut. So what we do instead is:

Step 1: Use a high-pass shelf to cut the bass, leaving the highs less affected.

Step 2: Then amplify everything that comes out of that filter.

The result is that the highs appear boosted, even though all we did was cut the lows first and then re-amplify the entire signal. That’s how passive EQ “boost” works.

To make the lows louder — a low-shelf boost — we do the same thing but at the opposite end of the spectrum:

Step 1: Use a low-pass shelf to cut the highs, leaving the lows less affected.

Step 2: Then amplify everything that comes out of that filter.

To make an area in the midrange louder, we combine the two shelves:

Step 1: Use a high-pass shelf to cut the bass, leaving the highs less affected.

Step 2: Use a low-pass shelf to cut the highs, leaving the lows less affected.

Step 3: Then amplify everything that comes out of that filter.

What’s left — the band in the middle — ends up louder after the make-up gain. That’s a passive midrange boost, better known perhaps as a Peak Boost.
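In dB terms, the cut-then-amplify trick is just arithmetic. A tiny sketch, with the dB figures invented for illustration:

```python
# Passive "boost" in dB arithmetic: cut one region, then amplify everything.
# All numbers are made up for illustration.

cut_lows_db = -10.0      # passive high-pass shelf pulls the lows down 10 dB
makeup_gain_db = 10.0    # amplifier after the filter raises everything 10 dB

lows_out = cut_lows_db + makeup_gain_db    # -10 + 10 = 0 dB: lows back to nominal
highs_out = 0.0 + makeup_gain_db           #   0 + 10 = +10 dB: highs end up "boosted"

print(f"lows:  {lows_out:+.0f} dB")
print(f"highs: {highs_out:+.0f} dB")
```

Nothing in the passive section ever added level—the apparent boost is entirely the makeup gain applied after the cut.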

We’ve figured out how to make EQ circuits that reduce the power in areas of the frequency spectrum, and we’ve figured out how to make circuits that raise the power in areas of the frequency spectrum.

What if we want to cut some frequencies and boost others at the same time?

Easy: we just combine all of these individual circuits into one larger network.

Everything we want to cut gets its own part of the circuit.

Everything we want to boost gets its own part of the circuit.

In other words, cutting the low end and boosting the low end aren’t two aspects of one function — they’re two completely separate circuits, each designed to shape the signal in its own way. The EQ lets us use them together at the same time.

The reality of these different boosting and cutting circuits is that they aren’t precise opposites of each other. A passive low-shelf cut doesn’t perfectly mirror a passive low-shelf boost. The shapes of the curves depend on the actual circuit design, and they can be very different from one another.

The Pultec Low-End Trick

Most of you know the classic Pultec EQP-1. It’s a passive EQ, built on the principles we’ve been talking about — though in practice, the real circuit is a lot more intricate than our simplified explanations. It can boost and cut at various frequencies, but remember: boosting and cutting live in separate parts of the circuit, so their curves don’t line up like mirror images.

That’s the whole basis for the famous “Pultec Low-End Trick.”

An engineer dials in a low-frequency cut with one knob and a low-frequency boost with another. Because the cut curve and the boost curve are shaped differently, they don’t just cancel each other out. They overlap and interact. The cut provides one shape, the boost a different shape. The resulting curve is a wide shelf boost with a dip above it: a huge amount of lows with a chunk taken out of the low mids—a very common EQ move.

The Pultec has the sound it has not only because of the shapes of its EQ curves, but because there are amplifiers in there providing makeup gain. But it’s even crazier than that.

When we think about a circuit — whether it’s the circuit in a Pultec, a compressor, or an entire console — we tend to picture it like a flowchart:

Input → low-shelf section → high-shelf section → makeup gain amplifier → output

For a compressor we might picture:

Input → detector circuit → gain-reduction circuit → makeup gain amplifier.

This is a convenient way for us to understand signal flow. It’s how we visualize what goes where and in what order.

But that isn’t what’s actually happening physically.

Electricity in a circuit moves at just under the speed of light. Which means that, for all practical purposes, the signal is everywhere in the circuit at once. It’s arriving at the input, being cut, boosted, amplified, filtered, and showing up at the output simultaneously.

And because of this, every part of the circuit affects every other part of the circuit — all the time.

The boost affects the cut affects the makeup gain amplifier affects the input affects the output affects the boost ad infinitum.

Everything pushes and pulls on everything else, all at once.

And because impedance changes with frequency, this interaction is slightly different for every frequency that’s in the circuit.

The overall tone, personality or “sound” of a piece of gear is the total result of all of these interactions happening simultaneously. That’s why no two EQs sound the same, even if they technically perform the same EQ moves on paper, and why we like how some EQs sound for certain tasks in the studio, why one might sound warm, another might sound clinical.

People speak of analog gear as sounding “alive.” This is why. Because the entire thing is reacting to itself at just below the speed of light. If you add microphones in, or an amplifier and a speaker, they become part of the whole, interconnected at almost light speed, the whole thing “breathing.”

But What About Plug-ins

Let’s take this into the digital realm. Let’s make an EQ plugin. There are two ways to do this. Simple way first:

We can have chunks of program — a little routine that does a shelving EQ, another that does a peaking boost, another that lowers gain, another that adds distortion, etc. Think of these as Lego blocks.

We build these Lego blocks into a structure that does equalization, compression, saturation, whatever we want. We can swap in different blocks with different characteristics, and we can even fine-tune those individual blocks. We can absolutely get something that sounds very good, and it might even get close to the sound of a particular analog unit.

But what it can’t do is interact with itself the way a real analog circuit does.

In our Lego-style plugin, audio flows through one block, then the next, then the next. The output of Block A feeds Block B, but Block B doesn’t naturally “push back” or interact with Block A. The output can’t affect the EQ curve, the distortion, or the dynamics unless we explicitly program that to happen.

In an analog piece of gear, that kind of interaction happens as a consequence of the circuit’s existence. Everything is talking to everything else all the time, because it’s all one continuous electrical system, not a row of separated Lego blocks.
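A minimal sketch of the Lego idea—each block is an independent little routine, and the audio flows strictly forward through the chain. The block names and behaviors here are invented for illustration:

```python
# "Lego-style" plugin: independent blocks chained in series.
# Block A's output feeds Block B, but nothing flows backward.

def shelf_cut(x):       # stand-in for a shelving EQ block
    return x * 0.5

def makeup_gain(x):     # stand-in for a gain block
    return x * 2.0

def soft_clip(x):       # stand-in for a saturation block
    return max(-1.0, min(1.0, x))

chain = [shelf_cut, makeup_gain, soft_clip]

def process(sample, blocks):
    # Strictly one-way: each block sees only the previous block's output.
    for block in blocks:
        sample = block(sample)
    return sample

print(process(0.8, chain))  # 0.8 -> 0.4 -> 0.8 -> 0.8
```

Notice there's no path by which `soft_clip` can push back on `shelf_cut`—that interaction simply doesn't exist unless someone explicitly programs it.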

Now, there’s the complex way to make a plugin.

Early on, we looked at some formulas for Current, Voltage, Resistance, and Impedance. We didn’t get heavily into the math of things, but every section of a circuit can be described by a math equation. Capacitors, inductors, resistors, amplifiers—they’re all describable by an equation. In fact, the entire piece of equipment is a bunch of equations which become part of a giant equation that is the math of the circuit and describes what the circuit is doing at any given moment.

To digitally model a piece of analog equipment, one has to figure out the math of all the different sections of the circuit and then combine them into one massive circuit/equation and then have the computer solve all those equations at the same time. If you do it this way, then one section of the equation can affect another section of the equation. Which means the digital circuit is responding to itself in the way an analog circuit responds to itself.

How often does the equation have to be solved? Well, at a sample rate of 48kHz, it has to be solved 48,000 times a second. How accurate is the solution to the equation? That depends on the bit depth — 24 bit, 32 bit float, whatever it might be.

Obviously, doing things this way, which is called Component Level Modeling, typically requires more out of the computer system than Lego style.
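As a toy illustration of the idea—not Korneff's actual method, and with made-up component values—here's the equation for a single RC low-pass, discretized and solved once per sample at 48 kHz. The circuit equation RC · dVout/dt = Vin − Vout becomes a one-line update:

```python
import math

SAMPLE_RATE = 48_000            # the equation gets solved 48,000 times a second

# Hypothetical RC values: R = 1 kOhm, C = 100 nF -> cutoff near 1.6 kHz
R, C = 1_000.0, 100e-9
dt = 1.0 / SAMPLE_RATE

# Backward-Euler discretization of RC * dVout/dt = Vin - Vout
alpha = dt / (R * C + dt)

def solve_circuit(v_in, v_out_prev):
    # One solution of the circuit equation, for one sample.
    return v_out_prev + alpha * (v_in - v_out_prev)

# Feed in a 100 Hz sine; well below the cutoff, it passes nearly untouched.
v_out = 0.0
for n in range(SAMPLE_RATE):    # one second of audio = 48,000 solutions
    v_in = math.sin(2 * math.pi * 100 * n / SAMPLE_RATE)
    v_out = solve_circuit(v_in, v_out)
```

A real component-level model has an equation like this for every capacitor, inductor, tube, and transformer, all solved together—which is how one part of the solution gets to push back on another.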

Which one, Lego Style or Component Level Modeling, sounds more alive and analog? Which one requires more work?

Here’s a surprise that shouldn’t be a surprise: Korneff Audio’s plugins are Component Level Modeled. Yes, Dan sits there, behind a computer, and figures out the math. These days he has a standing desk, so he also stands there and figures out the math.

Once we figure out the math, we get our hands on the actual hardware thing and compare it to the math. In the case of our Shure Level-Loc, first we did the math, then flew to Chicago to work with Shure and Tchad Blake to fine tune the math so it sounded even more like the actual hardware. In fact, we mathematically modeled three different hardware units, all of which are available on our Shure Level-Loc plugin.

Not every plug-in developer does the math. We do.

We’re actually working on a bunch of mathy stuff right now, and you’ll all get to see it as a new plug-in soon.

Happy Monday -

We start lighthearted, then things get all educational.

Oooooo! The temperature is dropping in the North East. But, we can warm things up with these guys:

Kolektivo

Apple Music

Tidal

YouTube

This stuff makes me want to hang out at the swim-up bar, sipping an unctuous Portuguese white and entertaining bathing beauties with my profound audio knowledge: “Oh Luke,” she laughs, “I love how you pronounce ‘impedance.’ So sexy.”

Sigh... He wakes from his daydream with a start, in front of his Mac Mini, the music of Kolektivo playing through the iLouds.

Kolektivo. Who are they? Good question. They are a loose group of musicians and composers, predominantly based in Latin America. There's a big dose of Bossa Nova, so that’s Brazilian, but then songs are also in French (I think the claps are samples—you listen and let me know what you think) and then some of them are closer to dance music or Compas. And then there’s the Ramón Stagnaro question.

Ramón was a Peruvian guitarist and producer who died a few years ago. He worked with Ricky Martin, Gino Vannelli, Diana Ross, Céline Dion... and he played live with Yanni!

Here’s Ramón with Yanni.

That’s right. We’ve gone from Grand Funk Railroad to Yanni...

Skipping forward on the video, we come to darkly handsome men smiling at the camera, playing what looks like a recorder, followed by Ramón going off on a classical guitar. Monster player—it’s worth it to wade through the 15 seconds of recorder to get to the guitar solo à la Ramón, but feel free to stop once the keyboard solo starts.

Yanni uses Korneff Audio hair care products.

And perhaps he drinks Chocolate Milk.

Speaking of Chocolate Milk, here’s a video by Dan demonstrating Chocolate Milk on a few different sound sources. You don’t often get to watch a master work. Dan always has ideas and tricks and he explains things clearly. Watch. Impress people at the pool.

Back to Kolektivo. Adding to the difficulty of who these people are is that the word means “Collective,” and that describes a lot of different possibilities of people. Google Kolektivo and you might get Drono Kolektivo and their album Leningrad Layers. Droney, strange, electronic music. This isn’t warm sunny Bossa Nova. Heck, the album title sounds like something one would wear in 1941 whilst repelling the Panzer divisions spearheading Operation Barbarossa.

Then there’s Caro Pierotto, a bossa nova singer currently working out of LA, also part of Kolektivo. And here she is doing a Bossa Nova cover of Fleetwood Mac’s Dreams. She appears to be like 14' tall in this video.

Now... somehow I wound up with Caro Emerald, who is not part of Kolektivo, as far as we know, but here she is performing with Metropole Orkest. Sexy stuff. It caught my eye because we saw Metropole Orkest last week with Louis Cole.

Oh no! Louis Cole! Here he is with his group, Knower, and a bunch of friends playing in his house. He’s drumming in the bathroom. With magnificent slippers. Hot bunch of players. Pianist at the end is amazing. Hot music warms ya up.

TIME TO LEARN

Campers, enough fun and games. Time to get serious.

In the next few weeks, I’m writing about equalizers in a technical way that is also completely comprehensible. So, what is happening in the circuit with the electrons and such, but written so you can see it in your head and understand it easily. We’ll start with passive EQs, then move to Active and then to digital emulations. You’ll learn the terminology and concepts you need to get better at audio engineering. To really know what is going on is power.

We start here, talking about Capacitors and Inductors.

Warm regards and see you at the swim-up bar of my dreams,

Luke

PS - let me know if the capacitors and inductors article is helpful to you.

We're talking about EQ circuits for the next few weeks—call this audio tech talk for beginners.

We'll start with passive EQs and then we'll move to active and then digital emulations, but we need to start before all of that with an understanding of Capacitors and Inductors.

Let’s start with a circuit.

We have electrons flowing from a power source through a wire. We stick a resistor in the circuit.

Now, when the electrons come to it, it gets in their way a bit—picture a bunch of guys running down a hall who come to a doorway and have to squeeze through. The resistor is the doorway. It’s like the hallway gets narrower.

Now let’s stick a capacitor in the circuit.

A capacitor is like a room. The guys running down the hallway pile into the room—but there’s a wall across the middle with another door on the far side. They can’t cross that wall. They can only go back out the way they came.

Physically, a capacitor is two metal plates separated by a dielectric—an insulator. The “room” the electron guys get stuck in is one plate or the other, and the wall is the dielectric.

If the electrons in the circuit always move in one direction (DC, or Direct Current), they fill up one side of the “room” and have nowhere to go. The current stops—everything backs up. Electron constipation. That’s different from the resistor, which just makes the hallway narrower but still lets everyone through.

But audio signals are AC—Alternating Current. They flow one direction through the circuit and then the other, because an audio waveform is a series of peaks and valleys—or, as the English in the early 1960s might say, peaks and troughs—ups and downs.

How often does the current alternate? Well, at 60 Hz, it completes 60 back-and-forth cycles every second, which means the electron guys run into one room, then leave and run into the other room 60 times in one second. At 2850 Hz, they’re in-out, in-out 2850 times per second.

So, a capacitor is a room with a divider, and a bunch of guys fill up one side of the room, leave, and then fill up the other side, each side entering and exiting through the only door they have access to. Electrically, the electrons accumulate on one plate, discharge, then fill up the other plate.

So, you’re wondering why this is important—or handy...

Back to our electron guys. We stuff them in the room, the room fills up, the flow of electron guys stops. Then we reverse the flow, the guys have to leave the room, and the other side fills up, which stops the flow of guys in the opposite direction. If the frequency is low, the guys have more time to stuff themselves into the room, the room gets more filled up. It’s like there’s a guard at the door saying, “Get in, plenty of room, keep coming, get in, get in…” And then when the flow reverses, the guard says, “Ok, get out, come on, keep moving, get out, get out…”

If we slow things down—fill up the room fully, empty it, then refill it fully from the other side—we gum up the works a bit. Things back up. There are electrons waiting around while the room fills up. The guys (electrons) can’t flow easily. So there’s less movement, less “power” in the circuit.

But if the frequency is higher, the guard says, “Get in, get in—oh wait, we’re changing directions—get out, get out!” The room doesn’t fill up as much, which means it can empty faster. Not as many electrons collect on the plate, so there aren’t as many that have to leave the plate. The result is more movement in the circuit, which means more power.

So... as the frequency goes up, the capacitor makes it easier for electrons to flow through the circuit. As the frequency goes down, the capacitor makes it harder for electrons to move. When it’s harder for electrons to move, the power goes down—which means the low frequencies have less power. They roll off.

You’ve just made a high-pass filter.

Capacitors pass the highs through and block up the lows. By changing the capacitor’s characteristics—the materials involved, the size, etc. (think of this as changing the size of the room you stuff the guys into)—you can adjust the frequencies most affected. Adjust the capacitor, move the high-pass filter up and down.

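The "room filling up" story corresponds to the textbook formula for capacitive reactance, Xc = 1/(2πfC)—the capacitor's opposition to flow, which shrinks as frequency climbs. A quick sketch, with an arbitrary capacitor value:

```python
import math

def capacitive_reactance(freq_hz, c_farads):
    # Xc = 1 / (2 * pi * f * C): opposition drops as frequency climbs
    return 1.0 / (2.0 * math.pi * freq_hz * c_farads)

C = 100e-9  # hypothetical 100 nF capacitor
for f in (20, 200, 2_000, 20_000):
    print(f"{f:>6} Hz -> {capacitive_reactance(f, C):>10.0f} ohms")
```

For this capacitor, 20 Hz sees roughly 80,000 ohms of opposition while 20 kHz sees under 100—the lows get blocked, the highs sail through.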
Now we need a low-pass filter.

How do we do that?

Well, if a capacitor gives us a high-pass, then we need something like a backwards capacitor to make a low-pass. Makes sense, right?

What the hell is a backwards capacitor?

Well... if a capacitor is a room with a divider, and a bunch of guys fill up one side of the room, then the other, it seems to me that a backwards capacitor would be like a huge hallway that is always full of guys. No one leaving. No emptiness. Tons of guys.

So, if our electrons are running around the circuit and suddenly they get into a huge hallway packed with other electrons, that screws them up. How do they get through? “Excuse me, pardon me, pardon me, excuse me, sorry, I didn’t mean to touch your butt, pardon me…”

This gums up the works differently, doesn’t it? It’s kind of the same thing as a capacitor—we’re still messing with the flow of electrons—but in a different way.

Let’s bring frequency into this.

First of all, let’s give this hallway stuffed with extra guys a name. We’ll call it an inductor.

We have our running electron guys, and let’s say they’re changing directions at 100 Hz—one hundred times a second. That gives them 1/100 of a second to get through the inductor, which is a hallway packed with guys. So there’s time for them to go, “Excuse me, pardon me, pardon, whoops, sorry, pardon me…” But if we increase the frequency, we start giving them less time to get through, so it’s more like, “Pardon me, excuse—oh, I have to go the other way—excuse me, you again, pardon—oh! I have to go the other way, damn… me again, sorry, sorry…”

Do you see that as the frequency goes up, the electron guys kind of get caught in the inductor? At low frequencies, the electrons have time to get in and out, but as frequency goes up, they have less, and they get stuck in a swamp of other guys. When we gum up the works and inhibit movement, we decrease the power. Now we're gumming up the works at higher frequencies. The high frequencies have less power. We're rolling off the high frequencies. We have a low-pass filter.

Inductors pass the lows and roll off the highs.

What is an inductor physically like? Since we need space for lots of electrons, we need something big. Like a fatass chunk of wire. Like a coil of wire. And when we coil the wire, it’s sort of like taking the hallway and bending it a bunch, which makes it even harder for the electron guys to get through. Picture you’re at a long, bendy nightclub full of people and you’re trying to get through. The more time you have, the more able you are to make it to the other side. But at high frequencies, you come to the door of the club and think, “Screw this. I’m not even going in.”

We can control the frequency response of the inductor by making it from different types of materials, thicker wire, thinner wire, more coils, fewer coils, wrapping the wire around something, etc.

What if we stick a capacitor AND an inductor in the circuit at the same time?

(The original shows circuit diagrams here: the capacitor alone, the inductor alone, and the two combined into one circuit, producing a hump in the frequency response.)

And by adjusting capacitors and inductors, we can move the hump up and down the frequency spectrum, and make the hump narrower or wider. This is a bandpass filter.
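Moving the hump around corresponds to the standard resonance formula for an LC combination, f0 = 1/(2π√(LC)). A sketch with made-up values—shrink the capacitor and the hump slides up the spectrum:

```python
import math

def center_frequency(l_henries, c_farads):
    # Resonant frequency of an LC combination: f0 = 1 / (2 * pi * sqrt(L * C))
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

# Hypothetical values: same inductor, one-tenth the capacitance
print(center_frequency(0.1, 250e-9))  # ~1007 Hz
print(center_frequency(0.1, 25e-9))   # ~3183 Hz
```

This is essentially what the frequency-select switches on a passive EQ do: they swap in different capacitor and inductor values to relocate the hump.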

The problem is that we're reducing gain only. We're rolling off the lows with the capacitor and rolling off the highs with the inductor, so to make that hump louder compared to the highs and lows, we have to amplify the signal afterward. We need to add gain. So we stick an amplifier after, and now we have something like a Pultec—a passive EQ.

Why passive? Because the tone shaping uses passive components. Resistors, capacitors, and inductors are passive: they work without additional electricity fed into them, and they can't amplify a signal.

A passive EQ means the signal is reduced overall, because we're cutting things out of it, and then we amplify the whole thing, which brings the stuff we reduced back up toward nominal while the stuff we passed untouched comes out louder.

Kind of like what happens with a compressor, right? We selectively reduce the signal and then overall amplify it.

My circuit diagrams are ridiculously simplified—they don't have an input or an output—but the point is that you understand what's going on and start learning the symbols. It isn't hard to read a circuit diagram. You're already doing it. True, it's "Run, Spot! Run" for now, but eventually it will be, "The only completely stationary object in the room was an enormous couch, on which two young women were buoyed up as though upon an anchored balloon."

I also simplified a TON of electrical engineering stuff. Like electrons and what’s actually happening as things flow through wires and components. It’s not quite guys running around. And there are more words to learn, more jargon. But it’s a useful way to think of things at this stage. We’ll add some more ideas next week.

You don’t need to know this theory perfectly, but you’ll learn enough in the next few weeks that it will improve your engineering.

Happy New Monday, or... Happy Labor Day.

As long as we’re laboring, about a year ago I wrote this: https://korneffaudio.com/new-monday-28/.

It covered Marvin Gaye and there was a chart on EQ’ing ideas. We’re kinda bumping into that EQ’ing thing this Episode.

Sinners

Sinners is a horror flick set in the American Deep South in 1933—the time of lynchings, the KKK, and, evidently, vampires that play bluegrass. And it’s kind of a musical! Definitely worth a watch, and definitely the soundtrack is worth a listen.

Composed by Swedish composer Ludwig Göransson, and cut with a mixture of musicians and actors in a converted church in New Orleans, the music is rooted in delta blues but grows tendrils everywhere, into hip hop, R&B, with strange, dissonant string pads and keyboards, and rhythms pulled from Africa to Ireland.

Watch this scene. Aside from being a visual stunner, it’s a journey down the highway of black music in America. This is SPLENDID composing and filmmaking.

More Listening

The band hates this song and its production—lead singer Ian McCulloch said, “It [still] sounds crap.”

I dunno... I love it. It’s a bit lightweight as a song, but as a production, ay caramba! Amazing.

Lips Like Sugar

Apple Music

Amazon Music

Spotify

Tidal

YouTube

If we’re going to seriously listen to things, I want to find the best possible source. To my ears, nothing is beating Apple Music, but Amazon Music is a close #2. They have the least signal compression and loudness compensation. This particular song is full of little production tidbits. They’re best heard on Apple or Amazon. Spotify and Tidal both sound overly compressed and bass-heavy. I can’t decide which one is more awful, and I am disappointed by Tidal. YouTube... eh.

ANYWAY, this one is mind-blowing on headphones. Stuff zinging around everywhere. Listen this week, I’ll tear it apart next week.

A Lesson

Some of you might have noticed that we’re putting out tips and tricks for our plug-ins on Instagram. Here, for instance.

This particular trick is on getting more presence out of bass and low-end sounds so they translate better to small speakers.

But in addition to the IG reel, I also wrote more on this particular trick, and you can find it by clicking here.

Short Stuff

Andy and Stewart are suing Sting for a share of the royalties from 'Every Breath You Take'. Evidently, the Stingster makes $700,000 a year just from that one song, and the story is the band was going to chuck the song out when Sting said, “Andy, go do a guitar part.” The Andster popped into the studio and did the guitar part in one take, saving the song and making Sting $700,000 a year. Not sure why Stewart is involved. Read more here.

Bang a gong? No! Rub it with a flumi! Get weird, voice-like sounds. Very cool. I might have to look into the physics of this.

Did you answer your survey yet and get 50% off of everything? No? What are you waiting for?

Have a great week.

Warm regards,

Luke

 

Happy Monday -

Korneff Audio started on a Black Friday five years ago, with one plug-in, the original Pawn Shop Comp. Five years later, we’ve got nine, and a bunch more waiting to see daylight. So, I guess happy birthday to us?

For this episode (producer/engineer John Agnello calls each of these an episode... sounds like an eventual podcast...), I thought I’d be extra useful by giving some info on our plug-ins, specifically going into how Dan and I use them in the studio, some design background, some usage hints.

There’s so much, though, that I’m splitting this into two emails, one today and one tomorrow. SO... keep an eye out for New Tuesday!

Factoids and Uses and Whatnot on All Our Plugins, going by age

Pawn Shop Comp/Pawn Shop Comp 2.0

It’s misnamed. It’s really a vintage channel strip consisting of a tube preamp coupled to a FET-style compressor. It works on everything, including the mix bus, but it’s el supremo on vocals and bass. Tons of saturation options because of the preamp, and the ability to switch in different tubes and transformers. The way we use the PSC is to put it on a channel, flip around to the backside, fiddle with the preamp and the tubes and transformer, and THEN adjust the compressor. Think of it as selecting the console you want to use before engaging the channel EQ.

Fun Factoid: The Operating Level control is a circuit Dan nicked off a cassette tape duplicator his Uncle Bob had given him when Dan was a wee teen. He liked how it sounded, so it wound up in the Pawn Shop Comp.

Usage Secret: I’ve mentioned this before... two of them, one right after the other, set one to respond quickly and the other a bit more slowly (play with attack and release). Swap the order in the inserts ’til you get something smooth.

Talkback Limiter

This beast is another FET-style limiter, based on a circuit found in SSL consoles designed to keep studio talkback mics from destroying speakers and ears. Hugh Padgham and Peter Gabriel invented gated drum sounds with this circuit.

Yes, it is amazing on drums. It makes anything snap and click and punch. It lives on our snares, kicks, room mics, etc. It’s probably the best overall drum compressor out there.

But, and I suppose it’s part of the FET transistor modeling, and the artifacts produced by an FET, the TBL adds a thickness to things. It’s hard to describe but I can hear it in my head. It has a similar sound to Neve Diode compressors. It makes me clench my jaw and want to bite something. If you know Neve compressors, you know what I’m talking about. Anyway, the TBL is really great on things like vocals and acoustic instruments provided you back the DRY WET BLEND way way down towards DRY. Like, barely crack it open. It adds a little beef and evenness. We typically follow it with another compressor.

Fun Factoid: for distortion effects, click around to the back and mess with the trimpots. AND for a real adventure, on the front panel, click on the power lights at the top and see what happens...

Amplified Instrument Processor

I wrote about this thing’s monstrously good-sounding EQ a few episodes ago. Further, I wrote a whole course on how to use it. If you want to be enrolled in the course, reply to this email and I’ll sign you into it.

Usage Idea: Put an AIP on each of your submix buses. Switch on the Proprietary Signal Processing button on the front, and then play around with the three different settings on the back - one is tube-ish, one is tape-ish, and one is California 1970s’ solid state-ish. Again, do this BEFORE you do anything else with the plug-ins. It’s like picking out different sounding channels for each grouping of instruments.

Micro Digital Reverberator

You know who likes reverb units with almost no controls? Me. I love messing around with compressors, and EQs, and delays but when I get to reverbs I just want presets that sound good. I don’t even like adjusting simple things, like the decay time. Maybe it’s from screwing around for hours on 480Ls and always going back to the presets. Who knows.

Do This: Even though the original hardware units this puppy is modeling were basically designed to go on an insert or across a whole mix, put the MDR on its own channel and feed it via a send. Why?

1) You want to be able to EQ your reverbs. This is a HUGE trick. This guy explains it better than I can, so go read this.

2) You want to be able to feed the output of one reverb unit into the next, and so on.

What?? Cascade the reverbs?? YES!!!! It’s total insanity and fun!

In fact, do this: Put THREE MDRs on three separate channels. One is a short small room, one is a plate, and the last is a huge concert hall. Use the small room to widen and add a touch of ambiance. Use the plate for vocals, but just a smidge, and then use the concert hall for pads, etc. NOW... feed a bit of that small room INTO the concert hall, but just a touch, to have some movement and depth way way back there in the speakers. For special moments, like the end of a solo, or a chunk of vocal line when the singer screams out his ex-wife’s name in anguish, or when someone has decided a certain single snare hit is incredibly important, feed the small room into the plate and the plate into the concert hall. Obviously automate this stuff.

Fun Factoid: Everyone overlooks this, but the MDR has stereo widening/narrowing on the back....

The Echoleffe Tape Delay

This is one intimidating monster. I’ve seen grown mix engineers fling themselves into oncoming traffic when they discover there are individual EQs, bias, and pan settings for each of the three delay lines. I have stood over their mangled bodies, finally at peace, and I’ve whispered, “Did you know you also have complete control over wow, flutter, tape age, head bump, as well as tape formulation, and you can switch off the Echoleffe’s delay function and just use it as a tape saturation simulator?”

This thing is the opposite of the MDR. It’s bristling with controls like a pissed-off German porcupine. It’s a pity, because once you get the logic of the controls, the ETD is quick to use and impossibly versatile. It can do easy things, like adding slapback on a vocal (it’s overkill for that, honestly), but it excels at making sounds you’ve never heard before.

The ETD can turn a single note into a keyboard pad that modulates and moves. It can twist delays into reverbs and musically sync the whole thing to the tempo of the track.

Usage Ideas: Set the delay times to below 11ms - set all three of them differently. Pan them everywhere. Play the track, and adjust the feedback for each delay line on the front panel, then go to the Tape Maintenance Panel and futz around with wow and flutter — this will add modulation to the delay times and suddenly you’ve got flanging happening that is out of this world and panned all over the stereo image. Gradually increase one of the delay times to get pitch-shifting effects. Automate the changes of the delay times. Play with the REVERB DENSITY switch on the front panel to basically DOUBLE the number of echo returns.
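If you’re curious why wobbling sub-11ms delays turn into flanging, here’s a toy sketch of the idea in Python. To be clear, this is my own illustration of the general technique, not the ETD’s actual DSP: a copy of the signal is delayed by a few slowly wobbling milliseconds and mixed back in, and the moving comb-filter notches are what you hear as flanging.

```python
import math

def flange(x, sr, base_ms=5.0, depth_ms=3.0, rate_hz=0.3, feedback=0.4):
    """Toy flanger: mix the dry signal with a copy delayed by a few
    milliseconds, with the delay time slowly wobbling (mono, linear interp)."""
    out = [0.0] * len(x)
    buf = [0.0] * len(x)          # delay memory (input plus feedback)
    for n in range(len(x)):
        delay_ms = base_ms + depth_ms * math.sin(2 * math.pi * rate_hz * n / sr)
        t = n - delay_ms * sr / 1000.0        # fractional read position
        i0 = math.floor(t)
        frac = t - i0
        d = 0.0
        if i0 >= 0:                            # before that, nothing to read yet
            d = buf[i0] * (1 - frac) + buf[i0 + 1] * frac
        buf[n] = x[n] + feedback * d           # regeneration, like tape feedback
        out[n] = 0.5 * (x[n] + d)              # dry/wet mix
    return out

# A one-second 440 Hz tone at a low sample rate, just to exercise it
tone = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]
swooshed = flange(tone, 8000)
```

Slowly change `base_ms` upward and you get the pitch-shifting effect mentioned above: the read position falls behind the write position, which is a momentary pitch drop.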

Even if you never buy this thing, download the demo and spend a week writing songs with it.

Licensing

Our original five plug-ins are iLOK-based for security purposes. Yes, we are phasing that out and soon our original five will use our own proprietary licensing system developed by Dan, the damn genius. When will this happen? We are hoping very very soon, but no promises. But know that we’ve heard your requests to get the heck off iLOK and we are working towards that.

I don’t have a new record this week. I’m still listening to Kim Deal every day. It gets better and more creative and insightful with each listen. But here’s a great interview with her on the Broken Record podcast. She talks about everything, including the new album. And she’s really really funny! And so so smart. She talks a lot about Steve Albini, and sadly, she occasionally refers to him in the present tense, as though he were still alive.

Warm regards,

Luke

Happy Monday -

While I was writing this, producer Shel Talmy died. You might not know his name, but you surely know 'My Generation', 'Friday on My Mind', and this little ditty from The Kinks.

You Really Got Me

This was a groundbreaking recording. There’s fuzz guitar on it!

Now, the story is, to get that guitar sound, Dave Davies slashed his speaker with a razor blade. At the very beginning, before the band kicks in, you can clearly hear a buzzing that might or might not be the two edges of a paper speaker cone rubbing against each other, but also, by 1964 people knew that if you turned up an amp a lot you’d get distortion. Heck, people have known this since... forever? So, I think it’s a combination of a turned-up amp and a damaged speaker, but I wasn’t there. I was only a year old and still wetting myself.

Another thing to hear: the bassist not muting his bass. Listen for an out-of-tune resonance that can be heard in the gap in the iconic riff. Even as a kid this used to drive me nuts. What does it take to wrap a sock around the neck at the nut?

By the way, Jimmy Page is on this session, because The Kinks’ lead singer, Ray Davies, wasn’t playing his usual rhythm guitar. Producer Shel Talmy wanted him to concentrate on vocals and brought in Jimmy Page to do Ray’s parts, because it was live in the studio with no overdubs. Now, both Jimmy Page and Dave Davies claim to be playing the rhythm part, which is unusual, because usually guitarists claim to have played the solo.

Talmy also produced a few very early David Bowie records, when Bowie was still Davy Jones. You’ve Got a Habit of Leaving is not one of Bowie’s best compositions, but even on this one we can hear hints of his latent songwriting ability. Check out the “rave up” sections that are verging on pure noise.

Talmy wasn’t all noise and rock, though. He recorded some gorgeous acoustic folk stuff. Let No Man Steal Your Thyme by Pentangle is a lovely recording. Check out the cello glide from left to right at the start, and the precision and clarity of the various parts.

Shel Talmy, off to that analog tape studio in the sky at 87.

Pumpkin Spice Latte

Shameless plug-in plug: go buy a Pumpkin Spice Latte. $14.99 - that’s less than what an actual Venti Pumpkin Spice Latte would cost you at a Starbucks in New York, and our plug-in, with its combination of saturation, ambiance, and echo is far more useful and less fattening, unless we’re talking about your tracks, because then it’s more fattening.

Microphone Stuff

I love microphones. I love having a lot of them to choose from, I love moving them around, I love buying them, I love trying different microphones and going, “meh... that sucks, try the XXXXXXX (insert your go-to mic here)”.

In no particular order: mic stuff.

What the 3:1 Rule really is

“When recording with multiple microphones, the 3:1 rule states that the second microphone should be placed three times as far away from the sound source as the first microphone.” Definition courtesy of the internet.

How to explain this... It’s not about phase. Phase doesn’t magically fix itself when things get three times farther away from each other. It’s about the LOUDNESS of LEAKAGE. What causes phase issues is the unintended stuff that gets into the second mic: if it’s loud enough, it plays phase havoc with the intended stuff in the first mic.

We have this:

james taylor

It’s the leakage from the acoustic guitar, if it’s loud enough in the vocal mic, that will cause phase issues when it’s heard with the direct sound picked up by the acoustic guitar’s mic. The guitar leakage (indirect sound) on the vocal mic will phase-interfere with the guitar (direct sound) on the guitar mic. Following the 3:1 rule means hopefully the direct sound is a lot louder than the indirect sound. It’s controlling level, not phase. If you’re in a small, reflective room with tons of leakage everywhere, all of it loud, you’ll have phase issues regardless of distance.

Instead of the 3:1 rule, do this: Use one microphone. If you can’t do that, the closer the mics get to each other, the closer they have to get to their individual sound sources.

Back in the day I learned something called acoustic separation: if you don’t want the leakage to cause an issue, make sure it’s 26dB quieter than the direct sound. In practice, this is pretty hard to hit, so even if you’re getting 10 or 15dB of difference on the meters, you’re doing well. Of course, 26dB is better.
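If you like numbers, here’s where the 3 in 3:1 comes from. Assuming a point source in a free field (real rooms are messier, so treat this as the best case), level falls 6dB per doubling of distance, so a mic at three times the distance picks the source up roughly 9.5dB quieter:

```python
import math

def level_drop_db(near_m: float, far_m: float) -> float:
    """Relative level drop (dB) going from near_m to far_m,
    assuming a point source in a free field (inverse-square law)."""
    return 20 * math.log10(far_m / near_m)

# Hypothetical James Taylor setup: guitar mic 0.3 m from the guitar,
# so the 3:1 rule puts the vocal mic at least 0.9 m from the guitar.
direct = 0.3   # guitar-to-guitar-mic distance, meters
leak = 0.9     # guitar-to-vocal-mic distance, 3x
print(f"Guitar leakage in the vocal mic: ~{level_drop_db(direct, leak):.1f} dB down")
# -> about 9.5 dB down. Level control, not phase magic.
```

Notice 9.5dB is well short of the 26dB separation figure, which is why 3:1 is a starting point, not a guarantee.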

And for God’s sake, don’t get a ruler out and measure this stuff.

Mic Position as EQ

If you’re using a mic with a directional polar pattern, there are a TON of placement options that can drastically change the frequency response of what you’re recording. And I’m not talking about where on the sound source you’re placing the mic. I’m talking about proximity effect and off-axis coloration.

Distance for Low End

Think moving closer or farther for low-end effects.

Most directional mics exhibit proximity effect—the closer you get to a sound source, the more the mic will enhance the low end. Some patterns and mics have more of this than others. Figure-eights (bi-directional) have the most. Rather than boosting the lows, move that mic closer, or swap in a mic like a figure-eight. A fig-eight on a bass cabinet or a kick is a fun thing. A 414 switched into fig-eight is a great thing on a guitar cabinet, also on toms (provided there’s not a cymbal over the tom).

Of course, if you’re trying to get rid of low mud, proximity effect will not be your friend. Proximity effect is often the cause of muddy vocals. Back the singer up a foot.

Fun phase trick: mic something with a fig-eight, then put a board of wood behind it so the direct sound bounces off the board into the back of the mic for instant phase strangeness. Have someone move the board closer and farther for a flange effect.

Added benefit of bidirectional polar patterns: they have the most side rejection of any polar pattern, which makes them very useful when you really need to isolate a source from something on either side of it and there isn’t a bunch of stuff leaking into the back. Very useful on congas and such, also pianos.

Also, while most omni-directional mics don’t have proximity effect, some, usually multipattern condensers, do have it, so use those ears.

Axis for High End

The reality of directional patterns is that they’re a mess. You see them in books and mic spec sheets and they look like this:

cardioid pattern

Seems nice and uniform, doesn’t it?

Nope. The response changes depending on the frequency. In fact, the only place a mic is reliably flat, or somewhat like its frequency response diagram, is dead on from the front. From any other angle, the response is different.

The basic rules: the lower the frequency, the more the polar pattern tends to be omni; the tighter the polar pattern, the stranger the frequency response. The most consistent patterns are on bi-directional mics, the wonkiest are on supercardioids and shotgun mics. Here’s a more realistic response graph for a supercardioid mic.

polar pattern 906

A total mess above 1kHz. Or... think of it as a bunch of little EQ curves to play with.

Point the mic straight at something, get one frequency response. Position the mic off-axis to the sound source and the high-frequency response changes. It’s like a built-in low-pass filter.
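For the curious, the textbook first-order patterns are all the same little formula: sensitivity = A + B·cos(angle). Here’s a sketch of the spec-sheet ideals. Keep in mind the whole point of the section above is that real mics only approximate these, and drift further off as frequency rises:

```python
import math

# Idealized first-order polar patterns: sensitivity(theta) = A + B*cos(theta).
# These are the spec-sheet ideals; real mics deviate, worst at high frequencies.
PATTERNS = {
    "omni":          (1.00, 0.00),
    "cardioid":      (0.50, 0.50),
    "supercardioid": (0.37, 0.63),
    "figure-8":      (0.00, 1.00),
}

def off_axis_db(pattern: str, degrees: float) -> float:
    """Attenuation in dB, relative to on-axis, at a given angle."""
    a, b = PATTERNS[pattern]
    s = abs(a + b * math.cos(math.radians(degrees)))
    return 20 * math.log10(s) if s > 1e-9 else float("-inf")

for angle in (0, 45, 90, 180):
    print(f"cardioid at {angle:3d} deg: {off_axis_db('cardioid', angle):6.1f} dB")
# on-axis: 0 dB; 90 degrees off: -6 dB; the rear null: minus infinity (ideally)
```

So even in the ideal math, a cardioid is 6dB down at 90 degrees, which is part of why off-axis placement works as a tone control.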

There’s a lot of control here. Put a mic slightly above a singer’s mouth, point it down towards their chest and you can smooth out a spitty high end. Still sibilant? Move the mic right or left a bit. Come in from the side of their head, pointing towards their opposite shoulder. Adjust bass by coming in closer or further away. Adjust sibilance and highs by changing the mic’s axis.

A quick tip: if you’re going to be doing really weird mic angles on a singer, be aware that there’s a “turn towards the mic gravity” going on. Put a dummy mic in front of them so they sing towards it, and then let the weirdly placed mic do its job unnoticed.

This also works for any acoustic instruments, from cabinets to pianos to drums to horns—whatever.

Mics as Limiters

Mics are mechanical, mechanical stuff has inertia, the diaphragm of a mic has inertia. “Slow” heavy mics, like most moving coils, round off transients. I’ve written about this before. Here’s a diagram I stole.

This can make a huge difference between something sitting nicely in the mix and something that sounds like a little click unless you turn it up a lot, and then it’s way too loud.
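A crude way to picture what a heavy diaphragm does to a transient is a one-pole lowpass: each output sample only moves part of the way toward the input. This is just an analogy sketch, not a model of any particular mic, and the smoothing coefficient is made up.

```python
def one_pole_lowpass(x, coeff=0.7):
    """Crude stand-in for diaphragm inertia: each output sample only moves
    part of the way toward the input, so fast spikes get rounded off."""
    y, prev = [], 0.0
    for s in x:
        prev += (1 - coeff) * (s - prev)
        y.append(prev)
    return y

transient = [0.0, 1.0, 0.2, 0.05, 0.0, 0.0]   # a click-like attack
print([round(v, 2) for v in one_pole_lowpass(transient)])
# the 1.0 spike comes out around 0.3 -- the "slow" mic rounds off the peak
```

The spike gets shorter and smeared over time, which is exactly the built-in limiting the paragraph above describes.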

Use Pop Filters Always

If you’re sticking a mic in front of a person, put a pop filter on it. Doesn’t matter if they’re popping the mic or not. They’re spitting crap and bits of chapped lip and dead taste buds and chia seeds and whatever else is in that mouth into the mic and all over the diaphragm. Kissing is fun, but cleaning chunks of spaghetti carbonara off your eardrum isn’t. Ever pay to get a diaphragm cleaned by some mic tech? Do you want to? Put a pop filter on it. It won’t affect your high end.

Setting Up and Breaking Down

Most of you no longer deal with this, but it’s a good lesson.

When you’re doing a big session with lots of mics, set up the stands first, the cables second and put the mics on last. Route cables so there is always a footpath for people to walk that doesn’t have microphone cables on it. Route your drum mics all around one side of the kit so there’s a clear way for the drummer to get in and out without stepping over cables.

If you drop a cheap mic, it bounces. If an expensive mic hits the floor, chances are it’s toast.

Breaking down: before you let a single musician into the studio to put away their gear, unplug EVERY SINGLE MIC, put them AWAY in the MIC CABINET, and maybe even LOCK IT. Every time a mic was ever stolen or broken, in all the years I was in studios, it was during the breakdown. Get them out of there first and fast.

This was shorter until I heard about the passing of Shel Talmy. Y’all have a great week.

Warm regards,

Luke

Happy Monday

First of all, thank you to everyone who submitted a survey. We’ve only managed to read a few so far but the suggestions have been really helpful. If you’re sitting on your survey, remember you have ’til the end of the month! We do want to hear from you!

Do you steal ideas when working on your music? I do. I’m working on some acoustic guitar stuff that has a swampy kind of sound, so I went poking around in the past and of course turned up this nugget.

Black Water

On Apple Music

On Spotify

On YouTube

Tons to hear—what a mood and vibe assembled in such a deceptively simple manner. Lots of little changes to the mix in every new section of the song. Listen and follow along.

0:00 Chimes at the top for a water feel, and then listen for an autoharp strummed (it sounds like someone strumming guitar strings above the nut) just in the back, alternating left and right.

0:07 Two slightly different acoustic guitar parts panned hard left and right. This is really tricky, syncopated playing. Kudos to songwriter/lead singer Pat Simmons for overdubbing the second part (I think it’s the one to the right) so tightly. A viola also comes in, at right center, played to sound like a bluegrass fiddle.

0:18 Lead vocals centered and pretty dry.

0:41 Harmony vocals—sounds like three guys around one mic, panned back and right center.

1:00 More reverb added to the lead vocal for this section.

1:10 Lead vocal pans left and... it sounds like they added a delay and panned that hard right. Could it be a 30ms Cooper Time Cube delay? It’s got a strange frequency response. Or it could be a really really tight double. What do you think?

All those moves on the vocals in less than a minute, and that’s not counting whoever is riding the gain.

1:31 Drums panned 3/4 left, sounds mono. Very flat, dry sounds—typical dead 70s drums. That kick... it sounds like an AKG D12. D12s sort of sound like a “boing” rather than a slap.

Bass comes in, played very tight on top of the acoustic guitar’s low note and weaving around the guitar part. Maybe slightly to the left of center?

1:55 Viola again, but now two tracks and one doing a harmony. They “breathe” and it sounds a bit like an accordion.

2:15 The viola swells in, doing what sounds like a horn part, but I think it’s the viola still.

2:24 A great little tom fill. I love drum parts like this that come out of nowhere and seem almost to be a mistake. Charlie Watts is the master of this.

2:35 A solo acoustic guitar, played with a pick, comes in opposite the viola and the two trade off.

2:57 This is such a cool moment. The bass takes the attention, the lead guitar drops back a bit. There’s a cymbal splash off to the left... I think it might be someone making a “pish” noise with their mouth.

3:08 This is a glorious moment in recorded music history: the acapella break on Black Water. Three tracks, one part right, then one center, then one left. They’re all running through Amigo’s reverb chamber, but I think the right side part, with the bass voice on it, has a complementary really long reverb on it panned left. SO... maybe they printed the reverb to tape and then panned it and then fed it back into the reverb again? The center part is doubled lead singer, the left and right parts are the same guys but singing in different registers and balanced differently around the mics.

3:25 The music fades back in, there’s yet another improvised lead vocal. This might be Tom Johnston instead of Pat Simmons but I can’t tell for sure.

And the song rambles out, back down the river.

Brilliant production by Ted Templeman, wonderful engineering by Don Landee. These are the same guys that did a bunch of Van Halen records.

Black Water was tracked and mixed on an API console, cut to a 3M 24 track 2”, and all this fun happened at the now defunct Warner Brothers Studio in North Hollywood. Note that this studio is often credited as Amigo Studios on records - same place, different name after a buyout.

Black Water became an unlikely single in 1974, the Doobie Brothers’ first #1, and was an instant classic. I remember hearing it as a kid and being amazed at the mood and vibe... 1974... 5th grade??? A bunch of us trying to do the acapella part and totally sucking at it because we were all still sopranos. Sounded like The Brady Six.

Speaking of the brilliant acapella part, and speaking of stealing ideas, Ted Templeman nicked it from here.

I listen to this and I’m amazed by the clarity and depth, and how they pulled this all off with comparatively simple equipment. We’re basically talking API console EQs, a few 1176’s and a reverb chamber.

But what is really going on here is excellent microphones put in exactly the right spot on excellent instruments played by excellent musicians. And the whole event happens in a room specifically designed for recording. And the guy picking and putting the mic in the exact right spot has a closet full of different mics to pick from and years of experience putting different mics on different instruments, and figuring out through the osmosis of experience what works with what.

Daunting. Let's cheat.

The Frequency Chart

Frequency corresponds to pitch. Every note played or sung has a specific frequency to it.

A useful idea. So useful that in my early engineering days I had to search around audio textbooks (it was a pre-internet world) to turn up a chart like this:

This was a super handy thing to have. It’s not needed as much anymore because there are EQs with built-in Real Time Analyzers, but I find knowing the numbers and the math very useful.
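These days you don’t even need to hunt down the chart, because equal-temperament math generates it. Every semitone is a factor of 2^(1/12), with A4 pinned at 440Hz. A quick sketch:

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_to_hz(name: str, octave: int, a4: float = 440.0) -> float:
    """Equal-temperament frequency of a note; A4 = 440 Hz by default."""
    semitones_from_a4 = NOTE_NAMES.index(name) - 9 + (octave - 4) * 12
    return a4 * 2 ** (semitones_from_a4 / 12)

# A few landmarks off the chart:
for note, octv in [("E", 1), ("E", 2), ("A", 2), ("C", 4), ("A", 4)]:
    print(f"{note}{octv}: {note_to_hz(note, octv):7.1f} Hz")
# E1 (bass low E) lands around 41 Hz, E2 (guitar low E) around 82 Hz,
# C4 (middle C) around 262 Hz, A4 at exactly 440 Hz.
```

Double the frequency and you’re up an octave, which is why EQ trouble spots often repeat: mud at 200Hz frequently has a cousin at 400Hz.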

Here’s a bunch of EQ ideas based around the chart and the math. I’ll be adding to it, and if anyone has an idea to add, send it to me and it’ll go on the chart.

For those of you who wanted something more technical, there ya go.

The Vault of Marco

My buddy Marco sends me things he’s listening to, and Marco tends to listen to obscure stuff that is always sorta cool. The first entry into the Vault of Marco, made all the more timely by the impending elections in the US, is...

You’re the Man - Marvin Gaye

On Apple Music

On Spotify

On YouTube

Recorded in 1972, You’re the Man expresses Gaye’s disappointment with leadership in the US and his hopes for better policies for the people. This nice message must be tempered by the fact that Gaye was in huge trouble with the IRS for not paying back taxes and he eventually fled the country.

Motown Records thought the song was too controversial (meaning that there might be financial backlash and boycotts against the label) and didn’t promote it, so it vanished off the charts and out of the cultural consciousness quickly. The album "You’re the Man" was supposed to be released to follow up Gaye’s groundbreaking “What’s Going On” album back in 1972. That didn’t happen either. The album was finally released in 2019, on Marvin Gaye’s 80th birthday, after he had been dead for 36 years.

That’s all for now. Remember to get those surveys in. And of course, feel free to write anytime. It is always a pleasure to chat with you all.

Luke

Korneff Audio

Happy Monday, Summer Campers!

This popped up on the Instagram feed of super-engineer Tchad Blake last week:

"There's no "best" eq for anything. All you can really say is what's your favourite at any given time. The (Korneff Audio) AIP has quickly become my favourite eq, ever. Analog or digital. Every time I use it on anything I think I'm hearing something new. It fits my ear/brain chain better than anything I've used before. Wtf...right?? How did these guys do this?? I'd love to know if anyone else out there is hearing just how cool this thing is or even, tell me how it's not. I've been using it every day for over a month and I'm still jacked up about it."

This is a tremendous compliment, coming from this guy. Tchad Blake is king cheese, bacon from heaven. He's awesome.

Forget the credit list and awards: Tchad Blake makes really interesting recordings. He's always experimenting and inventive. He records drums in the smallest room possible using a Binaural Dummy Head as an overhead mic, doesn't use much reverb, loves to compress and distort, pans things strangely, and in general makes super cool sounds. Listen:

American Music Club "Mercury" on Apple Music

On Spotify

On YouTube

Tchad asked, "How did these guys do this??"

This is how we do it

The EQ on the AIP is a 4-band fully parametric EQ that is especially sweet sounding. It has a weird interface that is based on its inspiration, the Klangfilm RZ062B.

From the name, one can guess that Klangfilm was German and involved in sound for film. Formed in 1928 as a partnership between Siemens and AEG (Telefunken), Klangfilm made top-notch amplification, speakers, preamps, EQs, and home entertainment equipment. By WW2, Klangfilm was wholly owned by Siemens, and the names Klangfilm and Siemens are often used interchangeably. Klangfilm stuff from the 50s and 60s is especially coveted.

The RZ062 was a tube EQ built for film mixing consoles. It was a three-band passive EQ with high and low shelving, plus either a midrange tilt EQ (the 062A) or a presence EQ (meaning upper midrange) with four selectable frequencies (1.4kHz, 2kHz, 2.8kHz, 4kHz) and up to 5dB of gain at the selected frequency (the 062B).

The 062 has some similarities to the REDD 37 console used by the Beatles on Rubber Soul, Revolver, Sgt. Pepper’s, and The White Album: the REDD 37 preamps were made by Siemens.

The RZ062 is an amazing EQ, remarkably smooth and gorgeous sounding, but it's very limited in its choices of frequencies, bandwidth, and overall versatility. Another common complaint: most of the gain controls move in 2dB increments, and often a setting is either too little gain or too much.

What Dan loved about it, aside from the overall character, was the presence EQ on the 062b that worked perfectly for electric guitars.

So, Dan got his hands on the schematics and basically built the circuit digitally.

This is the usual way we make plugins—we model things at a resistor, capacitor, transformer, transistor, diode level. But what we also do is figure out what we can do with that circuit in the digital realm that would be impossible or, at the very least, difficult to do in the analog realm.

Frankenklangfilm

In the case of the RZ062, Dan decided to take a passive EQ and make it fully parametric. This makes the AIP 4-band incredibly versatile, with the sonics of the original expressed in a modern way. The AIP EQ can do anything a digital parametric EQ can do, from narrow deep cuts to ultrawide boosts, making it useful for anything from getting rid of hum and notching out vocals to finding the exact sweet spot on a snare to gentle "airband" style enhancements. The gain is adjustable out to a ridiculous 36dB of boost and cut, and we've even modeled some EQ curve goofiness that can happen with vintage passive equalizers.
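To give a flavor of what "fully parametric" means under the hood, here's the standard digital peaking-EQ band, the Robert Bristow-Johnson "Audio EQ Cookbook" biquad, which is the generic building block any digital parametric EQ is conceptually made of. To be clear, this is textbook DSP, not the AIP's actual code:

```python
import math

def peaking_biquad(fs: float, f0: float, gain_db: float, q: float):
    """Peaking-EQ biquad coefficients (RBJ Audio EQ Cookbook), a0-normalized.
    fs: sample rate, f0: center frequency, q: bandwidth control."""
    a_lin = 10 ** (gain_db / 40)              # square root of the linear gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    b = [1 + alpha * a_lin, -2 * cos_w0, 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * cos_w0, 1 - alpha / a_lin]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def gain_at(f: float, fs: float, b, a) -> float:
    """Magnitude response in dB of a biquad (b, a) at frequency f."""
    w = 2 * math.pi * f / fs
    z1 = complex(math.cos(w), -math.sin(w))   # z^-1 evaluated on the unit circle
    num = b[0] + b[1] * z1 + b[2] * z1 * z1
    den = a[0] + a[1] * z1 + a[2] * z1 * z1
    return 20 * math.log10(abs(num / den))

# A presence boost in the spirit of the 062B's 2.8kHz setting (Q is made up):
b, a = peaking_biquad(48000, 2800, 5.0, 1.4)
print(round(gain_at(2800, 48000, b, a), 1))   # -> 5.0 dB at the center frequency
```

Because f0, gain, and Q are all free parameters, one band like this can be a surgical notch or a gentle wide shelf-ish bump, which is exactly the versatility a fixed-frequency passive design like the original RZ062 can't offer.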

Is it an exact recreation of an RZ062B? No, but at certain settings it can precisely replicate the response curves of the original. We consider it more the Klangfilm's mutant cousin. Frankenklangfilm.

One thing that hasn't changed from the original, however, is the tube/transformer input and output stages, which are a big reason the AIP EQ is so sweet sounding. The original circuit design tends to saturate the transformers a bit. The result is that the input signal is harmonically enhanced feeding into the equalizing circuitry, and then the EQ'd signal is rounded off a bit by the output.

So, that's the quick version of what's going on with the EQ on the AIP. If you want to grab an AIP Demo, click here.

Amazing Interview

Gearspace did an interview with Tchad a few years ago. It's detailed, funny, and he gives away the store and the secrets.

I have been sick with a summer cold and tinnitus all week, and I'm behind on answering a bunch of you that wrote in. I'll get back to you all this week. It's always a delight writing New Monday and hearing from you guys.

Next week I think we need to do a survey about how I can make New Monday better and more useful for you.

Warm regards,
Luke@KorneffAudio.com

Cutting to the chase, here’s the video I made about using the EL Juan Limiter to improve the sound of a live recording of Jimi Hendrix playing Johnny B Goode.

My friend Steve had an 8-track (remember those in Grandpa’s basement? Or perhaps your own?) of Hendrix in the West and he always played it at parties at his house. I could listen to this album for hours.

Originally I just wanted to get a decent-sounding version with some video of the actual performance up on YouTube, but then I decided to run the audio through our El Juan Limiter and the results were so good (and so quickly achieved) that I thought I would make a video out of it for you all.

The video ended up taking HOURS because I kept screwing up and restarting and on and on and on...

Here’s the big takeaway: Dan and I use our plug-in backwards. Very often we start off on the back panel, making some moves that affect the entirety of the plug-in’s response, and then we go to the front panel and make specific tweaks.

This actually makes total sense. Imagine you’re going to do an analog mix. Before you put in a single EQ or compressor, the first thing you’re going to do is choose the console on which you’d like to mix. Big warm Neve? Punchy API? Snappy, crunchy SSL? Same thing with our plug-ins. There are controls that affect the whole enchilada, and they’re usually on the back.

Since we released the AIP, we’ve been getting the same question over and over again: Should I put the Compressor before or after the EQ?

This question goes waaaaaay back in time, to when engineers first started patching in multiple processors on a channel while beating a mammoth to death with a stick. And the answer now is the same as it was back then: It depends.


But it’s a really useless answer, isn’t it? You can answer ANY question with “It depends.” Do you like sex? It depends. Do you have five bucks I can borrow? It depends. Does this sound like a hit to you? It depends.

Today, you’ll get an actual answer to the question, “Should I put the Compressor before or after the EQ?”

Usually the Compressor is Before the Equalizer When You’re Tracking

Close to 90% of the time, when you’re tracking, the compressor will be before the equalizer. When in doubt, the compressor goes first.


Why? Three reasons:

1) Because it will be less work for you

If the compressor is first, when you change its controls, it won’t affect the settings of the EQ much, if at all. More gain feeding into an EQ doesn’t affect the way its knobs work. But a compressor’s main adjustment is its threshold, and input gain will always affect the threshold setting.

If you put the EQ before the compressor, then whenever you adjust the gain of a particular band of the EQ, it results in a change in the output of the EQ, which means more or less signal feeds into the compressor, and that will affect the threshold setting. If you are constantly tweaking an EQ, you'll be constantly adjusting the compressor threshold to compensate.

With the compressor first in the signal flow, you set its threshold and whatever other controls the compressor might have, and then you basically leave it alone. And then you can screw around with the EQ all you want and you won’t have to touch the compressor.
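If you like numbers, here's a toy Python sketch of why this happens. The compressor model and all the dB values are made up for illustration - this is simple static dB math, not any particular compressor:

```python
# Toy static compressor (dB domain, hypothetical numbers) showing why
# upstream gain changes force you to re-tweak the threshold.

def compress(level_db, threshold_db=-20.0, ratio=4.0):
    """Output level of a simple static compressor, in dB."""
    if level_db <= threshold_db:
        return level_db  # below threshold: signal passes untouched
    # above threshold: the excess is reduced by the ratio
    return threshold_db + (level_db - threshold_db) / ratio

signal_db = -10.0  # incoming peak level

# EQ AFTER the compressor: boosting the EQ doesn't change what the
# compressor's detector sees, so its gain reduction stays the same.
comp_then_eq = compress(signal_db) + 6.0   # +6 dB EQ boost after

# EQ BEFORE the compressor: that same +6 dB boost shifts the level
# hitting the detector, so the compressor clamps down harder.
eq_then_comp = compress(signal_db + 6.0)

print(compress(signal_db))  # -17.5 dB with no EQ at all
print(comp_then_eq)         # -11.5 dB: the full +6 dB survives
print(eq_then_comp)         # -16.0 dB: only +1.5 dB of the boost survives
```

The EQ-before version loses 4.5 of its 6 dB of boost to the compressor clamping down harder; the EQ-after version keeps the whole boost, and the compressor settings never moved.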

2) Compressors can lessen the need for EQ

Let’s say you’re working on a kick drum, and the sound is missing some attack and thud. It’s missing that “cut.” The kick’s transient has a lot of frequency content, much of it happening somewhere in the upper midrange, anywhere from 2kHz to 8kHz, and the thud - that “dead body falling off a balcony onto a carpet” sound - is down anywhere from 50Hz to 150Hz. Yes, you could sweep around with two bands of EQ and dial in some attack and thud... or you can run the kick through a compressor (might we recommend the Korneff Audio Pawn Shop Comp for this...) and get the attack and thud, and some added punch, just by setting the compressor right. If it still isn’t what you’re looking for, then you can throw an EQ on after the compressor, and fart around a bit until you have the sound you’re looking for.

The same goes for guitars, vocals, bass, etc. Usually the compressor first will even the sound out, fix a few issues, and the net result is less need for equalization.

3) Because you can compensate for the frequency response of the compressor

Compressors tend to change the frequency response of the signal a bit. Mash something pretty hard with a compressor and you’ll lose some high and low end typically, but even patching a signal through an 1176 that’s in bypass will do something to the sound. With the EQ after the compressor, you can adjust for the changes in frequency response caused by the compressor.

So, when you’re tracking, you probably want the compressor first. Unless you want it last when tracking... because... it depends.

EQ First to Fix Big Problems

KAAAAAHNNNN! Compress the bass!

You’re in the studio, recording a bass player, and his C on the 3rd string 3rd fret is really loud for some reason—crappy bass, neck resonances, crappy bass player with crappy technique, etc. When he plays, it sounds like a a a a g# g# g# e e C C C C a a a. Damn that resonant C to hell!

You put a compressor on it, drop the threshold down, get a nice bit of click to bring out the attack, and it evens the dude’s playing out until he hits that damn resonant C. And then the compressor smashes the crap out of things because that one note is so much louder than every other note. And if you set the threshold higher so it doesn't hit the C that hard, then the compressor does next to nothing on all the other notes. Damn that resonant C to hell!

What you have to do is bring down that loud ass damn resonant C using an EQ first, and THEN run the signal through a compressor. Patch in a parametric EQ, set it to a narrow bandwidth (say 1/4 or 1/8 of an octave), set the frequency to 65Hz, and cut by 6dB or so. Patch the EQ into the compressor, and now the compressor will respond to the signal much more consistently.

Where did the 65Hz number come from? That is the frequency of a C, 3rd string 3rd fret, on a four string bass that is tuned to A 440Hz.
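If you want to find these problem frequencies yourself, it's just equal-temperament math: every semitone multiplies frequency by 2^(1/12), anchored at A4 = 440Hz. A quick sketch (standard tuning math, nothing plug-in specific):

```python
def note_freq(semitones_from_a4):
    """Equal-temperament frequency in Hz, relative to A4 = 440 Hz."""
    return 440.0 * 2 ** (semitones_from_a4 / 12)

# The open A string on a 4-string bass is A1, 36 semitones below A4.
# The 3rd fret raises it 3 semitones, to C2: 33 semitones below A4.
print(note_freq(-36))            # 55.0  (open A string)
print(round(note_freq(-33), 2))  # 65.41 (that damn resonant C)
```

So "65Hz" is really 65.41Hz; with a 1/4-octave notch you'll catch it either way.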


So, if there is something in the frequency response of a signal that is excessive, then an EQ first is handy to nip out the crap before compressing it.

Cleaning up problematic sounds before compression is also handy for getting control over woofy sounding kick drums, spiky sounding cymbals and hi-hats, and midrange heavy vocals. You’ll find very often with vocals that high pass filtering them, or cutting, say, everything below 300 Hz with a shelving EQ (like 2 or 3dB worth of cut - doesn’t need to be a lot) will actually help the compressor work more effectively across the rest of the signal.

Compressor Last on the Stereo Bus, Sub Groups, or when Mastering

Probably all of you put a compressor across the stereo bus, or the master bus, depending on what you call it, for your final mixes. You might also be putting a compressor across each of your sub mix busses (the guitars bus, the keyboard pad bus, the vocals bus, etc.). This is actually a common application for the AIP.

In these applications, the compressor last in the effects chain seems to work better. Perhaps it has to do with how the EQ "pushes in" to the compressor. There's a whole bunch of vague things I could write here, but the point is this: compressor last sounds better.

Ok, here is my theory: if you’re mixing, you’ve probably already fixed most of the glaring problems in the recording. You’ve already compressed tracks to even out performances, you’ve already gotten rid of resonances with EQs, you have levels balanced, you’ve done your automation, and final compression is more about giving the whole mix a sound and feeling than it is catching hot moments in the levels.

When the compressor is last, the separation between the elements of the mix is clearer. I notice more details and overall I can "see" into the mix a bit better. With the EQ last in the mix chain, I've noticed that the whole mix is thicker, but more smushed together and sonically homogenized.

Compressor last in the mix bus chain also seems to tighten up the mix rhythmically - the "glue" people talk about. This is because the main rhythmic elements, typically the kick and snare, are the loudest parts of the overall mix, so they hit the compressor hardest and "drive" it a bit; the overall mix dynamics change in time with the kick and snare pushing the compressor, and the mix sounds tighter. Imagine if you grabbed the master fader and moved it a tiny bit on beat—you’d get a rhythmically tighter mix. See how that works? You can even play with your EQ settings a bit to make a particular frequency range sort of “lead” the compressor.

This is a good thing to experiment with, whether you're mixing in the box or working hybrid. Flip the compressor and EQ around in the bus chain and see what sounds best to you. My rule of thumb, though, for overall bus processing, is to put the compressor at the end of things.

Filters -> Compressor -> EQ

Filters, EQ, a Compressor... the Amplified Instrument Processor (the AIP) is your huckleberry.

Often things can get recorded that are beyond the range of speakers to reproduce, and often beyond the range of ears to hear. Low end thumps, perhaps caused by a vocalist taking a step while singing, or a resonant rumble caused by an air-conditioner, can get recorded and can be really loud, but basically unheard while you’re working on the track because your speakers just can’t quite get down there. But even though those low sounds can’t be heard, they still travel through your signal chain, and power and dynamic range are used up as equipment tries to reproduce a basically inaudible signal. This results in moments of distortion and overload that cause problems in audible frequency areas. Loud high end signals can do the same thing.

High and Low pass filters were originally put on console channels to deal with this sort of problem, and you should be using them to clean up crap that doesn’t belong. Reaching into the bottom end with a High Pass filter and getting rid of excessive lows, especially on instruments that simply don’t have significant information down there, will often tighten things up and make room for the instruments that do need authority in the low end. And the same thing goes for the high end: Low Pass off instruments and sounds that don’t extend meaningfully up top.
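Here's a minimal sketch of the idea in Python: a one-pole high pass (a real console filter would be steeper, 12 or 18 dB per octave), with a made-up 30Hz "air-conditioner rumble" underneath a 1kHz tone we want to keep:

```python
import math

# One-pole high-pass filter: the simplest possible "reach into the
# bottom end and get rid of excessive lows". Cutoff, sample rate, and
# signal levels are all illustrative numbers.

def highpass(samples, cutoff_hz, sr):
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sr
    alpha = rc / (rc + dt)
    out, prev_x, prev_y = [], samples[0], 0.0
    for x in samples[1:]:
        y = alpha * (prev_y + x - prev_x)  # passes fast changes, blocks slow ones
        out.append(y)
        prev_x, prev_y = x, y
    return out

SR = 8000  # one second of audio at 8 kHz
rumble = [math.sin(2 * math.pi * 30 * i / SR) for i in range(SR)]
tone = [0.5 * math.sin(2 * math.pi * 1000 * i / SR) for i in range(SR)]
mixed = [a + b for a, b in zip(rumble, tone)]

# Filter at 100 Hz: the rumble is knocked way down, the tone passes.
filtered = highpass(mixed, 100, SR)
```

The filtered signal's peaks drop from roughly 1.5 to well under 1.0, even though the part we actually wanted (the 1kHz tone) is barely touched: that reclaimed headroom is the whole point.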

In fact, the old school way of doing things, which is still a good idea (and exactly how Dan Korneff, me, and loads of engineers approach a mix, incidentally), is to start by setting up pass filters on every channel and getting rid of what isn’t needed. You might be thinking that you need all the lows and highs of every instrument, and if you were to listen to individual channels solo’d out that might be the case. But in a mix, it all blends together, and space has to be shared. Bright keyboards with lots of high end will clash with the highs of vocals. Decide which part deserves the space and cut accordingly.

Once you get rid of the crap, run things into the compressor to even out the performance and perhaps add a bit of attack, then give it some polish with EQ last in the signal chain.

Depends = Adult Diapers

Remember that in all creative things, the main rule is that there are no rules. In audio, what sounds best is best. If you always put your EQ first, compress after, and it sounds great, then excellent love sandwiches for you. I write these things mainly to give you ideas and inform your thinking, never to pin you down with rules and dogma.

So it does depend.... but usually the compressor goes first! Or last!

If you've been using our Pawn Shop Comp, you might be using it backwards. And if you haven't got the PSC yet, click here, get the demo and follow along with this blog post: you'll learn some good stuff.

A few days ago, Dan and I were chatting about audio (whatever), and he described in detail his approach to using the Pawn Shop Comp. It's completely opposite to the way most engineers probably use it. And since Dan built the PSC, it certainly makes sense that he knows it better than anyone. He also uses it a lot — typically it's on 40 to 70% of the channels in his mixes.

So, this is Dan's approach, 180 degrees in the other direction, and I'll tell you exactly what he does.

So much to tweak, so little time!

1. Flip it Around to the Other Side

The first thing Dan does is hit the nameplate and flip the Pawn Shop around to the back. He starts off completely ignoring the "comp" of the Pawn Shop Comp. Instead, he starts by adjusting the back, thinking of the PSC as a channel strip rather than a compressor. And that makes sense, because the preamp and most of the back panel goodies are pre-compressor in the signal flow.

2. Goof Around with the Resistors

On the back panel to the right, there are switchable resistors. Just by swapping in different resistors, you can adjust the high end frequency response and some of the saturation characteristics of the PSC. Dan and I both have an "old time" engineering philosophy, which is, "Start by getting rid of what you don't like." The resistors allow you to subtly tailor the high end of your track before you even touch an EQ.

Metal Film resistors are modern components, and have the brightest, least colored sound. Switch to these when you need highs with a lot of sheen, such as on vocals, or ride cymbals, strings, pianos, etc.

Carbon resistors are darker, with Carbon Composite being the darkest. Use these when you want to round off the highs. They work well to tame nasty cymbals and hi-hats, smooth out vocals that were cut on cheaper condenser microphones (which can often make them sound spitty), take some of the high end "chiff" off electric and acoustic guitars, etc. You'll also tend to get a different flavor to the saturation — more on this as you read...

3. Play with the Preamp

Off to the left is the PREAMP GAIN. You'll notice that it is already turned up a little bit even before you start to adjust it. Take this as an invitation to adjust it some more.

As you turn it up, you'll start to overdrive the preamp a bit. Depending on the signal you're passing through the PSC, you might not hear much of a difference, but the more clockwise you go, the more you'll hear it, as you push the preamp into saturation and eventually distortion.

Quickly explained, when you turn up the gain too much on something like a tube or a transistor, you generate harmonic distortion. And to our ears, a little harmonic distortion sounds good - we'll call this saturation. And, sometimes, a hell of a lot of harmonic distortion, like when you overload a guitar amp, sounds good. We typically call this distortion... uh... distortion.

Think of it like toast: getting the bread a golden brown is saturation, and burning it a bit is distortion. In audio, we usually prefer the toast a little brown — it's better than white bread.

What the saturation is adding is, essentially, high harmonics that are mathematically related to the signal passing through the preamp. An easier way to think of it: saturation is kind of an upper midrange to super high end equalizer. SO... turning up the PREAMP gain is like making the toast golden brown by adding high end.
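If you're curious what "saturation adds high harmonics" looks like in actual numbers, here's a rough pure-Python sketch: a sine wave through a tanh() curve standing in for a driven preamp. The drive amount and frequencies are illustrative only - this is not the PSC's actual circuit:

```python
import math

SR = 8000          # sample rate
F0 = 100           # input frequency (divides SR evenly, so bins land exactly)
N = SR // F0 * 8   # eight whole cycles

def tone_mag(x, freq, sr):
    """Magnitude of one frequency in the signal (simple DFT correlation)."""
    re = sum(s * math.cos(2 * math.pi * freq * n / sr) for n, s in enumerate(x))
    im = sum(s * math.sin(2 * math.pi * freq * n / sr) for n, s in enumerate(x))
    return math.hypot(re, im) / len(x)

clean = [math.sin(2 * math.pi * F0 * n / SR) for n in range(N)]
driven = [math.tanh(3.0 * s) for s in clean]  # crank the "preamp gain"

# The clean sine has essentially nothing at 300 Hz (its 3rd harmonic);
# the saturated version has clearly measurable energy there.
print(tone_mag(clean, 300, SR))   # ~0
print(tone_mag(driven, 300, SR))  # clearly non-zero
```

Because tanh() is a symmetric curve, the new energy lands on the odd harmonics (300Hz, 500Hz, and so on above a 100Hz input) - all mathematically related to the signal, just like the text says.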

Whatever. Saturation of vocals gives them a beautiful sweetness, or a nasty ass snarl, depending on your settings. On things like drums, saturation sort of acts like a compressor with an infinitely fast attack, and it rounds out the transients. Cymbals will go from a sharp "ting" sound to a smoother "pwish" sort of sound. Same thing happens on guitars and snare drums. Careful with a lot of saturation on a kick drum — too much and it will lose some of its punch in the mix.

Now, the PREAMP BIAS control... this is sort of like "What if we plugged the toaster directly into the power grid." Not really, but, kind of.

Bias, simply put, adjusts how well something works. If you make something work harder than it is designed to work, you'll get a lot of power out of it, but the results can be unpredictable, and in the real world, you'll burn it out.

With the PSC, playing with PREAMP BIAS won't blow anything out, but if you crank it up you can get insane amounts of distortion. Use PREAMP BIAS to get fuzz on basses, or add additional distortion to guitar tracks, or make a vocal sound like Satan backed a monster truck over the singer's face. Whatever - play with it! It's fun.

Don't be afraid to add saturation to any and all of your channels. The BIG SECRET to those amazing sounding vintage records that you love is that there is saturation ALL OVER THE PLACE. I used to hit the 24 track tape super hard when tracking, basically adding tape compression to everything (tape compression = fancy word for saturation), and then the individual channels might be driven a bit too hard (if it sounded good). Some audio channels click and sound awful when you push them too hard (I loved Trident consoles, but if you overdrove the mix bus even a little it would sound like shit; the SSLs and Neves, not so much). And then there would be more damage done by compressors, mastering, etc. There's a reason Dan puts the PSC on so many channels, and this is it.

4. Tweak Them Tubes!

As he messes with the PREAMP, Dan also plays with the tubes — there are three different models to choose from. Each has its own gain structure and saturation characteristics.

The 12AX7's are the default, and they have a nice distortion to them, which is why they're often used on guitar amps. The ECC83's have a lot more gain, and they respond to an instrument's frequency response very differently than the 12AX7's. Switch between the two and see what you like better. The differences in sound will become much more pronounced the more gain you have.

The 5751 tubes have much less gain, are much rounder in the high end, and sort of smear the transients out. Switching to these will lower the gain through the PSC and give it an overall more vintage sort of vibe. Think vocals that need a bit of taming, synths that are harsh and remind you of your mom yelling — 5751, mom!

5. Transformer Time!

Transformers are a HUGE part of the sound of a piece of analog equipment. It's not uncommon for vintage mic preamps to have loads of them — my Quad Eight MM61 mic pre's have a whopping EIGHT transformers per channel, and those transformers are intrinsic to the sound of them.

Without going into a lot of detail, transformers can saturate like a preamp can, but the saturation is very different. The harmonic distortion added is at lower frequencies, and the more you clobber a transformer.... this is hard to describe... it sort of makes the signal kind of slower and mushy? I can't describe it really, but you can definitely hear it.

You can't really overload a transformer directly - you couldn't on vintage gear either. If you're running a lot of gain through the system, the transformers will overload, but the overload/saturation characteristics are very frequency dependent. On the PSC, as you play with preamp gain, you'll automatically affect the transformers.

Dan uses the transformers to contour the bass response of the PSC. Now, depending on the amount of gain you have happening and the type of instrument you're processing, you might find it very difficult to hear the effect of the transformers. I always switch them around, and sometimes it makes a difference, and sometimes I'm just flipping shit around doing nothing. It's always worth a try, though.

NICKEL - this is the most modern sounding of the transformer types, the least colored. This, with Metal Film resistors and the preamp set as low as possible, will give you a very clean, wide sound. On things that you don't want colored, Nickel is your choice.

STEEL - steel transformers pull warmth out of the signal, and if you overload it, it tends to tighten things up and make things a bit more.... forward? Bright? Again, hard to describe. I switch to Steel when I want things to cut a bit more in the mix. Flubby kick? Try Steel. Shitty drummer? Fire him!

IRON - Iron is probably the easiest to hear and has the most pronounced effect. I hear it as a lift to the bass and a thickening of the lower mids. Bass is a natural use, as well as on guitars and vocals.

So far, we've done all sorts of processing without touching an EQ or a compressor. In effect, we are "custom building" a channel to fit our signal by switching around components and adjusting gain, very much the way a console designer would develop the sonic signature of a recording console, or a preamp. The PSC backside lets you pretend you're Rupert Neve or some guy like that. Now, to be clear, you aren't Rupert Neve, and the PSC gives you a lot of control, but not the control that an actual console designer might have. However, in terms of what you can do within your DAW and without getting electrocuted, the PSC is amazing.

6. EQ EQ

This blog post is getting too long - I'll have to make a CliffsNotes version of it, but we are almost done.

The PSC has two bands of EQ built into the PREAMP. Both EQs are wide bandwidth peaking EQs, with response curves similar to console EQs from the late '60s and early '70s. They're very smooth, they don't have a huge amount of gain, and they sound kind of like a cross between an old Neve EQ (like a 1073) and a Quad Eight or a Sphere or an Electrodyne EQ — or something from the '70s, made on the West Coast of the US.

Dan and I both think of them more as level controls than EQs. What I mean by this is that if you turn up WEIGHT, you'll lift up a pretty large area of the bottom end. You can't use WEIGHT to really pick out the thud of a kick, but if the entire kick sound is anemic and weak down there, WEIGHT will add... uh, weight. Cutting it gets rid of mud. There are two frequencies to choose from. We usually switch between them and go with what sounds best.

As your mix builds up, remember that you can go to WEIGHT on specific channels and pull some bass out of things to keep the low end from getting flabby — I'm looking at you, guitars and tom toms and drum room sounds.

FOCUS is a midrange lift. Now, the area that it covers, Dan has noticed, is an area that a lot of engineers are scared to EQ. And rightly so; it's dead in the middle of things and too much in there sounds honky and stupid. But FOCUS is very smooth, doesn't have a lot of gain available, and it works really well to sort of push a track out in the mix or pull it back. Again, we think of it as a level control, and not as an EQ.

WEIGHT and FOCUS are really well named. Dan's idea.

AND WE ARE DONE

Quick recap — the CliffsNotes version: Switch around to the back, try different resistors, adjust the preamp and the tubes, experiment with the transformers, dial in the WEIGHT and FOCUS. Get it sounding good and then...

Switch around to the front and mess around with the COMPRESSOR!!!!

GAHHH!!!! More controls!! Time for a bath!