A Post-Production Primer: Mixing & Mastering

Having engineered at some of the top music studios in New York City, New Jersey, Miami, and Atlanta, and currently serving as head of Artist Development, Sync Manager, and lead A&R at Water Music Publishing, Frank Demilt has a 360-degree view of what it takes to get your recording up to the level it needs to be for a successful commercial release. Demilt’s insights and advice on mixing and mastering in the following presentation are taken from his new book The Blueprint: The Bible For Becoming a Successful Performing Artist in the Digital Age (Amazon).


The Mixing Process

There are two options when looking to get your song mixed. You can do it yourself or send your session to a mixing engineer. If you send it to a mixing engineer, there are two ways you can send your session:

Your vocals over a 2-track

Full tracked-out stems

It’s important to note that tracked-out stems create more work for the engineer and will cost more money. A mix can cost anywhere from $20 up to thousands of dollars for the top mixing engineers. Getting a $20 mix may not produce the best sound, but if this is all you can afford, spend the $20 to get an engineer to do the mix, especially if you don’t understand the mixing process.

What if you do the mix yourself? If this is your choice, I will applaud you for your confidence, but there are some things to know before diving headfirst into mixing the song yourself.

First, if you haven’t done so already, label your tracks clearly and consistently. Believe me, if you come back to the session two months later you’ll have no idea where the accent hi-hat is if the track is labeled “Audio track 48.” Once your tracks are properly named, color code your track groups. For example, make your drum tracks yellow, vocal tracks blue, guitars green, and so on. There are no right or wrong color choices; this is strictly a matter of preference, but it is extremely helpful in the long run when you’re looking for specific groups and instruments.

The key to a good mix starts with the balance. This means leveling out the volume of each individual track in your session. If you’re using a 2-track, set its volume around -10 dB. This will give you enough headroom to properly balance your vocals against the instrumental without distorting. When balancing your vocals, your lead should be the loudest, the stacks tucked underneath, the harmonies surrounding the lead, and the ad-libs lowest in volume and panned in either direction. A rule of thumb: all of your vocals played together should peak around -3 dB.
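If you want to sanity-check those numbers outside your DAW, here is a minimal sketch in Python. It assumes the numpy and soundfile packages and a hypothetical bounce file; none of this tooling comes from the book:

```python
# Minimal sketch: check the peak level of a bounced vocal bus in dBFS.
import numpy as np
import soundfile as sf

audio, rate = sf.read("vocal_bus.wav")   # hypothetical bounce of all vocals
peak = np.max(np.abs(audio))
peak_db = 20 * np.log10(peak) if peak > 0 else float("-inf")
print(f"Peak level: {peak_db:.1f} dBFS")  # aim for around -3 dBFS
```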

In most cases, the vocals are the key component of your song, and you want them to be featured front and center. In mixing lead vocals, there are four general areas that will enhance your lead vocal sound:

Clean up the low-end using a high-pass filter to put your vocal out front. This will not only clean up the direct low-end, but also knock out any low-end room noise.


Carve out space for the vocals with small frequency cuts in the instruments where they are fighting with the vocals.


Make the vocal present. This can be done by focusing on the middle and high-end frequencies.


Smooth out the vocal by EQing the mid-range; this is where you create vocal clarity without affecting the heart of the vocal sound.

Gain staging is the first step in the post-production process. It is important to go through each syllable in the vocals and match its volume to the rest of the track. This matters because if the listener can’t hear or understand certain words, it can be a huge deterrent to continued listening. Always remember: the first line of the song is the most important, because it is what draws the listener in.
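To make the idea concrete, here is a rough Python sketch of what riding clip gain accomplishes: measure short windows and scale each toward a target level. The window size, target, and gain cap are illustrative values, not a prescription from the book, and a real pass would smooth the gain changes to avoid clicks:

```python
# Rough sketch of automated clip-gain leveling on a mono float array.
import numpy as np

def level_segments(x, rate, target_db=-18.0, window_s=0.05, max_gain_db=6.0):
    target = 10 ** (target_db / 20)
    win = int(rate * window_s)
    out = x.copy()
    for start in range(0, len(x) - win, win):
        seg = x[start:start + win]
        rms = np.sqrt(np.mean(seg ** 2))
        if rms > 1e-6:                      # skip near-silent windows
            gain = min(target / rms, 10 ** (max_gain_db / 20))
            out[start:start + win] = seg * gain
    return out
```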

EQ is the next step in the mixing process and is used to subtract problem frequencies and accentuate others so sounds cut through the mix. Finding the best EQ comes from sculpting the vocals so the leads and backgrounds each have their own space. The “best” mix comes from listening to all of the session’s sonic elements and creating frequency space for each sound. Here are a few basics for equalization:

EQ the dominant frequencies.


Cut for uniqueness. Instead of boosting frequencies, cut frequency bands.

When EQing, cut first, boost second.


Don’t attempt to create something that isn’t there; you can only work with the elements you have.


There is no fix for a bad vocal recording.

If too much of a frequency is removed, it becomes audibly noticeable to the listener; it’s best to use a wide boost in a similar frequency range so the listener doesn’t notice. My favorite way to begin the EQing process is to engage a high-pass filter (HPF). I start around 120 Hz as a baseline. You might need to set it higher later in the mixing process if the low-end frequencies of your vocals are cluttering the frequency range of your kick and bass.
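For the curious, a high-pass filter like this is a one-liner in most DSP toolkits. A minimal sketch using scipy, which is my choice of tool, not one the author names:

```python
# A 120 Hz high-pass filter, as described above.
from scipy.signal import butter, sosfilt

def high_pass(x, rate, cutoff_hz=120.0, order=4):
    sos = butter(order, cutoff_hz, btype="highpass", fs=rate, output="sos")
    return sosfilt(sos, x)
```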

EQing vocals is a process that can only be judged with your ears, not your eyes. The human voice has key frequency ranges that create each person’s vocal tone characteristics. For example, the 100 Hz–300 Hz range affects clarity and can make a vocal sound thin when taken away, whereas the 10–20 kHz range can cause a harsh, brittle-sounding vocal. Remember, it is better to cut first and boost second. Here are some frequency ranges and their characteristics:

100 Hz–300 Hz: Clarity / Thin

100 Hz–400 Hz: Thickness

100 Hz–600 Hz: Body / Warmth

100 Hz–700 Hz: Muddiness

400 Hz–1,100 Hz: Honky / Nasal

900 Hz–4,000 Hz: Intelligibility

1,000 Hz–8,000 Hz: Presence

1,500 Hz–7,000 Hz: Sibilance

2,000 Hz–9,000 Hz: Clarity

5,000 Hz–15,000 Hz: Sparkle

10,000 Hz–20,000 Hz: Air / Breathiness

Sometimes the fix to your vocal frequency problem lies beyond the above ranges. For example, cutting in the 1,500 Hz–2,000 Hz range can also fix a nasal sound. Or your vocal could be too harsh, with heavy sibilance, so you need to tame the high-end frequencies by cutting them or using a low-pass filter to take them out completely. A technique I learned from Ty Dolla Sign’s producer: run all the vocals through a low-pass filter that removes every frequency above 15 kHz. This way they don’t sound too harsh, resulting in a fuller, rounder sound. This is all preference and is dictated by the vocalist and the genre. When you have your vocals shaped the way you want, dynamic control (compression) is next.
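The same tools give you that low-pass move. A sketch, again assuming scipy, that removes everything above roughly 15 kHz:

```python
# Low-pass trick described above: smooth the top end of a vocal.
from scipy.signal import butter, sosfilt

def smooth_top_end(x, rate, cutoff_hz=15000.0, order=4):
    sos = butter(order, cutoff_hz, btype="lowpass", fs=rate, output="sos")
    return sosfilt(sos, x)
```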

Audio compression is the process of taming a sound’s dynamic range by setting limits on how loud the signal is allowed to get. Compressors lower the louder sections (and, with makeup gain, bring up the quieter ones) to provide a consistent sound. The compressor’s ratio setting determines how hard the compressor works: the higher the ratio, the more the compressor affects the sound’s dynamic range. Dynamics refer to the space between the loudest and quietest parts of a sound.

Each compressor gives the incoming signal its own unique sound even before you change any settings. Every compressor has settings that include threshold, attack, release, input, and output. The threshold is the level at which the compressor starts working, meaning that until the incoming signal reaches the threshold, the compressor won’t activate. The input is the level of the sound going into the compressor, and the output is the level of the sound coming out. With vocals these two are usually inversely related: the higher the input, the lower the output, and vice versa, because if the level is louder coming in, it needs to be lower coming out to keep the overall level balanced. The attack and release settings determine how quickly the compressor’s gain reduction responds to changes in the input signal: attack dictates how fast the compressor reacts in reducing gain, while release dictates how fast the gain reduction resets. Be careful, though: too much compression makes the vocal hard to hear and you’ll literally have a squashed sound.
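If it helps to see those settings interact, here is a bare-bones compressor sketch in Python. It is a simplified illustration of threshold, ratio, attack, and release, not any particular plug-in; real compressors add knee shaping, lookahead, makeup gain, and their own color:

```python
# Bare-bones feed-forward compressor on a mono float array.
import numpy as np

def compress(x, rate, threshold_db=-18.0, ratio=4.0,
             attack_ms=10.0, release_ms=100.0):
    atk = np.exp(-1.0 / (rate * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (rate * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x)
    for i, sample in enumerate(x):
        level = abs(sample)
        # envelope follower: fast rise (attack), slow fall (release)
        coeff = atk if level > env else rel
        env = coeff * env + (1 - coeff) * level
        level_db = 20 * np.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gr_db = over - over / ratio if over > 0 else 0.0  # gain reduction
        out[i] = sample * 10 ** (-gr_db / 20)
    return out
```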

Dynamic sounds have a wide range between the quietest and loudest parts of the sound. For example, a snare hit has a fast and short peak (wide dynamic range), compared to an organ note that maintains the same level after its initial key hit (a less dynamic sound). Dynamics also exist within a vocal performance, such as the singer singing softer during the verse, then belting during the chorus. These dynamic swings can make it difficult to fit everything together in a mix.

Personally, the first type of compressor I like to use is a de-esser, unless there is unwanted background noise, in which case gating is my first step. Gating is a way to clean up the audio the microphone picks up when you aren’t singing: you set a volume threshold, and any sound that doesn’t reach it gets cut from the channel.
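A gate reduces to a few lines of code, which makes the threshold idea easy to see. A minimal Python sketch; the threshold and release values are illustrative:

```python
# Simple noise gate: mute anything whose envelope stays under the threshold.
import numpy as np

def gate(x, rate, threshold_db=-45.0, release_ms=50.0):
    thresh = 10 ** (threshold_db / 20)
    rel = np.exp(-1.0 / (rate * release_ms / 1000.0))
    env, out = 0.0, np.empty_like(x)
    for i, sample in enumerate(x):
        env = max(abs(sample), env * rel)   # peak follower with release
        out[i] = sample if env >= thresh else 0.0
    return out
```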

De-essing is used to get rid of harsh-sounding esses that come out when recording. Sometimes this can be accomplished through the EQing process, but sometimes the vocal sibilance needs extra taming. The De-esser will only compress the specific frequency range you set it for, allowing you to compress only the problem frequencies and nothing else.
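One common way to implement that frequency-targeted compression is a split-band design: isolate the sibilant band, turn it down only where it gets loud, and add it back to everything else. A rough Python sketch of that approach; the band edges and threshold are illustrative, and the band split here is only approximate:

```python
# Sketch of a split-band de-esser.
import numpy as np
from scipy.signal import butter, sosfilt

def de_ess(x, rate, lo=5000.0, hi=9000.0, threshold_db=-30.0, ratio=4.0):
    sos = butter(4, [lo, hi], btype="bandpass", fs=rate, output="sos")
    band = sosfilt(sos, x)                       # sibilance band only
    rest = x - band                              # everything else (approximate)
    env = np.abs(band)
    win = max(1, int(rate * 0.005))              # ~5 ms detector smoothing
    env = np.convolve(env, np.ones(win) / win, mode="same")
    level_db = 20 * np.log10(np.maximum(env, 1e-9))
    over = np.maximum(level_db - threshold_db, 0.0)
    gain = 10 ** (-(over - over / ratio) / 20)   # reduce only the excess
    return rest + band * gain
```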

Now that you have your EQ and compressors set, it’s time to add efx. The sound you’re going for will dictate which efx you use and how much. Reverb and delay are just the tip of the iceberg but are considered the most essential. Why? Reverb and delay are naturally occurring efx that we hear whenever any person speaks to us. Using these efx can be simple or complex. Used correctly, nobody may even notice they are there, and you can create a pleasing sonic experience. Used incorrectly, you can clutter the entire mix, jumbling multiple sounds together and causing an unenjoyable listening experience.

Reverb is the reflection of sound within an environment, heard after the initial sound is broadcast. The first sound reflection is considered an echo; after that, the remaining reflections are called reverberation and last until the energy of the sound waves dissipates. In music terms, reverb is an effect used to create depth, add emotion, and soften sounds within the song. By using reverb, you’re altering the voice’s unique sound and affecting its timbre. Reverb can work wonders on your mix, but it can also hinder it greatly if not used correctly: too much reverb creates a swell of sound that can cover up the complementing instruments and vocals in the song.

Reverb plug-ins and consoles have settings for selecting the frequency ranges you want the reverb added to. For example, on the reverb in my vocal chain I roll off all frequencies below 200 Hz and above 5 kHz, which gives me a clear tone from the reverb. Otherwise, the reverb can exaggerate the same extreme low and high frequencies you worked to eliminate with EQ.
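That band-limiting move is easy to sketch: filter the send before it reaches the reverb. Assuming scipy again:

```python
# Band-limit a reverb send: roll off below 200 Hz and above 5 kHz.
from scipy.signal import butter, sosfilt

def filter_reverb_send(x, rate, lo=200.0, hi=5000.0, order=2):
    sos = butter(order, [lo, hi], btype="bandpass", fs=rate, output="sos")
    return sosfilt(sos, x)
```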

Delay is another naturally occurring phenomenon when speaking. Sometimes described as an echo, delay develops as sound waves bounce off surfaces and arrive at your ear after varying lengths of time. Delay has its place in music and on vocals; however, the right amount is a personal and stylistic preference. Too much delay can be jarring to the listener, who will be hearing sound in the background well after the initial sound has ended.
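Under the hood, a delay effect is just a buffer that feeds a scaled copy of the signal back into itself. A minimal Python sketch; the times and levels are illustrative:

```python
# Minimal feedback delay: each repeat arrives delay_ms later, a bit quieter.
import numpy as np

def feedback_delay(x, rate, delay_ms=250.0, feedback=0.35, mix=0.25):
    d = int(rate * delay_ms / 1000.0)
    out = np.copy(x).astype(float)
    wet = np.zeros(len(x) + d)
    for i in range(len(x)):
        wet[i + d] += x[i] + wet[i] * feedback  # echo plus feedback of echoes
    out += mix * wet[:len(x)]
    return out
```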

Mixing background and ad-lib vocals differs from mixing the lead vocals. You want the lead to stand out from the rest, as this is the main vocal of the song. Your backgrounds (or stacks) are there to support your lead. These vocals don’t need to be fully heard, but they should be audible. The stacks should have tighter compression and a different EQ setting so as not to interfere with the lead. The ad-libs are vocal efx that should be separated from all other vocals in the song. They need to be heard but should never overpower the other vocals. I like to put a telephone efx on the ad-libs to separate them so they can be heard without interfering. All engineers and artists have a unique perspective on how ad-libs should sound, and this is usually an artist’s preference. Be careful with ad-libs: too many can take away from the lead and make the song cluttered and busy, while too few can leave too much space in the song if your lead vocals have a lot of breaks.

Mixing your background vocals to fit in the track with your lead can be tricky. If the backgrounds are too loud, you won’t be able to hear the lead, and nobody will sing along with the song. Mix them too low and the beautiful harmonies and emphasized phrases are no longer heard. Think of backgrounds and harmonies as vocal ear candy, a way to give the listener something different to listen to instead of just the lead vocal for four to five minutes. Once your vocals are set in the mix, it’s time to move to the instruments.

Before moving on to the instruments, it’s important to note that balance is the most crucial aspect of a mix. Every mixing engineer has their own workflow: some will mix the instruments before the vocals, while others will mix the vocals first. From there, each engineer has a preference as to which instrument group, and which specific instrument, they start with. I like to start with the vocals, because they are the most important part of the song. After the vocals, I begin with the drums, because they are the backbone of the track.


Mixing The Instruments

Start by setting the snare fader at 0 dB and bringing the rest of the drum mix in around it. The snare is the beat’s foundation, and typically one of the loudest elements in the mix.

Next, bring the kick fader up until it sounds almost as loud as the snare. It should be loud enough so that the low frequencies are rich and powerful, but not so loud that it masks the bottom-end of the snare drum.

Then, bring in the toms. These can be almost as loud as the snare if they’re used sparingly, but if they’re heavily featured, they should sit a little further back in the mix.

Last, bring in the cymbals, overheads, and room mics as needed. The level of these tracks will vary from genre to genre, but they should definitely all be used to support the featured drums, not overpower them.

One key component of balancing the drum mix is panning. Use the pan knob to add separation between the toms, widen the overhead mics, and add depth to the room mics. Make sure to check your mix in mono frequently; you never know where your track will get played, and you want it to sound good in every format. If it sounds good in mono, it will sound great in stereo. The reverse is not always true.
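A quick way to audit mono compatibility outside the DAW is to fold the mix down and compare levels; a large drop hints at phase cancellation between the channels. A sketch using numpy and soundfile, with a hypothetical file name:

```python
# Fold a stereo bounce to mono and compare rough RMS levels.
import numpy as np
import soundfile as sf

stereo, rate = sf.read("mix.wav")            # hypothetical stereo bounce
mono = stereo.mean(axis=1)                   # fold down: (L + R) / 2
rms = lambda s: 20 * np.log10(np.sqrt(np.mean(s ** 2)) + 1e-12)
print(f"stereo RMS: {rms(stereo):.1f} dB, mono RMS: {rms(mono):.1f} dB")
```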

Once the drums are balanced, bring in the bass. This can be tricky because of the low-end similarities of its frequencies with the kick drum. The bass should be loud enough that the low end is big and powerful, but not so loud that it overpowers the kick drum. Always check your reference mixes to make sure you’re staying on course.

A second bass aspect you will undoubtedly come across in today’s music is the 808. The 808 is technically considered a bass drum; however, in some genres the 808 is used more as the bass than a drum. How you mix your 808 is going to depend on whether it’s acting as a bass or secondary kick drum. Creating good low-end separation between these three instruments can be difficult. They all occupy the same frequency range and can cause a low-end buildup that muddles/overpowers the whole track. High-pass filters and compression are going to be your best friends here.
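One common compression move here is sidechain ducking: dip the 808 or bass whenever the kick hits so the two stop fighting over the same low end. A simplified Python sketch; the depth and release values are illustrative:

```python
# Duck the bass/808 with the kick as the sidechain trigger.
import numpy as np

def duck(bass, kick, rate, depth=0.5, release_ms=120.0):
    rel = np.exp(-1.0 / (rate * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(bass)
    for i in range(len(bass)):
        env = max(abs(kick[i]), env * rel)    # follow the kick's envelope
        out[i] = bass[i] * (1.0 - depth * min(env, 1.0))
    return out
```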

Last, bring in the remaining instruments in order of importance. Understand: only one instrument can be the focal point; the rest are the supporting cast. Think of them as the background vocals. They need to be present and heard, but not overpower the focal instrument.

Balancing all the elements in your session first makes it easier to address frequency and dynamics issues later. Reference tracks will keep you on the right path from start to finish. Remember, it’s the ear, not the gear. The best equipment in the world can’t make up for a bad balance.


The Mastering Process

Now that you have a beautiful-sounding mix, the last step in post-production is mastering. During mastering, additional audio treatments are applied to correct problem frequencies and enhance the musicality. An audio master is the final version of a song that’s prepared for sale, download, streaming, radio play, or any other form of mass consumption. When you listen to a song via streaming, download, or physical format, you’re listening to a copy of the master audio.

Mastering puts a final sheen on the recording you worked so hard to create and the mix you went over with a fine-tooth comb. It brings the sound of your recording to the same level as the millions of songs available. When one of your tracks is placed on a playlist, you don’t want your song to suddenly be softer than all the rest. Mastering will make your final mix sound better, but only if the mix is already good. Mastering won’t save a terrible mix, but it can ruin a good one.
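Loudness is what “the same level as everyone else” means in practice, and streaming services measure it in LUFS. A quick way to check where your master sits, assuming the pyloudnorm package (my choice of tool, not the author’s) and a hypothetical file name; around -14 LUFS is a common streaming reference:

```python
# Measure integrated loudness and normalize toward a streaming target.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("final_mix.wav")        # hypothetical final mix
meter = pyln.Meter(rate)                     # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)
print(f"Integrated loudness: {loudness:.1f} LUFS")
normalized = pyln.normalize.loudness(data, loudness, -14.0)
```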

The price of a master will vary depending on the number and the length of the songs. A quick touch-up could cost $50–$100 per song. For full-service mastering, the average cost is about $150 per song.

Mixing and mastering your music are critical elements of creating the best listening experience. Put as much time into the mixing as you did in the recording process. Releasing an unfinished product is the fastest way to get skipped. •


A graduate of the Roy H. Park School of Communications at Ithaca College, FRANK DEMILT (@frankademilt) is a veteran of the music industry. Since 2013, Demilt has worked in some of the top music studios in New York City, New Jersey, Miami, and Atlanta alongside the industry’s top Grammy- and Emmy-winning and -nominated artists. Beginning as an engineer at Soul Asylum Studios in Atlanta, he has since worked in various sectors of the music business. Recently, Demilt was named head of Artist Development, Sync Manager, and lead A&R at Water Music Publishing. He’s also helped launch the creative agency Sloppy Vinyl, a premier artist development and entertainment company in New Jersey. His new book, The Blueprint: The Bible For Becoming a Successful Performing Artist in the Digital Age, is available on Amazon.