Audio effects are the core of how producers and composers shape mere sound into music.
They are essential in every music maker’s toolkit, and whether you’re working with analog or digital sound, audio effects are something you should learn to master.
Read on to discover what audio effects are and how to successfully implement them into your music!
Keep in mind: a good microphone will help you capture the best version of whatever you’re recording, whether it’s your soothing voice or found sounds like tapping and brushing.
What are audio effects?
Audio effects are hardware or software devices that change or redirect how an audio signal sounds.
In other words, they do something electronically to a sound that changes its characteristics. These effects can be controlled by different parameters, such as rate, feedback, and drive.
Audio effect uses
Most producers will use audio effects for five main purposes:
- Sound design
- Adding depth
- Fitting a sound better into a mix
- Improving the rhythm or flow of a track
- Widening the stereo field
5 types of audio effects
Audio effects can be broken down into 5 overarching categories:
1. Modulation effects
These modify the source audio signal with another signal, generally an oscillator. Examples include chorus, flangers, and phasers.
2. Time-based effects
These include processes where some form of time manipulation occurs to the signal. Includes reverb, delay, and echoes.
3. Spectral effects
Spectral effects alter the frequency information of an audio file or the position of these files in a stereo or multi-channel mix. Includes equalization and panning.
4. Dynamic effects
These alter the dynamics of an audio signal: its change in amplitude over time. By changing the signal’s amplitude, these effects also alter the shape of the waveform, which is what distortion is. Examples include distortion and compression.
5. Filters
These attenuate frequencies above or below a cutoff point. Includes band-pass filters, bell curve filters, envelope filters, high-pass filters, high shelf filters, low-pass filters, low shelf filters, and notch filters.
What is panning?
Panning is the distribution of a sound signal in a multi-channel field. It helps create the illusion of a sound source moving from one part of the soundstage over to another.
How does it work? Well, panning exploits the fact that our ears can locate a sound in 3D space: our brains process the small differences in timing and loudness between the left and right ear.
Panning works on a dual stereo system by letting more or less of a signal into each speaker, creating various spatial effects.
You can achieve many different effects using panning, such as:
- Positioning a sound at a specific spot in your stereo field.
- Preventing muddiness and masking in your mix (when two sounds cover each other up).
- Sweeping a sound across the stereo field with auto-pan, making the music sound like it’s moving from left to right.
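One common way to let "more or less of a signal into each speaker" is a constant-power pan law, where the left and right gains follow a cosine/sine curve so the overall loudness stays steady as the sound moves. Here is a minimal sketch in Python; the `pan_stereo` name and the exact law are illustrative assumptions, not any particular DAW’s implementation:

```python
import math

def pan_stereo(sample, pan):
    """Constant-power panning.

    pan runs from -1.0 (hard left) through 0.0 (center) to +1.0 (hard right).
    """
    # Map the pan position onto an angle between 0 and pi/2.
    angle = (pan + 1.0) * math.pi / 4.0
    left = sample * math.cos(angle)   # more signal goes left as pan -> -1
    right = sample * math.sin(angle)  # more signal goes right as pan -> +1
    return left, right
```

At center, each channel gets about 0.707 of the signal, so the summed power stays constant instead of the center position sounding louder than the edges.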
What is echo and delay?
Delay is one of the most essential effects: it is the foundation for others such as chorus and reverb.
Delay records an audio signal for playback at a set time after the original signal. It can be played back in various ways to achieve sounds like echoes that decay over time, or to produce doubling effects.
Most delays work by playing back the dry signal and the delayed signal shortly after.
Modern digital effects units use a recorded buffer to emulate the playback head effect of older delay units. The signal is stored and played back depending on the parameters controlling the echoing effect.
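That recorded-buffer design can be sketched with a circular buffer in Python. Everything here (function name, parameter values) is an illustrative sketch of the idea, not any real plugin’s code:

```python
def feedback_delay(dry, delay_samples, feedback=0.5, mix=0.5):
    """Play the dry signal plus a delayed copy; feedback re-circulates
    each echo so the repeats decay over time."""
    buf = [0.0] * delay_samples   # circular buffer standing in for the tape loop
    out = []
    idx = 0
    for x in dry:
        delayed = buf[idx]                 # read the echo written delay_samples ago
        buf[idx] = x + delayed * feedback  # store input plus a quieter copy of the echo
        idx = (idx + 1) % delay_samples
        out.append(x * (1 - mix) + delayed * mix)
    return out
```

With feedback near zero and a short delay time you get a single slapback; higher feedback gives a trail of echoes that fades out.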
Echo and delay uses
- The most common use of delay is the slapback echo of early 1950s rock, as heard on Sam Phillips’ recordings of Elvis Presley. It is great for filling out a performance, especially guitar or vocals.
- Drawn-out delays can create whole new layers and rhythms in a performance. Multi-tap delays are often used in techno and dub music to create swirling synth lines.
What is reverb?
When a sound occurs, two things happen. The direct sound hits your ears, and other sound waves bounce off surfaces before reaching your ears. These other sound waves will reach your ears later and be quieter.
Reverb is many echoes occurring so close together that we hear them as one single effect. Reverb happens naturally in all sorts of spaces: tunnels, halls, deep underground caves…
To recreate reverb artificially, classic hardware units used a metal plate or spring: the vibrations inside the tank are picked up and turned back into a signal by an analog circuit.
Reverb plugins can be very CPU-intensive: they make thousands of calculations per second to model the decay, frequency response, and other characteristics of a space.
- Reverb can help bring some sustain to a sound and make it stick around for longer.
- It will give a dream-like quality to your signal.
- It adds fullness and depth to a sound, smoothing out dips and hiccups on the way.
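The "many echoes heard as one" idea can be illustrated with a toy Schroeder-style reverb: several feedback comb filters run in parallel, each producing a decaying echo train at a slightly different spacing, so the repeats blur together into a tail. The delay lengths and decay value below are arbitrary illustrative choices, and a real reverb would add all-pass diffusion stages on top:

```python
def comb(signal, delay, decay):
    """Feedback comb filter: a train of echoes, each quieter than the last."""
    out = list(signal)
    for i in range(delay, len(out)):
        out[i] += out[i - delay] * decay
    return out

def simple_reverb(signal, combs=((1116, 0.6), (1188, 0.6), (1277, 0.6), (1356, 0.6))):
    """Average several parallel combs with different delay lengths so the
    individual echoes smear into one continuous tail."""
    mixed = [0.0] * len(signal)
    for delay, decay in combs:
        wet = comb(signal, delay, decay)
        for i, v in enumerate(wet):
            mixed[i] += v / len(combs)
    return mixed
```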
What is chorus?
Chorus is obtained when similar sounds with minute variations in tuning and timing overlap and are heard as one. Think of a choir singing together in a church: the voices overlap to create one distinct, rich sound.
How does it work? Well, the chorus audio processor will make copies of the original signal and apply delay and pitch-modulation to those copies.
Stereo choruses do the same thing, with added panning in the delays and offset phase to create a fuller sound.
- It helps create a thicker, fuller sound.
- It creates the illusion of complexity and movement (it was used a lot in 80s music!).
- Chorus helps widen your stereo image, giving a dreamy quality to guitars or simply bulking up vocals.
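That copy-delay-and-modulate recipe can be sketched in Python: a low-frequency oscillator (LFO) sweeps the delay time of the wet copy, which also nudges its pitch. The function name and parameter values are illustrative:

```python
import math

def chorus(dry, rate_hz=1.5, depth=0.002, base_delay=0.02, sr=44100):
    """Mix the dry signal with a copy whose delay time wobbles around
    base_delay (seconds); the wobble also modulates the copy's pitch."""
    out = []
    for n, x in enumerate(dry):
        # The LFO sweeps the delay time up and down around the base delay.
        lfo = math.sin(2 * math.pi * rate_hz * n / sr)
        delay = int((base_delay + depth * lfo) * sr)
        wet = dry[n - delay] if n - delay >= 0 else 0.0
        out.append(0.5 * x + 0.5 * wet)
    return out
```

A stereo chorus would run two such copies with the LFO phases offset, panning one left and one right for the fuller sound described above.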
What is distortion?
Distortion occurs when you overload the audio circuit, causing the signal to clip. It’s a little tricky to get the hang of, but when used effectively it’s a nifty little tool.
It works by pushing the signal hard enough that it clips and compresses. Clipping adds new harmonics to the sound, giving it a pleasant color.
Distortion comes in various shapes and sizes: different types of circuits produce different distortions.
- Distortion from tubes is warmer, adding mostly even harmonics that thicken the sound.
- Distortions made with transistors are harsher, adding odd harmonics instead.
- Bit-crushing is a common type- it’s often used in video games due to its crunchy qualities.
- Distortion is commonly used on electric guitars and synths and can be achieved with pedals, effects units, rackmounts, etc.
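Two of those flavours are easy to sketch as waveshapers: smooth tanh saturation (soft clipping) and bit-crushing, which quantises the amplitude to a handful of levels. Both functions are illustrative toys, not models of any specific circuit:

```python
import math

def soft_clip(signal, drive=4.0):
    """Soft clipping: tanh flattens the peaks smoothly, adding harmonics."""
    return [math.tanh(drive * x) for x in signal]

def bit_crush(signal, bits=4):
    """Bit-crushing: snap each sample to one of 2**bits amplitude steps."""
    levels = 2 ** bits
    return [round(x * levels) / levels for x in signal]
```

Raising `drive` pushes more of the waveform into the flattened region, so the distortion gets heavier; lowering `bits` makes the crunch more extreme.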
What is equalization?
Equalization is the cutting or boosting of frequencies within the frequency spectrum. That spectrum spans the lowest to the highest frequencies human ears can hear, roughly 20 Hz to 20 kHz.
Equalization divides this spectrum into subsections, or bands, that you can cut or boost. It works by sculpting the frequencies already present in your sound, shaping its tone and character.
It can also change the balance between those frequencies.
Cutting the high end of your frequencies will darken your sound, and boosting the high end will make it brighter.
- Equalization is an essential tool for making mixes. You can carve out space in the frequency spectrum, so each of your sounds is just right.
- Equalization helps keep your track from sounding too dull or muddy.
- It can also remove undesirable elements in the recording.
- You can use it to boost the main elements of the recording too.
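As a toy illustration of cutting or boosting the high end, here is a crude one-pole high shelf: a smoothing filter splits off the lows, and whatever remains (the highs) is added back scaled by the shelf gain. Real equalizers use proper biquad filters with frequency and Q controls, so treat this purely as a sketch:

```python
def high_shelf(signal, gain_db=6.0, alpha=0.3):
    """Boost (positive gain_db) or cut (negative gain_db) the high end."""
    gain = 10 ** (gain_db / 20.0)  # convert decibels to a linear factor
    lp = 0.0
    out = []
    for x in signal:
        lp += alpha * (x - lp)     # one-pole smoother keeps the low portion
        highs = x - lp             # what the smoother removed: the high portion
        out.append(lp + gain * highs)
    return out
```

With a negative `gain_db` this darkens the sound; with a positive one it brightens it, just as described above.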
What is compression?
Compression reduces dynamic range: the difference between the loudest and quietest parts of your audio signal. When you use compression, the louder parts are turned down, shrinking the gap with the quieter parts, and make-up gain then brings the overall level back up.
Compression works by reducing the gain of the signal when it crosses a threshold. It lowers the volume of loud peaks, evening out the sudden bumps in your track. This creates a comfortable fluidity.
Compression makes tracks sound tighter and more fluid, upping the average loudness of the track. You’ll know if you’ve overcompressed if it sounds dull and noisy.
- Sidechain compression is a common sound in dance music. It makes the music sound like it's pumping.
- Using compression makes your tracks sound more polished and put together.
- It heightens the average loudness of the track, keeping the louder peaks in check.
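The threshold-and-ratio behaviour can be sketched as a static, per-sample compressor. Real compressors work in decibels and smooth the gain changes with attack and release times, which this illustrative version deliberately omits:

```python
def compress(signal, threshold=0.5, ratio=4.0):
    """Scale back any level above the threshold by the ratio.

    With ratio=4, the excess above the threshold is cut to a quarter.
    """
    out = []
    for x in signal:
        level = abs(x)
        if level > threshold:
            # Only the portion above the threshold is divided by the ratio.
            level = threshold + (level - threshold) / ratio
        out.append(level if x >= 0 else -level)
    return out
```

Peaks are tamed while everything under the threshold passes through untouched; raising the whole result afterwards (make-up gain) is what increases the average loudness.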
What is an audio filter?
An audio filter turns down a set of frequencies above or below a certain threshold. These are often found within equalizers or as stand-alone plugins.
The most common types of audio filters are:
- High-pass filters: let through the frequencies above the cutoff and attenuate those below.
- Low-pass filters: let through the frequencies below the cutoff and attenuate those above.
- Band-pass filters: let through the frequencies within a set band and attenuate everything above and below it.
The steepness of a filter is determined by its slope: the gentler the slope, the more gradually it attenuates frequencies beyond the cutoff. Many filters also have a resonance control, which exaggerates the frequency band around the cutoff.
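Both pass-filter behaviours can be illustrated with a single one-pole filter: the smoothed output is a low-pass, and subtracting it from the input leaves the complementary high-pass. This sketch has a fixed gentle slope (6 dB per octave); steeper filters cascade several such stages:

```python
def one_pole_filters(signal, alpha=0.2):
    """Return (low_passed, high_passed) versions of the signal.

    alpha sets the cutoff: smaller alpha = lower cutoff frequency.
    """
    lp = 0.0
    low, high = [], []
    for x in signal:
        lp += alpha * (x - lp)  # exponential smoothing keeps the slow changes
        low.append(lp)
        high.append(x - lp)     # the fast changes the smoother rejected
    return low, high
```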
Audio filter uses
- Audio filters can be used for creative reasons and to correct a track.
- You can carve space in a mix for various instruments and frequency ranges.
- They can also be used to create build-ups and transitions within a mix.
Learn more. Get into sound design
Now that you’re an expert on audio effects, why not take a deep dive into sound design? Check out our page here to get started.
Once you’ve completed that, look at our page on filmmaking to learn how to implement audio effects into a film score.
What are the different types of audio effects?
The five different kinds of audio effects are modulation effects, time-based effects, spectral effects, dynamic effects, and filters.
What do the different audio effects do?
- Modulation effects modify the source audio signal with another signal.
- Time-based effects include processes where some form of time manipulation occurs to the signal.
- Spectral effects alter the frequency information of an audio file or the position of these files in a stereo or multi-channel mix.
- Dynamic effects alter the dynamics of an audio signal: its change in amplitude over time.
- Filters include band-pass filters, bell curve filters, envelope filters, high-pass and shelf filters, low-pass and shelf filters, and notch filters.