Audio Signal Processing in Music

Audio signal processing, sometimes referred to simply as audio processing, is the intentional alteration of auditory signals, or sound, often through an audio effect or effects unit. Since audio signals may be electronically represented in either analog or digital form, signal processing may occur in either domain: analog signal processing (ASP) operates directly on the electrical signal, while digital signal processing (DSP) operates mathematically on a digital representation of that signal.


Why is Audio Signal Processing in music worth getting into?

The recorded music industry is the engine helping to drive a much broader music industry, worth more than US$130 billion globally. This is over three times the value of the recorded music sector, and shows music to have an economic importance that extends far beyond the scope of recorded music sales.

Now, the main focus of this article is DSP. That is what is relevant to Computer Science, and it is what is very hot in the music industry right now. But what DSP essentially tries to do is emulate effects that, until recently, were implemented using analog technology. (Only now are we seeing effects unique to DSP, and that trend is rising quickly.) So, to understand what we are doing with DSP in the world of music, we first need to understand analog technologies.


Analog Signal Processing:

‘Analog’ indicates something that is mathematically represented by a set of continuous values. Analog signal processing (ASP), then, involves physically altering the continuous signal by changing its voltage, current, or charge via various electrical means.

Now, in musical terms, what we are interested in is what we hear. The frequency of a sound wave, its amplitude, the shape of its waveform, filtering of the signal, etc., are what we will modify to achieve a desired result.

Here’s one simple example: the low pass filter. This is a circuit we have come across multiple times. In audio, the lower frequencies constitute the ‘bass’ in the sound. A low pass filter therefore processes the input signal so that frequencies above its cutoff are attenuated and mainly the bass of the original signal is heard.
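To put a number on “low”: for the simplest passive RC low pass filter, the cutoff frequency above which the filter starts attenuating is f_c = 1/(2πRC). A quick sketch, using hypothetical component values:

```python
import math

def rc_cutoff_hz(resistance_ohms, capacitance_farads):
    """Cutoff (-3 dB) frequency of a first-order RC low pass filter."""
    return 1.0 / (2.0 * math.pi * resistance_ohms * capacitance_farads)

# Hypothetical values: a 1.6 kOhm resistor with a 0.1 uF capacitor
# gives a cutoff of roughly 995 Hz; content well above that is attenuated.
print(round(rc_cutoff_hz(1.6e3, 0.1e-6)))
```

Lower the cutoff (a larger R or C) and less of the treble survives, which is exactly the “only the bass remains” behaviour described above.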

Here’s the circuit:



Here, we have a popular distortion pedal called the Distortion + by MXR:



Here is its circuit diagram:



It’s really that simple. However, this is one of the simpler types of effects, one that exclusively uses electronic components. Certain effects use other components as well, like metallic plates, magnetic tape, springs, etc. The instrument is plugged in at the input of the circuit, and the output gives us the processed signal. Many such pedals can be connected together to get a variety of effects; we will study this concept later as part of the ‘Signal Chain’.

We’re going to restrict ourselves to the domain of the guitar for our examples, since guitar effects are quite popular.


Some Applications of ASP –

Here are a few of the thousands of effects implemented using ASP. Here is a video demonstrating these effects and more, if you’re curious as to how they sound.

  • Delay – to simulate the effect of reverberation in a large hall or cavern, one or several delayed signals are added to the original signal. To be perceived as an echo, the delay has to be on the order of 35 milliseconds or more. Short of actually playing a sound in the desired environment, the effect of echo can be implemented using either digital or analog methods. Analog echo effects are implemented using tape delays and/or spring reverbs. When large numbers of delayed signals are mixed over several seconds, the resulting sound gives the impression of being in a large room, and it is more commonly called reverberation, or reverb for short.


  • Flanger – to create an unusual sound, a delayed signal is added to the original signal with a continuously variable delay (usually smaller than 10 ms). This effect is now done electronically using DSP, but originally the effect was created by playing the same recording on two synchronized tape players, and then mixing the signals together.


  • Distortion, Overdrive and Fuzz – distortion and overdrive units re-shape or “clip” an audio signal’s waveform so that it has flattened peaks, creating “warm” sounds by adding harmonics or “gritty” sounds by adding inharmonic overtones. Fuzz is an extreme example, where the signal is almost squared off.


  • Chorus – a delayed signal is added to the original signal with a constant delay. The delay has to be short in order not to be perceived as echo, but above 5 ms to be audible. If the delay is too short, it will destructively interfere with the un-delayed signal and create a flanging effect. Often, the delayed signals will be slightly pitch shifted to more realistically convey the effect of multiple voices.


  • Equalization – different frequency bands are attenuated or boosted to produce desired spectral characteristics. Moderate use of equalization (often abbreviated as “EQ”) can “fine-tune” the tone quality of a recording; extreme use of equalization, such as heavily cutting a certain frequency, can create more unusual effects.


  • Filtering – Equalization is a form of filtering. In the general sense, frequency ranges can be emphasized or attenuated using low-pass, high-pass, band-pass or band-stop filters. Band-pass filtering of voice can simulate the effect of a telephone because telephones use band-pass filters. This is extensively used in synthesizers.


  • Pitch shift – this effect shifts a signal up or down in pitch. For example, a signal may be shifted an octave up or down. Blending the original signal with shifted duplicate(s) can create harmonies from one voice.
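Several of the effects above (delay, flanger, chorus) boil down to mixing a signal with delayed copies of itself. As a taste of the digital side covered next, here is a minimal single-tap feedback echo sketch; the sample values and parameters are arbitrary illustrations, not any particular pedal’s algorithm:

```python
def echo(samples, delay_samples, feedback=0.5):
    """Single-tap feedback echo: each output sample adds a decaying
    copy of the output from `delay_samples` samples earlier."""
    out = []
    for n, x in enumerate(samples):
        delayed = out[n - delay_samples] if n >= delay_samples else 0.0
        out.append(x + feedback * delayed)
    return out

# A lone impulse becomes a train of echoes, each half as loud as the last
print(echo([1.0, 0, 0, 0, 0, 0, 0], delay_samples=2))
# -> [1.0, 0.0, 0.5, 0.0, 0.25, 0.0, 0.125]
```

With feedback set to 1.0 the echoes never decay, which is the “infinite echo” trick mentioned later under DSP applications.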

Digital Signal Processing

A digital representation expresses the pressure wave-form as a sequence of symbols, usually binary numbers. This permits signal processing using digital circuits such as microprocessors and computers. Although such a conversion can be prone to loss, most modern audio systems use this approach as the techniques of digital signal processing are much more powerful and efficient than analog domain signal processing.

Here, the numerical manipulation of signals takes place. An analog signal is stored as a discrete-time, discrete-amplitude (or other discrete-domain) sequence of numbers or symbols, which permits digital processing of the signal.

Numerical methods require a digital signal, such as those produced by an analog-to-digital converter (ADC). The processed result is another digital signal that is converted back to analog form by a digital-to-analog converter (DAC).
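A rough sketch of the ADC side, assuming a made-up 8 kHz sample rate and 8-bit depth purely for illustration: the converter measures the waveform at regular instants and rounds each measurement to the nearest of a finite set of integer levels.

```python
import math

def adc(signal, sample_rate_hz, duration_s, bits=8):
    """Sample a continuous signal (a function of time, valued in -1..1)
    and quantize each sample to a signed integer of the given bit depth."""
    levels = 2 ** (bits - 1) - 1          # 127 for 8-bit signed audio
    n_samples = int(sample_rate_hz * duration_s)
    return [round(signal(n / sample_rate_hz) * levels)
            for n in range(n_samples)]

# One millisecond of a 440 Hz sine at 8 kHz -> just 8 integers
tone = lambda t: math.sin(2 * math.pi * 440 * t)
print(adc(tone, 8000, 0.001))
```

A DAC does the reverse: it turns the integer sequence back into a smoothly varying voltage.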


DSP Implementation

DSP is no different from any other kind of data processing when it comes to actual implementation. DSP algorithms have long been run on standard computers, as well as on specialized processors called digital signal processors, and on purpose-built hardware such as application-specific integrated circuits (ASICs).

However, when the application requires real-time processing, DSP is often implemented on specialized microprocessors such as the DSP56000, the TMS320, or the SHARC. These often process data using fixed-point arithmetic, though some more powerful versions use floating point. For faster applications, FPGAs (field-programmable gate arrays) might be used.



For faster applications with vast usage, ASICs might be designed specifically. For slower applications, a traditional, slower processor such as a microcontroller may be adequate.


DSP Applications

i) Emulation of Analog effects

The effects we previously explored can all be emulated in DSP. DSP allows users to push parameters far beyond what analog technologies allow; parameters can reach extraordinary values. E.g., a delay can be made to echo infinitely.

ii) Amp Modelling

An amplifier is a device that boosts the amplitude of an input signal. Amp modelling is a special application of DSP where an amplifier’s tiny nuances are emulated using algorithms.

The goal is to make a signal going through the modelled amp react just as it would going through the real amp.
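At its crudest, one ingredient of amp modelling can be sketched as a nonlinear waveshaper: quiet signals pass through nearly unchanged, loud ones get squashed against a ceiling, adding the harmonics we hear as “warmth” or “grit”. Real modellers are vastly more sophisticated (tone stacks, speaker cabinets, dynamic behaviour and so on); this is only an illustration, with an arbitrary gain value:

```python
import math

def waveshape(samples, gain=4.0):
    """Soft-clip each sample with tanh: near-linear for small inputs,
    saturating toward +/-1 for large ones."""
    return [math.tanh(gain * x) for x in samples]

quiet = waveshape([0.01])[0]   # ~0.04: small signals scale almost linearly
loud = waveshape([1.0])[0]     # ~0.999: big signals are clipped near 1.0
print(quiet, loud)
```

The flattened peaks this produces are exactly the “clipping” described under Distortion, Overdrive and Fuzz earlier.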


Here is a useful video on a device called the Kemper Profiler.
While this isn’t strictly amp modelling, the end result is the same.


Form factors –

Audio Signal Processing units are available in a variety of form factors. These might be purely analog, digital or a combination of both.

i) Stompboxes

Stompboxes usually lie on the floor or in a pedalboard to be operated by the user’s feet. Typical simple stompboxes have a single footswitch, one to three potentiometers for controlling the effect, and a single LED that indicates if the effect is on. Complex stompboxes may have multiple footswitches, many knobs, additional switches, and an alphanumeric display that indicates the status of the effect with short acronyms (e.g., DIST for “distortion”).

e.g. distortion pedals by TC Electronic, MXR, BOSS


ii) Rackmounted effects

Rackmounted effects are typically built in a metal chassis with “ears” designed to be screwed into a 19-inch rack, the standard in the telecommunication, computing and music technology industries. A rackmount unit may contain electronic circuitry identical to a stompbox’s, although its circuits are typically more complex. Unlike stompboxes, rackmounts usually offer several different types of effects.

e.g. Axe-Fx, Kemper Profiler, Eleven Rack


iii) Built-in units

Effects are often incorporated into amplifiers and even some types of instruments. Electric guitar amplifiers typically have built-in reverb and distortion, while acoustic guitar and keyboard amplifiers tend to only have built-in reverb. The Fender Bandmaster Reverb amp, for example, had built-in reverb and vibrato.


iv) Software plugins for DAWs

These effects also exist as simple plugins for Digital Audio Workstations, e.g. BIAS, Amplitube101.

Here is a useful video on the BIAS Pedal.

v) Multi-effects units

A multi-effects device (also called a “multi-FX” device) is a single electronic effects pedal or rackmount device that contains many different electronic effects. Multi-FX devices allow users to “preset” combinations of different effects, giving musicians quick on-stage access to different effect combinations. Multi-effects units typically have a range of distortion, chorus, flanger, phaser and reverb effects. The most expensive multi-effects units may also have looper functions.


That’s all folks! Hope this was worth the read!

~Shawn Kenneth Fernandes

Consonance and Dissonance

Thadaa! (Drum roll)

The GEC Music Hub is back in action with this 2nd article!


This time, let’s try to understand the concept of CONSONANCE & DISSONANCE in music. Sounds overwhelming, doesn’t it? But trust me, it isn’t. 😛

CONSONANCE embodies the sweetness of a tone whereas DISSONANCE, the “beauty lies in imperfections” aspect.

This article primarily focuses on the western 12-tone scale (a.k.a. the equal-tempered scale).

The equal tempered scale consists of 12 notes within an octave.

Now, imagine a rectangle divided equally into 12 parts; each of the 12 parts represents one musical note, and each note corresponds to a frequency. The spacing is equal in ratio rather than in hertz: if f is the frequency of the first note, then f × 2^(1/12) is the frequency of the second note, f × 2^(2/12) of the third, and so on, so that after 12 steps the frequency has exactly doubled (one octave).
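Taking the common reference of A4 = 440 Hz, a short sketch shows how these frequencies fall out, and hints at why some intervals sound sweet: the equal-tempered fifth (7 steps) lands almost exactly on the pure 3:2 ratio.

```python
SEMITONE = 2 ** (1 / 12)   # frequency ratio between adjacent notes

def note_freq(steps_from_a4, a4_hz=440.0):
    """Frequency of the note `steps_from_a4` equal-tempered steps from A4."""
    return a4_hz * SEMITONE ** steps_from_a4

print(round(note_freq(12)))      # 880: twelve steps exactly double A4
print(round(SEMITONE ** 7, 4))   # 1.4983: a fifth, vs the pure 3/2 = 1.5
print(round(SEMITONE ** 4, 4))   # 1.2599: a major third, vs the pure 5/4 = 1.25
```

The fifth misses 3:2 by about a tenth of a percent, which is why it still sounds so consonant; the third misses 5:4 by nearly a percent, one hint of why equal temperament is only an approximation of pure intervals.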

Consider the C major scale (read article 1):


In relation to C, the consonant tones are F and G.

The remaining 4 notes are referred to as colour tones. These tones are dissonant, i.e. less sweet-sounding than the consonant tones.


Your ears guide you through consonance and dissonance. They acknowledge what is sweet or even sweeter, what’s pleasant and what sounds terrible.

Consonance and dissonance are a function of relativity.

Now, as our ear perceives it, the first note C and the fifth note G are the prettiest together, followed by the 4th note F in relation to C. (Hence, whilst playing chords, musicians may opt to play only the C and G from a 3-note C major chord, which has the notes C, E and G, and omit the E. Guitarists and pianists tend to play the C and G and omit the E in rock-music-type scenarios.)

The other 9 notes (considering the 12-tone chromatic scale) are dissonant in relation to the first note: they aren’t as sweet-sounding against the 1st note of the scale as the other three (C, F and G).

This is because the equi-tempered system of dividing the octave into equal parts only approximates the pure interval ratios that the human ear finds tonally sound.

Over the years our ears have got so used to dissonance (atonality) that unless you train your ears to judge the sweetness of a note, all remains under the veil.

The dissonant aspect can be noticed when a pianist plays two adjacent notes together. They simply don’t blend (a jarring sound can be heard), whereas if the root note and the 5th are played simultaneously, it sounds a lot sweeter.

Ooh A ScENariO! 😛

Two instrumentalists meet up: a guitar player and a sitar player. They decide to play the same song. The guitar is primarily an equitempered instrument. The sitar follows a tuning system based on pure intervals. In the pure-interval system, successive notes are tuned to unequal intervals within the octave, such that each note blends with the next when played together, producing a minimal sense of unrest. Our ears can hear the consonance between the two notes in relation to one another. Hence, the sitar sounds more melodic compared to the guitar (sorry guitarists 😛, please take note of the fact that the guitar is one screwed-up instrument!).

Over the years the equitempered system flourished, and it has now become something of a worldwide standard due to commerce and the ease of manufacturing involved.

Consonance touches the heart: it resonates instantly with our mind and body, but dissonance helps paint a picture. Indian classical music is very consonant to the ear, whereas jazz, metal and even simpler pop genres give dissonance its character!

These tones coexist in a symbiotic relationship (:P), feeling helpless without the other.

Consonance and dissonance have been set by the rules of nature. None of us can defy it.

Then again, doesn’t beauty lie in the eye of the beholder?




Workshop on Improvisation

Hey Folks,

Well, the term “improvisation” is usually associated with painters, writers and sports-persons. Music doesn’t step away from it either. In fact, every other tune you hear is based on the concept of improvisation, be it a nursery rhyme, electronica, rock, Indian classical etc.

Improvisation: the spontaneous expression of musical ideas which have never been performed before.

The spirit of improvisation is based on musical freedom. This means that improvisation is subjective in its entirety! Mistakes made whilst improvising are embarrassing yet humbling at times. Musicians have been actively breaking and bending rules through the concept of improvisation. Well, it’s an excuse actually, so that you can go around playing rubbish and enjoy yourselves!

Writing music is, in fact, among the simplest activities. Yet at times it can be made complex. All born out of improvisation.

Most of us are aware that music consists of 7 notes.

Now, what is a “note”?

A note is basically a discrete frequency assigned a particular name so that it can be referred to whenever needed: Sa Re Ga Ma Pa Dha Ni, or Do Re Mi Fa Sol La Si, or C D E F G A B. Different cultures essentially have their own names for the same frequencies.

These 7 notes together make up a family. This family is called a SCALE.

A scale can consist of all 7 notes or fewer than 7, using the various permutations and combinations possible. Each possible scale has been assigned a name for reference purposes. Since music is supposed to be felt, it is not necessary to know the names of all the possible scales, but it is essential to familiarize oneself with the sound of each scale.

The most basic scale, which almost everyone is aware of, is the C major scale.

It consists of: Sa Re Ga Ma Pa Dha Ni, or Do Re Mi Fa Sol La Si, or C D E F G A B.

Doesn’t that sound familiar? 😛

This scale is all you need to start composing your own songs 😛 …well in tech lingo I mean improvise!!!



Now consider the 7 notes of the scale as 7 different buttons, and press random buttons in any order. Soon enough, you will have a nice short melodic phrase. Repeat the process and record yourself whilst doing so. Keep all the good bits, although all music is subjective.
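The “random buttons” exercise can even be sketched in code; the note names are just labels, and the seed and phrase length here are arbitrary:

```python
import random

C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def random_phrase(length=8, seed=None):
    """Press `length` random 'buttons' from the C major scale."""
    rng = random.Random(seed)
    return [rng.choice(C_MAJOR) for _ in range(length)]

print(random_phrase(8, seed=1))  # one of many possible short phrases
```

Play a few of these back and keep the phrases that please your ear.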

Well, hitting random notes helps to an extent, but to carve a niche in avant-garde music there are a few tricks which give each musician playing the same 7 notes his/her own voice. This is referred to as “phrasing”.

Phrasing is basically the different ways of emoting the same note. Ten musicians can play the same note or song and yet retain their own identity because of the way they phrase the notes.

Ways to go about with Phrasing:

  1. Play the same note at different speeds/play a repetitive melodic phrase with varying speeds.
  2. Playing each note from a song at varying volumes.
  3. Using vibrato: oscillating the pitch of a note.
  4. Sliding into the note: this technique helps imitate the human voice (in conjunction with bending).
  5. The most important is playing and fooling around with rhythm. (will be covered in the 2nd article).
  6. Listen to your favourite musicians and imbibe their approach to music.

In order to improvise fluently it is a healthy practice to play over a backing rhythm.

These backing rhythms are easily available online, or else you could use software such as FL Studio to make your own. I cannot emphasize enough that there is no better practice for a developing musician than to jam with friends. The art of improvising gets clearer with time; some have it, while others develop it through repeated practice. The musical styles of jazz, Indian classical music and metal have their roots in improvisation. In fact, jazz is now a universal term used to describe almost any form incorporating improvisation. Develop a vocabulary by listening to various artists, be it simple music or complex, progressive stuff. Remember, there are no right or wrong ways to go about writing music. Entire genres have been created by subtle mistakes over time.


Break, bend and redefine!!

~Tanish Naik & Shawn Kenneth Fernandes