Saturday, 26 November 2011

Filters: Part 1 of Many?

I feel like a little more theory is called for.  I thought this topic was mind blowing, so I'll try and explain the basics in both mathsy and non-mathsy ways.  As a warning, if you don't understand what samples are, or haven't read my other theory stuff, understanding this is not guaranteed...

If you're into digital audio / synths you'll have almost certainly heard of filters.  Up until a couple of weeks ago, I saw filters as magical black boxes that you put sound into, and an altered sound came out.  For example - thick reverb - you put a sound in and you get a more spaced out, echoey sound, as if you were playing the sound in a big space.  A more unusual effect is flanging, which isn't easy to explain but is very distinctive, so let wikipedia guide you once again.

The revelation that blew my mind is that the term filter is actually very specific.  All a filter does is combine a signal with scaled and delayed versions of itself.  That's it.  Really very fundamental and straightforward, but it has big effects on the audio signal that you put through it.  There are two flavours of filters: feedback (infinite impulse response, or IIR) and feedforward (finite impulse response, or FIR).  Feedforward filters act purely on the incoming signal; their output is never fed back in, so once the input stops, the output soon stops too.  Feedback filters recycle what they produce, which explains the term infinite response.

If you pass a single sample of maximum amplitude into a feedback filter - say one that adds on the previously calculated output sample at half amplitude, shown algebraically as y(n) = x(n) + 0.5y(n-1) (x is the input and y is the output) - the sound will decay exponentially but you'll never get a sample of exactly 0.  You'll get really, really close, but it would take infinitely long.  To illustrate, the first output sample will be 1.0, the second 0.5, then 0.25, 0.125, ... and so on.  If you pass the same sample into a feedforward filter of a similar nature - say one that averages the current sample with the previous one, or algebraically y(n) = 0.5(x(n) + x(n-1)) - then you'll get 1 extra sample of sound, but after that the signal will be silence again.  The first output sample will be 0.5(1+0) = 0.5, the second will be 0.5(0+1) = 0.5 and the third and following samples will be 0.5(0+0) = 0, or silence.
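If equations aren't your thing, here's a quick Python sketch of those two example filters responding to a single full-amplitude sample (the function names are mine, made up for illustration):

```python
# Feedback (IIR): y(n) = x(n) + 0.5*y(n-1)
# Feedforward (FIR): y(n) = 0.5*(x(n) + x(n-1))

def feedback_response(n_samples):
    """First n_samples of the feedback filter's output for an impulse input."""
    out, prev_y = [], 0.0
    for n in range(n_samples):
        x = 1.0 if n == 0 else 0.0  # one max-amplitude sample, then silence
        y = x + 0.5 * prev_y
        out.append(y)
        prev_y = y  # recycle our own output
    return out

def feedforward_response(n_samples):
    """First n_samples of the feedforward filter's output for an impulse input."""
    out, prev_x = [], 0.0
    for n in range(n_samples):
        x = 1.0 if n == 0 else 0.0
        out.append(0.5 * (x + prev_x))
        prev_x = x  # only ever remember the input, never the output
    return out

print(feedback_response(5))     # [1.0, 0.5, 0.25, 0.125, 0.0625]
print(feedforward_response(5))  # [0.5, 0.5, 0.0, 0.0, 0.0]
```

The feedback output keeps halving forever; the feedforward output is dead silent after two samples.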

The all important "filter diagram" - the top one is feedforward, the bottom one feedback or recursive.  The '+' indicates adding components of the signal while ' * ' indicates scaling the signal by some constant.  The unusual box with "z" in it basically just means delay, with the negative number being the delay in samples.  Got it?
A little more terminology.  Hopefully I haven't lost anyone, but I have no idea how readable I'm making it.  A filter could have bigger delays than a one sample delay.  Also, a single filter might have several delay lines - e.g. y(n) = x(n) + x(n-1) - x(n-3) + ... where each x(n-k) term is the input delayed by k samples.  My example has a 1 and a 3 sample delay in the same filter.  The biggest delay in the filter determines something called the order of the filter - that example is a 3rd order filter because its biggest delay is 3 samples.  If you want to sound clever, combining all the terms we know, we can call the example filter a 3rd order feedforward filter.
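Here's that 3rd order feedforward filter as a Python sketch (again, the names are mine) - a little buffer holds the last three input samples so we can reach back for the delays:

```python
# y(n) = x(n) + x(n-1) - x(n-3): a 3rd order feedforward filter.

def third_order_feedforward(signal):
    history = [0.0, 0.0, 0.0]  # x(n-1), x(n-2), x(n-3); silence before the signal starts
    out = []
    for x in signal:
        out.append(x + history[0] - history[2])
        history = [x, history[0], history[1]]  # shift the delay line along by one sample
    return out

# Feed in one max-amplitude sample followed by silence:
print(third_order_feedforward([1.0, 0.0, 0.0, 0.0, 0.0]))
# [1.0, 1.0, 0.0, -1.0, 0.0]
```

Being feedforward, the output goes (and stays) silent once the biggest delay has passed the last input sample.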

Filters obviously affect the sound, and so the input signal - otherwise they'd be pretty much useless.  One property of filters that has massive implications is that they often have different effects depending on the frequency of the input signal.  This effect is called the frequency response.  There is a possibility you've heard of a LP filter.  LP stands for low pass, and it lets through low frequencies unaltered, but kills high frequencies.
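You can actually see a frequency response using the little averaging filter from earlier - it turns out to be a (fairly rubbish) low pass filter.  Here's a sketch that feeds sine waves of different frequencies through it and measures the loudest output sample (frequencies are given as fractions of the sample rate, and the particular numbers are just ones I picked):

```python
import math

# The averaging filter y(n) = 0.5*(x(n) + x(n-1)) from earlier.
def averager(signal):
    out, prev = [], 0.0
    for x in signal:
        out.append(0.5 * (x + prev))
        prev = x
    return out

def peak_output(freq_fraction, n_samples=2000):
    """Loudest output sample for a full-amplitude sine at freq_fraction of the sample rate."""
    sine = [math.sin(2 * math.pi * freq_fraction * n) for n in range(n_samples)]
    return max(abs(y) for y in averager(sine))

print(peak_output(0.01))  # low frequency: barely touched, close to 1.0
print(peak_output(0.49))  # near half the sample rate: almost completely killed
```

Same filter, wildly different effect depending on the input frequency - that's the frequency response in action.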

This is a cue for another concept in filter design.  The world isn't perfect.  Filters also aren't perfect.  The audio programmer would love it if the low pass filter didn't affect any frequency below the cut-off frequency.  However, filters feel otherwise, and unfortunately will affect frequencies close to the cut-off.  Some filters are better than others though, and mathematicians have done a pretty good job in coming up with filters that are good enough for the job.  Designing a really good filter, something which is way beyond me, is not an easy task, but actually implementing one once it's been invented is much, much easier.

Frequency responses for the "Butterworth" filters
For part 1, there's one last concept I'll explain.  Bringing back algebra with a few bonus letters, here's another filter equation: y(n) = ax(n) + bx(n-1).  If you're observant, you'll spot the new letters 'a' and 'b', which are known as filter coefficients.  Filter coefficients are just how much we scale the different delayed signals, and so are generally numbers between -1 and 1.  Explicitly, the signal x(n) is scaled by a, and the signal x(n-1) is scaled by b.  Straightforward?
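As a final sketch, the coefficient version is barely any different in code - and if you set a and b to 0.5, you get the averaging filter from earlier back:

```python
# y(n) = a*x(n) + b*x(n-1): a feedforward filter with coefficients a and b.

def two_tap_filter(signal, a, b):
    out, prev = [], 0.0
    for x in signal:
        out.append(a * x + b * prev)  # scale each delayed signal by its coefficient
        prev = x
    return out

# With a = b = 0.5 this is exactly the averaging filter from before:
print(two_tap_filter([1.0, 0.0, 0.0], 0.5, 0.5))  # [0.5, 0.5, 0.0]
```

Changing the coefficients changes the frequency response, which is really all filter design is about.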

Hopefully this is still English to you.  It's quite hard when writing to tell how understandable what you've written will be to the average reader.  Any feedback would be appreciated, no pun intended.  More will be explained in the second installment.
