View Full Version : How does a software equalizer work?
14-11-2009, 04:43 AM
Can somebody enlighten me as to how the equalizer in software media player works?
I got interested in this subject because I just got myself a relatively cheap active computer speaker and a USB DAC. The speaker has such a horrendous bass boom that I have to enable the equalizer to tone down the bass and make it listenable. There are some intriguing things about the speaker that I will write more about later.
Now - the input to the media player (MediaMonkey) is a digital bit stream, and the output to the DAC is digital as well. So what sort of conversion happens in between, and how much is lost in the process?
14-11-2009, 05:22 AM
There is also a Windows mixer - see the attached diagram. Sound streams from all sorts of applications are mixed together. Is the mixing done by the Windows OS or by the sound card?
I think it just points to the fact that computer sound is not meant to be hi-fi. There is probably not much point in buying a very expensive sound card/DAC for the PC.
This is not as complex a subject as you may think. The first step is to sample the audio waveform to convert it from an analogue signal (from the microphone) to a digital one, so that we can manipulate it in the software equaliser.
So, the starting point is to appreciate how the sound waveform is sampled. Unfortunately, I can't find a really simple explanation on the web, so this (http://en.wikipedia.org/wiki/Digital_audio) will have to do. There are lots of words which you really don't need to read - but see the second graph down from the top on the right side. There we see a sine wave, and on the graph's left axis a number ranging from 0 to 15. Across the horizontal axis we see time. So, the way that sampling works is simple: at every tick of the horizontal axis (the passing of time) the computer decides which number (left scale) between 0 and 15 is the closest approximation to the voltage of the sine wave.
So as time passes, the computer generates a string of numbers to represent the shape of the waveform. Looking at this graph, and starting at 'zero time', it looks to me like the string of numbers would be (judging by eye only) 7, 9, 11, 12, 13, 14, 15, 15, 15, 14 ... and a string like this is exactly what travels to a DAC over an optical or S/PDIF connection.
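The string-of-numbers idea is easy to try for yourself. Here's a toy Python sketch (my own illustration, not any real encoder) that samples one cycle of a sine wave at 20 ticks and quantises it to the same 16 levels as that Wikipedia graph:

```python
import math

# Sample one cycle of a sine wave at 20 ticks, quantising each
# sample to the nearest of 16 levels (0-15), as in the graph.
levels = 16
samples = []
for n in range(20):                            # 20 ticks across one cycle
    v = math.sin(2 * math.pi * n / 20)         # 'analogue' value, -1..+1
    code = round((v + 1) / 2 * (levels - 1))   # map -1..+1 onto steps 0..15
    samples.append(code)

print(samples[:10])   # → [8, 10, 12, 14, 15, 15, 15, 14, 12, 10]
```

The printed string of numbers rises to the top step (15) and falls again, just like the hand-read 7, 9, 11, 12 ... sequence above; the small differences come purely from where the ticks happen to fall.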
As a matter of interest, when Philips/Sony invented the CD concept (in the late 1970s) they had two unknowns which they had to tie down. They were:
1. How many ticks along the horizontal (time) axis would be needed to sample the waveform with tolerable accuracy? They settled upon 44,100 and 48,000 samples per second.
2. How many steps on the vertical axis, between zero and the top of the graph, were needed to code the fine detail in the waveform? If the steps were too coarse, the smooth edge of the sine wave would be approximated too roughly and the computer couldn't decide which step best described the signal.
That's the first step. Clear so far?
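For what it's worth, those two choices map directly onto two figures of merit: the sample rate caps the highest capturable frequency at half the sample rate (Nyquist), and each bit of vertical resolution adds roughly 6 dB of dynamic range. A quick back-of-envelope check in Python (my own arithmetic, not from any spec):

```python
import math

# Each extra bit doubles the number of steps, adding 20*log10(2)
# ~ 6.02 dB of dynamic range; the sample rate fs caps usable
# audio bandwidth at fs/2 (the Nyquist limit).
for bits, fs in ((8, 22050), (16, 44100), (16, 48000)):
    steps = 2 ** bits
    dr_db = 20 * math.log10(steps)
    print(f"{bits} bits / {fs} Hz: {steps} steps, "
          f"~{dr_db:.0f} dB range, audio up to {fs // 2} Hz")
```

So 16 bits at 44,100 samples per second gives about 96 dB of dynamic range and response to just above 22 kHz - comfortably past the limits of human hearing.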
14-11-2009, 02:51 PM
The first part is easy enough - at least conceptually. I read that D-to-A conversion is not so straightforward, especially with respect to handling of high frequencies. The sharp jump in level from one sample to the next can be problematic, and that's why different brands of DAC can sound different.
But how does equalization work when the signal is in digital form? Does the signal need to be converted into analog form first, or can spectrum analysis be performed, and equalization applied, purely in the digital domain?
In fact, come to think of it - I don't even understand how an equalizer works for analog waveforms. Is it an array of filters, one for each equalizer band (something like the crossover in a speaker), feeding into an array of op-amps?
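To make my question concrete: from what I've read, one common digital approach is indeed a small recursive filter ('biquad') per EQ band, run directly on the samples - no conversion back to analog needed. Here's a Python sketch of a single peaking-EQ band using the well-known Audio EQ Cookbook coefficient formulas (my own toy code; real players may do it quite differently):

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Biquad peaking-EQ coefficients (Audio EQ Cookbook formulas)."""
    amp = 10.0 ** (gain_db / 40.0)       # sqrt of the linear gain
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b0 = 1.0 + alpha * amp
    b1 = -2.0 * math.cos(w0)
    b2 = 1.0 - alpha * amp
    a0 = 1.0 + alpha / amp
    a1 = -2.0 * math.cos(w0)
    a2 = 1.0 - alpha / amp
    # normalise so the leading denominator coefficient is 1
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

def biquad(samples, coeffs):
    """Run samples through one biquad section (direct form I)."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Example: cut 100 Hz by 6 dB, as you might to tame boomy bass.
fs = 44100
tone = [math.sin(2 * math.pi * 100 * n / fs) for n in range(fs)]
cut = biquad(tone, peaking_eq_coeffs(fs, 100.0, -6.0, 1.0))
print(max(abs(s) for s in cut[fs // 2:]))   # ~0.50, i.e. the tone is ~6 dB down
```

One of these per slider, chained in series, is (as I understand it) the essence of a graphic EQ in the digital domain - an array of filters, just like the analog guess above, but done with arithmetic on the sample stream.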
OK, let's just ignore all those concerns you raise about fancy state-of-the-art analogue to digital conversion. Let's keep this simple because I'm a simple soul and certainly no mathematician.
We're at the stage now where we have sampled the waveform. All sampling means is that we've decided what number (on the left axis) to attribute to the waveform at each instant in time - at each sample tick, perhaps 44,100 times per second. The graph showed a range from 0 to 15, but that is nowhere near enough steps on the staircase to say with confidence which step the edge of the sine wave sits on. It's not a fine enough resolution; in many instances the waveform will indeed have changed between ticks, but with the poor resolution of our simple example, our encoder can't define precisely which step.
So, those clever Philips/Sony engineers conducted tests on better-than-average ears (because the CD was promoted to hi-fi fans initially, so it had to be good technically and sonically) and decided that, if CD was to be a high-fidelity carrier, what was needed was not 16 steps but about 64,000 steps*.
As computers work in on/off, yes/no, 1-and-0 logic, the engineers realised by happy coincidence that two eight-bit binary words had enough steps to code 65,536 levels - a few more than absolutely necessary - and, by another happy coincidence, the eight-bit word was exactly the length that computers digest easily: the so-called byte.
A fully saturated (analogue) signal - one as loud as it can be in the computer - must sit on the top step, with the binary code 1111111111111111, step 65,535. These two eight-bit bytes with every bit set to 1 mean the signal is 'fully saturated'; if it were a tiny bit louder or a lot louder, there is no way the ADC could define that loudness - the signal has used up all the steps, and anything louder is clipped. Conversely, an absolutely silent signal would sit on the very bottom step, with binary code 0000000000000000, step 0.
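A tiny Python sketch of that staircase, clipping included (my own illustration; I've used unsigned steps for simplicity, whereas real 16-bit PCM uses signed two's-complement, but the staircase idea is the same):

```python
# Toy 16-bit quantiser: map a -1.0..+1.0 analogue value onto an
# unsigned 16-bit step, clipping anything beyond full scale.
def to_steps(v, bits=16):
    top = 2 ** bits - 1                  # 65,535: all ones, fully saturated
    code = round((v + 1.0) / 2.0 * top)
    return max(0, min(top, code))        # out-of-range signals get clipped

print(to_steps(1.0))    # 65535 -> binary 1111111111111111
print(to_steps(1.5))    # still 65535: the 'too loud' signal is clipped
print(to_steps(-1.0))   # 0 -> binary 0000000000000000
```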
* Note: history could have played this so very differently. Philips/Sony's marketing departments looked back over the incremental progress of high-fidelity sound and equipment from the cylinder recorder to FM radio. With each decade and new technology, the frequency response widened, distortion lowered, playing time increased, and first stereo then quadraphony (on LP) appeared. To continue the progression - a marketing man's dream situation - all they had to do was achieve a technical standard significantly better than the then-best analogue system, and they could tap into the accrued goodwill of the high-fidelity heritage. But without this heritage they probably would have settled on a much lower technical standard, perhaps only as good as cassette, with only 10,000 steps or so. Lucky for hi-fi that the CD predated the PC by about ten years, because otherwise the CD would have been conceived primarily as a bulk data carrier, not a playable audio disk.
I still have my (almost working) Sony CDP-101 player, which I bought the very day CD was launched in the UK - 3 March 1983 (I remember it well), having waited years for the launch of CD. The only other player available that day was the Philips flip-top unit. Once home, I recall comparing my then good turntable with the CD (only three or four disks available that day, so not an exact comparison). In fact, on a really fine pressing of music with moderate loudness and bandwidth that didn't strain my (V15-3) pickup, the sonic differences were relatively small.
19-11-2009, 03:21 PM
Alan, I think you got your mathematics wrong. 1 byte = 8 bits = 256 steps. It takes 16 bits to give 65,536 steps.
8 bits was the bus width of the 8080 microprocessor, which was really famous in the '70s. It was the basic unit of transfer and storage in the digital domain. 8 bits is really too small to be useful, but that was what technology could achieve then. By the time the personal computer became popular, the CPU (Intel 80286) was already running on 16 bits, so the 16-bit (2-byte) unit of transfer was also referred to as a word. But this is not standardised - minicomputers were running on 32 bits by then, so a word could also mean 4 bytes. Now even our humble PC is endowed with a 64-bit bus, so nobody cares much about the byte and word units any more.
The digital encoding on CD uses 16 bits at a sampling rate of 44.1 kHz - the common notation is 16/44.1. For DVD the encoding is 16 bits at a 48 kHz sampling rate, written 16/48.
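Those two numbers fix the CD's raw data rate. A quick sanity check (my own arithmetic):

```python
# Raw data rate implied by 16/44.1 stereo (the CD format):
bits_per_sample = 16
sample_rate = 44100       # samples per second per channel
channels = 2
bps = bits_per_sample * sample_rate * channels

print(bps)                           # 1411200 bits per second
print(round(bps / 8 * 60 / 1e6, 1))  # 10.6 - megabytes per audio minute
```

That ~1.4 Mbit/s stream is what arrives at the DAC over S/PDIF, whatever equalisation has been done upstream.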
I just bought a USD140 DAC from China - and found that it can handle 16/44.1, 16/48, 16/88.2, 16/96, 16/176.4, 16/192, 24/44.1, 24/48, 24/88.2, 24/96, 24/176.4, 24/192. Very impressive indeed.
So I checked the specification of my USD650 Denon AVR - and found that it can only handle 24/96.
Hmm.... so is the China DAC really superior to the Denon AVR? I did a bit of digging and found something really interesting.... (more to come)
Alan, I think you got your mathematics wrong. 1 byte = 8 bits = 256 steps. It takes 16 bits to give 65,536 steps.
You're right: I was trying so hard to avoid the confusion between 16 steps and 16 bits that I confused myself. I've added a couple of words in bold to my post which clarify this - thanks.
Yes, I was there at the start of the PC and remember the 8086 and 8088 well, having sold millions of them to industrial customers when I worked at NEC UK. Don't forget the Z80; we sold far more of those.
BTW: by SI convention, it's a small k in kHz.