Do many people here have a good grasp of 'frequency domain' as opposed to 'time domain' in the context of the representation and manipulation of sound?
This issue has come up in a number of threads here where we are looking at visual representations of sound: what does each axis of a graphical representation tell us, and how should we interpret what we see? I am comfortable with the representation of a sine wave on a graph with amplitude on the vertical axis and time on the horizontal, and with the fact that, provided the relevant scales are included, we can read the frequency of that wave off such a representation.
The basic principles of digital audio are (I think) fairly straightforward to visualise: we divide the time axis into regular intervals (44100 samples a second for CD) and similarly divide the amplitude axis into discrete steps, the number of steps being set by the 'bit depth'; see, for example, the wiki page on pulse code modulation.
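To make that concrete, here's a rough Python sketch of those two steps, sampling and quantising (the 440 Hz tone and 10 ms duration are just illustrative numbers I've picked, not anything from the CD standard itself):

```python
import math

SAMPLE_RATE = 44100      # samples per second, as on CD
BIT_DEPTH = 16           # CD bit depth: 2**16 = 65536 amplitude levels
LEVELS = 2 ** BIT_DEPTH

def sample_sine(freq_hz, duration_s):
    """Sample a unit-amplitude sine wave at SAMPLE_RATE, quantised to BIT_DEPTH."""
    n_samples = int(SAMPLE_RATE * duration_s)
    samples = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE                        # time of this sample
        value = math.sin(2 * math.pi * freq_hz * t)
        # Quantise: map the range [-1.0, 1.0] onto the nearest integer step
        quantised = round(value * (LEVELS // 2 - 1))
        samples.append(quantised)
    return samples

samples = sample_sine(440.0, 0.01)   # 10 ms of a 440 Hz tone
print(len(samples))                  # 441 samples
print(max(samples))                  # near 32767, the 16-bit ceiling
```

The point is just that the result is a plain list of numbers indexed by time, which is exactly what the amplitude-against-time graph shows.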
But once we get to 'spectrum analysis', and especially since I finally realised that MP3 coding is done in the frequency domain, the visualisation gets a bit more difficult.
As I understand it, this means looking at amplitude with respect to frequency rather than with respect to time, and that's something I haven't quite got hold of. Clearly 'time' still has to come into it somewhere.
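My attempt at a concrete sketch of that switch of domains, using a plain discrete Fourier transform (the slow textbook version, not the fast FFT that real analysers use; the 1024 Hz sample rate and 50 Hz tone are made-up numbers chosen so each frequency bin works out to 1 Hz):

```python
import cmath
import math

SAMPLE_RATE = 1024        # illustrative rate, chosen to keep the maths simple
N = 1024                  # analyse one 1-second block, so bin k = k Hz

def dft_magnitudes(samples):
    """Plain DFT: for each frequency bin, how much amplitude is present."""
    n = len(samples)
    mags = []
    for k in range(n // 2):           # bins up to the Nyquist frequency
        s = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s) / n)
    return mags

# A 50 Hz sine over the block: still just a time-domain list of numbers
signal = [math.sin(2 * math.pi * 50 * t / SAMPLE_RATE) for t in range(N)]

mags = dft_magnitudes(signal)
peak_bin = max(range(len(mags)), key=lambda k: mags[k])
print(peak_bin)   # 50: all the energy sits in the 50 Hz bin
```

The input list is indexed by time; the output list is indexed by frequency, and the time axis has disappeared inside the sum. As I understand it, time comes back in because a spectrum analyser (or an MP3 coder) runs this on a succession of short blocks of samples, giving a spectrum per block rather than one spectrum for the whole recording, but I'd welcome correction on that.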
The purpose of the thread is to identify the most basic principles that are not widely understood; if everyone else on HUG is completely happy with swapping between time and frequency domains then all well and good.