HUG - here for all audio enthusiasts

The Harbeth User Group is the primary channel for public communication with Harbeth's HQ. If you have a 'scientific mind' and are curious about how the ear works, how it can lead us to make the right - and wrong - audio equipment decisions, and about the technical ins and outs of audio equipment - how it is designed and what choices the designer makes - then the factual Science of Audio sub-forum of HUG is your place. The objective methods of comparing audio equipment under controlled conditions have been thoroughly examined here on HUG and elsewhere; they are accessible to non-experts and can be tried out at home without deep technical knowledge. From a design perspective, today's award-winning Harbeths could not have been designed any other way.

Alternatively, if you just like chatting about audio and subjectivity rules for you, then the Subjective Soundings area is for you. If you are quite set in your subjectivity, then HUG is likely to be a bit too fact-based for you, as many of the contributors have maximised their pleasure in home music reproduction by allowing their head to rule their heart. If upon examination we think that Posts are better suited to one sub-forum than the other, they will be redirected during Moderation, which is applied throughout the site.

Questions and Posts about, for example, 'does amplifier A sound better than amplifier B' or 'which speaker stands or cables are best' are suitable for the Subjective Soundings area only, although HUG is really not the best place to have this sort of purely subjective airing.

The Moderators' decision is final in all matters and Harbeth does not necessarily agree with the contents of any member contributions, especially in the Subjective Soundings area, and has no control over external content.

That's it! Enjoy!

{Updated Oct. 2017}

The human ear is highly non-linear...


  • The human ear is highly non-linear...

    ... but it's the only internal sound sensor mechanism we have.

    The loudness of individual sounds across the audio spectrum is by far the dominant factor in how humans attribute subjective sonic quality to sound. All high fidelity experiences are inextricably bound up with the loudness of that experience.

    In summary: change the loudness and you change the subjective experience. If you want a different sonic experience, play music very loud or very quietly and see how your perception differs across a wide dynamic range.

  • #2
    Introduction: The five variables of sound

    There are only five variables that we need to know about when discussing anything factual about sound. Between these five variables we can comprehensively describe and identify a sonic experience. They are:
    1. Loudness (measured in decibels, dB) - or more accurately not loudness but level (also measured in dBs)
    2. Frequency (measured in hertz, Hz)
    3. Phase (measured in degrees or in time units)
    4. Time (measured in seconds, s)
    5. Direction (measured in degrees)

    I think that's about it. Have I missed anything?

    All of the above are measurable physical events. They are external to the observer. They exist when sound is created whether or not the observer is present. The same sonic event could be measured anywhere on earth in comparable environments and would yield the same measurements. They are facts, not opinions. They are universal truths, as far as this planet is concerned.

    Let's make some simple examples of each one of these so we share a common foundation.
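Four of these five variables are enough to synthesise a simple tone in software (direction, the fifth, needs more than one channel). Here is a rough Python sketch, assuming an arbitrary 44.1kHz sample rate and a full-scale amplitude of 1.0 as the 0dB reference:

```python
import math

def tone(level_db, freq_hz, phase_deg, duration_s, sample_rate=44100):
    """Synthesise a sine tone described by four of the five variables.

    level_db is relative to a full-scale amplitude of 1.0 (0 dB).
    Direction is ignored in this mono sketch.
    """
    amplitude = 10 ** (level_db / 20)          # dB -> linear voltage ratio
    phase_rad = math.radians(phase_deg)
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / sample_rate + phase_rad)
            for t in range(n)]

# A short burst of the 880Hz tone used in the level demonstration below, at -6dB.
samples = tone(level_db=-6, freq_hz=880, phase_deg=0, duration_s=0.01)
print(len(samples), max(samples))
```

The point of the sketch is only that these few numbers completely describe the sonic event: nothing else is needed.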
    Alan A. Shaw
    Designer, owner
    Harbeth Audio UK


    • #3
      Variable 1: Loudness (by which we really mean 'level')

      I really want to demonstrate, once and for all, how to interpret a frequency response graph or plot. Grasping the concept of being able to put a little X at various spot frequencies (e.g. 57Hz, 123Hz, 554Hz ..... 19752Hz) or along a frequency sweep (e.g. 20Hz to 20kHz swept through in 3.5 seconds) is absolutely core to our understanding. I will do my very best to make understanding level as easy and fun as possible. Armed with that tool you can use the knowledge to guide your purchase decisions. It is much easier to demonstrate level than loudness. They are linked concepts, but if we concentrate on level, we can use the same measuring tools and graphing processes to plot the frequency response (that is, the frequency v. level response) of an amplifier, CD player, microphone, speaker or room. Loudness is a rather deeper concept.

      There is nothing intimidating about frequency response plots, even though they can look a little scary. I got the idea of what they were trying to convey when I was about 13, and if a 13 year old of average intellect can grasp the underlying beauty of a frequency response chart, then we all can! All such charts do is record the level of signal (Y, vertical axis, in dBs) against frequency (X, horizontal axis). Who made the measurement(s), and how, isn't our concern; all we need to think about is what the graph is telling us about the equipment under test.

      Just so we can be absolutely sure we are sharing a common experience, let's take a single, fixed tone and step it up in level ....

      [Audio example]

      This is 880Hz, starting at -30dB, then raised to -24dB (that's 6dB level increase), then raised to -18dB (that's 6dB increase again), then -12dB (+6dB again), then -6dB (another 6dB up) and finally back to -30dB (a drop of 24dB).
      So when we talk of level, which we measure in volts and then convert to decibels (merely to allow us to handle a wide range using a log scale), this is what varying level alone sounds like. You might notice that how you feel about the tone depends upon how loud it is: your emotional response is connected to loudness. If the tone was much louder it would begin to menace you.
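The volts-to-decibels conversion mentioned above takes only a couple of lines. A small Python sketch - the 20·log10 form (rather than 10·log10) is used for voltage because power is proportional to the square of voltage:

```python
import math

def db(voltage_ratio):
    """Voltage ratio -> decibels."""
    return 20 * math.log10(voltage_ratio)

def ratio(db_value):
    """Decibels -> voltage ratio."""
    return 10 ** (db_value / 20)

print(round(ratio(6), 3))   # each 6 dB step in the clip is roughly a doubling of voltage
print(round(db(2), 2))      # conversely, doubling the voltage is roughly +6 dB
print(round(ratio(24), 2))  # the final 24 dB drop is a voltage ratio of about 16:1
```

This is why the clip's equal 6dB steps each sound like a similar-sized jump: equal dB steps are equal ratios, not equal voltage increments.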
      Alan A. Shaw
      Designer, owner
      Harbeth Audio UK


      • #4
        Variable 2: Frequency

        ... now measured in units of hertz (Hz), previously called cycles-per-second (c.p.s.) which was more intuitive.

        Frequency and level are linked. If there is zero level (zero loudness) then even if we can conceptualise 'frequency' in our mind as some sort of cyclic phenomenon, unless that frequency has a certain level or amplitude, it is not going to be measurable. And to be audible as a sound wave, a considerable amount of energy (level) has to be carried in the wave for it to exceed the human ear's minimum loudness threshold and for us to say that we can hear something. All of that we will look at later.

        It is said that all music is made up of combinations of sine waves of different frequency, level and phase relationship to each other. In one musical instrument - say, a violin - the player inputs energy through his arm muscle and the bow, and this sets the strings into resonance. That motion generates sine waves, which then set the body parts of the instrument into sympathetic vibration. Very rapidly, every part of the violin is in motion, and depending upon the material, how rigidly it is glued to adjacent materials and other factors, all these forwards-and-backwards motions of all the piece parts combine acoustically into what we hear. The motion of the back panel may be acoustically out of phase with the strings; the neck may sing a fraction of a second later than the bridge... the frequencies and levels that we hear as the sound of the instrument are a mishmash of all these sine waves, at one instant reinforcing, and at another instant cancelling, each other.

        So, if we focus on sine waves we are already in the relevant world of speech and music.

        Example of fixing the level and varying only the frequency

        [Audio example]

        That is 250Hz (or 250 cycles-per-second) followed by 800Hz and then 6300Hz. The voltage or level is exactly the same although you will probably think that the perceived loudness of the three tones is different. Which one seems louder to you?
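The standard rough model of why equal-level tones are perceived at different loudness is the A-weighting curve (standardised in IEC 61672, itself derived from equal-loudness research). A Python sketch of the published formula, applied to the three tones above; treat the figures as approximate, since A-weighting is only a crude single-curve model of the ear:

```python
import math

def a_weight_db(f):
    """Approximate A-weighting (IEC 61672) in dB relative to 1 kHz."""
    f2 = f * f
    ra = (12194 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194 ** 2)
    )
    return 20 * math.log10(ra) + 2.00  # normalised so 1 kHz reads 0 dB

for f in (250, 800, 6300):
    print(f, round(a_weight_db(f), 1))
```

The ear is substantially less sensitive at 250Hz than at 800Hz or 6300Hz, which is why three tones at identical voltage do not sound equally loud.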
        Alan A. Shaw
        Designer, owner
        Harbeth Audio UK


        • #5
          2a. Displaying a "frequency response chart" - the audio lie detector

          The frequency response plot (or chart) is our primary tool for evaluating the relationship between the input signal to a piece of equipment and the output from that equipment at individual frequencies. This combines the ideas in 1) and 2) above.

          The chart has two axes: a horizontal one covering the audio frequency range from the lowest on the left side to the highest on the right, and a vertical axis telling us about the level at any particular frequency. Audio frequency response charts were standardised over seventy years ago. In the early days, audio engineers recorded frequency response using an inked pen on pre-printed chart paper, still available here. A picture of a beautiful B&K automatic mechanical pen-chart recorder (not dissimilar to a lie detector in operation), currently for sale on Ebay, is attached. The seismometer uses exactly the same principle.

          I apologise in advance if this introductory video is simplistic. To build up this entire subject, I cannot assume any fundamental knowledge amongst our readers and to be sure we are all in-step, I thought I'd start at the basics, because if we understand and are not afraid of the frequency response chart, we have a really powerful analytical tool. If you remain a little confused, don't worry; more examples will help.

          Here is movie #1: A human pen-chart recorder ...

          [Video example]
          Alan A. Shaw
          Designer, owner
          Harbeth Audio UK


          • #6
            Variable 3: Phase

            Of all the technical parameters relating to audio, phase is perhaps the most difficult to grasp and the most misunderstood - even for me. We can visualise loudness and frequency from our everyday lives: we can readily identify the difference between a male and female voice, between someone shouting and whispering... but phase is an abstract concept which has little bearing on our ordinary lives.

            The really essential starting point is the appreciation that 'phase' cannot exist as an absolute. When we introduce the notion of phase we have to mention (whether we expressly say this to ourselves or take it for granted) that what we mean is 'phase relative to something'. That something could be a static event, or it could be a moving event. The event could be some sort of periodic time marker, such as the hands on a clock, or it could be a variable event such as an audio frequency which could itself be varying.

            So, the whole notion of phase mandates that variables around the measurement of that phase are tied down. To talk of 'absolute phase' is really meaningless unless we specify what absolute means in this context, in an expanding universe where there really are no absolutes other than (as far as we know) time. When audiophiles introduce the concept of 'absolute phase' they probably mean that the wave front generated by our loudspeaker is in step, or in-phase, with the wave front generated by the musician. For example, if the tympanist hits the big drum with a single clean strike, the skin initially moves inwards under impact, then flexes outwards and back in again cyclically creating the note. If the entire recording process from microphone to loudspeaker is in-phase with the drum, then the speaker cone should initially move in, then out as it generates the same air pressure experience at home. But the note we hear would be exactly the same if there was a phase shift anywhere along the recording/replay chain: if the microphone or speakers were 180º out of phase (half a cycle out of phase) our wave front at home would be initially the speaker cone moving outwards, then inwards - the opposite of how the drummer generated the sound wave - but it should sound the same even though the phase is reversed. But when the orchestra is playing together and the sounds are all mixed, there is no way we can be sure of absolute phase any more: the whole sound experience is of randomised phase.

            Here is an example of taking a 440Hz tone and shifting its phase relative to its start point in time. What we will hear is 440Hz at zero phase, silence, then that same 440Hz shifted by 90º, 180º (relatively out of phase), 270º and 360º. If you hear any difference between the original tone and the subsequent shifted tones, this is a trick of your brain. Your speakers could be 180º reversed (red to black, black to red), or the microphone or CD player or amp or any other component, but you would not expect to hear that phase shift providing that it is constant as the music plays.
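The claim that a constant phase shift is inaudible is consistent with simple measurement: a steady tone's level (here its RMS value) is unchanged by any fixed phase offset. A small Python check, assuming a 44.1kHz sample rate and measuring over exactly one second, i.e. a whole number of 440Hz cycles:

```python
import math

def rms(samples):
    """Root-mean-square value of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

sample_rate = 44100
n = sample_rate  # one second: exactly 440 whole cycles of a 440 Hz tone
for phase_deg in (0, 90, 180, 270, 360):
    phi = math.radians(phase_deg)
    tone = [math.sin(2 * math.pi * 440 * t / sample_rate + phi) for t in range(n)]
    print(phase_deg, round(rms(tone), 4))
```

Every phase offset gives the same RMS figure: a constant phase shift moves the waveform in time but changes nothing measurable about the steady tone itself.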

            [Audio example]
            Phase shifting of spot tone

            Now, I said that phase always has to be relative to something. It's just like saying that the moon has phases and that, here on earth with the sun and moon in a certain position relative to us, we can say that this evening we will see the moon in the third phase (270º) of its lunar cycle. In fact, we can look up the exact phase of the moon seen from earth here. But viewed from another perspective, say from behind the moon - what we call the dark side of the moon - the moon would appear to be very differently illuminated, and with an entirely different optical phase. So phase critically depends on our vantage point. Stating phase without stating the reference against which we are evaluating it is meaningless; as meaningless as saying a car travels at 55. Fifty five what? 55 km/h? 55 miles per hour? 55 football pitches per week? If we are in the UK or USA (or some former colonies) where the national standard is miles per hour, then our speedometers are graduated in mph and, when we say that we were 'doing 70 when we spotted the blue flashing light behind us', it's implied that we mean 70mph. So we do have to be careful with phase to always remember that it is relative to something, implied or explicit.

            If we take the same 440Hz tone, we can process it to give it a sliding phase so that one full audio cycle simulates the 28 day lunar cycle. This wave starts off, as it were, with full illumination, dims to darkness in the middle of the cycle and then returns to full at the end*. All we have done is take the fixed tone [A] and mix it with itself [B]. Initially B is perfectly in-step, or in-phase, with A, so the effect is of a loud tone. Then, as B starts to slip out of phase with A, it partially cancels A until it is 180º out of phase and exactly cancels A: we hear nothing. Then B slowly starts to slip back into phase with A and the signal level increases back to full again.
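The cancellation described above can be checked numerically: mixing a tone with a copy of itself gives full level at 0º offset, reduced level at 90º, and silence at 180º. A Python sketch, again assuming a 44.1kHz sample rate and a whole number of cycles:

```python
import math

def mixed_rms(delta_deg, freq=440, sample_rate=44100):
    """RMS of a tone mixed with a copy of itself shifted by delta_deg."""
    d = math.radians(delta_deg)
    n = sample_rate  # one second: a whole number of cycles at 440 Hz
    s = [math.sin(2 * math.pi * freq * t / sample_rate)
         + math.sin(2 * math.pi * freq * t / sample_rate + d)
         for t in range(n)]
    return math.sqrt(sum(x * x for x in s) / n)

for deg in (0, 90, 180):
    print(deg, round(mixed_rms(deg), 4))
```

Analytically, the mixed amplitude is 2·cos(offset/2) times the single tone, which is why the 90º case sits part-way between full level and total cancellation.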

            [Audio example]
            Phase shifting through lunar cycle

            *In fact, my audio example is 180° out of phase with what we call the lunar cycle. We say that full moon, the brightest, is in the middle of the lunar cycle and new moon, darkness, is at the start of the cycle. My audio example starts with full moon, dims to new moon and reverts to full moon. Again, the event cycle is the same, but in effect I'm starting half way through the lunar cycle .... my example is phase shifted relative to the lunar optical cycle by 14 days or so. Have you been on holiday in the opposite hemisphere and noticed how different the moon appears during its cycle - here. The new/full moon points are the same, but the appearance of the in-between stages is different. Phase is all about relative vantage point.
            Alan A. Shaw
            Designer, owner
            Harbeth Audio UK


            • #7
              Liberation v. deception v. the realities of the ear

              Originally posted by Pharos View Post
              The 'closed mind problem' is not exclusive to audio, if we analyse the behaviour of many people in other areas of life, we will see much adhesion to old outdated beliefs, also supported by irrationality.

              This really is a sad phenomenon because it reflects an outlook which, by limiting information input to the individual, also reduces his ability to grow, and thus I argue lowers his quality of life, resulting in an immature state of development.

              The insistent holding of beliefs which are demonstrably illusions can only be to provide a comfort zone for the individual concerned, but he is then in constant conflict, trying to deflect information which counters or threatens his belief system. Often the result is exhibited neurotic behaviour.

              It is actually liberating to dispose of this tendency, and accept that life is full of change which requires adaptation; and in the micro-world of audio we see 'flat earthers' often desperately trying to hang on to fixed and outdated older beliefs.

              Letting go of investments in illusion is actually so liberating, workload being diverted from desperately trying to maintain archaic beliefs to enquiry and research into the reality of the world, and acceptance of all its incremental evolution.
              I completely agree: facing the facts and making the best of the real situation as it presents itself seems obvious to rationalists, but to others could be a confrontation that they would move heaven and earth to avoid. One wonders whether such an individual could compartmentalise his thought processes so that, for example, in his interaction with audio equipment he can be guided only by his spiritual beliefs in the equipment unsupported by any objectivity at all, yet in other areas outside of that box, in his personal relationships with his wife/family/friends or in the workplace can be wholly rational.

              I don't think that can be assumed. I have met eminent professional people at the very peak of their abilities who have the most absurdly illogical, unsubstantiated, child-like views about audio equipment, so much so that it's crossed my mind that they were teasing me. After one or two disguised probing questions it's obvious that they wouldn't be able to keep a straight face if they were fooling around, but on they plough, relentlessly. To think that one could be on the slab, knife poised, about to be tinkered with by a man who only the previous evening had 'invested' $60 in audiophile phone-*** as described in the above post. Where does his rational belief system begin and end? Might he have his own private views about how the internal organs work? Very worrying indeed!

              Listening to Choral Evensong yesterday afternoon as I was driving and mulling over these issues, I remain hugely empathetic with those who have taken a rigid, if demonstrably misguided position on uncontrolled audio comparisons. Our task, if we have any at all, is to de-mystify the subject so that we can illuminate the sort of normal human foibles that creep into subjective comparisons, so that decisions can be better made. That's it. There is no other agenda. We don't need to stimulate sales - they take care of themselves, thanks to you.

              If ten audiophiles (what would the collective noun be? Audiotroupe?) were gathered and asked to rank their number one audio holy grail, setting aside all the waffly stuff like 'recreating the experience' etc. etc., I'd suspect that sooner or later the word 'linearity' would emerge. Now, that's a small word with big implications, and one of those is that there is a fixed relationship between the performers, the music, the microphone, the electronics chain and finally the speakers. If at one instant the music increases by 3.5649123% at frequency XYZ.ABCHz, then the sound emitted from the speakers should, in an ideal and unachievable world, also increase by 3.5649123% at frequency XYZ.ABCHz. For a rationalist, an objectivist, a scientist and someone in pursuit of truly high fidelity sound at home, that's surely the entire game, from start to finish. For them, there is no other reason to invest in quality home audio; it's just so much simpler and less hassle than actually going to the concert hall, which, if one lived at the Barbican, one would do several times a month. For them, it's as close a substitute as possible for the live, hall experience, warts and all. Not bigger. Not smaller. Not richer, not brighter, not more 'detailed', not deeper, not wider, not taller, not sharper, not softer, not more focused, not better stereo, not more or less distortion.... just as close to the same as technology and room considerations permit. Oh, and minus the cost and the coughs.

              And how about linearity? If a wholly rationalist Spock-like, unemotional objective alien was passing by and just happened to hear something curious at the Barbican such that he detoured and smuggled himself into the hall, then surreptitiously followed our friend upstairs to his apartment to hear his hifi system, what would he conclude as the weakest, most non-linear part of the recording chain?

              It is, of course, the speakers and the human ear, but for differing reasons. The speakers perform virtually identically whether they are playing at transistor radio level (you need to put your ear near them to hear anything at all) or when they are so loud that the neighbours are banging on your door. They have, for all practical purposes, exactly the same bass/mid/top sound balance when playing at 0.01W or 100W. Practically, their frequency response is the same regardless of how hard they are working, and the same goes for the microphone and all the audio electronics right through the entire chain.

              But what about the ear? Ah, well: that's a completely different situation. The one thing the ear is not, is linear. You would have to work hard to design, from scratch, an audio system that is as non-linear as the ear. It is wildly non-linear, and that's just for a good one, in perfect condition, age perhaps 15 or so. From age 20 or so, it's all downhill, and if you were designing a really close copy of the ear's non-linearity, you would have to build-in natural age related loss, disease and illness, exposure damage and so on. No self-styled audiophile should set himself up as a judge of audio equipment unless he has made an effort to understand in basic terms just how the ear works. It's all here on the internet at the click of the mouse. Free. You just have to be curious.
              Alan A. Shaw
              Designer, owner
              Harbeth Audio UK


              • #8
                Getting to grips with the ear - fundamental problems of measurement

                I have a plan to explore the sensitivity of the ear. First some ground work.

                We can measure the frequency response of the ear conceptually the same way that we would measure the frequency response of any component of the audio chain from the microphones onwards.

                When we measure the frequency response of, say, a microphone or amplifier, all we are really doing, and expressing, is the relationship between whatever goes into the device compared with what comes out of it, frequency by frequency. In the case of a microphone, we apply calibrated sound waves to its diaphragm, and we plot on a graph, frequency by frequency, the voltage that it generates at each of those frequencies. For a loudspeaker we reverse the process: we apply a calibrated electrical voltage input signal to the speaker terminals, and using a microphone that we have pre-calibrated to give a known (and ideally perfect) voltage output from the sound waves that press on it, we can determine the frequency response of the loudspeaker.
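The input-versus-output comparison described above reduces, frequency by frequency, to a simple ratio expressed in dB. A minimal Python sketch; the frequencies and amplitudes here are made up purely for illustration, not real measurements of any device:

```python
import math

def response_db(input_levels, output_levels):
    """Frequency response: output/input ratio in dB at each measured frequency.

    Both arguments map frequency (Hz) -> measured amplitude (e.g. volts or pascals).
    """
    return {f: round(20 * math.log10(output_levels[f] / input_levels[f]), 2)
            for f in input_levels}

stimulus = {100: 1.0, 1000: 1.0, 10000: 1.0}    # same drive level at every frequency
measured = {100: 0.89, 1000: 1.0, 10000: 1.12}  # hypothetical device output
print(response_db(stimulus, measured))
```

Conceptually this is all a frequency response measurement is, whether the device under test is a microphone, an amplifier, a loudspeaker - or, if only we could tap its output, an ear.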

                Now, the ear is a little more tricky to measure. Conceptually, we have the same issue as measuring a microphone or speaker (or amplifier, or any other audio component). We are comparing its sensitivity to the input stimulus - in the case of the ear, the sound wave passing down the ear canal - to its 'output'. How do we know what the 'output' of the ear is? Fundamentally, that must be the electrical activity that the sound wave generates deep inside the ear, frequency by frequency. And how do we know how much electrical activity is generated in the ear? Ah well, that's a bit tricky to say for sure without surgery to place electrodes on the nerve from the ear to the brain. Would you volunteer yourself for that procedure? I blooming wouldn't!

                Problem: we cannot directly measure the frequency response (the input > output sensitivity) of the ear without elective surgery, but we do need to know its characteristics accurately; quite a challenge.

                Before we look at that a bit more deeply, may I suggest that a look at the short video here would be helpful in explaining how the frequency response of a piece of audio equipment is normally presented, graphically. We'll need to be comfortable with the presentation of a frequency response plot/graph to make any sense of this subject as we delve deeper. Link here.
                Alan A. Shaw
                Designer, owner
                Harbeth Audio UK


                • #9
                  Interpreting frequency response plots (charts) - 1

                  The more one thinks about the incremental knowledge-building journey that I undertook across the audio desert - equipped with a rucksack containing just a notepad, articles from the great, much missed Wireless World of the 1970s, ditto HiFiNews from the same era, as many BBC Monographs as I could carry, and a plentiful supply of water, unsure how long the journey would take (about 20 years) - the more carefully and efficiently I have to pick a route for you. Less meandering, far less wasteful of energy, straight to the oasis.

                  So, in the last post I touched on frequency response, being the (usually) graphical representation of the relationship between something going into a device along the audio chain and whatever comes out of it, frequency by frequency. In audio work we always use decibels as our unit of measurement, and we can use sound decibels and voltage decibels; very conveniently indeed, voltage decibels can be plotted on the same graph as sound pressure decibels. That means from the sound pressing onto the diaphragm of the microphone (pressure dBs), through the electrical conversion to a voltage in the mic, right through the power amp output into the speakers, and then the sound pressure dBs generated by the speakers, we have one common, interchangeable dB scale. Just put that on one side for a moment.

                  The beautiful significance of this ability to hop from sound pressure to volts, stay with volts through the audio chain, then convert back to pressure thanks to the loudspeakers, took me some time to fully appreciate, but as we'll see, it is wonderful that we don't have to think about pressures or voltages as independent things: we can just think about audio dBs right through the audio chain, from input to output. It's brilliant: it makes our life a hundred times easier when we are comparing the frequency response of any and every part of the audio chain because (I'll explain this better later) if the frequency response plot shows some dB deviation at some frequency or other, it actually doesn't matter where in the chain, from the microphone to the speakers, that deviation is: the sonic effect is the same.

                  See how that was a eureka moment for me? It meant, at a stroke, that there was a wholly causal relationship between a deviation from a flat frequency response and its sonic effect, and it didn't matter if the deviation was in the microphone, the mixing console, the CD/turntable, the amplifier or the speakers: the sonic effect would be identical. As you can see, that means that if we measure the frequency response of any and every part of the audio chain, mic to speakers, we are armed with the essential tools to predict, very reliably and accurately, how that system will sound. Neat eh!
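The reason dB deviations behave the same wherever they occur in the chain is simple arithmetic: cascaded linear gains multiply, and multiplying ratios is the same as adding their dB equivalents. A short Python illustration with hypothetical deviation figures for each link at one frequency:

```python
import math

# Hypothetical dB deviations at one frequency for each link in the chain.
chain_deviation_db = {"microphone": -1.5, "console": 0.0,
                      "amplifier": +0.5, "speaker": +2.0}

# Cascaded gains multiply in linear (voltage-ratio) terms...
linear = 1.0
for dev in chain_deviation_db.values():
    linear *= 10 ** (dev / 20)

# ...which is exactly equivalent to simply adding the dB figures.
total_db = sum(chain_deviation_db.values())
print(round(20 * math.log10(linear), 2), round(total_db, 2))
```

Both routes give the same overall deviation, which is why a +1dB bump contributed by the microphone is sonically indistinguishable from the same +1dB bump contributed by the speaker.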

                  So, we need to be really, really comfortable reading frequency response plots. They're not difficult; in fact, reading a musical score is vastly more difficult. If we can see what they are telling us we're well on the way to not only understanding audio properly, but to understanding how the ear works.

                  The next step then, just to check our core understanding of the frequency response chart is for me to be a human pen-chart recorder* again, and draw two frequency responses, A (red) and B (blue). The first thing to notice is that most of the red and blue traces lie left to right, low frequency to high frequency directly on what I'm calling my 0dB lines, which have a completely arbitrary place on the vertical scale. I've marked my two 0dB reference lines, and a ±5dB band either side. Because we can plot our curves along any ruled line, left to right, I've squeezed these two frequency response curves onto one piece of graph paper and I could have got another two traces on that same paper, but it would have started to look a mess. Thanks to dB scaling allowing me to put two graphs neatly on one piece of paper, I've saved a tree! The core point is that unlike, say, temperature or voltage which if we were measuring would have to have a graph with a correctly scaled vertical axis, when we work in dBs we are working with relative units. That means we can place our 0dB line along any ruled line, providing that we make it clear to the reader which line we are calling 0dB. OK with this?


                  As you can see, curve A has a bump at around 1.5kHz of about +6dB, and curve B has a depression at about 1.5kHz of about -6dB. Guessing that you are not familiar with interpreting frequency response plots, how similar or dissimilar do you think these two frequency responses are likely to sound? A little different? A lot different? Almost indistinguishable from a ruler-flat response running along the 0dB line?

                  Please bear in mind this essential point. If we were led blindfolded into the listening room, blindfold removed, allowed to make a frequency response measurement with a microphone of the sound in the room, and the plots looked like A or B above, we would have no way of knowing by listening and/or measurement which element in the chain from mic > amps > speakers > room itself was responsible. It could be just one element, or a combination of more than one: example, microphone characteristic plus speaker characteristic.

                  * As I typed this I was taken back some 43 years or so to when, as a teenager, the beautiful elegance of the automatic motor-driven pen-chart recorders featured in Wireless World, and the lovely graphs they produced, fascinated me. How could I have my own audio pen-chart recorder on pocket money? Well, laughable though it is now, somehow or other I acquired an ex-admiralty barometer. I didn't have the mechanical knowledge to couple it to a motor-driven oscillator, and spent years hunting for a way of synchronising it to a sweep oscillator. Imagine my excitement when, upon taking over Harbeth in 1986, among the assets was a B&K pen-chart recorder!
                  Alan A. Shaw
                  Designer, owner
                  Harbeth Audio UK


                  • #10
                    I have been questioning some of my audiophile assumptions of late, one of which has remained stubbornly set in stone. I am very certain that I could always detect the difference between a recording of a conversational human voice and an actual person speaking, even if they were speaking a language completely foreign to me.

                    I am not sure I could do the same with a singing voice under the same circumstances in a musical setting.

                    I feel that the human voice and our special relationship to language are so subtle and profound, our intellect so deeply imbued with a necessary empathetic understanding of tone and inflection, that we hear the spoken word in a completely different way than when we listen to music. We continually sample it for meaning, context, subtext, veracity, sincerity or the lack thereof.

                    And that music, as deep and moving and mysterious as its working of magic upon us is, is still a primitive shorthand compared to the way we listen to the human voice and language.

                    Is the conversational human voice more difficult to record and play back convincingly than a symphony orchestra, and if so, why?

                    I was watching a DVD of “Out of Africa” just the other night, with Meryl Streep and Robert Redford as Karen Blixen and Denys Hattan. Two pieces of technology featured in the film: the first was the aeroplane that Hattan learns to fly in short order, the second a wind-up gramophone much like the one used for HMV branding purposes (without the dog).

                    It was with some delight that I watched the two main characters sitting outside, immersed in and entranced by Mozart, deeply moved by the music and perhaps a little by the technology of the gramophone itself.


                    • #11
                      Has anybody heard this in person?


                      • #12
                        Yes, that exhibit came to our local art gallery a few years ago. I believe something similar was done where Alan contributed a number of P3ESRs.


                        • #13
                          "Yes, that exhibit came to our local art gallery a few years ago. I believe something similar was done where Alan contributed a number of P3ESRs"

                          Thanks Don, I wish I had known about it when it came to Australia (Tasmania I think). It must have been quite a sound experience by the looks of it.

                          Note to A.S.: just finished Michael Foley's "The Age of Absurdity" - great book, thanks for the tip.


                          • #14
                            I heard it live in King's College Chapel. I am not sure what the special benefit of 40 separate speakers would be. For me the beauty of the piece is about the whole.