Announcement

HUG - here for all audio enthusiasts

The Harbeth User Group is the primary channel for public communication with Harbeth's HQ. If you have a 'scientific mind' and are curious about how the ear works, how it can lead us to make the right - and wrong - audio equipment decisions, and about the technical ins and outs of audio equipment, how it's designed and what choices the designer makes, then the factual Science of Audio sub-forum area of HUG is your place. The objective methods of comparing audio equipment under controlled conditions have been thoroughly examined here on HUG and elsewhere; they should be accessible to non-experts and can be tried out at home without deep technical knowledge. From a design perspective, today's award-winning Harbeths could not have been designed any other way.

Alternatively, if you just like chatting about audio and subjectivity rules for you, then the Subjective Soundings area is for you. If you are quite set in your subjectivity, then HUG is likely to be a bit too fact-based for you, as many of the contributors have maximised their pleasure in home music reproduction by allowing their head to rule their heart. If upon examination we think that posts are better suited to one sub-forum than the other, they will be redirected during Moderation, which is applied throughout the site.

Questions and posts about, for example, 'does amplifier A sound better than amplifier B' or 'which speaker stands or cables are best' are suitable for the Subjective Soundings area only, although HUG is really not the best place for this sort of purely subjective airing.

The Moderators' decision is final in all matters and Harbeth does not necessarily agree with the contents of any member contributions, especially in the Subjective Soundings area, and has no control over external content.

That's it! Enjoy!

{Updated Oct. 2017}

The role of the loudspeaker


  • #31
    Originally posted by A.S. View Post
    Consider this vital point:
    • The microphone has made a precision measurement of the sound pressure modulation in a tiny matchbox-size volume of air around its capsule diaphragm. It cannot sense the air pressure anywhere in the studio other than where it is placed. That means, it cannot sense precisely the sound pressure waves 1cm or 10cm or 1m away from where it is placed.
    • The microphone is a precision sampling device, but only sampling at the exact point in the studio where it is mounted. Nowhere else.
    • The loudspeaker, in comparison, on replay, has a (woofer) surface area very much bigger than that of the microphone capsules. In fact, about 250 times greater.
    • That means that the speaker's pressure generating surface area - the woofer - cannot make a precision air pressure displacement of the matchbox-volume that the microphone captured. In effect, the loudspeaker has smothered and averaged and degraded the microphone's sensing precision and blasted sound pressure out into the room from the point in the room where the speaker is located.
    • To perfectly reproduce the sound waves sampled by the tiny microphone diaphragm, the speaker diaphragm would have to be no bigger than it. Clearly, the woofer is very much bigger. The tweeter diaphragm is approximately the same diameter (and hence surface area) as the microphone capsule .... but where is it in the listening room?
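The "about 250 times" figure in the quoted bullet points is easy to sanity-check. The sketch below uses hypothetical but typical dimensions (a half-inch measurement capsule and an 8-inch woofer cone); neither figure is given in the thread:

```python
import math

# Hypothetical illustrative dimensions, not taken from the post:
mic_diaphragm_d_mm = 12.7   # a typical 1/2-inch measurement-microphone capsule
woofer_cone_d_mm = 200.0    # a typical 8-inch woofer's radiating diameter

def disc_area(diameter):
    """Area of a circular diaphragm of the given diameter."""
    return math.pi * (diameter / 2) ** 2

ratio = disc_area(woofer_cone_d_mm) / disc_area(mic_diaphragm_d_mm)
print(f"Woofer/microphone surface-area ratio: about {ratio:.0f}x")
```

With these assumed diameters the ratio comes out at roughly 248, consistent with the "about 250 times" figure in the post.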
    Just a quick thought whilst it's still there:

    The microphone as a sampler: we all know how moving about a room changes the sound we hear, and some bass frequencies don't even become obvious until we reach a certain distance from the speakers, so a microphone must capture a very limited portion, with a particular frequency balance, of what emanates from the source.
    Getting to know my C7ES3



    • #32
      Competition among fruit and vegetable traders ....

      Originally posted by allthumbs View Post
      OK, I will have a stab at this. The microphone in the studio is inundated with competing signals from various signal sources that “compete” for the available air pressure in the studio to arrive at a very small but static solitary target at the same time in some semblance of balance providing a cohesive unified “wave”, like pouring a bucket of water into a funnel to fill a bottle....
      Interesting comment. I wonder if that is a generally held view among the public.

      Could we break your statement down for further consideration? I'm interested in your notion of sound waves 'competing' and of a 'cohesive wave'. That implies some prioritisation, or intelligence, or a certain repeatable mechanism in the final resulting sound wave.

      Imagine then that you are in a busy, noisy street market where many traders are trying to attract customers by shouting tempting offers. As we stand and listen, we can switch our attention from hearing a general cacophony of sound to being able to select one by one the various voices, and filter out the noises of cars, bicycle bells, passing buses, sirens, aircraft and so on. These sounds are perfectly distinct, and we can follow complete sentences without interruption. How do you suppose those individual and quite discrete sounds are able to co-exist in what seems to be, to the listener, isolation from each other?

      How does that everyday sound experience stack-up with the idea of sound waves competing?
      Alan A. Shaw
      Designer, owner
      Harbeth Audio UK



      • #33
        Originally posted by acroyear View Post
        The microphone as a sampler: we all know how moving about a room changes the sound we hear, and some bass frequencies don't even become obvious until we reach a certain distance from the speakers, so a microphone must capture a very limited portion, with a particular frequency balance, of what emanates from the source.
        Ummm. That's actually two quite distinct issues. I'm going to edit out the room-mode bit for later discussion, leaving:

        Originally posted by acroyear View Post
        The microphone as a sampler ... [and] must capture a very limited portion with a particular frequency balance of what emanates from the source.
        Perhaps we can further clarify that into what I think you mean, here:

        Originally posted by acroyear View Post
        The microphone as a sampler ... [and] must capture a very limited portion of the frequency range/sound power of what emanates from the source.
        Is that what you mean? And if so, bearing in mind that the microphone has a practically weightless diaphragm, perhaps a million times lighter than the woofer cone which we know can cover a very wide frequency and loudness range, by what mechanism could the tiny, lightweight microphone capsule be so exceedingly selective in its operation as a pressure sensor?
        Alan A. Shaw
        Designer, owner
        Harbeth Audio UK



        • #34
          From post #30, 31 above

          A.S., “stab” may have been a little optimistic; a frenzied blind attack may have been closer.

          What I meant by a unified wave was a distilled sound, and here I have to expand a little on what I meant. Your remark about “what if we could see sound waves” informed my thoughts. I imagined beams of sound travelling in all sorts of directions, colliding and interfering with each other, producing secondary resonances or cancelling each other out, or being diminished before flowing on in a diluted or enhanced form.

          A trio of musicians playing live in the studio recorded by a single microphone was my starting point, a bass instrument, another midrange instrument and something sopranoish like a piccolo flute. I pictured in my mind that all three instruments playing at the same time would have not just different frequency ranges but at some point they would share overlapping frequencies. During orchestral concerts I find that when the whole shebang is going at it pretty vigorously various instruments disappear and reappear in the soundscape. Maybe it is my hearing, or is there a certain amount of common cancellation going on, a low note on a flute being muted by competing resonance of a cello string for instance?

          I have seen studio musicians separated by baffle walls, or sometimes individual players are behind closed doors in different studiolettes, with headphones, so they can hear but not necessarily see the other players of the ensemble. That's what I meant by the manipulation of the studio space ensuring some kind of balanced tone, volume, so that one instrument is not drowning out another, leakage or bleeding or such like.

          When I think of your market analogy, I have been conscious of stopping and focusing on a particular aspect of the soundscape and, by doing so, being able to filter out other stuff to hear quite distinctly what I am listening for, to such a degree that it becomes the most prominent feature of the surrounding soundscape. I can only surmise that pitch/frequency/volume are the determining factors of how sounds compete to make themselves heard in a cacophony, for example.

          But although I hear all of the sounds simultaneously, I am able to distinguish distance; a truck grating through its gears a couple of blocks away still sounds large compared to a bicycle bell's ring nearer in the vicinity.



          • #35
            Originally posted by A.S. View Post
            Ummm. That's actually two quite distinct issues. I'm going to edit out the room-mode bit for later discussion, leaving:

            Perhaps we can further clarify that into what I think you mean, here:

            Is that what you mean? And if so, bearing in mind that the microphone has a practically weightless diaphragm, perhaps a million times lighter than the woofer cone which we know can cover a very wide frequency and loudness range, by what mechanism could the tiny, lightweight microphone capsule be so exceedingly selective in its operation as a pressure sensor?
            AS, you mention leaving out 'room modes' 'til later, but the way I was seeing this was imagining the microphone capturing the sound balance at that point, which will have its own frequency signature moment by moment due in part to the environment the microphone and musicians find themselves in (i.e. the room, or not, and relative placing). I assume the microphone is not selective in and of itself. Hope that clarifies.
            Getting to know my C7ES3



            • #36
              Originally posted by allthumbs View Post
              ...A trio of musicians playing live in the studio recorded by a single microphone was my starting point, a bass instrument, another midrange instrument and something sopranoish like a piccolo flute. I pictured in my mind that all three instruments playing at the same time would have not just different frequency ranges but at some point they would share overlapping frequencies....
              Ummmm.

              Let's go back to the market analogy. What we experience there is, say, 50 traders shouting their messages in an unstructured, disconnected way, starting and stopping when they like, unchoreographed, randomly (asynchronously), and all voices of different frequency pitches, loudness and so on. If sound waves worked as you suggest, what we would hear at any instant would be the result of, as suggested, constructive and destructive interference between the various competing sounds. A sort of second-by-second 'average', which would of course be an unintelligible, swooshing random noise. It would sound, to the observer, as if he was listening underwater. Human communication would be impossible.

              But as we know, when we apply our attention, we can hear, distinctly and intelligibly, many voices. And the same must logically apply to sitting in a concert hall hearing a live concert. We do experience in all situations the fact that a louder sound drowns out a quieter sound. So, standing in our street market, watching from 20 metres away and listening to a banana vendor shouting 'a lovvvverly bunch, Missus, come and get it!', we see his lips move, but a bus passing behind us smothers his sound. We see his lips continue to move, but we cannot hear him.

              This is simply a matter of the dominant loudness (i.e. sound power) of one source masking (a well-researched property of the human auditory system) another sound source, and we will hear the banana seller again only when the relationship between the loudness of the bus and his voice rebalances itself to the human ear (but not necessarily to the 'ear' of a passing alien, whose pressure-sensing system may not exhibit a pressure-masking phenomenon). Or, indeed, to the microphone, which, unlike the ear, treats all pressure waves around it equally and without favour.

              So how do sound waves carry all the information from an infinite number of sound sources discretely, without jumbling them all up? Because that is, actually, how they behave.

              And what you have unwittingly and correctly described as your concert hall experience is the issue that we've been promoting here for over ten years: the relationship between sound loudness and sound quality to the human ear, although it is widely ignored, disregarded and disbelieved among audiophiles. We'll keep that issue for later.
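The physical basis for discrete sounds co-existing without corrupting each other is linear superposition: at everyday levels, air pressure contributions simply add, and each frequency component can be read back out intact. A minimal numerical sketch (the tone frequencies and amplitudes are arbitrary illustrative choices, not from the thread):

```python
import math

SR, N = 8000, 8000              # sample rate (Hz) and one second of samples
f_bass, f_voice = 100.0, 1000.0  # a "bass" tone and a "voice" tone
a_bass, a_voice = 1.0, 0.3

# Superposition: the two pressure waves simply add, sample by sample.
mix = [a_bass * math.sin(2 * math.pi * f_bass * n / SR) +
       a_voice * math.sin(2 * math.pi * f_voice * n / SR)
       for n in range(N)]

def amplitude_at(signal, freq):
    """Correlate against a sine/cosine pair to measure one component."""
    s = sum(x * math.sin(2 * math.pi * freq * n / SR) for n, x in enumerate(signal))
    c = sum(x * math.cos(2 * math.pi * freq * n / SR) for n, x in enumerate(signal))
    return 2 * math.hypot(s, c) / len(signal)

print(amplitude_at(mix, f_bass))   # ~1.0: the bass tone, unchanged by the mix
print(amplitude_at(mix, f_voice))  # ~0.3: the high tone, riding along intact
```

Neither tone has altered the other: the "jumbling up" never happens in a linear medium, which is why attention (or a filter) can pull each voice back out of the cacophony.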
              Alan A. Shaw
              Designer, owner
              Harbeth Audio UK



              • #37
                I am trying to do this à la Faraday, so I am avoiding Google and Wikipedia etc.

                Going back to the market scenario: in the middle of the market, if I set up three (or 4 or 6 or 8 or...) identical microphones an equal distance apart as a hub, connected to recording devices that add no signal themselves, how will the recorded information differ, if at all, from one microphone to the other?

                I am now imagining a sea of sound in which we swim and, a bit like Schrödinger's cat, it is our observation that determines the status of that sound (cat), but that doesn't make any sense at all.

                So how we hear and how we listen are two different things? Two completely different modes sharing the same engineering; no wonder the confusion.

                As an aside, here in the local area I hear every now and then a low tone, a “hoop” noise, very smooth and quite low. It is the sound of a native owl. The sound travels very distinctly and quite far against the urban background noise, much the same as whale song travels through water.

                Back to your original question “to inflate the room with sound”. Yes.



                • #38
                  Originally posted by allthumbs View Post
                  I am trying do this a la Faraday, so I am avoiding Google and Wikipedia etc.

                  Going back to the market scenario: in the middle of the market, if I set up three (or 4 or 6 or 8 or...) identical microphones an equal distance apart as a hub, connected to recording devices that add no signal themselves, how will the recorded information differ, if at all, from one microphone to the other?

                  I am now imagining a sea of sound in which we swim and, a bit like Schrödinger's cat, it is our observation that determines the status of that sound (cat), but that doesn't make any sense at all.

                  So how we hear and how we listen are two different things? Two completely different modes sharing the same engineering no wonder the confusion.

                  As an aside here in the local area I hear every now and then a low tone, a “hoop” noise, very smooth and quite low. It is the sound of a native owl. The sound travels very distinctly and quite far against the urban background noise, much the same as whale song travels through water.

                  Back to your original question “to inflate the room with sound”. Yes.
                  Original observation beats Google every time! Prof. Faraday lived to create questions in his mind, and then set about illuminating them by personal observation. Conclusion-drawing he left to others who followed in his steps - James Clerk Maxwell, for example. Here. But we do not need to understand any maths equations to make sense of sound if we develop a way of internally visualising sound waves.

                  Going back to the market scenario: in the middle of the market, if I set up three (or 4 or 6 or 8 or...) identical microphones an equal distance apart as a hub, connected to recording devices that add no signal themselves, how will the recorded information differ, if at all, from one microphone to the other?
                  Ah, good idea. Do you think that the relative distance between these microphones (be it 10 cm or 1 m or whatever) will be relevant to what we would hear if we had the ability to switch between the microphones' outputs and do a sonic comparison between them?
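The question of whether microphone spacing matters has a quantifiable side. Taking the speed of sound in air as roughly 343 m/s (an assumed round figure), even small spacings create measurable arrival-time differences, and summing two such microphones produces comb-filter cancellations. The spacings below are illustrative only:

```python
c = 343.0   # approximate speed of sound in air, m/s (assumed figure)

def delay_ms(extra_path_m):
    """Extra arrival time at a microphone whose path is longer by this much."""
    return 1000.0 * extra_path_m / c

for spacing in (0.1, 1.0):               # 10 cm and 1 m extra path length
    first_null = c / (2.0 * spacing)     # first cancellation if the pair is summed
    print(f"{spacing} m -> {delay_ms(spacing):.2f} ms delay; "
          f"mixing the pair first cancels near {first_null:.0f} Hz")
```

So two "identical" microphones a metre apart record measurably different signals, and mixing their outputs colours the result, which is exactly why the relative spacing is worth asking about.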

                  I'd also like to pass on this observation relating to the discreteness of individual sounds as carried by sound waves in the environment. Where we live in a village the ambient noise is low, and the birds can be clearly heard singing and calling to each other. When I cut my grass with an electric Flymo, which is quite noisy in its vicinity, it always surprises me that the birds singing in the nearby trees a few metres away are not frightened by the noise. They seem quite oblivious to the mower's noise, and sing undisturbed.

                  My curiosity is as to why they are not frightened. Is it that their tiny ears are tuned by evolution to higher-pitched sounds - other bird noises - and greatly attenuate noises at lower pitches, such as my mower? That's plausible until you consider that when a loud, impulsive, dominantly low-frequency sound is generated - say, a farmer's shotgun, a car backfiring, or an outside door being slammed shut - every species of bird will take flight and scatter immediately. Since most of the energy in those sounds is predominantly in the low frequencies, does that discredit my theory of the limited LF acuity of bird hearing?
                  Alan A. Shaw
                  Designer, owner
                  Harbeth Audio UK



                  • #39
                    A loud unexpected bang will make any animal jump, I would think. Many years ago I spoke to the projectionist at a cinema after a showing of the film Apocalypse Now in 70mm. He told me that he would look out through the projection portholes at the moment in the film when a tiger leaps through the jungle foliage towards the camera (from memory there was a musical note and a tiger's roar), and at that exact moment he would turn the volume up just to watch the cinema-goers jump in their seats. He said that with a smile and a great sense of satisfaction.

                    There is a peal of thunder at the beginning of a Moody Blues album (yes, Moody Blues) and even on my picnic spinner at the time it would get me every time.

                    I was thinking of the microphone experiment at the market place, with the microphones standing in place of the same number of humans, as I would imagine that if you asked each individual what they hear at that moment, each answer would be markedly different.

                    Sound must be a very resilient and flexible phenomenon, as it is in most cases identifiable to those that witness it. A peal of thunder, a bowed violin or the thwack of a drum would be recognisable to most people on the planet in one form or another, no matter the physical circumstances: inside, outside, in hot weather or cold, at high altitude or in the nethermost geographical regions. So despite the circumstances or the environment, sounds are resilient enough to deliver their discrete packets of data universally, wherever air exists.

                    So as a speaker manufacturer and designer, what perfect alignment of circumstances are you trying to wrangle to get close to the reproduction of actual sounds? Is it like herding cats? A saxophone played by a true jazz master will sound different, according to an interview I read with, I think, Sonny Rollins, because the shape of the cavity of the player's mouth or the sweep of his jaw determines where the mouthpiece sits. Despite that, we know what a saxophone sounds like.

                    It would appear to me that you are endeavouring to build into your speaker designs the same flexibility and resilience that enables the natural phenomenon of sound to be so phenomenal! More power to your arm.



                    • #40
                      The role of the loudspeaker:

                      1. To convert an electrical audio signal into a corresponding sound (air pressure in the listening room).
                      2. To do so in such a way that the timbre of reproduced acoustic instruments resembles, as far as possible, their sound heard naturally in the venue where they were recorded.
                      3. To create a soundstage (musical scene) resembling that of the venue or studio where the recording took place (or that created by the sound engineer at the console).
                      4. To enable the listener to recognise well the separation and location of instruments, singers etc. in the venue / studio where the recording took place.
                      5. To give the listener the impression that the dynamics of the recorded music are satisfactory enough to be heard as lively / vibrant / engaging at home (in domestic listening).
                      6. Not to be the worst enemy of one's wife due to its ugliness ....

                      ATB
                      Last edited by pkwba; 15-09-2017, 03:17 PM. Reason: Since previous answering regarding birds issue was a great mistake I deleted it and I answer on what is the role of the loudspeaker.



                      • #41
                        So, where are we with this subject? We seem to have gone off track (again - my fault for introducing the bird issue - big mistake).

                        We don't seem to be an inch closer to understanding how sound waves can carry audio information without becoming all jumbled up.

                        Back to a re-read of the street-market and concert hall discussion some posts ago.
                        Alan A. Shaw
                        Designer, owner
                        Harbeth Audio UK



                        • #42
                          Originally posted by A.S. View Post
                          We don't seem to be an inch closer to understanding how sound waves can carry audio information without becoming all jumbled up.
                          Having just scanned this thread, I suspect the answer is a fairly technical one and certainly one based on a scientific approach to understanding it.

                          I would be interested to have a layman's explanation, but I have no interest in trying to guess at something I know nothing about.

                          Can we have an explanation?



                          • #43
                            Have a close look at the ripples shown on water (which behaves as air does in the listening room). Here.

                            You'll note some 'wide' peaks and troughs (i.e. low frequency waves) and some much closer ripples (i.e. high frequency waves). Observations:
                            • The low frequency waves run, approximately, north-west to south-east (or perhaps south-east to north-west; at any rate, diagonally across the image). Their generating source is out of view. Because they fill the entire image, one can surmise that the source of these low frequency waves is a considerable distance away from our ducks.
                            • The high frequency waves are created at the leading edge of the ducks and are generated practically as point sources. If we positioned ourselves in front of the ducks, we would not see the high frequency ripples, nor, in a sound sense, would we hear them. They radiate right to left. This has tremendous significance in the matter of where to place the recording microphone(s). We need to consider the naturally occurring dispersion of the low, medium and high frequencies of each instrument, or we could place the mic in the instrument's sonic dead zone, at least in some frequency bands. That is quite a skill set.

                            This is what we find in that street market example. The bass notes (long wavelengths, low frequencies) generated by passing traffic fill the entire market with low frequency sound of practically equal intensity regardless of direction. The high frequencies are equivalent to the voices of the various traders shouting their wares, which are very definitely point sources, the sources being the mouths of the sellers.
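The different behaviour of bass and treble in the market follows directly from wavelength. Taking the speed of sound in air as roughly 343 m/s (an assumed round figure), wavelength is simply the speed divided by the frequency:

```python
c = 343.0   # approximate speed of sound in air at room temperature, m/s

# Wavelength = c / f, across the audio band:
for f in (50, 200, 1000, 5000, 15000):
    print(f"{f:>5} Hz -> wavelength {c / f:7.3f} m")
```

A 50 Hz traffic rumble has a wavelength near 7 m, comparable to the market itself, so it diffracts around stalls and fills the space evenly; a 5 kHz component of a shouted voice is only about 7 cm long and behaves far more like a directional point source, just like the duck ripples.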

                            You can also hear the effect with aircraft approaching and then passing overhead. The low frequency rumble radiating from the exhaust of the engines behind the aircraft envelops the aircraft in LF sound, which then radiates in all directions, including in front of the aircraft. We hear the plane approaching from miles away. The high frequency components from the front of the aircraft engine - fan blade rotation sounds - only become audible as the aircraft is practically overhead, and disappear altogether as the aircraft passes by.

                            See how these very different frequencies remain distinct, happily co-existing, the high frequency waves 'riding on' the low frequency waves. Even as the high frequency waves radiate away from their source, they remain discrete and 'clean'. Unless certain pitch and timing interrelationship factors come into play, these independent frequencies will remain unaware of each other, and uncorrupted by each other.

                            What we are limbering up to is a comparison between the sound field in a studio, captured by one or two microphones placed in spot positions in the recording venue, and the recreation of that sound field by the loudspeaker in the replay room.
                            Alan A. Shaw
                            Designer, owner
                            Harbeth Audio UK



                            • #44
                              The animation of the rolling wheel for a split second made sense to me, but then it quickly dissolved into tears. Do sound waves displace particles of air like an icebreaker, or do they in some cases hitch a ride, wholly or partially, on air particles, or is it a mixture of those two operations?

                              I do not understand how such a wide range of pressure waves do not interfere with each other, and how, in almost Goldilocks proportion, their discrete characters of data are carried and preserved over distances and arrive with what I take to be very little loss of information.



                              • #45
                                Originally posted by allthumbs View Post
                                ... I do not understand how such a wide range of pressure waves do not interfere with each other and in almost a Goldilocks proportion their discrete characters of data are carried and preserved over distances and arrive with what I take to be very little loss of information.
                                You may not understand the mechanism by which that physically occurs (yet), but logically, isn't that what you experience in ordinary life, out and about? Sounds are discrete and remain so. The triangle player's instrument rings out discretely, unpolluted by the other instruments, even the bass ones. If we carefully and impartially observe, as Prof. Faraday did, we are well on the road to understanding.

                                The first step in human understanding of the world is really careful observation, and documentation. (Sadly, we are not intelligent enough to visualise the interplay of the world purely intellectually; we have to proceed through long observation and postulation of theory, then further observation refining the theory.*)

                                Is it not common experience that multiple conversations, generated by multiple sound sources (mouths), can co-exist completely independently of each other, such that if we tune in to this or that one, we can follow the conversation? Would not intelligible human communication be impossible if all sounds fused into one indistinguishable sonic blob? Do we not commonly observe that the RGB lights in your TV screen retain their independence on their way to your eye, and does not visual communication totally rely on that independence, as does sound?

                                So what would happen if, say, low frequency and high frequency sound were to lose their independence? Well, we have examples of that, and to the trained ear it generates a most unpleasant experience. The groove in the gramophone record must, logically, attempt to contain all frequencies in the usable audio spectrum to be able to deliver a satisfactory listening experience. Unlike sound waves, though, which retain their independence across the audio spectrum as they pass through air, the shape of the groove wall must ideally represent, instant by instant, the sound waves captured at the microphone. The problem is that the stylus/cartridge/arm mechanics, especially stiffness and mass, cannot, through inertia, trace the high tones faithfully, and worse, the motion of the stylus/cartridge/arm is dominated by the low frequency information, which throws the relatively massive stylus/cartridge/arm about.

                                The consequence is known as Tracking or Tracing Distortion, and Intermodulation Distortion, where there is inescapable cross-pollution of frequencies by other frequencies, possibly of the order of tens of percent distortion, which is audible. The digital system has none of these problems. The trained ear will notice that when the loudness of a gramophone track increases significantly (for example, a loud tutti), the sonic quality degrades and becomes 'dirty' or foggy, the roar of the ever-present background surface noise becomes elevated, and, if there are clear solo performers, curious anharmonic tones appear in their performance which stick out like proverbial sore thumbs.
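The intermodulation mechanism can be sketched numerically: pass a two-tone signal through a mildly curved (nonlinear) transfer function and a new anharmonic tone appears at the difference frequency, which no linear medium would produce. The specific tones and the 0.2 curvature coefficient below are arbitrary illustrative choices, not a model of any real cartridge:

```python
import math

SR, N = 8000, 8000              # sample rate (Hz) and one second of samples
f_lo, f_hi = 100.0, 1000.0      # the two recorded tones

def tone(f, n):
    return math.sin(2 * math.pi * f * n / SR)

clean = [tone(f_lo, n) + 0.5 * tone(f_hi, n) for n in range(N)]

# A hypothetical stylus/arm nonlinearity: the transfer curve bends slightly.
dirty = [x + 0.2 * x * x for x in clean]

def level(signal, freq):
    """Amplitude of one frequency component, by sine/cosine correlation."""
    s = sum(x * math.sin(2 * math.pi * freq * n / SR) for n, x in enumerate(signal))
    c = sum(x * math.cos(2 * math.pi * freq * n / SR) for n, x in enumerate(signal))
    return 2 * math.hypot(s, c) / len(signal)

diff = f_hi - f_lo   # 900 Hz: a tone present in neither input
print(level(clean, diff))  # ~0.0: the linear signal contains nothing at 900 Hz
print(level(dirty, diff))  # ~0.1: the nonlinearity has created an IM product
```

The 900 Hz product is harmonically unrelated to either input tone, which is why intermodulation sounds "dirty" rather than merely louder.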

                                * We are totally at the mercy of our organic senses. We have no other way of interacting with our environment.
                                Alan A. Shaw
                                Designer, owner
                                Harbeth Audio UK

