Announcement


HUG - here for all audio enthusiasts

Since its inception ten years ago, the Harbeth User Group's ambition has been to create a lasting knowledge archive. Knowledge is based on facts and observations. Knowledge is timeless. Knowledge is human-independent and replicable. However, we live in a new world where, thanks to social media, 'facts' have become flexible and personal. HUG operates in that real world.

HUG has two approaches to contributors' Posts. If you have, like us, a scientific mind and are curious about how the ear works, how it can lead us to make the right - and wrong - decisions, and about the technical ins and outs of audio equipment, how it's designed and what choices the designer makes, then the factual area of HUG is for you. The objective methods of comparing audio equipment under controlled conditions have been thoroughly examined here on HUG and elsewhere, and can be easily understood and tried with negligible technical knowledge.

Alternatively, if you just like chatting about audio and subjectivity rules for you, then the Subjective Soundings sub-forum is for you. If upon examination we think that Posts are better suited to one sub-forum than the other, they will be redirected during Moderation, which is applied throughout the site.

Questions and Posts about, for example, 'does amplifier A sound better than amplifier B' or 'which speaker stands or cables are best' are suitable for the Subjective Soundings area.

The Moderators' decision is final in all matters regarding what appears here. That said, very few Posts are rejected. HUG Moderation individually spell- and layout-checks Posts for clarity but, due to the workload, from Oct. 2016 Posts in the Subjective Soundings area will not be checked. We regret that, but we are unable to accept Posts that present what we consider to be free advertising for products that Harbeth does not make.

That's it! Enjoy!

{Updated Nov. 2016A}

Source comparison - listening test: analogue outputs from different CD players


  • #31
    I have the original CD of The Royal Scam MCLD 19083 which I bought at least twenty years ago, before the loudness wars started. I also have the LP.

    Comparison between my original track 1, Kid Charlemagne (the one you used for A-D) and your clip A is shown on the attached plot. The original is quieter (less light green waveform plot) and has plenty of dark green headroom around the waveform with nice, occasional peaks. Your examples are much louder, have a much smaller dynamic range (less dark green, more light green). Watching the level meters, if I increase the loudness of my original by an overall 5dB to peak at about the same as your A, it clips. So this implies that either you, or your repackaged/remasted CD has signal compression and loudness increase compared with my original CD.

    If that's true, and assuming that your sole objective was to demonstrate the sonic difference (or not) between CD transports, using a compressed audio example was a dubious choice perhaps as it has introduced unwanted variable into the experiment?

    Here is a comparison between your A (as before) and my CD track; both 320kb MP3.

    Loading the player ...
    Clip A

    My original CD (Clip G)

    Loading the player ...

    I'd guess that, had you never heard the original CD, you would be attracted to the louder, 'more involving' sound of the compressed A.

    Now, it's very easy to turn my original into a more 'dynamic', modern sound by applying hard limiting - see attached images. Now we can compare your A with my hardened version of the original CD sound, H. Which do you prefer now?
    Clip H: Hardened version of my original CD

    So after nearly three hours analysing, I'm somewhat confused as to what we are really comparing between A, B, C and D. Is it really four raw, unadjusted audio tracks exactly as they appear at the analogue outputs of the devices with no more than some simple gain adjustment?


    P.S. Out of curiosity, by how much can the original CD clip (G) be increased in loudness without clipping? In other words, preserving the intentions of the original 1976 mastering engineer, how much can we turn up the loudness? The answer is a tiny 0.53dB - completely inaudible. Beyond that level increase, we change the sonic intentions of the original recording. We become part of the artistic process, and we should never be that.
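
    That 0.53dB is simply the distance between the highest peak in the clip and 0dBFS (digital full scale). A minimal sketch of the calculation, under the same assumptions (numpy, soundfile, hypothetical file name) as above:

    [CODE]
    import numpy as np
    import soundfile as sf

    x, fs = sf.read("clip_G.wav")               # hypothetical file name
    peak = np.max(np.abs(x))
    headroom_db = -20 * np.log10(peak)          # gain available before clipping
    print(f"Maximum clean boost: {headroom_db:.2f} dB")
    [/CODE]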

    Here is the original CD clip again (G, as above)

    Clip G again

    and here is the absolute maximum gain increase before we're changing the artist's intentions:

    Maximum boost (Clip J)

    See picture. You will not be able to hear the 0.53dB level increase between clips G and J.

    It may not be obvious exactly which part of the entire song is the very loudest, reaching 0dB, fully saturated. Of the two points I've marked in the picture, it's in the middle of this clip:

    Clip K: the absolute loudest note in the entire track is the piano chord <<< here
    Attached Files
    Alan A. Shaw
    Designer, owner
    Harbeth Audio UK



    • #32
      1dB louder equals better sound?

      Originally posted by A.S. View Post
      .... However, toggling between these two plots shows a level jump, so zoomed-in we can see that D is (exactly) 1dB louder than A overall.

      Originally posted by STHLS5 View Post
      ....... If I have to make a choice then it will be D, A, C and B.
      I don't know what to say. I have previously taken the audiocheck test and I couldn't tell a difference of 1dB using test tones. And now, my preference was based solely on a 1dB difference? Could this mean we listen to music differently compared to test tones?

      ST



      • #33
        Looks to me like D, A, C and B are random selections which, given enough flips of the coin, may change. Yes, we do analyse tones differently to music. The reason must be this: we can give our entire mental processing power, within the ear's mechanical limitations, to concentrating on just the tone and nothing else but the tone. We can turn ourselves into a single-pitch laboratory instrument.

        But listening to and interpreting music means that we have to divide that focus over many aspects of what we are hearing: pitch, loudness, harmonics, rhythm, speed, interpretation of what is being said/sung, various emotional connotations, considerations of distortion and spectral balance, fidelity, and so on. No surprise that loudness resolution, as a single aspect, diminishes - right? And of course, we do not analyse music as a test instrument or audio analyser does.

        Consider this (some homework for you!) .... how fast does one cell in the nerve fibre electrochemically pass its charge (that's how the message is conveyed) to the next cell? And how long is the nerve from the ear to the brain? And how many cells are there in that chain? And what is the total transmission time from the first to the last cell? Now the interesting bit ..... if that time exceeds the periodicity of a high frequency waveform ... how do we sense HF at all because the signal has ceased or changed before the brain is even aware that it arrived at the ear?
        Alan A. Shaw
        Designer, owner
        Harbeth Audio UK



        • #34
          Hot mastering and MP3?

          It saddens me a little when I see how easily I (as an "audiophile", haha) am tricked by hot mastering. Even though I know what it is about!

          One thing that came to my mind (because at some point it is a DIFFERENT record afterwards, which is what Mr Shaw means by becoming part of the artistic process):

          How much dynamic compression does MP3 apply by itself? Because, in a way, if MP3 results in less dynamic compression than "remastering", it could well be possible that MP3 is somehow nearer to the (recorded) original, and hence better than these "improved" CDs!!



          • #35
            Boiling MP3 encode/decode down to the basics

            Originally posted by thurston View Post
            ...How much dynamic compression does MP3 apply by itself? Because, in a way, if MP3 results in less dynamic compression than "remastering", it could well be possible that MP3 is somehow nearer to the (recorded) original, and hence better than these "improved" CDs!!
            Let's boil MP3 down to the bare bones ....

            First, MP3 encoding is a patented technology. The patents are held by the Fraunhofer Institute in Germany. They are watertight. And they are watertight because they are based on a thorough understanding of how the human ear works. And that means that they cannot be circumvented, bypassed or possibly even improved. These patents are the complete, concise and proven intellectual model of how to encode and decode audio using the minimum amount of data to describe the audio so that, at different bit rates, users are satisfied with the sound. The business brains behind the Fraunhofer Institute have to be congratulated for turning astute observations about the ear into a multi-million pound licence revenue stream. If only the hifi industry had such a firm technical foundation. They follow in the steps of Ray Dolby who, a generation before, used his understanding of the ear to give us noise reduction technology and, in return, his own IP revenue stream.

            The objective of a coding/decoding system is to reduce the amount of data (information) in the stream of bits that passes from one piece of equipment to another. That reduces circuit complexity, component count, cost and size, and increases reliability. Since coding systems are about discarding data that cannot be heard (even if the waves can be seen and measured by technical equipment), there will be two ways of attacking the selection of what to keep and what to throw away ....

            Frequency based analysis: if one or some notes in music swamp others, there is absolutely no point in coding those that are swamped.

            DIY experiment to prove frequency masking in the human ear: turn on a portable radio and place it 2-3m from your ear. Borrow your wife's hair dryer and waft it around your head as if you were drying your hair. The radio's sound is masked by the hair dryer until either you turn up the radio until it exceeds the masking threshold of your ear, or you turn off or turn down the hair dryer. You have just demonstrated how frequency masking works.

            Temporal analysis: sounds fancy, doesn't it, but all it means is the coder being sensitive to what happens as time passes and the music progresses. Crude explanation: you can think of this as making an analogue tape recording of a song and then chopping it into small pieces with a pair of scissors. Each piece or 'block' of the song may be anything from a millimetre to many centimetres long - you decide where to cut. Clearly, if you want to make precision technical decisions on how to encode any slice of tape, it would be best to make those slices as small as possible - perhaps only 1 or 2mm long, just enough for one note to morph into the next in the music. There is no need to have much more resolution than the tempo of a typical tune, because if the notes don't change more often than so many times a second, then there really is no point encoding all the slices; all that would be necessary is to encode one and then say 'repeat that for 7 slices', or whatever. That would save a lot of data. It's very lucky that music proceeds at a regular pace and is not random. If it were random, like hiss, the encoder would not be able to make predictions about what was to follow, and it would not be able to discard data for fear that it had thrown away signal of fundamental importance.

            So these two attacks are made on the audio waveform in tandem. And MP3 works extremely well even when 90% or more of the waveform data is discarded, which of course means that the data rate between equipment A and B can be reduced by 90%. Or, alternatively, for the same original data rate, nine more channels can be carried along the same digital pipe. Or even a mix of audio and video. Or audio + data + video.
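
            As a toy illustration only - nothing like the real Fraunhofer psychoacoustic model - the Python sketch below (assuming numpy is installed) slices a signal into MP3-like blocks of roughly 26ms and throws away any frequency component sitting more than an arbitrary 40dB below the strongest component in that block, a crude stand-in for masking, then reports how much of the frequency data survived.

            [CODE]
            import numpy as np

            def toy_encode(x, fs, block_ms=26, keep_within_db=40):
                block = int(fs * block_ms / 1000)      # MP3-like block of ~26ms
                kept, total = 0, 0
                out = np.zeros_like(x)
                for start in range(0, len(x) - block, block):
                    spectrum = np.fft.rfft(x[start:start + block])
                    level_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
                    keep = level_db > (level_db.max() - keep_within_db)  # crude 'masking'
                    kept += keep.sum()
                    total += keep.size
                    out[start:start + block] = np.fft.irfft(spectrum * keep, n=block)
                print(f"kept {100 * kept / total:.1f}% of the frequency data")
                return out

            # Example: a 1kHz tone alongside a very much quieter 3kHz tone,
            # which the crude threshold judges inaudible and discards.
            fs = 44100
            t = np.arange(fs) / fs
            signal = np.sin(2 * np.pi * 1000 * t) + 0.001 * np.sin(2 * np.pi * 3000 * t)
            decoded = toy_encode(signal, fs)
            [/CODE]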
            Alan A. Shaw
            Designer, owner
            Harbeth Audio UK



            • #36
              Originally posted by A.S. View Post
              ....

              Consider this (some homework for you!) .... how fast does one cell in the nerve fibre electrochemically pass its charge (that's how the message is conveyed) to the next cell? And how long is the nerve from the ear to the brain? And how many cells are there in that chain? And what is the total transmission time from the first to the last cell? Now the interesting bit ..... if that time exceeds the periodicity of a high frequency waveform ... how do we sense HF at all because the signal has ceased or changed before the brain is even aware that it arrived at the ear?
              It may take a week or so to finish reading the 91 pages of Neuronal Signalling. So far I know it takes about 2 ms to pass from one neuron to another, and the conduction speed varies from 1 to 120 metres per second. I am guessing that evolution would have made our auditory and visual senses transmit at the fastest speed. That's why the eyes and ears are located close to the brain. ........ I doubt anyone would even want to know this on HUG.

              Could you please explain your last sentence? That part is interesting and I doubt I can find the answer in the 91 pages.

              Thurston, I agree that some MP3 recordings sound better than a poor analogue copy of a CD. In fact, some of the oldies that I used to hear on LP sound so much better over FM radio than on CD.

              ST



              • #37
                Originally posted by STHLS5 View Post
                So far I know it takes about 2 ms to pass from one neuron to another, and the conduction speed varies from 1 to 120 metres per second. I am guessing that evolution would have made our auditory and visual senses transmit at the fastest speed. That's why the eyes and ears are located close to the brain. ........ I doubt anyone would even want to know this on HUG.

                Could you please explain your last sentence? That part is interesting and I doubt I can find the answer in the 91 pages....
                Well if they don't appreciate how their own ear/brain works perhaps they are going to be putty in the hands of marketeers!

                OK, you say that for one cell to pass an electrical impulse to another takes 2ms. That's two thousandths of a second. Very, very slow indeed. In fact, that is a signal that crawls along the nervous system at a snail's pace (literally).

                I don't have much time to spend on this but here is an excellent video of the basics of the outer and inner ear ... here.

                Here is a good walk through (all we need to know) about nerves here. Specifically here, we can appreciate that the entire electrical transmission system along our nerves is made possible by the presence of chemicals potassium and sodium: without these in the correct balance, we cannot sense the world around us.

                Let's make a guess or two. Let's say that the nerve-to-nerve transmission of the electrical signal from ear to brain is at a rate of 10m per second. The way the signal passes from cell to cell is the same as water is passed along a fireman's bucket chain, with each bucket being in-line. You can see that this is a really slow process.

                I don't know how long the auditory nerve from the ear to the brain is, but let's make another guess: let's say 10cm. As we have said that the transmission rate is 10m per second, for our 10cm nerve channel (0.1m) it would take one hundredth of a second (0.01s) for the signal to pass from ear to brain along the guesstimated 10cm nerve. That's a time equivalent to about half a cycle of the 50Hz mains frequency - almost an eternity. Cross check: if you say that the cell-to-cell transmission is 2ms (0.002s), then that implies that there are (0.01 / 0.002 =) 5 cells in the line. Is it possible that each cell is (0.1m / 5 =) 2cm long? Is my maths right? Does it sound credible?

                Now, suppose we consider what musical note has a period equivalent to 2ms. That will be one cycle of the frequency (1/0.002 =) 500Hz. That surely means that the chemistry in any one cell takes as long as a complete single cycle of 500Hz to pass to the next cell the news that a sound has been detected by the ear. What happens if the detected sound, for simplification, is a higher frequency of 600Hz, or 1000Hz, or 10,000Hz? The period of a 10,000Hz (10kHz) tone is (1 / 10000 =) 0.1ms - twenty times shorter than the inter-cell transmission time. So just as cell A starts to think about ramping up its electrical charge to send it to cell B, the tone has completed its cycle and perhaps ceased altogether.
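
                To keep the sums honest, here is the same back-of-envelope arithmetic as a tiny Python sketch; every input figure is one of the guesses above, not a measured value.

                [CODE]
                conduction_speed = 10.0   # metres per second (guessed)
                nerve_length = 0.10       # metres, i.e. 10cm (guessed)
                cell_delay = 0.002        # seconds per cell-to-cell handover, i.e. 2ms

                travel_time = nerve_length / conduction_speed   # 0.01s ear to brain
                cells_in_chain = travel_time / cell_delay       # about 5 cells
                cell_length = nerve_length / cells_in_chain     # about 2cm each

                print(f"ear-to-brain travel time: {travel_time * 1000:.0f} ms")
                print(f"cells in the chain: {cells_in_chain:.0f}, each about {cell_length * 100:.0f} cm long")

                for freq_hz in (500, 1000, 10_000):
                    period = 1.0 / freq_hz
                    print(f"{freq_hz:>6} Hz: period {period * 1000:.2f} ms; "
                          f"one 2ms cell handover spans {cell_delay / period:.0f} full cycle(s)")
                [/CODE]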

                Clearly, the way the ear senses frequencies with a period shorter than the inter-cell transmission time cannot be as discrete frequencies. The nerve channel would be far, far too slow, and that high-frequency 'snap' sound that told of an approaching predator simply wouldn't be processed in time to make our escape unless the 'snap' sound persisted for a long time. But we know that high frequency sounds are often impulsive in nature - they come and go very fast.
                Alan A. Shaw
                Designer, owner
                Harbeth Audio UK



                • #38
                  Human hearing

                  Originally posted by A.S. View Post
                  ......
                  I don't know how long the auditory nerve from the ear to the brain is, but let's make another guess: let's say 10cm. As we have said that the transmission rate is 10m per second, for our 10cm nerve channel (0.1m) it would take one hundredth of a second (0.01s) for the signal to pass from ear to brain along the guesstimated 10cm nerve. That's a time equivalent to about half a cycle of the 50Hz mains frequency - almost an eternity. Cross check: if you say that the cell-to-cell transmission is 2ms (0.002s), then that implies that there are (0.01 / 0.002 =) 5 cells in the line. Is it possible that each cell is (0.1m / 5 =) 2cm long? Is my maths right? Does it sound credible?

                  Now, suppose we consider what musical note has a period equivalent to 2ms. That will be one cycle of the frequency (1/0.002 =) 500Hz. That surely means that the chemistry in any one cell takes as long as a complete single cycle of 500Hz to pass to the next cell the news that a sound has been detected by the ear. What happens if the detected sound, for simplification, is a higher frequency of 600Hz, or 1000Hz, or 10,000Hz? The period of a 10,000Hz (10kHz) tone is (1 / 10000 =) 0.1ms - twenty times shorter than the inter-cell transmission time. So just as cell A starts to think about ramping up its electrical charge to send it to cell B, the tone has completed its cycle and perhaps ceased altogether.

                  ...

                  Cells (neurons) cannot be 2cm long, though the calculation seems to be correct. Otherwise we wouldn't need an electron microscope to see them. Neuron sizes vary from 0.004mm to 0.1mm.

                  Before we consider the transmission time for the brain to receive the signal, we should consider the speed of sound itself, which is about 3 times faster than what our sensory transmission is capable of. Theoretically, sound hits our ears with three times more information than our brain could ever be capable of processing at a given time. But that doesn't stop us from processing the sound accurately.

                  Looking at another mammal besides humans: bats are capable of emitting frequencies at 200 kHz, lasting just about 100 ms, for the purpose of echolocation. How do bats process the sound? Being mammals, their neurons function similarly to humans'.

                  I think the brain, even if it is slow to process the sound, has enough buffers to store the signals until it processes them. Just like sending a digital photograph from the computer screen to the printer: it may be slow, but the end result will be an exact replica of what you see on the screen. That's how the brain should work too, IMHO.

                  There are many things not known about the functions of the brain. Only recently researchers found out that the exact location where our brain processes speech is far from where it was originally thought to be. And in other research, they found that there are two channels in the ears that transmit sound to the brain: one to carry the signal at the onset of the impulse and another at the offset, contrary to the long-held belief that all signals travel through the same pathway. (I read this on Sciencedaily.com.) Another interesting fact: some parts of the ears of adult males are at least 30% larger than females'. No wonder there are more guys than girls in the audiophile world.

                  ST



                  • #39
                    Firstly, many thanks Alan for posts 30 and 31 - I have come back to them several times over the past couple of days and will continue to do so as my understanding of their implications increases at each reading.

                    Secondly - some time ago (around 18 months or so) discussions of mp3 encoding had progressed quite a long way on HUG, and I don't intend to sidetrack this thread back into that area. However, one issue that really stuck with me was that the encoding does not simply remove information, it also creates noise, and 'masking' is the rug under which this noise is swept. Reading further into this, the issue that I arrived at was 'quantization', and at that point I'm afraid I ran out of ability to understand the maths - or even the implications of the maths.
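
                    As a minimal, self-contained illustration of what quantisation noise is - plain uniform quantisation of the samples, not the masking-shaped, frequency-domain quantisation that MP3 actually performs - the sketch below (assuming numpy) rounds a clean tone to a limited set of levels; what is left over is noise, and the familiar rule of thumb of roughly 6dB of signal-to-noise ratio per bit drops straight out of it.

                    [CODE]
                    import numpy as np

                    fs = 44100
                    t = np.arange(fs) / fs
                    x = 0.9 * np.sin(2 * np.pi * 1000 * t)        # a clean 1kHz tone

                    for bits in (16, 12, 8, 4):
                        step = 2.0 / (2 ** bits)                  # size of one quantisation step
                        quantised = np.round(x / step) * step
                        noise = quantised - x
                        snr_db = 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))
                        print(f"{bits:>2} bits: SNR about {snr_db:.1f} dB")
                    [/CODE]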

                    One of the practical consequences of this created noise is that repeated encodings of a file can have much more detrimental effects on quality than might at first be imagined. There was a BBC technical paper on this subject which was cited on HUG, but I haven't checked whether the reference is still current.

                    Edit: my post relating to the above paper click here

                    from the technical paper:

                    Introduction

                    The production and broadcast of audio is a technically complex operation. The audio signal will typically pass through several distinct processes including recording, sending to the studio, postproduction and so on. Increasingly, people have been turning to bitrate reduction to reduce the cost, or to increase the speed, of these processes. In isolation, the impact on audio quality of a single application of bitrate reduction can appear negligible. However, the reality is that the cumulative effects of bitrate reduction throughout the broadcast chain is far from negligible. If each process removes all redundant audio information, or uses the signal to mask the noise being introduced, then the next process might have nothing left to remove, or will see previously introduced noise as signal to be used to mask more noise.
                    full paper still available here
                    Last edited by weaver; 04-03-2012, 12:17 PM. Reason: further information



                    • #40
                      Originally posted by weaver View Post
                      Firstly, many thanks Alan for posts 30 and 31 - I have come back to them several times over the past couple of days and will continue to do so as my understanding of their implications increases at each reading.
                      +1 thanks to Alan. I have also been back on and off, but my base was quite low, and this weekend I discovered that YouTube can be more effective than Wikipedia with some types of explanations. And my attention was lost when you guys got to cells....

                      After all this, going back to the original question, are we concluding that we are not able to tell the differences between the samples by listening? However, Alan's analysis seems to say that the CD appears to have had compression applied to it, and therefore we are unable to conclude that it is a reliable source for the purpose of this exercise?
                      Simpli-Fi: Kuzma>Nighthawk>LebenCS300XS>P3ESR



                      • #41
                        ...did not really understand everything in the previous posts yet, but I'll keep trying.

                        One thing I did, however, was to buy an (old, un-remastered) version of Steely Dan's "Aja" via eBay. I have the remastered version here as well (borrowed from a friend) and liked the sound, although I can honestly say that it felt somewhat loud and too overwhelming sometimes.

                        (Other side-effect: because so many people think of CDs as something of the past, nearly like vinyl, there are lots of bargains to be had second-hand. And usually the unremastered versions are cheaper. Had to buy Led Zeppelin's IV as well - looking forward to Sandy Denny on "Battle Of Evermore"... ahh!)



                        • #42
                          Interesting homepage: http://www.dr.loudness-war.info/

                          Lots of CDs were tested there to quantify their loudness/compression. If they are right, my fears regarding "Aja" are not justified.

                          Extremely difficult to trust your own ears, I am nearly dumb it seems!!



                          • #43
                            Multiple encode-decode-re-encode audio files

                            Originally posted by weaver View Post
                            Firstly, many thanks Alan for posts 30 and 31 - I have come back to them several times over the past couple of days and will continue to do so as my understanding of their implications increases at each reading.

                            Secondly - some time ago (around 18 months or so) discussions of mp3 encoding had progressed quite a long way on HUG, and I don't intend to sidetrack this thread back into that area. However, one issue that really stuck with me was that the encoding does not simply remove information, it also creates noise, and 'masking' is the rug under which this noise is swept. Reading further into this, the issue that I arrived at was 'quantization' ...
                            Thanks for feedback.

                            Actually, a few years ago when MP3 was a hot topic, I gave a short presentation to a pro audio conference about this very issue of quantisation, and the implications of multiple encode-decode-re-encode cycles. What I did - and I think I'll recreate it again - was to use a multi-track audio editor to place side by side (in a vertical stack, actually) the recording at each step in the encode-decode cycle. You could see significant changes in the waveform from generation to generation, so that after about three or four cycles it was really markedly different in appearance.

                            I didn't have the technology then to give a demonstration of how the sound changed by switching quickly from one generation to another, but here on HUG we can do that now. I'll add it to the growing heap of things to present: remind me in a month or so please.
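
                            A minimal sketch of how such a multi-generation chain could be produced today, assuming the ffmpeg command-line tool is installed and a hypothetical source file 'original.wav'; each pass encodes to 128kbps MP3 and decodes back to WAV, so the successive WAV files can be stacked in an audio editor and compared.

                            [CODE]
                            import subprocess

                            source = "original.wav"              # hypothetical file name
                            for generation in range(1, 5):
                                mp3 = f"gen{generation}.mp3"
                                wav = f"gen{generation}.wav"
                                subprocess.run(["ffmpeg", "-y", "-i", source, "-b:a", "128k", mp3], check=True)
                                subprocess.run(["ffmpeg", "-y", "-i", mp3, wav], check=True)
                                source = wav                     # feed this generation into the next
                            [/CODE]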
                            Alan A. Shaw
                            Designer, owner
                            Harbeth Audio UK



                            • #44
                              I hold Steely Dan as my favourite rock band and it's interesting to read now, decades later, how hard they worked to achieve technical and sonic perfection. I had no idea in the 70s that their attraction to me was the special combination of the great music plus great sound: I just liked what I heard.

                              Seems though that it wasn't all plain sailing .... read here about noise reduction problems from the era of analogue tape. Yo analogue! No wonder they were amongst the first to go digital.

                              Also read the reality of squeezing playable music onto a vinyl record.
                              Alan A. Shaw
                              Designer, owner
                              Harbeth Audio UK



                              • #45
                                Hi Alan

                                This is a very interesting thread! Are you then saying that ALL DACs playing through the same amplifier and speakers would sound the same (identical)?

                                Would speakers that use the same drivers as Harbeths all sound the same, whatever the enclosure, wires, connectors and crossover? Surely not, right? That would be diminishing the laborious work that you have put in to design the crossovers, the cabinets, etc. Equally, why should ALL DACs sound the same, since each manufacturer would use a standard DAC chip and then expend time, money, effort and material tweaking with op-amps, etc, to make it sound like what they think sounds best? Am I correct?

                                Best Regards
                                Dennis

