Why some speakers and amps exceed 20kHz

If the limit of human hearing is 20kHz, why would speaker and amp manufacturers concern themselves with exceeding that frequency limit?
Comments

There is another explanation for why we can actually "hear" above 20 kHz: the combination tone, or subjective tone. When you play, for example, a 30 kHz tone and a 35 kHz tone together, a 5 kHz difference tone (the subjective tone) is also produced, and that one can be recognized by the human ear. So if the speaker can't reproduce the 35 kHz tone, the subjective tone is missing and the result sounds "different" to us.

That is why Dr. Matti Otala expanded the frequency range of the Harman Kardon Citation amplifiers in the late 70s.
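The difference-tone arithmetic in this comment can be sketched numerically. Here is a minimal Python illustration, assuming a simple quadratic nonlinearity as a stand-in for whatever mixing actually happens in the ear or the driver (the sample rate, duration and 0.1 distortion coefficient are my own choices):

```python
import numpy as np

fs = 200_000          # sample rate high enough for the 35 kHz component
t = np.arange(0, 0.1, 1 / fs)
signal = np.sin(2 * np.pi * 30_000 * t) + np.sin(2 * np.pi * 35_000 * t)

# A quadratic nonlinearity (a crude model of nonlinear mixing in the
# ear or in a driver) creates sum and difference products.
distorted = signal + 0.1 * signal ** 2

spectrum = np.abs(np.fft.rfft(distorted))
freqs = np.fft.rfftfreq(len(distorted), 1 / fs)

# The strongest in-band component (excluding DC) is the difference tone.
audible = (freqs > 1_000) & (freqs < 20_000)
peak = freqs[audible][np.argmax(spectrum[audible])]
print(f"difference tone at ~{peak:.0f} Hz")   # ~5000 Hz
```

With both ultrasonic tones present, a 5 kHz component appears in the audible band; remove the 35 kHz tone and the difference tone disappears, which is the effect the comment describes.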

desperado

This is something I wondered about for years, and huge thanks to Paul for all these videos, which I am enjoying so much. It makes clear sense the way it was explained.

"Audiophile Ramblings of a Tuner"

I'm going to listen to a tone generator later to see where my own upper hearing rolls off. I have been tuning pianos for many years now and would like to concur with Paul that, yes, we are very sensitive to phase changes and other types of distortion too.
You really don't need golden ears to hear these things, and I bet your ears are less worn out than mine! It's like looking into a large forest, trying to pick out a single tree: it's there and very clear to see; you just have to filter out the rest of the trees with concentration, focus and practice. Picking out a single conversation in a room full of people talking is something we can easily develop, and I think it helps sharpen our perception and skill for when we try to analyze and judge any sound we are listening to.

If we choose to train our ability to pick out differences like the odd tree in the forest, let's start with a much smaller forest to practice on! I noticed a great example of how this can work with sound when I discovered recently that I can reliably hear, every time without fail, the variations when playing a 3000 Hz pure sine-wave tone into my tape decks to measure wow and flutter. Even at 0.035% (RMS DIN) on a decent deck, it's very easy to hear that it is no longer the beatless, still, steady tone heard directly, live from the generator through headphones. As the pitch drifts gently up and down by just a few hertz, this small change is easily heard beating between the direct sound from the speaker and the slightly older sound reflected from the wall. Our ears are incredibly sensitive to this once we know what to listen for. (Be careful what you wish for!)
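The beating described above is easy to put into numbers. A quick sketch of the pitch deviation implied by the quoted wow-and-flutter figure:

```python
# A 0.035% (RMS, DIN-weighted) speed deviation on a 3000 Hz test tone
# shifts the pitch by about one hertz, which the ear hears as a slow
# beat against a steady reference tone.
test_tone_hz = 3000.0
wow_flutter = 0.035 / 100          # 0.035% expressed as a fraction

deviation_hz = test_tone_hz * wow_flutter
print(f"pitch deviation: +/-{deviation_hz:.2f} Hz")   # +/-1.05 Hz
```

A roughly 1 Hz wobble against a steady reference is well within the beat rates a tuner listens for every day, which is consistent with the claim that the deck's flutter is clearly audible under these conditions.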

Another interesting side to this pitch-and-reflection thing is that when tuning pianos in large buildings, especially in the high treble, the reflected sound waves can and do flatten a little in pitch between the time they leave the piano and when they bounce back into the ear, so we sometimes need to tune a little higher to compensate. I guess an acoustical engineer would tell me that different surfaces modify the pitches of reflections in different ways according to their own properties and layout. (So reflections can be desirable and important!?)

It's also well known and accepted that ears and hearing can vary wildly from one person to another, but generally for all of us, and this is important, our perception reports the higher frequencies as sounding flatter than they really are and, conversely, the lower frequencies as sounding a little too sharp. The science and maths of this part of sound and music is very simple and clear cut (too simple, in a way): it states that an octave should be exactly double the frequency of the octave below it and exactly half the frequency of the octave above it. Take a calculator and start doubling: 440 Hz is the A above middle C; double it to 880 Hz and you have the A an octave higher; 1760 Hz is the A an octave above that, and so on. But if we follow the maths and tune an instrument to this model over the entire compass of 88 notes, that's 7 and a quarter octaves, our ears do not agree: it sounds more out of tune the further apart the notes you play together are. The double octaves already start sounding quite off when played together, and they are a great acid test of a good tuning. So if I tuned the whole piano like this and played, for example, a very low C together with a very high C, they would not sound as pure and in tune as when we stretch the tuning to accommodate the properties of our ears, personal preference, the acoustics of the room and the individual peculiarities of each piano. We normally like to tune about half a beat per second sharp (faster) on octaves as we rise up from the middle, and the reverse as we go down from the middle. This is done only after tuning the middle octave, of course, which is the start of the whole operation and our reference, if you like, called the temperament. Everything is copied from that centre octave.
If I tuned to an electronic tuner and followed exact numbers, halving and doubling octaves, then I can guarantee that most of you would not have me back to tune your piano again, as it would sound really off! Most concert pianists and musicians prefer the high treble tuned quite a lot sharper than theory says, a.k.a. stretch tuning. Even if we try not to do it, the tendency is always there, because everything sounds more pure, sweet and, much more importantly, musical.
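The doubling arithmetic from the previous paragraphs, next to a slightly stretched version, can be sketched like this (the 2-cents-per-octave stretch is an illustrative guess, not a real piano's measured stretch curve):

```python
# Strict theory: each octave is exactly double the one below.
a4 = 440.0
pure = [a4 * 2 ** n for n in range(4)]       # A4, A5, A6, A7
print(pure)                                   # [440.0, 880.0, 1760.0, 3520.0]

# Stretch tuning widens each octave a little; the 2 cents per octave
# used here is only an illustrative value.
cents = 2.0
ratio = 2 * 2 ** (cents / 1200)               # slightly more than 2:1
stretched = [a4 * ratio ** n for n in range(4)]
print([round(f, 1) for f in stretched])       # each octave lands a bit sharp
```

The point the comment makes is that the second list, not the first, is what players judge as "in tune" in the treble, even though the first is the mathematically pure one.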

The reason I am rambling about this is to highlight that we are dealing with so many variables, personal preferences and perceptions while we chase our perceived goal of sonic perfection; we are up against so many different types of convolution, natural and unnatural harmonics, distortion and so on. But here is my main point and hunch: if we were to remove every little artifact, every reflection, every bit of wow and flutter and so on, what have we got? We may have a perfectly pure and sterile sound, but real live music is naturally full of these imperfections. It's my belief that these little differences and nuances add up to make music sound live, real and organic; too much of them, though, and we all agree it sounds distorted, inaccurate or of low quality. Of course we can't physically rebuild our listening rooms to match the room acoustics of each recorded track we listen to, so we do our best to stop our rooms colouring or clashing with the sound from our speakers, creating a neutral environment that won't disturb the projection and reception of the sound waves, in the hope that the recording and playback equipment can recreate all those details. Curious how we kill our own room acoustics to allow the recorded acoustics to be reproduced unhindered. I think the essence of it is being careful not to delete or miss any detail during recording and playback that should have been left in. Less is more where electrical noise and distortion are concerned, we all know that, but more is more when we talk about the 'natural noises' of instruments and acoustics: fingernails on guitar strings, shoes on piano pedals, breathing between wind-instrument phrases and suchlike.

I am very fascinated by the late Arnie Nudell's philosophy, passion and uncompromising designs for speakers that can reproduce the sound of live music. I think that when music sounds just right and natural, it is partly because it contains many 'perfect imperfections'; reproducing these without masking, enhancing or dyeing their natural colour is the holy grail and the secret behind it. We should never underestimate the human brain's capacity to recognise and process information from sound: we can identify individuals by the tone of their voices from memories stored decades ago; advanced martial artists can fight blindfolded, partly using their hearing to perceive the changing acoustics of their environment; blind people can tell you a lot about their surroundings because they practice and rely on their hearing so much. Hearing can be elevated into a sharper sense, and we really should trust it to some degree, in proportion to how much we have practiced. It can be hard work and tiring to learn, but the rewards are great, and some funny party tricks can arise from it!

I think high-end audio could be a little more focused on "what's missing?" than on "what shouldn't be in there?". It's these small, unobvious details that really stack up to let us detect the differences between live and recorded music, or between real and synthesized music. Keyboards and synths improved by providing dynamic expression controls (pitch-bend and modulation wheels, touch sensitivity, breath control and lots more), which really helps make the illusion real. But through years of listening, being around instruments, musicians and technicians, and thinking about this subject for so long, I don't believe that any silicon-based digital device made yet, or possibly ever, can deal with, copy or fully reproduce the infinite amount of dynamics and detail that exists in analogue instruments, or even in digital sources once they have left their stable and become vibrations in the real world, recorded live. I firmly believe that sound contains a truly infinite amount of detail; it's more complex and exists on levels and in domains that computers are not yet a part of, and this is what we are up against. Maybe that's behind the choice of the brand name "Infinity" for the speakers so many of us are in awe of. They would have to be infinite to be perfect.

Digital is superb in so many ways, but I think there is still a big divide between digital and analogue recordings, like comparing the number of possibilities in a noughts-and-crosses game to the nearly infinite number of possibilities in a chess game. I wonder, if we ever move to organic computing, what will become of digital sound and what its successor could sound like.

I hope some of this may be food for thought and of some interest to someone.
Happy listening to everyone, and kindest regards from Rob, the English Piano Tuner.

robworrall

Apart from the fundamental frequency, I think the 3rd, 4th and higher harmonics also affect the timbre of the instrument.

alvinng

Hey Paul, my name is Joshua, out of Los Angeles, California. I'm a proud audiophile who admires your educational videos on sound. Keep up the good work, Paul, and thank you for keeping it 100 (a hip term for keeping it real 🤣).

joshuajones

It's not just phase shift. Cutting off frequencies above 20 kHz affects the harmonics within the audible range. We also feel upper harmonics, not necessarily through our ears but through our body cavities, bones, etc. I believe this is also one of the issues with digital: higher sample rates, and DACs that can sample and reproduce at higher bandwidths, sound more realistic and detailed.

ohjoy

Another way to look at it is that we can't hear below 20 Hz, but we damn sure can feel it. There is definitely sensation above what we can hear, and we still want to reproduce that.

greebuh

Paul, you very elegantly talked around the essence. Why go above 20 kHz? Please do the world a favor and see if you can record the following on video.
1. Electrical engineers base their designs and measurements on pure sine-wave signals. A 20 kHz sine wave is easy for any amp, and maybe for any tweeter as well. Real audio has more complex waves, which humans can identify, parse and attribute to different sources.
2. A (transverse) flute may produce near-sine waves, a clarinet something close to a square ("block") wave, and a cello a sine wave with a couple of spikes, where human voices, pianos and violins produce a messy wave shape. And we can recognize all of these and filter them out of noisy environments.
Scenario A. Record each of these voices in an anechoic chamber and wire the analog microphone directly into an analog oscilloscope. Video-record everything and visually compare the wave shapes, with each source playing 440 Hz (today's central A on the piano).
Discussion: what electronic bandwidth (frequency range!) do you need to play these back faithfully? (Let's leave dynamic range out for a bit.)
Scenario B. Repeat two octaves higher (i.e. 1760 Hz). Test the human ability to identify the sources blindly.
Scenario C. Repeat A and B with real voices and a blindfolded test person who must localize the voices in space: left-right direction, distance, height.
Scenario D. Repeat the tests with more than one source, each at a different position, and increase the number over iterations. Note how fast your test persons can name each source and its position.
3. Considering the summed wave shape of each test on your oscilloscope, now discuss the sine-wave frequency an amp or tweeter needs in order to play it back. The 440 Hz square wave of the clarinet alone may need about five times that to reproduce (Fourier), but let it play in ensemble with a piano, flute and cello, and the changes of wave-shape direction become so frequent and fast that a 20 kHz sine-wave capability is too sad to talk about.
4. See if you can find a neurologist and a cognitive-psychology researcher to explain how the human ear works, and the subsequent processing in the brain.
Our listening ability through air is physically capped at about 20 kHz, maybe (when younger, my hearing went above 20 kHz, which is extremely rare in 6'1" men).
5. What sine-wave frequency range (bandwidth) do you need to faithfully reproduce a recording of a symphony orchestra, with room acoustics? Discuss.
6. Considering a fast amplifier's inertia, its square-wave input will likely show overshoot at each change of wave-shape direction, while a slow amp will cut the corners instead, even when both can do 20 kHz sine waves flawlessly. Scope images will illuminate the understanding.
7. Discuss how harmonic distortion and intermodulation distortion relate to electronic engineers' problems rather than to real audio, and how bad they are at predicting why we might prefer an amp that measures worse here over one that measures better. Discuss the impact of frequency-dependent phase shift on wave shape, and how that frequency dependence may vary with signal complexity and volume.
8. Now turn to digital to explain ... Good luck.
Scenario E. Repeat all scenarios, playing back the recorded sources.

jpdj

Congratulations on the 100k milestone, Paul.

jamesrobinson

Amplifiers can easily be designed to go well above 20 kHz. Speakers move air, with a mechanical design optimized for a particular frequency range. If a tweeter is optimized to reach very high frequencies, its cone needs to be very lightweight so it can accelerate fast. A smaller cone does higher frequencies better but loses the ability to move air in the midrange. Thus better phase precision and extended range in the treble can cost midrange linearity and SPL, especially in a two-way speaker where the tweeter needs to go low as well. Speaker design is often about such tradeoffs, and even when cost is no limit, the laws of physics are quite restrictive about what is possible.

ThinkingBetter

Congratulations on reaching 100k subscribers!!!

ryan

100k subs is a giveaway right? ;)
Thank you for the great videos, Paul!

BijBijTCG

I am not an engineer, so... why then are most DACs and amps only rated to 20 kHz? Perhaps you can continue the conversation tomorrow.

TimpTim

They hooked people up to an MRI and apparently we react to tones above 20 kHz... and with harmonic interference we actually react up to 100 kHz. I hope I didn't butcher the information from the video...

the_sheet

The reason an amplifier is considered better if its response extends beyond 20 kHz is to allow the complex waveform that is music to remain undistorted. That is, a square wave at a given fundamental frequency can only be reproduced faithfully if the system response extends to about 10 times that fundamental. A perfectly reproduced square wave will contain all of the possible harmonics that an acoustic (or electronic) instrument can produce. Acoustic instruments top out at about 8 kHz, so a response from 20 Hz to 80 kHz would be sufficient.


Some of these instruments have harmonics (overtones) that are audible to about 16 kHz. The extension of the high end frequency response is simply to preserve the integrity of the complex musical waveform.
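The square-wave bandwidth argument above can be sketched numerically with a Fourier partial sum. This is a minimal Python illustration; the 20 kHz and 100 kHz cutoffs and the one-period comparison are my own choices, not from the comment:

```python
import numpy as np

f0 = 440.0                 # fundamental of the test square wave
fs = 1_000_000             # sample rate, well above everything used here
t = np.arange(0, 1 / f0, 1 / fs)   # one period of the waveform

def square_partial(f0, t, f_limit):
    """Fourier-series square wave keeping only odd harmonics below f_limit."""
    wave = np.zeros_like(t)
    n = 1
    while n * f0 <= f_limit:
        wave += np.sin(2 * np.pi * n * f0 * t) / n
        n += 2
    return 4 / np.pi * wave

narrow = square_partial(f0, t, 20_000)    # a 20 kHz system bandwidth
wide = square_partial(f0, t, 100_000)     # a 100 kHz system bandwidth

# Compare how closely each version hugs the ideal flat top (+1),
# sampled away from the edges of the square wave.
region = slice(len(t) // 8, len(t) // 3)
err_narrow = np.max(np.abs(narrow[region] - 1))
err_wide = np.max(np.abs(wide[region] - 1))
print(err_narrow > err_wide)   # the wider bandwidth tracks the flat top better
```

Both versions pass every audible harmonic, yet the band-limited one still ripples visibly around the flat top, which is the "integrity of the complex waveform" point the comment is making.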

swinde

I've long maintained that the premise that the human ear's bandwidth is limited to the 15-20 kHz range is the result of testing with sine waves. Very few musical events are sine waves, including the upper harmonics. Some, like the short pulses of brass or the rapid on/off nature of reed instruments (why a square wave sounds like a clarinet), have very steep rise times. Fourier theory tells us that such sounds are the equivalent of sine waves at many multiples of the observed frequency.
Who was the guy at Spectral who published the paper relating bandwidth to the ability to determine the directional origin of a sound? I remember it referencing people in tanks with external listening microphones, and the result that 40 kHz of bandwidth greatly improved the operators' ability to locate the source of a sound.

stevekirby

A long-running debate, but this is often cited: “By means of bone conduction we can hear up to 50 kHz, and values up to 150 kHz have been reported in the young (Pumphrey, 1950). However, it is indeed generally agreed that 20 kHz is the upper acoustical hearing limit through air conduction. The reason for this is debated, but the transfer function of the ossicle chain in the middle ear is a suspected culprit in setting the upper frequency limit to 20 kHz.”

sean_heisler

I could be wrong, but I'm pretty sure the ear can't hear phase shift as described, at least not a small phase shift. It can hear phase change over time, but not a constant small phase offset. The greater the phase discrepancy, the more it CAN be heard, but I wonder how many degrees out of phase a low-quality tweeter really generates. I suspect what is really being heard between a 60 kHz tweeter and a 30 kHz tweeter is the impact of the low-pass filtering within the audible band, as partially described.

kevyyt

You say the ear is sensitive to phase shift. So now I'm wondering why the "time-aligned" speaker designs that appeared in the late 70s seem to have disappeared. (For those who don't know, time-aligned speakers mounted the different drivers at different distances, so that the sound from each driver arrived properly timed with respect to the others. That led to some notable designs that broke the rectangular-box shape.)
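The time-alignment idea above can be put into numbers with a quick back-of-envelope sketch; the 3 cm driver offset used here is an illustrative assumption, not a measurement of any real speaker:

```python
# A tweeter mounted a few centimetres ahead of the woofer arrives
# early by offset / speed_of_sound, and that fixed time offset is a
# different phase angle at every frequency.
speed_of_sound = 343.0     # m/s at room temperature
offset_m = 0.03            # assumed 3 cm front-to-back driver offset

delay_s = offset_m / speed_of_sound
for freq_hz in (500, 2000, 8000):
    phase_deg = (360 * freq_hz * delay_s) % 360
    print(f"{freq_hz:5d} Hz: {phase_deg:6.1f} degrees of phase offset")
```

A fixed arrival-time error grows to a large fraction of a cycle near a typical crossover frequency, which is exactly what the stepped-baffle "time-aligned" cabinets were built to avoid.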

russellhltn

I remember once reading about a sound guy describing a "cat test" for how good a codec is: encode some mousy sounds with it, and if the cat comes running when you play them back, it's a good codec. MP3 failed the test, because its perceptual shortcuts rendered the high-pitched rodent sounds unrealistic to the cat's ears.

Roxor

Talking about 100 kHz, congrats on hitting 100K subscribers!

peterotremba