Why Super High Resolution Audio Makes No Sense

Is there any point to super high-res audio? Probably not. Here's why.

#audioengineer #musicproduction #hires
Comments

Audiophile nonsense has entered the chat. If your amp didn't cost more than my house, and if the knobs aren't made of bubinga, are you really trying hard enough?

WarrenPostma

I'm 55 years old, and grateful that I can hear 14 kHz

backspin

It's so refreshing to watch this: logical, sensible, reasoned advice. Thank you!

cxbkpmf

I noticed that when I converted some of my masters to MP3, the MP3 file sounded scratchier and brighter than the same track as a 24-bit WAV. It was noticeably different in the high frequencies.

After some trial and error I eventually fixed the problem by applying a high-cut filter to the inaudible high frequencies in the track. From this I surmised that those inaudible frequencies were being mapped into audible frequencies by the MP3 codec.

I'm not saying high-definition audio is superior because it handles those frequencies; I'm just curious why this happens. Why doesn't the MP3 codec handle this more gracefully?
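The fix this commenter describes (a steep high-cut before encoding) can be sketched in Python with SciPy. The 19 kHz cutoff and filter order here are illustrative assumptions, not a recommendation:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def high_cut(audio: np.ndarray, fs: float, cutoff: float = 19_000.0) -> np.ndarray:
    """Steep low-pass applied before encoding, so the codec never
    sees ultrasonic content that could end up in the audible band."""
    sos = butter(10, cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, audio)  # zero-phase: no phase shift in the passband

# Example: a 96 kHz track with an audible 1 kHz tone plus an
# inaudible 30 kHz component; only the 30 kHz part is removed.
fs = 96_000.0
t = np.arange(4800) / fs
x = np.sin(2 * np.pi * 1_000 * t) + np.sin(2 * np.pi * 30_000 * t)
y = high_cut(x, fs)
```

The audible tone passes essentially untouched while the ultrasonic component is attenuated by well over 60 dB, which is the whole point of filtering before the encoder rather than hoping the codec copes.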

Necropheliac

All my music is “mixed for bats 🦇”... “can I have a dollar now?” 😎

TheDerider

Exactly correct. As a digital signal processing expert, I deal with this situation every day. With modern anti-aliasing filters, reconstruction algorithms, etc., higher sample rates do not matter. 16-bit vs. 24-bit resolution only changes the effective signal-to-noise ratio the system can support: 16-bit gives 65,536 quantization levels and 24-bit gives 16,777,216, so 16-bit supports roughly a 96 dB SNR where 24-bit supports roughly 144 dB.
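The arithmetic in that comment can be checked in a few lines, using the standard full-scale-sine approximation SNR ≈ 6.02·bits + 1.76 dB (which the comment rounds to 6 dB per bit):

```python
def quantization_stats(bits: int) -> tuple[int, float]:
    """Return (quantization levels, theoretical peak SNR in dB)."""
    levels = 2 ** bits                  # 16-bit -> 65,536; 24-bit -> 16,777,216
    snr_db = 6.02 * bits + 1.76         # full-scale sine approximation
    return levels, snr_db

for bits in (16, 24):
    levels, snr = quantization_stats(bits)
    print(f"{bits}-bit: {levels:,} levels, ~{snr:.0f} dB SNR")
```

The extra 48 dB of the 24-bit format only lowers the noise floor; it adds nothing above a 16-bit noise floor that is already below audibility on playback.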

joejurneke

Excellent, excellent video explaining the trivial quest for high-res. I've played with various rates in my DAW and have not heard differences. Now I understand why. Thanks, Justin; my hard drives appreciate you....

theslideguy

I will say, though, that one advantage of higher bit depths is that they can save a take: if you realize (too late) that one of the mics wasn't gained up enough, and you can't do another take, that lower noise floor (from what I've seen, anyway) lets you really amplify the signal in the mix without any noticeable noise.

But admittedly, that's a pretty specific use case, and one that starts with user error in the first place.

(edit) The whole idea that audio is better when treated less preciously is precisely why people still seem to prefer vinyl and tape, even today. I work in digital graphic arts, and it's really the same: in digital artwork you often want to add 'noise' and irregularities, which almost always makes it pop more and gives it character.

trollakhinmemeborn

My dog has been upsampling my music. I am pissed.

kurthertel

No. People always cite the Nyquist–Shannon sampling theorem to say you only need to sample at twice the highest frequency you want to record, i.e. for audio you only need 40 kHz, but they don't understand the whole theorem. The theorem also says that to recover the original waveform you need to convolve every sample with an infinite function called the sinc function. The important word is "infinite": it is impossible to evaluate an infinite function exactly, because that would take infinite time, so you always have to stop somewhere, just as when you use pi. A higher sample rate brings the practical calculation closer to that ideal "infinite" one, so in theory the same DAC chip, with the same computing power, has an easier time at a higher sample rate.
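The truncation that comment is talking about can be seen in a small sketch: ideal Whittaker–Shannon reconstruction sums a sinc for every sample out to infinity, while a real system can only sum over a finite record. All the values below are illustrative:

```python
import numpy as np

fs = 40_000.0                    # 2x a 20 kHz bandwidth
f = 1_000.0                      # test tone well below Nyquist
n = np.arange(400)
x = np.sin(2 * np.pi * f * n / fs)       # the sampled record

def reconstruct(t: float, samples: np.ndarray, fs: float) -> float:
    """Sinc-interpolate x(t) from a finite record.  The ideal sum runs
    over infinitely many samples; truncating it to the record is the
    compromise every real DAC or resampler has to make."""
    k = np.arange(len(samples))
    return float(np.sum(samples * np.sinc(t * fs - k)))  # np.sinc = sin(pi*u)/(pi*u)

t = 200.5 / fs                   # a point halfway between two samples
approx = reconstruct(t, x, fs)
exact = np.sin(2 * np.pi * f * t)
print(f"exact {exact:+.6f}  truncated-sinc {approx:+.6f}")
```

Even with only 400 terms the truncated sum lands within a fraction of a percent of the true value, which is why well-designed reconstruction at 44.1/48 kHz is already audibly transparent.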

alee

Neil Young and Robert Fripp are suiting up for battle.

mkst

It's worth repeating, as you mention, that capture and delivery are very different things. For example, Zoom just released a sound recorder that records 32-bit float (the F2 lavalier recorder). The huge benefit is that it's impossible to blow out the recording, since it doesn't hard-clip at 0 dBFS, and levels don't need to be set. You just hit record and you're golden. Once the sound is captured, that 32-bit float just becomes 32-bit bloat (you heard me). Recording headroom is just recording headroom. The end listener doesn't need to hear the empty headroom, and won't benefit one iota from doing so.
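That capture-headroom point can be shown numerically: a fixed-point sample that goes over full scale is clipped for good, while a 32-bit float sample over 1.0 just keeps its value and can be pulled back down in post. The sample values here are made up for illustration:

```python
import numpy as np

hot = np.array([1.8, -2.5, 0.3], dtype=np.float32)   # peaks over full scale

# 16-bit fixed point clips at the converter -- the shape is destroyed...
as_int16 = np.clip(hot * 32767, -32768, 32767).astype(np.int16)
int_pulled_down = as_int16 / 32767 * 0.25            # ...and gain-down can't undo it

# 32-bit float simply stores values above 1.0; gain-down recovers the shape.
float_pulled_down = hot * 0.25

print("int16 path:", int_pulled_down)     # flattened at the old ceiling
print("float path:", float_pulled_down)   # original waveform, just quieter
```

This is exactly why float headroom matters at capture time and is irrelevant at delivery time: once levels are set, nothing above full scale remains to preserve.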

FloatingOnAZephyr

Thanks so much for clearing this up. A while ago I did the 320 kbps MP3 vs. WAV test and could not hear a difference at all, with high-grade converters and headphones! I thought my ears were to blame.

georgearrows

If you're a music listener, you're not going to hear an improvement going from 320 kbps MP3 to lossless FLAC. Save the hard drive space and invest in better equipment (headphones, amps, speakers, etc.)

rainydaygirlz

I think there are better low-bitrate formats than MP3, like M4A/AAC and Ogg, because spectral analysis sometimes shows that some MP3 converters ruin songs by cutting too many frequencies. So you still get 320 kbps on paper, but in reality it's less.

frostmediaprod

Thank you Justin, this comforts me in my unashamed attitude of "If it sounds good, it's good"! I have a question about the "Air Band" on the Maag plugins. I really like the brightness-not-harshness it allows my 58-year-old ears to perceive and appreciate on instruments and mixes. I usually use 4-7 dB at 20 kHz, and only recently used 15 kHz on the EQ2. Should we be cautious about inaudible content the plugin generates at 20 or 40 kHz causing aliasing?

wangodan

Try as I might, I have yet to hear the difference between CD quality and high-res audio, which could explain why SACDs never took off. I'll have to try 320 kbps MP3 to see; I know that at 256 kbps I can tell, but it's really difficult.

ryangrow

I always record in 48 kHz/24 for two simple reasons.

1. All my gear supports 48 kHz, so whatever interface I choose to use it’s working with 48 kHz.

2. If the recordings will be used in a video, I'll have a perfect sample match. With 44.1 kHz against 24 fps video, each frame holds 1,837.5 samples, so if I cut both the video and audio with the grid in PT set to frames, the cut can land between two samples. This is avoided with 48 kHz.

So, that’s my approach, I use 48 kHz for practical reasons, not audible.
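The frame math in point 2 above is easy to verify (24 fps assumed, as in the comment):

```python
# Samples of audio per video frame at 24 fps:
for rate in (44_100, 48_000):
    print(f"{rate} Hz / 24 fps = {rate / 24} samples per frame")
# 44,100 Hz gives 1837.5 -- frame-grid cuts can fall between samples.
# 48,000 Hz gives 2000.0 -- every frame boundary is also a sample boundary.
```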

LeffeAndersson

Yeah, and here are two conceptual things that are often misunderstood:
1. There are no stair steps. Digital audio is entirely smooth, continuous, and analog once it goes through the low-pass filter. That's the whole point of the filter: it turns everything at the upper frequencies into pure, smooth sine waves, filtering out all the squareness (which is just upper partials).
2. It takes only two samples per cycle to accurately reconstruct a sine wave at or below the Nyquist frequency, perfectly in frequency, amplitude, and phase. Sine waves have a particular shape, so you can mathematically reconstruct the whole thing from just those samples.
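Point 2 can be demonstrated: for a tone at or below Nyquist with known frequency, amplitude and phase fall straight out of the samples, because A·sin(ωt + φ) is linear in its sine and cosine components. A sketch with NumPy (the frequencies and values are illustrative):

```python
import numpy as np

fs = 44_100.0
f = 21_000.0                         # just below Nyquist (22,050 Hz)
true_amp, true_phase = 0.7, 0.3

t = np.arange(64) / fs
x = true_amp * np.sin(2 * np.pi * f * t + true_phase)    # the samples

# x[n] = A*sin(wt) + B*cos(wt) is linear in (A, B): solve by least squares.
M = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
(A, B), *_ = np.linalg.lstsq(M, x, rcond=None)

amp = np.hypot(A, B)                 # recovered amplitude
phase = np.arctan2(B, A)             # recovered phase
print(f"amplitude {amp:.6f}  phase {phase:.6f}")
```

Even this close to Nyquist the recovery is exact to machine precision, with no "stair steps" involved anywhere.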

matthewv

I wish I knew half of what you're talking about. Really, I just wanted to know whether I'm wasting money on the Tidal Master plan, whether HiFi would have been adequate, or whether I'm just imagining Spotify not sounding good on my system. The last two minutes were the most useful for me: listen to what works for you. Sound advice.

adamlucas