Should We Make Music In Higher Sample Rates 🤔

Comments

My shit is slapping as an mp3 export 😂

Bittamin

Hi-Res (above 48k) is traditionally used for movies because those are the sample rates used for film. It's traditionally overkill for your processors and equipment to make tracks at that sample rate, all for a badge the average listener can't hear. Thank you for the amazing information and research! Keep making awesome content!

fmGemini

Most people can’t even tell the difference between mp3 and wav 😂

dicnxhxj

The stream is usually compressed anyway. You're not getting what the label says if you measure it; many times you're just getting a higher-quality source file, with the output stream still compressed from that. Better, but not lossless.

Bthelick

Exporting something at a higher sample rate doesn't change the sound at all. However, if the whole production is done at a higher sample rate from the start, that allows all the plugins to work more precisely, creating fewer artifacts and a cleaner overall sound (for example, Auto-Tune or even just quantizing can generate unwanted noises). But honestly, if the song is later converted back to a lower sample rate, again, you wouldn't hear a difference between the two.
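
To illustrate the mechanism behind this comment: nonlinear processing such as saturation creates harmonics above Nyquist, and at 44.1 kHz those fold back into the audible band as aliasing, whereas running the same process oversampled and then filtering back down largely avoids it. A minimal sketch (my own illustration, not from the video, assuming Python with numpy and scipy installed):

```python
# Saturate a 9 kHz tone at 44.1 kHz vs. at 4x oversampling, then compare the
# level of the aliased component. Illustrative sketch, not from the video.
import numpy as np
from scipy.signal import decimate

fs, f0, n = 44_100, 9_000, 44_100
t = np.arange(n) / fs

def level_at(y, rate, freq):
    """Spectral magnitude in dB (arbitrary reference) at one frequency bin."""
    spec = np.abs(np.fft.rfft(y))
    return 20 * np.log10(spec[round(freq * len(y) / rate)] + 1e-12)

# tanh saturation of a 9 kHz tone creates a 3rd harmonic at 27 kHz, which
# 44.1 kHz cannot represent, so it folds back to 44.1 - 27 = 17.1 kHz.
y_base = np.tanh(4 * np.sin(2 * np.pi * f0 * t))

# The same saturation at 176.4 kHz, where 27 kHz fits below Nyquist; scipy's
# decimate() low-passes before downsampling, so the harmonic is simply removed.
t4 = np.arange(4 * n) / (4 * fs)
y_over = decimate(np.tanh(4 * np.sin(2 * np.pi * f0 * t4)), 4)

print(f"alias at 17.1 kHz, processed at 1x: {level_at(y_base, fs, 17_100):6.1f} dB")
print(f"alias at 17.1 kHz, processed at 4x: {level_at(y_over, fs, 17_100):6.1f} dB")
```

The second figure comes out tens of dB lower, which is the "fewer artifacts" effect described above; converting the finished file to a higher rate after the fact does nothing comparable.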

symn.

It's rendered at that, but is that what it was produced and recorded in?
I've heard multiple mastering engineers say that oversampling can introduce ringing.

AMMixes

Clearly y'all just don't understand audio. Humans have a hearing range of 20 Hz to 20 kHz, and music is generally exported at 44.1k. The Nyquist–Shannon sampling theorem says the sample rate has to be at least double the highest frequency you want to reproduce accurately. Half of the standard sample rate is 22.05 kHz, which is already above the upper limit of human hearing, so a higher sample rate doesn't do anything; it's just a bigger file to load. Lossless doesn't have anything to do with sample rate; it has to do with compression (lossless means the compression discards no audio data). The reason they offer it is that most of the time with streaming (Spotify) you are just listening to a lossy-compressed version of the song. You are right that soundtracks do have a higher sample rate (48k instead of 44.1k), but that is mainly because DVDs take in audio at that sample rate (idk why, CDs are 44.1k). You mention bit depth as well, which is a bit more of a crapshoot when it comes to audio; for most pop albums a higher bit depth doesn't really matter because there is less dynamic range in the audio, while a classical album would need a higher depth to handle the dynamics. You can't hear the difference, it's not biologically possible.
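
Putting numbers on the Nyquist point: a tone above the 22.05 kHz limit of 44.1k audio is not captured at a higher pitch, it folds back ("aliases") into the audible band. A quick sketch (my own illustration, assuming Python with numpy):

```python
# Sample a tone above Nyquist and find where it actually lands in the spectrum.
import numpy as np

fs = 44_100
nyq = fs / 2                       # 22_050 Hz: just above the ~20 kHz hearing limit
f_in = 26_000                      # hypothetical tone above Nyquist

t = np.arange(fs) / fs             # one second of sample times
x = np.sin(2 * np.pi * f_in * t)   # 26 kHz "recorded" at 44.1 kHz

spec = np.abs(np.fft.rfft(x))
f_seen = np.argmax(spec) * fs / len(x)   # strongest frequency actually captured

print(f"Nyquist limit: {nyq:.0f} Hz")
print(f"26 kHz tone lands at: {f_seen:.0f} Hz")   # ~18_100 Hz = 44_100 - 26_000
```

The 26 kHz input shows up at 44,100 − 26,000 = 18,100 Hz, which is why everything you want to keep has to sit below half the sample rate.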

nicedevil

You additionally have to consider the audio format itself. It's not always about the highest sample rate and bit depth; it also matters which audio format you use. There are compressed formats that actually remove data to reduce file size (lossy), compressed formats that lose no audio data (lossless), and uncompressed formats. The choice really has a dramatic impact on the audio quality too. 😮😮
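
The lossless case is easy to verify directly: a FLAC file is smaller on disk but decodes to bit-identical samples. A minimal round-trip sketch (my own illustration, assuming Python with the third-party numpy and soundfile packages installed):

```python
# Write a tone to FLAC and read it back: every sample survives exactly, which
# is what "lossless" means. An mp3 round trip would not pass this check.
import numpy as np
import soundfile as sf

fs = 44_100
t = np.arange(fs) / fs
tone = (0.5 * np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)  # 1 s of A440

sf.write("tone.flac", tone, fs, subtype="PCM_16")  # compressed, but lossless
back, _ = sf.read("tone.flac", dtype="int16")

print(np.array_equal(tone, back))  # True: compression discarded no audio data
```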

infinitystratos

You physically cannot hear the difference between these sample rates. Your ear can only hear about 20 Hz to 20 kHz, and according to the Shannon–Nyquist theorem, the sample rate must be at least 2x the highest frequency you want to capture. The only real difference between these sample rates is for re-timing the audio.

rayner

I started releasing my music hi-res but never in ALAC format. I'm bout to see if Logic Pro can handle that 😂

djkelaux

Most of the time, soundtracks are rendered in HD because the whole sound-to-picture workflow is done at 88.2/96k. So if the project can be rendered, uploaded, and streamed in HD, there's no reason not to do so.

Lidjaa

But if Metro Boomin made the music using the MPC, it had to be upconverted from 44.1 to 96 kHz, because MPCs output 44.1. Besides that, even if he used a DAW from start to finish, was every sound 24-bit/96 kHz originally, and did every VST output 24-bit/96 kHz? I highly doubt it.

donnydarko

If you want to go deep on sample rates, you should look into the Nyquist theorem. For any frequency you want to record, you have to sample at least twice as fast as that frequency.

meggawatts

And there’s still people out there emailing each other mp3 beats 😂😂

noahsmusicark

The top frequency is HR, or "headroom." It allows the track to hit harder without clipping or peaking; there will be no ducking, and the instruments breathe more and are more dynamic. Bit rate just adds clarity, because naturally we record at 24/48 or 24/96 (I have done both). So when you export at 24/88.2 or 24/96, the bit rate becomes completely uncompressed, "lossless," and at 48 or above the headroom is not limited or smashed.

Bit rate = compression
Frequency = HR = limiting

This is the easiest scientific explanation
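
For reference on the bit-depth side of this thread, the standard signal-theory figure (my own note, not a claim from the video) is that each bit of depth adds about 6.02 dB of dynamic range; a quick sketch in Python:

```python
# Dynamic range of linear PCM is roughly 20*log10(2**bits), i.e. ~6.02 dB per
# bit. This is what bit depth actually buys; it is separate from sample rate
# and from whether the file format is compressed.
import math

for bits in (16, 24, 32):
    dr = 20 * math.log10(2 ** bits)
    print(f"{bits}-bit PCM: ~{dr:.0f} dB dynamic range")
# 16-bit: ~96 dB, 24-bit: ~144 dB, 32-bit: ~193 dB
```

That ~48 dB gap between 16-bit and 24-bit is extra recording and processing headroom above the noise floor, independent of sample rate.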

Sinista-Beatz

“we can cuz we’re crazy” 😭😭🤣🤣🤣🤣 felt that

ensea

I can hear the difference. I’ve rendered quite a few songs this way and they were SMACKIN
Yes I have an Apollo interface

Sinista-Beatz

Ngl, as a musician, my distributor doesn't even allow me to upload hi-res lossless. It's just not a common standard yet; I hope this will change in the future.

CLeansaubermann

Tidal has a great selection of super high-res stuff too. I'm not an Apple user, so it's the best option in my opinion.

The point about not being able to hear the difference has always bothered me. I think most people can tell the difference; you don't even really need good equipment to hear it. I wish there were an easy way to make people more confident in their own ability to hear the difference.

meggawatts

I always tell my clients to hit me up when they're ready to release, and I re-bounce all the tracks being released at 24-bit/96 kHz.

Prodbykidjake