Sympathy For The Devil (Linear Phase for Crossovers and Oversampling)

In which I clarify the seeming contradiction in my last video, and explain why I typically go with linear phase crossovers and oversampling filters, despite the diabolical ringing.

Affiliate links: if you make a purchase using one of the links below I'll get a small commission. You won't pay any extra.

FabFilter Pro-MB (Gear4music)

FabFilter Pro-Q3 (Gear4music)

FabFilter Volcano3 (Gear4music)

FabFilter Saturn2 (Gear4music)

Video edited with VEGAS Pro 17:
(affiliate link)
Comments

Dan is the only person that makes me put on headphones even when I don't feel like it.

ramirorodriguez

The thing about pre-ringing is that it's... counter-intuitive. We see it on a signal plot, but it's not an artifact. It's just what filtering *means*. If you think about that impulse, it has energy at all frequencies. So it makes sense that if you low-pass it, you get a smeared-out impulse containing only the low frequencies. With a minimum phase filter, the smearing happens after the impulse; with a linear phase filter, the smearing happens symmetrically around the impulse. And then of course, if you high-pass it, you're going to get the exact opposite... so the impulse, swept down since you've removed the low frequencies. And that's what you get. And since this is all beautiful math, it indeed all re-combines perfectly - because it was never really an artifact, it's just a fundamental thing about what filtering does to signals.
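The symmetric smearing is easy to verify numerically. A minimal sketch with NumPy (tap count and cutoff are arbitrary illustrative choices, not any particular plugin's filter):

```python
import numpy as np

# A linear-phase windowed-sinc lowpass applied to a unit impulse smears
# it symmetrically, so half of the ringing lands *before* the impulse.
N = 101                                   # odd length -> exact linear phase
n = np.arange(N) - (N - 1) / 2
fc = 0.1                                  # cutoff, as a fraction of sample rate
h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(N)
h /= h.sum()                              # unity gain at DC

impulse = np.zeros(512)
impulse[256] = 1.0
y = np.convolve(impulse, h, mode="same")

peak = int(np.argmax(y))                  # stays at sample 256: no phase shift
pre = np.sum(y[:peak] ** 2)               # energy before the peak
post = np.sum(y[peak + 1:] ** 2)          # energy after the peak
print(peak, pre, post)                    # pre and post are (near) equal
```

Swapping the symmetric kernel for a minimum-phase one would push all of that energy after the peak instead.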

But what about the actual ringing down below zero, I hear you say? Well, Dan used the wrong kind of wave editor here :-). He should've used one that does proper sinc interpolation between samples, which is the *actual* analog signal you get when you play back those digital signals. And if he'd done that, you'd have seen that... the ringing around zero was already there to begin with, in the original Dirac impulse! It's just that the sample points line up with the zero crossings so you don't see it, and the filtering makes it evident by mis-aligning that.
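The sample-alignment point can be checked directly: the ideal bandlimited reconstruction of a one-sample impulse is the sinc function, which is zero at every other sample instant but oscillates, dipping below zero, in between (NumPy's `sinc` is the normalized one, with zeros at the integers):

```python
import numpy as np

# sinc(t) is (within float error) zero at every nonzero integer sample
# time, so a sample-dot editor draws a clean spike...
at_sample = np.sinc(2.0)
# ...but between sample instants the reconstructed waveform rings,
# dipping below zero:
between_samples = np.sinc(1.5)            # about -0.212
print(at_sample, between_samples)
```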

So why can we hear pre-ringing when you use settings which are extreme enough on something like a kick drum, if it's not really an artifact? Because our ears don't work in the frequency domain - they work in both the frequency domain and the time domain. The switch between those two "modes" happens at around 20Hz or so (but it's not a hard boundary). Stuff below that we hear as events in time; stuff above that we hear as tones and frequencies. So when our ears process a filtered signal where the filter is at a very low frequency, they hear the signal as events in time, not as a frequency. And indeed, the pre-ringing caused by *frequency* processing with a linear phase EQ ends up moving some signal *backwards in time*, so when our ears hear it *in time* since it's such a low frequency, that sounds off.

Conclusion: EQs are broken at low frequencies, because our ears *don't work* in the frequency domain at low frequencies, so an EQ is doing the wrong thing. But if you must use one and majorly screw around with things that far down, then yeah, minimum phase ends up sounding better, because it doesn't violate causality and so is less offensive in the time domain.

marcan

Best audio engineering content on YouTube! Fight me! I'm so sick and tired of this "top 5" crap; it's the video equivalent of shitposting. This is real content! Great work Dan!

Venomforyall

My dissertation at uni was based on an experiment where I shifted the phase of white noise around a frequency on one side of the headphones. This creates an artefact of our auditory system where we hear a tone at a location. I got people to record where they heard the tone spatially. People couldn't believe that when they swapped the headphones round, the tone stayed in the same location.

Binaural phenomena are interesting for constructing a model of how our auditory system works.

KultureUK

Phase is one of the ways your brain locates the direction of sounds, so it's not surprising that we can perceive phase differences between the ears, but not within a single ear on its own.

timseguine

Dan Worrall: the David Attenborough of DAWs. Thank you for showing me the majesty of phase in its natural habitat.

ravenatorful

Every DW video is a priceless master class. He is the only "youtuber" that makes me need to pause the video and take notes.

ChemaMrua

Dan, you're the Mr. Einstein of audio! You bring all the small, complex details together like no one else can. This is brilliant.

rogercabo

To my surprise, I can hear a difference when you toggle bypass on and off at 4:57, listening on AirPods Pro. It sounds a bit more nasal when it's turned on. But I'm not 100% sure, because it might simply be differences in how you speak. It could also be that the YouTube codec exaggerates the effect compared to the WAV that you were hearing when you made the video.

TheFatRat

My experience with phase shift when done in a binaural setting (like in this video) is always that the audio seems to "lean" towards the unshifted channel. This is especially prominent in the pink noise test, where the unshifted channel feels as if it has a boost in volume at the phase shift frequency. I made sure I was listening to a perfectly binaural signal (IEMs included) and this is still the case - almost as if my brain is doing the summing and notching within my head.

silent-science

Regarding the symmetry of the ringing in the oversampling filters of Saturn 2, it is correct that for the equivalent minimum and linear phase filters, there is a difference in whether the ringing occurs after the impulse, or symmetrically before and after respectively. However, this only really applies to FIR filters, which can be minimum or linear phase, or anything in between. As the name suggests, the ringing in IIR filters is potentially unbounded, so it's not really meaningful to compare the impulse response of an IIR and a FIR filter in this way, because it's not really practical to have an infinite pre-ring in the first place. If you design a windowed FIR filter however, you will see this exact difference between a minimum phase and a linear phase variant of it.

The reason why the responses in the case of Saturn 2 are different relates to this, and is perhaps deceptively simple: the minimum phase oversampling filter is altogether different from the linear phase filter; the former is some sort of IIR filter, and the latter is of course a FIR filter. Not only that, but if you look at the magnitude response, the minimum phase oversampling filter is quite a bit steeper than the linear phase filter, and consequently has more ringing as well.

Additionally, the "interesting pattern of differences" you see from The Drop's linear phase oversampling is most likely caused by the window function of the FIR filter kernel. Because the impulse response has to be finite, the window function crops it to a certain width, making it imperfect and leaving behind ripples like this in the passband. For a well-designed FIR filter such as this one, though, this is not an issue, as the ripple is well below any sort of audible threshold.
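The windowing ripple is straightforward to demonstrate. A sketch with an arbitrary windowed-sinc design (tap count, cutoff, and window are illustrative guesses, not FabFilter's actual kernel):

```python
import numpy as np

# Windowed-sinc lowpass: truncating the ideal (infinite) sinc with a
# window leaves small ripples in the passband magnitude response.
N, fc = 255, 0.25
n = np.arange(N) - (N - 1) / 2
h = 2 * fc * np.sinc(2 * fc * n) * np.blackman(N)
h /= h.sum()                              # unity gain at DC

H = np.abs(np.fft.rfft(h, 8192))          # zero-padded magnitude response
f = np.fft.rfftfreq(8192)                 # normalized frequency axis
passband = H[f < 0.8 * fc]                # stay clear of the transition band
ripple_db = 20 * np.log10(passband.max() / passband.min())
print(ripple_db)                          # nonzero, but far below audibility
```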

Hope I got everything right there, please correct any of my mistakes in the replies :)

Friedeggonheadchan

6:07 - I was listening on monitors, but kept on smacking the "mono" button to see what it sounded like 😊 Thank you for another great video!

ephjaymusic

I can definitely hear the difference. I thought I was just imagining it, but on the pink noise part it was very, very obvious. It's difficult to put into words what the difference is, but it's kind of like the same feeling as looking at yourself in the mirror versus looking at a picture of yourself. It's just off in ways that are difficult to describe but are very clear to you.

Kauk

Excellent material; as always, I learned a lot.
One nitpick, though, that I believe is important: with two bands you don't need two filters, just one. One band is "x" and the other is "input - x". For the pre-ringing null test this doesn't mean the ringing isn't there, just that the two bands' ringing is equal and opposite, so it cancels. I know you make this point later in the video, but in my opinion this mathematical perspective makes it easier to understand when it's going to be transparent.
When dynamics reduce a band by 6dB, only half of its ringing will be cancelled by the other band's phase-flipped ringing.
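That complementary-band algebra is easy to sanity-check numerically (the filter design and test signal below are arbitrary illustrative choices, not Pro-MB's actual crossover):

```python
import numpy as np

# Split a signal with one linear-phase lowpass; the high band is just the
# remainder, so its pre-ringing is the low band's, phase-flipped.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)

N, fc = 101, 0.05
n = np.arange(N) - (N - 1) / 2
h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(N)
h /= h.sum()

low = np.convolve(x, h, mode="same")
high = x - low                            # the rest: input minus the low band

null_err = np.max(np.abs((low + high) - x))         # perfect null
residual = np.max(np.abs((0.5 * low + high) - x))   # 6 dB on one band:
print(null_err, residual)                           # ringing no longer cancels
```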

BartWronsk

... to me, I can hear the difference. My translation is: it's like an imager. When you activate/deactivate MB, the image turns from mono 45 degrees left/right, lower and higher (360 degrees), with a slight fattening on the low frequencies. Of course I can give an example, but it's an image in my mind. Love your surgical work! Best regards!

marlborr

"-100 dB is negligible."

OTTs with 20:1 thresholds: Allow us to introduce ourselves.

baronvonbeandip

As soon as I start to feel like I'm a pro, Dan Worrall drops a video that humbles me into thinking different.

rollingrock

Yo, another thing to elaborate on: the reason you can hear a difference in sound when just changing the phase in one channel is the Haas effect.

The Haas effect is often used by producers to create an interesting stereo field by delaying one of the two channels, hence creating a difference, but that is not actually what the Haas effect describes.


The Haas effect describes that you will perceive a sound to come from the direction you hear it from FIRST, even if you hear it from somewhere else way louder.

Meaning that even a slight change in phase in 1 channel will change where you feel the sound is coming from and its directionality, with all else being equal.


To see why this makes sense, imagine you're in a cave: you might hear a sound amplified very loudly through the cave system, or, in an open landscape, loudly reflected from a cliff.

However, regardless of how quiet it is, if you hear the sound from someplace else first, you know that path is the shortest one between you and the thing you're trying to locate.



Phase difference, or more accurately, the time difference between 2 channels is very significant to how we perceive directionality in sound.
This is actively used by the "agent" system in our brain to determine what a sound means to us and where it is coming from.

Two other interesting things relate to this. First, when figuring out where a sound is coming from, people will usually turn their head: comparing the phase observed before and during this rotation allows the brain to pinpoint the direction something is coming from with insane accuracy.

The thing that it checks for is phase differences between both ears, not loudness.

The other thing to consider is that this phase change perception is also related to the wavelength of the frequency, as turning your head will have a different effect on the phase of each individual frequency. This also explains why we hear differences in stereo channels with different intensities across the frequency spectrum. Little to none in the bass (as the wavelengths are so large that the phase change is very minimal between both ears), most in low mids with a gradual falloff to the top.
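The frequency dependence falls out of simple arithmetic. A back-of-envelope sketch (head width and speed of sound are round, illustrative numbers):

```python
# Maximum interaural time difference for a sound arriving from the side,
# assuming roughly 0.21 m between the ears and 343 m/s speed of sound.
itd = 0.21 / 343                          # about 0.61 ms
for f in (50, 500, 1500, 5000):
    print(f, "Hz:", round(360 * f * itd, 1), "degrees between the ears")
# At 50 Hz the offset is ~11 degrees: barely any phase cue in the bass.
# Above ~1.6 kHz the offset exceeds a full cycle and becomes ambiguous,
# at which point level differences take over as the dominant cue.
```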

robiaster

It's unbelievable how clear and precise your explanations are.
Thank you for the amazing job you do preparing these videos.
Cheers from Italy!

mastroaka

Dan, thank you for this insightful video and your empirical approach.
I'm not an audio professional (I just listen to music a lot), but I'm fascinated by the technology and have been using EQ extensively for a while now. This video answered many questions I've been thinking about for months, and it has given me even more things to think about in the future. The same goes for many past videos. You really motivate me to delve into the tougher, less surface-level parts of music production and processing. Keep it up!

sensoryoverload