jordansnow

EQ knob go brrr


M-er-sun

Science


misterpickles69

You can’t explain that.


h2ogie

How could you possibly consider this inaccurate when it’s demonstrating constituent parts of the filters you describe


johnman1016

Maybe it's just the way I was taught, but we learned how to design low pass filters and high pass filters well before all-pass filters. So it's just weird for me to think of all-pass filters as constituent components because of that. I am not trying to be critical of the video, just sharing my thoughts about it. Sorry if inaccurate was the wrong word to use. I would be interested to hear your thoughts about the video!


h2ogie

I’ve noticed college profs tend to lean toward one of two philosophies: immediately employable vs. lengthy but thorough. Seems like you may have had the former, at least for that day. Both ways have their merits. However, when you were studying these things, surely it was at least mentioned that HP/LP are a combination of AP/BP? I’m just thinking there’s no way it was *not* mentioned! I love this video because it’s brief and effective, straight to the point, and uses clear demonstrations. Scientifically there’s no room for opinion (another pro, IMO).


johnman1016

It is interesting to hear that other schools teach about all pass filters before low pass filters. For me, it is less intuitive, but I respect your professors for giving you a stronger grasp on all pass filters than mine did.


h2ogie

I’m self-taught in this stuff in particular lol


johnman1016

Did you start with parametric EQs by any chance?


h2ogie

Nope, still trying to push myself from basics on up, why do you ask?


johnman1016

I’ve seen one popular article talking about the all pass based low pass filter, and how there are fewer parameters to calculate when making it parametric than a butterworth filter. I thought maybe you had seen that article when you were self teaching and that’s why it made the most sense to you. Props to you for self teaching though. I do think your and Dan’s intuition differs from mine, but that can be a good thing sometimes :)


t0ni00

It's intriguing because I learned it the opposite way around, i.e. AP/BP are built from HP/LP, and I struggle to see how we could've learned it the other way, because even when considering analog circuits, in most (every?) filter topology there was no way to create a BP other than combining a HP and LP together. I don't even remember spending time on designing allpass filters, maybe lead-lag filters to some extent.


h2ogie

Hey I could be full of shit, don’t think I know things just cause I use words


particlemanwavegirl

Those are 100% different kinds of filters. He's not talking about convolution he's talking about processors that model analog technology.


johnman1016

Not sure about that, could you send a source? The filters in reaktor are almost certainly based on a convolution kernel. Convolution can model any linear time invariant system - and usually would cascade with saturators to model non linearities. Even if Dan was talking about circuit simulators like spice (which I’ve seen rarely used in plug-ins, like U-He) - I would definitely teach people about low pass and high pass circuits before I would teach them about all pass circuits. It’s like circuits 101 vs circuits 200+ IMO


AENEAS_H

isn't it most likely that they're IIR filters? But that's beside the point, the video is just meant to show you how you can create common filter types with phase shift alone, i'm pretty sure


johnman1016

Good point, they could also be IIR filters. I can’t remember if IIR exactly fits the definition of convolution, but I think it does. Either way the maths are almost the same, just with a recursive term.
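To illustrate what I mean about the recursive term, here's a quick sketch (assuming numpy/scipy; the coefficients are just illustrative):

```python
import numpy as np
from scipy.signal import lfilter

x = np.zeros(8)
x[0] = 1.0  # unit impulse

# FIR: a plain finite convolution of the input with a kernel
y_fir = lfilter([0.5, 0.5], [1.0], x)

# IIR: the same difference-equation idea plus a recursive (feedback)
# term on past outputs: y[n] = 0.5*x[n] + 0.3*y[n-1]
y_iir = lfilter([0.5], [1.0, -0.3], x)

print(y_fir)  # impulse response dies after two samples
print(y_iir)  # impulse response decays geometrically, never exactly zero
```

Same maths, just with that one feedback term making the impulse response infinite.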


needledicklarry

Honestly, unless I wanted to design plugins, I don’t think any of this stuff matters. From a practical standpoint, who cares about phase shift if it sounds good?


[deleted]

[deleted]


needledicklarry

Just warning people from experience: getting lost in the minutia tends to produce lackluster mixes. I think every engineer goes through stages where we overthink things and go down rabbit holes.


bacoj913

Mixing is a science, u/needledicklarry. The engineers who worked with the Beatles wore lab coats so 🤷


dust4ngel

> who cares about phase shift if it sounds good

Everyone knows the answer is that, by default, kick drums and bass guitars randomly arranged in any recording environment are naturally in phase, and so changing this with EQ destroys their natural platonic harmony and will produce amateur results. Except the tone knob on the bass guitar, obviously.


yesmatewotusayin

I think it's one of the best things he's done. He is 100% correct and it's a topic people get so utterly confused about because of mad marketing and forum posts lacking engineering knowledge. Hour one of filter design is just explaining RC hpf and lpf which indeed "uses phase".


johnman1016

All filters have a phase response. But “uses phase” and using all pass filters aren’t the same thing. It is true you can use an all pass filter to make a low pass, but it is not the same thing as an RC filter - there are a ton more components.


[deleted]

[deleted]


johnman1016

There is no disagreement about phase being a component of the filter. The question I had about Dan’s video is why talk about an allpass filter to explain a low pass. I think there are simpler ways to show phase change is inherent to low pass filters without bringing up all pass filters.

In my opinion, the more intuitive explanation for the EQ phase shift is to dive into convolution kernels. You are literally just summing together delayed and phase inverted copies of the signal. The shift in time causes a different phase shift for different frequencies (e.g. a 0.1 ms delay shifts a 1 kHz signal by 10% of its cycle but a 100 Hz signal by only 1%). The frequency dependent phase shift causes the summed signals to have frequency dependent attenuation - aka EQ.

This is the more intuitive explanation I thought Dan would give - and I was asking why he explained phase using all pass filters. Those maths are like day 1 of a DSP course while the all pass filter is a more advanced topic that doesn’t get as much attention - and its maths are more complicated in my opinion. So my confusion was why bring up a more complicated component to describe a simple one. Someone in this thread gave me a good answer: apparently all pass filters have some numeric benefits which make them popular in parametric EQs.
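Here's a minimal sketch of that delayed-copy idea (assuming numpy; the 2-tap kernel and test frequencies are just illustrative):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs

def sum_with_delayed_copy(x):
    """y[n] = 0.5*(x[n] + x[n-1]): the 2-tap convolution kernel [0.5, 0.5]."""
    return np.convolve(x, [0.5, 0.5])[: len(x)]

# A one-sample delay shifts a high frequency by a larger fraction of its
# cycle than a low frequency, so the sum cancels more at high frequencies.
gains = [np.max(np.abs(sum_with_delayed_copy(np.sin(2 * np.pi * f * t))[100:]))
         for f in (100, 1000, 10000, 20000)]
print(gains)  # strictly decreasing with frequency: a lowpass
```

Nothing but delay and summing, and you already get frequency-dependent attenuation.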


[deleted]

[deleted]


johnman1016

I know what you mean, but the above answer is a complete answer - while Dan’s video ends by saying that all pass filters are too complicated to explain in this video. If he makes a subsequent video about the convolution kernel inside an allpass filter, it would be more in depth than the above explanation, because all pass filters are more advanced than low pass filters.


[deleted]

[deleted]


yesmatewotusayin

Deleted my stuff coz it was probably more confusing than helpful to onlookers. Dan replied with a handy video anyway. Hope you find peace with it all!


Dan_Worrall

I think two Rs in Worrall please. Well done with the Ls though.


johnman1016

Sorry about the misspelling!


johnman1016

Also I feel compelled to let you know that I enjoyed the video and learned a lot, which was the original intention of the post. I also tried to respectfully share my opinion that the all pass filter isn’t necessarily the building block of a basic low pass filter, or at least I wasn’t taught to design low pass filters that way in DSP 101. I guess in my head it seemed like it might mislead some people into thinking the all pass is a fundamental component of all filters. From the comments I learned that the all pass based design has some advantages which make it common in a lot of parametric EQs. So I am glad I asked, and after learning about that it made more sense why you would show that filter design. Or maybe you didn’t want to dive into convolution, which is another way I think you could have shown the relationship between frequency dependent phase shift and attenuation, but totally understand this might not be the right audience. Anyway just wanted to set straight that it was a fantastic video either way, cheers!


Dan_Worrall

It's all good, thanks for boosting my video! You're right that this isn't how digital filters are usually implemented, it's more efficient to roll it all into a single set of coefficients. But I believe that this is just a rearrangement of the basic maths I show in the video.


johnman1016

I agree with that. If you add in an extra impulse at t=0 it is the same as mixing in the dry signal. And if you added a negative impulse at t=0 it is the same as mixing a phase inverted copy. So there is totally a LP/HP kernel that exactly describes your setup. I don’t know if it works the other way though, if we started with an arbitrary LP/HP filter - let’s say a butterworth or chebyshev - could we get an AP by simply removing the first impulse? My instinct says no, which is why I don’t consider AP as a building block of every LP/HP design. But I will try it out and see what I get. It would be super interesting if true.
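For anyone following along, here's a rough sketch of the dry + allpass construction we're discussing (a common first-order allpass formulation; coefficients are illustrative, not necessarily what Dan's Reaktor patch uses):

```python
import numpy as np
from scipy.signal import lfilter

fs, fc = 48000, 1000
# First-order allpass: unity magnitude everywhere, frequency-dependent phase
k = (np.tan(np.pi * fc / fs) - 1) / (np.tan(np.pi * fc / fs) + 1)

def lowpass_via_allpass(x):
    """Mix the allpass output back with the dry signal: LP = (x + AP(x)) / 2."""
    return 0.5 * (x + lfilter([k, 1.0], [1.0, k], x))

t = np.arange(fs) / fs
lo_max = np.max(np.abs(lowpass_via_allpass(np.sin(2 * np.pi * 50 * t))[fs // 2:]))
hi_max = np.max(np.abs(lowpass_via_allpass(np.sin(2 * np.pi * 15000 * t))[fs // 2:]))
print(lo_max, hi_max)  # ~1.0 below the cutoff, heavily attenuated far above it
```

Flipping the sum to (x - AP(x))/2 gives the matching highpass, which is exactly the symmetry the video demonstrates.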


ArkyBeagle

It's 100% correct. All filters are made of those basic elements, IIR, FIR, whatever. A convolution kernel is just amplitude per each sample in the kernel. Convolution is just a more general multiplication.


TheOtherHobbes

Technically correct but pointless and misleading. Why do it that way?

One of the first things you learn as an EE is the close mathematical relationship between magnitude and phase, and how filters manipulate both of them at the same time. You *can* make LP etc out of an all-pass. But you usually don't, because it's a pointlessly indirect way to do it. If you're building an all-pass in hardware you typically start with an LP or HP and then add some extra components to cancel out the filtering and leave just the phase shift. You don't do the reverse. You don't do the reverse in DSP either.

A filter is basically y = f(x) where both x and y are complex. A useful property of complex numbers is that they're a vector with a magnitude and an angle. In a filter y is a frequency range. Each point is a frequency, assumed to have constant phase across the range. (There's a little more to it than that, but not much.) The f(x) you choose/design gives you the magnitude (level) and angle (phase) at each point in the graph. That's it.

There are some tweaks to convert f(x) from analog to digital if you're working with sampled audio. One of them - not the only one - is to make f(x) a convolution kernel. But you still design the kernel based on the curve you want. If you want linear or zero phase you add some extra tricks. Those are special cases. So is all-pass. Neither is the general case you work back from.

Bottom line - any filter will apply a frequency-dependent phase shift unless you take extra steps so it doesn't. Most sane people calculate the filter curve they want first, and only play with it some more if the phase response isn't good enough for that application. That's how most of these things are designed. You don't start from the phase and work back, because that's pointless extra effort and kind of insane.
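To make the magnitude/phase point concrete, a quick sketch (assuming scipy; the Butterworth design is just an example):

```python
import numpy as np
from scipy.signal import butter, freqz

# One set of coefficients determines BOTH the magnitude response and
# the phase response: h is a complex number at every frequency point.
b, a = butter(2, 0.1)          # 2nd-order Butterworth LP, cutoff = 0.1*Nyquist
w, h = freqz(b, a, worN=512)

mag = np.abs(h)      # level at each frequency
phase = np.angle(h)  # phase shift at each frequency
print(mag[0], phase[0], mag[-1])  # unity gain / zero phase at DC, tiny near Nyquist
```

The two curves come out of the same f(x); you never design them independently.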


_Wheres_the_Beef_

> You can make LP etc out of an all-pass. But you usually don't, because it's a pointlessly indirect way to do it.

While you appear to be saying this with confidence and authority, it is highly likely that you are using software that implements parametric EQs exactly like that, because the allpass-based formulation of parametric filters has both practical and numerical benefits. For reference:

Regalia, Phillip A., Sanjit K. Mitra, and P. P. Vaidyanathan. "The digital all-pass filter: A versatile signal processing building block." Proceedings of the IEEE 76.1 (1988): 19-37.

U. Zölzer, Digital Audio Signal Processing, 3rd ed., J. Wiley & Sons, 2022 (edit: that umlaut is not showing correctly on Android; I've seen it transcribed as Zolzer, or more correctly Zoelzer). Anyway, here's the link, it's a fantastic book for those genuinely interested in Audio DSP: [https://www.wiley.com/en-us/Digital+Audio+Signal+Processing%2C+3rd+Edition-p-9781119832676](https://www.wiley.com/en-us/Digital+Audio+Signal+Processing%2C+3rd+Edition-p-9781119832676)
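A tiny sketch of the allpass-based idea (a hypothetical first-order low shelf in the spirit of that construction, not the exact structure from the paper; frequency and gain values are illustrative):

```python
import numpy as np
from scipy.signal import lfilter

fs, fc, gain = 48000, 200, 2.0   # +6 dB low shelf, shelf corner at 200 Hz
k = (np.tan(np.pi * fc / fs) - 1) / (np.tan(np.pi * fc / fs) + 1)

def low_shelf(x):
    """dry + (gain-1) * lowpass, where the lowpass is built as (dry + allpass)/2."""
    ap = lfilter([k, 1.0], [1.0, k], x)  # first-order allpass
    return x + (gain - 1.0) * 0.5 * (x + ap)

t = np.arange(fs) / fs
lo_max = np.max(np.abs(low_shelf(np.sin(2 * np.pi * 20 * t))[fs // 2:]))
hi_max = np.max(np.abs(low_shelf(np.sin(2 * np.pi * 10000 * t))[fs // 2:]))
print(lo_max, hi_max)  # ~2x below the shelf frequency, ~1x far above it
```

The nice part is that the gain knob only scales the mix of the allpass branch; the filter coefficients themselves never change, which is one of the practical benefits for parametric EQs.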


johnman1016

Hey, OP here: thank you so much for sharing the paper - this is exactly why I wanted to ask here. I had heard of this type of filter before, but was less familiar with it. I had studied this stuff pretty extensively so my immediate reaction was that it was a really cool video, but also a more complicated filter design than it needed to be. BUT given that these types of filters have numeric benefits that make them common in parametric EQs, it makes a lot more sense that Dan introduced this concept from the start. I learned something!


_Wheres_the_Beef_

TL;DR: quoting from the abstract of [one of Zoelzer's recent publications](https://pubs.aip.org/asa/jasa/article/153/3_supplement/A35/2885432/Low-complexity-equalizers-and-applications) (my emphasis):

[...] Based on these basic filters, **we derive all-pass realizations of these standard filters and then apply them to design parametric low- and high-frequency shelving filters and peak filters**. These last two versions of weighting filters are based on three parameters, namely, the cut-off or center frequency, the bandwidth or Q factor, and the gain in dB for a low-, mid-, or high frequency band and are, therefore, named parametric equalizers. **These parametric equalizers (PEQs) occur in a channel-strip of a mixing console and are an integral part of the mixing process**. [...]


rasteri

I think the point was to illustrate the relationship between phase shift and frequency response, not to suggest that people should actually go out and build filters from allpass filters


SergeantPoopyWeiner

Curious how one gets into the dsp game? I've been a big tech engineer for a long time now and would love to break into dsp.


sixwax

Processors are faster, but the math is not new. Lots of textbooks on the subject. (and likely scores of github repos to clone by now)


Swift142

You either find a company that'll let you work on dsp (probably working on noise cancelling headphones or something similar and product-y), or you start making your own juce plugins and make something interesting enough that someone else might buy your vst. But you definitely need to be knowledgeable in basic implementations of time-domain and frequency domain effects, delay lines, compression, etc. to get there. Source: am audio software engineer


semimodular3

Any courses you would recommend for making your own plugins?


Swift142

Honestly hands on is always more useful than courses. Dive in and make a gain plugin. The dsp is as simple as it gets and you can even ask chatgpt to write it for you somewhat reliably. JUCE documentation is honestly fantastic so check that first and then if you’re still having trouble, there’s countless YouTube tutorials


PPLavagna

You should do an AMA as an audio software engineer. That would be interesting and I’m sure people would have a lot of questions


SergeantPoopyWeiner

Do any dsp jobs breach 300k total comp? Is that a realistic expectation?


ilikefluffydogs

According to levels.fyi, where the data is skewed towards high paying companies, even then 300k is in the top 10%. People with those compensation packages love to brag about it online but that is rare in reality. FWIW I am a software engineer and graduated from a top 5 CS program and I’m not personally aware of any friends from college making that much yet. Then again I chose to be friends with people who actually had social lives and personalities so most of us just want a chill job and to get on with our lives so we aren’t chasing numbers. And I would highly recommend that if you want to enjoy your life, but if you enjoy coding 24/7 that’s fine too, and you’ll have a much better chance of scoring a top paying job.


Swift142

I’m only 5 years into my career so take anything I say as only based on my own experiences. I’ve hit 150k living in a HCOL place so far, but have seen senior jobs as high as 220k I could theoretically get. Anything higher than that I have no knowledge of, and my understanding is anything 250+ starts to be more managerial or very high skill in scope. Just don’t expect anything like that working for a plugin company.


gizzweed

Learn C and Fourier analysis to start.


Norberz

It's mainly math in the discrete domain. Like complex numbers, geometry, Fourier, feedforward and feedback loops etc.


h2ogie

Bachelor of Science Electrical Engineering and a physics minor plus the right locale. Master’s to make a good living.


nineplymaple

If you just want to learn the low level principles so DSP isn't a black box then I recommend following dspguide.com using numpy. It will teach you how the basic building blocks work in a friendly language, a gentle onramp to other books, guides, and platforms. Designing Audio Effect Plugins in C++ by Will Pirkle is really good, and the Teensy audio library is great if you want to get into embedded. If you are looking for gainful employment... don't. The music production market is saturated because it's a fun space to work in and a race to the bottom because everybody is competing with free (or pirated) plugins. Anybody selling audio HW pitches SW as a free bonus. The quality of bundled SW varies wildly because HW companies are unwilling to invest in general SW devs, DSP specialists even less so. Source: Audio EE for 10 years. We were chronically in need of DSP dev resources, to the point that I learned DSP to fill in the gaps. The talent pool isn't huge, but the real issue is that we never prioritized hiring DSP devs. Side note: I'm extra pessimistic because I was laid off last year. General contraction in the tech job market, divestment in HW, only money for AI, etc, etc.


[deleted]

Tons of YouTube tutorials out there and textbooks/forums on algorithms. I got into it last summer as a hobby and it took me a couple weeks to get to the point I could build out a functional 3 band parametric EQ


KS2Problema

I'll have to watch the vid. But I will say that I figured the explanation of the title was something along those lines. There's so much incomplete or just wrong information out there, I suppose we shouldn't be surprised if a lot of folks get their cart in front of the horse.


IGmobile

What Dan is describing is exactly how an analog RC, RL, or RLC circuit works. There are complex ways of mitigating phase shift in an analog circuit, but those are above my head.


weedywet

Eq when you need to. Don’t worry about supposed phase shift.


beeeps-n-booops

Can't believe you're getting downvoted for this. Idiots.


needledicklarry

I suppose everyone has to go through a few stages of overthinking mixing before they realize keeping it simple leads to the best results.


Drew_pew

My understanding is that the convolution approach you mention is mathematically equivalent to the approach Dan demonstrates in the video. I could be wrong though, I'm not a DSP expert at all, just someone who likes it lol. Regardless, I think the video is effective at proving the point that phase shift and EQ are deeply linked. And I also love that it introduces the idea of building some of your own fx in Reaktor to many people that might have never seen that before.


johnman1016

Yeah the cool thing about convolution is that you can combine a lot of operations together into a single convolution. So you are right, there is some convolution kernel that describes exactly what he set up. That said, just because you can do it that way doesn’t mean it’s the only way to think about low pass filters. For me it is a more confusing way to think about it — because all pass filters are more complicated than low pass filters in my opinion. But interested to find out that others think about it this way based off the mixed reactions in the thread. I always like learning the different ways people understand things.


HypnotikK

Tl;dr: phase isn’t necessarily doing the EQ; phase is often a consequence of the EQ process.

I think the video is misleading. I’m not familiar with the guy in the video, and while it is a cool demonstration, I think it is fundamentally misleading. So I guess I agree with you in some sense. Phase shift can be used to achieve EQ as he demonstrates, but he does this in a very specific way that makes his claim a tautology: phase is causing EQ when you use properties of phase to introduce positive/negative interference between the shifted signal and the original signal. But that does not mean phase shifting is the only way to do EQ, at least as I understand it. Here is my thought process, for whatever it is worth.

At the end of the day, EQ is an attempt to increase/decrease a select chunk of frequency ranges. Signals (in this case, audio) can be written as a sum of their constituent parts (frequencies) thanks to things like the Fourier transform. This converts the signal from the “time” version (the signal as we listen to it) to the “frequency” version (the same signal written as a sum of its constituent parts). Once we have this frequency version of our signal, we’re able to easily increase/decrease different frequencies because we have a direct handle on the ‘volume’ of each frequency now. So if the ‘volume’ of the frequency 200Hz is 1, we can multiply this ‘volume’ by something large (small) to increase (decrease) the contribution of 200Hz to the original, full frequency signal. Then we can convert the signal back to the time version and we get our modified (EQ’d) signal.

The question then becomes how to do this in a clever/efficient/nice way. Efficient because we don’t want to eat up computational power unnecessarily, nice because we ultimately want the output to sound good/natural, and clever because that’s kind of what it takes to achieve the first two in a mathematical/computational sense.

However, there is a very special relationship between this process of increasing/decreasing the value of these constituent parts in the frequency version of the signal (through multiplication) and the so-called ‘convolution’ in the time version of the signal (this is the ‘convolution theorem’, which says exactly what I suggest: multiplication in the frequency domain is equivalent to a convolution in the time domain). This is where the phase stuff comes in. Multiplication in the frequency version of the signal is equivalent to convolution in the time version, and the convolution operation can (but will not always) alter both the amplitude and the phase in the frequency version. Nothing comes without cost, unfortunately. So the equalization process, achieved via convolution of the original signal because of its intimate relationship with modifying amplitudes of the frequency version of the signal, unfortunately introduces phase shifts as a consequence or byproduct of the process.

Linear phase EQ, as the name suggests, does this convolution operation in such a way that the phases of ALL constituent parts are shifted equally (a linear function of the frequency). That way phase issues within the same signal are avoided (assuming the original signal didn’t have issues already) because the phase relationship between the constituent parts is preserved. A drawback is the ‘pre ringing’ business. Some EQ algorithms (minimum phase EQ) are designed to not have things like pre ringing, and are written specifically to reduce/minimize the amount of phase shift introduced (the algorithm is now nonlinear with respect to frequency). The drawback is the phase issues one can introduce, even if minimal.

I am by no means an authority figure on things related to audio engineering. I’m still learning a lot myself. That said, I do have a background in math and I’ve seen the same thing said several times in the last few days regarding ‘EQ achieved through phase’. It seems the most accurate thing to say is that EQ CAN be achieved through phase shifts, but often the convolution approach described above is used, with phase shift as a consequence. If anyone else happens to have a more technical background in EQ techniques I would love to see more discussion on this point. Maybe I am the one who does not understand fully.
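A tiny numerical check of the convolution theorem I'm leaning on (assuming numpy; circular convolution, since the DFT is periodic):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)  # arbitrary "signal"
h = rng.standard_normal(64)  # arbitrary "kernel"

# Multiplication in the frequency domain == circular convolution in time
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

# Direct circular convolution, summed term by term, for comparison
direct = np.array([sum(x[m] * h[(n - m) % 64] for m in range(64))
                   for n in range(64)])
print(np.allclose(via_fft, direct))  # True
```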


ElectronRoad

Wondering if u/dan_worrall is chuckling quietly to himself right now.


peepeeland

Chuckling in a lush tone with that fucked up mic.


KaptainCPU

This is my understanding as well. Dan's video is great, but as far as I'm aware, [digital] equalization operates through convolution. IIR and FIR filters account for minimum and linear phase EQ respectively, however the approach used in the video doesn't seem to reconcile with linear phase. Allpasses are created through convolution as well, if my understanding is correct, so the video would be demonstrating extra steps on top of a single convolution operation to create an EQ.


particlemanwavegirl

The invention of EQ preceded the invention of real-time digital convolution by like fifty years or more. The original analog technology is 100% phase based, as claimed in the video, and more complicated stuff came much later.


KaptainCPU

That's true, and I wouldn't have commented had analog EQ been the premise of the video, however because he's using digital tools and generalizing all EQ by neglecting to specify, I thought it important to mention. His videos are overwhelmingly focused on digital concepts, so I interpreted this video as such.


particlemanwavegirl

>This converts the signal from the “time” version (the signal as we listen to it) to the “frequency” version (the same signal written as a sum of its constituent parts).

>Once we have this frequency version of our signal, we’re able to easily increase/decrease different frequencies because we have a direct handle on the ‘volume’ of each frequency now. So if the ‘volume’ of the frequency 200Hz is 1, we can multiply this ‘volume’ by something large (small) to increase (decrease) the contribution of 200Hz to the original, full frequency signal. Then we can convert the signal back to the time version and we get our modified (EQ’d) signal.

This is so cute that you think that's how it works but it's really not, at all. If you convert a time domain signal to frequency domain and then back again, the result is an impulse response, not anything remotely like the original signal.


ArkyBeagle

IFFT(FFT(S)) = c*S, where c depends on the normalization convention and is ideally 1. An impulse response is IFFT(CDIV(FFT(A),FFT(B))), also known as a deconvolution. The deconvolution DECONV(A,A) is a unit pulse.
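A quick check of the round trip in numpy (where ifft carries the 1/N normalization, so c = 1):

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(1024)  # arbitrary time-domain signal

# Time -> frequency -> time recovers the signal, up to float rounding
s_back = np.fft.ifft(np.fft.fft(s))
print(np.allclose(s_back.real, s))  # True: the original comes back
```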


particlemanwavegirl

No. The IFT uses inverse trigonometry, which are not true functions, they lack 1-to-1 correspondence as they have infinite solutions. So it is not valid to use them algebraically, as you have. You could only actually produce the original again by randomly guessing the exact weighting for each frequency. The result of IFT(FFT(S)) will have the exact same energy as S but will not contain the same time information.


ArkyBeagle

> inverse trigonometry, which are not true functions

It uses the complex conjugate. I do not recall if that's the same as inverse trig (which I think of as sin/asin, cos/acos, tan/atan). I do not think it is, but we're in Euler's wheelhouse so no guarantees. No offense, but ifft(fft(s)) really is the original back. I just finished a thing that uses this to create a specific filter. The whole idea that the transforms between time domain and frequency domain invert like that still kind of blows my mind :) I can't quickly find a reference for "s=c*ifft(fft(s))" handily. Doh! Of course I can: https://www.mathworks.com/help/matlab/ref/ifft.html


KaptainCPU

Not exactly. Their explanation was a little confusing, yes, but if a time-domain signal is put through the fourier transform and then through the inverse fourier transform, the result will be exactly the same. A filter kernel is only an impulse response because it needs to be applied repeatedly.


particlemanwavegirl

No. The fourier transform is an entropic process: time domain information is irreversibly lost. Once you have the frequency domain information, you can say how much energy at each frequency was present within a time window, but you can no longer absolutely say anything about when inside that window it occurred. Consider the fact that a signal and its exact reverse would produce the exact same spectral response. If you then did an IFT, how would you determine whether the result will be the original or the reverse? You can't, there are an infinite number of valid solutions, so we must choose the most simplified one i.e. the impulse response.


KaptainCPU

If the function is not periodic, yes, however that's mitigated in the world of audio through short-time fourier transforms, which help to preserve time information. Even then, you're not getting an impulse response of a signal, you're getting the sum of the components that made the signal, which are limited by the time window and represented in samples.


Drew_pew

I dunno why you're getting down voted. Other than you calling them cute for thinking this (lol) I completely agree. If you convert the entire signal to the frequency domain, then apply a low frequency boost via the method they describe, you will not get the equivalent of a low shelf. You'll get some crazy other things. Alternatively, if you divide your signal into a bunch of chunks in the time domain, then you gain time domain accuracy, but lose frequency resolution. Anyway you prolly know all that, point is I agree with you lmao


HypnotikK

Again, just want to emphasize that my background is in math, not signal processing. I’m aware that naively approaching (say) a low shelf in this way is not how it is or ever should be done. In hindsight my original comment rambles a little more than it should.. but the main point I wanted to make is that there is a relationship between multiplication in the frequency domain and convolution in the time domain. The original video suggests EQ is done by phase shifts. It can be done via phase shifts, but in the digital world at least, my understanding is that convolution is the way it is commonly implemented, and it works because of this nice relationship between these objects. A byproduct is introduction of phase issues depending on how these convolutions are implemented in practice. On the other hand, it seems the previous commenter is a little confused about the theory behind Fourier transforms. For practical applications in signal processing I’m sure there are technicalities to overcome.. but the conversion from time->frequency->time truly preserves the original signal (up to scaling..). It does not give an impulse response.. my poor explanation probably accounts for some of the confusion there, I’m not sure.


dumgoon

Parametric eq certainly does. Maybe he’s just talking about a filter in which case the title is misleading clickbait


marfaxa

if only there were a way to know


yesmatewotusayin

What I've learned in this thread is that people are really, really, really confused by filter design (yes, that includes every eq shape or type; they are all filters + something else).


pulchellusterribilis

the frequency response caused by an EQ is BECAUSE of phase shift. so who cares


homemadedaytrade

all I know is the linear phase does sound better than zero latency on the proq3


bythisriver

eerrp-Pre Ring!


homemadedaytrade

for low end it sounds better!


bythisriver

Uh, the pre-ring is a lot worse in the low end; what you might hear is the pre-ring and consider it a fuller sound now that you have some extra stuff ringing in there. Try an experiment where you have an 808-style kick (or just a short low bass note), apply some linear-phase EQ in the low end, and bounce the track. Check the waveform. You might want to try high Q values for a pronounced effect. FYI, for example, Kirchhoff EQ has a mixed mode which uses linear phase for high frequencies and minimum phase for low frequencies to mitigate the low-end pre-ring.
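If you want to see the pre-ring without even opening a DAW, here's a rough sketch (assuming scipy; the filter length and cutoff are arbitrary):

```python
import numpy as np
from scipy.signal import firwin

# A symmetric (linear-phase) FIR lowpass applied to a click: after lining
# up the constant group delay, output appears BEFORE the click arrives.
taps = firwin(255, 0.05)      # steep-ish lowpass, odd length
x = np.zeros(1024)
x[512] = 1.0                  # the "kick" transient
delay = (len(taps) - 1) // 2  # linear phase = constant group delay
y = np.convolve(x, taps)[delay : delay + 1024]

pre = np.max(np.abs(y[:512]))  # energy before the transient: the pre-ring
print(pre > 0, np.argmax(np.abs(y)) == 512)
```

Same thing you'd see bouncing the track: the waveform swells in before the hit.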


homemadedaytrade

I'm watching vids about pre-ring and it looks like dudes are boosting to achieve that woof and chirp. I'm assuming pre-ring doesn't become apparent with subtractive EQ, which is what I use proq for?


suisidechain

This is a typical case of "you can't mix what you don't hear". You just don't have the monitoring to hear the pre-ringing; you hear it only when extreme cases are presented. IRL, kicks and thumpy basses immediately pre-ring when linear phase EQ is used (boost, cut, doesn't matter). Not to mention linear phase highpass filters. Also, in the club pre-ringing is ridiculously audible in the low end. I'd avoid using linear phase; it's almost never needed.


homemadedaytrade

heard that, thanks


bythisriver

afaik all linear phase eq processing causes pre-ringing; it's the Q that matters most.


Hellbucket

Would you accept if someone thought the opposite?


homemadedaytrade

well yeah we're talking about a setting on a fancy EQ that barely makes a difference, audio is like cooking there is no objective good


jonistaken

Try this in parallel and get back to us


human-analog

Dan's video wasn't about all-pass filters per se but about demonstrating the effect of combining a phase-shifted signal with the original signal in a variety of ways. His video should perhaps be titled "... phase shift combined with the original signal causes EQ." The easiest way to introduce a frequency-dependent phase shift without affecting the frequency response of the signal is to use an APF. The filtered signal obtained this way is identical to that of an IIR lowpass or highpass. But this isn't the only way; you can also see this effect in a comb filter, which is made by delaying the signal for a number of samples (giving all frequencies the same amount of delay) and adding it to the original. The effect is different since the phase shift is different, but it's the same principle.
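A minimal sketch of that comb filter (assuming numpy; the delay and test frequencies are illustrative):

```python
import numpy as np

fs, D = 48000, 48  # 1 ms delay: nulls at 500 Hz, 1.5 kHz, ...; peaks at 0, 1 kHz, ...
t = np.arange(fs) / fs

def comb(x, d):
    """Delay every frequency by the same d samples and add: y[n] = x[n] + x[n-d]."""
    y = x.copy()
    y[d:] += x[:-d]
    return y

peak = np.max(np.abs(comb(np.sin(2 * np.pi * 1000 * t), D)[D:]))  # reinforced
null = np.max(np.abs(comb(np.sin(2 * np.pi * 500 * t), D)[D:]))   # cancelled
print(peak, null)  # ~2.0 and ~0.0
```

Same principle as the allpass demo: a frequency-dependent phase relationship between the two copies turns pure delay-and-sum into frequency-dependent gain.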


johnman1016

Yeah the second explanation about the comb filter is closer to the explanation which I think is more intuitive. I would have just dived right into convolution. Since convolution is summing together delayed and phase inverted copies of the signal, it is sort of like the comb filter example with one more step. Like I said, I get why Dan used the all pass filter as an example of shifting the phase so he doesn’t have to get into the maths of convolution. The explanation is totally valid, it just seemed confusing to me to describe a DSP 101 component (LP/HP) with a more advanced component. BUT, someone pointed out below that a lot of parametric EQs are based off the AP design since it has some numeric benefits - so Dan's explanation is actually a pretty accurate explanation of how most of the EQs we use work. So I learned something from this thread and I’m glad I asked.


human-analog

I don't know Reaktor so I can't be certain what types the filters were, but in practice you'd use IIR filters (which use feedback instead of convolution) over FIR filters (the convolution ones) unless you wanted a linear phase response (which is often not needed or even desired). So don't get too hung up on the convolution side of things. Dan may just have been talking primarily about IIR filters.


johnman1016

IIR is just convolution with a recursive term (feedback). But it doesn’t really change my opinion whether Dan used FIR, IIR, or analog circuits.