Computing Derivatives with FFT [Python]

This video describes how to compute derivatives with the Fast Fourier Transform (FFT) in Python.

These lectures follow Chapter 2 from:
"Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control" by Brunton and Kutz

This video was produced at the University of Washington
Comments

One of the most entertaining series of lectures. I would compare it with my all-time favorite lectures of Gilbert Strang. Thank you for the amazing content. If you could ever spend a couple of minutes on your teaching setup (lightboard + computer screen projector), that would be extremely interesting.

KostasOreopoulos

This is really blowing my mind. Amazing. Although I have known these rules for a long time, I never used them to compute derivatives. Thank you~

weijinliang

I think the error of the "finite difference method" is not because the method wasn't good enough for this example, but rather because it is plotted with a shift of dx/2. If the midpoints mp = (x[:-1]+x[1:])/2 were used for the plot instead of x, then this method actually provides a relatively good approximation too.
But let me know if I'm wrong.

noahstieger
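The commenter's midpoint fix can be sketched as follows (my own illustration using the video's Gaussian-modulated cosine, not the notebook's exact code): the forward difference (f[i+1]-f[i])/dx is a first-order approximation at the grid point x[i], but a second-order approximation at the cell midpoint.

```python
import numpy as np

# Same test function as the video: a Gaussian-modulated cosine.
n = 64
L = 30
dx = L / n
x = np.arange(-L/2, L/2, dx)
f = np.cos(x) * np.exp(-x**2 / 25)
df = lambda s: -np.sin(s)*np.exp(-s**2/25) - (2/25)*s*np.cos(s)*np.exp(-s**2/25)

dfFD = np.diff(f) / dx                         # forward differences

err_grid = np.max(np.abs(dfFD - df(x[:-1])))   # compared at the grid points
mp = (x[:-1] + x[1:]) / 2                      # the comment's midpoints
err_mid = np.max(np.abs(dfFD - df(mp)))        # second-order accurate here
```

With this setup `err_mid` comes out noticeably smaller than `err_grid`, which supports the comment's point that part of the apparent finite-difference error is really a dx/2 plotting shift.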

I'm saying this about the book as well as the video. I felt that there was a kind of a leap from the continuous Fourier transform to the discrete version and that the connection between them was not made clear. Otherwise, good content. Keep it up.

HarshShamkantPatil

Hello from Sweden! I absolutely love these videos and have even bought the book now. Coming from Computer Science/Math this is a whole new world, and it is presented very professionally. Do you have a video/description/pointer of/to your setup (I guess you've got this question before)?


Thank you very much, you illustrate the FFT in the best way.

riccardobastianutti

Hi Steve,

I am replicating the error graphs, and almost no matter which metric I use (mean squared error, L2 norm, mean error), I get a linear rate of convergence for both the finite difference and the spectral derivative when plotted on a loglog scale. I'm using the same code for everything else, but the error graphs look much different.

Part of the issue, I think, is the definition of dx. Since n is the number of points, I believe dx should be defined as L/(n-1). But even when I define dx this way using the Jupyter notebook from the website, I still get a scenario where the spectral derivative initially decreases faster than the finite difference for small n, but then decreases at the same rate as the finite difference after that.

EDIT: I think you're right about the definition of dx in this case because you're using np.arange, which does not include the last point. But even when using dx=L/n, the finite difference still seems to converge much more rapidly than in the graph you show here. It even looks to be converging MORE rapidly at higher n, although the overall error for the spectral derivative is lower. I'm using numbers of points that are powers of 2.

navsquid
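Not the author, but one way to isolate the metric question is a self-contained check with a function that is smooth and exactly periodic on the grid (my own sketch, using exp(sin(x)) as an assumed test case). For such a function the spectral derivative error should fall faster than any power of n until it flattens at machine precision, so a straight line on a loglog plot would point to a setup issue (dx, periodicity of the test function, or the error metric) rather than the method.

```python
import numpy as np

def spectral_derivative(f, L):
    """FFT derivative of samples f assumed periodic on a domain of length L."""
    n = len(f)
    kappa = 2 * np.pi * np.fft.fftfreq(n, d=L/n)
    return np.real(np.fft.ifft(1j * kappa * np.fft.fft(f)))

L = 2 * np.pi
errs = []
for n in (16, 32, 64):
    x = np.arange(n) * L / n          # dx = L/n; arange drops the endpoint
    f = np.exp(np.sin(x))             # smooth and exactly periodic
    df_exact = np.cos(x) * f
    errs.append(np.max(np.abs(spectral_derivative(f, L) - df_exact)))

# errs should fall off faster than any power of n, then flatten out near
# machine precision rather than following a straight line on a loglog plot.
```

If this check behaves as expected but the notebook comparison doesn't, the difference is likely in the grid/dx conventions rather than the chosen error norm.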

Great lectures. Can you point me to a simple explanation of how to get (an approximation of) the second derivative of a vector of data using the FFT? It seems like there should be a shortcut. Does the output from the inverse transform that yields the derivative vector need to be scaled? I don't see where you do that; does it matter? Are the answers in your book?

DataJanitor
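Not the author, but the usual shortcut is exactly what you'd hope: each derivative is a multiplication by i*kappa in Fourier space, so the second derivative is a single multiplication by (i*kappa)**2 = -kappa**2. No extra scaling is needed, because np.fft.ifft already applies the 1/n normalization. A minimal sketch with assumed variable names:

```python
import numpy as np

# Assumed setup: n samples of a periodic function on a domain of length L.
n = 128
L = 2 * np.pi
x = np.arange(n) * L / n
f = np.sin(x)

fhat = np.fft.fft(f)
kappa = 2 * np.pi * np.fft.fftfreq(n, d=L/n)   # wavenumbers in FFT ordering

# Second derivative: multiply by (i*kappa)**2 = -kappa**2, then invert.
# np.fft.ifft carries the 1/n normalization, so no further scaling is needed.
d2f = np.real(np.fft.ifft((1j * kappa)**2 * fhat))   # ≈ -sin(x)
```

The same pattern gives any integer derivative order p via `(1j * kappa)**p`.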

I've been enjoying this series so much!!! Why are the following videos private :(

chinmaybadjatya

Very good video, thank you for posting! But I have a question about the error estimate: what are the main mechanisms behind the different convergence speeds for the FFT derivative? And how do the time and memory complexities compare?

zhyfn

Great lecture. Thank you for sharing. I'm curious about the technical setup you are using. Is this some kind of transparent whiteboard?

mohcinechraibi

I am a big fan of your teaching style. Thank you very much for creating such wonderful videos.
I tried to use your code for the f=x^3 and df=3*x^2 as well as f=sin(x) and df=cos(x) combinations. I observed a large error near the start and end of the curves. How can I reduce this error? Can you please explain the reason behind this?

manjeetkulhar

Thanks for the instructive video. Is there a way we can apply the FFT to compute derivatives of non-periodic functions?

ningliu
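Not the author, but this question (and the earlier one about large errors near the endpoints) comes down to the FFT assuming periodicity: a mismatch between the two ends acts like a jump, and Gibbs oscillations pollute the derivative. One common workaround is an even (mirror) extension of the data before differentiating. A rough sketch, with f = x**2 as an assumed test case:

```python
import numpy as np

def fft_derivative(f, L):
    """Spectral derivative of samples f assumed periodic on a domain of length L."""
    n = len(f)
    kappa = 2 * np.pi * np.fft.fftfreq(n, d=L/n)
    return np.real(np.fft.ifft(1j * kappa * np.fft.fft(f)))

n = 256
x = np.arange(n) / n          # grid on [0, 1)
f = x**2                      # NOT periodic: jumps from ~1 back to 0 at x = 1
exact = 2 * x

# Naive use: the implied periodic extension has a jump, so Gibbs
# oscillations pollute the derivative even away from the boundary.
df_naive = fft_derivative(f, 1.0)

# Mirror trick: even-extend the data so the extension is continuous,
# differentiate on the doubled domain, then keep the first half.
g = np.concatenate([f, f[::-1]])
df_mirror = fft_derivative(g, 2.0)[:n]

interior = slice(n // 4, n // 2)   # compare well away from the endpoints
err_naive = np.max(np.abs(df_naive[interior] - exact[interior]))
err_mirror = np.max(np.abs(df_mirror[interior] - exact[interior]))
```

The mirror extension still leaves corners (derivative jumps), so it only restores algebraic rather than spectral accuracy; for genuinely non-periodic data, Chebyshev/DCT-based methods are the more systematic fix.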

As we usually consider the endpoints of a grid as valid grid points, I guess instead of the "np.arange(-L/2, L/2, dx)", you should do "np.linspace(-L/2, L/2, n)" ?

koushiknaskar
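A note on that suggestion (my own sketch, not the author's reply): for an FFT grid the right endpoint must be excluded, because on a periodic domain +L/2 is the same physical point as -L/2. Plain np.linspace(-L/2, L/2, n) would include both and break periodicity; the equivalent call needs endpoint=False.

```python
import numpy as np

L, n = 2 * np.pi, 8
dx = L / n

# What the video uses: arange excludes the stop value, so +L/2 is left out.
x_arange = np.arange(-L/2, L/2, dx)

# The equivalent linspace call must also exclude the endpoint: on a periodic
# domain, +L/2 is the same point as -L/2 and must not appear twice.
x_linspace = np.linspace(-L/2, L/2, n, endpoint=False)
```

Both give n points with spacing dx = L/n, which is consistent with the wavenumbers from np.fft.fftfreq.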

Hi Prof Brunton, the finite difference method has an issue at the data boundary, where an extrapolation is often required. It seems to me that the FFT method doesn't need to manipulate or extrapolate the boundary; is that correct?

彭九方

Hello from Chile, this video is really good and very interesting... Congratulations on the lecture. Professor Steve, I have a question: how can I simulate fractional colored noise consistently? I'm working with this in a paper and I have many doubts about it... I would like to work with Python.

silfridojgp

Professor Steve, nice lecture. I have a question: is it possible to define a derivative of fractional order in Fourier-transform terms, like d^.5f/dx^.5 = sqrt(i*K)*F(f(x))?

FelipeGabriel-zmzu
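Not the author, but yes: under one common convention the Fourier fractional derivative is defined exactly that way, multiplying fhat by (i*kappa)**alpha (the branch of the complex power is a convention). A sanity check (my own sketch) is the semigroup property: applying the half-derivative twice should reproduce the ordinary derivative.

```python
import numpy as np

def fractional_derivative(f, L, alpha):
    """Fractional derivative of order alpha via (1j*kappa)**alpha in Fourier
    space (one common convention; the complex-power branch is a choice)."""
    n = len(f)
    kappa = 2 * np.pi * np.fft.fftfreq(n, d=L/n)
    return np.real(np.fft.ifft((1j * kappa)**alpha * np.fft.fft(f)))

n = 128
L = 2 * np.pi
x = np.arange(n) * L / n
f = np.sin(x)

# Semigroup sanity check: two half-derivatives should equal one derivative.
half = fractional_derivative(f, L, 0.5)    # a phase-shifted sine
full = fractional_derivative(half, L, 0.5) # should match cos(x)
```

For sin(x) the half-derivative under this convention is sin(x + pi/4), i.e. each half-step advances the phase by pi/4.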

Dear Steve, Thank you very much for your videos.
Basically, one can use the same idea to calculate the numerical value of an integral as well. Please correct me if I am wrong.

marsras
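Not the author, but the converse does work for zero-mean periodic data: divide fhat by i*kappa instead of multiplying, and zero out the k = 0 mode, since the mean is not invertible (that is exactly the arbitrary constant of integration, fixed here so the result has zero mean). A minimal sketch:

```python
import numpy as np

n = 128
L = 2 * np.pi
x = np.arange(n) * L / n
f = np.cos(x)                      # zero-mean periodic integrand

fhat = np.fft.fft(f)
kappa = 2 * np.pi * np.fft.fftfreq(n, d=L/n)

# Divide by i*kappa instead of multiplying; the k = 0 (mean) mode is not
# invertible, so set it to zero, which pins down the integration constant.
Fhat = np.zeros_like(fhat)
nz = kappa != 0
Fhat[nz] = fhat[nz] / (1j * kappa[nz])
F = np.real(np.fft.ifft(Fhat))     # antiderivative with zero mean
```

If the data has a nonzero mean, that part integrates to a linear (non-periodic) term and must be handled separately.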

Great video, thank you for posting! I was wondering if you knew how to plot the spectral derivative using nmodes? Thank you!

jordankiara

Thank you. You da man bro. Love your setup. Now I have to fire up Anaconda so I can check the output of these functions to compare with Matlab. Thanks again!
Also, I like to ...
from pylab import *
Edit: just checked your code. Not sure if this makes a difference, but your x vector isn't symmetric.
Neither is kappa.

nathanzechar
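On the symmetry point (my own sketch, not the author's reply): the asymmetry is expected rather than a bug. With an even number of points, a periodic grid contains -L/2 but not +L/2, since they are the same point, and np.fft.fftfreq likewise assigns the single Nyquist mode to the negative side.

```python
import numpy as np

n, L = 8, 2 * np.pi
dx = L / n
x = np.arange(-L/2, L/2, dx)                  # contains -L/2 but not +L/2
kappa = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # integer wavenumbers here

# For even n there is a single Nyquist mode; fftfreq puts it on the
# negative side, so kappa covers [-4, ..., 3] rather than [-4, ..., 4].
```

So both x and kappa are "one-sided" by exactly one sample, and that is consistent with the periodicity the FFT assumes.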