Padé Approximants

In this video we'll talk about Padé approximants: what they are, how to calculate them, and why they're useful.

Chapters:
0:00 Introduction
0:33 The Problem with Taylor Series
2:11 Constructing Padé Approximants
4:50 Why Padé Approximants are useful
5:45 Summary

Supporting the Channel.
If you would like to support me in making free mathematics tutorials, then you can make a small donation over at
Thank you so much, I hope you find the content useful.
Comments

I had never known about Padé approximations, and you did such a good job motivating and explaining them. Also, the way you mentioned how it seems almost unreasonably effective, like getting something for nothing, definitely felt like it was speaking directly to the thought passing through my mind at that moment.

bluebrown

Padé approximations shine in analog filter design where you have poles and zeros. They are particularly effective in analog delay lines.

jamesblank

In general, expressions of the form at 0:23 are interesting since they have a few nice properties:

1.) Much like how polynomials can express any function composed from addition, subtraction, and multiplication (for a finite number of operations, at least), expressions of the form at 0:23 do the same thing, but for addition, subtraction, multiplication, *and division*. Another way of putting it is that they can describe any function built from the operations of a field. This might help explain why Padé approximants can be so much more effective than Taylor series.

2.) Approximants of that form are used all the time in highly realistic graphics shaders, because they can be used to create fast approximations of functions whose true values could not be calculated in the time it takes to render a frame. Unlike polynomials, they can behave very well over their entire domain, and they avoid large exponents that could introduce floating-point precision issues; both properties matter when you need to guarantee that a shader will not create graphical artifacts in a limited environment where all you have to work with is 32-bit floating-point precision. They also avoid calls to transcendental functions like sin() or exp(), which again makes their execution especially fast.

3.) You don't always need the derivatives of a function to find such an approximant. For instance, if you know that a function has an asymptote, or that it assumes a certain value at 0, or that it's symmetric, or that it tends towards a number at ±∞, then that automatically tells you something about the coefficients of the approximant. It then becomes much easier to run an optimization algorithm on a dataset to find good values for the remaining coefficients (see the sketch after this comment). Christophe Schlick gives an excellent example of this approach in "An Inexpensive BRDF Model for Physically-based Rendering" (1994).

4.) Multivariate versions of the approximant exist, too. To see how, start from the proof of the statement in 1.), but instead of working with real values and a single variable x, you also work with another variable y. For example, 3rd-order bivariate approximants wind up with numerators and denominators of the form p₁x³ + p₂x²y + p₃xy² + p₄y³ + p₅x² + p₆xy + p₇y² + p₈x + p₉y + p₁₀.

carl
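
To make point 3 above concrete, here is a minimal Python sketch of the constraint-then-fit idea. It is not taken from Schlick's paper; tanh is just a stand-in target, and the rational form, starting values, bounds, and fit range are illustrative assumptions. Known structure is baked into the rational form first, and only the leftover coefficients are fitted to data.

```python
# Sketch of point 3: encode known structure (f(0) = 0, odd symmetry) in a
# rational form, then fit only the remaining coefficients to sampled data.
import numpy as np
from scipy.optimize import curve_fit

def rational_odd(x, a, b, c):
    # Odd numerator over even denominator: f(0) = 0 and odd symmetry hold
    # by construction, before any fitting happens.
    return x * (a + b * x**2) / (1.0 + c * x**2)

x = np.linspace(-4.0, 4.0, 400)
y = np.tanh(x)  # stand-in for a measured dataset

# Positive bounds keep the denominator away from zero on the fit range.
(a, b, c), _ = curve_fit(rational_odd, x, y, p0=(1.0, 0.1, 0.1),
                         bounds=(0.0, np.inf))
print(a, b, c)
print(np.max(np.abs(rational_odd(x, a, b, c) - y)))  # worst-case error on the range
```

Because the value at 0 and the symmetry hold by construction, the optimizer only has to get three numbers right.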

The algorithm recommended this video to me; I'm so thankful, because this is beautiful and very useful.

EliesE

This is an excellent explanation of something I didn't know existed. Yet it's so simple and elegant. I'm working on a Machine Learning playlist on linear regression and kernel methods and I wish I had seen this video earlier! I'll play around with Padé approximants for a while and see where this leads me.

Thank you for this interesting new perspective!


I've never heard of this before, and after judging so many bad entries, this is a breath of fresh air.

romajimamulo

Mech Eng grad student here. This is my "did you know?!" flex for the next couple weeks. Amazing video, thanks!!

tbucker

It seems natural to choose M = N. What are the situations where there is an advantage in choosing M > N or N > M, given a "budget" of M + N coefficients to work with?

rolfexner

Oh nice. e^-x is a very common function to Padé approximate in linear control theory because it's the Laplace transform of a uniform time delay. Notably, x in this context is a complex number, yet it still works. I've never understood how it was computed until now.

I think the aha moment is realizing we are discarding all higher order terms when we perform the equation balance. This is the key reason why the Padé approximation isn't just equal to the Taylor approximation.

Sarsanoa
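
As a small sketch of that computation (assuming SciPy is available; the comparison interval is an arbitrary choice): scipy.interpolate.pade takes the Taylor coefficients, lowest order first, plus the desired denominator order, and here it reproduces the [2/2] approximant of e^(-x), (1 - x/2 + x²/12) / (1 + x/2 + x²/12), that is commonly quoted for time-delay approximations in control theory.

```python
# Build the [2/2] Padé approximant of exp(-x) from its Taylor coefficients
# and compare it with the Taylor polynomial built from the same coefficients.
from math import factorial

import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(-x) about 0: 1, -1, 1/2, -1/6, 1/24
an = [(-1.0) ** k / factorial(k) for k in range(5)]

# p and q are numpy.poly1d objects; denominator order 2 gives the [2/2]
# approximant (1 - x/2 + x^2/12) / (1 + x/2 + x^2/12).
p, q = pade(an, 2)

x = np.linspace(0.0, 4.0, 401)
taylor = sum(a * x**k for k, a in enumerate(an))   # degree-4 Taylor polynomial
print(np.max(np.abs(taylor - np.exp(-x))))         # Taylor error grows quickly past x ~ 2
print(np.max(np.abs(p(x) / q(x) - np.exp(-x))))    # the [2/2] Padé stays far closer
```

The same five coefficients feed both approximations; only how they are spent differs.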

Finally! I encountered these so often in physics papers. Finally I get it!

eulefranz

Having spent a great deal of time reading up on Padé approximants and struggling to find easy-to-understand introductory examples, it is extremely exciting to see content like this being put out there for people to learn from. Fantastic job motivating the need for these rational approximations and demonstrating their utility. In my personal explorations, I have found multipoint Padé approximations to be very cool: they can capture asymptotic behavior for both large and small x, or around poles and other points of interest. Keep up the awesome work!

ZakaiOlsen

I really like how fast you managed to explain it! Only a few math videos manage to cover a topic like this in under 7 minutes.

henrikd.

The Padé approximant might be closer to the actual function in the long run, but it actually has a larger relative error than the Taylor series around x = 0. Since we only care about approximating sin(x) from x = 0 to x = π/4 (we can then use reflection and other identities to get the value for other angles), the benefits are outweighed by the disadvantages (i.e. you have to do more arithmetic operations, including a division).

ichigonixsun

A hidden gem of a channel! Never really considered other approximations because the Taylor ones are so commonly used in computation. I remember reading about polynomial approximations of trigonometric functions for low-end hardware but maybe those were less general than the Padé approximation.

DeathStocker

I really didn't like calculus at university, but I find this very interesting. I can appreciate the beauty much more now that I'm not suffering through it.

cornevanzyl

Definitely interesting, but if I understand correctly: if you decide you need a higher order, you cannot reuse the low-order coefficients that you already have. I'd consider that a disadvantage.

nikolaimikuszeit

The best and simplest explanation of Padé approximation I have seen! We use it a lot in finite element simulation software in engineering, but I was always in search of a more intuitive explanation of its merits over a default Taylor series. I am happy today.

dodokgp

I'm missing the point. It's cool and all, but I have two objections:
1) Yes, Taylor series do not extrapolate well, but that's not their purpose; they are specifically meant to approximate a function in a small neighborhood of a point.
2) The [N/0] Padé approximant is the same as the Taylor series, and then you have the other N variants of Padé approximants: [N-1/1], [N-2/2], etc.
It seems unfair to say that Padé approximants work better than Taylor series, since Padé approximants are a direct extension of Taylor series, plus you can cheat by freely choosing how to split N into [N-M/M].

mikelezhnin
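
To make that split concrete, here is a minimal sketch (assuming SciPy; the target ln(1+x), the interval, and the particular splits are arbitrary illustrative choices) that spends the same budget of five Taylor coefficients on [4/0], [3/1], and [2/2] approximants, where [4/0] is just the Taylor polynomial.

```python
# Same budget of five Taylor coefficients of ln(1+x), spent on different
# [N-M/M] splits: [4/0] (the plain Taylor polynomial), [3/1], and [2/2].
import numpy as np
from scipy.interpolate import pade

an = [0.0, 1.0, -1.0 / 2, 1.0 / 3, -1.0 / 4]   # ln(1+x) = x - x^2/2 + x^3/3 - ...

x = np.linspace(0.0, 3.0, 301)
exact = np.log1p(x)

for m in (0, 1, 2):                            # denominator order of the split
    p, q = pade(an, m)                         # numerator order is 4 - m
    err = np.max(np.abs(p(x) / q(x) - exact))
    print(f"[{4 - m}/{m}]  max error on [0, 3]: {err:.3g}")
```

The output shows how the same coefficient budget behaves very differently depending on the split, which is exactly the freedom being pointed out above.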

Great explanation. These are the kinds of topics you want to share with everyone as a scientist, but keep quiet about as a company in order to have an edge. Thank you very much.

algorithminc.

I've encountered Padé approximation at university, in my 5th semester. The name of the course was - as you can guess - "Approximation". :D There are other very interesting methods as well.

Nice video from you. :)

easymathematik