Memoization: The TRUE Way To Optimize Your Code In Python

Learn how to optimize your code with memoization, a form of caching that stores the results of computations a recursive function has already made. It's incredibly useful and can dramatically speed up slow functions.

▶ Become job-ready with Python:

▶ Follow me on Instagram:
Comments

It's good that you showed how the memoization works, but there are built-in decorators for this exact process: we can use cache or lru_cache from the functools library, so we don't need to write the memoization function every time.
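
For example, a minimal sketch with the built-in decorator (functools.cache needs Python 3.9+; on older versions lru_cache(maxsize=None) behaves the same):

from functools import cache

@cache
def fibonacci(n: int) -> int:
    # functools stores each result, so every n is computed only once
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(100))  # returns instantly thanks to the built-in cache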

maroofkhatib

To add to this nice video: memoization isn't just some random word, it is an optimization technique from the broader topic of "dynamic programming", where we remember the intermediate results of a recursive function. Recursive functions can be assholes and turn otherwise linear-time algorithms into exponential beasts. Dynamic programming is there to counter that, because sometimes it is easier to reason about the recursive solution.
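
A quick sketch of that blow-up, counting how many calls a naive recursive Fibonacci makes (the counter is just for illustration):

calls = 0

def naive_fib(n: int) -> int:
    global calls
    calls += 1  # count every invocation, including repeated subproblems
    if n < 2:
        return n
    return naive_fib(n - 1) + naive_fib(n - 2)

naive_fib(30)
print(calls)  # 2692537 calls for n = 30; a memoized version computes each of the 31 values only once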

Bananananamann

Memoization is a very useful technique, but it trades increased memory usage (to hold the cache) for the extra speed. In many cases that is a good tradeoff, but it could also use up all of your memory if overused. For the Fibonacci function, an iterative calculation is very fast and uses a constant amount of memory.
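
For comparison, a minimal iterative sketch that runs in linear time and constant extra memory:

def fib_iterative(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b  # only the last two values are ever kept in memory
    return a

print(fib_iterative(1000))  # no recursion limit, no growing cache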

rick-ljpc

You do mention this at the end, but "from functools import lru_cache" is a) in the standard library, b) even less to type, and c) can optionally limit how many results the memoization cache keeps.
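
For instance, a small sketch with a bounded cache (maxsize limits the number of stored results, evicting the least recently used ones):

from functools import lru_cache

@lru_cache(maxsize=128)  # keep at most 128 results
def fibonacci(n: int) -> int:
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(200))
print(fibonacci.cache_info())  # hits, misses, maxsize, currsize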

HexenzirkelZuluhed

Memoization is a very important concept to understand for improving code performance. 👍
I have used a different approach in the past for this exact issue: as a quick way, you can pass a dict as the second argument, which will act as the cache:

def fib(numb: int, cache: dict = {}) -> int:
    # the mutable default dict is created once and persists across calls,
    # so it doubles as the memoization cache
    if numb < 2:
        return numb
    else:
        if numb in cache:
            return cache[numb]
        else:
            cache[numb] = fib(numb - 1, cache) + fib(numb - 2, cache)
            return cache[numb]

dainis

Awesome video. This is wonderful to learn. Thanks, I really appreciate your videos.

swelanauguste

Maybe some people don't realize why this is so good with Fibonacci and why they aren't getting similar results with loops inside their functions.
This caches the function's return value (taking args and kwargs into account), which is a huge help because the Fibonacci function is recursive: it calls itself, so each fibonacci(x) has to be calculated only once. Without caching, the function has to recompute every previous Fibonacci number from scratch, rerunning the same call an enormous number of times.
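
To make that concrete, a rough timing sketch using functools.cache as a stand-in for the decorator from the video:

import time
from functools import cache

def fib_plain(n):
    return n if n < 2 else fib_plain(n - 1) + fib_plain(n - 2)

@cache
def fib_cached(n):
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

start = time.perf_counter()
fib_plain(32)
print(f"no cache:   {time.perf_counter() - start:.3f}s")  # re-computes every subtree

start = time.perf_counter()
fib_cached(32)
print(f"with cache: {time.perf_counter() - start:.6f}s")  # each value computed once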

erin

used a for loop for my fibonacci function:

def fib(n):
    fibs = [0, 1]
    for i in range(n - 1):
        fibs.append(fibs[-1] + fibs[-2])  # each new term is the sum of the previous two
    return fibs[n]

ran like butter even at 1000+ as an input

wtfooqs

Definitely helping to boost a bit of performance in my massive open world text adventure I'm developing. Thank you for this tip!

adventuresoftext

Great video! By the way, which theme are you using?

jcdiezdemedina

The key creation here seems risky, as in some odd cases two different combinations of args and kwargs can end up as the same key. Example: args 1 with kwargs "2" versus args 12 with empty kwargs both flatten to "12". I would recommend adding a special character between args and kwargs to avoid that.
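
Assuming the key is built by concatenating the plain string forms of the arguments (the helper below is hypothetical, just to illustrate the collision), a sketch of the problem and of a structured key that avoids it:

# hypothetical key builder similar to what the comment describes
def naive_key(*args, **kwargs):
    return "".join(map(str, args)) + "".join(map(str, kwargs.values()))

print(naive_key(1, x="2"))  # "12"
print(naive_key(12))        # "12" -- two different calls, same key

# a safer key: keep args and kwargs structured instead of flattening to one string
def safe_key(*args, **kwargs):
    return (args, tuple(sorted(kwargs.items())))

print(safe_key(1, x="2"))   # ((1,), (('x', '2'),))
print(safe_key(12))         # ((12,), ())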

IrbisTheCat

First of all I want to praise you for your nice videos. I always enjoy them.

That being said, I would like to point out a bug in your code. Since you are using a string to create the key, if you call the function in the two equivalent ways

fibonacci(50)
fibonacci(n=50)

the two inputs are mapped into different strings, and so the second function call will not use the previously stored cache.

I get that in the Fibonacci example this does not matter and that you are just making an example of code that does memoization (not claiming any optimality), but this is something that, in my opinion, should have been mentioned in the video.
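
One possible way to normalize the two call styles before building the key, sketched with inspect.signature (not necessarily how the video's decorator works):

import inspect
from functools import wraps

def memoize(func):
    sig = inspect.signature(func)
    cache = {}

    @wraps(func)
    def wrapper(*args, **kwargs):
        # bind the call to the signature so fibonacci(50) and fibonacci(n=50)
        # normalize to the same set of named arguments
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()
        key = tuple(sorted(bound.arguments.items()))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    return wrapper

@memoize
def fibonacci(n: int) -> int:
    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(50) == fibonacci(n=50))  # True, and the second call is a cache hit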

stefanomarchesani

That's so neat. Python has a solution for a problem called Cascade Defines in the QTP component of an ancient language, Powerhouse.

YDV

Thanks a lot, this content is incredible for junior Python devs like me.

xxaqploboxx

For primitive recursive functions, such as Fibonacci's series, tail recursion would also circumvent the issue with max recursion depth, wouldn't it?

tobiastriesch

Could this also be used in a while loop? For example:

a = 0
while a != 3000:
    print(a)
    a += 1

issaclifts

How is memoization different from the lru_cache you discussed in another video?

adityahpatel

Why? The @cache decorator in the functools module does the same thing, and you don't have to write your own implementation; just use "from functools import cache".

anamoyeee

I would like to know how you did that arrow, as I have never seen it before.

BJnaruto

Why is there not a new cache defined for each function call?

j.r.