SUPERCHARGED Python Functions!! #python #programming #coding


Background Music:
Creative Commons / Attribution 3.0 Unported License (CC BY 3.0)
Comments
Author

Caution: make sure your functions are pure (no side effects) if you use this caching feature; otherwise you may get unexpected results.
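The caveat above is easy to demonstrate with a small sketch: a hidden global makes the function impure, so the cache silently serves a stale value on the second call.

```python
import functools

counter = 0

@functools.lru_cache(maxsize=None)
def impure(x):
    # Impure: the result depends on hidden state, not just on x.
    global counter
    counter += 1
    return x + counter

print(impure(10))  # first call computes 10 + 1 = 11
print(impure(10))  # cached: still 11, even though counter would now make it 12
```

The cache keys only on the arguments, so the second call never runs the body and never sees the updated state.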

manikantareddy
Author

So it's basically memoization like in dynamic programming

takeoc
Author

There are multiple ways to supercharge your functions in Python in decreasing order of typical performance gains:

1. GPU acceleration (from numba.cuda import jit)
2. If not possible on GPU, then use multiprocessing (from multiprocessing import Pool)
3. JIT compile (from numba import jit)

LRU cache (shown in this video) can massively save time based on the function, or it may have no effect, or it possibly could even slow down the function.

GPU acceleration with the CUDA toolkit (it’s easy with the numba.cuda package) is limited to restricted data types (for example, no lists or strings are allowed), but it can deliver calculation speeds over 10,000x those of single- or even multi-threaded code on the CPU.

Multiprocessing is good because all modern CPUs have multiple cores, which can be leveraged for anywhere from a 2x to 10x speed improvement, though extra cores may show diminishing returns.

JIT compiling doesn’t have quite such restricted data types (among Python’s built-in types), but it still restricts program flow. Altogether it’s far more forgiving than GPU compiling.
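Option 2 in the list above is the only one that needs no third-party package (the numba examples require numba and, for CUDA, a compatible GPU), so here is a minimal stdlib sketch of fanning a CPU-bound function out across cores with `multiprocessing.Pool`; the `cpu_heavy` workload is a made-up stand-in:

```python
from multiprocessing import Pool

def cpu_heavy(n):
    # Stand-in for a CPU-bound task: sum of squares below n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Pool() defaults to one worker process per CPU core.
    with Pool() as pool:
        results = pool.map(cpu_heavy, [10**5] * 8)
    print(len(results))
```

Note that the worker function must be defined at module top level so it can be pickled and sent to the worker processes.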

JordanMetroidManiac
Author

It is useful when time complexity is exponential, as with the recursive Fibonacci sequence.
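The Fibonacci case is the classic demonstration: with `functools.lru_cache` each value is computed once, turning the exponential recursion into a linear one.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Naive recursion is O(2^n); with the cache each n is computed once.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # returns instantly; uncached recursion would never finish
```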

lucifer
Author

Good tip. I knew how the most basic caching works, but I didn't know it had such an easy implementation right in functools. Thanks for sharing!

TAPa
Author

You can also put cache={} in the parentheses, do cache[input] = output, and check if input is in cache; if so, return the cached output.
Eg
def increment(num, cache={}):
    if num in cache:
        return cache[num]
    cache[num] = num + 1
    return cache[num]

mgames
Author

You’d better have a deterministic function with immutable arguments; otherwise you’ll have a nightmare trying to debug it until you figure out your cache is stale. Also, I’d only do this for values that aren’t stored, or in objects where proper sanitization is too slow.
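One concrete reason the immutability advice matters: `functools.lru_cache` keys on argument hashes, so mutable (unhashable) arguments don't fail silently, they raise a `TypeError` outright.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def total(values):
    return sum(values)

print(total((1, 2, 3)))  # tuples are hashable, so this caches fine

try:
    total([1, 2, 3])     # lists are unhashable and can't be cache keys
except TypeError as exc:
    print("TypeError:", exc)
```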

tc
Author

Wow, I actually learned something from a coding short! Usually it's just very basic stuff that seems impressive to outsiders. List comprehensions are the visual Pythagoras proofs of Python.

cube_cup
Author

Imagine a function with a thousand lines of code.

vinicus
Author

It works as memoisation for top-down DP as well

shanmugamudhaya
Author

Just wow, thank you for your explanation. I did not know that.

SirIsaacNewton-bfse
Author

It should also be noted that it's most useful in recursive functions (like Fibonacci), and the functions have to be pure (closed, not interacting with anything else, like randomness in the middle of the function). Beyond that, heavy use can cause huge memory consumption and slow down your system, since all the inputs and outputs are stored in RAM. (You could probably write a custom decorator so that inputs and outputs are saved in a database or somewhere other than RAM.)
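The custom-decorator idea from the comment above can be sketched with the stdlib `shelve` module standing in for a "database"; the `disk_cache` name and the `repr`-based key scheme are made up for illustration, not an existing API.

```python
import functools
import os
import shelve
import tempfile

def disk_cache(path):
    # Hypothetical decorator: persist results to a shelve file instead of RAM.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args):
            key = repr(args)  # assumes the arguments have a stable repr
            with shelve.open(path) as db:
                if key not in db:
                    db[key] = func(*args)
                return db[key]
        return wrapper
    return decorator

@disk_cache(os.path.join(tempfile.mkdtemp(), "square_cache"))
def square(n):
    return n * n

print(square(12))  # computed once; later calls read the result back from disk
```

A real version would also need to handle keyword arguments and cache invalidation; this only shows the shape of the idea.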

carcedopro
Author

python programmers discovering dynamic programming

teop
Author

me who just stores it in a variable lmao

dabossbabie
Author

Also don't forget to use spaces around your math operators for readability

TheFreeSpiritKID
Author

Only use this for pure functions, and please use @cache if you don't know how to tweak an lru_cache or what it is.
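For reference, `functools.cache` (added in Python 3.9) is simply an unbounded `lru_cache`, and the `maxsize` argument is the main thing there is to "tweak":

```python
from functools import cache, lru_cache

@cache                   # Python 3.9+; unbounded, same as lru_cache(maxsize=None)
def double(n):
    return 2 * n

@lru_cache(maxsize=128)  # bounded: least-recently-used entries are evicted past 128
def triple(n):
    return 3 * n

print(double(21), triple(21))
```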

THEMithrandir
Author

The type of people we need in Python programming

NEWTONGATHOGO-zl
Author

Note: be careful about using this on all your calculations, because the cache takes x storage per user, and with many users that x storage gets multiplied by the number of users. A good feature, but one that comes with responsibility.
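The memory concern above can be bounded with the `maxsize` argument, and `cache_info()` reports exactly how much is being stored; a small sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=2)  # keep at most 2 entries; least-recently-used is evicted
def shout(word):
    return word.upper()

shout("a")
shout("b")
shout("c")  # "a" is evicted once "c" arrives
shout("a")  # recomputed: counts as a miss, not a hit

print(shout.cache_info())  # hits=0 misses=4 maxsize=2 currsize=2
```

With `maxsize` set, memory use stays constant no matter how many distinct inputs arrive.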

shahzadsafeofficial
Author

WOAH That's actually so useful wtf

the_person
Author

It's kind of like memoization in dynamic programming

all_techz