Turn Python BLAZING FAST with these 6 secrets

Don't assume Python is slow. These are some of the best ways to make Python's runtime performance comparable to other famously fast languages. Here, we look at six different approaches to turning your Python code into some of the fastest code out there.

Of these, my favorite is using PyPy, which I think is one of the easiest ways to get more performance.

#python #fast #coding #developer #programming #optimizing #concurrency #pypy

00:00 Intro
00:15 Built-in functions
00:52 Be Lazy with Generators
01:47 Use Concurrency
02:39 Cython
03:24 Using Compiled Frameworks
03:52 PyPy
04:40 Outro
Comments

Remember, folks: The secret to making Python run fast is to use as little Python as possible.

AxidoDE

Numba is really fast! The problem is that it doesn't support arbitrary Python objects, but it is really good for small numerical functions.

xhenryx

This might be a bit nitpicky, but concurrency != parallelism. If you're using the multiprocessing library, you're executing your code in a truly parallel fashion, since it spawns new processes each with their own interpreter running, but you don't have to do that to run code concurrently. Asyncio and the threading library will both allow you to make your code concurrent without needing new processes. If your tasks are largely IO-bound, asyncio or threads are usually a better choice, while multiprocessing is better for CPU-bound work (generalizing, of course).

Multiprocessing isn't always faster either. Depending on the number of threads and complexity of the problem, it might not be worth incurring the additional overhead to spawn new processes.

And on top of all that you're adding complexity to your code. Concurrency/parallelism aren't easy. So all that is to say, it's a nuanced topic and might not be the best example of an easy or effective way to improve the performance of your code.
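A minimal sketch of the IO-bound case this comment describes, using `ThreadPoolExecutor` from the standard library (the `fetch` task is illustrative, standing in for a network or disk call):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(i):
    # Stand-in for an IO-bound task (network call, disk read).
    # While one thread sleeps, others can run, so the tasks overlap.
    time.sleep(0.1)
    return i * i

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(fetch, range(8)))
elapsed = time.perf_counter() - start

print(results)
print(elapsed < 0.5)  # ~0.1 s concurrently vs ~0.8 s sequentially
```

For CPU-bound work you would swap in `ProcessPoolExecutor` (guarded by `if __name__ == "__main__":`), at the cost of the process-spawn overhead the comment mentions.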

jeffreyhymas

Once, one of my Python programs took 35 minutes to run. A colleague ported it to a Rust-based Python library and the script ran in less than a second.

vincentbenet

Nice video. Nice Vim editor too. Thanks!


🎯 Key Takeaways for quick navigation:

00:00 🚀 Leveraging Built-in Functions for Speed
- Using built-in functions from the standard library boosts Python's speed.
- Comparison of a custom sorting algorithm against the built-in sorted function.
- Built-in functions are often faster due to being written in C.
01:10 🔄 Laziness and Generators in Python
- Embracing laziness as a virtue in coding for efficiency.
- Utilizing the yield keyword to create generator functions.
- Generators help with large data sets, avoiding expensive memory allocation.
02:32 ⚙️ Enhancing Performance with Concurrency
- Introducing concurrency using the multiprocessing library.
- Exploring the concept of embarrassingly parallel problems.
- Demonstrating the efficiency gain through concurrent image processing.
03:14 🛠️ Code Compilation with Cython for Optimization
- Using Cython to compile Python-like code to C for performance improvement.
- Optimizing specific parts of the codebase, not a full replacement for Python.
- Significant performance improvement demonstrated with a factorial calculation.
03:42 📚 Harnessing Compiled Libraries and Frameworks
- Leveraging compiled libraries and frameworks like NumPy, Pandas, and Pillow.
- Exploring performance benefits by tapping into C implementation.
- Enhancing code readability while maintaining performance through these frameworks.
04:10 🚀 Boosting Speed with PyPy Interpreter
- Introducing PyPy as an alternative Python interpreter for improved speed.
- Just-in-time compilation: frequently executed code paths are compiled to machine code at runtime.
- Consider benchmarking code with both PyPy and CPython for optimal performance.
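The generator point above can be sketched in a few lines: a `yield`-based function produces values lazily in O(1) memory, while the eager list allocates everything up front (sizes below are illustrative, not exact):

```python
import sys

def squares_list(n):
    # Eager: allocates the whole list up front.
    return [i * i for i in range(n)]

def squares_gen(n):
    # Lazy: yields one value at a time, O(1) memory.
    for i in range(n):
        yield i * i

eager = squares_list(100_000)
lazy = squares_gen(100_000)
print(sys.getsizeof(eager) > sys.getsizeof(lazy))  # list object is far larger
print(sum(lazy) == sum(eager))                     # same values either way
```

The generator object stays tiny no matter how large `n` gets, which is why generators pay off on large data sets.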

RameshBaburbabu

Great ideas! I've got some code that runs multiple dd commands and is dog slow. I'll try some of these, like PyPy and Cython, to see if it increases the speed.

One thing that I do: instead of adding characters to a string, I append items to a list and use "".join(list_in) to build the string at the end.
For example, if you use st1 = st1 + new_char to build an 80-character line, you allocate roughly 3,200 immutable characters across all the intermediate strings.
Using lst_st1.append(new_char) with a "".join(lst_st1) at the end allocates about 160.
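A minimal sketch of the two approaches from this comment (note the empty-string separator in `"".join` so no spaces are inserted between characters):

```python
# Quadratic: each += copies the entire string built so far.
s = ""
for ch in "abcdefgh":
    s = s + ch

# Linear: collect pieces in a list, join once at the end.
parts = []
for ch in "abcdefgh":
    parts.append(ch)
joined = "".join(parts)  # "".join, not " ".join, to avoid extra spaces

print(s == joined)  # True
```

For short strings the difference is negligible, but for long loops the join version avoids the quadratic copying.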

scotth

Kids, the language itself isn't slow. What matters is the optimization done by the interpreter or compiler. The official CPython interpreter is fairly slow, but you can find alternatives like PyPy, Cython, Numba, Jython, and so on.

TheGabrielMoon

1:06 The number of times I've decided to spend a few minutes writing a script to automate something, only to enter a fever dream of ideas and end up wasting an hour on optimizing and refactoring it for no productive reason :D

demolazer

If you have code generating data in a build step, pickling the result is sometimes much faster at runtime: loading the pickle skips having to parse and rebuild it all again.
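A hedged sketch of that build-time pickling pattern (the lookup table here is an illustrative stand-in for whatever the build step generates):

```python
import pickle

# Build step: precompute an expensive structure once and serialize it.
table = {n: n ** 3 for n in range(1000)}
blob = pickle.dumps(table, protocol=pickle.HIGHEST_PROTOCOL)
# (In a real build you would write `blob` to a file shipped with the app.)

# Runtime: loading the pickle skips recomputing or re-parsing the data.
loaded = pickle.loads(blob)
print(loaded[10])  # 1000
```

Whether this beats regenerating the data depends on how expensive the generation is; pickling only pays off when deserialization is cheaper than recomputation.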

pwdotui

You have to be careful with multithreading though. Since CPython only lets one thread execute Python bytecode at a time (see the Global Interpreter Lock), multithreaded CPU-bound code can actually be slower, because a single interpreter performs all the operations and the context switches in between take time. It works well for IO-bound operations though, like you showed in the video :)

storyxx

should I replace asyncio with multiprocessing?

zuowang

Not that I've looked too deeply into it, but the generators example seems wrong: it looks like the time saved comes from not using .split in the generator version.

zizvjog

I find it much easier to use joblib for embarrassingly parallel problems than multiprocessing.

vidal

Possibly a dumb question, but is it not possible to just compile Python like you would any other language? Pre-interpret it, if you will? Speed isn't my biggest concern, as long as the machine does it faster than I can (I'm not a big professional developer), so I've never really thought about it until now.
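Part of the answer: CPython already compiles source to bytecode (.pyc files) automatically, and the stdlib `py_compile` module can do it ahead of time, but the bytecode is still executed by the interpreter, so it doesn't make the code itself faster. A minimal sketch (paths are temporary, for illustration):

```python
import os
import py_compile
import tempfile

# Write a tiny module to a temporary directory.
src = os.path.join(tempfile.mkdtemp(), "hello.py")
with open(src, "w") as f:
    f.write("print('hi')\n")

# Ahead-of-time compile to bytecode; returns the path to the .pyc file.
pyc = py_compile.compile(src)
print(pyc.endswith(".pyc"))  # True
```

Compiling to actual native code is what tools like Cython (and the JIT in PyPy) are for, which is why the video treats them as separate techniques.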

jameswalker

0:30 - it looks like you're creating new arrays for each partition (quicksort), but that defeats the purpose of quicksort, which is supposed to be an in-place sorting algorithm 😢

SharunKumar

Make use of set and dict, which use internal hashing to speed up lookups insanely!
(I used to accumulate stuff in lists and loop over them ... that was SO bad!)
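A quick sketch of the difference this comment is pointing at: membership tests scan a list element by element (O(n)) but use a hash lookup on a set (O(1) on average). The sizes and repeat counts below are just illustrative:

```python
import timeit

haystack = list(range(100_000))
as_list = haystack        # `in` scans every element until a match
as_set = set(haystack)    # `in` does a single hash lookup

# Worst case for the list: the needle is the last element.
t_list = timeit.timeit(lambda: 99_999 in as_list, number=100)
t_set = timeit.timeit(lambda: 99_999 in as_set, number=100)
print(t_set < t_list)  # True: the hash lookup wins by orders of magnitude
```

The same applies to dict key lookups, which is why replacing "list of pairs + loop" with a dict is such a common speedup.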

ewerybody

How about using maturin and Rust for those intense operations?

kovlabs

Which is the fastest among these?
- Yield generators
- Inline generators (generator expressions)
- List comprehensions
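There is no universal answer: the ranking varies with workload, data size, and Python version, so the honest move is to benchmark your own case. A minimal, hedged comparison harness for the three variants (timings printed, not asserted, since the order can differ between machines):

```python
from timeit import timeit

n = 10_000

def with_yield():
    # Generator function using the yield keyword.
    def gen():
        for i in range(n):
            yield i * i
    return sum(gen())

def with_genexp():
    # Inline generator (generator expression).
    return sum(i * i for i in range(n))

def with_listcomp():
    # List comprehension: builds the whole list, then sums it.
    return sum([i * i for i in range(n)])

# All three compute the same result; only the timings differ.
assert with_yield() == with_genexp() == with_listcomp()
for f in (with_yield, with_genexp, with_listcomp):
    print(f.__name__, timeit(f, number=100))
```

As a rough tendency, generator expressions avoid the extra function-call overhead of a yield function, while list comprehensions can win for small `n` at the cost of allocating the full list.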

profesionalfailer

Using mmap to create a hash from a file is not much faster than the approach with a buffer.
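A sketch of the two approaches this comment compares, hashing a temporary 1 MiB file both ways (file path and sizes are illustrative):

```python
import hashlib
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(os.urandom(1 << 20))  # 1 MiB of test data

# Buffered approach: read the file in fixed-size chunks.
h1 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(64 * 1024), b""):
        h1.update(chunk)

# mmap approach: hash the memory-mapped file directly,
# without copying chunks into Python-level buffers.
h2 = hashlib.sha256()
with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
    h2.update(mm)

print(h1.hexdigest() == h2.hexdigest())  # True: same digest either way
```

Both are dominated by the hashing itself, which is consistent with the comment's observation that mmap buys little here.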

deadeyea