How to Make 2500 HTTP Requests in 2 Seconds with Async & Await


This is a comparison of using async/await and asyncio with aiohttp in Python versus using threads and concurrent.futures, to understand how we can make several thousand HTTP requests in just a few seconds. Learning how to do this, and understanding how it works, will help you when running your own servers and web services, and when stress-testing any API environments you offer.
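The core asyncio pattern from the video can be sketched as follows. This is a minimal, hedged example: the aiohttp call is replaced by `asyncio.sleep` so it runs without network access, and the URLs are placeholders. With aiohttp you would open an `aiohttp.ClientSession` and do `async with session.get(url)` inside `fetch`.

```python
import asyncio
import time

async def fetch(url):
    # With aiohttp this body would be roughly:
    #   async with session.get(url) as resp:
    #       return await resp.text()
    await asyncio.sleep(0.5)  # stand-in for network latency
    return url

async def main():
    # Hypothetical URLs; in the real version these are API endpoints.
    urls = [f"https://example.com/{i}" for i in range(2500)]
    # gather schedules all 2500 coroutines concurrently on one event loop
    return await asyncio.gather(*(fetch(u) for u in urls))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(len(results), round(elapsed, 2))
```

Because all 2500 coroutines wait concurrently rather than one after another, the total time is close to a single request's latency, not 2500 times it.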

Support Me:

-------------------------------------
Disclaimer: These are affiliate links and as an Amazon Associate I earn from qualifying purchases
-------------------------------------
Comments

You do not need to explicitly create a task on line 18

jmoz

I can't believe how fast you got to the point. Thank you for reading the room

victorhaynes

Thanks for your video!! I tried to send multiple async requests to an API, but my previous code was inefficient and your example helped me!!

artemfagradyan

John, your channel is absolutely fantastic! congratulations!

fmanca

Thanks for the video, makes me appreciate go routines and their simplicity even more :)

eduardocasanova-personal

ThreadPoolExecutor is slower than raw Thread objects but saves memory. You don't need to use asyncio for waiting: you can use joins, barriers, wait groups, and mutex locks with threads to achieve the same. It comes down to preference, although asyncio is more streamlined, and threads are better suited for manual optimization that requires the utmost speed and care.
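The thread-based counterpart discussed in this comment and the video can be sketched with `concurrent.futures.ThreadPoolExecutor`. As a hedged, offline-runnable example, `time.sleep` stands in for a blocking `requests.get` call, and the URLs are placeholders:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # With requests this body would be roughly:
    #   return requests.get(url).status_code
    time.sleep(0.2)  # stand-in for blocking network latency
    return url

urls = [f"https://example.com/{i}" for i in range(50)]
start = time.perf_counter()
# The with-block joins all worker threads for us on exit
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(fetch, urls))
elapsed = time.perf_counter() - start
print(len(results), round(elapsed, 2))
```

With 50 workers the 50 blocking waits overlap, so the whole batch takes roughly one request's latency; the pool handles the joining the commenter mentions doing manually with bare `Thread` objects.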

swordlion

Awesome, man, I was working with these things today. I'll try this one too ♥️♥️♥️🙏

BringMe_Back

Hey sir 👋
As a self-taught dev, you are one of my inspirations 🙌🙌🙌
I started learning web scraping recently. I downloaded some of your Scrapy and BeautifulSoup tutorial videos,
followed along, and found them pretty comprehensive and clear; I learn better from your videos.
I hope for more and more tutorial videos!!!
Thanks a million times! 🙏

ihateorangecat

Thank you good sir, you are a master at this!! You have helped me land and keep my data scraping job!! Thank you so much, truly an inspiration :))

hristijansaveski

"how to build a ddos attacker..."

ErikS-

Hi John, or anyone who knows: I keep getting the error "UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 28855-28856: invalid continuation byte". Do you have a workaround for this? The ones I found on Google I can't seem to implement correctly. Much thanks!
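An "invalid continuation byte" error usually means the response body is not actually valid UTF-8. A common workaround (one possible fix, assuming the page's declared encoding is wrong or mixed) is to read the raw bytes and decode with `errors="replace"`; with aiohttp that would be `raw = await resp.read()` followed by the decode below. This sketch uses a literal byte string containing an invalid continuation byte so it runs standalone:

```python
# 0xC3 starts a two-byte UTF-8 sequence, but 0x28 ("(") is not a
# valid continuation byte -- this is the situation the error describes.
raw = b"price: 100\xc3\x28 EUR"

# raw.decode("utf-8") would raise UnicodeDecodeError here.
# errors="replace" substitutes U+FFFD for the bad byte instead.
text = raw.decode("utf-8", errors="replace")
print(text)
```

If you need the real characters rather than replacement marks, a library such as charset-normalizer can guess the actual encoding from the bytes.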

terrascape

Very good vid!! I just finished making use of concurrent.futures (based on your previous vid) and it sped up my code considerably! Looks like I have the potential to speed it up further 😀. Will making a lot of requests at the same time slow down the source server, thus passing the waiting time to the server?

ericxls

hey thanks, good info.

How do I run 'subprocess' asynchronously, when I don't want to use 'requests.content' to download?

subprocess.run(["yt-dlp", download_link, "-o", f"{output_dir}/{episode_name}.%(ext)s"], shell=True, stdout=PIPE)

Also, what would you prefer in this case: multi-threading or async?
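asyncio has its own subprocess API, `asyncio.create_subprocess_exec`, which fits this use case. A hedged sketch: the commands below invoke the Python interpreter as a harmless stand-in for `["yt-dlp", download_link, "-o", ...]` so the example runs anywhere. Note that passing a list of arguments together with `shell=True`, as in the snippet above, is usually unintended; `create_subprocess_exec` takes the argument list directly and skips the shell.

```python
import asyncio
import sys

async def run_one(cmd):
    # Async replacement for subprocess.run: spawn the process and
    # await its output without blocking the event loop.
    proc = await asyncio.create_subprocess_exec(
        *cmd, stdout=asyncio.subprocess.PIPE
    )
    out, _ = await proc.communicate()
    return out.decode().strip()

async def main():
    # Hypothetical stand-in commands; swap in your yt-dlp invocations.
    cmds = [[sys.executable, "-c", f"print({i})"] for i in range(5)]
    return await asyncio.gather(*(run_one(c) for c in cmds))

results = asyncio.run(main())
print(results)
```

As for threads vs. async here: waiting on a child process is I/O-bound, so both work; asyncio is the natural fit if the surrounding code is already async, while a ThreadPoolExecutor around `subprocess.run` is simpler in otherwise synchronous code.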

mrjt

I was looking for a good explanation of the difference between async and multithreading... so threading is for doing things in parallel, and async is for waiting for future tasks to complete without stalling the current program?

acatisfinetoo

While using executors, if Tomcat's capacity to process is set to only 200, what happens after 200 requests? Will we have to wait until the 200 requests are processed by the server? Or will it pick up some requests when the call passes from the server to the DB?

saranpun

Very instructive, thank you John. I think Scrapy uses async requests; that's why some scraping jobs can be impressively quick with Scrapy.

vincentdigiusto

Nicely done. A proxy or SOCKS5 (which I believe works with requests) will do the trick for bypassing the traffic limit. But how can I implement it in this scenario? Thanks John.

androidmod

Thanks, I'm gonna implement this in my "yfrake" package (PyPI).

aabmets

Hi, good sir. Thanks for the vid and, most importantly, for actively engaging with people in the comment section. Cheers

bobong

Thanks for your helpful videos.
Please, do you have any idea how to avoid dat*ad*ome protection?

redaoutarid