Python Requests/BS4 Beginners Series Part 4: Retries & Concurrency

In any web scraping project, network latency is the first bottleneck you hit: a scraper has to send many requests to a website and process each response before it can move on.

In Part 4, we'll explore how to make our scraper more robust and scalable by handling failed requests and using concurrency.
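As a rough sketch of the retry mechanism covered in the video, the function below retries a failing request with exponential backoff. The names `fetch_with_retries`, `retries`, and `backoff` are illustrative, not taken from the video; with Requests, the `fetch` callable would typically wrap `requests.get` and raise for retryable status codes such as 429 or 5xx.

```python
import time

def fetch_with_retries(fetch, url, retries=3, backoff=1.0):
    """Call fetch(url), retrying on exceptions with exponential backoff.

    fetch is any callable that raises on failure, e.g. a wrapper around
    requests.get that raises for retryable HTTP status codes.
    """
    for attempt in range(retries):
        try:
            return fetch(url)
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts: surface the last error
            # Wait backoff, 2*backoff, 4*backoff, ... before retrying
            time.sleep(backoff * 2 ** attempt)
```

Injecting the `fetch` callable keeps the retry logic independent of any one HTTP library and easy to test without touching the network.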

00:00 Intro
00:33 Understanding Scraper Performance Bottlenecks
01:03 Retry Requests and Concurrency Importance
02:00 Retry Logic Mechanism
06:08 Concurrency Management
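For the concurrency part, a common pattern with Requests is to run scraping jobs in a thread pool, since the work is network-bound. This is a minimal sketch, assuming a per-URL `scrape_one` callable; the helper name `scrape_all` and the worker count are illustrative, not from the video.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def scrape_all(urls, scrape_one, max_workers=5):
    """Run scrape_one(url) for each URL concurrently in a thread pool.

    Returns a dict mapping each URL to its result, or to the exception
    it raised, so one failed page does not abort the whole run.
    """
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(scrape_one, url): url for url in urls}
        for future in as_completed(futures):
            url = futures[future]
            try:
                results[url] = future.result()
            except Exception as exc:
                results[url] = exc
    return results
```

Capping `max_workers` limits how many requests are in flight at once, which keeps the scraper from overwhelming the target site.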
