web scraping | google search results web scraper tutorial | python requests beautifulsoup

Creating a Google search results scraper using the requests & Beautiful Soup packages
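(A rough, hedged sketch of the approach the video describes: fetch a Google results page with requests and pull result titles and links out with Beautiful Soup. Google's result markup is undocumented and changes often, so the h3-inside-anchor structure assumed below is illustrative only, and heavy scraping may get blocked, as the 429/reCAPTCHA comments further down note.)

import requests
from bs4 import BeautifulSoup

# Illustrative sketch only: Google's HTML layout is undocumented and changes often.
query = "web scraping with python"
headers = {"User-Agent": "Mozilla/5.0"}  # a browser-like UA; Google may still refuse the request

response = requests.get(
    "https://www.google.com/search",
    params={"q": query},
    headers=headers,
    timeout=10,
)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

# Assumption: each organic result title sits in an <h3> wrapped by its result link.
for h3 in soup.find_all("h3"):
    link = h3.find_parent("a")
    if link and link.get("href"):
        print(h3.get_text(strip=True), "->", link["href"])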

Comments

This video deserves at least 1m views.
You are a legend <3

grozdanovsky

Wooow!! Great video! Is it possible to extend that script to check, let's say, the top 5 results and scrape their H1-H4 headings? I tried everything and I think I am hard stuck with this...
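(A hedged sketch, not from the video: assuming you already have a list of result URLs, fetching the first five and collecting their H1-H4 headings could look like this. The result_urls list is a hypothetical placeholder.)

import requests
from bs4 import BeautifulSoup

headers = {"User-Agent": "Mozilla/5.0"}

def headings_for(url):
    """Return the text of every h1-h4 tag on a page."""
    resp = requests.get(url, headers=headers, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return [tag.get_text(strip=True) for tag in soup.find_all(["h1", "h2", "h3", "h4"])]

# result_urls is a hypothetical list of links scraped from the results page.
result_urls = ["https://example.com/"]
for url in result_urls[:5]:
    try:
        print(url, headings_for(url))
    except requests.RequestException as exc:
        print(f"skipping {url}: {exc}")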

jdtrading

Too bad for me that I always get a 429 response (I tried different user agents but I get the same issue every time)
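(A general note, not from the video: HTTP 429 means the server is rate-limiting you, so swapping user agents rarely helps on its own. A minimal sketch of backing off and retrying, honouring Retry-After when the server sends a numeric value:)

import time
import requests

def get_with_backoff(url, headers=None, retries=3):
    """Retry a GET with exponential backoff when the server answers 429."""
    delay = 5
    response = None
    for _ in range(retries):
        response = requests.get(url, headers=headers, timeout=10)
        if response.status_code != 429:
            break
        # Honour a numeric Retry-After header if present, otherwise wait and double the delay.
        retry_after = response.headers.get("Retry-After")
        time.sleep(int(retry_after) if retry_after and retry_after.isdigit() else delay)
        delay *= 2
    return response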

ahmedbouali

Very nice video!! But how can I use it to search my own query in Google Colab? And how do I finally write out the CSV file? I would be grateful if you could kindly help me.
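(Not from the video, but a minimal sketch of writing scraped results to a CSV with the standard csv module; in Google Colab the file lands in the session's working directory and can be downloaded from the file browser. The results list of (title, link) pairs is a hypothetical placeholder.)

import csv

# Hypothetical scraped data: a list of (title, link) pairs.
results = [("Example result", "https://example.com/")]

with open("results.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "link"])  # header row
    writer.writerows(results)

print("wrote", len(results), "rows to results.csv")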

saikathowladar

I'm getting a 429 response; how can I get a 200 response?

muhammadshafique

Hi Maksim,
when trying to get the links, you get them via next_element rather than by referencing "a" and link['href'], like this:
links = [
    link['href']
    for link in content.find_all('a')
]
I thought this might work, but when I applied it to the links it gave me a KeyError. Can you explain why next_element works but an "a" tag with the href attribute does not?
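(Not the video author's reply, just a note for anyone hitting the same error: link['href'] raises a KeyError for any <a> tag that has no href attribute, and pages routinely contain such anchors, e.g. JavaScript-only links. A minimal sketch of two defensive alternatives with Beautiful Soup:)

from bs4 import BeautifulSoup

html = '<a>no href here</a><a href="https://example.com/">has href</a>'
content = BeautifulSoup(html, "html.parser")

# Only match <a> tags that actually carry an href attribute.
links = [link['href'] for link in content.find_all('a', href=True)]

# Equivalent defensive form: .get() returns None instead of raising KeyError.
links = [link.get('href') for link in content.find_all('a') if link.get('href')]

print(links)  # ['https://example.com/']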

kinjalvora

Thanks, but at the end I don't have any .csv file, why?

ikerpalacios

Can you get your IP banned from Google by scraping too much?

stephenong

I keep getting reCAPTCHA on my requests! Please help!!!

NayyarAbbas-shvw