Scrape Any Website for FREE Using DeepSeek & Crawl4AI

🤖 Download the full source code here:

Don’t forget to Like & Subscribe for more high-quality AI tutorials and free resources! 🎉

📆 Need Help with AI & Web Scraping?
Join my FREE Skool Community for AI developers:

🛠️ What You’ll Learn in This Tutorial:
In this step-by-step tutorial, I’ll walk you through how to **scrape any website for free** using DeepSeek, Groq, and Crawl4AI. You’ll learn how to:

✅ Use DeepSeek, Crawl4AI, and Groq for powerful web scraping
✅ Tweak a prebuilt template to scrape any site effortlessly
✅ Extract leads and save them to an organized file
✅ Automate the process so you can scrape any website instantly
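The pipeline described above ends with extracted leads being written to an organized file. As a rough sketch of that last step, here is how leads might be saved to CSV once the LLM extraction has returned JSON. The field names (`name`, `email`, `website`) are illustrative assumptions, not the template's actual schema:

```python
import csv
import json

# Hypothetical example: leads as an LLM extraction step might return them,
# already serialized as JSON. The field names are illustrative only.
leads_json = '''
[
  {"name": "Acme Corp", "email": "hello@acme.test", "website": "acme.test"},
  {"name": "Globex",    "email": "info@globex.test", "website": "globex.test"}
]
'''

def save_leads_to_csv(leads, path):
    """Write a list of lead dicts to an organized CSV file."""
    fieldnames = ["name", "email", "website"]
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(leads)

leads = json.loads(leads_json)
save_leads_to_csv(leads, "leads.csv")
```

The resulting CSV opens directly in Google Sheets, which is what the last timestamped section of the video covers.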

📰 Stay Updated with My Latest Projects:

⏰ **Video Timestamps:**
00:00 - Intro
01:02 - AI Scraping Tool Overview
03:22 - AI Scraping Scenario Overview
04:58 - AI Scraping Code Setup
06:31 - Crawl4AI Scrape Example
09:31 - AI Scraping Deep Dive
18:16 - AI Scraping In Action
21:03 - Convert CSV to Google Sheets Table
22:08 - Outro
Comments

The simplicity of these lessons is unmatched! These videos have saved me countless hours of searching for the right methods through trial and error.

carterv

Impressed by a video tutorial after many years! Just an awesome way to teach how things work, so detailed, with a real-life, relevant use case.

metaversewallet

This is a quality class... I wish I had known this a year ago...

Changheelee-znvj

GOT IT!! After many trials and tribulations I got it to work. The final hurdle was the .env file for the API key; it kept coming back with an "invalid key" error. Thank you for the lesson, sir 👏👏👏
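The "invalid key" error mentioned here is commonly caused by the key never actually making it from the .env file into the process environment. A minimal, stdlib-only sanity check along these lines can catch that early. The variable name `GROQ_API_KEY` and the placeholder check are assumptions for illustration; the tutorial's template may load the file with python-dotenv instead:

```python
import os

def get_api_key(var_name="GROQ_API_KEY"):
    """Read an API key from the environment, failing loudly if it is
    missing, blank, or still set to an obvious placeholder value."""
    key = os.environ.get(var_name, "").strip()
    if not key or key.startswith("your-"):
        raise RuntimeError(
            f"{var_name} is not set. Put it in your .env file and make sure "
            "it is loaded (e.g. with python-dotenv) before running the scraper."
        )
    return key

# Simulate a loaded .env for demonstration (this value is fake):
os.environ["GROQ_API_KEY"] = "gsk_example_1234"
print(get_api_key())  # prints the key once it is actually set
```

Failing fast like this turns a confusing provider-side "invalid key" response into a clear local error message.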

corteman

You are an excellent teacher with a fabulous talent for clarity. Nice job!

rcbrush

It's the css_selector that is typically the problematic part. I was kind of hoping this crawler would use AI to somehow magically deal with randomized class names etc. Edit: I know there are things like scrapy shell, but it's a bit tedious.

abbcc

Thank you. Guys, he gives you the first step on the road; use your brain to take it further, especially with so many free advanced AIs.
Wish you all the best

alsaher

Real utility would be for Crawl4AI to perform actions (and not only extractions) on the page based on an LLM prompt. For example: fill in the form with my data. Or: loop through the first twenty pages.

coccoinomane

Love this video—thanks Brandon for explaining the topic so clearly. Liked, subscribed and joined your Skool community.

mbottambotta

I was searching for this for a week, thanks man

HarshKumar-jrg

How is this different from Playwright and Selenium? You still need to figure out the CSS selector for each element, which is the most tedious part, as some website structures get updated regularly.

janvincentchioco

Why do I really need to use DeepSeek here? Isn't it overkill? I mean, the webpage is pretty well structured. One could still use Python standard libraries to extract the same information, right? No need for a powerful machine or high computation cost, right?

RajeshSingh-hxsc

Enjoyed this! Thanks for the source code too Brandon!

Cynthia-cwkd

Could be done easily with Xidel. I don't really see the need for the LLM/DeepSeek if all I need are the XPaths to the fields. Xidel can then open each link and get data from the details page as well. And it's super fast, headless, and not resource-hungry.

vish-xiimo

Thank you sir, insha'Allah I will try my luck on Fiverr by offering scraping with this method. Wish me the best of luck ♥

HunzaLuxury

You're an awesome educator. Thanks for sharing your knowledge

dukeubong

This is great! But if we spend a couple of hours with BeautifulSoup4 and pandas, more than 90% of it can be done more cheaply. Every use case is different (i.e. the HTML DOM is different), so is it better to do this manually with a Python script?

nandhum

Very nice and clear explanation of the AI crawler. Where can I get the code to study it a little bit more?

aderitocruz

This very video has earned you my subscription. Good job... I'm subscribed now

FUNTasticFlutter

Only 10:33 in, but this sounds so cool! My primary scraping goal is to scrape the LDS website and copy all talks from conferences and posts from the Ensign to three folders: "presidents", "quorum of the 12", and a default folder. I want to do this by checking name and date, then checking a list of when they joined the quorum or became president, and sort them appropriately, where the quorum includes the presidents, and the default includes them all (yes, up to 3 copies). So if Russell Nelson gave a talk when he was only in the 70, it'd go to default only, despite his current position.

rmt