Creating a Reliable, Random Web Proxy Request Application using Python

This video demonstrates how to create a quick, reliable and random Web Proxy request application using Python.
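
In outline, the approach looks roughly like the sketch below. The source site, timeout, and helper names are illustrative assumptions, not the video's exact code:

import requests
from random import choice
from bs4 import BeautifulSoup

PROXY_SOURCE = 'https://free-proxy-list.net/'  # assumed source of free proxies

def get_proxy():
    # scrape the proxy table and pick one entry at random
    soup = BeautifulSoup(requests.get(PROXY_SOURCE).content, 'html.parser')
    cells = [td.text for td in soup.find_all('td')]
    # each row has 8 cells, so IPs sit at indices 0, 8, 16, ... and ports at 1, 9, 17, ...
    proxies = [f'{ip}:{port}' for ip, port in zip(cells[0::8], cells[1::8])]
    return {'https': choice(proxies)}

def proxy_request(request_type, url, **kwargs):
    # keep trying fresh random proxies until one answers, or give up
    for _ in range(5):
        try:
            return requests.request(request_type, url,
                                    proxies=get_proxy(), timeout=5, **kwargs)
        except requests.RequestException:
            continue
    return None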

------------------------------------------------------------

#python #requests #proxy
Comments

You are a genius. I will watch every one of your videos. Thanks!

businessacademy

Personally, though, I'll just get the entire list of free proxies at the beginning, save it as a tuple, and then use that tuple inside the scraping function.

In scraping, time really matters, and it takes more time if we re-scrape the proxy list on every request.

Great video, btw. I never thought about using slicing with a step to filter data. Learned a lot of handy tips.

scarletdcruz
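
A minimal sketch of that caching idea, assuming free-proxy-list.net as the source and the 8-column table layout from the video:

import requests
from random import choice
from bs4 import BeautifulSoup

def scrape_proxies(url='https://free-proxy-list.net/'):
    # one-off scrape: return every 'ip:port' pair in the table
    soup = BeautifulSoup(requests.get(url).content, 'html.parser')
    cells = [td.text for td in soup.find_all('td')]
    return tuple(':'.join(pair) for pair in zip(cells[0::8], cells[1::8]))

PROXY_POOL = scrape_proxies()  # built once at startup, reused everywhere

def random_proxy():
    # no network round-trip per call: just sample the cached tuple
    return {'https': choice(PROXY_POOL)}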

I think the website for getting the proxies has changed since he published this video. There is another table at the end of the page, and with the original code, country codes and other fields get mixed in with the proxies. If you have the same problem, you may want to filter the soup.findAll('td') results by first using something like soup.find('table', {'id': 'proxylisttable'}).

joseibanez
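
In code, that fix looks something like this (soup built from the page content as in the video; the table id is the one quoted above):

# soup = BeautifulSoup(r.content, ...) as in the video
table = soup.find('table', {'id': 'proxylisttable'})
cells = [td.text for td in table.find_all('td')]  # only this table's cells
# the 8-column slicing now stays aligned, since no other table's cells leak in
ips, ports = cells[0::8], cells[1::8]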

Nice video, and it still works, but I could not find out how to use a working proxy on other websites. I cannot hide my IP address. Is there any information about this topic?

Programlama
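
For reference, requests accepts the proxy dict on any call via the proxies argument; a minimal sketch, with example.com standing in for whatever site you want to reach:

import requests

proxy = get_proxy()  # the {'https': 'ip:port'} dict from the video

# pass the same dict to any requests call; note that the 'https' key only
# covers https:// URLs, so add an 'http' entry too for plain-http sites
r = requests.get('https://example.com', proxies=proxy, timeout=5)
print(r.status_code)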

Thank you very much. This is a great video; keep up the good work.

absolutedgefindout

What kind of IDE is that? I love how it executes code while you are editing it.

aCj

=QUESTION=
It worked the first time, and then started to return <Response [200]> without an IP address.

habilpekdemir
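
If r prints as <Response [200]>, that is the response object, not its body. Assuming the test URL is a JSON echo service such as httpbin.org/ip (the video's exact URL may differ), print the payload instead:

print(r.status_code)  # 200 only means the proxy answered
print(r.json())       # the body, e.g. {'origin': '<the proxy ip>'}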

Please make a video on "scraping dynamic content from websites that use AJAX".

ahsanmalik

Can I have a link to this code on GitHub? I have some changes I would like to make, so I'd like to fork the repo.

SoumilShah

Traceback (most recent call last):
  File "random proxy.py", line 8, in <module>
    soup = BeautifulSoup(r.content, 'html5lib')
  File "C:\Users\12345\AppData\Local\Programs\Python\Python36\lib\site-packages\bs4\__init__.py", line 216, in __init__
    % ", ".join(features))
bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: html5lib. Do you need to install a parser library?

saadmaksood
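
For anyone else hitting this: the error means the html5lib package is not installed. Either install it (pip install html5lib) or switch BeautifulSoup to the parser that ships with Python:

soup = BeautifulSoup(r.content, 'html.parser')  # stdlib parser, no extra install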

Even though the proxy is different each time, the code breaks down when iterating 10 times. Why is that?

anumhassan

Can we use these proxies for ScrapeBox?

atultanna

Bro, the way you explain was great, but I am getting an error: UnboundLocalError: local variable 'r' referenced before assignment.
Can you please help me with this?

muhammadazain
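
That error usually means every attempt inside the function's try/except failed, so r was never assigned before the final return r. A minimal guard, sketched on the assumption that proxy_request retries with requests and a get_proxy() helper as in the video:

def proxy_request(request_type, url, **kwargs):
    r = None  # assigned up front: total failure now returns None instead of crashing
    for _ in range(5):
        try:
            r = requests.request(request_type, url,
                                 proxies=get_proxy(), timeout=5, **kwargs)
            break  # first proxy that answers wins
        except requests.RequestException:
            continue
    return r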

Hi sir, I want to make unlimited working proxies for a view bot. How are they made? Please reply: can they be made in Python, and how?

tsgsinghlive

Interesting tutorial, but can you teach me how to make a proxy in Python that can be used together with Proxifier? For example, with Proxifier set to 127.0.0.1, port 1234, so that Proxifier takes its proxy from Python. For context, I am using Python on Windows.

younglex

Hey, I'm getting "NameError: name 'text' is not defined" from the return line of get_proxy().
Can you help me solve this?

machinegunjo
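
A guess rather than a certain diagnosis: a bare name text was probably used where an element's .text attribute was meant. With the video's map/lambda style, the line should read something like:

proxies = list(map(lambda x: x.text, soup.findAll('td')))  # x.text, not a bare text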

Thank you for sharing this. I am new to Python and have been trying to use proxy servers to scrape some data from a website for analysis. I searched a lot and finally found this video, which gives a very good step-by-step explanation of how to do it. Just one small point: I found your get_proxy() function a little hard to understand, as I am not very confident with some of the functions used, like map and lambda. However, I came up with my own way of getting the same result. Below is the code for anyone who would like to further review and optimize it.

# imports and url, assumed from earlier in the video
import requests
from random import choice
from bs4 import BeautifulSoup

url = 'https://free-proxy-list.net/'

def get_proxy():
    req = requests.get(url)
    soup = BeautifulSoup(req.content, 'html.parser')

    # collect the text of every table cell on the page
    proxy_list = []
    for proxy in soup.find_all('td'):
        proxy_list.append(proxy.text)

    # the table has 8 columns: IP is column 0, port is column 1
    ip = proxy_list[0::8]
    port = proxy_list[1::8]

    # pair each IP with its own port (same result as a double loop with i == j)
    proxies = [':'.join(pair) for pair in zip(ip, port)]

    return {'https': choice(proxies)}

icecoldsipra

Hi, great video. I am getting an "Unreachable code" warning on: r = proxy_request('get', 'https://youtube.com'). In Jupyter it runs fine, but in VS Code I get this message. What is the reason?

raylysonestanista

Hi, how do I check whether my proxy is working or not?

lordpigster
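
One simple way to check, sketched with httpbin.org/ip as an assumed echo service: request your own IP through the proxy and see whose address comes back:

import requests

def proxy_works(proxy, timeout=5):
    # proxy is the {'https': 'ip:port'} dict used in the video
    try:
        r = requests.get('https://httpbin.org/ip', proxies=proxy, timeout=timeout)
        print(r.json())  # 'origin' should show the proxy's IP, not yours
        return r.ok
    except requests.RequestException:
        return False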

For some reason, this did not work for me, even after a lot of troubleshooting.

gambet