
ERROR HTTP 429 #10

Open
noman00910 opened this issue Oct 29, 2020 · 5 comments

Comments

@noman00910

Hi, your code is working great. However, I'm getting a Too Many Requests error (HTTP 429) after running 3 or 4 queries. I changed proxies as well, but it still gives the same error. Is it about user agents?

@tasos-py
Owner

tasos-py commented Nov 1, 2020

Thanks for bringing this to my attention. I don't know if changing the User-Agent will be enough, but you could give it a try. Which search engine produces this error? If I remember correctly, Google is the most "sensitive" one, and usually increasing the delay (with engine._delay) helps. Clearing cookies may help too. This library uses a requests.Session() object for HTTP requests, which persists cookies and other HTTP parameters. You could try setting a new client with engine._http_client = http_client.HttpClient(proxy=proxy). If that doesn't help, then it's possible that the search engine is detecting us by URL or POST data parameters, and unless it requires JS code execution, we may be able to solve it in the engine._first_page() and engine._next_page() methods.
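For reference, here is a minimal sketch of those suggestions put together. It assumes the Google engine and search() API shown in this repo's README, that the constructor accepts a proxy argument, and a placeholder proxy URL:

```python
from search_engines import Google
from search_engines import http_client

proxy = 'http://127.0.0.1:8080'  # assumption: replace with your own proxy

engine = Google(proxy=proxy)
engine._delay = 10  # increase the delay between page requests (seconds)

results = engine.search('first query')

# Drop the persisted cookies before the next query by replacing the
# HTTP client, as suggested above (discards the old requests.Session()).
engine._http_client = http_client.HttpClient(proxy=proxy)

results = engine.search('second query')
```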

@noman00910
Author

Yes, I am using it with Google. Thanks for your guidance; I will try to make changes according to your instructions. Let's see where it goes.

@nershman

I've noticed the ban usually occurs after 4 or 5 searches in a row that return no results.

@ljhOfGithub

ljhOfGithub commented Apr 10, 2022

I've met this problem too. How should I deal with it? I tried time.sleep(1) after search(), and it works briefly. Maybe engine.py could raise an exception so the search could be restarted? I wish such an exception would be added.
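A hedged sketch of that workaround, pausing between queries and backing off longer when a search comes back empty (per the observation above). The 30-second pause is an assumption to tune, and results.links() is the accessor shown in the README:

```python
import time
from search_engines import Google

PAUSE = 30  # assumption: seconds between queries; tune for your volume
queries = ['query one', 'query two', 'query three']

engine = Google()
for query in queries:
    results = engine.search(query)
    if not results.links():
        # Empty results often precede a ban (see the comment above),
        # so back off for longer before the next query.
        time.sleep(PAUSE * 2)
    else:
        time.sleep(PAUSE)
```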

@nershman

What generally happens is that they detect unusual activity and redirect you to a captcha completion page. If you want to scrape, you will need to write something to detect that, and then either complete the captcha manually or forward it through a solving service.
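A rough sketch of such a detector, using plain requests rather than this library. The block-page markers below are assumptions based on Google's usual "unusual traffic" interstitial, not something this library exposes:

```python
import requests

def looks_like_captcha(response):
    """Heuristic: does this response look like Google's block page?"""
    if response.status_code == 429:
        return True
    if '/sorry/' in response.url:  # Google's redirect path for blocked clients
        return True
    return 'unusual traffic' in response.text.lower()

resp = requests.get(
    'https://www.google.com/search',
    params={'q': 'example query'},
    headers={'User-Agent': 'Mozilla/5.0'},
)
if looks_like_captcha(resp):
    print('Blocked - solve the captcha manually or via a solving service')
```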

When I ran into this problem, I had to manually load pages from my script into my browser and save them, then extract the data I needed from the saved HTML. Despite the large number of requests I made, I was able to access the pages. I don't know what information Google tracks that kept it from flagging the same behavior when I did it manually...

Depending on the amount of data you are collecting, it might be better to just use a service that scrapes the data for you. There are services out there with more sophisticated algorithms for avoiding detection, and some even use captcha-solving services.

tasos-py mentioned this issue Oct 27, 2023