After Dockerizing the Selenium Web App not opening the webpage that needs to be crawled. #2976
Comments
UC Mode doesn't support Docker. Also, running a browser in Docker can leave a trail that makes the automation detectable by websites (due to unique fingerprints). I'd suggest not using Docker for UC Mode unless you know how to configure your Docker container to have the appearance of regular Linux (where UC Mode works normally). You can also try increasing the default timeout values.
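SeleniumBase keeps its default wait times in `seleniumbase/config/settings.py` (attributes such as `MINI_TIMEOUT`, `SMALL_TIMEOUT`, `LARGE_TIMEOUT`, `EXTREME_TIMEOUT`). A minimal sketch of scaling them up before a run, assuming slow Docker networking is what triggers the timeouts; the helper name and `factor` value are assumptions, not part of the library:

```python
# Hedged sketch: scale SeleniumBase's default timeouts before running.
# The attribute names come from seleniumbase/config/settings.py; verify
# them against your installed version. `factor` is an assumption --
# tune it for your environment.

def raise_timeouts(settings, factor=3):
    """Multiply SeleniumBase's default timeouts by `factor`."""
    for name in ("MINI_TIMEOUT", "SMALL_TIMEOUT",
                 "LARGE_TIMEOUT", "EXTREME_TIMEOUT"):
        setattr(settings, name, getattr(settings, name) * factor)
    return settings

# Usage (inside a real project):
# from seleniumbase.config import settings
# raise_timeouts(settings)
```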
Thank you so much for your help. Can you please tell me how to avoid leaving a trace in my Dockerfile?
I'm not sure how to undo the fingerprint changes that Docker added.
But without Docker, how can I deploy my crawler and download Chrome to a host machine, like I was doing with Docker?
Please help me. I have been trying to find a way to publish my crawler web app that uses your software. As you mentioned, it needs Chrome (chrome-stable), so to achieve this I had to Dockerize the app and download Chrome to the Linux machine that runs all my functions (crawlers). But I am still stuck.
For regular SeleniumBase (non-UC Mode), try this. But if you need to avoid automation detection (e.g. bypassing Cloudflare), then don't use Docker.
Okay, I will be trying it. Thanks! |
Okay, hello again. I tried your Dockerfile. However, it is raising an error:
Also, I need Headless=True. I know it is not used anymore, but the server cannot run with Headless=False, and I also need to specify some options like --no-sandbox and --disable-dev-shm-usage. I could not find them in sb_manager.py; I could only write this:
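For reference, `headless` is a real parameter of SeleniumBase's `SB()` context manager, and `chromium_arg` (a comma-separated string of raw Chrome flags) is how SeleniumBase forwards options like `--no-sandbox`; verify both against the `SB()` signature in your installed `sb_manager.py`. A sketch, with a hypothetical helper name:

```python
# Hypothetical helper collecting SB() keyword arguments for a
# containerized run. Verify `headless` and `chromium_arg` against
# the SB() signature in sb_manager.py for your SeleniumBase version.

def docker_sb_kwargs():
    return {
        "headless": True,  # no display server inside the container
        "chromium_arg": "--no-sandbox,--disable-dev-shm-usage",
    }

# Usage (requires Chrome inside the container):
# from seleniumbase import SB
# with SB(**docker_sb_kwargs()) as sb:
#     sb.open("https://example.com")
```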
Dear Michael,
I have built a full web app that scrapes some sites and gathers information. Locally everything runs perfectly; however, after Dockerizing the application, an exception I had never seen before was raised due to:
I have never gotten that before. I think that, after Dockerizing, the container runs the crawler with headless=True and tries to reach the site within 6 seconds but cannot. What should I do to work around that? I will provide my crawler.py and Dockerfile.
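One generic workaround for a page load that sometimes exceeds the default timeout inside Docker is to retry the load a few times. A sketch under stated assumptions: `open_page` stands in for whatever performs the load (e.g. `sb.open`), and the retry counts are arbitrary:

```python
import time

# Hedged sketch: retry a flaky page load. `open_page` is any callable
# that loads a URL and raises on failure (e.g. sb.open); the attempt
# count and delay are assumptions to tune for your environment.

def open_with_retries(open_page, url, attempts=3, delay=2.0):
    """Call open_page(url), retrying on any exception."""
    for attempt in range(1, attempts + 1):
        try:
            return open_page(url)
        except Exception:
            if attempt == attempts:
                raise  # retries exhausted; surface the real error
            time.sleep(delay)
```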
crawler.py:
My Dockerfile: