[🐛 Bug]: Possible resource leakage using Firefox v. 4.7.0 #1743

Closed
ghost opened this issue Dec 12, 2022 · 4 comments

Comments


ghost commented Dec 12, 2022

What happened?

I am running two instances of Selenium Grid:

Grid 1: Selenium Grid 3.141.59 with 9 Chrome and 9 Firefox nodes (both using tag 3.141.59)
Grid 2: Selenium Grid 4.7.0 with 8 Chrome and 8 Firefox nodes (both using tag 4.7.0)

The Firefox nodes on 4.7.0 seem to have an issue with resource consumption, and the data looks a lot like a leak.

Have you encountered issues like this?
Could it happen simply because some users do not close the driver correctly after their tests are done?

Memory consumption

Firefox 3.141.59

[screenshot: memory usage of the Firefox 3.141.59 nodes]

Firefox 4.7.0

[screenshot: memory usage of the Firefox 4.7.0 nodes]

CPU

Firefox 3.141.59

[screenshot: CPU usage of the Firefox 3.141.59 nodes]

Firefox 4.7.0

[screenshot: CPU usage of the Firefox 4.7.0 nodes]

Command used to start Selenium Grid with Docker

I use the Helm files provided.

Relevant log output

See images above

Operating System

Running on OpenShift (Kubernetes 1.2X)

Docker Selenium version (tag)

4.7.0

ghost added the needs-triaging label Dec 12, 2022

ghost commented Dec 12, 2022

SeleniumHQ/selenium#11270 was fixed in 4.7.1; it could be the cause of the resource leak.

The client was probably not always closed cleanly: SeleniumHQ/selenium#11345
Why only Firefox is affected is beyond me.

I will try out 4.7.1 and report back.


ghost commented Dec 14, 2022

I tried 4.7.1 by providing it to a subset of the teams, and we do not see the same resource leakage there.

See memory usage:

[screenshot: memory usage of the nodes on 4.7.1]

What I also saw during the investigation: these graphs appear only at times when the 4.7.0 nodes are under heavy load:

[screenshot: memory usage of the 4.7.0 Firefox nodes under heavy load]

(4 GB is the memory limit, so it might be that Kubernetes is cleaning up the processes here.)

I believe version 4.7.0 is not causing these effects on its own.
I'll provide 4.7.1 to the teams over the next few days and report back on whether it completely fixes the issue.
If not, I know that some tests do not close the driver as they should.
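For reference, a minimal sketch (assuming the tests use Selenium's Java bindings and a hypothetical hub URL) of the cleanup those tests should be doing: calling driver.quit() in a finally block so the Grid session and its Firefox process are released on the node even if the test fails.

```java
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridCleanupSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical hub/router address; adjust to your deployment.
        URL gridUrl = new URL("http://selenium-hub:4444/wd/hub");

        WebDriver driver = new RemoteWebDriver(gridUrl, new FirefoxOptions());
        try {
            driver.get("https://www.selenium.dev");
            // ... actual test steps ...
        } finally {
            // quit() ends the Grid session so the node can reap the Firefox
            // process; close() alone only closes the current window and can
            // leave the session (and its memory) behind on the node.
            driver.quit();
        }
    }
}
```

In a test framework this would normally live in a teardown hook (e.g. JUnit's @AfterEach) rather than main(), but the point is the same: every session that gets created should be ended with quit().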

ghost closed this as completed Dec 14, 2022

ghost commented Dec 20, 2022

Updating to 4.7.1 fixed the resource issues!


github-actions bot commented Dec 9, 2023

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

github-actions bot locked and limited conversation to collaborators Dec 9, 2023