Reflector seems to hang for long periods of time #239
Comments
Any ideas about this one? If this happens all the time, it pretty much means I cannot use Reflector, which would be really sad :(
Hi,
As mentioned in previous issues, this is not a Reflector issue but a k8s one. Basically, k8s had a bug in older (not that old, but older) versions where events were not pushed by the API server; it was fixed in 1.21+.
Reflector relies on those events being pushed (it does not scrape). What you're seeing as "hanging" is basically the API server not sending anything; the idle connection then closes, and on reconnect everything gets sent. There is no way for Reflector to detect whether the API server is not sending events or there are actually no events.
Have a look at #228.
My suggestion is to upgrade your version of k8s to the latest supported by your platform.
BTW, this is not an issue that affects Reflector only; there are a ton of extensions that rely on those events and do not get them. Most of them have changed from subscribing to events to scraping the data (querying k8s), but that is problematic because, depending on the size of the cluster and the number of resources, it can become a serious performance issue. (Reflector is also installed on clusters with hundreds of namespaces and thousands of ConfigMaps and Secrets, so querying those every... 1 minute?... would kill the API server.)
I'll keep this issue open for a while in the hope that others facing this problem can provide before and after k8s upgrade insights.
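For readers unfamiliar with the watch-versus-scrape distinction described above, the sketch below is not Reflector's actual C# implementation; it is a minimal Python illustration using the official `kubernetes` client. If the API server never pushes an event (the older-k8s bug mentioned above), the watch loop simply sits idle; from the client side that looks identical to "nothing changed". The `reconcile` helper in the commented-out polling variant is hypothetical.

```python
# Minimal sketch (assumption: not Reflector's code) of subscribing to
# ConfigMap events instead of repeatedly listing them.
from kubernetes import client, config, watch

config.load_kube_config()  # or config.load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

w = watch.Watch()
# timeout_seconds bounds how long a single watch connection stays open;
# after it expires the client reconnects and re-lists, which is when a
# "stalled" stream suddenly catches up.
for event in w.stream(v1.list_config_map_for_all_namespaces, timeout_seconds=300):
    cm = event["object"]
    print(f"{event['type']} {cm.metadata.namespace}/{cm.metadata.name}")

# The scraping alternative mentioned above would instead do a full LIST on
# every tick, which on a cluster with hundreds of namespaces and thousands
# of ConfigMaps puts real load on the API server:
#
#   import time
#   while True:
#       for cm in v1.list_config_map_for_all_namespaces().items:
#           reconcile(cm)  # hypothetical helper standing in for "copy the resource"
#       time.sleep(60)
```

This is why the suggested fix is upgrading the cluster rather than changing Reflector: the event-driven approach is the cheap one, it just depends on the API server actually delivering events.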
Many thanks for coming back on this! I have an upgrade of our clusters scheduled, so I will report before/after.
Closing this as related to #246.
Original issue description:
Noticed recently that our certificates sometimes take up to 30 minutes to reflect to new namespaces, yet sometimes this happens instantly.
As an example of the happy path:
logs:
This one took around a second.
The unhappy path:
logs:
Still going as of Fri Dec 3 02:08:12 UTC 2021.
Note: all cert creation will stall at this point. It will then catch up with log entries like:
So this one took 26 minutes. Unfortunately, I'm not proficient in C#, so I haven't gotten close to working out why this could happen.
cluster info: