replay/refCount subscribe/unsubscribe exhibit O(N) behavior #3469
Comments
Hi, and thanks for the feedback. I've posted PR #3470, which improves performance (up to ~20x).
A 20x improvement is great, but making it constant (or near-constant) would also be nice. :) At this time we don't have any resources to look at it, but if no one else is able to pick this up, we might try to investigate and possibly contribute. For now, we will have to set aside the concept of caching observables via replay(). It was a pretty elegant way of doing an async cache, though.
I've updated #3470 to have O(1) subscription cost without losing too much on the dispatching side, by using the internals of an …
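For illustration only (this is not the actual change in #3470, whose details are truncated above), one standard way to get O(1) subscribe and unsubscribe is an intrusive doubly-linked list of subscriber nodes: subscribing appends a node, and unsubscribing unlinks that node directly, with no scan over the other subscribers. A minimal sketch in plain Java, with all names hypothetical:

```java
// Hypothetical sketch: O(1) subscriber registration via an intrusive
// doubly-linked list with sentinel head/tail nodes. subscribe() returns
// the node; unsubscribe just unlinks it, never scanning the list.
public class SubscriberList {
    public static final class Node {
        final Object subscriber;
        Node prev, next;
        Node(Object s) { this.subscriber = s; }
    }

    private final Node head = new Node(null); // sentinel
    private final Node tail = new Node(null); // sentinel
    private int size;

    public SubscriberList() {
        head.next = tail;
        tail.prev = head;
    }

    /** O(1): append a new node just before the tail sentinel. */
    public synchronized Node add(Object subscriber) {
        Node n = new Node(subscriber);
        n.prev = tail.prev;
        n.next = tail;
        tail.prev.next = n;
        tail.prev = n;
        size++;
        return n;
    }

    /** O(1): unlink the node; no traversal of other subscribers. */
    public synchronized void remove(Node n) {
        n.prev.next = n.next;
        n.next.prev = n.prev;
        size--;
    }

    public synchronized int size() { return size; }

    public static void main(String[] args) {
        SubscriberList list = new SubscriberList();
        Node a = list.add("A");
        list.add("B");
        list.remove(a);
        System.out.println("size=" + list.size()); // prints size=1
    }
}
```

The trade-off is that dispatching must walk the linked nodes, which is why the comment above notes care was needed "on the dispatching side".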
Sweet!!! Hope this is merged into 1.x |
Closing via #3470, should be available in 1.1.6. |
Subscribing to a replay/refCount chain slows down as O(N) with the number of existing subscribers.
This is reproducible on a simple chain that never emits an item, so data traffic is not a factor in the performance degradation:
After a certain number of subscribers, the shared observable becomes unusable and wastes a lot of CPU cycles. In turn, this renders useless some caching scenarios, specifically those with a large number of subscribers.
Here is the simple test:
Here is the output on a 2015 MacBook Pro (16 GB physical memory, JVM launched with -Xmx8000m):
The main suspect is manageRequests() in OperatorReplay:
https://github.com/ReactiveX/RxJava/blob/1.x/src/main/java/rx/internal/operators/OperatorReplay.java#L481-L546
That's where stack traces point during both subscribe() and unsubscribe() as the rate slows down.
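To see why a per-subscription scan produces this behavior, here is a toy model (not RxJava code, all names hypothetical): if each subscribe() triggers a full pass over the existing subscribers, analogous to a request-management step recomputing aggregate demand across all producers, then N subscriptions cost 1 + 2 + ... + N = N(N+1)/2 scan steps in total, i.e. O(N^2):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model (not RxJava): every subscribe() does an O(N) pass over the
// existing subscribers, standing in for per-subscriber bookkeeping work.
// Total work for N subscriptions is N(N+1)/2, i.e. quadratic.
public class QuadraticSubscribeDemo {
    static long scans = 0;

    static void subscribe(List<Object> subscribers, Object s) {
        subscribers.add(s);
        for (Object existing : subscribers) { // O(N) scan per subscribe
            scans++;                          // count one unit of work per subscriber
        }
    }

    public static void main(String[] args) {
        List<Object> subs = new ArrayList<>();
        int n = 1000;
        for (int i = 0; i < n; i++) {
            subscribe(subs, new Object());
        }
        System.out.println(scans); // 1000 * 1001 / 2 = 500500
    }
}
```

This matches the observed symptom: each individual subscribe() or unsubscribe() is only O(N), but the cumulative cost of building up a large subscriber set grows quadratically, so the rate collapses as the count climbs.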