@@ -10345,6 +10345,14 @@ public final Disposable forEachWhile(final Predicate<? super T> onNext, final Co
 * is subscribed to. For this reason, in order to avoid memory leaks, you should not simply ignore those
 * {@code GroupedPublisher}s that do not concern you. Instead, you can signal to them that they may
 * discard their buffers by applying an operator like {@link #ignoreElements} to them.
+ * <p>
+ * Note that the {@link GroupedFlowable}s should be subscribed to as soon as possible; otherwise,
+ * the unconsumed groups may starve other groups due to the internal backpressure
+ * coordination of the {@code groupBy} operator. Such hangs can usually be avoided by using
+ * {@link #flatMap(Function, int)} or {@link #concatMapEager(Function, int, int)} and overriding the default maximum concurrency
+ * value to be greater than or equal to the expected number of groups, possibly using
+ * {@code Integer.MAX_VALUE} if the number of expected groups is unknown.
+ *
 * <dl>
 * <dt><b>Backpressure:</b></dt>
 * <dd>Both the returned and its inner {@code Publisher}s honor backpressure and the source {@code Publisher}
@@ -10385,6 +10393,13 @@ public final <K> Flowable<GroupedFlowable<K, T>> groupBy(Function<? super T, ? e
 * is subscribed to. For this reason, in order to avoid memory leaks, you should not simply ignore those
 * {@code GroupedPublisher}s that do not concern you. Instead, you can signal to them that they may
 * discard their buffers by applying an operator like {@link #ignoreElements} to them.
+ * <p>
+ * Note that the {@link GroupedFlowable}s should be subscribed to as soon as possible; otherwise,
+ * the unconsumed groups may starve other groups due to the internal backpressure
+ * coordination of the {@code groupBy} operator. Such hangs can usually be avoided by using
+ * {@link #flatMap(Function, int)} or {@link #concatMapEager(Function, int, int)} and overriding the default maximum concurrency
+ * value to be greater than or equal to the expected number of groups, possibly using
+ * {@code Integer.MAX_VALUE} if the number of expected groups is unknown.
 * <dl>
 * <dt><b>Backpressure:</b></dt>
 * <dd>Both the returned and its inner {@code Publisher}s honor backpressure and the source {@code Publisher}
@@ -10428,6 +10443,14 @@ public final <K> Flowable<GroupedFlowable<K, T>> groupBy(Function<? super T, ? e
 * is subscribed to. For this reason, in order to avoid memory leaks, you should not simply ignore those
 * {@code GroupedPublisher}s that do not concern you. Instead, you can signal to them that they may
 * discard their buffers by applying an operator like {@link #ignoreElements} to them.
+ * <p>
+ * Note that the {@link GroupedFlowable}s should be subscribed to as soon as possible; otherwise,
+ * the unconsumed groups may starve other groups due to the internal backpressure
+ * coordination of the {@code groupBy} operator. Such hangs can usually be avoided by using
+ * {@link #flatMap(Function, int)} or {@link #concatMapEager(Function, int, int)} and overriding the default maximum concurrency
+ * value to be greater than or equal to the expected number of groups, possibly using
+ * {@code Integer.MAX_VALUE} if the number of expected groups is unknown.
+ *
 * <dl>
 * <dt><b>Backpressure:</b></dt>
 * <dd>Both the returned and its inner {@code Publisher}s honor backpressure and the source {@code Publisher}
@@ -10473,6 +10496,14 @@ public final <K, V> Flowable<GroupedFlowable<K, V>> groupBy(Function<? super T,
 * is subscribed to. For this reason, in order to avoid memory leaks, you should not simply ignore those
 * {@code GroupedPublisher}s that do not concern you. Instead, you can signal to them that they may
 * discard their buffers by applying an operator like {@link #ignoreElements} to them.
+ * <p>
+ * Note that the {@link GroupedFlowable}s should be subscribed to as soon as possible; otherwise,
+ * the unconsumed groups may starve other groups due to the internal backpressure
+ * coordination of the {@code groupBy} operator. Such hangs can usually be avoided by using
+ * {@link #flatMap(Function, int)} or {@link #concatMapEager(Function, int, int)} and overriding the default maximum concurrency
+ * value to be greater than or equal to the expected number of groups, possibly using
+ * {@code Integer.MAX_VALUE} if the number of expected groups is unknown.
+ *
 * <dl>
 * <dt><b>Backpressure:</b></dt>
 * <dd>Both the returned and its inner {@code Publisher}s honor backpressure and the source {@code Publisher}
@@ -10521,6 +10552,14 @@ public final <K, V> Flowable<GroupedFlowable<K, V>> groupBy(Function<? super T,
 * is subscribed to. For this reason, in order to avoid memory leaks, you should not simply ignore those
 * {@code GroupedPublisher}s that do not concern you. Instead, you can signal to them that they may
 * discard their buffers by applying an operator like {@link #ignoreElements} to them.
+ * <p>
+ * Note that the {@link GroupedFlowable}s should be subscribed to as soon as possible; otherwise,
+ * the unconsumed groups may starve other groups due to the internal backpressure
+ * coordination of the {@code groupBy} operator. Such hangs can usually be avoided by using
+ * {@link #flatMap(Function, int)} or {@link #concatMapEager(Function, int, int)} and overriding the default maximum concurrency
+ * value to be greater than or equal to the expected number of groups, possibly using
+ * {@code Integer.MAX_VALUE} if the number of expected groups is unknown.
+ *
 * <dl>
 * <dt><b>Backpressure:</b></dt>
 * <dd>Both the returned and its inner {@code Publisher}s honor backpressure and the source {@code Publisher}
@@ -10617,6 +10656,14 @@ public final <K, V> Flowable<GroupedFlowable<K, V>> groupBy(Function<? super T,
 * is subscribed to. For this reason, in order to avoid memory leaks, you should not simply ignore those
 * {@code GroupedFlowable}s that do not concern you. Instead, you can signal to them that they may
 * discard their buffers by applying an operator like {@link #ignoreElements} to them.
+ * <p>
+ * Note that the {@link GroupedFlowable}s should be subscribed to as soon as possible; otherwise,
+ * the unconsumed groups may starve other groups due to the internal backpressure
+ * coordination of the {@code groupBy} operator. Such hangs can usually be avoided by using
+ * {@link #flatMap(Function, int)} or {@link #concatMapEager(Function, int, int)} and overriding the default maximum concurrency
+ * value to be greater than or equal to the expected number of groups, possibly using
+ * {@code Integer.MAX_VALUE} if the number of expected groups is unknown.
+ *
 * <dl>
 * <dt><b>Backpressure:</b></dt>
 * <dd>Both the returned and its inner {@code GroupedFlowable}s honor backpressure and the source {@code Publisher}
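For reference, a minimal sketch of the pattern the added note recommends, not part of the patch above: every group is subscribed to as soon as it is emitted, and flatMap's maximum concurrency is at least the expected number of groups. The key function, the group count of 10, and the RxJava 2.x import path io.reactivex.Flowable are illustrative assumptions (in RxJava 3.x the class lives in io.reactivex.rxjava3.core).

    import io.reactivex.Flowable;

    public class GroupByConcurrencyExample {
        public static void main(String[] args) {
            Flowable.range(1, 1_000)
                    // Keys 0..9, so at most 10 groups can be live at the same time.
                    .groupBy(i -> i % 10)
                    // Subscribe to each group immediately; maxConcurrency must be >= the
                    // expected number of groups (use Integer.MAX_VALUE when that number
                    // is unknown), otherwise unconsumed groups can stall the source.
                    .flatMap(group -> group.map(v -> group.getKey() + " -> " + v), 10)
                    .blockingSubscribe(System.out::println);
        }
    }

Groups that are not of interest can still be drained rather than left unconsumed, for example by returning group.ignoreElements().toFlowable() from the same flatMap, in line with the ignoreElements advice already present in the Javadoc.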