Conversation
* <dd>{@code publish} does not operate by default on a particular {@link Scheduler}.</dd>
* </dl>
*
* @param n the initial request amount; further requests will happen after 75% of this value
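The "request n up front, re-request after 75%" rule in the Javadoc above can be sketched in plain Java. This is an illustrative sketch only: `RebatchCounter` is a hypothetical name, not an actual RxJava class, and the arithmetic (`limit = n - n/4`) mirrors the common operator convention for the 75% threshold.

```java
// Plain-Java sketch of the "request n up front, re-request after 75%"
// batching rule discussed above. RebatchCounter is a made-up name for
// illustration, not an actual RxJava operator class.
public class RebatchCounter {
    private final long limit;  // 75% of the batch size: n - n/4
    private long consumed;

    public RebatchCounter(long batchSize) {
        this.limit = batchSize - (batchSize >> 2);
    }

    // Call once per item delivered downstream. Returns how much to
    // re-request from upstream: 0 most of the time, `limit` once 75%
    // of the current batch has been consumed.
    public long onItemDelivered() {
        if (++consumed == limit) {
            consumed = 0;
            return limit;
        }
        return 0;
    }
}
```

For example, with `n = 8` the threshold is `6`; the counter returns `0` for the first five deliveries and then re-requests once the sixth item has been consumed, keeping the outstanding request topped up without waiting for the whole batch to drain.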

👍

@abersnaze, @stealthcode you had some use cases for this, any objections?

The reuse of the observeOn logic is interesting, but couldn't it be done without the allocation of a queue?

If the downstream request is unbounded and the downstream has caught up, then the queue can be skipped. Otherwise, the upstream emissions have to be stored temporarily for an underrequesting downstream.
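That fast path can be sketched roughly as follows. This is a single-threaded, hypothetical sketch (`FastPathEmitter` is a made-up name, not the actual observeOn implementation, which additionally needs atomics and serialized drains): when the downstream request is unbounded and the queue has already drained, emissions bypass the queue; otherwise they are buffered.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

// Illustrative, single-threaded sketch of the "skip the queue when the
// downstream is unbounded and caught up" idea. Not an RxJava class.
public class FastPathEmitter<T> {
    private final Queue<T> queue = new ArrayDeque<>();
    private final Consumer<T> downstream;
    private long requested; // Long.MAX_VALUE means unbounded

    public FastPathEmitter(Consumer<T> downstream) {
        this.downstream = downstream;
    }

    public void request(long n) {
        // Cap additions at Long.MAX_VALUE (treated as "unbounded").
        requested = (requested + n < 0) ? Long.MAX_VALUE : requested + n;
        drain();
    }

    public void onNext(T value) {
        // Fast path: unbounded downstream that has caught up -> no queueing.
        if (requested == Long.MAX_VALUE && queue.isEmpty()) {
            downstream.accept(value);
            return;
        }
        // Slow path: buffer for an underrequesting downstream.
        queue.offer(value);
        drain();
    }

    private void drain() {
        while (requested > 0 && !queue.isEmpty()) {
            downstream.accept(queue.poll());
            if (requested != Long.MAX_VALUE) requested--;
        }
    }
}
```

Once the unbounded fast path is taken, the queue allocation is never touched again for that subscriber, which is the saving being discussed here.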

👍

I know that @abersnaze still had reservations about this. I think that this should not be using

My concern is this: if @abersnaze implemented the batching functionality, then why wouldn't we use that? The queue in the observeOn scheduling creates a layer of indirection that seems unnecessary.

Remember, this started out as a change to

Thanks for reminding me of the context of this work. It seems like we have two implementations of the same functionality. I think @abersnaze and I agree that the two features, request batching and request-valve-type functionality, could be composed. However, I think that using

I personally would be okay with either implementation. Also, it's interesting to note that users are gravitating more and more toward taking direct control over the

For example, this PR does something similar but with exactly n (could be modified to have an optional 25%) and without a queue: #3781.
This is a follow-up on #3964, but with a separate operator on Observable.