Implement server::Builder::max_send_buffer_size #577

Closed · nox wants to merge 3 commits

Conversation

@nox (Contributor) commented Nov 19, 2021

No description provided.


nox commented Nov 19, 2021

I tried using wait_for_capacity to write a test but I couldn't understand how to use it in conjunction with .drive on the server side.

@nox force-pushed the max-send-buffer-size branch from f7d388f to 4693e9c on November 20, 2021 11:22
nox commented Nov 24, 2021

@seanmonstar Do you think h2 should have a default maximum?

nox commented Nov 24, 2021

Also, do you have any idea how I could write a test for this?

@nox force-pushed the max-send-buffer-size branch from 4693e9c to 30a66ec on November 24, 2021 10:53
```diff
@@ -175,6 +189,8 @@ impl Prioritize {
             self.try_assign_capacity(stream);
         }

+        self.current_send_buffer_size += sz as usize;
```
nox (Contributor, Author):
I used to do this prior to calling try_assign_capacity. AFAIU that was wrong, right?

seanmonstar (Member):

That sounds right. The connection-level and stream-level counters should be updated together, but since updating the stream-level one may make the stream exceed its requested capacity, and it thus needs to check for more capacity, that check must happen in between the two updates.

A code comment about the ordering being important may be prudent.

@seanmonstar (Member) commented

> Do you think h2 should have a default maximum?

Hm, probably yes.

Also, it might be good (once we have it working) to compare how this change affects hyper's http2 end_to_end benchmarks, just so we know.

```diff
@@ -736,6 +765,8 @@ impl Prioritize {
         tracing::trace_span!("updating stream flow").in_scope(|| {
             stream.send_flow.send_data(len);

+            self.current_send_buffer_size -= len as usize;
```
seanmonstar (Member):
I think this area needs to potentially wake up any tasks that have been waiting on "capacity", since now there is "more buffer space to use". The reason it needs to be done here, is because with flow control capacity, the connection doesn't actually get more until the peer has sent another WINDOW_UPDATE frame. So recv_window_update will notify a task there. But buffer size isn't affected by WINDOW_UPDATE frames.

nox (Contributor, Author):
I'm not sure I understand. Why do we even call self.flow.assign_capacity(len); below that line, if we are actually supposed to wait for the peer to send another WINDOW_UPDATE frame?

nox (Contributor, Author) commented Nov 25, 2021:
I am even more confused that we call self.flow.send_data(len) immediately after we call self.flow.assign_capacity(len).

nox (Contributor, Author):
Doesn't that mean that self.flow.available never changes from Prioritize::pop_frame? I don't understand the logic there.

nox (Contributor, Author):
I called assign_connection_capacity from poll_complete after the reclaim_frame call.

@nox force-pushed the max-send-buffer-size branch from 30a66ec to d837e1f on November 29, 2021 09:47
@nox force-pushed the max-send-buffer-size branch from d837e1f to acb0041 on November 29, 2021 09:48

```rust
for _ in 0..3 {
    stream.reserve_capacity(5);
    stream = util::wait_for_capacity(stream, 5).await;
```
nox (Contributor, Author):
The test is hanging there, and by the time h2 returns Poll::Pending, Actions::task is None so there is no way to wake up the connection task to call poll_complete from there.

Comment on lines +1640 to +1642

```rust
t2.await.expect("srv body spawn");
t1.await.expect("srv body spawn");
t0.await.expect("srv end");
```
nox (Contributor, Author):
AFAIK the test is still wrong. We await t2 before we await t0, so t0 is pending and nothing will wake it up when t2 notifies the connection task that it has frames to send; the send buffer size never decreases and everything gets stuck.

nox (Contributor, Author):
Never mind, I misread. That said, the test really doesn't help much with finding where the issue in my patch is :/

nox (Contributor, Author):
```rust
fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
    let _ = ready!(self.stream().poll_capacity(cx)).unwrap();
    let act = self.stream().capacity();
    if act >= self.target {
        return Poll::Ready(self.stream.take().unwrap().into());
    }
    Poll::Pending
}
```

There, poll_capacity returns Ready(Some(0)), and capacity returns 0, so act >= self.target is false, and Poll::Pending gets returned even though nothing has been set up to wake up the task.

What do you think is wrong here? That poll_capacity returns Ready(Some(0)), or that wait_for_capacity is badly conceived, or both?

@nox closed this on Dec 8, 2021
@seanmonstar deleted the max-send-buffer-size branch on January 10, 2024 20:41