HTTP2: Send trailers with data #11270
Theory: It's likely this is the result of two passes through the loop. The first pass picks up the data before the pipe is complete here. The second pass picks up the complete status and sends the trailers. In most cases there's going to be a non-trivial time gap between the data write and the call to Complete that sends the trailers. What pattern would you use to prevent the data from being available to the read loop before calling Complete? GetMemory, Advance, but no call to FlushAsync? Even in that case we'd have to tweak Http2FrameWriter.WriteDataAsync not to flush.
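For readers following along, a minimal sketch of the "GetMemory, Advance, but no call to FlushAsync" pattern against a PipeWriter (the method name is illustrative, not from Kestrel). Without the flush, the written bytes are never committed to the pipe's reader, so the output read loop shouldn't see a partial body:

```csharp
using System;
using System.IO.Pipelines;

static void BufferWithoutFlushing(PipeWriter writer, ReadOnlySpan<byte> payload)
{
    // Reserve space directly from the pipe and copy the payload in.
    Span<byte> buffer = writer.GetSpan(payload.Length);
    payload.CopyTo(buffer);

    // Commit the bytes to the writer's buffer...
    writer.Advance(payload.Length);

    // ...but deliberately skip FlushAsync, so nothing is made visible
    // to the pipe's reader yet.
}
```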
Yes, gRPC does that for calls with a single message response. I expect that FlushAsync would always result in separate packets.
That's what I thought. There would be a new version of …
Yes.
I don't think line 322 where Http2OutputProducer.ProcessDataWrites() calls Http2FrameWriter.WriteDataAsync() needs to change. We want to ensure that Http2OutputProducer.ProcessDataWrites() doesn't wake up at all, meaning the call to PipeReader.ReadAsync() shouldn't complete until the entire response body is available. This should already work because Kestrel's internal HttpResponsePipeWriter.Advance() calls HttpProtocol.Advance(), which in turn calls Http2OutputProducer.Advance(), which shouldn't flush. We just need to ensure that we can signal the end of the response body and flush to observe backpressure at the same time. CompleteAsync() should work for that.
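A hedged sketch of that flow from the application side, assuming the ASP.NET Core 3.0 trailer APIs (DeclareTrailer/AppendTrailer/CompleteAsync); the trailer name and payload here are illustrative, not taken from this issue:

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

static async Task WriteSingleMessageWithTrailers(HttpContext context)
{
    var response = context.Response;
    response.DeclareTrailer("grpc-status"); // advertise the trailer in the response headers

    // Buffer the entire body via GetMemory/Advance -- no flush, so
    // ProcessDataWrites doesn't wake up on a partial body.
    byte[] payload = Encoding.UTF8.GetBytes("single response message");
    var writer = response.BodyWriter;
    payload.AsSpan().CopyTo(writer.GetMemory(payload.Length).Span);
    writer.Advance(payload.Length);

    response.AppendTrailer("grpc-status", "0");

    // Signal end-of-body and flush at the same time; per the discussion
    // above, Kestrel's later WriteStreamSuffixAsync then no-ops.
    await response.CompleteAsync();
}
```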
@halter73 I don't follow. CompleteAsync isn't doing anything new, it's just doing it earlier than normal. The traces above show that the current model is splitting frames, presumably due to a flush somewhere, so how do we prevent that?
You said it yourself: "GetMemory, Advance, but no call to FlushAsync." The backpressure problem I bring up is unavoidable if we want to buffer indefinitely in order to avoid early flushes. We should have logic to always call FlushAsync after a certain amount of data is written. Kestrel will do the final flush by calling WriteStreamSuffixAsync after the middleware exits. This no-ops if the middleware already called CompleteAsync. If middleware for whatever reason wanted to observe whether there was backpressure due to flow control or something, the middleware would have to call CompleteAsync itself to observe Http2OutputProducer's …
@JamesNK want to retry this scenario with CompleteAsync? |
I'm not advocating that Kestrel never flush data out before the response completes. I think for small responses (which will cover most API/RPC response messages) it is a good optimization. |
I didn't think you were advocating that. The point I'm trying to make is that the "GetMemory, Advance, but no call to FlushAsync" pattern leaves when to flush completely up to the application, not Kestrel. If the app doesn't call FlushAsync or WriteAsync, Kestrel cannot by contract flush on your behalf. Kestrel can and does coalesce flushes if they are too frequent, but you can still end up with multi-frame responses as you observed. So having a way for the application to easily buffer and prevent flushing is great, but the application needs to have the logic "to always call FlushAsync after a certain amount of data is written", not Kestrel.
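To make that concrete, here's a rough sketch of the flush-threshold logic the application would need on top of the buffering pattern; the 64 KB threshold is an arbitrary assumption for illustration, not a number Kestrel prescribes:

```csharp
using System;
using System.IO.Pipelines;
using System.Threading;
using System.Threading.Tasks;

static async Task WriteBufferedAsync(
    PipeWriter writer, ReadOnlyMemory<byte> source, CancellationToken token)
{
    const int FlushThreshold = 64 * 1024; // arbitrary example threshold
    int unflushed = 0;

    while (!source.IsEmpty)
    {
        Memory<byte> buffer = writer.GetMemory();
        int count = Math.Min(buffer.Length, source.Length);
        source.Span.Slice(0, count).CopyTo(buffer.Span);
        writer.Advance(count);
        source = source.Slice(count);

        unflushed += count;
        if (unflushed >= FlushThreshold)
        {
            // Periodic flush: bounds memory use and lets the app observe
            // backpressure (e.g. HTTP/2 flow control) instead of buffering forever.
            await writer.FlushAsync(token);
            unflushed = 0;
        }
    }

    // No trailing flush here: leaving the tail unflushed lets CompleteAsync /
    // WriteStreamSuffixAsync coalesce it with the trailers.
}
```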
We'll need to investigate that. I don't think there's any reason in principle this shouldn't be combined into a single flush/packet given the current design.
Kestrel is sending HEADERS and DATA together in a TCP packet, followed by HEADERS (trailers) in the next TCP packet.
The C-Core gRPC server sends HEADERS, DATA and HEADERS (trailers) in a single TCP packet.
So..
// @Tratcher @steveharter @davidfowl