
grpc-uds: update to 0.3.18 #2


Merged
nightkr merged 54 commits into feature/grpc-uds from feature/grpc-uds-/0.3.18 on Apr 19, 2023

Conversation

@nightkr (Member) commented Apr 19, 2023

No description provided.

seanmonstar and others added 30 commits November 23, 2021 10:32
…#586)

This makes reading the logs way easier on the eyes.
We have reports of runtime panics (linkerd/linkerd2#7748) that sound a
lot like rust-lang/rust#86470. We don't have any evidence that these
panics originate in h2, but there is one use of `Instant::sub` that
could panic in this way.

Even though this is almost definitely a bug in Rust, it seems most
prudent to actively avoid the uses of `Instant` that are prone to this
bug. These fixes should ultimately be made in the standard library, but
this change lets us avoid this problem while we wait for those fixes.

This change replaces uses of `Instant::elapsed` and `Instant::sub` with
calls to `Instant::saturating_duration_since` to prevent this class of
panic.

See also hyperium/hyper#2746
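
For illustration, a minimal sketch of the substitution this describes (not the actual h2 diff; the helper name `elapsed_safely` is invented here):

```rust
use std::time::{Duration, Instant};

// Hypothetical helper, not h2's code: measure elapsed time without the
// `Instant::sub` / `Instant::elapsed` panic tracked in rust-lang/rust#86470.
fn elapsed_safely(start: Instant) -> Duration {
    let now = Instant::now();
    // `now - start` or `start.elapsed()` can panic if the monotonic clock
    // makes `now` appear earlier than `start`; `saturating_duration_since`
    // clamps the result to zero instead.
    now.saturating_duration_since(start)
}
```
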
…h decoded

Fixes a header decoding error when a header name falls at a CONTINUATION
header boundary.
## Motivation

Currently, the `tracing` spans for the client and server handshakes
contain the name of the I/O type. In some cases, where nested I/O types
are in use, these names can be quite long; for example, in Linkerd, we
see log lines like this:

```
2022-03-07T23:38:15.322506670Z [ 10533.916262s] DEBUG ThreadId(01) inbound:accept{client.addr=192.168.1.9:1227}:server{port=4143}:direct:gateway{dst=server.echo.svc.cluster.local:8080}:server_handshake{io=hyper::common::io::rewind::Rewind<linkerd_io::either::EitherIo<linkerd_io::sensor::SensorIo<linkerd_io::prefixed::PrefixedIo<linkerd_io::either::EitherIo<tokio_rustls::server::TlsStream<linkerd_io::either::EitherIo<linkerd_io::scoped::ScopedIo<tokio::net::tcp::stream::TcpStream>, linkerd_io::prefixed::PrefixedIo<linkerd_io::scoped::ScopedIo<tokio::net::tcp::stream::TcpStream>>>>, linkerd_io::either::EitherIo<linkerd_io::scoped::ScopedIo<tokio::net::tcp::stream::TcpStream>, linkerd_io::prefixed::PrefixedIo<linkerd_io::scoped::ScopedIo<tokio::net::tcp::stream::TcpStream>>>>>, linkerd_transport_metrics::sensor::Sensor>, linkerd_io::sensor::SensorIo<linkerd_io::either::EitherIo<tokio_rustls::server::TlsStream<linkerd_io::either::EitherIo<linkerd_io::scoped::ScopedIo<tokio::net::tcp::stream::TcpStream>, linkerd_io::prefixed::PrefixedIo<linkerd_io::scoped::ScopedIo<tokio::net::tcp::stream::TcpStream>>>>, linkerd_io::either::EitherIo<linkerd_io::scoped::ScopedIo<tokio::net::tcp::stream::TcpStream>, linkerd_io::prefixed::PrefixedIo<linkerd_io::scoped::ScopedIo<tokio::net::tcp::stream::TcpStream>>>>, linkerd_transport_metrics::sensor::Sensor>>>}:FramedWrite::buffer{frame=Settings { flags: (0x0), initial_window_size: 65535, max_frame_size: 16384 }}: h2::codec::framed_write: send frame=Settings { flags: (0x0), initial_window_size: 65535, max_frame_size: 16384 }
```

which is kinda not great.

## Solution

This branch removes the IO type's type name from the spans for the
server and client handshakes. In practice, these are not particularly
useful, because a given server or client instance is parameterized over
the IO types and will only serve connections of that type.
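
A hedged sketch of the kind of change described (illustrative only, not the actual h2 handshake code; the span name and `io` field mirror the log line above):

```rust
// Before (illustrative): the handshake span records the I/O type's full type
// name, which can be enormous for deeply nested wrapper types.
fn handshake_span_before<T>() -> tracing::Span {
    tracing::trace_span!("server_handshake", io = %std::any::type_name::<T>())
}

// After (illustrative): the type name is dropped, since a given server or
// client instance only ever serves connections of one I/O type anyway.
fn handshake_span_after() -> tracing::Span {
    tracing::trace_span!("server_handshake")
}
```
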
# 0.3.12 (March 9, 2022)

* Avoid time operations that can panic (hyperium#599)
* Bump MSRV to Rust 1.49 (hyperium#606)
* Fix header decoding error when a header name is contained at a continuation
  header boundary (hyperium#589)
* Remove I/O type names from handshake `tracing` spans (hyperium#608)
Nightly has begun running doctests for unexported macros as of
rust-lang/rust#96630, which caused a doctest for
test_unpack_octets_4 which was previously ignored to be run. This broke
the CI because macros that are not exported with `#[macro_export]`
cannot be used from external crates (and thus cannot be doctested). This
change ignores the doctest and copies the relevant code into a unit
test.

Co-authored-by: David Koloski <[email protected]>
Signed-off-by: Ryan Russell <[email protected]>
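
A hedged sketch of the pattern this commit describes (a reconstruction for illustration, not necessarily h2's exact macro):

```rust
/// Unpacks four octets into an integer, big-endian.
///
/// ```ignore
/// // The doctest is marked `ignore`: this macro is not `#[macro_export]`ed,
/// // so the external crate that doctests run in cannot invoke it.
/// let buf = [0u8, 0, 0, 1];
/// assert_eq!(unpack_octets_4!(buf, 0, u32), 1);
/// ```
macro_rules! unpack_octets_4 {
    ($buf:expr, $offset:expr, $tip:ty) => {
        (($buf[$offset + 0] as $tip) << 24)
            | (($buf[$offset + 1] as $tip) << 16)
            | (($buf[$offset + 2] as $tip) << 8)
            | (($buf[$offset + 3] as $tip) << 0)
    };
}

#[cfg(test)]
mod tests {
    #[test]
    fn test_unpack_octets_4() {
        // The doctest body is duplicated as a regular unit test so it still runs.
        let buf = [0u8, 0, 0, 1];
        assert_eq!(unpack_octets_4!(buf, 0, u32), 1);
    }
}
```
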
…yperium#634)

An HTTP/2 server is allowed to respond early without fully consuming the
client's input stream, but it must then send RST_STREAM with the error
code NO_ERROR; Nginx treats any other error code as fatal.

This commit changes the error code from CANCEL to NO_ERROR when the
server responds early to the client.

hyperium#633
https://trac.nginx.org/nginx/ticket/2376
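
A hedged, user-level sketch of the early-response pattern this affects (assuming h2's `server::SendResponse` API; the handler name is invented):

```rust
use bytes::Bytes;
use h2::server::SendResponse;
use http::Response;

// Respond without consuming the rest of the client's request body. h2 resets
// the request stream internally; after this change that RST_STREAM carries
// NO_ERROR rather than CANCEL, which Nginx would treat as fatal.
fn respond_early(mut respond: SendResponse<Bytes>) -> Result<(), h2::Error> {
    let response = Response::builder().status(200).body(()).unwrap();
    // end_of_stream = true: headers-only response, no body follows.
    let _stream = respond.send_response(response, true)?;
    Ok(())
}
```
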
vi and others added 20 commits October 29, 2022 14:22
Remove a redundant and misleading phrase from the
`client::Builder::enable_push` documentation.

Resolves hyperium#645
This exposes the `:protocol` pseudo-header as a `Request` extension.

Fixes hyperium#347
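
A hedged sketch of reading the new extension (assuming the exposed type is `h2::ext::Protocol` with an `as_str` accessor):

```rust
use h2::ext::Protocol;
use http::Request;

// Returns the `:protocol` pseudo-header (e.g. "websocket" for Extended
// CONNECT) if the peer sent one; it is stored in the request's extensions.
fn protocol_of<B>(req: &Request<B>) -> Option<&str> {
    req.extensions().get::<Protocol>().map(Protocol::as_str)
}
```
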
…am (hyperium#657)

We encountered this panic in our production environment, so handle the
condition before it panics. The stack backtrace:

Co-authored-by: winters.zc <[email protected]>
…m#661)

Fixes hyperium#628

Sometimes `poll_capacity` returns `Ready(Some(0))`, in which case the
caller has no way to wait for the stream capacity to become available.
The previous attempt at a fix addressed only part of the problem.

The root cause - in a nutshell - is the race condition between the
application tasks that performs stream I/O and the task that serves
the underlying HTTP/2 connection. The application thread that is about
to send data calls `reserve_capacity/poll_capacity`, is provided
with some send capacity and proceeds to `send_data`.

Meanwhile, the service thread may send some buffered data and/or
receive some window updates; either way the stream's effective
allocated send capacity may not change but, since capacity is still
available, the `send_capacity_inc` flag may be set.

The sending task calls `send_data` and uses the entire allocated
capacity, leaving the flag set. Next time `poll_capacity` returns
`Ready(Some(0))`.

This change sets the flag and dispatches the wakeup event only in
cases when the effective capacity reported by `poll_capacity` actually
increases.
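
A hedged sketch of the user-side send loop involved (assuming h2's `SendStream` API; `send_all` is an invented helper):

```rust
use bytes::Bytes;
use h2::SendStream;
use std::future::poll_fn;

async fn send_all(stream: &mut SendStream<Bytes>, mut data: Bytes) -> Result<(), h2::Error> {
    while !data.is_empty() {
        // Ask the connection task for enough capacity to send the rest.
        stream.reserve_capacity(data.len());
        // Wait until some capacity is granted. Before this fix, this could
        // resolve to `Some(Ok(0))`, leaving the caller with nothing to send
        // and no later wakeup to wait for.
        let granted = match poll_fn(|cx| stream.poll_capacity(cx)).await {
            Some(Ok(n)) => n,
            Some(Err(e)) => return Err(e),
            None => break, // stream closed
        };
        let chunk = data.split_to(granted.min(data.len()));
        // Set end_of_stream once the final chunk goes out.
        stream.send_data(chunk, data.is_empty())?;
    }
    Ok(())
}
```
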
It's quite confusing when all I get from production logs is "broken pipe"
and I can't tell which code path led to that error being logged.
Streams that have been received by the peer, but not accepted by the
user, can also receive a RST_STREAM. This is a legitimate pattern: one
could send a request and then shortly after, realize it is not needed,
sending a CANCEL.

However, since those streams are now "closed", they don't count towards
the max concurrent streams. So, they will sit in the accept queue, using
memory.

In most cases, the user is calling `accept` in a loop, and they can
accept requests that have been reset fast enough that this isn't an
issue in practice.

But if the peer is able to flood the network faster than the server
accept loop can run (simply accepting, not processing requests; that
tends to happen in a separate task), the memory could grow.

So, this introduces a maximum count for streams in the pending-accept
but remotely-reset state. If the maximum is reached, a GOAWAY frame with
the error code of ENHANCE_YOUR_CALM is sent, and the connection marks
itself as errored.

ref CVE-2023-26964
ref GHSA-f8vr-r385-rh5r

Closes hyperium/hyper#2877
The new option is available to both client and server `Builder`s.
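
A hedged sketch of opting into the limit on the server side (assuming the option is exposed as `max_pending_accept_reset_streams` on the h2 0.3.18 builders; the value shown is arbitrary):

```rust
use bytes::Bytes;
use h2::server;
use tokio::net::TcpStream;

async fn handshake(socket: TcpStream) -> Result<server::Connection<TcpStream, Bytes>, h2::Error> {
    server::Builder::new()
        // Cap how many remotely-reset, not-yet-accepted streams may sit in the
        // accept queue before the connection sends GOAWAY (ENHANCE_YOUR_CALM).
        .max_pending_accept_reset_streams(20)
        .handshake(socket)
        .await
}
```
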
@nightkr nightkr requested review from lfrancke and a team April 19, 2023 13:38
@stackable-bot commented Apr 19, 2023

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
1 out of 23 committers have signed the CLA.

✅ djc
❌ seanmonstar
❌ nox
❌ LPardue
❌ olix0r
❌ hikaricai
❌ hawkw
❌ djkoloski
❌ bruceg
❌ ryanrussell
❌ kckeiks
❌ erebe
❌ LucioFranco
❌ ehaydenr
❌ vi
❌ silence-coding
❌ gtsiam
❌ howardjohn
❌ cloneable
❌ aftersnow
❌ vadim-eg
❌ atouchet
❌ nickelc
You have signed the CLA already but the status is still pending? Let us recheck it.

@nightkr (Member, Author) commented Apr 19, 2023

Excluded CLAbot from this repo since it's not a Stackable Product.

@lfrancke (Member) left a comment

Uhhhmm yes! I took a close look and approve!

@nightkr nightkr merged commit 557dd10 into feature/grpc-uds Apr 19, 2023
@nightkr nightkr deleted the feature/grpc-uds-/0.3.18 branch April 19, 2023 13:42
bors bot pushed a commit to stackabletech/listener-operator that referenced this pull request Apr 19, 2023
# Description

Depends on stackabletech/h2#2. Replaces #67. Should also be replicated in secret-op once merged.
@nightkr nightkr mentioned this pull request May 8, 2024