We fork a goroutine for every stream here:
Lines 297 to 304 in ec9275b:

```go
go func() {
	select {
	case <-cc.ctx.Done():
		cs.finish(ErrClientConnClosing)
	case <-ctx.Done():
		cs.finish(toRPCErr(ctx.Err()))
	}
}()
```
The main purpose of this goroutine is to monitor the user's context so the RPC can be aborted when they cancel it. It adds moderate overhead to the cost of an RPC (~1-2%). To avoid this, we could instead add a `Cancel()` method to `clientStream` that users can call synchronously when they wish to cancel the stream (we may be doing this anyway); a rough sketch of such an API follows the list below. With this in place, if the user agrees to use it instead of the context for unscheduled cancellations (i.e. anything besides deadlines), we no longer need to fork this goroutine. There are some other prerequisites for this work. To list them all:
1. Implement `clientStream.Cancel()` (New API: grpc.CancelClientStream(ClientStream) #1933).
2. Use a different mechanism to cancel streams when the `ClientConn` is closed. E.g. have `ClientConn.Close()` synchronously cancel all active streams, or use callbacks from the transport when the `net.Conn` is closed.
3. Use a different mechanism to cancel streams when the deadline is reached. This could be one goroutine per `ClientConn` that sleeps until the next deadline, or until it is awoken because a deadline was added or removed (a sketch of this variant also follows the list). Since Go doesn't support `WaitUntil` on a `sync.Cond` (sync: Cond WaitFor and/or WaitUntil method(s) golang/go#24429), something else will have to be used (e.g. a `time.Timer`, a channel, and blocking on a `select`). [EDIT: or use a `sync.Cond` and do `time.AfterFunc(nextTime, cond.Broadcast)`, stopping that timer if the monitor is awoken before it fires.]
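For item 1, here is a minimal sketch of what a synchronous `Cancel()` on `clientStream` could look like. The struct fields, the error value, and the body of `finish` are placeholders for illustration, not the real grpc-go implementation:

```go
package sketch

import (
	"errors"
	"sync"
)

// Placeholder error; the real API would presumably use a status error.
var errStreamCanceled = errors.New("grpc: the client stream was canceled by the caller")

// clientStream is a stand-in for the real stream type; only the state
// needed to show synchronous cancellation is included.
type clientStream struct {
	mu       sync.Mutex
	finished bool
	// transport/stream state elided
}

// Cancel aborts the stream synchronously. It replaces the per-stream
// goroutine that previously watched ctx.Done() for user cancellation.
func (cs *clientStream) Cancel() {
	cs.finish(errStreamCanceled)
}

// finish is idempotent: the first error recorded wins, matching the
// behavior the monitor goroutine relied on.
func (cs *clientStream) finish(err error) {
	cs.mu.Lock()
	defer cs.mu.Unlock()
	if cs.finished {
		return
	}
	cs.finished = true
	// Tear down the underlying transport stream with err here.
	_ = err
}
```

A package-level `grpc.CancelClientStream(ClientStream)` helper, as proposed in #1933, could simply delegate to a method like this.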
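For item 3, a sketch of the single-timer variant (one monitor goroutine per `ClientConn`, a `time.Timer`, a wake channel, and a `select`). The `canceler` interface and the bookkeeping are made up for this example and would need to hook into the real stream and transport state; a real version would also need a way to stop when the `ClientConn` closes:

```go
package sketch

import (
	"sync"
	"time"
)

// canceler is whatever item 1 gives us for aborting a stream synchronously.
type canceler interface{ Cancel() }

// deadlineMonitor runs one goroutine per ClientConn that sleeps until the
// earliest registered deadline, or until it is woken by add/remove.
type deadlineMonitor struct {
	mu        sync.Mutex
	deadlines map[canceler]time.Time
	wake      chan struct{} // non-blocking signal: the deadline set changed
}

func newDeadlineMonitor() *deadlineMonitor {
	m := &deadlineMonitor{
		deadlines: make(map[canceler]time.Time),
		wake:      make(chan struct{}, 1),
	}
	go m.run()
	return m
}

func (m *deadlineMonitor) add(cs canceler, d time.Time) {
	m.mu.Lock()
	m.deadlines[cs] = d
	m.mu.Unlock()
	m.signal()
}

// remove is called when a stream finishes before its deadline.
func (m *deadlineMonitor) remove(cs canceler) {
	m.mu.Lock()
	delete(m.deadlines, cs)
	m.mu.Unlock()
	m.signal()
}

func (m *deadlineMonitor) signal() {
	select {
	case m.wake <- struct{}{}:
	default: // a wakeup is already pending
	}
}

func (m *deadlineMonitor) run() {
	timer := time.NewTimer(time.Hour)
	defer timer.Stop()
	for {
		// Expire anything past due and find the next deadline.
		m.mu.Lock()
		now := time.Now()
		var expired []canceler
		var next time.Time
		for cs, d := range m.deadlines {
			if !d.After(now) {
				expired = append(expired, cs)
				delete(m.deadlines, cs)
				continue
			}
			if next.IsZero() || d.Before(next) {
				next = d
			}
		}
		m.mu.Unlock()
		for _, cs := range expired {
			cs.Cancel() // in reality: finish with DeadlineExceeded
		}

		wait := time.Hour // nothing scheduled; just wait to be woken
		if !next.IsZero() {
			wait = time.Until(next)
		}
		if !timer.Stop() {
			select { // drain a fired-but-unread timer before Reset
			case <-timer.C:
			default:
			}
		}
		timer.Reset(wait)
		select {
		case <-timer.C:
		case <-m.wake:
		}
	}
}
```

The `time.AfterFunc(nextTime, cond.Broadcast)` variant from the edit above would replace the timer/channel pair with a `sync.Cond`, stopping the timer whenever the monitor is woken early.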
(It's possible that 3 has overhead similar to forking a goroutine, in which case this would not be worth pursuing, so we would have to measure performance before starting on this.)
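To get a rough number for that comparison, something along these lines could be a starting point. This is a toy micro-benchmark, not the grpc-go benchmark suite, and it only approximates the per-RPC path in question:

```go
package sketch

import (
	"context"
	"testing"
)

// Approximates the current path: fork a goroutine per RPC that selects on
// the ClientConn context and the RPC context.
func BenchmarkPerStreamMonitorGoroutine(b *testing.B) {
	ccCtx := context.Background()
	for i := 0; i < b.N; i++ {
		ctx, cancel := context.WithCancel(context.Background())
		done := make(chan struct{})
		go func() {
			select {
			case <-ccCtx.Done():
			case <-ctx.Done():
			}
			close(done)
		}()
		cancel() // stand-in for the RPC completing
		<-done
	}
}

// Approximates the proposed path: no monitor goroutine; cancellation is a
// synchronous call (here just the context cancel itself).
func BenchmarkNoMonitorGoroutine(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_, cancel := context.WithCancel(context.Background())
		cancel()
	}
}
```

Run with `go test -bench . -benchmem` and compare against the existing gRPC benchmarks before committing to the change.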