Description
While gopls performs some background operations, such as producing diagnostics, asynchronously, it has never handled jsonrpc2 requests asynchronously. In the past, this didn't matter much, because the bulk of request handling was type checking, and type-checked open packages are memoized, so typically the first request would be slow but subsequent requests would be fast.
However, there have always been areas where true concurrent request handling would be helpful, and recently we've added a couple more:
- Not every request requires type information. For example, DocumentSymbols and WorkspaceSymbols use only syntax (and perhaps a simplified version of SemanticTokens could as well).
- Some requests, such as completion, may not want to await pending loads of package information (x/tools/gopls: GOPACKAGESDRIVER calls happen at intervals that make the results incorrect #59625).
- Some requests, such as pull diagnostics, may benefit from being batched together (x/tools/gopls: initial support for pull-based diagnostics available in LSP 3.17 #53275).
- Some executeCommand requests are long-running, and yet want to return a result (e.g. gopls.vulncheck). We've been working around this with an ad-hoc callback to fetch the results. This is important for building more complicated client logic on top of gopls, as we want to do for refactoring.
We should make gopls concurrent. In order for this to work, we need jsonrpc2 APIs that delegate control over concurrent handling to the core of gopls (not the RPC layer), because gopls needs to start handling the request (acquiring a Snapshot) before the next request may proceed, in order to preserve the logical ordering of requests.
Eventually, we should do this with the new jsonrpc2_v2 library, which has better APIs for handler binding and will yield a cleaner overall design. However, in discussion @adonovan pointed out that we can use a trick similar to t.Parallel to "release" requests, allowing concurrency to be implemented with minimal API change.