Handle keep-alive behavior to close the connection #201
Conversation
@leszekhanusz I fixed the virtualenv so it's no longer modifying a lot of things. Note: from https://github.com/graphql-python/gql/blame/master/CONTRIBUTING.md#L34 I'm still getting the line-length errors, and I'm wondering which tool is supposed to manage this. EDIT: I pushed a manual adjustment for line lengths... it feels hacky and looks a bit weird. I'm curious what you use to make this automatic?
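For what it's worth, a common way to automate line wrapping alongside `isort` is `black` (an assumption here, not confirmed from this thread as the tool gql's Makefile uses): `isort` only sorts imports and can flag long lines, while `black` actually rewrites code to its default 88-character limit.

```shell
# Install black if it is not already available.
command -v black >/dev/null 2>&1 || pip install black

# Report which files would be reformatted, without changing anything.
black --check .

# Rewrite files in place (88-character lines by default).
black .
```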
Codecov Report

```diff
@@            Coverage Diff            @@
##            master      #201   +/-  ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files           16        16
  Lines          985      1008    +23
=========================================
+ Hits           985      1008    +23
```

Continue to review full report at Codecov.
Modifications:

* `clean_close = False`
* `keep_alive_timeout` is now `Optional[int]`, defaulting to `None`
* `self._fail` is called directly from the `_check_ws_liveness` coroutine
* no need to cancel the `_receive_data_loop` coroutine; it will stop by itself once the websocket closes

@sneko Could you please check if the refactor is working correctly for you?
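The liveness-check idea described above can be sketched roughly as follows. This is a simplified illustration, not gql's actual implementation: `KeepAliveWatcher`, `on_keep_alive`, and `check_liveness` are hypothetical names, and `_fail` here only records the failure instead of closing a websocket.

```python
import asyncio
from typing import Optional


class KeepAliveWatcher:
    """Sketch of a _check_ws_liveness-style coroutine: fail the
    connection if no keep-alive ("ka") message arrives within
    keep_alive_timeout seconds."""

    def __init__(self, keep_alive_timeout: Optional[float] = None):
        self.keep_alive_timeout = keep_alive_timeout
        self._next_ka = asyncio.Event()
        self.failed = False
        self.reason: Optional[str] = None

    def on_keep_alive(self) -> None:
        # Called by the receive loop each time a "ka" message is parsed.
        self._next_ka.set()

    async def _fail(self, reason: str) -> None:
        # Stand-in for the transport's _fail: record why we gave up.
        self.failed = True
        self.reason = reason

    async def check_liveness(self) -> None:
        # Keep-alive checking is disabled when the timeout is None
        # (the new default mentioned in the modifications above).
        if self.keep_alive_timeout is None:
            return
        try:
            while True:
                # Wait for the next "ka"; time out if it never comes.
                await asyncio.wait_for(
                    self._next_ka.wait(), timeout=self.keep_alive_timeout
                )
                self._next_ka.clear()
        except asyncio.TimeoutError:
            await self._fail("No keep-alive message within timeout")
```

With `keep_alive_timeout=None` the coroutine returns immediately; with a timeout set, a single missed interval triggers the failure path.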
Hi @leszekhanusz,

Here is the PR for the keep-alive behavior #200, tested with real servers sending `ka` messages.

I read the CONTRIBUTING.md but struggled with `make check`: I installed the tools, but it changes all my files (including yours), as if the parameters were not taken into account. Moreover, `isort` tells me some lines are longer than 88 characters, but no other tool adjusts them automatically (I thought it would?). I guess I missed something... I did the venv creation + `make dev-setup`.

Tip for others: in my app code I forgot to `await ws_transport.wait_close()`, which made the reconnection after a keep-alive failure fail, because some things were not yet cleaned up in the meantime (`asyncio` giving priority to other tasks, including mine trying to reconnect).

Tell me if I need to adjust something :)
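The pitfall in that tip can be demonstrated with a small runnable sketch. `DummyTransport` and its methods are stand-ins invented for illustration; only the `wait_close()` name mirrors the comment above, and the real gql transport behaves differently in detail.

```python
import asyncio


class DummyTransport:
    """Hypothetical transport whose close sequence finishes a moment
    after the failure, like a real websocket cleanup."""

    def __init__(self) -> None:
        self._cleanup_done = asyncio.Event()
        self.reconnect_was_clean: bool | None = None

    async def fail_and_close(self) -> None:
        # Simulate a keep-alive failure: cleanup completes only after
        # asyncio has given other tasks a chance to run.
        await asyncio.sleep(0.05)
        self._cleanup_done.set()

    async def wait_close(self) -> None:
        # Block until the old connection has finished cleaning up.
        await self._cleanup_done.wait()

    async def reconnect(self) -> None:
        # A clean reconnect requires the previous cleanup to be done.
        self.reconnect_was_clean = self._cleanup_done.is_set()


async def main() -> tuple[bool, bool]:
    t = DummyTransport()
    asyncio.create_task(t.fail_and_close())

    # Wrong: reconnecting immediately races with the pending cleanup.
    await t.reconnect()
    raced = t.reconnect_was_clean  # cleanup had not finished yet

    # Right: wait for the close to complete before reconnecting.
    await t.wait_close()
    await t.reconnect()
    clean = t.reconnect_was_clean

    return bool(raced), bool(clean)


raced, clean = asyncio.run(main())
```

The first reconnect observes an unfinished cleanup; the second, gated on `wait_close()`, starts from a clean state.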