If you set a Livy server endpoint in kernel_scala_credentials or kernel_python_credentials, it is picked up automatically when you run the corresponding wrapper kernel. But if you start a plain Python notebook on the same Jupyter instance and run the %manage_spark magic, that configuration isn't taken into account at all; you have to re-enter the endpoint before you can create a session.
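For context, this is the kind of configuration I mean. A minimal sketch of the relevant part of ~/.sparkmagic/config.json, with placeholder URL and credentials (the exact keys beyond username/password/url may vary by sparkmagic version):

```json
{
  "kernel_python_credentials": {
    "username": "",
    "password": "",
    "url": "http://livy-server:8998"
  },
  "kernel_scala_credentials": {
    "username": "",
    "password": "",
    "url": "http://livy-server:8998"
  }
}
```

With this in place, the PySpark/Spark wrapper kernels connect to that endpoint automatically, but a plain Python notebook that does `%load_ext sparkmagic.magics` followed by `%manage_spark` starts with an empty endpoint list.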
Is there any good reason not to pre-populate an endpoint entry when the user has already gone to the trouble of configuring it for the wrapper kernels? They can of course still add other endpoints in %manage_spark, but this seems like a convenient shortcut for 90% of use cases, with very little harm done to the other 10%.
Would a pull request along these lines be accepted, or are you opposed to this idea? Thanks :)