[k8s] Fix incluster auth after multi-context support #4014


Merged: 9 commits merged into master from k8s_fix_incluster_auth on Sep 29, 2024

Conversation

romilbhardwaj
Collaborator

@romilbhardwaj romilbhardwaj commented Sep 28, 2024

In-cluster auth broke after our recent multi-context support. This is because incluster auth does not have any kubeconfig (and consequently, does not have any contexts) and relies solely on mounted service accounts.

As a result, SkyServe controller (and sky jobs controller) was not able to launch replicas.

This PR fixes it by making context optional throughout our codebase, and using the in-cluster auth when context is not detected.

Note: we probably need to update our docs/elsewhere to mention that multiple contexts may not work when the controller runs in a Kubernetes cluster.
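The fallback described above can be sketched as follows. This is a minimal illustration, not SkyPilot's actual code; the real kubernetes-client calls it stands in for are noted in comments.

```python
from typing import Optional


def load_kube_credentials(context: Optional[str]) -> str:
    """Minimal sketch of the context-optional fallback.

    In real code this would dispatch to the kubernetes Python client:
    kubernetes.config.load_kube_config(context=context) when a named
    context is available, and kubernetes.config.load_incluster_config()
    otherwise (which reads the pod's mounted service account token).
    """
    if context is None:
        # No kubeconfig context detected: rely on in-cluster auth via
        # the mounted service account.
        return 'in-cluster'
    # A named kubeconfig context was given or detected.
    return f'kubeconfig:{context}'
```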

Tested

  • sky serve up -n http http_server.yaml
  • sky launch -c test --cloud kubernetes -- echo hi
  • Above, but with allowed_contexts set in config.yaml.

@romilbhardwaj romilbhardwaj changed the title [k8s] Fix incluster auth after [k8s] Fix incluster auth after multi-context support Sep 28, 2024
@concretevitamin
Member

Shall we add such a smoke test?

@romilbhardwaj
Collaborator Author

This would have been caught by test_skyserve_kubernetes_http, but the full suite of smoke tests was not run for #3968.

Collaborator

@Michaelvll Michaelvll left a comment


Thanks for fixing this @romilbhardwaj! Just wondering: when we run things through the controller, do we need to pop allowed_contexts from the config.yaml used for the jobs or services as well?

https://github.com/skypilot-org/skypilot/blob/master/sky/utils/controller_utils.py#L359-L370

@@ -380,6 +380,11 @@ def setup_kubernetes_authentication(config: Dict[str, Any]) -> Dict[str, Any]:
    secret_field_name = clouds.Kubernetes().ssh_key_secret_field_name
    context = config['provider'].get(
        'context', kubernetes_utils.get_current_kube_config_context_name())
    if context == kubernetes_utils.SINGLETON_REGION:
Collaborator


Can we rename this SINGLETON_REGION to something else, such as IN_CLUSTER_CONTEXT?

Collaborator


Also, should we just make the value of SINGLETON_REGION = '_in-cluster' or something like that?

Collaborator Author


Good idea - updated to in-cluster.

If running from within a cluster, it now looks like this:

(base) sky@test-2ea4-head:~$ sky launch -c wow --cloud kubernetes -- echo hi
Task from command: echo hi
I 09-28 03:39:33 optimizer.py:719] == Optimizer ==
I 09-28 03:39:33 optimizer.py:742] Estimated cost: $0.0 / hour
I 09-28 03:39:33 optimizer.py:742]
I 09-28 03:39:33 optimizer.py:867] Considered resources (1 node):
I 09-28 03:39:33 optimizer.py:937] ---------------------------------------------------------------------------------------------
I 09-28 03:39:33 optimizer.py:937]  CLOUD        INSTANCE    vCPUs   Mem(GB)   ACCELERATORS   REGION/ZONE   COST ($)   CHOSEN
I 09-28 03:39:33 optimizer.py:937] ---------------------------------------------------------------------------------------------
I 09-28 03:39:33 optimizer.py:937]  Kubernetes   2CPU--2GB   2       2         -              in-cluster    0.00          ✔
I 09-28 03:39:33 optimizer.py:937] ---------------------------------------------------------------------------------------------
I 09-28 03:39:33 optimizer.py:937]
Launching a new cluster 'wow'. Proceed? [Y/n]:

@romilbhardwaj
Collaborator Author

Thanks @Michaelvll - updated the PR and ran tests again.

Tested

  • sky serve up -n http http_server.yaml
  • sky launch -c test --cloud kubernetes -- echo hi
  • Above, but with allowed_contexts set in config.yaml.
  • pytest tests/test_smoke.py::test_skyserve_kubernetes_http --kubernetes

Collaborator

@Michaelvll Michaelvll left a comment


Thanks for the quick fix @romilbhardwaj! LGTM.

Comment on lines 139 to 141
all_contexts = kubernetes_utils.get_all_kube_config_context_names()
if all_contexts is None:
    return []
return [None]
Collaborator


Ahh, actually, is it possible to directly return [None] from get_all_kube_config_context_names() when we detect it is actually within a cluster, and still return an empty list of contexts when it is not in a cluster? Otherwise it seems to mix the two situations (able to access a Kubernetes cluster or not) when all_contexts is None, which feels unintuitive.

Collaborator Author

@romilbhardwaj romilbhardwaj Sep 29, 2024


I see your point. I've updated get_all_kube_config_context_names() to return [None] if incluster and [] if no kubeconfig is detected, so the caller can distinguish between these two cases.
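The distinction can be sketched like this (simplified signature for illustration; the real function performs the kubeconfig and in-cluster detection itself rather than taking it as arguments):

```python
from typing import List, Optional


def get_all_kube_config_context_names(
        kubeconfig_contexts: List[str],
        in_cluster: bool) -> List[Optional[str]]:
    """Sketch of the behavior described above.

    The arguments stand in for the detection the real function does itself.
    """
    if kubeconfig_contexts:
        # Normal case: one or more named contexts from the kubeconfig.
        return list(kubeconfig_contexts)
    if in_cluster:
        # In-cluster auth: the cluster is reachable but there is no named
        # context, so signal that with a single None entry.
        return [None]
    # No kubeconfig and not running in a cluster: no contexts at all.
    return []
```

This lets callers distinguish "reachable via in-cluster auth" ([None]) from "no Kubernetes access" ([]).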

Comment on lines 366 to 370
# Remove allowed_contexts from local_user_config since the controller
# may be running in a Kubernetes cluster with in-cluster auth and may
# not have kubeconfig available to it. This is the typical case since
# remote_identity default for Kubernetes is SERVICE_ACCOUNT.
local_user_config.pop('allowed_contexts', None)
Collaborator


It seems a bit more complicated, since if the controller is running on a cloud, it can be fine and correct to set the allowed_contexts. How about we add a TODO here?
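The suggested TODO might look like the following (hypothetical helper name for illustration; the real code lives in sky/utils/controller_utils.py):

```python
from typing import Any, Dict


def sanitize_controller_config(local_user_config: Dict[str, Any]) -> Dict[str, Any]:
    """Hypothetical helper illustrating the TODO suggested above."""
    # TODO: only drop allowed_contexts when the controller itself runs
    # inside a Kubernetes cluster with in-cluster auth; a controller
    # hosted on a cloud VM has a kubeconfig and can honor it.
    local_user_config.pop('allowed_contexts', None)
    return local_user_config
```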

@romilbhardwaj
Collaborator Author

Thanks @Michaelvll! Re-running above tests after the changes to verify correctness, will merge after tests pass.

@romilbhardwaj romilbhardwaj added this pull request to the merge queue Sep 29, 2024
Merged via the queue into master with commit e6a3b83 Sep 29, 2024
20 checks passed
@romilbhardwaj romilbhardwaj deleted the k8s_fix_incluster_auth branch September 29, 2024 06:10