Change default namespace logic #85


Merged

Conversation

anishasthana
Contributor

  • Update developer instructions
  • Change default namespace logic to use the user's current namespace

Signed-off-by: Anish Asthana <[email protected]>
@anishasthana anishasthana changed the title update appwrapper instascale WIP: Change default namespace logic Apr 6, 2023
@anishasthana anishasthana force-pushed the update_appwrapper_instascale branch 3 times, most recently from 0d6e32c to 44e5a26 Compare April 6, 2023 20:21
@anishasthana anishasthana changed the title WIP: Change default namespace logic Change default namespace logic Apr 6, 2023
@anishasthana anishasthana force-pushed the update_appwrapper_instascale branch from 44e5a26 to 046a86f Compare April 6, 2023 21:59
@MichaelClifford
Collaborator

Ok, kind of annoying, but I think we need to change this PR a bit to account for the fact that the MCAD scheduler path also needs a get_namespace call, to make sure it doesn't just use the "default" namespace.

scheduler="kubernetes_mcad",
cfg=self.scheduler_args if self.scheduler_args is not None else None,
workspace="",
)

We need to add something here like:

self.scheduler_args["namespace"] = openshift.get_project()
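A minimal sketch of that change, with a hypothetical `current_project()` standing in for `openshift.get_project()` (so the example runs without an OpenShift session). As a small variation on the line above, this version only fills in the namespace when the caller has not supplied one, so an explicit `scheduler_args["namespace"]` still wins:

```python
def current_project():
    # Hypothetical stand-in for openshift.get_project(), which would return
    # the namespace the user's oc session is currently pointed at.
    return "my-namespace"

def build_scheduler_args(scheduler_args):
    # Start from the caller-supplied args (or an empty dict), then ensure a
    # namespace is always present so MCAD doesn't fall back to "default".
    args = dict(scheduler_args) if scheduler_args is not None else {}
    args.setdefault("namespace", current_project())
    return args
```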

We also need to deal with the fact that we already have a get_current_namespace() function within the cluster.py file. (This is an artifact from when we assumed a cluster object would always be required).

My suggestion is that we delete get_current_namespace(), as it's not used by any other function, and instead rely on the openshift package to call openshift.get_project() wherever we need the project name (basically your idea from yesterday 😄). This avoids the need for a higher-level utils.py-type file to house a shared function like get_current_namespace.

Let me know what you think.

@anishasthana
Contributor Author

My suggestion is that we delete get_current_namespace(), as it's not used by any other function, and instead rely on the openshift package to call openshift.get_project() wherever we need the project name (basically your idea from yesterday 😄). This avoids the need for a higher-level utils.py-type file to house a shared function like get_current_namespace.

Question about that -- yesterday we also said that there is a need to allow users to determine what namespace they are in from the API. How would we structure things for that?

@MichaelClifford
Collaborator

Question about that -- yesterday we also said that there is a need to allow users to determine what namespace they are in from the API. How would we structure things for that?

I did say that 😄. But on reflection, I'm not sure how critical that really is, especially if we are outputting it somewhere when we define the job or cluster objects.

If this becomes an issue we could add it in a later PR, as a utils function like I suggested. But I think correctly auto-setting the namespace is a much more important feature than just getting it.

That said, if you disagree, by all means go the utils route :)

@anishasthana anishasthana force-pushed the update_appwrapper_instascale branch from 046a86f to bf1c57c Compare April 7, 2023 18:09

@MichaelClifford MichaelClifford left a comment


thanks @anishasthana
LGTM!

@anishasthana anishasthana merged commit 14969f2 into project-codeflare:main Apr 7, 2023
@anishasthana anishasthana deleted the update_appwrapper_instascale branch April 7, 2023 18:38