[ws-manager] Make cluster selection filter by application cluster #14000
Conversation
Allow filtering by name and `applicationCluster`.
started the job as gitpod-build-af-use-app-cluster-in-ws-manager-api.8 because the annotations in the pull request description changed
started the job as gitpod-build-af-use-app-cluster-in-ws-manager-api.9 because the annotations in the pull request description changed
```diff
 public async getWorkspaceCluster(name: string): Promise<WorkspaceCluster | undefined> {
-    return this.clusters.find(m => m.name === name);
+    return this.clusters.find((m) => m.name === name && m.applicationCluster === this.applicationCluster);
 }
```
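A minimal self-contained sketch of the new selection behavior; the types and constructor shape here are assumptions for illustration, not the actual `ws-manager-api` code:

```ts
interface WorkspaceCluster {
    name: string;
    applicationCluster: string;
}

class ClusterDB {
    constructor(
        private readonly clusters: WorkspaceCluster[],
        private readonly applicationCluster: string,
    ) {}

    public async getWorkspaceCluster(name: string): Promise<WorkspaceCluster | undefined> {
        // Only rows registered to this application cluster are considered.
        return this.clusters.find(
            (m) => m.name === name && m.applicationCluster === this.applicationCluster,
        );
    }
}

// With rows for both regions present, the EU application cluster only
// ever sees the eu70 row that belongs to it.
const db = new ClusterDB(
    [
        { name: "eu70", applicationCluster: "eu" },
        { name: "eu70", applicationCluster: "us" },
    ],
    "eu",
);
db.getWorkspaceCluster("eu70").then((c) => console.log(c?.applicationCluster)); // -> "eu"
```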
For debugging purposes, I'd recommend logging the following:
- The set of clusters available
- The set of clusters which match this application cluster
We can add these as log fields. This would help us identify scenarios when the filtering logic doesn't work as expected.
The extra benefit of this is if we accidentally end up in a situation where there is more than 1 cluster that matches, we'd see that in the logs when debugging.
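A sketch of what such logging could look like; `console.debug` stands in for Gitpod's actual structured logger, and the field names are illustrative assumptions:

```ts
interface WorkspaceCluster {
    name: string;
    applicationCluster: string;
}

function getWorkspaceCluster(
    clusters: WorkspaceCluster[],
    applicationCluster: string,
    name: string,
): WorkspaceCluster | undefined {
    const matching = clusters.filter(
        (c) => c.name === name && c.applicationCluster === applicationCluster,
    );
    // Log both the full candidate set and the filtered set as structured fields.
    console.debug("getWorkspaceCluster", {
        name,
        applicationCluster,
        allClusters: clusters.map((c) => c.name),
        matchingClusters: matching.map((c) => c.name),
    });
    // If matchingClusters ever contains more than one entry, duplicate
    // registrations show up directly in the logs.
    return matching[0];
}
```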
LGTM. Left a Q around logging but it can be tackled in a follow-up PR.
Description
Context
As part of #9198 we want to start syncing the `d_b_workspace_cluster` table with `db-sync`. Currently, the table differs between the US and EU regions because each table contains only the data relevant to that region. For example, in the EU table the `eu70` workspace cluster is marked as `available` and the `us70` cluster is `cordoned`; in the US table `eu70` is `cordoned` and `us70` is `available`. In order to sync the table, we need to get to a point where there is no difference in the table's data between the EU and US regions.
To do that we will introduce a new field in the table called `applicationCluster` which records the name of the application cluster to which the record belongs. Thus, for each workspace cluster there will be two rows in Gitpod SaaS, one per application cluster (see the illustrative rows below). Effectively, the new `applicationCluster` column gives the table an extra dimension so that we can combine both tables (EU and US) into one.
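For illustration, the merged table could contain rows like the following; the application cluster names `"eu"` and `"us"` are placeholders, not the real values:

```ts
// Hypothetical post-merge contents of d_b_workspace_cluster for the
// eu70/us70 example above: two rows per workspace cluster, one per
// application cluster.
const rows = [
    { name: "eu70", applicationCluster: "eu", state: "available" },
    { name: "us70", applicationCluster: "eu", state: "cordoned" },
    { name: "eu70", applicationCluster: "us", state: "cordoned" },
    { name: "us70", applicationCluster: "us", state: "available" },
];
```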
#13722 added the column to the table and made `gpctl` fill in the value when `gpctl register`ing a new workspace cluster. The value is taken from the `GITPOD_INSTALLATION_SHORTNAME` environment variable in `ws-manager-bridge`.

This PR makes the cluster selection mechanism in `ws-manager-api` aware of the application cluster to which each workspace cluster is registered. Previously, selection was by workspace cluster name only. Soon, as described in the context section, each region in Gitpod will store database records for all workspace clusters, not just the ones in its own region. This PR ensures that workspace clusters not registered to the same application cluster will not be considered.
Related Issue(s)
Part of #9198 and #13800
How to test
Edit the `server` deployment and change the `WSMAN_CFG_MANAGERS` environment variable so that the `applicationCluster` to which the default workspace cluster in the preview environment is registered points elsewhere. Note that the `WSMAN_CFG_MANAGERS` environment variable is base64 encoded; a sketch for editing it follows.
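A minimal sketch of the decode/edit/re-encode round trip, assuming the decoded value is a JSON array of cluster entries with an `applicationCluster` field (the exact shape of `WSMAN_CFG_MANAGERS` may differ):

```ts
// Run with Node after exporting the current value of WSMAN_CFG_MANAGERS.
// Assumption: the decoded payload is a JSON array of cluster configs.
const encoded = process.env.WSMAN_CFG_MANAGERS ?? "";
const managers: { applicationCluster?: string }[] = JSON.parse(
    Buffer.from(encoded, "base64").toString("utf-8"),
);

// Point every cluster at a non-existent application cluster to test the filter.
for (const m of managers) {
    m.applicationCluster = "does-not-exist";
}

// Paste the output back into the server deployment.
console.log(Buffer.from(JSON.stringify(managers), "utf-8").toString("base64"));
```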
Once the `server` deployment is edited, it should no longer be possible to start a workspace, as the only workspace cluster is now associated with a different (non-existent) application cluster.

Edit the `WSMAN_CFG_MANAGERS` environment variable once more and set the `applicationCluster` field back to `""`. Starting a workspace should now proceed as normal (the application cluster in preview environments is currently called `""`).
).Release Notes
Documentation
Werft options:
- If enabled this will build `install/preview`
- Valid options are `all`, `workspace`, `webapp`, `ide`