# Updating a v1alpha1 provider to a v1alpha2 infrastructure provider

This document outlines how to update a Cluster API (CAPI) v1alpha1 provider to a v1alpha2 infrastructure provider.

* [Updating a v1alpha1 provider to a v1alpha2 infrastructure provider](#updating-a-v1alpha1-provider-to-a-v1alpha2-infrastructure-provider)
  * [General information](#general-information)
    * [The new API groups](#the-new-api-groups)
    * [Kubebuilder](#kubebuilder)
    * [Sample code and other examples](#sample-code-and-other-examples)
  * [Create a branch for new v1alpha1 work](#create-a-branch-for-new-v1alpha1-work)
  * [Update the API group in the `PROJECT` file](#update-the-api-group-in-the-project-file)
  * [Create the provider's v1alpha2 resources](#create-the-providers-v1alpha2-resources)
    * [The cluster and machine resources](#the-cluster-and-machine-resources)
    * [The spec and status types](#the-spec-and-status-types)
      * [Infrastructure provider cluster status fields](#infrastructure-provider-cluster-status-fields)
        * [Infrastructure provider cluster status `ready`](#infrastructure-provider-cluster-status-ready)
        * [Infrastructure provider cluster status `apiEndpoints`](#infrastructure-provider-cluster-status-apiendpoints)
  * [Create the infrastructure controllers](#create-the-infrastructure-controllers)
    * [The infrastructure provider cluster controller](#the-infrastructure-provider-cluster-controller)
    * [The infrastructure provider machine controller](#the-infrastructure-provider-machine-controller)

## General information

This section contains several general notes about the update process.

### The new API groups

This section describes the API groups used by CAPI v1alpha2:

| Group | Description |
|---|---|
| `cluster.x-k8s.io` | The root CAPI API group |
| `infrastructure.cluster.x-k8s.io` | The API group for all resources related to CAPI infrastructure providers |
| `bootstrap.cluster.x-k8s.io` | The API group for all resources related to CAPI bootstrap providers |

Only SIG-sponsored providers may declare their components or resources to belong to any API group that ends with `x-k8s.io`.

Externally owned providers should use an API group appropriate to their ownership and will require additional RBAC rules to be configured and deployed for the common cluster-api components.

### Kubebuilder

While [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder) v2 is available, the recommended approach for updating a CAPI provider to v1alpha2 is to stick with kubebuilder v1 during the update process and then reevaluate kubebuilder v2 after a successful migration to CAPI v1alpha2.

Please note that if webhooks are required, it may be necessary to migrate to kubebuilder v2 as part of the initial migration.

### Sample code and other examples

This document uses the CAPI provider for AWS ([CAPA](https://github.com/kubernetes-sigs/cluster-api-provider-aws)) for sample code and other examples.

## Create a branch for new v1alpha1 work

This document assumes the work required to update a provider to v1alpha2 will occur on the project's `master` branch. Therefore, the recommendation is to create a branch `release-MAJOR.MINOR` in the repository from the latest v1alpha1-based release. For example, if the latest release of a provider based on CAPI v1alpha1 was `v0.4.1`, then the branch `release-0.4` should be created. This frees the project's `master` branch to be the target for the work required to update the provider to v1alpha2, while fixes or backported features for the v1alpha1 version of the provider may target the `release-0.4` branch.

## Update the API group in the `PROJECT` file

Please update the `PROJECT` file at the root of the provider's repository to reflect the API group `cluster.x-k8s.io`:

```properties
version: "1"
domain: cluster.x-k8s.io
repo: sigs.k8s.io/cluster-api-provider-aws
```

## Create the provider's v1alpha2 resources

The new v1alpha2 types are located in `pkg/apis/infrastructure/v1alpha2`.

### The cluster and machine resources

Providers no longer store configuration and status data for clusters and machines in the CAPI `Cluster` and `Machine` resources. Instead, this information is stored in two new, provider-specific CRDs:

* `pkg/apis/infrastructure/v1alpha2.`_Provider_`Cluster`
* `pkg/apis/infrastructure/v1alpha2.`_Provider_`Machine`

For example, the AWS provider defines the following types (a minimal sketch of the overall pattern follows this list):

* [`sigs.k8s.io/cluster-api-provider-aws/pkg/apis/infrastructure/v1alpha2.AWSCluster`](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/6de25b31def9b4203a3c0a92b868a1819ea6e3e7/pkg/apis/infrastructure/v1alpha2/awscluster_types.go#L138-L146)
* [`sigs.k8s.io/cluster-api-provider-aws/pkg/apis/infrastructure/v1alpha2.AWSMachine`](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/6de25b31def9b4203a3c0a92b868a1819ea6e3e7/pkg/apis/infrastructure/v1alpha2/awsmachine_types.go#L144-L152)

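
Both resources follow the standard kubebuilder layout: a top-level object that embeds object metadata plus a spec and a status. The sketch below is illustrative only; it mirrors the shape of the linked `AWSCluster` type, the kubebuilder markers are abbreviated, and the `AWSClusterSpec` and `AWSClusterStatus` types it references are discussed in the next section:

```golang
package v1alpha2

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// AWSCluster is the Schema for the awsclusters API.
// (Illustrative sketch; see the linked CAPA source for the real definition.)
type AWSCluster struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    // Spec holds the desired state of the AWS-specific cluster infrastructure.
    Spec AWSClusterSpec `json:"spec,omitempty"`

    // Status holds the observed state, including the `ready` and
    // `apiEndpoints` fields described below.
    Status AWSClusterStatus `json:"status,omitempty"`
}
```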
### The spec and status types

The `Spec` and `Status` types used to store configuration and status information are effectively the same in v1alpha2 as they were in v1alpha1:

| v1alpha1 | v1alpha2 |
|---|---|
| [`sigs.k8s.io/cluster-api-provider-aws/pkg/apis/awsprovider/v1alpha1.AWSClusterProviderSpec`](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/e6a57dc61826b8c7806eba22a513c9722c420754/pkg/apis/awsprovider/v1alpha1/awsclusterproviderconfig_types.go#L30-L65) | [`sigs.k8s.io/cluster-api-provider-aws/pkg/apis/infrastructure/v1alpha2.AWSClusterSpec`](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/6de25b31def9b4203a3c0a92b868a1819ea6e3e7/pkg/apis/infrastructure/v1alpha2/awscluster_types.go#L33-L43) |
| [`sigs.k8s.io/cluster-api-provider-aws/pkg/apis/awsprovider/v1alpha1.AWSClusterProviderStatus`](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/e6a57dc61826b8c7806eba22a513c9722c420754/pkg/apis/awsprovider/v1alpha1/awsclusterproviderstatus_types.go#L26-L35) | [`sigs.k8s.io/cluster-api-provider-aws/pkg/apis/infrastructure/v1alpha2.AWSClusterStatus`](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/6de25b31def9b4203a3c0a92b868a1819ea6e3e7/pkg/apis/infrastructure/v1alpha2/awscluster_types.go#L116-L124) |
| [`sigs.k8s.io/cluster-api-provider-aws/pkg/apis/awsprovider/v1alpha1.AWSMachineProviderSpec`](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/e6a57dc61826b8c7806eba22a513c9722c420754/pkg/apis/awsprovider/v1alpha1/awsmachineproviderconfig_types.go#L28-L97) | [`sigs.k8s.io/cluster-api-provider-aws/pkg/apis/infrastructure/v1alpha2.AWSMachineSpec`](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/6de25b31def9b4203a3c0a92b868a1819ea6e3e7/pkg/apis/infrastructure/v1alpha2/awsmachine_types.go#L31-L87) |
| [`sigs.k8s.io/cluster-api-provider-aws/pkg/apis/awsprovider/v1alpha1.AWSMachineProviderStatus`](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/e6a57dc61826b8c7806eba22a513c9722c420754/pkg/apis/awsprovider/v1alpha1/awsmachineproviderstatus_types.go#L26-L44) | [`sigs.k8s.io/cluster-api-provider-aws/pkg/apis/infrastructure/v1alpha2.AWSMachineStatus`](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/6de25b31def9b4203a3c0a92b868a1819ea6e3e7/pkg/apis/infrastructure/v1alpha2/awsmachine_types.go#L89-L139) |

Information related to kubeadm or certificates has been extracted and is now owned by the bootstrap provider and its corresponding resources, e.g. `KubeadmConfig`.

#### Infrastructure provider cluster status fields

A CAPI v1alpha2 provider cluster status resource has two special fields, `ready` and `apiEndpoints`. For example, take the `AWSClusterStatus`:

```golang
// AWSClusterStatus defines the observed state of AWSCluster
type AWSClusterStatus struct {
    Ready bool `json:"ready"`
    // APIEndpoints represents the endpoints to communicate with the control plane.
    // +optional
    APIEndpoints []APIEndpoint `json:"apiEndpoints,omitempty"`
}
```

##### Infrastructure provider cluster status `ready`

A _Provider_`Cluster`'s `status` object must define a boolean field named `ready` and set its value to `true` only when the infrastructure required to provision a cluster is ready and available.

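
For example, an infrastructure cluster controller might set this flag at the end of a successful reconcile. This is only a sketch of the idea; in CAPA the status is managed through the controller's cluster scope helper rather than set inline like this:

```golang
// All required infrastructure (network, security groups, load balancer, etc.)
// now exists, so signal to the CAPI controllers that the cluster
// infrastructure is ready to be used.
awsCluster.Status.Ready = true
```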

##### Infrastructure provider cluster status `apiEndpoints`

A _Provider_`Cluster`'s `status` object may optionally define a field named `apiEndpoints` that is a list of the following objects:

```golang
// APIEndpoint represents a reachable Kubernetes API endpoint.
type APIEndpoint struct {
    // The hostname on which the API server is serving.
    Host string `json:"host"`

    // The port on which the API server is serving.
    Port int `json:"port"`
}
```

If present, this field is automatically inspected by CAPI in order to obtain an endpoint at which the Kubernetes cluster may be accessed.

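
For example, a provider whose control plane sits behind a load balancer might populate the field as follows. This is a sketch only; `apiServerLoadBalancerDNSName` is a placeholder for wherever the provider obtains its endpoint, not an actual CAPA variable:

```golang
// Record the control plane endpoint so that CAPI can surface it on the
// corresponding Cluster resource.
awsCluster.Status.APIEndpoints = []infrastructurev1alpha2.APIEndpoint{
    {
        Host: apiServerLoadBalancerDNSName,
        Port: 6443,
    },
}
```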

## Create the infrastructure controllers

The actuator model from v1alpha1 has been replaced by the infrastructure controllers in v1alpha2:

| v1alpha1 | v1alpha2 |
|---|---|
| [`sigs.k8s.io/cluster-api-provider-aws/pkg/cloud/aws/actuators/cluster.Actuator`](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/e6a57dc61826b8c7806eba22a513c9722c420754/pkg/cloud/aws/actuators/cluster/actuator.go#L50-L57) | [`sigs.k8s.io/cluster-api-provider-aws/pkg/controller/awscluster.ReconcileAWSCluster`](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/0a05127734a4fb955742b27c6e326a65821851ce/pkg/controller/awscluster/awscluster_controller.go#L98-L103) |
| [`sigs.k8s.io/cluster-api-provider-aws/pkg/cloud/aws/actuators/machine.Actuator`](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/e6a57dc61826b8c7806eba22a513c9722c420754/pkg/cloud/aws/actuators/machine/actuator.go#L57-L65) | [`sigs.k8s.io/cluster-api-provider-aws/pkg/controller/awsmachine.ReconcileAWSMachine`](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/0a05127734a4fb955742b27c6e326a65821851ce/pkg/controller/awsmachine/awsmachine_controller.go#L104-L109) |

### The infrastructure provider cluster controller

Instead of processing the CAPI `Cluster` resources like the old actuator model, the new provider cluster controller operates on the new provider `Cluster` CRD. However, the overall workflow should feel the same as the old cluster actuator. For example, the `AWSCluster` controller's [reconcile function](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/0a05127734a4fb955742b27c6e326a65821851ce/pkg/controller/awscluster/awscluster_controller.go#L105-L162) does the following:

1. Fetches the [`AWSCluster` resource](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/0a05127734a4fb955742b27c6e326a65821851ce/pkg/controller/awscluster/awscluster_controller.go#L113-L121):

   ```golang
   awsCluster := &infrastructurev1alpha2.AWSCluster{}
   err := r.Get(ctx, request.NamespacedName, awsCluster)
   if err != nil {
       if apierrors.IsNotFound(err) {
           return reconcile.Result{}, nil
       }
       return reconcile.Result{}, err
   }
   ```

2. [Fetches the CAPI cluster resource](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/0a05127734a4fb955742b27c6e326a65821851ce/pkg/controller/awscluster/awscluster_controller.go#L125-L133) that has a one-to-one relationship with the `AWSCluster` resource:

   ```golang
   cluster, err := util.GetOwnerCluster(ctx, r.Client, awsCluster.ObjectMeta)
   if err != nil {
       return reconcile.Result{}, err
   }
   if cluster == nil {
       logger.Info("Waiting for Cluster Controller to set OwnerRef on AWSCluster")
       return reconcile.Result{}, nil
   }
   ```

   If the `AWSCluster` resource does not yet have a corresponding CAPI cluster resource, the reconcile exits early; once the CAPI controller assigns the OwnerRef to the `AWSCluster` resource, a new reconcile event is triggered.

3. Uses a `defer` statement to [ensure the `AWSCluster` resource is always patched](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/0a05127734a4fb955742b27c6e326a65821851ce/pkg/controller/awscluster/awscluster_controller.go#L148-L153) back to the API server:

   ```golang
   defer func() {
       if err := clusterScope.Close(); err != nil && reterr == nil {
           reterr = err
       }
   }()
   ```

4. Handles both [deleted and non-deleted](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/0a05127734a4fb955742b27c6e326a65821851ce/pkg/controller/awscluster/awscluster_controller.go#L155-L161) cluster resources:

   ```golang
   // Handle deleted clusters
   if !awsCluster.DeletionTimestamp.IsZero() {
       return reconcileDelete(clusterScope)
   }

   // Handle non-deleted clusters
   return reconcileNormal(clusterScope)
   ```

### The infrastructure provider machine controller

The new provider machine controller is a slightly larger departure from the v1alpha1 machine actuator. This is because the machine actuator was based around a _Create_, _Read_, _Update_, _Delete_ (CRUD) model: providers implementing the v1alpha1 machine actuator would implement each of those four functions. However, that model was just an abstract way to represent a Kubernetes controller's reconcile loop.

The new v1alpha2 provider machine controller simply takes the same CRUD model from the v1alpha1 machine actuator and expresses it as a Kubernetes reconcile loop. The CAPI provider for vSphere (CAPV) includes a diagram that illustrates the v1alpha1 machine actuator CRUD operations as a reconcile loop.

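
In practice, this means the four actuator operations collapse into a single reconcile function that branches on the observed state of the machine. The sketch below illustrates that shape only; it is not CAPA's actual controller code, and the `MachineScope` type along with the `reconcileDelete`, `findInstance`, `createInstance`, and `updateInstance` helpers are hypothetical placeholders:

```golang
// reconcile folds the old Create/Read/Update/Delete actuator calls into a
// single loop driven by the observed state of the AWSMachine resource.
func (r *ReconcileAWSMachine) reconcile(machineScope *MachineScope) (reconcile.Result, error) {
    // Delete: the AWSMachine resource is being removed.
    if !machineScope.AWSMachine.DeletionTimestamp.IsZero() {
        return r.reconcileDelete(machineScope)
    }

    // Read: look up the backing cloud instance, if any.
    instance, err := r.findInstance(machineScope)
    if err != nil {
        return reconcile.Result{}, err
    }

    // Create: no instance exists yet, so provision one.
    if instance == nil {
        return r.createInstance(machineScope)
    }

    // Update: an instance exists, so converge it toward the desired spec.
    return r.updateInstance(machineScope, instance)
}
```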