Building Calico with Kubernetes

linuxonz edited this page Oct 19, 2023 · 49 revisions

Integrating Calico with Kubernetes

Calico enables networking and network policy in Kubernetes clusters across the cloud. These instructions provide the steps to integrate Calico with Kubernetes on Linux on IBM Z for the following distributions:

  • RHEL (7.8, 7.9, 8.6, 8.8, 9.0, 9.2)
  • SLES (12 SP5, 15 SP4, 15 SP5)
  • Ubuntu (20.04, 22.04, 23.04)

General Notes:

  • When following the steps below, please use a standard permission user unless otherwise specified.

  • A directory /<source_root>/ will be referred to in these instructions; this is a temporary writable directory anywhere you'd like to place it.

  • The following build instructions were tested using Kubernetes version 1.27.4 and, for a few distributions, version 1.28.2.
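Before starting, it may help to confirm the build tools used in the steps below are installed; a minimal sketch (the exact tool list is an assumption based on these instructions):

```shell
# Report any build prerequisite that is not on PATH.
# (The list is an assumption; adjust for your environment.)
for tool in git docker go make curl; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing prerequisite: $tool"
done
```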

Step 1: Build Calico basic components

Instructions for building the basic Calico components, which include calico/ctl, calico/node, and calico/kube-controllers, can be found here

Step 2: Build Kubernetes Support

2.1) Export Environment Variable

export PATCH_URL=https://raw.githubusercontent.com/linux-on-ibm-z/scripts/master/Calico/3.26.1/patch

2.2) Build the Tigera Operator image

  • This builds a Docker image, tigera/operator, that will be used to manage the lifecycle of a Calico installation on Kubernetes.

    mkdir -p $GOPATH/src/github.com/tigera/operator
    git clone -b v1.30.4 https://github.com/tigera/operator $GOPATH/src/github.com/tigera/operator
    
    cd $GOPATH/src/github.com/tigera/operator
    curl -s $PATCH_URL/operator.patch | git apply -
    make image
    # The built image needs to be tagged with the version number to work correctly with Kubernetes
    docker tag tigera/operator:latest-s390x quay.io/tigera/operator:v1.30.4
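After the build, you can confirm the retagged image is present; a small sketch, assuming the docker CLI and the tag applied above:

```shell
# List the operator image under its release tag; empty output means the
# docker tag step above did not take effect.
docker images quay.io/tigera/operator --format '{{.Repository}}:{{.Tag}}' \
  | grep 'v1.30.4'
```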

2.3) Build the node-driver-registrar image

cd $GOPATH/src/github.com/projectcalico/calico/pod2daemon/
make image
docker tag calico/node-driver-registrar:latest-s390x calico/node-driver-registrar:v3.26.1

  • Verify the following images are on the system:

    REPOSITORY                                  TAG
    calico/kube-controllers                       latest-s390x
    calico/node-driver-registrar                  latest-s390x
    tigera/operator                               latest-s390x
    calico/felix                                  latest-s390x
    calico/node                                   latest-s390x
    calico/typha                                  latest-s390x
    calico/dikastes                               latest-s390x
    calico/flannel-migration-controller           latest-s390x
    calico/apiserver                              latest-s390x
    calico/cni                                    latest-s390x
    calico/ctl                                    latest-s390x
    calico/csi                                    latest-s390x
    calico/pod2daemon-flexvol                     latest-s390x
    calico/pod2daemon                             latest-s390x
    calico/bird                                   latest-s390x
    calico/kube-controllers                       v3.26.1
    calico/node-driver-registrar                  v3.26.1
    calico/felix                                  v3.26.1
    calico/node                                   v3.26.1
    calico/typha                                  v3.26.1
    calico/dikastes                               v3.26.1
    calico/flannel-migration-controller           v3.26.1
    calico/apiserver                              v3.26.1
    calico/cni                                    v3.26.1
    calico/ctl                                    v3.26.1
    calico/pod2daemon-flexvol                     v3.26.1
    calico/pod2daemon                             v3.26.1
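The verification above can also be scripted; a sketch that loops over the versioned images (names taken from the table above; adjust if your registry prefixes differ):

```shell
# Report any expected v3.26.1 image that is not present locally.
expected="calico/kube-controllers calico/node-driver-registrar calico/felix \
calico/node calico/typha calico/dikastes calico/flannel-migration-controller \
calico/apiserver calico/cni calico/ctl calico/pod2daemon-flexvol calico/pod2daemon"
for img in $expected; do
  docker image inspect "$img:v3.26.1" >/dev/null 2>&1 || echo "missing: $img:v3.26.1"
done
```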

Step 3: Install Calico in Kubernetes environment

Once you have all necessary components built on Z systems, you can

  • Configure and run your Kubernetes cluster by following the instructions here

  • Install Calico as per the instructions; ensure tigera-operator.yaml and custom-resources.yaml have correct values reflecting the operational cluster:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml
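Before checking pod status, it can be useful to wait for the operator and the Calico pods to become ready; a sketch (the timeout values are assumptions):

```shell
# Wait for the operator deployment, then for the Calico pods it creates.
kubectl -n tigera-operator rollout status deployment/tigera-operator --timeout=180s
kubectl -n calico-system wait --for=condition=Ready pods --all --timeout=300s
```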
  • The following pods are expected after a successful deployment:
NAMESPACE          NAME                                               READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-6df957cc7c-nqw97                  1/1     Running   0          31s
calico-apiserver   calico-apiserver-6df957cc7c-xbdmr                  1/1     Running   0          30s
calico-system      calico-kube-controllers-687cf68bbc-875ss           1/1     Running   0          59s
calico-system      calico-node-z7t68                                  1/1     Running   0          60s
calico-system      calico-typha-7f464b8467-j6xp6                      1/1     Running   0          60s
calico-system      csi-node-driver-98tl6                              2/2     Running   0          60s
kube-flannel       kube-flannel-ds-7tdnj                              1/1     Running   0          5m10s
kube-system        coredns-5d78c9869d-2cqx9                           1/1     Running   0          7m22s
kube-system        coredns-5d78c9869d-kspr4                           1/1     Running   0          7m22s
kube-system        etcd-pandurang11.fyre.ibm.com                      1/1     Running   5          7m34s
kube-system        kube-apiserver-pandurang11.fyre.ibm.com            1/1     Running   5          7m36s
kube-system        kube-controller-manager-pandurang11.fyre.ibm.com   1/1     Running   1          7m34s
kube-system        kube-proxy-d4jww                                   1/1     Running   0          7m22s
kube-system        kube-scheduler-pandurang11.fyre.ibm.com            1/1     Running   5          7m35s
tigera-operator    tigera-operator-5f4668786-tcmx4                    1/1     Running   0          6m45s

Step 4: (Optional) Use Calico network policy on top of flannel networking (Canal)

  • Ensure you have a Calico-compatible Kubernetes cluster

  • Download and apply the Canal manifest (flannel networking with Calico network policy) for the Kubernetes API datastore

curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/canal.yaml -O
kubectl apply -f canal.yaml
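To confirm the Canal rollout completed, you can wait on its DaemonSet; a sketch (the DaemonSet name canal in kube-system matches the pod listing here, but verify it on your cluster):

```shell
# Wait for the canal DaemonSet to be rolled out on all nodes.
kubectl -n kube-system rollout status daemonset/canal --timeout=300s
```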
  • The following pods are expected upon successful deployment:
NAMESPACE          NAME                                               READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-6df957cc7c-nqw97                  1/1     Running   0          2m
calico-apiserver   calico-apiserver-6df957cc7c-xbdmr                  1/1     Running   0          119s
calico-system      calico-kube-controllers-687cf68bbc-875ss           1/1     Running   0          2m28s
calico-system      calico-node-z7t68                                  1/1     Running   0          2m29s
calico-system      calico-typha-7f464b8467-j6xp6                      1/1     Running   0          2m29s
calico-system      csi-node-driver-98tl6                              2/2     Running   0          2m29s
kube-flannel       kube-flannel-ds-7tdnj                              1/1     Running   0          6m39s
kube-system        calico-kube-controllers-85578c44bf-rqf2v           1/1     Running   0          19s
kube-system        canal-4wcgc                                        2/2     Running   0          19s
kube-system        coredns-5d78c9869d-2cqx9                           1/1     Running   0          8m51s
kube-system        coredns-5d78c9869d-kspr4                           1/1     Running   0          8m51s
kube-system        etcd-pandurang11.fyre.ibm.com                      1/1     Running   5          9m3s
kube-system        kube-apiserver-pandurang11.fyre.ibm.com            1/1     Running   5          9m5s
kube-system        kube-controller-manager-pandurang11.fyre.ibm.com   1/1     Running   1          9m3s
kube-system        kube-proxy-d4jww                                   1/1     Running   0          8m51s
kube-system        kube-scheduler-pandurang11.fyre.ibm.com            1/1     Running   5          9m4s
tigera-operator    tigera-operator-5f4668786-tcmx4                    1/1     Running   0          8m14s

Networking test tutorial
