
Building Calico with Kubernetes


Integrating Calico with Kubernetes

Calico enables networking and network policy in Kubernetes clusters across the cloud. These instructions provide the steps to integrate Calico with Kubernetes on Linux on IBM Z for the following distributions:

  • RHEL (7.8, 7.9, 8.4, 8.6, 8.7, 9.0, 9.1)
  • SLES (12 SP5, 15 SP4)
  • Ubuntu (18.04, 20.04, 22.04, 22.10)

General Notes:

  • When following the steps below, please use a user with standard permissions unless otherwise specified.

  • A directory /<source_root>/ will be referred to in these instructions; this is a temporary writable directory that can be created anywhere you like.

  • The following build instructions were tested using Kubernetes version 1.26.1.
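
  • The steps below assume Docker, git and curl are installed and that GOPATH is set. A minimal preparation sketch (the paths used here are examples only, not required values) is shown below:

    # Example working directory; any writable location can serve as /<source_root>/
    export SOURCE_ROOT=$HOME/source_root
    mkdir -p $SOURCE_ROOT

    # Step 2 clones code under $GOPATH, so make sure it is set and writable
    export GOPATH=$SOURCE_ROOT/go
    mkdir -p $GOPATH/src

    # Confirm the tools used in the remaining steps are available
    docker --version
    git --version
    curl --version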

Step 1: Build Calico basic components

Instructions for building the basic Calico components, which include calico/ctl, calico/node and calico/kube-controllers, can be found here.

Step 2: Build Kubernetes Support

2.1) Export Environment Variable

export PATCH_URL=https://raw.githubusercontent.com/linux-on-ibm-z/scripts/master/Calico/3.24.5/patch

2.2) Build the Tigera Operator image

  • This builds a Docker image, tigera/operator, that will be used to manage the lifecycle of a Calico installation on Kubernetes.

    mkdir -p $GOPATH/src/github.com/tigera/operator
    git clone -b v1.28.5 https://github.com/tigera/operator $GOPATH/src/github.com/tigera/operator

    cd $GOPATH/src/github.com/tigera/operator
    curl -s $PATCH_URL/operator.patch | git apply -
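    # Adapt the upstream amd64 Dockerfile for s390x and give the existing
    # calico/go-build image an s390x-suffixed tag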
    mv build/Dockerfile.amd64 build/Dockerfile.s390x
    sed -i 's/amd64/s390x/g' build/Dockerfile.s390x
    docker tag calico/go-build:v0.65.1 calico/go-build:v0.65.1-s390x
    
    ARCH=s390x GO_BUILD_VER=v0.75 make image
    # The built image needs to be tagged with the version number to work correctly with Kubernetes
    docker tag tigera/operator:latest-s390x quay.io/tigera/operator:v1.28.5
  • Verify that the following images are now on the system (a quick check command is shown after the list):

    REPOSITORY                                  TAG
    calico/pod2daemon-flexvol                   latest-s390x
    calico/kube-controllers                     latest-s390x
    calico/flannel-migration-controller         latest-s390x
    calico/node                                 latest-s390x
    calico/cni                                  latest-s390x
    calico/felix                                latest-s390x
    calico/typha                                latest-s390x
    calico/ctl                                  latest-s390x
    calico/apiserver                            latest-s390x
    tigera/operator                             latest-s390x
    calico/pod2daemon-flexvol                   v3.24.5
    calico/kube-controllers                     v3.24.5
    calico/flannel-migration-controller         v3.24.5
    calico/node                                 v3.24.5
    calico/cni                                  v3.24.5
    calico/felix                                v3.24.5
    calico/typha                                v3.24.5
    calico/ctl                                  v3.24.5
    calico/apiserver                            v3.24.5
    calico/go-build                             v0.65.1
    quay.io/tigera/operator                     v1.28.5
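
    For a quick check, you can filter the local Docker images for the Calico and Tigera repositories:

    docker images | grep -E 'calico|tigera'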

Step 3: Install Calico in Kubernetes environment

Once you have all the necessary components built on IBM Z, you can:

  • Configure and run Kubernetes by following the instructions here

  • Install the Calico policy controller via the Tigera Calico operator by following the instructions here

    Note: You need to modify the tigera-operator.yaml file to set the version of the tigera/operator image to v1.28.5 before running kubectl create -f tigera-operator.yaml (a substitution sketch is shown after the pod listing below).

  • The following pods are expected after a successful deployment:

NAMESPACE          NAME                                       READY   STATUS    RESTARTS      AGE
calico-apiserver   calico-apiserver-678dc75449-lnm7b          1/1     Running   0             44s
calico-apiserver   calico-apiserver-678dc75449-pw8d4          1/1     Running   0             44s
calico-system      calico-kube-controllers-557cb7fd8b-qddzj   1/1     Running   0             76s
calico-system      calico-node-vdnzp                          1/1     Running   0             76s
calico-system      calico-typha-76b74f9d55-jdzkv              1/1     Running   0             76s
kube-system        coredns-64897985d-g2vfb                    1/1     Running   0             9m7s
kube-system        coredns-64897985d-xxd8z                    1/1     Running   0             9m7s
kube-system        etcd-<target node>                         1/1     Running   1 (10m ago)   9m23s
kube-system        kube-apiserver-<target node>               1/1     Running   1 (10m ago)   9m26s
kube-system        kube-controller-manager-<target node>      1/1     Running   1 (10m ago)   9m18s
kube-system        kube-proxy-4f8dr                           1/1     Running   0             9m7s
kube-system        kube-scheduler-<target node>               1/1     Running   1 (10m ago)   9m23s
tigera-operator    tigera-operator-788f8549cf-srwjt           1/1     Running   0             2m24s
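
For the note above about the tigera/operator image version: assuming the default tigera-operator.yaml layout, where the operator Deployment references a quay.io/tigera/operator image with a version tag, one way to pin it to the locally tagged image is a simple substitution (a sketch only; verify the image line in your copy of the manifest first):

# Sketch only: adjust the pattern if the image reference in tigera-operator.yaml differs
sed -i 's|quay.io/tigera/operator:v[0-9.]*|quay.io/tigera/operator:v1.28.5|' tigera-operator.yaml
grep 'image:' tigera-operator.yaml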

Step 4: (Optional) Use Calico network policy on top of flannel networking (Canal)

  • Ensure you have a Calico compatible Kubernetes cluster

  • Download the flannel networking manifest for the Kubernetes API datastore

wget --no-check-certificate https://docs.projectcalico.org/manifests/canal.yaml
  • Modify the canal.yaml file to point to the correct container images generated during the Calico build (see the sketch after this list), specifically:
image: docker.io/calico/cni:v3.24.5
image: docker.io/calico/pod2daemon-flexvol:v3.24.5
image: docker.io/calico/node:v3.24.5
image: docker.io/calico/kube-controllers:v3.24.5
  • Issue the following command to install Calico:
kubectl apply -f canal.yaml
  • The following pods are expected upon a successful deployment:
NAMESPACE     NAME                                             READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-56dc597597-lhzwk         1/1     Running   0          23m
kube-system   canal-t6bf4                                      2/2     Running   0          23m
kube-system   coredns-558bd4d5db-hwknt                         1/1     Running   0          39m
kube-system   coredns-558bd4d5db-sk2rd                         1/1     Running   0          39m
kube-system   etcd-<target-node>                               1/1     Running   0          39m
kube-system   kube-apiserver-<target node>                     1/1     Running   0          39m
kube-system   kube-controller-manager-<target node>            1/1     Running   0          39m
kube-system   kube-proxy-4bqwg                                 1/1     Running   0          39m
kube-system   kube-scheduler-<target node>                     1/1     Running   0          39m
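
For the canal.yaml image changes above: assuming the downloaded manifest references docker.io/calico/<component> images with an upstream version tag, one way to point them all at the locally built v3.24.5 images is the substitution sketched below (review the edited file before applying it):

# Sketch only: adjust the pattern if the image references in canal.yaml differ
sed -i 's|\(docker\.io/calico/[a-z0-9-]*\):v[0-9.]*|\1:v3.24.5|g' canal.yaml
grep 'image:' canal.yaml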

Step 5: Usage samples
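
As a minimal sample (an illustrative sketch, with a hypothetical namespace name), a standard Kubernetes NetworkPolicy enforced by Calico can be applied to deny all ingress traffic to pods in a test namespace:

# Sketch: create a test namespace and apply a default-deny ingress policy
kubectl create namespace policy-demo
kubectl apply -n policy-demo -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF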
