Cassandra Across Multiple Availability Zones
============================================

With rack awareness
-------------------

Navigator supports running Cassandra with
[rack and datacenter-aware replication](https://docs.datastax.com/en/cassandra/latest/cassandra/architecture/archDataDistributeReplication.html).
To deploy this, run a `nodePool` in each availability zone and mark each one as a separate Cassandra rack.

The
[`nodeSelector`](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector)
field of a nodePool schedules the nodePool's pods onto the set of nodes whose labels match.
Use it with a zone label such as
[`failure-domain.beta.kubernetes.io/zone`](https://kubernetes.io/docs/reference/labels-annotations-taints/#failure-domainbetakubernetesiozone)
(superseded by `topology.kubernetes.io/zone` on newer Kubernetes versions).

The `datacenter` and `rack` fields mark all Cassandra nodes in a nodePool as being located in that datacenter and rack.
This information can then be used with the
[`NetworkTopologyStrategy`](http://cassandra.apache.org/doc/latest/architecture/dynamo.html#network-topology-strategy)
keyspace replica placement strategy.
If these fields are not specified, Navigator selects an appropriate name for each: `datacenter` defaults to a static value, and `rack` defaults to the nodePool's name.

As an example, here is the `nodePools` section of a CassandraCluster spec for deploying into GKE in europe-west1 with rack awareness enabled:

```yaml
  nodePools:
  - name: "np-europe-west1-b"
    replicas: 3
    datacenter: "europe-west1"
    rack: "europe-west1-b"
    nodeSelector:
      failure-domain.beta.kubernetes.io/zone: "europe-west1-b"
    persistence:
      enabled: true
      size: "5Gi"
      storageClass: "default"
  - name: "np-europe-west1-c"
    replicas: 3
    datacenter: "europe-west1"
    rack: "europe-west1-c"
    nodeSelector:
      failure-domain.beta.kubernetes.io/zone: "europe-west1-c"
    persistence:
      enabled: true
      size: "5Gi"
      storageClass: "default"
  - name: "np-europe-west1-d"
    replicas: 3
    datacenter: "europe-west1"
    rack: "europe-west1-d"
    nodeSelector:
      failure-domain.beta.kubernetes.io/zone: "europe-west1-d"
    persistence:
      enabled: true
      size: "5Gi"
      storageClass: "default"
```
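Once the cluster is topology-aware, a keyspace can place replicas per datacenter with `NetworkTopologyStrategy`; Cassandra then tries to spread those replicas across distinct racks (here, distinct zones). A sketch, in which the keyspace name `app_data` and the replication factor are illustrative and the datacenter name must match the `datacenter` field used above:

```cql
-- Sketch: place 3 replicas in the "europe-west1" datacenter; with the
-- three racks above, Cassandra spreads them across availability zones.
-- "app_data" and the replication factor 3 are illustrative choices.
CREATE KEYSPACE app_data WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'europe-west1': 3
};
```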

Without rack awareness
----------------------

Because the default rack name is the nodePool's name, each nodePool forms its own rack by default.
To disable rack awareness, set `rack` to the same static value in every nodePool.

A simplified example:

```yaml
  nodePools:
  - name: "np-europe-west1-b"
    replicas: 3
    datacenter: "europe-west1"
    rack: "default-rack"
    nodeSelector:
      failure-domain.beta.kubernetes.io/zone: "europe-west1-b"
  - name: "np-europe-west1-c"
    replicas: 3
    datacenter: "europe-west1"
    rack: "default-rack"
    nodeSelector:
      failure-domain.beta.kubernetes.io/zone: "europe-west1-c"
  - name: "np-europe-west1-d"
    replicas: 3
    datacenter: "europe-west1"
    rack: "default-rack"
    nodeSelector:
      failure-domain.beta.kubernetes.io/zone: "europe-west1-d"
```
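With every node in a single rack, replica placement no longer distinguishes zones, so a keyspace can use the simpler `SimpleStrategy`, which places replicas on consecutive ring nodes regardless of rack or datacenter. A sketch, with an illustrative keyspace name and replication factor:

```cql
-- Sketch: rack awareness is disabled, so SimpleStrategy suffices; it
-- ignores rack and datacenter when placing replicas.
-- "app_data" and the replication factor 3 are illustrative choices.
CREATE KEYSPACE app_data WITH replication = {
  'class': 'SimpleStrategy',
  'replication_factor': 3
};
```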