This repository was archived by the owner on Apr 4, 2023. It is now read-only.

Conversation

@kragniz
Contributor

@kragniz kragniz commented Feb 28, 2018


@wallrj (Member) left a comment

Thanks @kragniz

I left a few comments below.

I propose we adhere to http://rhodesmill.org/brandon/2012/one-sentence-per-line/ in our docs.

It'll make diffs easier to read and review comments easier to pinpoint.

@@ -0,0 +1,51 @@
Cassandra across multiple availability zones
Member

Maybe Use Title Case


Navigator supports running Cassandra with rack and datacenter-aware
replication. To deploy this, you must run a `nodePool` in each availability
zone, and mark each as a separate Cassandra rack.
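
As an illustrative sketch only (field names follow the examples quoted later in this thread; the exact CassandraCluster schema should be checked against the Navigator docs), one nodePool per zone might look like:

```yaml
# Hypothetical CassandraCluster fragment: one nodePool per availability zone,
# each pinned to its zone via nodeSelector and marked as a separate rack.
spec:
  nodePools:
  - name: "np-europe-west1-b"
    replicas: 3
    datacenter: "europe-west1"
    rack: "europe-west1-b"
    nodeSelector:
      failure-domain.beta.kubernetes.io/zone: "europe-west1-b"
  - name: "np-europe-west1-c"
    replicas: 3
    datacenter: "europe-west1"
    rack: "europe-west1-c"
    nodeSelector:
      failure-domain.beta.kubernetes.io/zone: "europe-west1-c"
```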
Member

Add a link to Cassandra docs on DC / rack awareness.


The `nodeSelector` field of a nodePool allows scheduling the nodePool to a set
of nodes matching labels. This should be used with a node label such as
`failure-domain.beta.kubernetes.io/zone`.
Member

❔ Maybe add a link to kubernetes failure-domain label docs. If there are any.


The `datacenter` and `rack` fields mark all Cassandra nodes in a nodePool as
being located in that datacenter and rack. This information can then be used
with the `NetworkTopologyStrategy` keyspace replica placement strategy.
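
For example, a keyspace replicated three ways across the racks of the datacenter above could be created with a standard CQL statement like this (the keyspace name is hypothetical, not part of this PR):

```cql
CREATE KEYSPACE my_keyspace
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'europe-west1': 3
  };
```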
Member

❔ Link to NetworkTopologyStrategy docs.

```yaml
- name: "np-europe-west1-b"
  replicas: 3
  datacenter: "europe-west1"
  rack: "europe-west1-b"
```
Member

❔ This makes me wonder what happens if I used rack: "b". Does Cassandra know that rack b in europe-west1 is different than rack b in us-west1?

Member

And if not, perhaps we should add a note that rack labels must be unique.

Contributor Author

Ec2Snitch simply labels a rack in us-east-1a as 1a, so it seems Cassandra knows the difference:

https://github.com/apache/cassandra/blob/4a50f4467f439f9a00aac6aa4d3b83a6c1c8d6c4/src/java/org/apache/cassandra/locator/Ec2Snitch.java#L54-L62

The current version is failing with an obscure sed command.
@kragniz
Contributor Author

kragniz commented Feb 28, 2018

The current hack/verify-links.sh is failing to parse the markdown added in this patch, so I've updated it with the version in https://github.com/duglin/vlinker which seems to work better.


The `nodeSelector` field of a nodePool allows scheduling the nodePool to a set of nodes matching labels.
This should be used with a node label such as
[`failure-domain.beta.kubernetes.io/zone`](https://kubernetes.io/docs/reference/labels-annotations-taints/#failure-domainbetakubernetesiozone).
Contributor

Would be good to add a link out to the official docs on nodeSelector. It's not a concept we've invented and there are probably better descriptions out there that we want to include 😄

Contributor Author

👍

[`NetworkTopologyStrategy`](http://cassandra.apache.org/doc/latest/architecture/dynamo.html#network-topology-strategy)
keyspace replica placement strategy.

As an example, the nodePool section of a CassandraCluster spec for deploying into GKE in europe-west1:
Contributor

... "with rack awareness/network topology strategy/whatever it is called enabled"

```yaml
datacenter: "europe-west1"
rack: "europe-west1-c"
nodeSelector:
  failure-domain.beta.kubernetes.io/zone: "europe-west1-c"
```
Contributor

Did we drop the "auto-pull zone from nodeSelector into rack attribute" functionality for now?

Contributor Author

Yes, I figure it's better to be explicit about it for now. We can add it in later if we think it'll be useful.

@munnerz
Contributor

munnerz commented Mar 1, 2018

As we default the 'rack' to the node pool name, perhaps we should also document how someone would disable rack awareness? I worry that we've made it harder for users to do this, and it might be unexpected during some failure scenario (and difficult to change after initial deployment). Otherwise lgtm - happy to add the lgtm label once you've responded to this query.

@kragniz
Contributor Author

kragniz commented Mar 1, 2018

I updated the doc with two sections: with and without rack awareness.

@wallrj (Member) left a comment

/lgtm

@jetstack-bot
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: wallrj

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS Files:

You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@jetstack-ci-bot
Contributor

/test all [submit-queue is verifying that this PR is safe to merge]

@jetstack-ci-bot
Contributor

Automatic merge from submit-queue.

@jetstack-ci-bot jetstack-ci-bot merged commit d3ae156 into jetstack:master Mar 6, 2018
