This repository was archived by the owner on Apr 4, 2023. It is now read-only.

Commit fb2505c

Merge pull request #273 from kragniz/sphinx
Automatic merge from submit-queue.

Initial sphinx documentation

This is a start to documentation managed by Sphinx. Related to #268.

```release-note
none
```
2 parents d3ae156 + 9fc673a commit fb2505c

21 files changed: +485 additions, −441 deletions

Makefile

Lines changed: 4 additions & 0 deletions
```diff
@@ -54,6 +54,10 @@ verify: .hack_verify dep_verify go_verify helm_verify
 	@echo Running generated client checker:
 	@${HACK_DIR}/verify-client-gen.sh
 
+doc_verify:
+	make -C docs html
+	make -C docs check
+
 dep_verify:
 	${HACK_DIR}/verify-deps.sh
 
```

README.md

Lines changed: 10 additions & 71 deletions
````diff
@@ -1,4 +1,4 @@
-# Navigator - managed DBaaS on Kubernetes [![Build Status Widget]][Build Status]
+# Navigator - self managed DBaaS on Kubernetes [![Build Status Widget]][Build Status]
 
 Navigator is a Kubernetes extension for managing common stateful services on
 Kubernetes. It is implemented as a custom apiserver that operates behind
@@ -10,47 +10,13 @@ other resource in Kubernetes core. This means you can manage fine-grained
 permissions via conventional RBAC rules, allowing you to offer popular but
 complex services "as a Service" within your organisation.
 
-To get started, jump to the [quick-start guide](docs/quick-start).
+For more in-depth information and to get started, jump to the [docs](https://navigator-dbaas.readthedocs.io).
 
-## Design
+Here's a quick demo of creating, scaling and deleting a Cassandra database:
 
-As well as following "the operator model", Navigator additionally introduces
-the concept of 'Pilots' - small 'nanny' processes that run inside each pod
-in your application deployment. These Pilots are responsible for managing the
-lifecycle of your underlying application process (e.g. an Elasticsearch JVM
-process) and periodically report state information about the individual node
-back to the Navigator API.
+![](docs/images/demo.gif)
 
-By separating this logic into it's own binary that is run alongside each node,
-in certain failure events the Pilot is able to intervene in order to help
-prevent data loss, or otherwise update the Navigator API with details of the
-failure so that navigator-controller can take action to restore service.
-
-Navigator has a few unique traits that differ from similar projects (such as
-elasticsearch-operator, etcd-operator etc).
-
-- **navigator-apiserver** - this takes on a similar role to `kube-apiserver`.
-It is responsible for storing and coordinating all of the state stored for
-Navigator. It requires a connection to an etcd cluster in order to do this. In
-order to make Navigator API types generally consumable to users of your cluster,
-it registers itself with kube-aggregator. It performs validation of your
-resources, as well as performing conversions between API versions which allow
-us to maintain a stable API without hindering development.
-
-- **navigator-controller** - the controller is akin to `kube-controller-manager`.
-It is responsible for actually realising your deployments within the Kubernetes
-cluster. It can be seen as the 'operator' for the various applications
-supported by `navigator-apiserver`.
-
-- **pilots** - the pilot is responsible for managing each database process.
-Currently Navigator has two types: `pilot-elasticsearch` and
-`pilot-cassandra`.
-
-## Architecture
-
-![alt text](docs/arch.jpg)
-
-## Supported applications
+## Supported databases
 
 Whilst we aim to support as many common applications as possible, it does take
 a certain level of operational knowledge of the applications in question in
@@ -62,38 +28,11 @@ Please search for or create an issue for the application in question you'd like
 to see a part of Navigator, and we can begin discussion on implementation &
 planning.
 
-| Name          | Version   | Status      | Notes                                                       |
-| ------------- | --------- | ----------- | ----------------------------------------------------------- |
-| Elasticsearch | 5.x       | Alpha       | [more info](docs/supported-types/elasticsearch-cluster.md)  |
-| Cassandra     | 3.x       | Alpha       | [more info](docs/supported-types/cassandra-cluster.md)      |
-| Couchbase     |           | Coming soon |                                                             |
-
-## Links
-
-* [Quick-start](docs/quick-start)
-* [Developing quick-start](docs/developing.md)
-* [Resource types](docs/supported-types/README.md)
-* [ElasticsearchCluster](docs/supported-types/elasticsearch-cluster.md)
-
-
-## E2E Testing
-
-Navigator has an end-to-end test suite which verifies that Navigator can be
-installed [as documented in the quick start guide](docs/quick-start). The
-tests are run on a Minikube cluster. Run the tests using the following
-sequence of commands:
-
-```
-minikube start
-# This ensures that the Docker image will be built in the Minikube VM
-eval $(minikube docker-env)
-# Override the Docker image tag so that it is built as :latest
-# (the tag used in the documented deployment)
-# XXX: This is a hack.
-# Better if we had a helm chart in the documentation,
-# so that we could provide an alternative navigator image and tag.
-make BUILD_TAG=latest e2e-test
-```
+| Name          | Version   | Status      | Notes                                                                             |
+| ------------- | --------- | ----------- | --------------------------------------------------------------------------------- |
+| Elasticsearch | 5.x       | Alpha       | [more info](https://navigator-dbaas.readthedocs.io/en/latest/elasticsearch.html)  |
+| Cassandra     | 3.x       | Alpha       | [more info](https://navigator-dbaas.readthedocs.io/en/latest/cassandra.html)      |
+| Couchbase     |           | Coming soon |                                                                                   |
 
 ## Credits
 
````

docs/Makefile

Lines changed: 27 additions & 0 deletions
New file:

```make
# You can set these variables from the command line.
SPHINXOPTS  =
SPHINXBUILD = $(VENV_PATH)/bin/python -msphinx
SPHINXPROJ  = Navigator
SOURCEDIR   = .
VENV_PATH   = .venv
BUILDDIR    = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: .venv Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.venv:
	virtualenv -p $(shell which python3) $(VENV_PATH)
	$(VENV_PATH)/bin/pip install -r requirements.txt
	touch .venv

check: .venv
	$(SPHINXBUILD) "$(SOURCEDIR)" "$(BUILDDIR)" -b linkcheck
	$(SPHINXBUILD) "$(SOURCEDIR)" "$(BUILDDIR)" -b spelling
```
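The key trick in this Makefile is the `%:` catch-all rule, which routes any unknown target (`html`, `linkcheck`, `spelling`, ...) through a single Sphinx invocation. A minimal, self-contained sketch of that pattern (toy `echo` recipes rather than the Sphinx build; `.RECIPEPREFIX` is used here only to avoid literal tab characters and needs GNU make 3.82+):

```shell
# Toy demonstration of the catch-all "%:" pattern used by the Sphinx Makefile.
# Not the Navigator build itself: every target name is routed to one recipe.
workdir=$(mktemp -d)
cat > "$workdir/Makefile" <<'EOF'
.RECIPEPREFIX := >
help:
>@echo "routed: help"

.PHONY: help Makefile

# Any other target (html, linkcheck, ...) falls through to this rule.
%: Makefile
>@echo "routed: $@"
EOF
out_default=$(make --no-print-directory -C "$workdir")       # first target wins
out_html=$(make --no-print-directory -C "$workdir" html)     # caught by "%:"
echo "$out_default"
echo "$out_html"
rm -rf "$workdir"
```

This is why the real Makefile never needs an explicit `html` or `spelling` rule: Sphinx's "make mode" (`-M $@`) receives the target name and dispatches to the matching builder.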

docs/cassandra.rst

Lines changed: 98 additions & 0 deletions
New file:

```rst
Cassandra
=========

Example cluster definition
--------------------------

Example ``CassandraCluster`` resource:

.. include:: quick-start/cassandra-cluster.yaml
   :literal:

Cassandra Across Multiple Availability Zones
--------------------------------------------

With rack awareness
~~~~~~~~~~~~~~~~~~~

Navigator supports running Cassandra with
`rack and datacenter-aware replication <https://docs.datastax.com/en/cassandra/latest/cassandra/architecture/archDataDistributeReplication.html>`_.
To deploy this, you must run a ``nodePool`` in each availability zone, and mark each as a separate Cassandra rack.

The
`nodeSelector <https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector>`_
field of a nodePool allows scheduling the nodePool to a set of nodes matching labels.
This should be used with a node label such as
`failure-domain.beta.kubernetes.io/zone <https://kubernetes.io/docs/reference/labels-annotations-taints/#failure-domainbetakubernetesiozone>`_.

The ``datacenter`` and ``rack`` fields mark all Cassandra nodes in a nodePool as being located in that datacenter and rack.
This information can then be used with the
`NetworkTopologyStrategy <http://cassandra.apache.org/doc/latest/architecture/dynamo.html#network-topology-strategy>`_
keyspace replica placement strategy.
If these are not specified, Navigator will select an appropriate name for each: ``datacenter`` defaults to a static value, and ``rack`` defaults to the nodePool's name.

As an example, here is the nodePool section of a CassandraCluster spec for deploying into GKE in europe-west1 with rack awareness enabled:

.. code-block:: yaml

   nodePools:
   - name: "np-europe-west1-b"
     replicas: 3
     datacenter: "europe-west1"
     rack: "europe-west1-b"
     nodeSelector:
       failure-domain.beta.kubernetes.io/zone: "europe-west1-b"
     persistence:
       enabled: true
       size: "5Gi"
       storageClass: "default"
   - name: "np-europe-west1-c"
     replicas: 3
     datacenter: "europe-west1"
     rack: "europe-west1-c"
     nodeSelector:
       failure-domain.beta.kubernetes.io/zone: "europe-west1-c"
     persistence:
       enabled: true
       size: "5Gi"
       storageClass: "default"
   - name: "np-europe-west1-d"
     replicas: 3
     datacenter: "europe-west1"
     rack: "europe-west1-d"
     nodeSelector:
       failure-domain.beta.kubernetes.io/zone: "europe-west1-d"
     persistence:
       enabled: true
       size: "5Gi"
       storageClass: "default"

Without rack awareness
~~~~~~~~~~~~~~~~~~~~~~

Since the default rack name is equal to the nodePool name,
simply set the rack name to the same static value in each nodePool to disable rack awareness.

A simplified example:

.. code-block:: yaml

   nodePools:
   - name: "np-europe-west1-b"
     replicas: 3
     datacenter: "europe-west1"
     rack: "default-rack"
     nodeSelector:
       failure-domain.beta.kubernetes.io/zone: "europe-west1-b"
   - name: "np-europe-west1-c"
     replicas: 3
     datacenter: "europe-west1"
     rack: "default-rack"
     nodeSelector:
       failure-domain.beta.kubernetes.io/zone: "europe-west1-c"
   - name: "np-europe-west1-d"
     replicas: 3
     datacenter: "europe-west1"
     rack: "default-rack"
     nodeSelector:
       failure-domain.beta.kubernetes.io/zone: "europe-west1-d"
```
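The defaulting rule this page documents (an explicit ``rack`` wins; an unset one falls back to the nodePool's name) can be sketched as a tiny shell helper. This is an illustration of the described behaviour only, not Navigator code, and ``default_rack`` is a hypothetical name:

```shell
# Illustration of the documented defaulting rule, not Navigator code:
# an explicit rack name wins; an empty one falls back to the nodePool name.
default_rack() {
  pool_name=$1
  rack=$2
  if [ -n "$rack" ]; then
    echo "$rack"
  else
    echo "$pool_name"
  fi
}

rack_aware=$(default_rack "np-europe-west1-b" "")                 # -> np-europe-west1-b
rack_disabled=$(default_rack "np-europe-west1-b" "default-rack")  # -> default-rack
echo "$rack_aware"
echo "$rack_disabled"
```

This is why the "without rack awareness" example must set ``rack`` explicitly in every nodePool: leaving it unset would give each pool a distinct rack name.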

docs/cassandra/multi-az.md

Lines changed: 0 additions & 87 deletions
This file was deleted.
