content/admin/enterprise-management/configuring-high-availability-replication-for-a-cluster.md (+61 -43)
@@ -57,32 +57,36 @@ Before you define a secondary datacenter for your passive nodes, ensure that you
     mysql-master = <em>HOSTNAME</em>
     redis-master = <em>HOSTNAME</em>
     <strong>primary-datacenter = default</strong>
-    ```
+    ```

   - Optionally, change the name of the primary datacenter to something more descriptive or accurate by editing the value of `primary-datacenter`.

 4. {% data reusables.enterprise_clustering.configuration-file-heading %} Under each node's heading, add a new key-value pair to assign the node to a datacenter. Use the same value as `primary-datacenter` from step 3 above. For example, if you want to use the default name (`default`), add the following key-value pair to the section for each node.

-    datacenter = default
+    ```
+    datacenter = default
+    ```

     When you're done, the section for each node in the cluster configuration file should look like the following example. {% data reusables.enterprise_clustering.key-value-pair-order-irrelevant %}

-    ```shell
-    [cluster "<em>HOSTNAME</em>"]
-    <strong>datacenter = default</strong>
-    hostname = <em>HOSTNAME</em>
-    ipv4 = <em>IP ADDRESS</em>
+    ```shell
+    [cluster "<em>HOSTNAME</em>"]
+    <strong>datacenter = default</strong>
+    hostname = <em>HOSTNAME</em>
+    ipv4 = <em>IP ADDRESS</em>
+    ...
     ...
-    ...
-    ```
+    ```

-    {% note %}
+    {% note %}

-    **Note**: If you changed the name of the primary datacenter in step 3, find the `consul-datacenter` key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter `primary`, use the following key-value pair for each node.
+    **Note**: If you changed the name of the primary datacenter in step 3, find the `consul-datacenter` key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter `primary`, use the following key-value pair for each node.

-    consul-datacenter = primary
+    ```
+    consul-datacenter = primary
+    ```

-    {% endnote %}
+    {% endnote %}

 {% data reusables.enterprise_clustering.apply-configuration %}

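Because the cluster configuration file uses Git's configuration syntax, the same per-node assignment can be scripted instead of edited by hand. A minimal sketch, assuming the file path used later in this guide (`/data/user/common/cluster.conf`) and a node section named `[cluster "HOSTNAME"]`:

```shell
# Assign one node to the default datacenter without opening an editor.
$ git config -f /data/user/common/cluster.conf cluster.<em>HOSTNAME</em>.datacenter default
```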
@@ -103,31 +107,37 @@ For an example configuration, see "[Example configuration](#example-configuration)."

 1. For each node in your cluster, provision a matching virtual machine with identical specifications, running the same version of {% data variables.product.prodname_ghe_server %}. Note the IPv4 address and hostname for each new cluster node. For more information, see "[Prerequisites](#prerequisites)."

-    {% note %}
+    {% note %}

-    **Note**: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead.
+    **Note**: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead.

-    {% endnote %}
+    {% endnote %}

 {% data reusables.enterprise_clustering.ssh-to-a-node %}
 4. Create a copy of your existing cluster configuration file in a temporary location, like _/home/admin/cluster-passive.conf_. Delete unique key-value pairs for IP addresses (`ipv*`), UUIDs (`uuid`), and public keys for WireGuard (`wireguard-pubkey`).
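One way to produce that stripped copy in a single pass; a sketch, not part of the original step, assuming the active file lives at `/data/user/common/cluster.conf` and the keys to drop are `ipv4`, `ipv6`, `uuid`, and `wireguard-pubkey`:

```shell
# Copy the cluster config, dropping node-unique keys that must not be duplicated.
$ grep -Ev '^\s*(ipv4|ipv6|uuid|wireguard-pubkey)\s*=' /data/user/common/cluster.conf > ~/cluster-passive.conf
```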
 6. Decide on a name for the secondary datacenter where you provisioned your passive nodes, then update the temporary cluster configuration file with the new datacenter name. Replace `SECONDARY` with the name you choose.

     ```shell
-    sed -i 's/datacenter = default/datacenter = <em>SECONDARY</em>/g'~/cluster-passive.conf
-    ```
+    sed -i 's/datacenter = default/datacenter = <em>SECONDARY</em>/g' ~/cluster-passive.conf
+    ```

 7. Decide on a pattern for the passive nodes' hostnames.

@@ -140,7 +150,7 @@ For an example configuration, see "[Example configuration](#example-configuration)."

 8. Open the temporary cluster configuration file from step 4 in a text editor. For example, you can use Vim.

     ```shell
-    sudo vim ~/cluster-passive.conf
+    sudo vim ~/cluster-passive.conf
     ```

 9. In each section within the temporary cluster configuration file, update the node's configuration. {% data reusables.enterprise_clustering.configuration-file-heading %}
@@ -150,37 +160,37 @@ For an example configuration, see "[Example configuration](#example-configuration)."

    - Add a new key-value pair, `replica = enabled`.

     ```shell
-    [cluster "<em>NEW PASSIVE NODE HOSTNAME</em>"]
-    ...
-    hostname = <em>NEW PASSIVE NODE HOSTNAME</em>
-    ipv4 = <em>NEW PASSIVE NODE IPV4 ADDRESS</em>
-    <strong>replica = enabled</strong>
+    [cluster "<em>NEW PASSIVE NODE HOSTNAME</em>"]
+    ...
+    hostname = <em>NEW PASSIVE NODE HOSTNAME</em>
+    ipv4 = <em>NEW PASSIVE NODE IPV4 ADDRESS</em>
+    <strong>replica = enabled</strong>
+    ...
     ...
-    ...
     ```

 10. Append the contents of the temporary cluster configuration file that you created in step 4 to the active configuration file.
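A sketch of that append, assuming the active configuration file is `/data/user/common/cluster.conf` as in the surrounding steps:

```shell
# Append the passive node sections to the live cluster configuration.
$ cat ~/cluster-passive.conf | sudo tee -a /data/user/common/cluster.conf > /dev/null
```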
 11. Designate the primary MySQL and Redis nodes in the secondary datacenter. Replace `REPLICA MYSQL PRIMARY HOSTNAME` and `REPLICA REDIS PRIMARY HOSTNAME` with the hostnames of the passive nodes that you provisioned to match your existing MySQL and Redis primaries.

     ```shell
-    git config -f /data/user/common/cluster.conf cluster.mysql-master-replica <em>REPLICA MYSQL PRIMARY HOSTNAME</em>
-    git config -f /data/user/common/cluster.conf cluster.redis-master-replica <em>REPLICA REDIS PRIMARY HOSTNAME</em>
     ```

-    **Warning**: Review your cluster configuration file before proceeding.
+    **Warning**: Review your cluster configuration file before proceeding.

    - In the top-level `[cluster]` section, ensure that the values for `mysql-master-replica` and `redis-master-replica` are the correct hostnames for the passive nodes in the secondary datacenter that will serve as the MySQL and Redis primaries after a failover.
    - In each section for an active node named <code>[cluster "<em>ACTIVE NODE HOSTNAME</em>"]</code>, double-check the following key-value pairs.
@@ -194,9 +204,9 @@ For an example configuration, see "[Example configuration](#example-configuration)."

    - `replica` should be configured as `enabled`.

    - Take the opportunity to remove sections for offline nodes that are no longer in use.

-    To review an example configuration, see "[Example configuration](#example-configuration)."
+    To review an example configuration, see "[Example configuration](#example-configuration)."

-    {% endwarning %}
+    {% endwarning %}

 13. Initialize the new cluster configuration. {% data reusables.enterprise.use-a-multiplexer %}

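The initialization command itself sits outside this hunk. As a sketch, assuming the standard `ghe-cluster-config-init` utility from the {% data variables.product.prodname_ghe_server %} administrative shell:

```shell
# Initialize the cluster configuration on the new nodes.
# Run inside a terminal multiplexer (for example, screen or tmux); this is long-running.
$ ghe-cluster-config-init
```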
@@ -207,7 +217,7 @@ For an example configuration, see "[Example configuration](#example-configuration)."

 14. After the initialization finishes, {% data variables.product.prodname_ghe_server %} displays the following message.

     ```shell
-    Finished cluster initialization
+    Finished cluster initialization
     ```

 {% data reusables.enterprise_clustering.apply-configuration %}
@@ -294,19 +304,27 @@ You can monitor the progress on any node in the cluster, using command-line tools.

 - Monitor replication of repository and Gist data:

-    ghe-spokes status
+    ```
+    ghe-spokes status
+    ```

 - Monitor replication of attachment and LFS data:

-    ghe-storage replication-status
+    ```
+    ghe-storage replication-status
+    ```

 - Monitor replication of Pages data:

-    ghe-dpages replication-status
+    ```
+    ghe-dpages replication-status
+    ```

 You can use `ghe-cluster-status` to review the overall health of your cluster. For more information, see "[Command-line utilities](/enterprise/admin/configuration/command-line-utilities#ghe-cluster-status)."
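All of these utilities run in the administrative shell. As a usage sketch, assuming the standard administrative SSH port (122) and `admin` user, you can also invoke them from a workstation:

```shell
# Check overall cluster health remotely over the administrative SSH port.
$ ssh -p 122 admin@<em>HOSTNAME</em> ghe-cluster-status
```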
content/admin/installation/installing-github-enterprise-server-on-google-cloud-platform.md (+25 -25)
@@ -27,7 +27,7 @@ Before launching {% data variables.product.product_location %} on Google Cloud Platform

 {% data variables.product.prodname_ghe_server %} is supported on the following Google Compute Engine (GCE) machine types. For more information, see [the Google Cloud Platform machine types article](https://cloud.google.com/compute/docs/machine-types).

 | High-memory |
- ------------- |
+| ------------- |
 | n1-highmem-4 |
 | n1-highmem-8 |
 | n1-highmem-16 |
@@ -54,7 +54,7 @@ Based on your user license count, we recommend these machine types.

 1. Using the [gcloud compute](https://cloud.google.com/compute/docs/gcloud-compute/) command-line tool, list the public {% data variables.product.prodname_ghe_server %} images:

     ```shell
     $ gcloud compute images list --project github-enterprise-public --no-standard-images
-    ```
+    ```

 2. Take note of the image name for the latest GCE image of {% data variables.product.prodname_ghe_server %}.

@@ -63,18 +63,18 @@ Based on your user license count, we recommend these machine types.

 GCE virtual machines are created as a member of a network, which has a firewall. For the network associated with the {% data variables.product.prodname_ghe_server %} VM, you'll need to configure the firewall to allow the required ports listed in the table below. For more information about firewall rules on Google Cloud Platform, see the Google guide "[Firewall Rules Overview](https://cloud.google.com/vpc/docs/firewalls)."

 1. Using the gcloud compute command-line tool, create the network. For more information, see "[gcloud compute networks create](https://cloud.google.com/sdk/gcloud/reference/compute/networks/create)" in the Google documentation.
-    ```shell
-    $ gcloud compute networks create <em>NETWORK-NAME</em> --subnet-mode auto
-    ```
+    ```shell
+    $ gcloud compute networks create <em>NETWORK-NAME</em> --subnet-mode auto
+    ```
 2. Create a firewall rule for each of the ports in the table below. For more information, see "[gcloud compute firewall-rules](https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/)" in the Google documentation.

    This table identifies the required ports and what each port is used for.

-    {% data reusables.enterprise_installation.necessary_ports %}
+    {% data reusables.enterprise_installation.necessary_ports %}

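As a sketch of step 2, with flag names from the gcloud reference and placeholder rule and network names, a single HTTPS rule looks like this; repeat it for each required port in the table:

```shell
# Allow inbound HTTPS (443) on the VM's network; create one rule per required port.
$ gcloud compute firewall-rules create <em>RULE-NAME</em> --network <em>NETWORK-NAME</em> --allow tcp:443
```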

 ### Allocating a static IP and assigning it to the VM

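The body of this section falls outside the hunks in this diff. As a sketch of the allocation step, with the command and flags from the gcloud reference and placeholder names:

```shell
# Reserve a regional static external IP address to assign to the VM.
$ gcloud compute addresses create <em>IP-NAME</em> --region <em>REGION</em>
```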
@@ -87,21 +87,21 @@ In production High Availability configurations, both primary and replica appliances

 To create the {% data variables.product.prodname_ghe_server %} instance, you'll need to create a GCE instance with your {% data variables.product.prodname_ghe_server %} image and attach an additional storage volume for your instance data. For more information, see "[Hardware considerations](#hardware-considerations)."

 1. Using the gcloud compute command-line tool, create a data disk to use as an attached storage volume for your instance data, and configure the size based on your user license count. For more information, see "[gcloud compute disks create](https://cloud.google.com/sdk/gcloud/reference/compute/disks/create)" in the Google documentation.
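A sketch for step 1, with flags from the gcloud reference; the size, disk type, and names are placeholder assumptions:

```shell
# Create the data disk that will hold the appliance's instance data.
$ gcloud compute disks create <em>DATA-DISK-NAME</em> --size 200GB --type pd-ssd --zone <em>ZONE</em>
```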
 2. Then create an instance using the name of the {% data variables.product.prodname_ghe_server %} image you selected, and attach the data disk. For more information, see "[gcloud compute instances create](https://cloud.google.com/sdk/gcloud/reference/compute/instances/create)" in the Google documentation.
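And for step 2, a sketch that attaches the disk at creation time; the machine type and names are assumptions for illustration:

```shell
# Boot the appliance from the GitHub Enterprise Server image with the data disk attached.
$ gcloud compute instances create <em>INSTANCE-NAME</em> \
    --machine-type n1-highmem-8 \
    --image <em>GITHUB-ENTERPRISE-IMAGE-NAME</em> \
    --image-project github-enterprise-public \
    --disk name=<em>DATA-DISK-NAME</em> \
    --zone <em>ZONE</em>
```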
-This file `alpine-3.3.tar.gz` is ready to be uploaded to the {% data variables.product.prodname_ghe_server %} appliance.
+This file `alpine-3.3.tar.gz` is ready to be uploaded to the {% data variables.product.prodname_ghe_server %} appliance.

 ### Creating a pre-receive hook environment using chroot

 1. Create a Linux `chroot` environment.
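One way to create the environment on a Debian-based host; a sketch only, since the wiki guides linked at the end of this section cover Debian, Ubuntu, and Alpine in detail:

```shell
# Bootstrap a minimal Debian root filesystem to serve as the chroot.
$ sudo debootstrap --variant=minbase stable /path/to/chroot
```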
 2. Create a `gzip` compressed `tar` file of the `chroot` directory.
-    ```shell
-    $ cd /path/to/chroot
-    $ tar -czf /path/to/pre-receive-environment.tar.gz .
+    ```shell
+    $ cd /path/to/chroot
+    $ tar -czf /path/to/pre-receive-environment.tar.gz .
     ```

-    {% note %}
+    {% note %}

-    **Notes:**
-    - Do not include leading directory paths of files within the tar archive, such as `/path/to/chroot`.
-    - `/bin/sh` must exist and be executable, as the entry point into the chroot environment.
-    - Unlike traditional chroots, the `dev` directory is not required by the chroot environment for pre-receive hooks.
+    **Notes:**
+    - Do not include leading directory paths of files within the tar archive, such as `/path/to/chroot`.
+    - `/bin/sh` must exist and be executable, as the entry point into the chroot environment.
+    - Unlike traditional chroots, the `dev` directory is not required by the chroot environment for pre-receive hooks.

-    {% endnote %}
+    {% endnote %}

 For more information about creating a chroot environment see "[Chroot](https://wiki.debian.org/chroot)" from the *Debian Wiki*, "[BasicChroot](https://help.ubuntu.com/community/BasicChroot)" from the *Ubuntu Community Help Wiki*, or "[Installing Alpine Linux in a chroot](http://wiki.alpinelinux.org/wiki/Installing_Alpine_Linux_in_a_chroot)" from the *Alpine Linux Wiki*.

@@ -94,4 +94,4 @@ For more information about creating a chroot environment see "[Chroot](https://wiki.debian.org/chroot)" from the *Debian Wiki*