
Commit 93c24ac

Merge pull request #675 from nschonni/code-fence-indents
fix: Bad code fence indenting in ordered lists
2 parents: e0cae0a + 8506c52

5 files changed: +146 / -128 lines changed


content/admin/enterprise-management/configuring-high-availability-replication-for-a-cluster.md

Lines changed: 61 additions & 43 deletions
@@ -57,32 +57,36 @@ Before you define a secondary datacenter for your passive nodes, ensure that you
 mysql-master = <em>HOSTNAME</em>
 redis-master = <em>HOSTNAME</em>
 <strong>primary-datacenter = default</strong>
-```
+```

 - Optionally, change the name of the primary datacenter to something more descriptive or accurate by editing the value of `primary-datacenter`.

 4. {% data reusables.enterprise_clustering.configuration-file-heading %} Under each node's heading, add a new key-value pair to assign the node to a datacenter. Use the same value as `primary-datacenter` from step 3 above. For example, if you want to use the default name (`default`), add the following key-value pair to the section for each node.

-datacenter = default
+```
+datacenter = default
+```

 When you're done, the section for each node in the cluster configuration file should look like the following example. {% data reusables.enterprise_clustering.key-value-pair-order-irrelevant %}

-```shell
-[cluster "<em>HOSTNAME</em>"]
-<strong>datacenter = default</strong>
-hostname = <em>HOSTNAME</em>
-ipv4 = <em>IP ADDRESS</em>
+```shell
+[cluster "<em>HOSTNAME</em>"]
+<strong>datacenter = default</strong>
+hostname = <em>HOSTNAME</em>
+ipv4 = <em>IP ADDRESS</em>
+...
 ...
-...
-```
+```

-{% note %}
+{% note %}

-**Note**: If you changed the name of the primary datacenter in step 3, find the `consul-datacenter` key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter `primary`, use the following key-value pair for each node.
+**Note**: If you changed the name of the primary datacenter in step 3, find the `consul-datacenter` key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter `primary`, use the following key-value pair for each node.

-consul-datacenter = primary
+```
+consul-datacenter = primary
+```

-{% endnote %}
+{% endnote %}

 {% data reusables.enterprise_clustering.apply-configuration %}

@@ -103,31 +107,37 @@ For an example configuration, see "[Example configuration](#example-configuratio

 1. For each node in your cluster, provision a matching virtual machine with identical specifications, running the same version of {% data variables.product.prodname_ghe_server %}. Note the IPv4 address and hostname for each new cluster node. For more information, see "[Prerequisites](#prerequisites)."

-{% note %}
+{% note %}

-**Note**: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead.
+**Note**: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead.

-{% endnote %}
+{% endnote %}

 {% data reusables.enterprise_clustering.ssh-to-a-node %}

 3. Back up your existing cluster configuration.

-cp /data/user/common/cluster.conf ~/$(date +%Y-%m-%d)-cluster.conf.backup
+```
+cp /data/user/common/cluster.conf ~/$(date +%Y-%m-%d)-cluster.conf.backup
+```

 4. Create a copy of your existing cluster configuration file in a temporary location, like _/home/admin/cluster-passive.conf_. Delete unique key-value pairs for IP addresses (`ipv*`), UUIDs (`uuid`), and public keys for WireGuard (`wireguard-pubkey`).

-grep -Ev "(?:|ipv|uuid|vpn|wireguard\-pubkey)" /data/user/common/cluster.conf > ~/cluster-passive.conf
+```
+grep -Ev "(?:|ipv|uuid|vpn|wireguard\-pubkey)" /data/user/common/cluster.conf > ~/cluster-passive.conf
+```

 5. Remove the `[cluster]` section from the temporary cluster configuration file that you copied in the previous step.

-git config -f ~/cluster-passive.conf --remove-section cluster
+```
+git config -f ~/cluster-passive.conf --remove-section cluster
+```

 6. Decide on a name for the secondary datacenter where you provisioned your passive nodes, then update the temporary cluster configuration file with the new datacenter name. Replace `SECONDARY` with the name you choose.

 ```shell
-sed -i 's/datacenter = default/datacenter = <em>SECONDARY</em>/g' ~/cluster-passive.conf
-```
+sed -i 's/datacenter = default/datacenter = <em>SECONDARY</em>/g' ~/cluster-passive.conf
+```

 7. Decide on a pattern for the passive nodes' hostnames.

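Taken together, steps 3 through 6 in the hunk above reduce to a short shell sequence. The sketch below only strings the documented commands together for reference, writing the `SECONDARY` placeholder without the `<em>` markup the docs use; it is an illustration, not part of the commit.

```shell
# Step 3: back up the live cluster configuration.
cp /data/user/common/cluster.conf ~/$(date +%Y-%m-%d)-cluster.conf.backup

# Step 4: copy it without node-unique keys (IP addresses, UUIDs, WireGuard keys).
grep -Ev "(?:|ipv|uuid|vpn|wireguard\-pubkey)" /data/user/common/cluster.conf > ~/cluster-passive.conf

# Step 5: drop the top-level [cluster] section from the temporary copy.
git config -f ~/cluster-passive.conf --remove-section cluster

# Step 6: point every copied node section at the secondary datacenter.
sed -i 's/datacenter = default/datacenter = SECONDARY/g' ~/cluster-passive.conf
```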
@@ -140,7 +150,7 @@ For an example configuration, see "[Example configuration](#example-configuratio
 8. Open the temporary cluster configuration file from step 3 in a text editor. For example, you can use Vim.

 ```shell
-sudo vim ~/cluster-passive.conf
+sudo vim ~/cluster-passive.conf
 ```

 9. In each section within the temporary cluster configuration file, update the node's configuration. {% data reusables.enterprise_clustering.configuration-file-heading %}
@@ -150,37 +160,37 @@ For an example configuration, see "[Example configuration](#example-configuratio
 - Add a new key-value pair, `replica = enabled`.

 ```shell
-[cluster "<em>NEW PASSIVE NODE HOSTNAME</em>"]
-...
-hostname = <em>NEW PASSIVE NODE HOSTNAME</em>
-ipv4 = <em>NEW PASSIVE NODE IPV4 ADDRESS</em>
-<strong>replica = enabled</strong>
+[cluster "<em>NEW PASSIVE NODE HOSTNAME</em>"]
+...
+hostname = <em>NEW PASSIVE NODE HOSTNAME</em>
+ipv4 = <em>NEW PASSIVE NODE IPV4 ADDRESS</em>
+<strong>replica = enabled</strong>
+...
 ...
-...
 ```

 10. Append the contents of the temporary cluster configuration file that you created in step 4 to the active configuration file.

 ```shell
-cat ~/cluster-passive.conf >> /data/user/common/cluster.conf
-```
+cat ~/cluster-passive.conf >> /data/user/common/cluster.conf
+```

 11. Designate the primary MySQL and Redis nodes in the secondary datacenter. Replace `REPLICA MYSQL PRIMARY HOSTNAME` and `REPLICA REDIS PRIMARY HOSTNAME` with the hostnames of the passives node that you provisioned to match your existing MySQL and Redis primaries.

 ```shell
-git config -f /data/user/common/cluster.conf cluster.mysql-master-replica <em>REPLICA MYSQL PRIMARY HOSTNAME</em>
-git config -f /data/user/common/cluster.conf cluster.redis-master-replica <em>REPLICA REDIS PRIMARY HOSTNAME</em>
-```
+git config -f /data/user/common/cluster.conf cluster.mysql-master-replica <em>REPLICA MYSQL PRIMARY HOSTNAME</em>
+git config -f /data/user/common/cluster.conf cluster.redis-master-replica <em>REPLICA REDIS PRIMARY HOSTNAME</em>
+```

 12. Enable MySQL to fail over automatically when you fail over to the passive replica nodes.

 ```shell
-git config -f /data/user/common/cluster.conf cluster.mysql-auto-failover true
+git config -f /data/user/common/cluster.conf cluster.mysql-auto-failover true
 ```

-{% warning %}
+{% warning %}

-**Warning**: Review your cluster configuration file before proceeding.
+**Warning**: Review your cluster configuration file before proceeding.

 - In the top-level `[cluster]` section, ensure that the values for `mysql-master-replica` and `redis-master-replica` are the correct hostnames for the passive nodes in the secondary datacenter that will serve as the MySQL and Redis primaries after a failover.
 - In each section for an active node named <code>[cluster "<em>ACTIVE NODE HOSTNAME</em>"]</code>, double-check the following key-value pairs.
@@ -194,9 +204,9 @@ For an example configuration, see "[Example configuration](#example-configuratio
 - `replica` should be configured as `enabled`.
 - Take the opportunity to remove sections for offline nodes that are no longer in use.

-To review an example configuration, see "[Example configuration](#example-configuration)."
+To review an example configuration, see "[Example configuration](#example-configuration)."

-{% endwarning %}
+{% endwarning %}

 13. Initialize the new cluster configuration. {% data reusables.enterprise.use-a-multiplexer %}

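Before running the initialization in step 13, the values the warning asks you to review can be read back with the same `git config -f` pattern the steps already use. This spot-check is an addition here, not something the documentation itself shows.

```shell
# Read back the failover-related values from the cluster configuration file.
git config -f /data/user/common/cluster.conf cluster.mysql-master-replica
git config -f /data/user/common/cluster.conf cluster.redis-master-replica
git config -f /data/user/common/cluster.conf cluster.mysql-auto-failover
```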
@@ -207,7 +217,7 @@ For an example configuration, see "[Example configuration](#example-configuratio
 14. After the initialization finishes, {% data variables.product.prodname_ghe_server %} displays the following message.

 ```shell
-Finished cluster initialization
+Finished cluster initialization
 ```

 {% data reusables.enterprise_clustering.apply-configuration %}
@@ -294,19 +304,27 @@ You can monitor the progress on any node in the cluster, using command-line tool

 - Monitor replication of databases:

-/usr/local/share/enterprise/ghe-cluster-status-mysql
+```
+/usr/local/share/enterprise/ghe-cluster-status-mysql
+```

 - Monitor replication of repository and Gist data:

-ghe-spokes status
+```
+ghe-spokes status
+```

 - Monitor replication of attachment and LFS data:

-ghe-storage replication-status
+```
+ghe-storage replication-status
+```

 - Monitor replication of Pages data:

-ghe-dpages replication-status
+```
+ghe-dpages replication-status
+```

 You can use `ghe-cluster-status` to review the overall health of your cluster. For more information, see "[Command-line utilities](/enterprise/admin/configuration/command-line-utilities#ghe-cluster-status)."

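The four monitoring commands above can also be run in one pass. A minimal wrapper, assuming an administrative shell on any cluster node; this loop is illustrative and not part of the documentation.

```shell
# Run each documented replication check in turn and label its output.
for cmd in "/usr/local/share/enterprise/ghe-cluster-status-mysql" \
           "ghe-spokes status" \
           "ghe-storage replication-status" \
           "ghe-dpages replication-status"; do
  echo "== ${cmd}"
  ${cmd}
done
```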
content/admin/installation/installing-github-enterprise-server-on-google-cloud-platform.md

Lines changed: 25 additions & 25 deletions
@@ -27,7 +27,7 @@ Before launching {% data variables.product.product_location %} on Google Cloud P
 {% data variables.product.prodname_ghe_server %} is supported on the following Google Compute Engine (GCE) machine types. For more information, see [the Google Cloud Platform machine types article](https://cloud.google.com/compute/docs/machine-types).

 | High-memory |
-------------- |
+| ------------- |
 | n1-highmem-4 |
 | n1-highmem-8 |
 | n1-highmem-16 |
@@ -54,7 +54,7 @@ Based on your user license count, we recommend these machine types.
 1. Using the [gcloud compute](https://cloud.google.com/compute/docs/gcloud-compute/) command-line tool, list the public {% data variables.product.prodname_ghe_server %} images:
 ```shell
 $ gcloud compute images list --project github-enterprise-public --no-standard-images
-```
+```

 2. Take note of the image name for the latest GCE image of {% data variables.product.prodname_ghe_server %}.

@@ -63,18 +63,18 @@ Based on your user license count, we recommend these machine types.
 GCE virtual machines are created as a member of a network, which has a firewall. For the network associated with the {% data variables.product.prodname_ghe_server %} VM, you'll need to configure the firewall to allow the required ports listed in the table below. For more information about firewall rules on Google Cloud Platform, see the Google guide "[Firewall Rules Overview](https://cloud.google.com/vpc/docs/firewalls)."

 1. Using the gcloud compute command-line tool, create the network. For more information, see "[gcloud compute networks create](https://cloud.google.com/sdk/gcloud/reference/compute/networks/create)" in the Google documentation.
-```shell
-$ gcloud compute networks create <em>NETWORK-NAME</em> --subnet-mode auto
-```
+```shell
+$ gcloud compute networks create <em>NETWORK-NAME</em> --subnet-mode auto
+```
 2. Create a firewall rule for each of the ports in the table below. For more information, see "[gcloud compute firewall-rules](https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/)" in the Google documentation.
-```shell
-$ gcloud compute firewall-rules create <em>RULE-NAME</em> \
---network <em>NETWORK-NAME</em> \
---allow tcp:22,tcp:25,tcp:80,tcp:122,udp:161,tcp:443,udp:1194,tcp:8080,tcp:8443,tcp:9418,icmp
-```
-This table identifies the required ports and what each port is used for.
+```shell
+$ gcloud compute firewall-rules create <em>RULE-NAME</em> \
+--network <em>NETWORK-NAME</em> \
+--allow tcp:22,tcp:25,tcp:80,tcp:122,udp:161,tcp:443,udp:1194,tcp:8080,tcp:8443,tcp:9418,icmp
+```
+This table identifies the required ports and what each port is used for.

-{% data reusables.enterprise_installation.necessary_ports %}
+{% data reusables.enterprise_installation.necessary_ports %}

 ### Allocating a static IP and assigning it to the VM

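After creating the rule, it can be worth confirming that it is attached to the intended network. The check below uses the standard `--filter` flag of `gcloud compute firewall-rules list`; it is a suggestion here, not part of the documented steps, and `NETWORK-NAME` is the same placeholder used above.

```shell
$ gcloud compute firewall-rules list --filter="network:NETWORK-NAME"
```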
@@ -87,21 +87,21 @@ In production High Availability configurations, both primary and replica applian
 To create the {% data variables.product.prodname_ghe_server %} instance, you'll need to create a GCE instance with your {% data variables.product.prodname_ghe_server %} image and attach an additional storage volume for your instance data. For more information, see "[Hardware considerations](#hardware-considerations)."

 1. Using the gcloud compute command-line tool, create a data disk to use as an attached storage volume for your instance data, and configure the size based on your user license count. For more information, see "[gcloud compute disks create](https://cloud.google.com/sdk/gcloud/reference/compute/disks/create)" in the Google documentation.
-```shell
-$ gcloud compute disks create <em>DATA-DISK-NAME</em> --size <em>DATA-DISK-SIZE</em> --type <em>DATA-DISK-TYPE</em> --zone <em>ZONE</em>
-```
+```shell
+$ gcloud compute disks create <em>DATA-DISK-NAME</em> --size <em>DATA-DISK-SIZE</em> --type <em>DATA-DISK-TYPE</em> --zone <em>ZONE</em>
+```

 2. Then create an instance using the name of the {% data variables.product.prodname_ghe_server %} image you selected, and attach the data disk. For more information, see "[gcloud compute instances create](https://cloud.google.com/sdk/gcloud/reference/compute/instances/create)" in the Google documentation.
-```shell
-$ gcloud compute instances create <em>INSTANCE-NAME</em> \
---machine-type n1-standard-8 \
---image <em>GITHUB-ENTERPRISE-IMAGE-NAME</em> \
---disk name=<em>DATA-DISK-NAME</em> \
---metadata serial-port-enable=1 \
---zone <em>ZONE</em> \
---network <em>NETWORK-NAME</em> \
---image-project github-enterprise-public
-```
+```shell
+$ gcloud compute instances create <em>INSTANCE-NAME</em> \
+--machine-type n1-standard-8 \
+--image <em>GITHUB-ENTERPRISE-IMAGE-NAME</em> \
+--disk name=<em>DATA-DISK-NAME</em> \
+--metadata serial-port-enable=1 \
+--zone <em>ZONE</em> \
+--network <em>NETWORK-NAME</em> \
+--image-project github-enterprise-public
+```

 ### Configuring the instance

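For orientation, the gcloud steps in this file amount to the sequence below. Every capitalized value is the same placeholder the steps use (shown here without the `<em>` markup), and the machine type and port list are taken verbatim from the documentation; treat this as a consolidated sketch rather than a substitute for the individual steps.

```shell
# List the public GitHub Enterprise Server images and note the latest one.
$ gcloud compute images list --project github-enterprise-public --no-standard-images

# Create the network and a firewall rule allowing the required ports.
$ gcloud compute networks create NETWORK-NAME --subnet-mode auto
$ gcloud compute firewall-rules create RULE-NAME \
    --network NETWORK-NAME \
    --allow tcp:22,tcp:25,tcp:80,tcp:122,udp:161,tcp:443,udp:1194,tcp:8080,tcp:8443,tcp:9418,icmp

# Create the data disk, then the instance with the disk attached.
$ gcloud compute disks create DATA-DISK-NAME --size DATA-DISK-SIZE --type DATA-DISK-TYPE --zone ZONE
$ gcloud compute instances create INSTANCE-NAME \
    --machine-type n1-standard-8 \
    --image GITHUB-ENTERPRISE-IMAGE-NAME \
    --disk name=DATA-DISK-NAME \
    --metadata serial-port-enable=1 \
    --zone ZONE \
    --network NETWORK-NAME \
    --image-project github-enterprise-public
```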
content/admin/policies/creating-a-pre-receive-hook-environment.md

Lines changed: 18 additions & 18 deletions
@@ -21,10 +21,10 @@ You can use a Linux container management tool to build a pre-receive hook enviro
 {% data reusables.linux.ensure-docker %}
 2. Create the file `Dockerfile.alpine-3.3` that contains this information:

-```
-FROM gliderlabs/alpine:3.3
-RUN apk add --no-cache git bash
-```
+```
+FROM gliderlabs/alpine:3.3
+RUN apk add --no-cache git bash
+```
 3. From the working directory that contains `Dockerfile.alpine-3.3`, build an image:

 ```shell
@@ -36,37 +36,37 @@ You can use a Linux container management tool to build a pre-receive hook enviro
 > ---> Using cache
 > ---> 0250ab3be9c5
 > Successfully built 0250ab3be9c5
-```
+```
 4. Create a container:

 ```shell
 $ docker create --name pre-receive.alpine-3.3 pre-receive.alpine-3.3 /bin/true
-```
+```
 5. Export the Docker container to a `gzip` compressed `tar` file:

 ```shell
 $ docker export pre-receive.alpine-3.3 | gzip > alpine-3.3.tar.gz
-```
+```

-This file `alpine-3.3.tar.gz` is ready to be uploaded to the {% data variables.product.prodname_ghe_server %} appliance.
+This file `alpine-3.3.tar.gz` is ready to be uploaded to the {% data variables.product.prodname_ghe_server %} appliance.

 ### Creating a pre-receive hook environment using chroot

 1. Create a Linux `chroot` environment.
 2. Create a `gzip` compressed `tar` file of the `chroot` directory.
-```shell
-$ cd /path/to/chroot
-$ tar -czf /path/to/pre-receive-environment.tar.gz .
+```shell
+$ cd /path/to/chroot
+$ tar -czf /path/to/pre-receive-environment.tar.gz .
 ```

-{% note %}
+{% note %}

-**Notes:**
-- Do not include leading directory paths of files within the tar archive, such as `/path/to/chroot`.
-- `/bin/sh` must exist and be executable, as the entry point into the chroot environment.
-- Unlike traditional chroots, the `dev` directory is not required by the chroot environment for pre-receive hooks.
+**Notes:**
+- Do not include leading directory paths of files within the tar archive, such as `/path/to/chroot`.
+- `/bin/sh` must exist and be executable, as the entry point into the chroot environment.
+- Unlike traditional chroots, the `dev` directory is not required by the chroot environment for pre-receive hooks.

-{% endnote %}
+{% endnote %}

 For more information about creating a chroot environment see "[Chroot](https://wiki.debian.org/chroot)" from the *Debian Wiki*, "[BasicChroot](https://help.ubuntu.com/community/BasicChroot)" from the *Ubuntu Community Help Wiki*, or "[Installing Alpine Linux in a chroot](http://wiki.alpinelinux.org/wiki/Installing_Alpine_Linux_in_a_chroot)" from the *Alpine Linux Wiki*.

@@ -94,4 +94,4 @@ For more information about creating a chroot environment see "[Chroot](https://w
 ```shell
 admin@ghe-host:~$ ghe-hook-env-create AlpineTestEnv /home/admin/alpine-3.3.tar.gz
 > Pre-receive hook environment 'AlpineTestEnv' (2) has been created.
-```
+```
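For reference, the Docker-based flow in this file collapses to the sequence below. The `docker create` and `docker export` lines are taken from the hunks above; the `docker build` invocation is not visible in this diff, so its `-f`/`-t` form is an assumption, and the final `ghe-hook-env-create` call repeats the documented example.

```shell
# Build the image (assumed invocation; the build command itself is outside this diff).
$ docker build -f Dockerfile.alpine-3.3 -t pre-receive.alpine-3.3 .

# Create a container from the image and export its filesystem as a gzipped tarball.
$ docker create --name pre-receive.alpine-3.3 pre-receive.alpine-3.3 /bin/true
$ docker export pre-receive.alpine-3.3 | gzip > alpine-3.3.tar.gz

# On the appliance, create the pre-receive hook environment from the tarball.
admin@ghe-host:~$ ghe-hook-env-create AlpineTestEnv /home/admin/alpine-3.3.tar.gz
```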
