diff --git a/content/admin/enterprise-management/configuring-high-availability-replication-for-a-cluster.md b/content/admin/enterprise-management/configuring-high-availability-replication-for-a-cluster.md
index 52b3c0b59c53..35d6abdd2fa1 100644
--- a/content/admin/enterprise-management/configuring-high-availability-replication-for-a-cluster.md
+++ b/content/admin/enterprise-management/configuring-high-availability-replication-for-a-cluster.md
@@ -57,32 +57,36 @@ Before you define a secondary datacenter for your passive nodes, ensure that you
mysql-master = HOSTNAME
redis-master = HOSTNAME
primary-datacenter = default
- ```
+ ```
- Optionally, change the name of the primary datacenter to something more descriptive or accurate by editing the value of `primary-datacenter`.
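+
+   For example, to rename the primary datacenter to a hypothetical, more descriptive name such as `primary`:
+
+   ```
+   primary-datacenter = primary
+   ```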
4. {% data reusables.enterprise_clustering.configuration-file-heading %} Under each node's heading, add a new key-value pair to assign the node to a datacenter. Use the same value as `primary-datacenter` from step 3 above. For example, if you want to use the default name (`default`), add the following key-value pair to the section for each node.
- datacenter = default
+ ```
+ datacenter = default
+ ```
When you're done, the section for each node in the cluster configuration file should look like the following example. {% data reusables.enterprise_clustering.key-value-pair-order-irrelevant %}
- ```shell
- [cluster "HOSTNAME"]
- datacenter = default
- hostname = HOSTNAME
- ipv4 = IP ADDRESS
+ ```shell
+ [cluster "HOSTNAME"]
+ datacenter = default
+ hostname = HOSTNAME
+ ipv4 = IP ADDRESS
+ ...
...
- ...
- ```
+ ```
- {% note %}
+ {% note %}
- **Note**: If you changed the name of the primary datacenter in step 3, find the `consul-datacenter` key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter `primary`, use the following key-value pair for each node.
+ **Note**: If you changed the name of the primary datacenter in step 3, find the `consul-datacenter` key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter `primary`, use the following key-value pair for each node.
- consul-datacenter = primary
+ ```
+ consul-datacenter = primary
+ ```
- {% endnote %}
+ {% endnote %}
{% data reusables.enterprise_clustering.apply-configuration %}
@@ -103,31 +107,37 @@ For an example configuration, see "[Example configuration](#example-configuratio
1. For each node in your cluster, provision a matching virtual machine with identical specifications, running the same version of {% data variables.product.prodname_ghe_server %}. Note the IPv4 address and hostname for each new cluster node. For more information, see "[Prerequisites](#prerequisites)."
- {% note %}
+ {% note %}
- **Note**: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead.
+ **Note**: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead.
- {% endnote %}
+ {% endnote %}
{% data reusables.enterprise_clustering.ssh-to-a-node %}
3. Back up your existing cluster configuration.
- cp /data/user/common/cluster.conf ~/$(date +%Y-%m-%d)-cluster.conf.backup
+ ```
+ cp /data/user/common/cluster.conf ~/$(date +%Y-%m-%d)-cluster.conf.backup
+ ```
4. Create a copy of your existing cluster configuration file in a temporary location, like _/home/admin/cluster-passive.conf_. Delete unique key-value pairs for IP addresses (`ipv*`), UUIDs (`uuid`), and public keys for WireGuard (`wireguard-pubkey`).
- grep -Ev "(?:|ipv|uuid|vpn|wireguard\-pubkey)" /data/user/common/cluster.conf > ~/cluster-passive.conf
+ ```
+ grep -Ev "(?:|ipv|uuid|vpn|wireguard\-pubkey)" /data/user/common/cluster.conf > ~/cluster-passive.conf
+ ```
5. Remove the `[cluster]` section from the temporary cluster configuration file that you copied in the previous step.
- git config -f ~/cluster-passive.conf --remove-section cluster
+ ```
+ git config -f ~/cluster-passive.conf --remove-section cluster
+ ```
6. Decide on a name for the secondary datacenter where you provisioned your passive nodes, then update the temporary cluster configuration file with the new datacenter name. Replace `SECONDARY` with the name you choose.
```shell
- sed -i 's/datacenter = default/datacenter = SECONDARY/g' ~/cluster-passive.conf
- ```
+ sed -i 's/datacenter = default/datacenter = SECONDARY/g' ~/cluster-passive.conf
+ ```
7. Decide on a pattern for the passive nodes' hostnames.
@@ -140,7 +150,7 @@ For an example configuration, see "[Example configuration](#example-configuratio
8. Open the temporary cluster configuration file from step 4 in a text editor. For example, you can use Vim.
```shell
- sudo vim ~/cluster-passive.conf
+ sudo vim ~/cluster-passive.conf
```
9. In each section within the temporary cluster configuration file, update the node's configuration. {% data reusables.enterprise_clustering.configuration-file-heading %}
@@ -150,37 +160,37 @@ For an example configuration, see "[Example configuration](#example-configuratio
- Add a new key-value pair, `replica = enabled`.
```shell
- [cluster "NEW PASSIVE NODE HOSTNAME"]
- ...
- hostname = NEW PASSIVE NODE HOSTNAME
- ipv4 = NEW PASSIVE NODE IPV4 ADDRESS
- replica = enabled
+ [cluster "NEW PASSIVE NODE HOSTNAME"]
+ ...
+ hostname = NEW PASSIVE NODE HOSTNAME
+ ipv4 = NEW PASSIVE NODE IPV4 ADDRESS
+ replica = enabled
+ ...
...
- ...
```
10. Append the contents of the temporary cluster configuration file that you created in step 4 to the active configuration file.
```shell
- cat ~/cluster-passive.conf >> /data/user/common/cluster.conf
- ```
+ cat ~/cluster-passive.conf >> /data/user/common/cluster.conf
+ ```
11. Designate the primary MySQL and Redis nodes in the secondary datacenter. Replace `REPLICA MYSQL PRIMARY HOSTNAME` and `REPLICA REDIS PRIMARY HOSTNAME` with the hostnames of the passive nodes that you provisioned to match your existing MySQL and Redis primaries.
```shell
- git config -f /data/user/common/cluster.conf cluster.mysql-master-replica REPLICA MYSQL PRIMARY HOSTNAME
- git config -f /data/user/common/cluster.conf cluster.redis-master-replica REPLICA REDIS PRIMARY HOSTNAME
- ```
+ git config -f /data/user/common/cluster.conf cluster.mysql-master-replica REPLICA MYSQL PRIMARY HOSTNAME
+ git config -f /data/user/common/cluster.conf cluster.redis-master-replica REPLICA REDIS PRIMARY HOSTNAME
+ ```
12. Enable MySQL to fail over automatically when you fail over to the passive replica nodes.
```shell
- git config -f /data/user/common/cluster.conf cluster.mysql-auto-failover true
+ git config -f /data/user/common/cluster.conf cluster.mysql-auto-failover true
```
- {% warning %}
+ {% warning %}
- **Warning**: Review your cluster configuration file before proceeding.
+ **Warning**: Review your cluster configuration file before proceeding.
- In the top-level `[cluster]` section, ensure that the values for `mysql-master-replica` and `redis-master-replica` are the correct hostnames for the passive nodes in the secondary datacenter that will serve as the MySQL and Redis primaries after a failover.
- In each section for an active node named `[cluster "ACTIVE NODE HOSTNAME"]`, double-check the following key-value pairs.
@@ -194,9 +204,9 @@ For an example configuration, see "[Example configuration](#example-configuratio
- `replica` should be configured as `enabled`.
- Take the opportunity to remove sections for offline nodes that are no longer in use.
- To review an example configuration, see "[Example configuration](#example-configuration)."
+ To review an example configuration, see "[Example configuration](#example-configuration)."
- {% endwarning %}
+ {% endwarning %}
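+
+   To spot-check these values before proceeding, you can read them back with `git config` (a minimal sketch that reads the keys set in step 11):
+
+   ```
+   git config -f /data/user/common/cluster.conf cluster.mysql-master-replica
+   git config -f /data/user/common/cluster.conf cluster.redis-master-replica
+   ```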
13. Initialize the new cluster configuration. {% data reusables.enterprise.use-a-multiplexer %}
@@ -207,7 +217,7 @@ For an example configuration, see "[Example configuration](#example-configuratio
14. After the initialization finishes, {% data variables.product.prodname_ghe_server %} displays the following message.
```shell
- Finished cluster initialization
+ Finished cluster initialization
```
{% data reusables.enterprise_clustering.apply-configuration %}
@@ -294,19 +304,27 @@ You can monitor the progress on any node in the cluster, using command-line tool
- Monitor replication of databases:
- /usr/local/share/enterprise/ghe-cluster-status-mysql
+ ```
+ /usr/local/share/enterprise/ghe-cluster-status-mysql
+ ```
- Monitor replication of repository and Gist data:
- ghe-spokes status
+ ```
+ ghe-spokes status
+ ```
- Monitor replication of attachment and LFS data:
- ghe-storage replication-status
+ ```
+ ghe-storage replication-status
+ ```
- Monitor replication of Pages data:
- ghe-dpages replication-status
+ ```
+ ghe-dpages replication-status
+ ```
You can use `ghe-cluster-status` to review the overall health of your cluster. For more information, see "[Command-line utilities](/enterprise/admin/configuration/command-line-utilities#ghe-cluster-status)."
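+
+For example, a minimal invocation from the administrative shell of any node:
+
+```
+ghe-cluster-status
+```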
diff --git a/content/admin/installation/installing-github-enterprise-server-on-google-cloud-platform.md b/content/admin/installation/installing-github-enterprise-server-on-google-cloud-platform.md
index 2978fe87aa42..e095afb0db85 100644
--- a/content/admin/installation/installing-github-enterprise-server-on-google-cloud-platform.md
+++ b/content/admin/installation/installing-github-enterprise-server-on-google-cloud-platform.md
@@ -27,7 +27,7 @@ Before launching {% data variables.product.product_location %} on Google Cloud P
{% data variables.product.prodname_ghe_server %} is supported on the following Google Compute Engine (GCE) machine types. For more information, see [the Google Cloud Platform machine types article](https://cloud.google.com/compute/docs/machine-types).
| High-memory |
- ------------- |
+| ------------- |
| n1-highmem-4 |
| n1-highmem-8 |
| n1-highmem-16 |
@@ -54,7 +54,7 @@ Based on your user license count, we recommend these machine types.
1. Using the [gcloud compute](https://cloud.google.com/compute/docs/gcloud-compute/) command-line tool, list the public {% data variables.product.prodname_ghe_server %} images:
```shell
$ gcloud compute images list --project github-enterprise-public --no-standard-images
- ```
+ ```
2. Take note of the image name for the latest GCE image of {% data variables.product.prodname_ghe_server %}.
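+
+   For example, you could capture the name of the most recent image with gcloud's generic list flags (a sketch; `--sort-by`, `--limit`, and `--format` are standard `gcloud` list options, not specific to {% data variables.product.prodname_ghe_server %}):
+
+   ```shell
+   $ gcloud compute images list --project github-enterprise-public --no-standard-images \
+       --sort-by="~creationTimestamp" --limit 1 --format="value(name)"
+   ```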
@@ -63,18 +63,18 @@ Based on your user license count, we recommend these machine types.
GCE virtual machines are created as members of a network, which has a firewall. For the network associated with the {% data variables.product.prodname_ghe_server %} VM, you'll need to configure the firewall to allow the required ports listed in the table below. For more information about firewall rules on Google Cloud Platform, see the Google guide "[Firewall Rules Overview](https://cloud.google.com/vpc/docs/firewalls)."
1. Using the gcloud compute command-line tool, create the network. For more information, see "[gcloud compute networks create](https://cloud.google.com/sdk/gcloud/reference/compute/networks/create)" in the Google documentation.
- ```shell
- $ gcloud compute networks create NETWORK-NAME --subnet-mode auto
- ```
+ ```shell
+ $ gcloud compute networks create NETWORK-NAME --subnet-mode auto
+ ```
2. Create a firewall rule for each of the ports in the table below. For more information, see "[gcloud compute firewall-rules](https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/)" in the Google documentation.
- ```shell
- $ gcloud compute firewall-rules create RULE-NAME \
- --network NETWORK-NAME \
- --allow tcp:22,tcp:25,tcp:80,tcp:122,udp:161,tcp:443,udp:1194,tcp:8080,tcp:8443,tcp:9418,icmp
- ```
- This table identifies the required ports and what each port is used for.
+ ```shell
+ $ gcloud compute firewall-rules create RULE-NAME \
+ --network NETWORK-NAME \
+ --allow tcp:22,tcp:25,tcp:80,tcp:122,udp:161,tcp:443,udp:1194,tcp:8080,tcp:8443,tcp:9418,icmp
+ ```
+ This table identifies the required ports and what each port is used for.
- {% data reusables.enterprise_installation.necessary_ports %}
+ {% data reusables.enterprise_installation.necessary_ports %}
### Allocating a static IP and assigning it to the VM
@@ -87,21 +87,21 @@ In production High Availability configurations, both primary and replica applian
To create the {% data variables.product.prodname_ghe_server %} instance, you'll need to create a GCE instance with your {% data variables.product.prodname_ghe_server %} image and attach an additional storage volume for your instance data. For more information, see "[Hardware considerations](#hardware-considerations)."
1. Using the gcloud compute command-line tool, create a data disk to use as an attached storage volume for your instance data, and configure the size based on your user license count. For more information, see "[gcloud compute disks create](https://cloud.google.com/sdk/gcloud/reference/compute/disks/create)" in the Google documentation.
- ```shell
- $ gcloud compute disks create DATA-DISK-NAME --size DATA-DISK-SIZE --type DATA-DISK-TYPE --zone ZONE
- ```
+ ```shell
+ $ gcloud compute disks create DATA-DISK-NAME --size DATA-DISK-SIZE --type DATA-DISK-TYPE --zone ZONE
+ ```
2. Then create an instance using the name of the {% data variables.product.prodname_ghe_server %} image you selected, and attach the data disk. For more information, see "[gcloud compute instances create](https://cloud.google.com/sdk/gcloud/reference/compute/instances/create)" in the Google documentation.
- ```shell
- $ gcloud compute instances create INSTANCE-NAME \
- --machine-type n1-standard-8 \
- --image GITHUB-ENTERPRISE-IMAGE-NAME \
- --disk name=DATA-DISK-NAME \
- --metadata serial-port-enable=1 \
- --zone ZONE \
- --network NETWORK-NAME \
- --image-project github-enterprise-public
- ```
+ ```shell
+ $ gcloud compute instances create INSTANCE-NAME \
+ --machine-type n1-standard-8 \
+ --image GITHUB-ENTERPRISE-IMAGE-NAME \
+ --disk name=DATA-DISK-NAME \
+ --metadata serial-port-enable=1 \
+ --zone ZONE \
+ --network NETWORK-NAME \
+ --image-project github-enterprise-public
+ ```
### Configuring the instance
diff --git a/content/admin/policies/creating-a-pre-receive-hook-environment.md b/content/admin/policies/creating-a-pre-receive-hook-environment.md
index 94769c1ce13e..64ab4099fcfb 100644
--- a/content/admin/policies/creating-a-pre-receive-hook-environment.md
+++ b/content/admin/policies/creating-a-pre-receive-hook-environment.md
@@ -21,10 +21,10 @@ You can use a Linux container management tool to build a pre-receive hook enviro
{% data reusables.linux.ensure-docker %}
2. Create the file `Dockerfile.alpine-3.3` that contains this information:
- ```
- FROM gliderlabs/alpine:3.3
- RUN apk add --no-cache git bash
- ```
+ ```
+ FROM gliderlabs/alpine:3.3
+ RUN apk add --no-cache git bash
+ ```
3. From the working directory that contains `Dockerfile.alpine-3.3`, build an image:
```shell
@@ -36,37 +36,37 @@ You can use a Linux container management tool to build a pre-receive hook enviro
> ---> Using cache
> ---> 0250ab3be9c5
> Successfully built 0250ab3be9c5
- ```
+ ```
4. Create a container:
```shell
$ docker create --name pre-receive.alpine-3.3 pre-receive.alpine-3.3 /bin/true
- ```
+ ```
5. Export the Docker container to a `gzip` compressed `tar` file:
```shell
$ docker export pre-receive.alpine-3.3 | gzip > alpine-3.3.tar.gz
- ```
+ ```
- This file `alpine-3.3.tar.gz` is ready to be uploaded to the {% data variables.product.prodname_ghe_server %} appliance.
+ This file `alpine-3.3.tar.gz` is ready to be uploaded to the {% data variables.product.prodname_ghe_server %} appliance.
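+
+   For example, one way to copy the file to the appliance is over the administrative SSH port (122); the destination path under `/home/admin` is only illustrative:
+
+   ```shell
+   $ scp -P 122 alpine-3.3.tar.gz admin@HOSTNAME:/home/admin/
+   ```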
### Creating a pre-receive hook environment using chroot
1. Create a Linux `chroot` environment.
2. Create a `gzip` compressed `tar` file of the `chroot` directory.
- ```shell
- $ cd /path/to/chroot
- $ tar -czf /path/to/pre-receive-environment.tar.gz .
+ ```shell
+ $ cd /path/to/chroot
+ $ tar -czf /path/to/pre-receive-environment.tar.gz .
```
- {% note %}
+ {% note %}
- **Notes:**
- - Do not include leading directory paths of files within the tar archive, such as `/path/to/chroot`.
- - `/bin/sh` must exist and be executable, as the entry point into the chroot environment.
- - Unlike traditional chroots, the `dev` directory is not required by the chroot environment for pre-receive hooks.
+ **Notes:**
+ - Do not include leading directory paths of files within the tar archive, such as `/path/to/chroot`.
+ - `/bin/sh` must exist and be executable, as the entry point into the chroot environment.
+ - Unlike traditional chroots, the `dev` directory is not required by the chroot environment for pre-receive hooks.
- {% endnote %}
+ {% endnote %}
For more information about creating a chroot environment, see "[Chroot](https://wiki.debian.org/chroot)" from the *Debian Wiki*, "[BasicChroot](https://help.ubuntu.com/community/BasicChroot)" from the *Ubuntu Community Help Wiki*, or "[Installing Alpine Linux in a chroot](http://wiki.alpinelinux.org/wiki/Installing_Alpine_Linux_in_a_chroot)" from the *Alpine Linux Wiki*.
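+
+As a minimal sketch, on a Debian-based host you could bootstrap a small chroot with `debootstrap` (the suite and target path here are illustrative):
+
+```shell
+$ sudo apt-get install debootstrap
+$ sudo debootstrap --variant=minbase stable /path/to/chroot
+```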
@@ -94,4 +94,4 @@ For more information about creating a chroot environment see "[Chroot](https://w
```shell
admin@ghe-host:~$ ghe-hook-env-create AlpineTestEnv /home/admin/alpine-3.3.tar.gz
> Pre-receive hook environment 'AlpineTestEnv' (2) has been created.
- ```
+ ```
diff --git a/content/admin/policies/creating-a-pre-receive-hook-script.md b/content/admin/policies/creating-a-pre-receive-hook-script.md
index 9a37b9dee71d..f425ea8e24f9 100644
--- a/content/admin/policies/creating-a-pre-receive-hook-script.md
+++ b/content/admin/policies/creating-a-pre-receive-hook-script.md
@@ -71,19 +71,19 @@ We recommend consolidating hooks to a single repository. If the consolidated hoo
```shell
$ sudo chmod +x SCRIPT_FILE.sh
- ```
- For Windows users, ensure the scripts have execute permissions:
+ ```
+ For Windows users, ensure the scripts have execute permissions:
- ```shell
- git update-index --chmod=+x SCRIPT_FILE.sh
- ```
+ ```shell
+ git update-index --chmod=+x SCRIPT_FILE.sh
+ ```
2. Commit and push to your designated pre-receive hooks repository on the {% data variables.product.prodname_ghe_server %} instance.
```shell
$ git commit -m "YOUR COMMIT MESSAGE"
$ git push
- ```
+ ```
3. [Create the pre-receive hook](/enterprise/{{ currentVersion }}/admin/guides/developer-workflow/managing-pre-receive-hooks-on-the-github-enterprise-server-appliance/#creating-pre-receive-hooks) on the {% data variables.product.prodname_ghe_server %} instance.
@@ -94,40 +94,40 @@ You can test a pre-receive hook script locally before you create or update it on
2. Create a file called `Dockerfile.dev` containing:
- ```
- FROM gliderlabs/alpine:3.3
- RUN \
- apk add --no-cache git openssh bash && \
- ssh-keygen -A && \
- sed -i "s/#AuthorizedKeysFile/AuthorizedKeysFile/g" /etc/ssh/sshd_config && \
- adduser git -D -G root -h /home/git -s /bin/bash && \
- passwd -d git && \
- su git -c "mkdir /home/git/.ssh && \
- ssh-keygen -t ed25519 -f /home/git/.ssh/id_ed25519 -P '' && \
- mv /home/git/.ssh/id_ed25519.pub /home/git/.ssh/authorized_keys && \
- mkdir /home/git/test.git && \
- git --bare init /home/git/test.git"
-
- VOLUME ["/home/git/.ssh", "/home/git/test.git/hooks"]
- WORKDIR /home/git
-
- CMD ["/usr/sbin/sshd", "-D"]
- ```
+ ```
+ FROM gliderlabs/alpine:3.3
+ RUN \
+ apk add --no-cache git openssh bash && \
+ ssh-keygen -A && \
+ sed -i "s/#AuthorizedKeysFile/AuthorizedKeysFile/g" /etc/ssh/sshd_config && \
+ adduser git -D -G root -h /home/git -s /bin/bash && \
+ passwd -d git && \
+ su git -c "mkdir /home/git/.ssh && \
+ ssh-keygen -t ed25519 -f /home/git/.ssh/id_ed25519 -P '' && \
+ mv /home/git/.ssh/id_ed25519.pub /home/git/.ssh/authorized_keys && \
+ mkdir /home/git/test.git && \
+ git --bare init /home/git/test.git"
+
+ VOLUME ["/home/git/.ssh", "/home/git/test.git/hooks"]
+ WORKDIR /home/git
+
+ CMD ["/usr/sbin/sshd", "-D"]
+ ```
3. Create a test pre-receive script called `always_reject.sh`. This example script will reject all pushes, which is useful for locking a repository:
- ```
- #!/usr/bin/env bash
+ ```
+ #!/usr/bin/env bash
- echo "error: rejecting all pushes"
- exit 1
- ```
+ echo "error: rejecting all pushes"
+ exit 1
+ ```
4. Ensure the `always_reject.sh` script has execute permissions:
```shell
$ chmod +x always_reject.sh
- ```
+ ```
5. From the directory containing `Dockerfile.dev`, build an image:
@@ -150,32 +150,32 @@ You can test a pre-receive hook script locally before you create or update it on
....truncated output....
> Initialized empty Git repository in /home/git/test.git/
> Successfully built dd8610c24f82
- ```
+ ```
6. Run a data container that contains a generated SSH key:
```shell
$ docker run --name data pre-receive.dev /bin/true
- ```
+ ```
7. Copy the test pre-receive hook `always_reject.sh` into the data container:
```shell
$ docker cp always_reject.sh data:/home/git/test.git/hooks/pre-receive
- ```
+ ```
8. Run an application container that runs `sshd` and executes the hook. Take note of the container ID that is returned:
```shell
$ docker run -d -p 52311:22 --volumes-from data pre-receive.dev
> 7f888bc700b8d23405dbcaf039e6c71d486793cad7d8ae4dd184f4a47000bc58
- ```
+ ```
9. Copy the generated SSH key from the data container to the local machine:
```shell
$ docker cp data:/home/git/.ssh/id_ed25519 .
- ```
+ ```
10. Modify the remote of a test repository and push to the `test.git` repo within the Docker container. This example uses `git@github.com:octocat/Hello-World.git`, but you can use any repository you want. This example assumes your local machine (127.0.0.1) is binding port 52311, but you can use a different IP address if Docker is running on a remote machine.
@@ -194,9 +194,9 @@ You can test a pre-receive hook script locally before you create or update it on
> To git@192.168.99.100:test.git
> ! [remote rejected] main -> main (pre-receive hook declined)
> error: failed to push some refs to 'git@192.168.99.100:test.git'
- ```
+ ```
- Notice that the push was rejected after executing the pre-receive hook and echoing the output from the script.
+ Notice that the push was rejected after executing the pre-receive hook and echoing the output from the script.
### Further reading
- "[Customizing Git - An Example Git-Enforced Policy](https://git-scm.com/book/en/v2/Customizing-Git-An-Example-Git-Enforced-Policy)" from the *Pro Git website*
diff --git a/content/github/using-git/setting-your-username-in-git.md b/content/github/using-git/setting-your-username-in-git.md
index 6622992fa3f2..fc69247e3e82 100644
--- a/content/github/using-git/setting-your-username-in-git.md
+++ b/content/github/using-git/setting-your-username-in-git.md
@@ -20,13 +20,13 @@ Changing the name associated with your Git commits using `git config` will only
2. {% data reusables.user_settings.set_your_git_username %}
```shell
$ git config --global user.name "Mona Lisa"
- ```
+ ```
3. {% data reusables.user_settings.confirm_git_username_correct %}
```shell
$ git config --global user.name
> Mona Lisa
- ```
+ ```
### Setting your Git username for a single repository
@@ -37,13 +37,13 @@ Changing the name associated with your Git commits using `git config` will only
3. {% data reusables.user_settings.set_your_git_username %}
```shell
$ git config user.name "Mona Lisa"
- ```
+ ```
4. {% data reusables.user_settings.confirm_git_username_correct %}
```shell
$ git config user.name
> Mona Lisa
- ```
+ ```
### Further reading