From f674d89eda3cd3e54a3c7995b75af649bf4c4e40 Mon Sep 17 00:00:00 2001 From: Tejeev Date: Mon, 24 Jan 2022 19:45:01 -0700 Subject: [PATCH 1/9] Note regarding mega nodes --- content/rke/latest/en/os/_index.md | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/content/rke/latest/en/os/_index.md b/content/rke/latest/en/os/_index.md index 08855564c8..66c820a43b 100644 --- a/content/rke/latest/en/os/_index.md +++ b/content/rke/latest/en/os/_index.md @@ -289,6 +289,14 @@ Confirm that a Kubernetes supported version of Docker is installed on your machi This section describes the hardware requirements for the worker role, large Kubernetes clusters, and etcd clusters. +--- +*NOTE regarding large nodes for Kubernetes* +Kubernetes is engineered around the concept of horizontal scaling for redundancy so scaling vertically with large nodes can be very problematic +- If you must use nodes larger than 24 CPU, utilize virtualization tooling such as Harvester to subdivide the nodes. +- There are Kubernetes, kernel, and network limitations that prohibit too many pods per node so we advise keeping to roughly 24 cores per node and the recommended 100 pods per node as a maximum unless you are deploying workloads that specifically require enormous amounts of resources (such as multi threaded heavy compute jobs) or have good reason to increase the pod limit (with an upper limit of 250). +- Even when deploying workloads like the above, it is recommended to use a virtualization layer to facilitate less downtime/shorter reboots during upgrades and failures +--- + ### Worker Role The hardware requirements for nodes with the `worker` role mostly depend on your workloads. The minimum to run the Kubernetes node components is 1 CPU (core) and 1GB of memory. 
From 3c78d328b96877c0ea2575512905700ce852499f Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Tue, 25 Jan 2022 14:06:03 -0500 Subject: [PATCH 2/9] Edited note --- content/rke/latest/en/os/_index.md | 20 +++++++++++--------- 1 file changed, 11 insertions(+), 9 deletions(-) diff --git a/content/rke/latest/en/os/_index.md b/content/rke/latest/en/os/_index.md index 66c820a43b..5e739bf92f 100644 --- a/content/rke/latest/en/os/_index.md +++ b/content/rke/latest/en/os/_index.md @@ -288,14 +288,6 @@ Confirm that a Kubernetes supported version of Docker is installed on your machi ## Hardware This section describes the hardware requirements for the worker role, large Kubernetes clusters, and etcd clusters. - ---- -*NOTE regarding large nodes for Kubernetes* -Kubernetes is engineered around the concept of horizontal scaling for redundancy so scaling vertically with large nodes can be very problematic -- If you must use nodes larger than 24 CPU, utilize virtualization tooling such as Harvester to subdivide the nodes. -- There are Kubernetes, kernel, and network limitations that prohibit too many pods per node so we advise keeping to roughly 24 cores per node and the recommended 100 pods per node as a maximum unless you are deploying workloads that specifically require enormous amounts of resources (such as multi threaded heavy compute jobs) or have good reason to increase the pod limit (with an upper limit of 250). -- Even when deploying workloads like the above, it is recommended to use a virtualization layer to facilitate less downtime/shorter reboots during upgrades and failures ---- ### Worker Role @@ -305,7 +297,17 @@ Regarding CPU and memory, it is recommended that the different planes of Kuberne ### Large Kubernetes Clusters -For hardware recommendations for large Kubernetes clusters, refer to the official Kubernetes documentation on [building large clusters](https://kubernetes.io/docs/setup/best-practices/cluster-large/). 
+Kubernetes is engineered around the concept of horizontal scaling for redundancy, so scaling vertically with large nodes can be problematic if the proper minimum/maximum requirements aren’t followed. The following are tips and recommendations for large Kubernetes nodes: + +- If you must use nodes larger than 24 CPU, use virtualization tooling, such as what Harvester provides, to subdivide the nodes. + +- Kubernetes, kernel, and network limitations prevent having too many pods per node. You should maintain a minimum of roughly 24 cores per node and a maximum of the recommended 100 pods per node. + +- If you are deploying workloads that specifically require enormous amounts of resources (such as multi-threaded heavy compute jobs), you may increase the pod limit up to 250. + +- Even when deploying small workloads, it is recommended that you use a virtualization layer to facilitate less downtime and shorter reboots during upgrades and failures. + +For additional hardware recommendations for large Kubernetes clusters and nodes, refer to the official Kubernetes documentation on [building large clusters](https://kubernetes.io/docs/setup/best-practices/cluster-large/). ### Etcd Clusters From b2695da033b5ef01f9cd79c5b736b27e44ea306d Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Tue, 25 Jan 2022 14:13:30 -0500 Subject: [PATCH 3/9] Fixed extra line spacing --- content/rke/latest/en/os/_index.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/content/rke/latest/en/os/_index.md b/content/rke/latest/en/os/_index.md index 5e739bf92f..e442231360 100644 --- a/content/rke/latest/en/os/_index.md +++ b/content/rke/latest/en/os/_index.md @@ -287,8 +287,7 @@ Confirm that a Kubernetes supported version of Docker is installed on your machi ## Hardware -This section describes the hardware requirements for the worker role, large Kubernetes clusters, and etcd clusters. 
- +This section describes the hardware requirements for the worker role, large Kubernetes clusters, and etcd clusters. ### Worker Role The hardware requirements for nodes with the `worker` role mostly depend on your workloads. The minimum to run the Kubernetes node components is 1 CPU (core) and 1GB of memory. From d1c364b094d3164e505f8d9eba5fe1dddb3bda39 Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Tue, 25 Jan 2022 14:14:50 -0500 Subject: [PATCH 4/9] Edited spacing --- content/rke/latest/en/os/_index.md | 1 + 1 file changed, 1 insertion(+) diff --git a/content/rke/latest/en/os/_index.md b/content/rke/latest/en/os/_index.md index e442231360..91a6fb5c33 100644 --- a/content/rke/latest/en/os/_index.md +++ b/content/rke/latest/en/os/_index.md @@ -288,6 +288,7 @@ Confirm that a Kubernetes supported version of Docker is installed on your machi ## Hardware This section describes the hardware requirements for the worker role, large Kubernetes clusters, and etcd clusters. + ### Worker Role The hardware requirements for nodes with the `worker` role mostly depend on your workloads. The minimum to run the Kubernetes node components is 1 CPU (core) and 1GB of memory. From a0a20a2e180d08cde4894d5f7fc580e48e196577 Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Tue, 25 Jan 2022 14:15:50 -0500 Subject: [PATCH 5/9] Removed unnecessary space --- content/rke/latest/en/os/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rke/latest/en/os/_index.md b/content/rke/latest/en/os/_index.md index 91a6fb5c33..f60cbca9a1 100644 --- a/content/rke/latest/en/os/_index.md +++ b/content/rke/latest/en/os/_index.md @@ -287,7 +287,7 @@ Confirm that a Kubernetes supported version of Docker is installed on your machi ## Hardware -This section describes the hardware requirements for the worker role, large Kubernetes clusters, and etcd clusters. 
+This section describes the hardware requirements for the worker role, large Kubernetes clusters, and etcd clusters. ### Worker Role From def366e0cd3785ef342a4fd5d4e98499ed8306f7 Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Tue, 25 Jan 2022 14:16:49 -0500 Subject: [PATCH 6/9] Removed unnecessary space --- content/rke/latest/en/os/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rke/latest/en/os/_index.md b/content/rke/latest/en/os/_index.md index f60cbca9a1..89c528c7b7 100644 --- a/content/rke/latest/en/os/_index.md +++ b/content/rke/latest/en/os/_index.md @@ -287,7 +287,7 @@ Confirm that a Kubernetes supported version of Docker is installed on your machi ## Hardware -This section describes the hardware requirements for the worker role, large Kubernetes clusters, and etcd clusters. +This section describes the hardware requirements for the worker role, large Kubernetes clusters, and etcd clusters. ### Worker Role From e190af35c55dca7728febeefc7add4fd41f5a878 Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Tue, 25 Jan 2022 14:24:36 -0500 Subject: [PATCH 7/9] Added link --- content/rke/latest/en/os/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rke/latest/en/os/_index.md b/content/rke/latest/en/os/_index.md index 89c528c7b7..172819ac36 100644 --- a/content/rke/latest/en/os/_index.md +++ b/content/rke/latest/en/os/_index.md @@ -299,7 +299,7 @@ Regarding CPU and memory, it is recommended that the different planes of Kuberne Kubernetes is engineered around the concept of horizontal scaling for redundancy, so scaling vertically with large nodes can be problematic if the proper minimum/maximum requirements aren’t followed. The following are tips and recommendations for large Kubernetes nodes: -- If you must use nodes larger than 24 CPU, use virtualization tooling, such as what Harvester provides, to subdivide the nodes. 
+- If you must use nodes larger than 24 CPU, use virtualization tooling, such as what [Harvester](https://docs.harvesterhci.io/v1.0/rancher/virtualization-management/) provides, to subdivide the nodes. - Kubernetes, kernel, and network limitations prevent having too many pods per node. You should maintain a minimum of roughly 24 cores per node and a maximum of the recommended 100 pods per node. From f78fcf2b5d9881179ba35c49c47d558c427d4cf6 Mon Sep 17 00:00:00 2001 From: Jennifer Travinski Date: Thu, 10 Feb 2022 11:49:28 -0500 Subject: [PATCH 8/9] Updated verbiage per feedback --- content/rke/latest/en/os/_index.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/content/rke/latest/en/os/_index.md b/content/rke/latest/en/os/_index.md index 172819ac36..fda08d964b 100644 --- a/content/rke/latest/en/os/_index.md +++ b/content/rke/latest/en/os/_index.md @@ -297,17 +297,17 @@ Regarding CPU and memory, it is recommended that the different planes of Kuberne ### Large Kubernetes Clusters -Kubernetes is engineered around the concept of horizontal scaling for redundancy, so scaling vertically with large nodes can be problematic if the proper minimum/maximum requirements aren’t followed. The following are tips and recommendations for large Kubernetes nodes: +Kubernetes is engineered around the concept of horizontal scaling for redundancy, so scaling vertically with large servers can be problematic if the proper minimum/maximum requirements aren’t followed. The following are tips and recommendations for large Kubernetes clusters: -- If you must use nodes larger than 24 CPU, use virtualization tooling, such as what [Harvester](https://docs.harvesterhci.io/v1.0/rancher/virtualization-management/) provides, to subdivide the nodes. +- If you must use servers larger than 24 CPU, use virtualization tooling, such as what [Harvester](https://docs.harvesterhci.io/v1.0/rancher/virtualization-management/) provides, to subdivide the servers. 
-- Kubernetes, kernel, and network limitations prevent having too many pods per node. You should maintain a minimum of roughly 24 cores per node and a maximum of the recommended 100 pods per node. +- Kubernetes, kernel, and network limitations prevent having too many pods per server. You should maintain a minimum of roughly 24 cores per server and a maximum of the recommended 110 pods per server. -- If you are deploying workloads that specifically require enormous amounts of resources (such as multi-threaded heavy compute jobs), you may increase the pod limit up to 250. +- If you are deploying an application or system that specifically requires a large number of pods, you may increase the pod limit. Please note, however, that going above 254 pods per server is not supported by default pod CIDR settings unless the pods are using host networking. - Even when deploying small workloads, it is recommended that you use a virtualization layer to facilitate less downtime and shorter reboots during upgrades and failures. -For additional hardware recommendations for large Kubernetes clusters and nodes, refer to the official Kubernetes documentation on [building large clusters](https://kubernetes.io/docs/setup/best-practices/cluster-large/). +For additional hardware recommendations for large Kubernetes clusters and nodes, refer to the official Kubernetes documentation on [building large clusters](https://kubernetes.io/docs/setup/best-practices/cluster-large/). 
### Etcd Clusters From 8a99c3de7f360d618d2fc1a04d8e61cb91b397e5 Mon Sep 17 00:00:00 2001 From: Tejeev Date: Wed, 23 Mar 2022 10:46:31 -0600 Subject: [PATCH 9/9] Update content/rke/latest/en/os/_index.md Co-authored-by: Jen Travinski --- content/rke/latest/en/os/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/rke/latest/en/os/_index.md b/content/rke/latest/en/os/_index.md index fda08d964b..f9af8690e7 100644 --- a/content/rke/latest/en/os/_index.md +++ b/content/rke/latest/en/os/_index.md @@ -301,7 +301,7 @@ Kubernetes is engineered around the concept of horizontal scaling for redundancy - If you must use servers larger than 24 CPU, use virtualization tooling, such as what [Harvester](https://docs.harvesterhci.io/v1.0/rancher/virtualization-management/) provides, to subdivide the servers. -- Kubernetes, kernel, and network limitations prevent having too many pods per server. You should maintain a minimum of roughly 24 cores per server and a maximum of the recommended 110 pods per server. +- Kubernetes, kernel, and network limitations prevent having too many pods per server. You should maintain a maximum of roughly 24 cores per server and a maximum of the recommended 110 pods per server. - If you are deploying an application or system that specifically requires a large number of pods, you may increase the pod limit. Please note, however, that going above 254 pods per server is not supported by default pod CIDR settings unless the pods are using host networking.
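Patch 8's note about raising the pod limit corresponds to kubelet's `--max-pods` flag, which RKE exposes through `services.kubelet.extra_args` in `cluster.yml`. A minimal sketch, assuming that configuration path; the value shown is illustrative, and 110 pods per server remains the recommended ceiling:

```yaml
# cluster.yml (fragment) -- raises the kubelet pod limit on every node.
# Per the note in patch 8, values above 254 are not supported with
# default pod CIDR settings unless the pods use host networking.
services:
  kubelet:
    extra_args:
      max-pods: "250"
```

Only workloads that genuinely need a large pod count should override this; the default limit exists because of the Kubernetes, kernel, and network constraints the patches describe.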
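The 254-pod ceiling in patch 8 follows from address arithmetic: a /24 per-node pod CIDR holds 256 addresses, and subtracting the network and broadcast addresses leaves 254 usable pod IPs. A quick check of that arithmetic (the specific subnet is an illustrative slice of RKE's default `10.42.0.0/16` cluster CIDR, not a value from the patches):

```python
import ipaddress

# Illustrative per-node pod CIDR: a /24 carved from RKE's default
# 10.42.0.0/16 cluster CIDR (the exact subnet is an assumption).
node_pod_cidr = ipaddress.ip_network("10.42.0.0/24")

# 256 addresses in a /24, minus the network and broadcast addresses,
# gives the usable pod IPs -- the 254 ceiling cited in patch 8.
usable_pod_ips = node_pod_cidr.num_addresses - 2
print(usable_pod_ips)  # 254
```

This is why exceeding 254 pods per server requires host networking (host-networked pods share the node's IP instead of consuming pod CIDR addresses).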