@@ -1,8 +1,9 @@
 <!-- First line should be a H1: Badges on top please! -->
-<!-- markdownlint-disable MD041 -->
+<!-- markdownlint-disable MD041/first-line-heading/first-line-h1 -->
 [](https://registry.terraform.io/modules/cattle-ops/gitlab-runner/aws/)
 [](https://gitter.im/terraform-aws-gitlab-runner/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
 [](https://github.com/cattle-ops/terraform-aws-gitlab-runner/actions)
+<!-- markdownlint-enable MD041/first-line-heading/first-line-h1 -->
 
 # Terraform module for GitLab auto scaling runners on AWS spot instances <!-- omit in toc -->
 
@@ -385,13 +386,12 @@ module "runner" {
 
 Since spot instances can be taken over by AWS depending on the instance type and AZ you are using, you may want multiple instance
 types in multiple AZs. This is where spot fleets come in: when there is no capacity for one instance type in one AZ, AWS will take
-the next instance type and so on. This update has been possible since the [fork](https://gitlab.com/cki-project/docker-machine/-/tree/v0.16.2-gitlab.19-cki.2)
-of docker-machine supports spot fleets.
+the next instance type, and so on. This became possible because the
+[fork](https://gitlab.com/cki-project/docker-machine/-/tree/v0.16.2-gitlab.19-cki.2) of docker-machine supports spot fleets.
 
 We have seen that the [fork](https://gitlab.com/cki-project/docker-machine/-/tree/v0.16.2-gitlab.19-cki.2) of docker-machine this
-module is using consume more RAM using spot fleets.
-For comparison, if you launch 50 machines in the same time, it consumes ~1.2GB of RAM. In our case, we had to change the
-`instance_type` of the runner from `t3.micro` to `t3.small`.
+module is using consumes more RAM when using spot fleets. For comparison, launching 50 machines at the same time consumes
+~1.2GB of RAM. In our case, we had to change the `instance_type` of the runner from `t3.micro` to `t3.small`.
 
 #### Configuration example
 
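The configuration example referenced by the heading above is not part of this diff. As a rough illustration of the sizing note, a minimal sketch might look like the following; `instance_type` and `runners_request_spot_instance` are documented inputs of this module (see the inputs table below), while every other required argument (VPC, subnets, registration token, and so on) is omitted here.

```hcl
# Sketch only, not a complete configuration: required networking and
# registration arguments are omitted.
module "runner" {
  source = "cattle-ops/gitlab-runner/aws"

  # The docker-machine fork consumes noticeably more RAM with spot fleets
  # (~1.2GB when launching 50 machines at once), hence t3.small rather
  # than t3.micro for the runner instance itself.
  instance_type = "t3.small"

  # Let docker-machine request spot instances for the executors.
  runners_request_spot_instance = true
}
```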
@@ -685,7 +685,6 @@ Made with [contributors-img](https://contrib.rocks).
 | <a name="input_runners_pre_clone_script"></a> [runners\_pre\_clone\_script](#input\_runners\_pre\_clone\_script) | Commands to be executed on the Runner before cloning the Git repository. This can be used to adjust the Git client configuration first, for example. | `string` | `"\"\""` | no |
 | <a name="input_runners_privileged"></a> [runners\_privileged](#input\_runners\_privileged) | Runners will run in privileged mode, will be used in the runner config.toml | `bool` | `true` | no |
 | <a name="input_runners_pull_policies"></a> [runners\_pull\_policies](#input\_runners\_pull\_policies) | Pull policies for the runners, will be used in the runner config.toml, for GitLab Runner >= 13.8, see https://docs.gitlab.com/runner/executors/docker.html#using-multiple-pull-policies | `list(string)` | <pre>[<br> "always"<br>]</pre> | no |
-| <a name="input_runners_pull_policy"></a> [runners\_pull\_policy](#input\_runners\_pull\_policy) | Deprecated! Use runners\_pull\_policies instead. pull\_policy for the runners, will be used in the runner config.toml | `string` | `""` | no |
 | <a name="input_runners_request_concurrency"></a> [runners\_request\_concurrency](#input\_runners\_request\_concurrency) | Limit number of concurrent requests for new jobs from GitLab (default 1). | `number` | `1` | no |
 | <a name="input_runners_request_spot_instance"></a> [runners\_request\_spot\_instance](#input\_runners\_request\_spot\_instance) | Whether or not to request spot instances via docker-machine | `bool` | `true` | no |
 | <a name="input_runners_root_size"></a> [runners\_root\_size](#input\_runners\_root\_size) | Runner instance root size in GB. | `number` | `16` | no |
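Since this change removes the deprecated `runners_pull_policy` input, configurations that still set it need to move to the list-valued `runners_pull_policies`. A hedged migration sketch follows; the policy values come from the GitLab Runner documentation linked in the table, and combining multiple policies requires GitLab Runner >= 13.8.

```hcl
module "runner" {
  source = "cattle-ops/gitlab-runner/aws"

  # Removed by this change (was a single string):
  # runners_pull_policy = "if-not-present"

  # Replacement: a list, so several policies can be combined and tried
  # in order (GitLab Runner >= 13.8).
  runners_pull_policies = ["always", "if-not-present"]

  # ...other required arguments omitted in this sketch...
}
```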