Add Amazon VPC CNI support #1158
Conversation
Hi @Sn0rt. Thanks for your PR. I'm waiting for a kubernetes-sigs or kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Would like to follow the K8s convention for CamelCase on field values and use AmazonVPC?
copy that.
Would it make sense to make this optional? Amazon best practice is to grant least privilege, and a cluster not running aws-vpc-cni doesn't need these privileges.
The question is how to make this optional. Would this be best exposed in clusterawsadm as a flag?
Maybe --cni amazon-vpc-cni. Not sure there are other CNIs that would require a different policy.
Or --add-vpc-cni-policy.
Thoughts on whether this should be optional and, if so, how best to expose it?
Suppose someone wants to create multiple clusters with different network solutions, such as Calico and the Amazon VPC CNI, under one AWS account. In that use case, how do we separate the privileges within the account?
Make it accept multiple values, comma-separated?
Options:
- Calico
- AmazonVPC
Default: [Calico]
Or leave it as one or the other for now, and we can revisit this in a follow-up issue. Ideally, we want to move the CNI rules out of the controller loop anyway, so we can figure it out properly then.
Agreed; we can discuss this in a follow-up issue.
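For illustration, here is a minimal Go sketch of how a comma-separated --cni flag with the options discussed above might be parsed and validated. The option names come from the review thread; the helper itself is hypothetical and not code from this PR.

```go
package main

import (
	"fmt"
	"strings"
)

// validCNIs lists the options discussed above; the names come from
// the review thread, but this parsing helper is an assumption.
var validCNIs = map[string]bool{
	"Calico":    true,
	"AmazonVPC": true,
}

// parseCNIFlag splits a comma-separated --cni value, validates each
// entry, and defaults to [Calico] when the flag is unset.
func parseCNIFlag(value string) ([]string, error) {
	if value == "" {
		return []string{"Calico"}, nil
	}
	var cnis []string
	for _, v := range strings.Split(value, ",") {
		v = strings.TrimSpace(v)
		if !validCNIs[v] {
			return nil, fmt.Errorf("unknown CNI %q", v)
		}
		cnis = append(cnis, v)
	}
	return cnis, nil
}

func main() {
	cnis, err := parseCNIFlag("Calico,AmazonVPC")
	fmt.Println(cnis, err) // [Calico AmazonVPC] <nil>
}
```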
1 cluster spec

1.1 cluster spec

The cluster YAML tells the CAPA controller to create a cluster whose network is AmazonVPC.

```yaml
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  annotations:
    cluster.k8s.io/network-cni: AmazonVPC
  name: newcluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 10.0.0.0/16
    services:
      cidrBlocks:
        - 192.168.0.0/16
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSCluster
    name: newcluster
    namespace: default
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSCluster
metadata:
  name: newcluster
  namespace: default
spec:
  region: us-east-2
  sshKeyName: guohao
```
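As context for the annotation above, here is a minimal Go sketch of how a controller could map it to a CNI choice. This is not from the PR's diff: the helper name and the Calico fallback are assumptions, and the review thread later settles on the cluster.x-k8s.io/network-cni key instead.

```go
package main

import "fmt"

// Annotation key as used in the example above; the review thread later
// adopts the cluster.x-k8s.io/network-cni form.
const networkCNIAnnotation = "cluster.k8s.io/network-cni"

// cniFromAnnotations picks the CNI named on the Cluster object, falling
// back to Calico. Helper name and default are illustrative assumptions.
func cniFromAnnotations(annotations map[string]string) string {
	if v := annotations[networkCNIAnnotation]; v != "" {
		return v
	}
	return "Calico"
}

func main() {
	fmt.Println(cniFromAnnotations(map[string]string{
		networkCNIAnnotation: "AmazonVPC",
	})) // AmazonVPC
}
```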
1.2 controlplane

This will create three master nodes in one AZ and set the kubelet parameters max-pods and node-ip (see the note on the max-pods value after this block).

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
  name: newcluster-controlplane-0
  namespace: default
spec:
  clusterConfiguration:
    apiServer:
      extraArgs:
        cloud-provider: aws
    controllerManager:
      extraArgs:
        cloud-provider: aws
  initConfiguration:
    nodeRegistration:
      kubeletExtraArgs:
        cloud-provider: aws
        max-pods: "19"
        node-ip: '{{ ds.meta_data.local_ipv4 }}'
      name: '{{ ds.meta_data.hostname }}'
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
  name: newcluster-controlplane-1
  namespace: default
spec:
  joinConfiguration:
    controlPlane: {}
    nodeRegistration:
      kubeletExtraArgs:
        cloud-provider: aws
        max-pods: "19"
        node-ip: '{{ ds.meta_data.local_ipv4 }}'
      name: '{{ ds.meta_data.hostname }}'
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
  name: newcluster-controlplane-2
  namespace: default
spec:
  joinConfiguration:
    controlPlane: {}
    nodeRegistration:
      kubeletExtraArgs:
        cloud-provider: aws
        max-pods: "19"
        node-ip: '{{ ds.meta_data.local_ipv4 }}'
      name: '{{ ds.meta_data.hostname }}'
```
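A note on the max-pods value: with the Amazon VPC CNI, pod IPs come from the instance's ENIs, so pod capacity is bounded by the instance type. The sketch below applies the commonly cited formula from the AWS VPC CNI documentation; the exact per-instance numbers should be taken from AWS's published eni-max-pods table, and the value 19 above is simply what this PR's example uses.

```go
package main

import "fmt"

// maxPods applies the formula commonly cited for the Amazon VPC CNI:
//   maxPods = ENIs * (IPv4 addresses per ENI - 1) + 2
// One address per ENI is reserved for the ENI itself; the +2 accounts
// for host-network pods (e.g. aws-node, kube-proxy).
func maxPods(enis, ipsPerENI int) int {
	return enis*(ipsPerENI-1) + 2
}

func main() {
	// t2.medium supports 3 ENIs with 6 IPv4 addresses each.
	fmt.Println(maxPods(3, 6)) // 17
}
```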
The corresponding Machine and AWSMachine objects:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: newcluster
    cluster.x-k8s.io/control-plane: "true"
  name: newcluster-controlplane-0
spec:
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
      kind: KubeadmConfig
      name: newcluster-controlplane-0
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSMachine
    name: newcluster-controlplane-0
    namespace: default
  version: v1.15.3
---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: newcluster
    cluster.x-k8s.io/control-plane: "true"
  name: newcluster-controlplane-1
spec:
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
      kind: KubeadmConfig
      name: newcluster-controlplane-1
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSMachine
    name: newcluster-controlplane-1
    namespace: default
  version: v1.15.3
---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: newcluster
    cluster.x-k8s.io/control-plane: "true"
  name: newcluster-controlplane-2
spec:
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
      kind: KubeadmConfig
      name: newcluster-controlplane-2
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSMachine
    name: newcluster-controlplane-2
    namespace: default
  version: v1.15.3
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSMachine
metadata:
  name: newcluster-controlplane-0
  namespace: default
spec:
  iamInstanceProfile: control-plane.cluster-api-provider-aws.sigs.k8s.io
  instanceType: t2.medium
  sshKeyName: guohao
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSMachine
metadata:
  name: newcluster-controlplane-1
  namespace: default
spec:
  iamInstanceProfile: control-plane.cluster-api-provider-aws.sigs.k8s.io
  instanceType: t2.medium
  sshKeyName: guohao
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSMachine
metadata:
  name: newcluster-controlplane-2
  namespace: default
spec:
  iamInstanceProfile: control-plane.cluster-api-provider-aws.sigs.k8s.io
  instanceType: t2.medium
  sshKeyName: guohao
```

2 k8s info

2.1 node info

All of the nodes are Ready.
2.2 pod info

2048 pods are running.
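For reference, one way to reproduce this check with client-go; the kubeconfig path is a placeholder and this snippet is not part of the PR.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path: point this at the workload cluster's kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Count Running pods across all namespaces, mirroring the check above.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(
		context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase=Running"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d pods are running\n", len(pods.Items))
}
```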
/ok-to-test

Make sure to put the license headers on the new files.

/test pull-cluster-api-provider-aws-verify

Is this for #931?

/hold
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: Sn0rt. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
examples/cluster/cluster.yaml (Outdated)
Suggested change:
- cluster.k8s.io/network-cni: ${NETWORK}
+ cluster.x-k8s.io/network-cni: ${NETWORK}
We need to adopt the alpha x-k8s.io annotations for now, but otherwise good to go.
thx very much.
Signed-off-by: guohaowang <[email protected]>
Apologies @Sn0rt for leaving this so long. /close
@randomvariable: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@Sn0rt: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This PR adds Amazon VPC CNI support for v1alpha2; the v1alpha1 version is #1062.
how to use

This differs from the HEAD of the repo: the user should set a bash environment variable before generating the manifests (presumably NETWORK, matching the ${NETWORK} placeholder in examples/cluster/cluster.yaml; see the sketch below).
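A tiny sketch of the substitution mechanism. The variable name NETWORK is inferred from the ${NETWORK} placeholder in examples/cluster/cluster.yaml; the actual variable name and generation script are not shown in this PR excerpt.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// NETWORK is an inferred name, not confirmed by the PR text.
	os.Setenv("NETWORK", "AmazonVPC")

	// Expand the placeholder the same way a shell/envsubst step would.
	manifest := "cluster.x-k8s.io/network-cni: ${NETWORK}"
	fmt.Println(os.ExpandEnv(manifest)) // cluster.x-k8s.io/network-cni: AmazonVPC
}
```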
cluster and machine spec
the cluster file example
the KubeadmConfig of test1-controlplane-0
the AWSMachine of test1-controlplane-0
the machine deployment
security group info
Drawing on the security group policy of EKS, nodes in the cluster can communicate with each other directly by default; a sketch of such a rule follows.
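For illustration, a minimal sketch using the AWS SDK for Go of the kind of self-referencing ingress rule that lets members of the same security group reach each other on all protocols. The security group ID is a placeholder, and this is not CAPA's actual implementation.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Placeholder ID: the node security group created for the cluster.
	nodeSG := "sg-0123456789abcdef0"

	// Self-referencing rule: allow all protocols from members of the
	// same security group, so nodes can talk to each other directly.
	_, err := svc.AuthorizeSecurityGroupIngress(&ec2.AuthorizeSecurityGroupIngressInput{
		GroupId: aws.String(nodeSG),
		IpPermissions: []*ec2.IpPermission{{
			IpProtocol:       aws.String("-1"), // all protocols
			UserIdGroupPairs: []*ec2.UserIdGroupPair{{GroupId: aws.String(nodeSG)}},
		}},
	})
	if err != nil {
		fmt.Println("authorize failed:", err)
	}
}
```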