
I have provisioned an EKS cluster on AWS with public access to the API endpoint. While doing so, I configured the security group with ingress only from a specific IP. But I could still run kubectl get svc against the cluster when accessing it from another IP.

I want to have IP-restricted access to the EKS cluster. ref - Terraform - Master cluster SG
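
For context, a hypothetical sketch of the kind of ingress rule I mean (the CIDR and security group reference below are placeholders, not the exact values from my configuration):

# Hypothetical reconstruction of a restricted ingress rule on the cluster
# security group; only HTTPS traffic from a single source IP is allowed in.
resource "aws_security_group_rule" "cluster_ingress_office_ip" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["203.0.113.10/32"]            # placeholder allowed IP
  security_group_id = aws_security_group.cluster.id  # placeholder cluster SG
}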


If public access is enabled, does it mean that anyone who has the cluster name can deploy anything?

2 Answers


When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl as you have done). By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC).

So public access does not mean that anyone who has the cluster name can deploy anything. You can read more about this in the Amazon EKS Cluster Endpoint Access Control AWS documentation.
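
To make the IAM + RBAC part concrete, here is a minimal sketch (role ARNs are placeholders, and it assumes the Terraform kubernetes provider is already pointed at the cluster) of the aws-auth ConfigMap that maps IAM identities into Kubernetes RBAC groups:

# Sketch only: maps a hypothetical node role and a hypothetical admin role
# into Kubernetes RBAC groups via the aws-auth ConfigMap in kube-system.
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = <<-YAML
      - rolearn: arn:aws:iam::111122223333:role/eks-node-role    # placeholder
        username: system:node:{{EC2PrivateDNSName}}
        groups:
          - system:bootstrappers
          - system:nodes
      - rolearn: arn:aws:iam::111122223333:role/eks-admin-role   # placeholder
        username: admin
        groups:
          - system:masters
    YAML
  }
}

Only IAM principals mapped here (plus the IAM identity that created the cluster, which implicitly gets system:masters) can actually perform actions through the endpoint, even though it is publicly reachable.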

If you want to provision EKS with Terraform and manage the network topology, that is done through the VPC (Virtual Private Cloud). You can check this VPC Terraform Module to get all the proper settings. Hope it helps.
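
For illustration, a minimal sketch of calling that module (name, CIDRs and availability zones are placeholders):

# Sketch of the terraform-aws-modules/vpc/aws module with placeholder values;
# the resulting subnet IDs would then be passed to the EKS cluster resource.
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "eks-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
}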


1 Comment

Thanks. I might need to check kubeconfig and RBAC further to secure public endpoints.

As well as Claire Bellivier's answer about how EKS clusters are protected via authentication using IAM and RBAC, you can now also configure your EKS cluster to be accessible only from private networks such as the VPC the cluster resides in or any peered VPCs.

This has been added in the (as yet unreleased) 2.3.0 version of the AWS provider and can be configured as part of the vpc_config block of the aws_eks_cluster resource:

resource "aws_eks_cluster" "example" { name = %[2]q role_arn = "${aws_iam_role.example.arn}" vpc_config { endpoint_private_access = true endpoint_public_access = false subnet_ids = [ "${aws_subnet.example.*.id[0]}", "${aws_subnet.example.*.id[1]}", ] } } 

3 Comments

@ydaetsjcoR Quick question on "VPC the cluster resides in" - the EKS cluster (master nodes) is in the same VPC I am provisioning, is that right?
I'm trying to figure out: if I have vpc1 in east and vpc2 in west, will I be able to use managed nodes from both VPCs with the same cluster? If yes, and I create an ingress, will that be hosted in the same VPC as the cluster? Or should I create a cluster per VPC per region?
It should work if there is a VPC peering connection or other form of private connection between the VPCs (such as a NAT'ed VPN so that traffic seems to originate from the VPN instance). That said, I'd probably have separate EKS clusters in this case unless you have a good reason not to.
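
For reference, a minimal sketch (VPC IDs, region, and provider alias are placeholders/assumptions) of the kind of cross-region VPC peering mentioned above; route table entries and security group rules on both sides are still needed on top of this:

# Requester side, assumed to run against the default provider in us-east-1.
resource "aws_vpc_peering_connection" "east_to_west" {
  vpc_id      = "vpc-0aaa11111111111aa"  # placeholder: VPC in us-east-1
  peer_vpc_id = "vpc-0bbb22222222222bb"  # placeholder: VPC in us-west-2
  peer_region = "us-west-2"
}

# Accepter side, assumes an aliased provider "aws.west" configured for us-west-2.
resource "aws_vpc_peering_connection_accepter" "west" {
  provider                  = aws.west
  vpc_peering_connection_id = aws_vpc_peering_connection.east_to_west.id
  auto_accept               = true
}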