I'm trying to create an EKS cluster in a private subnet, but I'm having trouble getting it to work: I get the error "unhealthy nodes in the kubernetes cluster". I wonder whether it's a security group issue or something else, like missing VPC endpoints?
When I use a NAT gateway the setup works fine, but I don't want to use a NAT gateway anymore.
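From what I've read, without a NAT gateway the nodes need VPC endpoints for ECR, EC2 and STS (plus an S3 gateway endpoint) to be able to pull images and join the cluster. This is roughly what I think that would look like; treat it as a sketch, since var.region, var.private_route_table_ids and aws_security_group.vpc_endpoints are placeholders from my own setup:

# Gateway endpoint for S3 (ECR image layers are served from S3).
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = var.vpc_id
  service_name      = "com.amazonaws.${var.region}.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = var.private_route_table_ids
}

# Interface endpoints for the APIs the nodes call when there is no NAT:
# ecr.api/ecr.dkr for image pulls, ec2 and sts for the kubelet and IAM auth,
# logs only if shipping container logs to CloudWatch.
resource "aws_vpc_endpoint" "interfaces" {
  for_each = toset(["ecr.api", "ecr.dkr", "ec2", "sts", "logs"])

  vpc_id              = var.vpc_id
  service_name        = "com.amazonaws.${var.region}.${each.value}"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = var.private_subnet_ids
  security_group_ids  = [aws_security_group.vpc_endpoints.id]
  private_dns_enabled = true
}

# The endpoint security group has to allow HTTPS from the worker nodes,
# otherwise image pulls and API calls still time out.
resource "aws_security_group_rule" "nodes_to_endpoints" {
  type                     = "ingress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.vpc_endpoints.id
  source_security_group_id = aws_security_group.eks_nodes.id
}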
One thing I'm not sure about is whether the EKS cluster subnet_ids should be only the private subnets.
In the config below I'm using both public and private subnets.
resource "aws_eks_cluster" "main" { name = var.eks_cluster_name role_arn = aws_iam_role.eks_cluster.arn vpc_config { subnet_ids = concat(var.public_subnet_ids, var.private_subnet_ids) security_group_ids = [aws_security_group.eks_cluster.id, aws_security_group.eks_nodes.id, aws_security_group.external_access.id] endpoint_private_access = true endpoint_public_access = false } # Ensure that IAM Role permissions are created before and deleted after EKS Cluster handling. # Otherwise, EKS will not be able to properly delete EKS managed EC2 infrastructure such as Security Groups. depends_on = [ "aws_iam_role_policy_attachment.aws_eks_cluster_policy", "aws_iam_role_policy_attachment.aws_eks_service_policy" ] }