I contacted the AWS support team regarding this issue and got the following response.
From your correspondence, I understand that you are facing issues while creating the MongoDB pods in your EKS cluster, and that after creation the pod goes into Pending status.
Please let me know if I misunderstood your query. Thanks for sharing the GitHub repository URL for the same. I put some effort into replicating the issue on my side and was able to reproduce it.
To investigate my pending pod further, I ran the following describe command on my cluster:
"kubectl describe pod <pending_pod_name>"
After several minutes, I found the following line in the "Events" section of the output.
"running PreBind plugin "VolumeBinding": binding volumes: timed out waiting for the condition".
On further investigation, I found that the MongoDB pod you are trying to deploy on your cluster is trying to create an EBS volume as a persistent volume, which is why I got the aforementioned error. To create an EBS volume from EKS, the EBS CSI driver add-on must be installed in the cluster, and the above error usually occurs when it is not. Since this add-on is not installed by default when the cluster is created, you need to install it via the Add-ons tab in the EKS console.
Another possibility is that, even though the add-on is present, it does not have the required permissions to create the EBS volume. So, before we even install the EBS CSI driver add-on on the cluster, we need to make sure we have created the IAM role to attach to the add-on. The same is referred to here[1].
In your case, you can check whether the EBS CSI driver is present by running the following command:
"kubectl get pods -n kube-system"
Look for pods with names like "ebs-csi-controller-xxxxxxx". If you find one, the EBS CSI driver is already installed, and the problem could be with permissions.
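A quick way to narrow that listing down to the driver pods, assuming kubectl is already pointed at your cluster (the grep pattern is just the controller pod name prefix from above):

```shell
# List only the EBS CSI driver pods; no output means the add-on
# is likely not installed.
kubectl get pods -n kube-system | grep ebs-csi
```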
For that, you need to run the following command.
"kubectl describe pod ebs-csi-controller-xxxxxxx -c csi-provisioner -n kube-system"
This will output the configuration of the driver pod. In that output, check for an environment variable named "AWS_ROLE_ARN". If it is not present, you have not provided the IAM OIDC provider role for the add-on. In that case, create the role in the IAM console, remove the existing EBS CSI driver add-on from the EKS cluster console, and then add the EBS CSI driver add-on again with that role as the "Service account role". More details on adding the EBS CSI driver add-on to the cluster are given here[3].
If "AWS_ROLE_ARN" already has a value, then check the configuration of the role using this documentation[2].
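To avoid scanning the whole describe output by eye, the check can be reduced to a pipeline like this (a sketch; the pod name is a placeholder you must replace with the one from your cluster):

```shell
# Print the AWS_ROLE_ARN line if the csi-provisioner container has an
# IAM role attached; empty output means the role is missing.
kubectl describe pod ebs-csi-controller-xxxxxxx -c csi-provisioner -n kube-system \
  | grep AWS_ROLE_ARN
```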
Keeping the above in mind, I created the IAM OIDC provider role for the add-on. For that, you need to follow all the steps for creating an IAM role for the add-on, as referred to here[2].
After creating the IAM OIDC provider role, I installed the add-on via the console by following the steps in this documentation[3], and for the service account role I selected the OIDC provider role created in the step above.
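After the install you can confirm from the CLI that the add-on is active and carries the service account role. A sketch, assuming the AWS CLI is configured and "my-cluster" stands in for your cluster name:

```shell
# Show the add-on status and the IAM role it was installed with;
# status should be ACTIVE and role should be your OIDC provider role ARN.
aws eks describe-addon --cluster-name my-cluster \
  --addon-name aws-ebs-csi-driver \
  --query "addon.{status: status, role: serviceAccountRoleArn}"
```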
After installing the add-on, I deleted the mongodb database pod by running the following command.
"kubectl delete -f config/samples/mongodb.com_v1_mongodbcommunity_cr.yaml"
Then I ran the following apply command to redeploy the pods.
"kubectl apply -f config/samples/mongodb.com_v1_mongodbcommunity_cr.yaml"
When I checked the pods, I could see that the mongodb database pod had come to Running status.
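Besides the pod status, it is worth confirming that the persistent volume that caused the original VolumeBinding timeout was actually provisioned. A quick check sketch:

```shell
# The pod should be Running, the claim Bound, and an EBS-backed
# persistent volume should now be listed.
kubectl get pods
kubectl get pvc
kubectl get pv
```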
The above is the most common cause of this issue. If none of the above is your problem, please share a convenient time (along with the timezone you're working in) as well as a contact number with country code, so that we can connect over a call and have a screen-sharing troubleshooting session.
Reference links:
[1] Amazon EBS CSI driver add-on : https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html
[2] How to create IAM OIDC provider for EBS CSI driver add-on : https://docs.aws.amazon.com/eks/latest/userguide/csi-iam-role.html
[3] Managing the EBS CSI driver add-on : https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html
Working commands/steps
(Steps mentioned by support team)
- Creation of EKS cluster
- Go to the newly created EKS cluster in AWS console. In the Overview tab, copy the value of OpenID Connect provider URL and save the value in some place for future reference.
- Go to IAM -> Identity providers -> Add Provider. Select OpenID Connect as the provider type.
- Paste the URL copied in step 2 into the Provider URL text box and click ‘Get thumbprint’. Set the Audience to sts.amazonaws.com in the corresponding text box.
- Click the ‘Add Provider’ button.
- Create the required IAM role: IAM -> Roles -> Create role. In the ‘Select trusted entity’ section, choose ‘Web identity’. In the Identity provider drop-down, select the OIDC provider created in step 5. Choose Audience sts.amazonaws.com in the drop-down. Click ‘Next’.
- Search for the AmazonEBSCSIDriverPolicy policy in the next window, click ‘Next’, give a name, description, and tags for the role, and click Create role.
- In the Roles section, search for the role newly created in step 7 and open it. Trust relationships -> Edit trust policy.
"oidc.eks.eu-west-1.amazonaws.com/id/385AA11111111116116:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa"
Update the above text with your current OIDC ID and add it as a new key-value pair under Statement[0] -> Condition -> StringEquals. The full JSON structure of this trust relationship is given at the end.
After updating the text, click ‘Update policy’. Go to EKS -> Clusters -> the cluster newly created in step 1. Click the Add-ons tab, then Add new.
In the pop-up, choose Amazon EBS CSI Driver as the name and the latest version. Choose the role created in step 7. If that role is not listed in the drop-down, reload the section using the reload button, then click Add.
After some time, the new add-on will become active. Then run the "kubectl get pods -n kube-system" command and you should see the CSI pods as shown.
ebs-csi-controller-68d49f84c8-sl7w6   6/6   Running   0   109s
ebs-csi-controller-68d49f84c8-w2k6r   6/6   Running   0   2m19s
ebs-csi-node-ldmsm                    3/3   Running   0   2m20s
Then run the commands given in the question.
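The console steps above can also be scripted. A hedged sketch using eksctl and the AWS CLI, where the cluster name "my-cluster", the region, and account id 111122223333 are placeholders for your own values:

```shell
# Steps 2-5: register the cluster's OIDC provider with IAM.
eksctl utils associate-iam-oidc-provider --cluster my-cluster \
  --region eu-west-1 --approve

# Steps 6-8: create an IAM role with AmazonEBSCSIDriverPolicy whose trust
# policy is scoped to the ebs-csi-controller-sa service account.
eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa --namespace kube-system \
  --cluster my-cluster --region eu-west-1 \
  --role-name AmazonEKS_EBS_CSI_DriverRole --role-only \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve

# Final step: install the add-on with that role as the service account role.
aws eks create-addon --cluster-name my-cluster --region eu-west-1 \
  --addon-name aws-ebs-csi-driver \
  --service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKS_EBS_CSI_DriverRole
```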
The following is the trust relationships JSON for the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::112345678900:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/Axxxxxxxxxxxxx"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.eu-west-1.amazonaws.com/id/Axxxxxxxxxxxxx:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa",
          "oidc.eks.eu-west-1.amazonaws.com/id/Axxxxxxxxxxxxx:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
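If you edit this JSON by hand it is easy to misplace the sub/aud keys, so a quick local sanity check can help before pasting it into the console. A sketch in Python; the account id and OIDC id below are placeholders, not real values:

```python
import json

# Placeholder trust policy mirroring the structure above (ids are fake).
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::112345678900:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLE:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa",
          "oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLE:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
""")

def trusts_ebs_csi_sa(policy):
    """True if some statement allows sts:AssumeRoleWithWebIdentity for the
    kube-system:ebs-csi-controller-sa service account with the sts audience."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Action") != "sts:AssumeRoleWithWebIdentity":
            continue
        cond = stmt.get("Condition", {}).get("StringEquals", {})
        sub_ok = any(k.endswith(":sub") and
                     v == "system:serviceaccount:kube-system:ebs-csi-controller-sa"
                     for k, v in cond.items())
        aud_ok = any(k.endswith(":aud") and v == "sts.amazonaws.com"
                     for k, v in cond.items())
        if sub_ok and aud_ok:
            return True
    return False

print(trusts_ebs_csi_sa(policy))  # prints True
```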