
I am following a tutorial to set up Kubernetes with an ingress-managed service. The cluster is,

  • 1 controller node
  • 2 worker nodes
  • kubeadm built
  • running Kubernetes v1.25.3 (latest at the time of writing)
  • running weave-net
  • running ingress-nginx
  • EC2, not EKS

I am just expecting to see the nginx default page when I access the AWS Application Load Balancer (ALB) DNS name - nothing fancy.

I first used this helm chart to deploy ingress-nginx, as per the "Quick start" docs.

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
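Before going further, it's worth confirming the release installed and the controller pod is running; something like the following (output trimmed, pod names will vary):

helm ls -n ingress-nginx
kubectl get pods -n ingress-nginx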

I then deployed the following in the default namespace.

ingress.yaml

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: ***alb***.elb.amazonaws.com
    http:
      paths:
      - backend:
          service:
            name: nginx-service
            port:
              number: 8080
        path: /
        pathType: Exact
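After applying this, the standard way to confirm the ingress was admitted and is wired to the intended backend is:

kubectl describe ingress my-app-ingress -n default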

service.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
    svc: test-nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80

deployment.yaml

---
# NOTE: the original post repeated service.yaml under this heading; this is a
# minimal Deployment assumed from the Service selector (app: nginx) and the
# stock nginx image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

k get svc -A 
NAMESPACE       NAME                                 TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
default         demo                                 ClusterIP      *****        <none>        80/TCP                       7h53m
default         kubernetes                           ClusterIP      *****        <none>        443/TCP                      2d6h
default         nginx-service                        ClusterIP      *****        <none>        8080/TCP                     26h
ingress-nginx   ingress-nginx-controller             LoadBalancer   *****        <pending>     80:32573/TCP,443:32512/TCP   35h
ingress-nginx   ingress-nginx-controller-admission   ClusterIP      *****        <none>        443/TCP                      35h
kube-system     kube-dns                             ClusterIP      *****        <none>        53/UDP,53/TCP,9153/TCP       2d6h
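The <pending> EXTERNAL-IP on the controller Service is expected here: a plain kubeadm cluster on EC2 has no cloud controller manager to provision a load balancer, which is why the ALB has to target the controller's NodePorts directly. Those ports can be read straight off the Service:

kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.spec.ports[*].nodePort}'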

Two AWS security groups are in effect, one for the controller and one for the workers. Both security groups have ports 6783-6784 open, as required by weave-net.

The ALB is set with the following.

  • same Availability Zones as the worker nodes
  • default (open) security group
  • Listener protocol:port = HTTP:80
  • same VPC as the EC2 instances
  • Scheme = internet-facing
  • IP address type = ipv4

The target group for this ALB is set as follows.

  • both worker nodes
  • Protocol:Port = HTTP:32573
  • Protocol version = HTTP1
  • same VPC as the EC2 instances
  • Health check path = /

On the assumption that the target group would block traffic to "unhealthy" nodes, I had previously exposed a separate service directly on another NodePort (bypassing the Ingress) to fudge the health check to Healthy, but this made no difference.
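For reference, that throwaway service looked roughly like the following; the name and nodePort value here are illustrative, not the exact ones I used.

---
apiVersion: v1
kind: Service
metadata:
  name: health-check-fudge    # illustrative name
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080    # illustrative; any free port in the NodePort range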

I have,

  • double checked that I have followed the steps in the tutorial exactly
  • looked through the logs but cannot find anything that would suggest an error
  • terminated all the pods
  • restarted the nodes

When I run

k logs ingress-nginx-controller-***** -n ingress-nginx 

it returns

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.4.0
  Build:         50be2bf95fd1ef480420e2aa1d6c5c7c138c95ea
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.10
-------------------------------------------------------------------------------

W1021 13:49:00.607448 7 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1021 13:49:00.607678 7 main.go:209] "Creating API client" host="https://10.96.0.1:443"
I1021 13:49:00.613511 7 main.go:253] "Running in Kubernetes cluster" major="1" minor="25" git="v1.25.3" state="clean" commit="434bfd82814af038ad94d62ebe59b133fcb50506" platform="linux/amd64"
I1021 13:49:00.776507 7 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I1021 13:49:00.788407 7 ssl.go:533] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I1021 13:49:00.807812 7 nginx.go:260] "Starting NGINX Ingress controller"
I1021 13:49:00.820423 7 event.go:285] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"f4d537f7-2b89-4fe5-a9ed-c064533b08a2", APIVersion:"v1", ResourceVersion:"96138", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I1021 13:49:01.910567 7 store.go:430] "Found valid IngressClass" ingress="default/my-app-ingress" ingressclass="nginx"
I1021 13:49:01.910942 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"my-app-ingress", UID:"9111168a-9dc8-4cf8-a0f6-fe871c3ada61", APIVersion:"networking.k8s.io/v1", ResourceVersion:"245885", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I1021 13:49:02.009443 7 nginx.go:303] "Starting NGINX process"
I1021 13:49:02.009750 7 leaderelection.go:248] attempting to acquire leader lease ingress-nginx/ingress-controller-leader...
I1021 13:49:02.010156 7 nginx.go:323] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I1021 13:49:02.010553 7 controller.go:168] "Configuration changes detected, backend reload required"
I1021 13:49:02.015673 7 status.go:84] "New leader elected" identity="ingress-nginx-controller-567c84f6f-8s5zv"
I1021 13:49:02.081076 7 controller.go:185] "Backend successfully reloaded"
I1021 13:49:02.081398 7 controller.go:196] "Initial sync, sleeping for 1 second"
I1021 13:49:02.081913 7 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-567c84f6f-52k47", UID:"fa2b26ad-0594-4e43-927a-11a9def12467", APIVersion:"v1", ResourceVersion:"249556", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I1021 13:49:43.652768 7 leaderelection.go:258] successfully acquired lease ingress-nginx/ingress-controller-leader
I1021 13:49:43.652910 7 status.go:84] "New leader elected" identity="ingress-nginx-controller-567c84f6f-52k47"
W1021 14:22:31.247404 7 controller.go:1112] Service "default/demo" does not have any active Endpoint.
I1021 14:22:31.283535 7 admission.go:149] processed ingress via admission controller {testedIngressLength:2 testedIngressTime:0.036s renderingIngressLength:2 renderingIngressTime:0s admissionTime:25.8kBs testedConfigurationSize:0.036}
I1021 14:22:31.283727 7 main.go:100] "successfully validated configuration, accepting" ingress="default/demo"
I1021 14:22:31.289380 7 store.go:430] "Found valid IngressClass" ingress="default/demo" ingressclass="nginx"
I1021 14:22:31.289790 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"demo", UID:"50962ac3-d7f1-45bc-8e73-7baf6337331b", APIVersion:"networking.k8s.io/v1", ResourceVersion:"252977", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W1021 14:22:31.290058 7 controller.go:1112] Service "default/demo" does not have any active Endpoint.
I1021 14:22:31.290210 7 controller.go:168] "Configuration changes detected, backend reload required"
I1021 14:22:31.366582 7 controller.go:185] "Backend successfully reloaded"
I1021 14:22:31.367273 7 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-567c84f6f-52k47", UID:"fa2b26ad-0594-4e43-927a-11a9def12467", APIVersion:"v1", ResourceVersion:"249556", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I1021 14:25:34.757766 7 controller.go:168] "Configuration changes detected, backend reload required"
I1021 14:25:34.827908 7 controller.go:185] "Backend successfully reloaded"
I1021 14:25:34.828291 7 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-567c84f6f-52k47", UID:"fa2b26ad-0594-4e43-927a-11a9def12467", APIVersion:"v1", ResourceVersion:"249556", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I1021 14:25:41.191636 7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.036s renderingIngressLength:1 renderingIngressTime:0s admissionTime:22.1kBs testedConfigurationSize:0.036}
I1021 14:25:41.191800 7 main.go:100] "successfully validated configuration, accepting" ingress="default/my-app-ingress"
I1021 14:25:41.195876 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"my-app-ingress", UID:"9111168a-9dc8-4cf8-a0f6-fe871c3ada61", APIVersion:"networking.k8s.io/v1", ResourceVersion:"253276", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I1021 20:40:45.084934 7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.049s renderingIngressLength:1 renderingIngressTime:0s admissionTime:22.1kBs testedConfigurationSize:0.049}
I1021 20:40:45.085124 7 main.go:100] "successfully validated configuration, accepting" ingress="default/my-app-ingress"
I1021 20:40:45.088698 7 controller.go:168] "Configuration changes detected, backend reload required"
I1021 20:40:45.088779 7 event.go:285] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"my-app-ingress", UID:"9111168a-9dc8-4cf8-a0f6-fe871c3ada61", APIVersion:"networking.k8s.io/v1", ResourceVersion:"287850", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I1021 20:40:45.183140 7 controller.go:185] "Backend successfully reloaded"
I1021 20:40:45.184054 7 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-567c84f6f-52k47", UID:"fa2b26ad-0594-4e43-927a-11a9def12467", APIVersion:"v1", ResourceVersion:"249556", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
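The repeated Service "default/demo" does not have any active Endpoint warning stands out: if a Service's selector matches no running pods, the controller has nothing to proxy to. A quick way to check (demo being the service named in the warning):

kubectl get endpoints demo nginx-service -n default
kubectl get pods -n default --show-labels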

I notice that the following appears in the logs for the weave-net pods.

k logs weave-net-*w1* -n kube-system 

Where *w1* is the pod running on worker node1.

INFO: 2022/10/21 13:49:27.195158 ->[*controller*:6783] error during connection attempt: dial tcp :0->*controller*:6783: connect: connection refused 

Where *controller* is the IP address of the control node.
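That connection refused on 6783 suggests the weave router on the control node isn't listening, or the port is blocked between the nodes. Two checks worth running on the control node (name=weave-net is the label the weave-net DaemonSet ships with; ss is assumed to be installed):

kubectl get pods -n kube-system -l name=weave-net -o wide
sudo ss -tlnp | grep 6783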

After all of the above, when I navigate to the ALB DNS address, I just get,

internal error - server connection terminated
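To take the ALB out of the equation, the ingress can be exercised directly against a worker's NodePort, sending the host header from the ingress rule (<worker-ip> is a placeholder; 32573 is the HTTP NodePort from the service listing above):

curl -v -H 'Host: ***alb***.elb.amazonaws.com' http://<worker-ip>:32573/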

This is clearly a PEBKAC, but what am I missing?

  • There are a lot of open steps in this question. Have you verified that new pods in the cluster can access nginx-service.default:8080 and the ALB, and are you running external-dns?
  • If you have an ALB, why not use the ALB ingress controller instead of involving the nginx controller?

2 Answers


By default, the load balancer attaches the VPC's default security group. That group's inbound rule uses the group itself as the source, which effectively blocks all external traffic.

Add an inbound rule allowing traffic from Anywhere-IPv4 (0.0.0.0/0) so that traffic can reach the load balancer.
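For example, with the AWS CLI (the security-group ID is a placeholder):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0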


Check the security groups again. Ensure that the security group used by the Kubernetes nodes is open to your ALB's security group (best practice says you shouldn't use the defaults; create a new one).

The ALB must have a target group. Check the health status and events for that target group; any errors will show up there.
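The same information is available from the AWS CLI (the target-group ARN is a placeholder):

aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:<region>:<account>:targetgroup/<name>/<id>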
