Hybrid Cloud demo: Quarkus backends distributed across different OpenShift clusters installed on public clouds, connected by Skupper and consumed by a frontend.
You can find the guide on how to install such clusters inside the installation directory.
```shell
# Windows
curl -fL https://github.com/skupperproject/skupper/releases/download/1.0.2/skupper-cli-1.0.2-windows-amd64.zip
unzip skupper-cli-1.0.2-windows-amd64.zip
mkdir %UserProfile%\bin
move skupper.exe %UserProfile%\bin
set PATH=%PATH%;%UserProfile%\bin

# Linux
curl -fL https://github.com/skupperproject/skupper/releases/download/1.0.2/skupper-cli-1.0.2-linux-amd64.tgz | tar -xzf -
mkdir $HOME/bin
mv skupper $HOME/bin
export PATH=$PATH:$HOME/bin

# macOS
curl -fL https://github.com/skupperproject/skupper/releases/download/1.0.2/skupper-cli-1.0.2-mac-amd64.tgz | tar -xzf -
mkdir $HOME/bin
mv skupper $HOME/bin
export PATH=$PATH:$HOME/bin
```

| Important | If you are using minikube as one cluster, run minikube tunnel in a new terminal. |
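On Linux and macOS you can also pick the right artifact automatically. A minimal sketch, assuming the usual `uname -s` values (the mapping is an assumption and Windows is not covered here):

```shell
# Sketch: pick the matching 1.0.2 CLI artifact for the current OS.
# The uname-to-artifact mapping is an assumption; Windows users should
# use the zip download shown above.
case "$(uname -s)" in
  Linux)  artifact="skupper-cli-1.0.2-linux-amd64.tgz" ;;
  Darwin) artifact="skupper-cli-1.0.2-mac-amd64.tgz" ;;
  *)      artifact="" ;;
esac
echo "https://github.com/skupperproject/skupper/releases/download/1.0.2/$artifact"
```

The printed URL can then be fed to the same `curl -fL … | tar -xzf -` pipeline shown above.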
Because Kubernetes gives you cloud portability, the workload can be deployed on any Kubernetes context. Deploy the backend with the following three commands on every cluster, changing only the WORKER_CLOUD_ID value to differentiate between them. The name of the cloud or its location/geography is useful to illustrate where the work is being done.
```shell
kubectl create namespace hybrid
kubectl -n hybrid apply -f backend.yml
kubectl -n hybrid set env deployment/hybrid-cloud-backend WORKER_CLOUD_ID="localhost" # aws, azr, gcp
```

The annotation has already been applied in the backend.yml file, but if you wish to apply it manually, use the following command:
```shell
kubectl annotate service hybrid-cloud-backend skupper.io/proxy=http
```

Deploy the frontend to your main cluster:
The Kubernetes service is of type LoadBalancer. If you are deploying the frontend in your public cluster, open the frontend.yml file and set your host in the Ingress configuration:
```yaml
spec:
  rules:
    - host: ""
```

In your main cluster, deploy the frontend by calling:
```shell
kubectl apply -f frontend.yml
```

| Tip | On OpenShift you can run oc expose service hybrid-cloud-frontend after deploying the frontend resource, in which case it is not necessary to modify the Ingress configuration. Of course, the first approach works on OpenShift as well. |
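If you prefer not to edit frontend.yml by hand, the empty `host: ""` entry can be patched in place. A minimal sketch; the helper name and the example host are hypothetical:

```shell
# Sketch: replace the empty Ingress host in a frontend manifest in place.
# sed keeps a .bak copy of the original file.
set_ingress_host() {
  # $1 = manifest file, $2 = host to set
  sed -i.bak "s/host: \"\"/host: \"$2\"/" "$1"
}
# usage (host is a placeholder, use your own):
#   set_ingress_host frontend.yml frontend.apps.example.com
```

After patching, deploy with `kubectl apply -f frontend.yml` as above.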
To find the frontend URL, on OpenShift use the Route:
```shell
kubectl get routes | grep frontend
```

On vanilla Kubernetes, use the external IP of the Service:
```shell
# AWS
kubectl get service hybrid-cloud-frontend -o jsonpath="{.status.loadBalancer.ingress[0].hostname}"

# Azure, GCP
kubectl get service hybrid-cloud-frontend -o jsonpath="{.status.loadBalancer.ingress[0].ip}"
```

In your main cluster, initialize Skupper and create the connection token:
```shell
skupper init --console-auth unsecured # (1)
Skupper is now installed in namespace 'hybrid'. Use 'skupper status' to get more information.

skupper status
Skupper is installed in namespace 'hybrid'. Status pending...
```
1. This allows anyone to access the Skupper UI to visualize the clouds. Fine for demos, but not to be used in production.
See the status of the skupper pods. It takes a bit of time (usually around 2 minutes) until the pods are running:
```shell
kubectl get pods
NAME                                        READY   STATUS    RESTARTS   AGE
hybrid-cloud-backend-5cbd67d789-mfvbz       1/1     Running   0          3m55s
hybrid-cloud-frontend-55bdf64c95-gk2tf      1/1     Running   0          3m27s
skupper-router-dd7dfff55-tklgg              2/2     Running   0          59s
skupper-service-controller-dc779b7c-5prhc   1/1     Running   0          56s
```

Finally, create a token:
```shell
skupper token create token.yaml -t cert
Connection token written to token.yaml
```
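The token file is an ordinary Kubernetes Secret manifest. A hedged way to sanity-check it without printing the secret material itself (the helper name is hypothetical, and exact field names may vary by Skupper release):

```shell
# Sketch: show only the kind and the skupper.io annotations of a token file,
# avoiding echoing the embedded certificate data.
inspect_token() {
  grep -E "^kind:|skupper.io/" "$1"
}
# usage: inspect_token token.yaml
```

Treat token.yaml like a credential: anyone holding the file can link a cluster into your network.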
In all the other clusters, use the connection token created in the previous step:
```shell
skupper init
skupper link create token.yaml
```

Check the service status on all clusters:
```shell
skupper service status
Services exposed through Skupper:
╰─ hybrid-cloud-backend (http port 8080)
   ╰─ Targets:
      ╰─ app.kubernetes.io/name=hybrid-cloud-backend,app.kubernetes.io/version=1.0.0 name=hybrid-cloud-backend
```

Check the link status on the 2nd/3rd cluster:
```shell
skupper link status
Link link1 is active
```

This has been the short version to get started; continue reading if you want to learn how to build the Docker images, deploy them, etc.
If you run:
```shell
kubectl get services
NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP                                                              PORT(S)               AGE
hybrid-cloud-backend    ClusterIP      172.30.157.62   <none>                                                                   8080/TCP              10m
hybrid-cloud-frontend   LoadBalancer   172.30.70.80    acf3bee14b0274403a6f02dc062a3784-405180745.eu-west-1.elb.amazonaws.com   8080:32156/TCP        10m
skupper                 ClusterIP      172.30.128.55   <none>                                                                   8080/TCP,8081/TCP     7m50s
skupper-router          ClusterIP      172.30.7.7      <none>                                                                   55671/TCP,45671/TCP   7m53s
skupper-router-local    ClusterIP      172.30.8.239    <none>                                                                   5671/TCP              7m53s
```

If you want to build, push and deploy the service:
```shell
cd backend
./mvnw clean package -DskipTests -Dquarkus.kubernetes.deploy=true -Pazr
```

If the service has already been pushed to quay.io, you can skip the push part:
```shell
cd backend
./mvnw clean package -DskipTests -Pazr -Dquarkus.kubernetes.deploy=true -Dquarkus.container-image.build=false -Dquarkus.container-image.push=false
```

If you want to build, push and deploy the service:
```shell
cd backend
./mvnw clean package -DskipTests -Dquarkus.kubernetes.deploy=true -Pazr -Dquarkus.kubernetes.host=<your_public_host>
```

If the service has already been pushed to quay.io, you can skip the push part:
```shell
cd backend
./mvnw clean package -DskipTests -Pazr -Dquarkus.kubernetes.deploy=true -Dquarkus.container-image.build=false -Dquarkus.container-image.push=false
```

The following profiles are provided: -Pazr, -Paws, -Pgcp and -Plocal; each one just sets an environment variable to identify the cluster.
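If you deploy the same backend to several clouds, the per-profile builds can be scripted. A minimal sketch, reusing the flags shown above; the loop only assembles and prints the commands, so remove the echo (and run from the backend directory) to execute the builds for real:

```shell
# Sketch: compose one Maven build command per provided cloud profile.
# Profile names are taken from the list above; nothing is executed here.
for profile in azr aws gcp local; do
  args="-DskipTests -P$profile -Dquarkus.kubernetes.deploy=true"
  echo "./mvnw clean package $args"
done
```

Note that each deploy still targets whichever cluster your current kubectl/oc session is logged into.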
Make sure you have at least the backend project deployed on 2 different clusters. The frontend project can be deployed to just one cluster.
Here, we will assume that it is deployed in a local cluster local and a public cluster public.
Make sure to have 2 terminals with separate sessions logged into each of your clusters with the correct namespace context (but within the same folder).
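As an alternative to juggling two terminals, you can name your kubectl contexts and prefix each command. A sketch, assuming contexts called local and public (both names are assumptions):

```shell
# Sketch: drive both clusters from one terminal via named kubectl contexts.
# Errors are ignored so the loop keeps going if a context is unreachable.
for ctx in local public; do
  echo "== $ctx =="
  kubectl --context "$ctx" -n hybrid get pods 2>/dev/null || true
done
```

The two-terminal setup described above remains the simplest option for following the Skupper steps below.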
Follow the instructions provided here.
- In your public terminal session:
```shell
skupper init --id public
skupper connection-token private-to-public.yaml
```

- In your local terminal session:
```shell
skupper init --id private
skupper connect private-to-public.yaml
```

- In the terminal for the local cluster, annotate the hybrid-cloud-backend service:
```shell
kubectl annotate service hybrid-cloud-backend skupper.io/proxy=http
```

- In the terminal for the public cluster, annotate the hybrid-cloud-backend service:
```shell
kubectl annotate service hybrid-cloud-backend skupper.io/proxy=http
```

Both services are now connected; if you scale one to zero, or it gets overloaded, traffic will transparently load-balance to the other cluster.
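To see that failover in action, you can scale the local backend away and watch the frontend keep responding. A sketch; the helper only prints the command so it can be copied, drop the echo to run it against the logged-in cluster:

```shell
# Sketch: demonstrate transparent failover by scaling the local backend.
# scale_backend prints the kubectl command rather than executing it.
scale_backend() {
  echo kubectl scale deployment/hybrid-cloud-backend --replicas="$1"
}
scale_backend 0   # local requests now fail over to the public cluster
scale_backend 1   # restore the local backend afterwards
```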