
How does one deploy a Node app from GitLab CI to GKE? I already have the cluster integration enabled and functional, but the documentation on what that actually gives me is almost nonexistent. I don't know which variables having a GKE cluster connected provides, or how to use them in my CI.


Here's my .gitlab-ci.yml. It puts the image in the GitLab Container Registry, meaning I'll have to copy it to Google's registry or somehow set up GKE to pull from a private registry, which no one seems to have managed to do.

image: docker:git
services:
  - docker:dind
stages:
  - build
  - test
  - release
  - deploy
variables:
  DOCKER_DRIVER: overlay2
  CONTAINER_TEST_IMAGE: registry.gitlab.com/my-proj:$CI_BUILD_REF_NAME
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/my-proj:latest
before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
build:
  stage: build
  script:
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE
.test1:
  stage: test
  script:
    - docker run $CONTAINER_TEST_IMAGE npm run eslint
.test2:
  stage: test
  script:
    - docker run $CONTAINER_TEST_IMAGE npm run mocha
release-image:
  stage: release
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE
    - docker push $CONTAINER_RELEASE_IMAGE
  only:
    - master
deploy:
  ??????

1 Answer


I haven't used the Auto DevOps integration, but I can try to generalize a working approach.

If you have Tiller installed on the k8s cluster, it's best to create a Helm chart for your application. If you haven't done that already, there is a tutorial on how to do that here: https://github.com/kubernetes/helm/blob/master/docs/charts.md (check Using Helm to Manage Charts).
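If you're starting a chart from scratch, helm create scaffolds the standard layout for you; a minimal sketch (the chart name my-app is just a placeholder):

# Scaffold Chart.yaml, values.yaml and a templates/ directory
# with a starter deployment and service.
helm create my-app

# Render the templates locally to sanity-check the output.
helm template my-app/

# Check the chart for structural problems.
helm lint my-app/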

A basic deployment.yaml managed by helm would look like this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "name" . }}
  labels:
    app: {{ template "name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
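If the registry ends up being private (more on that below), the pod spec in this template also needs an imagePullSecrets entry referencing a registry secret. A sketch of the addition, assuming a hypothetical image.pullSecret values key that you'd define yourself:

    spec:
      # "image.pullSecret" is a made-up values key, not part of the chart above;
      # the secret itself is created in the private-registry step below.
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
      containers:
        - name: {{ .Chart.Name }}
          ...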

and the corresponding values in the chart's values.yaml file:

image:
  repository: registry.gitlab.com/my-proj
  tag: latest

A sample .gitlab-ci.yml file should look like this:

...
deploy:
  stage: deploy
  script:
    - helm upgrade <your-app-name> <path-to-the-helm-chart> --install --set image.tag=$CI_BUILD_REF_NAME

The build phase publishes the Docker image, and the deploy phase installs a Helm chart that tries to pull that image from registry.gitlab.com/my-proj.

I assume the k8s cluster has access to that registry. If the registry is private, you need to create a secret in Kubernetes that holds the authorization token (unless it is created automatically): https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
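For reference, such a secret can be created manually with kubectl; a sketch, assuming registry.gitlab.com credentials (the secret name gitlab-registry and the credential placeholders are mine):

# Create a docker-registry secret in the namespace the app deploys to.
kubectl create secret docker-registry gitlab-registry \
  --docker-server=registry.gitlab.com \
  --docker-username=<your-username-or-deploy-token> \
  --docker-password=<your-password-or-token> \
  --docker-email=<your-email>

The deployment then points at it through imagePullSecrets, as sketched under the template above.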

The default pipeline image you're using (image: docker:git) doesn't have the helm CLI installed, so you should swap that image for one that has helm and kubectl installed, or install them in the job itself. In the GitLab Auto DevOps template, they seem to do the installation on each run: https://gitlab.com/gitlab-org/gitlab-ci-yml/blob/master/Auto-DevOps.gitlab-ci.yml (check the install_dependencies() function).
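A sketch of what such a deploy job could look like with the tools installed inline (the base image, pinned versions, and the my-app/./chart names are all illustrative; note that GitLab's Kubernetes integration only injects its variables, such as KUBECONFIG, KUBE_URL and KUBE_TOKEN, into jobs that declare an environment):

deploy:
  stage: deploy
  # docker:git has no helm/kubectl; start from a small image and install them.
  image: alpine:3.7
  # Required so the cluster integration injects KUBECONFIG and friends.
  environment:
    name: production
  before_script:
    - apk add --no-cache curl tar
    # Versions below are pinned purely as examples.
    - curl -sSL https://storage.googleapis.com/kubernetes-helm/helm-v2.8.2-linux-amd64.tar.gz | tar xz
    - mv linux-amd64/helm /usr/local/bin/helm
    - helm init --client-only
    - curl -sSL -o /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.9.6/bin/linux/amd64/kubectl
    - chmod +x /usr/local/bin/kubectl
  script:
    - helm upgrade my-app ./chart --install --set image.tag=$CI_BUILD_REF_NAME
  only:
    - master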


1 Comment

I do have a private Docker image repository. I went the Auto DevOps route as it manages transferring the registry secrets to k8s. It makes a chart based on your Dockerfile and assumes your app listens on port 5000. The secrets it makes aren't permanent, which breaks autoscaling, but they're working on that.
