
I might have to rebuild the GKE cluster, but the Compute Engine disks won't be deleted and need to be re-used as persistent volumes for the pods. I haven't found documentation showing how to link an existing GCP Compute Engine disk as a persistent volume for a pod.

Is it possible to use existing GCP Compute Engine disks with a GKE StorageClass and PersistentVolumes?

1 Answer

Yes, it's possible to reuse a Persistent Disk as a PersistentVolume in another cluster; however, there is one limitation:

The persistent disk must be in the same zone as the cluster nodes.

If the PD is in a different zone, the cluster will not be able to find the disk.
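If you're not sure where your disk and nodes live, you can check both zones up front. The sketch below uses the disk name from the rest of this answer and assumes a reasonably recent GKE version for the topology.kubernetes.io/zone node label (older clusters expose failure-domain.beta.kubernetes.io/zone instead):

# Find the zone of the existing disk
$ gcloud compute disks list --filter="name=pd-name"
# Show the zone label of each cluster node
$ kubectl get nodes -L topology.kubernetes.io/zone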

In the documentation Using preexisting persistent disks as PersistentVolumes you can find information and examples on how to reuse persistent disks.

If you haven't created the Persistent Disk yet, you can create it based on the Creating and attaching a disk documentation. For these tests, I used the disk below:

gcloud compute disks create pd-name \
    --size 10G \
    --type pd-standard \
    --zone europe-west3-b

If you create a PD smaller than 200 GB you will get the warning below; whether that matters depends on your needs. In zone europe-west3-b, the pd-standard type can have storage between 10 GB and 65536 GB.

You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.

Keep in mind that different zones may offer different Persistent Disk types. For more details, check the Disk Types documentation or run $ gcloud compute disk-types list.
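For example, to narrow that list down to a single zone you can use the standard --filter flag (europe-west3-b is just the zone used in this answer):

$ gcloud compute disk-types list --filter="zone:europe-west3-b"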

Once you have the Persistent Disk, you can create a PersistentVolume and a PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  storageClassName: "test"
  capacity:
    storage: 10G
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: default
    name: pv-claim
  gcePersistentDisk:
    pdName: pd-name
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  storageClassName: "test"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10G
---
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/data"
          name: task-pv-storage
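To apply everything in one go, you can save the three manifests above to a single file and run kubectl apply; the file name pv-pvc-pod.yaml below is just an example:

$ kubectl apply -f pv-pvc-pod.yaml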

Tests

$ kubectl get pv,pvc,pod
NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
persistentvolume/pv    10G        RWO            Retain           Bound    default/pv-claim   test                    22s

NAME                             STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pv-claim   Bound    pv       10G        RWO            test           22s

NAME              READY   STATUS    RESTARTS   AGE
pod/task-pv-pod   1/1     Running   0          21s

Write some information to the disk:

$ kubectl exec -ti task-pv-pod -- bin/bash
root@task-pv-pod:/# cd /usr/data
root@task-pv-pod:/usr/data# echo "This is test message from Nginx pod" >> message.txt

Next, I removed all the previous resources: the PV, PVC and Pod.
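The delete commands aren't shown in the output above, but the cleanup would look roughly like this; because the PV's reclaim policy is Retain, deleting it does not touch the underlying Compute Engine disk:

$ kubectl delete pod task-pv-pod
$ kubectl delete pvc pv-claim
$ kubectl delete pv pv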

$ kubectl get pv,pvc,pod
No resources found

Now, if I recreate the PV and PVC, and make a small change to the Pod, for example using busybox:

  containers:
    - name: busybox
      image: busybox
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo hello; sleep 10; done"]
      volumeMounts:
        - mountPath: "/usr/data"
          name: task-pv-storage
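For completeness, a full Pod manifest around that containers section could look like the sketch below; the Pod name busybox is taken from the kubectl output further down, and the volume definition is reused unchanged from the nginx Pod above:

kind: Pod
apiVersion: v1
metadata:
  name: busybox
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: pv-claim
  containers:
    - name: busybox
      image: busybox
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo hello; sleep 10; done"]
      volumeMounts:
        - mountPath: "/usr/data"
          name: task-pv-storage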

The PVC will be bound to the same PV again (the claimRef in the PV spec pre-binds it to the pv-claim claim):

$ kubectl get pv,pvc,po
NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
persistentvolume/pv    10G        RWO            Retain           Bound    default/pv-claim                           43m

NAME                             STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pv-claim   Bound    pv       10G        RWO                           43m

NAME          READY   STATUS    RESTARTS   AGE
pod/busybox   1/1     Running   0          3m43s

And in the busybox Pod I can find message.txt:

$ kubectl exec -ti busybox -- bin/sh
/ # cd usr/data
/usr/data # ls
lost+found   message.txt
/usr/data # cat message.txt
This is test message from Nginx pod

As additional information, you won't be able to use the disk in two clusters at the same time; if you try, you will get an error like this:

AttachVolume.Attach failed for volume "pv" : googleapi: Error 400: RESOURCE_IN_USE_BY_ANOTHER_RESOURCE - The disk resource 'projects/<myproject>/zones/europe-west3-b/disks/pd-name' is already being used by 'projects/<myproject>/zones/europe-west3-b/instances/gke-cluster-3-default-pool-bb545f05-t5hc'
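If you hit that error and want to see which instance is currently holding the disk, the users field of the disk resource shows it (the disk name and zone here are the example values from above):

$ gcloud compute disks describe pd-name --zone europe-west3-b --format="value(users)"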

2 Comments

@John do you have any further questions?
No questions. That explains. Thank you
