
I have a GKE cluster running with several persistent disks for storage. To set up a staging environment, I created a second cluster inside the same project. Now I want to use the data from the persistent disks of the production cluster in the staging cluster.

I already created persistent disks for the staging cluster. What is the best approach to move the production data over to the disks of the staging cluster?

1 Answer


You can use the open-source tool Velero, which is designed to back up and migrate Kubernetes cluster resources.

Follow these steps to migrate a persistent disk between GKE clusters:

  1. Create a GCS bucket:
BUCKET=<your_bucket_name>
gsutil mb gs://$BUCKET/
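If you want to double-check that the bucket exists before moving on (an optional step, not part of the original walkthrough), you can list it:

# -b prints the bucket entry itself rather than its contents
gsutil ls -b gs://$BUCKET/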
  2. Create a Google Service Account and store the associated email in a variable for later use:
GSA_NAME=<your_service_account_name>
gcloud iam service-accounts create $GSA_NAME \
    --display-name "Velero service account"

SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
    --filter="displayName:Velero service account" \
    --format 'value(email)')
  3. Create a custom role for the Service Account:
PROJECT_ID=<your_project_id>

ROLE_PERMISSIONS=(
    compute.disks.get
    compute.disks.create
    compute.disks.createSnapshot
    compute.snapshots.get
    compute.snapshots.create
    compute.snapshots.useReadOnly
    compute.snapshots.delete
    compute.zones.get
    storage.objects.create
    storage.objects.delete
    storage.objects.get
    storage.objects.list
)

gcloud iam roles create velero.server \
    --project $PROJECT_ID \
    --title "Velero Server" \
    --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
    --role projects/$PROJECT_ID/roles/velero.server

gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
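As an optional sanity check, you can confirm that the custom role was created with the permissions you expect:

gcloud iam roles describe velero.server --project $PROJECT_ID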
  4. Grant Velero access by creating a key for the Service Account:
gcloud iam service-accounts keys create credentials-velero \
    --iam-account $SERVICE_ACCOUNT_EMAIL
  5. Download and install Velero on the source cluster:
wget https://github.com/vmware-tanzu/velero/releases/download/v1.8.1/velero-v1.8.1-linux-amd64.tar.gz
tar -xvzf velero-v1.8.1-linux-amd64.tar.gz
sudo mv velero-v1.8.1-linux-amd64/velero /usr/local/bin/velero

velero install \
    --provider gcp \
    --plugins velero/velero-plugin-for-gcp:v1.4.0 \
    --bucket $BUCKET \
    --secret-file ./credentials-velero
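If the server does not come up cleanly in the next step, the Velero deployment logs are the first place to look (an optional troubleshooting command):

kubectl logs deployment/velero -n velero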

Note: The download and installation above were performed on a Linux system, which is the OS used by Cloud Shell. If you manage your GCP resources through the Cloud SDK on another OS, the release artifact and installation steps may vary.

  6. Confirm that the velero pod is running:
$ kubectl get pods -n velero
NAME                      READY   STATUS    RESTARTS   AGE
velero-xxxxxxxxxxx-xxxx   1/1     Running   0          11s
  7. Create a backup of the PVs and PVCs:
velero backup create <your_backup_name> --include-resources pvc,pv \
    --selector app.kubernetes.io/<your_label_name>=<your_label_value>
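Note: the label selector above only captures PVs and PVCs that carry that exact label. If your volumes are not labeled consistently, a broader alternative is to back up by namespace instead; <your_namespace> is a placeholder for wherever your workload runs:

velero backup create <your_backup_name> --include-namespaces <your_namespace> --include-resources pvc,pv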
  8. Verify that your backup completed successfully with no errors or warnings:
$ velero backup describe <your_backup_name> --details
Name:         your_backup_name
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.21.6-gke.1503
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=21

Phase:  Completed

Errors:    0
Warnings:  0
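Because the GCP plugin backs persistent volumes with Compute Engine disk snapshots, you can also cross-check the snapshots on the GCP side (an optional step):

gcloud compute snapshots list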

Now that the Persistent Volumes are backed up, you can proceed with the migration to the destination cluster by following these steps:

  1. Authenticate against the destination cluster:
gcloud container clusters get-credentials <your_destination_cluster> --zone <your_zone> --project <your_project> 
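Since everything that follows runs against the current kubectl context, it is worth confirming that you are now pointed at the staging cluster:

kubectl config current-context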
  2. Install Velero using the same parameters as in step 5 of the first part:
velero install \
    --provider gcp \
    --plugins velero/velero-plugin-for-gcp:v1.4.0 \
    --bucket $BUCKET \
    --secret-file ./credentials-velero
  3. Confirm that the velero pod is running:
$ kubectl get pods -n velero
NAME                      READY   STATUS    RESTARTS   AGE
velero-xxxxxxxxxx-xxxxx   1/1     Running   0          19s
  4. To avoid the backup data being overwritten, switch the backup storage location to read-only mode:
kubectl patch backupstoragelocation default -n velero \
    --type merge --patch '{"spec":{"accessMode":"ReadOnly"}}'
  5. Confirm that Velero can access the backup in the bucket:
velero backup describe <your_backup_name> --details 
  6. Restore the backed-up volumes:
velero restore create --from-backup <your_backup_name> 
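The restore runs asynchronously. You can watch its progress and check for errors with the restore subcommands; <your_restore_name> is the name printed by the create command above:

velero restore get
velero restore describe <your_restore_name> --details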
  7. Confirm that the persistent volumes have been restored on the destination cluster:
$ kubectl get pvc
NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
redis-data-my-release-redis-master-0      Bound    pvc-ae11172a-13fa-4ac4-95c5-d0a51349d914   8Gi        RWO            standard       79s
redis-data-my-release-redis-replicas-0    Bound    pvc-f2cc7e07-b234-415d-afb0-47dd7b9993e7   8Gi        RWO            standard       79s
redis-data-my-release-redis-replicas-1    Bound    pvc-ef9d116d-2b12-4168-be7f-e30b8d5ccc69   8Gi        RWO            standard       79s
redis-data-my-release-redis-replicas-2    Bound    pvc-65d7471a-7885-46b6-a377-0703e7b01484   8Gi        RWO            standard       79s
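Note: if you plan to keep writing backups to the same bucket, remember to switch the backup storage location back to read-write mode once the restore has completed (the reverse of the earlier patch):

kubectl patch backupstoragelocation default -n velero \
    --type merge --patch '{"spec":{"accessMode":"ReadWrite"}}'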

Check out this tutorial as a reference.
