
After the PVC is deleted, the associated PV still shows the claim. This might be expected behavior, but for my use case it must return to the "Available" status. When I remove the claim object in the PV manifest by hand, it can be claimed by a different pod again, which is what I want.

CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM
10Gi       RWO            Retain           Released   arc-runners/k8s-runner-set-5tsbc-runner-qpz97-runner-cache

CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM
10Gi       RWO            Retain           Available
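For reference, the manual workaround I currently use to clear the claim is roughly the following (PV name taken from the manifest further below; the merge patch just nulls out spec.claimRef):

    kubectl patch pv github-runner-slot1-pv --type merge -p '{"spec":{"claimRef":null}}'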

Is there an option to do so in a proper Kubernetes manner?

More context:

In the pod spec I'm using an ephemeral volumeClaimTemplate. I'm forced to do so because I need to integrate it into this Helm chart. Here is the part of helm/values.yaml where the pod spec can be overridden:

...
volumes:
  - name: runner-cache
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "cache-dir-dispatcher"
          resources:
            requests:
              storage: 10Gi

I have multiple statically defined volumes like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: github-runner-slot1-pv
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Retain
  storageClassName: cache-dir-dispatcher
  ...

Claiming works properly until every PV ends up in the "Released" status. Once I delete the claim entry in the manifest of the PV, it turns to "Available" again, which works well.
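Concretely, the entry I delete is the spec.claimRef block that Kubernetes adds when the PV gets bound. It looks roughly like this (namespace/name taken from the output above, uid illustrative):

    spec:
      claimRef:
        apiVersion: v1
        kind: PersistentVolumeClaim
        namespace: arc-runners
        name: k8s-runner-set-5tsbc-runner-qpz97-runner-cache
        uid: <uid of the now-deleted PVC>   # illustrative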

3 Comments
  • What is the abstract problem you are trying to solve here? Are you trying to get separate CI runners to reuse a volume and the data on the volume? Kubernetes explicitly guards against this, and ARC leverages this to make sure each workflow has a clean volume to work on. Commented Sep 18, 2024 at 11:08
  • Thanks for your reply. I want these volumes to be used as a local cache for each runner. But since I have ephemeral self-hosted github-runners, I kinda need to dispatch the volumes. Commented Sep 18, 2024 at 13:05
  • Every new runner is clean, except for the directory I want to mount for the local cache, which comes from the volume we're talking about. Commented Sep 18, 2024 at 13:11

1 Answer


This is intended behavior, for safety reasons: when the reclaim policy is set to Retain and the claim goes away (for example because the pod and its ephemeral PVC were deleted, perhaps accidentally), the system does not automatically release the PV and make it available again. The Released status signals that user action may be required to copy or purge the data on the disk, so it is up to the user to clear the claimRef from the PV when they are ready to make the disk available again. There is no built-in feature to automatically release the PV after the PVC is deleted; there are plenty of GitHub issues about it.
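If you want to automate that manual step yourself, a sketch could look like the following (this is not a built-in feature; it assumes kubectl and jq are available and uses the cache-dir-dispatcher storage class from your question):

    # Clear the claimRef of every "Released" PV of the cache-dir-dispatcher storage class
    kubectl get pv -o json \
      | jq -r '.items[]
               | select(.status.phase == "Released" and .spec.storageClassName == "cache-dir-dispatcher")
               | .metadata.name' \
      | while read -r pv; do
          kubectl patch pv "$pv" --type merge -p '{"spec":{"claimRef":null}}'
        done

You could run something like this as a CronJob; it is essentially just automating the manual claimRef cleanup you are already doing.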

With a StatefulSet, however, there is a .spec.persistentVolumeClaimRetentionPolicy field which controls if and how PVCs are deleted during the lifecycle of the StatefulSet. You can find more information about this in this document.
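For reference, on a StatefulSet that field looks like this (illustrative names):

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: runner-cache-example   # illustrative
    spec:
      persistentVolumeClaimRetentionPolicy:
        whenDeleted: Delete   # delete the PVCs when the StatefulSet is deleted
        whenScaled: Delete    # delete the PVCs when the StatefulSet is scaled down
      ...

Note that this only controls the PVCs created by the StatefulSet; with reclaim policy Retain, the underlying PV will still go to Released until its claimRef is cleared.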


4 Comments

As per this answer by Dee, there is an option he built which you can study and implement if required.
Thanks for your reply. To be clear: the PVCs are being deleted properly, that is not the problem, but the claim on the PV remains. Unfortunately I can only change the pod spec, as I wrote. See github.com/actions/actions-runner-controller/blob/master/charts/…
Dee's answer sounds promising. I'll take a look.
Well, Dee's answer is exactly my use case and I tested it, works perfectly :) Thank you.
