12

I'm trying to delete a bunch of old ZFS snapshots but I get errors saying that the datasets are busy:

[root@pool-01 ~]# zfs list -t snapshot -o name -S creation | grep ^pool/nfs/public/mydir | xargs -n 1 zfs destroy -vr
will destroy pool/nfs/public/mydir@autosnap_2019-02-24_03:13:17_hourly
will reclaim 408M
cannot destroy snapshot pool/nfs/public/mydir@autosnap_2019-02-24_03:13:17_hourly: dataset is busy
will destroy pool/nfs/public/mydir@autosnap_2019-02-24_02:13:17_hourly
will reclaim 409M
cannot destroy snapshot pool/nfs/public/mydir@autosnap_2019-02-24_02:13:17_hourly: dataset is busy
will destroy pool/nfs/public/mydir@autosnap_2019-02-24_01:13:18_hourly
will reclaim 394M

Running lsof shows no processes accessing these snapshots:

[root@pool-01 ~]# lsof | grep pool/nfs/public/mydir 

There also appears to be no holds on any of the snapshots:

[root@pool-01 ~]# zfs holds pool/nfs/public/mydir@autosnap_2019-02-24_03:13:17_hourly
NAME  TAG  TIMESTAMP

Is there anything else I should look out for? Anything else I can do besides a reboot?

6 Answers

4

This appears to be unintended behavior in ZoL (ZFS on Linux). I left the box alone for a few days, finally gave up and rebooted it, and after the reboot I was able to destroy those snapshots.

3

I noticed that my snapshots were indeed busy for some reason: they were all shown in the output of

mount 

So I did something reckless and just cast

sudo umount /.zfs/snapshot/* 

Against all expectations, nothing bad seems to have happened. And then my sudo zfs destroy worked.
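If you want to be a bit more targeted than the blanket glob, here is a minimal sketch. It assumes a Linux box where the snapshot automounts show up in /proc/mounts as filesystem type zfs under a .zfs/snapshot path; the grep patterns are assumptions about how those paths look on your system:

# List only the ZFS snapshot automounts (fstype zfs, mount point under .zfs/snapshot):
grep ' zfs ' /proc/mounts | awk '{print $2}' | grep '/\.zfs/snapshot/'
# Unmount exactly those, instead of everything matched by /.zfs/snapshot/*:
grep ' zfs ' /proc/mounts | awk '{print $2}' | grep '/\.zfs/snapshot/' | xargs -r -n 1 sudo umount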

  • Indeed. I had snapshot visibility enabled for some datasets ("zfs set snapdir=visible zfs/appdata/my_dataset"), which kept them mounted and busy for zfs operations. Commented Nov 26, 2022 at 9:31
3

Originally I used this method to stop a busy dataset so that I could export it for a pool rebuild. I use a ZFS dataset for my /home directory and was unable to find the process that kept it busy. Here is my solution, which should also work for you when you cannot find the process using your dataset:

  1. On all dataset(s) you wish to export (but had trouble exporting), set:

    zfs set canmount=noauto dataset1
    zfs set canmount=noauto dataset2
    ...  # and so on, substituting your datasets' names for dataset1, dataset2, ...

    Setting canmount=noauto ensures that the dataset will not be mounted on reboot (a loop version for many datasets is sketched after this list).

  2. Make a user account (or use the root account) which doesn't use the dataset for /home etc... Give this account sudo privileges.

  3. Reboot and log into the account you just created in step 2. The system should come up without mounting the datasets you modified in step 1, which keeps them out of reach of any daemons/programs.

  4. Since the datasets are now not busy, you can now destroy them and/or their snapshots.

  5. Be sure to run:

    zfs set canmount=on dataset1
    zfs set canmount=on dataset2
    ...

    on any datasets that you want mounted on boot. This is the ZFS default.
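If you have more than a couple of datasets, a small shell loop saves typing. This is only a sketch; dataset1 and dataset2 are placeholders for your own dataset names:

# Before the reboot: keep the datasets from mounting automatically
for ds in dataset1 dataset2; do
    zfs set canmount=noauto "$ds"
done
# After destroying the snapshots: restore the default so they mount on boot again
for ds in dataset1 dataset2; do
    zfs set canmount=on "$ds"
done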

2

I would suggest the "zfs way"; this fixes it for me, and I assume it's considered more correct/clean:

# Use a variable so we need less hard-coding:
THE_DATA_SET=pool/nfs/public/mydir
# Make mounting impossible for a while (this also unmounts it):
zfs set canmount=off $THE_DATA_SET
# Find all snapshots and destroy them:
# ("tail -n +2" is needed to remove the header in the "zfs list" output)
zfs list $THE_DATA_SET -t snapshot -o name | tail -n +2 | xargs -n 1 zfs destroy
# Make mounting possible again and do it:
zfs set canmount=on $THE_DATA_SET
zfs mount $THE_DATA_SET
# And now we no longer need the variable:
unset THE_DATA_SET
  • If you want to be really safe, put an echo in front of zfs destroy to see what would be destroyed. (And if you are happy, run it again without the echo.) Commented Oct 27, 2022 at 9:42
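A minimal dry-run sketch of that suggestion, reusing the THE_DATA_SET variable from the answer above:

# Print the destroy commands instead of executing them:
zfs list $THE_DATA_SET -t snapshot -o name | tail -n +2 | xargs -n 1 echo zfs destroy
# If the list looks right, drop the "echo" and run it for real.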
2

I came across this question as the first search result when I was looking into the same problem.

It turned out my snapshot was held by zfs hold. I had to run

zfs holds <snapshot> 

to get the hold tag name and then

zfs release <tag> <snapshot> 

Same symptoms, different underlying issue. Maybe this will help someone.
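A sketch for checking every snapshot of the dataset from the question in one pass; zfs holds accepts multiple snapshots, -d 1 limits the listing to that dataset's own snapshots, and the tag "keep" below is only an example of what might appear in the TAG column:

# List holds on all snapshots of the dataset:
zfs list -H -t snapshot -o name -d 1 pool/nfs/public/mydir | xargs -r zfs holds
# Release a hold, substituting the TAG printed above:
zfs release keep pool/nfs/public/mydir@autosnap_2019-02-24_03:13:17_hourly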

0

I had one more corner case which is worth sharing.

$ sudo zfs destroy pool/volume-disk-1
cannot destroy 'pool/volume-disk-1': dataset is busy

There was nothing mounted, and no snapshots were holding the ZFS volume:

$ zfs list -t snapshot
no datasets available

Even system reboot did not help!

Still, it turned out that the ZFS volume contained an mdraid member partition, so even after a reboot it was immediately in use again, because of:

$ cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active (auto-read-only) raid1 zd0p3[0]
      324819968 blocks super 1.2 [1/1] [U]

So after stopping the mdraid array:

$ sudo mdadm -S /dev/md127
mdadm: stopped /dev/md127
$ sudo zfs destroy pool/volume-disk-1
$

The zfs volume was finally destroyed.
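If you suspect the same situation, a quick check before destroying the zvol (a sketch; /dev/zd0 and zd0p3 are taken from the mdstat output above and will differ on your system):

# Show what is stacked on top of the zvol (an md device appears under the partition if the array is assembled):
lsblk /dev/zd0
# Inspect a partition for an mdraid superblock:
sudo mdadm --examine /dev/zd0p3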
