
I have this situation:

zfs destroy -r rpool@1-9-2025
cannot destroy snapshot rpool/ROOT/nas@1-9-2025: dataset is busy

So...

umount -v /root/.zfs/snapshot/1-9-2025
umount: /root/.zfs/snapshot/1-9-2025 (rpool/ROOT/nas@1-9-2025) unmounted

Unmounted? Apparently not:

df -h /root/.zfs/snapshot/1-9-2025
Filesystem               Size  Used  Avail  Use%  Mounted on
rpool/ROOT/nas@1-9-2025  5.9T  2.6G  5.9T   1%    /root/.zfs/snapshot/1-9-2025

How can I unmount it and destroy the snapshot?

lsof reports nothing using it, but it remounts immediately:

lsof /root/.zfs/snapshot/1-9-2025 
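
One plausible explanation for the immediate remount (an assumption on my part, based on ZFS's snapshot automount behaviour): any access to a path under .zfs/snapshot, including the stat that df or lsof performs, makes the kernel mount the snapshot again. Roughly:

umount /root/.zfs/snapshot/1-9-2025
mount | grep 1-9-2025                # no output: really unmounted
stat /root/.zfs/snapshot/1-9-2025    # any access re-triggers the automount
mount | grep 1-9-2025                # mounted again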

There are no holds on the dataset:

zfs holds rpool@1-9-2025
NAME  TAG  TIMESTAMP
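
Note that this only checks the top-level snapshot; the snapshot that is actually busy is rpool/ROOT/nas@1-9-2025. A direct and a recursive check would be more conclusive (per man zfs-hold, -r lists holds on descendant snapshots):

zfs holds rpool/ROOT/nas@1-9-2025    # the snapshot that is actually busy
zfs holds -r rpool@1-9-2025          # -r also covers descendant snapshots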

As a workaround, instead of using -r I destroy the snapshots one by one:

zfs destroy rpool/home@1-9-2025
zfs destroy rpool/opt@1-9-2025
zfs destroy rpool/root@1-9-2025

but some are still busy:

zfs list -t snapshot|grep 1-9-2025|cut -d ' ' -f 1
rpool/ROOT/nas@1-9-2025
zfs destroy rpool/ROOT/nas@1-9-2025
cannot destroy snapshot rpool/ROOT/nas@1-9-2025: dataset is busy
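
For reference, a slightly more robust form of that one-by-one loop (a sketch; zfs list -H -o name gives clean, script-friendly output instead of relying on cut):

# Destroy every snapshot named 1-9-2025, one dataset at a time
zfs list -H -o name -t snapshot | grep '@1-9-2025$' | while read -r snap; do
    zfs destroy "$snap" || echo "still busy: $snap"
done
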
  • I think you're misinterpreting "dataset is busy" - it doesn't necessarily mean that the snapshot is mounted (and even if it were, you should use zfs umount rather than umount) - the .zfs/ dir at the top level of a dataset (and everything under it) is controlled by the dataset's snapdir property - see man zfsprops and man zfsconcepts. The snapshot might also be held, needing to be released before destroying it - see man zfs-hold for details, and the sketch after these comments. Commented Oct 2 at 14:43
  • using zfs umount gives me the same situation Commented Oct 2 at 14:47
  • IIRC, you're using an ancient version of ZFS? If so, try rebooting; that may allow you to destroy the snapshot. BTW, I recommend using YYYY-MM-DD instead of D-M-YYYY for snapshot names: it's standard, unambiguous, and sorts correctly. Commented Oct 2 at 14:49
  • Is the nas dataset mounted on another machine? If so, is there some process using the .zfs/snapshot/1-9-2025 snapshot directory in the nas dataset? Maybe a backup or restore process, but all it takes is a possibly-forgotten shell with that dir as its cwd. Commented Oct 2 at 14:58
  • Also worth noting: if the nas dataset is a backup of another dataset (either remote or local), you should make sure that the canmount property is set to noauto, e.g. by using -o canmount=noauto with the zfs recv command, or by modifying your backup script to run zfs set canmount=noauto on all backed-up datasets after the backup has completed (see the sketch below). Commented Oct 2 at 15:02
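
Putting the comment suggestions into commands (a sketch; the hold tag and stream file names are hypothetical placeholders):

# Release a hold, if the holds output had listed one
# ("some_tag" is hypothetical: use whatever tag zfs holds reports)
zfs release -r some_tag rpool@1-9-2025

# If nas is a received backup, keep it from auto-mounting
# ("backup.stream" is a hypothetical stream file)
zfs recv -o canmount=noauto rpool/ROOT/nas < backup.stream
zfs set canmount=noauto rpool/ROOT/nas    # or set the property afterwards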

1 Answer


Solution found.

It seems a lot of mounts had happened (I don't know why):

mount|grep snap|wc -l
1557

So...

umount -R /root/.zfs/snapshot/1-9-2025 

And then...

zfs destroy -r rpool/ROOT/nas@1-9-2025 

:)
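
For the record, a more general form of the same cleanup, in case stray automounts are spread across several snapshot directories (a sketch; it assumes GNU xargs and mountpoints without whitespace):

# Unmount every lingering .zfs/snapshot automount, deepest paths first,
# then retry the recursive destroy
mount | awk '$3 ~ /\.zfs\/snapshot/ {print $3}' | sort -r | xargs -r -n1 umount
zfs destroy -r rpool@1-9-2025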

  • There's something very odd going on here. Snapshots aren't normally kept mounted. You should find out why they're being mounted (the prime candidate is whatever snapshot + backup script you're using) and stop it from doing that, otherwise you'll just have the same problem again some day. Commented Oct 2 at 15:08
  • Yes, I know. Meanwhile I have a working workaround, without needing a reboot. Commented Oct 2 at 15:09
