So, I started getting filesystem errors on my 4-drive btrfs RAID10 array, and btrfsck is unable to repair them (it goes into an infinite loop, producing the same output for days on end).
The great majority of the data is still readable, so rebuilding the filesystem seems to be the most sensible way forward.
Given that there are no spare drives on hand, the plan so far looks something like this:
- Drop redundancy and "convert" RAID10 to RAID0, freeing up two drives;
- Format the newly-freed drives anew;
- Copy over readable data from the old filesystem to the new one (rsync or btrfs send | btrfs receive);
- Nuke the old filesystem;
- Add the old drives to and rebalance the new filesystem, getting it back to RAID10.
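For concreteness, here is my rough sketch of the whole procedure. Everything here is an assumption on my part: the mount points (/mnt/old, /mnt/new) and device names (/dev/sd[a-d]) are placeholders, and step 1 is exactly the part I am unsure about.

```shell
# Step 1 (the uncertain part): convert both data and metadata to RAID0
# so the allocator no longer insists on four devices, then remove two.
btrfs balance start -dconvert=raid0 -mconvert=raid0 /mnt/old
btrfs device remove /dev/sdc /dev/sdd /mnt/old

# Step 2: make a fresh filesystem on the freed drives and copy data over.
mkfs.btrfs -d raid0 -m raid0 /dev/sdc /dev/sdd
mount /dev/sdc /mnt/new
rsync -aHAX /mnt/old/ /mnt/new/

# Steps 3-4: nuke the old filesystem, add its drives to the new one,
# and rebalance back to RAID10.
umount /mnt/old
wipefs -a /dev/sda /dev/sdb
btrfs device add /dev/sda /dev/sdb /mnt/new
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/new
```

(I realize RAID0 metadata on a failing array is living dangerously; -mconvert=single or raid1 for the intermediate step may be the saner choice.)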
The question is how to do the first step. As I understand it, btrfs device delete is not suitable here because it will keep trying to satisfy the RAID10 profile, which requires at least four devices. And how do I find which two drives can be removed so as to minimize the amount of data shuffling?
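For the second question, I assume the starting point is the per-device allocation figures, on the theory that the two drives with the least allocated space are the cheapest to empty:

```shell
# Show allocated vs. unallocated space per device on the (placeholder)
# mount point /mnt/old, plus the overall device layout.
btrfs device usage /mnt/old
btrfs filesystem show /mnt/old
```

Whether that theory actually holds for RAID10, where chunks are striped across all four devices, is part of what I am asking.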