If you create your btrfs filesystem with 2 hard disks, with raid1 metadata and raid1 data, for example:

mkfs.btrfs -L Test -m raid1 -d raid1 /dev/sda /dev/sdb

then all files will be stored twice (one copy of each file on each hard disk). If you remove one hard disk, you can mount the remaining hard disk attached to your PC with:

mount -o degraded /dev/sda /mnt/Test

and recover your data from there.
Now, you won't be able to recover your data if you created the filesystem with 3 hard disks added at once, with raid1 data and raid1 metadata, such as:

mkfs.btrfs -L Test2 -m raid1 -d raid1 /dev/sda /dev/sdb /dev/sdc

In this setup, the data is divided and spread across all the hard disks, so a 1 GB file will be stored as roughly 333 MB on sda, 333 MB on sdb, and 333 MB on sdc. Sure, you can remove 1 hard disk and mount the remaining 1st or 2nd hard disk degraded, but the ~333 MB on the hard disk which is not connected is not going to be there. So just stick with 2 hard disks, run the scrub command daily, and all is good.
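The daily scrub suggested above can be run by hand or scheduled with cron. A minimal sketch, assuming the filesystem is mounted at /mnt/Test (adjust the path to yours); the crontab line is a config fragment, not something to paste blindly:

```shell
# Start a scrub (reads all data/metadata and repairs bad copies from a good mirror):
btrfs scrub start /mnt/Test

# Check progress and results of the most recent scrub:
btrfs scrub status /mnt/Test

# Root crontab entry to scrub daily at 03:00 (-B runs it in the foreground
# so cron only finishes when the scrub does):
# 0 3 * * * /usr/bin/btrfs scrub start -B /mnt/Test
```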
GParted shows the true disk usage, while the btrfs filesystem show command shows somewhat misleading disk usage, so check disk usage in GParted.
I have been testing btrfs; this test was done on btrfs 5.4.1.
Update: I just came to know that btrfs with raid1c3 can store 3 copies, which is meant for 3 hard disks, and raid1c4 stores 4 copies, which is meant for 4 hard disks.
It stores a full copy of the files and metadata on every hard disk, without splitting files.
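The space cost of those extra copies can be sketched with a bit of shell arithmetic. The profile-to-copy-count mapping (raid1 = 2 copies, raid1c3 = 3, raid1c4 = 4) is standard btrfs behavior; the disk size below is a made-up number for illustration:

```shell
# Usable capacity of a btrfs mirror profile is roughly
# (total raw space) / (number of copies), assuming equal-sized disks.
disk_gb=1000   # hypothetical: each disk is 1000 GB
disks=3        # raid1c3 example: 3 disks
copies=3       # raid1c3 keeps 3 copies of everything
usable=$(( disk_gb * disks / copies ))
echo "raid1c3 on $disks x ${disk_gb}GB disks: ~${usable}GB usable"
```

So with full copies on every disk, three 1000 GB disks still give only about 1000 GB of usable space; that is the price of being able to lose two of them.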
We just need to use

mkfs.btrfs -L Test -m raid1c3 -d raid1c3 /dev/sda /dev/sdb /dev/sdc

to create the filesystem for 3 hard disks (this will delete/format all files on those hard disks), or -m raid1c4 -d raid1c4 when creating the filesystem for 4 hard disks.
In raid1c3 or raid1c4, data can be recovered by mounting with the degraded option even if 1 or 2 of the other hard disks are missing.
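Putting the recommendations in this post together, here is a small helper of my own (illustrative only, not part of btrfs) that maps a disk count to the profile suggested above:

```shell
# Suggest a mirror profile for N equal disks, per the advice in this post:
# 2 disks -> raid1, 3 -> raid1c3, 4 -> raid1c4 (full copies, no splitting).
profile_for() {
  case "$1" in
    2) echo raid1 ;;
    3) echo raid1c3 ;;
    4) echo raid1c4 ;;
    *) echo "unsupported disk count: $1" >&2; return 1 ;;
  esac
}

# Example: assemble the mkfs command for three disks
# (device names are placeholders; mkfs.btrfs will wipe them):
n=3
echo "mkfs.btrfs -L Test -m $(profile_for $n) -d $(profile_for $n) /dev/sda /dev/sdb /dev/sdc"
```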