If you create your btrfs filesystem with 2 hard disks, using raid1 metadata and raid1 data, for example:

mkfs.btrfs -L Test -m raid1 -d raid1 /dev/sda /dev/sdb 
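After creating it, you can confirm that both data and metadata really use the raid1 profile (a quick sketch; the mount point /mnt/Test is an assumption):

```shell
# Mount the new filesystem (either member device works)
mount /dev/sda /mnt/Test

# Show the block group profiles in use; the output should list
# "Data, RAID1" and "Metadata, RAID1" lines
btrfs filesystem df /mnt/Test
```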

With two hard disks, every file is stored twice (one copy on each hard disk). If you remove one hard disk, you can still mount the remaining attached disk with:

mount -o degraded /dev/sda /mnt/Test 

and recover your data from it.
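Beyond just copying files off, the missing device can also be replaced to get back to a healthy two-disk raid1 (a sketch, assuming the replacement disk is /dev/sdc and the missing device's devid is 2, which you would confirm from the filesystem listing):

```shell
# Mount the surviving disk read-write in degraded mode
mount -o degraded /dev/sda /mnt/Test

# Find the devid of the missing device ("missing" is shown in the list)
btrfs filesystem show /mnt/Test

# Replace the missing device (devid 2 assumed) with the new disk,
# then watch the rebuild progress
btrfs replace start 2 /dev/sdc /mnt/Test
btrfs replace status /mnt/Test
```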


Now, you won't be able to recover all your data if you ran mkfs.btrfs with 3 hard disks at a time, still using raid1 data and raid1 metadata, such as:

mkfs.btrfs -L Test2 -m raid1 -d raid1 /dev/sda /dev/sdb /dev/sdc 

In this setting, data is divided and spread across all the hard disks, so a 1 GB file will be stored roughly as:

333 MB on sda, 333 MB on sdb, 333 MB on sdc 

Sure, you can remove 1 hard disk and mount the remaining 1st or 2nd hard disk in degraded mode, but the ~333 MB on the hard disk which is not connected is not going to be there. So just stick with 2 hard disks, run the scrub command daily, and all is good.
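The daily scrub mentioned above can be run and scheduled like this (a sketch; /mnt/Test is an assumed mount point):

```shell
# Start a scrub; by default it runs in the background
btrfs scrub start /mnt/Test

# Check progress and any corrected or uncorrectable errors
btrfs scrub status /mnt/Test

# To run it daily, a root crontab entry could look like:
#   0 3 * * * /usr/bin/btrfs scrub start -B -q /mnt/Test
# (-B runs in the foreground so cron sees the exit status, -q is quiet)
```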


GParted shows the true disk usage, while the "btrfs filesystem show" command reports somewhat misleading usage, so check disk usage in GParted.
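If you prefer to stay on the command line, the usage subcommands break the numbers down per profile and per device, which is clearer than the raw "btrfs filesystem show" totals (assuming the filesystem is mounted at /mnt/Test):

```shell
# Per-profile breakdown: raw allocated space vs. estimated usable space
btrfs filesystem usage /mnt/Test

# How much of each member disk is allocated
btrfs device usage /mnt/Test
```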

I have been testing btrfs and this test was done on btrfs 5.4.1

Update: I just came to know that btrfs with raid1c3 can store 3 copies, which is meant for 3 hard disks, and raid1c4 stores 4 copies, which is meant for 4 hard disks.

They store a full copy of the files and metadata on every hard disk, without splitting files.

We just need to use

mkfs.btrfs -L Test -m raid1c3 -d raid1c3 /dev/sda /dev/sdb /dev/sdc

to create the filesystem for 3 hard disks (it will delete/format all files on those hard disks when creating it),

or -m raid1c4 -d raid1c4 when creating the filesystem for 4 hard disks.


With raid1c3 or raid1c4, data can be recovered by mounting a hard disk with the degraded option even if the other 1 or 2 hard disks are missing.

If you create your btrfs with 2 hard disks with raid1 metadata and raid1 data that is, example below

mkfs.btrfs -L Test -m raid1 -d raid1 /dev/sda /dev/sdb 

In this case of two hard disks all files will be stored 2 times ( one copy of each file on each hard disk ) and if you remove one hard disk

You can mount (the attached hard disk to ur PC) with:

mount -o degraded /dev/sda /mnt/Test 

You can recover your data here


Now you won't be able to recover your data if you created mkfs.btrfs| and added 3 hard disk at a time with a raid1data andraid1` metadata such as

mkfs.btrfs -L Test2 -m raid1 -d raid1 /dev/sda /dev/sdb /dev/sdc 

In this setting, data is divided and spread on all hard disk such as a 1gb file will be stored as

333 mb on sda 333 mb on sdb 333 mb on sdc 

Sure you can remove 1 hard disk and mount the remaining 1st or 2nd hard disk in degraded but that 300 mb in hard disk which is not connected is not going to be there so just stick with 2 hard disk , do scrub command daily , and all good


The GParted shows true disk usage and btrfs filesystem show command shows somewhat fake disk usage so check disk usage in GParted.

I have been testing btrfs and this test was done on btrfs 5.4.1

If you create your btrfs with 2 hard disks with raid1 metadata and raid1 data that is, example below

mkfs.btrfs -L Test -m raid1 -d raid1 /dev/sda /dev/sdb 

In this case of two hard disks all files will be stored 2 times ( one copy of each file on each hard disk ) and if you remove one hard disk

You can mount (the attached hard disk to ur PC) with:

mount -o degraded /dev/sda /mnt/Test 

You can recover your data here


Now you won't be able to recover your data if you created mkfs.btrfs| and added 3 hard disk at a time with a raid1data andraid1` metadata such as

mkfs.btrfs -L Test2 -m raid1 -d raid1 /dev/sda /dev/sdb /dev/sdc 

In this setting, data is divided and spread on all hard disk such as a 1gb file will be stored as

333 mb on sda 333 mb on sdb 333 mb on sdc 

Sure you can remove 1 hard disk and mount the remaining 1st or 2nd hard disk in degraded but that 300 mb in hard disk which is not connected is not going to be there so just stick with 2 hard disk , do scrub command daily , and all good


The GParted shows true disk usage and btrfs filesystem show command shows somewhat fake disk usage so check disk usage in GParted.

I have been testing btrfs and this test was done on btrfs 5.4.1

update : i just came to know btrfs with raid1c3 can store 3 copies which is ment for 3 hard disk and raid1c4 stores 4 copies which is ment for 4 hard disk

it stores full copy of files and metadata on all hard disks without splitting files

we just need to use

btrfs -L Test -m raid1c3 and -d raid1c3 /dev/sda /dev/sdb /dev/sdc

for creating filesystem for 3 hard disks ( it will delete/format all files in those hard disks when creating this )

or -m raid1c4 -d raidc4 when creating filesystem for 4 hard disks


in raid1c3 or raid1c4 , data can be recovered by mounting the hard disk in degraded option even if other 1 or 2 hard disk are missing

Improved formatting, spelling
Source Link
Edgar Magallon
  • 5.2k
  • 3
  • 15
  • 29

see here if uIf you create ur btrfsyour btrfs with 2 hard disks with raid1raid1 metadata and raid1raid1 data that is  , example below

mkfs.btrfs -L Test -m raid1 -d raid1 /dev/sda /dev/sdb

mkfs.btrfs -L Test -m raid1 -d raid1 /dev/sda /dev/sdb 

inIn this case of two hard disks all files will be stored 2 times ( one copy of each file on each hard disk ) and if uyou remove one hard disk

uYou can mount (the attached hard disk to ur PC) with mount -o degraded /dev/sda /mnt/Test:

mount -o degraded /dev/sda /mnt/Test 

uYou can recover uryour data here


nowNow you wontwon't be able to recover uryour data if

u you created mkfs.btrfs and added 3 hard disk at a time with a raid1 data and raid1mkfs.btrfs| and added 3 hard disk at a time with a raid1data andraid1` metadata such as

mkfs.btrfs -L Test2 -m raid1 -d raid1 /dev/sda /dev/sdb /dev/sdc

mkfs.btrfs -L Test2 -m raid1 -d raid1 /dev/sda /dev/sdb /dev/sdc 

inIn this setting  , data is dividedivided and spread on all hard disk such as a 1gb file will be stored as

333 mb on sda 333 mb on sdb 333 mb on sdc

333 mb on sda 333 mb on sdb 333 mb on sdc 

sure uSure you can remove 1 hard disk and mount the remaining 1st or 2nd hard disk in degraded but that 300 mb in hard disk which is not connected is not going to be there so

  just stick with 2 hard disk , do scrub command daily , and all good


the gepartedThe GParted shows true disk usage and "btrfs filesystem show" btrfs filesystem show command shows somewhat fake disk usage so

  check disk usage in geparted GParted.

iI have been testing btrfsbtrfs and this test was done on btrfs 5.4.1

see here if u create ur btrfs with 2 hard disks with raid1 metadata and raid1 data that is  , example below

mkfs.btrfs -L Test -m raid1 -d raid1 /dev/sda /dev/sdb

in this case of two hard disks all files will be stored 2 times ( one copy of each file on each hard disk ) and if u remove one hard disk

u can mount (the attached hard disk to ur PC) with mount -o degraded /dev/sda /mnt/Test

u can recover ur data here


now you wont be able to recover ur data if

u created mkfs.btrfs and added 3 hard disk at a time with a raid1 data and raid1 metadata such as

mkfs.btrfs -L Test2 -m raid1 -d raid1 /dev/sda /dev/sdb /dev/sdc

in this setting  , data is divide and spread on all hard disk such as a 1gb file will be stored as

333 mb on sda 333 mb on sdb 333 mb on sdc

sure u can remove 1 hard disk and mount the remaining 1st or 2nd hard disk in degraded but that 300 mb in hard disk which is not connected is not going to be there so

  just stick with 2 hard disk , do scrub command daily , and all good


the geparted shows true disk usage and "btrfs filesystem show" command shows somewhat fake disk usage so

  check disk usage in geparted .

i have been testing btrfs and this test was done on btrfs 5.4.1

If you create your btrfs with 2 hard disks with raid1 metadata and raid1 data that is, example below

mkfs.btrfs -L Test -m raid1 -d raid1 /dev/sda /dev/sdb 

In this case of two hard disks all files will be stored 2 times ( one copy of each file on each hard disk ) and if you remove one hard disk

You can mount (the attached hard disk to ur PC) with:

mount -o degraded /dev/sda /mnt/Test 

You can recover your data here


Now you won't be able to recover your data if you created mkfs.btrfs| and added 3 hard disk at a time with a raid1data andraid1` metadata such as

mkfs.btrfs -L Test2 -m raid1 -d raid1 /dev/sda /dev/sdb /dev/sdc 

In this setting, data is divided and spread on all hard disk such as a 1gb file will be stored as

333 mb on sda 333 mb on sdb 333 mb on sdc 

Sure you can remove 1 hard disk and mount the remaining 1st or 2nd hard disk in degraded but that 300 mb in hard disk which is not connected is not going to be there so just stick with 2 hard disk , do scrub command daily , and all good


The GParted shows true disk usage and btrfs filesystem show command shows somewhat fake disk usage so check disk usage in GParted.

I have been testing btrfs and this test was done on btrfs 5.4.1

edited body
Source Link

see here if u create ur btrfs with 2 hard disks with raid1 metadata and raid1 data that is , example below

mkfs.btrfs -L Test -m raid1 -d raid1 /dev/sda /dev/sdb

in this case of two hard disks all files will be stored 2 times ( one copy of each file on each hard disk ) and if u remove one hard disk

u can mount (the attached hard disk to ur PC) with mount -o degraded /dev/sda /mnt/Test

u can recover ur data here


now you wont be able to recover ur data if

u created mkfs.btrfs and added 3 hard disk at a time with a raid1 data and raid1 metadata such as

mkfs.btrfs -L Test2 -m raid1 -d raid1 /dev/sda /dev/sdb /dev/sdc

in this setting , data is divide and spread on all hard disk such as a 1gb file will be stored as

333 mb on sda 333 mb on sdb 333 mb on sdc

sure u can remove 1 hard disk and mount the remaining 1st or 2nd hard disk in degraded but that 300 mb in hard disk which is not connected is not going to be there so

just stick with 2 hard disk , do scrub command daily , and all good


the geparted shows true disk usage and "btrfs filesystem show" command shows somewhat fake disk usage so

check disk usage in geparted .

i have been testing byrfsbtrfs and this test was done on btrfs 5.4.1

see here if u create ur btrfs with 2 hard disks with raid1 metadata and raid1 data that is , example below

mkfs.btrfs -L Test -m raid1 -d raid1 /dev/sda /dev/sdb

in this case of two hard disks all files will be stored 2 times ( one copy of each file on each hard disk ) and if u remove one hard disk

u can mount (the attached hard disk to ur PC) with mount -o degraded /dev/sda /mnt/Test

u can recover ur data here


now you wont be able to recover ur data if

u created mkfs.btrfs and added 3 hard disk at a time with a raid1 data and raid1 metadata such as

mkfs.btrfs -L Test2 -m raid1 -d raid1 /dev/sda /dev/sdb /dev/sdc

in this setting , data is divide and spread on all hard disk such as a 1gb file will be stored as

333 mb on sda 333 mb on sdb 333 mb on sdc

sure u can remove 1 hard disk and mount the remaining 1st or 2nd hard disk in degraded but that 300 mb in hard disk which is not connected is not going to be there so

just stick with 2 hard disk , do scrub command daily , and all good


the geparted shows true disk usage and "btrfs filesystem show" command shows somewhat fake disk usage so

check disk usage in geparted .

i have been testing byrfs and this test was done on btrfs 5.4.1

see here if u create ur btrfs with 2 hard disks with raid1 metadata and raid1 data that is , example below

mkfs.btrfs -L Test -m raid1 -d raid1 /dev/sda /dev/sdb

in this case of two hard disks all files will be stored 2 times ( one copy of each file on each hard disk ) and if u remove one hard disk

u can mount (the attached hard disk to ur PC) with mount -o degraded /dev/sda /mnt/Test

u can recover ur data here


now you wont be able to recover ur data if

u created mkfs.btrfs and added 3 hard disk at a time with a raid1 data and raid1 metadata such as

mkfs.btrfs -L Test2 -m raid1 -d raid1 /dev/sda /dev/sdb /dev/sdc

in this setting , data is divide and spread on all hard disk such as a 1gb file will be stored as

333 mb on sda 333 mb on sdb 333 mb on sdc

sure u can remove 1 hard disk and mount the remaining 1st or 2nd hard disk in degraded but that 300 mb in hard disk which is not connected is not going to be there so

just stick with 2 hard disk , do scrub command daily , and all good


the geparted shows true disk usage and "btrfs filesystem show" command shows somewhat fake disk usage so

check disk usage in geparted .

i have been testing btrfs and this test was done on btrfs 5.4.1

added 85 characters in body
Source Link
Loading
added 270 characters in body
Source Link
Loading
Source Link
Loading