Tom Hale

Using replace is the preferred solution, and it is 2-3x faster than balance (device remove rebalances first; perhaps it doesn't use the soft conversion type, which would make it slower).
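As a rough sketch, the two approaches compare like this (the devid and device names are placeholders for your own):

```shell
# Preferred: stream data directly onto the new disk
btrfs replace start -B <failed-devid> /dev/new-disk /mountpoint

# Alternative: add the new disk, then remove the failed one,
# which implicitly rebalances every chunk (slower)
btrfs device add /dev/new-disk /mountpoint
btrfs device remove <failed-devid> /mountpoint
```

Both commands need root and a mounted (possibly degraded) filesystem, so they are shown here only for comparison.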

This answer prevents the failed disk from blocking kernel I/O.

I did the following:

  1. Ensured that the degraded filesystem was noauto in /etc/fstab

  2. Rebooted the machine (which took about 20 minutes due to I/O hangs)

  3. Deactivated the LVM volume group containing the btrfs filesystem on the failed drive:

     sudo vgchange -an <failed-vg> 
  4. Disabled the failed device:

     echo 1 | sudo tee /sys/block/sdb/device/delete 
  5. Mounted the filesystem with -o rw,degraded (note: degraded can only be used once):

     sudo mount -o rw,degraded <remaining-dev> /mountpoint

  6. Got the failed devid from:

     btrfs filesystem show /mountpoint 
  7. Started the replacement, waiting in the foreground (-B) until it finished:

     sudo btrfs replace start -B <devid> /dev/new-disk /mountpoint
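For step 1, the relevant /etc/fstab line might look like this (the UUID and mountpoint are placeholders):

```
UUID=<fs-uuid>  /mountpoint  btrfs  defaults,noauto  0  0
```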
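For step 6, the failed device shows up as MISSING in the output, and its devid can be pulled out with awk. The sample output below is hypothetical; on a real system you would capture the live command output instead:

```shell
# Hypothetical 'btrfs filesystem show /mountpoint' output for a
# two-device filesystem whose second device has failed:
show_output='Label: none  uuid: <fs-uuid>
	Total devices 2 FS bytes used 100.00GiB
	devid    1 size 1.82TiB used 101.00GiB path /dev/sda1
	devid    2 size 0 used 0 path  MISSING'

# On a real system:
#   show_output=$(sudo btrfs filesystem show /mountpoint)

# The devid is the second field on the line marked MISSING
failed_devid=$(printf '%s\n' "$show_output" | awk '/MISSING/ {print $2}')
echo "$failed_devid"    # prints 2
```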

As I'm writing this:

  • btrfs replace status /mountpoint shows a healthy 0.1% of progress every 30 seconds or so
  • iostat -d 1 -m <target-dev> shows about 145 MB/s of writes (Seagate advertises 160 MB/s)
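The monitoring commands above, spelled out (mountpoint and target device are placeholders; both need a live filesystem, so they are shown only as a sketch):

```shell
# Report replace progress; refreshes continuously,
# or pass -1 to print a single status line and exit
sudo btrfs replace status /mountpoint

# Per-device throughput in MB/s, sampled every second
iostat -d 1 -m /dev/new-disk
```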
