Tom Hale

Given that the replace was crawling, I did the following:

  1. Ensured that the degraded filesystem was noauto in /etc/fstab

  2. Rebooted the machine (which took about 20 minutes due to I/O hangs)

  3. Disabled the LVM VG containing the btrfs fs on the failed drive:

     sudo vgchange -an <failed-vg> 
  4. Disabled the failed device:

     echo 1 | sudo tee /sys/block/sdb/device/delete 
  5. Mounted the filesystem -o ro,degraded (degraded can only be used once)

  6. Checked replace status and saw it was suspended:

     Started on 26.Jan 00:36:12, suspended on 26.Jan 10:13:30 at 4.1%, 0 write errs, 0 
  7. Mounted -o remount,rw and saw the replace continue:

     kernel: BTRFS info (device dm-5): continuing dev_replace from <missing disk> (devid 2) to target /dev/mapper/vg6TBd1-ark @4% 
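
The steps above could be scripted roughly as follows. This is only a sketch: the VG name, `/dev/sdb`, the dm target and `/mountpoint` are placeholders from this answer, and the `DRY_RUN` guard (on by default) prints each command instead of executing it, since these commands are destructive.

```shell
# Dry-run sketch of steps 3-7 above. All device, VG and mountpoint names
# are placeholders; substitute your own, then set DRY_RUN=0 to execute.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run vgchange -an failed-vg                                    # step 3: deactivate the VG
run sh -c 'echo 1 > /sys/block/sdb/device/delete'             # step 4: drop the failed device
run mount -o ro,degraded /dev/mapper/vg6TBd1-ark /mountpoint  # step 5: degraded read-only mount
run btrfs replace status /mountpoint                          # step 6: check replace progress
run mount -o remount,rw /mountpoint                           # step 7: remount rw to resume
```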

As I'm writing this:

  • replace status shows a healthy 0.1% progress every 30 seconds or so
  • iostat -d 1 -m <target-dev> shows about 145MB/s (Seagate advertises 160MB/s)

Update:

After completion, I noticed that btrfs device usage /mountpoint was showing some Data,DUP and Metadata,single, rather than only RAID1, so I rebalanced:

btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mountpoint 
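
Because soft conversion skips chunks that already match the target profile, you may want to confirm nothing was left behind. `check_raid1` below is a hypothetical helper, not a btrfs command: it filters `btrfs filesystem df` output for leftover non-RAID1 block groups (GlobalReserve is always `single` and is excluded).

```shell
# Hypothetical helper: report any block-group line that is not RAID1.
# Pipe in the output of: sudo btrfs filesystem df /mountpoint
check_raid1() {
  # GlobalReserve always shows as "single", so it is filtered out too
  grep -Ev 'RAID1|GlobalReserve' || echo "no leftover DUP/single block groups"
}
```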

Also, consider resizing if both devices now contain slack:

btrfs filesystem resize max /mountpoint 

I would also recommend that you scrub, as I had 262016 correctable csum errors as detailed here, seemingly related to the interrupted replace.
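
A sketch of the scrub itself, with `/mountpoint` again a placeholder; the flags are standard btrfs-progs (`-B` runs the scrub in the foreground, `-d` prints per-device statistics). The `DRY_RUN` guard (on by default) prints each command instead of executing it.

```shell
# Sketch only: replace /mountpoint with your real mountpoint,
# then set DRY_RUN=0 to actually run the scrub.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run btrfs scrub start -Bd /mountpoint   # foreground scrub with per-device stats
run btrfs scrub status /mountpoint      # error counts, including corrected csums
```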
