Given that the `replace` was crawling, I did the following:

1. Ensured that the degraded filesystem was `noauto` in `/etc/fstab`
1. Rebooted the machine (which took about 20 minutes due to I/O hangs)
1. Disabled the LVM VG containing the btrfs fs on the failed drive:

        sudo vgchange -an <failed-vg>

1. Disabled the failed device:

        echo 1 | sudo tee /sys/block/sdb/device/delete

1. Mounted the filesystem with `-o ro,degraded` (`degraded` can only be used [once](https://btrfs.wiki.kernel.org/index.php/Gotchas#raid1_volumes_only_mountable_once_RW_if_degraded))
1. Checked `replace status` and saw it was suspended:

        Started on 26.Jan 00:36:12, suspended on 26.Jan 10:13:30 at 4.1%, 0 write errs, 0 

1. Remounted with `-o remount,rw` and saw the `replace` continue:

        kernel: BTRFS info (device dm-5): continuing dev_replace from <missing disk> (devid 2) to target /dev/mapper/vg6TBd1-ark @4%
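
The steps above (taking the failed hardware offline, then the two-stage mount) can be sketched as shell functions. The VG, disk and device names are hypothetical placeholders based on my setup; substitute your own, and run the steps by hand rather than unattended:

```shell
#!/bin/sh
set -eu

VG='failed-vg'               # hypothetical: the LVM VG on the failed drive
FAILED_DISK='sdb'            # hypothetical: kernel name of the failed disk
MEMBER='/dev/mapper/member'  # hypothetical: any still-present btrfs member
MNT='/mountpoint'

offline_failed_drive() {
    # Deactivate the VG, then tell the kernel to drop the device
    # entirely so nothing keeps retrying I/O against it:
    sudo vgchange -an "$VG"
    echo 1 | sudo tee "/sys/block/$FAILED_DISK/device/delete"
}

resume_replace() {
    # 'degraded' only works once read-write (see the Gotchas link),
    # so mount read-only first and inspect the suspended replace:
    sudo mount -o ro,degraded "$MEMBER" "$MNT"
    sudo btrfs replace status "$MNT"
    # Going read-write lets the suspended replace continue:
    sudo mount -o remount,rw "$MNT"
}
```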


As I'm writing this:

* `replace status` shows a healthy 0.1% progress every 30 seconds or so
* `iostat -d 1 -m <target-dev>` shows about 145 MB/s (Seagate advertises 160 MB/s)
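
For a rough ETA, the observed rate can be turned into hours; this is a back-of-the-envelope sketch using the 0.1%-per-30-seconds figure above and the ~4% already done:

```shell
# percent-per-second rate: 0.1% every ~30 s
full_h=$(awk 'BEGIN { printf "%.1f", (100 / (0.1/30)) / 3600 }')
left_h=$(awk 'BEGIN { printf "%.1f", (96  / (0.1/30)) / 3600 }')
echo "full pass:      ${full_h} h"   # ~8.3 h for 100%
echo "remaining @4%:  ${left_h} h"   # ~8.0 h for the outstanding 96%
```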

----

Update:

After completion, I noticed that `btrfs device usage /mountpoint` was showing some `Data,DUP` and `Metadata,single` chunks rather than only `RAID1`, so I rebalanced:

    btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mountpoint
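
To confirm the conversion took, the profile listings should now show only `RAID1`; a check sketch, assuming the same mountpoint:

```shell
check_raid1_only() {
    # Every Data/Metadata/System line should now read RAID1;
    # no DUP or single chunks should remain:
    sudo btrfs filesystem df /mountpoint
    sudo btrfs device usage /mountpoint
}
```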

Also, consider a `resize` if both devices now have slack:

    btrfs filesystem resize max /mountpoint

I would also recommend a `scrub`, as I had [262016 correctable `csum` errors seemingly related to the interrupted `replace`](https://unix.stackexchange.com/a/496828/143394).
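
A sketch of that, assuming the same mountpoint (`-B` keeps `scrub` in the foreground so its exit status reflects the result):

```shell
scrub_fs() {
    # Read and verify every copy; correctable errors are repaired
    # from the good RAID1 copy as they are found:
    sudo btrfs scrub start -B /mountpoint
    sudo btrfs scrub status /mountpoint
}
```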