We are moving an installation from mdadm RAID with LVM on top to pure LVM, so we are adding the original RAID member partitions as independent physical volumes to the LVM volume group, something like:
# lvcreate -L 240G -n thin pve /dev/nvme0n1p2
  Logical volume "thin" created.

Then we add a mirror to it on the other disk/partition:
# lvconvert --type raid1 --mirrors 1 pve/thin /dev/nvme1n1p2
  Logical volume pve/thin successfully converted.

As we use thin pool storage for LXC, we assumed we could then simply convert it to a thin pool:
# lvconvert --type thin-pool pve/thin
  Converted pve/thin to thin pool.

Everything seemed to work, but we are uncertain whether this last conversion undid the previous one, because when we list with lvs we get:
  thin pve twi-a-tz-- 240,00g 0,00 10,42

Attribute characters 1 and 7 show this is a thin pool, but there is no mention of raid1 and no value in the Cpy%Sync column.
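For reference, the lv_attr positions can be decoded by hand from lvs(8): position 1 is the volume type, position 7 the target type. A tiny illustration of reading position 1 (the helper name lv_attr_type is ours, not an lvm2 command):

```shell
# Decode the first lv_attr character (volume type), per lvs(8):
#   t = thin pool, r = raid, V = thin volume.
# lv_attr_type is a hypothetical helper for illustration only.
lv_attr_type() {
  case $(printf '%.1s' "$1") in
    t) echo "thin pool" ;;
    r) echo "raid" ;;
    V) echo "thin volume" ;;
    *) echo "other" ;;
  esac
}

lv_attr_type twi-a-tz--   # prints: thin pool
```

So the top-level attrs above only tell us the LV is now a pool; they say nothing about what segment type backs it.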
However, lvs -a -o +devices does show it being mirrored across the two partitions:
  [thin_tdata]          pve rwi-aor--- 240,00g 24,17 thin_tdata_rimage_0(0),thin_tdata_rimage_1(0)
  [thin_tdata_rimage_0] pve iwi-aor--- 240,00g       /dev/nvme0n1p2(67074)
  [thin_tdata_rimage_1] pve Iwi-aor--- 240,00g       /dev/nvme1n1p2(67075)
  [thin_tdata_rmeta_0]  pve ewi-aor---   4,00m       /dev/nvme0n1p2(128514)
  [thin_tdata_rmeta_1]  pve ewi-aor---   4,00m       /dev/nvme1n1p2(67074)
  [thin_tmeta]          pve ewi-ao---- 120,00m       /dev/sdd2(0)

So the doubt now is whether the raid1 is still working "behind" the thin pool, or whether it has merely been allocated and is no longer in use. Creating the thin pool first and then converting it with --type raid1 returns an error.
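One way we could convince ourselves the mirror is live rather than just allocated is to check the segment type lvs -a reports for the hidden thin_tdata subvolume: segtype raid1 there would mean the pool's data sits on an active raid target. A minimal sketch, assuming the real query would be `lvs -a --noheadings -o lv_name,segtype pve` (needs root; the helper name is ours, and the piped-in lines are hypothetical output standing in for it):

```shell
# Real query (needs root): lvs -a --noheadings -o lv_name,segtype pve
# is_raid_backed (hypothetical helper) succeeds if the hidden data
# subvolume of the pool reports segment type raid1.
is_raid_backed() {
  awk '$1 == "[thin_tdata]" && $2 == "raid1" { found=1 } END { exit !found }'
}

# Hypothetical copy of the query output for the layout above:
printf '%s\n' \
  'thin         thin-pool' \
  '[thin_tdata] raid1' \
  '[thin_tmeta] linear' | is_raid_backed && echo "pool data is raid1-backed"
```

If that check passes on the real output, the raid1 is still doing its job underneath the pool; the pool itself simply reports its own target type at the top level.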
We have not found any documentation covering this scenario, and if the setup does work we are completely lost on how to monitor the LVM RAID status, since we were planning to monitor drive health from the lvs output of volumes of type r.
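One monitoring approach we are considering (a sketch under stated assumptions, not a tested production check) is to watch sync_percent of every raid segment, hidden subvolumes included, instead of filtering on type r at the top level. lv_name, segtype and sync_percent are standard lvs -o fields; the check_sync helper and the alert logic are ours, and we assume a fully synced volume prints "100.00" (the decimal separator is locale-dependent, as the commas in the listings above show):

```shell
# Intended query (needs root):
#   lvs -a --noheadings -o lv_name,segtype,sync_percent pve
# check_sync (hypothetical helper) reads such lines and fails when any
# raid segment, hidden or not, is below 100% sync.
check_sync() {
  awk '$2 ~ /^raid/ && $3 != "100.00" { print $1, "out of sync:", $3 "%"; bad=1 }
       END { exit bad }'
}

# Hypothetical output while the mirror is still resyncing:
printf '%s\n' \
  '[thin_tdata] raid1 24.17' \
  '[thin_tmeta] linear' | check_sync || echo "ALERT: raid not fully synced"
```

For on-demand scrubbing, lvm2 also offers `lvchange --syncaction check` on a raid LV, after which raid_mismatch_count can be read back via lvs; we have not verified whether it accepts the hidden thin_tdata subvolume name directly, so treat that as something to test.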