We are moving an installation from mdadm RAID with LVM on top to pure LVM, so we add the original RAID partitions as independent physical volumes to the LVM volume group, something like:

 # lvcreate -L 240G -n thin pve /dev/nvme0n1p2
   Logical volume "thin" created.

Then we add a mirror to it on the other disk/partition:

 # lvconvert --type raid1 --mirrors 1 pve/thin /dev/nvme1n1p2
   Logical volume pve/thin successfully converted.

As we use thin pool storage for LXC, we assumed we could then simply convert it to a thin pool:

 # lvconvert --type thin-pool pve/thin
   Converted pve/thin to thin pool.

Everything seemed to work, but we are uncertain whether the last conversion undoes the previous one, because listing with lvs gives:

  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  thin pve twi-a-tz-- 240,00g             0,00   10,42

Attribute bits 1 and 7 show this is a thin pool, but there is no mention of the RAID 1 and no value in the Cpy%Sync column.

While lvs -a -o +devices does show it being mirrored across the two partitions:

  [thin_tdata]          pve rwi-aor--- 240,00g 24,17 thin_tdata_rimage_0(0),thin_tdata_rimage_1(0)
  [thin_tdata_rimage_0] pve iwi-aor--- 240,00g       /dev/nvme0n1p2(67074)
  [thin_tdata_rimage_1] pve Iwi-aor--- 240,00g       /dev/nvme1n1p2(67075)
  [thin_tdata_rmeta_0]  pve ewi-aor---   4,00m       /dev/nvme0n1p2(128514)
  [thin_tdata_rmeta_1]  pve ewi-aor---   4,00m       /dev/nvme1n1p2(67074)
  [thin_tmeta]          pve ewi-ao---- 120,00m       /dev/sdd2(0)

So the doubt is whether, behind the thin pool, the RAID 1 is still working, or whether it has merely been allocated but is no longer used. Creating the thin pool first and then converting it to --type raid1 returns an error.

We have not found any documentation about this scenario, and if it does work, we are completely lost on how to monitor the LVM RAID status, as we were planning to monitor drive health via lvs output for LVs of type r.

1 Answer


Yes, it is possible to have a thin pool on RAID 1, and your setup is nearly correct. The problem is the metadata, which is not RAID 1 but linear, so after losing one drive your thin pool will be broken. You need to create a separate RAID 1 LV for the metadata and then pass --poolmetadata <vg>/<metadata lv> when converting the RAID LV to a thin pool with lvconvert.

See lvmthin manpage section Tolerate device failures using raid for more details.

Example from the manpage:

 # lvcreate --type raid1 -m 1 -n pool0 -L 10G vg /dev/sdA /dev/sdB
 # lvcreate --type raid1 -m 1 -n pool0meta -L 1G vg /dev/sdC /dev/sdD
 # lvconvert --type thin-pool --poolmetadata vg/pool0meta vg/pool0
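Adapted to the setup in the question, it would look something like the following sketch. The metadata LV name (thinmeta) and its size are assumptions; the 240G data size, the 120M metadata size, and the nvme partitions are taken from the lvs output shown in the question:

```shell
# Create the data LV as RAID 1 across both NVMe partitions
lvcreate --type raid1 -m 1 -n thin -L 240G pve /dev/nvme0n1p2 /dev/nvme1n1p2

# Create a separate RAID 1 LV for the pool metadata on the same partitions
# (so it does not land on an unrelated disk such as /dev/sdd2)
lvcreate --type raid1 -m 1 -n thinmeta -L 120M pve /dev/nvme0n1p2 /dev/nvme1n1p2

# Combine them into a thin pool; thinmeta becomes the hidden thin_tmeta sub-LV
lvconvert --type thin-pool --poolmetadata pve/thinmeta pve/thin
```

This way both the data and the metadata survive the loss of either NVMe drive.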

Output of lvs -a with this setup:

 $ sudo lvs raid_test -a
   LV                     VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
   [lvol0_pmspare]        raid_test ewi-------  12,00m
   pool0                  raid_test twi-a-tz-- 100,00m             0,00   10,29
   [pool0_tdata]          raid_test rwi-aor--- 100,00m                                     100,00
   [pool0_tdata_rimage_0] raid_test iwi-aor--- 100,00m
   [pool0_tdata_rimage_1] raid_test iwi-aor--- 100,00m
   [pool0_tdata_rmeta_0]  raid_test ewi-aor---   4,00m
   [pool0_tdata_rmeta_1]  raid_test ewi-aor---   4,00m
   [pool0_tmeta]          raid_test ewi-aor---  12,00m                                     100,00
   [pool0_tmeta_rimage_0] raid_test iwi-aor---  12,00m
   [pool0_tmeta_rimage_1] raid_test iwi-aor---  12,00m
   [pool0_tmeta_rmeta_0]  raid_test ewi-aor---   4,00m
   [pool0_tmeta_rmeta_1]  raid_test ewi-aor---   4,00m

The problem with the lvs attribute output is that only the first bit specifies the type of the LV, and it appears that when an LV is both (r)aid and a (t)hin pool, the thin pool wins and you only get t there.
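For monitoring, you can therefore query the hidden sub-LVs directly and ask lvs for the RAID-specific reporting fields. A sketch, with field names as in recent lvm2 releases (check them against lvs -o help on your version):

```shell
# Show RAID sync state and health for all LVs, including hidden sub-LVs.
# copy_percent is the resync progress, raid_sync_action the current
# sync action (idle/resync/recover/...), raid_mismatch_count the number
# of mismatches found by a scrub.
lvs -a -o lv_name,attr,copy_percent,raid_sync_action,raid_mismatch_count,health_status pve
```

In the output above, the [pool0_tdata] line keeps the r type bit and the Cpy%Sync value, so the hidden _tdata (and _tmeta) sub-LVs are the ones to watch rather than the pool LV itself.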

  • Thanks for the info. As you mentioned, the fact that the thin_tmeta was on another disk (an HDD) did not seem correct to me. What I don't completely get is what the ideal allocation would be. I mean, considering that there is sufficient space in the partitions, I would be inclined to use the same ones (nvme0n1p2 and nvme1n1p2 in my case). Or should they be on a different partition? Commented Dec 7, 2020 at 18:19
  • 1
    Having metadata on a different set of PVs might help with performance but that should be negligible. It can also help if you run out of space on the PVs for data (metadata will be on o different PV and should still have free space reserve) but I think it doesn't really matter in general. Commented Dec 7, 2020 at 18:29
  • In this case the problem is that it's been allocated on an HDD. We'll recreate it. Additionally, for another thin pool we've now discovered that an lvol0_pmspare has been created on a RAID we want to remove... wondering if that is "movable" without deleting the thin pool Commented Dec 7, 2020 at 19:10
  • 1
    pmspare is always created automatically with first thin pool, it's a special metadata reserver volume for repair/recovery operations. You can delete it and you can also telll lvcreate to not create it using --poolmetadataspare n, but it can't be created manually. Commented Dec 7, 2020 at 19:18
  • I've managed to move it with pvmove -n data/lvol0_pmspare /dev/md127 /dev/md125. I also read that one could delete it and recreate it with lvconvert --repair vg/thin, but in my case it returns an error: "Active pools cannot be repaired". Commented Dec 7, 2020 at 19:31
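Regarding that last error: lvconvert --repair refuses to operate on an active pool, so the pool has to be deactivated first. A sketch, using the vg/thin names from the comment (make sure nothing is using the thin volumes before deactivating):

```shell
# Deactivate the pool (fails if any thin LV in it is in use)
lvchange -an vg/thin

# Repair swaps in the pmspare volume as fresh metadata
lvconvert --repair vg/thin

# Reactivate the pool afterwards
lvchange -ay vg/thin
```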
