
Recently I bit the bullet and upgraded my OS from Fedora 11 to Fedora 15, and I've been trying very hard to figure out why Fedora 15 can't see the RAID setup that I created under Fedora 11. I think I must have missed something, so I'm resorting to the group wisdom here.

When I upgraded, I used a new boot drive for Fedora 15, so I can physically swap the boot drives and boot into either Fedora 11 or 15. Fedora 11 can still see the RAID and everything works; Fedora 15 shows something very strange.

[Edited to add the output requested by @psusi.]

On Fedora 11

I had a regular boot drive (/dev/sda) and an LVM volume built on RAID 5 (/dev/sdb, /dev/sdc, /dev/sdd).

Specifically, the RAID device /dev/md/127_0 is built from /dev/sdb1, /dev/sdc1, and /dev/sdd1, where each partition spans the whole disk.
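
For reference, the array and the LVM on top of it would have been created back in 2008 with commands along these lines (a reconstruction, not the exact commands I used):

    # Reconstruction only -- not the exact 2008 commands.
    # RAID 5 over the three whole-disk partitions, with LVM on top.
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
    pvcreate /dev/md0
    vgcreate lvm-tb-storage /dev/md0
    lvcreate -l 100%FREE -n tb lvm-tb-storage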

The volume group on the boot drive (/dev/vg_localhost/) is irrelevant. The volume group that I created on the RAID device is called /dev/lvm-tb-storage/.

Here are the settings I got from the system (mdadm, pvscan, lvscan, etc.):

[root@localhost ~]# cat /etc/mdadm.conf
[root@localhost ~]# pvscan
  PV /dev/md127   VG lvm-tb-storage   lvm2 [1.82 TB / 0    free]
  PV /dev/sda5    VG vg_localhost     lvm2 [61.44 GB / 0    free]
  Total: 2 [1.88 TB] / in use: 2 [1.88 TB] / in no VG: 0 [0   ]
[root@localhost ~]# lvscan
  ACTIVE            '/dev/lvm-tb-storage/tb' [1.82 TB] inherit
  ACTIVE            '/dev/vg_localhost/lv_root' [54.68 GB] inherit
  ACTIVE            '/dev/vg_localhost/lv_swap' [6.77 GB] inherit
[root@localhost ~]# vgdisplay
  --- Volume group ---
  VG Name               lvm-tb-storage
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.82 TB
  PE Size               4.00 MB
  Total PE              476839
  Alloc PE / Size       476839 / 1.82 TB
  Free  PE / Size       0 / 0
  VG UUID               wqIXsb-KRZQ-eRnH-JvuP-VdHk-XJTG-DSWimc

  --- Volume group ---
  VG Name               vg_localhost
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               61.44 GB
  PE Size               4.00 MB
  Total PE              15729
  Alloc PE / Size       15729 / 61.44 GB
  Free  PE / Size       0 / 0
  VG UUID               IVIpCV-C4qg-Lii7-zwkz-P3si-MXAZ-WYUSe6

[root@localhost ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "lvm-tb-storage" using metadata type lvm2
  Found volume group "vg_localhost" using metadata type lvm2
[root@localhost ~]# mdadm --detail --scan
ARRAY /dev/md/127_0 metadata=0.90 UUID=bebfd467:cb6700d9:29bdc0db:c30228ba
[root@localhost ~]# ls -al /dev/md
total 0
drwxr-xr-x.  2 root root   60 2011-09-13 03:14 .
drwxr-xr-x. 19 root root 5180 2011-09-13 03:15 ..
lrwxrwxrwx.  1 root root    8 2011-09-13 03:14 127_0 -> ../md127
[root@localhost ~]# mdadm --detail /dev/md/127_0
/dev/md/127_0:
        Version : 0.90
  Creation Time : Wed Nov  5 18:26:25 2008
     Raid Level : raid5
     Array Size : 1953134208 (1862.65 GiB 2000.01 GB)
  Used Dev Size : 976567104 (931.33 GiB 1000.00 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 127
    Persistence : Superblock is persistent
    Update Time : Tue Sep 13 03:28:51 2011
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : bebfd467:cb6700d9:29bdc0db:c30228ba
         Events : 0.671154

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       49        1      active sync   /dev/sdd1
       2       8       33        2      active sync   /dev/sdc1
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : active raid5 sdb1[0] sdc1[2] sdd1[1]
      1953134208 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
[root@localhost ~]# mdadm --examine /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : bebfd467:cb6700d9:29bdc0db:c30228ba
  Creation Time : Wed Nov  5 18:26:25 2008
     Raid Level : raid5
  Used Dev Size : 976567104 (931.33 GiB 1000.00 GB)
     Array Size : 1953134208 (1862.65 GiB 2000.01 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 127
    Update Time : Tue Sep 13 03:29:50 2011
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : f1ddf826 - correct
         Events : 671154
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       17        0      active sync   /dev/sdb1

   0     0       8       17        0      active sync   /dev/sdb1
   1     1       8       49        1      active sync   /dev/sdd1
   2     2       8       33        2      active sync   /dev/sdc1
[root@localhost ~]# fdisk -lu 2>&1
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1 doesn't contain a valid partition table
Disk /dev/md127 doesn't contain a valid partition table
Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/sda: 250.0 GB, 250000000000 bytes
255 heads, 63 sectors/track, 30394 cylinders, total 488281250 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000080

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               63      610469      305203+  83  Linux
/dev/sda2           610470   359004554   179197042+  83  Linux
/dev/sda3   *    359004555   359414154      204800   83  Linux
/dev/sda4        359422245   488279609    64428682+   5  Extended
/dev/sda5        359422308   488278371    64428032   8e  Linux LVM

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0xb03e1980

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               63  1953134504   976567221   da  Non-FS data

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x7db522d5

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               63  1953134504   976567221   da  Non-FS data

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x20af5840

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               63  1953134504   976567221   da  Non-FS data

Disk /dev/dm-0: 58.7 GB, 58707673088 bytes
255 heads, 63 sectors/track, 7137 cylinders, total 114663424 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-1: 7264 MB, 7264534528 bytes
255 heads, 63 sectors/track, 883 cylinders, total 14188544 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000

Disk /dev/md127: 2000.0 GB, 2000009428992 bytes
2 heads, 4 sectors/track, 488283552 cylinders, total 3906268416 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-2: 2000.0 GB, 2000007725056 bytes
255 heads, 63 sectors/track, 243153 cylinders, total 3906265088 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x00000000

The kernel boot parameters that I have:

kernel /vmlinuz-2.6.30.10-105.2.23.fc11.x86_64 ro root=/dev/mapper/vg_localhost-lv_root rhgb quiet 

On Fedora 15

I installed Fedora 15 on a new boot drive, on which the installer also created an LVM volume group (/dev/vg_20110912a/) for me, but again that's irrelevant.

Under Fedora 15, the LVM tools (pvscan, lvscan, vgscan) see nothing but the irrelevant boot drive. mdadm, however, shows something very strange: the original RAID appears broken up into multiple, strangely nested arrays, and the combination is very puzzling.

[root@localhost ~]# cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
[root@localhost ~]# pvscan
  PV /dev/sda2   VG vg_20110912a   lvm2 [59.12 GiB / 0    free]
  Total: 1 [59.12 GiB] / in use: 1 [59.12 GiB] / in no VG: 0 [0   ]
[root@localhost ~]# lvscan
  ACTIVE            '/dev/vg_20110912a/lv_home' [24.06 GiB] inherit
  ACTIVE            '/dev/vg_20110912a/lv_swap' [6.84 GiB] inherit
  ACTIVE            '/dev/vg_20110912a/lv_root' [28.22 GiB] inherit
[root@localhost ~]# vgdisplay
  --- Volume group ---
  VG Name               vg_20110912a
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               59.12 GiB
  PE Size               32.00 MiB
  Total PE              1892
  Alloc PE / Size       1892 / 59.12 GiB
  Free  PE / Size       0 / 0
  VG UUID               8VRJyx-XSQp-13mK-NbO6-iV24-rE87-IKuhHH

[root@localhost ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vg_20110912a" using metadata type lvm2
[root@localhost ~]# mdadm --detail --scan
ARRAY /dev/md/0_0 metadata=0.90 UUID=153e151b:8c717565:fd59f149:d2ea02c9
ARRAY /dev/md/127_0 metadata=0.90 UUID=bebfd467:cb6700d9:29bdc0db:c30228ba
[root@localhost ~]# ls -l /dev/md
total 4
lrwxrwxrwx. 1 root root   8 Sep 13 02:39 0_0 -> ../md127
lrwxrwxrwx. 1 root root  10 Sep 13 02:39 0_0p1 -> ../md127p1
lrwxrwxrwx. 1 root root   8 Sep 13 02:39 127_0 -> ../md126
-rw-------. 1 root root 120 Sep 13 02:39 md-device-map
[root@localhost ~]# cat /dev/md/md-device-map
md126 0.90 bebfd467:cb6700d9:29bdc0db:c30228ba /dev/md/127_0
md127 0.90 153e151b:8c717565:fd59f149:d2ea02c9 /dev/md/0_0
[root@localhost ~]# mdadm --detail /dev/md/0_0
/dev/md/0_0:
        Version : 0.90
  Creation Time : Tue Nov  4 21:45:19 2008
     Raid Level : raid5
     Array Size : 976762496 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 127
    Persistence : Superblock is persistent
    Update Time : Wed Nov  5 09:04:28 2008
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : 153e151b:8c717565:fd59f149:d2ea02c9
         Events : 0.2202

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       16        1      active sync   /dev/sdb
[root@localhost ~]# mdadm --detail /dev/md/127_0
/dev/md/127_0:
        Version : 0.90
  Creation Time : Wed Nov  5 18:26:25 2008
     Raid Level : raid5
     Array Size : 1953134208 (1862.65 GiB 2000.01 GB)
  Used Dev Size : 976567104 (931.33 GiB 1000.00 GB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 126
    Persistence : Superblock is persistent
    Update Time : Tue Sep 13 00:39:51 2011
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : bebfd467:cb6700d9:29bdc0db:c30228ba
         Events : 0.671154

    Number   Major   Minor   RaidDevice State
       0     259        0        0      active sync   /dev/md/0_0p1
       1       0        0        1      removed
       2       8       33        2      active sync   /dev/sdc1
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md126 : active (auto-read-only) raid5 md127p1[0] sdc1[2]
      1953134208 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]

md127 : active (auto-read-only) raid5 sdb[1] sdd[0]
      976762496 blocks level 5, 64k chunk, algorithm 2 [2/2] [UU]

unused devices: <none>
[root@localhost ~]# mdadm --examine /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : bebfd467:cb6700d9:29bdc0db:c30228ba
  Creation Time : Wed Nov  5 18:26:25 2008
     Raid Level : raid5
  Used Dev Size : 976567104 (931.33 GiB 1000.00 GB)
     Array Size : 1953134208 (1862.65 GiB 2000.01 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 127
    Update Time : Tue Sep 13 00:39:51 2011
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : f1ddd04f - correct
         Events : 671154
         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       17        0      active sync   /dev/sdb1

   0     0       8       17        0      active sync   /dev/sdb1
   1     1       8       49        1      active sync   /dev/sdd1
   2     2       8       33        2      active sync   /dev/sdc1
[root@localhost ~]# fdisk -lu 2>&1
Disk /dev/mapper/vg_20110912a-lv_swap doesn't contain a valid partition table
Disk /dev/mapper/vg_20110912a-lv_root doesn't contain a valid partition table
Disk /dev/md127 doesn't contain a valid partition table
Disk /dev/mapper/vg_20110912a-lv_home doesn't contain a valid partition table

Disk /dev/sda: 64.0 GB, 64023257088 bytes
255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001aa2f

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *         2048     1026047      512000   83  Linux
/dev/sda2          1026048   125044735    62009344   8e  Linux LVM

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb03e1980

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               63  1953134504   976567221   da  Non-FS data

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x7db522d5

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               63  1953134504   976567221   da  Non-FS data

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x20af5840

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               63  1953134504   976567221   da  Non-FS data

Disk /dev/mapper/vg_20110912a-lv_swap: 7348 MB, 7348420608 bytes
255 heads, 63 sectors/track, 893 cylinders, total 14352384 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg_20110912a-lv_root: 30.3 GB, 30299652096 bytes
255 heads, 63 sectors/track, 3683 cylinders, total 59179008 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md127: 2000.0 GB, 2000009428992 bytes
2 heads, 4 sectors/track, 488283552 cylinders, total 3906268416 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 131072 bytes
Disk identifier: 0x00000000

Disk /dev/md126: 1000.2 GB, 1000204795904 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953524992 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disk identifier: 0x20af5840

      Device Boot      Start         End      Blocks   Id  System
/dev/md126p1               63  1953134504   976567221   da  Non-FS data
Partition 1 does not start on physical sector boundary.

Disk /dev/mapper/vg_20110912a-lv_home: 25.8 GB, 25836912640 bytes
255 heads, 63 sectors/track, 3141 cylinders, total 50462720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

My kernel boot parameters:

kernel /vmlinuz-2.6.40.4-5.fc15.x86_64 ro root=/dev/mapper/vg_20110912a-lv_root rd_LVM_LV=vg_20110912a/lv_root rd_LVM_LV=vg_20110912a/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet rdblacklist=nouveau nouveau.modeset=0 nodmraid 

The last mdadm --examine /dev/sdb1 shows exactly the same result as on Fedora 11, but I don't understand why mdadm --detail /dev/md/0_0 shows only /dev/sdb and /dev/sdd, while mdadm --detail /dev/md/127_0 shows /dev/sdc1 and /dev/md/0_0p1.

Since mdadm --examine /dev/sdb1 shows the correct result, Fedora 15 is able to access the RAID somehow, but I am not sure what to do. Shall I create/assemble a new RAID /dev/md2 and hope that the LVM that I created will magically show up?
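
If I do experiment, my understanding is that I should only assemble (which reads the existing superblocks) and never create (which would overwrite them), along these lines, with /dev/md2 as a placeholder name:

    # Assemble only; --create would rewrite the superblocks.
    # /dev/md2 is just a placeholder name.
    mdadm --assemble /dev/md2 /dev/sdb1 /dev/sdc1 /dev/sdd1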

Thank you in advance.

  • It appears that mdadm can't decide whether the RAID superblock is in the partition or on the whole disk. Please add the output of fdisk -lu for the RAID drives, and see if it differs between Fedora 11 and 15. Commented Sep 13, 2011 at 15:12
  • Thanks, @psusi. The fdisk output has been added to the original post (see the end of each section). However, I need some help deciphering it. Commented Sep 14, 2011 at 6:11

1 Answer


It looks like you have some old, crufty RAID superblocks lying around. The array you were using had 3 disks, had the UUID bebfd467:cb6700d9:29bdc0db:c30228ba, and was created on Nov 5, 2008. Fedora 15 has recognized another RAID array that has only two disks and was created the day before, using the whole disks instead of the first partitions. (This is possible because 0.90-format metadata lives at the end of the device, and your partitions end well short of the end of the disks, so the old whole-disk superblocks survived the repartitioning.) Fedora 15 activated that old array, and then tried to use it as one of the components of the correct array, which is causing a mess.
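
One way to see this for yourself (a diagnostic sketch; it only reads metadata and changes nothing) is to compare the superblocks mdadm finds on the whole disks with the ones on the partitions; the creation times and UUIDs should differ:

    # Read-only checks: the stale superblocks from the two-disk array of Nov 4, 2008
    mdadm --examine /dev/sdb
    mdadm --examine /dev/sdd
    # versus the real three-disk array of Nov 5, 2008
    mdadm --examine /dev/sdb1
    mdadm --examine /dev/sdc1
    mdadm --examine /dev/sdd1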

I think you need to blow away the old, bogus superblocks:

mdadm --zero-superblock /dev/sdb /dev/sdd 
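
A sketch of the full sequence under that plan (device names as they appear on the Fedora 15 boot; the mis-assembled arrays should be stopped before touching the superblocks):

    # Stop the wrongly assembled arrays first.
    mdadm --stop /dev/md126 /dev/md127
    # Zero only the stale whole-disk superblocks; the partition
    # superblocks holding the real array are untouched.
    mdadm --zero-superblock /dev/sdb /dev/sdd
    # Reassemble the real array from its partition members.
    mdadm --assemble /dev/md127 /dev/sdb1 /dev/sdc1 /dev/sdd1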

You do have a current backup, right? ;)

  • Thank you very much for your response, @psusi. You do read the details carefully. Since I really don't have enough spare space to back up this much data (that's why I set up RAID 5 in the first place, so I'd have redundancy), I'd like to make sure this is a risk worth taking. After I zero the superblocks of the two drives, what should I do? Will that make them recognizable by Fedora 15? What I still don't get is why mdadm recognizes the original 3-partition RAID when I specify the partition (mdadm --examine /dev/sdb1), but doesn't do so automatically. Is this why you asked me to check the superblocks? Commented Sep 14, 2011 at 4:47
  • @Tsan-Kuang Lee, zapping the bogus superblocks and rebooting should fix everything. Also, RAID is not a backup solution. RAID protects against a single drive failure; there are still many ways, including plain old accidental deletion, that you can lose your data, so you still need to back it up if you don't want to lose it. RAID is for keeping the server online when a drive dies, not for protecting your data. It's about uptime, not safety. Commented Sep 14, 2011 at 13:14
  • Thank you very much, @psusi. I am waiting to get a bigger drive to back up onto before I zap the superblocks. In the meantime, the problem is solved by manually editing /etc/mdadm.conf:

        # mdadm.conf written out by anaconda
        MAILADDR root
        AUTO +imsm +1.x -all
        DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1
        ARRAY /dev/md0 level=raid5 num-devices=3 UUID=bebfd467:cb6700d9:29bdc0db:c30228ba

    This temporarily solved my problem. I will zap the superblocks after I manage to back up all the data. And thank you for the wisdom on backups; that makes a lot of sense. I very much appreciate your help! Commented Sep 16, 2011 at 6:56
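
To apply that edited configuration without a reboot, something like the following should work (a sketch, assuming the mis-assembled arrays from the question are still active):

    # Stop the mis-assembled arrays, then let mdadm re-read the edited config.
    mdadm --stop /dev/md126 /dev/md127
    mdadm --assemble --scan
    # Reactivate the volume group that lives on the array.
    vgchange -ay lvm-tb-storage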
