
I had a working RAID5 array until I installed kvm and qemu and rebooted. After that, the system wouldn't boot because /dev/md0 could not be mounted.

Running cat /proc/mdstat gives:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdb[2](S) sda[0](S)
      1953263024 blocks super 1.2

unused devices: <none>

I did have sda, sdb and sdc in the array, but it looks like only sda and sdb are left, and they now show up as spares.

Checking each of the disks gives:

sudo mdadm --examine /dev/sda
/dev/sda:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e25ff5c6:90186486:4f001b87:27056b4a
           Name : SAN1:0  (local to host SAN1)
  Creation Time : Sat Jul 16 17:13:01 2022
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
     Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
  Used Dev Size : 1953260544 (931.39 GiB 1000.07 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=2480 sectors
          State : clean
    Device UUID : 16904f75:c2ddd8b0:75025adb:0a09effa

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Jul 20 18:59:56 2022
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 8d2ba8a7 - correct
         Events : 4167

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 0
    Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)

sudo mdadm --examine /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : e25ff5c6:90186486:4f001b87:27056b4a
           Name : SAN1:0  (local to host SAN1)
  Creation Time : Sat Jul 16 17:13:01 2022
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
     Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
  Used Dev Size : 1953260544 (931.39 GiB 1000.07 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262064 sectors, after=2480 sectors
          State : clean
    Device UUID : 02a449d0:be934563:ff4293f3:42e4ed52

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Jul 20 18:59:56 2022
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : aca8e53 - correct
         Events : 4167

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 1
    Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)

sudo mdadm --examine /dev/sdc
/dev/sdc:
   MBR Magic : aa55
Partition[0] : 1953525167 sectors at 1 (type ee)

and sudo mdadm --examine --scan shows

ARRAY /dev/md/0 metadata=1.2 UUID=e25ff5c6:90186486:4f001b87:27056b4a name=SAN1:0 

My mdadm.conf looks like:

# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
#level=raid5 #num-devices=3

# This configuration was auto-generated on Fri, 15 Jul 2022 20:17:00 +0100 by mkconf
ARRAY /dev/md0 uuid=e25ff5c6:90186486:4f001b87:27056b4a

Any ideas how to fix the array? Ideally I'd like to do it without having to go back to my backups, if possible.

I tried:

sudo mdadm --stop /dev/md0
sudo mdadm --assemble /dev/md0 /dev/sda /dev/sdb /dev/sdc --verbose

and I got:

mdadm: looking for devices for /dev/md0
mdadm: No super block found on /dev/sdc (Expected magic a92b4efc, got 00000000)
mdadm: no RAID superblock on /dev/sdc
mdadm: /dev/sdc has no superblock - assembly aborted
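At this point, one thing worth trying (just a sketch; it leaves the apparently blank sdc out so that the two members with valid metadata can assemble degraded) would be:

sudo mdadm --stop /dev/md0
sudo mdadm --assemble /dev/md0 /dev/sda /dev/sdb --verbose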

Update 1: OK, I ran mdadm -D /dev/md1 and it came back as degraded. That's not so bad; I just need to add the third disk back in.
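For the record, re-adding the third disk to a degraded array is normally just the following (a sketch, assuming /dev/sdc is the disk being put back and the array is assembled as /dev/md0):

# add the disk back into the running, degraded array
sudo mdadm --manage /dev/md0 --add /dev/sdc

# watch the rebuild progress
watch cat /proc/mdstat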

Update 2: After a seemingly successful rebuild I rebooted and got the same issue. To fix it again, I tried:

alex@SAN1:/etc/apt$ sudo mdadm --assemble /dev/md0
alex@SAN1:/etc/apt$ sudo mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 2
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 2

              Name : SAN1:0  (local to host SAN1)
              UUID : e25ff5c6:90186486:4f001b87:27056b4a
            Events : 6058

    Number   Major   Minor   RaidDevice

       -       8        0        -        /dev/sda
       -       8       16        -        /dev/sdb
alex@SAN1:/etc/apt$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdb[2](S) sda[0](S)
      1953263024 blocks super 1.2

unused devices: <none>
alex@SAN1:/etc/apt$ sudo mdadm --stop /dev/md0
mdadm: stopped /dev/md0
alex@SAN1:/etc/apt$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
alex@SAN1:/etc/apt$ sudo mdadm -D /dev/md0
mdadm: cannot open /dev/md0: No such file or directory
alex@SAN1:/etc/apt$ sudo mdadm --assemble /dev/md0
mdadm: /dev/md0 has been started with 2 drives (out of 3).
alex@SAN1:/etc/apt$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active (auto-read-only) raid5 sda[0] sdb[2]
      1953260544 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      bitmap: 0/8 pages [0KB], 65536KB chunk

unused devices: <none>

Update 2a:

I tried the following:

sudo gdisk /dev/sdc
GPT fdisk (gdisk) version 1.0.6

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): x

Expert command (? for help): z
About to wipe out GPT on /dev/sdc. Proceed? (Y/N): y
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Blank out MBR? (Y/N): y

I added it to the array again; let's see what happens in another two hours, after the rebuild and another reboot. :(

Any idea what's wrong with disk 3?

Thanks

1 Answer

Ok, I think I found the problem.

Having run the following:

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf 

I now have:

ARRAY /dev/md0 metadata=1.2 name=SAN1:0 UUID=e25ff5c6:90186486:4f001b87:27056b4a 

instead of:

ARRAY /dev/md0 uuid=e25ff5c6:90186486:4f001b87:27056b4a 
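The comments at the top of mdadm.conf also say to refresh the initramfs after changing the file, so it's probably worth running that too, so the copy of mdadm.conf baked into the initramfs matches the one on disk:

sudo update-initramfs -u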

Having rebooted, all seems fine.

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sda[0] sdb[2] sdc[3]
      1953260544 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

unused devices: <none>

I hope this helps anyone else in the same situation.
