I have one disk (sdd) on which my data is currently stored. I now have two new 3 TB disks (sdb and sdc) and want to create a RAID5 array over all three disks.
- sdb: GPT table, empty partition sdb1
- sdc: GPT table, empty partition sdc1
- sdd: GPT table, btrfs partition sdd1 with my data
My plan looks like this:
- Create a RAID5 array `md0` over `sdb1` and `sdc1`.
- Create a btrfs filesystem on it.
- Copy the data from `sdd1` to `md0`.
- Re-partition (= wipe) `sdd`.
- Grow the array onto `sdd1`.
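As a sanity check of this plan (my own arithmetic, not tool output): RAID5 usable capacity is (members − 1) × size of the smallest member, so the temporary 2-disk array holds exactly one disk's worth of data, and growing to 3 disks doubles that. Using the per-member size that mdadm reports below (2900832256 KiB):

```shell
# RAID5 usable capacity = (members - 1) * smallest member size
member_k=2900832256   # KiB per member, taken from the mdadm create output below
echo "2-disk RAID5: $(( (2 - 1) * member_k )) KiB usable"
echo "3-disk RAID5: $(( (3 - 1) * member_k )) KiB usable"
```

So the 2-disk stage has just enough room for one disk's data, which is the point of the missing-drive migration.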
I am currently stuck at creating the 2-disk RAID5 array. I built the array:
```
# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=2 /dev/sdc1 /dev/sdb1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 2900832256K
mdadm: automatically enabling write-intent bitmap on large array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
```

and /proc/mdstat shows that it's doing the initial sync:
```
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdb1[2] sdc1[0]
      2900832256 blocks super 1.2 level 5, 512k chunk, algorithm 2 [2/1] [U_]
      [>....................]  recovery =  0.6% (19693440/2900832256) finish=308.8min speed=155487K/sec
      bitmap: 0/22 pages [0KB], 65536KB chunk

unused devices: <none>
```

top shows that during this time, md(adm) uses ~35% CPU:
```
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
  989 root      20   0       0      0      0 S  29.1  0.0   0:17.69 md0_raid5
  994 root      20   0       0      0      0 D   6.6  0.0   0:03.54 md0_resync
```

So far, so good. This should take about 6 hours. On my first try I had to reboot my server and thus stop the array after ~5 hours, and on the second try my sdb drive mysteriously disappeared, so I also had to restart the system.
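(For reference, that time estimate follows directly from the mdstat numbers above; a rough figure, since the resync speed fluctuates:)

```shell
# time to sync = blocks to sync / resync speed
blocks=2900832256   # KiB to sync, from /proc/mdstat
speed=155487        # KiB/s, from /proc/mdstat
echo "$(( blocks / speed / 60 )) minutes"   # prints "310 minutes", i.e. ~5.2 hours
```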
The array started itself automatically, but the progress bar is gone:
```
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md127 : active (auto-read-only) raid5 sdb1[2] sdc1[0]
      2900832256 blocks super 1.2 level 5, 512k chunk, algorithm 2 [2/1] [U_]
      bitmap: 0/22 pages [0KB], 65536KB chunk

unused devices: <none>
```

and top reports no CPU use.
So I tried stopping and assembling it manually:
```
~# mdadm --stop /dev/md127
mdadm: stopped /dev/md127
~# mdadm --assemble --verbose /dev/md0 /dev/sdc1 /dev/sdb1
mdadm: looking for devices for /dev/md0
mdadm: /dev/sdc1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 1.
mdadm: added /dev/sdb1 to /dev/md0 as 1
mdadm: added /dev/sdc1 to /dev/md0 as 0
mdadm: /dev/md0 has been started with 1 drive (out of 2) and 1 rebuilding.
```

Although it says it's rebuilding, mdstat shows no sign of that:
```
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active (auto-read-only) raid5 sdc1[0] sdb1[2]
      2900832256 blocks super 1.2 level 5, 512k chunk, algorithm 2 [2/1] [U_]
      bitmap: 0/22 pages [0KB], 65536KB chunk

unused devices: <none>
```

Also, top again shows no CPU use.
So I searched the web for a way to manually force a sync and found `--update=resync`, but trying this doesn't help either:
```
~# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
~# mdadm --assemble --verbose --force --run --update=resync /dev/md0 /dev/sdc1 /dev/sdb1
mdadm: looking for devices for /dev/md0
mdadm: /dev/sdc1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 1.
mdadm: Marking array /dev/md0 as 'clean'
mdadm: added /dev/sdb1 to /dev/md0 as 1
mdadm: added /dev/sdc1 to /dev/md0 as 0
mdadm: /dev/md0 has been started with 1 drive (out of 2) and 1 rebuilding.
root@server:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active (auto-read-only) raid5 sdc1[0] sdb1[2]
      2900832256 blocks super 1.2 level 5, 512k chunk, algorithm 2 [2/1] [U_]
      bitmap: 0/22 pages [0KB], 65536KB chunk

unused devices: <none>
```

(still no CPU use)
After two days of trying to fix this myself, I would be very thankful for any help or advice.
Create the array with all three devices (`-n 3`) and just list one of them as `missing`; you can add the missing drive later on. (`-n 3` with only two drives listed doesn't work, hence the `missing` placeholder.)

Alternatively, with ZFS: create a sparse file with `truncate -s`, then `zpool create ...`, and then `zpool offline` the sparse file. `zfs create` a filesystem on the pool, `rsync` your sdd data to it, unmount the sdd filesystem, then `zpool replace` the sparse file with sdd.
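A sketch of the `-n 3`/`missing` approach described above, untested here and destructive to the listed partitions; the mount points and copy paths are assumptions, not from the question:

```shell
# Create a 3-member RAID5 with one slot deliberately left empty ("missing"),
# so the array starts degraded and sdd1 can be added once the data is copied.
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 \
      /dev/sdb1 /dev/sdc1 missing

mkfs.btrfs /dev/md0              # btrfs filesystem on the degraded array
mount /dev/md0 /mnt/md0          # mount point is an assumption
cp -a /mnt/sdd1/. /mnt/md0/      # source path is an assumption

# After wiping/re-partitioning sdd:
mdadm --add /dev/md0 /dev/sdd1   # fills the missing slot and triggers a rebuild
```

Compared with the plan in the question, this avoids the 2-disk RAID5 entirely: the array is created at its final size, and adding `sdd1` is a plain rebuild rather than a `--grow` reshape.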