
I am trying to recover my data from an 8-year-old configuration. The last time I tried to mount it was 8 years ago, when the computer died. The configuration is:

  • LUKS encryption on top of
  • A volume group on top of
  • RAID10 (4 drives) + RAID1 (2 drives)

Linux found the RAID arrays and rebuilt them. It also found the volume group and assembled it, naming it "blob", which is what it was originally called.
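
Before blaming any one layer, it may be worth confirming what actually got assembled. A minimal sketch (md0/md1 and the member partitions are assumed names):

    cat /proc/mdstat                  # overall array state
    mdadm --detail /dev/md0           # RAID10 detail (assumed name)
    mdadm --detail /dev/md1           # RAID1 detail (assumed name)
    mdadm --examine /dev/sd[a-f]1     # per-member superblocks; compare Events and Data Offset
    pvs && vgs && lvs                 # what LVM sees on top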

The only problem is that when I went to luksOpen it, cryptsetup said the header was missing on /dev/mapper/vg-blob. Sure enough, hexdump revealed that the first bytes on the device were actually just zeroes. So I went looking for the LUKS header, and found it.
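
For reference, a healthy LUKS1 header begins with the magic bytes 4c 55 4b 53 ba be ("LUKS..") at offset 0, so the zeroes are easy to confirm directly:

    # first 16 bytes should start with the LUKS magic on a healthy header
    hexdump -C -n 16 /dev/mapper/vg-blob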

    # strings -t d -n 4 /dev/mapper/vg-blob | grep LUKS
    2097152 LUKS

2097152 bytes in, which by my math (2097152 / 1024 / 1024) is exactly 2 MiB. What are the odds?
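
The hit can be double-checked by reading straight from that offset (a quick sketch):

    # skip the first 2 MiB and show the next 16 bytes; expect the LUKS magic here
    dd if=/dev/mapper/vg-blob bs=1M skip=2 count=1 2>/dev/null | hexdump -C -n 16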

So I used losetup -o 2097152 /dev/loop0 /dev/mapper/vg-blob and ran luksDump against /dev/loop0. LUKS said there was a valid header! But how?
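
If repeating that step, a read-only loop device is a little safer while experimenting (a sketch; assumes /dev/loop0 is free):

    losetup -r -o 2097152 /dev/loop0 /dev/mapper/vg-blob   # -r = read-only
    cryptsetup luksDump /dev/loop0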

I tried the passwords I expected, but nope.
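
Candidate passphrases can be tried against the header without mapping anything, which keeps the experiment non-destructive:

    # exits 0 and prints MATCH if the passphrase opens a key slot; nothing is activated
    cryptsetup luksOpen --test-passphrase /dev/loop0 && echo MATCH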

I'm at a weird crossroads.

  • Is it the volume group that is not correct, causing a weird 2 MiB offset (it is 8 years old; were formats different then?), making THIS the point of attack that needs to be corrected? (Its metadata history can be inspected; see the sketch after this list.)
  • Or did I just forget my password, and everything is actually okay despite the 2 MiB offset to the LUKS header? Do I just need to dredge up past events to try to remember it?
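
A minimal sketch of that metadata check, assuming the PVs are the md arrays (/dev/md0 and /dev/md1 are assumed names):

    # the first MiB of a PV holds a ring buffer of past VG metadata in plain text
    dd if=/dev/md0 bs=1M count=1 2>/dev/null | strings -w | less
    dd if=/dev/md1 bs=1M count=1 2>/dev/null | strings -w | less
    # if the old root filesystem is reachable, LVM also keeps text backups there
    ls /etc/lvm/backup /etc/lvm/archive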

Notably, I can see my .bash_history from the dead machine. I always opened it with cryptsetup luksOpen /dev/mapper/vg-blob crypt-blob.
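
Mining that history for every related command might recover the exact original workflow; the mount point of the old root here is an assumption:

    # -a treats the file as text even if it contains stray binary
    grep -a -E 'cryptsetup|losetup|mdadm|vgchange' /mnt/oldroot/root/.bash_history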

Either way, my data feels like it's hanging in the balance.

Edit: luksDump output

    # cryptsetup luksDump /dev/loop0
    LUKS header information for /dev/loop0

    Version:        1
    Cipher name:    aes
    Cipher mode:    xts-plain64
    Hash spec:      sha1
    Payload offset: 4096
    MK bits:        256
    MK digest:      1e 75 ff 04 c6 0a a8 9b c5 cd 7e d3 a2 ea 1c 8f cd 8f 17 0a
    MK salt:        c5 eb 06 2a e6 50 ce 30 36 83 3e 4b 89 2c fe 94
                    9e ad d6 1f ad 77 f4 f5 d2 96 89 a2 1e 52 a7 fa
    MK iterations:  43125
    UUID:           bfb21fd9-67cc-4c41-bdd1-af38af7d4355

    Key Slot 0: ENABLED
            Iterations:             172042
            Salt:                   a7 61 b1 00 12 59 42 80 5d 62 89 9d 52 db e0 47
                                    a3 39 11 1f bf 06 02 34 46 3e 47 2d 26 db 21 f7
            Key material offset:    8
            AF stripes:             4000
    Key Slot 1: DISABLED
    Key Slot 2: DISABLED
    Key Slot 3: DISABLED
    Key Slot 4: DISABLED
    Key Slot 5: DISABLED
    Key Slot 6: DISABLED
    Key Slot 7: DISABLED
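
One sanity check suggested in the comments below is whether the key slot material is actually random (bad key material will never accept a password). From the dump above, slot 0's key material starts at sector 8, and 4000 AF stripes of a 256-bit key come to 128000 bytes, i.e. 250 sectors. A crude sketch:

    # eyeball the keyslot area; long runs of zeroes would be a bad sign
    dd if=/dev/loop0 bs=512 skip=8 count=250 2>/dev/null | hexdump -C | head
    # entropy smoke test: random data barely compresses, so expect ~128000 bytes out
    dd if=/dev/loop0 bs=512 skip=8 count=250 2>/dev/null | gzip -c | wc -c
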
Comments:
  • If the RAID was assembled incorrectly (mdadm --examine for all members?), a weird offset and a subsequently broken header could appear. Even so, off by 2 MiB on a 4-disk RAID10 is too strange. Since LVM is involved, note that LVM keeps a metadata history (the first MiB of a PV is a ring buffer; you can check it with strings -w). If the VG was already enabled, the current version of the metadata might also appear under /etc/lvm. It would be interesting to check if there are any shenanigans going on in there. As for LUKS, check if the key material is random (luksDump shows its offset); bad key material won't accept any password. Commented Feb 7 at 14:47
  • You shouldn't have LUKS on top of a VG - that's just asking for trouble. Maybe a VG on top of LUKS, or LVs from the VG each on top of their own LUKS. But never LUKS on top of a VG, as a VG should be treated as an opaque container. Commented Feb 7 at 15:48
  • Ignoring LUKS for a moment. When you've activated your VG (vgchange -ay), do vgs and lvs show you a valid Volume Group and any Logical Volumes? Commented Feb 7 at 15:51
  • OK, I'm out of suggestions. Good luck. Commented Feb 7 at 17:00
  • Turns out the RAID wasn't built properly. On two of the drives, the partitions wouldn't show up in the system; on the other two, fdisk says the partition doesn't start on a physical sector boundary. I powered off the two drives that weren't showing partitions, turned my computer back on, and now the VG shows LUKS at byte 0. The passwords I remember still don't work, so I'm wondering if there is further corruption. Commented Feb 7 at 21:05
