A story fairly similar to [this one][1] happened to me yesterday.

*Update:*
My only hard drive `/dev/sda` was originally partitioned this way:

 Device Boot Start End Blocks Id System
 /dev/sda1 * 2048 1026047 512000 83 Linux
 /dev/sda2 1026048 976773119 487873536 83 Linux

`/dev/sda1` is the `/boot` partition, it is not encrypted. The `/dev/sda2` partition was originally a LUKS partition containing the volume group `fedora_hostname`, which itself contained the following logical volumes:

 /dev/mapper/fedora_hostname-root
 /dev/mapper/fedora_hostname-home
 /dev/mapper/fedora_hostname-swap

I was on a live disk, just about to shrink the `/dev/sda2` partition following [those steps][2], when I abruptly had to stop and unplug my computer. I had just opened the LUKS container using

 cryptsetup luksOpen /dev/sda2 ExistingExt4

and had a look around (it was the first time I was playing with LVM & LUKS), mounting and unmounting the different logical volumes (swap, root, home), when I had to hastily close the terminal without closing the LUKS container (`exit`; `exit`). Not really paying attention, I withdrew the live CD before the end of the shutdown procedure and turned the laptop off by pressing the power button.
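
For the record, I now understand the clean teardown I skipped should have looked something like this (the mount points are just placeholders for wherever I had the volumes mounted):

```shell
# unmount anything still mounted from the logical volumes
umount /mnt/root /mnt/home

# deactivate the volume group before touching the container
vgchange -an fedora_hostname

# only then close the LUKS mapping
cryptsetup luksClose ExistingExt4
```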

When I started the computer a few hours later, the boot hung at "Reached target Basic System"; the only hint I got from booting from the hard drive in rescue mode was

 dracut-initqueue [...] unit file of systemd-cryptsetup@luksMYUUID changed on disk

Using a rescue disk, I opened the drive (`cryptsetup luksOpen /dev/sda2 ExistingExt4`) and checked the state of the file system with `e2fsck`, which returned a rather discouraging

 fsck.ext4: Superblock invalid, trying backup blocks...
 fsck.ext4: Bad magic number in super-block while trying to open /dev/sda2

`pvdisplay` reported that all physical extents were free (so the volume appeared virtually empty), and `vgscan` shared that opinion, mentioning that my volume group did not contain any logical volumes. `lvdisplay` returned nothing.

I had not resized anything yet, so I was very puzzled by the problem.

 testdisk /dev/mapper/ExistingExt4

showed the right volumes (swap, root and home) and allowed access to all the data on the system, so I chose `Write partition structure to disk`, hoping that I would see them again after rebooting. That did not happen, however, although `fdisk -l /dev/mapper/ExistingExt4` now returns

 Disk /dev/mapper/ExistingExt4: 465.3 GiB, 499580403712 bytes, 975742976 sectors
 Units: sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 4096 bytes
 I/O size (minimum/optimal): 4096 bytes / 4096 bytes
 Disklabel type: dos
 Disk identifier: 0x00000000
 
 Device Boot Start End Blocks Id System
 /dev/mapper/ExistingExt4p1 * 2048 8161279 4079616 82 Linux swap / Solaris
 /dev/mapper/ExistingExt4p2 8161280 870885375 431362048 83 Linux
 /dev/mapper/ExistingExt4p3 870885376 975742975 52428800 83 Linux
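
As a small sanity check on those numbers, the sector count times the 512-byte sector size does work out to the advertised 465.3 GiB:

```shell
# 975742976 sectors * 512 bytes/sector = total capacity in bytes
echo $((975742976 * 512))   # 499580403712 bytes, i.e. ~465.3 GiB
```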

I am still not able to mount the volumes individually, as I could when everything was O.K.; in fact, I cannot find them in `/dev/mapper` at all. Additionally, since I wrote the table to the disk, I cannot see the physical volumes anymore, let alone the volume groups and logical volumes (`pvscan`, `vgscan` and `lvscan` return nothing). But I remain confident, and I am sure that the solution to this problem is not so far-fetched:

 - Maybe rename the volume entries in the partition table to match the original names referenced by GRUB. <-- Unlikely
 - Maybe find a backup and restore the metadata using `vgcfgrestore` or equivalent. <-- Possible
 - Maybe rewrite the superblocks on the disk using `mke2fs -S`, as proposed in the story I mentioned at the beginning. `testdisk` is positive about the block size: 4096. <-- Risky
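
For the second option, from what I gather of `vgcfgrestore`, the invocation would be roughly this, assuming a metadata backup such as `/etc/lvm/backup/fedora_hostname` can still be found (path and volume group name are per my setup above; I have not tried it yet):

```shell
# restore the volume group metadata from a saved backup file
vgcfgrestore -f /etc/lvm/backup/fedora_hostname fedora_hostname

# then reactivate the logical volumes
vgchange -ay fedora_hostname
```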

But all those solutions (and the other ones you might propose) go beyond my limited knowledge of LVM, as my account may have disclosed already. I would like to know which procedure would be suitable to get the system booting properly again.

*News from the front*

Again, I accessed my old system using `testdisk`. As mentioned before, the partitions are fine, and I had a peek at my old file system, especially `/etc`, to see if I could fetch some valuable information for the restore. The longer this game goes on, the more I have the feeling that it is just a matter of labeling. I copied `/etc/lvm` to the live session's Documents folder and tried to use it as a backup file for `vgcfgrestore`, but it failed with a poor

 Couldn't find device with uuid cZ83jX-WXkk-tNG4-ulGT-sAqq-HlKq-Omtqc8.
 PV unknown device missing from cache
 Format-specific setup for unknown device failed
 Restore failed.
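
From what I have read, that error means the physical volume label itself is missing, and the usual remedy before retrying `vgcfgrestore` is to rewrite the PV label with its old UUID, taken from the saved metadata (I have not dared to run this yet; the backup path is my guess, based on my copied `/etc/lvm`):

```shell
# recreate the PV label with its original UUID, using the saved metadata
pvcreate --uuid cZ83jX-WXkk-tNG4-ulGT-sAqq-HlKq-Omtqc8 \
         --restorefile /etc/lvm/backup/fedora_hostname \
         /dev/mapper/ExistingExt4

# then retry the metadata restore and activation
vgcfgrestore -f /etc/lvm/backup/fedora_hostname fedora_hostname
vgchange -ay fedora_hostname
```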

In fact, `blkid` does not return any UUID similar to that one. What is more troubling (and probably my own doing) is that `blkid` returns this for my ext4 file system:

 /dev/mapper/ExistingExt4: PTTYPE="dos"

I think I made a mistake with `Write partition structure to disk` in `testdisk`, as the UUID of my current `ExistingExt4` in no way matches that of the former device. In fact, `ExistingExt4` has no UUID at all. Is it possible to give this volume back all the attributes of the previous, working unit: UUID, logical partitioning, and so on? If so, how?
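
One thing I am considering, if the stray DOS partition table is indeed what is masking the LVM label, is to first list the signatures `testdisk` wrote without erasing anything (`-n` is `wipefs`'s dry-run flag), before deciding whether to remove just the partition-table signature:

```shell
# dry run: list all signatures found on the opened container, erase nothing
wipefs -n /dev/mapper/ExistingExt4

# if only the bogus "dos" signature shows up, it could then be removed with
# wipefs -a -t dos /dev/mapper/ExistingExt4    # (not run yet!)
```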


 [1]: http://unix.stackexchange.com/questions/33284/recovering-ext4-superblocks/41946#41946
 [2]: http://unix.stackexchange.com/questions/41091/how-can-i-shrink-a-luks-partition-what-does-cryptsetup-resize-do