
I have Hyper-V on my Windows 10 laptop, with a generation 1 VM running RHEL 7.9. I am attempting to update it to run on a generation 2 VM, using UEFI boot.

As part of the move to a gen-2 VM, I need to install the UEFI boot code into the /boot partition.

I have the following packages installed, which I think is what I need:

grub2-efi-x64.x86_64 (1:2.02-0.07.el7_9.9)
shim-x64.x86_64 (15-11.el7)
grub2-efi-x64-modules.noarch (1:2.02-0.07.el7_9.9)
shim-unsigned-x64.x86_64 (15-9.el7)

efibootmgr -v shows the following:

[screenshot: efibootmgr -v output showing the boot entries I've created]

As you can see, I've tried several different configurations via efibootmgr, none of which seem to work.

The boot settings for the VM:

[screenshot: the VM's boot settings in Hyper-V Manager]

So theoretically, the VM should boot from \EFI\redhat\shimx64.efi, which as far as I understand it should be the /boot/efi/EFI/redhat/shimx64.efi file, which does exist:

[screenshot: directory listing showing /boot/efi/EFI/redhat/shimx64.efi]

df -h shows that /boot is mounted on /dev/sda1 after I chroot into /mnt/sysimage when booting via the rescue DVD.

As you can see from this screenshot of the /boot partition, there is what looks to me like a valid initramfs file for the kernel (I don't think we're even getting this far though):

[screenshot: /boot partition contents, including the initramfs for the installed kernel]

I've also tried resetting GRUB's config with:

grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

The grub.cfg file contains menu entries that look correct from a kernel-version perspective; however, I never even get to see a GRUB menu, so again I don't think we're getting that far. It sounds like a UEFI boot-loader issue to me, but I don't know how to troubleshoot the problem.

When I remove the DVD, and boot the machine, Hyper-V moves down the boot order until it hits the Network Adapter.

[screenshot: Hyper-V boot order falling through to the Network Adapter]

Eventually, Hyper-V shows this:

[screenshot: Hyper-V boot-failure message]

What am I doing wrong?

If I boot from the rescue DVD, drop to the command line at the GRUB menu (via c), and run configfile (hd0,gpt1)/efi/EFI/redhat/grub.cfg, I am able to boot the installed Linux OS.

mount | grep -i "boot" from the booted OS shows:

/dev/sda1 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,noquota)


3 Answers


The answer currently above is pretty good, but it does not adequately stress the root cause of your issue.

From reading your information dump: you probably have only a Linux-formatted /boot partition, and that is not enough for a UEFI machine to boot.

Your disk does not match the layout UEFI prescribes for bootable disks.

As was already mentioned above, with UEFI we are entering a new world of how bootloaders are set up, and the prerequisites are completely different from the old BIOS method. UEFI uses a completely new partition table format and now understands filesystems(!).

This is the simplest set of essential requirements for a disk to be recognized by the UEFI firmware as UEFI-bootable and booted from:

  1. the disk must be partitioned with the GPT partitioning scheme
  2. somewhere on the disk, in the GPT partition table, there must be one and exactly one partition marked as the EFI System Partition, the so-called ESP; in automated, streamlined setups like Red Hat's anaconda, this is often GPT partition 1, roughly 512 MB in size
  3. this partition must be formatted with a filesystem whose driver is present inside the UEFI firmware (i.e. has been "installed" into the UEFI installation inside the motherboard chip; this holds even for a VM's emulated UEFI chip)
  4. the only UEFI filesystem driver that is always mandatory (by the UEFI spec) in any UEFI install is FAT32, so it's the safest (and effectively only) filesystem to format the ESP with
  5. on that ESP, a special UEFI executable must exist at \EFI\BOOT\BOOTx64.EFI (in the case of a 64-bit system) and must be a valid, runnable bootloader executable
  6. if this file is not found, several other paths are inspected, but all of them can be overridden by an EFI variable holding a custom bootloader path; this is what efibootmgr is the editor for.

It seems that in your setup you have only one partition, sda1, with an XFS filesystem, mounted at /boot.

Your UEFI, naturally, does not have an XFS driver installed, so it is unable to read this partition (and even if it had an XFS driver, unless the partition were also of type ESP, it would still not boot from it).

This is my best guess, because you forgot to include fdisk -l output.
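For reference, a few commands that would show the missing layout information (this is illustrative; any partitioning tool's listing works):

    fdisk -l /dev/sda                          # partition table and layout; older fdisk may
                                               # only warn "GPT detected", in which case use:
    parted /dev/sda print                      # table type, partition sizes, names, flags
    lsblk -o NAME,FSTYPE,PARTTYPE,MOUNTPOINT   # filesystems plus GPT partition-type GUIDs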

In modern UEFI Linux installs, the ESP is not /boot; it is mounted deeper, within /boot, like /boot/efi!

The fix is relatively simple: you need to ensure your partition table is of GPT type (which, according to your grub line, it already is) and add a new partition of ESP type, then format it as FAT32, and ensure your /boot is not marked as ESP. /boot should be marked as a normal Linux filesystem type.

In the GPT scheme, partition types are UUIDs (but do not confuse these partition-type UUIDs with the UUIDs that identify individual partitions!), so ESP is exactly the EFI System type (C12A7328-F81F-11D2-BA4B-00A0C93EC93B), while "Linux filesystem" is one of many: for example, Linux root (x86-64) is 4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709. There is probably even an explicit Linux type for /boot.
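As an illustration (the partition number 1 below is hypothetical, not taken from the question), the type can be set either with sgdisk's short codes or with parted's flags:

    sgdisk --typecode=1:ef00 /dev/sda   # ef00 is sgdisk's shorthand for the EFI System GUID
    parted /dev/sda set 1 esp on        # newer parted; on RHEL 7's parted, "set 1 boot on"
                                        # is the synonym for "esp" on GPT disks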

Now, UEFI only cares about the ESP, and as long as it can find it, it will run any UEFI executable it finds there. This "magic" executable is what used to be called the "bootloader": it is a program that is supposed to load your actual OS.

Given that "installing" a bootloader is now as simple as copying an executable onto a FAT32-formatted ESP, the install process has become "simpler" for the user. And because the UEFI executable format is well known, derived from the Windows PE format, it is also much easier to write bootloaders now. Thus there is a much bigger choice of bootloaders to choose from, too.

A few examples:

  • refind
  • systemd-boot
  • gummiboot
  • syslinux
  • the Linux kernel itself (!!!)
  • and ... GRUB

Yes, you read that right: a modern Linux kernel compiled with the EFI stub (which most of them are) is a fully functional UEFI bootloader. It can actually boot directly into itself from the ESP, without any intermediary. So you could easily boot straight into Linux right from UEFI already.
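A minimal sketch of that direct approach, assuming the kernel and initramfs have been copied to the ESP root; the file names, device, and root= value below are illustrative, not taken from the question:

    # Create a UEFI boot entry that launches the EFI-stub kernel directly:
    efibootmgr -c -d /dev/sda -p 1 -L 'RHEL (EFI stub)' \
        -l '\vmlinuz-3.10.0-1160.el7.x86_64' \
        -u 'root=/dev/mapper/rhel-root ro initrd=\initramfs-3.10.0-1160.el7.x86_64.img'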

Unfortunately, of all the bootloaders mentioned here, GRUB is probably the worst one possible. Because of this undeniable quality, it is the go-to choice on 95% of the Linux distributions on the market (as is often the case in the Linux ecosystem).

The reasons for this choice are rather finicky: it's probably due to the (useless) kernel-selection menu GRUB provides at boot, and due to the time distributions have spent polishing the scripts that integrate newly installed kernels from the package manager into that menu.

So even after you prepare the ESP, format it correctly, and could boot straight into Linux from it right away, you now have the second big Linux booting problem, known as "the GRUB problem".

GRUB predates UEFI, and it solves many of the booting issues the old BIOS had in the same way UEFI does. By now you should realize that UEFI is basically its own Windows-98-level OS, only preinstalled directly into the motherboard. It is also very complex.

In the same way, GRUB is its own special mini-OS, pre-installed into the traditional /boot partition (it has its own filesystem drivers just like UEFI, its own shell just like UEFI, and other such things, just like UEFI, only nobody knows or remembers how to use and handle them). You could just ditch it, but since you want automatic kernel updates to keep working in conjunction with kernel package updates (through yum/dnf), you need to fix it now.

So in your case, this is how your boot chain must look: UEFI (OS) -> GRUB (OS) -> Linux (OS).

Thus, after you fix the UEFI preconditions listed above, you must now fix your GRUB preconditions: GRUB must be properly installed into the ESP, so that UEFI can boot into it, and it must then be configured to boot your Linux.

Now, where to mount the ESP in the in-Linux / tree for this to work is highly distro-dependent, so that is beyond me, but I believe /boot/efi is the default for the Red Hat family.

So, you need to rm -rf the contents of that directory in /boot so that it is empty, and then mount your freshly formatted, empty ESP there (/boot/efi) with the vfat driver. Once you have that done, you need to chroot into the install and force the local GRUB to regenerate its installation, in UEFI mode, into /boot/efi, i.e. onto the ESP.

This is highly distro-specific, and also depends on the GRUB version; it can be anything between:

grub-install /dev/sdX
update-grub

to

grub-install --target=x86_64-efi /dev/sdX
grub-install --recheck /dev/sdX

All of this runs from within the chroot.
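If the rescue environment has not already prepared the chroot for you (anaconda's rescue mode mounts the installed system at /mnt/sysimage, as in the question), the usual sequence looks roughly like this:

    mount --bind /dev  /mnt/sysimage/dev
    mount --bind /proc /mnt/sysimage/proc
    mount --bind /sys  /mnt/sysimage/sys
    # efibootmgr needs the EFI variables exposed inside the chroot:
    mount -t efivarfs efivarfs /mnt/sysimage/sys/firmware/efi/efivars
    chroot /mnt/sysimage /bin/bash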

If you did it right, you should now see GRUB's install files appear on the originally empty ESP, and a new EFI variable should/could appear, pointing to something like \EFI\redhat\shimx64.efi on the ESP (referenced by UUID).

I suggest removing all stale boot EFI variables with efibootmgr before the GRUB regeneration/reinstall, so you can verify the newly added entry.
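For example (the entry number below is illustrative):

    efibootmgr -v            # list entries; note the BootXXXX numbers
    efibootmgr -b 0003 -B    # delete the stale entry Boot0003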

Once you are done, you should see the GRUB kernel boot menu after a reboot.

Suggested good-night reading related to the issue:

https://en.wikipedia.org/wiki/GUID_Partition_Table

https://wiki.archlinux.org/title/Unified_Extensible_Firmware_Interface

http://www.rodsbooks.com/efi-bootloaders/

  • parted on my distro (RHEL 7.9) doesn't seem to support esp as the partition type. Apparently boot is a synonym, which seems to have allowed me to load grub. However, I probably need to update my grub config since it only loaded in emergency mode. Commented Jul 7, 2022 at 14:50

You need to understand that with UEFI, there is no such thing as a "boot block" any more: the UEFI firmware can understand filesystems and load files.

The *.efi boot file needs to be in a filesystem that is understood by the UEFI firmware (or the hypervisor). The UEFI specification only requires that the firmware must understand FAT32; other filesystem types may be added if desired.

But RHEL 7.x's default filesystem type is XFS, and Hyper-V's virtual firmware does not include XFS filesystem support.

So you should add a small partition to your system disk (512 MB is probably just fine; even less might be enough, although with modern disks, partitioning in units of less than 1 GB is micro-managing). Set its type in the partition table to 0xEF if your system disk is MBR-partitioned; if you use GPT partitioning, just set the partition type to ESP (= EFI System Partition) using your partitioning tool of choice. Then run mkfs.fat on the partition, mount it somewhere so you can move the current contents of your /boot/efi to the new partition, and finally mount the new partition at /boot/efi, as sketched below.
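A sketch of that sequence, assuming the new partition shows up as /dev/sda3 and using /mnt/newesp as a scratch mount point (both names are illustrative):

    mkfs.fat -F 32 /dev/sda3                 # format the new ESP
    mkdir -p /mnt/newesp
    mount /dev/sda3 /mnt/newesp
    cp --archive /boot/efi/. /mnt/newesp/    # carry the current EFI tree over
    umount /mnt/newesp
    mount /dev/sda3 /boot/efi                # remount in place; update /etc/fstab to match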

After that, you could use the efibootmgr command to clean up your old attempts, then run grub2-install again to build a correct UEFI boot variable entry automatically. Alternatively, you could create the entry yourself with efibootmgr, if you wish. Note that the UUID string on the boot entry displayed by efibootmgr -v should match the PARTUUID of the FAT32 partition mounted at /boot/efi. (You can check with lsblk -o +PARTUUID.)
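The cross-check would look something like this (the UUID and entry shown are illustrative):

    > lsblk -o NAME,MOUNTPOINT,PARTUUID | grep 'boot/efi'
    sda1  /boot/efi  93be2242-cfa5-4759-86a8-e563092da88d
    > efibootmgr -v | grep -i shim
    Boot0000* Red Hat Enterprise Linux  HD(1,GPT,93be2242-cfa5-4759-86a8-e563092da88d,...)/File(\EFI\redhat\shimx64.efi)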


Documenting exactly what I had to do, for future visitors. Without the details provided in the other answers, this would have been far more difficult.

I was converting my Hyper-V machine from "Generation 1" to "Generation 2". The big difference is that the boot mechanism goes from the tested, tried, and true method of the BIOS loading the boot sector directly into memory and executing it, to the UEFI firmware looking for an "ESP" or "BOOT" partition and loading an EFI boot file from there.

UEFI boot requires a partition to be marked as bootable by the UEFI firmware, and chances are the UEFI firmware doesn't understand anything but a FAT-formatted partition. Since Hyper-V is a Microsoft product, it likely understands how to boot from NTFS formatted partitions as well.

Red Hat Enterprise Linux 7 uses the XFS filesystem by default, which cannot be read by the UEFI firmware.

So, steps I took:

  1. Take a backup of my /boot file system. I have a secondary VHDX disk attached to my VM, mounted as /data, so I used that as the location for my backup:

    mkdir /data/old_boot
    cp --archive /boot /data/old_boot
  2. Use lsblk to determine which device contains the /boot filesystem:

    > lsblk
    NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda               8:0    0   129G  0 disk
    ├─sda1            8:1    0   499M  0 part /boot
    └─sda2            8:2    0 126.5G  0 part
      ├─rhel-root   253:0    0    50G  0 lvm  /
      ├─rhel-swap   253:1    0   7.9G  0 lvm  [SWAP]
      └─rhel-home   253:2    0  68.6G  0 lvm  /home
    sdb               8:16   0   512G  0 disk /data
  3. Use parted to drop the existing /boot partition:

    > parted /dev/sda print
    Model: Msft Virtual Disk (scsi)
    Disk /dev/sda: 139GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    Disk Flags:

    Number  Start   End    Size   File system  Name       Flags
     1      1049kB  524MB  523MB  xfs          primary
     2      525MB   136GB  136GB               Linux LVM  lvm

    In the above, I've started parted and executed the print command, which shows the list of partitions on /dev/sda; the partition I want to drop is /dev/sda1. The size differences between lsblk and parted are due to parted using 1 GB = 1,000,000,000 bytes, whereas lsblk uses 1 GB = 1,073,741,824 bytes.

    This command, run inside the parted shell, actually deletes the partition:

    rm 1 
  4. Make the new partition, and assign it the esp or boot flag (on this version of parted, boot is the synonym for esp), to signal to the UEFI firmware that this is the partition we'll attempt to boot from:

    > parted
    (parted) mkpart primary fat32 1 524
    (parted) print
    Model: Msft Virtual Disk (scsi)
    Disk /dev/sda: 139GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    Disk Flags:

    Number  Start   End    Size   File system  Name       Flags
     1      1049kB  524MB  523MB  fat32        primary
     2      525MB   136GB  136GB               Linux LVM  lvm

    Set the boot flag:

    (parted) set 1 boot on
    (parted) print
    Model: Msft Virtual Disk (scsi)
    Disk /dev/sda: 139GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: gpt
    Disk Flags:

    Number  Start   End    Size   File system  Name       Flags
     1      1049kB  524MB  523MB  fat32        primary    boot
     2      525MB   136GB  136GB               Linux LVM  lvm

    As you can see, the boot flag is now set for partition number 1.

  5. Next, we'll format the new partition:

    > mkfs.fat -F 32 /dev/sda1 
  6. We need to ensure that /etc/fstab contains the correct entries, otherwise we'll get some nasty surprises when Linux boots, along the lines of "Timed out waiting for device" and "Dependency failed for /boot". We need the partition UUID of the boot device, via lsblk:

    > lsblk -o +PARTUUID
    NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT PARTUUID
    sda               8:0    0   129G  0 disk
    ├─sda1            8:1    0   499M  0 part /boot      93be2242-cfa5-4759-86a8-e563092da88d
    └─sda2            8:2    0 126.5G  0 part            484a1ac4-eba0-49b4-9910-b6471462d8b0
      ├─rhel-root   253:0    0    50G  0 lvm  /
      ├─rhel-swap   253:1    0   7.9G  0 lvm  [SWAP]
      └─rhel-home   253:2    0  68.6G  0 lvm  /home

    In my case, the partition UUID is 93be2242-cfa5-4759-86a8-e563092da88d; yours will be different, since these identifiers are universally unique by design.

    Next we ensure the /etc/fstab file contains the correct partition UUID for the /boot device:

    > cat /etc/fstab

    #
    # /etc/fstab
    # Created by anaconda on Mon Jan 16 22:52:23 2017
    #
    # Accessible filesystems, by reference, are maintained under '/dev/disk'
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    /dev/mapper/rhel-root   /      xfs   defaults  0 0
    /dev/sda1               /boot  vfat  defaults  0 0
    /dev/mapper/rhel-home   /home  xfs   defaults  0 0
    /dev/mapper/rhel-swap   swap   swap  defaults  0 0

    In the example above, we see that the /boot device is /dev/sda1. I needed to replace /dev/sda1 on that line with PARTUUID=93be2242-cfa5-4759-86a8-e563092da88d before the system would boot. I used vi to make the change, but feel free to use an editor that doesn't make you crazy. Once I changed it, the contents looked like:

    #
    # /etc/fstab
    # Created by anaconda on Mon Jan 16 22:52:23 2017
    #
    # Accessible filesystems, by reference, are maintained under '/dev/disk'
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    /dev/mapper/rhel-root                          /      xfs   defaults  0 0
    PARTUUID=93be2242-cfa5-4759-86a8-e563092da88d  /boot  vfat  defaults  0 0
    /dev/mapper/rhel-home                          /home  xfs   defaults  0 0
    /dev/mapper/rhel-swap                          swap   swap  defaults  0 0
  7. Restore the files from the old /boot backup, using cp:

    > cp --archive /data/old_boot/boot/. /boot/
  8. Update the grub configuration:

    > grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg 
  9. Install the necessary UEFI boot components:

    > yum reinstall grub2 grub2-efi-x64 grubby shim-x64 
  10. Use efibootmgr to make an entry pointing at the .efi boot file:

    > efibootmgr -c -d /dev/sda -p 1 -l '\efi\EFI\redhat\shimx64.efi' -L 'Red Hat Enterprise Linux' 

    Then take a look at efibootmgr to see the list of UEFI boot options:

    > efibootmgr -v
    BootCurrent: 0006
    Timeout: 0 seconds
    BootOrder: 0000
    Boot0000* Red Hat Enterprise Linux  HD(1,GPT,93be2242-cfa5-4759-86a8-e563092da88d,0x800,0xf9800)/File(\efi\EFI\redhat\shimx64.efi)

At this point I was able to reboot, and Linux booted correctly.

  • Great, glad you got it working! In cases like these, especially if you have enough free space on the hypervisor, it is often easier to build a new Gen2 machine in UEFI mode right away and transfer your settings from one VM to the other over the network or some other shared storage. Of course, this was also a good learning experience. You successfully managed to fix UEFI in a HV VM! Now imagine fixing it on real HW, with its crappy implementation of the UEFI install, which is usually three times more "fun" ;). Commented Sep 28, 2022 at 17:17
  • On the other hand, this migration to Gen2 was well worth it, because Hyper-V is one of the best Microsoft products I have seen so far. If you run lspci on your rejuvenated VM, you will see there is now virtually (:)) no(!) PCI "hardware" in the machine, as Microsoft provides Linux with Hyper-V drivers for every virtual device, and everything in the VM is paravirtual, which has many implications, especially for VM performance. Commented Sep 28, 2022 at 17:22
  • Agreed. In my case I had to migrate to gen-2 since BIOS (gen-1) wasn't supported by the target OS. Commented Sep 28, 2022 at 20:23
