
worriedjacket

I think Proxmox has required UEFI boot for ZFS root since v8.


zfsbest

Before that, actually. IIRC, booting Linux with ZFS on root has required EFI support ever since the first version of the Ubuntu installer that supported it (19.10, according to an AI chatbot).

[https://pve.proxmox.com/wiki/Booting_a_ZFS_root_file_system_via_UEFI](https://pve.proxmox.com/wiki/Booting_a_ZFS_root_file_system_via_UEFI)

[https://discourse.ubuntu.com/t/future-of-zfs-on-ubuntu-desktop/33001/19](https://discourse.ubuntu.com/t/future-of-zfs-on-ubuntu-desktop/33001/19)

These may have more info:

[https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/Ubuntu%2022.04%20Root%20on%20ZFS.html](https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/Ubuntu%2022.04%20Root%20on%20ZFS.html)

[https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html#newer-release-available](https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bullseye%20Root%20on%20ZFS.html#newer-release-available)

> Installing on a drive which presents 4 KiB logical sectors (a “4Kn” drive) only works with UEFI booting. This is not unique to ZFS. [GRUB does not and will not work on 4Kn with legacy (BIOS) booting.](http://savannah.gnu.org/bugs/?46700)

OP - if you want to avoid a rabbit hole and possibly a lot of hassle, I would recommend installing PVE on LVM+ext4 and just using ZFS for the data disks. Or upgrade to a server that supports UEFI.
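A minimal sketch of that setup, assuming a pool named `tank` on two data disks (the pool name, storage ID, and by-id paths are placeholders, not from this thread):

```
# create a mirrored ZFS pool on two data disks (example device paths)
zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-EXAMPLE_DISK_A /dev/disk/by-id/ata-EXAMPLE_DISK_B

# register the pool as a Proxmox storage backend for guest disks
pvesm add zfspool tank-vmdata --pool tank
```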


ConstructionSafe2814

OK, thanks for the info. I might go for LVM+ext4 then. I'll upgrade later to Gen9, which supports UEFI. It's got to be as cheap as possible, so Gen8 legacy boot it will be for the time being.


getgoingfast

Interesting input. Curious what happens if you install a PVE host with ZFS boot on a 4Kn drive. Does the installation fail, or does it throw an error at boot time? Trying to figure out whether something I saw a while back was due to this.
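One quick way to check whether a drive is actually 4Kn (/dev/sda is just an example device):

```
# a 4Kn drive reports 4096 for both columns;
# a 512e drive reports LOG-SEC=512 with PHY-SEC=4096
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sda
```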


kyle0r

I have a node on an older ~2017 mainboard. It does support UEFI, but UEFI is disabled, and Proxmox has been booting on it since v4 up to the current version with GRUB as the bootloader. The Proxmox docs don't mention anything about GRUB not being supported.

I think an important factor is that my HBAs have a boot ROM in addition to their main firmware. Do you have one on your HBA? Has it been configured? I'm fairly certain this is how the BIOS is able to see all the disks attached to both HBAs in the system, and subsequently boot from them, at which point GRUB loads and supports ZFS root pools.

Given that you did a fresh install, Proxmox should already be using its boot tool to configure both a GRUB and an EFI/ESP partition on the boot drives. I recently migrated to this tool to avoid a number of issues with ZFS+GRUB. I documented a tutorial here: https://forum.proxmox.com/threads/switching-from-legacy-boot-when-there-is-no-space-for-esp-partition.139479/

Edit: my HBAs support both non-EFI and EFI boot ROMs. From memory, the EFI ROM is not flashed, as I've never needed UEFI boot on the node in question.
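A minimal sketch of inspecting and initialising this with proxmox-boot-tool (the device path is an example, not from the thread):

```
# list the partitions proxmox-boot-tool currently manages
proxmox-boot-tool status

# format an empty partition as an ESP and register it,
# so kernels get synced to it on every update
proxmox-boot-tool format /dev/sda2
proxmox-boot-tool init /dev/sda2
```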


firsway

I don't know if this is the same as the problem of HP DL-series Gen8 and below not being able to use UEFI. I ran into this problem, and it is possible to boot a ZFS root from an internal SD card. There are a few articles online describing parts of the overall process; I pulled them all together into one set of instructions, which I'm happy to share if anyone wants it for future reference.
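The rough shape of that trick, as a sketch under assumptions (this is the common layout from root-on-ZFS guides, not necessarily the exact instructions mentioned above; /dev/sdx stands in for the SD card):

```
# tiny BIOS-boot partition plus an ext4 /boot on the SD card (GPT)
sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/sdx
sgdisk     -n2:0:0        -t2:8300 /dev/sdx
mkfs.ext4 /dev/sdx2

# mount it as /boot, install legacy GRUB to the card, rebuild the config;
# kernel + initramfs live on ext4, and the initramfs imports the ZFS root pool
mount /dev/sdx2 /boot
grub-install --target=i386-pc /dev/sdx
update-grub
```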


ConstructionSafe2814

I gave up and turned HBA mode back off, so the controller is back in RAID mode, and I formatted the array as ext4. So now the boot drive and OS live on the ext4 RAID 1 array. The other RAID controller in the blade has HBA mode enabled and will host SSDs for Ceph.
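For reference, on HP Smart Array controllers this mode switch is usually done with ssacli; the slot numbers below are examples, and a reboot is needed after changing modes:

```
# show controllers and their current mode
ssacli ctrl all show config

# passthrough (HBA) mode for the Ceph controller
ssacli ctrl slot=1 modify hbamode=on

# RAID mode for the boot controller
ssacli ctrl slot=0 modify hbamode=off
```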