Help with software RAID1 and KVM with LVM storage type
Hello,
I need help with a new server setup I am doing; I am not having any luck with this one. My previous KVM nodes all have hardware RAID10 with SSDs and I never had an issue like this before. I am definitely missing something somewhere, but I cannot figure it out. Please help.
This is a new node for KVM, and I am using Virtualizor for easy management.
There are 2 NVMe SSDs in software RAID1 running AlmaLinux 8. The software RAID1 and the OS were installed using the provider's automated method. (No issues with the OS or RAID1.)
/dev/md5 (the RAID1 array) was used to create the LVM PV:
pvcreate /dev/md5
vgcreate vg /dev/md5
Virtualizor is using the LVM storage type with the RAW format.
VMs are set up under
/dev/vg/
So a current test VM's disk looks like:
LV Path /dev/vg/vsv1004-dPp9Ya2whJOFZnPh-m9nDgYcVPsyRkKD8
Everything seems to work fine. (The VM boots without any issue, I can reboot the VM, etc. All OK.)
However, when the host node is rebooted, there is no /dev/vg/ to start the VPS.
The disks are there and pvs/vgs/lvs show all the correct paths, but the /dev/vg/ path is missing, so the VPS will not boot since it cannot find its disk.
If I set up a new VPS, it works: /dev/vg/ becomes active again and I can boot the previously non-working VMs.
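In case it is useful, these are the commands I can run to show the state after a host reboot (vg and md5 are my names from above):
cat /proc/mdstat                        # is /dev/md5 assembled?
pvs
vgs
lvs -o lv_name,vg_name,lv_attr          # 5th lv_attr character "a" = active
ls -l /dev/vg/                          # the device links Virtualizor expects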
Any ideas or help on what I am missing here?
Why doesn't /dev/vg/ (my LVM volume group) come back on its own after the reboot?
If you need any other info from me, please ask.
Thank you so much.
Comments
Sounds like something is out of order with how RAID and LVM are brought up at boot. I couldn't tell you why that would be, but since everything works to the point that pvs/vgs/lvs display what you expect, does
vgscan --mknodes
fix up the missing /dev entries for you?

After rebooting the main node, if I do
vgchange -a y
it activates the volume group; then I reboot the VMs and everything works as it should.
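One way to avoid running that by hand after every reboot (just a sketch, assuming the VG simply isn't being auto-activated at boot; the unit name is made up, and Before=libvirtd.service is a guess for whatever service actually starts the VMs) would be a small one-shot systemd unit:

# /etc/systemd/system/activate-vg.service
[Unit]
Description=Activate LVM volume group "vg" once the md array is up
After=local-fs.target
Before=libvirtd.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/vgchange -ay vg
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

Then:

systemctl daemon-reload
systemctl enable activate-vg.service

It is also worth checking that the array is recorded in /etc/mdadm.conf so the initramfs can assemble it early enough, and regenerating the initramfs afterwards:

mdadm --detail --scan >> /etc/mdadm.conf   # only if the array isn't already listed
dracut -f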