LVM volume group offline after reboot
I have a small server in my closet running 4 Debian 12 virtual machines under KVM/libvirt. The virtual machines have been running fine for months. They have unattended-upgrades enabled, and I generally leave them alone; I only reboot them periodically so that the latest kernel upgrades get applied.
All the machines have an LVM configuration. Generally it's a debian-vg volume group on /dev/vda for the operating system, which has been configured automatically by the installer, and a vgdata volume group on /dev/vdb for everything else. All file systems are simple ext4, so nothing fancy. (*)
A couple of days ago, one of the virtual machines didn't come up after a routine reboot and dumped me into a maintenance shell. It complained that it couldn't mount the filesystems that were on vgdata. First I tried simply rebooting the machine, but it kept dumping me into maintenance. Investigating a bit deeper, I noticed that vgdata and the block device /dev/vdb were detected, but the volume group was inactive and none of the logical volumes were found. I ran vgchange -a y vgdata and that brought it back online. After several test reboots the problem didn't reoccur, so it seemed to be fixed permanently.
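For reference, the "inactive" state can be read off the lv_attr column of lvs: the fifth character is the activation flag ('a' when active, '-' when not). A small sketch of that check (the helper name is mine):

```shell
# The lv_attr string printed by `lvs` encodes LV state; the 5th character
# is the activation flag: 'a' = active, '-' = inactive.
lv_active() {
    # $1: an lv_attr string such as "-wi-ao----"
    [ "$(printf '%s' "$1" | cut -c5)" = "a" ]
}

lv_active "-wi-ao----" && echo "active"     # a healthy, mounted LV
lv_active "-wi-------" || echo "inactive"   # what the failed boots showed
```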
I was willing to write it off as a glitch, but then a day later I rebooted one of the other virtual machines, and it also dumped me into maintenance with the same error on its vgdata. Again, running vgchange -a y vgdata fixed the problem. Two occurrences of the same error in two days on different virtual machines is not a coincidence, so something is going on here, but I can't figure out what.
I looked at the host logs, but I didn't find anything suspicious that could indicate a hardware error, for example. I should also mention that the virtual disks of the two machines live on entirely different physical disks: VM1 is on an HDD and VM2 on an SSD.
I also checked whether these VMs had been running kernel 6.1.64-1, the one with the recent ext4 corruption bug, at any point, but that does not appear to be the case.
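One way to verify this, assuming a persistent journal, is to read the kernel's first log line ("Linux version …") for each recorded boot, via journalctl --list-boots and journalctl -b <boot-id> -k -o cat | head -n1. A sketch of the extraction step (the helper name is mine):

```shell
# Extract the kernel release from the kernel's first dmesg line, e.g. the
# output of: journalctl -b <boot-id> -k -o cat | head -n1
kernel_of_line() {
    printf '%s\n' "$1" | sed -n 's/^Linux version \([^ ]*\).*/\1/p'
}

kernel_of_line "Linux version 6.1.0-13-amd64 (debian-kernel@lists.debian.org)"
# Prints: 6.1.0-13-amd64
```

Note that 6.1.64-1 is the Debian package version; the release string reported by the kernel itself differs, so both may need checking.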
Below is an excerpt from the systemd journal of the failed boot of the second VM, with what I think are the relevant parts. A full pastebin of the log can be found here.
```
Dec 16 14:40:35 omega lvm[307]: PV /dev/vdb online, VG vgdata is complete.
Dec 16 14:40:35 omega lvm[307]: VG vgdata finished
...
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvbinaries.device: Job dev-vgdata-lvbinaries.device/start timed out.
Dec 16 14:42:05 omega systemd[1]: Timed out waiting for device dev-vgdata-lvbinaries.device - /dev/vgdata/lvbinaries.
Dec 16 14:42:05 omega systemd[1]: Dependency failed for binaries.mount - /binaries.
Dec 16 14:42:05 omega systemd[1]: Dependency failed for local-fs.target - Local File Systems.
Dec 16 14:42:05 omega systemd[1]: local-fs.target: Job local-fs.target/start failed with result 'dependency'.
Dec 16 14:42:05 omega systemd[1]: local-fs.target: Triggering OnFailure= dependencies.
Dec 16 14:42:05 omega systemd[1]: binaries.mount: Job binaries.mount/start failed with result 'dependency'.
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvbinaries.device: Job dev-vgdata-lvbinaries.device/start failed with result 'timeout'.
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvdata.device: Job dev-vgdata-lvdata.device/start timed out.
Dec 16 14:42:05 omega systemd[1]: Timed out waiting for device dev-vgdata-lvdata.device - /dev/vgdata/lvdata.
Dec 16 14:42:05 omega systemd[1]: Dependency failed for data.mount - /data.
Dec 16 14:42:05 omega systemd[1]: data.mount: Job data.mount/start failed with result 'dependency'.
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvdata.device: Job dev-vgdata-lvdata.device/start failed with result 'timeout'.
```
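One detail that stands out to me: the timeouts fire exactly 90 seconds after the PV was reported online, which matches systemd's default device timeout. The dev-vgdata-*.device units are the device dependencies of the mount units that systemd-fstab-generator derives from /etc/fstab, so the relevant entries presumably look something like this (a sketch; the actual entries may reference UUIDs instead):

```
# /etc/fstab (sketch, not my literal file)
/dev/vgdata/lvbinaries  /binaries  ext4  defaults  0  2
/dev/vgdata/lvdata      /data      ext4  defaults  0  2
```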
(*) For reference, the disk layout on the affected machine is as follows:
```
# lsblk
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
vda                   254:0    0   20G  0 disk
├─vda1                254:1    0  487M  0 part /boot
├─vda2                254:2    0    1K  0 part
└─vda5                254:5    0 19.5G  0 part
  ├─debian--vg-root   253:2    0 18.6G  0 lvm  /
  └─debian--vg-swap_1 253:3    0  980M  0 lvm  [SWAP]
vdb                   254:16   0   50G  0 disk
├─vgdata-lvbinaries   253:0    0   20G  0 lvm  /binaries
└─vgdata-lvdata       253:1    0   30G  0 lvm  /data

# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  debian-vg   1   2   0 wz--n- <19.52g    0
  vgdata      1   2   0 wz--n- <50.00g    0

# pvs
  PV         VG        Fmt  Attr PSize   PFree
  /dev/vda5  debian-vg lvm2 a--  <19.52g    0
  /dev/vdb   vgdata    lvm2 a--  <50.00g    0

# lvs
  LV         VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root       debian-vg -wi-ao----  18.56g
  swap_1     debian-vg -wi-ao---- 980.00m
  lvbinaries vgdata    -wi-ao----  20.00g
  lvdata     vgdata    -wi-ao---- <30.00g
```