VMs on a Proxmox server suddenly fail to start, with the following error message:
kvm: -drive file=/dev/pve/vm-400-disk-0,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on: Could not open '/dev/pve/vm-400-disk-0': No such file or directory
TASK ERROR: start failed: QEMU exited with code 1
Checking the /dev/pve/ directory shows that it is empty. The initial suspicion was that a kernel update had caused this, but searching the official Proxmox forum mostly pointed to disk space problems. Checking disk usage showed that the space used was indeed close to the limit, so a full thin pool was taken to be the cause.
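Before changing anything, the pool usage can be confirmed with LVM's own reporting tools; a quick check (the pool name pve/data comes from the error messages in this post):
# Show data and metadata usage of the thin pool; values near 100% indicate a full pool
lvs -a -o lv_name,lv_size,data_percent,metadata_percent pve
# Show how much free space is left in the volume group
vgs pve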
However, attempting to delete a disk to free space produced the following error:
lvremove snapshot 'pve/snap_vm-102-disk-0_storage210818' error: Failed to update pool pve/data.
After comparing the faulty host with a healthy one, the next idea was to activate and mount the pve volume and copy the data off, but that fails with:
TASK ERROR: activating LV 'pve/data' failed: Activation of logical volume pve/data is prohibited while logical volume pve/data_tmeta is active.
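The state the message describes, with the hidden sub-volumes active while the pool itself is not, can be seen with lvs (a diagnostic sketch, not required for the fix):
# List all LVs including hidden ones; the fifth attr character 'a' marks an active volume.
# In the broken state, [data_tmeta] and [data_tdata] are active but data is not.
lvs -a -o lv_name,lv_attr pve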
Following the prompt, I found this post: https://forum.proxmox.com/threads/upgrade-to-7-1-gone-wrong-activating-lv-pve-data-failed.101353/#post-437333.
There are three methods to solve this issue:
First method (using a script):
# Deactivate the thin pool's hidden metadata and data sub-volumes
lvchange -an pve/data_tmeta
lvchange -an pve/data_tdata
# Re-activate the logical volumes in the pve volume group
lvchange -ay pve
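If the commands succeed, the device nodes under /dev/pve/ should reappear and the affected guests can be started again; a quick verification, using VM 400 from the error above:
# The logical volumes should be listed again
ls /dev/pve/
lvs pve
# Try starting the affected VM
qm start 400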
Second method (using a systemd service):
Create /etc/systemd/system/lvm-fix.service with the following contents:
[Unit]
Description=Activate all VG volumes (fix)
Before=local-fs.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/vgchange -ay

[Install]
WantedBy=multi-user.target
Then reload systemd and enable the service:
systemctl daemon-reload
systemctl enable lvm-fix.service
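Once enabled, the service only runs at the next boot; to apply the same fix immediately without rebooting, it can also be started by hand:
# Run the activation once right now and check that it succeeded
systemctl start lvm-fix.service
systemctl status lvm-fix.service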
Third method:
Add the following setting to the `global` section of `/etc/lvm/lvm.conf`:
thin_check_options = [ "-q", "--skip-mappings" ]
Then regenerate the initramfs so the change takes effect at boot:
update-initramfs -u
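Before rebooting, the effective value can be checked with lvmconfig to make sure the edit landed in the right place (a quick sanity check):
# Print the thin_check_options value LVM will actually use
lvmconfig global/thin_check_options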
The first two methods take effect without a reboot; the third one requires a reboot.