Proxmox Disk Performance Problems

I’ve started encountering a problem that I could use some assistance troubleshooting. I’ve got a Proxmox system that primarily hosts my Opnsense router. I’ve had this specific setup for about a year.

Recently, I’ve been experiencing sluggishness and noticed that the IO wait is through the roof. Rebooting the Opnsense VM, which normally only takes a few minutes, is now taking upwards of 15-20 minutes. The entire time, my IO wait sits between 50-80%.

The system has 1 disk in it, formatted ZFS. I’ve checked dmesg and the syslog for indications of disk errors (this feels like a failing disk) and found none. I also checked the SMART statistics and they all say “PASSED”.

Any pointers would be appreciated.

Example of my most recent host reboot.

Edit: I believe I’ve found the root cause of the change in performance, and it was a bit of shooting myself in the foot. I’ve been experimenting with different tools for log collection, and the most recent one is a SIEM tool called Wazuh. I didn’t realize that upon reboot it runs an integrity check that generates a ton of disk I/O. So when I rebooted this Proxmox server, that integrity check was running on Proxmox, my Pi-hole, and (I think) Opnsense concurrently, all against a single consumer-grade HDD.

Thanks to everyone who responded. I really appreciate all the performance tuning guidance. I’ve also made the following changes:

  1. Added a 2nd drive (I have several of these lying around, don’t ask), converting the ZFS pool into a mirror. This gives me both redundancy and should improve read performance.
  2. Configured a 2nd storage target on the same zpool with compression enabled and a 64k block size in Proxmox, then migrated the 2 VMs to that storage (rough commands sketched below).
  3. Since I’m collecting logs in Wazuh, I set Opnsense to use RAM disks for /tmp and /var/log.
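
Roughly what those changes look like from the shell. This is just a sketch: the device, dataset, and storage names are placeholders, and the extra partitioning/bootloader steps a mirrored boot pool needs are skipped here.

```
# Attach a second disk to the existing single-disk vdev, converting it into a mirror
zpool attach rpool <existing-disk>-part3 /dev/disk/by-id/<new-disk>-part3
# (a boot pool also needs the partition layout and bootloader copied to the new disk,
#  e.g. with sgdisk and proxmox-boot-tool; omitted here)

# New dataset with lz4 compression, exposed to Proxmox as storage with a 64k block size
zfs create -o compression=lz4 rpool/vmdata
pvesm add zfspool vmdata64k --pool rpool/vmdata --blocksize 64k --content images,rootdir
```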

Rebooted Opnsense and it was back up in 1:42.

pyrosis,
@pyrosis@lemmy.world avatar

Out of curiosity, what filesystem did you choose for your opnsense VM? Also, can you tell if it’s a zvol, qcow2, or raw disk? FYI, if it’s a qcow2 or a raw, both would benefit from a record size of 64k if they exist in a VM dataset. If it’s a zvol, 64k can still help.

I also utilize a heavily optimized setup running opnsense within Proxmox. My VM filesystem is UFS because it’s on top of Proxmox ZFS. You can always find some settings in your opnsense VM to migrate log files to tmpfs, which places them in memory. That will heavily reduce disk writes from opnsense.
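
If it helps to check, something like this shows what’s backing the disk (the VM ID here is just an example):

```
qm config 100 | grep -E '^(scsi|virtio|sata|ide)[0-9]'   # lists storage:volume for each disk
zfs list -t volume                                        # zvols (e.g. vm-100-disk-0) show up here
# a volume on zfspool-type storage is a zvol; a file ending in .qcow2 or .raw
# lives on directory-type storage instead
```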

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

Proxmox is using ZFS. Opnsense is using UFS. Regarding the record size I assume you’re referring to the same thing this comment is?

You can always find some settings in your opnsense vm to migrate log files to tmpfs which places them in memory.

I’ll look into this.

pyrosis,
@pyrosis@lemmy.world avatar

I’m specifically referencing this little bit of info for optimizing zfs for various situations.

VMs, for example, should exist in their own dataset with a tuned record size of 64k.

Media should exist in its own dataset with a tuned record size of 1MB.

lz4 is quick and should always be enabled. It will also work efficiently with larger record sizes.

Anyway, all the little things add up with ZFS. When you have an underlying ZFS, you can get away with simpler, more performant filesystems on zvols or qcow2. XFS, UFS, and EXT4 all work well with a 64k record size from the underlying ZFS dataset/zvol.

Btw, existing data doesn’t change immediately if you just change the option on a dataset. You have to move the data out and then back in for it to pick up the new record size.
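
As a rough sketch of that layout (the pool/dataset names here are made up):

```
zfs create -o recordsize=64K -o compression=lz4 tank/vms     # dataset for VM disks
zfs create -o recordsize=1M  -o compression=lz4 tank/media   # dataset for large media files
# recordsize only applies to newly written files, so existing data has to be
# copied out and back in (or migrated between datasets) to pick up the new value
```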

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

That cheat sheet is getting bookmarked. Thanks.

pyrosis,
@pyrosis@lemmy.world avatar

You are very welcome :)

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

Media should exist in its own dataset with a tuned record size of 1MB.

Should the vm storage block size also be set to 1MB or just the ZFS record size?

pyrosis,
@pyrosis@lemmy.world avatar

Most filesystems, as mentioned in the guide, that exist within qcow2, zvols, or even raws that live on a ZFS dataset would benefit from a ZFS recordsize of 64k. By default the recordsize will be 128k.

I would never utilize 1mb for any dataset that had vm disks inside it.

I would create a new dataset for media off the pool and set a recordsize of 1mb. You can only really get away with this if you have media files directly inside this dataset. So pics, music, videos.

The cool thing is you can set these options on an individual dataset basis, so one dataset can have one recordsize and another dataset can have another.
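
For example, a quick way to see what each dataset is currently using (pool name is an example):

```
zfs get -r -t filesystem recordsize,compression rpool
```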

TheHolm,
@TheHolm@aussie.zone avatar

Run a long SMART test on the disk and check the SMART data after that. Another possibility is that the ZFS pool is nearly full.
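
For example (device name is a placeholder):

```
smartctl -t long /dev/sdX    # start the long self-test; it runs in the background
smartctl -a /dev/sdX         # check the results once the estimated test time has passed
zpool list                   # the CAP and FRAG columns show how full/fragmented the pool is
```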

tvcvt,

There was a recent conversation on the Practical ZFS discourse site about poor disk performance in Proxmox (discourse.practicalzfs.com/t/…/1421/). Not sure if you’re seeing the same thing, but it could be that your VMs are running into the same too-small volblocksize that PVE uses to make zvols for its VMs under ZFS.

If that’s the case, the solution is pretty easy. In your PVE datacenter view, go to storage and create a new ZFS storage pool. Point it to the same zpool/dataset as the one you’ve already got and set the block size to something like 32k or 64k. Once you’ve done that, move the VM’s disk to that new storage pool.

Like I said, not sure if you’re seeing the same issue, but it’s a simple thing to try.
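
A rough sketch of those steps from the CLI, if you prefer it over the GUI (the VM ID, zvol path, and storage ID are examples):

```
zfs get volblocksize rpool/data/vm-100-disk-0   # see what the existing zvol was created with
# after creating the new storage entry with a larger block size (GUI or pvesm),
# move the disk onto it (older PVE spells this "qm move_disk"):
qm move-disk 100 scsi0 <new-storage-id>
```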

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

This was really interesting, thanks for the info.

scrubbles,
@scrubbles@poptalk.scrubbles.tech avatar

(If it’s not failing, which would be the first thing I’d check)

Do you have any new VMs up and running? IO was the bane of my existence with Proxmox, but I realized it’s just that VMs eat a ton of IO, especially with ZFS. A standard HDD won’t cut it (unless you have one and only one VM using that disk). Even SATA SSDs just didn’t cut it over time; I had to build a full RAID that would support 5-10 VMs before I saw IO wait drop enough.

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

I’m trying to think of anything I may have changed since the last time I rebooted the opnsense VM. But I try to keep up on updates and end up rebooting pretty regularly. The only things on this system are the opnsense VM and a small pihole VM. At the time of the screenshot above, the opnsense VM was the only thing running.

If it’s not a failing HDD, my next step is to try and dig into what’s generating the I/O to see if there’s something misbehaving.
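
If it comes to that, a few standard ways to see what’s generating the I/O (iotop and sysstat may need installing first):

```
iotop -oPa          # per-process I/O, only processes actually doing I/O, accumulated totals
pidstat -d 5        # per-process read/write rates every 5 seconds (sysstat package)
zpool iostat -v 5   # per-vdev throughput and IOPS on the ZFS side
```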

scrubbles,
@scrubbles@poptalk.scrubbles.tech avatar

I had bad luck with ZFS on Proxmox because of all of the overhead. I found with my tiny cluster it was better to do good old ext4 and then just do regular backups. ZFS actually killed quite a few of my drives because of how heavyweight it is. Not saying that’s your problem, but I wouldn’t be surprised if it was.

HybridSarcasm,
@HybridSarcasm@lemmy.world avatar

Check the ZFS pool status. You could have lots of errors that ZFS is correcting.

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

I’m starting to lean towards this being an I/O issue, but I haven’t figured out what or why yet. I don’t often make changes to this environment since it’s running my Opnsense router.


<span style="color:#323232;">root@proxmox-02:~# zpool status
</span><span style="color:#323232;">  pool: rpool
</span><span style="color:#323232;"> state: ONLINE
</span><span style="color:#323232;">status: Some supported and requested features are not enabled on the pool.
</span><span style="color:#323232;">        The pool can still be used, but some features are unavailable.
</span><span style="color:#323232;">action: Enable all features using 'zpool upgrade'. Once this is done,
</span><span style="color:#323232;">        the pool may no longer be accessible by software that does not support
</span><span style="color:#323232;">        the features. See zpool-features(7) for details.
</span><span style="color:#323232;">  scan: scrub repaired 0B in 00:56:10 with 0 errors on Sun Apr 28 17:24:59 2024
</span><span style="color:#323232;">config:
</span><span style="color:#323232;">
</span><span style="color:#323232;">        NAME                                    STATE     READ WRITE CKSUM
</span><span style="color:#323232;">        rpool                                   ONLINE       0     0     0
</span><span style="color:#323232;">          ata-ST500LM021-1KJ152_W62HRJ1A-part3  ONLINE       0     0     0
</span><span style="color:#323232;">
</span><span style="color:#323232;">errors: No known data errors
</span>
pyrosis,
@pyrosis@lemmy.world avatar

It looks like you could also do a zpool upgrade. This will just upgrade your legacy pools to the newer ZFS version. That command is fairly simple to run from the terminal if you are already examining the pool.

Edit

Btw, if you have run PVE updates, it may be expecting some newer ZFS feature flags for your pool. A pool upgrade may resolve the issue by enabling the new features.

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

I’ve done a bit of research on that and I believe upgrading the zpool would make my system unbootable.

pyrosis,
@pyrosis@lemmy.world avatar

Upgrading a ZFS pool itself shouldn’t make a system unbootable even if an rpool (root pool) exists on it.

That could only happen if the upgrade got interrupted partway through by a power outage or something like that. The upgrade itself usually only takes a few seconds from the command line.

If it makes you feel better, I upgraded mine with an rpool on it and it was painless. I do have everything backed up tho, so I rarely worry. However, I understand being hesitant.

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

I’m referring to this.

… using grub to directly boot from ZFS - such setups are in general not safe to run zpool upgrade on!


<span style="color:#323232;">$ sudo proxmox-boot-tool status
</span><span style="color:#323232;">Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
</span><span style="color:#323232;">System currently booted with legacy bios
</span><span style="color:#323232;">8357-FBD5 is configured with: grub (versions: 6.5.11-7-pve, 6.5.13-5-pve, 6.8.4-2-pve)
</span>

Unless I’m misunderstanding the guidance.

pyrosis,
@pyrosis@lemmy.world avatar

It looks like you are using legacy BIOS; mine is using UEFI with a ZFS rpool.


<span style="color:#323232;">proxmox-boot-tool status
</span><span style="color:#323232;">Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
</span><span style="color:#323232;">System currently booted with uefi
</span><span style="color:#323232;">31FA-87E2 is configured with: uefi (versions: 6.5.11-8-pve, 6.5.13-5-pve)
</span>

However, like with everything, a method always exists to get it done. Or not, if you are concerned.

If you are interested it would look like…

Pool Upgrade


<span style="color:#323232;">sudo zpool upgrade <pool_name>
</span>

Confirm Upgrade


<span style="color:#323232;">sudo zpool status
</span><span style="color:#323232;">
</span>

Refresh boot config


<span style="color:#323232;">sudo pveboot-tool refresh
</span><span style="color:#323232;">
</span>

Confirm Boot configuration


<span style="color:#323232;">cat /boot/grub/grub.cfg
</span>

You are looking for directives like this to see if they are indeed pointing at your existing rpool:


<span style="color:#323232;">root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
</span>

Here is my file if it helps you compare…


<span style="color:#323232;">#
</span><span style="color:#323232;"># DO NOT EDIT THIS FILE
</span><span style="color:#323232;">#
</span><span style="color:#323232;"># It is automatically generated by grub-mkconfig using templates
</span><span style="color:#323232;"># from /etc/grub.d and settings from /etc/default/grub
</span><span style="color:#323232;">#
</span><span style="color:#323232;">
</span><span style="color:#323232;">### BEGIN /etc/grub.d/000_proxmox_boot_header ###
</span><span style="color:#323232;">#
</span><span style="color:#323232;"># This system is booted via proxmox-boot-tool! The grub-config used when
</span><span style="color:#323232;"># booting from the disks configured with proxmox-boot-tool resides on the vfat
</span><span style="color:#323232;"># partitions with UUIDs listed in /etc/kernel/proxmox-boot-uuids.
</span><span style="color:#323232;"># /boot/grub/grub.cfg is NOT read when booting from those disk!
</span><span style="color:#323232;">### END /etc/grub.d/000_proxmox_boot_header ###
</span><span style="color:#323232;">
</span><span style="color:#323232;">### BEGIN /etc/grub.d/00_header ###
</span><span style="color:#323232;">if [ -s $prefix/grubenv ]; then
</span><span style="color:#323232;">  set have_grubenv=true
</span><span style="color:#323232;">  load_env
</span><span style="color:#323232;">fi
</span><span style="color:#323232;">if [ "${next_entry}" ] ; then
</span><span style="color:#323232;">   set default="${next_entry}"
</span><span style="color:#323232;">   set next_entry=
</span><span style="color:#323232;">   save_env next_entry
</span><span style="color:#323232;">   set boot_once=true
</span><span style="color:#323232;">else
</span><span style="color:#323232;">   set default="0"
</span><span style="color:#323232;">fi
</span><span style="color:#323232;">
</span><span style="color:#323232;">if [ x"${feature_menuentry_id}" = xy ]; then
</span><span style="color:#323232;">  menuentry_id_option="--id"
</span><span style="color:#323232;">else
</span><span style="color:#323232;">  menuentry_id_option=""
</span><span style="color:#323232;">fi
</span><span style="color:#323232;">
</span><span style="color:#323232;">export menuentry_id_option
</span><span style="color:#323232;">
</span><span style="color:#323232;">if [ "${prev_saved_entry}" ]; then
</span><span style="color:#323232;">  set saved_entry="${prev_saved_entry}"
</span><span style="color:#323232;">  save_env saved_entry
</span><span style="color:#323232;">  set prev_saved_entry=
</span><span style="color:#323232;">  save_env prev_saved_entry
</span><span style="color:#323232;">  set boot_once=true
</span><span style="color:#323232;">fi
</span><span style="color:#323232;">
</span><span style="color:#323232;">function savedefault {
</span><span style="color:#323232;">  if [ -z "${boot_once}" ]; then
</span><span style="color:#323232;">    saved_entry="${chosen}"
</span><span style="color:#323232;">    save_env saved_entry
</span><span style="color:#323232;">  fi
</span><span style="color:#323232;">}
</span><span style="color:#323232;">function load_video {
</span><span style="color:#323232;">  if [ x$feature_all_video_module = xy ]; then
</span><span style="color:#323232;">    insmod all_video
</span><span style="color:#323232;">  else
</span><span style="color:#323232;">    insmod efi_gop
</span><span style="color:#323232;">    insmod efi_uga
</span><span style="color:#323232;">    insmod ieee1275_fb
</span><span style="color:#323232;">    insmod vbe
</span><span style="color:#323232;">    insmod vga
</span><span style="color:#323232;">    insmod video_bochs
</span><span style="color:#323232;">    insmod video_cirrus
</span><span style="color:#323232;">  fi
</span><span style="color:#323232;">}
</span><span style="color:#323232;">
</span><span style="color:#323232;">if loadfont unicode ; then
</span><span style="color:#323232;">  set gfxmode=auto
</span><span style="color:#323232;">  load_video
</span><span style="color:#323232;">  insmod gfxterm
</span><span style="color:#323232;">  set locale_dir=$prefix/locale
</span><span style="color:#323232;">  set lang=en_US
</span><span style="color:#323232;">  insmod gettext
</span><span style="color:#323232;">fi
</span><span style="color:#323232;">terminal_output gfxterm
</span><span style="color:#323232;">if [ "${recordfail}" = 1 ] ; then
</span><span style="color:#323232;">  set timeout=30
</span><span style="color:#323232;">else
</span><span style="color:#323232;">  if [ x$feature_timeout_style = xy ] ; then
</span><span style="color:#323232;">    set timeout_style=menu
</span><span style="color:#323232;">    set timeout=5
</span><span style="color:#323232;">  # Fallback normal timeout code in case the timeout_style feature is
</span><span style="color:#323232;">  # unavailable.
</span><span style="color:#323232;">  else
</span><span style="color:#323232;">    set timeout=5
</span><span style="color:#323232;">  fi
</span><span style="color:#323232;">fi
</span><span style="color:#323232;">### END /etc/grub.d/00_header ###
</span><span style="color:#323232;">
</span><span style="color:#323232;">### BEGIN /etc/grub.d/05_debian_theme ###
</span><span style="color:#323232;">set menu_color_normal=cyan/blue
</span><span style="color:#323232;">set menu_color_highlight=white/blue
</span><span style="color:#323232;">### END /etc/grub.d/05_debian_theme ###
</span><span style="color:#323232;">
</span><span style="color:#323232;">### BEGIN /etc/grub.d/10_linux ###
</span><span style="color:#323232;">function gfxmode {
</span><span style="color:#323232;">        set gfxpayload="${1}"
</span><span style="color:#323232;">}
</span><span style="color:#323232;">set linux_gfx_mode=
</span><span style="color:#323232;">export linux_gfx_mode
</span><span style="color:#323232;">menuentry 'Proxmox VE GNU/Linux' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-/dev/sdc3' {
</span><span style="color:#323232;">        load_video
</span><span style="color:#323232;">        insmod gzio
</span><span style="color:#323232;">        if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
</span><span style="color:#323232;">        insmod part_gpt
</span><span style="color:#323232;">        echo    'Loading Linux 6.5.13-5-pve ...'
</span><span style="color:#323232;">        linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
</span><span style="color:#323232;">        echo    'Loading initial ramdisk ...'
</span><span style="color:#323232;">        initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
</span><span style="color:#323232;">}
</span><span style="color:#323232;">submenu 'Advanced options for Proxmox VE GNU/Linux' $menuentry_id_option 'gnulinux-advanced-/dev/sdc3' {
</span><span style="color:#323232;">        menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.13-5-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.13-5-pve-advanced-/dev/sdc3' {
</span><span style="color:#323232;">                load_video
</span><span style="color:#323232;">                insmod gzio
</span><span style="color:#323232;">                if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
</span><span style="color:#323232;">                insmod part_gpt
</span><span style="color:#323232;">                echo    'Loading Linux 6.5.13-5-pve ...'
</span><span style="color:#323232;">                linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
</span><span style="color:#323232;">                echo    'Loading initial ramdisk ...'
</span><span style="color:#323232;">                initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
</span><span style="color:#323232;">        }
</span><span style="color:#323232;">        menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.13-5-pve (recovery mode)' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.13-5-pve-recovery-/dev/sdc3' {
</span><span style="color:#323232;">                load_video
</span><span style="color:#323232;">                insmod gzio
</span><span style="color:#323232;">                if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
</span><span style="color:#323232;">                insmod part_gpt
</span><span style="color:#323232;">                echo    'Loading Linux 6.5.13-5-pve ...'
</span><span style="color:#323232;">                linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro single       root=ZFS=rpool/ROOT/pve-1 boot=zfs
</span><span style="color:#323232;">                echo    'Loading initial ramdisk ...'
</span><span style="color:#323232;">                initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
</span><span style="color:#323232;">        }
</span><span style="color:#323232;">        menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.11-8-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.11-8-pve-advanced-/dev/sdc3' {
</span><span style="color:#323232;">                load_video
</span><span style="color:#323232;">                insmod gzio
</span><span style="color:#323232;">                if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
</span><span style="color:#323232;">                insmod part_gpt
</span><span style="color:#323232;">                echo    'Loading Linux 6.5.11-8-pve ...'
</span><span style="color:#323232;">                linux   /ROOT/pve-1@/boot/vmlinuz-6.5.11-8-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
</span><span style="color:#323232;">                echo    'Loading initial ramdisk ...'
</span><span style="color:#323232;">                initrd  /ROOT/pve-1@/boot/initrd.img-6.5.11-8-pve
</span><span style="color:#323232;">        }
</span><span style="color:#323232;">        menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.11-8-pve (recovery mode)' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.11-8-pve-recovery-/dev/sdc3' {
</span><span style="color:#323232;">                load_video
</span><span style="color:#323232;">                insmod gzio
</span><span style="color:#323232;">                if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
</span><span style="color:#323232;">                insmod part_gpt
</span><span style="color:#323232;">                echo    'Loading Linux 6.5.11-8-pve ...'
</span><span style="color:#323232;">                linux   /ROOT/pve-1@/boot/vmlinuz-6.5.11-8-pve root=ZFS=/ROOT/pve-1 ro single       root=ZFS=rpool/ROOT/pve-1 boot=zfs
</span><span style="color:#323232;">                echo    'Loading initial ramdisk ...'
</span><span style="color:#323232;">                initrd  /ROOT/pve-1@/boot/initrd.img-6.5.11-8-pve
</span><span style="color:#323232;">        }
</span><span style="color:#323232;">}
</span><span style="color:#323232;">
</span><span style="color:#323232;">### END /etc/grub.d/10_linux ###
</span><span style="color:#323232;">
</span><span style="color:#323232;">### BEGIN /etc/grub.d/20_linux_xen ###
</span><span style="color:#323232;">
</span><span style="color:#323232;">### END /etc/grub.d/20_linux_xen ###
</span><span style="color:#323232;">
</span><span style="color:#323232;">### BEGIN /etc/grub.d/20_memtest86+ ###
</span><span style="color:#323232;">### END /etc/grub.d/20_memtest86+ ###
</span><span style="color:#323232;">
</span><span style="color:#323232;">### BEGIN /etc/grub.d/30_os-prober ###
</span><span style="color:#323232;">### END /etc/grub.d/30_os-prober ###
</span><span style="color:#323232;">
</span><span style="color:#323232;">### BEGIN /etc/grub.d/30_uefi-firmware ###
</span><span style="color:#323232;">menuentry 'UEFI Firmware Settings' $menuentry_id_option 'uefi-firmware' {
</span><span style="color:#323232;">        fwsetup
</span><span style="color:#323232;">}
</span><span style="color:#323232;">### END /etc/grub.d/30_uefi-firmware ###
</span><span style="color:#323232;">
</span><span style="color:#323232;">### BEGIN /etc/grub.d/40_custom ###
</span><span style="color:#323232;"># This file provides an easy way to add custom menu entries.  Simply type the
</span><span style="color:#323232;"># menu entries you want to add after this comment.  Be careful not to change
</span><span style="color:#323232;"># the 'exec tail' line above.
</span><span style="color:#323232;">### END /etc/grub.d/40_custom ###
</span><span style="color:#323232;">
</span><span style="color:#323232;">### BEGIN /etc/grub.d/41_custom ###
</span><span style="color:#323232;">if [ -f  ${config_directory}/custom.cfg ]; then
</span><span style="color:#323232;">  source ${config_directory}/custom.cfg
</span><span style="color:#323232;">elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
</span><span style="color:#323232;">  source $prefix/custom.cfg
</span><span style="color:#323232;">fi
</span><span style="color:#323232;">### END /etc/grub.d/41_custom ###
</span>

You can see the lines in the linux sections.

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

Thanks I may give it a try if I’m feeling daring.

Max_P,
@Max_P@lemmy.max-p.me avatar

It could be a disk slowly failing but not throwing errors yet. Some drives really do their best to hide that they’re failing, so I would take even a passing SMART test with a grain of salt.

I would start by making sure you have good recent backups ASAP.

You can test the drive performance by shutting down all VMs and using tools like fio to do some disk benchmarking. It could be a VM causing it. If it’s an HDD in particular, the random reads and writes from VMs can really cause seek latency to shoot way up. Could be as simple as a service logging some warnings due to junk incoming traffic, or an update that added some more info logs, etc.
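
A minimal fio sketch along those lines (the path, size, and runtime are just examples; stop the VMs first and delete the scratch file afterwards):

```
fio --name=seekbench --filename=/rpool/fio.tmp --size=2G \
    --rw=randread --bs=4k --ioengine=libaio --iodepth=8 \
    --runtime=60 --time_based --group_reporting
rm /rpool/fio.tmp   # clean up the scratch file
```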

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

I would start by making sure you have good recent backups ASAP.

I do.

Could be as simple as a service logging some warnings due to junk incoming traffic, or an update that added some more info logs, etc.

Possible. It’s a really consistent (and stark) degradation in performance tho and is repeatable even when the opnsense VM is the only one running.

possiblylinux127,

Sounds like a hardware issue. I would also check dmesg and journalctl

seaQueue,
@seaQueue@lemmy.world avatar

iowait is indicative of storage not being able to keep up with the performance of the rest of the system. What hardware are you using for storage here?

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

It’s an old Optiplex SFF with a single HDD. Again, my concern isn’t that it’s “slow”. It’s that performance has rather suddenly tanked and the only changes I’ve made are regular OS updates.

HybridSarcasm,
@HybridSarcasm@lemmy.world avatar

Quick and easy fix attempt would be to replace the HDD with an SSD. As others have said, the drive may just be failing. Replacing with an SSD would not only get rid of the suspect hardware, but would be an upgrade to boot. You can clone the drive, or just start fresh with the backups you have.

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

I may end up having to go that route. I’m no expert but aren’t you supposed to use different parameters when using SSDs on ZFS vs an HDD?

seaQueue,
@seaQueue@lemmy.world avatar

That’s what I’d do here; used enterprise SSDs are dirt cheap on fleaBay.

seaQueue, (edited )
@seaQueue@lemmy.world avatar

If I had to guess there was a code change in the PVE kernel or in their integrated ZFS module that led to a performance regression for your use case. I don’t really have any feedback there, PVE ships a modified version of an older kernel (6.2?) so something could have been backported into that tree that led to the regression. Same deal with ZFS, whichever version the PVE folks are shipping could have introduced a regression as well.

Your best bet is to raise an issue with the PVE folks after identifying which kernel version introduced the regression. You’ll want to do a binary search between now and the last known good time that this wasn’t occurring to determine exactly when the issue started; then you can open an issue describing the regression.

Or just throw a cheap SSD at the problem and move on, that’s what I’d do here. Something like this should outlast the machine you put it in.

Edit: the Samsung 863a also pops up cheaply from time to time; it has good endurance and PLP. Basically just search fleaBay for SATA drives with capacities of 400/480GB, 800/960GB, 1.6T/1.92T or 3.2T/3.84T and check their datasheets for endurance info and PLP capability. Anything in the 400/800/1600/3200GB sequence is a model with more overprovisioning and higher endurance (usually referred to as mixed use). Those often have 3 DWPD or 5 DWPD ratings and are a safe bet if you have a write-heavy workload.

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

I thought cheap SSDs and ZFS didn’t play well together?

seaQueue, (edited )
@seaQueue@lemmy.world avatar

Depends on the SSD, the one I linked is fine for casual home server use. You’re unlikely to see enough of a write workload that endurance will be an issue. That’s an enterprise drive btw, it certainly wasn’t cheap when it was brand new and I doubt running a couple of VMs will wear it quickly. (I’ve had a few of those in service at home for 3-4y, no problems.)

Consumer drives have more issues, their write endurance is considerably lower than most enterprise parts. You can blow through a cheap consumer SSD’s endurance in mere months with a hypervisor workload so I’d strongly recommend using enterprise drives where possible.

It’s always worth taking a look at drive datasheets when you’re considering them and comparing the warranty lifespan to your expected usage too. The drive linked above has an expected endurance of like 2PB (~3 DWPD, or 2TB/day, over 3y), so you shouldn’t have any problems there. See sandisk.com/…/cloudspeed-eco-genII-sata-ssd-datas…

Older gen retired or old stock parts are basically the only way I buy home server storage now, the value for your money is tremendous and most drives are lightly used at most.

Edit: some select consumer SSDs can work fairly well with ZFS too, but they tend to be higher-endurance parts with more baked-in overprovisioning. It was popular to use Samsung 850 or 860 Pros for a while due to their tremendous endurance (the 512GB 850s often had an endurance lifespan of like 10PB+ before failure thanks to good old high-endurance MLC flash), but it’s a lot safer to just buy retired enterprise parts now that they’re available cheaply. There are some gotchas that come along with using high-endurance consumer drives, like poor sync write performance due to lack of PLP, but you’ll still see far better performance than an HDD.

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

Thanks for all the info. I’ll keep this in mind if I replace the drive. I am using refurb enterprise HDDs in my main server. Didn’t think I’d need to go enterprise grade for this box but you make a lot of sense.

pyrosis,
@pyrosis@lemmy.world avatar

Keep in mind it’s more an issue with writes, as others mentioned, when it comes to SSDs. I use two SSDs in a ZFS mirror that I installed Proxmox directly on. It’s an option in the installer and it’s quite nice.

As for combating writes, that’s actually easier than you think and applies to any filesystem. It just takes knowing what is write intensive. Most of the time for a Linux OS like Proxmox, that’s going to be temp files and logs, both of which can easily be migrated to tmpfs. Doing this will increase the lifespan of any SSD dramatically. You just have to understand that restarting clears those locations because they now exist in RAM.

As I mentioned elsewhere, opnsense has an option within the GUI to migrate tmp files to memory.
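
For a Linux host like Proxmox itself, that migration is basically a couple of fstab entries. A sketch only: sizes and mount options are examples, and anything in these locations is lost on every reboot, so ship logs off-box (e.g. to Wazuh) first.

```
# /etc/fstab
tmpfs  /tmp      tmpfs  defaults,noatime,mode=1777,size=512m  0  0
tmpfs  /var/log  tmpfs  defaults,noatime,mode=0755,size=512m  0  0
```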

catloaf,

Did you do a smart test?

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

Kinda feel dumb that my answer is no. Let me do that and report back.

catloaf,

While you’re waiting for that, I’d also look at the smart data and write the output to a file, then check it again later to see if any of the numbers have changed, especially reallocated sectors, pending sectors, corrected and uncorrected errors, stuff like that.
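
Something like this works for the snapshot-and-compare approach (the device name and filename date are examples):

```
smartctl -A /dev/sda > /root/smart-$(date +%F).txt   # save the attribute table now
# ...a few days later, diff the current values against the saved snapshot:
smartctl -A /dev/sda | diff /root/smart-2024-05-01.txt -
```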

Actually, I’m pretty sure that Proxmox will notify you of certain smart issues if you have emails configured.

You should also shut the host down and reseat the drives, and check the cables to make sure they’re all properly seated too. It’s possible that one has come loose but not enough to drop the link.

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

While you’re waiting for that, I’d also look at the smart data and write the output to a file, then check it again later to see if any of the numbers have changed, especially reallocated sectors, pending sectors, corrected and uncorrected errors, stuff like that.

That’s a good idea. Thanks.

SeeJayEmm,
@SeeJayEmm@lemmy.procrastinati.org avatar

Short test completed without error.
