proxmox

mindbleach, in Proxmox Virtual Environment 8.2 with Import Wizard released

My dork ass thought this was a Magic: The Gathering pirate app.

And I approved.

surfrock66, in Proxmox Virtual Environment 8.2 with Import Wizard released

We’re converting our workplace lab to Proxmox and it’s a great ramp for eventually leaving VMware. Great system.

possiblylinux127,

Can you share any details? You say “our workplace” so I assume you are talking about work.

surfrock66,

Yes, we are a medical/dental/pharmacy university, and because of some of the specific data needs of our org we have a large on-prem ecosystem. We are currently a VMware shop, but Broadcom’s business strategies have made us look for alternatives. I’ve used Proxmox in the homelab for years, and as it’s gotten more and more polished I feel it’s ready to be considered for production work. Currently we have a lab environment of previous-gen hardware which I want to use as a test bed for possible production platform moves.

Proxmox isn’t VMware yet, but it’s close. The HA doesn’t work the same, and I’ve struggled to find something akin to DRS. If you use on-host storage, you have to constantly do replication work to keep the hosts synced, and even then a failover is essentially a storage rollback to the last sync. If you use iSCSI storage, you have to be very careful: snapshotting is only functional when backed by a few of the storage types, and we use ZFS. ZFS over iSCSI is somewhat brittle, but we have a TrueNAS device which supports it here.

We use Veeam as our enterprise backup solution, and I have no idea how these will work together. Veeam talks directly to our Nimble storage, does storage-based snapshots, and replicates them to our other site. Veeam theoretically does talk to TrueNAS, but without Proxmox support I don’t know what the backup/recovery flow would look like. Veeam is looking into this: …veeam.com/…/veeam-researching-support-for-vmware…

We tried to use TrueNAS ZFS snapshots as a general VM semi-backup, but unless you want to roll back your whole dataset, it doesn’t work well: you have to make separate snapshot tasks for each specific zvol/dataset, otherwise you’re rolling your whole dataset back. I also tried mounting a snapshot, hoping to then share it as an iSCSI extent, remount it to a VM, and pull out a specific file… this didn’t work at all; I can’t get the UI to show the promoted clone so I can try to present it to the host.
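Concretely, snapshotting a single VM disk zvol instead of the whole dataset looks something like this (pool and zvol names are made up; with a TrueNAS backend these run on the NAS side):

# snapshot only this VM’s disk, not the parent dataset
zfs snapshot tank/vm-100-disk-0@pre-change
# roll back just that zvol if needed
zfs rollback tank/vm-100-disk-0@pre-change
# or clone the snapshot to fish a file out without rolling back,
# then try to present the clone as a new iSCSI extent (the step that failed for me)
zfs clone tank/vm-100-disk-0@pre-change tank/vm-100-restore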

When coming back from a power-off, if your Proxmox hosts are in a cluster, there’s no cluster-aware startup order (HA disables the entire startup delay system). That’s not great; our apps have SQL dependencies which need to start first.
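To be fair, non-HA guests on a single node do honor a per-VM boot order and delay; it just stops applying once HA manages the guest. A minimal sketch (VMIDs and the delay are made up):

# SQL VM boots first; wait 120s before anything later in the order starts
qm set 101 --startup order=1,up=120
# app VM boots second
qm set 102 --startup order=2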

Those are the issues, and it sounds negative, but ultimately, for a zero-cost hypervisor under active development, they need to be viewed through the lens of the overwhelming achievement that the project is and continues to be.

timbuck2themoon,

Veeam was apparently looking into officially supporting proxmox. I don’t remember seeing any timetable though.

warmaster,

Have you looked at Proxmox Backup Server?

www.proxmox.com/en/…/overview

surfrock66,

Yes, but it’s not an option yet. We’re heavily invested in Veeam and are not looking to replace that piece yet.

possiblylinux127, (edited)

For the dependency issues, I used systemd in my homelab (sketch below). I have not tested HA, as I only have gigabit and limited hardware, so performance would take a hit.
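Roughly, the systemd approach is a oneshot unit on the Proxmox host that waits for the dependency and then starts the dependent VM. A sketch (unit name, IP, port, and VMID are all hypothetical, and it assumes netcat is installed on the host):

# /etc/systemd/system/start-app-vm.service (hypothetical)
[Unit]
Description=Start the app VM once the SQL VM answers
After=pve-guests.service

[Service]
Type=oneshot
# wait until the (hypothetical) SQL VM listens on 5432, then boot the app VM
ExecStartPre=/bin/sh -c 'until nc -z 192.168.1.50 5432; do sleep 5; done'
ExecStart=/usr/sbin/qm start 102

[Install]
WantedBy=multi-user.target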

The 2 biggest issues I’ve noticed with Proxmox are:

  1. When the cluster falls out of quorum, it is really hard to reestablish consensus.
  2. Proxmox is a normal Linux system, which sounds good, but updating individual packages with apt can be problematic and doing a version upgrade is a hassle. It would be better if it were immutable so that you could upgrade and downgrade easily. Ideally it would be automated, so that nodes could be automatically upgraded and then tested for stability. If an upgrade fails, it should be rolled back.

surfrock66,

Those are all fair. Also, the entire Open vSwitch setup is very clunky; I always avoid the UI and just edit /etc/network/interfaces directly, especially for VLAN networks. I dislike that it wants 3 nodes, but I understand; still, 2 nodes in the homelab is pretty reasonable. I wish the HA in general were more configurable, robust, and intuitive.
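For example, a tagged OVS port in /etc/network/interfaces looks something like this (interface names, tag, and address are examples):

auto eno1
iface eno1 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

auto vlan10
iface vlan10 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=10
    address 192.168.10.5/24

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports eno1 vlan10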

possiblylinux127,

For HA you are always going to need at least 3 nodes. Most HA systems want 5 or more.

surfrock66,

Does that need to be true, though? For true “counting how many 9’s” HA, of course. But there’s nothing technically preventing high availability with 2 nodes; if the storage is shared and there’s a process to keep the memory in sync, it should be possible for 2 nodes to have some degree of high availability, even if it comes with big warnings.

possiblylinux127,

The problem with 2 nodes is that there is no way to identify which node has the issue. From the host’s perspective, all it “knows” is that the other node isn’t reachable. Technically it could assume that it is the functional one, but there is also the possibility that both machines assume they are the working one and then spin up the same VM.

You can cluster two nodes but as soon as one node can’t reach the other everything freezes to prevent loss of consensus.

The reason I suggest 5 nodes is that 3 only allows for one node to fail. If one fails and the remaining 2 can’t sort out what is happening, the cluster freezes to prevent loss of consensus. Having 5 machines also gives you more flexibility.

I also want to point out that you need fast networking for HA but I’m sure you already know that.
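If you do end up with a 2-node cluster, the usual escape hatches look like this (the QDevice address is an example; “expected 1” is strictly an emergency override):

pvecm status                      # check quorum and vote counts
pvecm expected 1                  # emergency: let a lone node act without quorum
pvecm qdevice setup 192.168.1.5   # better: add a third vote from a small external box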

possiblylinux127, in Proxmox Virtual Environment 8.2 with Import Wizard released

Cool

n0x0n, in A workaround for ZFS on proxmox not supporting ISOs or container templates.

I have not (yet?) noticed an issue. Do you have a link to the bug report?

possiblylinux127,

It’s not a bug. It’s because ZFS is block storage.

n0x0n,

Hmm, I don’t get it. I can save ISO images on ZFS just fine, also LXC templates.

possiblylinux127,

How? It doesn’t seem to be supported.

Anyway, this gives me an excuse to set up an NFS share.
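Adding the share as ISO/template storage should be one command on the host (storage ID, server, and export path are examples):

pvesm add nfs nas-iso --server 192.168.1.20 --export /mnt/tank/iso --content iso,vztmpl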

possiblylinux127, in OpenRGB and Proxmox VMs

Is the controller a PCIe device? It should just be a matter of doing device passthrough to a VM.
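Roughly like this (VMID and PCI address are examples, and IOMMU has to be enabled on the host first):

lspci -nn                          # find the controller’s PCI address
qm set 100 -hostpci0 0000:03:00.0  # hand that device to VM 100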

nezbyte, in Recommendations on Platform for First Proxmox Build

Whatever you have lying around. Keep everything in VMs and then it is easy to backup and migrate to new hardware at a later date. Good luck!

darkfarmer, in Recommendations on Platform for First Proxmox Build

Lenovo M900 Tiny

They’re small, cheapish, and low power.

PrecisePangolin,

This right here. These are the exact models I used when setting up a 3-node PVE cluster. RAM is expandable, multiple storage slots, good I/O, and best of all they are CHEAP.

caleb, in [TUTORIAL] GPU Passthrough on Proxmox VE - macOS Monterey

Just FYI, I’m assuming you can’t run Ventura due to AVX and/or AVX2 missing on your CPU. You can get around these with a couple of kexts: CryptexFixup for AVX2 and the NoAVXFSCompressionTypeZlib kext for AVX. I am successfully running a Ventura VM on Proxmox on a Westmere CPU from 2010.

asded,

Yes, that was my issue; I just needed AVX2. Unfortunately, CryptexFixup does not remove the AVX2 requirement in certain parts of Ventura’s graphics stack. In my case, the drivers for my RX 580 in Ventura will not work without AVX2 support.

caleb,

Ah that makes sense. I haven’t tried GPU passthrough yet. Thanks for that info!

lemming741, in Proxmox + nvidia without subscription

What kernel are you on? I’m on 6.5.11-7-pve with headers and a .run Nvidia driver working fine.

I thought the header package was pve-headers, but I somehow had pve-headers-6.xxxxxx installed without the base meta package.
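Something like this should sort the headers out (assuming the standard PVE repos):

dpkg -l 'pve-headers*'                # see what’s actually installed
apt install pve-headers               # meta package tracking the current PVE kernel
apt install pve-headers-$(uname -r)   # or pin to the exact running kernel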

areyouevenreal,

Yeah, long story short, I had the wrong version of the community repos installed. I broke some other things before I figured out what was wrong and had to reinstall Proxmox. Managed to restore everything through a mixture of backups and migrations. Hoping to get Nvidia working tomorrow. I have the drivers installed now, just can’t reboot yet as I am recompressing my btrfs array.

areyouevenreal, in Plex LXC (unprivileged) using 100% SWAP?

That’s because your swap is set too small. It’s actually using more RAM than swap, but because swap is 1/8th the size, it’s already full. Normally, swap size is equal to RAM. If you don’t like the behavior of swapping while RAM usage is low, check out swappiness.
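For example (the value 10 is just an illustration; lower means less eager swapping):

cat /proc/sys/vm/swappiness                                   # default is usually 60
sysctl -w vm.swappiness=10                                    # apply immediately
echo 'vm.swappiness = 10' > /etc/sysctl.d/99-swappiness.conf  # persist across reboots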

alexrmay91,

I figured swap would only be used if I ran out of available memory. I’m not an expert at all, so I could be totally wrong about that.

Is it possible that the container is allocating the space preemptively but not actively using it? So proxmox would see it as being used, but plex is simply reserving it. I think I’ve seen that occur on other containers like home assistant (but this is a VM not LXC). Proxmox says it’s using almost all the memory available to it, but going into home assistant itself says it’s only using a fraction of that amount.

areyouevenreal,

I figured swap would only be used if I ran out of available memory. I’m not an expert at all, so I could be totally wrong about that.

Yeah this is a fundamental misunderstanding of how swap works on modern systems. Unfortunately a common one as well. Lots of people think using swap is bad and that they should eliminate it or that they need more RAM when they really don’t.

Is it possible that the container is allocating the space preemptively but not actively using it? So proxmox would see it as being used, but plex is simply reserving it. I think I’ve seen that occur on other containers like home assistant (but this is a VM not LXC). Proxmox says it’s using almost all the memory available to it, but going into home assistant itself says it’s only using a fraction of that amount.

Just as likely to be disk caching. Sometimes it’s counted as using memory and sometimes not.
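You can tell the two apart with free: “buff/cache” is reclaimable, and “available” is the number that actually matters (the figures below are illustrative):

$ free -h
               total        used        free      shared  buff/cache   available
Mem:            31Gi        12Gi       1.2Gi       512Mi        18Gi        18Gi
Swap:          4.0Gi       2.0Gi       2.0Gi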

retrodaredevil, in OpenRGB and Proxmox VMs

You would be better off searching or asking how to accomplish this on a Debian system. Since Proxmox is based on Debian (latest Proxmox is based on latest Debian at the time of writing), if you understand how to accomplish this on Debian, you could also do it on a Proxmox install. I don’t know anything about controlling the lights from inside a VM, though. I would stick with trying to accomplish this on the Proxmox host itself.

It might be possible to control them inside a VM, but I don’t actually know what you would have to pass through for this kinda thing.
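On the host itself, that would look something like this (openrgb is packaged in Debian bookworm, which current Proxmox is based on; the color is just an example):

apt install openrgb
openrgb --list-devices   # enumerate detected RGB controllers
openrgb --color 00ff00   # set every detected LED to green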

InEnduringGrowStrong, in Proxmox AMD GPU Passthrough

Maybe this?
github.com/gnif/vendor-reset
Although I’ve been passing through a vega64 without needing this.

server_paul,

Yeah, I tried that; the link was just not shown in the original post. It didn’t really fix it, though.

InEnduringGrowStrong,

Try journalctl to get more details from when it fails?

server_paul,

This is the output from journalctl since stopping and rebooting the VM. The main error seems to occur at 16:41:43:

Dec 19 16:40:45 pve pvedaemon[1590]: end task UPID:pve:00030675:000E7952:6581B96F:vncshell::root@pam: OK
Dec 19 16:40:47 pve kernel: vfio-pci 0000:03:00.0: not ready 16383ms after bus reset; waiting
Dec 19 16:41:03 pve pvedaemon[1590]: starting task UPID:pve:000308EE:000E85EB:6581B98F:qmstart:195:root@pam:
Dec 19 16:41:03 pve pvedaemon[198894]: start VM 195: UPID:pve:000308EE:000E85EB:6581B98F:qmstart:195:root@pam:
Dec 19 16:41:06 pve kernel: vfio-pci 0000:03:00.0: not ready 32767ms after bus reset; waiting
Dec 19 16:41:40 pve kernel: vfio-pci 0000:03:00.0: not ready 65535ms after bus reset; giving up
Dec 19 16:41:41 pve kernel: vfio-pci 0000:03:00.1: Unable to change power state from D0 to D3hot, device inaccessible
Dec 19 16:41:41 pve kernel: vfio-pci 0000:03:00.0: Unable to change power state from D0 to D3hot, device inaccessible
Dec 19 16:41:41 pve systemd[1]: 195.scope: Deactivated successfully.
Dec 19 16:41:41 pve systemd[1]: 195.scope: Consumed 54min 2.778s CPU time.
Dec 19 16:41:41 pve systemd[1]: Started 195.scope.
Dec 19 16:41:41 pve kernel: tap195i0: entered promiscuous mode
Dec 19 16:41:41 pve kernel: vmbr0: port 4(fwpr195p0) entered blocking state
Dec 19 16:41:41 pve kernel: vmbr0: port 4(fwpr195p0) entered disabled state
Dec 19 16:41:41 pve kernel: fwpr195p0: entered allmulticast mode
Dec 19 16:41:41 pve kernel: fwpr195p0: entered promiscuous mode
Dec 19 16:41:41 pve kernel: vmbr0: port 4(fwpr195p0) entered blocking state
Dec 19 16:41:41 pve kernel: vmbr0: port 4(fwpr195p0) entered forwarding state
Dec 19 16:41:41 pve kernel: fwbr195i0: port 1(fwln195i0) entered blocking state
Dec 19 16:41:41 pve kernel: fwbr195i0: port 1(fwln195i0) entered disabled state
Dec 19 16:41:41 pve kernel: fwln195i0: entered allmulticast mode
Dec 19 16:41:41 pve kernel: fwln195i0: entered promiscuous mode
Dec 19 16:41:41 pve kernel: fwbr195i0: port 1(fwln195i0) entered blocking state
Dec 19 16:41:41 pve kernel: fwbr195i0: port 1(fwln195i0) entered forwarding state
Dec 19 16:41:41 pve kernel: fwbr195i0: port 2(tap195i0) entered blocking state
Dec 19 16:41:41 pve kernel: fwbr195i0: port 2(tap195i0) entered disabled state
Dec 19 16:41:41 pve kernel: tap195i0: entered allmulticast mode
Dec 19 16:41:41 pve kernel: fwbr195i0: port 2(tap195i0) entered blocking state
Dec 19 16:41:41 pve kernel: fwbr195i0: port 2(tap195i0) entered forwarding state
Dec 19 16:41:43 pve kernel: vfio-pci 0000:03:00.0: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:41:43 pve kernel: vfio-pci 0000:03:00.0: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:41:43 pve kernel: vfio-pci 0000:03:00.0: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:41:43 pve kernel: vfio-pci 0000:03:00.0: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:41:43 pve kernel: vfio-pci 0000:03:00.1: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:41:43 pve kernel: vfio-pci 0000:03:00.0: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:41:43 pve kernel: vfio-pci 0000:03:00.1: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:41:44 pve kernel: pcieport 0000:02:00.0: broken device, retraining non-functional downstream link at 2.5GT/s
Dec 19 16:41:44 pve pvedaemon[1592]: VM 195 qmp command failed - VM 195 not running
Dec 19 16:41:45 pve kernel: pcieport 0000:02:00.0: retraining failed
Dec 19 16:41:46 pve kernel: pcieport 0000:02:00.0: broken device, retraining non-functional downstream link at 2.5GT/s
Dec 19 16:41:47 pve kernel: pcieport 0000:02:00.0: retraining failed
Dec 19 16:41:47 pve kernel: vfio-pci 0000:03:00.0: not ready 1023ms after bus reset; waiting
Dec 19 16:41:48 pve kernel: vfio-pci 0000:03:00.0: not ready 2047ms after bus reset; waiting
Dec 19 16:41:50 pve kernel: vfio-pci 0000:03:00.0: not ready 4095ms after bus reset; waiting
Dec 19 16:41:54 pve kernel: vfio-pci 0000:03:00.0: not ready 8191ms after bus reset; waiting
Dec 19 16:42:03 pve kernel: vfio-pci 0000:03:00.0: not ready 16383ms after bus reset; waiting
Dec 19 16:42:21 pve kernel: vfio-pci 0000:03:00.0: not ready 32767ms after bus reset; waiting
Dec 19 16:42:56 pve kernel: vfio-pci 0000:03:00.0: not ready 65535ms after bus reset; giving up
Dec 19 16:42:56 pve kernel: vfio-pci 0000:03:00.1: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:42:56 pve kernel: vfio-pci 0000:03:00.0: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:42:56 pve kernel: fwbr195i0: port 2(tap195i0) entered disabled state
Dec 19 16:42:56 pve kernel: tap195i0 (unregistering): left allmulticast mode
Dec 19 16:42:56 pve kernel: fwbr195i0: port 2(tap195i0) entered disabled state
Dec 19 16:42:56 pve pvedaemon[199553]: stopping swtpm instance (pid 199561) due to QEMU startup error
Dec 19 16:42:56 pve pvedaemon[198894]: start failed: QEMU exited with code 1
Dec 19 16:42:56 pve pvedaemon[1590]: end task UPID:pve:000308EE:000E85EB:6581B98F:qmstart:195:root@pam: start failed: QEMU exit>
Dec 19 16:42:56 pve systemd[1]: 195.scope: Deactivated successfully.
Dec 19 16:42:56 pve systemd[1]: 195.scope: Consumed 1.736s CPU time.

server_paul,

dmesg also reported vendor_reset: module verification failed: signature and/or required key missing - tainting kernel. However, according to https://github.com/gnif/vendor-reset/issues/46#issuecomment-983087796, this error is not that important…

server_paul,

To everyone else encountering this error, I finally fixed it this way: This forum entry sent me here, which then helped me resolve the issue. Huge thanks to you, InEnduringGrowStrong, for pushing me in the right direction.

InEnduringGrowStrong,

Ah nice you got it working.
Once it works it’s great.
I’ve been running mine for a while now, but purposefully avoided Kernel upgrades so far.

server_paul,

Haha, I already started worrying about that :) But you’re right, it’s great.

InEnduringGrowStrong,

Formatted with a code block so it’s more readable:


Dec 19 16:40:45 pve pvedaemon[1590]: end task UPID:pve:00030675:000E7952:6581B96F:vncshell::root@pam: OK
Dec 19 16:40:47 pve kernel: vfio-pci 0000:03:00.0: not ready 16383ms after bus reset; waiting
Dec 19 16:41:03 pve pvedaemon[1590]: starting task UPID:pve:000308EE:000E85EB:6581B98F:qmstart:195:root@pam:
Dec 19 16:41:03 pve pvedaemon[198894]: start VM 195: UPID:pve:000308EE:000E85EB:6581B98F:qmstart:195:root@pam:
Dec 19 16:41:06 pve kernel: vfio-pci 0000:03:00.0: not ready 32767ms after bus reset; waiting
Dec 19 16:41:40 pve kernel: vfio-pci 0000:03:00.0: not ready 65535ms after bus reset; giving up
Dec 19 16:41:41 pve kernel: vfio-pci 0000:03:00.1: Unable to change power state from D0 to D3hot, device inaccessible
Dec 19 16:41:41 pve kernel: vfio-pci 0000:03:00.0: Unable to change power state from D0 to D3hot, device inaccessible
Dec 19 16:41:41 pve systemd[1]: 195.scope: Deactivated successfully.
Dec 19 16:41:41 pve systemd[1]: 195.scope: Consumed 54min 2.778s CPU time.
Dec 19 16:41:41 pve systemd[1]: Started 195.scope.
Dec 19 16:41:41 pve kernel: tap195i0: entered promiscuous mode
Dec 19 16:41:41 pve kernel: vmbr0: port 4(fwpr195p0) entered blocking state
Dec 19 16:41:41 pve kernel: vmbr0: port 4(fwpr195p0) entered disabled state
Dec 19 16:41:41 pve kernel: fwpr195p0: entered allmulticast mode
Dec 19 16:41:41 pve kernel: fwpr195p0: entered promiscuous mode
Dec 19 16:41:41 pve kernel: vmbr0: port 4(fwpr195p0) entered blocking state
Dec 19 16:41:41 pve kernel: vmbr0: port 4(fwpr195p0) entered forwarding state
Dec 19 16:41:41 pve kernel: fwbr195i0: port 1(fwln195i0) entered blocking state
Dec 19 16:41:41 pve kernel: fwbr195i0: port 1(fwln195i0) entered disabled state
Dec 19 16:41:41 pve kernel: fwln195i0: entered allmulticast mode
Dec 19 16:41:41 pve kernel: fwln195i0: entered promiscuous mode
Dec 19 16:41:41 pve kernel: fwbr195i0: port 1(fwln195i0) entered blocking state
Dec 19 16:41:41 pve kernel: fwbr195i0: port 1(fwln195i0) entered forwarding state
Dec 19 16:41:41 pve kernel: fwbr195i0: port 2(tap195i0) entered blocking state
Dec 19 16:41:41 pve kernel: fwbr195i0: port 2(tap195i0) entered disabled state
Dec 19 16:41:41 pve kernel: tap195i0: entered allmulticast mode
Dec 19 16:41:41 pve kernel: fwbr195i0: port 2(tap195i0) entered blocking state
Dec 19 16:41:41 pve kernel: fwbr195i0: port 2(tap195i0) entered forwarding state
Dec 19 16:41:43 pve kernel: vfio-pci 0000:03:00.0: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:41:43 pve kernel: vfio-pci 0000:03:00.0: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:41:43 pve kernel: vfio-pci 0000:03:00.0: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:41:43 pve kernel: vfio-pci 0000:03:00.0: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:41:43 pve kernel: vfio-pci 0000:03:00.1: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:41:43 pve kernel: vfio-pci 0000:03:00.0: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:41:43 pve kernel: vfio-pci 0000:03:00.1: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:41:44 pve kernel: pcieport 0000:02:00.0: broken device, retraining non-functional downstream link at 2.5GT/s
Dec 19 16:41:44 pve pvedaemon[1592]: VM 195 qmp command failed - VM 195 not running
Dec 19 16:41:45 pve kernel: pcieport 0000:02:00.0: retraining failed
Dec 19 16:41:46 pve kernel: pcieport 0000:02:00.0: broken device, retraining non-functional downstream link at 2.5GT/s
Dec 19 16:41:47 pve kernel: pcieport 0000:02:00.0: retraining failed
Dec 19 16:41:47 pve kernel: vfio-pci 0000:03:00.0: not ready 1023ms after bus reset; waiting
Dec 19 16:41:48 pve kernel: vfio-pci 0000:03:00.0: not ready 2047ms after bus reset; waiting
Dec 19 16:41:50 pve kernel: vfio-pci 0000:03:00.0: not ready 4095ms after bus reset; waiting
Dec 19 16:41:54 pve kernel: vfio-pci 0000:03:00.0: not ready 8191ms after bus reset; waiting
Dec 19 16:42:03 pve kernel: vfio-pci 0000:03:00.0: not ready 16383ms after bus reset; waiting
Dec 19 16:42:21 pve kernel: vfio-pci 0000:03:00.0: not ready 32767ms after bus reset; waiting
Dec 19 16:42:56 pve kernel: vfio-pci 0000:03:00.0: not ready 65535ms after bus reset; giving up
Dec 19 16:42:56 pve kernel: vfio-pci 0000:03:00.1: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:42:56 pve kernel: vfio-pci 0000:03:00.0: Unable to change power state from D3cold to D0, device inaccessible
Dec 19 16:42:56 pve kernel: fwbr195i0: port 2(tap195i0) entered disabled state
Dec 19 16:42:56 pve kernel: tap195i0 (unregistering): left allmulticast mode
Dec 19 16:42:56 pve kernel: fwbr195i0: port 2(tap195i0) entered disabled state
Dec 19 16:42:56 pve pvedaemon[199553]: stopping swtpm instance (pid 199561) due to QEMU startup error
Dec 19 16:42:56 pve pvedaemon[198894]: start failed: QEMU exited with code 1
Dec 19 16:42:56 pve pvedaemon[1590]: end task UPID:pve:000308EE:000E85EB:6581B98F:qmstart:195:root@pam: start failed: QEMU exit>
Dec 19 16:42:56 pve systemd[1]: 195.scope: Deactivated successfully.
Dec 19 16:42:56 pve systemd[1]: 195.scope: Consumed 1.736s CPU time.
InEnduringGrowStrong,

It does seem a lot like the reset bug, but then you already tried that. :/ Kernel modules aren’t as easy to install, and if you’re missing the required flags it might just do nothing.


grep -E '(CONFIG_FTRACE|CONFIG_KPROBES|CONFIG_PCI_QUIRKS|CONFIG_KALLSYMS|CONFIG_KALLSYMS_ALL|CONFIG_FUNCTION_TRACER)\b' /boot/config-`uname -r`

Should show the 6 flags =y

Or maybe some variation of manual reset…
…proxmox.com/…/issues-with-intel-arc-a770m-gpu-pa…

server_paul,

Just fyi, the 6 y-flags were shown

server_paul,

It was intended to be a code block, but somehow it came out as just a bunch of text without newlines.

phanto, in Plex LXC (unprivileged) using 100% SWAP?

A few of my Proxmox containers do this. They all work fine, so I never questioned it. Not my Plex container though.

SquishyPandaDev, in Plex LXC (unprivileged) using 100% SWAP?

Not a Plex user (avid Linux user), but you could check out swappiness (Red Hat article).

makingrain, in Trying to set up openmediavault as an LXC. Having trouble getting the container to see an external USB drive.

LXC and USB drives will give you problems. It’d be easier to run OMV as a VM and then pass the USB drive through in the Proxmox web UI.
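Passing the drive through is roughly (the vendor:product ID is an example; get yours from lsusb):

lsusb                                   # note the drive’s vendor:product ID
qm set 100 -usb0 host=0781:5583,usb3=1  # attach it to VM 100 as a USB 3 device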
