On yesterday's rainy day I gathered my courage and set out to update my #Heimserver from #Proxmox 6.4 to 7.x. I was far too late with it, but everything had been running so nicely and error-free for years. Unfortunately, the upgrade froze at 58%. I was able to figure out that it had something to do with "memtest". After I killed that process, the upgrade ran through to the end, but not without errors: I could no longer reach the Proxmox GUI, because "pveproxy" and "pveservice" (or something like that) would no longer start. 😞 1/3
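Not from the original post, but when pveproxy won't start after an interrupted upgrade, these stock systemd and apt commands are the usual first look (a sketch; run as root on the Proxmox host):

```shell
# Show why the web UI services are failing, with their recent log lines.
systemctl status pveproxy pvedaemon
journalctl -u pveproxy -b --no-pager   # full pveproxy log since boot
# An upgrade that was killed mid-run often leaves packages half-configured:
dpkg --configure -a                    # finish pending package configuration
apt-get -f install                     # repair broken dependencies
```

These commands only diagnose and finish the aborted package run; they won't undo whatever the stuck memtest process left behind.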
@robertdahlem yes. In the last 25 years I had like 15 drives die and probably less than two SMART warnings. But yes, one of these drives today did report end of life. After it died. Thanks SMART, I guess.
Also like half of my Zigbee devices do not work now, which is doubly weird as there was no backup restore. I just re-added the drive I have all my containers and VMs on.
This post is really a small collection of thoughts about Proxmox when used in a home lab situation and home labs in general. I was originally going to post this to Mastodon only but it didn't fit in a single post.
A lot of people (at least from what I see on reddit) build Proxmox systems with shared file systems like Ceph, even for home lab use.
@zeh fair. I was just makin' a funny haha with stereotypes. Especially since I fit the stereotype of a white man obsessing over something he views as "his domain". 😃
Today I pondered something: Proxmox and others boast native ZFS integration as one of their strengths. Many Proxmox features rely on ZFS's unique capabilities, and many setups are built around them. If Oracle were to send a cease and desist tomorrow, how would the situation unfold?
@stefano According to this reddit thread from a year ago, that seems unlikely. OpenZFS is built on Sun's ZFS which was released under a permissive-ish license (CDDL) which includes a patent grant.
Linus can't merge it into the mainline Linux tree until he gets an explicit grant from Oracle that it won't cause GPL issues for the kernel, but that doesn't stop anyone else from using it or integrating it.
Right, I'm completely out of ideas, I need some #aide with #reseau stuff on this server... If you have an idea, or if you can share this, I'm at my wit's end here.
The server running this instance sits on #proxmox. Until now, everything was fine.
Yesterday, after a crash, I had to reboot the server (a VPS at Ionos). After the restart, nothing was reachable: the Proxmox interface, the services in the containers, nothing.
The configuration was as follows:
An external interface (ens6) and 2 bridges:
vmbr0, with bridge-port on ens6 (and therefore the public IP), used for administration
vmbr1, with an IP in a 192.168.2.0/24 network that serves the containers (a reverse proxy on one, Docker on the other)
In my interfaces file I have this for vmbr1:
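The snippet itself didn't survive in the post; for context, a typical masquerading vmbr1 stanza in /etc/network/interfaces looks roughly like this (a sketch based on the standard Proxmox NAT setup, not the poster's actual file):

```
auto vmbr1
iface vmbr1 inet static
        address 192.168.2.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o vmbr0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s 192.168.2.0/24 -o vmbr0 -j MASQUERADE
```

Note the `-o vmbr0` in the NAT rule; that is exactly the interface the poster later swaps for ens6.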
After some debugging, I saw that traffic flowed again as soon as I brought vmbr0 down.
So, as an emergency fix, I changed my rules to replace vmbr0 with ens6, which is the name of the "physical" interface.
With that, everything came back: access to the Proxmox interface, access to the containers... everything except one important point: the server can no longer reach its own public IP.
For example, the instance can't send mail (the mail container sits behind the same IP), it fails even from the Proxmox shell... and the reverse-proxy container can't renew its certificates either.
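For the "server can't reach its own public IP" part: with a masquerading setup like this, a common cause is missing NAT reflection (hairpin NAT). A hedged sketch, where PUB_IP, the port, and the container address 192.168.2.2 are all placeholders, not values from the post:

```shell
# Hairpin NAT sketch: let hosts on the 192.168.2.0/24 bridge reach a
# service that is port-forwarded on the public IP.
PUB_IP=203.0.113.10
# Redirect internal clients aiming at the public IP back to the container...
iptables -t nat -A PREROUTING -s 192.168.2.0/24 -d "$PUB_IP" \
         -p tcp --dport 443 -j DNAT --to-destination 192.168.2.2:443
# ...and masquerade so the return traffic flows back through the host
# instead of going container-to-container directly.
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -d 192.168.2.2 \
         -p tcp --dport 443 -j MASQUERADE
```

The alternative fix is split DNS: resolve the public hostname to the internal address from inside, avoiding NAT reflection entirely.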
I've got the 192.168.1.x network with HA in there. Now, I've got an #OpenWrt router with the IP 192.168.1.190 creating 2 new networks: 192.168.100.x. I now want to get the HA into the 192.168.100.x network, but I can't connect it physically, as both the OpenWrt and HomeAssistant run in containers/VMs on #proxmox. I tried experimenting with IP routes, but no success. Does anyone have a clue for me?
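Not from the post, but one way to attack this on the Proxmox host (assuming the OpenWrt VM's 192.168.1.190 leg is reachable from the host's bridge) is a static route, with the OpenWrt firewall permitting the traffic:

```shell
# On the Proxmox host (or the main 192.168.1.x router): send traffic for
# the OpenWrt-side network via the OpenWrt VM's LAN-facing address.
# Addresses are taken from the post.
ip route add 192.168.100.0/24 via 192.168.1.190
# OpenWrt must also allow forwarding from 192.168.1.x into its LAN zone,
# otherwise its firewall drops the routed packets.
```

Adding the route on the main router (rather than per host) makes every 192.168.1.x device able to reach HA's new address.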
I had several freezes of my old laptop that runs Proxmox due to a stuck CPU fan. I googled a bit and found an elegant solution for temperature reporting: a command_line sensor.
Nevertheless, it took several hours to configure it correctly (I forgot how to deal with ssh keys and similar).
I installed the ‘Terminal & SSH’ add-on into Home Assistant.
I created SSH keys, put them into the /config/.ssh folder, and copied them to my Proxmox server. Read these instructions. I ran something like this in my HA Terminal add-on:
$ mkdir /config/.ssh
$ ssh-keygen                 # generated SSH keys; when asked, I entered the folder /root/config/.ssh
$ ssh-copy-id -i /root/config/.ssh/id_proxmox root@MY_PROXMOX_IP   # copy the key to my Proxmox server
$ ssh root@MY_PROXMOX_IP     # try out whether I can log in without a password prompt, then exit
I had to find out where my Proxmox stores temperatures. I SSH'ed into my Proxmox again, browsed around and looked into files to see which one holds the temps. My AMD laptop stores it in /sys/class/thermal/thermal_zone0/temp.
It could also be …/thermal_zone1, 2, 3 or similar.
Then I pulled temperature data via SSH to HA terminal:
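The exact command wasn't captured in the post; a sketch of what such a pull can look like, with MY_PROXMOX_IP and the key path taken from the post and the sample reading invented:

```shell
# thermal_zoneN/temp reports millidegrees Celsius, so divide by 1000.
# In practice the raw value comes over SSH:
#   temp_raw=$(ssh -i /root/config/.ssh/id_proxmox root@MY_PROXMOX_IP \
#              cat /sys/class/thermal/thermal_zone0/temp)
temp_raw=54000                                   # sample value for illustration
temp_c=$(echo "$temp_raw" | awk '{printf "%.1f", $1/1000}')
echo "$temp_c"                                   # prints 54.0
```

Wrapped in a Home Assistant command_line sensor, that one-liner becomes a tracked temperature entity.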
When I publish a blog post, the WordPress ActivityPub plugin delivers the post to the #fediverse. This heats the CPU to 75 °C+. I know that now, because I can track its temperature in HA.
@po3mah @tomi I see. I tried that integration just today, and was a bit underwhelmed that it only delivers "VM up Y/N" -- or are there different integrations? I used proxmoxve.
So I had a bit of an unpleasant experience with #proxmox while trying to do a PCI(e) passthrough.
In my cluster, I have a few machines running on an AMD Ryzen 7 4800U with Radeon Graphics. Following the Proxmox Wiki guide, I kinda managed to pass the APU into a guest.
The unpleasant side effect was that the host then lost control over the remaining PCI devices too, as in I/O, network, etc. The first thing to happen, of course, was that all OSDs on that host just disappeared.
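That symptom usually means the APU shares an IOMMU group with those other devices, so they get detached from the host together. A quick way to check, using the standard sysfs layout and lspci from pciutils (not from the original post):

```shell
# Print each IOMMU group and the PCI devices in it. Everything that shares
# a group with the passed-through device is handed to the guest as a unit.
for g in /sys/kernel/iommu_groups/*; do
  [ -d "$g" ] || continue
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done
```

If the GPU sits in a group with the NIC or SATA controller, the passthrough will take them along unless the groups can be split (e.g. a different slot or ACS support).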
@pieceofthepie I’ve thought about setting up a PiKVM so I could remote in to the console in case of something like this. Lugging monitors up and down stairs isn’t how I want to get my workout 🙃
A few weeks ago I bought a TerraMaster NAS (because it was the only device you could install other OSes like Proxmox or TrueNAS on), and now UGREEN published a Kickstarter with their new NASes, which can do the same but are much newer in terms of hardware…
@vordenken i have a rule not to buy kickstarters - ugreen are a real company so why don't they just make a product and allow pre-orders?
TerraMaster are ARM, aren't they? Probably much lower power consumption and fan noise (and they exist today..)
@simon Yeah, I thought about that too, but on the other hand UGREEN not delivering to backers would kill the whole company...
The TerraMaster NAS I'm talking about (F2-423) has an Intel N5095 x86-64 processor in it.
It's kinda mesmerizing watching the core speeds on one of the i7-12700Ts in the #Proxmox cluster (the one that runs Tiggi.es presently) dynamically shift cores anywhere from 1400 MHz to 4.62 GHz quickly, on an as-needed basis. The low base speed saves energy, but it'll instantly pop cores up to speeds within its thermal/power limit envelope, so for lightly threaded workloads it can perform as well as beefier chips. #homelab #servers #cpu
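If you want to watch the same thing on your own box, the per-core clocks are exposed in /proc/cpuinfo (standard Linux, not from the post):

```shell
# Refresh the per-core clock speeds once a second.
watch -n1 'grep "^cpu MHz" /proc/cpuinfo'
```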
@mario @stooovie I've looked at this setting and have tried changing it.
On Comet Lake and later, I'm seeing powersave disabling turbo boost and limiting cores to the base frequency while allowing cores to go even lower than base frequency before returning to it. Performance mode runs the cores at base frequency and permits them to turbo boost before returning to base frequency.
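For reference, the powersave/performance modes being described are cpufreq governors, switchable per core at runtime. A sketch using the standard sysfs paths (run as root; not the poster's exact commands):

```shell
# Show the current governor on core 0, then switch all cores to performance.
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# Equivalent, with the linux-cpupower package installed:
#   cpupower frequency-set -g performance
```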
Given this behavior, and the low-power nature of these systems (and the fact I've got solar panels generating an average of 50-60 kWh daily), I'll be leaving the systems in Performance mode.
If I wanted to control turbo boosting, I'd just use powercap-utils and set the short-term power limit down, so the CPU uses less power when demand is there, but this will also impact performance negatively in high utilization bursty scenarios.
Instead of doing that, since I value performance over saving a few watts, I'm going to raise my long-term power limit (and thus TDP) from 65 W to 85 W, since the CPU cooler can handle it based on XTU testing. The short-term limit will actually be lowered from 135 W to 125 W, but with the window extended to ~57 seconds, enabling larger workloads to complete more rapidly.
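Those limits map onto the kernel's intel-rapl powercap sysfs interface, where constraint_0 is the long-term limit and constraint_1 the short-term one, in microwatts and microseconds. A sketch using the figures from the post; the intel-rapl:0 path can differ per system, so verify it first:

```shell
# Set RAPL package power limits via sysfs (run as root).
RAPL=/sys/class/powercap/intel-rapl:0
echo 85000000  > "$RAPL/constraint_0_power_limit_uw"   # long-term: 85 W
echo 125000000 > "$RAPL/constraint_1_power_limit_uw"   # short-term: 125 W
echo 57000000  > "$RAPL/constraint_1_time_window_us"   # window: ~57 s
```

The powercap-utils tools mentioned above wrap these same sysfs files.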
I can see Powersave making sense where Proxmox is being used locally with sipping power taking precedence over performance...but I bought these CPUs to perform, so I'm gonna help them do just that. ;)