Proxmox hypervisor

DarrenNevares,
@DarrenNevares@mas.to avatar

Finally getting macOS set up on Proxmox.

bazcurtis,
@bazcurtis@mastodon.social avatar

@DarrenNevares That is interesting. I would like to know more.

felis, German
@felis@social.tchncs.de avatar

On yesterday's rainy day I gathered my courage and set out to update my Proxmox from 6.4 to 7.x. I was far too late with it, but everything had been running so nicely and error-free for years. Unfortunately, the upgrade froze at 58%. I was able to figure out that it had something to do with "memtest". After I killed that process, the upgrade ran through to the end, but not without errors: I could no longer get into the Proxmox GUI, because "pveproxy" and "pveservice" (or something like that) no longer started. 😞 1/3

stooovie,
@stooovie@mas.to avatar

Whew! Proxmox host went belly up; it took me seven hours to fix even with backups.

TWO SSDs failed this morning.

stooovie,
@stooovie@mas.to avatar

@robertdahlem yes. In the last 25 years I had like 15 drives die and probably less than two SMART warnings. But yes, one of these drives today did report end of life. After it died. Thanks SMART, I guess.

stooovie,
@stooovie@mas.to avatar

Also like half of my Zigbee devices do not work now, which is doubly weird as there was no backup restore. I just re-added the drive I have all my containers and VMs on.

dustinrue,
@dustinrue@chateaude.luxe avatar

Thoughts on Proxmox and Home Lab Use

This post is really a small collection of thoughts about Proxmox when used in a home lab situation and home labs in general. I was originally going to post this to Mastodon only but it didn't fit in a single post.

A lot of people (at least from what I see on Reddit) build Proxmox systems with shared file systems like Ceph, even for home lab use.

https://dustinrue.com/2024/04/thoughts-on-proxmox-and-home-lab-use/

stefano,
@stefano@bsd.cafe avatar
wagesj45,
@wagesj45@mastodon.jordanwages.com avatar

I totally understand men that obsess over their lawn. I do the exact same thing with my home network.

#proxmox #vm #virtualmachine #debian #network #homenetwork #server #homelab #lawncare #guystuff #justmanlythings

zeh,
@zeh@mstdn.io avatar

@wagesj45
yeah, my point is that a lawn is useless while a network is not, so the comparison doesn't hold.

also, in parallel, everybody can obsess over those things, not just men

wagesj45,
@wagesj45@mastodon.jordanwages.com avatar

@zeh fair. I was just makin' a funny haha with stereotypes. Especially since I fit the stereotype of a white man obsessing over something he views as "his domain". 😃

stefano,
@stefano@bsd.cafe avatar

Today I pondered something: Proxmox and others boast native ZFS integration as one of their strengths. Many Proxmox features rely on ZFS's unique capabilities, and many setups are built around them. If Oracle were to send a cease and desist tomorrow, how would the situation unfold?

zrail,
@zrail@hachyderm.io avatar

@stefano According to this Reddit thread from a year ago, that seems unlikely. OpenZFS is built on Sun's ZFS, which was released under a permissive-ish license (the CDDL) that includes a patent grant.

https://www.reddit.com/r/zfs/comments/108ztxr/can_oracle_shut_down_openzfs_if_they_wanted/

Linus can't merge it into the mainline Linux tree until he gets an explicit grant from Oracle that it won't cause GPL issues for the kernel, but that doesn't stop anyone else from using it or integrating it.

https://www.realworldtech.com/forum/?threadid=189711&curpostid=189841

apgarcia,
@apgarcia@fosstodon.org avatar

@stefano Do they use ZFS in Oracle Linux? I'm not a license lawyer, but I suspect that might complicate things for them.

marud,

Right, I'm completely out of ideas. I need help with this server... If you have an idea, or if you can boost this, please do, I'm at my wit's end.

The server running this instance sits on a Proxmox host. Until now, everything was fine.
Yesterday, after a crash, I had to reboot the server (a VPS at Ionos). After the reboot, I couldn't reach anything: the Proxmox interface, the services in the containers, nothing.

The configuration was as follows:

An external interface (ens6) and 2 bridges:

  • vmbr0, with ens6 as its bridge port (so it carries the public IP), used for administration
  • vmbr1, with an IP in a 192.168.2.0/24 network that serves the containers (a reverse proxy for one and Docker for the other)

For vmbr1, my interfaces file contains this:

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o vmbr0 -j MASQUERADE
        post-up /script/dnat.sh
        post-down iptables -t nat -D POSTROUTING -s 192.168.2.0/24 -o vmbr0 -j MASQUERADE

For the port forwards, dnat.sh contains entries like this one (example for port 443):

iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 443 -j DNAT --to-destination 192.168.2.10:443

After some debugging, I saw that traffic flowed again as soon as I brought vmbr0 down.

So, as an emergency fix, I changed my rules to replace vmbr0 with ens6, which is the name of the "physical" interface.

With that, everything came back: access to the Proxmox interface, access to the containers... everything except one important thing: the server can no longer reach its own public IP.

For example, the instance cannot send mail (the mail container sits behind the same IP), it doesn't work even from the Proxmox shell... and the container acting as reverse proxy can no longer renew its certificates.

[1/2]
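The symptom described above (the host and its containers can no longer reach their own public IP) is the classic hairpin-NAT gap. A hedged sketch of the kind of rules that usually cover it, reusing the 192.168.2.10:443 example from above and a PUBLIC_IP placeholder (not taken from the original post):

    # Container traffic aimed at the public IP gets rewritten back to the container...
    iptables -t nat -A PREROUTING -s 192.168.2.0/24 -d PUBLIC_IP -p tcp --dport 443 -j DNAT --to-destination 192.168.2.10:443
    # ...and masqueraded so the reply returns via the host instead of going subnet-direct
    iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -d 192.168.2.10 -p tcp --dport 443 -j MASQUERADE
    # Traffic generated on the host itself never hits PREROUTING, so it needs an OUTPUT rule
    iptables -t nat -A OUTPUT -d PUBLIC_IP -p tcp --dport 443 -j DNAT --to-destination 192.168.2.10:443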

tangeek,

@alarig @marud @R1Rail Some people have all the luck. 😮‍💨

alarig,
@alarig@hostux.social avatar
Mawoka,
@Mawoka@mastodon.online avatar

A question for the Proxmox folks here:

I've got the 192.168.1.x network with HA in it. Now I've got an OpenWrt router with the IP 192.168.1.190 creating 2 new networks: 192.168.100.x. I now want to get HA into the 192.168.100.x network, but I can't connect it physically, as both OpenWrt and Home Assistant run in containers/VMs on Proxmox. I tried experimenting with IP routes, but with no success. Does anyone have a clue for me?
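Two directions this kind of setup usually goes (hedged sketches under assumptions, not confirmed fixes for this particular host): either route between the two networks, or put the HA VM directly onto the OpenWrt LAN bridge.

    # Option A: a static route on the 192.168.1.x side, pointing at the OpenWrt VM
    ip route add 192.168.100.0/24 via 192.168.1.190
    # OpenWrt's wan-side firewall must then allow forwarding to lan; with masquerading
    # left on, 192.168.100.x hosts can reach HA but not the other way around.

    # Option B: no routing at all - define an internal bridge on the Proxmox host
    # (e.g. a hypothetical vmbr2 with no physical port), use it as OpenWrt's LAN
    # interface and as a second NIC on the HA VM, so HA sits directly in 192.168.100.x.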

tomi,

(Link to the original blog post with a proper layout)

It’s a pity that Proxmox doesn’t report CPU and other temperatures to Home Assistant.

I had several freezes of my old laptop that runs Proxmox due to a stuck CPU fan. I googled a bit and found an elegant solution for temperature reporting: a command_line sensor.

Nevertheless, it took several hours to configure it correctly (I forgot how to deal with ssh keys and similar).

This is the end result:

Home Assistant CPU temperature sensor chart.

The procedure:

  1. I installed the ‘Terminal & SSH’ add-on in Home Assistant.
  2. I created SSH keys, put them into the /config/.ssh folder, and copied them to my Proxmox server. Read these instructions. I put something like this in my HA Terminal add-on:

    $ mkdir /config/.ssh
    $ ssh-keygen                       # generated ssh keys; when asked, I entered the folder /root/config/.ssh
    $ ssh-copy-id -i /root/config/.ssh/id_proxmox root@MY_PROXMOX_IP    # copy the key to my Proxmox server
    $ ssh root@MY_PROXMOX_IP           # check that I can log in without a password prompt, then exit

  3. I had to find out where my Proxmox stores temperatures. I ssh’ed to my Proxmox again, browsed folders and looked into files to see which one stores the temps. My AMD laptop stores it in /sys/class/thermal/thermal_zone0/temp.

It could also be …/thermal_zone1, 2, 3 or similar.

  4. Then I pulled the temperature data via SSH from the HA terminal:
$ ssh -i /config/.ssh/id_prox -o StrictHostKeyChecking=no -q root@YOUR_PROXMOX_IP cat /sys/class/thermal/thermal_zone0/temp

The command returned 52000 (the kernel reports millidegrees Celsius, so the value_template below divides by 1000).

I edited my configuration.yaml and added the sensor. This is working code (as of Apr. 2024). I used tips from here (a deprecated sensor) and here.

##################
# Temperature proxmox
##################
command_line:
  - sensor:
      name: temperature_cpu
      command: "ssh -i /config/.ssh/id_prox -o StrictHostKeyChecking=no -q root@YOUR_PROXMOX_IP cat /sys/class/thermal/thermal_zone0/temp"
      value_template: "{{ value | multiply(0.001) | round(1) }}"
      unit_of_measurement: "°C"

After 20 restarts of Home Assistant, it finally shows the Proxmox CPU temp.

Bonus: here is a picture of my homelab Proxmox ‘server’ with external fans (because the CPU fan is not working).

https://blog.rozman.info/wp-content/uploads/2024/04/20240415_081455-1024x576.jpg

Bonus #2: A hypothesis: the Fediverse is causing global warming! 😉 😉

When I publish a blog post, the WordPress ActivityPub plugin delivers the post to the Fediverse. This causes the CPU to heat up to 75 °C+. I know this now because I can track its temperature in HA.

The hypothesis is confirmed.

Temperature chart of the CPU: a spike is detected at blog publish time, because of the ActivityPub plugin.

https://blog.rozman.info/proxmox-server-temperature-tracking-in-home-assistant/


fm_volker,
@fm_volker@mastodon.social avatar

@po3mah @tomi I see. I tried that integration just today, and was a bit underwhelmed that it only delivers "VM up Y/N" -- or are there different integrations? I used proxmoxve.

po3mah,
@po3mah@mastodon.social avatar

@fm_volker @tomi I don't use the default one, too little info. I use this one: https://github.com/dougiteixeira/proxmoxve

danielsreichenbach,
@danielsreichenbach@mastodon.world avatar

So I had a bit of an unpleasant experience with Proxmox while trying to do a PCI(e) passthrough.

In my cluster, I have a few machines running on an AMD Ryzen 7 4800U with Radeon Graphics. Following the Proxmox Wiki guide, I kinda managed to pass the APU into a guest.

The unpleasant side effect was that, following that, the host then lost control over the remaining PCI devices too, as in I/O, network, etc. The first thing to happen, of course, was that all OSDs on that host just disappeared.
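That symptom (the host losing unrelated PCI devices after a passthrough) is what you would expect if the APU shares an IOMMU group with the storage and network controllers, since the whole group follows the passed-through device into the guest. A quick way to check, assuming the standard sysfs layout (a sketch, not from the original post):

    # List every PCI device by IOMMU group; anything grouped with the APU goes with it
    for d in /sys/kernel/iommu_groups/*/devices/*; do
        g=${d#*/iommu_groups/}; g=${g%%/*}
        printf 'IOMMU group %s: ' "$g"
        lspci -nns "${d##*/}"
    done | sort -V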

stooovie,
@stooovie@mas.to avatar

@danielsreichenbach IOMMU issues? I'm not sure AMD works the same as Intel but it looks like it.

pieceofthepie, (edited )
@pieceofthepie@n8e.dev avatar

Don't enable your #Proxmox firewall without reading the instructions, or you will have a very bad time.

Shit.

#SelfHost
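For anyone who locks themselves out the same way: the usual way back in is from the local console (or a remote KVM, as the replies below suggest), roughly along these lines (a sketch, assuming the default cluster-level firewall config):

    # From the node's local console, stop the firewall service immediately:
    pve-firewall stop
    # or disable it persistently by setting 'enable: 0' under [OPTIONS] in
    # /etc/pve/firewall/cluster.fw before starting it again.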

bradfrank,
@bradfrank@cyberplace.social avatar

@pieceofthepie I’ve thought about setting up a PiKVM so I could remote in to the console in case of something like this. Lugging monitors up and down stairs isn’t how I want to get my workout 🙃

pieceofthepie,
@pieceofthepie@n8e.dev avatar

@bradfrank May or may not have the bits to do that in my cart.

mttaggart,

Brilliant move by Proxmox to integrate this tool: an ESXi import wizard to make it simple to move your VMs over. forum.proxmox.com/threads/new-import-wizard-available-for-migrating-vmware-esxi-based-virtual-machines.144023/

vordenken,
@vordenken@fosstodon.org avatar

A few weeks ago I bought a Terramaster NAS (because it was the only device you could install other OSes like Proxmox or TrueNAS on), and now UGREEN has published a Kickstarter with their new NASes, which can do the same but are much newer in terms of hardware…

Should I buy a UGREEN and sell my Terramaster…?
#NAS #Proxmox #TrueNAS #UGREEN #Hardware

simon,
@simon@fosstodon.org avatar

@vordenken I have a rule not to buy Kickstarters - UGREEN are a real company, so why don't they just make a product and allow pre-orders?
Terramaster are ARM, aren't they? Probably much lower power consumption and fan noise (and they exist today..)

vordenken,
@vordenken@fosstodon.org avatar

@simon Yeah, I thought about that too, but on the other hand UGREEN not delivering to backers would kill the whole company...
The Terramaster NAS I'm talking about (F2-423) has an Intel N5095 x64 processor in it.

LeoBurr,
@LeoBurr@tiggi.es avatar

It's kinda mesmerizing watching the core speeds on one of the i7-12700Ts in the cluster (the one that runs Tiggi.es presently) dynamically shift cores anywhere from 1400 MHz to 4.62 GHz quickly, and on an as-needed basis. The low base speed saves energy, but it'll instantly pop cores up to speeds within its thermal/power limit envelope, so for lightly-threaded workloads, it can perform as well as beefier chips.

Screen grab of watch -n0 'grep Hz /proc/cpuinfo' off of a Proxmox node: 20 logical CPU cores with a base of 1400 MHz flickering up to as high as 4.62 GHz per core as processes require it.

LeoBurr,
@LeoBurr@tiggi.es avatar

@mario @stooovie I've looked at this setting and have tried changing it.

On Comet Lake and later, I'm seeing powersave disabling turbo boost and limiting cores to the base frequency while allowing cores to go even lower than base frequency before returning to it. Performance mode runs the cores at base frequency and permits them to turbo boost before returning to base frequency.

Given this behavior, and the low-power nature of these systems (and the fact I've got solar panels generating an average of 50-60 kWh daily), I'll be leaving the systems in Performance mode.

If I wanted to control turbo boosting, I'd just use powercap-utils and set the short-term power limit down, so the CPU uses less power when demand is there, but this will also impact performance negatively in high utilization bursty scenarios.

Instead of doing that, since I value performance over saving a few watts, I'm going to raise my long-term power limit (and thus TDP) from 65 W to 85 W, since the CPU cooler can handle it based on XTU testing; the short-term limit will actually be lowered from 135 W to 125 W, but with the window extended to ~57 seconds so larger workloads complete more rapidly.

I can see Powersave making sense where Proxmox is being used locally with sipping power taking precedence over performance...but I bought these CPUs to perform, so I'm gonna help them do just that. ;)
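For reference, the knobs discussed above live in plain sysfs; a minimal sketch of checking and changing them (generic Linux paths, not a Proxmox-specific interface):

    # Current frequency-scaling governor per core (powersave vs performance):
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
    # Intel RAPL power limits: constraint 0 is the long-term (PL1/TDP) limit, constraint 1
    # the short-term (PL2) limit, in microwatts; e.g. raising PL1 to 85 W as described above:
    echo 85000000 > /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw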

stooovie,
@stooovie@mas.to avatar

@LeoBurr @mario yeah, makes sense. I wanted as low a power draw as possible (I was setting this up at the peak of the European energy crisis of 2022)

estevez,
@estevez@techhub.social avatar

I made it and it went better than expected.

I have a single-node setup with various services in containers and a VM. For example, https://blog.randomplace.online/ is hosted there. I needed a build machine to build a work project, so I decided to spin up a new VM.

I got 10 minutes of build time and my server didn't die during the build. You can see the CPU load of two builds. There is plenty of compute power left to serve my blog for that one visitor per month. Also, the Home Assistant VM was not affected.

I should mention that the node is a PN41 with a 4-core Celeron N5100 and 16 GB of RAM.

AngryAnt,
@AngryAnt@mastodon.gamedev.place avatar

@estevez There's a lot of power to be squeezed from small nodes :)

My smallest one is dual core, four threads, running among others a Home Assistant VM and a Plex one - with bulk storage attached via iSCSI. No performance issues with any of the services on there.

kai, German
@kai@m.kretschmann.social avatar

@tuxedocomputers What would you recommend as hardware for running Proxmox as an enthusiastic developer, for personal educational use? Will it perhaps be a supported OS during ordering?

tuxedocomputers,
@tuxedocomputers@linuxrocks.online avatar

@kai
The TUXEDO Cube Intel Gen13 Micro-ATX tower would be a future-proof choice for a Proxmox home server. It offers up to 128 GB of RAM, 2 M.2 slots and 4 SATA III ports. We do not offer Proxmox pre-installed, but it is effortless to install. https://www.tuxedocomputers.com/de/TUXEDO-Cube-Intel-Core-Series-Gen13-Micro-ATX.tuxedo

pax0707,
@pax0707@mastodon.social avatar

Guess it's also @proxmox update time.




hanscees, Dutch
@hanscees@mas.to avatar

I have a problem where it looks like the IDs in the web GUI are wrong.
If I connect to the command prompt, it connects to the wrong VM.

Has anybody seen that problem before?
And does anyone know what to do, perhaps?
It happens on VMs without kvm-client running.
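A quick sanity check from the node's shell for this kind of ID mix-up (a sketch using the standard Proxmox CLI; 101 is a placeholder VMID):

    qm list          # QEMU/KVM VMs: VMID, name, status, memory, disk
    pct list         # LXC containers
    qm config 101    # compare with what the GUI console claims to be attached to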

hanscees,
@hanscees@mas.to avatar

@ScaredyCat I have copied the disks and all is well. I think I was just too tired yesterday and cloned the wrong VMs. It's the best explanation I can come up with.

ScaredyCat,

@hanscees Good news. Sometimes a bit of rest can make a world of difference. Glad you got it all sorted out.

lucas3d,
@lucas3d@mastodon.social avatar

My Proxmox journey started a month ago. I moved all my containers and VMs from my NAS to a testing mini PC running Proxmox.

This test was very positive, so I decided to move to a new server. I'm using a cluster to move my LXCs and VMs. I'm very impressed by how simple the process is for a newbie like me. I just needed to remove my snapshots first.

Keep learning in my journey! 😊

Justin,
@Justin@social.younger.wtf avatar

@ironicbadger @lucas3d totally, but I would argue that saying it's hardware-specific is kind of part of the whole unstable thing

ironicbadger,
@ironicbadger@techhub.social avatar

@Justin @lucas3d maybe but don’t put the poor guy off before they even started!

T_X,
@T_X@chaos.social avatar

Hrm, I think I'm calling it a day (or a night). Couldn't figure out yet why, on this server, a Linux bridge wouldn't forward the Router Discovery messages generated via @troglobit's mrdisc, while sending to ff02::6a via ICMPv6 echo request or UDP works just fine...
Disabling multicast snooping on the bridge does not help either. Nor does "ebtables -I FORWARD -p IPv6 --ip6-destination ff02::6a -j ACCEPT".

Likely something to do with nf-call-ip6tables being enabled...
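For context, that sysctl is the br_netfilter hook that pushes bridged frames through ip6tables; a hedged sketch of how to check it and flip it temporarily while testing:

    # 1 means bridged IPv6 frames are run through ip6tables (br_netfilter loaded)
    sysctl net.bridge.bridge-nf-call-ip6tables
    # temporarily disable it to see whether an ip6tables rule is eating the mrdisc packets:
    sysctl -w net.bridge.bridge-nf-call-ip6tables=0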

troglobit,
@troglobit@fosstodon.org avatar

@T_X probably filtering yeah. I wrote something a while ago for work here https://addiva-elektronik.github.io/2023/03/29/bridge-firewalling/

ironicbadger,
@ironicbadger@techhub.social avatar

This evening was totally nuts. I checked, documented, and updated the firmware on every SSD in the house. I took the entire house offline for nearly 4 hours. The kind of offline that means even DHCP wasn’t available.

Upshot. My cluster lost quorum (because everything was off) and when I came to bring back up my core network services node it took me about 30 mins to figure out a) why are you not reachable on the network? b) how can I see your console (headless rack unit btw).

That node was forcefully removed from the cluster after some cursed action with an HDMI capture card to figure things out.

I’m almost at the point where I’ve finished my hardware stock take and can feasibly consider starting the Epyc build tomorrow or maybe Weds.
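For the record, the lost-quorum / node-removal dance usually comes down to a couple of pvecm calls once a console is reachable (a sketch, not necessarily the exact steps taken here; 'nodename' is a placeholder):

    pvecm status              # check quorum state on a surviving node
    pvecm expected 1          # let a lone remaining node regain quorum and keep working
    pvecm delnode nodename    # permanently drop the dead/removed node from the cluster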


ZS,
@ZS@techhub.social avatar

@ironicbadger I think the last time I updated a SSD firmware was for those old Samsung drives that didn't housekeep (840 Evo?) or the even older Crucial M4 drives that locked up after a few thousand hours. Certainly not a regular thing to do tbh.

ironicbadger,
@ironicbadger@techhub.social avatar

@ZS nor would I want to do this regularly. However the new epyc build seems like the perfect time to take stock of what’s where and repurpose bits

dedsyn4ps3,
@dedsyn4ps3@fosstodon.org avatar

Server components, ASSEMBLE!!!!

Ugh, FINALLY got around to getting the last few drives installed in the new rackmount chassis. Figured while I was at it I'd go ahead and swap in a beefy CPU with integrated graphics since it's a slim build! 😎

Proxmox is up and running, and before long, the homelab will have another domain controller and some other dedicated machines! 🙌🤘💯 :nixos: :windows:

mttaggart,

I see Broadcom is continuing to sell my book for me. Stop using VMware; use Proxmox instead.

arstechnica.com/information-technology/2024/02/broadcom-owned-vmware-kills-the-free-version-of-esxi-virtualization-software/

sej7278,
@sej7278@fosstodon.org avatar

Proxmox (and friends) gonna get a whoooooole lotta new users now that free ESXi is dead https://kb.vmware.com/s/article/2107518?lang=en_US
