Does anyone here know and use this? To me it's apparently something new from the @linuxfoundation (I can't know everything).
»Open Source – Ceph Squid has been released:
The open-source storage project has also announced the date of Cephalocon 2024 and three Diamond memberships.«
This post is really just a small collection of thoughts about Proxmox when used in a home lab situation, and about home labs in general. I was originally going to post this to Mastodon only, but it didn't fit in a single post.
A lot of people (at least from what I see on Reddit) build Proxmox systems with shared file systems like Ceph, even for home lab use.
Got incus-deploy to a state where it can now reliably deploy #Ceph, #OVN and #Incus in just a few minutes, taking care of all the clustering bits and resulting in a fully functional environment! https://asciinema.org/a/654385
Codeberg's performance is currently degraded. We are working on a fix.
We performed an experiment with our storage settings (testing Ceph's resiliency) and did not expect that undoing the action would take 100× as long as the original change.
Finally got #Ceph RBD reliably working in #Rancher #Harvester VMs. Volumes, snapshots, backups, migrations, etc. are all working now against an external Ceph cluster, alongside the native Longhorn.
One of my control plane/storage nodes kept freezing at approx 00:10 every day. Just removed it from the cluster, moved the 2 #ceph OSDs it was hosting to a former worker, and reprovisioned the worker as a control plane/storage node. All with no downtime 😅
Just realized it isn't S3 storage that I potentially need for the #Peertube instance I might want to host, but that other service thingy for improved bandwidth. 🙃
Dreamed last night about configuring and deploying cephfs. Was frustrated when I woke up and still had to actually then go and do it first thing at work this morning. #ceph