The difficulty in troubleshooting issues with the #Ceph radosgw S3 API is often the question of whether, for a particular call, radosgw has an implementation bug, or whether S3 as implemented by AWS is really that stupidly counterintuitive.
Voting is open for this year's Fedivision Song Contest and I would be delighted if you could vote for my contribution: https://fedivision.party/#CEPH
This post is really a small collection of thoughts about Proxmox in a home lab setting, and about home labs in general. I was originally going to post this to Mastodon only, but it didn't fit in a single post.
A lot of people (at least from what I see on Reddit) build Proxmox systems with shared file systems like Ceph, even for home lab use.
Got incus-deploy to a state where it can now reliably deploy #Ceph, #OVN and #Incus in just a few minutes, taking care of all the clustering bits and resulting in a fully functional environment! https://asciinema.org/a/654385
Finally got #Ceph RBD reliably working in #Rancher #Harvester VMs. Volumes, snapshots, backups, migrations, etc. are all working now against an external Ceph cluster alongside the native Longhorn.
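For anyone wanting to replicate this, the gist is a ceph-csi RBD StorageClass pointing at the external cluster. A minimal sketch, assuming the ceph-csi driver is already installed; the clusterID, pool, and secret names below are placeholders, not Harvester defaults:

```yaml
# Sketch: ceph-csi RBD StorageClass for an external Ceph cluster.
# clusterID (the Ceph fsid), pool, and secret names are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-external
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <your-ceph-fsid>
  pool: harvester-vms
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
reclaimPolicy: Delete
allowVolumeExpansion: true
```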
FYI, even though the #Forgejo and #Gitea docs use the "MINIO" branding throughout, it still works with other S3-compatible vendors. If you just mentally s/MINIO/S3/, the rest makes sense.
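To illustrate, here's a sketch of what the storage section of an app.ini can look like with the MINIO_* settings pointed at a Ceph RGW endpoint instead of an actual MinIO server (endpoint, bucket, and credentials are placeholders):

```ini
; Sketch: Forgejo/Gitea storage config using the "minio" storage type
; against a Ceph RGW endpoint. Values are placeholders for illustration.
[storage]
STORAGE_TYPE = minio
MINIO_ENDPOINT = rgw.example.com:443
MINIO_ACCESS_KEY_ID = <access-key>
MINIO_SECRET_ACCESS_KEY = <secret-key>
MINIO_BUCKET = git-lfs
MINIO_USE_SSL = true
```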
Also, if you plan to use RGW and get stuck on the bucket policy, here's a basic example that grants the "forgejo" RGW user full access to the "git-lfs" bucket:
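```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam:::user/forgejo"]},
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::git-lfs",
        "arn:aws:s3:::git-lfs/*"
      ]
    }
  ]
}
```

(A sketch in the standard S3 policy format RGW accepts; note the empty account field in the principal ARN, since the user has no tenant. Apply it against the RGW endpoint with e.g. `aws s3api put-bucket-policy --bucket git-lfs --policy file://policy.json`.)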
One of my control plane/storage nodes kept freezing at approx 00:10 every day. Just removed it from the cluster, moved the 2 #ceph OSDs it was hosting to a former worker, and reprovisioned the worker as a control plane/storage node. All with no downtime 😅
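In case it's useful, the OSD move itself can be done with stock Ceph commands. A rough sketch, assuming a cephadm-managed cluster; the OSD ID, hostname, and device path below are placeholders:

```bash
# Drain the OSD so Ceph rebalances its data onto the remaining OSDs
ceph osd out 3
# wait until "ceph osd safe-to-destroy 3" reports it is safe to remove

# Remove the daemon and purge the OSD's cluster metadata
ceph orch daemon rm osd.3 --force
ceph osd purge 3 --yes-i-really-mean-it

# Re-create the OSD on the new host's disk
ceph orch daemon add osd newhost:/dev/sdb
```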