As my old NUC was showing its age and didn't suffice for my #homelab needs anymore, I decided to build a new one. And because I prefer running all my #selfhosted services in containers, but abhor fucking around with #docker and docker-compose, it's a single-node #k8s cluster, using #k3s, just like my old server. One big difference is that the new server has a decent amount of drives for storage. I decided to set up #zfs to manage them, and zfs is all it's cracked up to be.
I was trying to run #Rancher on top of #k3s on a VM with 8GB of RAM, to simulate a #RaspberryPi, but when I saw the basic cluster and monitoring with NO apps using 50% of memory, all I could hear was @ironicbadger telling me to stop using Pis. 😮💨
Is Rancher really that memory hungry, or did I do something wrong? 🤔
In case you're wondering, I installed them using @technotim's playbooks.
However, the k3s instances keep crashing. Has anyone set this up? I feel like I must be missing something simple.
Also, I hate that the recommended way is to run a shell script off the Internet as root, but here we are. Also also, the logs are super spammy, making it hard for a n00b like me to debug.
I’m using the #tailscale #kubernetes operator for my #k3s cluster, and I would like one particular workload to use a specific exit node. I don’t think that’s possible today via the operator, but does anyone know what the change would be? Do I need a tailscale sidecar container on my workload pods?
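For anyone curious what the sidecar route might look like: here's a rough, untested sketch of a pod running the official tailscale image next to the app, with the app's egress pointed at the sidecar's SOCKS5 proxy. The image/secret names, app name, and exit-node IP are all placeholders, and it assumes the app honors `ALL_PROXY` — adjust for your setup.

```yaml
# Hypothetical sidecar sketch — not a verified config.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-exit-node
spec:
  containers:
    - name: app
      image: my-app:latest              # placeholder; assumes the app honors ALL_PROXY
      env:
        - name: ALL_PROXY
          value: socks5://localhost:1055
    - name: tailscale
      image: tailscale/tailscale:latest
      env:
        - name: TS_AUTHKEY               # auth key stored in a Secret (placeholder name)
          valueFrom:
            secretKeyRef:
              name: tailscale-auth
              key: TS_AUTHKEY
        - name: TS_SOCKS5_SERVER         # expose a local SOCKS5 proxy for the app container
          value: "localhost:1055"
        - name: TS_EXTRA_ARGS            # extra flags passed to `tailscale up`
          value: "--exit-node=100.64.0.10"   # placeholder exit-node Tailscale IP
```

In userspace networking mode the sidecar can't transparently rewrite the pod's routes, hence the proxy env var on the app container.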
Lame phrase, but I'm really going to try to be more active on the fediverse and socially active from an activist standpoint. Too much hate and hurt in the world not to try.
Today I migrated an HA #k3s #Kubernetes cluster that had been running on a different distro entirely over to #OpenSUSE Leap Transactional Server, and leveraged k3s-upgrade plus Rancher's system-upgrade-controller to fully automate both the k3s and OS patching. It then automatically updated the cluster to the latest Kubernetes patch for the installed version without incident. Cool stuff!
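For reference, the automation hinges on a Plan custom resource that the system-upgrade-controller watches. A minimal sketch for the server nodes, assuming the controller is installed in the `system-upgrade` namespace and you're tracking the stable k3s channel (plan name and selector are illustrative):

```yaml
# Sketch of a k3s server-upgrade Plan for system-upgrade-controller.
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1                      # upgrade one server node at a time
  cordon: true                        # cordon each node before upgrading it
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: In
        values: ["true"]
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade        # upgrade job image
  channel: https://update.k3s.io/v1-release/channels/stable
```

A second Plan with an agent-node selector (and typically `prepare` set to wait on the server plan) covers the workers.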
hetzner-k3s v1.1.4 is out! 🎉 This is the first release made entirely of contributions (7 PRs) so thanks to the contributors! See release notes for more information about this update. We also reached 1.1K stars! https://github.com/vitobotta/hetzner-k3s #kubernetes #k3s #hetzner #devops
One of our #k3s #Kubernetes clusters is coming up on its second birthday soon (kept fully patched and upgraded). I know these #Rancher clusters are meant to be treated as "cattle" and not "pets", but this deployment didn't get the memo.
Going on a road trip with our four kids and our car is so packed full of containers that it is now definitively a #k3s instance, but only running on two control plane nodes - we might be a bit degraded, but production is still shipping right along! #Kubernetes #family
Enlightening, isn't it? There are other empty blocks, but they are either fairly standard or are described elsewhere in the document.
If you are familiar with #helm, you won't despair, because you have the power of analytics.enabled: false. That works on the rest of this chart and is the standard way to enable/disable things.
It doesn't work that way.
Let me save you some time with the terrible new #github code search. Here is the actual syntax:
"analytics.reporting_enabled: false"
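In values-override terms, the lesson is that this chart uses a nonstandard key, so the override has to match the chart's actual schema rather than the usual `analytics.enabled` convention. A sketch of the values file (the surrounding structure is assumed — check the chart's values.yaml for the exact nesting):

```yaml
# values-override.yaml — key name per this chart's schema,
# not the conventional analytics.enabled
analytics:
  reporting_enabled: false
```

Then apply it the usual way with `helm upgrade --install <release> <chart> -f values-override.yaml`.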
Basically what I learned last week is that #Rancher #RKE2 is #k3s on steroids, and that is awesome. It also means that if you're a hobbyist learning #Kubernetes in a self-hosted #homelab, many of the concepts carry over in a familiar way to the bigger "enterprise" product (which is also FOSS).
Big trouble in little cluster today 😅 I couldn't create new volumes, and after chasing down everything, it turned out to be some weird multipathd issue...