Basically what I learned last week is that #Rancher #RKE2 is #k3s on steroids, and that is awesome. It also means that if you're a hobbyist learning #Kubernetes in a self-hosted #homelab, many of the concepts carry over in a familiar way to the bigger "enterprise" product (which is also FOSS).
I was trying to run #Rancher on top of #k3s on a VM with 8GB of RAM, to simulate a #RaspberryPi, but when I saw the basic cluster and monitoring with NO apps using 50% of memory, all I could hear was @ironicbadger telling me to stop using Pis. 😮💨
Is Rancher really that memory hungry, or did I do something wrong? 🤔
In case you're wondering, I installed them using @technotim's playbooks.
As my old NUC was showing its age and didn't suffice for my #homelab needs anymore, I decided to build a new one. And because I prefer running all my #selfhosted services in containers, but abhor fucking around with #docker and docker-compose, it's a single-node #k8s cluster using #k3s, just like my old server. One big difference is that the new server has a decent number of drives for storage. I decided to set up #zfs to manage them, and zfs is everything it's cracked up to be.
Going on a road trip with our four kids, and our car is so packed full of containers that it is now definitively a #k3s instance, but only running on two control plane nodes - we might be a bit degraded, but production is still shipping right along! #Kubernetes #family
One of our #k3s #Kubernetes clusters is coming up on its second birthday soon (kept fully patched and upgraded). I know #Rancher clusters are meant to be treated as "cattle" and not "pets", but this deployment didn't get the memo.
Do you know when you have hung on to a mobile phone for too long? When you get a quote to sell it and the phone is worth $2 USD. Sleep well Pixel2. I'll find something for you to do in the #homelab.
Big trouble in little cluster today 😅 I couldn't create new volumes, and after chasing everything down, it turned out to be some weird multipathd issue...
Lame phrase, but I'm really going to try to be more active on the fediverse and socially active from an activist standpoint. Too much hate and hurt in the world not to try.
Today I migrated an HA #k3s #Kubernetes cluster, which had been running on an entirely different distro, to #OpenSUSE Leap (Transactional Server role), and leveraged k3s-upgrade with Rancher's system-upgrade-controller to fully automate both the k3s and OS patching. It then automatically updated the cluster to the latest Kubernetes patch release for the installed version, without incident. Cool stuff!
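For anyone curious, the automated k3s patching boils down to applying upgrade Plans that the system-upgrade-controller acts on. Roughly something like this for the server nodes (names, channel, and selector are illustrative - check the k3s automated upgrades docs for the current form):

```yaml
# Sketch of an upgrade Plan for control-plane (server) nodes;
# a similar Plan with a different nodeSelector covers the agents.
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: k3s-server
  namespace: system-upgrade
spec:
  concurrency: 1            # upgrade one server node at a time
  channel: https://update.k3s.io/v1-release/channels/stable
  serviceAccountName: system-upgrade
  cordon: true              # cordon each node before upgrading it
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
  upgrade:
    image: rancher/k3s-upgrade
```

The controller watches these Plans and rolls the new k3s version across matching nodes one at a time, which is what makes the whole thing hands-off.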
Saw someone on Reddit try telling another person that running a single #k3s instance is not a good way to learn #Kubernetes. Total rubbish. It's a superb way of learning Kubernetes.
I’m using the #tailscale #kubernetes operator for my #k3s cluster, and I would like one particular workload to use a specific exit node. I don’t think that’s possible today via the operator, but does anyone know what the change would be? Do I need a tailscale sidecar container on my workload pods?
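In case the sidecar route turns out to be the answer, here's roughly what I imagine it would look like - purely a sketch, assuming Tailscale's stock container image and its `TS_AUTHKEY`/`TS_EXTRA_ARGS` environment variables; the exit node IP, secret name, and app image are placeholders:

```yaml
# Hypothetical sidecar pinning this pod to one Tailscale exit node.
apiVersion: v1
kind: Pod
metadata:
  name: my-workload
spec:
  containers:
    - name: app
      image: nginx                        # placeholder for the real workload
    - name: tailscale
      image: tailscale/tailscale:latest
      env:
        - name: TS_AUTHKEY
          valueFrom:
            secretKeyRef:
              name: tailscale-auth        # placeholder secret
              key: authkey
        - name: TS_USERSPACE
          value: "true"
        - name: TS_EXTRA_ARGS
          value: "--exit-node=100.64.0.7" # placeholder exit node IP
```

I'm genuinely unsure whether this alone would force the app container's traffic through the exit node without extra routing setup in the pod, which is part of why I'm asking.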
♲ @bkoehn@diaspora.koehn.com: > My first machine in my re-homing IT infrastructure was an OrangePi. I created a single-node #k3s cluster and started moving services to it (initially my Diaspora pod and a few others). I tweaked Dockerfiles to create cross-platform images (usually this required no change at all), and by and large it worked great. I’m looking to add some more nodes, and I discovered the Mini PC category. A node that looks promising is the BosGame B100: for $180 you get a machine with a pretty good CPU (Alder Lake N100), RAM (16GB), and an NVMe SSD (512GB). It’s amd64 instead of arm64, but again, my images are all cross-platform and the scheduler can deploy them on whichever is more available.
Adding nodes to the cluster is super simple, and they need very little administration since they basically run Debian and everything else sits behind a container interface.
Migrating to a new home becomes little more than moving the data, updating some DNS entries, and applying the same configuration against the new cluster. So much simpler than the bad old days when everything ran on the underlying OS.
However, the k3s instances keep crashing. Has anyone set this up? I feel like I must be missing something simple.
Also, I hate that the recommended way is to run a shell script off the Internet as root, but here we are. Also also, the logs are super spammy, making it hard for a n00b like me to debug.