As my old NUC was showing its age and didn't suffice for my #homelab needs anymore, I decided to build a new one. And because I prefer running all my #selfhosted services in containers, but abhor fucking around with #docker and docker-compose, it's a single-node #k8s cluster, using #k3s, just like my old server. One big difference is that the new server has a decent number of drives for storage. I decided to set up #zfs to manage them, and zfs is all it's cracked up to be.
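For the curious, the pool is nothing exotic; something like this (pool name, mirror layout, and device names are just an illustration, not my exact setup):

zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb
zfs create -o compression=lz4 -o mountpoint=/data tank/data

ashift=12 matches 4K-sector drives, and lz4 compression is close enough to free that there's little reason to leave it off.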
I was trying to run #Rancher on top of #k3s on a VM with 8GB of RAM, to simulate a #RaspberryPi, but when I saw the bare cluster plus monitoring, with NO apps deployed, using 50% of memory, all I could hear was @ironicbadger telling me to stop using Pis. 😮‍💨
Is Rancher really that memory hungry, or did I do something wrong? 🤔
In case you're wondering, I installed them using @technotim's playbooks.
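If anyone wants to compare numbers: k3s bundles metrics-server out of the box, so (with kubectl pointed at the cluster) the memory hogs show up with:

kubectl top nodes
kubectl top pods -A --sort-by=memory

That makes it easy to see whether it's Rancher itself or the monitoring stack eating the RAM.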
However, the k3s instances keep crashing. Has anyone set this up? I feel like I must be missing something simple.
Also, I hate that the recommended way is to run a shell script off the Internet as root, but here we are. Also also, the logs are super spammy, making it hard for a n00b like me to debug.
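For context, the documented one-liner is:

curl -sfL https://get.k3s.io | sh -

If you share my distaste, one option is to at least fetch the script and read it before running it:

curl -sfL https://get.k3s.io -o install-k3s.sh
less install-k3s.sh
sudo sh install-k3s.sh

And for the crashes, the systemd journal is probably the place to start digging (the unit name assumes a default server install):

sudo journalctl -u k3s -f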
♲ @bkoehn@diaspora.koehn.com: My first machine in re-homing my IT infrastructure was an OrangePi. I created a single-node #k3s cluster and started moving services to it (initially my Diaspora pod and a few others). I tweaked Dockerfiles to create cross-platform images (usually this required no change at all), and by and large it worked great.

I'm looking to add some more nodes, and I discovered the Mini PC category. A node that looks promising is the BosGame B100. For $180 you get a machine with a pretty good CPU (Alder Lake N100), RAM (16GB), and NVMe SSD (512GB). It's amd64 instead of arm64, but again, my images are all cross-platform and the scheduler can deploy them on whichever node is more available.
Adding nodes to the cluster is super-simple, and they need very little administration since they run basically Debian and everything else is behind a container interface.
Migrating home becomes little more than moving the data, updating some DNS entries, and applying the same configuration against the new cluster. So much simpler than the bad old days when everything ran directly on the underlying OS.
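For reference, joining an additional agent node to an existing k3s server really is a one-liner; the server hostname and token below are placeholders, and the token can be read from /var/lib/rancher/k3s/server/node-token on the server:

curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -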
I’m using the #tailscale #kubernetes operator for my #k3s cluster, and I would like one particular workload to use a specific exit node. I don’t think that’s possible today via the operator, but does anyone know what the right approach would be? Do I need a tailscale sidecar container on my workload pods?
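If a sidecar turns out to be the answer, my rough sketch is tailscaled running in the pod with ephemeral state and the exit node pinned via the CLI; the socket path, auth key variable, and exit-node IP are placeholder assumptions, and I haven't validated any of this:

tailscaled --state=mem: --socket=/tmp/tailscaled.sock &
tailscale --socket=/tmp/tailscaled.sock up --authkey="$TS_AUTHKEY" --exit-node=100.64.0.42

The catch is that the container would also need /dev/net/tun and NET_ADMIN, or tailscaled's userspace networking mode (--tun=userspace-networking), which I'm not sure plays nicely with exit nodes.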
Saw someone on Reddit trying to tell another person that running a single #k3s instance is not a good way to learn #Kubernetes. Total rubbish. It's a superb way of learning Kubernetes.
My #k3s installation is over 2 years old and has done a lot to further my understanding of #Kubernetes. It has proven super reliable and capable, and I can't remember ever having to recover from a serious situation related to the embedded SQLite datastore.
Lame phrase, but I'm really going to try to be more active on the fediverse, and more socially engaged from an activist standpoint. Too much hate and hurt in the world not to try.
@ellie@pezhore Thank you for that; at first glance it looks very useful to me! I'm in the process of learning how to deploy a k3s cluster on a few Pis on my desk, and I could use a good summary