As my old NUC was showing its age, and didn't suffice for my #homelab needs anymore, I decided to build a new one. And because I prefer running all my #selfhosted services in containers, but abhor fucking around with #docker and docker-compose, it's a single-node #k8s cluster, using #k3s, just like my old server. One big difference is that the new server has a decent amount of drives for storage. I decided to set up #zfs to manage that, and zfs is everything it's cracked up to be.
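For anyone curious what a minimal zfs setup for this kind of box looks like: a mirrored pool plus per-app datasets so snapshots and quotas stay granular. This is a sketch, not my actual layout — pool name, device paths, and dataset names are all placeholders.

```shell
# Hypothetical layout: two disks mirrored into a pool named "tank".
# Use /dev/disk/by-id paths so the pool survives device reordering.
zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# One dataset per workload makes snapshots and quotas granular.
zfs create -o compression=lz4 tank/k8s
zfs create tank/k8s/postgres

# Cheap point-in-time snapshot before risky changes.
zfs snapshot tank/k8s/postgres@pre-upgrade
```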
I was trying to run #Rancher on top of #k3s on a VM with 8GB of RAM, to simulate a #RaspberryPi, but when I saw the basic cluster and monitoring with NO apps using 50% of memory, all I could hear was @ironicbadger telling me to stop using Pis. 😮💨
Is Rancher really that memory hungry, or did I do something wrong? 🤔
In case you're wondering, I installed them using @technotim playbooks.
However, the k3s instances keep crashing. Has anyone set this up? I feel like I must be missing something simple.
Also, I hate that the recommended way is to run a shell script off the Internet as root, but here we are. Also also, the logs are super spammy, making it hard for a n00b like me to debug.
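One way around the curl-pipe-sh-as-root thing: download the installer first, read it, and pin a version so reruns are reproducible. `https://get.k3s.io` and the `INSTALL_K3S_VERSION` variable are from the k3s docs; the version string here is just an example.

```shell
# Download the k3s installer instead of piping it straight to sh,
# so you can review it before running anything as root.
curl -sfL https://get.k3s.io -o k3s-install.sh
less k3s-install.sh

# Pin the release (version below is an example, not a recommendation).
sudo INSTALL_K3S_VERSION=v1.29.4+k3s1 sh k3s-install.sh
```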
♲ @bkoehn@diaspora.koehn.com: > My first machine in re-homing my IT infrastructure was an OrangePi. I created a single-node #k3s cluster and started moving services to it (initially my Diaspora pod and a few others). I tweaked Dockerfiles to create cross-platform images (usually this required no change at all), and by and large it worked great.

I'm looking to add some more nodes, and I discovered the mini PC category. A node that looks promising is the BosGame B100: for $180 you get a machine with a pretty good CPU (Alder Lake N100), RAM (16GB), and NVMe SSD (512GB). It's amd64 instead of arm64, but again, my images are all cross-platform and the scheduler can deploy them on whichever node is more available.
Adding nodes to the cluster is super-simple, and they need very little administration since they run basically Debian and everything else is behind a container interface.
Migrating home becomes little more than moving the data, updating some DNS entries, and applying the same configuration against the new cluster. So much simpler than the bad old days where everything ran on the underlying OS.
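The cross-platform images mentioned above are usually built with docker buildx, which produces a single multi-arch manifest the scheduler can pull on either node type. Image name and registry below are placeholders.

```shell
# One-time setup of a builder that supports multi-platform builds.
docker buildx create --use --name xbuilder

# Build one image for both arm64 (OrangePi) and amd64 (mini PC),
# and push the combined manifest to the registry.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:latest \
  --push .
```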
I’m using the #tailscale #kubernetes operator for my #k3s cluster, and I would like one particular workload to use a specific exit node. I don’t think that’s possible today via the operator, but does anyone know what the change would be? Do I need a tailscale sidecar container on my workload pods?
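The sidecar route might look something like this. Caveats: this is a sketch, not something the operator sets up for you; the secret name and exit-node IP are placeholders; `TS_AUTHKEY`, `TS_USERSPACE`, and `TS_EXTRA_ARGS` are documented env vars for the tailscale/tailscale image, and `--exit-node` is a real `tailscale up` flag. In userspace mode the app would still need to send its traffic through the sidecar's proxy for the exit node to apply, so whether this fully solves it depends on the networking mode.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: workload-with-exit-node
spec:
  containers:
    - name: app
      image: registry.example.com/myapp:latest   # placeholder
    - name: ts-sidecar
      image: tailscale/tailscale:latest
      env:
        - name: TS_AUTHKEY
          valueFrom:
            secretKeyRef:
              name: tailscale-auth               # placeholder secret
              key: TS_AUTHKEY
        - name: TS_USERSPACE
          value: "true"
        - name: TS_EXTRA_ARGS
          value: "--exit-node=100.64.0.42"       # placeholder node IP
```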
Saw someone on Reddit try telling another person that running a single #k3s instance is not a good way to learn #Kubernetes. Total rubbish. It's a superb way of learning Kubernetes.
In fact, the #kubernetes operator is how you’re seeing this post! I’m running a #k3s cluster at home on two #OrangePi Pluses using the operator for ingress. I’m on T-Mobile home internet and it works great since I can’t forward ports.
My #k3s installation is over 2 years old and has done a lot to further my understanding of #Kubernetes. It has proven to be super reliable and capable, and I can't remember ever having to recover from a serious situation related to the embedded SQLite datastore.
Lame phrase, but I'm really going to try to be more active on the fediverse and socially active from an activist standpoint. Too much hate and hurt in the world not to try.
Today I migrated an HA #k3s #Kubernetes cluster that had been running on a different distro entirely over to #OpenSUSE Leap Transactional Server, and leveraged k3s-upgrade and Rancher's system-upgrade-controller to fully automate the k3s and OS patching. It then updated the cluster to the latest Kubernetes patch for the installed version automatically, without incident. Cool stuff!
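For reference, the automated k3s upgrades are driven by a Plan custom resource like the one below, adapted from the k3s docs on automated upgrades — the channel URL, CRD, and `rancher/k3s-upgrade` image are documented; the plan name is up to you.

```yaml
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: k3s-server
  namespace: system-upgrade
spec:
  concurrency: 1
  # Track the stable k3s channel; the controller upgrades nodes as
  # new patch releases land there.
  channel: https://update.k3s.io/v1-release/channels/stable
  nodeSelector:
    matchExpressions:
      - {key: node-role.kubernetes.io/control-plane, operator: Exists}
  serviceAccountName: system-upgrade
  cordon: true
  upgrade:
    image: rancher/k3s-upgrade
```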
hetzner-k3s v1.1.4 is out! 🎉 This is the first release made entirely of contributions (7 PRs), so thanks to the contributors! See the release notes for more information about this update. We also reached 1.1K stars! https://github.com/vitobotta/hetzner-k3s #kubernetes #k3s #hetzner #devops
First part of a new long-term home project coming in: an #Ubiquiti PoE+ switch to power a small #Kubernetes cluster built using #raspberrypi nodes. Going to blog about every step once it's completed, but it's going to be a project spanning a few quarters, done bit by bit.
If the experiments with https://www.talos.dev/ are positive, it will replace k3os. One of the big plusses is that this is #k8s and not #k3s, so I should be able to use the cluster-autoscaler for autoscaling. That will need some code to work with #unifi switches.