First part of a new long-term home project just came in: a #Ubiquiti PoE+ switch to power a small #Kubernetes cluster built from #raspberrypi nodes. Going to blog about every step once it's completed, but it's going to be a project spanning a few quarters, done bit by bit.
Alright! The takeover cluster from @ocramius just came in from Germany. This should speed up the project, plus it makes repurposing those nodes slightly easier in the future. (Yes, there is a plan beyond just the cluster and its #lego housing.)
And exactly this is why I want to use #lego Technic for the skeleton. (Moved it from the box it was on to the yellow thing it's on right now. The grey underplate is now fully resting on the surface.)
#tinkerbell_oss Figured out today what's wrong and what is causing this error. TL;DR: I need a custom kernel/bootloader to be able to run workflows. To be continued:
#tinkerbell_oss Decided to skip auto-provisioning for now. Not because of #tinkerbell_oss, but because the RPI4 setup I've been following requires a custom kernel and initramfs to be able to run workflows from netboot: https://t.co/VejBOwwTP0
#tinkerbell_oss Still figuring out how to create that kernel and initramfs, and that's going to take a while. And I really wanted to have at least SOMETHING running. So this is my MVP home #k8s cluster running #k3s. Didn't bother with the SSDs yet; running purely from SD cards for now.
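For reference, a bare-bones #k3s cluster like this can be bootstrapped with the official install script; a rough sketch (the server IP and token are placeholders, not my actual cluster):

```shell
# On the first RPi (control plane): install the k3s server
curl -sfL https://get.k3s.io | sh -

# Grab the join token the server generated
sudo cat /var/lib/rancher/k3s/server/node-token

# On each additional RPi (agent): join using the server URL and token
curl -sfL https://get.k3s.io | \
  K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -

# Back on the server: check that all nodes joined
sudo k3s kubectl get nodes
```

That's really all there is to the happy path; the custom kernel/initramfs work is only needed for the netboot/auto-provisioning route.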
#tinkerbell_oss Got something else to fix for booting from SSD, but will also fix that later.
For now, the next step is getting GitHub Actions runners on it to start building applications for it and to have a way to deploy directly to it. There are several solutions for that; should be fun :D
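One of those solutions is actions-runner-controller installed via Helm; a rough sketch of what that setup could look like (namespace, release name, and the PAT are example values, not my actual config):

```shell
# cert-manager is a prerequisite for actions-runner-controller
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml

# Add the controller's Helm repo and install it, authenticating with a GitHub PAT
helm repo add actions-runner-controller \
  https://actions-runner-controller.github.io/actions-runner-controller
helm repo update
helm install actions-runner-controller \
  actions-runner-controller/actions-runner-controller \
  --namespace actions-runner-system --create-namespace \
  --set authSecret.create=true \
  --set authSecret.github_token=<a-github-pat>
```
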
Essentially, lots of Docker images only have an amd64 version, maybe an arm64 one, but rarely an arm(v7) image, so running anything on the RPI3's in the cluster is unlikely unless I start building images for it.
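Building those multi-arch images yourself is mostly a `docker buildx` one-liner; a sketch (the image name and registry are placeholders):

```shell
# Create and select a builder that supports multi-platform builds
docker buildx create --name multiarch --use

# Build amd64, arm64 and arm/v7 variants in one go and push a single
# multi-arch manifest, so each node pulls the right image automatically
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  -t registry.example.com/myapp:latest \
  --push .
```
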
Now the GitHub Actions Runner Helm chart I'm using also only has amd64 and
So that is going to be fun. The cool thing is that setting it up is really, really easy, like scary easy. Writing a Helm chart to add all of the runner deployment and autoscaling definitions for that. Also considering putting them directly in a project's
deployment. But that results in a chicken-and-egg problem: either the first deployment to the cluster has to be done manually, or I'll have to store them in a central location.
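Assuming actions-runner-controller, the runner deployment and autoscaling definitions such a chart would wrap look roughly like this (all names and the repo are placeholders):

```shell
kubectl apply -f - <<'EOF'
# A pool of self-hosted runners registered against one repository
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runner
spec:
  template:
    spec:
      repository: example-org/example-repo
---
# Scale that pool up and down based on queued/in-progress workflow runs
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-runner-autoscaler
spec:
  scaleTargetRef:
    name: example-runner
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: TotalNumberOfQueuedAndInProgressWorkflowRuns
    repositoryNames:
    - example-org/example-repo
EOF
```
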
However, the first thing on the menu is getting Helm to work and being able to deploy from within the