I think I've finally tamed this #OpenSearch setup on this #Rancher #RKE2 cluster. Today's adventure was schema conflicts: some pods are labeled with "app" while others use "app.kubernetes.io", which causes a problem for inputs because, to OpenSearch, it looks like there's a string where an object should be. The flatten_hashes option on the #fluentd output wasn't quite enough to cut it, but the dedot filter brought it the rest of the way there.
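For anyone hitting the same mapping conflict, here's a minimal sketch of that combination, assuming the fluent-plugin-dedot_filter and fluent-plugin-opensearch plugins are installed; the match pattern and host are placeholders for your own setup:

```
# Replace dots in label keys (e.g. app.kubernetes.io/name)
# so they stop looking like nested objects to OpenSearch
<filter kubernetes.**>
  @type dedot
  de_dot true
  de_dot_separator _
  de_dot_nested true
</filter>

<match kubernetes.**>
  @type opensearch
  host opensearch.logging.svc   # placeholder host
  port 9200
  # flatten nested hashes in the record before indexing
  flatten_hashes true
  flatten_hashes_separator _
</match>
```

With both in place, a label like app.kubernetes.io/name ends up as a single flat string key instead of colliding with pods that only set a plain "app" label.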
One of our #k3s #Kubernetes clusters is coming up on its second birthday soon (kept fully patched and upgraded). I know these #Rancher clusters are meant to be treated as "cattle" and not "pets", but this deployment didn't get the memo.
Going on a road trip with our four kids and our car is so packed full of containers that it is now definitively a #k3s instance, but only running on two control plane nodes - we might be a bit degraded, but production is still shipping right along! #Kubernetes #family
openSUSE Tumbleweed makes for a good desktop, and even one for gaming, but for a workstation or server you're going to want either openSUSE Leap or SUSE Linux Enterprise.
Sadly, SUSE is really trying to push their immutable systems with transactional updates. And unfortunately, that will soon include openSUSE Leap.
I will give credit where credit is due: SUSE will allow some binaries to run from /home, though /opt is questionable. The developers seem to be on the fence about that.
@Linux_Is_Best Objection, your honour! I have been running half a dozen servers with #opensuse #microos successfully for about two years. Some with #k3s, some with #podman, some with applications running directly on the OS.
Enlightening, isn't it? There are other empty blocks, but they are either fairly standard or are described elsewhere in the document.
If you are familiar with #helm, you won't despair, because you have the power of analytics.enabled: false. That pattern works on the rest of this chart and is the standard way to enable/disable things.
It doesn't work that way.
Let me save you some time with the terrible new #github code search. Here is the actual syntax:
"analytics.reporting_enabled: false"
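In values-file form, that would look something like the following — a sketch only, with the key name taken from the search result above and the nesting assumed to match how the chart structures its other settings:

```yaml
# hedged sketch: override in your values.yaml,
# nesting assumed to mirror the chart's other keys
analytics:
  reporting_enabled: false
```

Worth double-checking against the chart's own values.yaml, since this is exactly the kind of key that doesn't follow the chart's usual enabled/disabled convention.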
Basically, what I learned last week is that #Rancher #RKE2 is #k3s on steroids, and that is awesome. It also means that if you're a hobbyist learning #Kubernetes in a self-hosted #homelab, many of the concepts carry over in a familiar way to the bigger "enterprise" product (which is also FOSS).
I think I have a workable way to do it: set up an initial standalone #k3s node as a bootstrap cluster on the same VLAN, then set up the #metal3 operator there.
Big trouble in little cluster today 😅 I couldn't create new volumes, and after chasing down everything, it turned out to be some weird multipathd issue...
I've been using #nixos on a #RaspberryPi to run some services for a year or two now, because I really wanted declarative configuration and to lose the stress of upgrades/etc., but it turns out NixOS SUCKS on the Pi (don't @ me, you know it's true or you haven't tried). I bought a laptop-in-a-mini-form-factor-desktop x86 machine to replace it, but ... I'm exhausted just thinking about it. Installing w/ ZFS was not painless. I want private services, but I don't want to sysadmin them.
@elb just got my hands on a #RaspberryPi. I’m either going to put #k3s on there, or add it as an arm node to my #rke2 cluster. What do people recommend for running as the base OS? I’ve read #nixos isn’t so hot on a Pi.
First part of a new long-term home project coming in: an #Ubiquiti PoE+ switch to power a small #Kubernetes cluster built using #raspberrypi nodes. Going to blog about every step once it has been completed, but it is going to be a project spanning a few quarters, done bit by bit.
#tinkerbell_oss Still figuring out how to create that kernel and initramfs, and that's going to take a while. And I really wanted to have at least SOMETHING running. So this is my MVP home #k8s cluster running #k3s. Didn't bother with the SSD yet; running purely from SD cards for now.