In this week's episode of our Cloud Commute podcast, we talked to @anderseknert from #Styra about access policy management at cloud scale using OPA and OPA Enterprise / Styra DAS in #Kubernetes.
Controlling access to services and resources, inside and outside of Kubernetes, is more important than ever. OPA enables easy, unified policy creation and management and eases the bookkeeping with policy as code.
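For anyone new to OPA, "policy as code" just means rules live in small, versioned Rego files; a minimal sketch (package name and input fields are illustrative, not from the episode):

```rego
package httpapi.authz

# Deny by default; access is granted only when a rule below matches.
default allow = false

# Allow anonymous reads of public paths.
allow {
    input.method == "GET"
    startswith(input.path, "/public/")
}

# Allow admins to do anything.
allow {
    input.user.role == "admin"
}
```

Because the rules are plain files, they can be reviewed, diffed, and tested like any other code.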
#kubernetes turns ten today, y'all. i can’t believe it. it feels like yesterday i drank from the firehose. happy birthday to the project that changed my life forever.
I know I'm not in the majority, but one of my main gripes with #kubernetes is really more a failure of #golang, which never managed to get extensions/plugins right. Eventually they just gave up on the idea altogether. That means every extension point in Kubernetes involves calling a service rather than implementing an interface. And besides the performance/availability impact of that, it also means you're limited in what you can extend, unless you feel like maintaining a k8s fork.
10 years since Google made the first commit in #Kubernetes. And ten years of Google mostly running something else for their own services. I don’t work that much on infra these days, and Kubernetes sure has a fuckton of warts. But I’ve gotta say, compared to what we had before, and the way it transformed the industry… “game changer” is for once not an exaggeration.
I’m interested in running #FrankenPHP (i.e., #Caddy) as the app server for a website. The website has multiple domains pointing to it, and I plan to run this in a #Kubernetes cluster. Can someone point to any docs or blog posts that show how others have set up similar configurations with Caddy or FrankenPHP, especially with regard to how Caddy magically configures HTTPS for the domains in this kind of setup?
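Not a full answer, but the Caddyfile side of this usually amounts to listing every domain in one site block; a sketch with placeholder domains and paths (untested — and in Kubernetes the Service in front of the pods still needs to expose ports 80/443 so ACME challenges can reach Caddy):

```Caddyfile
{
	# FrankenPHP registers itself as Caddy's PHP handler.
	frankenphp
}

# One site block, several domains: Caddy obtains and renews a
# certificate for each listed name automatically via ACME.
example.com, www.example.com, example.org {
	root * /app/public
	encode zstd gzip
	php_server
}
```

The "magic" HTTPS is just Caddy seeing the domain names in the config and provisioning certificates for them on first request.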
If you do anything with #opensource, #openstandards or #opensystems, you should be there. In November you'll have another chance, because we organize a conference twice a year.
We're an association, so you can become a member! See: https://nluug.nl
Having to recycle your #kubernetes nodes because the #HPA isn't scaling up your #Mastodon #sidekiq so you're now lagging 15K jobs behind looks like this in the #UI:
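For what it's worth, the usual remedy is an autoscaling/v2 HPA keyed on a Sidekiq metric instead of CPU; a sketch that assumes something like prometheus-adapter is exposing a per-pod queue-latency metric (the metric name, deployment name, and thresholds below are all made up):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mastodon-sidekiq
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mastodon-sidekiq
  minReplicas: 2
  maxReplicas: 12
  metrics:
    - type: Pods
      pods:
        metric:
          name: sidekiq_queue_latency_seconds   # hypothetical custom metric
        target:
          type: AverageValue
          averageValue: "30"                    # scale up when latency exceeds 30s
```

Scaling on queue latency reacts to a backlog directly, where CPU-based scaling often never triggers for I/O-bound Sidekiq workers.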
Had fun this weekend working on a performance-focused proof of concept using Bunny with the #PHP #queue interop contracts. The first metrics are in, using the #RabbitMQ cluster on my #Raspberrypi #Kubernetes home cluster. (Which isn't meant for high performance. Still pleased by these numbers.)
#AlpineLinux is generally my default go-to for #kubernetes #container image bases when I'm writing stuff from scratch. I love how lightweight and simple it is, so I'll often start there until I can prove that I need something more complex.
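That "start from Alpine until proven otherwise" approach tends to look like this; a minimal sketch with a made-up binary name:

```Dockerfile
FROM alpine:3.20

# Pull in only what the app actually needs; --no-cache keeps the layer small.
RUN apk add --no-cache ca-certificates

# Hypothetical statically linked binary built in CI or an earlier build stage.
COPY myapp /usr/local/bin/myapp

# Run as a non-root user.
RUN adduser -D -u 10001 app
USER app

ENTRYPOINT ["/usr/local/bin/myapp"]
```

Starting this small makes it obvious when a dependency forces you to a heavier base, rather than carrying one by default.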
On that note - need to move some #Kubernetes data to S3? Here's an #Alpine image I pushed today that includes #s3cmd and zip and gets built/tested/pushed weekly by our Jenkins farm at my $DAYJOB. Perfect for a sidecar container or CronJob backup, etc. Enjoy! (And if this is useful to you, thank the Alpine and s3cmd folks, because they by far did most of the work.)
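Used as a CronJob, a backup with an image like that could look roughly like this (the image path, bucket, and volume names are all placeholders, and s3cmd still needs credentials from a mounted config or Secret):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: s3-backup
spec:
  schedule: "0 3 * * *"          # nightly at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: example.org/alpine-s3cmd:latest   # placeholder image
              command:
                - sh
                - -c
                - |
                  zip -r /tmp/backup.zip /data &&
                  s3cmd put /tmp/backup.zip s3://my-bucket/backups/
              volumeMounts:
                - name: data
                  mountPath: /data
                  readOnly: true
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: app-data    # placeholder PVC
```

The same container also works as a sidecar if you'd rather trigger the upload from the main application.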
First part of a new long-term home project coming in: an #Ubiquiti PoE+ switch to power a small #Kubernetes cluster built from #raspberrypi nodes. Going to blog about every step once it's complete, but it's going to be a project spanning a few quarters, done bit by bit.
So instead I went with this tower; we have the same one downstairs powering the living room #lego display. Plenty of space, and nothing will block anything else. Plus it comes with 4 #USB power ports we'll use for (more) lego display lights and the planned #ESPHome sensors on the stairs to the home office. Just need to pick a time and date, as doing this will take that entire switch down and create a network split between two control-plane #kubernetes nodes on one side and one on the other.
The fun part is that over the past week I've already occasionally found myself running with 3 nodes when I needed 4. I'd tweaked my home-brew #kubernetes cluster autoscaler a bit too aggressively, and it took out a node when it shouldn't have.
Did a small but significant upgrade to the temporary #kubernetes node autoscaler. It will now turn on as many nodes as there are pending pods, or as many nodes as are available, whichever is lower. This saves me from waiting up to 6 minutes for all nodes to come online to handle #GitHub Actions workflows. And #talos boot timing is pretty consistent. These were all turned on within a second of each other: