Had fun this weekend working on a performance-focused proof of concept using Bunny with the #PHP #queue interop contracts. The first metrics are in, using the #RabbitMQ cluster on my #RaspberryPi #Kubernetes home cluster. (Which isn't meant for high performance. Still pleased by these numbers.)
Thanks to @jay, Bunny #PHP will support client properties in the upcoming 0.5.6 and 0.6 releases. Client properties can be used to set a human-readable name for your connection with #RabbitMQ:
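A rough sketch of what that could look like. The `client_properties` option key and the host/credentials below are assumptions on my side; check the 0.5.6/0.6 release notes for the final option name:

```php
<?php
// Sketch: giving a Bunny connection a human-readable name via client properties.
// The "client_properties" key is an assumption based on the merged PR; the host
// and credentials are made up for illustration.
require __DIR__ . '/vendor/autoload.php';

use Bunny\Client;

$client = new Client([
    'host'     => 'rabbitmq.example.com', // hypothetical broker host
    'vhost'    => '/',
    'user'     => 'guest',
    'password' => 'guest',
    'client_properties' => [
        // Shows up as the connection name in the RabbitMQ management UI,
        // which makes it much easier to spot which app owns which connection.
        'connection_name' => 'bunny-poc-producer',
    ],
]);

// $client->connect(); // requires a running broker
```

With a named connection, the RabbitMQ management UI lists `bunny-poc-producer` instead of just an address and port.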
Merged https://github.com/jakubkulhan/bunny/pull/147 earlier today into the 0.6 dev branch. You can test it out with composer require bunny/bunny:^0.6@dev. Looking forward to any feedback on it before releasing bunny/bunny 0.6 in a few months. It also includes the following other changes:
Another reason to run things such as #RabbitMQ as a cluster: if the #Docker image is missing the #arm64 arch, one pod will stay pending while the rest of the cluster continues working.
Getting close to a fully green #Bunny test suite running on @reactphp. There is one #TLS/#SSL test left to resolve before this becomes the base for 0.6.x. #php #rabbitmq #amqp
Here is my toot résumé in case anyone has open positions:
Experience: staff software engineer, #backend #webdeveloper, #python, #django, #postgresql, #terraform, #redis, #rabbitmq, #kubernetes, #aws, #gcp
I get things done and have worked with pretty much any tech out there. I learn fast and have no problem coding in other languages. I have experience leading teams, and helped grow an engineering team from 10 to 150 engineers. I know how to scale things. My code is resilient and has tests. #fedihire
"Let me partially rewrite this #PHP package," I thought, "how hard can it be?" Yup, now I'm learning all about the small details and timings of the protocol the package implements 😅.
Let it run for 12 hours without major issues. I did discover a small memory leak somewhere on the publishing side (it went from 0.1% memory usage to 2.3% in 12 hours), but I will hunt that down and fix it. The numbers in the graphs are not limited by #PHP's side, though. I learned some things about how #RabbitMQ clusters work and how that impacts performance, because those numbers can swing higher out of pure randomness.
The way the #RabbitMQ cluster is set up, it runs 3 pods across 3 nodes, each on a different switch. For the tests I've been using a classic queue, so it runs on 1 of those pods. Which is fine, but it means the queue disappears if that pod goes down. Normally I use quorum queues, so that isn't an issue. Using a classic queue also means that one pod handles everything, even if you connect to a different one; RabbitMQ handles the in-cluster routing for you.
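For reference, switching from a classic queue to a quorum queue is just a queue argument at declare time. A minimal sketch with Bunny, assuming its `queueDeclare(queue, passive, durable, exclusive, autoDelete, nowait, arguments)` signature (the host and queue name are made up):

```php
<?php
// Sketch: declaring a quorum queue so it survives the loss of a single pod.
// Host and queue name are hypothetical; verify the queueDeclare() parameter
// order against the Bunny version you run.
require __DIR__ . '/vendor/autoload.php';

use Bunny\Client;

$client  = (new Client(['host' => 'rabbitmq.example.com']))->connect();
$channel = $client->channel();

// Quorum queues must be durable and are replicated across cluster nodes,
// so the queue keeps working when the pod hosting its leader goes down.
$channel->queueDeclare('orders', false, true, false, false, false, [
    'x-queue-type' => 'quorum',
]);
```

The `x-queue-type` argument is the standard RabbitMQ way to pick the queue type; leaving it out gives you a classic queue.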
It will impact performance, however: if you connect both the producer and the consumer to the pod the queue lives on, you can do 17.7K messages a second instead of 5.3K.
Interestingly enough, #RabbitMQ will flow-control the producer depending on how fast it can process the messages, write them to disk, and so on. Which is honestly pretty cool, because initially the messages-per-second numbers looked low to me, especially since I've seen it handle 70K messages per second in the past.
One more thing: #RabbitMQ seems to throttle the producer as long as there are still a bunch of messages in the queue waiting to be picked up. The moment the queue is empty, the producer gets more bandwidth 🙀.
A final note on this thread (probably): I've seen it peak at 25K messages a second, and decided to do some digging because that was 8K messages a second more than previously seen.
While digging into #kubernetes metrics, it kinda stood out why the throughput is what it is: the node the #RabbitMQ pod was running on started to max out its CPU:
First part of a new long-term home project coming in: an #Ubiquiti PoE+ switch to power a small #Kubernetes cluster built from #raspberrypi nodes. Going to blog about every step once it has been completed, but it is going to be a project spanning a few quarters, done bit by bit.
Here is a reason why I love #kubernetes: I just made the #RabbitMQ replicas run in different zones. Downstairs, where 2 of the control plane nodes are, is a zone; my desk, with the other control plane node, is a zone; and the rest of the cluster behind me is a zone. So I now forced a broker into each zone, but also told it not to land on the same node as #homeassistant.
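A rough sketch of how that scheduling could look in the RabbitMQ pod spec, combining a topology spread constraint across zones with anti-affinity against the Home Assistant pod. All labels and zone keys here are assumptions; match them to your own node and pod labels:

```yaml
# Sketch: spread RabbitMQ replicas across zones, keep them off the
# Home Assistant node. Label names below are hypothetical.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: home-assistant      # assumed label on the Home Assistant pod
        topologyKey: kubernetes.io/hostname
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone   # nodes must carry a zone label
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: rabbitmq
```

With `maxSkew: 1` and `DoNotSchedule`, the scheduler refuses to place two brokers in the same zone while another zone has none, which is what forces one broker per zone.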
This will get me some more usage out of those nodes. Because honestly, 3 nodes as control plane for a home cluster? That is overkill.