wyri, to random
@wyri@haxim.us

Guess who did an emergency secondary server rebuild today after an interaction between a few things took out their primary cluster. Yup, this guy 🤬

irfan, (edited) to Kubernetes

UPDATE: The service is accessible by its domain (foo.domain) as soon as I set my client machine's DNS server to my PiHole. For other systems not using my local DNS (i.e. outside my network), the domain remains unreachable. My suspicion is an issue with the port forwards, but I don't know what's wrong with them as they stand.
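
A rough sketch of how I'd check which leg is failing; the PiHole address and IPs below are placeholders for my actual values, not something to copy verbatim:

    # What my PiHole serves vs what the world sees (placeholder resolver IPs):
    dig +short foo.domain @192.168.0.2     # PiHole -> should return the dedicated Ingress private IP
    dig +short foo.domain @1.1.1.1         # public resolver -> should return my public IP

    # What the router's forwarding rules should be pointing at (LB IP and nodePorts):
    kubectl get svc -n ingress-nginx ingress-nginx-controller \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}{" "}{.spec.ports[*].nodePort}{"\n"}'

    # From a machine outside the network (e.g. a phone on mobile data):
    curl -vk --connect-timeout 5 https://foo.domain/   # a timeout here points at the forwards/NAT rather than DNS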


Note: these steps may not be in the exact order. If the order of any of this is important, feel free to point that out.

  1. At my DNS provider, I've added the hostname foo to my zone (domain), pointing to my network's public IP.

  2. I've deployed everything you'd need, including a load balancer (which determines the dedicated Ingress private IP), ingress-nginx (service type set to LoadBalancer instead of NodePort), and cert-manager (with both HTTP and DNS ClusterIssuers). If you want to take a peek at how I've deployed/configured them, more details are here: https://github.com/irfanhakim-as/orked.

  3. I've added foo.domain to the closest thing I have to a DNS server, my PiHole, pointing to the dedicated Ingress private IP (there's a sketch of this record after the list).

  4. I've set my router's only DNS server to the PiHole's IP.

  5. I've set all my Kubernetes nodes' (Masters and Workers) DNS1 to the Router's IP (DNS2 set to Cloudflare's, 1.1.1.1).

  6. I've created a port forwarding rule for HTTP on my router with 1) WAN Start/End ports set to 80, 2) Virtual Host port set to its nodePort (acquired from kubectl get svc -n ingress-nginx ingress-nginx-controller -o=jsonpath='{.spec.ports[0].nodePort}' i.e. 3XXXX), 3) Protocol set to TCP, and 4) LAN Host address set to the dedicated Ingress private IP.

  7. I've created a port forwarding rule for HTTPS on my router with 1) WAN Start/End ports set to 443, 2) Virtual Host port set to its nodePort (acquired from kubectl get svc -n ingress-nginx ingress-nginx-controller -o=jsonpath='{.spec.ports[1].nodePort}' i.e. 3XXXX), 3) Protocol set to TCP, and 4) LAN Host address set to the dedicated Ingress private IP.

  8. I've deployed a container service, and an Ingress for it, using cert-manager's DNS validation ClusterIssuer (roughly sketched below).
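
The Ingress from step 8 boils down to something like this; the namespace, service name, TLS secret, and ClusterIssuer name here are made-up placeholders, not my actual ones:

    # Roughly equivalent to the Ingress I applied for the container service
    kubectl create ingress foo \
      --namespace default \
      --class nginx \
      --rule "foo.domain/*=foo-service:80,tls=foo-tls" \
      --annotation "cert-manager.io/cluster-issuer=letsencrypt-dns"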

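For reference, the record from step 3 is nothing fancy: in PiHole it's just a Local DNS record, which (assuming Pi-hole v5's layout) ends up as a line like this, with the IP being a placeholder for the dedicated Ingress private IP:

    # Local DNS record on the PiHole (placeholder IP)
    echo "192.168.0.240 foo.domain" | sudo tee -a /etc/pihole/custom.list
    pihole restartdns
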
Current result:

  • Cert-manager creates a certificate automatically, and it is in a Ready: True state as expected.

  • The subdomain (foo.domain), however, remains unreachable: no 404 errors, nothing. Just a "The connection has timed out" error.

  • Describing the container service's Ingress (foo.domain) shows that it's stuck at "Scheduled for sync".

Experts - please tell me what I've done in any of this that was either wrong or unnecessary, or what I'm missing to reach my goal of getting my container accessible via foo.domain through that Ingress. I suspect that I might be doing something wrong with this whole DNS mess that I literally cannot fathom. I feel like I'm insanely close to getting this thing to work, but I fear I'm also insanely close to blowing up my brain.

cc: @telnetlocalhost (thanks for bearing with me and getting me this far)

ricci, to Kubernetes
@ricci@discuss.systems

Okay, so let me tell you about my doorbell, from a networking perspective.

When you push the button by the door, it sends a message over the wireless mesh network in my house. It probably goes through a few hops, getting relayed along the way by the various Zigbee light switches and "smart outlets" I have.

Once it makes it to my utility closet, it's received by a Zigbee-to-USB dongle, through a USB hub (a simple tree network) plugged into an SFF PC. From there, it gets fed into zigbee2mqtt, which, as the name implies, publishes it to my local MQTT broker.
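
(If you want to picture it, that message is just something you could watch with a plain MQTT client; the broker host and device name below are stand-ins, not my real ones:)

    # Watching the button press land on the broker (hostname/topic are stand-ins)
    mosquitto_sub -h mqtt.example.lan -t 'zigbee2mqtt/doorbell_button' -v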

The mqtt broker is in the small Kubernetes cluster of nodes I run in my utility closet. To get in (via a couple of switch hops), it goes through a proxy-ARP type service that advertises the IP address for the mqtt endpoint to the rest of my network, then passes the traffic to the appropriate container via a veth device.
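
(The effect, in Kubernetes terms, is just a Service with an external address the rest of the LAN can reach; the name and namespace below are stand-ins:)

    # The mqtt Service as the cluster sees it (name/namespace are stand-ins)
    kubectl get svc -n home mosquitto -o wide   # TYPE LoadBalancer, EXTERNAL-IP reachable from the LAN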

I have HomeAssistant, running in the same Kubernetes cluster, subscribed to these events. Within Kubernetes, the message goes through the CNI plugin that I use, Flannel. If the message has to pass between hosts, Flannel encapsulates it in VXLAN, so that it can be directed to the correct veth on the destination host.
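
(You can actually see that on a node: with Flannel's default vxlan backend, the VXLAN device shows up as flannel.1:)

    # Flannel's VXLAN device on a node (default vxlan backend)
    ip -d link show flannel.1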

Because I like NodeRed for automation tasks more than HomeAssistant, your press of the doorbell takes another hop within the Kubernetes cluster (via a REST call) so that NodeRed can decide whether it's within the time of day I want the doorbell to ring, etc. If we're all good, NodeRed publishes an mqtt message (more VXLANs, veths, etc.).
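
(That last publish is morally equivalent to something like this, with a made-up topic and payload:)

    # NodeRed's "go ring" message, give or take (topic/payload are made up)
    mosquitto_pub -h mqtt.example.lan -t 'doorbell/ring' -m 'ding-dong'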

(Oh and it also sends a notification to my phone, which means another trip through the HomeAssistant container, and leaving my home network involves another soup of acronyms including VLANs, PoE, QoS, PPPoE, NAT or IPv6, DoH, and GPON. And maybe it goes over 5G depending on where my phone is.)

Of course something's got to actually make the "ding dong" sound, and that's another Raspberry Pi that sits on top of my grandmother clock. So to get there, the message hops through a couple of Ethernet switches and my home WiFi, where it gets received by a little custom daemon I wrote that plays the sound via an attached audio board. Oh but wait! We're not quite done with networking, because the sound gets played through PulseAudio, which is done through a UNIX domain socket.
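
(The daemon's last step is basically a paplay call, which is what goes over that PulseAudio UNIX socket; the file name is a stand-in:)

    # Playing the chime through PulseAudio (file name is a stand-in)
    paplay /home/pi/sounds/ding-dong.wav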

SO ANYWAY, that's why my doorbell rarely works and why you've been standing outside in the snow for five minutes.
