I’m pretty stoked to see that there is support for doing a DNS challenge against Gandi in the library #Nix uses to obtain Let’s Encrypt certificates for use with #Nginx. This is going to be nice for a server that isn’t directly exposed to the internet.
#SelfHosting #SelfHosted #NixOS #LetsEncrypt #HomeLab
A Let's Encrypt certificate for a Nextcloud on your own network
This post is about equipping a Nextcloud on your own network with an official Let's Encrypt certificate, even though it is actually NOT reachable from the internet.
In case anyone is wondering how to "update" a valid certificate from #letsencrypt that #prosody, for some reason, claims is already expired, just run:
@ljrk@lexd0g SSL is trash because it requires value-removing middlemen, a.k.a. CAs, to work, and the inherent structures in IT cockblocked community-based CAs like #CACert in favour of digital philanthropy, a.k.a. @letsencrypt / #LetsEncrypt...
SSL is systemically bad and unfixable by design. Period.
I don't see the added value of Passkeys over API keys, login cookies, and proper login management...
Make sure your LE cert deployment logic includes serving the right intermediates that ACME should hand you, not just that same old LE intermediate you got years ago. Otherwise, there'll be breakage...
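One low-effort way to catch that kind of breakage before clients do is to count the certificates in the chain file you actually deploy. A hypothetical sketch (the helper name and paths are illustrative, not from the post), assuming Let's Encrypt's usual `fullchain.pem` layout of leaf plus at least one intermediate:

```shell
# count_certs: how many PEM certificates does this file contain?
# A chain file with only one is a red flag: you're serving the leaf
# without the intermediates ACME handed you.
count_certs() {  # usage: count_certs /path/to/fullchain.pem
    grep -c 'BEGIN CERTIFICATE' "$1"
}

# e.g., as a sanity check in your deploy logic:
#   [ "$(count_certs /etc/letsencrypt/live/example.org/fullchain.pem)" -ge 2 ] \
#       || echo "WARNING: served chain is missing intermediates"
```

This only checks the file on disk; verifying what the server actually presents on the wire needs a client-side check as well.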
Why isn't there something like #LetsEncrypt but for #email certificates, so we could stop sending unencrypted mails? The S/MIME standard is built into almost every mail client, the only thing that's missing are the free certificates. Ok, and a smart software for renewal, but first things first.
UPDATE: The service is accessible by its domain (#Ingress) as soon as I set my client machine's DNS server to my PiHole. For other systems not using my local DNS (so outside my network), the domain remains unreachable. My suspicion is an issue with the port forwards, but I don't know what's wrong with them as they are.
Note: this may not be in the exact order. If the order to any of this is important, feel free to point that out.
In #Cloudflare, I've added the hostname foo to my zone (domain), pointing to my network's public IP.
I've deployed everything you'd need including #MetalLB (which determines the dedicated Ingress private IP), #nginx-ingress (type set to LoadBalancer instead of NodePort), and #cert-manager (with both HTTP/DNS clusterissuers). If you want to take a peek at how I've deployed/configured them, more details are here: https://github.com/irfanhakim-as/orked.
I've added foo.domain to the closest thing I have to a DNS server, #PiHole, pointing it to the dedicated Ingress private IP.
I've set my router's only DNS server to the PiHole's IP.
I've set all my Kubernetes nodes' (Masters and Workers) DNS1 to the Router's IP (DNS2 set to Cloudflare's, 1.1.1.1).
I've created a port forwarding rule for HTTP on my router with 1) WAN Start/End ports set to 80, 2) Virtual Host port set to its nodePort (acquired from kubectl get svc -n ingress-nginx ingress-nginx-controller -o=jsonpath='{.spec.ports[0].nodePort}' i.e. 3XXXX), 3) Protocol set to TCP, and 4) LAN Host address set to the dedicated Ingress private IP.
I've created a port forwarding rule for HTTPS on my router with 1) WAN Start/End ports set to 443, 2) Virtual Host port set to its nodePort (acquired from kubectl get svc -n ingress-nginx ingress-nginx-controller -o=jsonpath='{.spec.ports[1].nodePort}' i.e. 3XXXX), 3) Protocol set to TCP, and 4) LAN Host address set to the dedicated Ingress private IP.
I've deployed a container service, and an Ingress for it, using #LetsEncrypt's DNS validation clusterissuer.
Current result:
Cert-manager automatically creates a certificate, which reaches a Ready: True state as expected.
The subdomain (foo.domain), however, remains unreachable: no 404 errors, nothing, just a "The connection has timed out" error.
Describing the container service's Ingress (foo.domain) shows that it's stuck at "Scheduled for sync".
#Kubernetes and #Networking experts - please tell me what I've done in any of this that was either wrong or unnecessary, or what I'm currently missing to reach my goal of getting my container accessible via foo.domain through that Ingress. I suspect I'm doing something wrong with this whole DNS mess that I literally cannot fathom. I feel like I'm insanely close to getting this thing to work, but I fear I'm also insanely close to blowing up my brain.
cc: @telnetlocalhost (thanks for bearing w me and getting me this far)
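Since the symptom above is a timeout rather than a 404 or a refusal, one way to narrow it down is a raw TCP connect from outside the network: a timeout means packets are dropped before they ever reach the Ingress, pointing at the router's port forward (or CGNAT at the ISP) rather than at Kubernetes, cert-manager, or the Ingress itself. A hypothetical helper, not from the post, assuming bash and coreutils `timeout`:

```shell
# tcp_reachable: succeed iff a TCP handshake to HOST:PORT completes
# within 5 seconds. Uses bash's /dev/tcp pseudo-device as the client.
tcp_reachable() {  # usage: tcp_reachable HOST PORT
    timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Run from OUTSIDE the network (e.g. phone hotspot):
#   tcp_reachable foo.domain 443 && echo "forward passes traffic" \
#       || echo "dropped or refused before the Ingress"
```

A quick refusal instead of a timeout would mean the forward works but nothing is answering on the LAN side, which is a different bug.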
Pretty incredible report here about what is likely lawful interception of TLS-encrypted communications (used by basically every web service), targeted at an instant messaging service popular in Russia.
the TLS communications were being re-certificated in the middle (similar to how enterprise firewalls do TLS decryption) for six months to snoop on communications.. it only got rumbled because somebody (drum roll) let the interception certificate expire by mistake.
I'd take these allegations with a grain of salt. But I must say that MitM'ing with a #LetsEncrypt certificate and then forgetting to renew it, leading to discovery, sounds like the most German law enforcement thing ever.
Looks like a transparent bridge was deployed in front of the actual server; it obtained dedicated certificates from #LetsEncrypt and has MitMed all incoming client connections since July. It was discovered because the LE certificate expired 🤦
But ... I was going to say it didn't work for my primary site, but IT DID! I just had to wait a couple of minutes (browser caching?) and... TA DA... I now have:
Hey #xmpp folks, is Prosody still the easiest way to self-host a server?
Please, please, please say it isn't
EDIT: the use case is that I have an Ubuntu box from Hetzner hosting some websites and (soon) a Nextcloud. This is what I had set up on my last dedicated server (except Owncloud back then). The new box has Plesk on it, and after just a couple of weeks of pressing buttons in Plesk and having it Do It All For Me, I've been completely spoiled.
EDIT 2: the root of the issue is that Plesk hides your .cert and .key files. It scrambles their names, removes their extensions, and dumps them all under /opt/psa/certificates/, not even in different folders for different domains. You'd expect mydomain.com.cert, but instead you get scfLios3a, all mixed up in a bucket with the eggs on top. Good luck telling Prosody where to look for those buggers.
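That scrambled bucket can be un-mixed by asking openssl who each file belongs to. A hypothetical sketch (not a Plesk feature; the directory comes from the post, the helper is mine): anything that isn't a parseable certificate, such as a key file, fails the parse and is skipped silently.

```shell
# print_cert_subjects: for every file in a directory, print the
# certificate subject (CN = the domain) followed by the file name,
# so you can tell which scrambled file is which.
print_cert_subjects() {  # usage: print_cert_subjects /opt/psa/certificates
    for f in "$1"/*; do
        openssl x509 -in "$f" -noout -subject 2>/dev/null \
            && echo "    ^ that one is $f"
    done
}
```

With the domain-to-file mapping in hand, you can point Prosody's ssl config at the right blobs, or better, symlink them to sane names somewhere Plesk won't touch.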
The small successes: wrestled with my #traefik config to automatically rewrite requests for <https://ljrk.org> to my blog at <https://www.blog.ljrk.org> (whose domain is actually a CNAME to ljrk.codeberg.page), all with appropriate certificates for the base domain through #LetsEncrypt and all.
I run all of my homelab services in Docker, with SSL certs from LetsEncrypt. It's awesome, except when the cert renews and none of those services notice.
So, I add a label of net.tenshu.ssl=true to those containers.
Then, I have a script in certbot's renewal-hooks/deploy/ which does this:
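The post doesn't include the script itself, but a hypothetical sketch of such a deploy hook might look like this (certbot runs every executable in renewal-hooks/deploy/ after a successful renewal):

```shell
#!/bin/sh
# Restart every container carrying the net.tenshu.ssl=true label
# so it picks up the freshly renewed certificate. docker ps --quiet
# prints one container ID per line; --filter narrows it to the label.
restart_labelled() {
    for cid in $(docker ps --quiet --filter "label=net.tenshu.ssl=true"); do
        docker restart "$cid"
    done
}
restart_labelled
```

A restart is the blunt-but-reliable option; services that can reload certificates on a signal could get a `docker kill -s HUP` instead.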
At https://secure.seat.es, the endpoint for vehicle API access at #Seat and #Cupra, the #LetsEncrypt #certificate has been expired since yesterday noon. You couldn't make this amateur-hour IT up. We have customers for whom we run renewal and monitoring on the side at no extra charge, because it takes so extremely little effort. 🤦♂️
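The near-zero-effort expiry monitoring that post alludes to really can be a few lines. A hedged sketch (helper name and paths are illustrative; GNU `date -d` is assumed):

```shell
# days_left: how many whole days until the certificate in FILE expires.
# openssl prints "notAfter=<date>"; cut keeps the date, date -d converts
# it to epoch seconds for the arithmetic.
days_left() {  # usage: days_left /path/to/cert.pem
    end=$(openssl x509 -in "$1" -noout -enddate | cut -d= -f2)
    echo $(( ($(date -d "$end" +%s) - $(date +%s)) / 86400 ))
}

# e.g. from cron, alert below a 14-day threshold:
#   [ "$(days_left /etc/letsencrypt/live/example.org/cert.pem)" -lt 14 ] \
#       && echo "cert expires soon"
```

Checking the live endpoint instead of the file on disk (via `openssl s_client`) catches the extra failure mode where renewal succeeded but deployment didn't.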
@CleoQc I've seen quite a few large businesses use #letsEncrypt for their main certs.
I know many complain about them being short-lived (90 days), but they forget you can automate the renewals - I still don't know why that one is an issue, although I have seen certs expire because it hadn't been done.
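For reference, "automate the renewals" amounts to one scheduled command; a hedged sketch of a system crontab entry (most distro packages install an equivalent cron job or systemd timer for you):

```
# /etc/cron.d/certbot (illustrative): certbot renew is a no-op unless a
# certificate is within 30 days of expiry, so twice a day is the usual cadence
0 */12 * * * root certbot renew --quiet
```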