
vegetaaaaaaa

@vegetaaaaaaa@lemmy.world


vegetaaaaaaa, (edited)

I’m not using a private CA for my internal services, just plain self-signed certs. But if I had to, I would probably keep it as simple as possible at first: generate the CA cert with ansible, then use ansible to automate signing all my certs with that CA. The openssl_* modules make this easy enough. It’s not very different from my current self-signed setup; the benefit is that I’d only have to trust a single CA certificate and bypass a single certificate warning, instead of getting a warning for every single certificate/domain.
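In plain openssl terms (which the ansible openssl_* modules wrap), that setup looks roughly like this. All names and paths are made up for the example:

```shell
# work in a scratch directory
cd "$(mktemp -d)"

# 1. generate the CA key and a self-signed CA certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=Homelab CA"

# 2. generate a key + CSR for one internal service
openssl req -newkey rsa:2048 -nodes \
  -keyout service.key -out service.csr -subj "/CN=service.home.example.org"

# 3. sign the CSR with the CA (add -extfile for subjectAltName in real use)
openssl x509 -req -in service.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 825 -out service.crt

# 4. check the chain
openssl verify -CAfile ca.crt service.crt   # -> service.crt: OK
```

Only ca.crt needs to be distributed to clients; the services each get their own key/cert pair.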

If I wanted to rotate certificates frequently, I’d look into setting up an ACME server like [1], and point mod_md or certbot to it, instead of the default Let’s Encrypt endpoint.
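Pointing certbot at a custom endpoint is just the --server flag; the directory URL below is a hypothetical private ACME CA, not a real endpoint:

```shell
# Sketch only - the domain and ACME directory URL are placeholders.
request_cert_from_private_ca() {
  certbot certonly --standalone \
    -d service.internal.example.org \
    --server https://acme.internal.example.org/acme/acme/directory
  # certbot must trust the private CA's own TLS certificate: either install
  # it in the system trust store first, or point REQUESTS_CA_BUNDLE at it.
}
```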

This still does not solve the problem of how to get your clients to trust your private CA. There are dozens of different mechanisms to get a certificate into a trust store. On Linux machines it’s easy enough (add the CA cert to /usr/local/share/ca-certificates/*.crt, run update-ca-certificates), but other operating systems use different methods (ever tried adding a custom CA cert on Android? It’s painful. Do other OSes even allow it?). Then some apps (web browsers, for example) use their own CA cert store, separate from the OS’s… And what about clients you don’t have admin access to? etc.
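On Debian-likes the two steps above are short enough to script; the filename is arbitrary but must end in .crt:

```shell
# Sketch - the certificate filename is a placeholder.
install_ca_debian() {
  sudo cp homelab-ca.crt /usr/local/share/ca-certificates/homelab-ca.crt
  sudo update-ca-certificates   # regenerates /etc/ssl/certs
}
```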

So for simplicity’s sake, if I really wanted valid certs for my internal services, I’d use subdomains of an actual, purchased (more like rented…) domain name (e.g. service-name.internal.example.org), and get the certs from Let’s Encrypt (using the DNS challenge, or the HTTP challenge on a public-facing server, then sync the certificates to the actual servers that need them). It’s not ideal, but still better than the certificate racket system we had before Let’s Encrypt.

vegetaaaaaaa, (edited)

I’m curious why you’re not running your own CA since that seems to be a more seamless process than having to deal with ugly SSL errors for every website

It’s not, it’s another service to deploy, maintain, monitor, backup and troubleshoot. The ugly SSL warning only appears once, I check the certificate fingerprint and bypass the warning, from there it’s smooth sailing. The certificate is pinned, so if it ever changes I would get a new warning and would know something shady is going on.

every time you rotate the certificate.

I don’t really rotate these certs, they have a validity of several years.

I’m wondering how different the process is between running an ACME server and another daemon/process like certbot to pull certificates from it, vs writing an ansible playbook/simple shell script to automate the rotation of server certificates.

  • Generating self-signed certs is ~40 lines of clean ansible [1], 2 lines of apache config, and one click to get through the self-signed cert warning, once.
  • Obtaining Let’s Encrypt certs is 2 lines of apache config with mod_md and the HTTP-01 challenge. But it requires a domain name in the public DNS, and port forwarding.
  • Obtaining certs from a custom ACME CA is 3 lines of apache config (the extra line is to change the ACME endpoint) and a 100k LOC ACME server daemon running somewhere with its own bugs, documentation, deployment and upgrade management tooling, config quirks… and you still have to manage certs for this service. It may be worth it if you have a lot of clients who don’t want to see the self-signed cert warning and/or worry about their private keys being compromised and thus needing to rotate the certs frequently (you still need to protect the CA key…)
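For reference, the apache fragments behind those three bullets might look roughly like this (domain names, paths and the private ACME directory URL are placeholders):

```apache
# (1) plain self-signed certs:
SSLCertificateFile    /etc/ssl/myservice/cert.pem
SSLCertificateKeyFile /etc/ssl/myservice/key.pem

# (2) Let's Encrypt via mod_md:
MDomain service.example.org
MDCertificateAgreement accepted

# (3) same as (2), plus the extra line pointing at a private ACME CA:
MDCertificateAuthority https://acme.internal.example.org/acme/acme/directory
```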

likely never going to purchase Apple products since I recognise how much they lock down their device

hear hear

there are not that many android devices in the US with custom ROM support. With that said, I do plan to root all of my Android devices when KernelSU mature

I bought a cheap refurbished Samsung, installed LineageOS on it (Europe, but I don’t see why it wouldn’t work in the US?), without root - I don’t really need root, it’s a security liability, and I think the last time I tried Magisk it didn’t work. The only downside is that I have to manually tap Update for F-Droid updates to run (fully unattended requires root).

I’m currently reading up on how to insert a root and client certificate into Android’s certificate store, but I think it’s definitely possible.

I did it on that LineageOS phone using adb push; I can’t remember exactly how (did it require root? I don’t know). It works, but you get a permanent warning in your notifications telling you that “the network may be monitored” or something. And some apps will still ignore the added cert.
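The system-store trick boils down to naming the PEM file after its subject hash; a sketch with a throwaway cert (the push itself needs adb root and a writable /system, or a Magisk module on modern Android, which is part of why it’s painful):

```shell
cd "$(mktemp -d)"
# throwaway CA cert just for the example (hypothetical CN)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.pem -subj "/CN=Homelab CA"

# Android names system CA files after the OpenSSL "old" subject hash
hash=$(openssl x509 -subject_hash_old -noout -in ca.pem)
cp ca.pem "$hash.0"

# not executed here - requires a device with a remounted /system:
echo "adb push $hash.0 /system/etc/security/cacerts/"
```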

vegetaaaaaaa,

Any of awesome-selfhosted.net/…/personal-dashboards.html

Personally I use a static HTML page [1]. Ansible updates it automatically every time a service is added/removed.

vegetaaaaaaa,

safeguard our work from being used for profit by someone that did not contribute anything to it

AGPLv3 exists for this exact reason choosealicense.com/licenses/agpl-3.0/

vegetaaaaaaa,

get at least two drives in RAID or ZFS

Why do you think OP needs high availability?

In case of disk failure, simply redeploy and restore backups. Having an automated and fast redeployment procedure (and working backups) is a must-have, disk fault tolerance is not, IMHO (unless you have specific requirements).

vegetaaaaaaa, (edited)

This is also what I do, weekly. It’s one of the cheapest (cheap SATA drive and USB enclosure, pay once) and most reliable methods, and arguably one of the most secure (the offsite backup drive is also offline most of the time).

A simple script on my desktop sends a desktop notification reminding me to plug in the USB drive; once it’s mounted, backups are pulled from my servers to the external disk, then I get a notification to unplug the drive and store it away. There’s about 15 minutes every week where all backups are in the same place. To be extra safe, use 2 drives and rotate them each week.
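A sketch of that routine; the hostnames, mount point and notification command are assumptions, not the author’s actual setup:

```shell
# Weekly pull-backup sketch. Pull (not push): the servers never get
# credentials for the offsite drive.
BACKUP_MNT=/mnt/offsite-usb

remind_plug() {
  notify-send "Backups" "Plug in the offsite USB drive"
  until mountpoint -q "$BACKUP_MNT"; do sleep 10; done
}

pull_backups() {
  for host in alpha.example.org bravo.example.org; do
    rsync -a --delete "backup@${host}:/var/backups/" "${BACKUP_MNT}/${host}/"
  done
}

remind_unplug() {
  sync
  umount "$BACKUP_MNT"
  notify-send "Backups" "Done - unplug and store the drive away"
}
```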

Help with audiobookshelf downloading deleted podcasts

I’m having issues with audiobookshelf continuously downloading old, unsubscribed podcasts. No matter how many times I delete them they come back and download hundreds of episodes. Each one also says “no rss feed” when I look for more info so I’m not sure how they are even being downloaded. I have even tried deleting the...

Help with Rootless Podman and remote volume mounts

I’m trying to switch my existing Docker setup to a rootless podman one. The main issue tripping me is how to mount remote volumes. My file server is a different machine, so I mount the data into docker containers via a NFS volume. However, I can’t do this with podman since the normal user doesn’t have the rights for...

vegetaaaaaaa,

It’s not possible to mount NFS shares without root (a rootful container would work, but I don’t recommend it). Docker allows it because it implicitly runs as root. The cleanest solution is to mount the share from the host’s fstab and use a bind mount into the container.
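Concretely, that looks something like this; the server name, paths and image are examples only:

```shell
# /etc/fstab on the HOST (hypothetical server/export):
#   fileserver.lan:/export/media  /mnt/media  nfs  defaults,_netdev  0  0

run_with_nfs_data() {
  sudo mount /mnt/media   # the host (root) mounts the NFS share...
  # ...and the rootless container only sees an ordinary bind mount:
  podman run -d --name jellyfin \
    -v /mnt/media:/media:ro docker.io/jellyfin/jellyfin
}
```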

vegetaaaaaaa,

I’m missing the point about a reverse-proxy being an SSL termination endpoint

Yes, that’s usually one of the jobs of the reverse proxy. Communication between the RP and an application container running on the same host is typically unencrypted. If you’re really paranoid about a rogue process intercepting HTTP connections between the RP and the application container, set up separate container networks for each application, and/or use unix sockets.

ChatGPT suggested I use Kubernetes

wtf…

vegetaaaaaaa,

Is the fact that I mentioned ChatGPT setting a wrong impression?

Not at all, but the fact that it suggested jumping straight to k8s for such a trivial problem is… interesting.

how using Unix sockets would improve my security posture here

Unix sockets enforce another layer of protection by requiring the user/application reading from or writing to them to have a valid UID or be part of the correct group (the traditional Linux/Unix permission system). With plain localhost HTTP networking, a rogue application could listen on the loopback interface and/or exploit a race condition to bind the port and pretend to be the “real” application. Network namespaces (which container management tools use to create isolated virtual networks) mostly solve this problem. Again, basic unencrypted localhost networking is fine for the vast majority of use cases/threat models.

vegetaaaaaaa, (edited)

how do I programmatically utilise sockets for containers to communicate amongst each other?

Sockets are filesystem objects, similar to files. So for 2 containers to access the same socket, the container exposing the socket must export it to the host filesystem via a bind mount/volume, and the container that needs read/write access on this socket must reach it the same way, also via a bind mount. The UID or group of the user accessing the socket must be allowed to do so by traditional unix permissions.
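A sketch of that bind-mount arrangement; the image names and host directory are hypothetical:

```shell
share_socket_between_containers() {
  mkdir -p /srv/myapp/run
  # container A creates its socket under /run/app inside the container,
  # a directory bind-mounted from the host:
  podman run -d --name app   -v /srv/myapp/run:/run/app localhost/backend
  # container B mounts the same host directory and can open the socket,
  # provided its UID/GID passes the usual unix permission check:
  podman run -d --name proxy -v /srv/myapp/run:/run/app localhost/rproxy
}
```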

Again, I personally do not bother with this, I run the reverse proxy directly on the host, and configure it to forward traffic over HTTP on the loopback interface to the containers. [1] [2] [3] and many others lead me to think the risk is acceptable in my particular case. If I was forced to do otherwise, I would probably look into plugging the RP into the appropriate podman network namespaces, or running it on a dedicated host (VM/physical - this time using SSL/TLS between RP and applications, since traffic leaves the host) and implementing port forwarding/firewalling with netfilter.

I have a few services exposing a unix socket (mainly php-fpm) instead of an HTTP socket on localhost; in this case I just point the RP at these sockets (e.g. ProxyPass unix:/run/php/php8.2-fpm.sock). If the php-fpm process was running in a container, I’d just export /run/php/php8.2-fpm.sock from the container to /some/place/myapp/php.sock on the host, and target this from the RP instead.
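As an apache config fragment, targeting the containerized php-fpm socket at that host path would look roughly like (docroot is an assumption):

```apache
# mod_proxy_fcgi, socket exported from the container to the host:
ProxyPassMatch "^/(.*\.php(/.*)?)$" "unix:/some/place/myapp/php.sock|fcgi://localhost/var/www/html"
```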

You need to think about what actual attacks could actually happen, what kind of damage they would be able to do, and mitigate from there.

how I can automate the deployment of such proxies along with the pods

That’s a separate question. I use ansible for all deployment/automation needs - when it comes to podman I use the podman_container and podman_generate_systemd modules to automate deployment of containers as systemd services. Ansible also configures my reverse proxy to forward traffic to the container (simply copy files in /etc/apache2/sites-available/…; a2enconf; systemctl reload apache2). I have not used pods yet, but there is a podman_pod module. A simple bash script should also do the trick as a first step.
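Roughly what the podman_generate_systemd workflow automates, done by hand for a rootless user; the container name and image are examples:

```shell
containerize_as_service() {
  podman create --name myapp -p 8080:80 docker.io/library/nginx:alpine
  podman generate systemd --new --files --name myapp  # writes container-myapp.service
  mkdir -p ~/.config/systemd/user
  mv container-myapp.service ~/.config/systemd/user/
  systemctl --user daemon-reload
  systemctl --user enable --now container-myapp.service
  loginctl enable-linger "$USER"  # keep user services alive after logout
}
```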

vegetaaaaaaa,

Could you detail how you would do this?

I would re-read all docs about podman networking and the different network modes, experiment with the systemd PrivateNetwork option, re-read some basics about network namespaces, etc ;) I have no precise guide as I’ve never attempted it, so I would do some research, trial and error, take notes, etc, which is the stage you’re at.

Edit: cloudnull.io/…/35d5ccf03e4e6cbd03c3c45528775ab3, …

Could you confirm if one can reach one’s containers on the loopback address in a separate network namespace on podman?

I think each pod uses its own network namespace [1]. You should check the docs and experiment (ip netns, ip addr, ip link, ip route…).

I think it’s doable, but pretty much uncharted territory - at least the docs for basic building blocks exist, but I’ve never come across a real world example of how to do this. So if you go this way, you will be on your own debugging, documenting and maintaining the system and fixing it when it breaks. It will be an interesting learning experiment though, hope you can document and share the outcome. Good luck!

vegetaaaaaaa,

No, they can’t, that’s the point of namespaces.

vegetaaaaaaa,

You technically can bind ports <1024 in unprivileged containers: echo 'net.ipv4.ip_unprivileged_port_start=0' | sudo tee /etc/sysctl.d/50-unprivileged-ports.conf; sudo sysctl --system. Though this will allow any user to bind ports below 1024, so it’s not very clean.

Another workaround is to redirect port 80 to 8080 (or other) through iptables and have your proxy listen on this port. sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080. Same thing for 443.

As far as I know granting the CAP_NET_BIND_SERVICE capability to /usr/bin/podman does not work.

Also, the podman-compose implementation is still incomplete, and I prefer using systemd units to start and manage containers. Check https://manpages.debian.org/bookworm/podman/podman-generate-systemd.1.en.html

vegetaaaaaaa,

frequently updated

Not something I’d want on my server :) Partly joking; their lifecycle makes sense if you stay on a major.minor release. Though I find 2 years of security support a bit short - Debian LTS is usually around 5 years. Not an excuse to wait until the last moment to upgrade, but I find it more comfortable than just 2 years.

One thing to watch for is that Alpine uses musl as its libc, and many programs expect glibc, so you might run into obscure bugs. I find it good as a base for OCI images (again, there are edge cases), but I wouldn’t use it for a general-purpose server.

vegetaaaaaaa,

Not “self-hosted” (it doesn’t even need a server, just a mobile app), but this is Free/Open-Source and works well: f-droid.org/en/packages/org.isoron.uhabits/

vegetaaaaaaa,

the computer has room for one drive only

The case might, but are you sure there isn’t a second SATA port on the motherboard? In which case, and assuming you’re using LVM, it would be easy to plug the 2 drives simultaneously while the case is open, create the appropriate partitions/LVM pvcreate/vgextend on the new drive, pvmove everything to the new drive, vgreduce/pvremove to remove the old drive, done.
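The steps above, sketched as a shell function. The device names and volume group name are assumptions about the actual machine, so this is an outline to adapt, not something to run blindly:

```shell
# Assumes: old disk /dev/sda (PV on /dev/sda2, VG "vg0"), new disk /dev/sdb.
migrate_to_new_disk() {
  parted --script /dev/sdb mklabel gpt mkpart primary 0% 100% set 1 lvm on
  pvcreate /dev/sdb1           # initialize the new disk as an LVM PV
  vgextend vg0 /dev/sdb1       # add it to the existing volume group
  pvmove /dev/sda2 /dev/sdb1   # move all extents off the old disk (slow)
  vgreduce vg0 /dev/sda2       # remove the old disk from the VG
  pvremove /dev/sda2           # wipe the LVM label; old disk can be pulled
}
```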

vegetaaaaaaa,

Without more information, I’d say you’re looking for podman run --volume /mnt/sdb/:/path/inside/your/container. Check the manpage for podman run

vegetaaaaaaa,

For HTTP/web server logs: goaccess using the free db-ip database will give you country-level geolocation info.

For other connections (SSH etc.), set up a Graylog instance, send all your logs to it using rsyslog over TLS, set up pipelines to extract IP addresses from the messages, and set up the GeoIP plugin (graylog.org/…/how-to-set-up-graylog-geoip-configu…). It’s not a small task though. My ansible roles for goaccess and graylog.

vegetaaaaaaa,

Graylog and elasticsearch might fit on that, depending on how much is already used, and if you set the heap sizes at their bare minimum… but it will perform badly, and it’s overkill anyway if you just need this simple stat.

I would look into writing a custom log parser for goaccess (goaccess.io/man#custom-log) and let it parse your bridge logs. This is how the geolocation section looks in the HTML report (each continent can be expanded and it will reveal the stat by country).

https://lemmy.world/pictrs/image/d81f3b8a-da4a-4d99-81cc-7f8a6f93b47b.png

I update the report every hour via cron, as I don’t need real-time stats (but goaccess can do that).
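A sketch of such a setup; the log path and the custom --log-format string are placeholders that would need to match the real bridge log format:

```shell
# crontab entry (hourly): 0 * * * * /usr/local/bin/update-goaccess-report

update_goaccess_report() {
  goaccess /var/log/mybridge/connections.log \
    --log-format='%d %t %h' \
    --date-format='%Y-%m-%d' --time-format='%T' \
    --geoip-database=/etc/goaccess/dbip-country-lite.mmdb \
    -o /var/www/html/bridge-report.html
}
```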

vegetaaaaaaa,

how networks work

tcpipguide.com/free/index.htm and lookup terms/protocols on wikipedia as you go.

But as others said, I think you would learn faster if you pick a specific project and try to implement it from scratch. A matrix server is a nice project, but it will have you dig into matrix-specific configuration which is not particularly relevant if you’re just trying to learn system administration and networking.

I would start with a more “basic” project, make sure you get the fundamentals right, and document or automate (shell scripts, ansible…) every step:

  • install a virtualization platform (hypervisor)
  • create a VM and install Debian inside it, using LVM for disk management, and a static IP address
  • practice with creating/restoring snapshots, add/remove hardware and resources (vCPUs, RAM, disk storage) from the VM
  • set up an SSH server and client using SSH keys
  • set up a firewall with some basic rules (e.g. only accept SSH connections from a specific IP address and DROP all other SSH connections, forward all HTTPS connections to another IP address…)
  • set up monitoring with a few basic rules (alert if the SSH server is down, alert if disk space or free memory is low…)
  • automate security updates
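As one possible concrete version of the firewall step above, using ufw as the frontend (one of several options; the admin IP is a placeholder):

```shell
basic_firewall() {
  ufw default deny incoming
  ufw default allow outgoing
  ufw allow from 192.0.2.10 to any port 22 proto tcp  # SSH from one admin IP only
  ufw enable
}
```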

Then you can work your way up to more complex services, and look up security hardening measures for your existing setup (as always, document or automate all steps).

To give you some ideas, you can find ansible roles I wrote for these tasks here. The common role implements most of what I listed above. The monitoring role implements the monitoring part. There are a few other roles for middleware/infrastructure services (web server/reverse proxy, DNS server, database services, VPN…) and a few more for applications (matrix+element, gitea, jellyfin, mumble…). Start at tasks/main.yml for each role, follow the import_tasks statements from there, and read at least the name: of each task to get a good overview of what needs to be done, then implement it yourself from a shell at first. If you break your setup, restore the initial VM snapshot and start again (at this point you’ve automated or documented everything, so it should not take more than a few minutes, right?).

Each of these tasks will require you to research the available software and decide for yourself which best fits your requirements (which hypervisor? which firewall frontend? which monitoring solution? etc.).
