Another question: I set up the Immich Docker image and I'm using Mullvad VPN; however, Mullvad removed in-app port forwarding last year. Is there a way to use split tunneling to route Immich through another VPN and set up secure remote access from outside the home network?
I'm not sure Intel has any worthwhile CPUs unless you are getting them used.
Currently the E-cores are mostly trash and not all that “efficient”; letting a P-core turbo up and get the task completed often uses less overall power.
Secondly, Intel is lying about its heat output and power use. Everything from 10th gen up is a power hog if you don't limit the performance to well below “stock” settings.
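The usual ways to do that limiting are the PL1/PL2 settings in the BIOS, or at runtime through Linux's RAPL powercap interface. A rough sketch (the exact sysfs path and constraint index vary by platform, and the value does not survive a reboot):

# check which constraint is the long-term (PL1) limit
cat /sys/class/powercap/intel-rapl:0/constraint_0_name
# cap PL1 to 65 W (the value is in microwatts)
echo 65000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw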
This is a good matchup between the i5-13500 and the R5 7600, which is the most interesting comparison IMO. The R5 7600 seems to be about $15 less expensive for just the CPU and uses about three quarters of the power, which will be a greater savings over time vs Intel. AMD motherboards also still seem to trend a bit lower in cost than Intel ones.
So overall it's a good question. If you can get a used 13500, or one under $150, then it's probably worth it, but at retail prices the 7600 will cost less to buy and less to own while performing similarly.
I am using a TerraMaster D6-320 connected via USB-C.
It has been running ZFS disks for Proxmox via a GEEKOM A5 mini PC since February. It has lost contact with the drives twice so far, more than a month apart each time, so I don't know the cause. I am mostly happy with the setup, but of course it is annoying when it fails.
The TDP is bullshit on the N100. It really means nothing: the chip can pull closer to 14 W at idle. It's still good, but don't rely on the numbers too much if power draw is important to you.
The thing I don't like about the N100 is that it only supports single-channel RAM, and the naming makes no sense, as the N97 outperforms it.
While true, the 6 W idle can be hit with proper tuning; I just wouldn't count on it. Still, from what I've seen with mine, it overall uses less power than a Pi 4/5 at the plug. I'm pretty happy with the one running my network services. I'll be going AMD next round, given the pstate driver improvements coming up, once this one outlives its usefulness though.
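For anyone chasing that low idle figure, the usual starting point is powertop. A sketch (its changes don't persist across reboots, so re-apply them at boot, e.g. via a systemd unit, if they help):

# apply all of powertop's "Good" tunable suggestions in one go
sudo powertop --auto-tune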
I use netdata (the FOSS agent only, not the cloud offering) on all my servers (physical, VMs…) and stream all metrics to a parent netdata instance. It works extremely well for me.
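The streaming setup only takes a couple of lines on each side. A minimal sketch, assuming the parent listens on the default port 19999; the hostname and API key below are placeholders:

# child: /etc/netdata/stream.conf
[stream]
    enabled = yes
    destination = parent.example.lan:19999
    api key = 11111111-2222-3333-4444-555555555555

# parent: /etc/netdata/stream.conf
[11111111-2222-3333-4444-555555555555]
    enabled = yes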
Other solutions are too cumbersome and heavy on maintenance for me. You can query netdata from prometheus/grafana [1] if you really need custom dashboards.
I guess you wouldn't be able to install it on the router/switch, but there is an SNMP collector which should be able to query bandwidth info from the network appliances.
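The go.d SNMP collector's job config is short. A sketch, with a placeholder address and community string; depending on the collector version you may also need to define the charts/OIDs by hand, so check the collector docs:

# /etc/netdata/go.d/snmp.conf
jobs:
  - name: switch
    hostname: 192.168.1.1
    community: public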
Gonna check it out!
Is it easy to set up automatic responses to the alerts, e.g. restarting a service if it isn't answering requests in a timely manner?
Have you used it together with Windows Servers too?
It should be possible using “script to execute on alarm = /your/custom/remediation-script” (see netdata.cloud/…/agent-notifications-reference). I have not experimented with this yet, but soon will (I'm implementing a custom notification channel for specific alarms).
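In the meantime, a rough sketch of what I'd try: netdata also lets individual alarms override the global script via an exec line in a health.d file. Everything below (alarm name, context, thresholds, script path) is hypothetical, so treat it as a starting point rather than a tested recipe:

# /etc/netdata/health.d/myservice.conf
template: myservice_slow          # hypothetical alarm name
      on: httpcheck.response_time # hypothetical context from the httpcheck collector
  lookup: average -1m unaligned
   every: 10s
    warn: $this > 500
    crit: $this > 2000
    exec: /etc/netdata/remediate.sh  # overrides the default alarm-notify.sh for this alarm

#!/bin/sh
# /etc/netdata/remediate.sh — netdata passes the alarm details as arguments
systemctl restart myservice.service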
restarting a service if it isn’t answering requests
I’d rather find the root cause of the downtime/malfunction instead of blindly restarting the service, just my 2 cents.
I'm very happy with Syncthing: you can configure how you want the sync to work (e.g. one-way sync, two-way sync, etc.), the web GUI is pretty good, and it's not that hard to set up. I got the idea from this video back when I initially set up my seedbox, have been using this solution ever since, and haven't encountered any issues.
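The one-way vs two-way behavior is just the folder type, which you'd normally set in the web GUI under the folder's Advanced settings. As a sketch of where it lives in config.xml (id and path are placeholders):

<!-- type is one of: sendreceive (two-way), sendonly, receiveonly -->
<folder id="seedbox" label="Seedbox" path="/data/seedbox" type="sendonly">
    <!-- device entries and other options omitted -->
</folder>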
An explanation from one of the maintainers of why they removed the toggle from the UI and try to hide it from users, since it's going to be deprecated eventually:
Docker compose has a default “feature” of prefixing the names of things it creates with the name of the directory the yml is in. It could be that the name of your volume changed as a result of you moving the yml to a new folder. The old one should still be there.
I think my problem is that I didn't have the proper .env file the first time I started it up after moving the yml file, and that's why Immich thought it needed to create a new database from scratch. Does that make sense? I think it has really overwritten those.
It's very unexpected behavior for Docker Compose, IMHO. When you say the volume is named “foo”, it creates a volume named “directory_foo”. Same with all the container names.
You do have some control over that by setting a project name. So you could re-use your old volumes with the new directory name.
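For example (the project name here is hypothetical; either approach keeps the prefix stable regardless of which directory the yml lives in):

# one-off: pin the project name on the command line
docker compose -p myoldproject up -d

# or persistently, via the top-level "name" key in the compose file:
#   name: myoldproject

Setting the COMPOSE_PROJECT_NAME environment variable works too.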
Or if you want to migrate from an old volume to a new one, you can create a container with both volumes mounted and copy your data over with something like this:
docker run -it --rm -v old_volume:/old:ro -v new_volume:/new ubuntu:latest
$ apt update && apt install -y rsync
$ rsync -rav --progress --delete /old/ /new/  # be *very* sure to have the order of these two correct!
$ exit
For the most part applications won't “delete and re-create” a data source if they find one. The logic is: “did I find a DB? If so, use it; else create a fresh one.”
Awesome, take this close call as a kind reminder from the universe to back up!
Borg allows incremental backups from any number of local folders to any number of remote locations. Borgmatic is a wrapper for it that also automates those incremental Borg backups.
I have a second server that runs the nold360/borgserver container, which works as a Borg repository.
I also buy storage at BorgBase, so every hour an incremental backup goes to both.
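For anyone wanting to replicate this, a minimal borgmatic config sketch along those lines (older section-style format; all paths and repo URLs are placeholders):

# /etc/borgmatic/config.yaml
location:
    source_directories:
        - /srv/configs
    repositories:
        - ssh://borg@borgserver/./backups              # the nold360/borgserver box
        - ssh://abc123@abc123.repo.borgbase.com/./repo # BorgBase
retention:
    keep_hourly: 24
    keep_daily: 7

Borgmatic itself has no scheduler, so the hourly run is a cron line (“0 * * * * root borgmatic”) or a systemd timer.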
The other day I blew away a config folder by accident and restored it with no sweat in 2 mins.
Do you plan to compress video (which is generally already in a compressed format) when saving to a remote location? I don't see the use case for it: either you use lossless compression and don't shrink it in any meaningful way, or you re-encode to a different format and lose quality. The second option is simpler to achieve by re-encoding before sending the files out.
I plan to re-encode on the fly before I push the files to the cloud. This would have to happen in real time though, since I don't have the space to cache files (and my upload speeds are slow).
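One way to do that without any local caching is to pipe the encoder straight into the uploader. A sketch, with hypothetical filenames and host; whether it keeps up in real time depends on the encoder settings versus your upload speed:

# re-encode to H.265 and stream the result over SSH without touching local disk
ffmpeg -i movie.mkv -c:v libx265 -crf 26 -c:a copy -f matroska - \
  | ssh user@cloudbox 'cat > /backups/movie.mkv'

If the destination is object storage, piping into rclone rcat works as the receiving end instead of ssh.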