I have a home server with various services running (all isolated with different user accounts). The server has a small SSD and a big external HDD, but I'd like to make external backups to my cloud account. The cloud service has a proprietary command-line application to sync data, so backups would ideally be date-separated compressed tar files that it can then sync. An additional complication is the PostgreSQL and SQLite databases, which can't just be copied without risking breaking the integrity of the files.
How would you do the backups? Do you have some existing app that you could suggest? I'm sure I can whip up some script to do it, but I wouldn't mind if there was something already made.
@nicd
Restic is a command line tool that supports a lot of storage backends: S3, SFTP, WebDAV, etc. There are some beta UIs, but I run it in scheduled jobs using a systemd timer. It can encrypt the data. For the databases: PostgreSQL has a backup tool to dump the databases to a file that you can then back up with the rest (I keep 7 local copies). #restic #backup #postgresql
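A rough sketch of what that scheduled job could look like (the repo URL, password file, and paths are all placeholders; this is an outline, not my exact unit):

```shell
#!/usr/bin/env bash
# Sketch of a nightly backup job: dump PostgreSQL first, then let restic
# back up everything, including the fresh dumps. Run e.g. from a systemd
# timer. All paths and the repository URL below are placeholders.
set -euo pipefail

export RESTIC_REPOSITORY="sftp:backup@example.com:/srv/restic-repo"
export RESTIC_PASSWORD_FILE="/root/.restic-password"

DUMP_DIR="/var/backups/postgres"
mkdir -p "$DUMP_DIR"

# Take a consistent dump instead of copying live database files
pg_dumpall --clean > "$DUMP_DIR/all-databases.sql"

# Back up the data directory plus the dumps
restic backup /srv/data "$DUMP_DIR"

# Keep 7 daily snapshots (as mentioned above) and drop unreferenced data
restic forget --keep-daily 7 --prune
```

The same idea works for SQLite with its `.backup` command instead of `pg_dumpall`.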
Exploring restic for backing up my workstation (to local external volumes and probably "the cloud" too) at the suggestion of a friend. https://restic.readthedocs.io/en/latest/
Maybe just in time, because...
[605358.398403] nvme0n1: I/O Cmd(0x2) @ LBA 131640760, 1024 blocks, I/O Error (sct 0x2 / sc 0x81)
[605358.398428] critical medium error, dev nvme0n1, sector 131640760 op 0x0:(READ) flags 0x80700 phys_seg 88 prio class 2
Now I'm replacing the primary NVMe SSD tomorrow too...
I hoped that by copying the #restic cache to my new computer the first backup after the move would be as fast as normal. It's not; it has to re-read all the files.
But at least Restic doesn't store redundant copies of all the files! 🎉
Considering switching my #backup solution from #duplicity to #restic (not sure yet; I like having #pgp keys for encryption, but it's not like a long password stored in #PasswordStore wouldn't cut it). Since restic supports Windows I might try moving a couple of relatives onto it; it makes helping them easier if I know the software. For them, however, a #GUI is likely a MUST, but what I've found so far is not too encouraging: restatic (dead), npbackup ("metrics" and other assorted niggles), resticguigx (Electron), backrest (browser-based, which makes my skin crawl for security tooling)... Does anyone know of other options I missed? Or have some compelling arguments for those I mentioned?
Hey @ironicbadger, @popey, & other #ZFS #homelab users: what tools and processes do you use to back up your ZFS-based storage? Like most homelabbers, I have a mix of file types and databases that I need to account for.
[#Linux #SelfHosting]
@ironicbadger I was wondering about #restic, but was avoiding mentioning tool names to see what others thought. Have you seen any particular challenges with restic for ZFS?
I'd love to have a #restic command that shows me all of the versions of a file stored in the repository. Like, I can do restic find -l some/file.txt, but this lists all copies of it in all of the snapshots it appears in, regardless of whether the file actually changed or not. I'm more interested in, basically, a version history.
You can mount the repo, and it looks like ls -l snapshots/*/some/file.txt works reasonably fast.
Is there a tool that works on this kind of "one subdirectory per snapshot" structure, compares the file's modification time & size between each of them, and tells you in which of the directories a certain mtime and size tuple first appeared?
I've already checked whether the inode numbers exposed by restic mount help, but they change between snapshots even when the file doesn't.
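Since inodes don't help, mtime and size it is. A sketch of the kind of tool I'm after, written against any "one subdirectory per snapshot" layout such as a restic mount's snapshots/ directory (function name and output format are my own invention; assumes GNU stat and snapshot directories that sort chronologically by name):

```shell
#!/usr/bin/env bash
# Sketch: walk one-directory-per-snapshot trees in name order and print
# only the snapshots where the file's (mtime, size) tuple first appears,
# i.e. a rough version history.
# Usage: version_history <snapshots_root> <relative/path/to/file>
version_history() {
  local root="$1" rel="$2" prev="" cur
  for snap in "$root"/*/; do
    local f="$snap$rel"
    [ -e "$f" ] || continue
    # GNU stat: %Y = mtime as epoch seconds, %s = size in bytes
    cur=$(stat -c '%Y %s' "$f")
    if [ "$cur" != "$prev" ]; then
      printf '%s\t%s\n' "$(basename "$snap")" "$cur"
      prev="$cur"
    fi
  done
}
```

Note this only detects changes visible in mtime/size; a content change that preserves both would slip through, which is exactly the trade-off restic's own change detection makes by default.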
The curse of being a programmer is not only finding bugs in software you'd just want to use, but also having the urge to dig for the issue and potential fix yourself.
Today: The #restic wrapper #resticprofile doesn't apply the configured nice priority.
Ugh, so in one of the replies to my experiments with #BorgBackup and associated tooling, @guerda asked "well, why not #restic?" and turns out, my main reasons against using restic (no compression and no exclude exceptions) are no longer valid, so I'm currently re-evaluating it.
Almost finished my writeup. And you know what? Both are really good.
My decision might boil down to performance against remote hosts and SSH vs SFTP, and whether "rclone serve restic" can save the day there.
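For the record, the rclone-as-transport idea is roughly this (remote name and port are placeholders; just a command sketch, not benchmarked yet):

```shell
# Expose any rclone remote as a restic REST server on localhost
rclone serve restic --addr 127.0.0.1:8000 remote:restic-repo &

# Point restic at it via the rest: backend instead of SFTP
restic -r rest:http://127.0.0.1:8000/ init
restic -r rest:http://127.0.0.1:8000/ backup ~/documents
```

The appeal is that the REST protocol avoids SFTP's round-trip-heavy chatter, which is where I suspect the remote-host performance difference comes from.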
The main thing missing from #restic's compression feature¹ is what #BorgBackup calls "auto": try compressing a small part of the file to see if it makes sense at all (i.e. if the file is compressible), and skip compressing that file if it doesn't. Right now, restic can only compress all files during a run, or none.
¹ other than documentation; it's not cool that I had to dig through the source to find the difference between "auto" and "max" compression, and which algorithm it's using at all
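To save the next person the source dive: as far as I can tell, this is the whole story (command sketch; repo path is a placeholder):

```shell
# restic >= 0.14 with repository format v2 supports compression.
# The global --compression flag takes "auto" (the default), "max", or "off".
# Both "auto" and "max" use zstd -- "auto" at the default level, "max" at a
# higher level. There is no Borg-style per-file compressibility probe;
# a run compresses every file or none.
restic -r /srv/restic-repo backup --compression max /home/user
restic -r /srv/restic-repo backup --compression off /home/user
```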
Tools used are #restic and #rclone. All data is encrypted client side before being shipped to the cloud.
Comparing up-times, speeds (both down and up), and the correctness of the data stored. The last part was done using a VPS, and the data was found to be identical.
On speed (up and down) Jotta wins, Hetzner comes second, Mega fluctuates wildly.
On up-time Jotta and Hetzner tie; Mega went off-line for me at some points (worrisome)
On price Jotta also wins. Mega comes second, Hetzner last.
So I'll stick with Jotta. It's hosted in Europe, it's fast, it's priced decently, and support reacted fast when I asked some (noob) questions.
"[Restic v0.16.3] fixes a couple of bugs on Windows and in the restore command. It also works around an unlikely yet possible situation with rclone which could potentially result in data loss."
Giving #restic (https://restic.readthedocs.io/) a try for periodically backing up my MBP. I've been using #Arq for a long time, but every few days now it seems to decide it needs to re-index everything, and having an incremental backup take 12 hours is just not cool
Weekend project: upgraded our #trueNAS system with #ZFS and a RAIDz1 pool from 4 TB to 16 TB. Easy peasy! #FreeBSD with ZFS was one of the best choices for our NAS. Combined with #restic it is the best backup solution.
Currently I have snapshots going back to mid 2022, which are being pruned according to a schedule, but I've already exceeded 5TB of storage.
I'd like something that's perhaps slightly less convoluted, but also doesn't break the bank. I'd love to use straight ZFS #replication but that is priced out of my budget.
Look into it for encrypted backup and restore via a variety of target protocols.
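The basic shape of it, in case it helps (repo URL and paths are made up; restic encrypts everything by default):

```shell
# Initialize an encrypted repository over SFTP -- swap the URL for
# s3:, rest:, b2:, etc. to change the target protocol
restic -r sftp:user@nas.local:/srv/restic init

# Back up, then restore a subtree from the latest snapshot
restic -r sftp:user@nas.local:/srv/restic backup ~/projects
restic -r sftp:user@nas.local:/srv/restic restore latest \
    --target /tmp/restore --include ~/projects/notes
```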
FYI even the latest #macOS includes #cron for scheduling tasks. You just need to give it (/usr/sbin/cron) file system access permissions to be able to execute a script. (Cmd+shift+G after clicking the + in the security settings).
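For example, a crontab entry like this (via crontab -e; the script path and log location are placeholders) runs a backup script nightly, once /usr/sbin/cron has been granted disk access:

```shell
# m  h  dom mon dow  command
# Run the backup script every night at 02:30 and append output to a log
30 2 * * * /Users/me/bin/backup.sh >> /Users/me/Library/Logs/backup.log 2>&1
```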
I’m looking for a #backup solution for #MacOS which can make incremental and #encrypted backups. Nothing linked to a specific #cloud (but nice if it supports many, plus USB disks). Something similar to “Duplicati” (but stable). Already tried Kopia (UI too complicated and doesn’t support multiple tasks). Nice if open source but willing to pay for the best solution.
Any ideas? Boosts are very appreciated for better reach 🙏
ps: I already pay for pCloud, iCloud and Google Drive to use as storage