nicd, to random

I have a home server running various services (all isolated under different user accounts). The server has a small SSD and a big external HDD, and I'd like to make external backups to my cloud account. The cloud service has a proprietary command line application to sync data, so backups would ideally be date-separated compressed tar files that it can then sync. An additional complication is the PostgreSQL and SQLite databases, which can't just be copied without risking the integrity of the files.

How would you do the backups? Do you have some existing app that you could suggest? I'm sure I can whip up some script to do it, but I wouldn't mind if there was something already made.

adelgado,
@adelgado@eu.mastodon.green avatar

@nicd
Restic is a command line tool that supports a lot of storage backends: S3, SFTP, WebDAV, etc. There are some beta UIs, but I run it in scheduled jobs using a systemd timer. It can encrypt the data. For the databases: PostgreSQL has a backup tool (pg_dump) to dump the databases to a file that you can then back up with the rest (I keep 7 local copies).
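
The approach the original poster describes (dump the databases first, then bundle a date-stamped tarball for the sync client to pick up) can be sketched roughly as below. All paths and database names are placeholders, and the actual dump commands are shown as comments since they depend on your setup:

```shell
#!/bin/sh
# Minimal sketch of a date-separated backup tarball (all paths are
# placeholders). Live PostgreSQL/SQLite files must not be copied
# directly; dump them into the staging directory first so the copies
# are consistent, e.g.:
#   pg_dump mydb          > "$staging/mydb.sql"
#   sqlite3 app.db ".backup '$staging/app.db'"
set -eu

stamp=$(date +%Y-%m-%d)
staging=$(mktemp -d)   # gather dumps and files to back up here
outdir=$(mktemp -d)    # stand-in for the cloud client's sync directory

echo "placeholder payload" > "$staging/example.txt"

# Date-separated compressed tarball, as asked for in the original post.
tar -czf "$outdir/backup-$stamp.tar.gz" -C "$staging" .
ls "$outdir"
```

A cron job or systemd timer can then run this nightly, leaving one tarball per day for the proprietary sync tool to upload.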

swetland, to random
@swetland@chaos.social avatar

Exploring restic for backing up my workstation (to local external volumes and probably "the cloud" too) at the suggestion of a friend.
https://restic.readthedocs.io/en/latest/

Maybe just in time, because...

[605358.398403] nvme0n1: I/O Cmd(0x2) @ LBA 131640760, 1024 blocks, I/O Error (sct 0x2 / sc 0x81) MORE
[605358.398428] critical medium error, dev nvme0n1, sector 131640760 op 0x0:(READ) flags 0x80700 phys_seg 88 prio class 2

Now I'm replacing the primary NVMe SSD tomorrow too...

mcdanlj, to random
@mcdanlj@social.makerforums.info avatar

I hoped that by copying the restic cache to my new computer the first backup after the move would be as fast as normal. It's not; it has to re-read all the files.

But at least Restic doesn't store redundant copies of all the files! 🎉

markstos, to Ansible
@markstos@urbanists.social avatar

Today I'm investigating why I got alerted that my backups aren't running.

The first task is to determine whether the backups really aren't running or whether there's a problem with monitoring/alerting.

I used Ansible to set up restic to back up to Backblaze.

I think I'll start by checking in Backblaze to see how fresh the backups are. 🧵

markstos,
@markstos@urbanists.social avatar

🧵 I've got a fresh backup running for the service that wasn't set up before and I'll check my metrics tomorrow.

The role I'm using is: https://github.com/roles-ansible/ansible_role_restic

The backup completed and I see fresh metrics in AWS CloudWatch Metrics, but not in the dashboard I just made. Sigh.

lpwaterhouse, to random
@lpwaterhouse@ioc.exchange avatar

Considering changing my #backup solution from #duplicity to #restic (not sure yet; I like having #pgp keys for encryption, but it's not like a long password stored in #PasswordStore wouldn't cut it). Since restic supports Windows, I might try moving a couple of relatives onto it; it makes helping them easier if I know the software. For them, however, a #GUI is likely a MUST, but what I've found so far is not too encouraging: restatic (dead), npbackup ("metrics" and other assorted niggles), resticguigx (Electron), backrest (browser-based, which makes my skin crawl for security tooling)... Does anyone know other options I missed? Or has some compelling arguments for those I mentioned?

ascherbaum, to random
@ascherbaum@mastodon.social avatar

Today is a day where I have to test the backups (or rather: the restore).

Full restore of a user's home directory. It takes about 6 hours, but so far it's going flawlessly.

genebean, to homelab
@genebean@fosstodon.org avatar

Hey @ironicbadger, @popey, & other homelab users: what tools and processes do you use to back up your ZFS-based storage? Like most homelabbers, I have a mix of file types and databases that I need to account for.

genebean,
@genebean@fosstodon.org avatar

@ironicbadger I was wondering about restic, but was avoiding mentioning tool names to see what others thought. Have you seen any particular challenges with restic for ZFS?

scy, to random
@scy@chaos.social avatar

I'd love to have a command that shows me all of the versions of a file stored in the repository. Like, I can do restic find -l some/file.txt, but this lists all copies of it in all of the snapshots it appears in, regardless of whether the file actually changed or not. I'm more interested in, basically, a version history.

There already is a proposal for it (https://github.com/restic/restic/issues/3073), and it's also related to storing file checksums (https://github.com/restic/restic/issues/1620).

scy,
@scy@chaos.social avatar

You can mount the repo, and it looks like ls -l snapshots/*/some/file.txt works reasonably fast.

Is there a tool that works on this kind of "one subdirectory per snapshot" structure, compares the file's modification time & size between each of them, and tells you in which of the directories a certain mtime and size tuple first appeared?

I've already checked whether the inode numbers exposed by restic mount help, but they change between snapshots even when the file doesn't.
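Since restic mount names snapshot directories by timestamp (so lexical glob order is roughly chronological), the "first directory where a given mtime/size pair appears" idea can be sketched as a small loop. The mount point and file path below are hypothetical, and this uses GNU `stat -c` (macOS would need `stat -f`):

```shell
# Sketch: walk the per-snapshot copies of a file under a mounted restic
# repo and print each snapshot directory where a new (mtime, size) pair
# first appears, i.e. a rough version history.
prev=""
for f in /mnt/restic/snapshots/*/some/file.txt; do
    [ -e "$f" ] || continue            # glob matched nothing
    cur=$(stat -c '%Y %s' "$f")        # mtime (epoch seconds) and size
    if [ "$cur" != "$prev" ]; then
        echo "$(dirname "$f"): mtime/size $cur"
    fi
    prev=$cur
done
```

This only detects changes via mtime and size, so it has the same blind spots as any mtime-based check; the linked issue about storing file checksums would make this exact.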

scy, to random
@scy@chaos.social avatar

#restic running under #WSL1 just stalls unless I exclude /dev.

Like, it continues updating the terminal, but it looks as if the file scanner simply stops doing anything. No CPU usage either.

I don't think I even want to know why, but I'd be surprised if it's restic's fault. 🙄

#WSL

scy, to random
@scy@chaos.social avatar

The curse of being a programmer is not only finding bugs in software you'd just want to use, but also having the urge to dig for the issue and potential fix yourself.

Today: the resticprofile wrapper doesn't apply the configured nice priority.

https://github.com/creativeprojects/resticprofile/issues/229#issuecomment-1994821012

scy, to random
@scy@chaos.social avatar

Ugh, so in one of the replies to my experiments with Borg and associated tooling, @guerda asked "well, why not restic?" and it turns out my main reasons against using restic (no compression and no exclude exceptions) are no longer valid, so I'm currently re-evaluating it.

Almost finished my writeup. And you know what? Both are really good.

My decision might boil down to performance against remote hosts and SSH vs SFTP, and whether "rclone serve restic" can save the day there.

scy,
@scy@chaos.social avatar

The main thing missing from restic's compression feature¹ is what Borg calls "auto": try compressing a small part of the file to see if it makes sense at all (i.e. if the file is compressible), and skip compressing that file if it doesn't. Right now, restic can only compress all files during a run, or none.

¹ other than documentation; it's not cool that I had to dig through the source to find the difference between "auto" and "max" compression, and which algorithm it's using at all
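The "auto" heuristic described above is simple enough to sketch in a few lines. This is a hypothetical illustration, not Borg's actual implementation: gzip stands in for the real compressor, the 4 KiB sample size and 10% threshold are made-up parameters, and it assumes the file is at least 4 KiB:

```shell
# Compress a small sample of the file; report success (exit 0) only if
# the sample shrinks by more than 10%, i.e. compression seems worthwhile.
sample_compressible() {
    sample=4096
    compressed=$(head -c "$sample" "$1" | gzip -c | wc -c)
    [ "$compressed" -lt $((sample * 9 / 10)) ]
}
```

A file of repeated text passes the check, while a file of random bytes fails it, which is exactly the distinction "auto" mode wants to make cheaply before committing to compress the whole file.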

scy,
@scy@chaos.social avatar

The readme of https://github.com/restic/rest-server, restic's HTTP server, mentions that SFTP is inefficient:

> everything needs to be transferred in chunks of 32 KiB at most, each packet needs to be acknowledged by the server

So, how much faster is rclone serve restic --stdio over SSH (because my destination doesn't support Rest Server via HTTP)?

Almost 20 %.

Uploading 5 GB over my 50 Mbps link took 16m08s via SFTP, but 13m31s via rclone.

For reference, Borg's SSH transport took 13m49s.
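
For comparison, the two restic transports benchmarked above can be written as repository specs. Host names and paths are placeholders, and the `rclone.program` override for running rclone on the remote side over SSH comes from restic's "Other Services via rclone" documentation; check the docs for your version before relying on the exact syntax:

```shell
# 1. Plain SFTP (the slow case: 32 KiB chunks, each acknowledged):
restic -r sftp:user@backuphost:/srv/restic backup ~/data

# 2. rclone speaking the restic REST protocol, started on the remote
#    host over SSH (restic runs "rclone serve restic --stdio" for you).
#    "myremote" would be an rclone remote configured on backuphost.
restic -o rclone.program="ssh user@backuphost rclone" \
       -r rclone:myremote:restic-repo backup ~/data
```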

underlap, to archlinux
@underlap@fosstodon.org avatar

After a strange crash on my arch system requiring a hard reboot, my .zsh_history file was corrupted!

restic to the rescue. Restored 5 GB from a snapshot (one hour old) in 5 mins and the .zsh_history file there was intact. 😅

larsmb, to random
@larsmb@mastodon.online avatar

vs is one of those questions that doesn't have a clear cut answer, does it?

jan, to random
@jan@kcore.org avatar

Did a comparison between #jottacloud, #hetzner #storagebox and #Mega for +- a month.

Meant for off-site #backups, storing +- 6TB.

Tools used are #restic and #rclone. All data is encrypted client side before being shipped to the cloud.

Comparing up-times, speeds (both down and up), and the correctness of the data stored. The last part was done using a VPS, and the data was found to be identical.

On speed (up and down) Jotta wins, Hetzner comes second, Mega fluctuates wildly.

On up-time Jotta and Hetzner tie; Mega went off-line for me at some points (worrisome).

On price Jotta also wins. Mega comes second, Hetzner last.

So I'll stick with Jotta. It's hosted in Europe, it's fast, it's priced decently, and support reacted fast when I asked some (noob) questions.

rgberror, to random
@rgberror@hachyderm.io avatar

if anyone uses or :

"[Restic v0.16.3] fixes a couple of bugs on Windows and in the restore command. It also works around an unlikely yet possible situation with rclone which could potentially result in data loss."

Oof

czottmann, to random
@czottmann@norden.social avatar

Giving restic (https://restic.readthedocs.io/) a try for periodically backing up my MBP. I've been using another tool for a long time, but every few days now that one seems to decide it needs to re-index everything, and having an incremental backup take 12 hours is just not cool.

c1t, to TrueNAS German

Weekend project: upgraded our TrueNAS system with a RAIDz1 pool from 4 TB to 16 TB. Easy peasy! TrueNAS with ZFS was one of the best choices for our NAS. Combined with restic it is the best backup solution.

jan, to random
@jan@kcore.org avatar

What do people use to do offsite backups with?

Currently I have a ZFS pool on which I take a daily snapshot of certain datasets, clone them, and send those to an offsite host.

Currently I have snapshots going back to mid 2022, which are being pruned according to a schedule, but I've already exceeded 5TB of storage.

I'd like something that's perhaps slightly less convoluted, but also doesn't break the bank. I'd love to use straight ZFS, but that is priced out of my budget.
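
The snapshot-and-send workflow described above has roughly this shape. Pool, dataset, snapshot, and host names are all placeholders; the very first transfer has to be a full `zfs send`, after which incremental sends only ship the delta between snapshots:

```shell
# Daily snapshot of the dataset, then an incremental send to the
# offsite box (receives into a pre-existing backup/data dataset).
today=$(date +%Y-%m-%d)
zfs snapshot "tank/data@$today"
zfs send -i "tank/data@previous-snapshot" "tank/data@$today" \
    | ssh offsite-host zfs receive -u backup/data
```

The convolution the poster complains about usually comes from managing which snapshots exist on both ends; tools built on restic or rclone trade that bookkeeping away at the cost of losing ZFS-native incrementals.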

stefan, to macos
@stefan@social.stefanberger.net avatar

I am pretty impressed by restic.

Look into it for encrypted backup and restore via a variety of target protocols.

FYI, even the latest macOS includes cron for scheduling tasks. You just need to give it (/usr/sbin/cron) file system access permissions to be able to execute a script (Cmd+Shift+G after clicking the + in the security settings).
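
Once cron has Full Disk Access as described, scheduling is a one-line crontab entry. The script path and log location below are hypothetical:

```shell
# Install with `crontab -e`: run a restic backup script nightly at 02:30,
# appending both stdout and stderr to a log file.
30 2 * * * /Users/me/bin/restic-backup.sh >> /Users/me/Library/Logs/restic.log 2>&1
```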

andreagrandi, to macos
@andreagrandi@mastodon.social avatar

I’m looking for a backup solution for macOS which can make incremental backups. Nothing linked to a specific storage provider (but nice if it supports many, plus USB disks). Something similar to “Duplicati” (but stable). Already tried Kopia (UI too complicated and doesn’t support multiple tasks). Nice if open source, but willing to pay for the best solution.

Any ideas? Boosts are very appreciated for better reach 🙏

ps: I already pay for pCloud, iCloud and Google Drive to use as storage

e38383, (edited)

@andreagrandi I used Arq in the past, but switched to restic (https://restic.net/) together with resticprofile (https://github.com/creativeprojects/resticprofile) to have a common solution for all my systems.
