Has anyone ever seen the effect where a top-level recursive #rsync shows no more changes, but then going into the subdirectories and doing individual rsyncs still produces a lot of diffs?
We're trying to figure out why that happens, and coming up empty.
Both FS are ext4, and the source server has been quieted (the FS even mounted r/o to make sure).
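For reference, this is roughly how we're comparing, with placeholder paths rather than our real ones:

    rsync -ain /src/ /dst/     # dry run, itemize what the quick size+mtime check flags
    rsync -ainc /src/ /dst/    # same, but force full checksum comparison

One thing we still need to rule out is trailing-slash differences: "rsync -a src dst" and "rsync -a src/ dst/" compare different trees, which could make the top-level and per-subdirectory runs look inconsistent.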
Static sites, an RSS-feed-to-email digest, reverse ssh tunnels (for exposing a local socket under a public domain name), a blog engine, a pastebin, and a couple more.
As the saying goes, "Real BOFH use tar and rsync!"
The blog article is an excellent treatment of using tar along with SSH to effect a reliable backup plan and schedule.
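For anyone who hasn't tried it, the basic pattern looks roughly like this; the host names and paths here are made up, not taken from the article:

    # push: stream a compressed archive of /etc to the backup host
    tar czf - /etc | ssh backup@example.com 'cat > /backups/etc-$(date +%F).tar.gz'
    # pull: archive a remote directory onto the local machine
    ssh backup@example.com 'tar czf - /var/www' > www-$(date +%F).tar.gz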
Another couple of great favorite go-to solutions of mine have always been Duplicity and Duply, for those not comfortable rolling their own scripts with SSH, tar, and/or rsync :batman:
Thank you very much for sharing this @nixCraft !!!
It is the "world #backup day", at least according to WorldBackupDay.com. I like the idea of having such a day, to serve as another nudge and a reminder to make and check backups, though WorldBackupDay.com is awkward, does not mention rsync in its software section. The "com" TLD looks suspicious, too, but it is better than nothing (except for potential private data leaks with online backup services).
I use primarily encrypted external HDDs (#ZFS or #LUKS with #ext4) and #rsync for personal backups, including rsync with "--dry-run --checksum" for scrubbing and checking before synchronization; quite happy that such tools are available, even though they are usually taken for granted, as are many other neat FLOSS tools we use regularly. Planning to add a USB stick to the list of storage devices, since it should be less fragile mechanically (even though less reliable otherwise).
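The scrub step looks roughly like this; the paths are placeholders rather than my actual layout:

    rsync -ain --checksum /home/ /mnt/backup/home/   # dry run, compare by content, change nothing
    rsync -a --delete /home/ /mnt/backup/home/       # the actual sync once the dry run looks sane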
@defanor Given that my usual restoration use case is "I want that specific file/directory back", I'll particularly applaud backups that output as readable filesystems. It'll take a bit longer if I ever want to reset my entire machine to last Tuesday, but it's much more useful until then.
I swear, using #pcmanfm's standard UI copy and paste onto USB drives is infuriatingly slow. Frustrating to see the file dialog get stuck at 99% for many minutes when the copying itself seemed to take 30 seconds.
@10leej yeah, after some thought, this seems to be it indeed. I was too tired last night to think of it! Sucks that this makes such a user-friendly and very familiar interface (Ctrl+C/V from a file manager) ultimately fall flat. Probably the first reaction of newbies who don't know better would be to dismiss it as "Linux sucks lol".
@kzimmermann It's been an issue for years, but at the same time it works well enough. So every time a bug is opened you'll just get a response of "PRs/patches welcome".
#rsync'ing a ~10GB #poudriere package directory /usr/local/poudriere/data/packages/foo elsewhere gets me a ~130GB target directory? What am I missing? #ZFS compression is turned on for both and compressratio is not really great, so that's not it. Using rsync -a, so nothing really special #FreeBSD
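One thing I still need to check is hard links: if the package tree is full of them, plain rsync -a (which does not imply -H) would expand every link into a separate full copy on the target. Something along these lines should tell, with the target path being a placeholder:

    # count hard-linked files in the source tree
    find /usr/local/poudriere/data/packages/foo -type f -links +1 | wc -l
    # re-run preserving hard links (and sparse files, just in case)
    rsync -aH --sparse /usr/local/poudriere/data/packages/foo/ /target/foo/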
I've got some corruption on a 4TB Btrfs filesystem. I have all files in #BorgBackup safe and ready to restore, but... I can't figure out HOW to restore them.
Ok, I found a spare 4TB drive I can use, so I started copying all the files from the damaged filesystem. Hopefully it'll skip the corrupted files, leaving holes I will be able to fill with Borg backup...
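If I'm reading the Borg docs right, filling those holes should then be something like this, with the repo path and archive name as placeholders:

    borg list /path/to/repo                                   # find the archive name
    cd /mnt/new4tb
    borg extract /path/to/repo::archive-2024-01-01 some/dir   # restore a subtree into the current directory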
I'm looking for a #Backup script/tool that can copy data from A to B.
If at all possible it should NOT use #rsync and #Hardlinks.
Incremental.
Ideally with a small database or the like...
Other than that, I'm open to anything.
I want to use it commercially at the company, so a suitable license would be good.
After straightening out the permissions, I happily and successfully work with rsync on top of underlying filesystem snapshots (btrfs, ZFS, SAN snapshots, ...).
@schenklklopfer:
Should the data be copied somewhere else and also be accessed there directly, or do you need an off-site snapshot from which the files can be restored?
If "only in an emergency" is enough: Borg backup repos can also be mounted.
Syncthing and other similar software are great, but sometimes you just want a simple, lightweight, and fast way to mirror things, especially if the remote and/or local machines have limited resources.
Here is my ultra-minimalistic, set-and-forget (I hope) 2-in-1 rsync-based mirror script that can be run from cron and called manually at the same time. If there is a problem with the transfer, you even get an email.
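The idea boils down to something like this sketch; the host, paths and mail address are placeholders rather than the real ones, and it assumes a working local mail setup:

    #!/bin/sh
    # mirror.sh - run from cron or by hand; sends an email if rsync fails
    SRC="/srv/data/"
    DST="backup@mirror.example.com:/srv/data/"
    LOG="$(mktemp)"
    if ! rsync -a --delete "$SRC" "$DST" >"$LOG" 2>&1; then
        mail -s "mirror failed on $(hostname)" admin@example.com < "$LOG"
    fi
    rm -f "$LOG"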
Okay, got a Q for #rsync pros out there: If you rsync something from origin to dest and it starts running, and DURING that rsync, files it thinks it needs "appear" in the destination (from another process), does rsync still compute and check to see if it needs them?
I have an rsync that takes 8 hours (on-site NAS to off-site NAS), and the off-site one may get the contents from another location first, so the off-site one ends up already having a couple of files that rsync was going to send.
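Either way, I figure a cheap safety net is a quick second pass once everything has settled: as long as the other route preserves sizes and mtimes, rsync's default quick check just skips those files, so the extra run costs little. Paths below are placeholders:

    rsync -av /nas/share/ offsite:/nas/share/   # long first pass
    rsync -av /nas/share/ offsite:/nas/share/   # quick second pass, only sends what's still missing or different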
I want to hear what you folks use to sync blogs/webpages over SFTP.
So, the setup is that you have a bunch of files (HTML, CSS, etc) on your local computer, and you want to update your hosting with the latest version, having only SFTP access.
I am mounting it locally with sshfs and then using "rsync -vuz --delete --recursive <source> <destination>", but I feel like there is a better way (or better rsync options).
@sergi I use rsync a lot, but I generally find rclone’s command syntax more straightforward and easier to understand if that’s what you’re looking for. It supports SFTP so you can skip the sshfs mount if you want and you can use a straightforward “sync” command to “make source and dest identical, modifying destination only” https://rclone.org/commands/rclone_sync/ (+ https://rclone.org/sftp/)
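Roughly like this; the remote name and paths are placeholders, and the remote is set up once with rclone config:

    rclone config                                          # create an "sftp" remote, e.g. "myhost"
    rclone sync ./public myhost:public_html --dry-run -P   # preview what would change
    rclone sync ./public myhost:public_html -P             # make the remote match the local directory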
@abcdw@efraim Don't know if that's ironic or not :-) wget over ssh is definitely not an option; wget supports only HTTP, HTTPS, and FTP.
Also, resuming a download with rsync is not quite the same as wget with an HTTP Range header. The Range header simply means "send the part of the resource from this byte to that one", which can be used to resume broken downloads, or to download the same file in parallel if Content-Length is present.
And rsync employs its "delta transfer algorithm", which is essentially an efficient way to synchronize changes between files. So, in addition to resuming a broken download, it can sync two large files by transferring only the differences and keeping the matching parts intact, for instance.
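Side by side, the two kinds of resume look roughly like this, with the URL and paths as placeholders:

    wget -c https://example.com/big.iso        # HTTP Range: just fetch and append the missing tail
    rsync -avP big.iso user@host:/uploads/     # delta transfer; --partial (part of -P) keeps interrupted files so the next run can reuse them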
So, y'know, you don't need scp at all; rsync offers a superset of scp's capabilities :D Even the devs agree:
> The scp protocol is outdated, inflexible and not readily fixed. We recommend the use of more modern protocols like sftp and rsync for file transfer instead.
@sqrtminusone@efraim Not ironic, but my use case was uploading a big file to a server where I have ssh access, and rsync allowed me to continue the upload from an interrupted scp, which is what I was sharing here in the post :)
Possibly a foolish question for the Mastodon mind, but with #Dropbox now willing to trawl my data for #AI purposes, is the concept of a non-invasive cloud drive impossible? Or should I stop worrying and learn to love #rsync over #ssh again?
@mattkenworthy I migrated from Dropbox to Nextcloud, no regrets - it just works and does most things better than Dropbox. I'm using a managed Nextcloud provider, so I don't have to keep up with updates myself.