Has anyone ever seen the effect where a top-level recursive #rsync shows no more changes, but running individual rsyncs inside the subdirectories still turns up a lot of differences?
We're trying to figure out why that happens, and coming up empty.
Both FS are ext4, source server has been quieted (and FS even mounted r/o to make sure)
Static sites, RSS feed to email digest, reverse ssh tunnels (for exposing local socket with public domain name), blog engine, pastebin and couple more.
As the saying goes, "Real BOFH use tar and rsync!"
The blog article is an excellent treatment of using tar along with SSH to effect a reliable backup plan and schedule.
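The tar-over-SSH pattern the article covers can be sketched roughly like this (the host and path here are hypothetical placeholders, not from the article):

```shell
# Stream a compressed tar archive from a remote host over SSH to a local file.
# backup@example.com and var/www are placeholders for your own host and path.
ssh backup@example.com 'tar -C / -czf - var/www' > "www-$(date +%F).tar.gz"

# Sanity-check the archive without extracting it:
tar -tzf "www-$(date +%F).tar.gz" > /dev/null && echo "archive OK"
```

Streaming to stdout means nothing is staged on the remote disk, which is handy when the source box is short on space.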
A couple of other favorite go-to solutions of mine have always been Duplicity and Duply, for those not comfortable rolling their own scripts with SSH, tar, and/or rsync :batman:
Thank you very much for sharing this @nixCraft !!!
It is the "world #backup day", at least according to WorldBackupDay.com. I like the idea of having such a day, to serve as another nudge and a reminder to make and check backups, though WorldBackupDay.com is awkward and does not mention rsync in its software section. The "com" TLD looks suspicious, too, but it is better than nothing (except for potential private data leaks with online backup services).
I use primarily encrypted external HDDs (#ZFS or #LUKS with #ext4) and #rsync for personal backups, including rsync with "--dry-run --checksum" for scrubbing and checking before synchronization; quite happy that such tools are available, even though they are usually taken for granted, as are many other neat FLOSS tools we use regularly. Planning to add a USB stick to the list of storage devices, since it should be less fragile mechanically (even though less reliable otherwise).
I swear, using #pcmanfm's standard UI copy and paste onto USB drives is infuriatingly slow. Frustrating to watch the file dialog sit stuck at 99% for many minutes when the copy itself appeared to finish in 30 seconds.
Trying to figure out the fastest way to add a replica to a #mongo cluster with a 3 TB database. The options are:
take a snapshot of an existing node, then catch up on the changes;
bring up a new server, copy all the files in the mongodb directory over with #rsync, then catch up on the changes.
I tried the second option, but #rsync has been running for several hours and still hasn't synced even half of the files.
Looks like I'll have to add the new replica from scratch using the first option. #AWS
In the contest of #rsync vs #mongodb replication, the latter won. The node managed to sync roughly 3 TB of data in under 16 hours, while rsync, despite starting earlier, never finished the replication.
#rsync'ing a ~10GB #poudriere package directory /usr/local/poudriere/data/packages/foo elsewhere gets me a ~130GB target directory? What am I missing? #ZFS compression is turned on for both and compressratio is not really great, so that's not it. Using rsync -a, so nothing really special #FreeBSD
I've got some corruption on a 4TB Btrfs filesystem. I have all files in #BorgBackup safe and ready to restore, but... I can't figure out HOW to restore them.
I'm looking for a #Backup script/tool that can copy data from A to B.
If possible, it should NOT use #rsync or #Hardlinks.
Incremental.
Ideally with a small database or something like that...
Other than that, I'm open to anything.
I want to use it commercially at work, so a suitable license would be good.
Syncthing and other similar software are great, but sometimes you just want a simple, lightweight, and fast way to mirror things, especially if the remote and/or local machines have limited resources.
Here is my ultra minimalistic set and forget (I hope) 2-in-1 rsync based mirror script that can be used in a cron and called manually at the same time. If there is a problem with the transfer, you even get an email.
Okay, got a Q for #rsync pros out there: If you rsync something from origin to dest and it starts running, and DURING that rsync, files it thinks it needs "appear" in the destination (from another process), does rsync still compute and check to see if it needs them?
I have an rsync that takes 8 hours (on-site NAS to off-site NAS), and the off-site one may get the contents from another location first, so the off-site one ends up having a couple of files that rsync was gonna send.
This week, our #SysAdmin syllabus covers backups and restores, including use of dump(8), #rsync, and flux-capacitors (e.g., ZFS snapshots, Apple TimeMachine, NetApp's WAFL). We were also supposed to talk about #syslog and monitoring, but honestly, chances are we'll spend most of our time on the #xz #backdoor.
I want to hear what you folks use to sync blogs/webpages over SFTP.
So, the setup is that you have a bunch of files (HTML, CSS, etc) on your local computer, and you want to update your hosting with the latest version, having only SFTP access.
I am mounting it locally with sshfs and then using "rsync -vuz --delete --recursive <source> <destination>", but I feel like there is a better way (or better rsync options).
@ParadeGrotesque I upload my photos using the #ente app, use their cli to download them to my desktop and #rsync encrypted to my 3 buckets (in 3 different services, of course). The download and backup part I do periodically when I remember :)
So I was working on a computer for my mom earlier. It's dual booting Debian #Linux with KDE because she wants to learn Linux, and I figured KDE resembles Windows. Anyway, I was setting up shortcuts so she could find all of her files she normally uses, but there's one problem. #Microsoft #OneDrive appears to do something weird. There's an empty file called "OneDrive", that appears to be a pointer, but the OneDrive "folder" isn't present. Anybody know where OneDrive "actually" stores files?
@gerowen Which is yet another reason why I hate #OneDrive, as it's some proprietary bs and not like #rsync or #Nextcloud where the files are being synced and mirrored transparently to the user...
Possibly a foolish question for the Mastodon mind, but with #Dropbox now willing to trawl my data for #AI purposes, is the concept of an invasive-free cloud drive impossible? Or should I stop worrying and learn to love #rsync over #ssh again?
@slackline @lauren dired-rsync-transient sits on top of dired-rsync for those who prefer a #magit-like interface to the underlying #rsync command (allowing you to tweak a few vars). However, I'm fine with just using plain dired-rsync - full disclosure: I'm the original author, so unsurprisingly it suits my workflow ;-)
Podman: clearly, /dest is supposed to be a directory, I'll create it for you and copy shit in there
Docker: ah sure, you want to copy things to the file called /dest, on it!
Sheesh…took me way too long to swap out my 5 TB drive for the new 10 TB drive on my #PLEX server, and get everything to show up in the library and play from the new drive.
I wish I’d known about GNOME Disks (gnome-disks) before I started. What I SHOULD have done was plug in the new drive, go into GNOME Disks and disable automount for the old drive, set the new drive to automount at the same location, and reboot.