This week, our #SysAdmin syllabus covers backups and restores, including use of dump(8), #rsync, and flux-capacitors (e.g., ZFS snapshots, Apple TimeMachine, NetApp's WAFL). We were also supposed to talk about #syslog and monitoring, but honestly, chances are we'll spend most of our time on the #xz #backdoor.
I'm looking for a #backup script/tool that can copy data from A to B.
If possible, it should NOT use #rsync or #hardlinks.
Incremental.
Ideally with a small database or something like that...
Other than that, I'm open to anything.
I want to use it commercially at work, so a suitable license would be good.
I swear, using #pcmanfm's standard UI copy and paste onto USB drives is infuriatingly slow. Frustrating to see the copy dialog get stuck at 99% for many minutes when the copying itself took 30 seconds to complete.
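A likely explanation for the 99% stall (an assumption about this particular setup, but typical on Linux): writes land in the page cache in RAM first, so the copy "finishes" fast while the kernel is still flushing data to the slow USB stick in the background. You can watch and force that flush from a terminal:

```shell
# Show how much written data the kernel has not yet flushed to disk (in kB).
grep -e Dirty -e Writeback /proc/meminfo

# Block until everything is actually on the device; only after this
# returns is it safe to unplug the drive.
sync
```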
Thinking about the fastest way to add a replica to a #mongo cluster with a 3 TB database. The options are:
take a snapshot of an existing node and then catch up on the changes;
bring up a new server, copy all the files in the mongodb directory over with #rsync, and catch up on the changes.
Tried the second option, but #rsync has been running for several hours and hasn't synced even half of the files yet.
Looks like I'll have to add a new replica from scratch using the first option. #AWS
In the contest of #rsync vs. #mongodb replication, the latter won. The node managed to sync about 3 TB of data in under 16 hours, while rsync, even though it started earlier, never finished the replication.
Has anyone ever seen the effect where a top-level recursive #rsync shows no more changes, but going into the subdirectories and doing individual rsyncs still produces a lot of diff?
We're trying to figure out why that happens, and coming up empty.
Both filesystems are ext4, and the source server has been quiesced (its FS even mounted read-only to make sure).
Syncthing and other similar software are great, but sometimes you just want a simple, lightweight, and fast way to mirror things, especially if the remote and/or local machines have limited resources.
Here is my ultra-minimalistic, set-and-forget (I hope) 2-in-1 rsync-based mirror script that can be run from cron and called manually at the same time. If there is a problem with the transfer, you even get an email.
Ok, so this seems like an obvious thing, but is there a better cp and mv command for #linux? I want something that shows progress, speed, etc. I know I can use #rsync, but that seems like overkill.
Possibly a foolish question for the Mastodon mind, but with #Dropbox now willing to trawl my data for #AI purposes, is the concept of an invasive-free cloud drive impossible? Or should I stop worrying and learn to love #rsync over #ssh again?
I want to hear what you folks use to sync blogs/webpages over SFTP.
So, the setup is that you have a bunch of files (HTML, CSS, etc) on your local computer, and you want to update your hosting with the latest version, having only SFTP access.
I am mounting it locally with sshfs and then using "rsync -vuz --delete --recursive <source> <destination>", but I feel like there is a better way (or better rsync options).
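One possible alternative that skips the sshfs mount entirely, assuming lftp is available (host, user, and paths here are placeholders): lftp speaks plain SFTP and its `mirror -R` uploads a local tree to the remote, deleting stale files.

```shell
# mirror -R: reverse mirror (local -> remote); --delete removes remote files
# that no longer exist locally; --only-newer skips unchanged files.
lftp -u deploy, sftp://web.example.net \
     -e "mirror -R --delete --only-newer ./public_html /htdocs; quit"
```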
Have you built yourself a backup system on some external service? Any recommendations?
At the moment I only have local backups, which live on a separate NAS. It's placed so that in case of a fire, for example, I can quickly grab it and carry it out of the house. But that only helps if I'm actually at home and have time for that kind of stunt in the first place.
My preferred solution would be to move my current NAS to a separate building and be completely independent of services and companies. That just isn't easy to pull off right now. So the second-best option is probably cloud storage.
Backblaze B2 keeps coming up and seems quite decent and surprisingly affordable. Feel free to share experiences if you have any. Alternatives are welcome too ☺️
I've got some corruption on a 4TB Btrfs filesystem. I have all files in #BorgBackup safe and ready to restore, but... I can't figure out HOW to restore them.
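The basic restore flow, as a sketch (repo path and archive name are placeholders): `borg extract` restores an archive's paths relative to the current working directory, so you change into the freshly re-created filesystem first.

```shell
# Find the archive you want to restore from.
borg list /backup/borgrepo

# Extract into the new (or repaired) filesystem; archive paths are
# restored relative to the current directory.
cd /mnt/newbtrfs
borg extract --progress /backup/borgrepo::saturday-full
```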
It is "World #Backup Day", at least according to WorldBackupDay.com. I like the idea of having such a day, as another nudge and a reminder to make and check backups, though WorldBackupDay.com is awkward and does not mention rsync in its software section. The .com TLD looks suspicious, too, but it is better than nothing (except for the potential private-data leaks with online backup services).
I use primarily encrypted external HDDs (#ZFS or #LUKS with #ext4) and #rsync for personal backups, including rsync with "--dry-run --checksum" for scrubbing and checking before synchronization; quite happy that such tools are available, even though they are usually taken for granted, as are many other neat FLOSS tools we use regularly. Planning to add a USB stick to the list of storage devices, since it should be less fragile mechanically (even though less reliable otherwise).
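The scrub-before-sync step above, as a concrete command (local paths are hypothetical):

```shell
# --checksum forces a full read and hash comparison of every file, instead of
# the quick size+mtime check; --dry-run reports differences without changing
# anything; -i (--itemize-changes) shows exactly what differs and why.
rsync -ai --dry-run --checksum /data/ /mnt/backup/data/
```

Any unexpected line in the output points at silent corruption on one side, worth investigating before the real sync overwrites the good copy.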
#rsync'ing a ~10GB #poudriere package directory /usr/local/poudriere/data/packages/foo elsewhere gets me a ~130GB target directory? What am I missing? #ZFS compression is turned on for both and compressratio is not really great, so that's not it. Using rsync -a, so nothing really special #FreeBSD
As the saying goes, "Real BOFHs use tar and rsync!"
The blog article is an excellent treatment of using tar along with SSH to effect a reliable backup plan and schedule.
Another couple of great fav GoTo solutions of mine have always been Duplicity and Duply for those not comfortable rolling their own scripts w/SSH, tar, and/or rsync :batman:
Thank you very much for sharing this @nixCraft !!!
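The tar-over-SSH pattern the article covers boils down to a pipe; a minimal sketch, with the backup host and all paths as placeholders:

```shell
# Stream a compressed archive straight to the backup host over SSH,
# with no temporary file on either side.
tar czf - /etc /home | ssh backup@vault.example.net \
    "cat > /backups/$(hostname)-$(date +%F).tar.gz"
```

Restoring is the same pipe reversed: `ssh ... "cat file.tar.gz" | tar xzf -`.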
Static sites, RSS feed to email digest, reverse ssh tunnels (for exposing local socket with public domain name), blog engine, pastebin and couple more.
Okay, got a Q for #rsync pros out there: If you rsync something from origin to dest and it starts running, and DURING that rsync, files it thinks it needs "appear" in the destination (from another process), does rsync still compute and check to see if it needs them?
I have an rsync that takes 8 hours (on-site NAS to off-site NAS), and the off-site one may get the contents from another location first, so the off-site one ends up having a couple of files that rsync was gonna send.
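As I understand rsync's behavior (worth verifying for your version): the file list is built up front, but each file is compared against the destination only when the transfer actually reaches it, so files that appeared there in the meantime with matching size and mtime get skipped. If you want rsync to unconditionally leave alone anything that already exists at the destination, there is a flag for that (paths and host here are placeholders):

```shell
# --ignore-existing: skip updating any file that already exists on the
# receiver, regardless of size, mtime, or content.
rsync -a --ignore-existing /tank/data/ nas-offsite:/tank/data/
```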