jan, to linux
@jan@kcore.org avatar

Has anyone ever seen the effect where a top-level recursive rsync shows no more changes, but then going into the subdirectories and doing individual rsyncs does show a lot of diffs?

We're trying to figure out why that happens, and coming up empty.

Both filesystems are ext4; the source server has been quiesced (and its FS even mounted read-only to make sure).

RHEL 7 to RHEL 8, rsync 3.1.2.
Please boost.

ferki,
@ferki@fosstodon.org avatar

@jan Sounds interesting!

Adding -i or --itemize-changes to the rsync options should make it "output a change-summary for all updates", like this:

>f..T...... filename

If you add it twice, it should output it even for unchanged files on recent enough rsync versions.

The manpage of rsync explains the meaning of those flags. For example the above is:

> means the item is being received
f means it's a file
. means unchanged attribute
T means the modification time will be set
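For a hands-on feel, here is a small local reproduction of such an itemized run (all paths are invented for the demo); an mtime-only difference should produce a flag string similar to the one above:

```shell
#!/bin/sh
# Quick local demo of --itemize-changes (paths invented for the example).
mkdir -p /tmp/itemize-demo/src /tmp/itemize-demo/dst
echo "hello" > /tmp/itemize-demo/src/file.txt
cp -p /tmp/itemize-demo/src/file.txt /tmp/itemize-demo/dst/file.txt

# Make only the modification time differ between the two copies:
touch -t 202001010000 /tmp/itemize-demo/src/file.txt

# -a implies --times, so the mtime difference is reported (and fixed);
# the itemized line should look similar to the flag string above.
rsync -ai /tmp/itemize-demo/src/ /tmp/itemize-demo/dst/
```

Running it a second time prints nothing, since the directories are then identical.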

Hope this helps!

ferki,
@ferki@fosstodon.org avatar

@jan Sooo...did --itemize-changes help explain why rsync thinks it should sync specific items again?

I'm curious what's causing the behavior you described 🤔

abcdw, to hosting
@abcdw@fosstodon.org avatar

Found a cool SSH Apps project:
https://pico.sh/

Static sites, an RSS-feed-to-email digest, reverse SSH tunnels (for exposing a local socket under a public domain name), a blog engine, a pastebin, and a couple more.

All available via ssh/rsync.

tallship, to random
@tallship@fedia.social avatar

Yes! Yes! Yes!

As the saying goes, "Real BOFHs use tar and rsync!"

The blog article is an excellent treatment of using tar along with SSH to effect a reliable backup plan and schedule.

Another couple of great go-to solutions of mine have always been Duplicity and Duply, for those not comfortable rolling their own scripts with SSH, tar, and/or rsync :batman:

Thank you very much for sharing this @nixCraft !!!

#tallship #DR #backup #tar #rsync #SSH #Systems_Administration You can haz #Cheezburgerz! 🍔


RE: mastodon.social/users/nixCraft/statuses/112276456842443382

defanor, to random

It is "World Backup Day", at least according to WorldBackupDay.com. I like the idea of having such a day to serve as another nudge and a reminder to make and check backups, though WorldBackupDay.com is awkward and does not mention rsync in its software section. The .com TLD looks suspicious too, but it is better than nothing (except for the potential private data leaks with online backup services).

I primarily use encrypted external HDDs for personal backups, including rsync with "--dry-run --checksum" for scrubbing and checking before synchronization; I'm quite happy that such tools are available, even though they are usually taken for granted, as are many other neat FLOSS tools we use regularly. Planning to add a USB stick to the list of storage devices, since it should be less fragile mechanically (even though less reliable otherwise).

RogerBW,
@RogerBW@emacs.ch avatar

@defanor Given that my usual restoration use case is "I want that specific file/directory back", I'll particularly applaud backups that output as readable filesystems. It'll take a bit longer if I ever want to reset my entire machine to last Tuesday, but it's much more useful until then.

kzimmermann, to random
@kzimmermann@fosstodon.org avatar

I swear, using 's standard UI copy and paste onto USB drives is infuriatingly slow. Frustrating to see the copy dialog get stuck at 99% for many minutes when the copying itself took 30 seconds to complete.

Just open a terminal and run instead!

kzimmermann,
@kzimmermann@fosstodon.org avatar

@10leej yeah, after some thought, this seems to be it indeed. I was too tired last night to think of it! It's a shame this makes such a user-friendly and very familiar interface (Ctrl+C/V from a file manager) ultimately suck. Probably the first reaction of newbies who don't know better would be to dismiss it as "Linux sucks lol".

10leej,
@10leej@fosstodon.org avatar

@kzimmermann It's been an issue for years, but at the same time it works well enough. So every time a bug is opened, you'll just get a response of "PRs/patches welcome".

Zergy, to linux French
@Zergy@mastodon.zergy.net avatar

This weekend I tested migrating a system by doing nothing more than an rsync between a server's old and new disks.

It worked without a hitch. :rp_justok:

tshirtman,
@tshirtman@mas.to avatar

@Zergy did you watch out for hard links? But otherwise yes, with the right options it should do the job.

FiLiS, to FreeBSD
@FiLiS@mastodon.social avatar

rsync'ing a ~10GB package directory /usr/local/poudriere/data/packages/foo elsewhere gets me a ~130GB target directory? What am I missing?
Compression is turned on for both and compressratio is not really great, so that's not it. Using rsync -a, so nothing really special.

zhenech,
@zhenech@chaos.social avatar

@FiLiS -a doesn't include -H, so this might be the needed aha! moment? ;-)

unfa, to Help
@unfa@mastodon.social avatar

Any wizards in here? :)
I need some !

I've got some corruption on a 4TB Btrfs filesystem. I have all files in safe and ready to restore, but... I can't figure out HOW to restore them.

So far I've tried this:

rsync --verbose --human-readable --progress --checksum --archive --inplace --update --whole-file --existing --stats --times --partial --ignore-errors /backup/data /data/

However, this failed to overwrite the corrupted files, throwing an input/output error...

🤔

unfa,
@unfa@mastodon.social avatar

Ok, I found a spare 4TB drive I can use, so I started copying all the files from the damaged filesystem. Hopefully it'll skip the corrupted files, leaving holes I will be able to fill with Borg backup...

schenklklopfer, to random German
@schenklklopfer@chaos.social avatar

I need your help!

I'm looking for a script/tool that can copy data from A to B.
If possible it should NOT use and .
Incremental.
Ideally with a small database or something like that...

Otherwise I'm open to anything.

I want to use this commercially at the company, so a suitable license would be good.

Ideas?

yahe,
@yahe@chaos.social avatar

@schenklklopfer I see. What problems exactly are you having with it?

vampirdaddy,
@vampirdaddy@chaos.social avatar

@yahe @schenklklopfer
With rsync, (root) permissions like to cause problems.

Once the permissions have been straightened out, I happily and successfully work with rsync on top of underlying filesystem snapshots (btrfs, ZFS, SAN snapshots, ...)

@schenklklopfer:
Should the data be copied elsewhere and also be accessed there, or is an off-site snapshot required from which the files can be restored?

If "only in an emergency" is good enough: Borg backup repos can also be mounted.

320x200, to random
@320x200@post.lurk.org avatar

Syncthing and other similar software are great, but sometimes you just want a simple, lightweight, and fast way to mirror things, especially if the remote and/or local machines have limited resources.

Here is my ultra-minimalistic, set-and-forget (I hope) 2-in-1 rsync-based mirror script that can be used from cron and called manually at the same time. If there is a problem with the transfer, you even get an email.

https://things.bleu255.com/runyourown/One_way_fast_and_lightweight_file_mirroring

#rsync #ssh #shell #backup #mirror #sync

320x200,
@320x200@post.lurk.org avatar

Been going further down the rabbit hole of setting up simple things for small-scale networks. This time tackling file sharing on a LAN with WebDAV.

https://things.bleu255.com/runyourown/Simple_LAN_filesharing_with_WebDAV

kris,
@kris@outmo.de avatar

@320x200 for a slightly more advanced setup I found https://github.com/kd2org/karadav quite nice for WebDAV.

jann, to random
@jann@twit.social avatar

Okay, got a Q for pros out there: If you rsync something from origin to dest and it starts running, and DURING that rsync, files it thinks it needs "appear" in the destination (from another process), does rsync still compute and check to see if it needs them?

I have an rsync that takes 8 hours (on-site NAS to off-site NAS), and the off-site one may get the contents from another location first, so the off-site one ends up already having a couple of files that rsync was going to send.

What happens?

sergi, to blogging
@sergi@floss.social avatar

I want to hear what you folks use to sync blogs/webpages over SFTP.

So, the setup is that you have a bunch of files (HTML, CSS, etc) on your local computer, and you want to update your hosting with the latest version, having only SFTP access.

I am mounting it locally with sshfs and then using "rsync -vuz --delete --recursive <source> <destination>", but I feel like there is a better way (or better rsync options).

#blogging #SFTP #FTP #sync #rsync

ryanfb,
@ryanfb@digipres.club avatar

@sergi I use rsync a lot, but I generally find rclone’s command syntax more straightforward and easier to understand if that’s what you’re looking for. It supports SFTP so you can skip the sshfs mount if you want and you can use a straightforward “sync” command to “make source and dest identical, modifying destination only” https://rclone.org/commands/rclone_sync/ (+ https://rclone.org/sftp/)

abcdw, to til
@abcdw@fosstodon.org avatar

Today I learned: if an scp (the utility for copying files over SSH) process was interrupted, you can resume the transfer of the file(s) with rsync!

Especially handy with low-bandwidth, unstable connections.

Experimenting with good old, almost-forgotten stuff can have its own perks!

sqrtminusone,
@sqrtminusone@emacs.ch avatar

@abcdw @efraim Don't know if that's ironic or not :-) wget over ssh is definitely not an option, wget supports only HTTP, HTTPS, and FTP.

Also, resuming a download with rsync is not quite the same as wget with the HTTP Range header. The Range header simply means "send the message from this byte to that byte", which can be used to resume broken downloads or to download a file in parallel chunks if Content-Length is present.

And rsync employs its "delta transfer algorithm", which is essentially an efficient way to synchronize changes between files. So, in addition to resuming a broken download, it can sync two large files by transferring only the differences and keeping the matching parts intact, for instance.

So, y'know, you don't need scp at all, rsync offers a superset of scp capabilities :D Even the devs agree:

> The scp protocol is outdated, inflexible and not readily fixed. We recommend the use of more modern protocols like sftp and rsync for file transfer instead.

See https://www.openssh.com/txt/release-8.0

abcdw,
@abcdw@fosstodon.org avatar

@sqrtminusone @efraim Not ironic, but my use case was uploading a big file to a server where I have SSH access, and rsync allowed me to continue the upload from the interrupted scp, which is what I was sharing here in the post :)

governa, to random
@governa@fosstodon.org avatar
Cosmicqbit,
governa,
@governa@fosstodon.org avatar

@Cosmicqbit indeed! 👍

mattkenworthy, to ai
@mattkenworthy@mastodon.social avatar

Possibly a foolish question for the Mastodon mind, but with now willing to trawl my data for purposes, is the concept of a non-invasive cloud drive impossible? Or should I stop worrying and learn to love over again?

chris,
@chris@mstdn.games avatar

@mattkenworthy I migrated from Dropbox to Nextcloud, no regrets - it just works and does most things better than Dropbox. I'm using a managed Nextcloud provider, so I don't have to keep up with updates myself.

JustinMac84,

@mattkenworthy It will only trawl your data if you tell it to via the website.
