I'd like to use rsync to back up my Mac, mostly because it's what I'm used to. But, resource forks. What's the done thing?

  1. Do nothing different than what I did before, and back up my Mac to a remote Linux machine. This means my backups won't have resource forks saved. What will break if I later restore those? Probably nothing?
  2. Use the /usr/bin/rsync (2.6.3 proto 28) shipped by Apple with 10.4. In this version, the "-E" argument causes rsync to preserve resource forks, but:

    1. it seems to always consider them "changed" (that is, with -v it prints a line for every file that has a resource fork even when nothing has changed, which is annoying);
    2. it only works when the target file system is HFS: if you point it at a remote Linux box, you get a protocol version mismatch error. (Solved by backing up to a Firewire HFS drive instead of to a Linux server, I guess, but that'd be a waste as I have plenty of room on the disk in the Linux box...)

  3. RsyncX: again, this version only works with HFS on both sides. I can't see a reason to use this in preference to the one that Apple ships.

  4. Patched rsync+hfsmode: this can back up resource forks to remote non-HFS servers, but when going the other direction, it does not re-assemble the magic dot-files back into resource forks. This sounds like the most sensible of the three resource-fork-aware options, but my instincts tell me, "the fact that there is no .dmg binary distribution of it means that only 5 people in the world are actually using it, and you don't want to be one of those."

Update: In case you were curious, I'm going with option #1, "ignore the problem".
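
For the record, "ignore the problem" looks something like this (host and paths here are placeholders, not gospel):

    rsync -av --delete ~/ backuphost:backups/powerbook/

Resource forks and Finder metadata just silently don't make the trip.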

36 Responses:

  1. bodyfour says:

    Personally I've been doing option #1 — if it isn't a file I can transfer between machines then as far as I'm concerned it doesn't exist.

  2. chucker23n says:

    Have you considered psync / Carbon Copy Cloner?

  3. jzawodn says:

    I've been using SuperDuper for a while and find that it works well. I'm told it works on network mounted disks as well.

    • mattlazycat says:

      Seconded - the smart update does the same as you appear to be doing with rsync, and you get a bootable clone out of it if you use a firewire drive instead of a Linux box, which is sexy as all heck.

  4. kaseijin says:

    I'm one of the five people using the hfsmode patch, and it hasn't caused me any new problems. Also, have you considered mounting a sparse HFS+ image on the Linux box?
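
    Concretely, something like this, with the image file living on a volume the Linux box exports (sizes and names here are only examples):

      hdiutil create -size 60g -type SPARSE -fs HFS+ -volname Backup /Volumes/linuxbox/backup
      hdiutil attach /Volumes/linuxbox/backup.sparseimage
      rsync -aE ~/ /Volumes/Backup/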

    • duskwuff says:

      Sparse images don't work well for backups - they'll increase in size every time you do an update. You can compact them, but that's slow over the network.

      • kaseijin says:

        "Sparse images don't work well for backups - they'll increase in size every time you do an update."

        The suggestion applies to a solid image, too, but you might as well say that solid images don't work well for backups because they require committing all the space up front. Sparse images increase in size whenever adding data; there isn't anything special about backups.

        "You can compact them, but that's slow over the network."

        Slow beats impossible, but I was talking about mounting a file on the Linux box, not mounting a dmg over some network filesystem. Compacting wouldn't be the only slow thing the latter way.
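
        When you do want to compact, at least it's a one-liner:

          hdiutil compact /wherever/backup.sparseimage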

  5. valentwine says:

    Another option is to set up netatalk on Linux and rsync on the Mac from the local filesystem to the mounted netatalk filesystem. This also does the AppleDouble hoodoo that rsync+hfsmode does, but it is fully AFP compliant in both directions and supports all of the Apple metadata: not just the resource fork but the Finder metadata, etc. It has the disadvantage of likely being slower than straight-up rsync.
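
    An untested sketch, with made-up names (and assuming Apple's rsync -E is happy writing to an AFP volume):

      mkdir -p /Volumes/backup
      mount_afp afp://user:pass@linuxbox/backup /Volumes/backup
      rsync -aE ~/ /Volumes/backup/powerbook/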


    • duskwuff says:

      "It has the disadvantage of likely being slower than straight-up rsync."

      By a couple orders of magnitude, I expect. AFP isn't exactly designed for speed.

      • valentwine says:

        I was being generous. But arguably as long as the backup completes, the speed at which it does so is nearly irrelevant. It may be an order of magnitude slower, but it won't be so slow as to mark a change in kind.

        The obsessive compulsive in me would rather be confident I have the entire file and all its associated metadata than deal with the nagging thought that someday I might restore something and run into trouble.

    • jhf says:

      I tried Netatalk because I was sick of the resource-fork problem with my NFS share and the slow Linux-to-OSX NFS performance.

      "Avoid, Avoid."

      Take the crufty Appletalk protocol and add a crufty Linux implementation of it, and the result is byzantine. The worst part for me was that the file associations are stored in a config file on the Linux box which is about as user-friendly as sendmail's configs -- so all my files on network shares wanted to open in applications other than my own preferred ones. Okay, that's not as much a problem with backups, but it's still not right.

      It also crapped Appletalk metadata folders all over my network shares. I was not happy with it.

    • sherbooke says:

      A couple of years back, I worked with Netatalk and AFP2. It was a nightmare. Not only was there a bizarre 32-character limit on filenames in AFP2, but the Netatalk daemon kept crashing on large file copies. Every day, large chunks of distributions are copied from Classic Macs/MacOS X machines into a main repository, so I guess it's analogous to backup copies.

      We changed to Ethershare (a closed source package), still with AFP2. I've tried - several times - to update Ethershare to deal with AFP3, but with no joy.

      Plus we had to modify all our distrib-building software to deal with resource directories. It's a real pain. I'll be glad when the need for resource forks is removed once and for all.

      So I guess this is no direct help. It's just a massive downer against the netatalk/Ethershare/AFP kludge. It sucks.

  6. davachu says:

    I've rsynced my home directory to a linux server, rsynced it to a new machine and everything's been fine. Programs, I assume I can re-install.

    I do have a USB disk drive formatted up mac stylee just in case though.

  7. duskwuff says:

    Notes, in no particular order:

    • Most programs nowadays don't use resource forks. Metadata is really your biggest concern.
    • Disk images, disk images, disk images! man hdiutil for information, and remember that Disk Utility has a GUI for most of those features. You can remotely mount an HFS-formatted disk image and back up to that.
    • Psync is somewhat buggy. Review its logic carefully (especially with regard to permissions set on folders and skipped directories) before you use it.
    • rsync -E is probably good enough once you have somewhere happy (like a disk image) to back them up to - resource forks are rare enough that always copying them isn't too horribly inefficient. (See the shell trick below.)
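
    The shell trick: on 10.4 a file's resource fork is visible under /..namedfork/rsrc, so you can spot-check just how rare forks are:

      ls -l SomeFile/..namedfork/rsrc

    A nonzero size means there's actually a fork there.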

    Personally, I use File Vault for my home directory and scp the .filevault (a sparse encrypted disk image) over to a Linux machine periodically. Everything else is relatively easy to regenerate if necessary.
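
    That part is roughly just the following (host invented; the image name is whatever yours is actually called):

      scp -p ~/.filevault backuphost:backups/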

  8. seminiferous says:

    A .pkg with rsync+hfsmode+lchown is available (scroll down about halfway on that page), although I don't think it is regularly maintained.

    Reassembling resource forks after a restore is not a big deal using FixUpResourceForks as described on the hfsmode page. It takes about a minute to do my 80GB drive. Even still, I've booted off a drive stripped of resource forks without any noticeable effects. If you have any old OS9 apps, they may break.

  9. brianenigma says:

    I have always done #1 and it has never bitten me on restores. My situation may be a little different from yours or it may be the same, but basically I just back up things in my home directory (as well as a few Unixisms I have on my machine, like /var/cvs and some custom shell scripts in /usr/local/bin). Admittedly, I do not have a "one click" restore and have to manually reinstall applications, but any metadata lost (if any) from rsync'ing everything in my home directory does not seem to affect anything at all. I really think most metadata along those lines is going to consist of preview icons that Finder/Photoshop will automatically regenerate.

    I have gone through three or four restores now (I can be a little harsh on my PowerBook, but that's what AppleCare is for), without any issues.

  10. I thought resource forks were only for old OS 9 stuff.

  11. ninjarat says:

    ditto is the canonical copy-with-metadata utility, and it can write CPIO archives.
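
    E.g. (from memory, so double-check the flags):

      ditto -c -rsrc /Users/me /Volumes/backup/me.cpio

    and the same thing with -x to extract it again, forks intact.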

    Before I got some extra Firewire drives for backups, I mounted a Samba volume from the backup server and built a disk image on it, and I used SilverKeeper to copy my Mac to the image.

    I still use SilverKeeper for live/bootable backups of my Macs and I use Disk Utility or hdiutil to make disk images for archival snapshots.

  12. dsandler says:

    I've heard good things about Unison. Adherents boast of its cross-platformitude, rsync-like speed, and ability to reconcile divergent copies of the same source data (a la diff3 and automerge in source control systems).

    It appears to do The Right Thing by AppleDoubling resource forks on filesystems which do not support them. From TFM:

    rsrc xxx
    When set to true, this flag causes Unison to synchronize resource forks and HFS meta-data. On filesystems that do not natively support resource forks, this data is stored in Carbon-compatible ._ AppleDouble files. When the flag is set to false, Unison will not synchronize these data. Ordinarily, the flag is set to default, and these data are automatically synchronized if either host is running OSX. In rare circumstances it is useful to set the flag manually.
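
    So presumably something along these lines would do it (untested, names invented):

      unison ~ ssh://linuxbox//backups/powerbook -rsrc true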

    Lazyweb score: -2 (-1 for not exactly answering the question as posed, -1 for suggesting something I haven't used extensively)

    • ashiant says:

      My two experiences are:

      1) Unison: Really cool for large file archives. Metadata, timestamps, etc. aside, the file itself is preserved well. (I have not experienced good control over the unison process, nor do I have any experience with its metadata handling, though it's believable.)

      2) PsyncX: ever since a Mac at work shit the bed with several years of data, we've run a dual-disk Mac with PsyncX; it copies 'everything' from HFS+ to HFS+ very well, and it does the cron setup automagically.

      Sadly, I don't think either of these is really the solution you're looking for... I think you really want professional, commercial software that does incremental, resource-fork-compatible backups to a remote "file"....

    • kchrist says:

      Unison is great, but this isn't the problem it solves. For general incremental backups, it's basically no different than rsync.

      What Unison does is two-way synchronization, while rsync is one-way, which is what you want for backups. Unison is better suited for, say, keeping your home directory in sync between your laptop and desktop (which is how I've been using it for a couple years now).

  13. mr_privacy says:

    I "foolishly" used tar to backup a disk having a few "extent" errors, not expecting that the tool Apple shipped with the filesystem that they shipped wouldn't do what I expected. I guess years of stuff "just working" had lulled me into a false sense of security.

    A lot of applications didn't work at all on restore, which was more painful because I don't organize license keys very well (and neither do apps, some storing them in a 'secret' directory somewhere under

    So, if you're better about keeping those magic strings, and the apps you use are downloadable, then you may be ok not worrying about this.

  14. bifrosty2k says:

    Have you tried making a virtual filesystem on your linux box, and then formatting it as HFS? I know that sounds like a PITA, but it could work...
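
    On the Linux end that'd be something like this, assuming the hfsplus tools are installed (untested):

      dd if=/dev/zero of=/backups/mac.img bs=1M count=20000
      mkfs.hfsplus /backups/mac.img
      mount -o loop -t hfsplus /backups/mac.img /mnt/macbackup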

    I have a DLT drive you could borrow :P

  15. mkj says:

    From what I can tell (quick look at the Apple source, could be wrong), if you back up to a random Linux box, it should suffice to make a shell wrapper (or similar) on the Linux side that just ignores the -E flag.

    The patched rsync seems to just send the resource fork as an extra file, which will be opaquely handled by the stock rsync side. Haven't tried it though.
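
    Something like this, dropped ahead of the real rsync in the PATH the ssh login sees (also untested):

      #!/bin/sh
      # Rebuild the argument list without any -E, then run the real rsync.
      for arg; do
        shift
        [ "$arg" = "-E" ] || set -- "$@" "$arg"
      done
      exec /usr/bin/rsync "$@"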

  16. With the knowledge that the lazyweb has brought, what have you decided to do?

  17. goatbar says:

    This won't fix things like icons and other stuff, but files that must have their HFS+ attributes set can easily have them reset (if you know what they are supposed to be). For me, my EndNote database needs this fixed when I pull old versions back from CD:

    /Developer/Tools/SetFile -c ENDN -t ENDB references.end


  18. shandrew says:

    is my backup software of choice. It utilizes rsync, but also has the ability to keep snapshots, handle resource forks, and is rather easy to use (as far as command-line programs go). It runs on Macs and other Unix-like systems.