Some stupid things about macOS that I learned today
Tags: computers, firstperson, mac
Let's say your Time Machine drive isn't big enough to fit all your stuff.
- If you have two external drives, you can make them be a Spanned Volume or a JBOD RAID -- but then you can't encrypt the file system.
- If you want to use Time Machine to put some folders on Backup-A and some on Backup-B -- you can't. There's one global set of Time Machine rules.
Time Machine has corrupted my backups so many times that I have given up and switched to using Arq instead.
It supports multiple targets (I use a home NAS and a cloud service) and custom rules for each.
How did it corrupt them? With the exception of the FS hack for multi-hard-linked directories, it's rsync.
Beats me. I kept getting the "Time Machine has to start over" message even though the sparsebundle files were readable by Finder. After the third one in as many weeks, I gave up.
Time Machine creates an HFS+-format sparse image for remote backups (AFP or CIFS file shares) and only writes to HFS+-formatted locally attached drives.
The Mac manages to corrupt HFS+ filesystems on a pretty regular basis. Sometimes fsck_hfs or diskutil repairDisk will fix the issue, and sometimes (like the fun pop-up says) you get to create a new backup from scratch. If it's a sparse image on a NAS, then you have some hoops to jump through to get the filesystem available for repair but not mounted.
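The unmounted-repair hoops look roughly like this (a sketch only: the sparsebundle path and the device node are examples, and the disk number will differ on your machine):

```shell
# Attach the image WITHOUT mounting its volumes, so fsck can have it to itself.
# (Path is an example; adjust to wherever the NAS share is mounted.)
hdiutil attach -nomount -readwrite /Volumes/nas/mymac.sparsebundle

# hdiutil prints the device nodes it created, e.g. /dev/disk3 and /dev/disk3s2.
# Run the repair against the HFS+ slice it reported:
fsck_hfs -fy /dev/disk3s2

# Detach when done:
hdiutil detach /dev/disk3
```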
"Regular basis" for me is about 4 times a year. Both our machines have 4 Time Machine destinations, so I am covered when one backup gets crapped on. One destination is Nas4Free, so ZFS and snapshots. That one is just a matter of rolling back to an uncorrupted snapshot and running a backup against that destination. On the Time Capsules I have one go at fscking the image, then delete and create new -- they are so slow that it takes over 12 hours to fsck.
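The ZFS rollback step described above is basically two commands (the pool, dataset, and snapshot names here are made up):

```shell
# List snapshots of the (hypothetical) Time Machine dataset:
zfs list -t snapshot -r tank/timemachine

# Roll the dataset back to the last snapshot that predates the corruption;
# -r destroys any more recent snapshots in the way.
zfs rollback -r tank/timemachine@2017-10-01
```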
HFS+ managing to corrupt itself is why I am disappointed that APFS doesn't have all the nice checksumming and other data-protection features that filesystems like ZFS have. I guess the authors of APFS are single and have never had to deal with the fear that the spouse will one day find out that all the family photos have been eaten by the grues in the filesystem.
Backup destinations to spread the risk: 8TB Time Capsule, 2TB Time Capsule, 6TB Nas4Free CIFS shares, 10TB Synology NAS CIFS shares, Apple Server with 2TB and 4TB USB drives, plus a nightly SuperDuper backup of the server itself, and a local drive on each machine. If I were not playing around with this stuff, I would just have a local disk and 1 or 2 network destinations. We also use iCloud for files, and are about to turn on Photo backups to iCloud.
Clearly this is anecdotal, but I've also had several instances of TM losing its mind in such a way that the only useful way to recover seemed to be to blow it all away. This has happened both to a Time Capsule and to a local disk, both encrypted. I don't really trust it as a result, so I use SuperDuper for disaster recovery and Arq for "I need that file from two days ago", with TM just in case. I worry that Arq has had no updates for ages.
Arq was most recently patched on October 8, with updates also occurring in September, July, and February. Are there unpatched bugs that concern you? It seems feature-complete to me.
Oh, thanks. It hasn't been telling me there were updates, which it used to do: clearly that's the problem, not that there haven't been any.
I agree on feature-completeness, I was just worrying it would break due to the usual gratuitous OS drift.
I also wound up using Arq after evaluating a couple of other solutions.
Will the volume be used only for backups? If so, why not leave it unencrypted and instead encrypt the backups (sparse bundles) themselves?
Huh? It's not a sparsebundle. /Volumes/Time\ Machine/Backups.backupdb/ is a directory tree.
It has been a long time since I’ve used Time Machine with a locally-connected device. If you elect to encrypt the backup in TM options, does it then wrap the backupdb in an encrypted machinename.sparsebundle?
Okay, I guess the rules are different for directly-connected disks. I don't really understand why. For remote disks, it puts the backups into a sparsebundle, possibly encrypted, named after your machine. I figure that's so it can support directory hard links, encryption, and whatever other metadata it needs, even if the network protocol is AFP/CIFS and the remote filesystem isn't HFS+/APFS. Also so you can back up multiple machines to the same filesystem even if they would all be named Backups.backupdb/Macintosh\ HD or whatever.
But, if it's a local disk, it insists on enabling encryption on the underlying volume, or not at all. That's per item #4 above:
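For what it's worth, the network-style container can be created by hand, which would be one way to get encrypted backups on an otherwise-unencrypted volume (the size, volume name, and bundle name here are all examples):

```shell
# Create an encrypted, growable sparse bundle with a journaled HFS+ volume
# inside. hdiutil will prompt for a passphrase; -size is a cap on how big
# the bundle may grow, not an up-front allocation.
hdiutil create -size 2t -type SPARSEBUNDLE -fs HFS+J \
    -encryption AES-256 -volname "Time Machine" mymac.sparsebundle
```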
Do you really have more than 12TB of backup you need to store?
If you have so much stuff that it won't fit on a single drive, then you're going to have to either set up a Thunderbolt hardware RAID, or a NAS box. And honestly, you're going to want two RAIDs plus LTO if you're that worried about your "won't fit on a single drive" data. These days you can spin up a Linux box that looks like an SMB Time Machine target with relative ease ("relative ease" I say, not giving a scale on purpose).
OR, I suppose, you could just have separate drives with rsync scripts on them that only copy some subset of data over when you plug them in ("/Volumes/OnlyBandsWhoseNamesStartWithDentalFricatives/backup.sh"), but that's halfassery. You need wholeassery. That's gonna cost monies.
I have approximately 8TB of stuff I need to back up. You can get perfectly serviceable 5TB USB3 self-powered drives for about $100. If you want more than 5TB in one box, the price is more than double that and it requires a wall wart.
Your solution of "spend a couple hundred bucks on a second computer to host those drives instead" hits the sweet spot between both "more expensive" and "more complicated".