I have been using SuperDuper to make weekly data-only clones (no OS) to separate volumes on an external HDD formatted with APFS. The drive is about 80 percent full, and each week I delete the oldest volume and create a new one for the current week’s clone.
Does this meet your exception criteria, or is the drive still subject to excessive fragmentation?
Also, instead of creating a new volume each time, would using SuperDuper’s Smart Update on an existing volume be any different?
Ultimately, the decision comes down to whether you find the performance acceptable. If your backup mechanism’s performance is sufficient for your needs, I would never tell you you’re wrong.
That said, I may have some useful information:
Although creating multiple volumes, each containing a full backup, will give you access to historical backups (e.g. in case you backed up a corrupt file and need to go back to an older copy), it will not help you if the drive physically fails.
For that kind of protection, you will need two or more physical drives. You could, for instance, do what you’re doing now, but alternate between two different drives. So if one fails, you’ve still got half of your backup history to restore from.
How are you creating those volumes? If you are creating multiple APFS volumes within a single APFS container, then all of those volumes allocate their blocks from one shared pool of free space. So deleting a volume each week will leave holes in the storage, resulting in fragmentation. On the other hand, if these are full backups and the drive sees no other use, each volume’s files were probably written contiguously, so the fragmentation may not be that bad.
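If it helps, you can see and manage that shared arrangement from Terminal. A quick sketch (the disk4 identifier and the volume names here are placeholders; diskutil apfs list will show your actual ones):

```
# List every APFS container and the volumes sharing its pool of free space
diskutil apfs list

# Add a volume for this week's clone to an existing container
# ("disk4" is a placeholder for the container's reference disk)
diskutil apfs addVolume disk4 APFS "Backup-Week-23"

# Delete the oldest volume; its blocks go back into the shared pool,
# which is where the holes (fragmentation) come from
diskutil apfs deleteVolume "Backup-Week-19"
```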
If, on the other hand, you are creating separate APFS containers, storing one volume in each, then that is like an old-fashioned partition (e.g. what you’d create for HFS+ volumes). Each container gets a contiguous range of blocks, and what you do in one has no effect on the others. But it wastes space, since free space in one container can’t be used by another.
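Switching to that layout would be a one-time repartition, along these lines (destructive, and the device identifier, names, and sizes are only examples; check diskutil list first):

```
# DESTRUCTIVE: erases and repartitions the whole drive.
# Creates two fixed-size APFS containers on a GPT partition map;
# "R" means "use the remaining space" for the second one.
diskutil partitionDisk disk4 GPT APFS "Backup A" 3T APFS "Backup B" R
```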
I’m unfamiliar with SuperDuper, but while skimming the Shirt Pocket web site, I found an article from 2017 saying that it makes snapshots when updating the backup on an APFS destination.
This works much like what Time Machine does: very efficient use of storage, but when it comes time to delete old snapshots (e.g. to free up space on the volume), the deletions will start fragmenting files.
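If you want to check whether a backup volume has accumulated snapshots, diskutil can list and delete them (the mount point and snapshot name below are placeholders; use the names the list command reports):

```
# Show any APFS snapshots on a backup volume
diskutil apfs listSnapshots "/Volumes/Backup A"

# Delete one by name; the blocks it frees are scattered wherever that
# snapshot's unique data lived, which is what fragments the volume
diskutil apfs deleteSnapshot "/Volumes/Backup A" -name "example-snapshot-name"
```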
As I wrote above, the concern is not that something will break, but that fragmentation may create an unacceptable performance problem. If you find the performance OK, that’s really all that matters.
I own both SuperDuper and Carbon Copy Cloner for backups. SuperDuper might be fine, but it rarely gets updates, compared to Carbon Copy Cloner’s frequent update schedule. CCC also seems to have more features, which is why I prefer it.
These weekly HDD backups are secondary to daily SuperDuper clones I make to a rotating series of seven portable SSDs, the latest two of which I always have with me whenever I am not at home. So, the HDDs would only be used to restore from (via Migration Assistant) in case the SSDs fail, and I am willing to accept a lower level of performance if that should happen.
I also have a cloud backup (CrashPlan), which I inherited from my now-closed business. I’m not sure whether a full restoration from that would be even slower than from an HDD.
Point well taken – I use two different HDDs for the weekly clones. (With my business, I used a third HDD that I stored offsite, but I no longer have that possibility.)
So far I have been creating multiple volumes within a single APFS container. They are full data backups (no OS), and I don’t use the drives for anything else, but I have no problem changing to separate containers.
If so, how much free space should I leave in a container? (The HDDs are 6 TB each; the most recent backups are about 220 GB each.)
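For scale, assuming the clones stay around 220 GB each, the arithmetic looks like this:

```
# Rough capacity math: 6 TB drive, ~220 GB per weekly clone (decimal units)
echo "6000 / 220" | bc
# => 27, i.e. one 6 TB container could hold roughly 27 full clones
```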
Brian is spot on… and an additional downside of SD is that it is designed to duplicate an entire volume. While you can convince it to dupe just a folder, that is hard to set up, and the author has zero interest (I asked him) in making folder copies easier. I mostly use CCC for various tasks that duplicate folders to a backup destination rather than for full-disk clones, although I have a few of those as well. Setting a folder as source and destination is far easier and less likely to be screwed up by the user.
Neil, for your uses of CCC, you might want to consider FoldersSynchronizer: FoldersSynchronizer | Softobe
It’s designed to do backups exactly the way you seem to be doing them.
As an experiment, I erased the drive of a spare 15-inch 2017 MacBook Pro, installed Ventura (the latest OS it can run), and used Migration Assistant to transfer the backup data from an APFS-formatted volume on one of my external HDDs.
The total transfer of 170.18 GB took a little over an hour and 40 minutes. That was so much faster than I had expected that I repeated the whole procedure and got the same results. Both times I spot-checked the files, and everything looked OK.
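For reference, that works out to a sustained rate along these lines (assuming decimal gigabytes and calling the duration an even 100 minutes):

```
# 170.18 GB transferred in ~100 minutes, expressed in MB/s
echo "scale=1; 170.18 * 1000 / (100 * 60)" | bc
# => 28.3 MB/s
```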