Network Time Machine Backups: Moving on from the Time Capsule

Great article, but I needed a home lab in addition to a NAS, and at the time FreeNAS/TrueNAS wasn't all that great (it's much better now). If all you need is NAS storage and Time Capsule support, you should consider a TrueNAS solution, as it is superior to the other small NAS solutions on the market. It can even run some things in a FreeBSD jail, such as a Plex Media Server, UniFi Controller, HomeBridge, etc.

Here’s my DIY solution (technically overkill for most people):

A couple of Mini-ITX cases with 8-bay hot-swappable drive backplanes, each with two power inputs and two SAS connectors to hook all 8 SATA drives to an LSI 8-port RAID card configured in JBOD mode. The board is a Supermicro M11SDV-8C+-LN4F, which includes the AMD EPYC 3251 SoC (8 cores / 16 threads @ 55W) with a heatsink and cooling fan. It can handle up to 512GB of ECC RAM, and provides 4 Gigabit Ethernet ports plus one IPMI management port, two USB 3.0 ports, six USB 2.0 ports, and 4 additional SATA3 ports on the board.

Running Ubuntu Server 20.04.3 LTS, LXD 4.22 (Linux Containers), and the ZFS file system across 8 HDDs, with a couple of SSDs for read/write caching and booting. LXD containers share the host system's kernel, so they run at near bare-metal speed and can access the disks at full speed. LXD can also run virtual machines for other operating systems, but those run slower than native LXD containers due to the overhead of hardware virtualization.

I have many containers, but one of them emulates a Time Capsule. The container is minuscule because that's all it really needs to be: it boots a very minimal Linux server instance and loads Samba and avahi-daemon, and that's it. It's allocated 1 CPU core, 256MB of RAM (it never needs to swap), and 3GB of storage. I then mounted a ZFS dataset into the container with a 4TB quota; this is where the Time Machine backups actually go. I run a job to periodically check ZFS for errors and report if there is a problem, and I use smartmontools to monitor and alert me if an HDD is going bad. I use the Sanoid/Syncoid tools (a bunch of fancy Perl scripts) to snapshot the ZFS dataset and rotate the snapshots, as well as to replicate to a second and third NAS server, one in my home and one in the corner of a data center offsite. I back up critical data files to cloud services such as iCloud/OneDrive.
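For anyone curious what the Samba side of a setup like this looks like, it hinges on the vfs_fruit module. A minimal smb.conf share sketch (the share name, path, user, and 4T cap are illustrative, and this assumes a reasonably recent Samba; the `fruit:time machine` option landed in 4.8):

```ini
[TimeMachine]
   path = /mnt/timemachine
   valid users = tmuser
   read only = no
   ; vfs_fruit provides the Apple SMB extensions Time Machine expects
   vfs objects = catia fruit streams_xattr
   fruit:time machine = yes
   fruit:time machine max size = 4T
```

With avahi-daemon advertising the share over Bonjour (an `_smb._tcp` service plus an `_adisk._tcp` record), the share shows up automatically in Time Machine's destination picker, just like a real Time Capsule.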

It's been running for years without issues; I just need to replace the drives as they start going bad after about 5+ years, and I'll probably build out a whole new rig when more than a few drives start failing. It runs quietly, consumes little electricity, and is ridiculously reliable. ZFS is absolutely rock solid.
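For anyone replicating this, the ZFS housekeeping described above boils down to a few commands; the pool/dataset names, quota, and remote host here are illustrative, and the Sanoid snapshot policy lives in its own config file (typically /etc/sanoid/sanoid.conf):

```shell
# Create the dataset Time Machine writes into, capped at 4TB
zfs create -o quota=4T tank/timemachine

# Periodic integrity check (typically run from cron, e.g. monthly)
zpool scrub tank
zpool status -x    # prints "all pools are healthy" or the error

# Snapshot rotation (per sanoid.conf policy) and replication to a second box
sanoid --cron
syncoid tank/timemachine backupuser@nas2:tank/timemachine
```

Because Syncoid replicates snapshots rather than live files, the second and third servers get consistent point-in-time copies even while backups are being written.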

Hot Tips:

  • Use HDDs that are designed for RAID, and make sure those drives are not SMR drives.
  • A ZFS vdev is limited to its smallest drive's capacity, so in practice all the drives should be the same size.
  • Avoid buying drives in batches where the serial numbers are sequential; if there is ever a factory defect, you could end up with several drives failing together.
  • Watch out for WD Red drives smaller than 8TB, as they are likely SMR drives, which can fail spectacularly under ZFS.
  • If you use SSDs, you are going to need enterprise-class SSDs, or they will wear out rapidly in a RAID configuration. Enterprise-class SSDs have a lot more over-provisioning, meaning extra flash sectors that the drive's firmware swaps in as sectors reach their maximum number of writes.
  • If you are going to use SSDs, you should definitely set up 10GbE networking, or you won't realize the full speed benefit over 1GbE. I'm still using HDDs with SSD caching because SSDs remain very expensive.

2 Likes

WD makes three different series of “Red” drives:

  • The Red is SMR at all sizes. Any CMR models they made in the past have been discontinued.
  • The Red Plus is CMR for all sizes
  • The Red Pro is CMR for all sizes

In general, if specifications matter to you (and for your application, they absolutely matter), you must read the data sheet for the drive. Marketing material can suggest/imply/hint at all kinds of things but the data sheet will tell you the truth for each specific model sold.

2 Likes

Two points: First, Time Machine is much more than a "backup," because it stores many snapshots from the past. Most of the time I use it not because my disk has crashed, but because I deleted a file or email I want to get back. (Good luck with email; it's a nightmare to restore.)
Second, as many have pointed out, it works fine with an attached hard drive, so the problem is one of networking, or possibly Time Machine's ability to handle interruptions.
I don't really understand why TM needs so much complexity (including drive format) at the drive end. Of course it doesn't make any sense for Apple to get into the drive business. Is it possible they (or someone) could rewrite Time Machine so it would work reliably on a commodity network storage device? Have the issues that caused the storage-side complexity changed since it was originally conceived?

Sure…it's quite possible. In fact, Mike Bombich, who used to work at Apple and left to start Carbon Copy Cloner, has already done so. It isn't as simple to set up as TM…but it isn't much harder, and it actually works.

1 Like

There's also ChronoSync. It was used years ago at a company I worked for, and I never heard any complaints.

Yeah…I've heard that works too, but I don't have any experience with it, so I don't really know how it plays with network shares. I presume it works fine.

A few years ago, the HDD in my AirPort Time Capsule started to fail. I bought a 4TB external drive and connected it to the USB port on the TC. (It was supposed to be an interim solution until I found something better.) It has happily been backing up 3 laptops and a Mini, and there it sits; I just haven't bothered to implement another solution. If it ain't broke, don't fix it! :grinning:

4 Likes

Time Machine, like Apple Mail and Preview, is a limited PIA app, designed for users who just want plug and play out of the box, with little intelligence required of the user. It's the 'duh' school of thought.
I just have redundant external drives (one a NAS) and use CCC. Perfect: I can back up and restore and/or migrate with no hassles. Apple wanted some sort of flashy space experience with Time Machine, but I have run into so many problems restoring other people's computers. I send them down the SuperDuper! and CCC roads and they are very satisfied.

1 Like

A higher-end option worth considering would be a TrueNAS device, especially if you build it to be more than just a backup machine. (I've built one for my parents, which is doing Time Machine for their two laptops as well as running a media server.) Disks will also be on ZFS, which is more robust than any of the file systems listed in the article. (Btrfs in theory is as robust, but in practice is not.)

You could also use just the software on your own lower-end or older hardware, to reduce the cost. Most PCs with 8GB of RAM or more would be fine to run it.

I went down the Raspberry Pi path, and to make my life easier I used OpenMediaVault 5.6 as the software distribution. This provides all the NAS features you would expect, with a web GUI. I'm using a single USB 3.0 drive and a Pi 4.

My experience (after 10 months of use)…compared to my old mini.

GOOD

  • faster transfers than my old mini (it was FireWire; now USB 3)
  • wakes up (from a sleeping disk) much faster
  • low power consumption (Pi 4 3W vs. mini 11W–85W), no fans
  • reuses an old USB 3.0 enclosure
  • significantly cheaper than a new NAS or Mac mini
  • flexible and easy to manage
  • OMV is well documented, maintained, and has active forums

BAD

  • learning curve for Linux, Samba, and the Raspberry Pi
  • the power supply cable is the Raspberry Pi's biggest weakness (continuity can be lost if it is bumped or moved)

It is important to back up your Raspberry Pi boot volume; I use a USB micro-SD card reader housing the backup volume, and I back up prior to every software update.

3 Likes

This was a really useful article, and it prompted me to look at my Time Machine configuration. However, most of the solutions were too complex for my capabilities, and only the Western Digital My Cloud option seemed appropriate. Of this, Ivan writes, "Just set your Macs to use the My Cloud Home as a Time Machine destination, signing in as Guest when prompted for authentication. That's it." However, there is no mention of how to migrate an existing Time Machine archive to the new hard drive. Or am I missing something obvious?
I hope this post is appropriate – I have only just joined TBT

1 Like

What a weird article. The author states that SMB is a key criterion and then goes on to review several possible solutions that don't support SMB. He largely rules out NAS devices as being too costly, and then goes on to review a low-end Synology unit, makes no mention of QNAP (the NAS maker I use, whose operating system provides explicit support for Time Machine backups) or indeed other NAS makers, but does include a Mac mini as a potential backup device (at a price that is significantly higher than most low-end NAS units).

There does seem to be some confusion regarding file system formats. As the article explains, network Time Machine backups go into a sparse bundle, so any product that provides a correct SMB implementation is fine; the underlying file system isn't important.

Great subject, great article, great comments so far.

Long-time Time Capsule user here. When my last Time Capsule died, I replaced it with small external USB3 drives (without individual power supplies) connected to an old networked Mac running High Sierra, located in the basement with my router and such, and piloted using Screen Sharing.

Drawbacks and associated questions:

(1) Is it really advisable to use an SSD in this role? A long discussion on Mac forums recently warned about the difficulty for an SSD to support the constant churn of Time Machine. Any suggestion to defend the SSD in this role?

(2) I failed to tie the networked Mac into the protection the UPS grants my router. Hence, a short time after a power failure, the backup dies suddenly when the UPS battery is drained empty, and I found no way to associate the Mac with the UPS for a clean shutdown and an automatic restart when power returns. Is this prudent? Is there a cure?

SSDs have no intrinsic problem with being a TM drive, whether now with APFS or before with HFS+. The issue would be frequent write cycles that could eventually reach the limit, in certain areas (blocks of flash), specified by the SSD's manufacturer (which can be only a few thousand on inexpensive consumer-grade SSDs).

So if you plan to use one TM disk forever, where old backups will eventually get purged to make room for new backups (i.e. bits get overwritten frequently), this could eventually indeed become a problem. However, if you follow the old adage of filling the drive up only until it has reached about 80% of its capacity and then you swap it for a fresh drive, there should be absolutely no problem with SSD or HDD. The difference is really just cost/GB, noise, and power.
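To put rough numbers on the wear question, here's a back-of-the-envelope endurance estimate; the TBW rating and daily write volume are made-up but typical figures, not measurements:

```python
def ssd_lifetime_years(tbw: float, gb_per_day: float) -> float:
    """Years until a drive's rated endurance (terabytes written, TBW)
    would be exhausted, assuming a constant daily write volume."""
    return (tbw * 1000) / gb_per_day / 365

# A 1TB consumer SSD rated for 600 TBW, absorbing 100GB of Time Machine
# churn per day, takes over 16 years to hit its rated endurance.
print(f"{ssd_lifetime_years(600, 100):.1f} years")
```

In other words, for a typical hourly-backup workload the endurance rating is rarely the first thing to give out; the drive is far more likely to be retired, or to die of something else, first.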

Some people have warned that SSDs can die abruptly where there’s usually no advance warning. Note there is no guarantee of such advance warning with HDD failure either (although sometimes people get an indication the disk is about to fail through SMART warnings, etc.). Regardless of SSD or HDD, good practice to ensure this doesn’t bite you is to simply use multiple TM drives. Several disks can be assigned to TM concurrently. TM will use whatever it’s hooked up to at the time. In fact, if it’s been too long since it has seen a certain disk (10 days IIRC), it will even warn you and tell you which disk it should back up to again.
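Assigning several disks to TM can be done in System Preferences, or from Terminal with Apple's tmutil; the volume names below are illustrative:

```shell
# -a adds a destination rather than replacing the existing one
sudo tmutil setdestination -a /Volumes/TM-Drive-1
sudo tmutil setdestination -a /Volumes/TM-Drive-2

# List configured destinations and their IDs
tmutil destinationinfo
```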

Disk docks (or SATA-USB adapters for SSDs) make swapping multiple drives straightforward. They offer little physical protection for the drive itself, but OTOH swapping becomes a 2-sec task. Very simple.

https://www.amazon.com/dp/B07B3S5FSF/
https://www.amazon.com/gp/product/B011M8YACM/

1 Like

Can you take an SSD offline and erase it (or more)? Does that reset the "bit limits" so it can be used for another "few thousand" cycles?

I would be a bit concerned with this choice of hardware. Particularly:

  • I don't trust bus-powered drives to be reliable. The power delivered by a USB bus isn't necessarily consistent or sufficient for a high-powered drive, especially if there is a hub involved. I always prefer external power for my backup drives (of course, since I prefer to use 3.5" HDDs, I couldn't choose otherwise even if I wanted to).
  • I’m always concerned about the reliability of pocket/portable drives for extended use scenarios. I think manufacturers try to cut costs wherever possible for these drives, and therefore don’t use the most reliable mechanisms.

I personally don’t see a specific problem with it. But I don’t see a particular reason why you should use an SSD either. Your backup device doesn’t have to be high performance - even with an HDD, each hourly backup (after the initial one, of course) should run just fine, even if it takes a little longer than it would on an SSD.

New Time Machine volumes (created on Big Sur or later) all use APFS, which is designed and optimized for SSD usage, so an HDD will experience a lot of head thrashing, but again, since this is not a high-performance scenario, it shouldn’t matter. Unless the sound of the head motion bothers you.

My personal opinion is that the lower cost-per-TB of an HDD is most important when choosing a backup device. Especially when you can buy/build HDDs with humongous capacities (4, 8, 10 and more TB), where an SSD of similar capacity would be prohibitively expensive.

Any decent UPS should support monitoring - typically via a USB cable. Connect it to the computer it is powering.

macOS will probably auto-detect this and add a "UPS" tab to the Energy Saver preference panel, which can be used to configure automatic shutdown. The main Energy Saver panel can be used to configure automatic restart after a power failure.
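The same settings can also be inspected and set from Terminal with pmset; the thresholds below are illustrative, not recommendations:

```shell
# Show current battery/UPS state
pmset -g batt

# On UPS power (-u): shut down below 20% charge, or after 5 minutes on battery
sudo pmset -u haltlevel 20 haltafter 5

# Restart automatically after a power failure (all power sources)
sudo pmset -a autorestart 1
```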

No. It doesn't work that way. The "write cycle" limit is the maximum number of times each physical block of flash memory can be erased and re-written. This is a function of the hardware.

When a block exceeds its limit, it can no longer reliably store data. Any remotely competent SSD controller chip will detect this and take the block out of service when it happens (using other storage locations to hold its data).

The ATA spec includes a “secure erase” command which is supposed to completely wipe all the storage on an SSD. (I assume NVMe drives have a similar command.) This will make all previously-stored data inaccessible, but the specific mechanism is not defined by the standard and different manufacturers may implement it in different ways.

Because not all SSDs have a secure erase command (especially those connected via USB), and there’s no guarantee that every SSD implements it in a way that is truly secure, it is not considered a reliable solution for the general case. Instead, the best way is to encrypt the volume when you set it up. Later on, you can simply change or discard the key, which will make all the data inaccessible, no matter what the underlying hardware does.

Simply writing zeros to the entire SSD does not erase all the flash memory blocks. As a matter of fact, depending on the SSD controller, doing so may actually be a bad thing - creating lots of “garbage” that subsequently needs to be “collected” before the physical blocks are actually erased.
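For completeness, on Linux the ATA secure-erase path looks like this via hdparm; the device name and password are placeholders, the drive must not be in the "frozen" security state, and this destroys all data on the drive:

```shell
# Confirm the drive supports the security feature set and isn't frozen
hdparm -I /dev/sdX | grep -A8 -i security

# Set a temporary password, then issue the secure erase
sudo hdparm --user-master u --security-set-pass tmp /dev/sdX
sudo hdparm --user-master u --security-erase tmp /dev/sdX
```

The crypto-erase approach described above is simpler still: create the volume encrypted from the start (FileVault/encrypted APFS on the Mac, or LUKS on Linux), and destroying the key renders all the data unreadable at once, regardless of what the drive's firmware does.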

2 Likes

I found a solution that combined off-site backup with a resource I was underusing.

Over the many years of using TM I have maybe restored a file from a TM backup just a handful of times, so I regard TM primarily as a gold-standard method to restore the entire Mac in case of disaster.

I have a TM drive permanently connected to my OWC T3 Dock and M1 MacBook Pro to which I make regular manual backups during the day (such as during meals) as well as clones of the Data volume to an alternating pair of external drives with the excellent ChronoSync. I also make an additional TM backup to a drive stored in a fire safe once a week.

To address the off-site issue, a friend recommended an application named Arq (URL), which is capable of making backups to Dropbox. I have a 2TB plan but was only using a fraction of that for my clients, so it was a perfect match. My broadband connection is not fantastic (16 down and 6 up), but scheduled to run overnight, it gradually chomps its way through the gigabytes of data, after which refreshing the backup is relatively light work. Restoring data is very simple.

Instead of backing up everything, I select top- and second-level folders from the MacBook Pro's internal storage and from other attached drives, and set up separate backup plans in Arq to save the data to Dropbox. This has been working really well, and the only additional cost was that of Arq, which is modest.

Arq sits in the background, doing its work quietly, efficiently and without fuss, just as all good software of this kind should do.

2 Likes

I use Arq for offsite backup as well (to Backblaze B2, to AWS Glacier for media files, and also to OneDrive, where I have 1TB of storage that I hardly use otherwise), plus I back up onsite to an external drive on another machine, which works as Time Machine does: thinned to daily, weekly, and monthly file versions. I've had TM inform me several times that its drive can no longer be used for backup and needs to start over, but I can't recall ever getting that message from Arq.

Of course Arq is not free but it is a great backup utility, and I know it’s been mentioned on these boards many times.

1 Like

No. The process of "erasing" it and then rewriting it later would just increase that write/erase cycle count by one. You cannot undo write cycles on these disks. Think of it as wear and tear, with the cycle count as a wear-level indicator; it only gets worse as time progresses and use increases.

Of course this is really only an issue for disks that see lots of read/writes (like a boot volume or scratch partition). If this is a TM backup and it gets swapped once it’s filled up, you’ll be fine.