My former employer has a Mac Mini (running Mojave) which serves as a WordPress web server. That’s all it runs. Several weeks ago it started crashing, and the disk was found to be full. I cleared about 50 GB of data and within minutes it was full again. Nothing ‘visible’ explained where the space was going. I checked various logs and nothing stood out as unusual, but there was a gap of a couple of hundred GB between what the Finder was displaying and what Get Info reported.
I cloned it to a 1 TB external USB-C drive, booted from this, and everything went back to normal. A couple of weeks later it seems the problem has re-emerged. Despite only uploading a couple of GB a week, the machine is reporting 900 GB used. Oddly, CCC is still successfully cloning the drive onto a 480 GB drive - it has shown no failed backups at all.
Can anyone suggest what might be the issue? My first thought is that the Spotlight database might be huge because of the large number of files WordPress keeps, and that the space it uses isn’t revealed in the Finder.
How much memory is in that mini? You might want to check the memory pressure graph in Activity Monitor to see if whatever you’re running is oversubscribing memory to the extent that it’s causing massive swap outs.
Is that an APFS formatted boot disk?
If you find extreme memory oversubscription, look again in Activity Monitor to see if you have a process that’s consuming more memory than expected.
It only has 8 GB, but the memory stats in Activity Monitor look fairly normal. It’s also been running without issue since 2018. The highest usage is MySQL, consuming around 590 MB. Total memory usage is 4.1 GB.
Yes, it’s formatted as APFS. I’m not entirely convinced it isn’t a mis-reporting issue. The clones we have indicate they’re only 148 GB, which more closely aligns with what the Finder is displaying. As mentioned previously, CCC has shown no errors in completing its daily tasks.
I’m wondering if the spotlight database gets backed up in a CCC clone.
Oh, thank you very much. Disk Utility showed nothing, but a quick enquiry with tmutil listlocalsnapshots / revealed a series of CCC snapshots totalling nearly 600 GB. I’ve removed them, turned off CCC snapshots, and free space is now close to 700 GB. Checking the docs, it appears snapshots are turned on by default in CCC, and I suspect it caught us out when we updated from an older version several years ago. It also confirmed snapshots are not copied when cloning, which explains the difference in size between the primary volume and the clone.
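For anyone who finds this later, here’s a rough sketch of how you might pull the snapshot names out of the tmutil output programmatically, say to count or total them up before deciding what to delete. The sample output and the snapshot names below are illustrative only; the actual names on your system will differ:

```python
import re

# Example output of `tmutil listlocalsnapshots /` (made-up sample data):
sample_output = """\
Snapshots for volume group containing disk /:
com.apple.TimeMachine.2018-10-01-123456
com.bombich.ccc.4A7F.2018-10-01-093000
com.bombich.ccc.4A7F.2018-10-02-093000
"""

# Snapshot names end in a YYYY-MM-DD-HHMMSS timestamp.
pattern = re.compile(
    r"^(?P<name>\S+\.(?P<date>\d{4}-\d{2}-\d{2}-\d{6}))$",
    re.MULTILINE,
)

snapshots = [m.group("date") for m in pattern.finditer(sample_output)]
ccc = [m.group("date") for m in pattern.finditer(sample_output)
       if m.group("name").startswith("com.bombich")]

print(f"{len(snapshots)} snapshots total, {len(ccc)} from CCC")
```

The timestamp at the end of each name is what tmutil deletelocalsnapshots takes as its argument on Mojave.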
Thank you again, you get a virtual gold star!
If you make regular (scheduled) backups with CCC, then those snapshots will be created and destroyed based on your configured retention schedule. By all means delete them if you are running out of free space, but under normal circumstances, the space consumed will get purged over time.
This is very similar to Time Machine’s local snapshots, which are created hourly and have 24-hour retention.
In both cases, when you delete a file, the space it occupies will be freed when all of the snapshots holding a reference to that file get freed. In the case of Time Machine, this typically means 24 hours later. In the case of CCC, it depends on your configured retention schedule.
Either way, this is generally a good thing (unless you’re running out of space, of course), since it gives you a day or so to recover from accidental deletions/modifications without having to go to your external (and probably slower) backup media.
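The expire-on-schedule behaviour described above can be sketched as a toy model. The helper and the timings are hypothetical, not CCC’s or Time Machine’s actual purge logic:

```python
from datetime import datetime, timedelta

def purge_expired(snapshots, now, retention=timedelta(hours=24)):
    """Return only the snapshots still within the retention window."""
    return [s for s in snapshots if now - s["created"] < retention]

now = datetime(2021, 9, 20, 12, 0)
snapshots = [
    {"name": "snap-old", "created": now - timedelta(hours=30)},  # past retention
    {"name": "snap-new", "created": now - timedelta(hours=2)},   # still retained
]

kept = purge_expired(snapshots, now)
print([s["name"] for s in kept])  # only the recent snapshot survives
```

The point is simply that nothing has to be done by hand: each purge pass drops whatever has aged out, and the space those snapshots were pinning is released with them.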
We don’t use TM at all - we’ve had issues where entire backups have been corrupted and rendered useless.
On the servers we have a series of rotating CCC clones: alternating days onto separate disks, then a fortnightly one in case we get something we don’t pick up inside of two days. We also have an offsite backup which runs every night back to my home server.
Our content comes from our editorial system which runs on a different server. If we had to move stuff back it would be relatively easy provided the base system (apache/PHP/MySQL) is intact.
Unfortunately if you are using Time Machine, you are getting APFS snapshots. Since Mojave, Time Machine takes a snapshot of an APFS file system before it backs it up and uses that as a point in time copy to back up from. I’m not sure if Mojave does this or not, but later releases of macOS keep these snapshots for up to 24 hours for faster recovery. If the system needs disk space though, Time Machine is supposed to automatically delete these snapshots starting with the oldest one.
If you are mixing CCC and Time Machine snapshots, though, Time Machine won’t automatically delete the CCC snapshots.
We’ve been using the default settings but they haven’t worked very well. The server repeatedly crashed when it would fill up. We could clear tens of gigs and it would consume it within minutes and crash. I’m going in this afternoon to look after something else but I intend spending some time on the Wordpress server to try and get it stable. Then I’ll take another look at the CCC snapshot options to see if something more suitable and less problematic might be possible. It may be as simple as making the ‘free space’ figure quite high.
I’m curious. Have these Time Machine issues happened in Big Sur or Monterey? A lot changed in these releases, including the use of APFS and APFS snapshots for the backup disks themselves, replacing the hacked together HFS+ directory hard links that were problematic.
Of course there are workloads that do not behave well with time machine including databases and virtual machines.
No, I concede these were several years (and OSs) ago with HFS+, and it was a case of ‘once bitten, twice shy’. It happened on my personal machine at least twice and we had several others (in a plant of about 90 Macs). We’re big fans of CCC, and apart from the fancy interface I don’t see much that TM would provide that we can’t do with CCC and SafetyNet.
But, like snapshots, the SafetyNet folder is purged when space runs low. You can configure the parameters for this algorithm.
I think you’re misunderstanding what snapshots do. They don’t magically appear consuming massive amounts of storage. They add additional references to already existing files, so the space doesn’t get freed when the file is deleted or modified.
If you have 600GB consumed in snapshots, then that means you have overwritten or deleted 600GB of files since the snapshots were created. But (at least with Time Machine), that space will automatically free itself when the snapshots expire 24 hours later.
While there are certainly good reasons why you may need to manually delete snapshots (e.g. you’ve run out of space, and must delete files to make room), under normal circumstances, just letting them expire on schedule works just fine.
CCC’s snapshots are similar, but they are only purged when the CCC app is running. If you have it running in the background (e.g. for periodic backups), then this isn’t a problem - just configure the retention schedule or disable it as you prefer. If you don’t keep it running in the background, then you may want to disable its snapshot creation (at least on the source volume).
Unless there is a specific reason why you need the space immediately, that space will get freed 24 hours later (or sooner, if macOS determines that it needs to create free space). Jumping through hoops to manually delete TM snapshots is usually a pointless activity.
Good explanation. There’s a nagging question that’s been in the back of my mind while reading this thread. What is writing the volume of data that’s triggering the space consumption by the snapshots? As you say, it’s writes and overwrites on the primary volume that trigger extra consumption of disk space to maintain a snapshot’s point in time view.
A disk image could easily be this big, and every time it changes (basically any time it is mounted), it could cause the old version to be preserved by a snapshot (which, again, should expire in 24 hours).
I would think that a mount of the disk image should not cause a significant amount of a file system to change. Snapshots should be working at file system block level, not file level. Only those underlying file system blocks changed by the activity on the mount should result in extra blocks being allocated. Similar behavior for a virtual machine’s virtual disk file.
This is in contrast to applications that open documents, make modifications either in memory or in temp files, then write the entire document back when saving. In that case the old file is essentially deleted and replaced by a new file, so the entire space of the old version is retained by the snapshot.
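A toy block-level model of that contrast (all block counts are made up; a real file system’s allocation is far more involved):

```python
# Model a file as a set of block IDs. A snapshot pins whatever blocks
# existed when it was taken; only blocks no longer used by the live
# file system count toward the space the snapshot is holding on to.

snapshot_blocks = set(range(1000))   # 1000 blocks pinned at snapshot time
live_blocks = set(snapshot_blocks)   # live file system starts identical

# Case 1: in-place update (disk image / VM disk style): only 10 blocks
# are rewritten, so the snapshot retains just those 10 old blocks.
live_blocks = (live_blocks - set(range(10))) | set(range(2000, 2010))
retained_in_place = len(snapshot_blocks - live_blocks)

# Case 2: whole-file rewrite (typical document save): every block is
# replaced, so the snapshot retains the entire old file.
live_blocks_rewrite = set(range(3000, 4000))
retained_rewrite = len(snapshot_blocks - live_blocks_rewrite)

print(retained_in_place, retained_rewrite)  # small vs. the whole file
```

Same logical change from the user’s point of view, but the snapshot holds 10 blocks in one case and all 1000 in the other.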
Opening a file does have an impact on file modified dates, which causes Time Machine to copy an entire file to the backup disk regardless of how much data has been changed in the file.
Some information on snapshots that readers of this thread may find interesting. Here’s a semi-random excerpt:
Although Disk Utility and CCC tell you which app made each snapshot, and allow you to delete them, they can’t tell you why any given snapshot appears so large. To do that, you’ll need to keep a watchful eye on the apps you use. Anything that creates many large temporary files should be a suspect: while they can easily be excluded from backups, remember that a snapshot is of a whole volume, and includes all the files in temporary folders, the macOS versioning system on that volume, and the Trash, most of which are automatically excluded from Time Machine backups. Managing snapshots: how to stop them eating free space – The Eclectic Light Company
The “size” of a snapshot can be very misleading, because the disk blocks contained in it are (mostly) shared with the main file system and other snapshots.
Typically, the size you see is the amount of free space that will be made available if the snapshot is deleted - meaning the size of the blocks that are not used by the main file system or any other snapshots. Blocks that are used by other snapshots won’t factor into that total.
This has the interesting side-effect that deleting one snapshot can make another snapshot’s reported size increase, because any blocks shared by only those two snapshots will have their reference counts decrement to 1, and they will be freed when the second snapshot is deleted.
It also means that adding up the reported sizes of each snapshot may not (probably won’t) equal the amount of space that will be made available if they are all deleted. To pre-compute that, you would need to count up all the blocks in all the snapshots that are only referenced by the group of snapshots you intend to delete and not by anything else.
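That accounting can be made concrete with a small sketch. The block numbers are made up, and this models the reporting convention described above, not APFS internals:

```python
def reported_size(target, snapshots, live):
    """Blocks freed if only `target` is deleted: blocks referenced by
    the target snapshot and by nothing else (no other snapshot, not live)."""
    others = set().union(live, *(s for s in snapshots if s is not target))
    return len(target - others)

live = {1, 2}            # blocks still used by the live file system
snap_a = {1, 2, 3, 4}    # blocks 3 and 4 were deleted after A was taken
snap_b = {1, 2, 3}       # B shares block 3 with A

snaps = [snap_a, snap_b]
size_a = reported_size(snap_a, snaps, live)  # only block 4 is exclusive to A
size_b = reported_size(snap_b, snaps, live)  # 0: every block of B is shared

# Delete B: block 3 is now referenced only by A, so A's reported size grows.
size_a_after = reported_size(snap_a, [snap_a], live)

# Deleting both frees more than the sum of the originally reported sizes.
total_if_both = len((snap_a | snap_b) - live)
print(size_a, size_b, size_a_after, total_if_both)
```

Here A initially reports one block and B reports zero, yet deleting both frees two blocks, and deleting B alone bumps A’s reported size from one block to two: exactly the two effects described above.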
It’s confusing, but I don’t know if there is any better way to report such a thing.