You’re correct. This is an issue with any file system that implements its snapshots by a copy-on-write method and sharing blocks within the volume/container that houses it. I’ve worked with other implementations of this concept for over 20 years, and the only way to accurately figure out space consumption of the snapshot is for the underlying structures to be able to track that unique consumption.
It’s almost as if you have to focus more on the space remaining in the enclosing container (which is the same across all snapshots and volumes in the container) than space consumed by the file system.
Related, but a bit off topic: Can anyone explain what “purgeable” space is that macOS reports in Get Info? The common explanation that “it’s snapshots” is bunk and not the whole story. I have no snapshots on my Mac’s SSD, yet it insists that there is 81 GB of purgeable space.
"Purgeable space refers to a particular type of storage space on macOS systems. Beginning with macOS Sierra, Apple introduced a new category of storage space that is visible when you view your system’s storage. Purgeable storage can be seen when you have turned on Optimize Mac Storage.
‘Purgeable storage space’ is space that the system can automatically make available when it is needed. The files that occupy the purgeable space have been stored on your iCloud Drive. They have been selected for purgeable status due to the fact that they have not been used recently."
Quote from CleverFiles.com.
Store in iCloud: Store files from your Desktop and Documents folders in iCloud Drive, store photos and videos in iCloud Photos, store messages and attachments in iCloud, and optimize storage by keeping only recently opened files on your Mac when space is needed.
That may be true in theory, according to the documentation, but it’s not sufficient to explain things in practice.
Example from my Mac boot drive:
Get Info says:
Capacity: 245 GB
Available: 217 GB (81 GB purgeable)
Used: 108 GB
211 GB available
Disk Utility says:
Available: 130 GB
Macintosh HD volume using 9 GB
Data volume using 99 GB
df -k on the command line says:
245.11 GB volume size (the same as Disk Utility reports for the APFS container size)
Macintosh HD volume size: 9 GB
Data volume size: 99 GB
Available space: 130 GB
I do not have iCloud Drive enabled for Desktop and Documents, nor do I have it enabled for Photos and videos. I don’t have 80 GB of mail, attachments, or messages. iCloud Drive is available for syncing, but there are not 80 GB of documents in there. I have a OneDrive account as well, and there are nowhere near 80 GB of documents in there either.
My home directory, where my documents, photo, and music libraries are stored, is not on the boot drive either. The only home directory on the boot drive belongs to an admin account with next to no files in it.
And there are not 80 GB of snapshots on the boot or data volume. There were only two, and deleting both of them freed just 13 GB. Get Info still says that 68 GB of space is purgeable.
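For anyone who wants to check this on their own machine, local Time Machine snapshots can be listed (and, carefully, deleted) from Terminal with tmutil. A minimal sketch, guarded so it degrades gracefully where tmutil doesn’t exist; the date stamp in the comment is a placeholder, not a real snapshot name:

```shell
#!/bin/sh
# List local Time Machine snapshots on the root volume.
# tmutil is macOS-only; the guard keeps the script from failing elsewhere.
if command -v tmutil >/dev/null 2>&1; then
    tmutil listlocalsnapshots /
    # To delete a specific snapshot, pass its date stamp, e.g.:
    #   tmutil deletelocalsnapshots 2023-01-15-123456
else
    echo "tmutil not available on this system"
fi
```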
So where is the phantom 80 GB of purgeable space that doesn’t exist on my boot volume? Only if you subtract the purgeable figure from the Available figure that Get Info reports do you get a number that matches reality.
It’s obvious to me that the Finder has a problem with math.
Purgeable space is any storage that is consuming space, but can be auto-deleted by macOS to make room for other content.
As @terryk wrote, this includes any documents stored in iCloud. They can be purged because your Mac can download them if they are later needed. I think this is only done if you have macOS configured to optimize storage space.
It also can include content purchased from Apple’s on-line stores (movies and TV shows you’ve already seen), also if configured via optimizing storage space.
It can also include temporary files, caches and other internal files you don’t normally need to see or care about.
And (I think) it can also include Time Machine local snapshots.
There may be a few other categories, but I don’t know that for certain.
In general, I would think of it as free space, since it is storage that the system can make free if you actually need it for something.
Disk Utility is reporting 217 GB available, of which 81 GB is purgeable. That means 136 GB of actual free space and 81 GB of material that the system can delete. (And if you add that 136 GB to the 108 GB used, you get 244 GB, which is almost identical to the drive’s capacity.) I think Disk Utility includes storage that can be made available by deleting Time Machine local snapshots.
Finder’s report of 211 GB available is almost the same. Finder doesn’t (as far as I know) know about snapshots, so it reports slightly less free space than Disk Utility. Does 6 GB sound like a reasonable amount of space for snapshots to consume?
I’m guessing that the remaining purgeable storage is system caches and temporary files. There can be quite a lot of that, especially if you do a lot of web surfing. Try deleting the caches in your web browsers and see what that does to the numbers. Also, locate any locations where temporary files are stored and purge them to see how that changes.
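One quick way to see which cache folders are actually big is du. A sketch, assuming the standard per-user cache location ~/Library/Caches on macOS; sizes are in KB, largest entries last:

```shell
#!/bin/sh
# Report the size of each item under the per-user cache directory,
# sorted numerically so the largest entries appear last.
# ~/Library/Caches is the standard per-user cache location on macOS.
du -sk ~/Library/Caches/* 2>/dev/null | sort -n | tail -20
```

The same pattern works for any directory you suspect of hoarding space; just swap the path.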
Finally, I think purgeable content includes the contents of /var/folders. This contains a lot of temporary and cache-like stuff. I wouldn’t recommend deleting anything from there, however. The content in there that is truly unused should get purged on a schedule, and the rest is used by the system. You may be able to delete the files, but chances are that much of it will be re-created over time, and you’ll suffer a performance hit as that happens.
As a postscript to all of this I think I found my major offender. They were indeed temporary files but not of the type that you would immediately think of.
After an exhaustive search of my Mac’s Data volume, I discovered about 40 GB of 1+ year old files in /Library/InstallerSandboxes/.PLInstallSandboxManager - all of which appeared to be failed Xcode updates. There’s no way to get rid of them other than disabling SIP and deleting them manually (short of, of course, hoping that an out-of-space condition will trigger macOS to remove them).
It’s mind boggling that Apple does not provide a way to have failed installations clean up after themselves. These are not files that are going to impact performance. Even browsers have a mechanism to clean up their caches if you want them to.
Don’t expect to see space reclaimed immediately with an SSD. Although the space is marked as deleted, it has to go through the garbage collection process before you will see it has been made available as free space again.
Thanks. At this stage it’s still bleeding storage at a rate of 400 MB an hour. Tonight I’ll remote in and boot from the clone I made (otherwise it will drain the remaining space overnight).
Once I get full access to the internal drive I’m hoping to track down where the hidden storage is. I have DaisyDisk here, but it just reports “hidden storage 103 GB”; if I scan while booted from another drive, it will hopefully give some more information.
Booted into a clone and was able to run DaisyDisk and get a proper report. The hidden Volumes directory was 103 GB - all of which was ‘pointing to’ our backup server, which holds a backup of our Wordpress site. Our WP site is just on 100 GB. I deleted this from the Volumes directory (NOT the actual backup server) and now have 101 GB of free space.
I may completely misunderstand how the shell goes about copying files, but why is it keeping a local copy of a 100 GB site in the Volumes directory when the destination server happily holds the files?
The WP directory is copied with a simple shell script cp -R /Library/WebServer/Documents “/Volumes/Backups New/WordpressBackup”. Removing ‘/Volumes’ from the script results in a missing folder error.
Are you sure that Backups New was properly mounted? If not, it will be treated as a normal folder (even though it’s located in /Volumes), and so your cp command is just duplicating the files on the same disk. If a folder named Backups New already exists when you connect to the network share, the share might get mounted at something like /Volumes/Backups New-1 instead.
Thanks Jolin. It’s difficult to know, as I only go into the building as required - perhaps once a fortnight. It’s certainly possible there could have been a situation where it wasn’t mounted.
You make a very valid point though, and your explanation makes perfect sense. I’ll reconsider how the backups are run. I’m now inclined to write a small utility app (or AppleScript) with some decent error trapping for the existence of the backup volume before running any copy/duplication.
If you go down the AppleScript route, you can use the disk class in System Events to identify your backup network share and get the appropriate path to pass to cp. disk objects have various useful properties for your needs, such as local volume (indicates whether it’s a file server or not) and POSIX path.
That’s only relevant for people who care about what goes on with raw flash (e.g. data recovery people).
For normal users, this deleted storage is immediately reflected as free space to the operating system. If there is insufficiently-collected garbage when you try to write new data, the SSD will slow down a lot (as it will be forced to collect garbage as a part of the write process), but the operations will not fail.
Indeed. I’ll probably attempt to mount the server inside a try/on error block and write an error log if it fails to mount. I’m well versed in AppleScript and Xojo, but I’ve never been a shell person.
I wrote the original copy script when the site first moved to WP about 5 years ago, when it was a few hundred MB. In hindsight, using incremental syncing instead of cp would have made more sense given how large it’s become …
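For what it’s worth, an incremental version of that copy is a one-line change with rsync - a sketch, using the paths from the thread (adjust to taste); the trailing slashes tell rsync to sync the contents of the source folder rather than the folder itself:

```shell
#!/bin/sh
# Incremental backup sketch using rsync instead of cp.
# -a preserves permissions and timestamps; --delete mirrors removals
# on the destination; only changed files are transferred, so repeat
# runs are fast even on a 100 GB tree.
SRC="/Library/WebServer/Documents/"
DEST="/Volumes/Backups New/WordpressBackup/"

rsync -a --delete "$SRC" "$DEST"
```

The same mount-point guard still applies before running it, for exactly the reason discussed above.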