I had always thought that the /private/var/folders/* directories were self-cleaning during a startup, reboot, or Safe Boot. That does not seem to be the case on my Mac Pro, where there are a massive number of files/folders dating back many years.
For comparison, I looked at a recent SuperDuper! clone of my boot drive and it has far fewer files/folders. Apparently, the SuperDuper! scripts do not copy all of these items.
The directories are supposed to be self-cleaning, but not necessarily on restart.
Within each of the /var/folders/*/* directories, there are multiple sub-folders. Looking at my own (type getconf DARWIN_USER_DIR to see the path to your 0 directory, which sits one level below the folder for your current session; see the example commands after this list), I see:
0. This is your “Darwin user” directory. I wouldn’t expect this to ever be purged because it holds all kinds of databases used by system services. For instance, here are some of the names of its subdirectories:
com.apple.LaunchServices.dv
com.apple.SharedWebCredentials
com.apple.corespeechd - I assume this is Siri-related
com.apple.dock.launchpad
com.apple.notificationcenter
C. This contains caches. I think a safe boot is supposed to purge this, but otherwise I don’t think it gets purged. Looking at the file names, I see things like system font caches. Deleting these files probably won’t break anything, but it may cause a temporary reduction in system performance as apps and system services rebuild those caches.
Cleanup At Startup. It’s empty, as I would expect.
T. These are temporary files. The last time I reviewed this, the system was supposed to periodically delete files that have not been accessed in the past 3 days.
An ls -lRu command will recursively list the files, displaying last-access timestamps instead of the usual last-modified timestamps. When I do it on my own “T” directory, I see quite a lot of files significantly more than 3 days old. But nothing super-old - the oldest file was from October 16, less than 3 months ago.
So I wonder if Apple may have changed the interval that they use for cleaning this directory.
X. Empty.
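If you want to poke around yourself, here is roughly how I looked at mine. The getconf names below are the standard ones for the per-user 0, C and T directories; the paths you get back will differ from mine, since the per-user folder names are randomized:

getconf DARWIN_USER_DIR         # path to your per-user "0" directory
getconf DARWIN_USER_CACHE_DIR   # path to your per-user "C" directory
getconf DARWIN_USER_TEMP_DIR    # path to your per-user "T" directory

# Recursively list the T directory, showing last-access times
# instead of the usual last-modified times:
ls -lRu "$(getconf DARWIN_USER_TEMP_DIR)"

# List files in T that have not been accessed in more than 3 days:
find "$(getconf DARWIN_USER_TEMP_DIR)" -type f -atime +3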
I didn’t bother to look through every user’s /var/folders directory, but I suspect most will follow a pattern similar to what I saw for my home directory.
I would expect that the 0 directories won’t ever get purged, the C directories will be purged on a safe boot, and the T directories will have their oldest files deleted on some kind of schedule (looks like maybe 2 or 3 months on my system).
As for why your clone doesn’t have the same content, I’m not sure. I would guess that it is not backing up the contents of the T and C directories. That would make sense, because those two contain data that will get re-generated during normal system operation and might cause problems if restored to a new installation (e.g. if the reason you wiped the drive was due to a corrupt system cache).
Additionally, I found that I had no access (even via sudo) to some of the directories in my 0 and C directories. So SuperDuper! might not be able to back up the contents of those directories.
If you like, go look in those directories on your system and its backup to compare the contents. I will be interested to know if I guessed right or not.
Everything in “0” was as current as the last reboot (earlier today).
“C” contained thousands of files from 2022 and 2023.
“T” contained thousands of files from 2022 and 2023.
That is not happening in the “C” and “T” directories based on the results reported above.
I think that is correct. SuperDuper! reports in its log files that it cannot access some directories. Also, when I look at the access dates on the clone drive, it also contains old files. So we can ignore the differences between the two devices as tangential to the issue.
I also wonder if backups interfere with the cleanup process.
Based on everything I’ve read, the T directory is cleaned based on the last-access time. But if you’re cloning your system and those directories get backed up, then the last-access time becomes the time of your last backup, which would clearly interfere.
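You can watch this happen with BSD stat, which can print a file’s access time directly. A quick sketch (somefile here is any file you pick, and this assumes the volume isn’t mounted with noatime):

stat -f "atime: %Sa" somefile   # show the current last-access time
cat somefile > /dev/null        # read the file, as a backup tool copying it would
stat -f "atime: %Sa" somefile   # the access time has moved forward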
I wonder if this might be the case, because I haven’t had a successful CCC backup since September due to a BridgeOS panic (see Post-14.7 kernel panic). Ignoring the fact that I need to install the latest macOS 14.x or 15.x, it may be significant that since then I’ve only been backing up with Time Machine (which only backs up the 0 directories, not the C or T directories).
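As an aside, tmutil can tell you directly whether Time Machine considers a given path excluded, which is how I’d double-check that:

tmutil isexcluded /private/var/folders
tmutil isexcluded "$(getconf DARWIN_USER_TEMP_DIR)"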
I wonder how hard it would be to install a wildcard filter to not back up /private/var/folders/*/*/C and /private/var/folders/*/*/T. If routine backups don’t touch those folders, they won’t change the last-access times of the contents and the self-cleaning mechanism may work better.
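I don’t know whether SuperDuper!’s .dset format accepts wildcards like that, but as a sketch of the idea, an rsync-based clone can express exactly that filter (the destination volume name here is made up):

sudo rsync -a \
    --exclude='/private/var/folders/*/*/C/' \
    --exclude='/private/var/folders/*/*/T/' \
    / /Volumes/CloneBackup/

In rsync’s pattern syntax, a leading slash anchors the pattern at the source root, and * does not cross directory boundaries, so this skips just the C and T levels while keeping everything else under /var/folders.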
Interesting. However, I turned off my SuperDuper backups for a portion of the summer (long list of reasons) so the access times were not updated for about 2 or 3 months. That should have made them eligible for cleanup/removal.
I did some digging in the scripts that SuperDuper uses and found some entries related to /private/var/folders.
/Applications/SuperDuper!.app/Contents/Resources/Copy Scripts/Exclude system temporary files.dset
Omitting all of the “zz” folders is good - those are all for internal system accounts that nobody logs in as (e.g. _networkd, _windowserver, _spotlight, etc.) and shouldn’t have to be backed up.
Dropping the temp/cache files for mdworker also makes sense, although after seeing what Time Machine omits, I’d probably not back up any of the T/C directories.
Yes, it’s still a mystery. Once upon a time, I could find the periodic scripts that did the cleanup. But I’m not sure where they are today.
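For what it’s worth, the old /etc/periodic scripts are still there, but on my system the /var/folders housekeeping appears to have moved to a launchd daemon. These are the places I’d look (the dirhelper plist name is what I see on my machine and may differ on yours):

ls /etc/periodic/daily /etc/periodic/weekly /etc/periodic/monthly
ls /System/Library/LaunchDaemons/ | grep -i dirhelper
plutil -p /System/Library/LaunchDaemons/com.apple.bsd.dirhelper.plist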
Since you got me curious, I opened up the default.plist file in CCC. It looks like they exclude the C and T folders altogether, along with the NSURLSession daemon content from the 0 directory. Each rule also includes an explanatory comment, which is pretty nice.
...
<dict>
    <key>comment</key>
    <string>These are hidden system cache folders. Accessing the contents of these folders on a startup disk can lead to stalls.</string>
    <key>excludeFromTransfer</key>
    <true/>
    <key>excludeRule</key>
    <string>/private/var/folders/*/*/C</string>
    <key>protectFromDeletion</key>
    <true/>
</dict>
<dict>
    <key>comment</key>
    <string>These are hidden system temporary folders. Accessing the contents of these folders on a startup disk can lead to stalls.</string>
    <key>excludeFromTransfer</key>
    <true/>
    <key>excludeRule</key>
    <string>/private/var/folders/*/*/T</string>
    <key>protectFromDeletion</key>
    <true/>
</dict>
<dict>
    <key>comment</key>
    <string>This is a system folder that is protected by SIP. Attempting to restore this folder is unnecessary and leads to spurious errors.</string>
    <key>excludeFromTransfer</key>
    <true/>
    <key>excludeRule</key>
    <string>/private/var/folders/*/*/0/com.apple.nsurlsessiond</string>
    <key>protectFromDeletion</key>
    <true/>
</dict>
...
In a modern system, I don’t believe backups via CCC or SD would interfere with routine housekeeping. My understanding of the process is that they take a copy of the current state of the disk (a snapshot) and then use that snapshot to figure out whether a file on the backup needs to be updated or marked for deletion. If a background task is running and messes with a file on the update list, the backup may record an error and move on. So the backup is not quite complete, but the original disk is fine. The error log can be examined post-backup, and if the error appears to be significant, the incremental backup can be rerun.
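On APFS you can see the snapshot half of this with tmutil: the backup tool freezes the volume in a snapshot and copies from that, so the live file system can keep changing underneath it. A rough sketch (the snapshot name below is a made-up example; substitute one from the list):

tmutil localsnapshot          # create a local APFS snapshot of the data volume
tmutil listlocalsnapshots /   # list the snapshots that exist for the root volume

# A snapshot can be mounted read-only and copied from:
mkdir /tmp/snap
mount_apfs -s com.apple.TimeMachine.2024-01-01-000000.local /System/Volumes/Data /tmp/snap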
As I discussed in another post, I used to back up HDDs and would see an error when trying to back up a write-ahead log (.wal) file associated with the housekeeping of an application’s SQLite database. Eventually, I realized there was no need to rerun such backups, since the file was missing because the housekeeping for the database had completed by the time the backup updated its copy of the database. Since changing to SSDs for my backup disks, the time to run backups has shrunk from tens of minutes to a minute or two at most, leaving little time for the temp files to disappear.
Since I have a bootable clone, I deleted most of the files in that directory on the clone and then rebooted from the clone. Other than a few extra seconds on the reboot, it works fine for the handful of apps that I tested.
I would not have been able to run this test so quickly if I didn’t have a bootable clone.
The whole point of this exercise is that the clone will become my main boot drive (it’s twice the size) and I wanted to clean some of the cruft before the switchover.
I may also switch from SuperDuper to CCC for backups since it was pointed out that the latter ignores the C and T directories.
A backup shouldn’t interfere, except for the fact that if a housekeeping task relies on the last-accessed timestamp of a file, that timestamp may get updated whenever the file is read by the backup software.
If the backup software only compares metadata (file name, size, last-modified timestamp) when determining whether a file needs to be backed up, then that should (I hope) not change the last-accessed timestamp, except when reading the file in order to copy it.
But if the software is configured to be more robust (e.g. compute a cryptographic hash and compare it against a saved hash), then it will have to read the file (and hence change the last-accessed time) every time a backup runs.
But I think most people don’t enable this level of integrity checking, because it makes backups take a lot longer to run and shouldn’t be necessary as long as your system clock is working.
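To make the distinction concrete, here is roughly what the two strategies look like from the shell (the file names are made up). stat only touches metadata, so it leaves the access time alone; shasum has to read every byte, which bumps the access time on every run:

# Cheap comparison: name, size, and last-modified time only:
stat -f "%N size=%z mtime=%m" source/file.txt backup/file.txt

# Robust comparison: hash the contents, which reads the whole file:
shasum -a 256 source/file.txt backup/file.txt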