macOS Sonoma garbage collection

I generally keep several apps open all the time on my 16GB M2 Mac mini, including the Brave browser, Excel, Word, and Keynote. I have noticed that over a few days the memory used by these apps can grow dramatically. When I start Excel and Word, each uses less than 250 MB, but after opening, editing, and closing documents they grow to 1.5 GB, even when no files are open. Keynote is similar but worse: it does not appear to release memory when a presentation is closed, and because most of my presentations have many images, these can be large files. Occasionally I have received system messages telling me that there is no memory left to allocate.

I have adopted the practice of closing these apps from time to time just to avoid clogging up the system.

I understand virtual memory and the way macOS manages physical memory.

I would not describe this as a problem but I wonder why a modern operating system appears to lack better garbage collection.

1 Like

Boot into macOS Recovery and choose “Reinstall macOS Sonoma”. Although this doesn’t tell you WHY this problem is happening, there’s a very good chance it will reset the bogus/munged files that are allowing it. And it never hurts anything.

TL;DR: This isn’t unusual, and it doesn’t necessarily mean apps are leaking memory.

Although some languages, like Java, have robust built-in garbage collection, many others (e.g. C and C++) do not, and instead rely on the app developer managing memory manually and/or using a third-party garbage-collection library.

Even Objective-C doesn’t do this, but it does employ a robust system of reference counting to auto-delete unused objects. (I don’t know what Swift does.)

But this is all relatively high-level. Under the covers, most modern operating systems (including Linux, BSD, and by extension macOS) use a very simple memory allocation system: they implement what’s known as a “break region”.

When an app loads, the kernel allocates stack space at the top of its memory (typically starting at the 4GB line and working down for 32-bit apps, somewhere else for 64-bit apps). The app’s code and global data occupy space at the bottom of the process’s memory. A “heap” for dynamic storage begins immediately above the code/data region and dynamically grows as memory is allocated:

[Diagram: process memory layout — code and global data at the bottom, the heap growing upward toward the break, and the stack at the top.]

The “break” is the highest address available to the heap. The low-level APIs brk and sbrk manipulate the position of this break address.

Application-level memory allocators, whether C’s malloc, C++'s allocation system (which usually layers over malloc), or Java’s memory allocator, all work at their lowest levels by manipulating the break address.

When you allocate memory for an object, the system checks to see if there is sufficient free space in the heap. If there is, your object is allocated from that memory. If there isn’t, it will make the heap larger by increasing the break address. Then it will allocate your object from the newly-expanded heap.
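
To make that concrete, here’s a minimal sketch that watches the break move as the heap grows. It assumes a traditional sbrk-based malloc (glibc on Linux behaves this way for small blocks); Apple’s allocator actually works through mmap/vm_allocate, so the numbers won’t mean much on a Mac:

```c
/* Watch the program break move as the heap grows.
 * Assumes an sbrk-based malloc (e.g. glibc on Linux for small blocks);
 * macOS's allocator uses mmap/vm_allocate instead, so the result there
 * is not meaningful. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    void *before = sbrk(0);          /* current break address */

    /* Allocate many small blocks so malloc has to extend the heap
     * rather than satisfy the requests from separate mmap regions. */
    for (int i = 0; i < 10000; i++) {
        if (malloc(256) == NULL) {
            perror("malloc");
            return 1;
        }
    }

    void *after = sbrk(0);
    printf("break before: %p\n", before);
    printf("break after:  %p\n", after);
    printf("heap grew by roughly %ld bytes\n",
           (long)((char *)after - (char *)before));
    return 0;
}
```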

So what happens when you free an object? The system adds that object’s memory to an internal free-list, a data structure that tracks all unused memory. But that’s all it does. The memory will be reused for new allocations, but it is never returned to the system. In other words, the break address is never decreased.
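
A deliberately over-simplified free-list allocator, just to illustrate the idea (real allocators like dlmalloc, jemalloc, or Apple’s malloc are vastly more sophisticated), might look like this:

```c
/* Toy fixed-size free-list allocator: freed blocks are pushed onto a
 * singly linked list and reused, but the memory obtained from sbrk is
 * never handed back to the kernel. Illustration only. */
#include <stddef.h>
#include <unistd.h>

#define BLOCK_SIZE 256   /* one fixed block size keeps the sketch short */

typedef struct free_block {
    struct free_block *next;
} free_block;

static free_block *free_list = NULL;

void *toy_alloc(void) {
    if (free_list != NULL) {            /* reuse a previously freed block */
        void *p = free_list;
        free_list = free_list->next;
        return p;
    }
    void *p = sbrk(BLOCK_SIZE);         /* otherwise raise the break */
    return (p == (void *)-1) ? NULL : p;
}

void toy_free(void *p) {
    /* Just push the block onto the free list. The break is never lowered,
     * so from the kernel's point of view the process still owns it. */
    free_block *b = p;
    b->next = free_list;
    free_list = b;
}
```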

Why is it done this way? Because it usually works best this way. You can’t decrease the break unless the object you just freed is at the top of the heap, with no object allocated at any higher address. For a running system, where objects are being allocated and freed all the time all over the place, this actually doesn’t happen very often.

And even when it does happen, for most real-world apps, it is highly likely that future allocations will force the app to increase the break region later. So why free the memory only to request it back again?

How could you work around this? Well, you could design your system so that the OS can dynamically move objects around in memory. If it can move objects to lower addresses (so the free space bubbles to the top of the address space), then you could decrease the break and free memory back to the OS. But this means applications can no longer retain pointers to in-memory objects, because those objects could move without warning.

Interestingly, Classic Mac OS did just that. It was necessary for apps to run on systems with small amounts of RAM and no virtual memory (e.g. a Mac 512K, Plus, or SE). When an app allocates an object it gets a “handle” (effectively an ID number) for it. To access the object, the app must “lock” it in order to get a valid memory pointer, and afterwards unlock it, making that pointer invalid. This way, the OS can move the object around in memory. The handle remains valid, but the next time the object is locked, it may return a different memory pointer.
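
As a rough sketch of the idea (the real Memory Manager calls were NewHandle, HLock, and HUnlock, and worked differently in detail; this toy version just shows why a handle stays valid while the pointer behind it can change):

```c
/* Toy handle table in the spirit of the Classic Mac OS Memory Manager.
 * Purely illustrative — not the real API. */
#include <stdlib.h>
#include <string.h>

#define MAX_HANDLES 64

typedef int Handle;                 /* an index into the handle table */

static struct {
    void  *ptr;      /* current location of the block (may move) */
    size_t size;
    int    locked;   /* while locked, the block must not be moved */
} table[MAX_HANDLES];

Handle toy_new_handle(size_t size) {
    for (Handle h = 0; h < MAX_HANDLES; h++) {
        if (table[h].ptr == NULL) {
            table[h].ptr = malloc(size);
            table[h].size = size;
            table[h].locked = 0;
            return h;
        }
    }
    return -1;
}

void *toy_lock(Handle h)   { table[h].locked = 1; return table[h].ptr; }
void  toy_unlock(Handle h) { table[h].locked = 0; }

/* "Compaction": move every unlocked block. Code that cached a raw pointer
 * across an unlock now points at freed memory; code that kept only the
 * Handle and re-locks it is fine. */
void toy_compact(void) {
    for (Handle h = 0; h < MAX_HANDLES; h++) {
        if (table[h].ptr != NULL && !table[h].locked) {
            void *moved = malloc(table[h].size);
            memcpy(moved, table[h].ptr, table[h].size);
            free(table[h].ptr);
            table[h].ptr = moved;
        }
    }
}
```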

But with the invention of virtual memory, that mechanism fell by the wayside. It’s a pain in the neck for developers, and if you have virtual memory, you don’t have to worry (as much) about your free memory getting fragmented.

The upshot of all this is that the amount of memory the OS has allocated to a process is not necessarily the amount that the app is actually using, but is the high water mark of all the memory that it has used since it started. And that memory doesn’t get returned to the OS until the application quits.

And normally, this isn’t a problem. If an app has a lot of unused memory (e.g. a large heap with a lot of free space) and something else needs that memory, it will get paged out to the swap file, and it won’t get paged back in until/unless it is actually needed again. So you’ve lost a bit of storage, but probably not a lot, given the large size of storage devices these days.

But it does mean that certain apps that allocate and free large amounts of memory (like an app that works on large documents) may end up holding a lot of memory they don’t need at the moment. And if your file system is approaching full, the system may not be able to swap that memory out to disk. Your only option is to quit the app(s) when you’re not using them, which I’ve always considered to be good practice anyway.

This is also one of the reasons that web browsers these days frequently spawn child processes for each tab you open. This way when you close the tab, its process goes away, and the memory consumed by that process also gets returned to the OS. Which is really important for those people who never want to quit their browsers.
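
The effect is easy to demonstrate with a toy fork-and-exit sketch (purely illustrative; real browsers’ multiprocess architectures are far more involved):

```c
/* Why a process-per-tab design gives memory back: the child's entire
 * address space is reclaimed by the kernel when it exits, with no
 * cooperation from the allocator required. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        /* "Tab" process: allocate and touch 200 MB, then exit. */
        size_t size = 200 * 1024 * 1024;
        char *big = malloc(size);
        if (big != NULL)
            memset(big, 1, size);
        _exit(0);                 /* all 200 MB go back to the OS here */
    }
    waitpid(pid, NULL, 0);
    /* The parent ("browser") never grew; closing the tab freed everything. */
    puts("child exited; its memory has been returned to the kernel");
    return 0;
}
```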

9 Likes

I knew David would have the complete answer. Many thanks!

FWIW, Swift appears to be similar to Objective-C in this regard (I guess not surprisingly): Documentation

1 Like

Well, I think the important thing here is that on modern systems with virtual memory and Memory Management Units (MMUs), which of course includes Macs, memory allocators are very complex, yet there are still circumstances where apps end up holding memory they aren’t using for any practical purpose. The (s)brk calls are today largely a historical curiosity, used if at all for smaller allocations; nowadays we also have memory-mapped “anonymous” pages (mmap) to avoid fragmentation, and allocators try to avoid arranging memory in a way that hinders later reuse when it can’t be returned to the system. When the kernel is asked for memory, it often simply accepts the request without actually allocating anything, in the hope that the application will use less of it later (“overcommitting”), and it can rearrange and defragment memory that a process isn’t actively using to maximise contiguous space. On Darwin, it is also possible to ask for memory that will only be used for caching, so that it can be purged spontaneously when memory pressure requires it.
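
For instance, here’s a small sketch of the anonymous-mapping approach (mmap/munmap plus madvise; flag names and advice values vary slightly between Darwin and Linux), showing memory that really can be handed back to the kernel without quitting the process:

```c
/* Anonymous mmap: unlike heap memory obtained via the break, a mapping
 * can be handed back to the kernel piecemeal with munmap, or marked
 * reclaimable with madvise(MADV_FREE) while the address range survives. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t size = 64 * 1024 * 1024;   /* 64 MB */

    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANON, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    memset(p, 1, size);               /* touch it so pages are really resident */

    /* Tell the kernel the contents are disposable; physical pages can be
     * reclaimed under memory pressure while the mapping itself remains. */
    madvise(p, size, MADV_FREE);

    /* Or give the whole range back outright. */
    munmap(p, size);
    return 0;
}
```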

All of which is to say that what you see in Activity Monitor is at best a wildly inflated notion of the actual memory requirement of a process, meaningful mostly when the app first launches, and even when this number grows it need not represent any kind of problem for your system’s performance or operation. One can reasonably object to this state of affairs, I think, but there are at least reasons for it.

2 Likes

Thank you, Sebby!

I posted after receiving system pop-ups telling me I was running out of memory. It only happened twice and I cannot reproduce the circumstances. Should it happen again I’ll post screen shots of the message and Activity Monitor.

In the meantime we can leave it there.

1 Like