I’m curious, how does macOS application memory usage work? A few months ago, I bought a Mac Studio with 64 GB of memory. (Is it still called RAM?) Last night I was busy working away in Capture One (my preferred image-processing application) when I noticed that it was using 24 GB of memory. This seems wildly excessive to me, and it got me wondering how macOS (or maybe it’s the app) can allocate so much memory to one app. Is it just because the resources are currently available, and the app says, “gimme!”? A buddy mentioned that he noticed Photoshop was using 21 GB, and he didn’t have any images open. How can that be?
Not long ago I was learning about macOS memory management and had written this elsewhere. I hope this is useful and do let me know if I get any of the details wrong!
The high RAM usage is an inherent feature of macOS’s memory management. macOS (whose kernel is derived from the Mach kernel) features “a fully integrated virtual memory system” that is always on. The virtual memory system spans both physical RAM (the expensive part) and drive storage (much cheaper). The paging scheme maximises system performance by looking at app profiles and memory usage patterns, then preemptively loading pages (fixed-size memory blocks) that the apps are likely to use. This usage pattern can be built up over time as you use the app, or the app developers can specify it.
The paging system is configured to load and retain pages in memory. Pages that have been recently accessed are marked as active; otherwise they become inactive. When apps require more memory and there is no longer any free memory, inactive pages are removed to make way for new active ones. The removal can be a complete purge, or the pages may be compressed or swapped to disk. The exception is wired memory, which is required by the kernel and its data structures and cannot be paged out. This arrangement maximises system performance, since loading data from disk takes much longer than accessing pages already in memory. Hence the adage “unused memory is wasted memory”.
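The active/inactive bookkeeping above can be sketched as a toy model (an LRU-style simplification I wrote for illustration only; the real XNU pageout daemon also compresses and swaps pages, and wired pages are exempt from eviction entirely):

```python
from collections import OrderedDict

class ToyPager:
    """Toy model of active/inactive page lists with LRU-style eviction.

    Purely illustrative -- the real macOS pageout logic is far more
    involved (compression, swap, speculative pages, wired memory).
    """

    def __init__(self, capacity):
        self.capacity = capacity     # number of page frames in "RAM"
        self.pages = OrderedDict()   # page id -> data; order tracks recency

    def access(self, page_id, loader):
        if page_id in self.pages:
            # Recently accessed: move to the "active" (most-recent) end.
            self.pages.move_to_end(page_id)
            return self.pages[page_id]
        if len(self.pages) >= self.capacity:
            # No free frames: evict the least recently used ("inactive") page.
            self.pages.popitem(last=False)
        self.pages[page_id] = loader(page_id)   # page-in from "disk"
        return self.pages[page_id]

pager = ToyPager(capacity=3)
load = lambda pid: f"data-{pid}"
for pid in ["a", "b", "c"]:
    pager.access(pid, load)
pager.access("a", load)        # "a" becomes most recently used
pager.access("d", load)        # evicts "b", the least recently used
print(list(pager.pages))       # ['c', 'a', 'd']
```

Note that nothing is evicted until the capacity is actually reached, which is exactly why a mostly-full RAM readout is normal rather than alarming.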
The memory pressure chart in Activity Monitor summarises these interactions. Activity Monitor can show “high memory usage” alongside low memory pressure, simply because of the behaviour described above. Memory pressure turns yellow or red when macOS can no longer use memory efficiently (because of heavy swapping and compression/decompression caused by a lack of RAM).
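For the curious, macOS exposes the raw page counters behind that chart via the `vm_stat` command-line tool. Here is a minimal sketch that parses `vm_stat`-style output; the sample is embedded (with invented numbers) so it runs anywhere:

```python
import re

# Illustrative sample of `vm_stat` output -- the counter values here
# are made up; run `vm_stat` on a Mac to see real ones.
SAMPLE = """\
Mach Virtual Memory Statistics: (page size of 16384 bytes)
Pages free:                               10000.
Pages active:                            200000.
Pages inactive:                          150000.
Pages wired down:                         50000.
"""

def parse_vm_stat(text):
    """Return (page_size_bytes, {counter_name: page_count})."""
    page_size = int(re.search(r"page size of (\d+)", text).group(1))
    counters = {}
    for name, value in re.findall(r"^(Pages[^:]+):\s+(\d+)\.", text, re.M):
        counters[name] = int(value)
    return page_size, counters

page_size, counters = parse_vm_stat(SAMPLE)
active_gb = counters["Pages active"] * page_size / 2**30
print(f"active memory: {active_gb:.1f} GB")   # about 3.1 GB at 16 KiB pages
```

On a real machine you would feed it the output of `subprocess.run(["vm_stat"], capture_output=True)` instead of the sample string.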
@chengengaun, this is great. A huge thanks for sharing.
Also consider that file I/O goes through the virtual memory system under normal circumstances. Memory serves as a disk cache, and that cache will be released first if the system needs memory. (Written data typically gets flushed to disk but is also kept in this in-memory cache.)
This is why simply using “free memory” numbers is misleading. If free memory is low it does not mean that your system is out of memory.
As stated above, the degree of paging activity is a better indication of whether you have oversubscribed memory.
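One portable way to see that file I/O rides on the virtual memory system is `mmap`: a file’s pages are mapped straight into the process’s address space and faulted in on demand, backed by the same cache that serves ordinary `read()` calls. A small sketch (the file and contents are just scratch data for illustration):

```python
import mmap
import os
import tempfile

# Create a scratch file; once touched, its pages live in the same
# cache that backs ordinary read()/write() calls.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello, page cache")
os.close(fd)

with open(path, "rb") as f:
    # Map the file read-only: no copy is made up front; pages are
    # faulted in from the cache (or disk) on first access.
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        head = bytes(m[:5])
        print(head.decode())   # hello

os.remove(path)
```

Because those cached pages are clean (identical to what is on disk), the kernel can simply drop them under memory pressure instead of writing anything to swap.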
Ah! Very good to know! Thanks!
The pressure is low, and page swaps haven’t been happening, which is a good thing.
This memory management system first appeared in OS X Mavericks and led me to stop monitoring memory usage statistics altogether. It’s designed to speed up your computing experience by keeping RAM as full as possible with system and application cache data so it doesn’t have to be continually retrieved from your drives.
Apple guidance at the time was and still is to instead look at the graph in Activity Monitor’s Memory tab in the lower left corner called MEMORY PRESSURE. As long as it stays green, it’s working perfectly. Yellow may indicate the need to watch more closely. Red indicates a need to add RAM, if possible.
Yep. On systems with virtual memory (which is anything modern), a lot of free memory means that either you recently rebooted or there is a real problem. Free memory is memory that could be put to use for some system purpose (e.g. caching file system data).
Which is why I was able to get good performance out of my old 2011 Mac mini server that booted Sierra from hard drives. I had 16GB of RAM and rarely ran any apps needing that much, so the system could cache lots of file system data.
The upshot was that although boot times were a bit slow, and it took a while to load an app for the first time after a reboot, system performance was generally pretty good, because all of the system framework libraries and stuff were cached in that otherwise-free memory.
Yes, an SSD would have been even faster, but those hard drives were not the performance killer others experienced on their systems with the stock 4GB of RAM.
With SSD boot disks, the speed of the disk masks the penalties of swapping/paging to a degree, since those operations are much faster than on HDDs. That makes low-memory systems that start to page out appear to run faster than they otherwise would.
However, there is a downside to heavy memory oversubscription with SSDs: if the system pages heavily, the SSD’s lifespan may be reduced by the volume of page-out writes.
A wise man once told me: Nothing helps virtual like real.
Now would be a good time to segue into someone explaining Electron apps…and why you “need” 24GB!
Electron is a cross-platform software framework designed to simplify the development of applications that need to run on many different hardware and software platforms. See also Electron (software framework) - Wikipedia.
As for why you “need” 24GB of RAM, you don’t but…
Modern Macs (everything with Apple Silicon, and many recent Intel Macs) have the RAM soldered down. It can’t be upgraded. So if you plan on using the computer for a long time, it is recommended to buy it with the maximum amount available, so you will have plenty of headroom for the future, when your software may require it.
Well, what I had in mind was the practical issue of multiple macOS Electron apps each loading their own instance of Chrome (in addition to your browser) into memory in order to give you a desktop version of their app. So if I load Slack and a few other Electron apps along with my Chrome browser and all its tabs, I need a lot of physical memory lest my machine start paging out like crazy.
Or so I’ve been told… (I am not a dev, nor do I play one on the 'net.)
It’s possible. I don’t think Electron apps actually launch the Chrome browser as such, but they are using the Chromium library frameworks (which are at the core of the Chrome browser and other Chromium-based browsers like Edge and Brave). The memory used by the framework’s code and resources will be shared by every app using it, but the memory used for tracking per-app state will (obviously) not be shared.
As for how much this may be and its impact on system performance, I’ll have to defer to others, since I haven’t researched this.
I mistyped and you’re right that there are not multiple Chrome browsers, but that it’s the entire Chromium library for each app, and that is the issue as it’s a lot of memory if you have a lot of apps running. Easy in this day of SaaSes.
I would question that. Seems like macOS Memory Management would be able to realize that it already has those libraries in RAM application cache and would not reload them, but that’s beyond my technical ability to confirm.
The issue I would see is if different applications bundle their own versions of Electron and its Chromium contents. Unless the executables/libraries are installed in a location that’s common among different applications (that is, only one copy will exist), the system may not recognize them as “the same libraries” due to different paths to the components. That’s wasteful of memory since shared libraries/executables will only provide a memory sharing benefit to multiple instances of the same application.
Agreed. If each app loads its own private copy, then the code pages and resources will not be shared. On the other hand, this isn’t as big a deal as you might think. Read-only content (code and resources) is typically memory-mapped from the file system. It will be paged in when required and discarded (not written to swap space) when RAM starts running low. With a fast SSD, you may never notice these page-in events, and they don’t increase wear on the SSD.
This is very different from the frameworks’ data, which won’t ever be shared by apps, whether or not they are sharing installations of the framework. That data will be written out to the swap file(s) when RAM is needed. Of course, the data won’t be paged-in again until it is needed. So if the framework is well-written, keeping frequently-used data together and separate from infrequently-used data, the amount of paging can be kept to a minimum. But even so, it may create a noticeable performance problem and increase wear on an SSD depending on how much paging actually ends up taking place.
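The read-only vs. private-data distinction above can be sketched with `mmap` access modes: read-only mappings of the same file can be backed by the same physical pages, while a copy-on-write mapping gives each user a private view. This is only an analogy for what the dynamic loader does with shared library code and per-app data; the file and names here are invented for the example:

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"shared code page")
os.close(fd)

with open(path, "r+b") as f:
    # Two read-only mappings of the same file: the kernel can back
    # both with the same physical pages (like shared library code).
    ro1 = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    ro2 = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

    # A copy-on-write mapping: writes stay private to this mapping
    # (like per-app framework data) and never reach the file.
    cow = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_COPY)
    cow[:6] = b"privat"

    ro_view = bytes(ro1[:6])    # still b"shared" -- file untouched
    cow_view = bytes(cow[:6])   # b"privat" -- this mapping's copy only
    ro1.close(); ro2.close(); cow.close()

os.remove(path)
print(ro_view, cow_view)
```

The copy-on-write pages are exactly the “dirty” memory that must go to swap under pressure, whereas the read-only mappings can simply be dropped and re-faulted later.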
But you are right, it would be more efficient if a single copy of the Electron framework could be installed to a common location (e.g. /Library/Frameworks) instead of having a separate copy per app, because multiple apps could then share code/resource pages.
Does anyone know if the “standard” Electron installer mechanism does this? It should be possible, but I don’t know if it is actually done in practice.
Of course, if multiple apps are built using multiple versions of Electron, it could be problematic. Hopefully newer versions are backward compatible with old versions, so you can just install the latest version and let all your apps use it (similar to how Microsoft’s DirectX video drivers work). If not, then hopefully there’s a mechanism where the version can be encoded into the framework’s name in a way that allows apps to load (and hopefully share) code for the most recent compatible version.
FWIW, this is the way shared libraries work in the Unix/Linux world. A shared library’s name includes its major and minor version numbers. When an app loads, the system loader first looks for the exact version the app wants. If that’s not found, it will load some other version that shares the requested version’s major version number. If there is no installation using the same major version, then loading fails. So if the library creator makes sure that all versions sharing the same major version number are compatible with each other, apps can get upgrades and bug fixes for free as the libraries are updated.
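The resolution rule just described can be sketched like this (a simplification: the real loader matches on sonames recorded in the binary rather than scanning version tuples, and here I’ve assumed “newest minor within the same major” as the fallback choice):

```python
def resolve(requested, installed):
    """Pick a library version per the scheme above: exact match first,
    else the newest installed version sharing the major number, else fail.

    Versions are (major, minor) tuples, e.g. (1, 4) for libfoo.so.1.4.
    """
    if requested in installed:
        return requested
    same_major = [v for v in installed if v[0] == requested[0]]
    if same_major:
        return max(same_major)   # newest compatible minor version
    raise RuntimeError(f"no compatible library for major version {requested[0]}")

installed = [(1, 2), (1, 5), (2, 0)]
print(resolve((1, 3), installed))   # (1, 5): same major, newest minor
print(resolve((2, 0), installed))   # (2, 0): exact match
```

The key property is the contract, not the code: as long as everything sharing a major version stays compatible, apps silently pick up fixes from newer minor versions.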
Thanks for the clarification - it’s been a while since I’d worked with developing shared libraries and the intricacies of the loader.
One other important detail for M1 Macs is the use of Unified Memory, where the CPU and GPU share the installed RAM. So with 32 GB installed, all the usual application memory comes out of that 32 GB, and all the graphics memory is taken from the same pool. It makes tracking memory usage a lot trickier. This is a good article explaining how all that works on an M1 Mac.
Yep. It can be very tricky, indeed.
When you have discrete graphics and a GPU with dedicated memory, many resources (e.g. texture maps) may be duplicated in memory. They need to be loaded from the file system and then transferred into the GPU’s memory. If the GUI framework (Metal or otherwise) doesn’t discard the main memory copy of those resources, then they are using twice as much RAM between the two devices.
With unified memory, the GUI framework will load the resources into RAM, and the GPU will simply access it where it lies. So it may end up using less RAM overall than on a system with discrete graphics. But the practical reality will depend greatly on what GUI framework is being used and how the application is using it.
Thanks @EricRFMA, @Shamino and all - very instructive discussions! From the links shared and further references in those documents, I am reading these to further understand how unified memory works (very complicated):
“Why Apple’s M1 chip is so fast” - discussion on CPU vs GPU memory requirements (low latency low throughput vs high latency high throughput) and how the UMA addresses these:
“Explore the new system architecture of Apple Silicon Macs” - with discussion of UMA:
Metal (GPU) memory management - shared, private and memoryless:
Clean vs dirty memory: