Costs of Non-Upgradeable RAM & SSDs

Yep, and this is actually Linus’s biggest complaint in the video. He says that the RAM and storage pricing, while high, is not too far out of line with PC laptops, and that Apple’s profit margin is not unusually high for the industry.

But he objects to Apple’s base-model systems being under-powered in order to advertise a lower price. And as you point out, for many users, thanks to the inability to upgrade, this will force a whole-system replacement where it shouldn’t be necessary. He says (and I think I would agree) that if the systems would ship with a base configuration of 16GB RAM and 512GB storage for $400 more, there wouldn’t be nearly as much of an issue, because most users wouldn’t end up having to upgrade prematurely.

Yes, but since we haven’t seen an Apple SSD actually fail (at least I’ve read no reports about it), it’s unclear what this actually means. If the SSD goes read-only and the pre-boot containers are still intact, that might be enough to boot an external volume. But we don’t know because we haven’t yet seen any system get to that state. (And if you know of any articles about a real-life system that did get that far, please share it with us.)

That’s been my recommendation for quite some time - going back to the Intel systems.

The most annoying problem (and Linus’s complaint as well) is that all of the lower-cost consumer models (all the Airs, the 13" MBP and the M2 mini) ship with a stock configuration of only 8GB RAM. Getting more (at any price) requires ordering a BTO configuration, which means retail stores (including Apple’s) can’t sell you a system with more. You will either take an 8GB system or you will have to mail-order a custom configuration from Apple.

If you want a Mac with 16GB or more RAM in a stock configuration, you need to select a 14" or 16" MBP, an M2 Pro mini, a Studio or a Pro. All of which are pretty expensive systems, and are not likely to be “grab-n-go” retail purchases.

2 Likes

RAM holds the “working” memory of a computer (or phone or anything else with a microprocessor). It is where your apps and documents go when you “load” them. It holds the memory for your video display and can be used as “cache” to speed up access to storage. RAM does not have a maximum number of write cycles. Software can read and write it forever, and it should pretty much last forever, assuming it is not damaged by things like power surges.

RAM is “volatile”. That is, it loses its content when powered off. So it can’t be used for any data that you want to survive a power off/on cycle. That’s what “storage” is for.

Storage (of which SSD memory is but one kind) is persistent. When you write data to storage (usually in the form of files), that data will generally remain intact for the life of the storage device until it is explicitly erased.

There are many different kinds of storage. The most common kinds are hard disk drives (HDDs) and Solid State Drives (SSDs). The two use very different technologies.

An HDD is a spinning disc (or stack of discs) coated with a specialized magnetic coating (similar in concept to what an audio or video cassette tape might use). These devices were the primary storage media for computers from the mid-1980s through around 2010. Although substantially slower than SSDs, they remain very popular for backups and archival mass storage because they are much less expensive than SSDs (e.g. a 4TB HDD can be purchased for under $100, while a 4TB SSD may cost 2-3x that price). And they come in much higher capacities (e.g. you can buy an HDD as large as 18TB, but SSDs are generally not available in capacities larger than 8TB). In general, you can write to any region of an HDD’s storage as much as you want without wearing out any particular region. Although an old drive may develop bad spots over time, in general you can access the entire drive as much as you want until it fails altogether, and that failure is not a function of how much data you have written to it.

An SSD is a circuit board containing “flash” memory: a microchip that, unlike RAM, retains its contents when powered off. It is very fast (10-20x faster than an HDD), much smaller than an HDD, and generally uses less power. But it has a “write cycle” limit. Without going into too many of the ugly details about how flash memory is designed, each storage location on a flash chip can only be written so many times. When the limit is reached, that location can no longer be reliably written and is effectively a “bad” region. Because some data is rewritten more often than other data, the “controller” chip that manages data on an SSD implements “wear leveling”, where data-write operations are distributed across all of the flash storage, so (hopefully) no single region reaches its limit before the rest of the device, extending the usable lifespan of the SSD.
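
To illustrate the idea, here is a toy sketch of wear leveling: each logical write is redirected to the least-worn physical block, so even repeated writes to the same logical block spread wear across the whole device. This is a hypothetical, greatly simplified model (real controllers deal with pages vs. erase blocks, garbage collection, over-provisioning and more); the class and numbers are invented purely for illustration:

```python
class ToySSD:
    """Toy model: wear leveling redirects writes to the least-worn block."""

    def __init__(self, num_blocks):
        self.wear = [0] * num_blocks        # write counts per physical block
        self.mapping = {}                   # logical block -> physical block
        self.free = set(range(num_blocks))  # currently unmapped physical blocks

    def write(self, logical_block):
        # Redirect the write to the least-worn free physical block,
        # then release the previously mapped block back to the pool.
        target = min(self.free, key=lambda b: self.wear[b])
        self.free.discard(target)
        old = self.mapping.get(logical_block)
        if old is not None:
            self.free.add(old)
        self.mapping[logical_block] = target
        self.wear[target] += 1

ssd = ToySSD(num_blocks=8)
for _ in range(1000):
    ssd.write(0)    # hammer the same logical block 1,000 times
print(ssd.wear)     # wear ends up spread almost evenly across all 8 blocks
```

Even though one logical block is rewritten 1,000 times, no single physical block absorbs all of those writes, which is the whole point of the technique.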

SSDs are typically rated with an “endurance” figure: the maximum amount of data that can be written before the SSD is expected to fail, typically expressed in units of TBW (terabytes written). Larger SSDs have larger TBW figures (because there’s more flash memory to begin with). The kind of flash memory chips used, the kind of controller chip and other factors also affect the TBW rating.
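
As a rough back-of-envelope illustration of what a TBW rating means in practice (all numbers below are made-up examples, not the specs of any real drive):

```python
def years_until_tbw(tbw_rating_tb, gb_written_per_day):
    """Years until the endurance rating is reached at a steady write rate.

    Uses 1 TB = 1000 GB, which is close enough for an estimate.
    """
    days_of_writes = (tbw_rating_tb * 1000) / gb_written_per_day
    return days_of_writes / 365

# A hypothetical 1TB drive rated for 600 TBW, written at 50GB/day:
print(round(years_until_tbw(600, 50), 1))  # ≈ 32.9 years
```

At typical consumer write rates, the endurance rating is unlikely to be the first thing that fails; the concern is workloads that write far more than 50GB/day.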

The size and kind of flash memory used by Apple’s SSDs gives them a pretty high TBW rating, so the SSD should not wear out before the computer’s end-of-life, but as @dustin pointed out, certain usage patterns (e.g. memory-intensive apps on a computer without a lot of RAM) can cause an unusually large amount of data to be written to the SSD, which will cause it to reach its write-limit sooner. Whether this will cause it to reach that limit before the computer reaches its end of life for another reason (e.g. not enough power for modern apps) is a question that IMO remains to be answered.

For this reason, I have resisted getting an SSD-based computer for a long time. I much prefer HDDs, which don’t have any concept of a write-cycle limit. But modern operating systems (all of them - macOS, Windows and Linux) are designed in a way that really requires the high speed performance of an SSD. Running them on a computer with only HDD storage makes them painfully sluggish, so it is no longer a practical alternative, although I still recommend HDDs for things where high performance is not a requirement (e.g. backups, media collections and storage of documents that you’re not actively working on.)

7 Likes

I completely agree with this recommendation, especially for large media collections. Modern multi-TB HDDs are more than fast enough that speed will not be an issue. This is especially true for those of us for whom modern internet speeds (i.e., > 15-20 Mbps) remain only a dream, but who still have large DVD/CD/digital-download collections available for local streaming.

Simon pretty much sums it up: there is an advantage to integrated storage/memory in device design, along with fewer components (sockets, flex cabling, 3rd-party parts) to troubleshoot or warranty. Apple is notorious for its markup on storage. I know this because, right now, somewhere in mainland China, my M2 Mac Studio with 64GB and 2TB is being processed to ship next week, or sooner. It’s replacing my 2012 Mac mini, and I have been on and off the fence for the last few months about whether to replace it with an M2 Mac mini or the Studio. It came down to a few hundred dollars more for the faster-bus Studio, the SD card reader and built-in 10Gb Ethernet. But it was a lesson in how Apple’s PhDs have formulated the upgrade costs/pricing tiers.
E.g. one can get a 2TB M.2 SSD (Samsung 980 Pro) for $120, while it’s a $600 upgrade for 2TB on a Mac Studio, or $1200(!!!) for 4TB, where a comparable component like the older 870 EVO is less than $250 (and a 990 is due soon). It’s the same $600 to go from 512GB to 2TB on a Mac mini. Even an additional 16GB of “unified” (aka shared) memory is $400.
That is beyond rude; it’s more like greed. And you have no choice if you need a Mac with added storage.
And like Simon, I feel Linus needs to just go away. He’s annoying, and he also bullied a colleague years ago, which made some viewers uncomfortable. There are other watchable, tech-savvy YouTubers out there, but Linus, IMHO, isn’t one of them.

2 Likes

@Shamino Have you not switched over to an SSD-based computer? If you have a computer that is upgradable internally, or that has reasonably fast TB connectivity, you should REALLY do so. Prices are low enough now, and capacities high enough, that the modest lifetime limitation really doesn’t matter: a 2TB Samsung 970 EVO Plus NVMe is $75 on Amazon right now, an amazing price for a really great drive (the empty OWC Envoy Express Thunderbolt 3/4 case would work well with it and give you a great external flash boot drive). Chances are you’re not going to hit the write-cycle limit for many years (unless you do tons of file I/O constantly). The performance gains are so incredibly life-changing that you’ll never want to use a spinning-disk boot drive again. Keep secondary spinning-disk storage for your media (especially material that will be read sequentially, such as a long movie). Operating systems are now so complex, and need to load so many bits of information during boot and operation, that being able to access that data effectively instantly, and concurrently, is a huge win. No need to rapidly seek a physical head across different sectors of a spinning disk as your system boots…

For all my whingeing on this topic, I do want to stress that I don’t see the write-cycle limit of SSDs as the problem; it’s that the SSD is integrated and non-replaceable. I’d be thrilled if Apple would keep an integrated 256GB SSD with their M-series chips and use it for the core System volume: the OS, their own apps and that kind of thing. Lock it down, whatever. Then provide the fastest possible interface on the system board for an NVMe drive (or two in the desktops/MacBook Pros). All the user-based activity could live on this secondary (but still quite fast) storage, and if I need to upgrade or replace it, I can swap it out (perhaps with an “Apple M-series Storage Certification” that drive makers could earn, guaranteeing peak performance in Apple’s computers; I’d pay a $25 premium for that, and more if it offered special service or better features).

The Apple SoC SSD would still eventually wear out, but that would be very far in the future, since it would mainly see use for system updates, firmware upgrades and perhaps some system caching; ideally the main disk swap would happen on the user-replaceable drive. Some I/O performance might be sacrificed, but low- to medium-tier users won’t notice. Higher-end Macs could still offer the SoC SSD in much larger sizes (and prices) for users who really need the speed, but they should also come with the NVMe slot for adding more storage and to allow for long-term use when the device goes on to its next owner.

For the most part, the performance of the M-class Macs should be good enough for a very long working life. That would be better for the planet and would get computing into the hands of more people. I’ll upgrade my system after 5-6 years, but it should still be usable by someone with lesser computing needs. Others might upgrade more frequently, which makes all the more of a case for ensuring the computer itself keeps working downstream.

1 Like

Apple is using completely standard RAM, and it is definitely still connected to the CPU via a bus.

1 Like

I see no real benefits to us consumers from this practice. Apple implemented it solely as a means to gouge its customers. The MacBook Pro I use is no thinner or lighter than my Windows laptop, but the latter has slots to upgrade the RAM and storage. Even my son’s ultra-thin Windows laptop has upgradeable RAM and storage. Claims of enhanced reliability also sound like nonsense, since there is nothing wrong with the reliability of laptops that have user upgrade options. It’s the same story as gluing in batteries.

It might even be less of an issue if Apple didn’t deliberately cripple the base model, which they do to push the overpriced upgrades. No modern laptop should be sold with a mere 256GB internal drive unless that drive is readily upgradeable later by the user. External storage is neither a substitute nor cheap if you want it fast enough to match an internal device. Minimum RAM today should be 16GB. However you slice it, it is all about ripping off the customer.

I don’t find Apple’s total cost of ownership lower either. Their computers have never really been especially reliable nor is the software as robust as it is made out to be. Their machines have of course always carried a premium price, but the gouging has become worse in the last decade.

1 Like

As far as I can see there is no speed advantage to the integrated SSD chips in Apple’s products. They’re faster than SATA devices, obviously, but everything is using NVMe now.

1 Like

Costco has a big sale on Windows machines right now.

3 Likes

Ultimately my 2020 iMac purchase with 128 GB RAM was feasible only because I could add the RAM myself. It’s almost certainly the case that nobody else cares about this, but the ability to buy third-party RAM was bargaining power well worth having. Now I can’t, and while I certainly desire the Mac Studio, adding that much RAM (and matching 8 TB storage) takes it into enviously-looking-at-thy-neighbour’s-spouse territory. The question I have for people using the new i9s or Threadrippers is this: how do you like your power consumption and ambient noise levels? Because ultimately that’s something I’d rather not walk back, since my M2 Pro Mac mini has made my living room setup absolutely quiet, even with the hearing aids cranked all the way up.

And yes I’ve seen an Apple SSD fail: the one in the 2012 Mac Mini server. It was my own fault for mis-aligning the RAID partitions in Linux, but still, these things are certainly perishable.

But that’s not the same device. The 2012 mini (like its 2011 predecessor) has two SATA ports for storage. If you get an SSD equipped model, you are getting a third-party SATA SSD. The behavior of these over time is pretty well understood, but the consequence for failure isn’t nearly as catastrophic because the drive can be replaced.

I was referring specifically to Apple’s non-replaceable SSDs, where they are soldering raw flash chips to the board (or in the case of the Studio and Pro, are using proprietary installable modules) and are using their own proprietary SSD controller (part of the T2 chip for Intel, or part of the M* processor for Apple Silicon).

So far, I haven’t read any reports about these wearing out. If you know of one, I’d love to read it so we can start to get a feel for the expected longevity of these systems’ storage.

1 Like

Nope, you’re right, of course, and indeed my 2012 mini has had both drives replaced with Crucial MX500s, but I really haven’t heard of a contemporary (integrated) Apple SSD failure yet. Still, I’m not betting on that staying rare for long; ultimately the flash itself isn’t anything special, so it’s just a matter of time, particularly for the low-memory configurations. But in the end it doesn’t actually matter, because Macs are now disposable by design, meant to be recycled and not upgraded, so that’s what will happen.

Incidentally, I only just managed to upgrade the firmware on the 2012 mini from High Sierra; evidently Apple pulled a rabbit out of a hat, because until recently you needed official Apple SSDs to upgrade your firmware correctly. I certainly don’t dispute the value of Apple “controlling the whole widget”. But in an age when we should care about hardware longevity, it really is sad to see the end of modularity, and the ruthless way in which Apple pursues its advantage at longevity’s expense.

The T2 equipped Macs have been out for a number of years now. I wonder what the failure rate is on the soldered-down SSDs in that configuration. Might give us a clue on how the Apple Silicon Macs might age.

I consider Macs to have a nearly guaranteed 5-year lifetime and will keep them longer if they still work. I have never had a Mac fail fatally, having used nothing else since the first 128K Mac in 1984 (which I upgraded in our electronics shop to 512K). I think it highly unlikely that an SSD will be the cause of a Mac replacement during the first 5 years.

About upgradability, I have absolutely no problem with non-upgradable RAM - M-series Macs derive their great performance from tightly coupled RAM with very short leads to the CPU and this would be badly sacrificed if Apple used plug-in RAM. The M1 Pro MBP I am typing this on has a 200 GB/s memory bandwidth and I don’t see how this could be achieved with upgradable RAM. SSDs are another matter - it would be nice if Apple allowed those to be easily upgraded, though a socket is never as reliable as a soldered-in chip.

I concur with others in considering Linus’s YouTube videos unwatchable - he is almost an archetype of the obnoxious YouTube presenter, and I don’t understand his popularity.

I didn’t say the RAM was proprietary. But it is state-of-the-art LPDDR, and it’s mounted directly next to the SoC on a high-speed bus similar to the AMD Infinity Fabric used between the chiplets inside Zen CPUs. It’s not stuck on a PCIe bus that interconnects

CPU ↔ RAM ↔ GPU / RAM.

But rather CPU / GPU ↔ RAM

That apparently is far more efficient than even Apple’s engineers and executives expected. The numbers they saw were so much better than anticipated, especially after adding active cooling for the Mac mini and 13" MacBook Pro, that the M1 launched in more models than just the Air.

The SSD flash chips are standard as well. Other manufacturers also surface-mount SSD flash chips, rather than using an M.2 NVMe board with its own controller that plugs into a socket connected to a disk controller on the system board. But Apple did something unique.

Apple surface-mounted the SSD flash chips to the system board, but the only disk controller is in the Apple Silicon SoC. The chips are also encrypted from the factory; they are always encrypted. With the Studio and Mac Pro, Apple put the controller-less chips in a module that plugs into an M.2-style socket, but the chips still connect only to the SoC, with no disk controllers or firmware in between. A variety of manufacturers provide the SSD flash chips, and Apple only supports some of the configurations on the open market. Apple only supports specific configurations, and that’s why you cannot simply swap the removable SSD modules to upgrade your storage (besides the fact that you can’t buy these modules).

Now when I say the RAM & SSD chips are “standard”, I mean they are not anything radically new.

But apparently, they are somewhat custom: you cannot buy these chips on the open market, because Apple has had them made bespoke. That means third-party repair shops have to harvest components from donor system boards left over from previous repairs. This is a bad thing, since used SSD flash chips have a finite write lifespan per cell, so replacing a chip with a used one might mean it fails sooner than a brand-new chip would.

The main reason to solder RAM is the greatly increased efficiency of communication with the CPU/GPU cores within the SoC, as well as with the disk controller and Secure Enclave, also within the SoC. Soldering the SSD to the system board saves space better used for more battery cells, and it allows for a thinner case.

99.8% of all computer users never ever upgrade their RAM nor SSD. Most computers are replaced on 3-5 year cycles. Apple mobile and Mac devices will be dropped from the latest operating system after 7 years. By year 8/9 such a system won’t be receiving security updates.

Sure, you can buy PC laptops that are upgradable, but more and more are going the small-form-factor route and soldering down RAM and/or SSD chips.

You could go Linux for the ultimate freedom, and frankly the Linux GUI has been making leaps and bounds of late. You could put it on a refurbished 20-year-old ThinkPad and it would run about as well as a MacBook Air, minus the battery life. Printing works without doing anything; the printer just magically appears. The desktop user experience is quite nice with the latest GNOME or KDE. But you would have to learn considerably more about Linux, and there would be much more DIY involved, though you would have full freedom to tinker.

Macs work out of the box and everything is tightly integrated with many benefits to the Mac ecosystem. Far more commercial software is available on macOS than Linux. Nobody is beating Apple at the battery lifespan and efficiency of Apple Silicon.

Choose wisely when buying any computer that isn’t upgradable: allocate enough storage up front, or offload your excess files to a cloud drive or NAS. But statistically, it’s a very small percentage of users who find themselves upgrading rather than just buying more RAM and storage with their next computer.

1 Like

Sadly, I’ve had a few. A Quadra 840av and a IIci both failed (but long after I was done with them). They simply wouldn’t power-on one day. I suspect they needed capacitor replacement, either in the power supply or on the motherboard. But I didn’t have the time or ability to repair them.

There are solutions. You’re right that simply moving your main memory to an SO-DIMM socket will sacrifice performance. Dell, HP and others are discovering the same thing.

Dell is testing the field with CAMM memory, which is designed to deliver high performance in a replaceable module. Dell’s implementation is proprietary, but it is expected that JEDEC (the standards body for these kinds of components) will have a standard later this year. (See also Tom’s Hardware.)

Apple could choose to adopt something like CAMM. But I don’t think they will see a business reason to do so.

But I also think they could support external memory by using their unified memory mechanism as a high-speed cache. The same way that today’s CPUs have L1, L2 and L3 caches to adapt slower external memory to the CPU’s internal buses, Apple could update the M-series SoC so that its unified memory (of whatever size) acts as an L4 cache in front of external memory. And if there is no external memory, then the unified memory is simply the system memory.

With that architecture, systems like laptops (where you probably don’t need more than 32GB) can continue doing what they’re doing today, but the big desktop systems (Mac Pro and maybe the Studio) can be equipped with large amounts of this unified/cache (e.g. 128GB or 256GB) and provide sockets to support extreme amounts of socketed memory (e.g. the 1.5TB that Intel Mac Pros can support).
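
As a rough sketch of why that could work: the effective access time of such a hierarchy is just the hit-rate-weighted average of the two tiers. The latencies and hit rate below are invented for illustration and are not measurements of any real Apple hardware:

```python
def effective_latency(hit_rate, cache_ns, backing_ns):
    """Average access time with a fast cache in front of slower memory."""
    return hit_rate * cache_ns + (1 - hit_rate) * backing_ns

# Suppose (hypothetically) on-package unified memory answers in ~100ns
# and socketed external DIMMs in ~200ns. With a 95% hit rate in the "L4":
print(round(effective_latency(0.95, 100, 200), 2))  # 105.0 ns
```

With a high hit rate, most accesses would still run at near-unified-memory speed even with much slower socketed memory behind it, which is the whole appeal of the cache approach.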

I guess it’s just a matter of taste. I find him fun to watch.

Sort of. They are generic RAM and flash chips. But they contract with the manufacturers to have custom configurations (pinouts, voltages, etc.). Depending on how suspicious you are, this may be to let them better integrate with the rest of the system or it may be for the purpose of preventing third parties from being able to perform repairs/upgrades.

Either way, they do sign exclusive contracts with chipmakers in order to make sure nobody else can legally buy them. Which is just mean.

If it was just a matter of “specific configurations”, you would very quickly find aftermarket suppliers selling those configurations.

The problem is that the flash modules are cryptographically paired with the CPU. They can’t be used without being paired and the software to do the pairing is not available for third-party repair shops (although it may be possible on some Macs using Apple’s Configurator tool).

You might find this article of interest:

On an M1 mini, the flash chips can be replaced/upgraded, but Apple made it about as hard as humanly possible, including “underfill” beneath the chips, small passive components in very close proximity, and cryptographic pairing (making it impossible to transplant chips, though it appears to be possible to install factory-blank chips).

When I read articles like this, it reinforces my belief that some of the reasons for Apple’s decisions are purely for the purpose of forcing users to pay for whole-system upgrades when they only need what should be a minor upgrade.

The Mac Pro and Studio are the perfect examples. Apple can easily replace the flash modules, and they will do it for warranty repairs. But they won’t sell you upgraded modules for any amount of money and they won’t release the software needed to allow a third-party to sell an upgrade kit.

1 Like

The problem with using the fast memory as a cache is the need for paging. That requires a memory controller, and usually that the fast memory (the cache) be somewhat associative (“set-associative”) to avoid thrashing and keep memory accesses fast. That is a very complex problem, usually handled with lots of on-chip logic for the on-chip caches.
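
For readers unfamiliar with the term, here is a minimal sketch of a set-associative lookup. It is illustrative only; real hardware does this with parallel tag comparators, not Python lists:

```python
class SetAssociativeCache:
    """Toy set-associative cache with LRU replacement within each set."""

    def __init__(self, num_sets, ways):
        self.num_sets = num_sets
        self.ways = ways
        # each set holds up to `ways` tags, ordered least- to most-recently used
        self.sets = [[] for _ in range(num_sets)]

    def access(self, address):
        """Return True on a hit, False on a miss (filling the cache on a miss)."""
        index, tag = address % self.num_sets, address // self.num_sets
        s = self.sets[index]
        if tag in s:
            s.remove(tag)
            s.append(tag)      # refresh LRU position
            return True
        if len(s) == self.ways:
            s.pop(0)           # evict the least-recently-used way
        s.append(tag)
        return False

cache = SetAssociativeCache(num_sets=4, ways=2)
# Addresses 0 and 4 map to the same set, but with 2 ways they can
# coexist, so alternating between them doesn't thrash:
hits = [cache.access(a) for a in (0, 4, 0, 4)]
print(hits)  # [False, False, True, True]
```

In a direct-mapped (1-way) cache, the same access pattern would miss every time, with 0 and 4 repeatedly evicting each other; that thrashing is exactly what associativity avoids.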

I guess I don’t understand the need for huge amounts of memory - maybe for some very complex astrophysical problem or weather prediction and modeling but these are very extreme edge cases and Apple probably doesn’t need to cater to them (in my opinion). Apple currently allows 192 GB in the ultra chips and I would guess that the M3 variants would allow even more. Remember, this memory is extremely fast - close to a terabyte per second bandwidth. Maybe the few people who need more should refine their algorithms.

It’s problematic to tell an established market - one that has been buying these $50,000 servers - that their needs don’t matter, or that their software developers are incompetent and should be able to make do with 1/8 of the memory their applications require.

These customers are simply going to go to Dell and HP - who will sell them a computer with the resources they need. And once they’ve migrated their workflow to Windows or Linux, they will never come back.

3 Likes

Before this becomes another online slugfest, can I just ask if we even know if Apple has any market that requires >192 GB RAM?

I mean, just for some perspective, until the M1-powered MBPs came out, there was no workstation-class performance on any Apple portable, actually on any portable at all if we discount 10-lb “laptops” with 30 min battery life.

I am glad you changed your reply and replaced the phrase “incredibly arrogant” with “problematic” - the former phrase was out of place in a congenial forum (it did appear in the email I received). This group is increasingly resembling the usenet of ages ago with easily started “flame wars”. I am done with it.