Has the M1 really made Intel that desperate?

It depends on what kind of chips customers require of them. Intel currently supports processes down to 10 nm. This is the process Apple’s A10X (currently only used in the Apple TV) uses.

Most of Apple’s current devices are based on the A12, A12Z, and A13 chips, which use a 7 nm process, and on the A14 and M1, which use a 5 nm process. Intel can’t do this today, but they claim (for whatever that’s worth) that they will be able to do 7 nm soon.

To be fair, not all chips are manufactured with the latest and greatest process technology. Intel can probably make good money fabbing chips at 10nm and larger processes, even though they won’t be able to make cutting edge CPUs and GPUs.

Intel can fab, and they can make money doing so, no doubt. But I think it’s quite clear they’d need to up their game considerably if they want to one day fab for Apple.

Their mixed messaging here has not done them any favors either. You don’t publicly dunk on a company you plan to court the very next day. I suppose they’re banking on Tim not reacting the way Steve would have.

In addition to royally pissing off Apple over a long period of time, it looks like Intel’s announcement has just made them another big enemy: TSMC. And for Intel’s fab venture to succeed, they will need to deliver products flawlessly and on time. They do not have a good track record of achieving those goals, especially where Apple was concerned.

I think Intel has stared further into the future and realised that aside from embedded controller chips, the future could be very bleak.

I liken the launch of the M1 to the launch of the original iPhone in its disruptiveness. Before 2007, everyone obsessed over BlackBerry vs. Nokia keyboard layouts, flip or slide covers, extendable or integrated antennas. The iPhone simply blew all those arguments out of the water, and discussions about clock speeds and chips became irrelevant. Now, some 15 years later, the most important decisions about your phone are camera quality, storage size, and so on.

Intel has looked at the M-series and realised that it will be just as fundamentally disruptive, albeit over a longer timeframe. Give it five years, and the questions of how many cores, what clock speed, how much memory, and how long the laptop battery lasts will become as irrelevant as a BlackBerry. It will come down to how much storage you need and (for some) whether it has this year’s M7 or last year’s M6 chip. As with its foray into mobile phones, Apple will have succeeded in commoditising the personal computer market - and all without owning any chip production capacity of its own. It’s absolutely no coincidence that Intel are creating foundries open to third parties. By the third or fourth iteration of the M-series, it’s going to become apparent that custom-made chips, rather than standard off-the-shelf processors, will be the way forward.

It’s going to take more time than the iPhone did, not least because there are hundreds of millions of geeks who have invested time, knowledge and money in extracting every last drop of performance out of custom-built Intel and AMD machines. But once the M-series starts to ramp up its performance the penny will drop.

So yes, Intel are desperate. There’s still going to be a reasonably lucrative life in the embedded controller market, but the cream of the personal computing and server markets will be under serious threat within a few years. Those ads are just the tired flailings of an old campaigner.


Actually, SoCs are printed on wafers nowadays - the process is essentially photolithography.


The problem for Intel is not the loss of Mac chip sales - it’s that the lifestyle company in Cupertino is producing devices which are kicking butt and taking names - and doing so at a fraction of the power consumption.

This can have real consequences affecting sales of competing hardware - and anyone who’s OS agnostic can find the difference in price/performance quite compelling.

Most people are looking at these first Apple Silicon Macs wrong - these aren’t Apple’s powerhouse machines: they’re simply the annual spec bump of the lowest-end Apple computers, with DCI-P3 displays, Wi-Fi 6, and the new Apple Silicon M1 SoC.

They have the same limitations as the machines they replace - 16 GB RAM and two Thunderbolt ports.

These are the machines you give to a student, teacher, lawyer, accountant, or work-at-home information worker - folks who need a decently performing machine with decent build quality and who don’t want to lug around a huge powerhouse machine (or pay for one, for that matter). They’re still aimed at the same market segment, though they now have a vastly expanded compute power envelope.

The real powerhouses will probably come later this year with the M1x (or whatever it’s called). Apple has yet to show an external memory interconnect or a multichannel PCIe scheme, assuming they decide to move in that direction.

Other CPU and GPU vendors and OEM computer makers, take notice - your businesses are now on limited life support. These new Apple Silicon models can compete on speed up through the mid-to-high tier of computer purchases, and if, as I expect, Apple sells a ton of these, many will go to your bread-and-butter customers.

In fact, I suspect that Apple - once they recover their R&D costs - will push the prices of these machines lower while still maintaining their margins. Competing computer makers will still have to pay Intel, AMD, Qualcomm, and Nvidia for their expensive processors, whereas Apple’s cost goes down the more they manufacture. Competing computer makers may soon be squeezed by Apple Silicon’s price/performance on one side and high component prices on the other. Expect them to demand lower processor prices from the above manufacturers so they can compete more readily, and the processor makers may have to comply, because if OEM computer manufacturers go under or stop making competing models, the processor makers will see a diminishing customer base.

I believe the biggest costs for a chip fab are startup costs - no matter what processor vendors would like you to believe. Design and fab startup are expensive, but once you start getting decent yields, the remaining costs are silicon wafers and QA. The more of these units Apple can move, the lower the per-unit cost and the better the profits.
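As a back-of-the-envelope illustration of that economics argument (every number below is invented, not an actual Apple or TSMC figure), amortizing fixed startup costs over volume looks like this:

```python
def per_unit_cost(fixed_costs, variable_cost, units):
    """Amortized cost per chip: fixed design and fab-startup costs
    spread across the production run, plus per-chip wafer/QA cost."""
    return fixed_costs / units + variable_cost

# Hypothetical numbers, for illustration only.
FIXED = 2_000_000_000   # design + fab startup, in dollars
VARIABLE = 50           # wafer share + QA per chip, in dollars

for units in (10_000_000, 50_000_000, 200_000_000):
    print(f"{units:>11,} units -> ${per_unit_cost(FIXED, VARIABLE, units):,.2f}/chip")
```

With these made-up inputs, 10 million units cost $250 per chip while 200 million units cost $60 per chip - which is the point about volume driving Apple’s per-unit cost down.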

So … who should buy these M1 Macs?

If you’re in the target demographic - the student, teacher, lawyer, accountant, or work-at-home information worker: this is the Mac for you.

If you’re a heavy computer user like a creative and don’t simply want a light and cheap computer with some additional video and sound editing capability for use on the go - I’d wait for the M1x (or whatever) later this year. You’ll probably kick yourself when the machines targeted at you finally appear.


To paraphrase Verne, the M1 computers introduced last year are the opening shot. The performance of the M1 machines appears comparable to or better than that of the top-of-the-line Intel machines, but Apple placed the M1 in its low-end computers and didn’t upgrade the rest of the package.

I know that I was all ready to order an M1 MacBook Pro to replace my current 13" 4-port Touch Bar version when I noticed that it had only two ports and that the camera hadn’t been upgraded to 1080p.

So, that was only the initial shot; replacements for the rest of the line will surely be even more powerful.

By the way, Jean-Louis Gassée has just published a good analysis of Intel’s predicament here.


I’d be a good bit more optimistic about an iMac rollout tomorrow. Yes, the Pros and the questions about graphics cards still hang in the air, but the current low-end M1s still aren’t breaking a sweat as far as I can see. My son just did a 3D animation on his base-model M1 mini; he’d been eyeing an Alienware box for a while, but now he has a dilemma on his hands.


Somewhat curiously underreported, but at the moment the 2019 and 2020 iMacs are actually more performant than the current crop of M1 Macs, processor-wise. It will be interesting, indeed, to see what comes next.

The M1 Mac SoC (system on a chip) and the A14 in the iPhone 12 and iPad Air share a common, scalable architecture - and the thing that’s got everyone in a tizzy is the Firestorm high-performance cores in both.

The A14 has two high performance Firestorm cores and four high efficiency Icestorm cores; the M1 has four Firestorms and four Icestorms.

What makes a computer feel fast is the high-performance cores - not all tasks can be multithreaded, and multithreading (or multiprocessing) is an arduous process that introduces complexity and the chance of unintended consequences (bugs).

In servers, multiprocessing is the norm - most of the processes a server runs are discrete, so you can assign a processing engine (core) to each client and do the work of a bunch of clients in parallel. On consumer computers, a bunch of cores can easily end up as a bunch of idle cores. The speed of any single task is still limited by the speed of your fastest core.
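That ceiling has a name: Amdahl’s law. A minimal sketch (the 50% parallel fraction is an arbitrary assumption for illustration, not a measured workload):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: the overall speedup when only part of a
    workload can be spread across multiple cores."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# A workload that is only half parallelizable barely benefits from
# piling on cores: the serial half still runs on a single core.
for cores in (2, 4, 8, 16):
    print(f"{cores:>2} cores -> {amdahl_speedup(0.5, cores):.2f}x")
```

Even with 16 cores, a half-serial workload speeds up by less than 2x - which is why the speed of a single core dominates how fast the machine feels.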

The hardest thing you can do is produce a faster core - adding cores, with the attendant bogeymen of cache coherence and the like, is relatively simple by comparison. When you see a CPU maker add a bunch of cores to a consumer computer, what you’re seeing is a band-aid - a way to keep the numbers up without doing the really hard work of improving single-core performance.

Sure, there are workloads that can benefit from multiprocessing - transcoding video can be split into subtasks by chopping the video into chunks between keyframes and dispatching the work to any number of cores. Your OS can give a display-refresh task its own core to keep the video buffer refreshed independent of other processing going on in the foreground. But the number of parallel tasks available on a consumer computer doesn’t justify the 10-core, 20-thread CPU used in my 2020 iMac 5K - most of the time, most of the cores simply sit there and soak up power, though the dispatcher does round-robin work across the cores to keep them from looking idle and to even out wear.
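The transcoding case can be sketched like this - a toy model, not a real codec: the frame data is made up and the keyframe positions are passed in by hand rather than detected:

```python
from concurrent.futures import ProcessPoolExecutor

def split_at_keyframes(frames, keyframes):
    """Split a frame sequence into independently decodable chunks,
    each beginning at a keyframe, so they can be transcoded in parallel."""
    cuts = sorted(k for k in keyframes if 0 < k < len(frames))
    chunks, start = [], 0
    for cut in cuts:
        chunks.append(frames[start:cut])
        start = cut
    chunks.append(frames[start:])
    return chunks

def transcode_chunk(chunk):
    # Stand-in for the real per-chunk work (decode, then re-encode).
    return [f"enc({frame})" for frame in chunk]

if __name__ == "__main__":
    frames = list(range(12))                    # fake frame numbers
    chunks = split_at_keyframes(frames, [4, 8]) # keyframes at 4 and 8
    with ProcessPoolExecutor() as pool:         # one chunk per worker
        encoded = list(pool.map(transcode_chunk, chunks))
    print(encoded)
```

Each chunk starts at a keyframe, so no chunk depends on another - that independence is what lets the work fan out across cores.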

So how did Apple - that lifestyle company from Cupertino - end up designing one of the fastest cores available giving the Wintel alliance sleepless nights all around the world?

In 2008, Apple acquired PA Semi and worked with the cash-strapped Intrinsity and with Samsung to produce a FastCore Cortex-A8; the frenemies famously split, and Apple used that IP plus Imagination’s PowerVR to create the A4, while Samsung took its tech and produced the Exynos 3. Apple then acquired Intrinsity, continued to hire engineering talent from IBM’s Cell and XCPU design teams, and brought in Johny Srouji, who had worked on IBM’s POWER7 line, to direct the effort.

Apple continued this divergence from standard ARM designs, nurturing and building their Silicon Design Team (capitalized out of respect) for a decade, ignoring ARM’s standard core designs in favor of their own architecture and improving and optimizing it year after year.

Whereas other ARM processor makers like Qualcomm and Samsung now pretty much use standard ARM-designed cores, Apple has its own designs and architecture and has greatly expanded its processor acumen, to the point where the Firestorm cores in the A14 and M1 are among the most sophisticated processors in the world: an eight-wide design with a roughly 690-instruction execution queue, a massive reorder buffer, and the arithmetic units to back it up - which means its out-of-order execution engine can execute up to eight instructions simultaneously.

x86 processor makers are hampered by the CISC design’s variable instruction length. This means that at most they can produce a three- or four-wide design, and even for that the decoder has to be fiendishly clever, since it has to work out where one instruction ends and the next begins.
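Here is a toy model of why decode width is so much easier with fixed-width instructions (not real machine code - the “first byte encodes the instruction length” rule below is invented for illustration):

```python
def boundaries_fixed(code, width=4):
    """Fixed-width ISA (e.g. 4-byte ARM instructions): every
    instruction start is computable up front, so several decoders
    can attack the byte stream in parallel."""
    return list(range(0, len(code), width))

def boundaries_variable(code):
    """Toy variable-length ISA where the first byte of each
    instruction encodes its length: the start of instruction N+1
    is unknown until instruction N has been decoded, forcing a
    serial scan (or clever speculative guessing)."""
    pos, starts = 0, []
    while pos < len(code):
        starts.append(pos)
        pos += code[pos]   # must decode this instruction's length first
    return starts
```

In the fixed-width case the boundary list is pure arithmetic; in the variable-length case each boundary depends on decoding the previous instruction, which is exactly the serial dependency a wide x86 decoder has to fight.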

There’s a problem shared by x86-64 processor makers and Windows - they never met an instruction or feature they didn’t like. What you get, then, is a build-up of crud that no one uses but that still consumes energy and engineering time to keep working.

AMD can get better single-core speed by pushing up clocks (and dealing with the exponentially increased heat - chiplets are probably much harder to cool), and Intel by reducing the number of cores (the top of the 10-core, 20-thread 10900K actually had to be shaved to achieve enough surface area to cool the chip; at 14 nm it had reached the limits of physics). Both run so hot that they are in danger of soon running into Moore’s Wall.

Apple OTOH ruthlessly pares underused or unoptimizable features.

When Apple determined that ARMv7 (32-bit ARM) was unoptimizable, they wrote it out of iOS and removed those logic blocks from their CPUs within two years, repurposing the silicon real estate for more productive things. Intel, AMD, and yes, even Qualcomm couldn’t do that in a decade.

Apple continues that with everything - not enough people using Force Touch? Deprecate it, remove it from the hardware, and replace it with Haptic Touch. Gone.

Here’s another secret of efficiency - make it a goal. Last year, on the A13 Bionic used in the iPhone 11s, the Apple Silicon team introduced hundreds of voltage domains so they could turn off parts of the chip not in use. Following their annual cadence, they increased the speed of the Lightning high-performance and Thunder high-efficiency cores by 20% despite no change in the 7 nm process. As an aside, they increased the speed of matrix multiplication and division (used in machine learning) by six times.

This year they increased the speed of the Firestorm high-performance and Icestorm high-efficiency cores by another 20% while dropping the process from 7 nm to 5 nm. That’s a hell of a compounding rate and explains how they got to where they are. Rumor has it they’ve bought all of TSMC’s 3 nm capacity for next year’s A16 (and probably the M2).

Wintel fans would deny the efficacy of the A-series processors and say they were mobile chips, as if they used slower silicon with wheels on the bottom or more sluggish electrons.

What they actually were was high-efficiency chips, passively cooled and living in a glass sandwich. Remove them from that environment, let them breathe more easily, boost the clocks a tad, and they become a raging beast.

People say that the other processor makers will catch up in a couple of years, but that’s really tough to see. Apple Silicon is the culmination of a decade of intense processor design financed by a company with very deep pockets - one fully cognizant of the competitive advantage Apple Silicon affords. Here’s an article in AnandTech comparing the Firestorm cores to the competing ARM and x86 cores. It’s very readable for an article of its ilk.

Of course, these are the Firestorm cores as used in the A14; they are not as performant as the cores in the M1, thanks to the M1’s higher 3.2 GHz clock speed.


This is in line with how Apple has been pricing Macs since 1984:

“It had an initial selling price of $2,495 (equivalent to $6,140 in 2019)”

Let’s see what Apple has up its sleeve tomorrow.

I didn’t buy the 128K - I thought it was too much a toy - but I did buy the 512K (fat Mac).

Whoa those things were expensive (for the time).

And this really ticked off the developers. The original goal was a mass-market computer, priced lower than an Apple II. Later on, design changes forced the price up, but the team worked very hard to keep it below $1,500, which they considered the most that people would be willing to pay.

Still later design changes forced the price up to $1,995. Then John Sculley decided to bump it to $2,495 in order to finance a bigger marketing budget, which the developers (and Steve) saw as a betrayal of the Mac’s purpose (a computer “for the rest of us”).

Folklore.org: Price Fight.


In 1984 my husband bought a LaserWriter along with an SE/30, and they cost about the same.

Creative and communications pros and businesses were a very big part of Steve Jobs’ strategy for Macs, and he bankrolled Adobe founder John Warnock to develop PostScript. Warnock had not been able to convince any other company that an industry-wide, standardized type-management system for personal computers and imagesetters was a game-changing idea.

Apple also developed ColorSync, the first desktop color management system; it’s still exclusive to Macs and remains a digital imaging standard.

These were totally revolutionary, and upended the graphic communications industries, as well as for many other types of businesses, from end to end. It was simply not possible to manage color or type on desktop computers prior to Macs.


I agree that single-core performance per watt is an important gain, and it’s clearly got Intel and AMD running a little scared. But right now, lots of memory and multi-core performance are still important for some workloads, like virtualisation. I do hope Apple are getting ready to wow the developer crowds again, although I’ll be a little annoyed if it turns out my tentative journey into Apple Silicon by way of a Mac mini was premature …

Indeed I foresee a future where my most powerful Silicon-based Mac is actually a notebook (without a touch bar, please, Apple) and my desktop runs traditional workloads on more generic Intel- or ARM-based hardware, in multiple operating systems. I’d get the best features of both. My 2019 iMac is ready to go for at least another 6 years; hopefully longer!


Another critical Apple technology was WorldScript. Introduced in System 7.1, before Unicode was a standard, this was (as far as I know) the first instance of mainstream desktop system software supporting full multilingual operation.

Here’s an archived copy of an Apple web page providing background information about WorldScript: WorldScript: Apple’s technology to make the Macintosh the best personal computer for the World

I remember reading an incredible whitepaper about internationalization from that time period (the early ’90s), but I can’t seem to find it today. It’s probably on one of my old developer CDs, but they’re all in HFS format, which isn’t mountable in Catalina. :frowning:



Even at the educational price of ~$1,300, I had to go into debt to get it. But it was definitely not a toy. Even though the only decent language available that autumn was MacForth, I was able to transfer much of my data analysis from a VAX to the Mac and get twice as much done. The Mac was a lot slower, but it was all mine, all the time. Once Fortran came out (Absoft?), we were really able to get cranking by porting some of the larger crystallography programs. Plus, it made a great terminal to the VAX, and terminals were in short supply in those days - only one for our lab group of five people.

It did help that we did our own 512K RAM upgrades well before that was available from Apple, but even with 128K it was a great and useful system - much better than the PC we had in the lab, which wasn’t fit for much more than a few simple games (no idea what its specs were).

Intel is fully justified in feeling threatened by Apple, especially now. Apple Silicon - what it is, who makes it, and what it and Macs can potentially do in the future - can seriously cut into Intel’s business. For me personally, I don’t need anything the M1 can do, and Big Sur is problematic, so I’m still happy with my Intel MacBook. But I’m betting that within a few years Macs - Apple’s real computers - may once more be the bestest with the mostest, something they have not been in recent years. I’m hanging on to my Apple stock.

Yep, I fully intend to hang on to my Intel Macs while they last. As long as x86(-64) is the “lingua franca” of instruction sets for almost all software, it remains important, at least for now. I love my M1 mini and I even look forward to running Linux on it, but I’m pretty sure I’ll miss the flexibility to run other OSes, which is going to come with compromises and compatibility issues for some time (less so for Linux, perhaps unsurprisingly, than for Windows, though by no means none at all). On Windows the emulation is getting better, but I can’t find any information on how it will affect my use of screen readers, which tend to use system components. No doubt the equation will change over time, but for now, don’t throw away your Intel Macs - they’re still useful! :slight_smile:

Looking forward to the show later on.