Origin of thermal throttling

Continuing the discussion from More mini Mac mini:

An interesting sidebar question. When did CPUs and GPUs start incorporating thermal throttling?

I know that in the (not sure how distant) past, you could damage a chip by running it with an insufficient cooling solution. But today, if you do that, you just get bad performance: the chip reduces its power consumption by lowering voltages, clock speeds, and bus bandwidth to prevent overheating.
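
To make that concrete, here's a rough sketch of how you can watch it happen on a Linux box (assuming the cpufreq and thermal sysfs interfaces are exposed at the usual paths, which varies by machine and kernel): under a sustained load with poor cooling, the reported clock drops back as the temperature climbs.

```python
import time

# Typical Linux sysfs paths; cpu0/thermal_zone0 are placeholders and may
# differ on any given system.
FREQ = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"  # kHz
TEMP = "/sys/class/thermal/thermal_zone0/temp"                  # millidegrees C

def read_int(path: str) -> int:
    with open(path) as f:
        return int(f.read().strip())

# Poll once a second: under heavy load with poor cooling you would expect
# the temperature to climb and the reported clock speed to fall back.
while True:
    mhz = read_int(FREQ) / 1000
    temp_c = read_int(TEMP) / 1000
    print(f"cpu0: {mhz:.0f} MHz at {temp_c:.1f} °C")
    time.sleep(1)
```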

So when did this start to become a thing?

According to Wikipedia, AMD introduced Cool’n’Quiet in 2002 and Intel introduced SpeedStep in 2005. Both are dynamic frequency scaling technologies that let software (typically the system BIOS plus device drivers in the operating system) reduce clock speeds on the fly in order to save power, lower temperatures, and reduce fan speeds when high performance is not needed.
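
As a rough illustration of that OS-driven side (a sketch assuming a Linux system with the cpufreq sysfs interface; the available governors depend on the frequency driver, and the writes need root), these are the kinds of knobs the operating system turns:

```python
# Ask the OS-level frequency governor to favor low clocks and cap cpu0.
# Paths and governor names vary by driver (e.g. intel_pstate vs. acpi-cpufreq).
GOV = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"
MAX = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq"

with open(GOV, "w") as f:
    f.write("powersave")   # prefer low clock speeds
with open(MAX, "w") as f:
    f.write("1200000")     # cap cpu0 at 1.2 GHz (value is in kHz)
```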

This sounds closely related to thermal throttling. But the descriptions of these technologies only talk about slowing the CPU when applications don’t need the horsepower, not about responding to overheating events. And since they require cooperation from system software, they clearly can’t prevent overheating if you combine a bad/broken cooling system with an unsupported (or crashed) OS.
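
By contrast, the hardware-level protection on modern Intel x86 parts is visible independently of whatever the OS governor is doing. A minimal sketch (assuming an Intel CPU on Linux with the msr kernel module loaded, run as root) that reads the chip's own IA32_THERM_STATUS register, which reports whether the thermal control circuit is, or has been, active:

```python
import os, struct

IA32_THERM_STATUS = 0x19C  # per-core thermal status MSR on Intel x86

def read_msr(cpu: int, reg: int) -> int:
    # Requires the 'msr' kernel module (/dev/cpu/*/msr) and root privileges.
    fd = os.open(f"/dev/cpu/{cpu}/msr", os.O_RDONLY)
    try:
        os.lseek(fd, reg, os.SEEK_SET)
        return struct.unpack("<Q", os.read(fd, 8))[0]
    finally:
        os.close(fd)

status = read_msr(0, IA32_THERM_STATUS)
print("throttling right now:     ", bool(status & 0b01))  # bit 0: thermal control circuit active
print("has throttled since reset:", bool(status & 0b10))  # bit 1: sticky log bit
```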
