What Does the T2 Chip Mean for Mac Usage?

(Geoff Duncan) #1

Originally published at: https://tidbits.com/2019/04/05/what-does-the-t2-chip-mean-for-mac-usage/

Apple’s custom T2 chip brings better security to recent Macs—and we all like security! But the T2 also makes Macs harder to repair or use with non-Apple operating systems, and it can create nightmares for DJs and musicians. So is a T2 Mac right for you?


(Geoff Duncan) #2

I’ve received a few questions about the audio testing methodology I used in this article, so here are the “off into the weeds” details.

I do a fair bit of pro audio work (mostly recording music), but I don’t have a T2 Mac myself because the T2 Macs have been fairly infamous in audio circles since the introduction of the iMac Pro. So I tested using other people’s systems as I had access to them: these included iMac Pros, as well as more recent MacBook Pros and Mac minis, in environments ranging from professional recording and production studios (Seattle and New Orleans) to amateur musicians and podcasters just doing stuff at home.

When Adam assigned the article (I think in January) I thought I would make a couple template projects in Pro Tools, Logic, Reason, etc., and carry those around on a USB stick so I could quickly fire them up on T2 Macs and test. That turned out not to work so well: audio setups vary too widely, and I almost always made test projects from scratch (sometimes in GarageBand, because most people had it). The T2 problems are also unpredictable enough that it took me a long time to gather what I felt was solid data: I did 42 tests of varying duration (ranging from 30 mins to over four hours) on 15 machines over a span of about two and a half months. I also spoke to several audio interface makers on background.

For the testing, I borrowed an NTi Digirator DR2. It’s basically a signal generator: totally overkill for what I was doing, but it’s reliable. The idea behind all the tests was the same: record as many channels of sine wave input as I could for as long as possible, then go back through the audio and look for anomalies. Testing used Avid (Pro Tools), UA, MOTU, PreSonus, Focusrite, and Apogee interfaces connected using USB and Thunderbolt, depending on the rig, along with “consumer level” USB interfaces like a Blue Yeti mic and “guitar” inputs from iRig and Line6. No plugins or onboard preprocessing: just straight in.
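If you’re curious what “go back through the audio and look for anomalies” can look like in code, here’s a minimal Swift sketch of that kind of scan (the file path and threshold are placeholders, not part of my actual process): adjacent samples of a continuous sine wave can only differ by so much, so a dropout or click shows up as a jump the tone can’t legitimately produce.

```swift
import AVFoundation

// Illustrative glitch scan: read a mono sine-wave recording and flag
// sample-to-sample jumps that a continuous sine wave cannot produce.
// The file path and threshold below are placeholders, not from the real tests.
let url = URL(fileURLWithPath: "/tmp/sine-take-01.wav")

do {
    let file = try AVAudioFile(forReading: url)
    let format = file.processingFormat
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: AVAudioFrameCount(file.length)) else {
        fatalError("Could not allocate buffer")
    }
    try file.read(into: buffer)

    // processingFormat is deinterleaved Float32, so channel 0 is a plain float array.
    let samples = UnsafeBufferPointer(start: buffer.floatChannelData![0],
                                      count: Int(buffer.frameLength))

    // For a 1 kHz test tone at 48 kHz, adjacent samples differ by at most
    // 2π · 1000 / 48000 ≈ 0.13 of the peak level; anything much bigger is suspect.
    let threshold: Float = 0.3
    for i in samples.indices.dropFirst() where abs(samples[i] - samples[i - 1]) > threshold {
        let seconds = Double(i) / format.sampleRate
        print("Possible glitch near sample \(i) (\(String(format: "%.3f", seconds)) s)")
    }
} catch {
    print("Couldn't read \(url.path): \(error)")
}
```

The threshold depends on the test tone: for a 1 kHz sine recorded at 48 kHz, the legitimate sample-to-sample change tops out around 13% of the peak level, so anything well above that is worth a listen.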

The “glitch” screenshot shows Logic Pro X 10.4.4 on a 10-core iMac Pro running macOS 10.14.3, on a rig with Thunderbolt UA Apollo interfaces (two x16s and an x8p, I think) recording at 24-bit/48 kHz (nothing stressful). It’s recording eight tracks of sine wave input from the Digirator; I did the signal bussing for the Digirator on the studio’s physical console (old school!), so the UA/iMac was just blindly recording eight tracks of audio, as un-fancy as I could make it. I got to run that test for about 90 minutes; there’s another glitch later in the same session. I picked that particular glitch because it happened just four minutes into the test, so I knew I could find it again fast, and it was on eight tracks simultaneously.

Now, if Adam would finally approve article pitches on unity gain, avoiding single-coil hum from LCD displays and other noise sources, how to change acoustic guitar strings using just hose-clamp pliers…I could do that all in-house. :wink:


(Adam Engst) #3

Man, that hose-clamp pliers tip sounds hot. :slight_smile:


(Marc Z) #4

Just to further confuse the issue, there’s an article on AppleInsider today that concludes Macs with T2 chips can encode video much faster.

https://iphone.appleinsider.com/articles/19/04/09/apples-t2-chip-makes-a-giant-difference-in-video-encoding-for-most-users


(Geoff Duncan) #5

I like their testing methodology. I’m not a video person, but I can see that helping out consumers and hobbyist video people who (I’m just making this up) use iMovie or Adobe Premiere Elements(?) or similar things. (Do people actually do that? I know no one who uses these programs, but I’m obviously more audio-focused.) I imagine most pros/semi-pros would prefer to render with “real” graphics processors. I have no idea how widespread third-party support for Video Toolbox might be, or the degree to which Video Toolbox supports third-party graphics hardware.


(Curtis Wilcox) #6

For what Handbrake is doing, converting video files from one format to another, the GPU is not important. VideoToolbox provides software access to available hardware that exists to encode or decode specific video formats, like H.265. These chips are common; everybody’s smartphone has them because without them, watching or creating video would kill the battery, if it were possible at all. iPhones, GoPros, etc. can shoot 4K video because of chips that do nothing but encode video to the formats they support. A discrete video card could have such chips on its board in addition to its main GPU, and I would expect them to be available through VideoToolbox; since VideoToolbox is part of macOS’s core graphics libraries, it would be weird to make a video card and drivers for Macs that didn’t use it.
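As a concrete illustration (this isn’t from the AppleInsider piece), here’s roughly how a Mac app asks VideoToolbox for a hardware HEVC encoder; requiring hardware acceleration makes the request fail outright on machines without dedicated encoding silicon, so the same call doubles as a quick capability check:

```swift
import CoreMedia
import VideoToolbox

// Illustrative check for hardware HEVC encoding (the 4K dimensions are arbitrary).
// Requiring a hardware-accelerated encoder makes session creation fail if the
// machine has no dedicated encoding silicon, instead of falling back to software.
let spec: [CFString: Any] = [
    kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder: true
]

var session: VTCompressionSession?
let status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 3840,
    height: 2160,
    codecType: kCMVideoCodecType_HEVC,
    encoderSpecification: spec as CFDictionary,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &session
)

if status == noErr, let session = session {
    print("Hardware HEVC encoder available")
    VTCompressionSessionInvalidate(session)
} else {
    print("No hardware HEVC encoder (status \(status))")
}
```

Swapping the Require key for kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder lets VideoToolbox fall back to a software encoder when no hardware path exists, which is the behavior most apps actually want.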

In the AppleInsider testing, the 4K iMac they used has a “real” discrete video card, a Radeon Pro 555X, and it didn’t help. They have another graph that shows encode times didn’t change when an eGPU was attached to a 2018 Mac mini or MacBook Pro.

Good GPUs matter earlier in the production process: rendering 3D graphics, applying filters, doing color correction, and so on. When it comes to encoding the final product, a good GPU only helps if those effects need to be re-applied before the actual encode (e.g. when editing using proxy files). In the absence of dedicated hardware for encoding the final video format, encoding is handled by the computer’s CPU.


(Simon) #7

I think that’s quite in line with what we see in the AppleInsider benchmarks linked above. The 4K iMac had a dedicated GPU and hardly did any better than the Core i3 mini with only Intel on-board graphics and its T2 deactivated.

The real game changer, according to those benchmarks, appears to have been enabling the mini’s T2. With the T2 coming from the iOS world, where Apple has had the greatest interest in integrated encoding/decoding circuitry, that’s perhaps not a big surprise.

What I do wonder about is where the modest gains of the iMac over the mini with its T2 disabled came from. The AppleInsider testers explicitly checked to make sure it was unrelated to disk, and both Macs use the same i3 and memory bus. The difference in transcoding time was on the 20% level, more than I’d expect to chalk up to measurement accuracy/repeatability. Is the GPU contributing in some other fashion?


(Geoff Duncan) #8

Thanks for posting this, Curtis: makes sense to me and a video editor I’m in a Messages chat with (although he doesn’t have a T2 Mac).
