Apple infrastructure problems causing app launch problems

I certainly read people arguing both sides of whether it was appropriate for Apple to simply do what HP wanted without evaluation. I’m not an expert on these things, but there was some discussion of how it works. FWIW, I didn’t notice anybody disagreeing that it took action on Apple’s part to do it. Seems like you understand how it (might) work better than I do, though, so perhaps you can shed more light on that.

I believe that only applies to apps that are signed using an Apple Developer ID signature. I don’t believe any of the current means that Apple uses can prevent self-signed/unsigned apps from being used, even though it is aware of the app you are attempting to launch.

So Apple is just collecting this info… for the government? Or what purpose do they have in collecting this information??

Public key infrastructure (PKI) systems used for code signing can be quite complex and nuanced. At their (oversimplified) core, however, they’re a chain of public-private key pairs. Thus:

  • Apple creates a public/private key pair, publishes the public key and tucks away the private key.

  • HP creates a public/private key pair, convinces Apple that it won’t write malware, and Apple uses its private key to sign HP’s public key. HP then publishes the signed public key and keeps its own private key hidden.

  • HP then uses their private key to sign any macOS software they release.

  • The public (us, or at least our system software) can then – prior to installing/using any HP software – (1) use HP’s public key to verify that the software was signed by HP’s private key and (2) use Apple’s public key to verify that HP’s public key was signed by Apple’s private key.
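To make that chain concrete, here’s a toy sketch in Python using the cryptography package’s Ed25519 primitives. Real Developer ID signing uses X.509 certificates rather than bare key pairs, and the keys and app bytes below are invented for the example, but the “who signs whom” structure is the same:

```python
# Toy model of the chain of trust described above. Real code signing uses
# X.509 certificates; this only illustrates the signing relationships.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Apple creates a key pair; the private half never leaves Apple.
apple_private = Ed25519PrivateKey.generate()
apple_public = apple_private.public_key()

# HP creates a key pair; Apple vouches for HP by signing HP's public key.
hp_private = Ed25519PrivateKey.generate()
hp_public = hp_private.public_key()
hp_public_raw = hp_public.public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)
apple_signature_over_hp_key = apple_private.sign(hp_public_raw)

# HP signs the software it releases (placeholder bytes for the example).
app_binary = b"contents of some HP installer"
hp_signature_over_app = hp_private.sign(app_binary)

# The end user's system checks both links before trusting the software.
def chain_is_valid() -> bool:
    try:
        # Was HP's public key signed by Apple's private key?
        apple_public.verify(apple_signature_over_hp_key, hp_public_raw)
        # Was the software signed by HP's private key?
        hp_public.verify(hp_signature_over_app, app_binary)
        return True
    except InvalidSignature:
        return False

print(chain_is_valid())  # True; tamper with any byte above and it turns False
```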

The problem is how you handle the inevitable cases when a key gets compromised. First, all key pairs have an expiration date so, if all else fails, the keys will stop working after a while.

The owner of a public/private key pair can also issue a revocation certificate that says “hey! this key was compromised! don’t trust it any more!” The revocation certificate has to be signed using the private key of the owner, so fake revocations can’t be used to create a denial-of-service attack.

So far, the end user can verify a signing key without consulting a central repository in real time (they can store Apple’s and HP’s keys until their expiry). But in the case of revocation certificates, they have to check to see if one has been issued, so somebody (Apple in this case) has to maintain a repository of revocations that can be checked in real time.

There is very little need to verify, authorize, or rule on the sanity of a revocation. The revocation certificate is the OWNER of the certificate saying “hey! I’ve lost control of this certificate and it can’t be trusted!” So Apple likely does not, and absolutely should not, do anything that would delay the publication of a revocation certificate. Even if the revocation isn’t from the actual key owner, that’s still proof that the key is compromised and untrustworthy. Apple’s one job here is to publish the revocation as quickly as possible before users start getting burned by the compromised signing key.

Apple can (and probably does) check the signature on the revocation certificate to make sure it’s valid. Publishing an invalid one wouldn’t harm users, whose systems would simply reject the bogus certs, but it would create an opportunity for miscreants to load up the server with bogus data. But the signature check and publication of revocations should be completely automated and as close to instantaneous as possible. Anything else would put users at risk of installing malware with valid signatures, and would be an irresponsible move on Apple’s part.
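Revocation fits the same toy model. In the sketch below (again Python with the cryptography package, and again an invented simplification rather than real OCSP), the revocation_repository dict stands in for Apple’s server: the key owner signs the revocation, the publisher verifies that one signature and publishes immediately, and clients consult the repository in real time before trusting the key:

```python
# Toy model of revocation for the chain described above. Real systems use
# OCSP responses keyed by certificate serial numbers; this shows only the
# trust logic: owner signs, publisher verifies and publishes, clients check.
import hashlib

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# HP's (soon to be compromised) key pair.
hp_private = Ed25519PrivateKey.generate()
hp_public = hp_private.public_key()
hp_public_raw = hp_public.public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)

# Invented stand-in for the repository behind Apple's revocation service:
# revocation notices indexed by a fingerprint of the revoked key.
revocation_repository: dict[str, bytes] = {}

def fingerprint(public_key_raw: bytes) -> str:
    return hashlib.sha256(public_key_raw).hexdigest()

# 1. The owner signs a revocation notice with the key being revoked.
revocation_notice = b"REVOKE " + hp_public_raw
revocation_signature = hp_private.sign(revocation_notice)

# 2. The publisher's automated job: verify the signature, publish at once.
def publish_revocation(notice: bytes, signature: bytes, key_raw: bytes) -> None:
    # Verify with the key being revoked; reject bogus submissions, nothing more.
    Ed25519PublicKey.from_public_bytes(key_raw).verify(signature, notice)
    revocation_repository[fingerprint(key_raw)] = notice

publish_revocation(revocation_notice, revocation_signature, hp_public_raw)

# 3. Before trusting a signature, the client checks the repository live.
def key_is_revoked(key_raw: bytes) -> bool:
    return fingerprint(key_raw) in revocation_repository

print(key_is_revoked(hp_public_raw))  # True, so stop trusting this key
```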


It’s all laid out here:

From Apple:

  • With your explicit consent, we may collect data about how you use your device and applications in order to help app developers improve their apps.

My understanding (which could be wrong) is that what we are discussing here is not something you can opt out of, much less something you have to explicitly opt in to. I don’t see anything else in a quick scan that would indicate Apple is going to require me to let them know every time I start any app on my computer. That’s the opposite of what the privacy policy says.

Nothing else in the privacy policy seems to explain the purpose of collecting this information. My understanding was that it was for security purposes, but @alvarnell said that isn’t the case for apps not signed with an Apple Dev ID. If Apple is not able to do anything with unsigned apps, which they already make difficult to run, then why would they collect the information? And even if the information is collected for security, where is there any mention of this in the privacy policy?

The hash referenced is that of the signing certificate; it does not necessarily identify the application itself and is used to check that the certificate is valid.

The rest of the information can also be determined from your IP address, so not really a privacy issue.
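For illustration only (this is not a claim about the exact payload Apple sends): a certificate fingerprint is a hash computed over the certificate itself, so every app signed with the same Developer ID certificate yields the same value. A small Python sketch with the cryptography package, using a made-up path to a certificate you’ve exported:

```python
# Hash the signing certificate, not the app: the same certificate (and thus
# the same fingerprint) covers every app that developer signs with it.
# "developer_id_cert.der" is a hypothetical exported certificate file.
from cryptography import x509
from cryptography.hazmat.primitives import hashes

with open("developer_id_cert.der", "rb") as f:
    cert = x509.load_der_x509_certificate(f.read())

print(cert.subject.rfc4514_string())            # names the developer, not an app
print(cert.fingerprint(hashes.SHA256()).hex())  # same value for all their apps
```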

I doubt that they do. If there is no signing certificate hash, then there wouldn’t be anything to send, let alone collect.

I’m 98% sure that information is still sent for unsigned apps.

In related news…

Move along, folks. There’s nothing new and nothing to see here. This is Apple making us more secure. \s

I think it’s worth noting here that developers are the ones Apple is pissing off right now. That doesn’t bode well for the platform. I’m reading more and more about developers dumping their Apple products because this is the direction that Apple has been moving for quite a while now. No, you can’t replace your hard drive anymore. No, you can’t replace your RAM anymore. No, you can’t use a firewall on your computer anymore.

I think I might be done with Apple. So sad.

The limitations outlined by Patrick are much different from what is being discussed here. The controls being used by Little Snitch and LuLu relate to attempts by various applications to “phone home” or make contacts for other reasons. The capability to block such contacts by Apple apps using an Apple API has been deprecated.

This discussion concerns macOS (trustd) contacting an Apple server (ocsp.apple.com) to validate the Apple Developer ID signature. That has nothing to do with Apple apps making their own contacts.

Daring Fireball just linked to an article that explains more of what’s really going on:

And for those who don’t feel they have time for all the details, here’s the bottom line:

TL;DR

  • No, macOS does not send Apple a hash of your apps each time you run them.
  • You should be aware that macOS might transmit some opaque information about the developer certificate of the apps you run. This information is sent out in clear text on your network.
  • You probably shouldn’t block ocsp.apple.com with Little Snitch or in your hosts file.

That is indeed the claim made in the article.

But in the words of the Dude, “that’s like your opinion, man”. Isn’t it ultimately a question of what a user considers more harmful? If I block ocsp.apple.com I might not be notified immediately if a certificate is revoked. OTOH if I block it I won’t lose hours next time Apple comes under attack or otherwise can’t maintain reliability of their servers. If I come to the conclusion that I’d hear about that revoked certificate anyway (for example through TidBITS) and that the harm that could happen before I hear about it pales in comparison to the harm done when I waste half an afternoon chasing my tail, well then I’d say you could absolutely argue it’s prudent to block ocsp.apple.com.

I’m not saying you should block, but I get the impression this is really an individual question based on how each and every one of us gauges their risks. I do have doubts there is but one right answer here.


There has been some really nasty malware recently that managed to get signed and widely distributed before Apple found out and revoked the certificate, so I think you have to ask yourself: do I want to suffer through six hours of reduced productivity (not a total loss of hours) should such a slowdown happen again (once in how many years of checking certificate validity?), or would I rather spend time recovering from a ransomware attack that prevents me from doing anything until I find a way to restore my Mac?

trustd is one of the apps you cannot block with Little Snitch, isn’t it? If so, it has everything to do with it.

And from what I’ve read, blocking OCSP with the hosts file also disables the App Store.

It was set to every 5 minutes. That’s every time for all intents and purposes.

Might? It does. There’s no debate about that. It also isn’t opaque. As one commenter said: “Many developers only publish a single app or a certain type of app. So it still is a significant information leak. It’s really not much different from sending a app-specific hash. Think: remote therapy/healthcare apps, pornographic games, or Tor - which alone could get you into big trouble or on a watchlist in certain regions.”

As I said above, isn’t it impossible to even do this with Little Snitch in Big Sur?

I was only making the point that trustd is a macOS process, not an app, but if it can’t be blocked with Little Snitch, your point is valid.

Had not heard that, but I just tested and can confirm that it did prevent me from browsing the App Store; however, I was able to update an app.

Sorry I wasn’t clear. My intended point was that it isn’t a hash of a particular app; in fact, it isn’t even a hash. It’s MIME-encoded information about some certificate. That certificate might be used to sign multiple apps; the example Daring Fireball gave was Firefox and Thunderbird. But sure, if someone is that interested in knowing whether I am using one or more apps from a particular developer at a specific location, date, and time, have at it. If someone can be proven to be doing something illegal, I’m all for letting law enforcement with a warrant help themselves to it.
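If you want to see what one of those requests actually names, you could decode a captured request body with Python’s cryptography package (the capture file name below is hypothetical). The fields identify the developer’s signing certificate, its serial number plus hashes of its issuer, not a hash of the app that was launched:

```python
# Decode a (hypothetically captured) OCSP request body to see what it
# identifies: the signing certificate's serial number and its issuer,
# not a per-app hash. Requires the cryptography package.
from cryptography.x509 import ocsp

with open("captured_ocsp_request.der", "rb") as f:  # hypothetical capture
    request = ocsp.load_der_ocsp_request(f.read())

print(request.serial_number)           # serial of the developer certificate
print(request.issuer_name_hash.hex())  # hash of the issuing CA's name
print(request.issuer_key_hash.hex())   # hash of the issuing CA's public key
print(request.hash_algorithm.name)     # e.g. "sha1"
```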

That should not have been attributed to me. It is a quote from Daring Fireball.

Sorry about the quotes from the article getting attributed to you.

It appears to me that you are not considering this from the perspective of problematic laws in the US, much less in repressive regimes. Even in the US we know, thanks to Snowden, that the information is available to the government without a warrant because it is not encrypted. But what about where encryption is outlawed (which is something we are still having to fight in the US), and the citizens or observers are trying to communicate to the wider world about human rights violations such as mass killings? Now the government has a report of exactly where and when Signal was started up.

And that ignores the fact that we’ve learned over and over again that this sort of information leak has many privacy invasion consequences, one of which is that your ISP (and others) can also collect this information and sell it to marketers or blackmailers.


If you have questions about Apple’s privacy policy, it suggests this as a way to contact Apple Legal. Sounds like it would be worth asking these questions directly.


Apple made a change today to stop collecting IP addresses, and will start encrypting these packets at some point in the next year.

Also stronger protection against server failure.
