It feels to me like you’re looking for some sort of “control” that would let you protect yourself in ways that Apple couldn’t anticipate. I can understand the desire, but do you really think that’s realistic for anyone who isn’t at least an expert security researcher?
The comparison with right to repair is tempting, but unconvincing, I think. The desire to repair a broken device is about returning it to a known functional state, defined by physical parts in a finite assembly. (And realistically, only very high-end repair people want to work at the chip level—the rest of us just want to be able to replace a battery or a screen, which are far more discrete objects.)
An exploited software system seems to me to be an entirely different scenario. There are millions of files in macOS and it’s clear that no one, even developers at Apple, knows everything about how they operate and interact. (If Apple knew that, there would be no security vulnerabilities.) So, and I don’t mean to dismiss your technical expertise, do you really think you could identify and rectify some problem that Apple’s developers, testers, and security experts couldn’t, even with the world’s best security researchers feeding them bug reports? I’m confident that my level of technical expertise is nowhere near up to that task.
Perhaps a medical analogy would be more fruitful. We can know that our Macs are running slowly or being weird, just as we can identify and describe symptoms of some medical condition. But realistically, we don’t have the tools or expertise to diagnose and treat much of anything beyond the basics. And even an experienced GP will go to specialists in the event of an odd infectious disease or cancer possibility.
I’m all in favor of self-sufficiency when possible, but the world has become too complex not to trust others.
Neither is mine! But even if the collective mind (the ‘good’ internet) found some solution, I couldn’t apply it, even if I trusted the source, because I don’t have access to my own devices…
When the solution requires changes to code, no one other than Apple is going to be able to apply anything. And solutions to today’s security vulnerabilities aren’t of the sort where you can just tweak a text file.
Agree. Little Snitch is the best protection for macOS. It can restrain even macOS processes and Safari, if desired. No need to connect to Yahoo behind the scenes!
More granular control of iOS would help, e.g., refusing text messages from someone not in Contacts, as you can with phone calls. (Text messages are a frequent vector for 0-day attacks.)
“And even an experienced GP will go to specialists in the event of an odd infectious disease or cancer possibility.”
And even the best specialists don’t always have a clue. Software, even application software, is reaching the complexity of simple organisms, and there is no single organism that people fully understand. Something like E. coli (an essential gut bacterium) that’s been a lab model for many decades might come closest, but even it still has plenty of surprises left, especially in conditions closer to the real world than a petri dish. Few modern software projects of any size can be fully understood by any one human, and often not even collectively by the small team working on a single program. Partly that’s because the small team usually isn’t the whole team: these days pretty much everyone uses frameworks from other sources, and those might contain yet more third-party software.
“But does something like Little Snitch exist for iOS devices? That’s the kind of control I’m aiming for.”
Sadly not. The lack of Little Snitch is a big reason why I don’t use iOS for a lot of things, including trying out alternative mail apps. On the Mac, I can be sure that a mail app only talks to my own servers.
The closest you can come is to use cell data only, then turn off cell data for apps that don’t absolutely need net access. It’s not completely reliable: it’s all too easy to start up wifi for something while a blocked app is open and still active, or worse, forget to turn wifi off. Opsec is hard. I’ve also seen reports that there are bugs, and sometimes cell data will be used for an app when it’s not supposed to be, but I haven’t seen that myself (not that I look all that frequently).
You can add a VPN-style blocker to reduce app traffic to ad and tracking sites, but no blocker can reliably catch everything, and in any case some of what you might want to block would be considered part of the purpose of the app, such as a Bluetooth blood pressure app that wants to not only add the data to Health, but insists on squirting it to its own website too, “for your convenience”…
On the other hand, Little Snitch on the Mac isn’t 100% perfect, because it can’t see and block traffic generated by other kernel extensions. Adobe, FileMaker, and some others have, at least in the past, been able to evade Snitch for their license verification traffic.
You can snoop on what apps are doing with the network by using Charles Proxy ($8?). It will show you which IPs they’re accessing, and can show the actual traffic. If that’s encrypted you can’t tell what’s being sent, and if it isn’t encrypted you should toss the app for blabbing your data in public. I think you can find out more if you also run Charles on a desktop, but that’s a little expensive ($50) and even more involved, and I haven’t tried it.
Jim Browning is a high tech tracker of Internet scammers. He has sophisticated equipment and has been known to reverse a scammer’s connection, and spy on them. He understands scamming, security, and technology.
He will literally call a scammer’s target and inform them that they’re being scammed as the scammer is trying to hoover up their bank account.