But this post, in its pursuit of making Apple’s walled garden the primary villain, implies that the latter is decisively more important than the former. That is, unlike the source article, you suggest it’s better to have a platform that cannot and does not block harmful software, so long as researchers have a way to detect it, and worse to have a platform that can and does block it, if researchers are (potentially) unable to audit it.
This seems to me like a false dichotomy. Simply booing Apple (much less cheering Google by default) rules out learning from what Apple gets uniquely right.
In Apple’s design, there is no way for software to run on your phone unless (1) the user consciously put it there, (2) it carries a cryptographic paper trail connecting it to its author, (3) it runs within a restrictive sandbox, and (4) it has been manually reviewed against legal, ethical, and quality criteria (unless you compiled it yourself). That guarantee is possible only because of Apple’s totalitarian control; it couldn’t be sustained if the ecosystem included jailbroken phones or sideloaded software.
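To make point (2) a little more concrete: the same signing chain that iOS enforces invisibly can be inspected directly on macOS. Below is a minimal sketch using Apple’s Security framework; the app path is hypothetical, and on an actual iPhone the OS performs this check itself, with no API for apps to opt out.

```swift
import Foundation
import Security

// Hypothetical app path, used purely for illustration.
let appURL = URL(fileURLWithPath: "/Applications/Example.app")

// Load a static code object representing the signed bundle on disk.
var staticCode: SecStaticCode?
guard SecStaticCodeCreateWithPath(appURL as CFURL, [], &staticCode) == errSecSuccess,
      let code = staticCode else {
    fatalError("could not read code object")
}

// Build a code requirement: the signature must chain to Apple's root CA.
// "anchor apple generic" matches App Store and Developer ID signatures.
var requirement: SecRequirement?
guard SecRequirementCreateWithString("anchor apple generic" as CFString, [], &requirement) == errSecSuccess else {
    fatalError("could not parse requirement")
}

// Verify the bundle's signature against that requirement.
let status = SecStaticCodeCheckValidity(code, [], requirement)
print(status == errSecSuccess
      ? "validly signed, chained to Apple"
      : "verification failed (OSStatus \(status))")
```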
Those guarantees mean there’s less malicious code out there to begin with; what does get through is more limited in what it can do; the defendant in any lawsuit is always well defined; and there is a mechanism to globally remove the code and prevent it from ever being installed again. If the question is “what can courts or governments do about malicious software?”, the answer for iOS is “quite a lot”, whereas for every other platform it’s “pretty much nothing”.
Neither iOS nor Android represents a perfect answer, but luckily for us there aren’t just two monolithic choices. I suspect the future will look something like Apple’s model, but with the uglier edges filed off. Sort of like how Portland has a police department, I guess?