If he had followed proper disclosure channels, they would have sued him into silence and maybe sooner or later gotten around to a fix. (Probably later, tbh. And probably with a description that masks the nature of the issue, if the '06 wifi exploit was any indicator.)
Ignoring the hyperbole about suing him (Apple routinely works with folks on security releases), one of the points of responsible disclosure is to ensure that there is a fix in place before the details go public. As someone responsible for actual systems in the wild, targeted by very many asshats, I welcome responsible disclosure for not handing every script kiddie in the world an option to break things for lulz.
No hyperbole here. I knew the guy who developed the wifi exploit in '06. He gave them the code; they lied and said there was no exploit, threatened to sue him if he released too much info (speaking out of the other side of their mouth), then released a fix that they insisted was unrelated, claiming that in looking into his report they had found something different. (Yet buried in the patch notes was a description of exactly the thing he found in the first place.)
Here’s the page Apple uses to credit responsible disclosure in, for example, their web front ends:
Here’s Apple’s policy:
A lot of this infrastructure surrounding security didn’t exist (or didn’t exist at this level of robustness) in 2006. Thankfully, the world moving forward, combined with many hundreds of millions of iOS devices out there, has helped in that regard, IMHO.
Unless this helpful person, apparently heedless of the darker implications, was the only one to have stumbled across it, I suspect the cat was not only out of the bag some time ago, but out of the bag and busy planting a keylogger and a lifelike replica cat in the bag for the next time someone checks on the cat/bag status.
This isn’t an obscure elevation of privilege, weakness in cryptography, exposed secret, or remote code execution. This is a mind-bogglingly stupid issue that’s user-serviceable, and it appears it was already known. I’m all for shining a brighter spotlight on this.
Well, in this case the fix is fairly trivial (setting a root password, though thankfully that’s part of good systems policy 101), but in the general sense of what would have to be done - if there were an external service affected by a “0-day” vulnerability like this, without a fix, you’d have to lock it down as best you could (rough sketch after the list):
1. remove it from public access entirely - if you can’t do this, then
2. lock down access to the bare minimum of IPs that should be able to access it, or
3. otherwise harden the systems so that if they do become compromised, you can rebuild them easily, while also
4. ensuring that compromised systems have no further access to the internal network itself.
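To make that less abstract, here’s roughly what steps 1-2 plus the trivial fix might look like on a single exposed High Sierra box. This is a sketch under assumptions, not a recipe: the interface name (en0), the port (5900, if Screen Sharing were the exposed service), and the allow-list addresses are all placeholders, and setting root’s password can just as easily be done through Directory Utility.

    # Give root an actual, non-blank password (the trivial fix above).
    sudo passwd root

    # Allow-list a handful of admin IPs for the exposed service using pf.
    # All of the specifics below (interface, port, addresses) are examples.
    sudo tee /etc/pf.lockdown.conf <<'EOF'
    table <admins> { 203.0.113.10, 203.0.113.11 }
    block in on en0 proto tcp from any to any port 5900
    pass  in on en0 proto tcp from <admins> to any port 5900
    EOF
    sudo pfctl -f /etc/pf.lockdown.conf   # load the ruleset
    sudo pfctl -e                         # enable pf if it isn't already

None of that substitutes for an actual patch, of course; it just narrows the exposure and buys time to rebuild cleanly if something does get popped, per items 3-4.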
Of course, in the abstract most of that doesn’t mean much, and I can’t think of a remotely exploitable issue that was so egregious you ever really got past step 2 above. Part of that, though, is because it’s been a long time since someone released information about a vulnerability like this into the wild without responsible disclosure in mind in the first place.