Assessing the security of devices by measuring how many difficult things the programmers tried to do


Looking for libraries that have crappy documentation is a genius first step. So many crypto libs have docs written by and for PhD cryptographers who assume you’re already a decade into your post-grad crypto course and don’t need any of the basics explained.

Or the docs were written back in the 80s and still reference the old and busted APIs that are only still in the binary for backwards compatibility reasons and nobody has bothered to update them for the modern API you are supposed to be using.

Or maybe the developer went on StackExchange looking for the answer (because the docs sucked) and got a half-correct answer from 8 years ago that misses a couple of crucial details and/or uses defaults that are no longer sufficient.

And then the crypto libraries are built in such a way that even tiny mistakes can silently cripple the protection. The math doesn’t care, and hey, you’ve got those 8 years of post-grad experience in crypto system development you’re supposed to be drawing on, right?
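To illustrate how tiny a silently fatal mistake can be, here’s a sketch using Python’s standard-library `hmac` module (the key and messages are made up for illustration). Both verifiers below return the right answers on every test you’d naturally write, but the first one quietly undermines the protection:

```python
import hashlib
import hmac

# Hypothetical key, purely for illustration.
KEY = b"example-secret-key"

def sign(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag for the message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

# The tiny mistake: `==` compares byte by byte and bails at the first
# mismatch, so verification time leaks how many leading bytes of a
# forged tag were correct. The HMAC math is untouched; the comparison
# alone opens a timing side channel -- and nothing ever errors out.
def verify_subtly_broken(message: bytes, tag: bytes) -> bool:
    return sign(message) == tag

# The fix is one function call: a constant-time comparison.
def verify(message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(message), tag)
```

Functionally the two are indistinguishable, which is exactly the problem: the broken one passes every unit test and fails only against an attacker with a stopwatch.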


Doesn’t this imply that security updates are just as likely to introduce new security flaws as they are to fix what they’re supposed to fix?


Sort of. I mean, I think they generally do fix what they’re supposed to fix; they just introduce, on average, at least as many new flaws. Since it takes a while for exploits to trickle down through the hierarchy of attackers, from “motivated but focused on specific targets” to “lazy but blanketing the internet,” there’s an advantage to only ever having NEW flaws.


I should be more precise: I didn’t mean to opine on whether security updates actually do introduce more flaws – just that, supposing they do, it’s still worth installing them.

The people who only install security fixes and refuse feature updates are probably on to something.


This doesn’t match the description in the linked article. CIT was not primarily looking for hard-to-implement features; instead, they were looking at who implemented binary hardening features (techniques that make a compiled program harder to exploit, or that prevent minor exploits from becoming major ones). There is some discussion in passing that they also looked for known-bad library calls, but the majority of the text dealt with binary hardening.


VenTatsu is correct here. Cory’s summary is inaccurate. What we’re talking about when we talk about things like ASLR or a non-executable stack are protections that can be built into a binary automatically during the process of compiling it, to make bugs either not exploitable or more difficult to exploit. Binary hardening just means building binaries in ways that are more resistant to certain classes of attacks.

These approaches are standard defenses, generally included in programs that run in normal desktop, laptop, server, and smartphone environments. But in the IoT space they are often lacking, and this report is about the state of IoT from that perspective: why it is that way, and what we might be able to do to fix it. This has nothing to do with the difficulty of implementation.
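For the curious, here’s a rough sketch of what turning on several of those protections looks like at build time with GCC (the source and output file names are hypothetical; the flags themselves are standard GCC/Clang and GNU linker options):

```shell
# Build with common binary-hardening features enabled (file names are made up):
gcc -O2 \
  -fstack-protector-strong \
  -D_FORTIFY_SOURCE=2 \
  -fPIE -pie \
  -Wl,-z,relro,-z,now \
  -Wl,-z,noexecstack \
  -o app app.c

# -fstack-protector-strong : stack canaries to catch stack buffer overflows
# -D_FORTIFY_SOURCE=2      : bounds-checked variants of risky libc calls
# -fPIE -pie               : position-independent executable, so ASLR applies
# -z relro -z now          : make relocation data (the GOT) read-only
# -z noexecstack           : mark the stack non-executable
```

Note that none of this touches the program’s source; it’s a build-configuration decision, which is why its absence in shipped IoT binaries is telling.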

The question of looking at which features are harder to implement correctly would also be an interesting approach, but it isn’t the approach that this report or talk takes.


This topic was automatically closed after 5 days. New replies are no longer allowed.