War on General Purpose Computers is the difference between utopia and dystopia


#1

[Permalink]


#2

Making the case to us doesn’t really help, because Boing Boing has limited clout. What I suspect needs to happen is that the EFF (or somebody like them) needs to put together a kind of counter-ALEC to lobby Congress. Since Citizens United, nothing matters except lobbying dollars in the US. If a trade group got on board and could leverage the lobbying dollars of companies like Google and Microsoft and whoever else stands to gain from unfettered computing, there would be a voice to counter the MPAA and their ilk.


#3

Yes. You could go on to speak about legions of deadly swarming microbots printed in some person’s garage, and just keep going as the scope of the actual nightmare overwhelms your readers, while reminding them that you are not speaking science fiction. Your observations are astute, and mild compared to the full scope of what is unfolding once the corruption in every system, hierarchy, organization, and operating system is taken into account. Scientists go quiet, one by one, as they conclude that the world as we knew it is coming to a phenomenal conclusion, and that revealing how bad things are getting cannot change the outcome. Mostly it just stresses and alarms people, causes riots and suicides, and depresses the hell out of everyone reminded of what is kept off TV for good reason. I do think it is important for some people to talk about these things, and if it results in lives saved, they are certainly well-written words. But you are right… we are all in really big trouble on this planet.


#4

We need a combined approach, swarming the problem from multiple sides. The lawyer types to argue with the policymakers, the PR types to inform the masses, the tech types to go the civil-disobedience way.

Most important, from my position, we need tools that make it easy to reverse engineer large opaque binary blobs. Disassemblers with facilities for visualising large chunks of code, to quickly grasp the structure of its execution flow, to find that critical conditional jump instruction to be replaced with a no-operation or an unconditional jump, to find places where a call to a “friendly” subroutine can be injected.
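A minimal sketch of that kind of patch, assuming an x86 short JNZ (opcode `0x75`) at a hypothetical file offset — in a real workflow the offset comes out of the disassembler, not out of thin air:

```python
# Sketch: patch a short conditional jump (JNZ, opcode 0x75, two bytes:
# opcode + rel8 displacement) into two NOPs (0x90), forcing execution
# to fall through. PATCH_OFFSET below is a made-up example value.
PATCH_OFFSET = 0x1A2B  # hypothetical offset of the JNZ instruction

def patch_jnz_to_nop(blob: bytes, offset: int) -> bytes:
    """Return a copy of blob with the JNZ at `offset` replaced by NOPs."""
    if blob[offset] != 0x75:  # sanity check: is this really a short JNZ?
        raise ValueError("no JNZ at the given offset")
    return blob[:offset] + b"\x90\x90" + blob[offset + 2:]
```

The same idea, with different opcodes, covers the “replace with an unconditional jump” case (`0xEB` instead of the first `0x90`).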

Because possible/impossible is a step more important than legal/illegal; the legal is only a subset of the possible, since the impossible cannot be done even when it is legal. Tools that make the impossible possible are therefore more important than legal tools that make the illegal legal - not that the latter aren’t important too!

The tech is ours. We just have to actually realize it, and take it back. By the force of our brains, if necessary.

Remember this companion image for the article’s one:


#5

Of course, making the argument in Wired is of limited use, because the people who make the regulations in any modern democratic country are lawyers (in the less democratic countries it’s even worse) and don’t know a fucking thing about the technologies they’re regulating. The people who know anything about the subject already know this, but they’re not the ones making the rules.


#6

If people want control over their technology, they should be fighting for root access on all devices by default. I don’t want to risk bricking my phone or tablet to get root access.

It should be easy for me to install software that gives me fine grained privacy controls over apps so that I don’t have to share my entire web browsing history and online life with a company just to read their content.

App stores are a scheme to limit software distribution. This trend also needs to disappear.

P.S., why does Boing Boing BBS need access to my Twitter private messages for me to post on the board via Twitter login?


#7

I remember, as a younger reverse engineer, the time I figured out that inverting the result of a JNZ would alter flow control to my liking - it blew my mind. Mostly because (even to this day) I was an assembly noob.

But to your point: understanding blobs is important, but from my point of view, understanding who those blobs came from and where is paramount. All code will have unfortunate execution paths, and a lot of them will do that on purpose. But mandatory signing at least tells you who is at fault (unless their certs are stolen…). That is why I like TPM chips, with the huuuge caveat of no usage restrictions, only verification.


#8

That feeling of rush of power, that’s hard to beat! :smiley:

On the contrary, I consider understanding the blob to be of higher importance. High-end adversaries will be able to forge the certs, which weakens their value for security (they can still somewhat hold against lower-end threats - but those will likely find other ways in, most likely via exploiting a social-engineering vulnerability in the user).

Knowing that a piece of bad code comes from Sony, by signature, is of little use in comparison with the ability to detect and analyze the code’s bad behavior. Attribution can be done by alternative mechanisms, e.g. by comparing hashes of files obtained from a known source (ideally a CD, or a download from a known-good server, by many other people), if something interesting in the code shows it should be done.
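The attribution-by-hash idea above might look roughly like this (the digest list is illustrative; the point is just that several independently obtained copies must agree with the local one):

```python
# Sketch of attribution-by-hash: compare a local file's SHA-256 against
# digests of the same file obtained from independent sources (a CD, a
# known-good server, other people's downloads). All names are examples.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def matches_known_sources(local: bytes, known_digests: list[str]) -> bool:
    """True only if every independent source reported the same digest."""
    digest = sha256_of(local)
    return bool(known_digests) and all(d == digest for d in known_digests)
```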

At the same time, understanding the code and the ability to alter it gives you more power over your own machine. If the code contains something wrong, whether an intentional exploit or a restriction that you are not comfortable with (the latter I consider much more important for us all), you first have to find it. For that you need to visualize the code somehow (a small screen full of tiny instructions takes too much time to understand structurally, though it is vital in the final phases of the attack). Then, once you have localized the code filament that you want to force or eliminate, you can do so. However, that destroys the signature - so you have to have your own mechanism for local signing, or disable enforcement of signatures.

An advanced add-on for the visualisation system could be binary comparison. Decompile two blobs into code threads and highlight the sequences whose meaning differs (we’re dealing with compiler optimizations here, which can cause syntactic but not semantic differences). That would let us see whether an update is trying to sneak in something stinky, without having to re-analyze the code we already went through in the previous version.
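A toy version of that comparison, using plain mnemonic strings and Python’s difflib in place of a real decompiler - so it only catches syntactic differences; the semantic normalization across compiler optimizations is the genuinely hard part:

```python
# Sketch of the update-diff idea: treat both versions as sequences of
# instruction mnemonics (here toy strings; a real tool would get them
# from a disassembler) and flag runs that exist only in the new version.
import difflib

def flag_new_code(old_insns: list[str], new_insns: list[str]) -> list[str]:
    """Return instruction runs that appear only in the new version."""
    sm = difflib.SequenceMatcher(a=old_insns, b=new_insns)
    suspicious = []
    for op, _i1, _i2, j1, j2 in sm.get_opcodes():
        if op in ("insert", "replace"):  # new or changed code
            suspicious.append(" ".join(new_insns[j1:j2]))
    return suspicious
```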

The problem here is that a good verification mechanism can also be used for usage restrictions if it is integrated too tightly into the system. I would prefer the ability to boot the device from a trusted system (we’re back in the verification problem again, but that can be done via hashes) and then do the checks. That gives you the ability to run whatever you want while keeping track of changes made, whether by the owner, by an adversary, or by errors.

TPM is good as long as we, the possessors of the machines (regardless of what the lawyers say about “ownership” - to close the loophole of renting-only), have full control over the module. Which, I am afraid, may not always be the case, due to rich and powerful interests pushing the other way. A nice compromise could be an open TPM module that looks like an original one but allows access to its firmware and protected storage via a connector on its back (and allows verification of the data via the same connector, which could be e.g. JTAG). By making it look like the original, we would keep user accessibility even under software that wants to deny it to us, by making that software think it is interacting with the kind of chip that is not supposed to let us in.

Various code signing mechanisms are also already abused in the field of firmware updates, making jailbreaks somewhat annoying and in some cases too difficult to bother. :frowning:


#9

Heh, I love having these kinds of discussions. And I think we have a ton of thoughts in common… But… (Puts on Hat of Pedantry +5).

Adversaries steal certs or compromise the CA; forging is a waste of time you could be spending reading reddit.

IDA does a fantastic job of this, and it is still incredibly niche. This particular line of thought is actually fascinating, since it elicits all sorts of “can we/should we/what if we can’t/what if we shouldn’t” arguments. I can’t prove it mathematically, but my gut feeling is we won’t be able to effectively visualize flow control without the operator understanding the push/pop/mov/jnz operations that underlie the application.

But then again I never thought a map of the world would work in JavaScript, so I’ve likely been wrong more than right.


#10

BTW, if you have novel thoughts on PE/ELF execution viz, send me a pm or three. Even though I have doubts about perfect solutions, we (as an industry) have got to get better.


#11

Good point. I thought about the scenario of a mole in a CA company or a company owning the signing certificate (or a man in black with a request that cannot be denied). Forgery is not a 100% correct word for that.

We should, that’s no question. The question is if we can, and what if we can’t. I think we can.

The instructions are like atoms capable of forming zero, one, or two “forward bonds” (and capable of accepting any number of bonds from other “atoms”). A zero-bond instruction is e.g. RET, the two-bond ones are conditional jumps, and one-bond is everything else, including plain jumps. CALL instructions are special: similar to two-bond, but with different “side-bond” handling.

This is easy for the machine to understand. Just arrange the instructions into strings, then bend the strings into some easy-to-handle-visually conformation. Then display the resulting loose-ball-of-yarn or loose-spaghetti-bowl in 3D, with the possibility to zoom at the strings of instructions in the classical debugger/disassembler view.

Functionality of the code “strings” can be to great degree inferred and auto-preannotated by the library and kernel calls that are performed from the strings.
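The bond model above could be sketched roughly like this (toy instruction tuples standing in for real disassembler output; for simplicity CALL is treated as plain fall-through here, rather than getting the special side-bond handling described above):

```python
# Sketch of the "forward bond" model: map each instruction to its
# successor addresses. RET -> zero bonds, conditional jumps -> two
# (taken target + fall-through), everything else -> one.
COND_JUMPS = {"jnz", "jz", "jc", "jnc", "jg", "jl"}

def forward_edges(insns):
    """insns: list of (address, mnemonic, jump_target_or_None) tuples.
    Returns a dict: address -> list of successor addresses."""
    edges = {}
    for idx, (addr, mnem, target) in enumerate(insns):
        fallthrough = insns[idx + 1][0] if idx + 1 < len(insns) else None
        if mnem == "ret":                 # zero bonds
            edges[addr] = []
        elif mnem == "jmp":               # one bond, to the target
            edges[addr] = [target]
        elif mnem in COND_JUMPS:          # two bonds
            edges[addr] = [target] + ([fallthrough] if fallthrough is not None else [])
        else:                             # one bond: fall-through
            edges[addr] = [fallthrough] if fallthrough is not None else []
    return edges
```

From that edge map, the “strings” of instructions are just the maximal runs of one-bond nodes; those are what a layout engine would bend into the ball-of-yarn view.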

Thought… could a disassembler for ATmega-class microcontrollers (with just a few dozen kilobytes of code, RISC instructions, and a Harvard architecture) with this kind of visualisation be written in said JavaScript?


#12

Trust us. We’re Sony. This is not a rootkit.


#13

When an average CS or EE engineer can understand or explain how Christmas lights that were packed last year got into the spaghetti ball they are in when unpacked, then I will accept your answer >:).

I worked on system call / service descriptor table node-graph problems in the mid-2000s, and all I can say is that it is brittle. Assumptions could be proven correct in the lab, but the moment you released code into the wild, the amount of SSDT/entry-point/function patching made your models fly out the window.

And that is largely why I think “chain of custody”, or signing, is more reasonable. But that is just one internet denizen’s opinion.


#14


also, http://www.pnas.org/content/104/42/16432.full

I worked a little with molecular modelling. It tends to behave a bit better than what you describe, and the multiscale approaches used for e.g. protein shape models may be handy to “borrow”. The physics model for the code conformation would have to take this brittleness into account and allow manual manipulation of the mess: push and pull the chains, let them snap into new and less tangly curvings. Instead of the physical behavior of covalent bonds and electrostatic and other interactions, use some “pseudo-physics” to achieve semantic separation - let subroutines separate into sub-tangles of their own (e.g. repulse a node more the more CALL links go to it; attract on close proximity but repulse over longer distances, to keep things that belong together together and separated from other subroutines and code threads), etc.
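A toy version of that pseudo-physics - a bare-bones force-directed layout where per-node repulsion is scaled by CALL in-degree, as proposed above. All constants are arbitrary illustration values, nothing tuned:

```python
# Toy 2D force-directed layout: pairwise repulsion (stronger for nodes
# with many incoming CALL links), spring attraction along edges, and a
# simple cooling schedule so positions settle.
import math
import random

def layout(nodes, edges, call_indegree, steps=100):
    """nodes: list of ids; edges: (a, b) pairs; call_indegree: id -> int."""
    pos = {n: (random.random(), random.random()) for n in nodes}
    for step in range(steps):
        force = {n: [0.0, 0.0] for n in nodes}
        for a in nodes:                       # pairwise repulsion
            for b in nodes:
                if a == b:
                    continue
                dx = pos[a][0] - pos[b][0]
                dy = pos[a][1] - pos[b][1]
                d2 = dx * dx + dy * dy + 1e-9
                k = (1.0 + call_indegree.get(a, 0)) * 0.01 / d2
                force[a][0] += dx * k
                force[a][1] += dy * k
        for a, b in edges:                    # spring attraction along edges
            dx = pos[b][0] - pos[a][0]
            dy = pos[b][1] - pos[a][1]
            force[a][0] += dx * 0.1
            force[a][1] += dy * 0.1
            force[b][0] -= dx * 0.1
            force[b][1] -= dy * 0.1
        cool = 1.0 / (1.0 + step)             # shrink moves over time
        for n in nodes:
            pos[n] = (pos[n][0] + force[n][0] * cool,
                      pos[n][1] + force[n][1] * cool)
    return pos
```

The “manual manipulation” part would just mean overwriting a node’s position between steps and letting the forces re-settle around it.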

You seem to have a bit more experience in this field than I do; I haven’t touched assembler much in too many years (though I used to be fairly decent back in the DOS age, and the tech has moved on a lot since then). So I am listening with intent…

The signing is a completely different problem. It does not tell you what you are running, just that some subject - or anybody with access to the subject’s credentials - accepts responsibility for it. Which, especially in fields like DRM and other vendor-imposed restrictions, helps us exactly not at all. :frowning:

So we need something that will actually give us some power.


#15

100% agree, and I think that is a nuance the original article from @doctorow contained. I wish I knew any new perspective I didn’t regurgitate.

Anyway, static analysis tools are lacking, and if you figure out a way to deterministically output a graph of what an app will do that a CS master’s grad can understand, I guarantee you will be a billionaire. (Please do it, I seriously need a toolchain like this (PLEASE!!))


#16

This topic was automatically closed after 5 days. New replies are no longer allowed.