It seems simple enough to slip in some code customized to work only with some User IDs/URLs/other identifier with a regular update. It’s not like you’re going to get 4,000 signers to actually inspect an update for backdoors…
So, it’s a system that adds a lot of complexity and inconvenience for the company over the long term, but only really defends against one scenario - where the government tries to force something to be done in secret that would damage both the company and the government if it became public?
This seems like the sort of thing that would be rather difficult to sell to company management.
Could the same be used for monitoring the granting of access to user accounts, emails, metadata…?
… unless the company relied heavily on establishing and maintaining the trust of its customers that it wasn’t backdooring products. Lavabit comes to mind.
Removing the opportunity for a single actor to secretly undermine things sounds lovely.
How tamper-proof can the implementation of such a system hope to be?
Problem is that it doesn’t look like it even establishes that, exactly. There seem to be a very small number of things it might protect against, but that protection is so focused on those things that it’s tough to see much benefit for the tradeoff the company would be making.
Even the main thing it protects against, having a specific software release that only goes to a small number of people, isn’t necessarily evidence of an issue. And since the “witnesses” don’t get any information about what is in the release or the reason for it, it doesn’t help at all against much more common issues.
We are looking for tamper-evident here. That’s a different (but related) class of problems. Often much easier, too.
yeah, and you can have both. (i’m sure you know this) you can have a cohort of signatories with individual sigs, or you can combine them into a single sig so all signatories have to agree. the first is tamper evident, the second is tamper proof.
(with the huge caveat that every individual involved isn’t a double agent, but being that double agent in a large group of trustworthy individuals is reeeally hard)
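The individual-vs-combined distinction above can be sketched in a few lines. This is a toy model only: a real cothority uses Schnorr/Ed25519 aggregate signatures, while here HMAC tags stand in for signatures, and the witness names and keys are made up for illustration.

```python
import hashlib
import hmac

# Hypothetical witnesses; in reality each would hold a private signing key.
WITNESS_KEYS = {"alice": b"key-1", "bob": b"key-2", "carol": b"key-3"}

def sign(key: bytes, release: bytes) -> bytes:
    # HMAC stands in for a real digital signature here.
    return hmac.new(key, release, hashlib.sha256).digest()

def individual_sigs(release: bytes) -> dict:
    """Each witness signs on their own -- tamper *evident*: anyone can
    later check exactly which witnesses did (or did not) see a release."""
    return {name: sign(key, release) for name, key in WITNESS_KEYS.items()}

def all_must_sign(release: bytes, sigs: dict) -> bool:
    """Combined policy -- tamper *proof* in the sense that a release is
    only accepted when every witness's signature verifies."""
    return all(
        hmac.compare_digest(sigs.get(name, b""), sign(key, release))
        for name, key in WITNESS_KEYS.items()
    )

release = b"firmware-v1.2.3"
sigs = individual_sigs(release)
assert all_must_sign(release, sigs)      # full quorum: accepted
sigs.pop("carol")
assert not all_must_sign(release, sigs)  # missing witness: rejected
```

A real combined scheme would aggregate the signatures into one short value rather than checking a dict, but the policy consequence is the same: one missing or dissenting witness blocks acceptance.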
I meant that this system would need to be implemented in an open and distributed manner so that no single actor could corrupt it.
Tamper evident is huge, thanks!
This only mitigates an update to a single target, FBI forcing a general update would hide this. And in the latest FBI filing they think they can force Apple to give up the iOS source code, so I’m sure they think they could force an update to everyone as well.
Plus it suffers from the same problem as Bitcoin, with fewer incentives to prevent it. Just fire up a couple thousand virtual servers and sign away.
Using Cothority means trading short bursts of inconvenience (having to muster a quorum every time you want to ship an update)
Disagree. As a sysadmin, that doesn’t seem like an acceptable tradeoff. Yes, ensuring security against state actors wielding court orders or legislation is important, but so is the ability to push out software updates as quickly as possible. Because, a lot of the time, those software updates are security patches for zero-day vulnerabilities. Rolling out those kinds of fixes as quickly as possible is an important part of ensuring security.
And what happens when one (or many) of the other signatories realize the power they’re wielding and try to abuse it, arguably for good. “As a computer scientist, I don’t feel this security update adequately addresses CVE-XXXXXX and as such I refuse to sign until blah blah blah”.
Not every decision can be made by committee. The decision to roll out a software update - be it bug fix, security patch, or simply a new feature - shouldn’t be one of them.
In other words, this is something we cannot win either way.
What about multiple signatures, and let the end user/admin choose whether they want to accept the vendor one, the Cothority one, or the one signed by X.Y.?
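A minimal sketch of that idea: the update ships with several independent signatures, and each admin decides which signers they trust. Everything here is hypothetical — the signer names, keys, and HMAC-as-signature stand-in are for illustration only, not any real update system’s API.

```python
import hashlib
import hmac

# Hypothetical well-known signers and their (toy) keys.
KNOWN_SIGNERS = {
    "vendor": b"vendor-key",
    "cothority": b"cothority-key",
}

def verify(signer: str, update: bytes, sig: bytes) -> bool:
    # HMAC stands in for real signature verification.
    expected = hmac.new(KNOWN_SIGNERS[signer], update, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)

def acceptable(update: bytes, sigs: dict, trusted: set) -> bool:
    """Accept the update if at least one signer the admin has chosen
    to trust has a valid signature on it."""
    return any(verify(s, update, sigs[s]) for s in trusted if s in sigs)

update = b"patch-2016-03"
sigs = {"vendor": hmac.new(b"vendor-key", update, hashlib.sha256).digest()}

assert acceptable(update, sigs, trusted={"vendor"})        # vendor-only admin
assert not acceptable(update, sigs, trusted={"cothority"}) # cothority-only admin
```

The `trusted` set is the admin’s policy knob: a cautious admin could even require signatures from every signer in the set instead of any one, trading availability for assurance.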
This topic was automatically closed after 5 days. New replies are no longer allowed.