Done! Thanks for the reminder, Cory.
The executable is a good idea. You can hide it behind a library if you want. A standalone process can be secured (including its access to the keys) better than a library; if you want, you can run it as a different user and communicate with it only over stdin/stdout, environment variables and command-line arguments. With a library, you are too tightly bound to the executable it is linked to, and its bugs are your bugs.
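For illustration, a minimal Python sketch of how that separation looks from the caller's side, assuming a recipient key is already in the keyring (the key ID here is a placeholder): nothing cryptographic is linked in, the data just flows through the gpg process's stdin and stdout.

```python
# Sketch only: pipe plaintext through a standalone gpg process instead of
# linking a crypto library. A hardened setup could additionally run this
# under a dedicated user (e.g. via sudo -u).
import subprocess

def gpg_encrypt(plaintext: bytes, recipient: str = "someone@example.org") -> bytes:
    result = subprocess.run(
        ["gpg", "--batch", "--armor", "--encrypt", "--recipient", recipient],
        input=plaintext,
        stdout=subprocess.PIPE,
        check=True,
    )
    return result.stdout  # ASCII-armored ciphertext read back over stdout
```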
…thought… what about a "crypto coprocessor" - a Raspberry Pi or the USB Armory or another such board, connected to the host computer, and the "gpg" executable replaced with one that communicates the intent to the standalone board and sends data in and gets data out? That way even a compromised host can only get access to the plaintext when it is exposed, but not to the keys. A PIN or a smartcard, a mechanism outside of the exposed data flows within the computer, can protect the keys themselves.
…to allow this without a standalone-executable architecture would require recompiling the library in question, and then you are in a world of pain related to dynamic linking and possibly recompiling the software itself.
As for the license, how else do you want to prevent the fork-and-pervert scenario, one of the embrace-extend-extinguish tactics so common with dominant vendors?
You're right, it is not perfect. But… do we have better?
Are they a US 501(c) non-profit?
I suspect you will be able to work out the answers for yourself…
https://gnupg.org/donate/index.html
I'm looking forward to seeing your own effort, then. Oh? You haven't written a crypto library? You haven't maintained a useful tool for over ten years while being paid very poorly? You haven't made said tool freely available to the world?
Well then, what have you done? Oh, I see. You've criticized.
Yes, I'm sure Werner and the rest of the people who have maintained this very difficult codebase appreciate your input. Thank you for showing them what they're doing wrong. How would they manage without you?
Crypto coprocessors have been around for a long time doing pretty much just what you describe. You rarely see them outside of finance or government, though. Doing that bit "right enough" for the crypto folks is hard enough, and I can't imagine doing it "right enough" for end users who might want something like GPG.
On the whole though I have to agree with @Joe999 as far as preferring libraries, licensing, etc. I use GPG as the "least worst" of the ways to encrypt data.
Ah yes, the BSD license. Great idea. Unlike the GPL, the BSD license would allow closed-source vendors to incorporate the open-source code into their closed-source products, and thus reach a much wider audience.
And, of course, I'm quite sure that the various governments with an interest in reading our mail would not take advantage of all that widely-used closed-source code to insert backdoors under national security letters and the like. Only someone completely paranoid would think that.
I don't have one in hand yet, so I can't comment on rightness (not that you should trust me about more than the most obvious wrongness), or even basic adequacy; but there is a smartcard implementation of some of the more sensitive aspects (key generation and storage) of doing PGP that should work with GnuPG and at least some of the world's reasonably common smartcard readers.
I'm hardly convinced that this thing is the equal of the sort of HSM that banks and non-incompetent CAs buy to store their really cool crypto secrets; but I suspect that there are fewer rootkits that know how to nab keys out of smartcards than off disks or out of memory.
(Though, general note, damn are smartcard readers/middleware/etc. a market where not enough "smoothing" has been done. Most of it is nominally standardized; but the demand is dominated by a relatively small number of relatively large customers, each of which is interested in getting their setup working, and less in getting it to interoperate, so the further you go from specific, high-volume systems, like CACs or various national ID cards, the nastier things get.)
Or maybe that no one who needs security (ie, no one) should use any closed-source software because it could all have built-in backdoors.
Given what we've learned from Edward Snowden, I'd have to say it's damned unlikely that any closed-source crypto software does not have backdoors, especially if it's widely used.
Given the choice between supporting a project whose code almost certainly would be incorporated into subverted closed-source products, and a project whose code can't be misused that way, I'll support the latter, thank you.
Btw, using GnuPG as an executable (vs a library) does not prevent back doors. It only moves them around.
Quite true, and exactly why I built mine from source. Yes, I'm well aware that my Linux distribution came as binaries, as did other libraries used by the GnuPG source. You have to draw the line somewhere. The impossibility of perfect security is no excuse for not doing what we can.
If you ever develop and release your BSD-licensed library, Joe, I might use it. In the meantime, I will use what's available. If you wish to continue to sing the praises of nonexistent vaporware over an available and working tool, you might consider how well that worked out for Ted Nelson or the HURD developers.
This goes way past Snowden. See the Cryptogate affair, aka NSA infiltration of Swiss company Crypto AG.
http://rense.com/politics2/crypto.htm
Beware of the side-channel attacks. And compromise of supporting systems; random number generation is a common attack vector, since the output of a predictable pseudorandom sequence looks just like truly random, nonguessable numbers. I'd consider keyloggers and remote access trojans to be the most treacherous threat these days, though.
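A minimal Python illustration of why a subverted RNG is so hard to spot from the outside: a seeded pseudorandom stream is byte-for-byte reproducible by anyone who knows the seed, while an OS-backed CSPRNG is not. The seed value here is just an example.

```python
import random
import secrets

# A Mersenne Twister stream seeded with a guessable value: anyone who can
# guess or recover the seed can reproduce every "random" key byte.
rng = random.Random(1234)
weak_key = bytes(rng.getrandbits(8) for _ in range(32))

# The same seed yields the same "key"; a backdoored or poorly seeded PRNG
# produces output that looks random but is predictable to the attacker.
rng2 = random.Random(1234)
assert weak_key == bytes(rng2.getrandbits(8) for _ in range(32))

# OS-level CSPRNG: not reproducible from any application-visible seed.
strong_key = secrets.token_bytes(32)
```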
Nothing prevents backdoors entirely. But you can move them around to places that are easier to audit and monitor, where the sun can shine.
I prefer none over poor, because poor security lulls you into a false sense of security, which is more dangerous than knowing you have none.
The key itself is the reddest of the red materials. It should ideally never touch the machine itself, at least in non-encrypted form. The encrypt/decrypt operation should be done out of the reach of the machine's kernel.
That's not a bad thing. The worse thing is that it will allow easy perversion of the original intention, making billions of incompatible variants on file formats, and generally being a headache. And then the incompatibility will be used to force us to use some closed-source crap that will have an NSL-enforced backdoor, a subtle one that looks quite like a common bug to be deniable, because everybody uses it.
We have little embedded computers these days, $30 a pop for a Raspberry Pi. A fairly secure machine can be built with one, if we disallow all communication with the outside except a serial line for data in/out. (ttyS code is less copious and easier to audit than ethernet drivers and TCP/IP protocol stacks.) Whether for converting red data to black and vice versa (but then we have plaintext on a possibly risky machine), or, if we attach a terminal and a keyboard, sending in/out only black data and viewing the red ones locally. (Assuming we restrict the data types to those that cannot carry exploits, e.g. plain text or plain HTML, no scripts, and validate binary data like JPEGs to conform to standards and remove potential buffer overflows and embedded scripts…) That way we can achieve a cheap solution that pretty much cannot be compromised from the internet.
Would that count as a crypto coprocessor of sorts?
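For what it's worth, a minimal host-side Python sketch of that serial-line arrangement, assuming pyserial and an entirely made-up length-prefixed framing on the board side; the port name and the protocol are placeholders, not an existing design.

```python
# Host-side sketch: hand a message to a serial-attached "crypto coprocessor"
# and read back the ciphertext it produces. The framing (4-byte length
# prefix) and the port name are assumptions, not a real protocol.
import serial  # pyserial

def encrypt_on_coprocessor(plaintext: bytes, port: str = "/dev/ttyUSB0") -> bytes:
    with serial.Serial(port, 115200, timeout=30) as link:
        # send: 4-byte big-endian length, then the payload
        link.write(len(plaintext).to_bytes(4, "big") + plaintext)
        # receive: same framing for the ciphertext coming back from the board
        resp_len = int.from_bytes(link.read(4), "big")
        return link.read(resp_len)
```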
From what I saw, PKCS#11 is a convoluted mess with its fair share of vulnerabilities. No wonder, given the complexity.
The problem here is part of the same problem as the discussion with @shaddack regarding coprocessors. None of this stuff is "common" or "painless" to use, and that's even before we get back to the web-of-trust problem which plagued PGP from the start.
There's a thousand ways to skin a cat, but not a one the cat likes.
Oh, definitely, I wouldn't argue with you on either point, just noting that for GnuPG purposes, there is at least one offering that qualifies as at least an entry-level HSM-alike that is also pretty cheap, all things considered.
Nothing about that solves any of PGP's other problems, or problems with any of its implementations, but it lowers the cost of entry of at least some degree of separation between high-value key material and your dubiously-trustworthy-pile-of-software on the main CPU.
For the purposes of making actually secure encryption common, much less ubiquitous, I have no idea how that would be accomplished; but I'd rather have the (relatively small) percentage of PGPed messages I do handle be handled as securely as I can manage. The challenge of getting Joe User on board (without either diluting the web of trust beyond the point of worthlessness and into active danger, and/or introducing some certificate-authority analog with all the same sins) is beyond my power, or even imagination.
The commonness of stuff leads to a tautology. The uncommon stuff is uncommon precisely because it is not common.
Painless, that's a different can of worms…
The web of trust is a problem. But the hierarchical structure is also a problem. What about a hybrid, where the web of trust is the base but signatures can also be done by conventional hierarchy-based certificates? (Or, vice versa, where certificates can be signed not only by one authority (or perhaps multiple of them) but also by web-of-trust based signatures?)
Some time ago I had an idea of additionally securing SSL certificates for HTTPS by adding a detached signature file at a defined location on the server (e.g. https://domain.ext/certsign.gpg), which could be fetched by a browser extension and used to check the certificate in addition to its own signature. This would add a degree of independence from the authorities and allow additional signing of even self-signed certs. It opens a can of verification worms, though, but would catch forged signatures or compromised CAs fairly easily.
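A rough sketch of what that check might look like, purely as an illustration: it assumes the detached signature at /certsign.gpg covers the PEM form of the served certificate (exactly what gets signed would have to be nailed down in a real design), and that the countersigning key is already in the local keyring.

```python
# Sketch: fetch the server certificate and the detached countersignature,
# then let gpg verify one against the other. Paths/framing are assumptions.
import ssl
import subprocess
import tempfile
import urllib.request

def check_cert_countersignature(host: str) -> bool:
    pem = ssl.get_server_certificate((host, 443))
    sig = urllib.request.urlopen(f"https://{host}/certsign.gpg").read()
    with tempfile.NamedTemporaryFile(suffix=".pem") as cert_file, \
         tempfile.NamedTemporaryFile(suffix=".gpg") as sig_file:
        cert_file.write(pem.encode()); cert_file.flush()
        sig_file.write(sig); sig_file.flush()
        # gpg exits non-zero if the detached signature does not verify
        result = subprocess.run(["gpg", "--verify", sig_file.name, cert_file.name])
        return result.returncode == 0
```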