Apple won't let EFF release a DRM-free app

No. I think people should have choice.

Apple is in this instance not in the business of making products. It is in the business of preventing people from easily accessing the products they want. If Apple rated products, that would be different. If, after reviewing them, Apple fully and completely described what they discovered and let the user decide, that would be different. But they don’t. They are the soup Nazi.

Apple can’t argue that there is a security problem here, so that part of your argument is irrelevant. It is purely a content issue: content that Apple wants to prevent you from seeing. That may serve their corporate purposes, but it does not serve the purposes of all their users, some of whom are no doubt interested in electronic freedom and should have the freedom to make that choice themselves without interference.

2 Likes

I agree with @pjcamp

@cshotton if you only want email, web, and word processing on your computer, and that is all most users do, does this mean that all users should be barred from installing any other applications? Why do you think that, just because you don’t care about certain freedoms, all users should be barred from them, or that that is even okay? The lack of freedoms you describe is not for security; it is to control the marketplace. If you own a device, shouldn’t you actually own it and be able to do what you want with it? Is it yours or not?

Ironically, this isn’t even an either/or proposition; you could quite easily have both. Apple could provide a way to unlock the device that voids all warranty/support, with huge warnings about the security risks of doing so. Advanced users could choose to do it, for example on older devices they want to re-purpose, but of course no one would have to. That would loosen the stranglehold on the marketplace monopoly, though, so Apple has refused to allow this option. OS X allows both with Gatekeeper. You can choose.

The reality with iOS devices is that you don’t really own your device; Apple does.

[quote=“shaddack, post:60, topic:49632”]
I understand the seeming safety and comfort of the padded room. I don’t understand people who insist that it is a good thing they don’t have the key and cannot go out when they want.
[/quote] Very true. What about people who insist that everyone should be in a locked padded room? …for their own good, of course! The illusion of safety over freedom.

2 Likes

I certainly agree that there’s misunderstanding going on. Let’s try to clear it up.

First:

Do you have a source for this? Because the latest I can find from 11/2014 says that no attacks have been seen in the wild.

Now, here’s how I understand the Masque attack to work; please tell me where I’m wrong:

  • The vulnerability is with bundle IDs.
  • Apps are checked to see if they are cryptographically signed, but the bundle IDs do not contain a record of the original cryptographic key they were signed with.
  • It’s therefore possible to give a signed app the same bundle ID as an existing non-Apple app. This is our malware.
  • In order for the malware to run it still needs to be signed. There are three ways this can happen, listed in increasing order of likelihood:
    • Apple signs the app and makes it available on the App Store.
    • The developer uses a provisioning cert to sign the app. This requires the UDID of the device the app will run on, and at least one (probably two) acknowledgements from the end user.
    • The developer obtains an enterprise cert from Apple and signs the app using this enterprise cert. This requires at least one (I think only one) acknowledgement from the end user.

In all three cases the app has to be signed with a cert that Apple controls at the root of the trusted chain. Because Apple implements certificate revocability in their chain, if such an app were discovered in the wild, signed using any of the three methods I’ve listed above, they could disable the cert and the malware would cease to function. We know this is possible because they’ve done it for NES emulators that were signed with enterprise certs.
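
To make the flaw concrete, here’s a minimal sketch of the install check as I understand it, in Swift. The types and names are entirely made up for illustration; this is not Apple’s actual code:

```swift
// Purely illustrative model of the Masque-style check -- hypothetical types,
// not Apple's real installer code.

struct SigningIdentity: Equatable {
    let teamID: String              // who signed the bundle
}

struct AppBundle {
    let bundleID: String            // e.g. "com.example.bank"
    let signer: SigningIdentity
    let hasValidSignature: Bool
}

// The behaviour described above: an update is accepted if its signature
// verifies and its bundle ID matches -- even when the signer differs.
func vulnerableReplace(update: AppBundle, over installed: AppBundle) -> Bool {
    return update.hasValidSignature && update.bundleID == installed.bundleID
}

// The obvious fix: also require the same signing identity as the original app.
func saferReplace(update: AppBundle, over installed: AppBundle) -> Bool {
    return update.hasValidSignature
        && update.bundleID == installed.bundleID
        && update.signer == installed.signer
}

let genuine = AppBundle(bundleID: "com.example.bank",
                        signer: SigningIdentity(teamID: "GENUINE123"),
                        hasValidSignature: true)
let masqued = AppBundle(bundleID: "com.example.bank",
                        signer: SigningIdentity(teamID: "ENTERPRISE999"),
                        hasValidSignature: true)   // enterprise-signed malware

print(vulnerableReplace(update: masqued, over: genuine))  // true  -- replaced
print(saferReplace(update: masqued, over: genuine))       // false -- rejected
```

The signature is valid in both cases; what the vulnerable check never asks is whether the replacement was signed by the same identity as the app it overwrites.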

Where the wall plays into this is in Apple having the ability to revoke every piece of software that gets a signature to run. And that means the signing is now the DRM.

As a side note: I completely understand that this is anathema to a lot of folks. It means Apple users have to place a lot of trust that the company won’t arbitrarily remove content for the wrong reasons, and Apple has shown a willingness to remove content for some questionable reasons. But that is security, because they can immediately stop code they don’t want from running. This is something you can’t do if you allow unsigned apps on the device: the malware would simply not be signed, and there’d be no way to restrict it from running.

So please, for my benefit, tell me where I’m wrong in the above summary.

You have misunderstood me twice on this point now, so I’m not sure how much more clear I can be: I understand that an iPhone is just a computer in my pocket. However I do not use it as such. For me it is not a device I use for general purpose computing. A general purpose computing device definitely needs to be able to run arbitrary code. My phone does not. Maybe yours does! Great!

Where have I argued that? Certainly not when I said this:

Or this?

The problem with an option switch is that developers would stop caring about Apple’s guidelines and just tell users to allow software not approved by Apple on their device, instead of trying to conform. They might not even bother submitting a “clean” version, so users would end up with less secure iPhones. Adding a sideload option can have negative effects on the ecosystem…

Here, FTFY.

1 Like

One thing I can think of is that the app can interact with the notification system. It can alert the device’s owner that there is new information to view (the same way the phone alerts you to missed calls, the email app alerts you to new messages, etc.). Neither a web page nor, as far as I know, an installable web application can do that.
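
For what it’s worth, here’s a rough Swift sketch of what that looks like for a native app, using the UserNotifications framework (the modern API; the idea is the same with the older UILocalNotification). The strings, identifier, and timing are just placeholders:

```swift
import UserNotifications

// Sketch: a native app posting to the notification system, which a plain web
// page on iOS can't do.
func notifyNewInformation() {
    let center = UNUserNotificationCenter.current()
    center.requestAuthorization(options: [.alert, .badge, .sound]) { granted, _ in
        guard granted else { return }

        let content = UNMutableNotificationContent()
        content.title = "New information"
        content.body = "There is something new to view."
        content.badge = 1

        // Fire once, five seconds from now.
        let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 5, repeats: false)
        let request = UNNotificationRequest(identifier: "new-info",
                                            content: content,
                                            trigger: trigger)
        center.add(request)
    }
}
```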

1 Like

I don’t consider this a misunderstanding when you go on to say…

If you use apps then it already does, and you do require that feature. Just because there are limits on the scope of an app’s access to the lower-level OS and hardware doesn’t make writing and running apps anything other than arbitrary code execution, nor make the device anything other than general purpose. The same is true of desktop computers. The “smart” in smartphones, and their ability to run apps, is general-purpose computing: arbitrary code execution. If you didn’t need to execute arbitrary code, you could use a flip phone. The fact that you can fling birds at pigs is proof that you can execute arbitrary code already. Of course this isn’t really related to the DRM vs. code signing discussion per se; it was just a correction of a mistaken argument.

You are even arguing it above: every time you offer how you use your device as justification for the lack of choice, that is exactly what you are arguing. It is also the implied argument when you make the mistaken assumption that users have to trade freedom for security; that is the only way the argument makes any sense. Your example of “choice” is choosing a different phone platform, not getting choice on iOS.

Again, this is all tangential and, while interesting, has nothing to do with the main points of the thread: that DRM and signing are not the same thing, and that DRM is not a necessary part of the security. This whole side conversation was simply to illustrate a sub-point.

does not require the UDID of the device it will be run on. These enterprise provisioning certs don’t require signing per device.

Correct, but only during iOS updates, which leaves plenty of room for attack.

The signing is what allows revoking; the DRM is layered on top of that and is not a necessary part of it. The Google store requires double signing, not DRM, and can still revoke apps. Again, one does not equal the other; one is not necessary for the other.
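
To spell out the distinction I keep making, here’s a toy model in Swift. None of this is Apple’s (or Google’s) real implementation; the types are hypothetical, purely to show that the revocation check and the store-approval gate are separate layers:

```swift
// Toy model only -- hypothetical types, not any vendor's implementation.
// The point: the signing/revocation check and the store-approval (DRM) gate
// are independent layers; you can keep the first without the second.

struct InstalledApp {
    let id: String
    let signerCertID: String
    let signatureValid: Bool
}

struct Platform {
    var revokedCerts: Set<String> = []       // security: kill known-bad signers
    var storeApprovedApps: Set<String> = []  // policy/DRM: only what the store blesses

    // Security check: intact code, signed by a non-revoked identity.
    func passesSigning(_ app: InstalledApp) -> Bool {
        app.signatureValid && !revokedCerts.contains(app.signerCertID)
    }

    // Store-policy check: a separate gate layered on top.
    func passesStorePolicy(_ app: InstalledApp) -> Bool {
        storeApprovedApps.contains(app.id)
    }
}

let platform = Platform(revokedCerts: ["BAD-CERT"],
                        storeApprovedApps: ["com.store.app"])
let sideloaded = InstalledApp(id: "com.indie.app",
                              signerCertID: "DEV-CERT",
                              signatureValid: true)

print(platform.passesSigning(sideloaded))     // true  -- secure via signing alone
print(platform.passesStorePolicy(sideloaded)) // false -- blocked only by store policy
```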

Correct, and this is the issue: once an app is loaded it can be replaced with malicious code that doesn’t need to match the original Apple signing on iOS… yikes. So if it gets loaded via a provisioning cert (you are only prompted for the cert install if it isn’t one of the pre-installed ones or one installed by the malware itself, as in WireLurker), it can then replace other apps. Speaking of which:

Yes, there have been two attacks to date that have been highly successful; fortunately for North Americans, one largely affected China and the other Malaysia. The first attack was called WireLurker; it uses the Masque attack and is verified to have compromised 356,104 iOS devices (roughly three and a half hundred thousand) in China.

[quote]The WireLurker malware is the “biggest in scale” in the trojanized malware family, and it is able to attack iOS devices through OS X using USB. It’s said to be able to infect iOS applications similar to a traditional virus, and it is the first malware capable of installing third-party applications on non-jailbroken iOS devices "through enterprise provisioning."[/quote] There was zero prompt because it installed the provisioning cert first. There have been three waves of WireLurker attacks; the above was just the first, and I’m not sure how many devices were affected in the second or third.

The second successful attack (a non-WireLurker variant of the Masque attack) was in Malaysia, but I’m having trouble digging up info on it; it was in the news a while back, so maybe your google-fu is better than mine. I don’t recall how many devices were affected, but it was also in the hundreds of thousands and used a variant of the Masque attack.

Hope that helps with the sub-points. Cheers.

========================

The main points I was making that were on topic were:

  1. Code signing != DRM. (You cannot conflate the two; you can have one without the other.)
  2. Code signing is necessary for security; DRM is not.
  3. Having the option to legitimately unlock your phone does not decrease security for those who don’t. It isn’t an either/or proposition; we should have the option if we own the device, even if most users never exercise it. We should get to weigh our own security-versus-freedom trade-off ourselves, since it only affects us.
  4. The walled garden isn’t necessary for security; there are plenty of other routes, such as a trusted-app program, the way OS X Gatekeeper operates, etc.
  5. I don’t believe in trading my freedoms for a false sense of security in any aspect of life, and I certainly wouldn’t ever consider imposing such restrictions on others. Such trade-offs are very seldom either/or scenarios and rarely actually work.
  6. My usage scenarios are not everyone else’s. We need to allow freedom for a spectrum of different users and user scenarios, and never be okay with restrictions just because we don’t personally extend past them.
  7. Even with all the restrictions, iOS devices aren’t secure. There are iOS vulnerabilities, and more importantly the lower-level OS is rife with security issues.
1 Like

I have to say, it seems like you didn’t read my post very clearly.

I gave you three ways in which an iOS app can be signed. About the second way I said:

And you responded with something that doesn’t apply to provisioning certs at all:

Provisioning certs absolutely do require a UDID before they will work. If you don’t understand this, I’m not sure how you can claim to understand iOS development.

Enterprise certs, the third way I mentioned, do not require UDID or signing per device, you are correct.

Regarding the revocability of the certs:

This is incorrect. There was no iOS update that disabled the NES emulation. It was based entirely on datestamp. Here’s an example: http://www.iphonehacks.com/2014/10/snes-emulator-iphone-ipad.html

This is incorrect. Wirelurker is OS X malware and the 356,104 number is the number of OS X application downloads known to contain the Wirelurker malware. Page 3, here: https://www.paloaltonetworks.com/content/dam/paloaltonetworks-com/en_US/assets/pdf/reports/Unit_42/unit42-wirelurker.pdf

Wirelurker doesn’t use the Masque attack, it installs a trojaned application onto non-jailbroken iOS devices, along with an enterprise provisioning certificate. Version A of Wirelurker didn’t do anything with iOS apps. Version B only installed them on jailbroken devices. Only Version C contained the enterprise cert that allowed it to affect non-jailbroken devices. Page 16 of the previous PDF.

Palo Alto networks released their findings on November 5th. Apple announced on November 6th that they had disabled the apps. Apple Has Shut Down the "WireLurker" Malware Affecting Devices in China

This is also incorrect. On the first launch of one of the apps installed by Wirelurker you would be prompted. Again, page 16 of the linked PDF.

As I’ve said, all I can find on the Masque attack is that Apple claims no compromised devices have been found in the wild. Apple Responds to 'Masque Attack' Vulnerability, Not Aware of Customers Affected by Attack - MacRumors Given the rapid response to the Wirelurker attack, I would expect them to say that they had identified the certs and blocked them if they had found any indication of the attack in the wild. If you wish to convince me that the Masque attack is a truly in-the-wild threat, you’ll have to provide some evidence.

These are the actual points, not side tangents created for the sake of arguing and moving the goalposts. These were our original contentions before they were obfuscated by attempts to lose them in goalpost-moving. I stand by them and will reply if you address them. Think what you want about the rest; it doesn’t really matter. The entire reason I remade the main points as a numbered list was to get the conversation back on track, and it is a shame you didn’t reply to a single one.

Of course I read your post; how do you think I replied to each of your points and pointed out the points of mine that you failed to address? By reading and understanding, of course. On the flip side, you didn’t even do me the courtesy of addressing any of my main numbered points, the ones we were actually discussing and the only ones pertinent to this thread. Not a single one! :frowning: That, and you insist on moving the goalposts yet again, digging into sub-details that don’t even negate the main points, without realizing that many of those sub-detail rebuttals actually make my main points for me, and then you have the gall to say I didn’t read your post clearly. *sighs* You must be a pleasure to discuss with IRL.

Nevertheless, I’ll do you more of a courtesy than you’ve done me and address these as well…

NOT TRUE. I’ve developed iOS enterprise apps. You sign once to match the provisioning cert, not per device.

SO AGAIN, WRONG. The above quote explains it in Apple’s own words. You absolutely don’t have to sign your app for every single device it will be installed on throughout your enterprise; that would be ridiculous. You’ve obviously never developed an iOS enterprise app. They aren’t installed through Apple’s App Store, which is where the per-Apple-ID (not per-device) signing comes in. Even regular App Store apps aren’t signed per device; you can copy from your iPhone to your iPad as long as they use the same Apple ID. You so rudely ask how I can claim to be an iOS developer, and have claimed to be one yourself, yet you get one of the most basic aspects of iOS app development wrong and tell me I’m wrong about it… really? This on the heels of you incorrectly conflating code signing with DRM, also getting that wrong, yet stating it in a belittling way. I just don’t…

WRONG. If it were only a cert revocation issue, as you initially claim, revoking the cert would make changing the date and time pointless. That app stopped being installable after an iOS update, when Apple expired the previous cert it was signed under. The only reason you can install it by setting the clock back is that the cert wasn’t revoked per se, just expired. Setting the date to before the expiration is exactly why this workaround works. That, and as I’ve explained and you’ve incorrectly negated, iOS doesn’t check the cert on each app execution, only on install, so you can set the date forward again once it is installed and still use it. So, um, yeah… thanks for posting something that proves my previous points while misunderstanding it enough to think it makes yours.
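
To illustrate why the clock trick works under my reading, here’s a toy Swift model of the difference between an expired cert and a revoked one (hypothetical names, not Apple’s actual logic; it assumes the check happens only at install time, as I described):

```swift
import Foundation

// Toy model, not Apple's actual logic. Expiry is a clock comparison, so rolling
// the device date back defeats it; revocation is a lookup the clock can't touch.

struct Cert {
    let id: String
    let notAfter: Date          // expiration date
}

func installAllowed(cert: Cert, deviceDate: Date, revoked: Set<String>) -> Bool {
    if revoked.contains(cert.id) { return false }   // revoked: date-proof
    return deviceDate <= cert.notAfter              // expired: depends on the clock
}

let cert = Cert(id: "ENTERPRISE-1",
                notAfter: Date(timeIntervalSince1970: 1_420_070_400)) // ~2015-01-01
let today = Date(timeIntervalSince1970: 1_430_000_000)                // after expiry
let rolledBack = Date(timeIntervalSince1970: 1_410_000_000)           // before expiry

print(installAllowed(cert: cert, deviceDate: today, revoked: []))      // false (expired)
print(installAllowed(cert: cert, deviceDate: rolledBack, revoked: [])) // true  (clock trick)
print(installAllowed(cert: cert, deviceDate: rolledBack,
                     revoked: ["ENTERPRISE-1"]))                       // false (revoked)
```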

If you read page 16 MORE carefully, you’ll see it clearly says that the prompt occurs with the first installation of the provisioning certificate, and after that the device is compromised. Replacing other installed apps with infected ones requires no prompt. Don’t take their word for it; watch the demo videos of this attack yourself, they show the behavior I describe. This is what I’ve been saying. Also, as you’ll see in the bullet points below, and as I said, later versions installed the cert directly (through USB device access).

But you claimed ZERO iOS devices were compromised, and now you are saying that it did install trojaned apps onto iOS devices? Or are you obtusely claiming that zero devices were compromised by one attack but lots by another? That still contradicts the original point of yours I was rebutting, so thanks; my entire point was that such a thing had been done, in direct rebuttal to your claim that it hadn’t. Version C uses enterprise provisioning certs to replace other apps on iOS devices with infected versions; that is by definition a variant of the Masque attack, and it is how this specific attack vector first came to light and why it was isolated and tested by FireEye. (FireEye later dubbed this style of attack the Masque attack; it isn’t some signature, it is a name they created to describe a style of attack.) And remember, I only offered the Masque attack as “one example” of an attack that compromised iOS devices despite the walled garden, refuting your claim that iOS devices were secure because of it. Are you not getting that you are making my original point while trying to refute a detail, once you’ve moved the goalposts downfield? So, once again, thanks.

Page 3 also confirms it may have affected hundreds of thousands of users. Did you notice the list of bullet points on page 3 that clearly refutes your previous points, or were you too busy cherry-picking to catch it?

Bullet point 4 describes the Masque attack in the wild. Even if you don’t agree that it is a variant of the Masque attack, it clearly states that non-jailbroken iOS devices were compromised in the exact manner I described. So, yep, point made again: it happened, and you found proof of my original point yourself, even if you insist on arguing about the inconsequential label. Good job!

You do realize that you are taking the word of a casual public Apple PR statement denying “awareness” over the official analysis of the same security company whose paper you link to as reputable, right?

It is no fun to have you comprehend none of my points, fail to address any of my main points, and move the goalposts to inconsequential details to fuel rebuttals for the sake of arguing instead of moving the conversation forward on the main points. I won’t be replying to any further comments unless you actually address those main points. No more moving the goalposts; it is tiresome, off topic, and quite pointless to the main discussion. *deep sigh*

1 Like

@snej, @sfrazer - this just in:

This means that when a well-written secure messaging app appears, Cameron’s goons won’t be happy about it. Apple, being a corporation, will cravenly bow to the politicos’ will (if it fails to, or won’t attempt to, halt this) and pull the “undesirable” apps from the store. (See China.)

Then your security is actively compromised by not having access to secure-comm apps.

Now, how do you solve this without full-ownership access to your devices? Suck it up and bow to the will of your politicos? Or what else would you suggest?

Edit: [crickets] ?

2 Likes

@snej, @sfrazer - [crickets]?

1 Like

Really? The EFF is demanding that they, and only they, should be exempt from DRM?

It’s indefensible for Apple to insist that creators allow it to add its proprietary DRM to other people’s creative work against those peoples’ wishes. [emphasis added]

Golly, and here I thought that by “creators” the EFF meant anybody should be able to publish DRM-free apps on the App Store, but you’re saying that by “creators” the EFF means only the EFF?!!!

Such shenanigans are indeed indefensible.

3 Likes

What’s the difference between your signature and a Wi-Fi padlock that only works with an internet connection and the server being up?

Signed code and DRM are not the same thing.

1 Like

Man, I stopped responding to this thread when it was clear people weren’t reading the things I actually wrote or responding with correct citations for anything they were saying.

Then you post something that is obviously a political grandstand that has no chance of passing just to poke at the only major company that produces a phone with end-to-end encryption strong enough to make the FBI complain about it and ping me TWICE about it.

Just stop. I’m not responding in this thread any more.

This one thing in this one country has little chance of passing at this time. The same cannot be said about all countries, all contexts, and all times. Useful apps have been removed for more capricious reasons, and a single law, which the politicos can pass as easily as they pass gas, can reshuffle the comm-app landscape pretty thoroughly.

Why do you insist that contingency planning for such an eventuality, fairly probable in the mid-to-long term, is a bad thing?

Edit: To clarify, I do not insist on everybody rooting their devices. I do, however, think that everybody insisting the manufacturer give them the choice of doing so (which they may or may not then act on, as they choose) is a good thing.

The scenario above may not play out in near term in “western” countries. But we are not all living in the relatively free West, and the freedoms here are shrinking too.

Edit 2: I am really, seriously interested in what kind of contingency planning you suggest for people in such a scenario.

1 Like

I asked you to stop. There’s a whole boingboing thread to discuss Cameron’s recent outburst, but you continue to bring it into this thread for some reason.

I’m blocking you now, because you cannot respect my wishes to stop replying to me.

For a very obvious reason that I explicitly spelled out. (And the thread is an echo chamber where I’m not likely to learn anything I don’t already know. While you look like you’re running out of arguments, you may still have something decent up your sleeve.)

I am still interested (more than in a fight; that’s just collateral damage) in your proposal for contingency plans for such an eventuality. How can the walled garden’s subjects opt out when a law takes away their secure-comm apps? Assume that buying an easily rootable Android phone is out of the question.

Do you have an answer? Or an admission that you don’t have one, and that your walled-garden-without-opt-out-is-good argument holds some water only in “peacetime” and does not offer a contingency? Or are you just running out of arguments?

I really do want to know.

1 Like

String-can-phone and pig-latin?

aka “If you don’t like America, why don’t you leave!”

1 Like

If the sky falls then I will consider my priorities and do something that allows me to use secure messaging. Like jailbreaking my phone or whatever. As I said, I’m pragmatic: I have a device that suits my needs. If my needs or the device change maybe I’ll do something else. But in the current environment I prefer the way Apple manages device security.

I’m sorry I’m not playing into some desire to have a fanboy fight about this. And this really isn’t such a fascinating argument for me that I’m going to keep on arguing. (I’m reminded now of the Monty Python “Argument Clinic” sketch.)

I’m out. Enjoy the crickets.

I’m getting a distinct whiff of sea-lion here.