You are aware this doesn’t work for things like iOS, where the carrier doesn’t have control of any of the software, right?
Wiretapping doesn’t require that you touch the cell phone at all. You just access the carrier’s data stream directly, bypassing the device.
It doesn’t work this way. Telcos no longer have ultimate control over the software on smartphones, especially iOS devices.
It’s also entirely moot as the telco can just access your calls directly, without needing to install anything on a device.
Even if the FBI were able to get this access (they weren’t), they would still need physical access to the iOS device to activate the software. (You can’t update iOS without entering the passcode, because the passcode is required for the updater to migrate passcode-encrypted content.)
Either way, all of this still painfully ignores the fact that the OS doesn’t reference hardware microphones directly. There’s an aggregate software audio input that presents all input to the OS.
So removing all the hardware microphones would still leave malware with access to any microphone in a separate Bluetooth or wired headphone device, through that same aggregate software audio input.
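A rough Swift sketch of what I mean (illustrative only, not a complete recorder; the file name and settings are arbitrary): recording code asks the audio session for whatever input route is currently active, never for a named piece of hardware.

```swift
import AVFoundation

// The session hands back whatever input route is active right now:
// built-in mic, Bluetooth headset, wired headset. No hardware mic is named.
let session = AVAudioSession.sharedInstance()
try? session.setCategory(.playAndRecord, options: [.allowBluetooth])
try? session.setActive(true)

// Enumerate whatever inputs happen to exist at this moment.
for input in session.availableInputs ?? [] {
    print(input.portType.rawValue, input.portName)   // e.g. MicrophoneBuiltIn, BluetoothHFP
}

// Record from the current route; removing the built-in mic just changes
// which port shows up above, not whether this code can capture audio.
let url = FileManager.default.temporaryDirectory.appendingPathComponent("capture.m4a")
let settings: [String: Any] = [AVFormatIDKey: kAudioFormatMPEG4AAC,
                               AVSampleRateKey: 44_100,
                               AVNumberOfChannelsKey: 1]
let recorder = try? AVAudioRecorder(url: url, settings: settings)
recorder?.record()
```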
No. Just … no.
I’m a mobile software dev. I’ve also worked for a mobile operator for a short while, a few years ago.
I know this is the internet and you’ll just have to take my word for it, but you are wrong on so many levels it’s just not funny anymore. As the quote goes: you’re not even wrong. But you do represent a lot of people who just have no idea.
Let me preface this with the warning that I’m not going to provide links here. This is what I know to be true and basic googling will back it all up. I’ve spent years doing my research to do my job, but just google the terms here and you’ll do fine.
First off, you mention physical access to the phone. Fine. It is well known that the NSA has intercepted whole shipments of Linksys routers and switches, planted hardware/software backdoors in them, repackaged them, and sent them on their way. Hell, this goes back to before the first Gulf War, when printers/photocopiers had tracking hardware installed in them, allowing the US to pinpoint them for bombing! And these are just two well-known instances … how many more do we not know of?
Then there’s the software side. TrueCrypt and Lavabit are just two cases where we know National Security Letters made them do odd things to their software, with a gag order preventing them from saying anything about it. Lavabit chose to just cease operations rather than comply. What about Microsoft? Their latest OS really likes its data gathering. Do you really think one of the countries that developed Stuxnet couldn’t put something in there? Or in the closed portions of Android (Google Play Services etc.)? Google had to deploy encryption WITHIN its own infrastructure because the NSA wasn’t just listening (physically) on the internet backbones, but also inside Google’s own network.
And if you can listen, you can change things.
As for your insistence that iOS is safe due to its passcode … there are many ways of getting that passcode. All those surveillance cameras can see you enter it. Or one bugged piece of software can grab it. Or maybe use the accelerometer, or plant your own bug in the phone while the person is sleeping. Or just plonk a keylogger on their computer … backing up through iTunes is RIFE with security holes.
And then there’s all the things you can do if you just steal the phone: dumping the memory off the chip, duplicating the chip memory’s content so you have unlimited tries … the case where the FBI tried to get Apple to backdoor their OS made quite a few hardware flaws apparent, many of them actually demonstrated. Hell, the FBI bought a hack to get in … do you really think that was the only way they could have gotten in? NO! There are many more ways, and it is only the FBI’s incompetence that made it take so long before they got in.
Someone above also mentioned the article that was on BB just the other day: ‘haunted hardware’.
Also above was a link about REX OS. But there’s another one which underlies all mobile communications: SS7. This ancient bit of tech underpins all mobile telecoms and is hacked together and patched all to hell. And you can do a SHITLOAD of weird things with it. Tracking and listening in are trivial. And let’s not get started on the baseband chip ALL phones use. Look it up, google it: “baseband chip cellphone security”.
Thing is, using a phone’s architecture to spoof the phone into doing pretty much anything is all possible using the most basic underlying systems it works on. These were created in the ’70s, when security just wasn’t a real issue to those who designed the systems and protocols.
And finally there is, of course, the carrier. They see EVERYTHING you do. All your calls, all your SMSes … when I worked for a carrier, I could have seen it all. And through the HLR, I had even more access. The only way to change that would be to go fully digital: end-to-end encrypted, IP-based messaging and VoIP calling through a VPN. And even then the carrier HAS to know where you are and still sees the packets you send. And they do look at them, using deep-packet inspection.
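To be clear about what “end-to-end encrypted” buys you, here’s a minimal Swift/CryptoKit sketch (the key exchange is waved away; assume the two endpoints already agreed on a key out of band): the carrier still sees that packets flow, and to where, but only ciphertext.

```swift
import CryptoKit
import Foundation

// Assumed for the sketch: both endpoints already hold this key (in reality
// you'd do a proper key agreement, e.g. over Curve25519).
let sharedKey = SymmetricKey(size: .bits256)

let message = Data("meet at noon".utf8)
let sealed = try! ChaChaPoly.seal(message, using: sharedKey)

// What the carrier (or any deep-packet-inspection box) can observe:
// opaque bytes, plus size and timing metadata.
print(sealed.combined.base64EncodedString())

// Only the other endpoint, holding the key, can recover the plaintext.
let opened = try! ChaChaPoly.open(sealed, using: sharedKey)
print(String(data: opened, encoding: .utf8)!)   // "meet at noon"
```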
I’ve typed enough and you have enough topics to do research into before you say anything else about something you know nothing about.
tl;dr: anyone who thinks mobile phones are safe or can be made safe/private just has no idea about the underlying technology, from the hardware to the base layer it operates on to the OS it uses: it’s all full of holes.
It seems you didn’t read my other posts in this thread, where I explicitly said they can just go to the carriers to get the data stream?
It also seems you are very unfamiliar with iOS security. I suggest reading https://www.apple.com/business/docs/iOS_Security_Guide.pdf. Specifically, iOS devices explicitly protect against the NAND replay attack you suggest for brute-forcing the passcode, using an anti-replay counter. You also seem to paradoxically suggest that malware, which requires a passcode to install, would somehow have access to the passcode.
You also mention the baseband/REX OS. This is a red herring, as the baseband doesn’t have direct access to the application processor (AP). The baseband runs on a different chip that is separate from the SoC running the real OS and cannot execute code inside that OS. Think of it like an Ethernet hub: all it does is deliver network packets to a computer, and, like a hub, it never gains an execution context on that computer. In order to get code to run, you must find and exploit a vulnerability in the parser that determines what the network data means. It’s far, far easier to find such vulnerabilities in code for things already running on the OS.
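To make the “vulnerability in the parser” point concrete, here’s a toy Swift sketch (made up purely for illustration, nothing to do with any real baseband protocol): the whole question is whether the code interpreting incoming bytes validates them before acting on them.

```swift
import Foundation

struct Packet { let payload: [UInt8] }

// Naive parser: trusts the first byte as the payload length.
// In Swift the bad slice below simply traps at runtime; the equivalent C code
// would silently read past the buffer, which is the kind of bug an attacker
// controlling the radio link goes hunting for.
func naiveParse(_ bytes: [UInt8]) -> Packet? {
    guard let declared = bytes.first else { return nil }
    return Packet(payload: Array(bytes[1 ..< 1 + Int(declared)]))
}

// Hardened parser: rejects packets whose declared length doesn't fit.
func safeParse(_ bytes: [UInt8]) -> Packet? {
    guard let declared = bytes.first, Int(declared) <= bytes.count - 1 else { return nil }
    return Packet(payload: Array(bytes[1 ..< 1 + Int(declared)]))
}

let forged: [UInt8] = [0xFF, 0x41, 0x42]   // claims 255 payload bytes, carries 2
print(safeParse(forged) == nil)            // true: the forged packet is rejected
```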
As for the San Bernardino case, you seem to have lost sight of the scope of this article. This article was discussing an attack that could be carried out silently, without someone knowing. In the San Bernardino case, all the theoretical attacks involved permanently destroying the device. Of course, destroying a device is not an option if you want to surreptitiously modify it to listen in on calls.
My problems with this VICE piece are twofold:
- It suggests a problem exists without listing any of the requirements, like prior physical access to the device and the passcode.
- It suggests “solutions” which fundamentally cannot work if the attacker is as advanced as they scare you into believing.
Remove the hardware mics? The attacker can just install a new hardware mic since they already have physical access to the phone. Hell, as I mentioned previously, if they installed malware on the device, they have access to the audio input, regardless of where it comes from.
They could also use the speaker as a microphone, since the two work on the same principle (ever plugged a speaker into a mic port by accident and noticed it worked as a microphone?).
I think there is an untapped market for upscale privacy boxes. The more tasteful, the better. Perhaps gold plated, or silver or platinum. You could call it The Platinum Box, or maybe The Privacy Box, or perhaps just Pbox. You pitch it as a critical accessory for the upwardly mobile. When you absolutely need privacy, just put the phone in the Pbox.
Expensive lawyers would use it to reassure clients that they took their privacy seriously. C-level executives would use it to highlight the importance of their discussions. People in Washington would shoot each other to buy one.
Of course, those Washington people would shoot each other anyway, but at least this would give them an excuse.
The primary attributes of this product would be:
- It must demonstrate “Tasteful Expense” like a fine watch.
- It must look good on an executive’s desk.
- It must block the sensors of any cell phone that is placed inside.
- It must close with a smooth, audible click.
For extra points, you could easily design it to:
- Act as a Faraday cage, though this isn’t as critical as looking expensive.
- Recharge the phone(s).
Wish I had the capability to make something that looked expensive and tasteful. I think this would sell itself.
I’ve been waiting to see if this Kickstarter is legit or not: JackPair: secure your voice phone calls against wiretapping by Jeffrey Chang & the AWIT team — Kickstarter
…but it’s still in an “on hold” / “delayed” / “might be a scam” state.
Maybe easier and better-quality audio using the capacitive touch screen. Anybody know if it’s possible to get real-time state data from those?
The speaker is maybe another option, as most of the internal I/O pins can be programmed as input, output, or both, though I suspect the D/A converter and amp might get in the way of an inbound signal.
I don’t know. The sampling rate and sensitivity of accelerometers in phones are almost there for full voice quality, though. Plus, with 3-axis accelerometers, you can do speaker direction detection, which would be handy for automated transcription of conversations.
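A minimal Swift/CoreMotion sketch of what an app can already ask for through public API (illustrative only; actually reconstructing speech from this stream would take serious signal processing on top of it):

```swift
import CoreMotion
import Foundation

let motion = CMMotionManager()
if motion.isAccelerometerAvailable {
    // Ask for the fastest rate the OS will grant (the request is a hint,
    // not a guarantee).
    motion.accelerometerUpdateInterval = 1.0 / 100.0
    motion.startAccelerometerUpdates(to: .main) { data, _ in
        guard let a = data?.acceleration else { return }
        // Three axes per sample: the raw material for both the audio
        // side channel and the direction estimate mentioned above.
        print(a.x, a.y, a.z)
    }
}
```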
Nice! Didn’t even consider that aspect.