The eminently hackable police bodycam

Originally published at: https://boingboing.net/2018/08/12/vievu-patroleyes-firecam-di.html

5 Likes

Well, it ain’t called the Internet o’ shiT fer nothing.

4 Likes

Just like voting machines, anyone in authority who’s not working to fix it probably has a vested interest in it being broken.

8 Likes

Once I looked beyond the witty chatter, the pr0n, the cat gifs, and such, I came to understand that the whole thing is unmanageable, unless it can be shut down. Someone, somewhere, can do this, and it will happen eventually.

2 Likes

Attackers could pinpoint intense police activity by watching for groups of cameras that all switch on at the same place and time.

No, you pinpoint intense police activity by watching for groups of cameras that all switch off at the same place and time.

6 Likes

You stuttered.

7 Likes

It mentions cryptographically signing the video.

Is there anything out there that does this? I doubt it. But there should be!
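
For what it’s worth, the primitive itself is off the shelf; what’s missing is cameras that actually use it. A toy sketch in Python with the pyca/cryptography library (the footage bytes are a stand-in for a real clip):

```python
# Toy sketch: signing a clip with Ed25519 via the pyca/cryptography
# library. In a real camera the private key would be provisioned at
# manufacture; the footage bytes here are a stand-in for a real file.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # never leaves the camera
public_key = private_key.public_key()       # handed to anyone who must verify

footage = b"raw video bytes go here"
signature = private_key.sign(footage)

try:
    # verify() raises InvalidSignature if even one byte has changed.
    public_key.verify(signature, footage)
    print("footage intact since signing")
except InvalidSignature:
    print("footage was tampered with")
```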

1 Like

So if we combine this with fake video, what’s the worst thing we could make? Why do I wonder these things…

So, what I’m hearing is that a particular department could have all its cameras remotely activated, with the video feed hacked out to external storage?

So, if you know a particularly troublesome department has a weirdly high incidence of camera “failures” during public interactions, then, say, at a planned protest, all the cameras could be forced on and the video fed to a publicly accessible location?

Something something geese and ganders…

4 Likes

On the one hand, I can envision cases being overturned when the defense can demonstrate tampering… OTOH, if the police themselves do the tampering, there could be all kinds of dirty convictions. Yuck.

The software industry has had plenty of time to get its act together on security, yet we continue to see this amateur software being released to the public. It seems developers simply cannot learn security basics, so at a minimum software must be pen-tested and fixed. However, it’s obvious these IoT manufacturers can’t be bothered to do anything to secure their products.

Oh, what a wonderful Rickroll (and thank heavens Goatse was never a video…).

My new startup uses machine learning trained on an extensive proprietary corpus of police videos to detect open, non-legally-important spaces for the insertion of advertising material.

Imagine the impact of your product being shown during the heightened emotional state of a courtroom confrontation over police planting evidence on a member of a minority!

Our target advertisers will be lawyers, insurance companies, makers of souped-up sports cars perfect for long speed chases, and IKEA.

1 Like

What usually happens is that there is this big block of time allocated to security, and that gets shifted to the end of the project, “Get it working so we can show it to people, and then we’ll have plenty of time to get the security right”, and then that time is squeezed, “fix these other problems first”, and finally an arbitrary shipping date lands on it and crushes it.

I assumed they probably mean encrypting the stored or transmitted video. The US government requires Secret and Top Secret video files to be encrypted with AES-128 and AES-256 respectively. Properly implemented, AES-128 would prevent all but state-level actors with massive resources (such as the NSA) from decrypting, and therefore tampering with, the video… unless, of course, they have the key. I argued years ago that streams should be encrypted and that police departments should not hold the encryption keys, which should be entrusted only to courts of law.
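
A minimal sketch of the idea in Python, using pyca/cryptography’s AES-128-GCM (the key handling here is a placeholder for whatever escrow arrangement a court would actually use):

```python
# Toy sketch: AES-128-GCM via the pyca/cryptography library. GCM also
# authenticates the ciphertext, so tampering is caught at decryption.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # this is what the court would hold
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # 96-bit nonce; must never repeat per key

footage = b"raw video bytes go here"       # stand-in for a real clip
ciphertext = aesgcm.encrypt(nonce, footage, None)

# decrypt() raises cryptography.exceptions.InvalidTag if the ciphertext
# was modified in any way.
assert aesgcm.decrypt(nonce, ciphertext, None) == footage
```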

Now the caveat. Any encryption protocol is only as secure as the software application implementing it, and the vast majority of cracking focuses on finding vulnerabilities to exploit there, since the math itself is virtually unassailable by classical computation. This is why - even in the unlikely event that law enforcement takes this threat seriously and demands that manufacturers implement video encryption, and the even more unlikely event that legislatures consign the keys to courts of law - the hardware and its software must be publicly audited by independent security researchers such as Josh Mitchell, to expose bugs so that manufacturers can fix them before they’re exploited by criminals.

If we lived in a world where corporations and institutions fixed their crappy software instead of trying to vilify the messengers, that’s what we’d get. Alas, as the endless parade of major security breaches and abuses of the DMCA demonstrates, we do not live in that world. We live in a world of corrupt law enforcement and shady corporations, where the guiding principle is to get away with whatever you can and hide behind bad legislation written by technologically illiterate and ethically incompetent legislators or, worse, their corporate cronies.

In that world our best hope is independent watchdogs such as the Electronic Frontier Foundation.

1 Like

Where my head was at was this era of deepfakes: we could encode a cryptographic signature into each frame and embed it in the container, or even make it part of the image itself, sort of like the legacy VBI used for transmitting closed captioning.

Yeah, there are a lot of challenges with that. But I think it’d be a useful undertaking.

2 Likes

It would add computational cost (though a fairly manageable amount of it; fixed-function cryptographic accelerators can handle significant bitrates), but a camera could sign each frame (and presumably data like its position in the sequence, to prevent edits that attack no individual frame but carefully elide or rearrange valid ones) with a private key burned in at manufacture or generated onboard during initialization.
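
A toy sketch of that frame-plus-sequence-number binding in Python, via pyca/cryptography (the frame bytes are stand-ins for real sensor output):

```python
# Toy sketch of per-frame signing: each signature covers the frame bytes
# prefixed with an 8-byte sequence number, so dropping, duplicating, or
# reordering otherwise-valid frames breaks verification.
import struct
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()  # burned in, or generated on first boot

def sign_frame(index: int, frame: bytes) -> bytes:
    # Bind the frame to its position in the stream before signing.
    return camera_key.sign(struct.pack(">Q", index) + frame)

def verify_frame(index: int, frame: bytes, sig: bytes) -> None:
    # Raises InvalidSignature if the frame or its position was altered.
    camera_key.public_key().verify(sig, struct.pack(">Q", index) + frame)

frames = [b"frame-0-bytes", b"frame-1-bytes"]  # stand-ins for sensor output
signed = [(i, f, sign_frame(i, f)) for i, f in enumerate(frames)]
for i, f, s in signed:
    verify_frame(i, f, s)
```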

That approach wouldn’t be resistant to a sophisticated attacker with ongoing physical access to the device; but one would hope that, if the burden of proving integrity falls on the people who want to use the footage as evidence, the various mathematically unsophisticated but tricky ‘guys with guns and a humorless attitude toward tamper-evident seals are blocking my physical access’ approaches would be used.

What I’d be interested to know is whether the unique properties of each sensor could be helpful, or whether, while you cannot feasibly construct a second sensor with the exact same distribution of nonlinear responses and noise as the target sensor, you can fairly easily characterize a sensor’s peculiarities from some sample footage and then edit matching artifacts onto altered footage, or onto footage from another camera that has been filtered aggressively enough to scrub its own artifacts.

The latter, sensor-based approach would be a lot trickier (and potentially infeasible, if it turns out to be relatively easy to edit in noise characterized from a sensor without having to duplicate the sensor itself); but it would have the virtue of not depending on the target device keeping its private key private (or on the vendor refraining from keeping an ‘escrow’ copy when burning it in), resting instead on physical defects that are unique for the purposes of anyone who doesn’t consider atom-by-atom fabrication to be merely careful workmanship.

(Edit: just to clarify. I very strongly suspect that a silicon image sensor has many, if not all, of the properties of a physical unclonable function, because of unavoidable manufacturing variations in the zillions of light-sensitive areas and their support circuitry. Exposure to a given light input (or a series of them, if the sensor doesn’t fully return to its ‘normal’ state after each frame and carries heating, residual charge, etc. from the previous frame that slightly changes its behavior toward the next) would be the ‘challenge’, and the noise/image artifacts the sensor exposes to the host camera/system (not because it wants to, but because the noise goes with the signal when the image data are read out) would be the response. What I don’t know, though, is whether sensors expose their uniqueness too readily and usefully, unlike PUFs specifically intended for security, which, like asymmetric encryption schemes, are designed so that you can’t infer the secret by issuing a series of suitably crafted challenges and watching the responses. If so, a ‘noise profile’ could be constructed and readily applied to computer-generated, ‘noiseless’ footage to make it appear to have originated from the camera in question, or to footage from some other camera that has been noise-reduced enough to allow the new ‘noise’ to be inserted.)
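
The published forensic technique along these lines is PRNU (photo-response non-uniformity) fingerprinting, and it works roughly as worried about above: estimate the sensor’s fixed noise pattern from sample frames, then correlate a questioned frame’s residue against it. A crude sketch with NumPy/SciPy, with a Gaussian blur standing in for a proper denoising filter:

```python
# Crude sketch of PRNU-style sensor fingerprinting with NumPy/SciPy.
# Frames are float arrays; the Gaussian blur is a stand-in for the
# proper denoising filter a real forensic tool would use.
import numpy as np
from scipy.ndimage import gaussian_filter

def residue(frame):
    # High-frequency residue: the frame minus a denoised copy of itself.
    return frame - gaussian_filter(frame, sigma=2)

def fingerprint(frames):
    # Scene content averages toward zero; the fixed sensor pattern remains.
    return np.mean([residue(f) for f in frames], axis=0)

def correlation(frame, fp):
    # Normalized correlation between a frame's residue and a fingerprint;
    # higher values suggest the same physical sensor.
    r = residue(frame).ravel()
    r = r - r.mean()
    f = fp.ravel()
    f = f - f.mean()
    return float(r @ f / (np.linalg.norm(r) * np.linalg.norm(f)))

# Synthetic demo: a fixed per-pixel 'defect' pattern plays the sensor.
rng = np.random.default_rng(1)
pattern = rng.normal(0.0, 0.05, (64, 64))
shots = [rng.random((64, 64)) + pattern for _ in range(100)]
fp = fingerprint(shots)
print(correlation(rng.random((64, 64)) + pattern, fp))  # noticeably positive
print(correlation(rng.random((64, 64)), fp))            # near zero
```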

We already know that some techniques based on inferring the noise properties of a silicon sensor are viable, useful, and not terribly tricky. Dark-frame subtraction is the big one: take an exposure of a known-dark scene (your own lens cap being popular for DIY, the unopened shutter for in-camera automated implementations); any pixels that show up as something other than black in the absence of any light are ‘hot’, and can be subtracted or attenuated in the pictures you are actually trying to take, to keep hot pixels from ruining long exposures or high-sensitivity shots.
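
In code, the whole correction is a subtraction and a clamp; a toy NumPy sketch with synthetic frames:

```python
# Toy sketch of dark-frame subtraction with NumPy, using synthetic frames.
import numpy as np

rng = np.random.default_rng(0)
shape = (480, 640)

# A fixed pattern of 'hot' pixels that every frame from this sensor shares.
hot = (rng.random(shape) > 0.999) * rng.integers(30, 120, shape)

dark_frame = hot                     # lens cap on: only the hot pixels register
scene = rng.integers(0, 180, shape)  # what the sensor 'should' have seen
exposure = scene + hot               # the real capture includes the hot pixels

# Subtract the dark frame, then clamp back to the valid 8-bit range.
corrected = np.clip(exposure - dark_frame, 0, 255).astype(np.uint8)
assert np.array_equal(corrected, scene.astype(np.uint8))
```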

That said, dark-frame subtraction is said to work best when the dark frame is shot under circumstances as close as possible to the desired frames (sensor noise being temperature-dependent, probably among other things); so it might be a ‘good enough to be visually pleasing, not good enough to fake the sensor’s actual noise’ thing. Don’t know; I’d be very curious to. If you could use the big fancy image sensor as a ‘free’ PUF, that would be a major boon to in-camera signing, since it would free the system of the ‘trust everyone involved in loading the private key, and all the software that interacts with it’ burden. But if the impossibility of cloning the sensor is irrelevant because you can infer its noise patterns from some sample footage and add them in post-production, then it isn’t exactly a helpful option (except as yet another way to fingerprint naive users, which I’m sure won’t go badly).

1 Like

That’s what I was thinking as well. It’s an integrity issue, not a confidentiality issue, so you would want a digital signature algorithm, not AES. (Tangent, but the ‘AES-128 for Secret, AES-256 for Top Secret’ guidance is over a decade out of date.)

1 Like

This topic was automatically closed after 5 days. New replies are no longer allowed.