The eminently hackable police bodycam


#1

Originally published at: https://boingboing.net/2018/08/12/vievu-patroleyes-firecam-di.html


#2

Well, it ain’t called the Internet o’ shiT fer nothing.


#3

Just like voting machines, anyone in authority who’s not working to fix it probably has a vested interest in it being broken.


#4

Once I looked beyond the witty chatter, the pr0n, the cat gifs, and such, I came to understand that the whole thing is unmanageable unless it can be shut down. Someone, somewhere, can do this, and it will happen eventually.


#5

Attackers could pinpoint intense police activity by watching for groups of cameras that all switch on at the same place and time.

No, you pinpoint intense police activity by watching for groups of cameras that all switch off at the same place and time.


#6

You stuttered.


#7

It mentions cryptographically signing the video.

Is there anything out there that does this? I doubt it. But there should be!
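
For what it’s worth, the signing primitive itself is off the shelf even if bodycam vendors don’t use it. A minimal sketch in Python (using the `cryptography` package; the filename and key handling are hypothetical stand-ins for what a real camera would do in hardware):

```python
# Minimal sketch: sign a whole video file with Ed25519 so that any later
# modification is detectable. Filename and key handling are hypothetical;
# in a real camera the private key would live in hardware.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # stand-in for a per-device key
public_key = private_key.public_key()       # published so anyone can verify

with open("bodycam_clip.mp4", "rb") as f:   # hypothetical file
    video = f.read()

signature = private_key.sign(video)         # 64-byte Ed25519 signature

# A verifier holding only the public key can check integrity later:
try:
    public_key.verify(signature, video)
    print("signature valid: file unmodified since signing")
except InvalidSignature:
    print("signature INVALID: file was altered")
```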


#8

So if we combine this with fake video, what’s the worst thing we could make? Why do I wonder these things…


#9

So, what I’m hearing is that a given department could have all of its cameras remotely activated, with the video feed hacked out to outside storage?

So, if you know a particularly troublesome department has a weirdly high incidence of camera “failures” during public interactions, then at, say, a planned protest, all the cameras could be forced on and the video fed to a publicly accessible location?

Something something geese and ganders…


#10

On the one hand, I can envision cases being overturned when the defense can demonstrate tampering… OTOH, if the police themselves do the tampering, there could be all kinds of dirty convictions. Yuck.


#11

The software industry has had plenty of time to get its act together on security, yet we continue to see this amateur software being released to the public. It seems developers simply cannot learn security basics, so at a minimum software must be pen-tested and fixed. However, it’s obvious these IoT manufacturers can’t be bothered to do anything to secure their products.


#12

Oh, what a wonderful Rickroll (and thank heavens Goatse was never a video…).


#13

My new startup uses machine learning, trained on an extensive proprietary corpus of police videos, to detect open, legally unimportant spaces for the insertion of advertising material.

Imagine the impact your product could have, shown during the heightened emotional state of a courtroom confrontation over police planting evidence on a member of a minority!

Our target advertisers will be lawyers, insurance companies, makers of souped-up sports cars perfect for long speed chases, and IKEA.


#14

What usually happens is that there’s this big block of time allocated to security, and it gets shifted to the end of the project (“get it working so we can show it to people, and then we’ll have plenty of time to get the security right”); then that time is squeezed (“fix these other problems first”); and finally an arbitrary shipping date lands on it and crushes it.


#15

I assumed they probably mean encrypting the stored or transmitted video. The US government requires Secret and Top Secret video files to be encrypted with AES-128 and AES-256 respectively. Properly implemented, AES-128 would prevent all but state-level actors with massive resources (such as the NSA) from decrypting and therefore tampering with the video… unless, of course, they have the key. I argued years ago that streams should be encrypted and that police departments should not hold the encryption keys, which should be entrusted only to courts of law.
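
To make that concrete, here’s a minimal sketch of encrypting a chunk of footage with AES-128-GCM, an authenticated mode that also detects tampering at decryption time. The key handling shown is purely illustrative, not the court-escrow scheme I’m proposing (Python `cryptography` package):

```python
# Minimal sketch: encrypt a chunk of footage with AES-128-GCM. GCM is an
# authenticated mode, so tampering with the ciphertext (or the metadata
# bound to it) is caught when decryption is attempted.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # illustrative only; per the above,
aesgcm = AESGCM(key)                       # the key belongs in court escrow

chunk = b"...raw video bytes..."           # placeholder for real footage
nonce = os.urandom(12)                     # 96-bit nonce; must never repeat per key
metadata = b"camera=1234;segment=0"        # hypothetical associated data,
                                           # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, chunk, metadata)

# decrypt() raises cryptography.exceptions.InvalidTag if either the
# ciphertext or the associated metadata was altered in storage or transit.
assert aesgcm.decrypt(nonce, ciphertext, metadata) == chunk
```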

Now the caveat: any encryption protocol is only as secure as the software implementing it, and the vast majority of cracking focuses on finding implementation vulnerabilities to exploit, since the math itself is virtually unassailable by classical computation. This is why, even in the unlikely event that law enforcement takes this threat seriously and demands that manufacturers implement video encryption, and the even more unlikely event that legislatures consign the keys to courts of law, the hardware and its software must be publicly audited by independent security researchers such as Josh Mitchell, to expose bugs so that manufacturers can fix them before they’re exploited by criminals.

If we lived in a world where corporations and institutions fixed their crappy software instead of trying to vilify the messengers, that’s what we’d get. Alas, as the endless parade of major security breaches and abuses of the DMCA demonstrates, we do not live in that world. We live in a world of corrupt law enforcement and shady corporations where the guiding principle is to get away with whatever you can and hide behind bad legislation written by technologically illiterate and ethically incompetent legislators or, worse, their corporate cronies.

In that world our best hope is independent watchdogs such as the Electronic Frontier Foundation.


#16

Where my head was at was this era of deepfakes: we could encode a cryptographic signature into each frame and embed it in the container, or even make it part of the image itself, sort of like the legacy VBI used for transmitting closed captions.

Yeah, there are a lot of challenges with that. But I think it’d be a useful undertaking.
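
For instance, here’s a toy sketch (numpy, with a hypothetical 64-byte signature) of the “part of the image” version: stuff the signature bits into the least significant bits of the frame’s first pixels, loosely like the VBI trick:

```python
# Toy sketch of the "part of the image" idea: stash signature bits in the
# least significant bits of the frame's first pixels, loosely like hiding
# data in the VBI. Only survives lossless storage; any lossy re-encode
# (e.g. H.264) destroys the bits. A verifier would also have to zero the
# LSBs before checking a signature computed over the original pixels.
import numpy as np

def embed_signature(frame: np.ndarray, signature: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(signature, dtype=np.uint8))
    out = frame.copy()
    flat = out.reshape(-1)                                  # raster order
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits   # overwrite LSBs
    return out

def extract_signature(frame: np.ndarray, n_bytes: int = 64) -> bytes:
    bits = frame.reshape(-1)[: n_bytes * 8] & 1
    return np.packbits(bits.astype(np.uint8)).tobytes()

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # fake frame
sig = bytes(range(64))                                         # stand-in signature
assert extract_signature(embed_signature(frame, sig)) == sig
```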


#17

It would add computational cost (though a fairly manageable amount of it; fixed-function cryptographic accelerators can handle significant bitrates), but you could have the camera sign each frame (and presumably data like which frame it is in the sequence, to prevent editing that doesn’t attack any individual frame but carefully elides or rearranges valid frames) with a private key either burned in at manufacture or generated onboard during initialization.
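
A minimal sketch of what that per-frame scheme might look like (Python with the `cryptography` package; the frame payloads and the device key are stand-ins for what would live in camera hardware):

```python
# Minimal sketch: sign each frame together with its sequence index, so that
# deleting, duplicating, or reordering otherwise-valid signed frames breaks
# verification.
import struct
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # stand-in for a burned-in key
device_pub = device_key.public_key()

def sign_frame(index: int, frame: bytes) -> bytes:
    # Binding the index into the signed message is what defeats reordering.
    return device_key.sign(struct.pack(">Q", index) + frame)

def verify_frame(index: int, frame: bytes, sig: bytes) -> bool:
    try:
        device_pub.verify(sig, struct.pack(">Q", index) + frame)
        return True
    except InvalidSignature:
        return False

frames = [b"frame-0", b"frame-1", b"frame-2"]  # stand-in frame payloads
sigs = [sign_frame(i, f) for i, f in enumerate(frames)]

assert all(verify_frame(i, f, s) for i, (f, s) in enumerate(zip(frames, sigs)))
# Moving a validly signed frame to a different position is detected:
assert not verify_frame(0, frames[1], sigs[1])
```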

That approach wouldn’t be resistant to a sophisticated attacker with ongoing physical access to the device; but one would hope that, if the burden of proving integrity is on the people who want to use the footage as evidence, the various mathematically unsophisticated but tricky “guys with guns and a humorless attitude toward tamper-evident seals are blocking my physical access” approaches would be used.

What I’d be interested to know is whether the unique properties of each sensor could be helpful, or whether, while you cannot feasibly construct a second sensor with the exact same distribution of nonlinear responses and noise as the target sensor, you can fairly easily characterize a sensor’s peculiarities from some sample footage and then edit matching artifacts onto doctored footage, or onto footage from another camera that has been aggressively filtered to scrub its own artifacts.

The latter approach (authentication via sensor fingerprint) would be a lot trickier to pull off, and potentially infeasible if it turns out to be relatively easy to edit in the noise characterized from a sensor without having to duplicate it; but it would have the virtue of not depending on the target device keeping its private key private (or on the vendor refraining from keeping an “escrow” copy when burning it in), and instead being based on physical defects that are unique for the purposes of anyone who doesn’t consider atom-by-atom fabrication to be merely careful workmanship.
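
For the curious, a crude sketch of the basic sensor-fingerprint move, assuming numpy/scipy and a fake “sensor” with fixed multiplicative pattern noise: extract a noise residual by subtracting a denoised copy, then correlate residuals across shots:

```python
# Crude sketch of PRNU-style sensor fingerprinting: the residual left after
# denoising is dominated by the sensor's fixed-pattern noise, so residuals
# from the same sensor correlate strongly across different shots, while
# residuals from different sensors don't. Real forensic pipelines (and the
# anti-forensic "edit the noise back in" attacks discussed above) are far
# more sophisticated than this.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image: np.ndarray) -> np.ndarray:
    smooth = gaussian_filter(image, sigma=2)  # stand-in denoiser
    residual = image - smooth
    return residual - residual.mean()

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
# Each fake sensor has a fixed multiplicative pattern applied to every shot.
pattern_a = 1 + 0.02 * rng.standard_normal((128, 128))
pattern_b = 1 + 0.02 * rng.standard_normal((128, 128))

def shoot(pattern: np.ndarray, brightness: float = 150.0) -> np.ndarray:
    flat_scene = np.full(pattern.shape, brightness)
    return flat_scene * pattern + rng.standard_normal(pattern.shape)

r1 = noise_residual(shoot(pattern_a))                  # sensor A, shot 1
r2 = noise_residual(shoot(pattern_a))                  # sensor A, shot 2
r3 = noise_residual(shoot(pattern_b))                  # sensor B
print(f"same sensor:      {correlation(r1, r2):.2f}")  # high
print(f"different sensor: {correlation(r1, r3):.2f}")  # near zero
```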


#18

That’s what I was thinking as well. It’s an integrity issue, not a data-security issue, so you would want a digital signature algorithm, not AES. (Tangent, but the AES-128-for-Secret / AES-256-for-Top-Secret guidance is over a decade out of date.)