One thing about the footage - it is heavily edited, with large chunks missing. One wonders what they didn’t include.
Watching their use of tear gas rounds, they appeared to just randomly spray the canisters in all directions, and not in any way toward a specific threat.
I don’t understand why GoPro hasn’t already donated hundreds of cameras to the Ferguson PD to equip each cop with.
The next generation of marketing ought to be companies donating to social causes. Instant boost in name recognition, free coverage, virality, plus it buffs the company’s image as being “good people.”
See also: why hasn’t Dasani or Evian or Poland Spring donated free water to Toledo/Detroit/etc.
It’s heavily edited because it was put out by a newspaper which is, pretty much, dead.
Given how many other military toys and techniques the PD are using it actually seems very much in character for them to be using the current standard in military media ‘management’.
Team DoD has been working for years on how to defang the journalists without too many unwinnable lawsuits and/or outright deaths and they have apparently gotten fairly good at it.
I’d always taken “embedded” to mean “neutered”.
Love them asking people to respect the residents and not walk in their yards, then soon thereafter firing tear gas onto the residents’ property. Not freaking out, just super calm, John Pike style, gassing the residents.
They kind of can’t - for footage from a camera that a cop wears to be admissible as evidence and all that kind of legal stuff, the cameras have to be tamper-proof and have some kind of forensic support, such that someone can’t alter the footage after it’s taken, etc.
As it is, specialty camera makers get thousands of dollars for the equivalent of a GoPro that satisfies these requirements.
That should be doable by a firmware tweak. E.g., a blockchain-style chained hash of every frame, attached to the audio/video as a third stream, digitally signed with a private key held inside the camera, with the corresponding public key published by the manufacturer or a neutral third party for verification.
(I am not 100% familiar with tamper-proofing video so please correct me if I am wrong.)
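To make the idea concrete, here is a minimal sketch. Assumptions: HMAC with a device secret stands in for the asymmetric signature a real camera would use (a private key sealed in hardware), and frames are just byte strings:

```python
import hashlib
import hmac

# Stand-in for a private key sealed inside the camera hardware.
CAMERA_KEY = b"example-camera-secret"

def chain_hashes(frames):
    """Hash each frame, chaining in the previous digest so frames
    cannot be reordered or dropped without breaking the chain."""
    digest = b"\x00" * 32  # genesis value
    chain = []
    for frame in frames:
        digest = hashlib.sha256(digest + frame).digest()
        chain.append(digest)
    return chain

def sign_recording(frames):
    """Produce the side-stream: per-frame chained hashes plus one
    signature over the final digest (HMAC here; asymmetric in reality)."""
    chain = chain_hashes(frames)
    signature = hmac.new(CAMERA_KEY, chain[-1], hashlib.sha256).digest()
    return chain, signature

def verify_recording(frames, chain, signature):
    expected = chain_hashes(frames)
    if expected != chain:
        return False
    return hmac.compare_digest(
        hmac.new(CAMERA_KEY, expected[-1], hashlib.sha256).digest(),
        signature)

frames = [b"frame-1", b"frame-2", b"frame-3"]
chain, sig = sign_recording(frames)
assert verify_recording(frames, chain, sig)
# Editing any frame breaks the chain and the signature check:
assert not verify_recording([b"frame-1", b"EDITED!", b"frame-3"], chain, sig)
```

The chaining is what makes it blockchain-like: each frame’s hash covers all previous frames, so cutting or splicing footage mid-stream is detectable even before checking the signature.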
It would be interesting to meta-embed some psychologists to see how much of the ‘embedding’ effect is derived from simple area denial (those Soviet ‘minders’ by another name) and how much is derived from the feelings of group identification that you are likely to form during a period of weeks spent in close company with a small, tight-knit group, sharing their food, their risks, their camp, depending on them for protection if things get hostile, and otherwise twiddling the assorted primate social knobs.
Pure movement restrictions definitely help (keep the journalists at the FOB and you’ll get a lot of stock-photo shots of gear being unloaded and tested in the yard and a lot less scary battle footage; and it’s a matter of historical record that minders almost stopped some of the iconic shots, like those from the ‘Highway of Death’); but I have to imagine that subtler, friendlier group identification is both more effective than the target might expect, and far less likely to be offensively coercive and put their guard up.
(Obviously, this is much less relevant for an ‘embed’ with a police unit that likely lasts a day or two at most, and much more of a consideration for a weeks or months assignment to a military unit.)
As with most things crypto, I suspect that there are a zillion subtle ways to screw it up; but your implementation proposal seems very much like what one would do to build a tamper-resistant video system.
I suspect that the devil-in-the-details is not so much building the system, but building the system, having the vendor submit to a bunch of independent validations (many possibly worthless; but still Hoops Must Be Jumped Through), then wrapping the result in some relatively idiot-proof interface, and offering support agreements and long-term-compatibility assurances so that a department can be sure that interoperable cameras will be available 15 years from now, and so on.
A few college-to-postdoc geeks with Raspberry Pi cameras and some Linux-fu could probably hack out something that works in a semester or so; but once you add all the procurement goo…
It could be perhaps written as an add-on to e.g. ffmpeg or another popular video stream/recorder software. Even the “motion” program, a de facto standard for security cameras based on raspi, can be enhanced this way.
Once the software is out, it gets cheap for the vendors to take and implement it.
Interoperability should be maintained by a well-designed standard for the additional data stream muxed in the audio/video. The container formats like .avi should support such secondary streams already, and the playback software that does not understand them just quietly ignores them. (Then there’s the video validation software that takes the file and uses this signature stream to validate the file’s authenticity.)
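As a toy illustration of the ignored-secondary-stream idea (this is a made-up chunked format, not a real .avi muxer): each chunk carries a type tag, and a player simply skips the types it doesn’t know, so a signature stream can ride along without breaking old software:

```python
import struct

# Toy container: each chunk is [4-byte type][4-byte big-endian length][payload].

def write_chunks(chunks):
    """Serialize (type, payload) pairs into one byte stream."""
    out = b""
    for ctype, payload in chunks:
        out += ctype + struct.pack(">I", len(payload)) + payload
    return out

def read_chunks(data, known_types):
    """Parse the stream, silently skipping chunk types we don't understand."""
    pos, result = 0, []
    while pos < len(data):
        ctype = data[pos:pos + 4]
        (length,) = struct.unpack(">I", data[pos + 4:pos + 8])
        payload = data[pos + 8:pos + 8 + length]
        pos += 8 + length
        if ctype in known_types:
            result.append((ctype, payload))
    return result

stream = write_chunks([
    (b"VID0", b"frame-1"),
    (b"SIG0", b"\xaa" * 32),   # signature side-stream chunk
    (b"VID0", b"frame-2"),
])
# An old player that only knows video chunks sees just the frames:
assert read_chunks(stream, {b"VID0"}) == [(b"VID0", b"frame-1"),
                                          (b"VID0", b"frame-2")]
# A validator reads only the signature chunks:
assert read_chunks(stream, {b"SIG0"}) == [(b"SIG0", b"\xaa" * 32)]
```

Real containers (AVI, Matroska) use more elaborate framing, but the skip-unknown-chunks principle is the same.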
The deployment can be assisted by going bottom-up - first roll it out to the activists and other people in the field, to provide robust tamper-evident footage for eventual lawsuits. That should get the publicity and get the ball rolling.
We need open-source-firmware cameras out there… Aren’t the GoPros and others just small Linux computers anyway? Can they be rooted?
Well I do hear bang bang at 2:45, but it is not at all clear that the police “came under fire.” Was that sound really somebody shooting at the police? How do police normally respond when shot at? Do they stand around calmly and lob tear gas? Or do they seek cover and return fire?
That was my take as well. Nothing indicated they had ‘come under fire.’ All we heard was a single bang that no one seemed to react to, except one guy who decided to fire gas grenades in random directions while in front of the camera.
Talk about your yellow journalism. At least when Rather went to Vietnam, he reported the things the establishment didn’t want you to see. Here we have a guy basically acting like a PR arm of the police.
I suspect that there are a couple of obnoxious complications; but also at least one handy feature you get ‘for free’ in designing a suitably trustworthy and open camera.
On the minus side, openness and trustworthiness are…slightly at odds. For high reliability, you want a camera that signs footage with a key that the user cannot access, except for the specific purpose of the signing, and time-stamps the footage with a secure RTC that cannot be tampered with once initialized. If the user can access the signing key, they could simply strip the original signature metadata, modify the video, and then have their video editing program neatly sign everything back up as though it were fresh off the camera. Similarly, if the RTC could be manipulated it would be much easier to elide inconvenient context (of course, if the signing key isn’t protected the RTC is irrelevant because you can just rewrite the timestamps and sign the altered footage).
This set of requirements basically turns into our old friend the TPM once you have a go at implementing a general purpose version of it. Additional DRM-esque verification between the CCD/CMOS sensor and the encoder might also be necessary to prevent a man-in-the-middle processing and rewrite of the data passing over the MIPI interface before it reaches the video encoding and signing steps. Basically HDCP; but for input.
These issues aren’t insurmountable (nor are they obviously necessary to reach sufficient plausibility for admissibility as evidence); but they don’t mesh well with free and open design, since they place a ‘trusted’ root in a black box where the user can’t see it.
A second issue, likely to complicate some (though definitely not all) rooted/3rd-party-firmware camera modifications, is that power efficiency demands appear to have driven GoPro-style cameras a fair distance away from general-purpose computing. As best I can tell, the GoPros are based on Ambarella camera SoCs, which do have a general-purpose ARM core, but a very feeble one. All the heavy lifting happens in dedicated blocks for the MIPI interface, h.264 encode and decode, and mass storage control.
Depending on exactly which SoC is used (other vendors also exist, and smartphone chipsets commonly include many of the same functional blocks) and how high the video bitrate is, it may or may not be possible for a software change to add cryptographic hashing and signing operations into the video processing chain, which is otherwise composed of fixed-function blocks that shove data from the sensor to the SD card as fast as possible. It is much more likely to be possible on a full smartphone (where Android Camera RAW and similar have increasingly demanded low-level access to the image processing chain), and much less likely on low-cost/long-run-time heavily embedded devices that have just enough CPU to catch button inputs, run a spartan GUI, and feed a few config parameters to the fixed-function blocks. It’s a hell of a lot more efficient than a PC-style UVC video input arrangement; but it may be a fairly opaque one.
Doom and gloom aside, while robust tamper-resistance/tamper-evidence is the goal to shoot for, the power of video evidence produced even on generic junk cellphones appears to be considerable, and the ‘Ah, that bystander must have used his elite photoshop skillz to produce false evidence!’ argument is currently confined to the fevered paranoia of patrolman’s benevolent associations…
Plus, even if skepticism grows, there is the handy feature of silicon image sensors: like Tolstoy’s unhappy families, every silicon sensor is defective in its own way. Unique pattern of more and less sensitive pixels, especially visible in dark field exposures and low light conditions generally. Definitely not as mathematically incontrovertible as a robust cryptographic implementation; but you get it thrown in for nothing with any sensor that isn’t running in a liquid helium bath and it makes covert tampering, compositing from multiple cameras, or similar shenanigans much trickier.
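A crude simulation of that sensor-fingerprint idea (sensor size, noise levels, and frame counts are all invented for illustration): average many dark frames, and the estimate correlates with the true sensor’s fixed pattern far better than with a different sensor’s:

```python
import random
from statistics import mean

random.seed(42)
PIXELS = 256

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def make_sensor():
    """Each sensor gets a fixed per-pixel sensitivity offset: its 'fingerprint'."""
    return [random.gauss(0, 1) for _ in range(PIXELS)]

def dark_frame(sensor):
    """A dark exposure: the fixed pattern buried in random shot noise."""
    return [p + random.gauss(0, 3) for p in sensor]

def estimate_fingerprint(frames):
    """Averaging many dark frames suppresses the random noise and
    leaves the fixed pattern standing."""
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(PIXELS)]

cam_a, cam_b = make_sensor(), make_sensor()
estimate = estimate_fingerprint([dark_frame(cam_a) for _ in range(50)])

r_a = pearson(estimate, cam_a)   # high: footage matches camera A
r_b = pearson(estimate, cam_b)   # near zero: footage doesn't match camera B
assert r_a > 0.7 and abs(r_b) < 0.4
```

Real forensic matching (PRNU analysis) works on natural footage rather than dark frames and is considerably more involved, but the statistical core is this correlation test.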
One additional consideration: Given the enthusiasm of Our Peace Officers for ‘safeguarding’ and/or smashing recording equipment, a robust mechanism for exfiltrating captured video as quickly as possible seems like a worthy consideration. In the case of a device with a wireless data link, you’d want an upload client for one or more video services that can be operated with a ‘write only’ authentication token(rather than a stored username and password or full oauth). The credential must be stored, since you won’t have time to type it in when the situation strikes; but it can’t allow an attacker who physically seizes the device to delete footage or compromise the account. Hence a ‘write only’ auth token that allows uploading of additional video; but not modification or deletion of previously uploaded material or account particulars.
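A minimal sketch of such a write-only token, assuming a hypothetical HMAC-based scheme on the upload server (the token format and names are made up): the token proves the bearer may perform one scope of action, so a seized camera can add footage but never remove it.

```python
import hashlib
import hmac

SERVER_SECRET = b"server-side-secret"   # lives on the server, never on the camera

def issue_token(account, scope):
    """Server mints a token binding an account to a single allowed action."""
    msg = f"{account}:{scope}".encode()
    tag = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()
    return f"{account}:{scope}:{tag}"

def authorize(token, requested_action):
    """Server-side check: token must be genuine AND the action must
    match the scope baked into it."""
    account, scope, tag = token.rsplit(":", 2)
    msg = f"{account}:{scope}".encode()
    expected = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected) and requested_action == scope

token = issue_token("streetcam42", "upload")
assert authorize(token, "upload")        # can stream up new footage
assert not authorize(token, "delete")    # a seized device cannot erase anything
```

Since the token only ever grants ‘upload’, forging a ‘delete’ token would require the server secret, which the attacker holding the camera does not have.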
If wireless data is unavailable (uneconomic, jammed, underground), a short-range redundancy system that copies the output to one or more physically separate ‘slave’ storage units, to preserve it in the event of the seizure of the visibly camera-like bit, would be handy. A microSD card, battery, and antenna can fit unobtrusively virtually anywhere, and are much less likely to be noticed and seized if not attached to the lens module.
It’s not so weird to me, when I realize that the people on the other side are mostly black people, who are upset about what’s really a black issue.
In the minds of these almost exclusively white authority figures, racial difference makes the other side “the enemy,” in what basically becomes a “war.”
I wasn’t saying it was impossible necessarily, just that it’s one thing for GoPro to donate a bunch of cameras (small cost to them) and entirely another to fix up their cameras so they are police-ready and then donate them (huge cost to them - it’s not only the firmware, but the body of the camera that has to be made tamper-proof). Or is the Ferguson PD going to do these firmware hacks themselves and also create new enclosures for the cameras?
You don’t have to make the housing tamper-proof. (Tamper-resistant is the best you can hope for anyway.) Tamper-evident is a much lower bar and should be enough for the purpose.
The weak CPU is a potential problem here, though. A weaker but computationally cheaper hash function may have to be used, or only every nth frame signed (with conventional image forensics detecting possible tampering in the “untrusted” frames). The signing has to be done post-compression anyway, as the compression is lossy.
The police won’t do anything on their own; they have to be forced to adopt the tech. Too many officers wish all the cameras would just go away and the world would return to the age of expensive and time-consuming film, or even before.
Just some thoughts…
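The every-nth-frame idea can be sketched like this (HMAC again stands in for the expensive asymmetric signature, and N is an arbitrary example value): every frame is cheaply hashed into a running digest, but the costly signing operation only fires at checkpoints, so skipped frames are still covered by the next signed checkpoint.

```python
import hashlib
import hmac

CAMERA_KEY = b"camera-private-key-stand-in"  # asymmetric key in a real device
N = 30  # sign once per N frames (roughly once per second at 30 fps)

def sign_checkpoints(frames, n=N):
    """Cheap SHA-256 over every frame; expensive signature only at every
    nth frame. Unsigned frames still feed the running hash, so altering
    any of them invalidates the next checkpoint signature."""
    running = hashlib.sha256()
    signatures = {}
    for i, frame in enumerate(frames):
        running.update(frame)
        if (i + 1) % n == 0:
            digest = running.digest()
            signatures[i] = hmac.new(CAMERA_KEY, digest,
                                     hashlib.sha256).digest()
    return signatures

frames = [f"frame-{i}".encode() for i in range(90)]
sigs = sign_checkpoints(frames)
assert sorted(sigs) == [29, 59, 89]   # three signatures for 90 frames
```

At 30 fps this is one public-key operation per second instead of thirty, which is the kind of reduction that might make the scheme feasible on a feeble embedded ARM core.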
Not necessarily. Security by obscurity sucks anyway. What you need is for the only part you have to rely on as secret/inaccessible to be the private key (and any shared secret used for the hash signing).
Hide it inside the chip. It will still be accessible by high-tech means. Make the housing tamper-evident, e.g. by potting the chip in epoxy with glitter particles or swirl paint; take a high-res photo of the glitter, sign it, pair it with the camera’s serial number, and store it at the manufacturer or, even better, publish it as openly accessible information; the EFF and ACLU would happily house the data.
Secure RTC is a bigger problem. However, we can assume there will be more cameras around, held by both sides of the conflict. We can even force a timestamp by e.g. a series of photoflash flashes, with data encoded in the timing. A series of uniquely spaced flashes appearing in multiple cameras’ footage then serves as an additional timestamp; if matched by recordings from both sides of the conflict, it can be considered trustworthy enough. It can still be forged, but then there are the overlapping videos: if the scene shown by one camera partially overlaps the scene from another, trusted camera, the footage can be considered authenticated.
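A toy decoder for the flash-timing idea, assuming a made-up protocol where a short inter-flash gap encodes a 0 and a long gap a 1. Because the gaps are relative, two cameras with completely different clocks and frame offsets still decode the same value:

```python
# Assumed (invented) protocol: gap of ~2 frames = bit 0, gap of ~4 frames = bit 1.
SHORT, LONG = 2, 4

def decode_flash_nonce(flash_frames):
    """Given the frame indices at which flashes were detected,
    recover the bit string from the inter-flash gaps."""
    bits = ""
    for a, b in zip(flash_frames, flash_frames[1:]):
        gap = b - a
        # Pick whichever nominal gap the measured gap is closer to.
        bits += "0" if abs(gap - SHORT) < abs(gap - LONG) else "1"
    return bits

# Two independent cameras filming the same scene see the flashes at the
# same real-world moments, even though their frame counters differ:
camera_1 = [10, 12, 16, 18, 22]       # flash frame indices, camera 1
camera_2 = [503, 505, 509, 511, 515]  # same flashes, different clock offset
assert decode_flash_nonce(camera_1) == decode_flash_nonce(camera_2) == "0101"
```

Publishing a random nonce this way (flash it, then later reveal what was flashed) would pin the footage to a point in time much like newspaper-in-hostage-photo proofs do.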
May not be strictly necessary; a tamper-evident housing may take care of this. The scenarios ideally involve multiple cameras at once, and a scene reconstructed from all the moments and angles has to match.
We have to sign the frames after compression anyway, as the compression is lossy. We can also sign only every nth frame, to save time. (And use conventional video/photo forensics to check the authenticity of the frames in between.) Public-key signing is needed only once per recording. SHOULD be doable (not 100% certain).
True that. These more powerful chips are certainly a choice if a trusted camera were to be built from scratch.
That’s certainly good news!
Very good point. A lot will be lost to the lossy compression, but the “signatures” are thrown in for free (and fall under the “conventional photo/video forensics” I referred to earlier).
That’s an excellent idea! And an important one, too. (And it can be implemented as a smartphone app; I believe there already are some.)
Or maybe even no auth at all for streaming in the data, but auth required for erasing (and possibly placed in the hands of someone in a different international jurisdiction; I bet e.g. EFF Finland would happily assist Americans, and vice versa).
It can also be carried by a person other than the cameraman, instructed to clear the scene when the cameraman gets in trouble and carry away the recording. More than one such person can be assigned to each cameraman, and, as long as bandwidth and space allow, one such device can record from more than one camera. (No need for specialized devices either, if Bluetooth or wifi on conventional cellphones is leveraged. Just write an app. With wifi, cameras can stream the data as UDP broadcasts to be captured from the wifi interface and recorded to disk in a tcpdump-like way for later processing and splitting into per-camera data. The shared wifi bandwidth will be a limitation, though; a compromise between the number of cameras and the bitrates will have to be found.)
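The capture-then-split step might look something like this (the datagram header layout is invented for illustration): each camera tags its datagrams with an ID and sequence number, and the recorder demultiplexes the mixed capture afterwards.

```python
import struct

def pack_chunk(camera_id, seq, payload):
    """Datagram format (assumed): [2-byte camera id][4-byte sequence number][payload]."""
    return struct.pack(">HI", camera_id, seq) + payload

def demux(datagrams):
    """Split a mixed, possibly out-of-order capture into per-camera,
    sequence-ordered streams, tcpdump-style post-processing."""
    streams = {}
    for dgram in datagrams:
        camera_id, seq = struct.unpack(">HI", dgram[:6])
        streams.setdefault(camera_id, []).append((seq, dgram[6:]))
    return {cid: [payload for _, payload in sorted(chunks)]
            for cid, chunks in streams.items()}

capture = [
    pack_chunk(1, 0, b"cam1-a"), pack_chunk(2, 0, b"cam2-a"),
    pack_chunk(1, 1, b"cam1-b"), pack_chunk(2, 1, b"cam2-b"),
]
assert demux(capture) == {1: [b"cam1-a", b"cam1-b"],
                          2: [b"cam2-a", b"cam2-b"]}
```

Sequence numbers also make UDP loss visible: gaps in the sorted sequence tell you exactly which chunks never arrived, which matters if the footage is later offered as evidence.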
The camera lens and the camera storage/controls can be carried by two different people, though the person carrying the lens probably should practice blind operation first.