A person's heartbeat can be used to detect deepfakes

Originally published at: https://boingboing.net/2020/09/29/a-persons-heartbeat-can-be-used-to-detect-deepfakes.html

2 Likes

Deepfakes can’t currently mimic it. Now that this research is out, look for an updated algorithm soon.

25 Likes

I’m suspicious of anything based on Eulerian Video Magnification; I worked with some folks doing photoplethysmography (reading human pulse from video) a few summers ago, and not only could we not replicate the widely reported technique, we found evidence that it’s bogus. Some of the original paper’s (doi:10.1145/2185520.2185561) example videos appear to be doctored; in one of them the subject’s red shirt shows a pulse when sampled. There is also a paper (doi:10.1117/1.JEI.26.6.060501) using the technique that found cyclical variation in plant leaves… at a frequency which some biologist colleagues assured us shouldn’t be a thing.

We ended up doing a much simpler method, luminance (or green) channel minus red channel (to cancel out ambient light), to amplify the signal of interest: HbO2 vs. Hb absorption is very different at a couple of wavelengths, and ~570 nm offers good differentiation and good sensitivity on a conventional visible-light Bayer-filter color digital camera. Then a little bit of autocorrelation extracts the heart rate.
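For the curious, here’s a minimal sketch of that green-minus-red approach in Python/NumPy. This is not our actual code; the function name, the assumption that the frames are already cropped to a skin region, and the 40–180 bpm search window are illustrative choices:

```python
import numpy as np

def estimate_heart_rate(frames, fps):
    """Estimate heart rate (bpm) from video frames of a skin region.

    frames: array of shape (n_frames, height, width, 3), RGB.
    fps: frames per second of the video.
    """
    frames = frames.astype(np.float64)
    # Mean green and red intensity per frame over the (assumed) skin region.
    green = frames[:, :, :, 1].mean(axis=(1, 2))
    red = frames[:, :, :, 0].mean(axis=(1, 2))
    # Green carries the HbO2/Hb absorption signal near ~570 nm; subtracting
    # red cancels shared ambient-light variation.
    signal = green - red
    signal -= signal.mean()  # remove DC offset before autocorrelation

    # Autocorrelation: the strongest peak at a plausible lag gives the
    # pulse period.
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    min_lag = int(fps * 60 / 180)                    # 180 bpm ceiling
    max_lag = min(int(fps * 60 / 40), len(signal) - 1)  # 40 bpm floor
    lag = min_lag + np.argmax(ac[min_lag:max_lag])
    return 60.0 * fps / lag
```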

22 Likes

You’re of course right, but I assume that it’s really difficult to mimic. The method should hold for a while… hopefully.

2 Likes

(Wren, a Malkavian vampire and funeral home operator, looks around nervously.)

“Can detect a heartbeat, you say? It’s not showing up on your video? Well, uh, I suffer from low blood pressure and a melanin deficiency… I’m sure that is it. Clearly your technology is a bit buggy. I assure you I am alive.” (nervous laugh)

5 Likes

There are ways to circumvent this system.

8 Likes

Maybe, but it was only a few years ago that deep fakes as we now know them were still in the realm of science fiction. There seems to be exponential growth in the quality of deep fakes, so if this method ends up being legitimate, I don’t doubt it will be defeated sooner than expected.

2 Likes

Agreed. I’m no domain expert, but I think the safe assumption is that as soon as there’s a specific, known, well-defined signal, someone will train a machine learning model to spot or mimic it.

It’s anti-inductive, like stock markets. As soon as you find a meaningful signal in the data and announce it to the world, everyone else changes their behavior to make it go away.

8 Likes

It’s an interesting method to use on the videos it works for, but like any detection method, it’s only a matter of time before it is defeated. You can already defeat it simply by decreasing the video quality or otherwise limiting what the camera can see, and sooner rather than later deep fake software will defeat it outright, so that not even those restrictions will be needed.

You kind of have to assume that the end game is going to be perfect deep fakes, and that these perfect deep fakes are going to arrive far sooner than most people expect. Soon enough, we will almost certainly have to resort to cryptographic video-authentication methods to prove that a video was shot in a given place by a real camera.
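To make that concrete, here’s a rough sketch of what such a scheme could look like. This is hypothetical, not an existing standard; it uses Python’s third-party `cryptography` package, and the function names are made up. The idea: the camera holds a private key and signs a hash of the footage plus its capture metadata, so anyone with the camera’s public key can verify the file is unaltered.

```python
# Hypothetical per-camera video signing sketch (not an existing standard).
# Requires the third-party `cryptography` package.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

def sign_video(video_bytes: bytes, metadata: bytes,
               camera_key: Ed25519PrivateKey) -> bytes:
    # Bind the signature to both the pixels and the capture metadata
    # (timestamp, location, camera serial) so neither can be swapped later.
    digest = hashlib.sha256(video_bytes + metadata).digest()
    return camera_key.sign(digest)

def verify_video(video_bytes: bytes, metadata: bytes, signature: bytes,
                 camera_pub: Ed25519PublicKey) -> bool:
    digest = hashlib.sha256(video_bytes + metadata).digest()
    try:
        camera_pub.verify(signature, digest)  # raises if tampered with
        return True
    except InvalidSignature:
        return False

# Example: camera signs footage at capture time; a viewer verifies later.
key = Ed25519PrivateKey.generate()
sig = sign_video(b"...raw video bytes...", b"2020-09-29T12:00Z", key)
assert verify_video(b"...raw video bytes...", b"2020-09-29T12:00Z",
                    sig, key.public_key())
```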

If you think we are struggling to live in a shared reality right now with people’s various filter bubbles, just wait until anyone can fake anything convincingly and video evidence is worthless. Even people who are able to accept the new reality that they can’t believe their eyes are going to struggle. Your stupid monkey brain just wasn’t built to not believe what it sees. People who continue to believe their eyes (and many people definitely will) are going to be so deep in their filter bubbles that they will be living in another reality entirely.

It’s crazy how fucked we are. Neal Stephenson’s new book has a lot of weaknesses, but its picture of the total fragmentation of people’s reality, and how that fucks society in half, is I think pretty spot on.

Worldwide, I think it is even scarier. China is leading the way, but I think we are on the cusp of some terrifying new forms of autocratic control. Nations are going to be able to control and fake a lot of reality, and then combine that control with knowing everything you do and with AI and algorithms that can spot dissidents long before they become problems.

On the plus side, as this technology damns us by destroying our ability to distinguish between real and unreal, we are going to get some really good movies and video games out of it. It’s probably going to be really easy to fake one of your friends screwing a goat or something, which will be hilarious, so at least we will have that too.

4 Likes

Was this generated by GPT-3?

I came here to say something similar (thanks!), but my understanding of algorithmic high-frequency trading is that it’s essentially constant open warfare between competing algorithms. So much of our lives is shaping up to be… this.

Fucking A, Gibson was so right about everything.

3 Likes

I’d say that we’ll get human-undetectable deepfakes quickly, and then a perpetual arms race between fakers and computer-based detectors. As new detectors are developed, old fakes will be uncovered.

1 Like

We’ll also get a lot of fake detectors that are tuned to agree with whatever kind of videos their creators want labeled as fake. We’ll have people taking genuine videos, making subtle changes so they register as fake, and then uploading them to replace the original with something that is easy to discredit. Authoritarian governments, or companies like Facebook, will have a huge advantage in this kind of arms race.

2 Likes

A person’s heartbeat can be used to detect beefcakes

2 Likes

I don’t think that it is a fair arms race. The faker only needs to know your detection method to defeat it, and there are only a finite number of detection methods. Eventually you just run out of things to search for, and you are really just hoping that they make a mistake. The detector’s job gets harder and harder, while the faker’s job gets easier.

I think very shortly the fakers are going to be on the winning side most of the time, with only brief moments where the detectors have the edge. Eventually, I think fakes will only be spotted when the faker makes a human mistake rather than a technical one.

Not that it matters all that much, because once you can consistently fool humans and make those fakes cheap enough to be mass-produced, we are fuuuuuucked. Even if a computer can detect that something is a fake, good luck convincing humans who want to believe it that their eyes are wrong and some stupid algorithm counting blood pulses is right.

1 Like

This topic was automatically closed after 5 days. New replies are no longer allowed.