Originally published at: https://boingboing.net/2018/05/24/analyzing-mouse-movement-to-se.html
…
There are people who can lie without even thinking about it. You probably know who I’m thinking of.
Actually, that raises an interesting question. Would this test detect someone who’s delusional? Presumably even lying to oneself requires more mental effort than telling the truth, but can the researchers be sure the hesitation is due to uncertainty?
I don’t think so. They can’t actually tell truth from falsehood; they’re using remembering vs. fabricating as a proxy. Asking a delusional person about the experiences of their delusional life may be slightly more likely to require fabrication than memory to answer, but I doubt it’s significant.
Interesting thing about me: for many years, if you asked me how old I was, I would recall what year my oldest brother was born, subtract that from the current year, and then subtract the difference in our ages. You didn’t need advanced analysis to tell I was doing this; it could take me several seconds (or even longer if I got particularly confused trying to make sure I didn’t make an off-by-one error).
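For what it’s worth, the detour looks something like this written out as code (the years and the age gap are made up for illustration, not my actual numbers):

```python
# Hypothetical numbers, just to illustrate the roundabout calculation described above.
current_year = 2018
brother_birth_year = 1975   # made up: the year my oldest brother was born
age_gap = 6                 # made up: how many years younger I am

brother_age = current_year - brother_birth_year
my_age = brother_age - age_gap
print(my_age)  # 37, with plenty of chances for an off-by-one along the way
```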
One way in which this test fails is that it assumes that all people would remember the same basic details about themselves (like their age, their place of birth, etc.). People are way weirder than we admit.
This is fun. We can’t tell without an experiment, but here’s the model I would test. With careful phrasing, you could flip the test and detect the delusional by their positive response. For example, it would be interesting to see what it makes of picking True or False to the following…
Guns are good.
The world is flat.
People have made up evidence for global warming.
Could you immediately pick ‘False’ for all of these?
Guns are good for what? They are good for killing people. Are you using ‘good’ in some moral sense, rather than whether they are fit for purpose? Surely you are, in which case, what are your terms of reference?
Flat? Oh come on, you can’t mean that. For as long as people have thought there might be a curvature to the ground, they have thought it was curved in some way. Of course, the world is flattish in parts. In fact, it is pretty hard to see the deviation from flatness unless your distances are large. Most local surfaces are so rough you can’t even say which way they curve.
Global warming is real. But perhaps some people have made up evidence for global warming at some point. Most of the claims of the Club of Rome in the seventies were pretty rough guesses. Anyway, what do you mean by ‘made up’?
You can pick out the people who are absolutely certain. I am not sure what happens next. Drone strike, maybe?
But what you’re talking about is facts and beliefs about the outside world, not memories of direct experience.
I’m pretty skeptical of the idea that we’ve finally worked out an actual lie-detector after ages of just pretending lie detectors were worth a damn.
Here I’d say they aren’t really measuring lying. They’re measuring cognitive load, sure, but in the example question they basically asked participants to do a quick bit of mental math. That’s something a prepared liar might already know, but an unprepared truth-teller might not. Most people know their own age right off the bat [citation needed], so the results come out well here. But, for example, when trying to unlock a frozen account, I was asked about my father’s age. Given that I’m a terrible son, that’s not information I knew straight away, and I had to do a little mental math to work out the correct answer.
A more interesting test might be to give participants a series of questions and ask them to lie on some of them. Maybe tell them to lie on at least three of fifteen questions, with a maximum of five lies (numbers pulled directly from my anus here). Have subjects report which questions they lied on. Use the mouse movements to try to determine the lies before looking at the participants’ self-report data, then see how well the predictions match reality.
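If anyone wanted to actually score that, the analysis step could be as simple as comparing the mouse-movement predictions against the self-reports, question by question. Here’s a rough sketch; the scores, threshold, and answers are all made up, and the “deviation score” stands in for whatever trajectory feature the researchers actually use:

```python
# Rough sketch: compare mouse-movement "lie" predictions with self-reported lies.
# Everything here is hypothetical -- the scores, the threshold, and the self-report.

def predict_lies(deviation_scores, threshold=0.5):
    """Flag an answer as a suspected lie when its deviation score exceeds the threshold."""
    return [score > threshold for score in deviation_scores]

def confusion_counts(predicted, self_reported):
    """Tally hits, misses, false alarms, and correct rejections."""
    counts = {"hit": 0, "miss": 0, "false_alarm": 0, "correct_rejection": 0}
    for guess, actual in zip(predicted, self_reported):
        if actual and guess:
            counts["hit"] += 1
        elif actual and not guess:
            counts["miss"] += 1
        elif guess:
            counts["false_alarm"] += 1
        else:
            counts["correct_rejection"] += 1
    return counts

# Fifteen questions; higher score = more hesitation / curvier mouse path (made up).
scores = [0.2, 0.7, 0.1, 0.9, 0.3, 0.4, 0.8, 0.2, 0.1, 0.6, 0.3, 0.2, 0.5, 0.1, 0.9]
lied = [False, True, False, True, False, False, True, False, False, True,
        False, False, False, False, True]

print(confusion_counts(predict_lies(scores), lied))
# e.g. {'hit': 5, 'miss': 0, 'false_alarm': 0, 'correct_rejection': 10}
```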
You’d need to find people who have a delusional belief or false memory regarding specific things in their past. The thing is, they’d have had time to live in that belief, so I’d assume they’d look just like the truthful.
That’s not a good test case for reality, though. In that case people would be deciding whether to lie, which is an entirely different operation that they’d have to do for each question. In real life people are trying to conceal something and will lie if they think the question gets at that something. I think testing a lie detector in realistic conditions makes sense.
That said, I mentioned above that I’m someone who doesn’t automatically remember my own age. I don’t think there is any such thing as a real lie detector test.
But will an AI hesitate before checking the “I am not a robot” checkbox?
Yeah, I was considering making the questions of a somewhat personal nature (or at least some of them) to give participants more realistic motivation to lie. So long as the answers aren’t recorded, and the researchers are blind in the data as to which questions are the more likely lies, it should still work.
The main problem with that approach is that if you put up a bunch of lie-encouraging questions, participants are likely to also lie on their self-report of what they lied about, and potentially lie on a larger range of questions.
I suppose you could start with a questionnaire that participants are blind to, give them questions they’re likely to lie about, then inform them afterwards that no data was collected regarding their answers, and ask them to indicate which questions they lied on. Not sure if they’d comply or not, though with a sufficiently large sample size and the right data handling, maybe you could get something interesting.
Not really. I am not a climate scientist, so all my information comes by proxy, other than the weather I see, which is a tiny part of the whole. We can only see the universe out to the Hubble radius, but we find it reasonable that the rest of the universe beyond it should be similar, rather than that we are bounded by great walls of ice.
I think what is being measured here is cognitive load, not truth. In the original case, the load may be increased by trying to maintain a coherent falsehood. In my case, the load may be reduced if the person is ignoring the nuances and just answering along party lines.
You can doubtless fool this by the same techniques you might use to fool a lie detector. Stay calm. Always take a fixed time to answer each question. Do not move the mouse until you have chosen your answer.
I think those particular checkboxes are counting on AI/robots NOT hesitating before or after clicking. I fail those quite a bit, myself, perhaps because I tend toward keyboard usage and am relatively fast and precise even with the mouse. I might also do fairly well at making a mouse movement-based lie detector think I’m telling the truth, at least if delay before moving the mouse is not a significant factor (I would likely decide which thing to click without moving the mouse, then move my mouse directly to it).
Again with the delusional bit though.
He doesn’t use computers. Coincidence?