It’s a mystery to those of us outside the US that polygraphs are still used for anything more than kids’ electronics projects. So how does that work? The results of a polygraph test are not admissible in court, but they are used for job screening, and you can fail based on serious accusations like owning child pornography. Has nobody ever failed, and then sued their potential employer for defamation of character or similar?
Maybe, but it does keep the AI at bay:
When I read Stephen Fry’s novel The Liar as a bobling, there was an idea mentioned in passing that struck me as interesting. In order to tell the truth (even if you remember it wrong), loosely speaking, you have to call up a memory and turn it into words. But to lie, you have to first remember the truth and then do some different, additional processing to say anything other than the truth. So it’s vaguely plausible that some kind of physiological intervention could selectively block you from being able to do the second part, making you unable to lie. Though, it might block you from being able to speak at all, because even telling the truth involves constructing a story.
Anyway, polygraphs are nonsense, as demonstrated by the fact that they are legal, which they certainly wouldn’t be if they worked.
But where do I put my feet?
Remembering the good old days, when gummint didn’t need no dang polygraphs:
Why have a system that could 100% detect lying? Because the alternative is worse. People believe others based on completely irrelevant factors, such as whether they align with their political party, or whether they are the same race as the person talking. Some innocent people are on death row because a jury didn’t believe them when they were telling the truth, and others can become the president while lying constantly with impunity.
Sure, you could explore such a technology with a million dystopian sci-fi stories to show its terrible downsides, but let’s not pretend there are no downsides to the terrible methods we employ now, often without realizing it, to decide what is true.
You don’t need science fiction, you just need the currently existing balance of power in US labour relations.
Questions that will be asked while they have job applicants under the lie detector:
- Have you ever been a member of a union?
- Do you think that the wages here should be higher?
- Who did you vote for?
- Are you gay?
Etc.
And then we get onto what Trumpian police could do with that thing.
The right to lie is an essential component of the right to privacy.
Old Joke:
Soviet Russian defects to the US and is asked about life under Stalin.
Did you have enough to eat?
-I couldn’t complain.
What about housing?
-I couldn’t complain.
What about your job, there?
-I couldn’t complain.
So why did you defect to the US?
-Ah, but here I can complain!
Expressive freedoms are important, and honestly people should have the ability, if not the right, to break the law.
Disagree. The right not to have to answer questions under oath/futuristic-lie-detector is an essential part of our right to privacy.
In an interview today, the interviewer is not allowed to ask you about your religion (and, if they work it out anyway, is not allowed to base their hiring decision on it, though that’s harder to prove).
However, you’re free to volunteer how much you love your church and how religious you are, etc. But if they find out you were simply lying your ass off about that, they could certainly fire you for lying.
So you’re already protected from having to answer questions you shouldn’t have to, but you don’t have a right to lie. In a future with unbeatable lie detectors, we would need even stronger versions of these laws, but they are laws we already understand.
Again, I think it’s important to recognize the cost of unreliable lie detection today: millions of innocents jailed or killed over time, people rising to power by lying with impunity, wars started over falsehoods, etc. We can then argue over which cost is greater (and I definitely don’t know), so long as we honestly recognize the ills caused.
One question is, if humans naturally couldn’t lie, would we think that an invention that allowed lying would be a good thing, knowing all those ills listed above would be in store for us?
Ah ha! But if he’s not lying and he says he’s lying, then is he not lying about lying?
Maybe, but it does keep the AI at bay:
Works on Star Trek…
So it’s vaguely plausible that some kind of physiological intervention could selectively block you from being able to do the second part, making you unable to lie.
I mean, unless one believes in mind-body duality, which I do not, all thoughts are neurochemical activity. If you could resolve that activity down to the firings of individual neurons, then you would have the raw data of all mental activity, the so-called connectome. That’s actually pretty plausible. NMR imaging has theoretical resolution limits that disqualify it, but there are experimental approaches such as infrared holographic imaging, and others will probably be invented that we haven’t thought of yet. William Gibson has gotten plenty of mileage out of such speculative neural implants.
But raw data is one thing. We have no idea how that raw data is processed into what we actually think, and it’s not at all clear that even a robust central theory of cognition would be useful in translating raw neural activity into human-readable information. Indeed, given the evolving complexity of weighted connections between our roughly 100 billion neurons, it’s likely a statistical process.

Back-propagation “deep learning” algorithms do something that may be a pale shadow of that complexity, and we can’t even figure out how those algorithms wire up ten simulated neurons. There’s no way we’ll be able to follow the internal logic of an actual human connectome. Which means that if we ever can translate it into human-readable information, we’ll have to rely on artificial neural networks whose internal logic we also can’t follow, trained by flawed humans reporting whether the networks return accurate information about their actual thoughts.

But that’s not even the worst of it! We have no way of knowing whether the connectome is similar from one person to the next. Even if the raw data is 99.99% the same, minuscule differences in overall neural maps might result in catastrophic differences in the actual thoughts they encode.
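To make the “we can’t follow the internal logic” point concrete, here’s a minimal back-propagation sketch (plain NumPy; the architecture, seed, and hyperparameters are arbitrary choices for illustration, not anyone’s real system). A four-neuron hidden layer learns XOR just fine, but nothing in the trained weight matrix reads as “XOR” to a human:

```python
# Toy back-propagation demo: a tiny two-layer network learns XOR,
# yet its learned weights are opaque. Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer: 4 "neurons"
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: squared-error gradient, chain rule layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically lands near [0, 1, 1, 0]: it works
print(W1.round(2))           # ...but good luck reading "XOR" off these weights
```

And that’s four simulated neurons doing one trivial function, with every weight laid bare for inspection.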
This is why, while I think we’ll eventually be able to read the raw data of the mind in real-time, we will probably never have accurate means to interpret that data. But some asshole is going to claim the black-box interpretations are correct and misuse the technology just like they do polygraphs now.
Why?
It would make SEC investigations into CEOs and boards of directors so much simpler and so much more effective.
If the only way for a statement not to be true were for it to be a lie, then this paradox might have some kind of significance. Since there are multiple ways for a statement not to be factually correct, then there’s not much of a gotcha here at all.
sigh 'Twas a joke. Nonethless…
A lie is any intentionally false statement. @Papasan made an intentionally false statement about lying, which is an instance of the Liar’s paradox. It’s a subset of performative contradictions in self-referential logic. Another example in this subset is the Epimenides paradox, named for the Cretan philosopher who said “All Cretans are liars”; it can be generalized to “I am lying.” Another is “This sentence is false.”
Now, the OED2 offers a corollary definition of a lie as intending to deceive or founded on a mistaken impression. Since @Papasan’s statement was a joke, it was not intended to deceive. And @Papasan is a smart guy, so I assume part of the intended joke was to demonstrate the Liar’s paradox; he therefore knew what he was doing, and it was not founded on any mistaken impression. Ergo, @Papasan was lying about lying according to the core definition, but not according to the optional corollary.
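For the formally inclined, the paradox is easy to mechanize. Here’s a toy sketch (in Python; the function name is invented for illustration) that searches for a classical truth value consistent with a self-referential sentence and finds none for the Liar:

```python
# Try to assign a classical truth value to a self-referential sentence.
# "This statement is false" is satisfied only when v == (not v),
# which no boolean can manage; "This statement is true" is satisfied
# by either value, so it is consistent but underdetermined.

def consistent_values(claims_own_falsehood: bool) -> list[bool]:
    """Return every truth value v that agrees with what the sentence asserts."""
    return [v for v in (True, False)
            if v == ((not v) if claims_own_falsehood else v)]

print(consistent_values(True))   # [] -- the Liar: no consistent value exists
print(consistent_values(False))  # [True, False] -- "This statement is true"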
Also, jokes! (And proof that they stop being funny if you have to explain them.)
Oh, I got that it was a joke, all right. It’s just not funny in the era of “depends on your definition of ‘is’” and “biggest inauguration turnout ever.”
It was playing clever games with the truth that convinced us this current situation could never happen. I’m a bit humor impaired these days, turns out.
I’m a bit humor impaired these days, turns out.
Fair enough. These interesting-in-the-ancient-Chinese-curse-sense times are taking a toll on all of us. Some of us decompress with non-sequitur humor, others are nonplussed.
I know.