The first few times I saw this puzzle on TV, it seemed like a legitimate paradox. But the premise started to fall apart when I imagined Harry Mudd as an infant, an infant whose every word was a lie. It wasn't hard to imagine such a baby starving to death. Taking the idea further, life at any developmental stage, especially one capable of space travel, requires the ability to tell the truth, because it's physically impossible for humans to fly through space by themselves. There has always been, and always will be, a need to tell the truth when it's important to do so.
So if Mudd were to claim to lie sometimes, when it suited him, there would be a way to verify or disprove that claim. But since he's claiming to always be lying, the real puzzle isn't whether there's truth to the statement, but what exactly he is lying about.
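That reasoning can even be checked mechanically. Here's a toy sketch (my own illustration, not anything from the episode): model each of Mudd's statements as truthful or a lie, treat "everything I say is a lie" as one of those statements, and enumerate which combinations are self-consistent.

```python
from itertools import product

# Toy model: Mudd makes n statements; each is either truthful (True) or a lie (False).
# Statement 0 is his claim "everything I say is a lie", which is truthful
# exactly when all of his statements (including itself) are lies.

def consistent(assignment):
    claim_is_truthful = assignment[0]
    all_are_lies = not any(assignment)
    # A world is consistent when the claim's truth value matches reality.
    return claim_is_truthful == all_are_lies

n = 3
worlds = [w for w in product([True, False], repeat=n) if consistent(w)]

# No consistent world makes the claim truthful: if it were true, it would be a lie.
assert all(w[0] is False for w in worlds)
# But worlds where the claim is a lie exist -- meaning "not everything he says
# is a lie", i.e., he sometimes tells the truth about something else.
assert all(any(w[1:]) for w in worlds)
```

The enumeration lands exactly where the paragraph above does: the claim can never be true, but it can be false without contradiction, which shifts the question to which of his other statements are the truthful ones.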
Kirk’s input here isn’t necessary for the puzzle, but it does help to distract Norman (and the audience) away from the part of it that can be solved, toward the part that can’t.
This multiple-choice puzzle is misleading only because we are so accustomed to seeing sentences like this in testing environments designed to find out what we know about the real world. The format distracts us from the fact that this isn't really about anything in the real world. There's no actual question here; it just looks like there is.
At least the Rockwell Retro Encabulator schtick drapes enough of a human element around itself to make it (eventually) obvious that it's a joke. This one is more mechanical, like the still images that have a "play" button at the center, tricking people into trying to start the video.
(Not long ago I was at the store, looking at the ingredients printed on the side of a cardboard box. The print was too small to read, so I tried to pinch the image larger with my fingers before I remembered that it wasn't a cell phone I was holding.)
As the confidence-fraud industry learns to exploit deep learning systems to create variations on this gimmick, we can expect to be flooded with this kind of semantic bullshit to a far greater degree than we are now. I suspect that AI is going to be naturally better at worsening this problem than at fixing it.