Originally published at: https://boingboing.net/2024/02/15/man-fined-for-scratching-his-head.html
…
It’s a Vulcan mind meld exercise.
“My mind to my mind. My thoughts to my thoughts. My mind is merging, my mind is becoming one. AHA! Now I remember where I left those keys.”
so… Lobot wasn’t included in the training set?
it’s okiedokie spiffy fine if you’re distracted by a cell phone that’s linked to the car’s system, but if you’re using it directly, then that’s the bad part? (“ten and two at all times, even during an argument over the phone”)
Same thing happened to me, but it was a human who “saw me using a phone”.
He was scratching his head wrong.
Yeah, the optimistic interpretation of the hands-free laws is that they’re trying to prevent people from physically looking at the device while driving. In theory you could answer a call and hold the phone in your hand without looking at it, but that seems like a pretty minor edge case.
Why base the conviction on a single still image? A 5-second video would immediately show what was actually happening.
I don’t know, I have had some persistent itches.
His passenger was using the phone, while he scratched his head.
He was using the phone legally, hands-free, while he scratched his head.
He was listening to a podcast with data turned off, not scratching his head.
Neither the image nor the image plus cell data is enough.
If you want the driver to prove his innocence, or miss a day of work to go to court, the AI company had better be paying him the same equivalent hourly rate that they pay their CEO.
Oh, c’mon, he’s totally holding a phone to his head
Training these sorts of image-detection models on video takes much more than training them on still images. So they’re trained on stills and given stills to evaluate. Sure, it gives worse results, but if they actually cared about getting it right, they wouldn’t have used machine learning in the first place.
My favorite example of “your AI model is actually not looking at what you think it is looking at” was a group seeing if they could use AI to detect pneumonia in chest X-rays. It turned out the model was at least partially relying on artifacts from the model of X-ray scanner used: the AI was “cheating” by recognizing images from the portable X-ray scanner (used in the ER when you have good cause to think a patient really does have pneumonia) as opposed to the fixed clinic X-ray machine.
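That failure mode is easy to reproduce on toy data. A minimal sketch (synthetic data and an invented feature layout, not the actual study): when the label is entangled with which scanner took the image, a model can look accurate overall while it has really only learned the scanner artifact.

```python
# Sketch of "the model learned the scanner, not the disease" (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
scanner = rng.integers(0, 2, n)      # 1 = portable ER scanner, 0 = fixed clinic machine
pneumonia = rng.random(n) < np.where(scanner == 1, 0.7, 0.1)  # ER patients are sicker
X = rng.normal(size=(n, 20))         # "image features": mostly noise...
X[:, 0] += scanner * 3.0             # ...plus an artifact that leaks the scanner type

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(X, pneumonia, scanner, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("overall accuracy:", clf.score(X_te, y_te))
# Stratified by scanner, the model does no better than guessing each group's base rate,
# because the only signal it found was the scanner artifact itself.
for s in (0, 1):
    mask = s_te == s
    print(f"scanner={s} accuracy:", clf.score(X_te[mask], y_te[mask]))
```

The overall number looks respectable; the per-scanner numbers show the model has no real idea who actually has pneumonia.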
10 Reasons you are scratching your head wrong. #6 will shock you!
We have similar AI detection in our town and are waiting for someone to be fined for picking their nose at the lights.
I also wonder what would happen if I wallpapered my dashboard with prints of people using phones. Or wore a hat with extra faces on it.
@pesco Your Oddity Central link isn’t working (for me).
A month after the event, this driver remembered he was scratching his head? I can absolutely understand why a human accepted the AI’s verdict; I wonder how many are rejected?
Maybe he should stop scratching his head with his novelty phone-shaped headscratcher.
… busted again
The fact that this guy is an ML engineer is borderline Black Mirror stuff.
I secretly love this.
Right, but we don’t need more information; we need to not issue tickets in situations where we can’t know whether a violation was committed.
And any system where false positives generate income for the operator needs a LOT of extra scrutiny.
“It may well be that the training dataset contains few or no photos of people sitting with an empty hand on their ear. In that case, it becomes less important for the algorithm whether a phone is actually held in the hand, but it is sufficient if the hand is close to the ear. To improve this, more photos should be added where the hand is empty.”
It’s a lot of fun finding all of those and then trying to plug the gaps.
There was also a lot of this discussion when people wanted to define driver drowsiness as X seconds of eye closure. Had to prove that most eye closures are, in fact, not sleepiness-related.
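For what it’s worth, that rule is simple to state in code. A toy sketch (hypothetical frame rate and threshold, not any real product’s logic): scan a per-frame “eyes closed” signal and flag only closures lasting at least X seconds, so ordinary blinks never trip it, and the whole argument becomes about where X should sit.

```python
# Toy version of "drowsiness = at least X seconds of continuous eye closure".
def drowsy_events(eyes_closed, fps=30, threshold_s=2.0):
    """Return start frame indices of closures lasting at least threshold_s seconds."""
    events, run_start = [], None
    for i, closed in enumerate(eyes_closed):
        if closed and run_start is None:
            run_start = i
        elif not closed and run_start is not None:
            if (i - run_start) / fps >= threshold_s:
                events.append(run_start)
            run_start = None
    # Handle a closure that runs to the end of the signal.
    if run_start is not None and (len(eyes_closed) - run_start) / fps >= threshold_s:
        events.append(run_start)
    return events

# An ordinary blink (5 frames) never triggers it; the 3-second closure does.
signal = [False] * 100 + [True] * 5 + [False] * 100 + [True] * 90 + [False] * 50
print(drowsy_events(signal, fps=30, threshold_s=2.0))  # -> [205]
```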