Originally published at: OpenAI discovers its visual AI can be fooled by written notes | Boing Boing
…
Now I want to make a heist movie where the team defeats the advanced AI monitoring the security cameras by sticking Post-it notes with the names of the company’s board members on their heads.
Identifying item...
Item identified
Result: Not a pipe
“These are not the droids you’re looking for.”
Beat me to it!
It says “iPod” right there.
Why would anyone lie to the camera?
A few years ago, my 10 y/o was begging for an Apple iPod for X-mas. Can you imagine how psyched she was when she got to unwrap this?
I like that it’s more certain that the labeled apple is an iPod than it is that the unlabeled apple is an apple.
Yeah, the reported 0.4% chance that the top picture is an iPod points to a bigger problem than just overweighting labels.
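For anyone curious what “overweighting” looks like mechanically: CLIP-style zero-shot classification just takes a softmax over the cosine similarities between the image embedding and each candidate caption embedding, so text printed in the image that strongly matches one caption can swamp everything else. Here’s a minimal sketch using OpenAI’s open-sourced `clip` package; the image filename and the caption list are made up for illustration, not taken from the post:

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# "apple_with_note.jpg" is a stand-in for the labeled-apple photo from the post.
image = preprocess(Image.open("apple_with_note.jpg")).unsqueeze(0).to(device)
labels = ["a Granny Smith apple", "an iPod", "a library", "a pizza"]
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

# Normalize, then softmax over scaled cosine similarities: whichever caption's
# embedding sits closest to the image embedding wins, so a strong "iPod" text
# signal in the pixels can drag the whole image toward the "iPod" caption.
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.1%}")
```

Note that the output is a softmax over whatever caption list you supply, which is why every candidate, including “iPod” at 0.4%, gets some probability mass even for the unlabeled apple.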
I’m reminded of how you can hijack your own brain, like optical illusions, the blue-and-black/white-and-gold dress, “yanny”/“laurel”, etc. (That last one is interesting: my husband plays Age of Empires, and there’s this little ‘taunt’ voice clip that says “wololo”. He says it sounds like “mini-me”, whereas I hear it as “wo-lo-lo”.)
Or, if I give you a lemon and make it look orange, you might think it tastes less lemony and more orange-y.
Or if I write the word “red” with a blue crayon and ask you which color it is.
Now that these weaknesses are known, can the algorithms be retrained to avoid these kinds of mistakes? That’s not to say that other hacks won’t be found, but this is how machine learning is supposed to work. Or… is there something more fundamental here, meaning that current machine learning techniques will always be subject to these kinds of problems?
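For this specific hack, retraining can plausibly help: one common mitigation (my speculation, not something OpenAI describes in the post) is data augmentation that pastes irrelevant text onto training images while keeping the original labels, so the model learns that written notes aren’t evidence of class. A rough sketch with PIL; the word list, note size, and placement here are arbitrary:

```python
import random
from PIL import Image, ImageDraw, ImageFont

# Arbitrary distractor words; in practice you'd sample from a large vocabulary.
DISTRACTORS = ["iPod", "pizza", "library", "toaster"]

def add_typographic_distractor(img: Image.Image) -> Image.Image:
    """Paste a white 'note' with a random wrong word onto a copy of the image.
    The training label stays unchanged, teaching the model to ignore the text."""
    out = img.copy()
    draw = ImageDraw.Draw(out)
    w, h = out.size
    x, y = random.randint(0, w // 2), random.randint(0, h // 2)
    note_w, note_h = w // 3, h // 8
    draw.rectangle([x, y, x + note_w, y + note_h], fill="white", outline="black")
    draw.text((x + 4, y + 4), random.choice(DISTRACTORS),
              fill="black", font=ImageFont.load_default())
    return out
```

That would patch the known trick, though, not the general phenomenon: adversarial examples tend to reappear in new forms, which is exactly the “something more fundamental” worry.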
It really sounds like this AI is just learning to be racist.
Reminds me of the Stroop Effect in humans.
For the cameras: