I’ve read the paper and it does a remarkably good job of obfuscating what their results mean. Like most machine learning researchers, they appear to cheat. When they report accuracy for their results, they seem to mean the following: given a set of original images and a set of obfuscated images, the program can correctly guess which obfuscated image came from which original image (e.g. 80% of the time). That’s much easier than recognizing faces. Their technique only works when you have a (small) set of candidate source images to choose from, one of which is guaranteed to be the correct unscrambled image. I’m struggling to think of any real-world case where you need to de-obfuscate an image when you already have the unscrambled image before you start…
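To make the distinction concrete, here’s a toy sketch (my own construction, not the paper’s code or data): in the closed-set protocol, even a trivial nearest-neighbor matcher can score very high “accuracy,” because one of the candidates is guaranteed to be the right answer. Nothing about faces is being recognized here; we’re just ranking a handful of known originals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: treat 10 "images" as flat vectors; "obfuscation"
# is heavy additive noise plus coarse quantization.
originals = rng.normal(size=(10, 64))

def obfuscate(img, rng):
    noisy = img + rng.normal(scale=1.0, size=img.shape)
    return np.round(noisy)  # coarse quantization

def closed_set_accuracy(originals, rng, trials=200):
    """Closed-set protocol: for each obfuscated image, guess which of
    the known originals it came from (one is guaranteed correct)."""
    correct = 0
    for _ in range(trials):
        idx = rng.integers(len(originals))
        obf = obfuscate(originals[idx], rng)
        # Trivial matcher: nearest original by Euclidean distance.
        guess = int(np.argmin(np.linalg.norm(originals - obf, axis=1)))
        correct += (guess == idx)
    return correct / trials

print(closed_set_accuracy(originals, rng))
```

Even this crude matcher scores near-perfectly, despite never “undoing” the obfuscation at all. That’s the gap between matching against a known candidate pool and actually reconstructing or recognizing an unknown face.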