Now that’s just unfair to former Webster star Emmanuel Lewis.
{looks in mirror}
.
.
.
.
.
{pokes out Harrison Ford’s eyeballs}
.
{flees}
Wait, people with a coloboma are fake??
Because the AI has no idea it’s generating a face.
Someone trained the network by feeding it photos of faces, then said, "Your turn: make an array of ones and zeros similar to the ones I just gave you." The algorithm produced output, and the trainer said "yeah, good" or "no, not so good." Repeat a thousand times and it gets better and better.
Whatever those thousand lessons contained was absorbed into its digital neurons as arrays of numbers, not as distinct features of a face. It was never told anything like "expect a face to have a nose in the middle with two eyes and a mouth," or that "an eye is a white orb with a colored ring around a black circle," etc. It's just a collection of numbers that happen to have patterns; through training and feedback, the neural network eventually learns which patterns make for good output.
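In modern face generators, by the way, the "yeah, good / no, not so good" judge is usually a second network (the discriminator in a GAN) rather than a human, but the feedback loop has the same shape. Here's a toy sketch of it in PyTorch; every name and number in it (the tiny layer sizes, the random stand-in "dataset") is invented for illustration, not anyone's actual training setup:

```python
# Toy GAN loop: the "judge" from the post is the discriminator network.
import torch
import torch.nn as nn

latent_dim, image_dim = 16, 64           # tiny stand-ins for real dimensions

generator = nn.Sequential(               # noise in, flat "image" array out
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, image_dim), nn.Tanh()
)
discriminator = nn.Sequential(           # "image" in, real-vs-fake score out
    nn.Linear(image_dim, 128), nn.ReLU(), nn.Linear(128, 1)
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, image_dim)  # placeholder for real face photos

for step in range(1000):                 # "repeat a thousand times..."
    real = real_data[torch.randint(0, 256, (32,))]
    fake = generator(torch.randn(32, latent_dim))

    # Judge's turn: learn to score real photos high and fakes low.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator's turn: adjust weights so the judge says "yeah, good."
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Nowhere in that loop does anything say "nose" or "eye"; both networks only ever see and move arrays of numbers.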
Here’s a slide deck with really cool images explaining what’s going on inside the process. Warning: many of the intermediate images it produces look like a cruel, disturbed Picasso assembled a dog. If you are triggered by photos of spider-like collies with three eyes, well, you were warned.
This is also why simple “adversarial perturbations” cause AI image recognition systems to misidentify images in ways that have nothing to do with the shape of the object.
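For the curious, the textbook version of this is the "fast gradient sign method" from Goodfellow et al.: nudge every pixel a tiny step in whatever direction increases the classifier's loss. A minimal sketch, with an untrained toy model standing in for a real classifier:

```python
# FGSM sketch: the perturbation comes from gradients over raw numbers,
# not from anything shaped like the object in the picture.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
image = torch.rand(1, 1, 28, 28, requires_grad=True)         # stand-in image
true_label = torch.tensor([3])

loss = nn.CrossEntropyLoss()(model(image), true_label)
loss.backward()                           # gradient of loss w.r.t. each pixel

epsilon = 0.05                            # tiny, near-invisible nudge
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)
# `adversarial` looks identical to a human, but the classifier's answer can
# flip, because its "knowledge" is number patterns, not object shapes.
```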
Perhaps it was trained on photos of people with syphilis.