This company wants to use AI to help you pretend to increase diversity

Well - there’s your problem right there.


That’s going to cut into the real fake company employee business.


“We aim to make creative works both more accessible and higher quality through generative processes” sounds a lot better than “Auto-diversify the avatars for your army of Twitter sockpuppets!”

Given the tendency I’ve noticed of right-wing and Libertarian trolls using African-American avatars and Hispanic usernames on this site and others, this company isn’t helping.


Just in time for the 2020 Presidential campaign! (I know it was mostly the Republicans getting busted for using stock photos of PoC in their ads last time around, but the Dems aren’t entirely innocent either.)


The only difference with that party is that it’s (hopefully) rare, and given Mayor Pete’s track record I’m less surprised that he’s the one doing it. With the GOP, I’m more surprised when they get real POC doing their PR.


Oops, thanks for the bug report. We do have various genders in the training set, and we do have them in the classifier. Fixing.

I appreciate your coverage; I also agree that we are testing the waters and generating buzz.

However, it’s not correct to call it a deepfake. Deepfakes swap faces; we don’t do that.

And yes, generative media is in its infancy, and it will improve a lot in the coming years; it’s the innovator’s dilemma. If you compare it with the monsters we had in September, you’ll see the striking difference.

Our next version would look like this:

Well, if nothing else, props for facing the controversy.
Yet the elephant in the room is that many users here would object to a tool that makes faking inclusiveness even easier than before. One would then worry that it turns a ‘fake it till you make it’ approach to diversity into ‘fake it forever, since it’s so easy and cheap’.
Still, I understand what a short-term, reality-based approach looks like: fake diversity stock photos are in demand, tools that make them easier to get would sell, and tools that create them from scratch would be a whole new ball game.




I disagree - this is actually a Boing Boing-appropriate wonderful thing. A computer can create an entirely plausible human face from scratch, free from the exploitative world of modelling and copyright.


Their response wasn’t really facing the controversy.

Actually it was an almost pitch perfect example of the ways in which tech companies dodge controversy - it has all the key manoeuvres.

First we need to get people on board - build an artificial sense that while there are some minor disagreements, there are key things that we all agree upon… In this case we’ve had to manufacture the idea that people are suggesting this is about ‘testing the waters’ …

Establishes a peripheral “error” in the coverage - it doesn’t matter that, for a casual audience, the notion of deepfakes is a good conceptual link between ‘neural network’, ‘people/faces’, and ‘nefarious intent’… or even that this detail is hardly the point the coverage hinges upon. We also get a bonus here, because we’re re-establishing control over the body of knowledge.

Replaces the criticism of social issues with a technical problem and a possible solution. No one was worried about the poor quality of the images, except as a horrible punchline to the larger concerns. Solving any technical issues does not address the underlying problem with the service, which is its potential for rapid, realistic deployment of fake people by bad actors. But this does, again, allow us to re-establish technical competence and control over the discussion by neatly sidestepping social concerns.

Err… Actually, I don’t know how this part fits the script. But I can feel it staring into my soul.

Anyway, a summary of the empty tech response to social criticism:

  1. Manufacture broad consensus.
  2. Identify peripheral error.
  3. Identify technical issue (and solution) to sidestep social issue.
  4. Distracting image.

I have plenty of friends who are neural networks; but I’m not sure I’d want one marrying my sister.



Next time they should use AI to better calculate where to place more than one aerial to get better reception of mobile radio signals.

That would be a more useful kind of diversity.

Besides, I find the use of these stock images on websites and in similar advertising a bit annoying.


This topic was automatically closed after 5 days. New replies are no longer allowed.