Deep learning AI "autoencodes" Blade Runner, recreates it so faithfully it gets a takedown notice

[Read the post]

1 Like

dude, the first rule of AI is you never show them blade runner, terminator, or short circuit…

35 Likes

Don’t forget Small Wonder.

15 Likes

… reduces complex information to a small subset that the neural net believes to be most significant.

Compression. That’s the correct word.

9 Likes

So AIs can dream. One step closer to consciousness.

3 Likes

…of electric sheep.

21 Likes

Touché!

Doesn’t YouTube’s content id system use a neural network to categorize videos? Wouldn’t that make it possible the two bits of software are actually seeing the same thing?

1 Like

…or of cuddly human babies. We shall see.

not sure but at least one of the videos contains the audio from the movie… which probably made it easy to flag.

the vimeo video seems to have no sound, but then again "blade runner" is in the title which may have weighted the automatic analysis.

1 Like

it was a set up, and you too have fallen prey to my evil plan.

1 Like

First of the day, won’t be the last though…

2 Likes

Well now I haven’t – thanx for nothin’

7 Likes

Is this to be an empathy test?

3 Likes

No, no… the rule on that is you never show it to any intelligence, artificial or otherwise.

6 Likes

She’s a small wonder, lovely and bright with soft curls.
She’s a small wonder, a child unlike other girls.
She’s a miracle, and I grant you
She’ll enchant you at her sight.
She’s a small wonder, and she’ll make your heart take flight.

She’s fantastic, made of plastic.
Microchips here and there.
She’s a small wonder, brings love and laughter everywhere

:smiling_imp:

5 Likes

The AI autoencoded version is like viewing Blade Runner through a scanner, darkly.

17 Likes

There’s this: http://news.berkeley.edu/2011/09/22/brain-movies/

"Imagine tapping into the mind of a coma patient, or watching one's own dream on YouTube. With a cutting-edge blend of brain imaging and computer simulation, scientists at the University of California, Berkeley, are bringing these futuristic scenarios within reach."

Aw man, why didn’t he use the Director’s Cut without the narrative dubbing?

2 Likes

The final convolutional layer in the encoder network Enc is reshaped (flattened) from a 4th-order Tensor of size 16x9x640x12 to a linear (ignoring the batch depth) 2nd-order Tensor of size 92160x12. The figure 92160 is the product 16×9×640. The final layer in the encoder network Enc is fully connected to the flattened convolutional layer. The final layer is a 2nd-order Tensor of size 200x12, 200 being the dimension of latent variables z.
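The quoted passage boils down to a reshape followed by a fully connected layer. A minimal NumPy sketch of just the shapes involved (the weights here are random placeholders, not the paper's trained encoder):

```python
import numpy as np

# Batch of 12 frames, each a 16x9x640 convolutional feature map.
conv_out = np.zeros((12, 16, 9, 640))

# Flatten each frame's feature map into one long vector.
flat = conv_out.reshape(12, -1)                # shape (12, 92160)
assert flat.shape[1] == 16 * 9 * 640 == 92160  # product, not sum

# Fully connected layer down to 200 latent dimensions per frame.
W = np.random.randn(92160, 200) * 0.01         # placeholder weights
z = flat @ W                                   # latent codes, shape (12, 200)
print(z.shape)
```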

So no, that’s not a 200-digit number. It’s 200x12x(16.5 bits), which is about 12,000 decimal digits of information.

I think.
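The "about 12,000 decimal digits" estimate checks out. Taking the commenter's own figure of ~16.5 bits per latent value and converting bits to decimal digits via log10(2):

```python
import math

# 200 latent variables x 12 frames x ~16.5 bits per value
# (the 16.5-bit figure is the commenter's estimate, not from the paper).
bits = 200 * 12 * 16.5        # 39600 bits total
digits = bits * math.log10(2)  # convert bits to decimal digits
print(round(digits))           # ~11921, i.e. roughly 12,000 digits
```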

4 Likes