Deep learning AI "autoencodes" Blade Runner, recreates it so faithfully it gets a takedown notice


#1

[Read the post]


#2

dude, the first rule of AI is you never show them Blade Runner, Terminator, or Short Circuit…


#3

Don’t forget Small Wonder.


#4

… reduces complex information to a small subset that the neural net believes to be most significant.

Compression. That’s the correct word.
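
For the curious, the “compression” here is just a neural net squeezing each frame through a narrow bottleneck and trying to rebuild it. A minimal sketch, assuming PyTorch; the layer sizes are illustrative, not the ones from the paper:

```python
import torch
import torch.nn as nn

# Toy autoencoder: squeeze the input through a narrow bottleneck,
# then try to reconstruct it. The bottleneck is the "compressed" code.
class Autoencoder(nn.Module):
    def __init__(self, n_pixels=4096, n_latent=200):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_pixels, n_latent), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(n_latent, n_pixels), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)      # lossy "compression": 4096 numbers -> 200
        return self.decoder(z)   # best-effort reconstruction of the frame

model = Autoencoder()
frame = torch.rand(1, 4096)                         # a flattened fake frame
loss = nn.functional.mse_loss(model(frame), frame)  # reconstruction error to minimize
```

Everything the decoder can’t rebuild from those 200 numbers is simply lost, which is why the reconstructed film looks so smeary.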


#5

So AIs can dream. One step closer to consciousness.


#6

…of electric sheep.


#7

touché!


#8

Doesn’t YouTube’s Content ID system use a neural network to categorize videos? Wouldn’t that make it possible that the two bits of software are actually seeing the same thing?


#9

…or of cuddly human babies. We shall see.


#10

Not sure, but at least one of the videos contains the audio from the movie… which probably made it easy to flag.

The Vimeo video seems to have no sound, but then again “Blade Runner” is in the title, which may have weighted the automatic analysis.


#11

It was a setup, and you too have fallen prey to my evil plan.


#12

First of the day; it won’t be the last, though…


#13

Well now I haven’t – thanx for nothin’


#14

Is this to be an empathy test?


#15

No, no… the rule on that is you never show it to any intelligence, artificial or otherwise.


#16

She’s a small wonder, lovely and bright with soft curls.
She’s a small wonder, a child unlike other girls.
She’s a miracle, and I grant you
She’ll enchant you at her sight.
She’s a small wonder, and she’ll make your heart take flight.

She’s fantastic, made of plastic.
Microchips here and there.
She’s a small wonder, brings love and laughter everywhere

:smiling_imp:


#17

The AI autoencoded version is like viewing Blade Runner through a scanner, darkly.


#18

There’s this: http://news.berkeley.edu/2011/09/22/brain-movies/

“Imagine tapping into the mind of a coma patient, or watching one’s own dream on YouTube. With a cutting-edge blend of brain imaging and computer simulation, scientists at the University of California, Berkeley, are bringing these futuristic scenarios within reach.”


#19

Aw man, why didn’t he use the Director’s Cut without the voiceover narration?


#20

The final convolutional layer in the encoder network Enc is reshaped (flattened) from a 4th-order Tensor of size 16x9x640x12 to a linear (ignoring the batch depth) 2nd-order Tensor of size 92160x12. The figure 92160 is the product 16×9×640. The final layer in the encoder network Enc is fully connected to the flattened convolutional layer. The final layer is a 2nd-order Tensor of size 200x12, 200 being the dimension of latent variables z.

So no, that’s not a 200-digit number. It’s 200×12 latent values at (say) 16.5 bits each, about 39,600 bits, which works out to roughly 12,000 decimal digits of information.

I think.
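
A quick sanity check on those shapes and the arithmetic. A sketch assuming PyTorch conventions (batch-first, whereas the paper writes the batch dimension last); the ~16.5 bits per latent value is my guess above, not a figure from the paper:

```python
import math
import torch
import torch.nn as nn

batch = 12                                 # frames per batch
conv_out = torch.zeros(batch, 640, 9, 16)  # final conv layer: 640 feature maps of 16x9
flat = conv_out.flatten(start_dim=1)       # -> (12, 92160); 92160 = 16*9*640, a product, not a sum
fc = nn.Linear(92160, 200)                 # fully connected down to the 200 latent variables z
z = fc(flat)                               # -> (12, 200)

bits = 200 * 12 * 16.5                     # assumed ~16.5 bits per latent value
digits = bits * math.log10(2)              # convert bits to decimal digits
print(flat.shape, z.shape, round(digits))  # torch.Size([12, 92160]) torch.Size([12, 200]) 11921
```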