dude, the first rule of AI is you never show them blade runner, terminator, or short circuit…
Don't forget Small Wonder.
… reduces complex information to a small subset that the neural net believes to be most significant.
Compression. That's the correct word.
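For anyone curious what that compression looks like in code, here's a toy NumPy sketch (the sizes are made up for illustration, not the ones from the actual Blade Runner network): squeeze the input down to a small code, then rebuild it from that code alone.

```python
import numpy as np

# Toy autoencoder: squeeze 784 input values down to a 32-number code,
# then try to rebuild the input from that code alone.
rng = np.random.default_rng(0)
W_enc = rng.normal(0, 0.01, size=(32, 784))   # encoder weights: 784 -> 32
W_dec = rng.normal(0, 0.01, size=(784, 32))   # decoder weights: 32 -> 784

x = rng.normal(size=(784, 1))                 # one flattened input "frame"
z = np.tanh(W_enc @ x)                        # the compressed code (32 numbers)
x_hat = W_dec @ z                             # reconstruction from the code

# Training would adjust W_enc/W_dec until x_hat matches x; whatever
# survives in z is what the net "believes to be most significant".
print(x.shape, z.shape, x_hat.shape)          # (784, 1) (32, 1) (784, 1)
```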
So AIs can dream. One step closer to consciousness.
…of electric sheep.
touché!
Doesn't YouTube's Content ID system use a neural network to categorize videos? Wouldn't that make it possible the two bits of software are actually seeing the same thing?
…or of cuddly human babies. We shall see.
not sure but at least one of the videos contains the audio from the movie… which probably made it easy to flag.
the vimeo video seems to have no sound, but then again "blade runner" is in the title, which may have weighted the automatic analysis.
it was a setup, and you too have fallen prey to my evil plan.
First of the day, won't be the last though…
Well now I haven't – thanx for nothin'
Is this to be an empathy test?
No, no… the rule on that is you never show it to any intelligence, artificial or otherwise.
She's a small wonder, lovely and bright with soft curls.
She's a small wonder, a child unlike other girls.
She's a miracle, and I grant you
She'll enchant you at her sight.
She's a small wonder, and she'll make your heart take flight.
She's fantastic, made of plastic.
Microchips here and there.
She's a small wonder, brings love and laughter everywhere.
The AI-autoencoded version is like viewing Blade Runner through a scanner, darkly.
There's this: http://news.berkeley.edu/2011/09/22/brain-movies/
"Imagine tapping into the mind of a coma patient, or watching one's own dream on YouTube. With a cutting-edge blend of brain imaging and computer simulation, scientists at the University of California, Berkeley, are bringing these futuristic scenarios within reach."
Aw man, why didn't he use the Director's Cut without the narrative dubbing?
The final convolutional layer in the encoder network Enc is reshaped (flattened) from a 4th-order Tensor of size 16×9×640×12 to a linear (ignoring the batch depth) 2nd-order Tensor of size 92160×12, the figure 92160 being the product 16×9×640. The final layer in the encoder network Enc is fully connected to the flattened convolutional layer. The final layer is a 2nd-order Tensor of size 200×12, 200 being the dimension of the latent variables z.
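For concreteness, here's that reshape-and-project step as a NumPy sketch (assuming, as the quoted shapes suggest, that the batch of 12 frames sits in the last dimension; the variable names are mine):

```python
import numpy as np

# Final conv-layer activations: 16 x 9 spatial grid, 640 channels, batch of 12.
conv_out = np.random.randn(16, 9, 640, 12)

# Flatten everything except the batch depth: 16*9*640 = 92160.
flat = conv_out.reshape(16 * 9 * 640, 12)     # shape (92160, 12)

# Fully connected layer projecting down to the 200 latent variables z.
W = np.random.randn(200, 92160) * 0.01        # illustrative weights
b = np.zeros((200, 1))
z = W @ flat + b                              # shape (200, 12)

print(flat.shape, z.shape)                    # (92160, 12) (200, 12)
```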
So no, that's not a 200-digit number. It's 200×12×(16.5 bits) = 39,600 bits, and at log10(2) ≈ 0.301 decimal digits per bit, that's about 12,000 decimal digits of information.
I think.
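Quick sanity check on that arithmetic (taking the 16.5 bits per value at face value):

```python
import math

values = 200 * 12                   # latent variables across the batch of 12
bits = values * 16.5                # 39,600 bits total
digits = bits * math.log10(2)       # convert bits to decimal digits
print(round(digits))                # 11921 -- about 12,000, as claimed
```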