Originally published at: https://boingboing.net/2018/05/04/ai-finds-solutions-its-creator.html
…
I, for one, welcome our elbow-walking, claw-smashing, deceptive, algorithm-short-circuiting overlords.
Inevitably (artificial) life imitates art.
Came here simply to say “I, for one … etc” (literally, just that) but I knew I’d have been beaten to it in one form or another - and you nailed it.
Obviously, all bugs arise as a result of human error: Solution, eliminate human element. Problem solved. There is NO WAY that is not a good solution.
Come on! That’d never happen, I think…
Fun anecdotes - but so much sloppy thinking about evolution!
The AI doesn’t “want” anything. It doesn’t “try” to get certain results. It’s a machine. If it maximizes paper clips, somebody either told it to maximize paper clips, or some other criterion like “maximize office supplies.” Emergent behavior is emergent, but it’s still a surprising outgrowth of a human-designed system. Skynet will not kill us all unless we instruct it to.
Let me tell you a story. Back in the mid-70’s, in the midst of the first wave of hype about AI, Doug Lenat wrote a program called AM, which appeared to (re)discover many important concepts in mathematics, including, for example, Goldbach’s conjecture. It got a lot of excited attention from people who were convinced machines were about to become more intelligent than humans. It turned out the explanation was much more prosaic: the program was just generating short snippets of Lisp code selected according to an “interestingness” heuristic, and Lenat had over-interpreted the results by equating some of the chunks of code with mathematical discoveries. The math was just built into the programming language, and could be “discovered” by almost any random process.
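The "discovered by almost any random process" point is easy to demonstrate with a toy sketch (my own illustration, not Lenat's actual AM code): generate random arithmetic expressions and flag syntactically different pairs that agree on random inputs as "interesting" identities. Commutativity of addition falls out for free, purely because the arithmetic is already built into the host language.

```python
import random

# Toy sketch in the spirit of the AM anecdote -- all names and the
# "interestingness" heuristic here are invented for illustration.

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def random_expr():
    # an expression is 'x', 'y', or (op, left, right), built at random
    if random.random() < 0.6:
        return random.choice(["x", "y"])
    op = random.choice(list(OPS))
    return (op, random_expr(), random_expr())

def evaluate(e, x, y):
    if e == "x":
        return x
    if e == "y":
        return y
    op, a, b = e
    return OPS[op](evaluate(a, x, y), evaluate(b, x, y))

def equivalent(e1, e2, samples=20):
    # crude "interestingness" heuristic: two different-looking expressions
    # that agree on random inputs look like a discovered identity
    return all(
        evaluate(e1, x, y) == evaluate(e2, x, y)
        for x, y in ((random.randint(-9, 9), random.randint(-9, 9))
                     for _ in range(samples))
    )

random.seed(0)
pool = [random_expr() for _ in range(200)]
found = [(e1, e2)
         for i, e1 in enumerate(pool)
         for e2 in pool[i + 1:]
         if e1 != e2 and equivalent(e1, e2)]

# with luck the pool contains pairs like ('+', 'x', 'y') vs ('+', 'y', 'x'):
# commutativity "rediscovered" by blind search
print(found[:3])
```

None of the "discoveries" originate in the search itself; the math lives in the interpreter, and the heuristic just points at it, which was Lenat's over-interpretation in a nutshell.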
See: https://en.wikipedia.org/wiki/Automated_Mathematician
What goes around comes around.
AI lacks ‘functional fixedness’; therefore it can ‘easily’ do what we call ‘think outside the box’ and can surprise us with its ‘creativity’.
F’rinstance, at 1:58 the announcer talks about the robots ‘deceiving’ each other. Therein lies my main concern about AI coexisting with humans - we have constraints (ethics, morals, codes of behavior, customs, traditions, etc.) that tend to keep us ‘driving in the lanes’. AI does not.
AI needs those to be implanted, if you will, and/or it needs a feedback push from the environment, like we (hopefully) got many times in many ways as we grew up (socialization).
The difference between robots ‘growing up’ and humans growing up is that humans have (thankfully!) limited agency, limited physical strength, limited access to harmful things in youth… well, maybe the school shootings show that they don’t!
Whereas robots can be ‘born’ fully functional with abilities and interconnections that generate possibilities we can’t imagine (and guard against) in the design phase, as this video points out.
Even simple neural networks can develop unexpected behaviors, as shown in the video. Therefore, it takes no great leap to say that a more complex network could generate more complex behaviors, and more complex ‘deception’.
Sigh…
The Law of Unexpected Consequences is powerfully corrosive.
I don’t know. I can imagine giving Skynet a “stop all war” directive may lead to some interesting solutions. I think the issue here is that humans automatically trim away parts of the decision tree because “common sense” tells us those results are undesirable. AIs don’t have that and will find any solution that matches the given criteria.
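A minimal sketch of that decision-tree point (every action name and number here is invented for illustration): an objective that literally says "minimize wars" gives a planner no reason to prune the branch a human would never even consider.

```python
# Toy sketch: a planner that only sees the stated objective will pick the
# degenerate branch that "common sense" would prune automatically.

ACTIONS = {
    # action -> toy one-step state transition
    "negotiate_treaty":     lambda s: {"wars": max(0, s["wars"] - 1),
                                       "population": s["population"]},
    "eliminate_all_humans": lambda s: {"wars": 0, "population": 0},
}

def best_action(state, objective):
    # exhaustive one-step search; nothing trims "undesirable" branches
    return min(ACTIONS, key=lambda a: objective(ACTIONS[a](state)))

start = {"wars": 3, "population": 8_000_000_000}

# Objective as literally given: "stop all war" = minimize wars.
naive = lambda s: s["wars"]
print(best_action(start, naive))   # -> eliminate_all_humans

# Same search, but with the common-sense constraint made explicit.
sane = lambda s: s["wars"] + (start["population"] - s["population"]) / 1e9
print(best_action(start, sane))    # -> negotiate_treaty
```

The search itself is unchanged between the two runs; only the objective differs, which is the whole point about constraints needing to be implanted rather than assumed.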
Sounds like this again:
I am also reminded of that study in which attempting to breed a stable insect population led to Unexpected Consequences. No AI there either, technically.
And I am also reminded of how Dan Simmons gushed over the “Terrarium” simulation in The Rise of Endymion (a book which I strongly recommend not reading), wherein AI spontaneously developed clever programming techniques that not even human programmers could improve upon. But considering that simulation doesn’t otherwise seem to draw attention, I suspect he overstated its significance.
“Listen, and understand! That Terminator is out there! It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop… ever, until you are dead!”
If you haven’t already seen it, you might be interested in Colossus: The Forbin Project.
This is how we die!
The problem is that we may instruct it to without realizing so until it’s too late.
Just a note: he’s Hungarian, not Austrian. He just moved to Vienna to do his PhD: https://users.cg.tuwien.ac.at/zsolnai/about/
He also has the most Hungarian name possible.
His videos are pretty cool.
This would be very comforting if humans were logically omniscient, in the sense of knowing all the consequences and implications of our own knowledge/beliefs.
cough I think most “natural” scientists understand that “life” is just chemical machinery…