I’m pretty skeptical of AI myself, but it is important to realize that understanding how humans and other animals think isn’t necessarily a prerequisite for AI. In fact, trying to do it that way might even be a mistake. In the 19th century people tried to make flying machines that flapped their wings, on the reasonable but misguided idea that flying machines should mimic birds. We only developed practical planes once we realized that machines could fly by entirely different methods, methods that eventually proved able to fly faster and higher than any bird.
And who will guard the AI guardians?
But seriously, we won’t have to worry about programming/training ethical bank software because we’ll have AI guardians who will do it for us? If we can write generic AI guardians, then writing ethical software for a single area of expertise will be a lot easier. And watching the software really should be a human job right now, not a job for an AI in the distant future.
That’s what I take away from these warnings from various luminaries about evil AI: that we should be spending time now thinking about how to instill ethical behaviour into all of our systems — software or organizational — instead of putting it off to some numinous future.
The whole discussion of AI is one of the weirdest and most warped conversations in popular culture.
So many people seem to be so worried about the emergence of a superior generalised intelligence, a thing so utterly implausible and distant at the moment, while they remain completely oblivious to the current steady emergence of specialised AI.
So much of the workforce is set to be replaced by AI in the immediate future that it will be completely disruptive. And while the immediate and most visible job losses may be in fields like transportation, few jobs are really immune; think of all the graphics analysis algorithms being applied to design, for instance. My phone can enhance a photograph at least as well as I can, and that used to be a marketable skill I had with specialised software 10 years ago. We might even see an improvement in journalistic standards with some decent AI applied to copy editing…
I can really only see two paths ahead, one where there is a basic minimum wage and people are allowed to pursue lives of leisure, and one where almost all wealth is concentrated at the top, and many of us live in poverty kept in check by an iron fist.
I see no political will for the former outcome at all.
Well, quickly, before it becomes >$7B, count the negative value (and normalize like a dervish… whose dance is Gaussian): http://www.nytimes.com/2016/10/24/technology/artificial-intelligence-evolves-with-its-criminal-potential.html?_r=0
tl;dr In the future, machines will pause telling mom jokes and spend 15 minutes asking for banking credentials in her voice.
And they will never understand the answer.
I, for one, welcome our new overlord @Flossaluzitarin.
When it comes to destruction, I’m still betting on natural stupidity rather than artificial intelligence.
No, but if we don’t even know what intelligence is or how to define it, what does AI even mean? If we keep moving the goalposts we may one day reach the point where machines we still don’t accept as intelligent take over, and the discussion will be whether or not they see us as intelligent.
Still, it’s not human-like general intelligence that is the problem at the moment, but computers that become better and better at solving specific problems while having no clue about anything outside their field of expertise. A suitably advanced trading algorithm could decide to make a killing by speculating on food prices without realizing or caring that as people starve, the economy will crash.
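To make that failure mode concrete, here is a minimal, purely hypothetical sketch (the strategy names, the numbers, and the social_harm field are all made up): when the objective counts only profit, whatever harm is attached to each option simply never influences the choice.

```python
# Hypothetical toy example: a narrow optimizer that maximizes profit alone.
# The harm each strategy causes is right there in the data, but it never
# enters the objective, so the "best" strategy found is also the most
# destructive one.

strategies = [
    {"name": "diversified portfolio", "profit": 3.0, "social_harm": 0.1},
    {"name": "corner grain futures",  "profit": 9.0, "social_harm": 8.5},
    {"name": "index tracking",        "profit": 2.0, "social_harm": 0.0},
]

def objective(strategy):
    # Narrow objective: profit, and nothing else.
    return strategy["profit"]

best = max(strategies, key=objective)
print(best["name"])  # -> "corner grain futures"; the harm column is ignored
```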
I add this interview to my “Elon Musk is full of shit” file.
Cockroaches will be here long after us.
Thing is, we have a pretty clear understanding of what flight is. Do we have the same actual understanding of intelligence?
Also, while an airplane is faster than a bumblebee, a bumblebee is unsurpassed in doing its own thing - at least to the best of my knowledge we cannot make a flying vehicle of that size, weight, and maneuverability. In the same way, a computer excels at certain tasks, but does not measure up at all for others. A car is much faster than a human, but it cannot climb trees, and most certainly it cannot do embroidery. I doubt that we can surpass human intelligence without understanding it better, but I guess by the time one of us dies, it either will or will not have happened.
Again, two wrongs don’t make a right - the failure to define AI or even intelligence properly does not prove that AI will happen.
As for computer systems causing trouble or destruction, I would say it has been clear from day 1 that this can happen without any trace of intelligence in either the system or those that use it.
-
Very few people outside Nick Bostrom, MIRI, and the like are consistent about whether “AI” is referring to general intelligence or domain-specific competence. This leads to lots of people talking past each other and confusing their audiences.
-
The fact that we don’t know how to build AGI is part of the timescale problem, but also not very comforting. Maybe all the pieces already exist, waiting to be stitched together and given more processing power. Maybe it’ll be a hundred years or more to even get close. But if it is more like the former, and if there is a possibility of hard-takeoff recursive self-improvement, then not getting it absolutely right the first time is likely to approach a global extinction level event. Not because of malice, but because of indifference. Paperclip maximizers not terminators.
-
Narrow AI can also be dangerous when used in many parts of complex networks, as described in a number of comments above. Also, really hard to identify/correct once in place.
-
Writing ethical guardian software as the OP describes is at minimum equivalent to being able to produce artificial general intelligence in terms of difficulty (interpreting laws, understanding the complexity of human goals expressed in imprecise and informal language, etc.). In practice, for an AGI to act as a reliable guardian it also needs to be an expert in all human knowledge areas, more noble than Gandhi, and able to predict the long term impacts of decisions as they ripple through many layers of the AI and human and natural systems it is supposed to oversee or interact with. It is most likely equivalent to writing an artificial general friendly superintelligence.
-
Define soon. I’m going to be around most likely another 60 years, and if I’m lucky and medical science figures out aging quickly enough I may last centuries or millennia. My newborn niece will very likely still be alive and kicking in the 22nd century. Am I not supposed to care about her and her potential children’s well being? The norms we set out now in terms of how we think about AI can absolutely impact the tone of future debate, regulation, and research for generations.
If I was going to actually try to define “flight” it would be more complicated than the idea intuitively feels. Should it include a paper airplane, or a party balloon, or a rocket operating in vacuum? What about a submarine? It relies on buoyancy, sure, but is otherwise kinda similar to a propeller plane. What is the cutoff between flight and swimming, is it defined by Reynolds number? Does it even make sense to describe the actions of birds and airplanes using the same short English word, would humans seeing both for the first time (Plato’s cave style) cluster them together that way?
Intelligence is the same but more so, because it is more multifaceted, more abstract, less directly observable. I suspect the only reason we can even roughly measure human and animal intelligence now is that all biological intelligence evolved under similar goals/constraints, survival and reproduction in whatever the local environment is, usually using biological sensing and motive equipment with some kind of analogs in our own bodies. When we create narrow AI systems they are truly alien, and don’t work that way, so to us they don’t look like our intuitive understanding of intelligence.* But we can still make some instinctive comparisons. I don’t know if Deep Blue is smarter than a high-frequency trading algorithm, but Watson or a self-driving car probably is. Luckily, humanity does have at least a small community of researchers actually trying to answer the question of what intelligence is.
*Among other reasons, such systems might be intelligent but are not sentient or sapient - they lack feelings or self-awareness, neither of which I can adequately define but which certainly play a role in my estimates of the intelligence of other humans.
If I encountered a program that could converse with me smoothly in natural language, learn to play a new game I brought it when given the official rules, explain why it took various actions, ask probing questions, and surprise me with new insights, I would admit it is intelligent. Not saying those are all necessary, but together I would probably find them sufficient - though only as long as the system was not specifically designed to pass such a test; I expect that would be game-able. But if, say, a future iteration of IBM Watson, designed as a tool set for use in many applications, could also readily do this, I would credit it with intelligence.
To repeat a thought I heard a few weeks ago, big American “job” policy calls 100,000 new jobs a great success, and those gains are measured over years. What happens when the most common job in the US, delivery driver, goes the way of the dodo because of drones, or driver-AI?
I don’t think a machine with a human-like mind will exist anytime soon. I’m not even sure if it’s possible to have a human-like mind that exists apart from a human (or human-like) body.
But then, who says AI has to be anything like a human mind? Artificial intelligence need not bear any closer relationship to human intelligence than a supersonic jet bears to a sparrow. Our machines surpassed us decades ago depending on which measure of “intelligence” you prefer to go by.
Re. measuring intelligence, do we really have any good direct measures? Fitness, as I think you allude to, is only indirect, and quantifying how much can be attributed to any one characteristic can be rather tricky (possibly an understatement).
Generally tho, I’d say a good measure of intelligence would be creating a system that can successfully tackle all problems a human can tackle - not task-specific solutions, but a general problem solver and task-doer. Possibly a very dead-average human, for that matter. It’s a lot of stuff, I guess: finding your way, puzzles, working, learning, talking, making some kind of rough sense of the world, stuff like that. Fuzzy as all hell, all of it. As far as I can tell, AI struggles mightily with even single, specific tasks (e.g. driving a car), and often relies on workarounds (one car-driving AI knows every damn detail of all roads it drives on, iirc? Which kinda reduces the amount of actual problem-solving involved…).
Well, there is some… Trump much.
Agreed on all counts. Humans also rely on lots of workarounds, they’re just well-tested ones we all share so we don’t necessarily notice, and they go by names like culture, convention, procedure - all the ways we avoid solving problems de novo each time we encounter them.
Have you ever read Niven’s The Mote in God’s Eye? There are aliens in it with an instinctive but highly specialized ability to make complex tools. It is so easy for them they do it custom every time, so no two screws, handles, or coffee machines are the same. Other “breeds” of alien from the same world, though, have much more of what I’d call general intelligence, and are quite willing to kill the barely-sentient toolmakers whenever needed.
No doubt, but that doesn’t make them “smart”; just well designed.
Take it up with Kaku if you wish to debate that; it’s his quote.