Originally published at: https://boingboing.net/2018/07/27/several-experts-explain-key-et.html
“At its best, it’s going to be the thing that solves all of our problems, and at its worst, it’s going to be the thing that ends humanity.”
Yeah, that about sums up my feelings on it as well.
I look forward to watching the video, but my skepticism about “us” handling the advancement of AI ethically is fueled by how poorly “we” treat actual, living, breathing people at all levels of society, both ethically and physically.
To be fair, they said the same thing about nuclear energy. And the printing press. And this wonderful thing:
I think it’s always worth asking ourselves whether things have always been thus, to see if we’re just repeating historical patterns. But it’s also worth recognizing that nothing is really all that different…until it is.
Humanity has never had the kinds of powers that we possess now (whether it’s destructive power, or communication, or constructive ability) and when you combine those unprecedented powers with something like AI that could be fundamentally different in ways that we literally can’t conceive of, who the hell knows what happens?
That’s all fair and true, but new technologies also have upper limits that we haven’t fully explored yet. Most fears about AI rest on the assumption that it’s possible to bootstrap it into “god mode”. And yes, I do consider it almost certain that we can, at least in some domains, exceed individual high-level human performance with AI.
But with that said, there’s a really, really massive gulf between “better than human” and “legit superhuman”, and an even more massive one from “superhuman” to “godlike”. I have very strong suspicions that somewhere around low-grade superhuman we’re gonna start running into barriers, because to really outperform humans an AI doesn’t have to be just waaaaay smarter than one human at one task, it has to be massively better than large groups of specialized humans at complex things like research and programming and materials science and deep quantum physics and many hyper-obscure branches of mathematics and so on and so forth. Y’know, the stuff an AI would actually need to be insanely good at to be able to super-bootstrap itself into godhood.
When you actually start picking apart all the different technologies and scientific domains an AI would have to truly master, and how much cognitive/processing overhead it would require (replacing millions upon millions of units of really good existing computing substrate known as human brains) just to match existing levels of innovation, let alone vastly exceed them and somehow automate a lot of that research… suddenly a “runaway AI” looks a lot less likely than a tool that helps us incrementally-but-significantly speed up many existing areas of research. So essentially a linear instead of exponential or super-exponential speedup, except in some areas that are really, really well-suited to AI optimization.
I’m not so sure that “godhood” is what concerns me most; it’s that AI could be a fundamentally new form of being which we may not be able to relate to in ways that we understand. You rightfully point out that humans have a head start in innumerable areas, with millions of people constantly advancing them; but compared to an AI with unfathomable computing power and the ability to learn unfathomably quickly, I’m not sure what that head start means at the end of the day.
AlphaGo is a great example of this, I think. Millions of humans had thousands of years to develop complex and subtle strategies, and the AI taught itself to surpass humans in essentially no time using strategies that the best human players said seemed alien to them.
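That self-taught dynamic is easier to see at toy scale. Below is a sketch, purely illustrative and not AlphaGo’s actual method (which combines deep networks with Monte Carlo tree search), of tabular Q-learning playing the pile game Nim against itself: starting from zero knowledge of strategy, self-play alone recovers the known optimal play of leaving the opponent a multiple of four stones. All names and parameters are made up for the example.

```python
import random

random.seed(0)
PILE = 10  # starting pile; players alternately take 1-3 stones, taker of the last stone wins

# Q[(pile, take)] = estimated value of taking `take` stones, from the mover's point of view
Q = {(s, a): 0.0 for s in range(1, PILE + 1) for a in range(1, min(3, s) + 1)}

def moves(s):
    return range(1, min(3, s) + 1)

def train(episodes=20000, alpha=0.1, eps=0.2):
    for _ in range(episodes):
        s = PILE
        while s > 0:
            if random.random() < eps:                      # explore a random move
                a = random.choice(list(moves(s)))
            else:                                          # exploit current estimates
                a = max(moves(s), key=lambda m: Q[(s, m)])
            nxt = s - a
            if nxt == 0:
                target = 1.0                               # took the last stone: win
            else:
                # the opponent moves next, so our value is the negation of
                # their best estimated value (a negamax-style backup)
                target = -max(Q[(nxt, m)] for m in moves(nxt))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = nxt

train()
policy = {s: max(moves(s), key=lambda m: Q[(s, m)]) for s in range(1, PILE + 1)}
print(policy)
```

After training, the greedy policy takes `pile % 4` stones whenever that’s possible (e.g. 2 from a pile of 10), which is the textbook winning strategy, and nobody ever told it that; it emerged from self-play alone.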
It’s that alien-ness combined with scales of speed and power that we can’t wrap our heads around that I think will be the most transformative (for good or ill, who the hell knows?). If/when an AI becomes self-aware, can we even imagine the priorities it will have, or the methods it may use to achieve them?
Advances in computing and the existence of entirely new data sets are ushering in AI capable of realizing milestones that have long eluded us: curing cancer, exploring deep space, understanding climate change.
This is really exciting! But not actually true.
This is all correct, but it’s worth considering that the things we can currently make a machine be superhuman at are, honestly, things we’re really shitty at, cognitively speaking. The games of Go and chess are really good examples of this: they became popular amongst humans not because they were easy, but because they are difficult for us, meaning that the “skill cap” for humans is really, really high. That makes them vulnerable to machines beating us at our own game, because a lot of the high-level skill is just memorizing the shit out of a gazillion different algorithms and developing your own internal set of rules for when, where, and how to apply all those attack and defense algos.
But then you have stuff like machine vision, as applied in things like self-driving cars. Tens of thousands of the smartest humans alive in known history, with enormous budgets, drawing on a huge corpus of prior research, are pushing existing technology to its absolute upper limits, trying to accomplish over decades what I can teach my 15-year-old daughter to do in a few hours of active teaching and a few dozen hours of practice: drive a car around without murdering anyone or destroying property. Hand-eye coordination and visual processing are things that humans are fucking amazing at, as is a whole host of social games and things along those lines, including “modeling someone else’s mind”, which is a prerequisite for many, many activities.
For stuff we actually have some evolutionary advantage at, machines struggle to keep up, and likely will for some time, especially when it’s something that groups of humans can collaborate on (one of our meta-strengths).
For these things, we’ll have a much more gradual overtaking of human ability, which will give us a lot more time to adapt to the changing economic/political/evolutionary landscape than I think most people fear. Thinking in terms of “we’ll be overtaken quickly at stuff we’re bad at, slowly at stuff we’re good at, and really slowly at stuff we’re good at in groups” is a lot less scary than “god-mode HAL 9000 will sneak up on you overnight and turn you into paperclips because of some bad programming”.
I think we fundamentally disagree about whether gradual, incremental advancement is more likely to be the status quo going forward.
Yep, and that’s fine - I think there’s going to be explosive growth in some limited areas initially, slower and incremental growth in others, and some really weird and unpredictable explode-plateau-explode-plateau in still others as certain prerequisite dependencies (often unknown until they’re developed) are met.
I think we can agree, though, that these advances are absolutely going to devastate the human labor market across many industries, and that’s more of an imminent threat than anything else. And probably agree even more that we’re not doing a very good job of planning for these shifts, because shit’s gonna get weird really soon, especially when self-driving vehicles displace a significant amount of the transport industry (> 10 million jobs in the U.S. alone).
May I remind everyone that you guys are talking about statistics and stochastics?
SCNR. It’s just too apt.
This is the plot of a whole Avengers movie.
Yes, and I think it was pure chance that we didn’t blow up ourselves and our friends and our enemies and everyone else. Will we continue to be lucky? I hope so, but it only takes one problem.
ETA: And to prove my point, there’s this:
We have learned more about many episodes where we did nearly suffer a nuclear cataclysm but managed to avoid it.
The first time I read Russ Kick’s book of lists, my mind was literally blown by how many times the fate of the planet was hanging by a fucking thread.
This sums up why I think most of the stock zOMG doomsday scenarios are questionable (as much as I like SHODAN); but also why I expect even fairly limited ‘AI’/glorified expert systems, hopeless outside a narrow scope, to cause a fairly messy shake-up.
The trick is that, except when absolutely necessary, we don’t replace what a human was doing (or add some new activity) by building an android to replicate their functions only 26% better and with no need for sleep:
Instead, ASIMO is left to his geriatric waddling as what is largely a prestige project, while we modify the task to fit the agent.
This has worked remarkably well even with markedly cruder technology (most ‘industrial revolution’ advances involved obtaining an acceptably similar result/product by a new means amenable to the machines you could build, not a steampunk clockwork proletariat); and there seems little reason to expect it to stop working.
You won’t get a human-like strong AI going all Colossus: The Forbin Project on us one day; you’ll see increasing amounts of what used to be human activity atomized and reconstructed to match what the expert systems can do, and increased deployment of things that have long been possible but were not viable without inexpensive, high-throughput expert systems to handle them (among any number of possible examples, pervasive license plate reader deployment could have been done with a notebook and a watch, but only becomes economically viable, potentially even profitable, with machine-vision LPR systems).
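The LPR point is really an economics argument, and it can be made concrete with a toy back-of-envelope calculation. Every number below is an illustrative assumption, not sourced data; the takeaway is the orders-of-magnitude gap, not the specific figures.

```python
# Hypothetical back-of-envelope: cost per plate read, human observer vs. LPR camera.
# All figures are made-up assumptions for illustration only.

HOURLY_WAGE = 20.0              # assumed cost of a human observer, $/hour
HUMAN_PLATES_PER_HOUR = 120     # assumed plates loggable with a notebook and a watch
CAMERA_COST = 15_000.0          # assumed installed cost of one LPR camera, $
CAMERA_LIFE_HOURS = 5 * 365 * 24  # assumed 5-year service life, running 24/7
CAMERA_PLATES_PER_HOUR = 3_000    # assumed automated read rate

human_cost_per_plate = HOURLY_WAGE / HUMAN_PLATES_PER_HOUR
camera_cost_per_plate = CAMERA_COST / (CAMERA_LIFE_HOURS * CAMERA_PLATES_PER_HOUR)

print(f"human:  ${human_cost_per_plate:.4f} per plate")
print(f"camera: ${camera_cost_per_plate:.6f} per plate")
print(f"ratio:  ~{human_cost_per_plate / camera_cost_per_plate:.0f}x cheaper")
```

Under these made-up numbers the camera is roughly three orders of magnitude cheaper per read, which is exactly the regime where a task that was always *possible* tips over into being routinely deployed.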
That’s the thing about tech: it’s generally rather poor at being a drop-in replacement; but if you can modify the task to fit its strengths, the products of the new task can frequently displace the old method quite handily, with the attendant disruption.