The real problem with the Turing Test is that the important question is not whether an intelligence is human, but how humans treat AIs, how AIs treat humans, and how members of both intelligence categories treat one another.
This was a really compelling talk! I wonder what specific actions he has in mind for developing a robust system to ensure that AI does no harm. Is this essentially Asimov’s Three Laws of Robotics?
I guess one good place to start is to better educate everyone about what AI is. I appreciated that at the beginning of his talk he clarified that he was talking about machine intelligence, not the metaphysical or conscious AI of movies and books. I know I often think about it in that fantastical way, when actually it seems that machine intelligence is the real threat.
I did think his insinuation that the Google car drives from a video stream was misleading. From what I understand, the Google car isn’t crunching a raw video feed; it fuses data from a host of sensors (including the video), some of it captured ahead of time by human-driven cars with more sophisticated sensors. Essentially the Google cars localize against a pre-scanned 3D map of where they are driving and don’t deviate from it. But I read that about a year ago, so maybe that’s old news?
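To make the distinction concrete, here’s a toy sketch of what “localizing against a prior map” means: instead of steering from raw video, the car scores candidate poses of a live sensor scan against a pre-built map of points. This is purely illustrative and not Google’s actual pipeline; the 2D setup, function names, and brute-force search are hypothetical simplifications.

```python
import numpy as np

# Hypothetical sketch: the vehicle doesn't "drive from video"; it estimates
# its pose by matching a live point scan against a prior 3D map (2D here).

def score_pose(live_scan, prior_map, dx, dy, theta):
    """Score how well the live scan fits the prior map after shifting the
    scan by (dx, dy) and rotating it by theta radians. Higher is better."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    transformed = live_scan @ rot.T + np.array([dx, dy])
    # For each transformed scan point, distance to its nearest map point.
    dists = np.min(
        np.linalg.norm(transformed[:, None, :] - prior_map[None, :, :], axis=2),
        axis=1,
    )
    return -dists.mean()

def localize(live_scan, prior_map, search=0.5, step=0.1):
    """Brute-force grid search over small pose offsets (toy version of
    what a real system does with far more sophisticated matching)."""
    best_pose, best_score = None, -np.inf
    offsets = np.arange(-search, search + step, step)
    for dx in offsets:
        for dy in offsets:
            for theta in np.deg2rad(np.arange(-5, 6, 1)):
                s = score_pose(live_scan, prior_map, dx, dy, theta)
                if s > best_score:
                    best_pose, best_score = (dx, dy, theta), s
    return best_pose

# Prior map: a pre-scanned "wall" of points along y = 0.
prior_map = np.column_stack([np.linspace(0, 10, 50), np.zeros(50)])
# Live scan: the same wall as seen by a car displaced by (0.2, -0.1),
# so the points appear shifted the opposite way in the vehicle frame.
live_scan = prior_map - np.array([0.2, -0.1])

print(localize(live_scan, prior_map))  # roughly (0.2, -0.1, 0.0)
```

The camera feed is just one input that helps build and refine that prior map; the moment-to-moment driving decision is a pose estimate against it, which is why a video-only framing is misleading.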
AI. The ultimate Libertarians.
This points to what I suspect will be AI’s first real dividend: effective programming of collective natural intelligence via legal reform. I’m not sure whether to feel optimistic or terrified by that prospect.