True. I’m just trying to point out that being “universally pro-life” is a rather complicated proposition. What of the tapeworm inside your intestine housing its own nematodes, housing their own bacterial colonies?
Obligatory: What could possibly go wrong?
Good luck programming those complex statements into your robot as laws. Even in the far future, most robots won’t be capable of determining whether they are in violation of their own laws.
The robots WILL take over the world. It is inevitable.
I always find these laws of robots and AIs to be a nice idea “in theory” but completely ridiculous as a practical measure.
How would we write a program to serve the “common good” of humanity, much less recognize it? What exactly is an ecologically, socially, culturally, and economically sound way of life? For that matter, what is a human, or intelligence? I defy you to find even two humans who would agree on definitions of these terms, much less the entirety of humanity, in terms precise enough to program into a robot.
Then you get into the really sticky questions: Do all humans count the same? Ideally that would be the most egalitarian answer, but I’m pretty sure that the massive transfer of wealth from Western societies to developing nations this would require would not be considered a “good thing” from a Western point of view. I’ve heard that the ideal population of humans on Earth is about two billion. Does the common good therefore mandate that we cull five billion people from the Earth? That would certainly be an ecological and economical solution. And if we go that route… who decides who should live and who should die?

Furthermore, if you could come up with a simple rule for society… wouldn’t we have already done so by now? Marxists and Nazis both thought they had the solution, and we all know how well those social experiments turned out.
You could probably write a whole book of scenarios exploring that idea. Or one so-so Will Smith movie.