I’m not entirely convinced that I believe that there is such a thing as a robot, as distinct from “a computer in a special case” or “a specialized peripheral for a computer.” At least inasmuch as mandating that a robot must (or must not) do certain things is a subset of the problem of mandating that computers must (or must not) run certain programs.
I imagine the vital element of the concept of a “robot” is autonomous function. A robot can of course receive commands or directives from an operator, but a true robot carries out its assigned functions without direct external guidance, relying instead on its internal programming.
An industrial assembly arm that is operated by a human via a control stick is not a robot, but one that performs the same task independent of human input is. Likewise, an aerial drone piloted remotely is not serving as a robot, but the same drone piloting itself is.
That’s part of what makes the concept of robotics so very compelling. The earliest imaginings of robots, well before the technology was anywhere near possible, were conceptions of non-human workers performing the tasks humans would otherwise be required to perform, independent of human guidance. The entire point was to reduce the need for human labor, by offloading that labor to a machine that performs the same physical functions without complaint or concern.
How this should affect the law seems, to my mind, a simple question.
In any case where a human operator is solely involved, it should be the operator who is the subject of the law - just as the operator of any tool is held accountable for the actions they take and the uses they put that tool to.
In cases where the robots themselves are at fault, well… there things get trickier. The nature of the fault becomes the prime factor in laying blame.
If a mechanical failure causes a vehicular accident, we blame the party responsible for that failure. If your mechanic forgot to refill your brake fluid, they are guilty of negligence. If the manufacturer cut corners during production, then it is they who are guilty of negligence. And if an action on the part of the operator causes the failure, such as overloading the vehicle beyond stated limits, then they are at fault.
The same needs to be true of robots, just without the need to concern ourselves with an operator.
If a robot is used to commit a crime, the parties who made the crime possible are the guilty ones. If a mechanical failure occurs, the maintaining mechanic, the manufacturer, or whoever allowed the fault to occur is to blame. If a fault in programming allows the robot to function in a way that results in a death, the only logical conclusion is that the programmers are guilty of negligence.
I’m not entirely convinced that I believe that there is such a thing as a robot, as distinct from “a computer in a special case” or “a specialized peripheral for a computer.”
I actually thought Ryan Calo did a very good job arguing for robots as distinct from “a computer in a special case”, though perhaps the “special case”-ing of robots needs to be further refined. To draw such a broad metaphor paints the picture that a robot is nothing more than a dumb tool that cannot observe its surroundings, assess and render judgment, and then act on them, which I believe to be false.
To quote his definition in the paper:
“robots are best thought of as artificial objects or systems that sense, process, and act upon the world to at least some degree.”
While it is true that it all boils down to lines of code and it can be hard for us to truly figure out how “intelligent” a software system is, in my opinion (CS person, not a lawyer) it is overly simplistic to categorically say that all robots are nothing more than systems that act on pre-determined rules.
Sure, a robot can seem “smart” for a while if it is programmed to buy this stock if it hits $X and sell if it hits $Y, but I think it can be agreed that this is just a simple program. However, Machine Learning techniques allow the robot to improve over time and learn from past mistakes, which Ryan touches on as part of “emergence”. These dumb tools can slowly become smarter over time (subject to limits in learning algorithms, inputs, etc.). More crucially, based on these improvements, the judgment and behavior of the system will change to better suit its function.
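The contrast being drawn here can be sketched in a few lines of toy Python. All names and thresholds below are hypothetical illustrations, not anyone’s actual trading system: the first function is the fixed-rule “buy at $X, sell at $Y” program, while the class crudely stands in for a system that adjusts its own behavior based on past mistakes.

```python
def fixed_rule_trader(price, buy_at=90, sell_at=110):
    """Pre-determined rules: the behavior never changes."""
    if price <= buy_at:
        return "buy"
    if price >= sell_at:
        return "sell"
    return "hold"


class LearningTrader:
    """A toy 'learning' system: it nudges its own buy threshold
    after each mistake, so its future judgments differ from its
    initial programming (a crude stand-in for 'emergence')."""

    def __init__(self, buy_at=90.0, step=1.0):
        self.buy_at = buy_at
        self.step = step

    def decide(self, price):
        return "buy" if price <= self.buy_at else "hold"

    def feedback(self, bought, price_then, price_later):
        # Bought and the price fell: we were too eager, lower the threshold.
        # Held and the price rose: we were too cautious, raise it.
        if bought and price_later < price_then:
            self.buy_at -= self.step
        elif not bought and price_later > price_then:
            self.buy_at += self.step
```

The point of the sketch is that after a few calls to `feedback`, the `LearningTrader` makes decisions its original author never explicitly wrote down, which is exactly where the unforeseen side-effects discussed below come from.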
The more interesting part is not that they will improve over time, but the unintended side-effects of such improvement, which cannot be foreseen by anyone.
So perhaps the definition of a robot, as it pertains to whether there is a material difference from today’s computers, and thus whether robots should be treated as an exception in law, hinges on the ability to learn.
That said, Ryan Calo says in the paper that he is “hard-pressed to point to systemic innovations in doctrine that owe their origin to cyberspace … the Internet did not lead to a new agency”, which I disagree with. Fundamentally, law is constrained by distance, people’s willingness to respect its rules, and the power wielded by those enforcing them. On the other hand, “code is law” (and enforcer) on the internet, and I think the Internet just has not had time to develop a new agency… yet.
It looks like Kiddibot has blown her Asimov Circuits… Fleeeee!
If the past is any indicator (and in this case, it probably is not), then it will look like a cute, fluffy kitten and be more murderous than Hitler and Stalin combined. Yikes.
That little robotic oppressor has stolen my heart.