Elon Musk and Stephen Hawking call for ban on “autonomous weapons”

Soooo, we need a couple of dozen, to allow for a few spares, yes?

I think you must not have picked up on the fact that I was talking about the US. Russia is actually a little less imperialistic and anti-democracy than the US, as far as I can tell.

Bio-agent, semtex, whatever the payload is, quadcopter or whatever the platform: the point is that non-state actors can just as easily fully automate an act of war without going into full-on Hollywood True AI™. The kill decision can be automated as well, with a camera relaying back images to determine crowd density, etc.: if(conditions are right) then boom. You want that part because attacking an unpopulated area does not achieve the PR goals, much less the desired result of the act of war.

For less than the cost of one Lockheed KillBot, you could deliver a bunch of these across a whole country or across several countries.

I suspect that the failover-presidents get increasingly cautious as you go down the list. Everyone sort of expects that being president comes with some risk of somebody wanting to take a shot at you; but if you are the Secretary of Agriculture and you’ve seen the first 7 replacements need replacement you’ll probably elect to spend your temporary term in an undisclosed location, probably one of our old Cold War hideouts.


There is a mindset demonstrated in your comments that I think exemplifies a flaw in how many people believe we should approach this autonomous AI problem. While you point out the potential strengths of an AI (unbiased, non-aggressive, patient, unafraid), there is still a tendency to treat the AI as an extension of conventional human force–and therefore, human frailty. The robocop concept is conventional force-on-force policing, just with an AI. I ask: why should force-on-force solutions be necessary for autonomous AI policing?

As puppet masters, we are still responsible for the actions of our puppets. Arming an AI for any purpose extends our human flaws (bias, hate, fear) into that AI, as it does with the creation of any weapon designed to kill humans. The programmed limits we place on an armed AI’s ability to make mistakes are also the minimum measure of the potential wrongs that armed AI can commit. I believe that without weaponry, autonomous AI could still be very effective for policing (and yet still have the potential for great harm, but that is beside my point in this post).

Furthermore, human perception of a deployed, armed, autonomous AI in the neighborhood is an important factor to consider. Any demonstrably effective AI system will (rightly) be perceived as a threat by any human wishing to act against those “in control” of the AIs. But also, once ANY armed AI makes a fatal mistake, killing a (humanly) perceived innocent person, ALL autonomous AI will be perceived as a threat by most humans. Not many people are going to be willing to tolerate buggy, armed AIs walking around the neighborhood while Microsoft works on a patch for the “child with toy gun” bug.


I simply disagree, and you’ve not even begun to make a case for this assertion.

It’s not about cost, it’s about money put into R&D. Killbots would be more effective, reliable, and numerous with trillions of dollars of R&D as opposed to a few million spent by uncoordinated amateurs worldwide. That’s why I think a ban would actually make a huge difference.

Even if you’re going to maintain that amateurs could do as well as defense contractors (which I think is laughable), I think it would at the very least take them much longer, which is still a worthwhile goal.

You think quite highly of defense contractors. Just look at the F-35’s pains. And many more incidents, including but not limited to the lack of chrome lining on rifles sent into the highly corrosive ’Nam jungles.

The bunch of amateurs has quite a few advantages: the lack of secrecy and bureaucratic structures, for example. The lack of money is also sort of an advantage - the resulting designs will be much cheaper to build.


I don’t, really. And I’d even considered the possibility that the sort of corruption that leads to boondoggles like the F-35 would hamstring the defense contractors.

BUT.

Military contractors have access to technologies, resources, and expertise that amateurs simply don’t have, and they have a giant money spigot connected directly to the US federal gov’t.

I don’t doubt that amateurs can figure out how to integrate a quadcopter, a gun, a camera, and facial recognition software. I think it would take them longer than it would take trained professionals with a warehouse full of spare parts to choose from and experts in optics, facial recognition, firearms, and quadcopters to ask questions. And I think the resulting product would be less effective. These are reasonable assumptions, and you need more than “you think quite highly of defense contractors” and a mention of the F-35 to dismiss these sorts of considerations.


On the other hand, they get an internet full of resources and people to ask questions. The people don’t have to have security clearances, so you have a larger pool, which compensates for the fewer resources at least to a degree. The people aren’t limited to one country, so you get international cooperation. And you have a whole China full of spare parts. And you have overlap with non-weapons applications in optics, mechatronics, control systems, and image processing. All of this plays in favor of development agility.

Less effective, perhaps. But way, way cheaper - that can be guaranteed.


OK, but what is the point of this argument?

We’re considering two different possible future timelines:

  1. The world’s major superpowers enact a ban on autonomous weapons systems.
  2. No such ban is enacted.

Are you arguing that (1) is worse than (2)? Essentially identical? What exactly am I meant to be arguing against here?

I mean, I can take issue with the specifics of your argument – all of us are essentially engaged in sheer conjecture as to the capabilities of the industry that’s spent the last six decades producing weapons systems vs. the capabilities of amateurs working from blueprints they got on the 'net. I can certainly pick nits with the comment I’m responding to (for example, you assume international cooperation between the tinkerers, but the chances are the tinkerers would be members of different ideological groups with little in common). But I don’t see that going anywhere productive.

Is the fact of the possibility of non-state actors working on such systems enough to argue against a ban on nation states investing in the technology so as to prevent an arms race?


All obvious examples of automation in weaponry. I think one of the other differentiating factors, aside possibly from mobility and autonomy, is the idea that these newer systems are designed to attack soft targets.

The Japanese would probably disagree.


Only if they spectacularly missed the point of what I was saying.

I wonder: Do you feel that war should be restricted to humans fighting humans, or is it okay for a wealthy nation to field a warfighting force of robots (ostensibly AI’d) in lieu of humans? Also, given that the Geneva Conventions were established in international law to create a “humanitarian treatment” of war, do you think fielding an all-machine warfighting force against an all-or-mostly human force is humanitarian?

On Point had a good show on this general topic the other day, and in it a caller asked about Asimov’s venerable “Three Laws of Robotics”. Asimov was able to find any number of perturbations within those laws to bypass or short-circuit them, and despite it being a plot device, the troubling aspects remain. Also, if we have a computer-driven car that knows about dogs, street trash, and humans, and can make decisions about what to do when faced with one or more of each, then CS folks will in effect have to create the “Three Million Laws of Robotics”. My point is that the programmers already have to build such programs from the standpoint of “the worst” so that the robots can act accordingly in that particular situation.
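To make that concrete, here is a minimal, hypothetical sketch in Python (the object classes, responses, and priorities are all made up, not taken from any real driving stack) of what one tiny slice of those “Three Million Laws” might look like. Every object class needs its own hand-written rule, and anything unrecognized has to fall through to the most conservative response:

```python
# Hypothetical sketch of the "enumerate every case" problem; none of these
# class names or responses come from a real autonomous-driving system.

# Explicit, hand-written priority rules: higher number = more drastic response.
RESPONSE_PRIORITY = {
    "human":        ("emergency_brake", 3),
    "dog":          ("brake_and_steer", 2),
    "street_trash": ("slow_and_straddle", 1),
}

def choose_response(detected_objects):
    """Pick the most drastic response demanded by anything detected."""
    best = ("proceed", 0)  # default when nothing of note is detected
    for obj in detected_objects:
        # Unknown objects fall back to the most conservative action ("the worst").
        action, priority = RESPONSE_PRIORITY.get(obj, ("emergency_brake", 3))
        if priority > best[1]:
            best = (action, priority)
    return best[0]

print(choose_response(["street_trash", "dog"]))  # brake_and_steer
print(choose_response(["unidentified_shape"]))   # emergency_brake
```

Three rules already need a catch-all; combinations (a dog chasing a child, trash concealing a pothole) multiply the cases the programmers have to anticipate in advance.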

Well again, it depends on who is making what for what. A country making ED-209s for war isn’t going to give a shit about Conventions or Rights. They want to make machines that will kill their enemy at any cost.

Something like Chappie, I think, could be more humanitarian than humans (to paraphrase Blade Runner). In policing, I think the incidence of suspect deaths would fall to near nil. In war, I think it would dramatically reduce the collateral deaths of civilians, especially in the more recent wars where we are fighting non-uniformed combatants.

The Three Laws of Robotics are too simple, though they’re a good starting point. Take the first one: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” There are times when the only way to stop a human from being injured is to injure or even kill another human.

Of course again it all boils down to who is programming it for what. You can make a Chappie as fascist and authoritarian as the worst Gestapo.

Not to mention the production facilities that could be re-geared to build killbots–DIY doesn’t stand a chance.

That’s not all it comes down to. A few people have tried to point out to you a more fundamental problem – that the engineers who program these things cannot foresee all possible problems, that the ability for automated systems to self-correct on issues that weren’t explicitly programmed into them is essentially nil, and that until a human being has an opportunity to patch the code you have an armed and armored death machine walking around making faulty decisions.

Unintended consequences are nonetheless consequences. Automated systems tend to generate a lot of unintended consequences. Consider the “flash crash” caused by the financial industry equivalent of the killbots.
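As a toy illustration of that kind of feedback (this is not a model of the actual 2010 flash crash, just two made-up rules), consider how a pair of simple automated sell rules, each defensible on its own, can feed each other’s output into a runaway spiral:

```python
# Toy illustration only: "momentum" bots sell as the price drops, and
# "stop-loss" bots dump their position once the price falls below a threshold.
# Neither rule intends a crash, but together they produce one.

price = 100.0
stop_loss_triggered = False

for tick in range(10):
    sell_pressure = 0.0
    if price < 100.0:                      # momentum rule: falling price -> sell
        sell_pressure += (100.0 - price) * 0.5
    if price < 95.0 and not stop_loss_triggered:
        sell_pressure += 20.0              # stop-loss rule: one-time forced dump
        stop_loss_triggered = True
    price -= 0.5 + sell_pressure * 0.2     # small background drift plus sell impact
    print(f"tick {tick}: price {price:.2f}")
```

The drift alone is tiny, but each rule’s selling makes the other rule sell harder on the next tick; no one programmed “crash the market”, yet that is what the combined system does.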


Well, first off, obviously you would test it and make it reliable. Again, I would program with caution. So if it did come up against a scenario where it doesn’t have the ability to reason out an action, instead of the default being “attack” you just make the default “not-attack”. There are fucked-up situations in war all the time. A robot could have the luxury of standing there with a thumb in its ass until it gets new orders. Humans are panicky and reactionary.
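A minimal sketch of that “default to not-acting” idea, with entirely made-up names and thresholds: acting requires both an explicit rule match and high confidence, and anything ambiguous falls through to holding and waiting for new orders.

```python
# Hypothetical fail-safe-default gate; the names and threshold are invented
# for illustration, not taken from any real system.
from enum import Enum

class Action(Enum):
    HOLD = "hold_and_request_orders"   # the safe default
    ACT = "act"

CONFIDENCE_THRESHOLD = 0.99  # how sure the system must be before acting

def decide(assessment_confidence: float, rules_permit_acting: bool) -> Action:
    # Acting requires BOTH an explicit rule match and high confidence;
    # anything ambiguous falls through to HOLD, the opposite of a panicky
    # human default.
    if rules_permit_acting and assessment_confidence >= CONFIDENCE_THRESHOLD:
        return Action.ACT
    return Action.HOLD

print(decide(0.80, True))    # Action.HOLD - not sure enough
print(decide(0.995, False))  # Action.HOLD - no rule permits acting
print(decide(0.995, True))   # Action.ACT
```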

Again, I am not saying that it will have a 100% success rate. But can you look at all the police shootings recently and tell me with a straight face that a robot would fuck up more than humans do?

I realize this is counterintuitive. We think we are awesome, and we are, to a degree. But self-driving cars are getting to the point where they are way better drivers than we are. I am pretty sure there will come a point where only self-driving cars will be allowed on highways. But even then there will be some fatalities from unforeseen errors cropping up. I think that is acceptable when you factor in that the overall death rate from accidents will go way down. The goal is reduced deaths.


So we’re losing international law at the drop of a hat, here? If any single nation felt nutty enough, given the destructive power of our fabled ED-209, to openly disregard international law by actively making war against some other nation with weapons banned under that law (as opposed to a deterrent force like nukes, which are never meant to be used), I would think that nation would suffer the resultant slings and arrows of the international community’s ire.

As for reducing collateral death and damage, I’ll refer you back to @Brainspore’s comments (to your post):

I think he has a good point, but at the same time every encounter and every shot could be recorded. We could see the dirty details in HD.