Elon Musk and Stephen Hawking call for ban on “autonomous weapons”

I’m going to take a wild guess that you don’t work in software R&D or QA.

It certainly could. The bigger problem from this perspective isn’t so much the likelihood of a poor decision made in any specifiable context. Once you’ve specified the context, the machine is obviously more reliable.

The problem is that computers can’t second-guess their programming. Humans make poor decisions all the time, especially under pressure – that’s acknowledged. What they don’t necessarily do, and what machines definitely do, is persevere in making poor decisions time and time again even when confronted with all kinds of evidence that their decision-making is faulty. Human beings can step back and ponder whether the actions they’re taking are really conducive to the goals they’re trying to achieve. Computers just can’t do that.

Unless they’re explicitly programmed with some limited self-correction, which increases system complexity and therefore the number of bugs (and therefore the risk of deploying such systems).

I asked you before to look into the limitations of self-driving cars – what they specifically can’t do. They’re very much in line with the criticisms I make above.

Sure, self-driving cars in most situations are much better drivers than human beings. Except when a bridge gets heavily damaged by a storm, Google Maps isn’t updated with that information, and several hundred human beings die all at once because the automated cars didn’t realize the bridge was unsafe.

Much of your argument has depended on the limitations of human beings. At no point have I disagreed with you that human beings have limited capabilities. Nowhere have I suggested otherwise, which makes me wonder why you keep making the same point I’ve already agreed with.

What you don’t seem to acknowledge is that machines also have limitations, ones that are relevant to this discussion.

Edit: Perhaps it is presumptuous of me to say so, but it seems to me you often point out problems caused by excessive bureaucracy. We’re on the same page there – bureaucracy leads to faulty decision making for a number of reasons.

This is primarily because it tries to proceduralize decision-making – to make it as automatic as possible so as not to rely on the judgment of individual human beings.

The way machines make decisions is more similar to how bureaucracies make decisions than to how human beings make decisions. They continue to do stupid things because there are no policies in place for second-guessing the policies that are in place.

I hope that analogy helps to clarify my argument.


Which only makes me think of the (not so) recent spate of LEOs harassing, threatening, and otherwise killing innocent people. Once they’re dead, they’re dead, and we’re left with grisly HD video of the murder.

This discussion is not really about autonomous weapons. We’ve had autonomous weapons for decades in the form of land and sea mines. What the discussion is about is trusting autonomous weapons to tell the difference between friend, foe, and neutral.


Not what I said at all – quite the opposite, really. It’s not that a non-state actor would be able to guide something as well as Lockheed; it’s that a non-state actor could automate an attack system of some effectiveness. Think more along the lines of an asymmetrical-warfare, distributed-systems version of a terror attack.

I’ll be interested to see how the development of countermeasures plays out.

It seems to me that even an outright ban on AI weapons would still involve developing those same weapons in hardened facilities so that countermeasures can be created.

So ultimately, I don’t see an absolute moratorium on autonomous weapons even being possible. A bad-faith actor creating autonomous weapons in the face of a unilateral treaty would always be assumed to be a distinct possibility.

What do we do in the face of such an existential threat?

Just to take an example, what does a country such as the USA, a part of which is well known for promoting the idea of putting more weapons into situations where weapons are causing many deaths, do under that threat?

It, at the very least, develops them in secret in order to create countermeasures. At the very least.

Doesn’t stand a chance of what?

One of my many infuriating sayings is: “Everything you buy instead of make is merely somebody else’s DIY.”


Just a thought, but autonomous weapons aren’t alive, so they don’t have to protect their own lives by resorting to lethal force for defense. They can afford to try every possible non-lethal option first to subdue a person. In theory, fewer people would have to die, but of course humans are too kill-happy to even consider such an idea. ~sighs~ Why are humans so kill-happy?


Millions of years of a kill-or-be-killed-and-eaten environment. Brain evolution hasn’t caught up with the environmental changes yet.

Also called “being human”.

There will also be continuing development of homemade paintball gun turrets. Banning nonlethal versions is stupid and unenforceable (you’d have to jail everybody who dares to think about a water-pistol sentry gun), and replacing the paintball marker with a real gun on an existing design is less than difficult.

Some things are way too easy. Even in other banned fields: technically, every chemist who ever prepared the bromoacetone lachrymator is guilty of making a chemical weapon (it was actually deployed in World War I as White Cross, together with other such agents). This one chemical is so easy you can make it by accident; it is quite easy to forget that the solvent you’re using is reactive too, for example.

Countermeasures are good. I am thankful, for example, that computer viruses and other exploits are, as a class, out in the open rather than deemed a military secret (individual zero-day ones, yes, but not everything), with all the countermeasures out in the open as well.

Thought: a safe mine that dies over time. Use a little bit of electronics, with an exploding-bridgewire or slapper detonator as the initiator (the power circuit is pretty much the same as a photoflash). Once the battery dies, the thing is unable to fire. No more accidental activations decades after the conflict ends.

It is generally easier to push a behavioral update to robots than to humans. The retraining is easier, the effect is longer-lasting, and there are no egos to stand in the way.

In day to day operations, it is possible and sometimes even easy to get a program to behave more reliably than the humans it is replacing. Tested in the field of automating logistics.

Unless you push a software update. Then all the agents in the field change behavior. Good luck with instant retraining of all the deployed humans.

You are assuming smart humans. A common mistake.

You can choose the side the thing will fail to. Human soldiers tend to err on the side of shooting. Machines can be programmed to err on the side of not shooting far more easily than humans can be trained to.
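To make that concrete, here is a minimal sketch of a “fail to the safe side” decision rule. Everything in it – the names, the threshold, the assumption that a classifier reports a confidence score – is illustrative, not a description of any real system:

```python
from enum import Enum
from typing import Optional


class Action(Enum):
    HOLD_FIRE = 0
    ENGAGE = 1


def decide(confidence: Optional[float], classified_hostile: bool,
           threshold: float = 0.99) -> Action:
    """Hypothetical fail-safe rule: every uncertain path resolves to HOLD_FIRE."""
    if confidence is None:          # missing or degraded sensor data
        return Action.HOLD_FIRE
    if not classified_hostile:      # friendly, neutral, or ambiguous classification
        return Action.HOLD_FIRE
    if confidence < threshold:      # positive classification, but not certain enough
        return Action.HOLD_FIRE
    return Action.ENGAGE            # the only branch that fires
```

The point being that the default outcome is inaction: a human has to be trained (and retrained) to behave that way, while the machine simply cannot take any other branch.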

An autonomous car won’t get lulled into a false sense of safety by driving the same route for the thousandth time, and won’t disregard data from the laser scanner telling it that part of the road is missing. It can also get a signal from a smart bridge about the damage (infrastructure can be equipped with sensors fairly easily), or it can know that there is supposed to be a bridge there broadcasting that it is okay, and not getting that information will cause the vehicle to slow down as a precaution.

A vehicle-to-vehicle network can also provide information from a vehicle that crashed a moment before, lowering the death toll in the catastrophic-failure scenario to the occupants of a single car. That is one of the things these networks are being designed for - every car can be a scout car for the car behind it.
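A rough sketch of the “missing beacon” precaution described above, with all names, message formats, and timing values made up for illustration:

```python
import time

# Hypothetical route data: the map says a monitored bridge lies ahead
# and its beacon should be heard at least every few seconds.
EXPECTED_BEACONS = {"bridge_17": {"max_silence_s": 5.0}}


def speed_factor(last_heard: dict, now: float) -> float:
    """Multiplier on cruising speed based on expected infrastructure beacons.

    If a beacon we expect to hear has gone silent, slow down as a
    precaution instead of assuming the road ahead is intact.
    """
    factor = 1.0
    for beacon_id, cfg in EXPECTED_BEACONS.items():
        heard_at = last_heard.get(beacon_id)
        if heard_at is None or now - heard_at > cfg["max_silence_s"]:
            factor = min(factor, 0.3)  # crawl until the situation is confirmed
    return factor


# Example: the bridge beacon was last heard 42 seconds ago -> slow to 30%.
print(speed_factor({"bridge_17": time.time() - 42}, time.time()))
```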

The kind of judgement that, in the context of weapons, gives us friendly fire and My Lai?

I’m not decrying the creation of countermeasures, if that’s what you thought.

It seems inevitable that if the technology exists to create AI weapons then they will be created. Either by a criminal intent on using them or by security services seeking to find a way of counteracting them (or using them I guess).

Insofar as those weapons are let loose into public space, I can see why a moratorium on their use is a sane objective but there is no scenario I can imagine where they are never developed.

And if we are to imagine generally intelligent weapons, we’ve got to take that even further: if the thing is conscious, a whole bunch of new questions have to be asked. Like, what does the weapon think about being created as a weapon? Should we be keeping self-aware, intelligent entities in captivity in order to somehow test them in probably quite inhumane scenarios, over and over? What does it even mean to be self-aware but programmed to have an objective such as killing other entities?

Obviously we want to make a start with banning weak AI systems that would be designed to kill people. I see no problem with that apart from my previously stated point about the development of countermeasures that are also based on weak AI. Banning them from being used isn’t the same as banning them from being created.

The moment we start talking about Strong AI as applied to weapons, the whole ‘banning’ scenario just goes off a cliff so far as I can tell.

Perhaps that is its meaning of life?

Should we do animal testing? Do we have an alternative that provides comparable results?

Isn’t this a question to ask e.g. soldiers?

Philosophy is where time goes to die. Better spend it in the lab.

Experimenting on animals to find cures for humans… damn man, I’m torn on that question.

But that’s different than experimenting on human level, conscious entities.

It’s a moral mire.

Hehe, knew you would say that. Even some soldiers have decided not to follow orders and even some commanders remind their soldiers that they needn’t follow commands they are not morally comfortable with. It does happen… honest.

Hmm, my personal philosophy is that mind is more fundamental than time and space but that’s a different conversation.

This is one of my main problems with the trend towards autonomous weapons - it becomes easier to deal with the idea that you are killing other people if you aren’t sharing the same space as them, and it’s easier to remain unaccountable or approve missions when you don’t have to do unpopular things like declaring war or putting boots on the ground in order to complete those missions. Fewer people need to be involved in making the moral decisions, and it’s easier to ignore any dissent when you don’t need to hear it.


Most people understand DIY to be an activity of production done by small groups of people, rather than an activity of production performed with some relatively large percentage of the industrial capacity of a nation. NRA dipshit-ville’s .44 Magnum Colt Anaconda, kept to protect themselves against the Federal Government’s A-10 with its GAU-8 cannon, or a JDAM.

My uncle built a kit car a few years ago → loosely DIY. Ford, Toyota, fucken BMW → not DIY.

Or, perhaps you might read the comment thread of @wysinwyg that I quoted in my post?


That’s just like how we get ants. Huge, hungry metal ants!


More reliably in a specific context. I never argued otherwise; you’re simply ignoring the most important point of my argument.

All the agents that have an active link at the time, at any rate. And it also means that if a bug gets into the update, every agent gets that bug. Nothing could possibly go wrong here. /s

No, I’m actually not. The problem is that you can’t find a good counterargument so you’re misinterpreting my argument into a form that’s easier for you to disagree with.

Human soldiers actually tend to err on the side of missing because the vast majority of human beings don’t like to kill other human beings. This is actually a huge problem in combat training – how do you get human beings to try to kill each other?

But this is really irrelevant to my argument, and kind of a stupid point. Yeah, you can say “on ambiguous input, don’t fire.” Obviously. My problem isn’t with cases where the machine interprets the input as ambiguous. My problem is with cases where the machine doesn’t interpret the input as ambiguous when it probably should. Not every possible input can be accounted for, especially given the complexity of the input streams an autonomous combat system would have to be able to process.

The problem isn’t how the systems respond to perceived ambiguity. The problem is the possibility of the system responding to erroneous certainty.
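To put the “erroneous certainty” point in the same terms as the hypothetical threshold rule sketched earlier in the thread: the guard only filters inputs that the system itself flags as uncertain, so a misclassification that arrives with high reported confidence passes straight through. (Again, purely an illustrative assumption, not any real system.)

```python
# Hypothetical: the same fail-safe threshold as before.
THRESHOLD = 0.99


def guarded_decision(reported_confidence: float, classified_hostile: bool) -> str:
    if classified_hostile and reported_confidence >= THRESHOLD:
        return "ENGAGE"
    return "HOLD_FIRE"


# A civilian vehicle misclassified as hostile with reported confidence 0.997:
# the guard never triggers, because the model reports no ambiguity.
print(guarded_decision(0.997, True))  # -> ENGAGE
```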

It’s also the kind of judgment that stopped My Lai: see Hugh Thompson Jr. on Wikipedia.

Yes, human reasoning isn’t perfect – a point I already conceded at great length. But the difference between humans and machines isn’t imperfect judgment versus perfect judgment. It’s imperfect judgment versus no judgment at all. Computers can’t make judgments at all.

Autonomous weapon systems are, of course, just as capable of friendly fire. Maybe the enemy hacks the software-update link that you thought would be such a good idea and updates the machines not to identify the “friendly” RFID tags any more (and maybe to now recognize the enemy’s RFID tags as friendly). Maybe something in the atmosphere prevents the machines from reading the friendlies’ RFID tags correctly. One can imagine any number of technical problems either internal or external to the systems.

The point is that once the problem happens, the machines aren’t capable of second-guessing their programming. Once their programming says “definitely shoot,” they will shoot. And keep shooting until their programming says to stop or a human being intervenes somehow.

