Elon Musk and Stephen Hawking call for ban on “autonomous weapons”

Yeah, but they don’t. While there have been cases of soldiers in the Middle East deliberately targeting civilians, and they were tried and convicted, I am less worried about people setting out to do bad things on purpose, and more about “shit just happening”.

For example, you are on patrol in the city, think you see an armed militant, and fire, and it turns out to be something else. Or you return fire into a building where one militant is hiding, but end up shooting others who were just in the building. Those are mistakes at the front, and no one is faulted for them.

I am not saying that a robot soldier will never fuck up, but it will fuck up way less than humans do. It can afford the luxury of not acting, of being sure of its actions before making a move.

With your logic, should we allow self-driving cars? Because that is happening now, and honestly, I see it as mandatory in 50 years. Now, self-driving cars will still kill people. But if we reduce the number of fatalities by 90%, do we still condemn the self-driving car because it isn’t perfect, even though it is a marked improvement over humans?

There is a whole host of philosophical arguments we can make here. But I think it boils down to this: we think people are really good at decision making, but the statistics show that no, we aren’t. But maybe that is ok?

I would agree with a ban. However, Russia is now an overtly repressive nationalistic wannabe empire run by a gangster/secret police kleptocracy with scant regard for anything like international bans, laws or copyright, and along with China is renowned for industrial espionage on a vast scale. It would only be a matter of time before the profit-driven private contractors building these machines ‘leaked’ the designs to one or both, so there would be knockoffs being stamped out as fast as production line robots could weld. Shortly thereafter, they would be up for sale…and we get Terminator, but without the happy ending.

Civilians have already made quadcopters and other robots that carry payloads. Civilians have already made quadcopters and other robots that are autonomous. Civilians have already stuck guns on quadcopters and other robots. Even if every military in the world agreed not to build them, you can’t prevent the existence of self-guided killing machines. Every component in their construction is ubiquitous and unbannable.

Lockheed putting billions of dollars into autonomous systems scares me a whole hell of a lot more than a couple of idiots screwing around in a garage.

But even assuming you’re correct that there’s no difference between an arms race between three global superpowers and amateur tinkering, what’s your point? That it’s somehow not a bad idea?

But it’s completely different with autonomous systems?

Certainly not yet. There are still a number of problems with them that you can read about in different parts of the internet.

They’re exactly the sorts of problems that should make you worry about autonomous weapon systems.

We’re inconsistent at decision making; machines are consistent.

When the system is already making appropriate decisions, that dynamic favors machines. When self-correction is needed, it favors human beings.

Edit: Note that this means that when the context of the decision changes in such a way that what constitutes an “appropriate decision” changes, humans are fairly likely to recognize the change of context and change behavior accordingly. Machines are almost guaranteed not to recognize the change of context and then persevere in suddenly inappropriate decision making until intervention by a human being.
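A toy sketch of that failure mode, in Python (everything here is hypothetical: the sensor model, the threshold, and all the numbers are invented for illustration): a fixed decision rule stays perfectly consistent before and after a context shift, but its error rate against non-hostiles jumps, and nothing in the rule itself will ever notice.

```python
# Toy model (all numbers hypothetical): a fixed rule is perfectly
# consistent, but a context shift silently turns it into a bad rule.
import random

random.seed(0)

THRESHOLD = 0.7  # hard-coded "engage" threshold; the machine never revisits it

def machine_decision(signal):
    """Fixed rule: engage any contact whose sensor signal exceeds the threshold."""
    return signal > THRESHOLD

def false_engagement_rate(noncombatant_signal_mean, n=10_000):
    """How often the fixed rule engages contacts that are not hostile."""
    hits = sum(
        machine_decision(random.gauss(noncombatant_signal_mean, 0.1))
        for _ in range(n)
    )
    return hits / n

# Original context: non-combatant signatures sit well below the threshold.
print(false_engagement_rate(0.4))  # ~0.001: almost no false engagements
# Context shift (new uniforms, new vehicles): the rule is unchanged and
# still perfectly consistent, but now engages about half of all
# non-combatant contacts, and will keep doing so until a human intervenes.
print(false_engagement_rate(0.7))  # ~0.5: half of all contacts engaged
```

The rule’s consistency is identical in both runs; only the context moved, which is the whole problem.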

Believe it or not, the current state of the world allows substitution of multiple parameters for “X”.

That’s why they call him the Velour Fog.

I’ll just…ah…here’s a link:

Ecobots.

MangleBots? We can print fractals on them as they hack us to bits and shoot us full of lead.

I’m saying the big things don’t scare me. A working follow-on to the X-47B would scare me as much as any current fighter aircraft scares me - they all have the same massive destructive potential, but they are all controlled and I have zero expectation I will be bombed.

It’s the small things you have to worry about. Like a drone with a single-shot gun and the facial recognition pattern of the President of the United States. Preventing Lockheed from building things like that will do nothing to prevent them from being constructed.

As usual: in the USA. Outside the US, police are perfectly capable of responding with less-than-lethal force in (extreme) situations.

I bet it’s easier to do when your police force isn’t dealing with other Americans.

I like the sentiment, but it’s not gonna happen. I mean, I already have robotic area denial sentry devices in my back yard, to drive off a large family of skunks. I bought them on Amazon, and am prepared to escalate to a more unpleasant (but still completely non-lethal) device if the ones I’ve got now don’t do the trick. It’s just too tempting.

Unless your model is pretty much exclusively one of overpowered countries with huge robot budgets beating up on their hopeless inferiors, I’d be inclined to suspect that maximum-caution programming would go out the window pretty quickly.

Yes, a robot can at least be ordered to take enormous risks (where a human would decide that a possible court martial beats almost definitely getting shot); but a cautious robot is a robot that is more vulnerable to camouflage and signal jamming, and much less likely to land the first hit in a given encounter. A less cautious robot will kill more civilians, but be harder to trick and much more likely to suppress hostiles before taking fire from them.

In a conflict between even vaguely evenly matched forces, it’s going to be the twitchy, collateral-damage-insensitive killbots that remain (either because the cautious ones get destroyed, or because the safeties get relaxed in a hurry once losing starts to look like a real possibility). There will be an upper bound, imposed by the costs of fratricide, infrastructure destruction, and sheer waste of ammunition; but the equilibrium state is likely to be a lot more trigger-happy than one would like.

Think of the humble land mine, the original killer robot (possibly; naval mines might be earlier, I can’t remember): those have not the slightest interest in self-preservation, but their design tended to favor high sensitivity and lots of fiendish anti-tamper mechanisms, making them highly dangerous to even very lightweight non-combatants. Making them insensitive enough to respond only to the heaviest footfall or the pressure of an armored vehicle meant far too many false negatives and a relatively easy time for the opposition’s mine-clearing efforts.
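A toy threshold model makes that trade-off concrete (every distribution and number below is invented for illustration, not taken from any real mine design): there is no single trigger threshold that is both safe around light footfalls and reliable against the intended targets.

```python
# Toy pressure-trigger model (all numbers hypothetical): no one threshold
# is both safe for light footfalls and reliable against intended targets.
import random

random.seed(0)

def trigger_rate(threshold_kg, load_mean, load_sd, n=10_000):
    """Fraction of simulated loads that set off the pressure trigger."""
    return sum(
        random.gauss(load_mean, load_sd) > threshold_kg for _ in range(n)
    ) / n

# Hypothetical applied-load distributions, in kg.
NONCOMBATANT = (50, 15)  # children, livestock, light footfalls
TARGET = (90, 20)        # heavily loaded combatants

for threshold in (55, 70, 85):
    civ = trigger_rate(threshold, *NONCOMBATANT)
    tgt = trigger_rate(threshold, *TARGET)
    print(f"{threshold} kg trigger: fires on {civ:.0%} of non-combatants, "
          f"{tgt:.0%} of targets")
```

Under these made-up numbers, the threshold high enough to mostly spare non-combatants also misses a large share of the targets: exactly the “far too many false negatives” that push designs toward the sensitive, indiscriminate end.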

It hasn’t been tested much, aside from the occasional VP; but the current implementation of the Presidential Succession Act specifies 18 failover presidents, to be used one at a time as needed, to deal with that sort of problem (currently, we only have 17, because one of the positions specified is held by a non-native-born officer, so they can’t be used). It wouldn’t entirely surprise me if the delightful mass of Cold War ‘continuity’ schemes specifies an even deeper list, so that we could fight the commies to the last American Cockroach; but any such plans aren’t as public.

Won’t work. Stuff like this is too cool, and too easy (at the lower levels, at least), for a ban to be enforceable.

Way too cool, way too attractive even for the youngest of engineers.

It’s hard enough keeping human-operated weapons from killing the wrong people with distressing frequency, never mind autonomous ones.

Also, given that computer security is largely based on wishful thinking, you should simply assume that any such device will be pwned shortly after it arrives on the battlefield…

That’s part of the fun.
