A dozen googlers quit over Google's military drone contract

For what it’s worth, it’s a credit to Google’s corporate culture that employees retain enough sense of agency to protest like this. Most companies that do bad things – even defence contractors – are made up of people who wouldn’t act that way on their own moral dime; they have simply pulled out the mental cables that connect “what I do at work” with “what my employer is responsible for”.

3 Likes

For all of you who criticized Google: turn your phone off and take the battery out. Try to stay indoors, and if you do have to go out, wear a disguise and put a rock in your shoe. And for goodness’ sake, put a piece of tape over the camera on your laptop…

~Quietly ponders how much tinfoil it will take to stop 5g mm waves~

1 Like

I dunno - it may be MORE ethical to work on it vs not.

Reason:

Drone strikes and the like are going to continue with or without Google’s help. They are currently piloted by very fallible humans, who are prone to making mistakes, getting tired, and carrying internal biases and prejudices.

A well-working AI program can properly ID targets, doesn’t get tired and make errors because of it, and won’t ignore a protocol or make a poor judgement because of a “kill 'em all and let God sort them out” mentality. If properly, ethically programmed, it will end up with FEWER deaths than if run by humans.

Ideally.

A similar story comes from the origins of the main bombsight used by the US in WWII. Its maker hoped it would make bombing more precise, resulting in fewer civilian deaths.

2 Likes

To properly solve the issue of “mind control beams” or other waves you’d have to fully enclose your body in a type of Faraday cage/suit.

3 Likes

I don’t buy this argument. Google as a business would be directly responsible for developing technology that takes human lives, versus… not being involved at all. As a consumer, which company would I be more likely to do business with? The one not killing people.

6 Likes

How long until Google execs start soft-blackballing people in the Valley just for quitting? I can only see this getting worse. I doubt their resignations will do anything but bring out the worst in the company’s execs at this point. I’m not saying what they did was a bad thing, but if Google’s execs are willing to work with the US govt to help them murder people (albeit indirectly), then what difference does it make to them to ruin former employees’ prospects in the field?

2 Likes

He fishes poorly

1 Like

A program or any device can be designed with bias from humans. For example, many devices use sound or visual cues to indicate functionality or modes, which makes them perform poorly or not at all for people who can’t see or hear. Similarly, programs can have biases with respect to human beings, as we’ve seen with the iPhone X facial recognition software: it generalized face patterns from a very narrow group of people, which let some non-Caucasian users unlock each other’s phones (as reported with Chinese customers) or not unlock their own at all (as reported with some African customers). So we build technology all the time with bad assumptions and internalized biases. Making a machine take care of all the murdering isn’t going to take away those biases. Software in this regard would have to be vetted doubly hard, more so than the manual process is vetted via law.
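
To make the training-skew point concrete, here’s a toy sketch (purely illustrative; the group names, feature sizes, and sample counts are made-up assumptions, not anyone’s actual product code). A classifier fit on data dominated by one group can look fine overall and still do noticeably worse on the underrepresented group:

```python
# Toy illustration of training-set skew, NOT any real face-recognition system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic 'face embedding' features for one (hypothetical) group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 8))
    # Ground truth depends on feature 0 relative to that group's own baseline.
    y = (X[:, 0] - shift > rng.normal(scale=0.25, size=n)).astype(int)
    return X, y

# Heavily skewed training data: 950 samples from group A, 50 from group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=3.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Balanced test sets expose the gap: solid on group A, much worse on group B.
for name, shift in [("group A", 0.0), ("group B", 3.0)]:
    Xt, yt = make_group(2000, shift)
    print(f"{name} accuracy: {accuracy_score(yt, model.predict(Xt)):.2f}")
```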

6 Likes

A dozen googlers quit over Google’s military drone contract

Was it a baker’s dozen?

1 Like

I have to:

Now that I have that out of my system, good for them.

5 Likes

Well, again, the lives are being taken either way. Their efforts could REDUCE that number. For another example, imagine if no one had wanted to work on creating “smart” or guided bombs. Old-school carpet bombing, which usually claimed more civilian lives, infrastructure, and even friendly units, would still be the norm. Whereas now one can hit a military installation in an urban center without hurting the building next door.

Furthermore, you can’t really take that stance and lead a modern life. You use components, software, and technology made by companies that also sell the things used directly to build drones and their various systems, GPS and spy satellites, GPS units, computers, missiles, guided missiles, war planes, submarines, cameras, and a million other things used in modern warfare. Hell, even M&Ms used to be packed into soldiers’ rations, fueling their bodies so they could kill people.

Absolutely machines can be programmed to include biases. Some of your examples are usability and technical issues that come from not testing across a wide enough range of variables.

But the way I see it, one can at least inspect the programmed logic to confirm whether or not it is taking “ethical” actions. Many people are worried about machines being programmed for “evil”, when people are already doing that just fine on their own. Whereas a human can hide what is in their heart, a machine can’t hide its code entirely. And it removes the mistakes humans make all the time.

Yes, yes, ultimately we will completely outgrow the need for any of this. But that won’t be happening in the near future. I also acknowledge this could create Skynet.

2 Likes

If you play it on the Brutal setting.

3 Likes

Machine learning is almost completely opaque, actually. Engineers can coax algorithms to do certain things, but the means by which Google Photos can identify an apple in a photograph is not written down as any sort of discrete, auditable library.
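
If it helps to see what “not written down” means in practice, here’s a minimal sketch (the toy data, task, and layer sizes are my own assumptions): after training, everything the model “knows” is a pile of unlabeled floating-point weights, not a rulebook you could read through:

```python
# Minimal sketch of ML opacity: the trained "knowledge" is just weight arrays,
# not an inspectable list of rules. Data, sizes, and the task are made up.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))               # stand-in for image features
y = (X[:, :8].sum(axis=1) > 0).astype(int)    # some underlying concept

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X, y)

# The entire learned "explanation" of the concept is these numbers:
for i, W in enumerate(clf.coefs_):
    print(f"layer {i}: weight matrix of shape {W.shape}")
print("a few raw weights:", np.round(clf.coefs_[0][0, :5], 3))
# None of these values says "this means apple" or "this means wedding";
# you can only probe the model with inputs and watch what comes out.
```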

Um. I’m just gonna leave this here.

3 Likes

The protocols for actions shouldn’t be opaque. It should be possible to completely review each mission, including the “thought process” for how the system arrived at its conclusions and actions.
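
For what that kind of review could even look like, here’s a rough sketch (the field names, threshold, and log format are my own assumptions, not anything any real system uses): every automated call is written out with its inputs, model version, and confidence, and anything below the confidence bar is routed to a human instead of acted on:

```python
# Hypothetical decision-audit sketch; field names and thresholds are assumptions.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    timestamp: str
    mission_id: str
    sensor_inputs: dict   # what the system actually saw
    model_version: str    # which model/weights produced the call
    confidence: float     # how sure it claimed to be
    threshold: float      # the bar it was held to
    action: str           # what it recommended

def log_decision(mission_id, sensor_inputs, confidence, threshold, model_version):
    action = "flag_for_human_review" if confidence < threshold else "designate_target"
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        mission_id=mission_id,
        sensor_inputs=sensor_inputs,
        model_version=model_version,
        confidence=confidence,
        threshold=threshold,
        action=action,
    )
    # Append-only log a reviewer can replay after the fact.
    with open("mission_audit.log", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record

# A low-confidence call gets routed to a human instead of acted on.
print(log_decision("m-001", {"frames": 12, "classified_as": "vehicle"}, 0.62, 0.9, "v0.3"))
```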

Yes, I realize algorithms can be tweaked to get certain outcomes. But someone would first have to make those tweaks on purpose, versus tweaking toward the “correct” outcomes. I realize that machines are only as good as their makers, but I still think oversight and error correction are much easier with them than with humans. If we programmed them to give us the results we CURRENTLY get in the field, I think one would be aghast. But when humans do something wrong in the field, it is written off as a “judgement call”, or an unfortunate outcome that happened to bring back our soldiers alive.

  1. They will never be PERFECT. They will be considerably BETTER.

  2. We are still a ways off from autonomous vehicles - but we are getting closer every day. I fully expect to have my old ass shuttled around by a robot car.

1 Like

They need to integrate this with Google Home. “Google — Kill some brown people.”

1 Like

And I’m telling you that a mission executed by an “AI” powered by machine learning is not going to give you that, because there will be no one thing you could point to in the code to say “this is why the drone misidentified a wedding as a terrorist training camp”. If debugging and auditing machine learning algorithms was as straightforward as you claim, Google would be able to quickly identify and correct the reason why their photo recognition algorithm kept tagging black people as gorillas. All you can really do is tell the algorithm “no, that was a wedding, try again”.

With regard to the supposed ethical imperative to do this in service of killing fewer people, that’s just not how this technology is going to be used, and you know it. Maybe wives and children won’t be among those quasi-legally murdered from afar as often, but it’s foolish to think that “improved targeting” is going to result in the military merely maintaining its current kill rate. Justify it however you want, but making weapons of war more effective is not going to reduce the incidence of war. It just makes it easier to justify perpetuating it.

8 Likes

I just quit working for the DoD. I was just an accountant auditing programs, but I audited a defense contractor that created drones. I decided I had to quit when one of the projects I audited was about automated autonomous mobile target systems. The idea was to create an algorithm that automatically decides, on the fly, who is a target. Not every project goes to production, and R&D is R&D, but damn. Right? (It was recently all over the news, so I am not releasing any info that isn’t already public. So, feds? Don’t hunt me down, please.)

I guess I’m not the right fit for that job, because it horrified me as I went over the engineers’ explanation. They make these explanations for people who don’t have clearance. Even with the cut-down explanation, with all the clearance-required info removed, I could read between the lines. I don’t work there anymore, so maybe I’m biased, but I respect anyone who quit over something like this, because something like this made up a big part of my own decision to quit.

7 Likes

This was about the Boston Dynamics demo, but it’s relevant here as well:

[image]

5 Likes