Trained hawks.
Drones already do that, and the decisions they make are within rules described by a human. And as far as I know, there is still a human responsible for the decision to kill someone.
I understand the concern; I’m just more worried about the people wielding the tech and their motives than about the tech itself.
Also… If humans can make a true “AI” killing machine, then AI is real, and then what happens?
I am not sure it is a different question. Conducting a war by proxy is never going to be satisfactory for both sides, otherwise you might as well play a boardgame to decide a winner.
As I understand it a Reaper drone is capable of flying itself but does not take mission decisions without human guidance.
Even among these enlightened souls, a sarcasm tag is sometimes needed
Outside of the large swarm dynamics, I would like to know what decisions these drones are making on their own that current drones cannot already make. It wouldn’t be a question of hardware or programming; it would be a question of AI.
Someone must tell the drone where in the world to go, what type of target to look for, define rules for when to kill and when not to, decide what the next possible objectives can be, determine there is enough fuel and ordnance for the next goal, etc. The current tech can already do all of this, with a human in the loop when it comes time to pull the trigger. Are these newer drones taking that human out of the loop? If so, current drones could have the person removed too.
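To make that last point concrete, here is a minimal sketch of what a human-in-the-loop engagement gate might look like. Every name in it (Target, request_human_authorization, the confidence threshold) is invented for illustration; no real drone control system or API is implied.

```python
# Hypothetical sketch of a human-in-the-loop engagement gate.
# All names and thresholds are invented for illustration only.
from dataclasses import dataclass


@dataclass
class Target:
    target_id: str
    target_type: str   # e.g. "vehicle", "radar site"
    confidence: float  # classifier confidence, 0.0-1.0


def matches_rules_of_engagement(target: Target) -> bool:
    """Autonomous pre-filter: the machine applies human-written rules."""
    return target.target_type == "radar site" and target.confidence >= 0.9


def request_human_authorization(target: Target) -> bool:
    """The one step current doctrine keeps with a person: the trigger."""
    answer = input(f"Authorize strike on {target.target_id} "
                   f"({target.target_type}, {target.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"


def engage_if_authorized(target: Target) -> None:
    if not matches_rules_of_engagement(target):
        print(f"{target.target_id}: outside rules of engagement, skipping.")
        return
    if request_human_authorization(target):
        print(f"{target.target_id}: strike authorized by operator.")
    else:
        print(f"{target.target_id}: operator declined; holding fire.")


# Note that "taking the human out of the loop" is a one-line change:
# replace the request_human_authorization() call with True.
engage_if_authorized(Target("T-042", "radar site", 0.93))
```

The point of the sketch is that autonomy isn’t a new capability bolted on at the end; the human check is a single gate in an otherwise automated pipeline, and removing it is trivial in either old or new drones.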
The military typically requisitions new “tools” through research contracts. The typical contract spells out the expected deliverables, so the “researchers” know what they’re getting into ahead of time. Usually.
What the scream of a Stuka dive bomber was to WW2, the mosquito-swarm buzz of these devices will be to the next war (or the next phase of our current forever wars).
My granny used to say the sound of a doodlebug cutting out was the worst.