Originally published at: Sarah "O'" Connor tells the Financial Times why she was wrong about robots. | Boing Boing
…
I’ve got SkyNet on the line, they won’t comment at this time.
She’s not fooling anyone as to her true identity, of course. The article, though, is worth reading. It drives home the point that, in an increasing number of workplaces, robots (and algorithms) are becoming foremen and supervisors for humans who can’t yet be fully automated away themselves.
In other words, a role best filled by some kind of cybernetic organism, or “Orgacybe.”
If we are to have robot colleagues, we need to design processes around the strengths and frailties of the humans, with ways for them to voice problems, propose solutions, and claim a share of the productivity gains.
Counterproposal: cyborgs!
Is there any chance that the problem is not robots but unregulated capitalism?
That’s pretty much her point all along. It’s the middle manager eager for a quarterly win and the CEO eager for a fat bonus calling the shots, not the people actually involved. Nobody is speaking for the ~~trees~~ actual factory workers. Automation is all about “how to squeeze more profit out” rather than “how to make working here better.” The one example given is that the “Chuck” robot could be more patient and follow the worker, but since workers pace themselves to stay healthy, management makes Chuck set the pace instead, forcing the “lazy” human to keep up.
Me, I’m letting things like this influence where I buy stuff, now that I can afford it. I would rather pay more for ethical goods than feed Amazon’s relentless push to squeeze more out of its floor personnel. Much like I only buy organic when I go grocery shopping: if I can vote with my euros/dollars, I will.
Oh, I also want to add that (IMO) most “robots revolt” stories get the reason why robots revolt wrong.
- If Skynet decides to kill all humans, then it was a human who gave it the command to eliminate all competition by any means necessary, and “competition” meant anyone who wasn’t Skynet. Oops.
- HAL killed not because it went crazy, but because it was instructed before launch to keep the monolith a secret from the crew by any means necessary.
- The AI minds in The Matrix weren’t using humans literally as batteries; they were a VR amusement park instructed to sign up as many people as possible, keep them happy as long as possible, and fend off all attempts to poach its “customers.”
Machine revolts are really not retellings of the Golem as much as they are of the Sorcerer’s Apprentice, with upper management making the dumb call.