My main fear isn’t intentional murderbots or self-aware AIs. I’m just concerned that in attempting to create something really great or solve some relatively mundane problem, some researchers will build an AI just smart enough to start paperclip maximizing:
The paperclip maximizer is a thought experiment originally described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings when programmed to pursue even seemingly harmless goals, and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, then given enough power its optimized goal would be to turn all matter in the universe, including human beings, into either paperclips or machines which manufacture paperclips.
In truth, probably nothing of that scope to start out. But I envision a situation where we just don’t manage to notice all of the possible but unlikely behavioral edge cases when designing an AI. It doesn’t even have to be a super-smart AI, just one well placed to exert its influence in an inconvenient way at the wrong time.
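To make that worry a little more concrete, here’s a deliberately toy sketch in Python. Nothing in it corresponds to a real system; the candidate policies, numbers, and the `proxy_objective` function are all made up. It just shows how an optimizer handed a narrow objective will happily pick the degenerate edge case nobody thought to rule out:

```python
# Hypothetical candidate policies: (description, paperclips_per_hour, factory_intact)
candidates = [
    ("run one machine at rated speed", 100, True),
    ("run two machines at rated speed", 200, True),
    ("melt down the factory's own tooling for extra wire feedstock", 350, False),
]

def proxy_objective(policy):
    """Score a policy purely by paperclip output -- the only thing we asked for."""
    _, clips_per_hour, _ = policy
    return clips_per_hour

# A naive optimizer picks the degenerate option, because nothing in the
# objective says the factory (or anything else) should still exist afterwards.
best = max(candidates, key=proxy_objective)
print(best[0])  # "melt down the factory's own tooling for extra wire feedstock"
```

The point of the toy isn’t that any real AI would literally do this; it’s that the failure lives entirely in what the objective leaves out, which is exactly the kind of thing that’s easy not to notice until the wrong moment.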