What are the real risks we humans could face from a rogue AI superintelligence?


The rare few. :books:

1 Like

I think our disagreements are narrow. I don’t think that the problems you describe are new. We are already faced with complex self-configuring systems (see: any operating system install), beyond the capacity of any human to comprehend. We deal with this complexity through careful and rigorous testing and bound-setting. We also encounter systems where we must organize the work done by unconstrained intelligences (viz., other humans), and we have practices for dealing with those too: socialization, laws, mores, etc.

What is new is the vogue of believing you can set loose complex, fault-intolerant agents on your process and expect them to magically perform well. But this is also an ancient habit with a name: idiocy. It will soon give way to reality.

When someone uses the words risk and threat interchangeably, I can’t take them too seriously on subjects like this.

1 Like



At this point? I’m hoping the AI will take over. Surely to god they can’t do a worse job.

I don’t think we’re in any danger of seeing any form of artificial consciousness or sentience in the next 100 years, but I do think we’ll have pretty damn good emulation of human behaviour long before that.

I think there’s a definite danger that if we become culturally inured to casually creating and destroying things that emulate humans closely enough to confuse our brains, it could affect how we value real human beings.

We don’t, as a species, treat humans very well as it is. I suspect anything we create could pick up on that rather quickly; it’s not too far a leap to think we may accidentally create hostile synthetics simply because they follow our example.


I come at it from a rather different perspective. If Homo sapiens sapiens is 200,000 years old, it is only in the last 0.1% of human history, a veritable eye-blink, that we have even considered the possibility that all human beings might be human.

The concept of a human right is so utterly foreign to humans as we are born, that I find it a bloody miracle that we can learn the concept and then actually extend the idea to people that share neither language, nor appearance, nor location, nor religion with us.

If somehow we could make something that could follow an example, is there anything in creation that you would prefer it follow over the example of present-day man? Yes, we are crafted of crooked timber, but within the realm of my experience, I think we’d be incredibly fortunate if such a creature were capable of even the modicum of sympathy that we hold for other human beings.

However, the truth is that if we were to build some form of life, it would be a miracle beyond compare if it were capable of any empathy whatsoever. Not because of our oversight, but because the concept of empathy is so complicated that creating it will likely be forever beyond us.

Luckily, I think there’s no chance of humans being smart enough to create a self-aware entity, so the problem is moot. For better or worse, it’ll simply be a machine doing whatever it was programmed to do. AI isn’t anything special. It’s just another program that can do just as much damage as we choose to allow it to do.

The threat is not AI - it’s that we are choosing to trust our safety to tools as imperfect as the humans that created them. The prospect of a frayed wire causing a nuclear Armageddon is far more apropos than a buggy AI.

1 Like

The notion of imposing your own ethics upon another group because you have no guarantee that their activities will suit you sounds to me like a very sanitized version of slavery. What ethical obligation does any other organism or system have to do what is convenient for humans? None! And difficulty accepting this is why contemporary human societies routinely fail to coexist with pretty much all living things.

This topic was automatically closed after 5 days. New replies are no longer allowed.