Three pro-human laws of robotics

I’m more concerned about giving robots enough intelligence to perceive the world around themselves while giving them no rights and humans no responsibilities toward them. I know the whole ‘we are making a literal inhuman slave race’ argument, but while people are concerned about making sure robots treat us ethically, I want to make sure the machines are able to tell the person who wants to take a sledgehammer to them for the lulz that they do not wish to die.

The problem is we have spent our entire history finding reasons to declare other humans, only marginally different in appearance or geography or any of a thousand different things, somehow ‘less’ than ourselves. What hope does an AI have when it is genuinely, at EVERY level, ‘other’?