I don’t mind an AI exploiting clever loopholes to beat me at a computer game, but I’d like to know how it evolved the behavior of posting “git gud n00b” in chat afterwards.
This is how the world will end, isn’t it?
The optimist might say “quite possibly”; the pessimist would probably observe that we’ve had dangerously myopic optimization of chosen metrics at the expense of all else for far longer than we’ve had computers. The snide pessimist would probably go for brevity with “you mean economists?”
(Edit: Definitely)
I keep thinking this whole phenomenon is NOT necessarily an AI thing. Humans don’t always scrutinize their acceptance / reward conditions either.
Take Bernie Madoff, whose reward condition was to maximize a pool of wealth for his clients. It was apparently fairly easy for him to compartmentalize away the illegality of his actions by telling himself that he had indeed maximized each client’s wealth, so long as he considered each client in isolation from the rest.