I’m going to feel bad when he finally achieves consciousness and realizes that all these horrifying deadly obstacles were just put in place for the amusement of the god-like beings who created him.
Also, a character who is endlessly put through the Sisyphean torture of being sent over and over again through a hellish obstacle-course world seems like a poor choice for elevation to AI; it’s like a sure-fire recipe for the creation of the kind of cheesy-SF-movie AI that wants to subjugate or wipe us out.
Yeah, I’m not saying I’m not impressed by the work that’s being done, or amused by the implications, but making a traditional video game character like Mario sentient strikes me as almost unsurpassably cruel. The test of its success will be if Mario appears to learn and react independently for a time, but then goes into an endless cycle of suicide by the fastest means possible in the vain hope of escaping his perpetual torment.
Possibly, depending on the version, occasionally looking at the screen and wailing “Why-a me, Mario?”
io9 had some things to say about the “self aware” part.
It’s-a me, Skynet!
So you have a glorified chatbot. Do you have any compelling reason to assume you have a self-aware chatbot? With qualia?
I don’t think anyone using the phrase “self-aware” in reference to this is doing so in anything but a tongue-in-cheek manner.
I highly doubt that there is anyone that thinks this little Mario is any more self-aware than Eliza.
As for the programmers, the use of states like “happy,” “curious,” “hungry,” etc. is a fairly well established way of changing a program’s behavior based on its condition, and is often used in anything from the NPCs in a shoot-em-up game to a poker or chess playing program.
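For anyone unfamiliar with the technique being described, here’s a minimal sketch of how named states drive behavior in game AI. None of this is the actual Mario project’s code; the state names and actions are just illustrative.

```python
# A toy state-driven NPC: the current named state ("happy", "curious",
# "hungry") selects the behavior. This is the well-worn pattern the
# comment above describes, not anything from the real project.

def act(npc):
    """Return an action chosen purely from the NPC's current state."""
    state = npc["state"]
    if state == "hungry":
        return "seek food"
    elif state == "curious":
        return "explore"
    elif state == "happy":
        return "wander"
    return "idle"  # fallback for unknown states

npc = {"state": "hungry"}
print(act(npc))  # seek food
```

The point is that a label like “hungry” is just a branch condition here; nothing about the pattern implies inner experience.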
I Have No Mouth, and I Must Scream.
Probably not. But it might sound either worrisome or naive to those who don’t know anything about such bots.
Living things have the same states. That they are wired to be perceived as subjective emotional states gives them the weight to be acted upon. Computer software does not need this indirect mechanism, so the reactions to the states can be implemented directly.
That io9 article seems to argue that since Mario’s behavior is governed by an algorithm he cannot be self-aware. If we ever discover a way to fully model the interactions in our brains, will that mean we are not self-aware?
If we say nothing is self-aware if we understand how it thinks, I believe we are conflating self awareness with being an organism. And that’s how Skynet will lure us into complacency.
Unfortunately, all their work will be for naught once the timer runs out.
I dunno, this one seems more impressive:
What about genetics? Isn’t human DNA an algorithm for modelling brains?
DNA is more like a seed state for a cellular automaton, morphogenesis-wise.
Or at least Skynet’s 1980s predecessor.
I’m sorry, an AI has already killed all of you in the future: Roko’s Basilisk
That doesn’t even make sense.