I just lost the game.
Let’s take the “best” variant of the Basilisk Offer.
Box A, royal fun on a high-power project.
Box B, nothing.
A it is. Nothing to decide about.
We call it “binary Buddhism”.
the thing is, the only fucking reason to couple a bog-standard rudimentary knowledge representation system with a bog-standard rudimentary voice synthesizer is to give people reason to think in that direction.
so why else would they do it?
I’ve heard that in Plato’s People’s Republic the Interior Ministry forces can just stop you in the street, for any reason or none, and demand to inspect your qualia. Reports from the few dissidents who have survived The Cave suggest that it’s virtually impossible to present qualia authentic enough to satisfy them.
Iāve been raising qualia at home from qualia eggs.
Roko’s Basilisk is intellectually equivalent to Satan, or possibly to the concept of sin, in that in order to consider it seriously you have to believe in a lot of articles of faith, including time travel (if the reality we exist in isn’t simulated already).
I’m not convinced we live in a simulation yet, and I’m definitely not convinced time travel could ever work, so I don’t really care about the Basilisk. I’d rather think about how we could make the singularity happen at all before worrying about the horrors it could spawn. The singularity isn’t going to happen unless we make it happen. Not yet, anyway.
So people could hear it? Speech is just an output device, like any other. Likewise, with computer text on a monitor, I don’t feel it’s an illusion of somebody drawing on the other side. It’s just data translated for my senses. Bots are fun, and speech synthesis is fun, so why not? Rudimentary knowledge representation gets you unexpected connections. Rudimentary speech synthesis can offer linear predictive coding, which is deliciously coarse and buzzy.
it’s a very dumb output device for this purpose; it’s slow, hard to navigate, and requires otherwise unneeded hardware. it’s used only to wow the easily impressed.
there are no unexpected connections from their knowledge representation; there are a handful of slots (enemy, obstacle, etc.) that could be filled with a handful of sprites (goomba, wall, coin, etc.); you could write down every combination on an index card with a Bic, and have room to spare.
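to make the index-card point concrete, here’s a minimal python sketch (purely illustrative; nothing here comes from their actual code) that enumerates those slot/sprite pairings. the lists only contain the examples named above; the “etc.” would just add a few more rows:

```python
# Enumerate every slot/sprite pairing described above.
# Only "enemy", "obstacle", "goomba", "wall", "coin" come from the post;
# the real system's vocabulary is assumed, not known, so the lists stop there.
from itertools import product

slots = ["enemy", "obstacle"]          # "etc." in the post; extend as needed
sprites = ["goomba", "wall", "coin"]   # likewise

pairings = list(product(slots, sprites))
print(len(pairings))                   # 6 pairings: index-card territory
for slot, sprite in pairings:
    print(f"{slot} <- {sprite}")
```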
if they could mine the A*-search algorithm for insight, or even better design one which discovered them without so much a priori knowledge, i agree that would be cool. the speech synth would still be dumb though.
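for anyone who hasn’t met it, here’s roughly what the A* search algorithm looks like: a bare-bones grid pathfinder with a manhattan-distance heuristic. it’s a generic textbook sketch, not the actual mario agent’s planner, and the little level layout is invented for the example.

```python
# Bare-bones A* on a small grid ('#' = wall), Manhattan-distance heuristic.
# Generic illustration only; not the Mario agent's real planner.
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) steps from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    open_heap = [(h(start), 0, start, [start])]              # (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if g > best_g.get(node, float("inf")):               # stale heap entry
            continue
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap,
                                   (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None

level = ["....#...",
         ".##.#.#.",
         "....#.#.",
         ".####.#.",
         "......#."]
print(astar(level, (0, 0), (4, 7)))
```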
That’s not the choice. You don’t get to pick A; you have to pick either A&B or B.
I don’t think the boxes analogy works too well for the Basilisk problem, which is why it’s awkward. They’re two separate problems.
One is the established Newcomb’s box problem, where the crux of the issue is that in all possible worlds, picking A&B gives you a better outcome than picking B alone. If you’re going to pick B, then it would seem intuitively that you may as well grab that grand from A while you’re at it. (And then we make a paradox by magically making it so picking B alone is better.)
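To make that dominance point explicit, here’s a toy enumeration of the two possible worlds. The $1,000 in A matches the grand mentioned above; the $1,000,000 in B is the usual textbook figure and is just assumed here.

```python
# Classic Newcomb payoffs: box A always holds $1,000; box B holds $1,000,000
# only if the predictor expected you to take B alone. The $1,000,000 figure
# is the standard textbook amount, assumed for this sketch.
CONTENTS_A = 1_000

for predicted_one_box in (True, False):
    contents_b = 1_000_000 if predicted_one_box else 0
    take_both = CONTENTS_A + contents_b   # pick A&B
    take_b_only = contents_b              # pick B alone
    print(f"predictor expected B-only: {predicted_one_box!s:5}  "
          f"A&B -> ${take_both:,}   B only -> ${take_b_only:,}")

# In either fixed world, A&B beats B alone by exactly the $1,000 in A.
# The paradox comes from the predictor being (nearly) always right,
# so your choice is correlated with which world you are actually in.
```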
This is where the comparison breaks down: it just doesn’t map to the original problem, where it’s necessary for box A to be good. In the box representation of the Basilisk problem, box A isn’t necessarily a good thing. So you might have every reason to think picking box B alone is better than A&B: I’d rather do nothing than help bring about an evil AI.
I think trying to map the problem onto Newcomb’s box just muddles it. I think the Basilisk problem is simply this: if you believe that either (a) we’re in an AI’s simulation, or (b) an AI in the future can look back at all your decisions, then you must work towards bringing that AI into existence or He will be Angry.
In that sense, the better analogy is simply Pascal’s Wager, with the nice twist that “believing in God AI” is what causes the AI to be created.
Makes me think of Yahtzee Croshaw’s Mogworld.
Okay, I misread then.
Still, A sweetens the deal enough for me.
Donāt do it to Luigi.
He’ll look at the title screen and realize that it’s “Super Mario Bros.”
Then he’ll realize that he’s “player 2”, the one who’s only there when somebody can’t be Mario.
And then the only thing he’ll be feeling for the rest of his life is endless depression.
Nah, Luigi will be super obsessed with photorealism and plumbing intents, and keep releasing shared Mario routines 12KB at a time until plumbing is the super profession and life extension it was at first release.
It will mean our awareness has been modeled. Will the model be aware of our awareness, or its own?
If it’s a perfect model of a human brain with a working mind, then it should at least be self-aware. If it has the sensory hardware (or knowledge input) to know about other brains and minds, then it should also be able to be aware of our self-awareness. I don’t see how a perfectly simulated brain would be distinct in its function from one of our squishy biological meat brains.
That’s a tall order, because you’d have to model not only the brain but the lifetime of sensory data that goes into developing a human mind. Otherwise you just have the equivalent of a cloned brain floating in a jar.
So to get anything like a healthy human mind you either have to plug the simulated brain into a highly sophisticated android body and raise it from infancy or simulate an entire world for it to live in, Matrix-style.
Which is exactly what I was thinking. Besides, given the comedy of unintended consequences we humans love to generate in any number of ways (R&D, daily conversations, random interactions), the likelihood that we’ll awaken technologies that don’t necessarily do our bidding, or that follow our orders in ways we hadn’t previously considered, seems very real to me.
It’s either that, or maybe I’ve watched RoboCop too many times. I really don’t give a shit, as long as I get my 6000 SUX. It’s an American Tradition, y’all.