Perhaps, but keep in mind that Descartes’ original insight into the nature of science and measurement was given to him by an angel, so he may well have been amenable to theology.
Terry Pratchett’s The Colour of Magic was on Netflix for a while, but I don’t know if it is anymore. I’m sure it can be found somewhere online, one way or another.
Colour of Magic was okay, but the adaptation of Hogfather was much better IMHO.
These people say “Exploration”, I say “Psychosis”.
As in, this is a natural development from never having the ability to self-actualize; the sufferer continually grasps for more and is never sated, never channeled into new paths.
So they develop crazy fears like being a pawn in a game, and, with lots of resources at their disposal, act on this fear.
Here’s a handy guide for the different story arcs:
I like the DEATH arc best, but they all have their merits
eta: arc, not ark. argh.
Thanks! That’s still pretty intimidating, but knowing where to start is (almost always) my problem.
Agree, perhaps there’s a certain flavour to the introductory letter, the way it was pre-submitted for appraisal, and his non-answers to the objections. The guy’s a groundbreaking mathematician; he knows a circular problem can’t resolve itself.
The Meditations first establish a principle of radical doubt… which gets set aside once God is “proved” pretty early on. Descartes kinda bangs on about God, with all that stuff about him definitely, certainly not being a deceiver, maybe a little too much. If we really were to question everything, it could suggest (without saying)… question this.
Excellent post. Sorry, nothing to add. ‘Like’ just wasn’t enough. I’ve been thinking a lot about this, particularly what ‘breaking out’ means (and how you would interface with the simulation to do so).
(Not super pumped about ‘meritocratic’, Randian, ultra-capitalist tech types having even more ideological justification for not caring about inequality and poverty though).
Edit:
[quote=“GulliverFoyle, post:9, topic:87107”]
2) We’re supposed to find the cracks, and that’s the test to see if we get to visit or emigrate to the next reality in the Markov chain. In this case it’s dangerous not to look for the cracks, because whoever gets there first may no longer be dependent on the simulator staying turned on.
[/quote]
So essentially a sandbox/incubator to produce a specific type of artificial intelligence? Am I understanding you correctly?
Thanks!
Ditto.
There are so many different sets of characters that you can pick it up almost anywhere, and @renke’s guide is a good reference.
When I was young I started the Foundation series with… Prelude to Foundation. Asimov, why the hell did you name the fourth book Prelude?
(I turned out fine though)
Yes. In fact there’s a great story by Theodore Sturgeon about just that, but I can’t for the life of me remember what it’s titled. Stanislaw Lem included a more humorous version of the idea in Trurl and the construction of happy worlds, which can be found in The Cyberiad. Despite its humorous style, the book actually tackles a lot of really interesting philosophical problems, and for that reason it remains my favorite of his works.
Ahem: Binokel.
Tell that to Leon.
I’m still wrapping my head around this article from a while back and have yet to do a deeper dive, but point #3 brought this article to mind: The Case Against Reality. I can’t tell if a) Hoffman is a genius, b) Hoffman is off his cracker, c) the concepts are being poorly represented due to lack of detail/complexity, or d) it’s just completely beyond me.
Gefter: The world is just other conscious agents?
Hoffman: I call it conscious realism: Objective reality is just conscious agents, just points of view. Interestingly, I can take two conscious agents and have them interact, and the mathematical structure of that interaction also satisfies the definition of a conscious agent. […] It’s conscious agents all the way down.
So you could say that P ≠ NP is due to hardware restrictions on our hypervisor?
Interesting. I’ve bookmarked it for later reading.
Wouldn’t that assume that the overworlder could not imagine a ‘better’ organism popping out later on? I.e. that the species was incapable of further development?
I suppose it would depend on their motives and objectives. Interestingly, we may stand a better chance at guessing those than we would at, say, understanding true aliens, because we know at least something about the overworlder(s) (good word BTW), specifically what kind of simulation/simulacra they made (assuming, of course, that it is a simulation).
You also hit on another issue with simulations. We can no more automatically assume our own past and future are real than we can the world we perceive. Which is to say, the simulator, and us in it, may simply boot to the appointed time coordinates. Consider the Matrix. Yes, the Machines gave it a backstory that matched the real one up to the beginning of the 21st century, but they didn’t need to actually run it forward from the Big Bang, and it clearly wasn’t designed to have a future. This is just taking that idea one step further and saying that the very progression of time may be an illusion. Heck, time itself, and even causality, may work quite differently, or not at all, in the overworld.
Start at the beginning and you’ll be through it before you know it.