Yeah I have the most superficial knowledge of that, it sounds better when they say it though. The way I imagine it is kind of nightmarish…
Like so many great insights of youth, you soon find the idea is ancient and well explored. But it has a lot of potency when you've arrived at something similar yourself.
True, but I was thinking of something with a bit more content in my latter statement. Two things:
There really isn't a solid basis on which to decide whether an observed effect is the result of imperfect simulation or the result of a law of the universe. For example, suppose we find an anisotropy in some physical law, e.g. some forces are weaker in one direction than in another. Are we supposed to think that we've found the simplices in the FEM ( https://en.wikipedia.org/wiki/Finite_element_method ) of our simulation? Or maybe that's just how the universe is and our idea of what is natural is wrong. It really seems to be a consequence-free proposition.
The usual probabilistic argument about this is really, really messy* and it isn't clear why you couldn't take the same argument and induct, i.e. say that we are simulated by a simulated universe, which is simulated by a simulated universe, which is… And there really isn't any end to that tower, so none of it is "real" at all. (Whatever that means.)
*: For starters, how are we going to meaningfully talk about probability when we don't even have a vague idea about the possible space of states?
You are right. I should have clarified my point better - I was mainly arguing against the "ancestor simulation" proposition. There really isn't an argument to be made against the possibility that we are embedded in a "bigger" universe with different rules - because the properties and probabilities would be beyond our event horizon in that case.
But I do think there is a strong case to be made against the scenario (which was, I think, the one mainly presented in the article) where we are a sim run by a future version of ourselves - 'cause such a sim wouldn't manage to "fit" into the same reality it's supposed to be imitating.
It's not easy to make a consistent ad-hoc simulation. Imagine flying in the simulator: you meet a person and ask them who their great-grandfather was and what he did (or you read their diary). Then you fly to another point and ask another person the same question. Universes are so complex that it is extremely hard to ensure that both histories are consistent; there are almost certain to be contradictions between the two stories, especially if you start digging deeper. A simpler model is trying to simulate a chess game. Chess works under a set of rules, and only certain arrangements of pieces on the board are legal positions. If you set up an arbitrary mid-game position, the only way to ensure that it is legal is to compute backwards through the possible histories of moves and check that at least one of them is legal, and combinatorial explosion makes this almost impossible. To ensure a legal, non-contradictory arrangement, you have to simulate the game from the start.
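To make the chess analogy concrete, here is a minimal sketch (my own illustration, assuming the third-party python-chess library) of the "simulate it from the start" approach: a position reached by playing only legal moves is legal by construction, whereas certifying an arbitrary placement would mean retrograde-searching an exploding tree of candidate move histories.

```python
# Illustrative sketch, not from the original comment: a mid-game position
# generated by playing legal moves from the start is legal by construction.
# Verifying an arbitrary placement instead would require retrograde analysis
# over a combinatorially exploding set of candidate histories.
# Assumes the third-party python-chess package (pip install chess).
import random
import chess

def legal_midgame_position(n_moves: int = 30, seed: int = 0) -> chess.Board:
    """Play random legal moves from the initial position; every intermediate
    state obeys the rules, so the resulting position is guaranteed legal."""
    rng = random.Random(seed)
    board = chess.Board()
    for _ in range(n_moves):
        if board.is_game_over():
            break
        board.push(rng.choice(list(board.legal_moves)))
    return board

if __name__ == "__main__":
    board = legal_midgame_position()
    print(board.fen())        # a consistent snapshot...
    print(board.move_stack)   # ...with the full history that produced it
```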
Sure there is an end to the tower: there is a first mover (first simulator). Also, once your simulations start running simulations, it may increase the power requirements of your own simulation dramatically. At that point it may be time to terminate your simulation. Good reason to NOT start running our own simulations. However, any simulator can reason likewise, so it could be that nobody starts running simulations, since they can't be sure that they themselves are not in a simulation. Hence, if the simulation argument is true, then it is false.
This is just Elon Musk's fancy way of expressing his insecurity that, yeah, his life is going so well, but what if he wakes up and it all just vanishes?
Oh so that's what I have been seeing. Thanks for this term. I learned something new today.
I believe the late great Terence McKenna said something like "we have our eyes for collecting local information" and our brains for non-local information-gathering.
It might also be visual snow; it looks like the two main distinctions are whether it goes away in bright light (visual snow doesn't, CEV does), and whether or not you can ignore it if sufficiently distracted. Most of the time, I don't see it, but whenever it's dark enough, and whenever I think about the CEV, it shows up, which leads me to believe mine is CEV and not visual snow.
Of course, I'm not an expert on [insert pertinent field here], so I wouldn't be surprised if I've misdiagnosed myself.
Based on your description (this CEV totally goes away in light), and the wiki entry describing the various levels, I'm probably at Level 3. Long ago I thought I was losing my mind (some days, I still do, I suppose), but now it's all just kind of vaguely amusing, vaguely interesting movies that don't make much sense.
Over the years I've worked on…
… and sometimes I wonder if it was my early years with CEV (since I was 10 or something) that led to my interest in lucid dreaming.