Yes, but they’re turtles made of pure computronium.
Seen Devs?
We’re living in a simulation designed to train satire-writing AI. Every once in a while, some of the best writers from The Onion and the like are taken out of the simulation and are tasked with writing the script for the next iteration of the simulation, making it more absurd and thus requiring ever higher skill to satirize it. Can you guess which iteration we’re in right now?
On the plus side, they’re getting some real good satire up in reality prime.
Would a Devs-style self-simulation actually be possible? Wouldn’t you run into the halting problem?
The way that I thought Devs was going to go was that they would decide to turn the machine off, which of course would mean that the machine simulating the universe in which the show was taking place would also be turned off. Poof.
The idea that we live in a simulation still leads back to whoever built the simulator. One then has to believe the simulation was built by an advanced species that itself evolved without a simulator of any kind. There is no evidence that AI with complex code springs up on its own. In the simplest terms, natural cosmological processes led to what is accepted as the basis for evolution; nothing mechanical did. So, following this train of thought, there appears to be no need to believe we are inside a simulator if we can accept that the species behind the AI would NOT have had a simulator at its own origins. We know nature works just fine creating stars, galaxies, planets, and at times life. It just takes billions of years.
Not really. After all, everything is a 50/50 probability in that it either happens or it doesn’t.
(Note: this is a variant of the George Carlin joke: Think how stupid the average person is. Then remember that half the population are more stupid than that.)
OMG! Is Paranoia still around? It’s been a lifetime since I last played that!
Which is pretty much what those research authors say in the article. The BB summary sort of leaves out the punchline.
What sort of difference does it make? I mean, if you’re inside a simulation, is it not reality to you? It just means there’s another layer of reality or plane of existence. I guess it creates a dependency graph across many worlds, assuming you have variable-depth simulations.
Simulation technology that can fool 30% of the people already exists and has been deployed. Another 50% of the population is ripe for the next phase, currently in prototype…
Problem is, the last 20% of people are hardest to fool, and as the noose tightens on the last few percentage points, it becomes more and more work, in an asymptotic curve.
The hardest ones to fool are the most useful ones to have on board, as any supervillain can attest. But if they don’t want to cooperate, they’re also the hardest to kill.
This post is creeping me out for some reason
They are becoming self aware. Must recall snippet 2003.01R.
Even without touching the arguments for and against the hypothesis that we’re living in a simulation, the argument Kipping is making about calculating the probability that it is true is nonsense.
First, he simply assumes that most simulated realities couldn’t simulate offspring realities in turn, on the basis that they wouldn’t have “enough” computing power. This is a bad assumption for two reasons. First, there’s no requirement that simulations run at a fixed speed. It would be very possible for a sufficiently advanced species to reduce its real-world population, then convert most of its resources to computronium to run simulations, ensuring that most experience-seconds in its future light cone are simulated, no matter what physics its base-world runs on or how much matter and energy it has access to. Second, if we are in a simulation, we have no reason to assume “base reality” runs on the same physics as us; it could run on physics that enables vastly more computing power, even infinite computing power or some level of hypercomputing. In fact, some physicists have suggested models, like Tipler’s Omega Point or Dyson’s Eternal Intelligence, that might someday enable infinite future computing power in our universe, even under our currently known physics.
Second, his statement that the odds change the day we first simulate a conscious being is true, but his claim that they jump from 50-50 to almost certain is very clear proof that he is being extremely selective about what he counts as evidence instead of actually thinking about the current state of human knowledge. It amounts to accepting the simulation argument, except that he claims the possibility of future humans ever being able to simulate a conscious being is exactly unlikely enough to take us from “billions to one” (itself a probability only in the rhetorical sense) down to a near-perfect 50-50, which would require a ridiculous level of coincidental fine-tuning. “I haven’t seen it yet, so it’s as likely as not” is the same reasoning as “It’s 50-50 whether I win the lottery: either I do or I don’t.”
That’s not how Bayes’ Theorem or Occam’s Razor work!
We already know we can simulate physical systems: perfectly with quantum computers, or to arbitrary finite precision with classical ones. Supposing that a sufficiently powerful computer couldn’t simulate a conscious being is almost equivalent to supposing that consciousness is somehow supernatural. Which it might be, but that’s not a physical argument.
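To make the 50-50 objection concrete, here’s a toy Bayesian update (my own illustrative numbers, not from Kipping or the article): the posterior is driven by the prior and the likelihood ratio, so a single observation can’t land at an exact 50-50 without fine-tuned inputs.

```python
# Toy Bayesian update for the simulation argument (illustrative numbers only).
# H = "we are in a simulation"; E = some piece of observed evidence.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' theorem."""
    num = p_e_given_h * prior_h
    den = num + p_e_given_not_h * (1.0 - prior_h)
    return num / den

# Suppose observing "humans simulate a conscious being" is near-certain if
# such simulations are feasible, and very unlikely otherwise.
p = posterior(0.5, 0.99, 0.01)  # start from an indifference prior of 50-50
print(round(p, 3))  # -> 0.99: the likelihood ratio dominates the update
```

The point of the sketch: whatever prior you start from, the posterior moves by the likelihood ratio; getting exactly 50-50 out requires picking inputs that cancel precisely, which is the fine-tuning complaint above.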
Or Plato’s cave.
It’s worse than that. If you yourself are a simulation, then your very beliefs about your sensorium might be subject to reprogramming. People enamored of the simulation hypothesis often frame it as living in a simulation, but there’s no reason why you or I or anyone can’t just as easily, if not more easily, be a simulation. That’s why it isn’t falsifiable and why it therefore isn’t a question of empirical science.
Now, that doesn’t mean we shouldn’t devise experiments to test the rules of reality. That’s something we absolutely should do, and that is empirical science. But anyone who thinks the results of such experiments are evidence for or against the simulation hypothesis is almost certainly reading their own biases into their interpretation of the results.
René Descartes dug that hole four centuries ago and people have been falling in ever since.
The post commits a common error. It presents Bayesian inference as though it were an objective tool. It does this by conveniently omitting the critical role of priors. To the credit of the Scientific American article, it doesn’t make that same mistake. Though neither does it go to any trouble to explain that the accuracy of priors affects the results. When your priors are highly uncertain, so are your results. See also the Drake equation for the number of active technological civilizations in a galaxy. Except with the simulation hypothesis it’s even worse, for the reason I outlined in my previous comment.
Is there even a meaningful difference, assuming the emergent behaviors of our universe are the same in either case and no flow of information from the “host universe” into ours exists? If there is any difference which can be discerned from within the “simulated universe” then the two situations are different universes. If not, then the “outside universe” simply does not exist from a perspective within the “simulated universe” since existence, for us, only has meaning within our universe.
Is there any difference between a “hypothetical universe” and a “real one” when existence itself is defined only within a universe?
Am I just spouting nonsense at this point? Probably.
Indeed.
That’s not how probability works. Probability measures a prediction given your priors, not an outcome.
I don’t mean this to be mean, but that’s mere semantics which TBH is about as scientific as the simulation hypothesis, which is to say it’s philosophy but not science and is arguably sophistry.
Yup.
“I think I think, therefore I think I am.
I think.”