Which is why trying to make an argument about why it’s not possible is like trying to make an argument about why there’s no god. If there’s a next level above us, we most likely have no hope of understanding how it works from down in this level.
If a universe were a cellular automaton, an observer without knowledge of the rules used to generate it would come up with particle-based physics to describe the interactions.
Greg Egan’s novel Permutation City goes into this idea in some depth. At one point humans simulate a cellular-automaton-based universe which develops intelligent life. The inhabitants come up with their own physical theories and reject the reality presented by the humans.
Why do you think that to be the case? Are you using Laplace’s rule of succession or something similar? According to that, the probability of rain on a day, having seen a single no-rain day, is 1/3. But in my view, the assumptions it’s based on do not make a lot of sense under such low-sample (but high-information) circumstances.
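For reference, Laplace’s rule of succession estimates the probability that the next trial succeeds, after seeing s successes in n trials, as (s + 1) / (n + 2). A quick sketch (the function name is mine, just for illustration):

```python
from fractions import Fraction

def laplace_rule(successes, trials):
    """Laplace's rule of succession: estimated probability that the
    next trial succeeds, after observing `successes` in `trials`."""
    return Fraction(successes + 1, trials + 2)

# One observed day with no rain: estimated probability of rain tomorrow.
print(laplace_rule(0, 1))  # 1/3

# With no observations at all, it falls back to the uniform prior.
print(laplace_rule(0, 0))  # 1/2
```

As the single-day example shows, the rule never assigns probability 0 or 1, which is exactly the behaviour that seems questionable when one observation carries a lot of information.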
This really taps into the (quite philosophical) topic of what “probability” really is, how it should be interpreted, and how it should be used.
You seem to be arguing for the so-called classical or frequentist viewpoint: that probability should only apply to repeatable phenomena / events, and not to single, one-off events. Bayesians in fact think the latter is possible and useful, using probability to express a subject’s uncertainty about a certain event.
That conflates probability with uncertainty. Although probability involves uncertainty, uncertainty does not always imply a probability. For example, if I have evidence that it will rain in a desert which for some weird reason only exists for one day, I can make an educated guess that it will rain, and I may even say it will probably rain (where “probably” is used colloquially), but this is not the same thing as a mathematical probability, which requires a sample set larger than one.
Thank you for understanding what I was saying and expressing it more succinctly, and hopefully more clearly, than I did. I was getting frustrated with my inability to convey it to @knappa, and since I know him to be a sincere and established regular, I put that down at least somewhat to be a failure to communicate on my part.
I also now understand what @knappa was saying about Bayesian and Frequentist. I’m afraid that philosophy of math is outside my wheelhouse; I go by the consequences of the math, and they point to what you’re calling Frequentist. I assume Bayesians name themselves after a faulty understanding of Bayes’ theorem. I’ll look up their arguments later in case I’m misunderstanding them.
A Bayesian would instead posit that probability is a very powerful tool for modeling uncertainty, and the more fundamentalist Bayesians even argue that whenever a subject is uncertain about whether an event will occur, it is possible to determine their probability for that event (given enough time and resources).
I think you go too far by dismissing the Bayesian understanding of probability and considering your own (frequentist) understanding as “mathematical”. The rules for working mathematically with probability do not actually differ between the two interpretations, and in my view there is no way to label either as more “mathematical”.
This way of understanding probability really lies at the heart of the divide between frequentist and Bayesian statistics. The debate between the two has a long history and is still ongoing; you will find statisticians fervently arguing for either side.
Btw, thanks for having this discussion, I really enjoy being on BBS!
It just seems the logical outcome of the math. I’m not really familiar with either school of thought.
I clearly need to at least read up on the debate, if only to clarify whether they have anything to say that might alter my understanding. As a scientist, I welcome the opportunity to be wrong.
Likewise. And thank you for, as I said, breaking through what I feel had become an impasse in the discussion.
Setting aside issues like “Why do I care?” and “Is this even a meaningful question, given that my expectations for the future evolution of this universe aren’t affected in any way by the answer?”…
“Computing works differently outside the matrix” isn’t some bizarre, obscure edge case. It isn’t actually in any way difficult to imagine a simulating universe with arbitrarily vast computing power compared to what we possess, or with simulating agents creating our universe in particular. It doesn’t even require the simulating entities to be conscious, let alone intelligently choosing to run our program.
Example: Conway’s Game of Life is a simple set of rules, and is Turing complete, meaning it can implement real Turing machines with arbitrarily vast memory, a zero error rate, and infinite run time. Imagine that the “real” universe is an infinite Life grid initialized in a maximum-entropy state of live and dead cells. Somewhere out there it would be essentially guaranteed that there are Turing machines of arbitrary complexity computing all possible programs with all possible initial conditions, including infinitely many simulating our universe (to arbitrary but finite precision, if it turns out our observed laws of physics would not be computable if taken to infinite precision). And Turing completeness shows up in simple systems all the time.
Alternatively, consider that our minds and computers both run on the physics of our own universe. The only fundamental operations/instruction sets we execute are the ones our physics allows. A sufficiently outside-the-box assumption about higher-universe physics might allow mathematical operations that are literally unthinkable for us, such that our math axioms and the resulting theorems look like over-constrained nonsense. Even within the math we know, though, “computing works differently” is hardly an unexplored field. Correct me if I’m wrong, but I don’t think you need to go all that far up the arithmetical hierarchy before simulating quantum or classical universes becomes trivial.