Ah yes, pilot wave theory! I found that quite compelling myself, from a lay perspective. I got to the point of trying to figure out what consequences it had for the interpretation of delayed-choice quantum eraser experiments, but I couldn’t really get my head around it.
Mostly I don’t accept that the kinds of simulations being described would have any real utility for the simulators. Obviously, as the computing power available to us increases, we will simulate things - physical processes, biological processes, and even, I’m sure, brains of various types - in order to learn more about them. But note that the simulations Nick Bostrom posits are not physical simulations at the quantum level; they are simulations of billions of minds, with a virtual reality presented to their senses. So maybe we’re talking about different things, but this is the conception of the idea that I’m most familiar with, and the one that I think most people have in mind when they talk about “glitchy” Academy Awards ceremonies and whatnot.
The reasons for running such a simulation are not really stated to my satisfaction, but the suggestion seems to be that a post-human civilisation would run them in order to learn about their ancestors (i.e. us). However, I don’t see how such a simulation could add anything significant to their knowledge. A simulation can only play out the assumptions it was built from, so it would be incapable of telling them whether their knowledge of us (from other sources) was accurate or not, and would therefore lack any real purpose.
A second purpose that is hinted at is “entertainment”. I assume this would be more like infotainment (come see how our primitive ancestors lived before the grey goo converted the entire solar system into computronium!). Fine, I guess… But would that require, or even benefit from, simulating all of our billions of minds on an ongoing basis? The same proposed techniques for presenting a plausible reality to us could probably be used to present a plausible reality (including apparently conscious agents) to any visiting post-humans, far more cheaply, and without many of the ethical concerns that I’m going to get to next.
Regardless of the purpose of the simulation, I think it is unavoidable that creating one and running it for any length of time would be an act of horrific and intentional cruelty. We struggle today, as a species, to alleviate even the most egregious and avoidable suffering, but I think most of us recognise that other people do suffer, and that we should try to avoid inflicting further suffering on them. I would hope by the time we are in a position to consider simulating billions of conscious beings and condemning them to suffer, we will be morally sophisticated enough not to do so.