is essentially what MWI is, as far as I can remember (it’s been a few years). The difference is that rather than assuming that measurement results in probabilistic collapse of all but one state in a superposition, I assume it is a deterministic fact that future portions of my current wavefunction will experience different outcomes, and because the measured states are orthogonal, those portions become non-interacting and non-interfering after decoherence. The whole point is that decoherence prevents the “other worlds” from interfering with our own, and preventing decoherence is key for quantum computing.
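Here’s a minimal numpy sketch of that claim (a toy two-level “environment” of my own choosing, not anything from a real device): the interference term between the two branches is the off-diagonal element of the qubit’s reduced density matrix, and it’s proportional to the overlap <E0|E1> of the environment’s records. Orthogonal records kill it, which is exactly the decoherence story above.

```python
import numpy as np

# Toy model: a qubit in superposition entangles with a 2-level
# "environment". The off-diagonal element of the qubit's reduced
# density matrix -- the term responsible for interference -- is
# proportional to <E0|E1>.

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)

def reduced_coherence(E0, E1):
    """|off-diagonal| of the qubit's reduced density matrix
    for the joint state alpha|0>|E0> + beta|1>|E1>."""
    joint = alpha * np.kron([1, 0], E0) + beta * np.kron([0, 1], E1)
    rho = np.outer(joint, joint.conj())        # joint density matrix
    d = len(E0)
    rho = rho.reshape(2, d, 2, d)
    rho_sys = np.trace(rho, axis1=1, axis2=3)  # partial trace over environment
    return abs(rho_sys[0, 1])

E0 = np.array([1.0, 0.0])
print(reduced_coherence(E0, np.array([1.0, 0.0])))  # identical records: ~0.5, full interference
print(reduced_coherence(E0, np.array([0.0, 1.0])))  # orthogonal records: 0.0, branches decohered
```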
I do object to using this to try to explain quantum computing regardless, no matter how “cool” it sounds. It’s essentially guaranteed that just about everyone who hears it will misinterpret it.
Brochures for investors/shareholders and marketing, mostly.
“Yes! We do Quantum! We are cutting edge!”
And don’t knock it, it’s an application for quantum computers that actually produces results.
It’s a bit of an overreach to assume that quantum computing can solve all problems. It’s not that I buy into Penrose’s entire argument; it’s more that I don’t think NP problems are all going to be solved by mere parallelism. In fact, a significant part of what makes some problems hard is that they’re decision problems, not just optimization problems. Meaning they’re a question of “should I take this path or the other?” and not a question of “which path is best?” (technically it’s both, but it’s really the former, since the latter presupposes the former as its condition of choosing). It’s why the list of NP problems includes not only the Traveling Salesman Problem but also the Graph Coloring Problem (these two oddly have related approximation algorithms, and boy are they hard to grasp).
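To make that concrete, here’s a hedged brute-force sketch of the decision version of graph coloring (my own toy example, not a serious solver): the question is a yes/no “does any valid k-coloring exist?”, and the k**n candidate assignments are where the exponential cost lives.

```python
from itertools import product

def is_k_colorable(n_vertices, edges, k):
    """Decision problem: True iff some assignment of k colors
    to the n vertices leaves no edge monochromatic."""
    for coloring in product(range(k), repeat=n_vertices):  # k**n candidates
        if all(coloring[u] != coloring[v] for u, v in edges):
            return True
    return False

# A triangle needs 3 colors, so the 2-color question answers "no".
triangle = [(0, 1), (1, 2), (0, 2)]
print(is_k_colorable(3, triangle, 2))  # False
print(is_k_colorable(3, triangle, 3))  # True
```

The optimization version (“what’s the fewest colors needed?”) just wraps this decision question in a loop over k.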
Even if you have an infinite number of quantum computers working together via entanglement to collapse to the correct solution, for some NP problems they’re still dealing with that same decision-versus-optimization constraint. It’s why decryption with quantum computers seems feasible: most encryption schemes use products of primes to create the ciphertext, which means there exists some kind of algorithm that can use parallelism to find the prime factors that are candidates for decryption. Whereas the coloring problem might not be something you can crack by throwing infinite quantum computers at coloring all possible graphs (assuming we don’t put an upper bound on the graph size or the vertices it contains; if we did, it’s something you could solve even with a classical computer, given enough time). Basically, quantum computing is novel, but it isn’t magic.
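For what it’s worth, here’s a sketch of the number theory behind why factoring-based decryption is a quantum target. The only step a quantum computer actually speeds up is finding the period r of a^x mod N; below, the period is found by brute force (the classically expensive part), and the standard classical post-processing extracts the factors. The function names are my own.

```python
from math import gcd

def find_period(a, N):
    """Smallest r > 0 with a^r = 1 (mod N); this brute force
    stands in for the quantum subroutine."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    """Classical post-processing: turn the period into factors."""
    assert gcd(a, N) == 1, "a must be coprime to N"
    r = find_period(a, N)
    x = pow(a, r // 2, N)
    if r % 2 == 1 or x == N - 1:
        return None  # unlucky choice of a; retry with another
    return gcd(x - 1, N), gcd(x + 1, N)

print(shor_classical(15, 7))  # (3, 5): the period of 7^x mod 15 is 4
```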
Again, quantum computers do not work via parallelism. That is not the specialness of quantum computing. Parallelism also does not give you any special power for complexity classes; it doesn’t change the scaling. For certain problems, quantum computing does. Whether it could for all is an open question. The relationship between BQP and NP is still open (and they are not the only interesting complexity classes one might bring into the discussion).
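A back-of-the-envelope illustration of that point, using unstructured search over N items (the problem Grover’s algorithm solves with roughly (π/4)·√N queries): p parallel classical machines divide the work by a constant, while Grover changes the exponent of the scaling itself. The p = 1000 figure below is an arbitrary assumption for the comparison.

```python
from math import pi, sqrt

p = 1000  # number of parallel classical machines (arbitrary)
print(f"{'N':>12} {'classical':>12} {'p machines':>12} {'Grover':>10}")
for exp in (6, 9, 12, 15):
    N = 10 ** exp
    # Classical search: ~N queries; p machines: ~N/p; Grover: ~(pi/4)*sqrt(N).
    print(f"{N:>12.0e} {N:>12.0e} {N / p:>12.0e} {pi / 4 * sqrt(N):>10.0e}")
```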
The only way you could solve these problems in full generality is to assume there’s a kind of determinism in all quantum systems (like that in the Quantum Zeno Effect). But that in itself isn’t proven, and I’d say it’s going to take a mountain of evidence to establish determinism in any quantum system. Until I see evidence that props up this idea, I’m going to say that quantum computing is a big bridge to nowhere.