My like is for the first half of your comment. I’ve heard some interesting arguments, but I remain unconvinced on the importance of the question.
However, if your ethical system depends on whether you’re living in a simulation, it has much, much deeper problems. As far as anyone has ever been able to test, the laws that govern our universe are deterministic and apply everywhere without fail. Why should morality/responsibility depend on what, if any, (computational?) substrate instantiates those laws? Seems a bit like saying the truth of 2×2=4 depends on whether you use a calculator.
Also, the second half of your comment would seem to invalidate the first. If the existence of moral responsibility depends on whether we are living in a simulation, then answering that question should be one of our species’ absolute highest priorities. Whereas if we say the question is unimportant, that ought to imply that its truth or falsehood has no important consequences.
Whatever metaethics you subscribe to (wherever your ethical principles come from), things that are important are so at least in part because we as sentient, sapient beings find them so. Should my behavior depend on whether I believe I am living in a simulation, or not? In either case, why? If an important question with profound impacts is undecidable even in principle, something very strange is going on. (Why postulate that what I value depends sensitively on a free variable each of us can set however we want in our own beliefs without contradicting one another? Why not cut out the middleman and say we can each choose what to value directly?)