I would say more neutrally that humans are inherently subjective, so even with infinite time to deliberate on the most ethical choice, they would still be limited in the choices they could make. AI would likewise be inherently subjective, constrained by its original programming and by the limits of its ability to understand the world and to assign value to choices and potential casualties.
Only an omnipotent, omniscient deity, one who could prevent the trolley problem and its choices from arising in the first place, would be able to make a “rational” choice.