They were thinking when they deliberately disabled multiple safety features in their self-driving cars to avoid false positives that might make their executives carsick during a demo, and then killed someone with it. You know, mistakes.
Seriously, if you read the incident reports for the Uber self-driving fatality, I am not sure it is so far removed from this. It is cold, clinical, and random compared to messy, brutal, and targeted, but some engineers sat down and wrote code that said "when we detect a life-threatening emergency and need to stop immediately, instead keep going."
All the stuff about how the computer didn't have a proper classification for someone walking a bike, or whatever, is true, and those are legitimate "bugs". Testing in that situation is reckless but probably not premeditated. That confusion took up much of the 5 seconds before impact.
However, 1.1 seconds or so before impact, the self-driving computer decided that an emergency stop was needed. Because of false positives and an imminent executive demo, they had added an unconditional one-second wait before reacting to emergencies, so the car only started braking 0.1 seconds before impact. Even with the confusion, that one second was likely the difference between a low-speed, non-fatal crash and what happened.
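To make the timing concrete, here's a rough back-of-envelope sketch in Python. The 1.1 s detection time and the one-second suppression window are from the account above; the travel speed and deceleration figure are my own assumptions for illustration, not numbers from the incident report:

```python
# Back-of-envelope look at why a one-second "action suppression" window matters.
# Only the 1.1 s / 1.0 s timing comes from the comment above; the speed and
# deceleration are assumed values for illustration.

initial_speed_mph = 40.0    # assumed travel speed before any braking
decel_mps2 = 7.0            # assumed emergency deceleration (~0.7 g)
time_to_impact_s = 1.1      # hazard flagged this long before impact
suppression_s = 1.0         # unconditional wait before acting on it

MPH_PER_MPS = 2.23694
initial_speed_mps = initial_speed_mph / MPH_PER_MPS

def impact_speed_mph(braking_time_s: float) -> float:
    """Speed at impact (mph) after braking for braking_time_s seconds."""
    v = max(initial_speed_mps - decel_mps2 * braking_time_s, 0.0)
    return v * MPH_PER_MPS

# With the suppression window, braking only starts 0.1 s before impact.
with_delay = impact_speed_mph(time_to_impact_s - suppression_s)

# Without it, braking starts the moment the emergency is detected.
without_delay = impact_speed_mph(time_to_impact_s)

print(f"impact speed with the 1 s delay: {with_delay:.0f} mph")
print(f"impact speed without the delay:  {without_delay:.0f} mph")
```

Under those assumed numbers, the delay is the difference between hitting someone at roughly the original travel speed and hitting them a great deal slower, and pedestrian fatality risk falls off steeply as impact speed drops.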
Furthermore, the vehicle in question had a factory-installed automatic emergency braking system from Volvo, which had also been disabled and which would likewise have prevented the fatal crash.