But there is an explanation. After the Tay fiasco, MS decided to position itself as a leader in ethical AI and actually hired a team to back that up. Fast forward to 2022 and the hype building around plausible sentence generators: MS fired their ethical AI team (late summer '22), rolled out their Sydney bot in public beta, were told in December 2022 that the system was recommending people kill themselves and ignored it, then spent billions on plausible sentence generators and sacked thousands of people, many of whom had jobs that involved stopping fuck-ups like this.
So there are reasons and culpability and agency at work here.