It’s not a neutral statement once it’s released to and impacts the public. Algorithms like this are trained or have filters hard-coded into them by their creators. The problem in this case is not anti-Semitic intent by the code engineers but their naive and privileged assumption that “kill all [insert ethnic or religious group here from standard corpus]” would never show up as a trend at all. The deeper problem is that when it does show up, Twitter embraces it as just another opportunity for user engagement and marketing targeting.
Twitter could consult with experts in mass media about such matters – anyone who’s worked in broadcasting could have helped them figure out what to watch out for and shunt into a reporting area not available to the general public.* It could and should accomplish something like this through a combination of algorithmic and human moderators, both properly trained.
[* I can see Twitter recording the trend for internal use, and perhaps releasing it as a group of related trends through the reputable news media. The point is not to enable a toxic feedback loop with the users, or to profit from it.]