I would fully disagree. We targeted civilians outside of war zones, at functions like weddings, funerals, and other explicitly civilian gatherings. We (the US) had the intent to kill civilians, and our tolerance for civilian casualties was an order of magnitude larger than what the IDF is using.
If anything, it's not comparable because what we did was worse.
LLMs hallucinate all the time. The hallucination is the feature. Depending on how you design the neural network, you can get an AI that doesn't hallucinate. LLMs have to hallucinate, because they're mimicking human speech patterns and predicting one of many possible responses.
A model that tries to predict locations of people likely wouldn't work like that.
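To make that distinction concrete, here's a rough sketch with toy numbers and made-up names (nothing here reflects any real system): a generative model samples one of several plausible continuations, so a fluent-but-false output is always on the table, while a discriminative model just returns a single score you can threshold and audit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy next-token distribution over a tiny vocabulary. A generative model
# samples ONE of several plausible continuations, so it can emit a
# confident-sounding falsehood whenever it has nonzero probability.
vocab = ["Paris", "Lyon", "Berlin", "Atlantis"]
logits = np.array([2.0, 0.5, 0.3, 0.1])       # hypothetical scores
probs = np.exp(logits) / np.exp(logits).sum()  # softmax

samples = [vocab[rng.choice(len(vocab), p=probs)] for _ in range(10)]
print(samples)  # occasionally "Atlantis" -- a hallucinated answer

# By contrast, a discriminative predictor (e.g. a logistic-regression-style
# scorer for "is this person at this location?") outputs one calibrated
# probability rather than free-form text.
def location_score(features: np.ndarray, weights: np.ndarray) -> float:
    """P(person is at location | features), under a toy linear model."""
    return float(1.0 / (1.0 + np.exp(-features @ weights)))

print(location_score(np.array([0.4, 1.2]), np.array([0.8, -0.3])))
```

The point isn't that the second kind of model is accurate, only that its failure mode is a miscalibrated score, not an invented answer.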