Check this excerpt from Facebook's postmortem on the live streaming of the New Zealand mass shooting:
We use artificial intelligence to detect and prioritize videos that are likely to contain suicidal or harmful acts
Of course, Facebook's AI did not flag the video. They go on to say:
AI systems are based on “training data”, which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video.
One reason they offer is that mass shootings are rare. However, another reason is more interesting:
Another challenge is to automatically discern this content from visually similar, innocuous content – for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.
How does one differentiate between a video game shooting and a real-world shooting?
For video surveillance use cases, at least, this particular confusion is not a concern: surveillance cameras do not capture video game footage. But how well video analytics will work in identifying an active shooter is still an important open question.
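Facebook's worry about reviewers missing real videos among thousands of false flags is, at bottom, a base-rate problem: when the target event is extremely rare, even an accurate detector produces a review queue dominated by false positives. A minimal sketch, with all numbers hypothetical rather than taken from Facebook's systems:

```python
# Illustrative base-rate sketch (all numbers hypothetical): even a detector
# with high recall and a low false-positive rate floods reviewers with false
# flags when the event it looks for is vanishingly rare.
def review_queue(n_streams, prevalence, tpr, fpr):
    positives = n_streams * prevalence      # streams with a real event
    negatives = n_streams - positives       # innocuous streams (e.g. game footage)
    true_flags = positives * tpr            # real events correctly flagged
    false_flags = negatives * fpr           # innocuous streams wrongly flagged
    precision = true_flags / (true_flags + false_flags)
    return true_flags, false_flags, precision

# One real event per million streams, 95% recall, 1% false-positive rate.
tp, fp, prec = review_queue(n_streams=1_000_000, prevalence=1e-6,
                            tpr=0.95, fpr=0.01)
print(f"{tp:.2f} real events flagged, {fp:,.0f} false flags, "
      f"precision {prec:.5f}")
```

Under these assumed rates, roughly ten thousand innocuous streams get flagged for every real event, so the fraction of flagged videos that matter is a tiny fraction of one percent. That is the arithmetic behind "our reviewers could miss the important real-world videos."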