Interesting article. I have not had much occasion to work with analytics beyond VMD. I do recall having to "train" an Avigilon camera a few years ago. Does deep learning on a camera require human feedback?
Deep learning does not require human (user) feedback, other than at the engineering level, where feedback can improve accuracy. That said, Avigilon's "teach by example" feature does allow for human feedback: play back stored video and mark a classified object as "person true", "person false", "vehicle true", or "vehicle false".
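To illustrate the kind of loop "teach by example" implies, here is a minimal sketch of human-in-the-loop correction. The class names and the threshold-adjustment rule are my own illustrative assumptions, not Avigilon's actual algorithm; the point is just that operator true/false marks feed back into the classifier's decision boundary.

```python
# Hypothetical "teach by example" loop: an operator reviews classified
# objects from recorded video and marks each one true or false, and the
# corrections adjust a per-class confidence threshold. This update rule
# is an assumption for illustration, not the vendor's real method.

class TeachByExample:
    def __init__(self, step=0.02):
        # Start each object class at a neutral confidence threshold.
        self.thresholds = {"person": 0.5, "vehicle": 0.5}
        self.step = step

    def accept(self, label, confidence):
        """Return True if a detection passes the current threshold."""
        return confidence >= self.thresholds[label]

    def feedback(self, label, correct):
        """Operator marks a classified object true (correct) or false."""
        if correct:
            # Confirmed detection: relax the threshold slightly.
            self.thresholds[label] = max(0.0, self.thresholds[label] - self.step)
        else:
            # False detection: tighten the threshold to suppress repeats.
            self.thresholds[label] = min(1.0, self.thresholds[label] + self.step)


model = TeachByExample()
# Operator flags five false "vehicle" hits while playing back stored video.
for _ in range(5):
    model.feedback("vehicle", correct=False)
# The vehicle threshold has now tightened above the initial 0.5,
# while the untouched "person" class is unchanged.
```

In practice the feedback would retrain or fine-tune the model itself rather than just shift a threshold, but the operator-facing workflow (review, mark true/false, classifier improves) is the same idea.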
Undisclosed #2 said a lot of what I was going to say. It's also worth noting that Avigilon does self-learn, but it takes several hundred events, if I recall correctly, before it is fully trained. In areas where that would take a very long time, such as low-traffic remote sites, the teach by example feature is used to speed up training.
It's also worth pointing out that part of Avigilon's self-learning is determining what is background in the scene. They don't have specific manual calibration settings like some systems do, where you have to measure points in the camera's FOV to calibrate distance and identify where the horizon sits in the scene. The self-learning is intended to figure that out automatically.