After reading several instructive articles on video analytics detailing the pros and cons of camera-side vs. server-side placement, I didn't notice any mention of the effect of running analytics pre-compression vs. post-compression.
Since only the camera (or encoder) can do pre-compression analytics, I would think this could be a major factor in performance, depending, of course, on how much compression is involved.
Practically speaking, with no compression artifacts, wouldn't it be possible to set a much lower alarm threshold, say for motion detection?
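
To make concrete what I mean by the threshold, here is a minimal frame-differencing sketch in Python/OpenCV. This is just my own illustration, not how any particular camera or VMD product actually works, and the threshold values are invented for the sake of the example:

```python
import cv2
import numpy as np

def motion_detected(prev_frame, curr_frame, pixel_thresh, area_thresh):
    """Flag motion when enough pixels change by more than pixel_thresh."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    delta = cv2.absdiff(prev_gray, curr_gray)          # per-pixel change
    changed = np.count_nonzero(delta > pixel_thresh)   # pixels above the noise floor
    return changed > area_thresh

# Pre-compression: pixel_thresh only has to clear sensor noise
# (hypothetical values, for illustration only).
PRE_COMPRESSION = dict(pixel_thresh=8, area_thresh=500)

# Post-compression: pixel_thresh must also clear quantization and
# blocking artifacts, so small real changes below that level are missed.
POST_COMPRESSION = dict(pixel_thresh=25, area_thresh=500)
```

The point being that the post-compression threshold has to sit above the artifact noise as well as the sensor noise, which is why I'd expect pre-compression analytics to catch subtler motion.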
Related to this, I have heard that most analytics reduce any megapixel image to D1 resolution before processing. If that is the case, then given two compressed streams from the same camera, one high-res/high-compression and the other low-res/low-compression, would the low-res stream be more accurate, with fewer false alarms (FA), than the high-res one, at VMD for example?
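
For clarity, here is what I picture that downscaling step looking like. Again just a sketch under my own assumptions: I'm taking "D1" to mean NTSC 704x480, and real analytics pipelines may resize differently:

```python
import cv2

D1_SIZE = (704, 480)  # NTSC D1 (width, height); my assumption for "D1"

def prepare_for_vmd(frame):
    """Downscale a decoded frame to D1 before running motion detection,
    as I understand many analytics packages do."""
    # INTER_AREA averages source pixels, so compression artifacts in a
    # high-res/high-comp stream are smoothed but not removed, while a
    # native low-res/low-comp stream never had them in the first place.
    return cv2.resize(frame, D1_SIZE, interpolation=cv2.INTER_AREA)
```

If that's roughly right, the resized high-res/high-comp frame still carries its (averaged) compression artifacts into the detector, whereas the native low-res/low-comp frame does not, which is why I'd guess the latter might produce fewer FA at VMD.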