Perhaps a noob question, but I'm looking for some input / brainstorming on fighting spider webs. We have about 200 cameras deployed currently, with another 250 in the process. We have constant problems with spider webs triggering our motion detection (within NX Witness), and since we only record high res on motion, this eats up a lot of storage. Many of our cameras are at small locations with minimal staff. We have an auditing process currently, but I'd much prefer to attempt some automation. We do not have a staff member actively monitoring cameras 24x7. We're fairly limited in terms of physical solutions due to budget constraints (we're a non-profit overseas), so moving to PIR or external IR isn't much of an option. We're very happy with our cameras currently. And we've considered some of the options mentioned here (How to keep spider webs off your security camera lens - VueVille), but the need for detection would still be there.
We're aiming for 6-12 months of archive, so this issue really adds up over a long period of time.
Currently, we have a local staff member do a manual footage review weekly, checking day and night for image clarity, as well as archive consistency. They submit a form, and if a camera needs cleaning it gets assigned to someone (or they just do it if possible).
And then I review cameras at least once a month as I'm able.
We've also tried tweaking our motion detection settings, but for most cameras it hasn't helped. (We're using IR on all cameras.)
I'd really like to attempt to automate this... on a budget. I have two main approaches:
(1). Image comparison.
I'm able to pull snapshots/thumbnails via the NX Witness API relatively easily, so I can iterate through all of our cameras, grab a daytime and a nighttime image, and compare each against a daytime and nighttime "baseline". This is problematic for a lot of reasons, but I could average the scoring over 2-3 days and only alert if the issue has persisted. I'm currently using DeepAI's Image Similarity tool, which is very easy to set up, but discerning a threshold for the score I receive is difficult, especially as a single baseline score for all cameras in all locations. I can adjust the threshold level per camera... but I'm hoping to avoid having to tailor things individually. I'm gathering test data now to determine feasibility. It's worked great for BIG issues: camera moved or damaged, or 75%+ spider web coverage.
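For what it's worth, the scoring step doesn't necessarily need an external service. Here's a minimal sketch of the baseline-comparison idea using plain numpy: the `frame_similarity` function and thresholds are my own illustration, not NX Witness or DeepAI functionality, and in practice the frames would be the grayscale-converted snapshots pulled from the API rather than the synthetic arrays used here.

```python
import numpy as np

def frame_similarity(baseline: np.ndarray, current: np.ndarray) -> float:
    """Return a 0..1 similarity score between two grayscale frames.
    1.0 = identical; lower means more of the frame differs from the
    baseline (web coverage, a bumped camera, a blocked lens)."""
    a = baseline.astype(np.float64)
    b = current.astype(np.float64)
    return 1.0 - float(np.abs(a - b).mean()) / 255.0

# Synthetic stand-ins for a baseline snapshot and a "webbed" snapshot,
# just to show how the score moves as coverage increases.
clean = np.full((8, 8), 120, dtype=np.uint8)
webbed = clean.copy()
webbed[:4, :] = 200  # top half of the frame obscured

print(round(frame_similarity(clean, clean), 3))   # 1.0
print(round(frame_similarity(clean, webbed), 3))  # 0.843
```

A per-camera pair of baselines (day/night) with a score like this would at least make the threshold a number you control end to end, though it shares the same core problem: picking one cutoff that works across all sites.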
(2). Network bandwidth comparison.
I've also considered comparing network bandwidth usage against a baseline. Our network monitor _should_ make this easy to automate, but I've not yet had the time to script it out. It would really come down to how I'm able to average out the bandwidth. If motion kicks on high res for a total of 10 minutes, I'm not bothered. But if it totals up to several hours, then with our number of cameras that's a huge impact. So it would be about finding a proper threshold.
Both of these options are fairly easy for me to set up, but I fear both are inherently susceptible to false positives: office moved a desk, truck parked in view, rain, bugs, etc.
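The "average over 2-3 days and only alert if it persists" idea from option 1 generalizes to both approaches and is cheap to implement. A hypothetical sketch (the function name and the 3-day window are my own choices): a transient like rain or a parked truck clears within a day, while a web keeps flagging.

```python
def persistent_alert(daily_flags, days_required=3):
    """Alert only if the last `days_required` daily checks all flagged.
    daily_flags: list of booleans, one per day, oldest first."""
    recent = daily_flags[-days_required:]
    return len(recent) == days_required and all(recent)

print(persistent_alert([False, True, False, True]))  # transient noise -> False
print(persistent_alert([False, True, True, True]))   # persisted 3 days -> True
```

The trade-off is latency: a real web goes unreported for the length of the window, which seems acceptable given a weekly manual review is the current baseline.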
So I just thought I'd ask for some feedback and thoughts. Are there other data points that I should be considering? Other tools that would be able to analyze footage for problem detection?