Subscriber Discussion
Create Video Analytic Test Samples As Screening For IPVM Testing?
Makes sense.
What I was suggesting was that instead of IPVM performing the tests directly, as this is cost- and time-prohibitive, IPVM determine the most typical use cases and create or crowd-source a series of progressively more difficult video footage along the same scenario.
For example, cross-line detection along a fence.
Level 1: Empty scene, great camera positioning, object larger and closer to the camera and thus easier for the VA to detect while filtering out nuisance alarms.
Level 2: Same as before, but object farther away and thus smaller (i.e., represented by fewer pixels on target).
Level 3: A scenario that introduces shadows, or a scene where shifting clouds let the sun shine through, causing a massive change across the entire scene.
Levels 4 through 8: progressively more and more complex.
This way, over the next 8 years, we can see if any VA manufacturers want to have a go at the challenge, and we can then discuss their results.
You could make it even more challenging: require that a VA manufacturer produce a passing result across all levels using the same configuration, without significant tuning of settings for each challenge.
This way the burden of initial testing would be on the various manufacturers; only after they have succeeded at a number of scenarios and difficulty levels would IPVM set up independent testing to validate their submissions.
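The leveled challenge above could be captured as simple test-suite metadata, with the "same configuration across all levels" rule enforced programmatically. A minimal sketch; the field names, level descriptions, and pass criterion here are my own illustration, not an IPVM format:

```python
# Hypothetical schema for the progressive cross-line-detection challenge.
# Level descriptions paraphrase the scenarios proposed above.
TEST_LEVELS = [
    {"level": 1, "scene": "empty, ideal camera position", "target": "large, near"},
    {"level": 2, "scene": "empty, ideal camera position", "target": "small, far"},
    {"level": 3, "scene": "moving shadows, sudden global lighting change", "target": "small, far"},
    # Levels 4-8 would layer in weather, occlusion, crowds, etc.
]

def passes_challenge(results, same_config=True):
    """A submission passes only if every level was detected AND the vendor
    used one configuration across all levels (no per-level tuning)."""
    return same_config and all(r["detected"] for r in results)
```

The `same_config` flag encodes the stricter variant suggested above: a vendor that tunes settings per level fails outright, regardless of detection results.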
NOTICE: This comment was moved from an existing discussion: Manufacturer President: "Customer Is Now Very Angry"
That's an interesting idea, as an initial screen.
I need to think more about it but it's worth discussing further. What do other members think?
It was asked "what PC-based analytics allow processing of recorded video?", to which I would answer: all that I know of.
Most analytics process video at quality levels from QCIF to CIF. Some process at a higher resolution, but those are typically embedded in a camera and, from my time in that business, rarely higher than D1.
Processing live video at high resolution requires an immense amount of processing power, and usually that amount of detail isn't required. Compare that to the human brain of a 3-year-old and John's comment is very valid.
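To put the resolution point in numbers, a quick back-of-envelope comparison of per-frame pixel counts for the standard frame sizes mentioned (PAL D1 assumed; the "workload scales with pixel count" assumption is a rough simplification):

```python
# Per-frame pixel counts for common analog/digital video resolutions.
resolutions = {
    "QCIF":  176 * 144,    # 25,344 px
    "CIF":   352 * 288,    # 101,376 px
    "D1":    720 * 576,    # 414,720 px (PAL; NTSC D1 is 720x480)
    "1080p": 1920 * 1080,  # 2,073,600 px
}

# Rough relative analytic workload vs. CIF, assuming cost scales with pixels.
for name, px in resolutions.items():
    print(f"{name}: {px:>9,} px  ({px / resolutions['CIF']:.2f}x CIF)")
```

Even under this crude model, full-HD analysis is roughly 20x the per-frame work of CIF, which is why downscaling before analysis was (and often still is) the norm.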
A PC-based analytic takes in video from a recording device, an encoder, a video file such as an AVI, or an RTSP stream.
There is no value added by the camera, and all comparisons are based only on the computational capabilities of the analytic, which evens the field for those who don't make embedded camera analytics.
Wow, that was boring just writing it!
Avigilon for sure, if anything through a Rialto encoder using an RTSP stream or analog video input. I can't remember if they offer a server-based version, which would.
Except for LPR, I don't recall Milestone having its own analytics engine.
Here is a publicly available Milestone price list. Which line item provides perimeter or object-left-behind analytics without it being built into a camera or another manufacturer's product? I can't find it, but I honestly don't follow Milestone that closely.
http://www.kernelsoftware.com/products/catalog/milestone.html
I am not associated with kernel software, that was a google find.
We can split hairs and debate whether the Rialto would be considered PC-based; moreover, I was just noting that it can take a stream from an encoder or other video source. You win.
My only point was that for a test to be comparable, the source has to be equal and the analytic engine the variable. That removes the benefit of camera-based analytics such as Avigilon's, which process directly from the camera imager, to my knowledge, as I am also not affiliated with Avigilon.
Again, you win. I'll stop discussing.
Here are a few more criteria for classifying Video Analytic Videos:
Classification by Site Type (e.g., Car Dealership or Parking Lot, Construction Site, Fenced Open-Area Storage, Marina, ...)
Lighting Levels (Day, Night with IR, Ambient Lighting that Prevents the IR Filter from Switching, ...)
Resolution (SD, HD, Minimum Pixels on Target, ...)
Weather Factors (Calm and Clear, Windy, Rain, Snow, Hail, ...)
Factors Adjacent to the ROI (pedestrian traffic, street traffic, ...)
Factors Within the ROI (reflections from passing traffic, streamers, balloons, banners, flapping tarps, ...)
Missed Events (person on a bike, person on a skateboard, large highly reflective objects in the scene, ...)
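The criteria above could double as a tagging schema for a crowd-sourced clip library, so submissions are searchable by site type, lighting, weather, and so on. A minimal sketch; the class and field names are my own invention, not an existing standard:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical tagging schema for video-analytic test clips,
# mirroring the classification criteria listed above.
@dataclass
class TestClip:
    site_type: str                 # e.g. "car dealership", "marina"
    lighting: str                  # e.g. "day", "night with IR"
    resolution: str                # e.g. "SD", "HD"
    min_pixels_on_target: int      # smallest target size in the clip
    weather: str                   # e.g. "calm and clear", "windy"
    adjacent_factors: List[str] = field(default_factory=list)
    roi_factors: List[str] = field(default_factory=list)
    expected_events: List[str] = field(default_factory=list)

# Example clip tagged with the fenced-storage scenario discussed above.
clip = TestClip("fenced open storage", "night with IR", "HD", 40,
                "windy", roi_factors=["flapping tarps"],
                expected_events=["person on bike"])
```

Structured tags like these would let a vendor (or IPVM) pull exactly the subset of clips matching a claimed capability, rather than running the whole library.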
I do some work with ERNCIP, a European group under the EC's Joint Research Centre. The work is research-focused, but there are a couple of relevant recent publications, 'Surveillance and video analytics: factors influencing the performance' and 'Surveillance Use Cases: Focus on Video Analytics', which might provide food for thought on the definitions of video sources/scenarios.
I know that there is a list of available data sets which will be released shortly.
The UK government had something a while ago too:
https://www.gov.uk/guidance/imagery-library-for-intelligent-detection-systems
The UK government issued a more recent one:
https://www.cpni.gov.uk/Documents/Publications/2015/18%20December%202015%20Guidance%20Note%20Testing%20installed%20video%20analytic%20systems.pdf
It is important to test a variety of intrusion scenarios and environmental conditions.
Marie-Claude