Subscriber Discussion

Create Video Analytic Test Samples As Screening For IPVM Testing?

UI
Undisclosed Integrator #1
Jul 03, 2016

Makes sense.

What I was suggesting was that instead of IPVM performing the tests directly, which is cost and time prohibitive, IPVM determine the most typical use cases and create or crowd-source a series of progressively more difficult video clips for each scenario.

For example, cross line detection along a fence.

Level 1: Empty scene, great camera positioning, object larger and closer to the camera, and thus easier for the VA to detect while filtering out nuisance alarms.

Level 2: Same as before, but the object is farther away and thus smaller (i.e. represented by a lower pixel density).

Level 3: A scenario that introduces shadows, or a scene where shifting clouds let the sun shine through, causing a massive change across the entire scene.

Levels 4, 5, 6, 7, 8: progressively more and more complex.

This way, over the next 8 years, we can see whether any VA manufacturers want to take a go at the challenge, and we can then discuss their results.

You could make it even more challenging by requiring a VA manufacturer to produce results across all levels using the same configuration, without significant tuning of settings for each challenge.

This way the burden of the initial testing would be on the various manufacturers; only after they have succeeded at a number of scenarios and difficulty levels would IPVM set up independent testing to validate their submissions.
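For illustration only, here is a minimal sketch of how such a ladder of cross-line scenarios could be written down so every vendor runs against the same definitions (Python, with hypothetical field names, not any IPVM format):

```python
# Hypothetical catalog of cross-line detection test levels (illustrative only).
# The footage gets harder level by level; the analytic configuration must not change.
TEST_LEVELS = [
    {
        "level": 1,
        "scene": "empty fence line, ideal camera placement",
        "target_distance_m": 10,   # large, close target: high pixel density
        "challenges": [],
    },
    {
        "level": 2,
        "scene": "same fence line",
        "target_distance_m": 40,   # smaller, farther target: fewer pixels on target
        "challenges": [],
    },
    {
        "level": 3,
        "scene": "same fence line",
        "target_distance_m": 40,
        "challenges": ["hard shadows", "shifting clouds causing scene-wide lighting change"],
    },
    # Levels 4 through 8 would layer on progressively harder conditions.
]
```

A vendor's submission would pass a level only if it used the same settings it used on Level 1.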

NOTICE: This comment was moved from an existing discussion: Manufacturer President: "Customer Is Now Very Angry"

JH
John Honovich
Jul 03, 2016
IPVM

That's an interesting idea, as an initial screen.

I need to think more about it but it's worth discussing further. What do other members think?

Murat Altu
Jul 03, 2016
AxxonSoft

This is a great idea. We would be happy to take on the challenge. And I think it is important to add one condition: the analytics should work without any camera-specific tuning, because in reality (1000+ cameras) it is almost impossible to configure each camera individually.

UI
Undisclosed Integrator #2
Jul 03, 2016

It was asked "what PC-based analytics allow processing of recorded video", to which I would answer: all that I know of.

Most analytics process video at resolutions from QCIF to CIF. Some process at a higher resolution, but those are typically embedded in a camera and rarely higher than D1, from my time in that business.

Processing live video at high resolution requires an immense amount of processing power. Usually that amount of detail isn't required. Compare that to the brain of a 3-year-old and John's comment is very valid.

A PC-based analytic would take in video from a recording device through an encoder, from a video file such as an AVI, or from an RTSP stream.

There is no value added by the camera, and all comparisons are based only on the computational capabilities of the analytic, which evens the field for those who don't make embedded camera analytics.

Wow, that was boring just writing it!
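For what it's worth, here is a rough sketch of that kind of ingest using OpenCV: cv2.VideoCapture accepts a recorded file and an RTSP URL interchangeably, which is exactly what levels the field. The run_analytic callback is a hypothetical stand-in for whatever engine is under test.

```python
import cv2

def feed_analytic(source, run_analytic):
    """Push frames from a recorded file or an RTSP URL into an analytic callback."""
    cap = cv2.VideoCapture(source)        # e.g. "clip.avi" or "rtsp://host/stream"
    if not cap.isOpened():
        raise IOError(f"Could not open source: {source}")
    try:
        while True:
            ok, frame = cap.read()
            if not ok:                    # end of file or dropped stream
                break
            run_analytic(frame)           # hypothetical engine entry point
    finally:
        cap.release()
```

The same call works whether the source is "test_clip.avi" or "rtsp://camera/stream", so only the analytic engine varies between runs.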

U
Undisclosed #3
Jul 03, 2016
IPVMU Certified

"What PC-based analytics allow processing of recorded video", to which I would answer: all that I know of.

Avigilon? Milestone?

UI
Undisclosed Integrator #2
Jul 03, 2016

Avigilon for sure, if anything through a Rialto encoder using an RTSP stream or analog video input. I can't remember if they offer a server-based version, which would.

Except for LPR, I don't recall Milestone having its own analytics engine.

U
Undisclosed #3
Jul 03, 2016
IPVMU Certified

Avigilon for sure, if anything through a Rialto encoder...

Rialto is not PC based analytics.

U
Undisclosed #3
Jul 03, 2016
IPVMU Certified

Except for LPR, I don't recall Milestone having its own analytics engine

Milestone XProtect Analytics provides an intelligent yet highly intuitive solution for video content analysis tasks such as license plate recognition (LPR), perimeter protection, left objects detection, etc. XProtect Analytics works in tight integration with Microsoft® Windows® components as well as a range of different Milestone products.

UI
Undisclosed Integrator #2
Jul 03, 2016

Here is a publicly available Milestone price list. Which line item provides perimeter or object-left-behind analytics without it being built into a camera or another manufacturer's product? I can't find it, but I honestly don't follow Milestone that closely.

http://www.kernelsoftware.com/products/catalog/milestone.html

I am not associated with Kernel Software; that was a Google find.

We can split hairs and debate whether the Rialto would be considered PC-based; moreover, I was just noting that it can take a stream from an encoder or another video source. You win.

My only point was that for a test to be comparable, the source has to be equal and the analytic engine the variable. That removes the benefit of camera-based analytics such as Avigilon's, which process directly from the camera imager (to my knowledge; I am also not affiliated with Avigilon).

Again, you win. I'll stop discussing.

U
Undisclosed #3
Jul 03, 2016
IPVMU Certified

My point wasn't actually directed at you; my (mostly rhetorical) question about PC-based analytics was directed at John's response of

Never heard of a camera allowing [analytics on recorded video], positive it cannot be common.

To your point, I am sure it's 'conceptually' possible but it is not something that is normally allowed.

What I was trying to say was that whether it's a camera or not isn't the sticking point. It's the ability to process recorded video through their analytic engine. Some may allow this explicitly in their released version, but all could be (relatively) easily modified by the mfr to treat a clip as a live stream for the purposes of testing.

So whether it's Exacq on a PC or Exacq on the edge, the mfr could loop through video if they desire certification.
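As a sketch of what "treating a clip as a live stream" could look like (again assuming OpenCV; process_frame is a hypothetical analytic hook), the clip is replayed at its native frame rate and rewound when it ends:

```python
import cv2
import time

def loop_clip_as_live(path, process_frame):
    """Replay a recorded clip endlessly at real-time pacing, as if it were live."""
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        raise IOError(f"Could not open clip: {path}")
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0      # fall back if FPS is unreported
    frame_interval = 1.0 / fps
    try:
        while True:
            ok, frame = cap.read()
            if not ok:                           # end of clip: rewind and keep going
                cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
                continue
            process_frame(frame)                 # hypothetical analytic entry point
            time.sleep(frame_interval)           # pace frames like a live camera
    finally:
        cap.release()
```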

Robert Baxter
Jul 03, 2016

Here are a few more criteria for classifying Video Analytic Videos:

Classification by Site Type (e.g. Car Dealership or parking lot, Construction Site, Fenced Open Area Storage, Marina, ...)

Lighting Levels (Day, Night with IR, Ambient Lighting prevents IR Filter,...)

Resolution (SD, HD, minimum pixels on target,...)

Weather Factors (Calm and Clear, Windy, Rain, Snow, Hail,...)

Factors adjacent to ROI (pedestrian traffic, street traffic,...)

Factors within an ROI (reflections from passing traffic, streamers, balloons, banners, flapping tarps, ...)

Missed Events (person on bike, person on skateboard, large high reflecting objects in scene,...)
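As a hypothetical illustration of how those criteria could be attached to each test clip so results can be filtered and compared (Python dataclasses; the field names are mine, not a proposed standard):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ClipMetadata:
    """Tags describing one test clip, mirroring the classification criteria above."""
    site_type: str                  # e.g. "car dealership", "marina"
    lighting: str                   # e.g. "day", "night with IR"
    resolution: str                 # e.g. "SD", "HD"
    min_pixels_on_target: int
    weather: str                    # e.g. "calm and clear", "windy", "snow"
    adjacent_factors: List[str] = field(default_factory=list)  # outside the ROI
    roi_factors: List[str] = field(default_factory=list)       # inside the ROI
    known_miss_risks: List[str] = field(default_factory=list)  # e.g. "person on bike"

clip = ClipMetadata("fenced open-area storage", "night with IR", "HD", 40,
                    "windy", ["street traffic"], ["flapping tarps"], ["person on bike"])
```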

SN
Sean Nicholas
Jul 03, 2016

Adding to Robert's discussion, I think we would have to define specific camera positions and placements in which a solution would be expected to work, within reason. So for, say, tripwire or cross-line detection, there should be stipulations to set customer expectations, such as being installed at a certain height or angle, and with the intended objects within a reasonable distance from the camera.

I saw the IPVM article about the baby monitor analytics (https://ipvm.com/reports/nanit-suster) and this was very interesting, as it was an application-specific analytic focusing on a very specific problem.

The question is how we define specific use cases or scenarios, and then educate ourselves, and thus our clients, on when analytics would be applicable. John, this is where I think IPVM can play a role that no one has been able to so far.

With application-specific analytics focusing on a very specific problem, with defined camera position/placement/angles, the application can be more easily optimized for simple setup and reliable operation.

A lot of the problems stem from adding analytics after the fact to an existing camera position. That's like trying to apply the baby monitor analytics to an Axis camera in a nursery and expecting it to perform the same.
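As a back-of-the-envelope example of turning such a placement stipulation into numbers, here is a simple pinhole-model estimate of horizontal pixels on target at a given distance (the values are illustrative assumptions, not a recommendation):

```python
import math

def pixels_on_target(image_width_px, hfov_deg, target_width_m, distance_m):
    """Approximate horizontal pixels covering a target, assuming a pinhole camera."""
    scene_width_m = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    return image_width_px * target_width_m / scene_width_m

# e.g. a 1920 px wide image, 90 degree horizontal FOV, 0.5 m wide person at 20 m:
print(round(pixels_on_target(1920, 90.0, 0.5, 20.0)))   # roughly 24 pixels
```

If an analytic needs, say, 40+ pixels on target, that works backwards into a maximum usable distance for the stipulated mounting.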

UE
Undisclosed End User #5
Jul 05, 2016

Also, for VA of human activity: crowded scenes vs. sparsely populated ones.

SD
Sarah Doyle
Jul 03, 2016

I do some work with ERNCIP, a European group under the Joint Research Centre. The work is research-focused, but there are a couple of relevant recent publications, 'Surveillance and video analytics: factors influencing the performance' and 'Surveillance Use Cases: Focus on Video Analytics', which might provide food for thought on the definitions of video sources/scenarios.

I know that there is a list of available data sets which will be released shortly.

Robert Baxter
Jul 04, 2016

Thanks for the references. Appendix D on p. 59 indicates this could end up classifying an infinite number of possibilities. We need to simplify the classification to focus on the "most needed" improvements to VA.

UM
Undisclosed Manufacturer #4
Jul 04, 2016

The UK government had something a while ago too:
https://www.gov.uk/guidance/imagery-library-for-intelligent-detection-systems

Marie-Claude Frasson
Jul 05, 2016

The UK government issued a more recent one:

https://www.cpni.gov.uk/Documents/Publications/2015/18%20December%202015%20Guidance%20Note%20Testing%20installed%20video%20analytic%20systems.pdf

Important to test a variety of intrusion scenarios and environmental conditions.

Marie-Claude
