Manufacturer President: "Customer Is Now Very Angry"

Published Jun 29, 2016 16:27

Sad.

Video analytics strike again.

There are some extremely important lessons here for manufacturers and integrators.

Angry Customer

The manufacturer president explains:

** **** *** ********* ** ** handle **** ************ **** ********* ***** installation. ** *** * *** ** stories **** ****** *********. **** ********* objects *** ********* *** *****. ** real *********** ** ***** *-* ***** alerts *** **** *** *** ** it *** *** **** *** ******, but *** **** ******* ** **5000 ******, so customer now complain and we are working hard couple of months trying to get better result, but still not sure if we can fulfill their expectations. Conditions in metro are really bad for this kind of analytics when people are staying and waiting without any movement. And this customer ** *** **** *****. [Emphasis Added IPVM]

** ** ****** **** **** ** imagine **** ******* ***** ****** ***** ****** ***** cause ***** *********?

Bad *******

** ********, **** ** ************* * broad *** ********** ***** **** ***** analytics *********.

** **** **** **** ******** *****, easily once *** **** **** ******* ************* and ***********. *** ******** ********* ** that ** ** ***** **** '*******' customers ** '*****' ************ **** ******* this ******* ***** ********* ** ****. Wrong.

Needs To Work

********* **** ** **** *** **** is *** *** ****, **** **** to **** ** *** ********'* ************. And, *** **** **** ********, the expectation is for near perfection.

***, ** *** ************, ********* ** integrator ***** *-* ***** ****** *** hour ** *** ***? *****, **** is **********.

What matters is what the customer thinks. 5,000 alerts per hour too much? ****** **********.

******* *** "high expectations with analytics after installation"? Sorry, that one is your fault. *** *** ******** ******* and *** ************ ************* ** *****? Did you say, "We expect 5,000 false alerts per hour. Are you OK with that?" I doubt that.

Applies To All Real-Time Analytics

** **** *** ****** ** ** are *********** ********* *********, ********* ******, facial ************, ***. *** **** *** are ********* * ****** **** ****-**** analytic ***** *************, **** ** ********.

**** ** *** ******* ***** ***** alert *-* ***** ** ****? **. Do *** ***** *** ***** ** out ** ******** ** ** ***? Yes.

Eliminate False Alerts Or Fail

It really is this simple. Either you essentially eliminate all false alerts or you fail.

** *** **** ** **** *** the ***** ********* ****** *** ******** so *****, ******* ** ***** ** trying, **** ** **.

**** ******** ***** ** '******** ************', and **** ******* **** ****** *** until you *** ***** ******* ***** ***** free *********.

Comments (88)
Luis Carmona
Jun 29, 2016
Geutebruck USA • IPVMU Certified

I would blame the manufacturer more than anything, but the integrator shares a portion too, albeit smaller. I say this because I have always seen manufacturers oversell analytics. So when manufacturers do this, isn't it logical the integrator will too, since they are relying on their manufacturer "partner" to give them reliable information? But the integrator shares blame for "trusting but not verifying" the claims and not doing their own R&D to test the product's limitations. Testing products costs time and money, but that is the nature of the beast; it just is what it is. We can blame the "CSI Effect" all day, but if we sell expectations based on sales literature we have not properly vetted, because a competitor is more willing to sell unrealistic performance without doing proper testing and can therefore sell at a cheaper cost, that is on us (the integrator).

(8)
John Honovich
Jun 29, 2016
IPVM

Luis, I do not know the specific role of integrators or consultants in this case.

That said, in general, I do agree that integrators hold responsibility. Indeed, I think the risk is greater for the integrator since the integrator is on the hook for labor to make it work. I have seen many projects where the integrator lost substantially trying to make analytics work over and over again.

(1)
Luis Carmona
Jun 29, 2016
Geutebruck USA • IPVMU Certified

You're right. I made assumptions based on limited info, because that is how it goes most of the time, though not always. For all I know, maybe the analytics were hard spec'd because of work done by the manufacturer's sales reps and the integrator had no say. I only mentioned integrators because I have seen it happen from that side, too.

Undisclosed #1
Jun 29, 2016
IPVMU Certified

It really is this simple. Either you essentially eliminate all false alerts or you fail.

Doesn't it depend on the application?

For instance, a cross-line detection in the metro might be annoying if falsely triggered once a day, esp. without 24/7 live monitoring. On the other hand, once a day false alerts for cross-line in the Capitol building might be an acceptable trade off considering the assets at stake and the budget available.

Which brings up another point: false positives can often be significantly reduced by lowering the sensitivity, which of course will increase the false negatives.

So do you mean "Either you essentially eliminate all false alerts without missing ANY real ones, or you fail?"

I think that slider is vastly different depending on the circumstance.

As an example, 1 out of 3 times I travel through the airport the detector flags me, without any apparent fault of my own. They can't be overjoyed with this result, but I'm guessing they could also turn down the sensitivity if they wanted fewer false positives.
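To make that sensitivity slider concrete, here is a minimal sketch with made-up detection scores (none of these numbers come from any real product):

```python
# Hypothetical detection scores (0-1) with ground truth labels:
# True = a real intrusion, False = wind, shadows, headlights, etc.
events = [
    (0.95, True), (0.90, False), (0.80, True), (0.70, False),
    (0.60, False), (0.55, True), (0.40, False), (0.30, False),
]

def count_errors(threshold):
    """Count false positives and false negatives at a given alert threshold."""
    false_pos = sum(1 for score, real in events if score >= threshold and not real)
    false_neg = sum(1 for score, real in events if score < threshold and real)
    return false_pos, false_neg

for t in (0.3, 0.5, 0.7, 0.9):
    fp, fn = count_errors(t)
    print(f"threshold {t:.1f}: {fp} false alerts, {fn} missed real events")

# Raising the threshold (lowering sensitivity) cuts false alerts
# but starts missing real events -- the slider described above.
```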

(5)
John Honovich
Jun 29, 2016
IPVM

once a day false alerts for cross-line in the Capitol building might be an acceptable trade off considering the assets at stake and the budget available.

Once a day, to me, is "essentially eliminate all false alerts" or as I said earlier in the post, "the expectation is for near perfection."

Once a day is near perfection.

Somehow you went from 5,000 an hour to one a day.

"Either you essentially eliminate all false alerts without missing ANY real ones, or you fail?"

Ask the customer, by definition, but, yes, typically, they do not want to miss any real ones.

Undisclosed #1
Jun 29, 2016
IPVMU Certified

Somehow you went from 5,000 an hour to one a day.

I dropped the x 1000 camera coefficient, thinking it superfluous.

So, 1000 per day. If the Capitol has 1000 critical cross-line boundaries.

Maybe one a minute, at peak times. Somebody's full time job.

But what are your other options?

John Honovich
Jun 29, 2016
IPVM

If the Capitol has 1000 critical cross-line boundaries.

1, I am disappointed. I thought you memorized all of IPVM ;)

Recall: Security Lessons From The White House Turning Off Perimeter Alarm, money quote:

"An alarm box near the front entrance of the White House designed to alert guards to an intruder had been muted at what officers believed was a request of the usher’s office"

The alarm had apparently been regularly generating false alerts, annoying staff.

That said, I do agree with the implied fundamental point. Situations with higher risk profiles are going to tolerate more false alerts.

But, as the White House incident and my own experience with military video analytic deployments show, it is not as high as one might imagine.

1,000 cross-line alerts per day is 1 every ~1.5 minutes. Most staff, even at critical infrastructure sites, would turn it off unless they were at very high alert for imminent attack. Operationally, floods of false alerts are debilitating.

As a business, though, it is still hard to sell analytics if you are really generating 1,000 false alerts a day. You surely will get some sales, but at great pain to your sales and field engineering team.

(3)
(1)
Undisclosed #1
Jun 30, 2016
IPVMU Certified

1, I am disappointed. I thought you memorized all of IPVM ;)

I'll be the first to admit that I have a few gaps there, especially between the so-called 'Late Lindgren' and 'Second Boughton' epochs.

(1)
(4)
Kevin Nadai
Jun 29, 2016

Conditions in metro are really bad for this kind of analytics when people are staying and waiting without any movement.

Is the OP suggesting that people standing still are causing the nuisance alerts? If so, a size limitation on the analytic may reduce them: a target would only count as an object if it was smaller than a person. (Which opens up the problem of really small persons.)
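For illustration, a minimal sketch of that size-gating idea (the pixel areas and threshold below are invented, not from any specific product):

```python
# Hypothetical: ignore "object left behind" candidates that are as large as
# a person, since a motionless waiting passenger is not an abandoned object.
PERSON_MIN_AREA_PX = 4000  # assumed pixel area of a person at this camera view

candidates = [
    {"id": "blob-1", "area_px": 900},    # bag-sized -> keep
    {"id": "blob-2", "area_px": 6500},   # person-sized -> ignore
    {"id": "blob-3", "area_px": 12000},  # group of people -> ignore
]

alerts = [c for c in candidates if c["area_px"] < PERSON_MIN_AREA_PX]
print(alerts)  # [{'id': 'blob-1', 'area_px': 900}]
# Caveat from the comment: very small persons (children) would slip under
# the threshold, and bags close to the camera can exceed it.
```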

Even so, a train platform is not a good application of "object left behind." People naturally set things down while waiting. Neither is the baggage claim area at an airport, frequently cited in news reports regarding the "magic" of analytics.

Are the 5-6 alerts per hour truly "false" (mistakes by the analytic) -- or "nuisance" (technically correct but not what the customer wants to know)? As I have said in this space before regarding analytics -- often what the customer wants to know, they don't want to know. A screen shot of one of the "false" alerts would facilitate discussion.

It really is this simple. Either you essentially eliminate all false alerts or you fail.

A project can fail even with perfect analytic performance. This may be such an example. In addition to accuracy, I still contend that managing customer expectations is essential to success with video analytics. Based upon the OP information, IMO, the customer's expectation in this application should have been, "This ain't gonna work."

(3)
(1)
Scott Sheldrake
Jun 29, 2016

This article is bang on. Unless video analytics are perfect, you just cannot sell them.

Countless times we are asked to set up email notifications for Video Motion Detection, and in 100% of cases we have ended up turning it off for the customer due to the high number of false alarms. Forget line crossing, facial recognition, etc.; this is basic VMD. It absolutely does not matter which brand or product you use, or how much tweaking/adjustment you make, you will get false alarms. Unless you turn it down to be so insensitive that it does nothing.

Sadly, video analytics are unlikely to improve while lawyered-up companies like Avigilon are hoarding patents, so we are stuck here for a while.

(4)
(2)
(1)
Brian Karas
Jun 29, 2016
IPVM

Unless video analytics are perfect, you just cannot sell them.

I tend to disagree, but I think you need to set proper expectations up front, and know when to tell a customer that their expectations cannot be reasonably met.

Forget line crossing, facial recognition, etc, this is basic VMD.

I understand what you are saying, but I think you may be drawing the wrong connection there.

Video analytics (for perimeter protection/notification use cases) is not about what you DETECT, it's about what you IGNORE. As your VMD experience shows, it is easy to send an alert on any random movement; that is not very valuable. What you want is to ignore more things, without also ignoring people (or vehicles, if that is what you are alerting on). Being able to intelligently ignore things is where you move from "VMD" to "Analytics" (IMO).

VMD is very useful for managing storage when the customer's requirements do not mandate 24/7 recording; there is no need to record a generally static scene. VMD does not need to be super accurate; it is OK if we end up recording too much, within reason of course. But VMD is not good for sending alarms unless you have a very sterile environment, like inside an office, because it will trip on blobs of pixel change without any care about what caused the pixel change.

Line-crossing applications (though I consider those a somewhat weak form of analytics in the current market) can and do yield better results than VMD, if you have the right product and the right application.

It absolutely does not matter which brand or product you use, or how much tweaking/adjustments you make, you will get false alarms.

I 100% agree with this; zero false alarms is not achievable in the sense of making a general claim. It is a matter of WHEN, not IF, you get a false alarm (or 20) from any system. However, if the false alarm rate can be kept low, and the event handling side of the alarm (the software/VMS interface) makes it easy for the operator to dismiss these, you can have a system that is very "sellable" and has a good ROI for the customer.

Part of determining where video analytics for alarming is worthwhile or not involves determining the overall value of the customer's problem. If you want to protect a cherished garden statue in your yard, the cost of setting up the system, and the inevitable false alarms will likely outweigh the value of your prized gnome. If you have $50,000 worth of inventory in an outdoor store yard and are getting hit for $1,000/month in losses you can likely find a solution that is cost effective.

It still comes down to choosing "No False Alarms" or "No Missed Events" though. If you want to reduce false alarms to near zero you are inevitably going to have to set sensitivity or selectivity to the point that you will end up missing some valid events (and thereby likely have losses that may have been preventable). If you want no missed events, then you are going to get some false alarms as well.
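To put rough numbers on that outdoor-yard example, a back-of-envelope payback sketch (every figure except the $1,000/month loss is an assumption invented for illustration):

```python
# Assumed figures: only monthly_loss comes from the comment above.
monthly_loss = 1000.0       # $/month losses, from the comment
system_cost = 8000.0        # assumed installed cost of cameras + analytics
monthly_monitoring = 150.0  # assumed central-station / operator cost
prevention_rate = 0.8       # assumed fraction of losses actually deterred

monthly_benefit = monthly_loss * prevention_rate - monthly_monitoring
payback_months = system_cost / monthly_benefit
print(f"payback in {payback_months:.1f} months")  # ~12.3 months

# For the garden gnome, monthly_loss is near zero, so payback_months
# diverges -- which is the point about the value of the problem.
```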

(7)
(1)
Scott Sheldrake
Jun 29, 2016

Brian, this sounds like a very long-winded way of agreeing with me and proving the point of the original article. Analytics DO NOT WORK. You choose between many false alarms or many missed events. That means they DO NOT WORK.

Like the article says, it doesn't matter what you the integrator or manufacturer thinks is acceptable - it matters what the customer thinks. And the example about how ADT would go out of business if they had daily false alarms perfectly illustrates this.

(1)
(2)
Brian Karas
Jun 29, 2016
IPVM

How do you define "WORKS" and "DOES NOT WORK" in a broader sense?

If your customer wants a camera capable of capturing license plate details in a parking lot, and you install such a camera while also explaining to them that the camera will not be able to capture those details from 11PM-6AM because the lot lights are on a timer, would that system "work" or "not work"?

What if a car speeds through that parking lot at 50MPH and the plate is too blurry to read? Is the system working, or not working?

I do not think you have to settle for "many false alarms" or "many missed events" as a general statement. I believe there are certain products that are far more likely to generate false alarms (or miss events) than others, and I believe there are also particular scenes or settings that are more prone to false alarms. But I also believe that there are analytics products on the market that can cost effectively satisfy a wide array of customers and applications.

(8)
Frank Jacovino
Jun 29, 2016

The specification determines what works and does not work. If the specification stated that LPR needs to work 24/7, then the timer needs to change and/or alternative lighting needs to be provided. This is the job of the integrator: to understand the specification and know the site conditions after a walk-through.

Similarly, if the specification calls for LPR at all speeds, the integrator needs to take exception, or require installation of speed bumps to slow the cars down, if he is to meet the specification.

(1)
Adam Jaison
Jun 29, 2016

Is it wrong to mention the manufacturer referred to in this article?

(1)
John Bredehoft
Jun 30, 2016
Bredemarket / Incode Technologies

The issue spans multiple manufacturers, and in fact spans multiple industries. In the automated fingerprint identification system world, I have seen RFPs that request 100% accuracy. Not near perfection, but perfection. Such a requirement does not get a response of a simple "Comply" - primarily because if a vendor did state full compliance, and that response made it into a contract, there would be trouble down the road.

(1)
Undisclosed #2
Jun 29, 2016

"It really is this simple. Either you essentially eliminate all false alerts or you fail."

So, if manufacturers/integrators cannot eliminate all false alerts, what do you/we do?

Murat Altu
Jun 29, 2016
AxxonSoft

I'm the first to completely agree with all these statements. But this is not the manufacturer's fault! We always warn our integrators that real-time analytics is not perfect, but they want the project and that is all!

They even do pilot installations, but try to hide the real results. So integrators often know the real situation, but they prefer not to say "no" to the customer, because in that case the customer will find another one who will say "yes".

I don't know the right behavior for a manufacturer in this situation. We have these analytics because everybody has them and they are often requested. But we know that to manage them effectively you need operators to confirm the frequent false alerts, and we always say this. Another decision of ours was to set a zero price for online analytics! We have all kinds of online analytics and they are free! So we are not giving anyone a reason to complain that online analytics is not perfect. Yes, it is not perfect; that is why we are not taking money for it! Now we take money only for forensic search analytics, as this is a really helpful tool which increases the efficiency of investigations, saving operators time and money as a result!

(2)
(1)
(1)
Scott Sheldrake
Jun 29, 2016

Let's dissect this first statement: "But this is not the manufacturer's fault! We always warn our integrators that real-time analytics is not perfect, but they want the project and that is all!"

1. This is not the manufacturer's fault.

2. We always warn our integrators that real-time analytics is not perfect.

So the manufacturer is not at fault because they warn customers ahead of time that their product is faulty. This statement is a perfect example of both why nobody uses analytics and why nobody is willing to pay for analytics. It completely proves that the original article was bang on.

(1)
Undisclosed #1
Jun 29, 2016
IPVMU Certified

You don't have a cell phone, obviously.

(2)
Luis Carmona
Jun 29, 2016
Geutebruck USA • IPVMU Certified

"So the manufacturer is not at fault because they warn customers ahead of time that their product is faulty."

Faulty would be if it did not work as designed or expected. (The car's brakes fail.) If it works as designed, but with disclosed limitations, that is not exactly faulty. (The car does not brake the exact same stopping distance every time.)

(1)
(1)
Scott Sheldrake
Jun 29, 2016

To extend your analogy, your brand new Toyota's brakes work flawlessly when you are driving inside a warehouse with optimal lighting, but when you take the car down the highway on a mixed sun/cloud day, the shadows created from the clouds cause the car to brake suddenly every 30 seconds or so.

So you return to the Toyota dealership and complain. The sales manager says you need to manage your expectations as a customer, and also assigns partial blame to the salesperson for not fully explaining that the brakes only work in optimal conditions.

So you avoid driving on cloudy days completely since the car is basically unusable. But then one day you take it out in the rain...

(1)
(3)
Luis Carmona
Jun 29, 2016
Geutebruck USA • IPVMU Certified

That is a bridge too far in logic. We will have to agree to disagree.

(3)
(1)
Frank Jacovino
Jun 29, 2016

John, as an engineer who has been involved with "video analytics" for over two decades, I am amazed by the strides the industry has made in its algorithms but, at the same time, truly amazed by the market's perception of video analytics capabilities.

My analytics background comes from the semiconductor industry, where we had to commit to 100% identification of true defects and false alarm rates close to zero percent. This was not an easy task to achieve. It is important to understand that our task was easier than a surveillance application, since our scene was always the same and the lighting was always the same. With surveillance applications you have scenes which are constantly changing: people/cars moving, trees waving, day/night transitions, weather implications, etc.

I believe the problem lies with the manufacturers not really explaining what video analytics can do and under what conditions they can do it. Although I do not know the specific details, my initial reaction is that arming 1,000 cameras within a metropolitan area was irresponsible. The manufacturer must have known the application, the extent of the installation and the risks associated with such an installation. Not to expect 5,000 false alerts an hour, and not to know this would be unacceptable, is borderline criminal.

My view is that analytics, even in their current state, have a place in our industry. Applications which have controlled environments, or applications where the identification of an alert is far more important than the false alarm rate, are examples where analytics can be deployed. For instance, if the manufacturer had told the metro center to deploy the analytics in 4 or 5 critical areas and educated them on what false alert rate to expect, the customer would probably be seeing analytics as a success.

(6)
(1)
(2)
Luis Carmona
Jun 29, 2016
Geutebruck USA • IPVMU Certified

Aspirin does not always work to take the pain away- faulty, pull them off the market?

Airbags don't always save lives- faulty, pull them off the market?

Speed limit signs don't always make people travel the speed limit- faulty, pull them off the market?

CAT scans don't always catch a blood clot- faulty, pull them off the market?

What are we arguing about? That analytics are not 100% accurate, so they are faulty and should never be used? Even if you explain the limitations to a customer, because they will ignore what you said, or what is in writing, and expect 100% accurate results?

Or whether or not the customer was told what to expect?

(5)
(1)
Scott Sheldrake
Jun 29, 2016

None of your analogies are applicable here. I could counter them all with equally ridiculous arguments.

#1 - Aspirin has a huge market, because there is a decent chance of relieving pain. Analytics have basically zero market, because using them has a high chance of causing pain. Maybe there is a related conspiracy theory here.

#2 - Airbags don't accidentally deploy when you drive down the road on a cloudy day, or the lights from another vehicle reflect off a wall and into your windshield. If they did, people would improve airbags or stop using them.

#3 - Speed limit signs are informational, and allow drivers to know valuable information. Imagine if speed limit signs were placed every 25 feet on some roads, then other roads you just never saw one at all, and then on cloudy days the signs all changed to show the speed limit was double, then changed back intermittently as sunlight reflected off them. How useful would the speed limit signs be then?

#4 - CAT scans are a valuable tool because they might reveal a blood clot. Imagine if CAT scans falsely revealed blood clots because it was raining out, or the bushes outside the hospital were blowing fiercely in the wind.

There's nothing really to argue here. Video analytics do not work well enough to be considered useful. Our own experience certainly confirms this, as does the video analytics market (or lack thereof) and of course the original article, which was bang on, plus the contradictory comments from that one guy.

A better way to win this argument would be to cite a usable analytics package (and then lose the argument when you run an IPVM test on it to show how it actually doesn't work after all.)

(6)
Undisclosed #1
Jun 29, 2016
IPVMU Certified

Video analytics do not work well enough to be considered useful.

Such a blanket statement.

Do you really think no one finds VMD alerts useful? Inside a store during the early A.M. for instance?

Again, video analytics don't have to be perfect, or even near perfect, to be useful.

They are still useful to SOME people in SOME cases.

Example: I have a new car with "video analytics": a blinker-activated lane change warning. I would say 2 out of ten times it warns, it's because the road curves ahead and it wrongly thinks a car is in my path. Many times it has stopped me from even beginning to change lanes when there was another car there.

Would you disable yours?

(1)
Frank Jacovino
Jun 29, 2016

You are ignoring this particular customer's perspective in this situation. The statement was made that there are 5,000 false alerts per hour (over 1,000 cameras). How can any municipality respond to 5,000 alerts an hour? There is so much "NOISE" that the "SIGNAL" (a true alert) is going to get lost. It is our job as manufacturers to provide meaningful information and to separate the signal from the noise.

(1)
Sean Nelson
Jun 29, 2016
Nelly's Security

Working with pixel-based analytics has to be much more difficult to master than the PIR-based motion sensors that ADT uses, so I'm not sure that's a fair comparison. I somewhat agree with the manufacturer that the customer's expectations are too high, but I do think this should have been explained more upfront, before the sale was made.

I'm a big fan of the cameras that have built-in PIR though. It's much more of a true motion event for sure. I'd like to see more of them. The only issue is if the user wants to detect motion from several hundred feet away; that's where PIRs start to lose effectiveness. But I'd like to see more advancements in this area (cameras with built-in PIRs). If I were a camera manufacturer, I would do this. Not sure if PIRs could help with advanced analytics past typical motion detection though.

(2)
John Honovich
Jun 29, 2016
IPVM

Working with pixel-based analytics has to be much more difficult to master than the PIR-based motion sensors that ADT uses, so I'm not sure that's a fair comparison.

Sean, for sure, on pure technology, PIRs are far more mature than pixel based video analytics.

For the comparison, I mean from the user's perspective of dealing with false alerts. The user may not know anything about the underlying technology, but they do know what they can handle operationally. Both technologies send alerts about security risks; one technology is just a lot worse at meeting customers' expectations of valid notifications.

Sean Nelson
Jun 29, 2016
Nelly's Security

Yeah, I agree. I think the solution would be to be more upfront with the customer about usability with these things. If reliability of video-based alerts was a stated requirement on such a large job, there should have been much clearer communication about the false alarms that they "will" receive. We do the same thing when people ask us if there are email alert capabilities. We say "yeah, for your outdoor cameras it's going to blow up your inbox with false alarms" and then explain the whole video-based motion detection thing.

I'm no technology manufacturing expert at all, but I just don't see how they would solve false alerts for cameras installed outdoors with pixel-based analytics. I don't think it will ever get down to the same reliability as a PIR unless computing in this area comes a very long way. There are way too many things that can trigger false alerts: light changes, bugs, shadows, etc.

(1)
Undisclosed #1
Jun 29, 2016
IPVMU Certified

I'm no technology manufacturing expert at all, but I just don't see how they would solve false alerts for cameras installed outdoors with pixel-based analytics.

If a person can tell whether something is a real alert just by looking at the pixels, then it's possible for the computer to evaluate by a similar heuristic.

If a person can't tell from the pixels then the computer might not be able to either. Then again maybe not.

Case in point: Captchas.

You notice how wickedly hard they have gotten? That's because computers can guess the easier ones. Ones that you might have failed at.

John Honovich
Jun 29, 2016
IPVM

If a person can tell whether something is a real alert just by looking at the pixels, then it's possible for the computer to evaluate by a similar heuristic.

Some are harder than others to emulate...

I know you know this, but I do feel it's worth emphasizing. There's a widespread conception that video analytics' heuristics include understanding what people really 'look like', but they are typically far less sophisticated, with very rudimentary, inexact heuristics that a 3 year old would outclass; ergo the common false alerts.

(1)
Undisclosed #1
Jun 30, 2016
IPVMU Certified

Yes. Not trying to be too optimistic here.

I think we need to keep an open mind here. We can be somewhat dismissive of things like MJPEG and CCDs without too much fear of being surprised in the future, but with analytics, I think we have to really watch carefully as this unfolds.

There are likely to be a broad range of capabilities and price points, mostly uncorrelated. Off topic: I would imagine within a couple of years analytics would warrant a permanent sub-section on IPVM.

I know you've heard it before; for the last decade (or more), you have heard the promise of game-changing analytics that are right around the corner.

So what I'm saying is that maybe there is no sharp corner, just a slow bend that we might miss if we don't look for the cumulative result in incremental changes.

John Honovich
Jun 30, 2016
IPVM

just a slow bend that we might miss if we don't look for the cumulative result in incremental changes.

So we certainly should look for it. My point is that it's (overall) bad now, not that it will be bad forever.

Indeed, if I were a manufacturer, my #1 priority right now would be analytics, to maximize differentiation / price premium justifications vs. the Chinese.

Unfortunately, there are not a lot of promising efforts within video surveillance right now.

(1)
Brian Karas
Jun 30, 2016
IPVM

I'm personally not a fan of PIRs for outdoor applications, but thought you might find this interesting:

PIR integrates with Hikvision

Robert Baxter
Jun 30, 2016

I presume that the PIR triggers a preset in the PTZ??

These systems suffer the same problem as disconnected camera and alarm systems, where whatever triggers the event cannot guarantee that the camera view is on the event within less than a second, or that the field of view covers everything that might enter the detector's zone of coverage. If the monitoring station operator is uncertain what triggered the event, and they don't see what triggered the event, what will they do? Annoy the customer and ask what do you want us to do...

Are IPVM users aware of the Video Verification Best Practices as published at the www.ppvar.org site? This resource provides an excellent place to start in creating an appropriate SOP for the system design.

Undisclosed Integrator #3
Jun 30, 2016

Story time!

New York, fresh off one disaster with analytics, asked a friend during a presentation how well his "object left behind" analytic would work. His answer was that "see something, say something," like your wall posters, would work better.

The board agreed and disclosed they use that question as a "BS detector".

While this thread is debating all analytics, one in particular, "object left behind," was the focus of this issue.

What people want is "dangerous object arrived", and anything less than that is a problem for managing expectations. If the system accurately detects an article left behind, like the backpack from the Boston bomber, that's good, right?

How soon do you respond and with what? Should they have shut down the marathon because of a backpack? How many were set down that day for more than say.... 30 seconds?

If you want to assign blame, it's "all of the above" for not thinking through the whole scenario. It would be different in a sterile environment, back hallway door, secure facility....maybe, possibly, depending.

(3)
Jurgen den Hartog
Jun 30, 2016

I completely agree. However, the next pitfall is to tweak the system to lower the number of false alerts, which ends up lowering the true detection rate as well.

John Bredehoft
Jun 30, 2016
Bredemarket / Incode Technologies

I happened to read U3M's comment Thursday morning, as reports of the situation at Joint Base Andrews were circulating.

While the situation isn't completely clarified, it appears that (a) the base was scheduled to hold an active shooter drill today, and (b) someone apparently reported someone who looked like an active shooter.

I know of no algorithm that can definitively conclude whether this was a threat or not. (For example, what if someone took advantage of the drill to pose as a fake active shooter, when in fact the person was a real active shooter?)

Perfection is unattainable.

Paul Curran
Jun 30, 2016

Analytics need a yardstick measurement, or what happens is you get every man and his dog adding it to cameras, etc., and it fails. i-LIDS is a UK test; I'm interested to see what other standards exist for testing.

mark hobday
Jun 30, 2016

i-LIDS is a UK test that measures an analytics product's capability to pass numerous quite stringent tests under diverse conditions. The result is you get an analytics product that is 'approved' for, say, secondary perimeter detection. It's not perfect as a test scenario, but it's all we've got at the moment. My issue with it is it merely indicates a product is capable of performing adequately under set conditions - manufacturers and integrators alike continue to install i-LIDS certified product under scenarios that are so far divorced from the test conditions as to make the certification worthless.

I spent 3 years field testing an i-LIDS product certified for perimeter use on a CNI project. Unfortunately, the overall system design was so far away from the test conditions, and had some fundamental flaws, that the analytics were near useless. At the end of year 1 I did not believe that the product was capable of meeting the requirements and told the supplier so. Through some fundamental changes by the integrator to their basic design, and extensive recoding by the analytics manufacturer, they managed to produce an end result that I would not have thought possible. However, this took extensive rework of basic system design elements, plus extensive time on site by the manufacturer, changing configuration coding at a level far beyond the capabilities of an integrator. Yes, it eventually worked to a degree, but the cost to get it there far, far exceeded the budget of the project.

I don't consider analytics a product for mass market use in all but the most basic and controlled conditions.

(4)
(2)
John Honovich
Jun 30, 2016
IPVM

Mark, good feedback on i-LIDS, thanks!

The last time we reached out to i-LIDS 3.5 years ago, they would not share any information, saying:

There is a list of all the i-LIDS approved systems on the government security catalogue, however this is not a publically available due to the nature of the contents. All the manufactures who have been awarded the i-LIDS classification will have an i-LIDS approval seal on their products, this can be found publically.

This aspect struck me as strange. The only way I can find out if a system is approved by the government is to ask the seller? Wouldn't a list directly from the government be more authoritative and less risky than asking a seller to vouch for themselves?

i-LIDS does not seem to get much recognition in North America, so i-LIDS' actions are more a curiosity here.

(1)
Sarah Doyle
Jun 30, 2016

The UK has recognized that i-LIDS may need to be updated. They are leaning towards replacing it with new scenarios and a performance level versus a pass/fail test. I believe it's more law enforcement focused - Project VALE.

Talking of performance testing, did anyone attend the recent NIST workshop on video analytics (http://www.nist.gov/itl/iad/2016-workshop-on-video-analytics-in-public-safety.cfm)? If so, what did you think?

(1)
Brian Karas
Jun 30, 2016
IPVM

We are working on getting an update from NIST about the analytics workshop and will post if we receive anything.

Undisclosed Manufacturer #4
Jun 30, 2016

A couple of thoughts:

1) Try before you buy.

2) TECHNICAL NOTE - Of the Video Analytics tasks - detecting abandoned objects in a busy place like a station is one of the very hardest. The level of machine vision required is up there with driverless cars driving around cities.

To understand why, look at the numbers. Let's say each station has 50 cameras and each camera 'sees' 20,000 people per day. Every one of these video 'events' needs to be analysed to see if anything has been left behind and whether it might be a threat. During some periods of the day the cameras can hardly see the ground at all; it's just full of people. And the user wants less than one false alarm per day, which is one false alarm per 1 million events.

Video Analytics can deliver 95% accuracy or maybe even 99% accuracy. But looking for 'rare events' in large quantities of data as in this case requires 99.9999% accuracy. No Video Analytics system on the market comes anywhere near this yet.
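Worked through, using only the figures stated above:

```python
cameras_per_station = 50
people_per_camera_per_day = 20_000
events_per_day = cameras_per_station * people_per_camera_per_day
print(events_per_day)  # 1,000,000 events per station per day

max_false_alarms_per_day = 1
required_specificity = 1 - max_false_alarms_per_day / events_per_day
print(f"{required_specificity:.6%}")  # 99.999900% -- "six nines"

# For contrast, a 99% accurate analytic on the same traffic:
print(events_per_day * 0.01)  # 10,000 false alarms per day
```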

(6)
Horace Lasell
Jun 30, 2016

Unrelated to the topic, but directly related to your post: try before you buy is impractical for many systems, because the greatest cost is the time and training necessary to attain competence within such systems. Let's hope that, for complex systems, there are effective metrics for suitability that are much less expensive than a "try before you buy" investment.

(1)
Undisclosed Integrator #5
Jun 30, 2016

I think video analytics in its current form is totally wrong, at least until some artificial intelligence is developed to make proper decisions regarding an event.

For a human or an AI, it's not a hard job to decide that someone REALLY left a backpack on the floor which he carried before, but an algorithm can only see pixel changes on a static image which it tries to analyze, and it cannot detect such a simple thing in, for example, a crowded hall where people are constantly moving in front of and behind each other.

Undisclosed Integrator #5
Jun 30, 2016

Instead of looking for left objects, they should employ well-trained dogs, which can scent explosives and catch the attacker before they enter the area. A dog can be a lot smarter than any camera with analytics...

(1)
Undisclosed Integrator #3
Jun 30, 2016

Spot on.

The cost of dogs and humans exceeds the cost of analytics in OPEX, so the posters, enlisting passengers, and random human/dog sweeps were the next best thing.

Object Left Behind also morphed into Object Separation, which then raised a whole new question of alert timing.

I believe analytics should be scenario and response based.

Requirement... guy hops fence; detect him and ignore all other events such as bugs, birds, shadows, flashing lights, storms, lightning, fog, rain, snow, wind (shaking of the camera and objects in view), darkness, sun facing and blinding. That's an easy one for most analytics in the upper cost range.

Here's a hard one.

Requirement... bomb left on a train station platform; ignore all of the above as it applies (and at times it does), and add a timer, because objects will be set down, and sometimes I want an alert and sometimes I don't.

i-LIDS was an easy pass. You have the videos and you get to tweak and manipulate until you run the video for the test. It weeded out some analytics, I'm sure, but the cost was high and the difficulty was low to medium.

Sarah Doyle
Jun 30, 2016

The last time I asked, only 3 companies were approved for i-LIDS 'sterile zone' and 1 for 'long range thermal imager.' I don't know of any company who passed the others - 'left baggage,' 'multiple camera tracking,' etc.

One of the issues is that people often don't want to invest in testing analytics prior to a sale (the salesman might not offer it due to cost).

It would be nice if you could have a centralised, open source database of videos that represent certain scenarios, so analytics companies could test themselves against them.

(1)
(1)
Undisclosed Integrator #9
Jul 03, 2016

John, Sarah has an interesting idea. Maybe IPVM should consider doing a better job than i-LIDS.

John Honovich
Jul 03, 2016
IPVM

9, that's an interesting thought.

We would not literally try to be like i-LIDS, since they are a government program and we are a private entity, but I see your general point of doing a better job than i-LIDS.

We do video analytics testing every so often but the problem we have found so far is:

  • Most industry people think most video analytics work poorly
  • We test video analytics and they work poorly (e.g., Axis Video Analytics Are Weak)
  • We report that video analytics work poorly and members effectively respond "Of course, we knew that, why waste time on this?"

To make it more challenging, video analytics testing (done right) takes a lot of time (a lot more than cameras since so many more factors impact analytics).

So our issue is that few seem to work well, members do not seem all that interested (especially in ones that perform badly, which they expect) and it takes us a lot of time to do.

We are still going to test video analytics (Agent VI is up next; whenever Avigilon ships Appearance Search, we will test that; Bosch's improved analytics are in the queue as well; etc.) but until we see a real uptrend, video analytics testing will remain a lower priority for us.

Yes/no? Makes sense? Thoughts?

(2)
Undisclosed #1
Jul 03, 2016
IPVMU Certified

...but until we see a real uptrend, video analytics testing will remain a lower priority for us.

As we are fast approaching IPVM's Decennial, how would you categorize VA's improvement over that same period?

NOTICE: This comment has been moved to its own discussion: How Would You Categorize VA's Improvement Over The Last 8 Years?

Undisclosed Integrator #3
Jul 03, 2016

What made i-LIDS workable when I worked with it was the time spent to create scenarios that all analytics would be pushed through. Back then it was CDs. As a manufacturer you were provided sample clips to tune with, but the full test was not disclosed. Similar to an SAT practice test.

There was a US version of this type of testing with Sandia Labs that was on video tapes, for perimeter fence lines only.

I wonder how i-LIDS manages cameras with built-in analytics? Aim the camera at a monitor and play the DVD?

It's not practical to attempt to run the exact same tests over and over, both cost- and comparison-wise.

Undisclosed #1
Jul 03, 2016
IPVMU Certified

I wonder how i-LIDS manages cameras with built-in analytics?

It shouldn't change things much, at the conceptual level.

You would have to get the clips into folders on the camera. Then, using the camera's edge recording/playback (and assuming that the mfr allows their analytic to run on recorded video), you would run it locally.

(1)
John Honovich
Jul 03, 2016
IPVM

at the conceptual level.

Everything is straightforward at the conceptual level.

You would have to get the clips into folders on the camera. Then, using the camera's edge recording/playback (and assuming that the mfr allows their analytic to run on recorded video), you would run it locally.

Never heard of a camera allowing this, positive it cannot be common.

To your point, I am sure it's 'conceptually' possible but it is not something that is normally allowed.

Undisclosed #1
Jul 03, 2016
IPVMU Certified

Never heard of a camera allowing this, positive it cannot be common.

Which PC-based VMSes allow it? (The running of analytics on already recorded video.)

Murat Altu
Jul 03, 2016
AxxonSoft

AxxonNext4 and BriefCam

Undisclosed Integrator #3
Jul 03, 2016

Wouldn't that discount any value the actual manufacturer imager brings to the table?

Undisclosed #1
Jul 03, 2016
IPVMU Certified

Wouldn't that discount any value the actual manufacturer imager brings to the table?

Yes. But that is what it does now for everyone else; they're just evaluating the analytic response to a given pixel set.

John Honovich
Jul 03, 2016
IPVM

Let's discuss this here, new thread: Create Video Analytic Samples?

(1)
Undisclosed Integrator #9
Jul 03, 2016

Makes sense.

What I was suggesting was, instead of IPVM performing the tests directly, as this is cost and time prohibitive, that IPVM determine the most typical use cases and create or crowd-source a series of progressively more difficult video clips along the same scenario.

For example, cross line detection along a fence.

Level 1: Empty scene, great camera positioning, object larger and closer to the camera, and thus easier for VA to detect while filtering out nuisance alarms.

Level 2: Same as before, but the object farther away and thus smaller (i.e., represented by a lower pixel density).

Level 3: A scenario that introduces shadows, or a scene where the clouds are shifting and the sun shines through, causing a massive change throughout the entire scene.

Levels 4, 5, 6, 7, 8 progressively get more and more complex.

This way, over the next 8 years, we can see if any VA manufacturers want to have a go at the challenge, and we can then discuss their results.

You could make it even more challenging if a VA mfg had to produce a result across all levels using the same configuration, without significant tuning of settings for each challenge.

This way the burden of the initial testing would be on the various manufacturers, and only after they have succeeded at a number of scenarios and difficulty levels would IPVM set up independent testing to validate their submissions.
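For illustration, a minimal sketch of how submissions could be scored per level (the clip timings, tolerance, and overlap rule below are all invented):

```python
# Score one analytic's alert intervals against ground truth, per level.
def overlaps(alert, truth, tolerance_s=2.0):
    """True if an alert interval falls within tolerance of a truth interval."""
    return alert[0] <= truth[1] + tolerance_s and truth[0] <= alert[1] + tolerance_s

def score_level(alerts, truths):
    detected = sum(any(overlaps(a, t) for a in alerts) for t in truths)
    false_alerts = sum(not any(overlaps(a, t) for t in truths) for a in alerts)
    return detected, len(truths), false_alerts

# Hypothetical results for "cross-line, Level 3 (shadows/clouds)":
truths = [(10.0, 12.0), (45.0, 47.0)]  # seconds where a real crossing occurs
alerts = [(10.5, 11.0), (20.0, 21.0), (46.0, 46.5), (52.0, 53.0)]
print(score_level(alerts, truths))     # (2, 2, 2): all caught, 2 false alerts
```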

NOTICE: This comment has been moved to its own discussion: Create Video Analytic Test Samples As Screening For IPVM Testing

Dale White
Jun 30, 2016

Nice article, John. All the comments are very good. It's not as black and white as some demand though. Everything depends on where and what you are trying to use the analytics for. We have struggled with these issues for a long time. Given the infinitely variable environmental conditions, I don't think near perfect performance is a realistic expectation.

This is reminiscent of discussions I used to have with customers over volumetrics many years ago. Even now, 98% are still nuisance activations.

We hope that the intelligence in real-time video analytics will make it an effective tool for many applications. But, many on both sides try to turn their hope into expectations.

A minimum level of false activation gives a human an opportunity to discern what is true, short of numbing the system to the point that you miss a real event. But too many falses desensitize reactions, rendering the system ineffective. Same old dilemma.

I'll keep hoping, but I always temper my customer's expectations with reality.

Forensic analytics have a brighter future than their real-time cousin.

(3)
(2)
John Honovich
Jun 30, 2016
IPVM

Forensic analytics have a brighter future than their real-time cousin.

Dale, nice contrast!

I agree that forensic analytics have greater potential to actually work / solve problems.

The challenge though is that forensic analytics generally have lower value than real time analytics, e.g., alert me when the wanted criminal comes into the mall is generally more valuable than tell me if the wanted criminal came into the mall last Thursday.

End users tend to be willing to pay a lot more for real time alerts (given the potential value they bring) but then they often do not work as well, ergo the dilemma between forensics and alerting.

Dale White
Jun 30, 2016

Agreed. Thanks John.

While we wait for the perfect capture rate, you must plan resources for human discretion as well as response. Some organizations aren't ready to provide the response resources that alerts require. Sometimes it is better to not know about a condition/event than to know and not respond appropriately. Hopefully, the mall cop won't shoot until they have a positive ID, so I think human assessment/intervention will be around a while. At least until we reach "Minority Report" level. Even license plate recognition isn't 100%.

|"forensic analytics generally have lower value than real time analytics"

Behavioural and Traffic analysis are examples of forensics that are extremely valuable in planning everything from roads to emergency evacuation to product/service placement.

Also, there are a lot of other practical applications for real-time analytics that are less critical than IDing a criminal. Blocked exits, line queues, spills, loitering, object removed, speeding, tailgating, and I'm sure many more.

Robert Baxter
Jun 30, 2016

I would be interested in knowing "exactly" the application alluded to in the "left behind" or metro analytics situation. It also appears that there is confusion about what is meant by a false positive. A false positive is not an event generated where there is no threat. A false positive is where what the analytics should ignore (e.g., birds, tree branches moving, animals) is generating events. The analytics did what they were supposed to do if they were detecting people - the error was in not considering the frequency with which non-threat events would be generated by normal activity in the scene.

A common customer request is dealing with crime in their parking lot (e.g., golf course parking, apartment parking, metro parking, casino parking). These applications are not workable with analytics for many reasons that have nothing to do with the analytics themselves, while analytic applications involving car dealerships are. I run into these kinds of installations from time to time - all have been turned off. If your staff doesn't understand why they don't/can't work, you will spin your wheels in misapplying video analytics.

Likely there are similar principles that make a left-behind or metro analytics situation workable or unworkable.

Undisclosed Manufacturer #6
Jun 30, 2016

I'd ask for a bit more clarification on the user's definition of "false alert".

For example, if the analytic is set to notify on an "abandoned object", the dwell to alert is 1 minute once an object is detected, and a passenger sets a suitcase down for 1 minute, is it a "false alert" if the analytic triggers? By the customer's definition it is, but is it really, since the analytic has no way of distinguishing that?

That may be an oversimplification, but again, more details are needed.
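To make that concrete, here is a minimal sketch of a dwell-to-alert rule (the timings and sampling are invented); by construction, a suitcase set down for a minute triggers, whether or not it is truly abandoned:

```python
DWELL_TO_ALERT_S = 60.0  # the 1-minute dwell from the example above

def check_dwell(observations):
    """observations: list of (timestamp_s, object_present) samples for one tracked object."""
    stationary_since = None
    for t, present in observations:
        if present:
            if stationary_since is None:
                stationary_since = t
            if t - stationary_since >= DWELL_TO_ALERT_S:
                return f"ALERT at t={t}s"
        else:
            stationary_since = None  # object picked up; reset the timer
    return "no alert"

# Passenger sets a suitcase down for 61 seconds, then picks it up:
samples = [(t, t <= 61) for t in range(0, 120)]
print(check_dwell(samples))  # ALERT at t=60s -- "false" by the customer's definition
```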

I'm in agreement with those that stated it was a bad application of the technology for this scenario.

By the same token, we also need to examine the customer's motivation for wanting it to begin with. He has 1,000 cameras. How big is the staff that monitors 1,000 cameras? Nothing takes the place of actual human surveillance and decision making, but obviously there's no way the customer can afford staff to effectively monitor it, so he's turned to technology as a force multiplier.

I'm hard pressed to understand why an initial pilot of the analytic on a dozen or so cams wasn't done, so he could establish a baseline to determine if he could live with the "false positives" (if that is indeed the case) to begin with.

(1)
John Honovich
Jun 30, 2016
IPVM

He has 5,000 cameras.

Point of information: He has 1,000 cameras, they average ~5 false alerts per hour, ergo the 5,000 alerts (per hour) number.

Undisclosed Manufacturer #6
Jun 30, 2016

Edited - thanks John

Dale White
Jun 30, 2016

This is why it has always been important to distinguish between a nuisance activation and a false activation.

(2)
John Marco
Jun 30, 2016

I would like to join this conversation from a sensor POV. We have been testing, with success, the combination of sensors working as the trigger for video analytics (we call this an IP video double knock internally).

Testing the complementary features between the video analytics and physical sensors in outdoor environments, we have set up a live system using both technologies: Optex sensors and video analytics. The footage gathered gives good insight into how the double knock can decrease false alarms and provide real time forensic information:

https://www.youtube.com/watch?v=CYYrmWPOeb0&feature=youtu.be

Introducing physical sensors solely dedicated to the function of sensing, and independent of the camera view and angle, can help. Using different sensing technologies that suit different environments also helps lower false alarms. Sensors can also help cameras eliminate causes of false alarms in outdoor environments where lighting conditions will hurt video analytics.

How does it all apply?

Sensors can be used to trigger a number of devices, including non-video devices such as lighting, signaling systems, barriers, and fogging systems. Sensors can detect in areas where no camera can be installed, such as walls or along network cable. Sensors can cover detection areas difficult to achieve with a camera. Sensors complement and validate video-based detection to create a "double knock" system.

All in all, there is not one technology that can do it "all." Why not use multiple technology strengths based on environmental, system configuration and requirements? I hope this POV can help manage some of these missed expectations.
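For illustration only, a minimal sketch of the double knock gating idea: alert only when an independent physical sensor and the video analytic both trigger within a short window (the event times and correlation window are invented; this is not Optex's actual logic):

```python
DOUBLE_KNOCK_WINDOW_S = 5.0  # assumed correlation window

def double_knock(sensor_hits, video_hits, window=DOUBLE_KNOCK_WINDOW_S):
    """Return sensor timestamps corroborated by a video hit within the window."""
    return [s for s in sensor_hits
            if any(abs(s - v) <= window for v in video_hits)]

# Hypothetical timelines (seconds): video analytics trip on headlights at 30s,
# the PIR trips on a stray heat source at 100s, both trip on a real intruder ~200s.
sensor_hits = [100.0, 200.0]
video_hits = [30.0, 199.0]
print(double_knock(sensor_hits, video_hits))  # [200.0] -- only the corroborated event
```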

(1)
Undisclosed Manufacturer #7
Jul 01, 2016

There is certainly room for improvement, and here is an example: in automotive-grade analytics used for self-driving cars (ADAS), the failure rate requirements are extreme. This level of analytics deploys a lot of processing performance and multiple sensor types, and in general uses other classes of algorithms than what is typically used in security today. True, the use cases are different, but there is significant overlap in tracking and object classification.

A challenge in surveillance is the inability to charge (enough) for analytics in a smallish market -> which means less R&D can be invested -> resulting in lower-performing solutions, etc. Not so in automotive analytics. That is a much bigger market, and good analytics is a must, with tough industry conformance testing, which drives specialized chipset and algorithm development. My guess is that such automotive technology will become more affordable and trickle down to surveillance in the next few years in some form, making security analytics more reliable and usable. Security, being a relatively small market, has a great tradition of adopting standards and technology from other, bigger markets. The era of seeing cameras and robotics has begun.

(2)
(1)
Murat Altu
Jul 01, 2016
AxxonSoft

Yes, the era of seeing cameras has begun, but it is not the automotive industry that is going to help. The automotive industry is always far behind IT and general electronics. The major limit for perfect analytics is not ideas or algorithms. Imagine that a mathematical model of the brain has already been designed and open sourced. The only problems were computational performance and education. NVIDIA has already designed a platform and SDK for deep learning (the technology to educate this "brain"), so we now have a platform. The only challenge is to find the most effective ways to teach artificial security guards :)

(1)
Undisclosed Integrator #3
Jul 02, 2016

Your timing was off; I believe the guy who put too much faith in his self-driving Tesla would argue the point differently today, if he were alive to do so.

So far it appears the camera was blinded by the sun and the truck was white.

Undisclosed #1
Jul 02, 2016
IPVMU Certified

I believe the guy who put too much faith in his self driving Tesla would argue the point differently today, if he were alive to do so.

Also might have helped if he wasn't watching that Harry Potter movie at the time...

Undisclosed Integrator #3
Jul 02, 2016

He expected 100% probability of detection (PD) and a 0% nuisance/false alarm rate (NAR/FAR).

(1)
Sean Nicholas
Jul 01, 2016

For what its worth, every implementation of abandoned objects or removed objects using VMD (video motion detection) is destined to fail.

I do however have first hand experience with more than one implementation of abandoned objects (or removed objects) using NMD (non-motion detection), with a video analytics company out of Toronto, circa 2008. I won't plug their name, but you can google Non-Motion Detection, as it's a patented technology which allows them to operate extremely effectively in crowded environments.

With standard VMD implementation the detection times for abandoned object is often within seconds of the object being placed. This is because of limitations of this solution using VMD.

Simply ask any analytics company if their object detection is based on VMD or NMD and/or specifically inquire whether their solution can detect an object that is left behind for 5 minutes in a busy environment , i'm certain they would hesitate and be vague in their response. However, if you ask the video analytics provider that's capable of using NMD, i'm certain you'll get a much more confident response.

Better question for the sales people are to ask if they can detect a car parked for longer than 20 minutes. Its guaranteed that VMD will fail in this scenario, but for NMD this is where it would shine.

It's a pity that RFQs aren't more specific. Customers should require that a system can detect objects left behind for longer than 3 or 5 minutes; it would protect them.

FYI, another highly effective application of NMD is graffiti detection.

Avatar
Murat Altu
Jul 01, 2016
AxxonSoft

Sean, I'm sure that no one is using VMD for abandoned objects. It is simply impossible to get any result with VMD, because we are not looking for moving objects. Calling it NMD is a salesman's trick to highlight your uniqueness to a customer who doesn't understand anything about analytics. The typical abandoned object analytics idea is to average images over a long period of time to estimate the background, average over a shorter window (20 minutes for your parked car example), and then compare the two; the difference will show the abandoned objects. I have never heard of anyone using VMD for this, because it is just silly.
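
Here is a minimal sketch of that dual-background idea, assuming OpenCV; the learning rates, threshold, and clip name are illustrative assumptions, not any product's tuning:

import cv2
import numpy as np

cap = cv2.VideoCapture("station.mp4")  # hypothetical input clip
ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
slow_bg = gray.copy()  # long-term background, adapts over hours
fast_bg = gray.copy()  # short-term background, absorbs static objects in minutes

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    cv2.accumulateWeighted(gray, slow_bg, 0.0005)  # very slow learning rate
    cv2.accumulateWeighted(gray, fast_bg, 0.02)    # faster learning rate
    # A newly static object becomes "background" to the fast model while
    # remaining "foreground" to the slow one; the difference isolates it.
    diff = cv2.absdiff(fast_bg, slow_bg).astype(np.uint8)
    candidates = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)[1]
    cv2.imshow("abandoned object candidates", candidates)
    if cv2.waitKey(30) == 27:  # Esc to quit
        break

Note that this comparison is exactly why crowded scenes are so hard: a person who stands and waits without moving looks just like an abandoned object to it.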

(1)
(1)
SN
Sean Nicholas
Jul 01, 2016

You are certainly right about it being impossible to get any result with VMD. However, if you've ever experienced the snake oil salesmen of video analytics, I think you would reconsider your first statement about being "sure that no one is using VMD for abandoned object" :)

I hope that readers don't take your statement and assume all implementations are the same.

Years ago, I had experience with VMD-based attempts at object left behind. They only ever worked with short durations and relatively empty scenes.

For the sake of users like the "Manufacturer President" who prompted this discussion, I thought I would share my experience in hopes that integrators and consultants can become more educated and discerning, and ask the right questions if they ever offer this to a client. If anyone takes anything away from my comments, it should be to ask about the implementation, ask about busy environments, and particularly ask about longer durations. This will help the integrator filter out solutions they should stay away from.

Ultimately, the best advice is to pilot/demo any system within the proposed scenario.

Cheers

JH
John Honovich
Jul 01, 2016
IPVM

if you've ever experienced the snake oil salesmen of video analytics, I think you would reconsider your first statement about being "sure that no one is using VMD for abandoned object" :)

Sean, +1

Don't underestimate snake oil salesmen....

U
Undisclosed #1
Jul 01, 2016
IPVMU Certified

...I think you would reconsider your first statement about being "sure that no one is using VMD for abandoned object" :)

Since your entire post was dedicated to disparaging VMD-based object-left-behind analytics, where the only stated alternative was the patented NMD technology, I would assume that the VMD form is quite common, if not the norm.

Can you help us by indicating which products you are familiar with that use VMD for abandoned objects?

SN
Sean Nicholas
Jul 02, 2016

My last experience with any form of Video Analytics was over 6 years ago, so it wouldn't be fair to elaborate.

My best advice would be to scrutinize demo video from the vendor, and if the scene is empty, be very skeptical.

Another sign is when the red box around an object constantly resizes after triggering. That is certainly a giveaway that they are trying to implement the solution using VMD.

See a generic video below of an example where I am certain VMD is used as the underlying algorithm.

U
Undisclosed #1
Jul 03, 2016
IPVMU Certified

Sean, are you saying that here the algorithm determines the car is "wrongly parked" by noting the direction of the previous motion and not by the orientation itself?

P.S. This is why I always *back into* a spot wrongly. Drives the analytics nuts...

UI
Undisclosed Integrator #3
Jul 03, 2016

I won't comment about the bounding box resizing and the use of blobology for detection.

I will say the examples demonstrate why object left behind and object removed don't work well.

How long can the laptop be blocked before an alert would sound? That's a bad FOV for this analytic.

Seriously, how many alerts during Black Friday shopping in a mall for this object left behind analytic?

UM
Undisclosed Manufacturer #8
Jul 01, 2016

What did Einstein say again:

“We cannot solve our problems with the same level of thinking that created them”

Video analytics needs a drastically different approach. Maybe integrated multi-sensor negotiation (not video alone) would get us somewhere.
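
As one sketch of what that could mean: raise an alarm only when independent sensors agree within a short window, so no single noisy sensor can generate the flood of false alerts described above. The sensor names, weights, and threshold below are hypothetical assumptions, not any product's logic:

def fused_alarm(events, now, window_s=5.0, threshold=0.7):
    """events: list of (timestamp, sensor, confidence) tuples."""
    weights = {"video": 0.5, "pir": 0.3, "fence": 0.2}  # hypothetical sensors
    # Keep only recent events, and the best confidence per sensor type
    best = {}
    for ts, sensor, conf in events:
        if now - ts <= window_s:
            best[sensor] = max(best.get(sensor, 0.0), conf)
    score = sum(weights.get(s, 0.0) * c for s, c in best.items())
    return score >= threshold

# Video alone at 0.9 confidence scores 0.45, below threshold:
print(fused_alarm([(100.0, "video", 0.9)], now=102.0))  # False
# Video plus a PIR hit in the same window scores 0.75, so it alarms:
print(fused_alarm([(100.0, "video", 0.9), (101.5, "pir", 1.0)], now=102.0))  # True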

(2)
Avatar
Sagy Amit
Jul 09, 2016

Wow...

I usually give up on reading comments after 8, maybe 9.

Admittedly this subject is close to my heart and I have played the devil’s advocate on many discussions such as this one in the past. So I’ll try to be objective here, as much as I can after 11 years in Video Analytics and 4 different manufacturers.

I will work from the main story and avoid responding to individual comments, as there are too many of them.

“Conditions in metro are really bad for this kind of analytics”

Well, no freak’n way, Einstein!! BUT YOU REALLY WANTED TO CLOSE THE SALE, DIDN’T YOU?

I could write a book about all the projects I have turned down in which VA was just NOT the right solution!

From detecting sharks off a Florida beachfront hotel to telling friend from foe (U.S. military…), VA is not meant for that!

Nevertheless, saying that video analytics is a worthless solution?? That is like saying that police radar detectors are useless because of false alarms...

It is true that very few (if any) VA companies made it to the finish line of profitability. We can blame many factors, such as timing, investors' patience (or lack thereof), and bad leadership.

Nevertheless, we can't take away from the great success stories in which video analytics is used successfully and almost exclusively. One example is the remote video monitoring (RVM) business. There are tens if not hundreds of small-time operations that cater to 100+ locations and provide them with a useful service.

“Does an ADT burglar alarm false alert 5-6 times an hour? No. Do you think ADT would be out of business if it did? Yes.”

First off, the burglar alarm industry, for the longest time prior to recent years' regulations, cost taxpayers $2B annually in false alarms. Statistics don't lie: 90% of all police dispatches to alarms were false! In fact, the reason VA is so useful in the RVM business is the recent requirement for video verification of alarm events.

In my 11 years in the business, there has been very little improvement in VA performance. I'd argue that we have even gone backwards on innovation.

The reasons??

I don't know; maybe greed, made-up market value projections, and the rabbit hole of inexcusably dishonest marketing campaigns. I recently gave up my job for exactly that reason.

We will never eliminate ALL false alerts. Heck, the radar detector in my car drives me nuts, but it has saved my ass on countless occasions!

For fence-line intrusion detection, where the alternatives are "dumb" cameras, PIR-triggered cameras, or fence-sensor-triggered cameras, analytics is still the best option. With current thermal camera prices, thermal analytics is more cost-effective than most solutions and provides the best combination of detection and verification.

Bottom line, for perimeter detection, what is a better alternative??

(1)
(1)
Avatar
Sagy Amit
Jul 09, 2016

As to "Object Left Behind", a requirement to detect potential bombs left to detonate in a crowded mall, subway station, etc. From a pure optics standpoint, will never be a reliable solution. those environments are too crowded. in most cases the object will be blocked from camera FOV. Where it will work, indoor, sterile environment, it beats the purpose..

As to the guy with the new bombastic acronym: if you wait 5 minutes or more to provide a detection alert, the bomb has already gone off...

The ONLY current solution for suspected objects and suicide bombers is human vigilance.

(3)