Testing VideoIQ's Video Analytics (iCVR)

Published Jul 30, 2009 00:00 AM

In this test, we examine one of the most discussed manufacturers in video analytics - VideoIQ [link no longer available]. Their claims of automatic calibration and self-learning technology have drawn substantial interest. Their most recent product offering, the iCVR [link no longer available], is one of the few products available that integrates a camera, DVR and analytics into a single unit.

Challenges with Analytics

Over the last few years, analytic performance has been a hotly debated topic, with many users reporting disappointing results.

Compounding the challenge was a lack of public, independent test results. With huge marketing campaigns but minimal third-party technical reviews available, determining what worked and how well it worked was difficult.

Summary of VideoIQ Analytic Test

These are key findings from our test:

  • False Positives in a variety of environments were uncommon. A small number of conditions generated periodic false positives.
  • False Negatives (missing suspects) were generally very rare, except for a number of conditions that should be carefully considered.
  • Performance was achieved without any software configuration; however, users should recognize system limitations and plan accordingly.
  • Initial start-up and system re-starts created temporary reductions in performance.
  • Performance was consistent with technical guidelines provided by VideoIQ. However, some of the more aggressive marketing claims were not consistent with our test results.

Test Setup and Overview

We tested the iCVR with an 80 GB hard drive (model number VIQ-CT208) which has an MSRP of approximately $1,800 USD.

The test results were obtained from software version 1.3. This is important because our test began with an earlier version (1.1) that was noticeably less accurate than the current one. Testers and users should upgrade to the newer version.

Readers should note a number of limitations in the conditions tested. We did not test in snow, fog or hail. We did test in a limited amount of moderate, but not heavy, rain. Based on our observations with rain, bodies of water and shadows, we do not believe severe weather will significantly increase false positives. We do believe that weather can increase false negatives, as the ability to detect targets decreased along with the visibility of the scene.

Daytime performance

We ran a number of tests in a wide open field to better understand the usable width of the scene and daytime performance. In our daytime tests, we were able to reliably track suspects at scene widths up to 170 feet (approximately 50 meters). This exceeded the range specified in VideoIQ's technical documentation.

Of course, the usable width of the FOV also depends on night-time illumination requirements and the artificial illumination available.

Nighttime performance

When light levels fell under a few lux (and usually under 1 lux), system performance often degraded. Sometimes the system tracked targets that were clearly visible, but other times it missed them. As light levels decreased, tracking accuracy became less reliable.

Below 0.5 lux (measured in our office setup), the image became essentially unusable: black, with little contrast, and nothing could be seen.

Since low light is critical to many security applications, users should be careful to measure their lighting conditions and to understand the limitations of the built-in camera and analytic performance. While VideoIQ recommends adding artificial illumination, this is often impractical due to cost or implementation constraints.

Narrow Field of View performance

In narrow fields of view, under 35 feet wide, identification of targets was poor. If an object was moving slowly or loitering, the system could identify it. However, if a person walked through at a normal pace, the system frequently failed to identify them.

This is an important limitation that designers should note. While video analytics is more often used for larger outdoor areas, if you need coverage of a small area, VideoIQ's results may be poor.


False Alerts examination

False alerts were uncommon. In most tests, we experienced no more than one false alert every few hours. The false alerts that did occur seemed to be driven by a few common issues:

  • Reflected Light: sometimes reflected light was detected as a person (however shadows were not)
  • Large branches: some larger branches were periodically detected as a person (however leaves were not)
  • Rebooting the camera: most times we re-started the camera, a few false alerts would be triggered (the system would stabilize within tens of seconds, after which false alerts ceased)
  • Shifting the camera: if the camera was shifted/jolted, sometimes a few false alerts would be triggered (again the system would stabilize shortly thereafter)
  • Large light changes: sometimes when lights were turned on or off or when the camera switched from day to night mode (and vice versa), a few false alerts would be triggered (until the system stabilized itself)

The actual number of false alerts will certainly depend on site-specific conditions. From our tests, we would expect false alerts not to exceed a handful (2-6) per camera per day. The rate could be lower than this, but again it would depend on site-specific conditions.

The screencast below demonstrates a number of these issues.

Object Appearance search performance

When objects are detected by VideoIQ, they are essentially catalogued. An investigator can then select a catalogued object and run a search for objects similar in appearance to the selected one.

In our tests, object appearance search worked best when the video searched had similar lighting, FOV and colors of the original object. For instance, searching for a delivery person who wears a brown outfit and comes to your office every day would work well for multiple week searches. However, searching for a random individual who changes outfits daily and may go to different offices with different camera setups is unlikely to return successful results over multiple days.
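VideoIQ does not document how its appearance matching works, but the pattern we observed is consistent with nearest-neighbor matching on a per-object appearance descriptor. The toy sketch below (all names, descriptors and the threshold are hypothetical, and the 3-bin color histogram is purely illustrative) shows why similar lighting and colors matter: the same brown-clad delivery person matches across days under similar light, while a dim-scene capture of that same person falls below the match threshold.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def search(catalog, query, threshold=0.95):
    """Return catalogued object IDs whose descriptor resembles the query."""
    return [oid for oid, hist in catalog.items()
            if cosine_similarity(hist, query) >= threshold]

# Hypothetical descriptors: (brown, blue, white) color proportions.
catalog = {
    "delivery_person_day1": [0.80, 0.10, 0.10],  # brown outfit, good light
    "delivery_person_day2": [0.78, 0.12, 0.10],  # same outfit, similar light
    "visitor_blue_jacket":  [0.10, 0.80, 0.10],  # different person
    "delivery_low_light":   [0.40, 0.30, 0.30],  # same person, dim scene
}
query = [0.80, 0.10, 0.10]
print(search(catalog, query))
# → ['delivery_person_day1', 'delivery_person_day2']
```

The dim-scene capture of the delivery person is missed even though it is the same target, mirroring our finding that searches only succeed when lighting, FOV and colors remain similar.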


Analytics configuration and optimization review

There are very few software configuration options for VideoIQ's analytics. The only one we actively used was the choice between "indoor" and "outdoor" modes (see screencast below).

The most important element we found in configuration and optimization is the presence of an auto-calibrating period whenever the camera first starts up (visually displayed as a cyan bar overlaid on the live video view). This auto-calibrating period seems to be the cause of the frequent false alerts generated when the system reboots. Beyond that, VideoIQ notes that performance improves over time, specifically that false alerts are minimized once the auto-calibration period is complete.

Given this auto-calibration function and the 3-5 minute reboot period for the camera, it is important for users to ensure backup power is provided and that any threats of frequent power outages to the camera are eliminated. If this cannot be done, frequent recalibration has the potential to be a significant risk.


Contrast of VideoIQ's performance to marketing claims

While most video analytic providers make bullish marketing claims, 3 of VideoIQ's claims from their Analytics Whitepaper [link no longer available] stood out as being unmet in our testing.

  • Teach by Example: VideoIQ claims that they have "introduced an industry breakthrough" where operators mark false alerts and the system automatically improves its model. In the previous version, the button for this functionality was available, but it did not impact system performance. In the current version (1.3), even the button has been removed. This could be a powerful tool, but it is not currently available and stands in stark contrast to the marketing assertion that it has been introduced.
  • London Bombings: VideoIQ claims that, with their products, investigators could retrieve matches to bombing suspects and with further searches find accomplices. The former is unlikely, the latter nearly impossible. As our test results found, at a small scale, object search returned accurate matches. However, with a larger scale (numerous cameras) and different angles/lighting conditions, such searches would provide hundreds of incorrect matches and may not capture any correct matches (especially if the search ran over multiple days).
  • Tracking objects consistently: VideoIQ makes a number of claims about tracking across dynamic backgrounds, changing directions and moving behind walls/barriers. While VideoIQ will identify targets in all of these conditions, it regularly loses track of objects (sometimes for a split second but often for a number of seconds) in these conditions. When using the system for intrusion detection, this is not a problem. However, if you are using the system to detect loitering or crowds, this can be a problem. For loitering alerts, the bounding box needs to be consistently around the target for the set period of time. If the bounding box disappears even for a moment, then the clock is restarted. This created issues with triggering loitering alerts. With larger groups of people, the system routinely groups multiple people together (see the false alert section above). This would throw off alerts set for crowd detection. 

While these issues do not impact core alerting/detection, they are important to establish proper expectations of performance.
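The loitering-alert behavior described above, where any momentary loss of the bounding box restarts the dwell clock, can be sketched as a simple model. This is an illustrative reconstruction of the observed behavior, not VideoIQ's implementation; the function name and the dwell threshold are hypothetical.

```python
LOITER_SECONDS = 10  # hypothetical dwell threshold before an alert fires

def loiter_alerts(track, fps=1):
    """track: per-frame booleans, True if the bounding box is present.
    Returns frame indices at which a loitering alert would fire."""
    alerts, dwell = [], 0
    for frame, present in enumerate(track):
        dwell = dwell + 1 if present else 0  # any dropout restarts the clock
        if dwell == LOITER_SECONDS * fps:
            alerts.append(frame)
    return alerts

# A target visible for 18 seconds total, but the tracker drops it at second 9:
track = [True] * 9 + [False] + [True] * 8
print(loiter_alerts(track))
# → [] : no alert, since neither unbroken run reaches 10 seconds
```

This models why a split-second tracking loss defeats a loitering alert: an 18-second visit produces no alert, while a target tracked continuously for 10 seconds would.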

Recommendations on VideoIQ's Analytic Applications

At a high level, selecting video analytics depends on 3 criteria:

  • Tolerance to false positives
  • Tolerance to false negatives
  • Integrating/Monitoring the system

While the 3rd point is important, I will leave that to our report next week on VideoIQ's VMS. As for the first two:

  • False positives were modest. An organization with a monitoring staff (such as remote monitoring) could likely handle this without a major problem.
  • False negatives could be significant in certain situations. An organization that must catch the suspect every time (such as the military or a homeland security organization) may not be satisfied with some of the risks here. By contrast, an organization looking to reduce property crime may accept the benefit of reducing some crimes but not all.