Testing: ioimage wdc100dn Video Analytics

By Benros Emata, Published Sep 14, 2010, 12:00am EDT (Research)

In early 2010, one of the best-known analytics developers, ioimage, was acquired by DVTel. This triggered a debate about its meaning and the state of the overall video analytics market. No doubt, the initial wave of enthusiasm for video analytics in the security industry has long since passed, supplanted by well-founded skepticism.

In this report, we test ioimage's wdc100dn camera [link no longer available] to better understand its performance. We successfully integrated the ioimage camera with the ExacqVision VMS to record video and monitor alerts from our test.

This report continues our video analytics testing series. We recommend readers compare to our other tests: Agent Vi, VideoIQ, VitaminD, and Archerfish Solo. VideoIQ is probably the best comparison as they are both 'smart cameras' aimed at the professional market. Agent Vi is another good comparison as their analytic software can be loaded on 3rd party cameras.

We examined the following issues/aspects:

  • How well does detection work in narrow or wide FoVs? Inside we demonstrate a test of detection capabilities from 10ft. to 300ft. horizontal FoVs.
  • How easy or difficult is the system to setup? Inside we examine how to calibrate the system and the key lessons we learned.
  • How well does the system perform in low-light conditions? Inside we provide a comparison of daytime versus nighttime performance.
  • Can the system really eliminate nuisance alarms? Inside we challenge the system with scenes containing plenty of troublesome vegetation and lighting effects.
  • How is the system optimized to suit various applications? Inside we discuss the advanced settings available inside the system.

    ioimage's analytics performed well in a variety of environments with minimal false alerts due to vegetation and stationary lighting effects. Reflected light from objects in low-light scenes was the most notable cause of false positives. Low-light scenes produced a modest increase in false negatives, especially at the outer reaches of the scene and in lower-contrast areas.

    The analytics detected humans quite consistently from very tight field of views (as narrow as 10 feet) to very wide field of views (as wide as 300 feet).

    ioimage analytics require calibration that demands domain-specific (not IT) skill and experimentation. While the time per camera calibration is minimal (generally 15 minutes or less for a two-person team), becoming proficient can take days of training and practice. Failure to properly calibrate scenes generally resulted in increased false negatives (missed intruders) but minimal false positives (nuisance alerts).

    Product Overview

    The wdc100dn [link no longer available] (MSRP $1795) is one of four ioimage intelligent IP cameras featuring built-in analytics. The ioicam family of IP cameras [link no longer available] is designed for stand-alone or VMS/DVR-integrated operation and features PoE (excluding the PTZ model), an IR cut filter, and an integrated web server that simplifies management tasks.

    The ioicam intelligent IP camera line [link no longer available] also includes:

    • mmp100dn (3 megapixel)
    • xptz100dn (PTZ - subset of detection types)
    • sc1dn (entry-level VGA - subset of detection types)

    Among the ioicam IP cameras, the mmp100dn and wdc100dn support the broadest variety of detection rule types, such as intrusion region, tripwire, fence trespass, unattended baggage, object removal, stopped vehicle, loitering, and camera tampering. The mmp100dn adds the ability to track objects in a Picture-in-Picture (PIP) or split-view format.

    Architecturally, ioimage IP cameras support stand-alone or VMS/DVR integrated operation. Stand-alone operation is accomplished via IP or analog connectivity (the ioicam features an analog output). 

    In an IP stand-alone environment, configuration, live viewing and alarm management are all conveniently performed via a web browser connection to the camera's internal web server. However, in this scenario there is neither support for on-board storage nor local manual/scheduled recording.

    In contrast, the analog method uses an on-screen display for alarm monitoring. Management, configuration, and optimization still require an occasional IP connection to the camera's web server application, and an analog recording device is required to store video.


    Prospective adopters of the ioicam technology should carefully consider the complexity of the scene to be monitored and the specific detection type(s) that will be required for the deployment.  More difficult scenes require more expertise to deploy, and in some applications the system may not perform well at all.  In more challenging applications the installation will require a strong level of conceptual knowledge to find elegant strategies to circumvent problem areas.

    The support for autonomous stand-alone and VMS/DVR-integrated operation provides more deployment options architecturally and thus delivers flexibility in the cost of implementing a system.  However, for ioicam systems smaller than 4 to 8 channels, many free to low-cost VMS/DVR solutions exist that can provide video storage, management, and alarm monitoring capabilities.

    End-users should strongly consider a recording platform for the ioicam system, if only as a means to gain insight into the system's performance over time and as a reference for potential optimizations. For example, we successfully integrated the ioicam with a free one (1) channel ExacqVision VMS.

    Summary of Test Results

    • Only the simplest low-complexity scenes with basic detection types can be configured quickly with little knowledge or forethought
    • Procedures to calibrate the camera for video analytics operation are clearly described in a step-wise manner in the user interface
    • Following the procedures properly consistently produced calibrations that withstood the internal verification tests
    • Includes some very helpful aids in calibrating analytics, such as a digital zoom PIP feature, unknown camera height calculation, horizon estimation, warning on poor calibrations, "show human size" feature
    • Calibration proved challenging on extreme overhead 'shots' and on extremely deep/far 'shots' with shallow down-tilts
    • Analytics minimized nuisance alarms due to vegetation exceptionally well during both daytime and low-light conditions
    • Analytics minimized nuisance alarms due to flickering light sources and other challenging lighting effects very well during daytime conditions.  Under low-light conditions, reflective surfaces triggered false alarms when light from moving vehicles mimicked movement on those surfaces.  Stationary light sources did not produce any noticeable frequency of nuisance alarms during low-light conditions.
    • Analytics consistently exhibited poor discrimination between human and vehicle subjects within human intrusion detection regions, i.e., vehicles triggered a high number of false positives
    • Analytics demonstrated an exceptionally high probability of detection at various horizontal FoVs and depths in a low complexity daytime testing scenario
    • Analytics exhibited high frequency of false negatives on extreme overhead mounting location (80+ feet above detection plane) within a complex scene (high vehicular and human traffic)
    • Optimization settings in the 'advanced' rules definition are intuitive and not overly complex to understand theoretically. We did not extensively test the effectiveness of 'advanced' optimization options

    Physical Overview

    The ioimage wdc100dn integrates analytics inside an IP camera form factor. The ioimage camera offers no on-board storage, HDMI output, or USB support. In comparison, VideoIQ's ICVR includes a hard disk drive, solid state drive, SD card slot, and USB port within the camera itself. In the following screencast, we cover the physical attributes of the wdc100dn and point out a few subtle caveats.

    Key points include:

    • The 1/3" CMOS sensor is made by Pixim
    • The I/O terminal block supports 1 alarm input and 1 relay output
    • Power input supports 12-24VDC or 24VAC
    • BNC connector is for analog video out
    • By default, the analog video out is disabled, but can be enabled in the web interface settings
    • The 3.5mm jack is for two way audio support
    • Two way audio can't be enabled from the web interface - only through third party VMS (if supported)
    • Camera supports CS mount lenses (lenses are not included with the camera)
    • Accessories include a documentation & utilities CD, quick install guide, and audio splitter plug
    • The analog signal provides additional video overlays and on-screen display information that is not available on the IP video feed

    Installation, Setup, Calibration and Detection Rules

    There are some key concepts to keep in mind when selecting a vantage point for the camera in relation to the scene of interest. When object size is more critical for the application, a shallower camera down-tilt provides more discrimination of size in relation to varying depths within the scene. Conversely, if accurate distance measurements along the detection plane are of greater priority, a steeper down-tilt provides better resolution (pixels/foot) throughout the depth of the scene. An example of where distance resolution would be a priority is when detection rules rely on speed measurements of objects to optimize detection behaviors.
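    The down-tilt trade-off can be seen with simple flat-ground, pinhole-camera geometry. The sketch below is our own illustration, not an ioimage tool, and all parameter values (mount height, vertical FoV, sensor rows) are assumed for demonstration; it estimates how many ground feet a single pixel row covers at the center of the frame for a shallow versus a steep down-tilt:

```python
import math

def ft_per_pixel_row(height_ft, tilt_deg, vfov_deg, v_pixels, row_frac=0.5):
    """Approximate longitudinal ground feet spanned by one pixel row.

    Assumes a flat ground plane and a pinhole camera; row_frac is 0.0 at
    the top of the frame and 1.0 at the bottom.
    """
    # Angle of this pixel row below horizontal
    angle = tilt_deg - vfov_deg / 2 + row_frac * vfov_deg
    if angle <= 0:
        return float("inf")  # row looks at or above the horizon
    step = vfov_deg / v_pixels  # angular height of one pixel row
    d_near = height_ft / math.tan(math.radians(angle + step))
    d_far = height_ft / math.tan(math.radians(angle))
    return d_far - d_near

# Same 15 ft mount and ~40 degree vertical FoV on a 480-row sensor (assumed)
shallow = ft_per_pixel_row(15, tilt_deg=10, vfov_deg=40, v_pixels=480)
steep = ft_per_pixel_row(15, tilt_deg=40, vfov_deg=40, v_pixels=480)
# The steeper down-tilt spans fewer ground feet per pixel row, i.e., it
# yields more pixels/foot along the depth of the scene.
```

    Under these assumed numbers, the steep tilt covers roughly a tenth of the ground distance per pixel row that the shallow tilt does, which is the "better distance resolution" effect described above.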

    Establishing the browser-based connection to the ioicam requires ActiveX controls, and as such, Microsoft Internet Explorer is recommended.

    For analytics functionality, setup of the ioicam system consists of a calibration phase and a rules phase.  The calibration phase requires the application of four (4) human markers and two (2) ground markers.  After the calibration markers are provisioned, the next step optionally allows the administrator to provide an empirical value for the camera's height above the detection plane and to set the horizon line.  However, the manufacturer suggests skipping straight to the last step of calibration to verify what the system has automatically calculated for camera height and the horizon line.  If the calibration markers were provisioned properly, the system's estimates of camera height and horizon line should come fairly close to their actual values.  If the markers are provisioned poorly enough, the administrator may receive a 'warning' message suggesting that calibrations be adjusted.

    Simulating a poor implementation with purposefully inaccurate height and ground marker provisioning produced some interesting effects on system performance. Note that in both of the following scenarios, the system provided a warning that the calibrations were inaccurate (we simply ignored these messages for purposes of testing).

    Our first simulation of 'poor' calibration defined the human and ground markers such that the system would 'think' objects were smaller than in actuality (more zoomed in than actual). As a result human subjects within the scene appeared much smaller than the detection size range.  This resulted in roughly a one-third to one-half decrease in the probability of detection of human subjects, continued false alarms on vehicles, and no change to the normally observed low nuisance alarm rate on vegetation. 

    The second simulation erred in the opposite direction, where human subjects were made to appear much larger to the system than in actuality.  During this trial, absolutely no detections took place.

    In the following screencast, we provide a demonstration of the setup procedures required to calibrate the ioicam system and apply a basic human intrusion detection rule.

    Test Scenarios, Methodology and Discussion

    In nearly all test cases we configured a human intrusion detection region 'blanketing' the entire scene. Our methodology does not aim to exclude potential nuisance causing objects (e.g., vegetation, reflective surfaces, etc) from the detection region, but aims to include them in order to test the technology's native capacity to resolve such issues.

    We started tests with the default 'advanced' settings to establish baseline performance.  Some adjustments to 'advanced' settings were made to test the system's ability to tune detection behaviors.  In all scenarios, the human intrusion detection rule exhibited difficulty in discriminating against vehicles. And in nearly all test cases, nuisance alarms due to vegetation were minimal.

    In our initial Detection vs. FoV tests we mounted the camera only ~6.5 feet above the detection plane level.  This is below the manufacturer guideline of ~15 feet for detection out to approximately 150 ft depths using a 2-5mm lens.  In this relatively 'controlled' environment we still achieved solid detection results.  Additionally, we achieved reliable detection out to depths greater than 200 feet.

    In an extreme test case where the camera was mounted approximately 80 feet above the detection plane, 'looking' down into a busy intersection, we experienced a clear increase in the level of false negatives on human subjects. This deficiency seemed to disproportionately affect human subjects moving longitudinally across the scene.  Results for this particular trial seem to suggest a limitation for high mounting locations with steep camera down-tilts (overhead shots), such as on roof-tops 'looking' directly or steeply downward.

    In another test case, we zoomed in on a distant roof-top parking lot.  The camera was situated approximately 35 feet above the detection plane. Due to the length of the 'shot', the camera down-tilt was shallower than the manufacturer guidelines. The FoV did not yield as many pixels per longitudinal distance as a steeper down-tilt would have provided, which created some issues with calibrating ground markers effectively. We used ground markers of approximately 100 ft to compensate for the lack of pixels/foot in the longitudinal direction. Despite some challenges in calibration, detection performed nearly as well as in the Detection vs. FoV test: low false negatives, low nuisance alarms due to vegetation, and high false positives on vehicles.

    In the low-light (nighttime) test scenario, our vantage point provided an inherently manageable balance between camera height, camera down-tilt, and depth of scene. However, the scene was considerably busier than the more 'controlled' scene in an earlier test (Detection vs. Horizontal FoV). Calibration tasks proceeded without notable issue. We performed a daytime control of the same scene and setup to establish a baseline. The daytime performance was in alignment with expectations based on our previous findings, with a slight increase in false negatives in this fairly busy scene.

    As the scene transitioned to night, the low-light test case exhibited a low frequency of nuisance alarms due to vegetation and stationary (at times flickering) light sources. There were some reflective surfaces that consistently triggered nuisance alarms when light from approaching vehicles simulated movement on those surfaces.  The low-light scenario also included a couple of extremely low-contrast areas, which consistently produced false negatives on human subjects. In comparison with the daytime control, the nighttime trial produced a slightly higher frequency of false negatives, especially at the fringes of the scene and within low-contrast areas.

    While there are some clear strengths in the ioicam intelligent IP camera products' usability and performance, the application, implementation, and optimization of these types of systems still require expert 'hands' for all but the simplest detection needs. With 'plain' video surveillance applications, the behaviors are relatively simple to predict and estimate because the output of the system is mostly optical and visual.  Our own senses can tell us whether a camera at a particular height, azimuth, down-tilt, lens angle, etc. will produce the results the end-user desires.  In analytics, the output or performance is not so obvious: the optical/visual output represents information that is only useful in the context of 'human' senses, not the 'analytics' senses.  Thus, the non-intuitive nature of designing intelligent video systems requires significant conceptual and empirical knowledge to properly design, deploy, and optimize an effective video analytics solution.


    Detection vs. FoV

    Our first test was designed to determine ioicam's ability to detect human subjects at varying horizontal FoVs. We minimized the complexity of the scene by limiting excessive vegetation, human traffic, and vehicles in order to produce focused results. Our test subject entered and left the FoV in a lateral progression from one edge to the other.  We repeated this procedure at horizontal widths starting as narrow as 10 feet and ending as wide as 300 feet. Note that the lens angle of ~52 degrees results in a near 1:1 ratio of depth to horizontal FoV. For example, at a depth of 100 feet from the camera origin, the horizontal FoV is roughly 100 feet as well.

    Test Parameters:

    • Camera Height: ~6.5 feet above ground level
    • Focal Length: 5mm
    • Horizontal Lens Angle: ~52 degrees (empirical)
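    The near 1:1 depth-to-width ratio follows directly from the lens angle. A quick sketch of the arithmetic (our own calculation, assuming an ideal rectilinear lens):

```python
import math

def horizontal_fov_ft(depth_ft, lens_angle_deg):
    """Scene width at a given depth for a given horizontal lens angle."""
    return 2 * depth_ft * math.tan(math.radians(lens_angle_deg / 2))

# At 100 ft depth with a ~52 degree lens, the scene is ~97.5 ft wide,
# i.e., roughly a 1:1 ratio of depth to horizontal FoV.
width = horizontal_fov_ft(100, 52)
```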

    In this low-complexity scene there were few to no false negatives at all tested horizontal fields of view (10 ft to 300 ft).  We experienced no nuisance alarms originating from the vegetation near the horizon. Also, some challenging lighting conditions did not appear to adversely affect the analytics' performance.

    In this screencast we examine the results obtained from the Detection vs. Horizontal FoV testing scenarios.

    Performance Issues and Theoretical Optimizations

    In our tests, the ioimage product performed well in mitigating nuisance alarms due to vegetation, flickering light sources, small animals, and reflective surfaces.  However, in this video we demonstrate some of the nuisance alarms discovered during our tests, as well as some issues with false negatives that appeared more frequently during nighttime/low-light conditions. In addition, we discuss some potential fixes for these types of performance issues.

    • When deploying detection regions, try to place the detection regions in the middle of the scene.  The analytics appear to require some 'breathing' room to monitor and analyze a subject before a detection is triggered. A disproportionate number of false negatives seemed to occur near the fringes in our test scenarios.
    • For human intrusion regions, make an effort to avoid including areas with any level of vehicle traffic. Theoretically, false positives on vehicles can be minimized by decreasing the object 'speed' limit, but this optimization was not tested sufficiently to make any judgments on its efficacy.  Regardless, traffic speed can vary throughout the course of a day and trigger nuisance alarms as a result.
    • For troublesome vegetation, remove it from the detection region. Alternatively, enclose the vegetation in its own detection region and define parameters to lower the sensitivity of that region, e.g., increase the required minimum distance traversed inside the region (an advanced setting).
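    To illustrate the speed-based idea in general terms, the following sketch post-filters alerts by rejecting tracks whose ground-plane speed exceeds a plausible human pace. This is hypothetical code of our own, not ioimage's API, and the threshold value is an assumption to be tuned per site:

```python
import math

# Assumed ceiling for human movement; a sprinting adult is roughly 15-20 ft/s
MAX_HUMAN_SPEED_FPS = 15.0

def is_plausible_human(track):
    """track: list of (time_sec, x_ft, y_ft) ground-plane positions, as a
    calibrated camera could produce. Returns False for tracks moving faster
    than a human plausibly could (likely vehicles)."""
    if len(track) < 2:
        return True  # not enough data to judge; let the alert through
    t0, x0, y0 = track[0]
    t1, x1, y1 = track[-1]
    if t1 <= t0:
        return True
    speed = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
    return speed <= MAX_HUMAN_SPEED_FPS

# A walker covering 8 ft in 2 s passes the filter;
# a car covering 120 ft in 2 s is rejected.
walker = [(0.0, 0.0, 0.0), (2.0, 8.0, 0.0)]
car = [(0.0, 0.0, 0.0), (2.0, 120.0, 0.0)]
```

    As the bullet above notes, a fixed speed limit is fragile: vehicles in slow traffic drop below any threshold a running human can exceed, which is why excluding vehicle areas from the region remains the safer tactic.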

1 report cites this report:

New HD Video Analytic Cameras (ioimage / DVTel) on Sep 17, 2014
DVTel is releasing their first HD video analytic cameras, nearly 5 years...