Top 10 Surveillance Myths Debunked
By John Honovich, Published Jun 25, 2011, 12:00am EDT
Aggressive marketing creates dangerous myths. Vendors transform their product's highest aspirations into concrete claims. Unfortunately, these become generally accepted 'knowledge' that helps sales, hurts users and sometimes crushes a market (such as video analytics).
In this report, we examine 10 of the most serious myths facing the video surveillance industry. Our analysis is based on results from our systematic testing program, which disproved these claims.
Here are the 10:
- Myth: Resolution Comparison Diagram
- Myth: More Pixels = Higher Image Quality
- Myth: A Megapixel Camera is Equal to Many SD Cameras
- Myth: Pixels Per Foot is a Reliable Metric
- Myth: WDR Camera Specifications are Legitimate
- Myth: Minimum Illumination Specifications are Legitimate
- Myth: Superior Low Light Performance Claims
- Myth: IR Illuminators Massively Reduce Bandwidth Consumption
- Myth: VSaaS is Secure and Mature
- Myth: 80% Analytics are Good Enough
- Myth: Megapixel 'Virtually Eliminates' PTZ Cameras
Myth: Resolution Comparison Diagram

Most of you have seen megapixel comparison charts where overlaid boxes show how much more higher resolution cameras can capture than lower resolution ones. Here's an example:
While the layout varies by vendor, this is an industry wide technique.
These resolution comparison charts are dangerously misleading because they imply that all pixels are equal.
Here's an analogy. Let's say I claimed:
- A 600 pound man can lift twice as much as a 300 pound man.
The assumption is as clear as it is wrong. While more weight often correlates with more strength, this is far from universal.
- More weight does not guarantee more power. More pixels do NOT guarantee more details. Period.
This flawed assumption is the basis of a number of other myths and might be the most serious issue our industry faces as we attempt to properly integrate megapixel surveillance.
Myth: More Pixels = Higher Image Quality

While more pixels often deliver higher image quality, they do not always. Here are the 3 bounding factors to keep in mind:
- Light Variations: Glare and shadows can significantly reduce or eliminate the benefits of higher resolution. In real world video surveillance, it is very hard to overcome glare and shadow throughout the day (this is not a photo shoot where you can control lighting for a few hours and then leave). If the camera can see sunlight, windows or streetlights, be prepared for significant reductions in image quality for megapixel cameras.
- Low Light: Even if you have street lighting, at night, megapixel cameras will perform only marginally better than or equal to SD cameras. This is because of low light sensitivity restrictions and the impact of aggressive gain levels that increase noise. See our SD vs HD night shootout for proof and examples.
- Target Location: Even if a higher resolution camera can provide more details, often those details do not matter. For instance, once a person is far enough from a camera, a higher and lower resolution camera (even in ideal lighting conditions) will both show blobs. The higher resolution camera's blob may be bigger but the practical difference will be meaningless.
Megapixel cameras can provide higher image quality. However, it is imperative to factor in from the start (1) what lighting variations a scene faces, (2) if the scene will be dark at any time and (3) what practical differences the cameras will make in your scene.
It is simple to say 'more pixels = higher image quality,' but that assumption is bound to deliver underwhelming results and disappointed users.
Myth: A Megapixel Camera is Equal to Many SD Cameras (4, 9, 12, 27, 81, etc.)
Based on the reasons laid out above, this myth is clearly false. Megapixel tends to be better, but claiming that it is 4x or 10x better has no basis in reality. We examined this myth in detail in our debunking of an Arecont Rep's megapixel 'calculator'.
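The "equal to N SD cameras" pitch is just a ratio of raw pixel counts. Here is a minimal sketch of that arithmetic, using standard resolution formats (the resolutions are real; the equivalence claim built on them is the part being debunked, since the ratio counts pixels, not usable detail):

```python
# Raw pixel-count ratios behind "1 megapixel camera = N SD cameras" claims.
# The math is trivial; the flaw is equating pixel count with usable detail.

RESOLUTIONS = {
    "VGA (SD)": (640, 480),
    "D1 (SD)": (704, 480),
    "720p": (1280, 720),
    "1080p": (1920, 1080),
    "5MP": (2592, 1944),
}

def pixel_count(name):
    width, height = RESOLUTIONS[name]
    return width * height

baseline = pixel_count("VGA (SD)")
for name in RESOLUTIONS:
    ratio = pixel_count(name) / baseline
    print(f"{name}: {pixel_count(name):,} pixels = {ratio:.1f}x VGA")
```

A 1080p camera has 6.75x the pixels of VGA, which is where "equal to 6+ SD cameras" claims come from; as the sections above show, lighting, gain and target distance mean the real-world multiple is usually far smaller.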
Myth: Pixels Per Foot is a Reliable Metric
The goal of pixels per foot is to provide a standard metric that can be used across cameras to guarantee image quality specifications are met. Theoretically, if a specifier states that 40 pixels per foot are needed, they can be assured whatever camera manufacturer, model or resolution is used, the image quality needs will be met. While a noble attempt, this is fundamentally flawed.
Pixels per foot (or per meter) only works based on the assumption that all pixels provide equivalent image quality. That is false and it kills the metric.
See our 'Specifying Video Surveillance Quality' Report for our full recommendations on how to use Pixel per Foot metrics and avoid the dangerous consequences of this myth.
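The metric itself is simple division, which is exactly why it looks so attractive in specifications. A minimal sketch (the function name and example numbers are illustrative, not from any standard):

```python
def pixels_per_foot(horizontal_pixels, fov_width_ft):
    """Nominal pixels per foot: horizontal resolution divided by
    the width of the camera's field of view in feet."""
    return horizontal_pixels / fov_width_ft

# A 1280-pixel-wide camera covering a 32 ft wide scene:
print(pixels_per_foot(1280, 32))  # 40.0 nominal ppf
```

The calculation guarantees only pixel density, not image quality: two cameras both rated at 40 ppf can produce very different details once glare, low light and compression enter the picture.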
Myth: WDR Camera Specifications are Legitimate
While Wide Dynamic Range (WDR) functionality is a very important function to overcome lighting variations, WDR specifications are unreliable.
- It is easy for any manufacturer to identify their product as WDR. No standards, no third party testing, nothing. It is simply a marketing choice by the manufacturer.
- The most common quantitative specification is using dBs to identify range (e.g., 59 dB, 121 dB, etc.). These numbers are incomparable across models, rendering them useless.
In our tests, including a focused WDR study, clear differences exist in cameras' WDR performance that make a material impact on the image details captured. However, no easy way exists to determine this from WDR specification claims. Either keep track of our ongoing WDR scene tests or test yourself.
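For reference, those dB figures nominally encode a scene contrast ratio. A hedged sketch, assuming the common imaging convention of dB = 20 × log10(brightest/darkest) — manufacturers do not disclose their measurement conditions, which is why the numbers cannot be compared across models:

```python
def db_to_contrast_ratio(db):
    """Convert a dynamic-range spec in dB to a nominal contrast ratio,
    assuming the convention dB = 20 * log10(brightest / darkest)."""
    return 10 ** (db / 20)

for db in (59, 120):
    print(f"{db} dB -> {db_to_contrast_ratio(db):,.0f}:1 nominal contrast")
```

A 120 dB claim nominally means a 1,000,000:1 scene; since each vendor chooses its own test setup and acceptance criteria, the same number can describe very different real performance.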
Myth: Minimum Illumination Specifications are Legitimate
Most experienced surveillance professionals know this: Minimum illumination specifications can NOT be trusted. Really, just throw these numbers out the window.
- Numbers are incomparable amongst manufacturers: That Manufacturer A claims .01 lux while Manufacturer B claims .1 lux means absolutely nothing.
- Standards and assumptions used are different: Manufacturers vary in settings used for exposure, gain, etc. Equally important, what is considered minimally acceptable image quality varies.
- Image Quality is Generally Terrible: While manufacturers almost never release the resulting image behind their minimum illumination specification, from our discussions with insiders, these images tend to be grainy, dark and deliver little more than an outline of the scene - a far cry from the quality most users expect.
Despite this, RFPs continue to base product selection on these self-reported specifications (e.g., camera must have a minimum illumination of .00001 lux). Because of this, and in fairness to manufacturers, it is a stupid game they all have to play. If one company were 'honest', they'd lose a lot of deals.
Review our 'Surveillance Camera RFP Specification Template' for guidance on how to properly overcome these issues with WDR and minimum illumination specifications.
Myth: Superior Low Light Performance Claims
We often hear integrators, even experienced ones, talk about certain manufacturers having the 'best' low light performance. Almost universally, the manufacturer they praise is one who defaults to using a digital slow shutter.
Shutters control how much light a camera captures. A 'standard' shutter in surveillance is typically 1/30s. However, the longer the shutter stays open, the more light is captured. For example, a camera with a 1/6s maximum exposure (like most Axis cameras) takes in 5 times the light of one that uses a more 'standard' 1/30s.
Essentially, every IP camera allows for slow shutter speeds. The only difference is what defaults different manufacturers choose. Here are example defaults from our testing: Arecont 1/12.5s, Avigilon (H.264) 1/30s, Axis 1/6s, Basler 1/8s, Bosch 1/7s, Pelco 1/8.3s, Sony 1/30s. Indeed, over the last few years, we have noticed a trend of megapixel cameras defaulting to slower shutter speeds.
Differences in default shutter speeds make massive differences in the brightness of the image and the perception of the user. Without a doubt, cameras with slower default shutter speeds are viewed as superior to those with faster ones - even though there are no fundamental technological differences.
While we certainly believe some differences in low light performance exist, be very careful that you are not being tricked into favoring a camera simply because of more aggressive shutter speed settings.
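Because light gathered scales roughly linearly with exposure time, the default shutter speeds above translate directly into brightness multiples over a 1/30s baseline. A simple sketch using the defaults reported above (the linear-scaling assumption ignores gain and sensor differences):

```python
# Relative light gathered vs a "standard" 1/30s shutter, assuming light
# captured is proportional to exposure time (a simplification that ignores
# gain, sensor size and processing differences).

BASELINE = 1 / 30  # "standard" surveillance shutter, in seconds

DEFAULT_MAX_EXPOSURE = {
    "Arecont": 1 / 12.5,
    "Avigilon (H.264)": 1 / 30,
    "Axis": 1 / 6,
    "Basler": 1 / 8,
    "Bosch": 1 / 7,
    "Pelco": 1 / 8.3,
    "Sony": 1 / 30,
}

for vendor, exposure in DEFAULT_MAX_EXPOSURE.items():
    print(f"{vendor}: {exposure / BASELINE:.1f}x the light of a 1/30s shutter")
```

This is why a 1/6s default looks dramatically brighter in a side-by-side demo: it is simply gathering 5x the light, a setting any camera can match, not a technological edge.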
Myth: IR Illuminators Massively Reduce Bandwidth Consumption
Understandably, IR illuminator vendors want to sell more IR illuminators and bandwidth consumption does tend to spike at night (at least for certain cameras). However, there are a number of problems facing this claim:
- Not all cameras even face this issue: Any camera that uses constant bit rate encoding or sets a maximum bit rate (ceiling) can avoid such spikes.
- Cameras impacted differently: Cameras using variable bit rate (VBR) encoding can see spikes, but the level of the spike is significantly affected by the camera's gain settings. Higher gain levels create noise, which increases bandwidth consumption with VBR; the specific level depends on the vendor. Users can and should set gain limits on cameras to reduce this issue - high gain levels often provide no quality improvement but significant bandwidth consumption.
- IR illumination coverage needs to be strong and wide across the entire scene to deliver massive bandwidth reductions. This works best in a lab where you point an illuminator against a wall. Unfortunately, most IR illuminators are used outdoors in wide environments.
Myth: VSaaS is Secure and Mature
Unlike the other myths in this report, this one is only promulgated by a single vendor - albeit the most powerful surveillance manufacturer in the world.
While VSaaS has potential, the limitations are significant:
- Maturity: VSaaS software sophistication can hardly compete with low end DVRs. VSaaS user interfaces and functionality tend to be extremely rudimentary by comparison. This will certainly change, but in 2011 it is not close. VSaaS users would have to give up many of their existing benefits - advanced search capabilities, 3rd party system integration, IP camera support, etc.
- Security: The security risks of VSaaS are much more significant than traditional surveillance, while the security maturity of VSaaS providers is quite low. With VSaaS, video is (almost always) transmitted across the public Internet and hosted by an outside provider, exposing users to 2 new risks. While VSaaS vendors like to talk about the security/maturity record of cloud computing providers (which has issues itself), almost all VSaaS providers are small operations with limited track records and minimal evidence to prove their security.
We understand that convincing users that VSaaS is secure and mature is key to adoption, but it's just not there. The lack of maturity is nearly self-evident, but the false claims to security are a ticking time bomb.
We broke down these claims and our concerns in our Axis VSaaS Myths - Issues and Inaccuracies.
Myth: 80% Analytics is Good Enough
While analytic vendors have retreated from their wildly bullish claims of yesterday, the new claim is that even if analytics are not perfect, they can be good enough. The pitch goes, "If my analytics can get 80% of the bad guys, that's 80% more than what you are getting today. Sure we may miss some but you are not identifying anyone today."
For most security purposes, this is a dangerous approach that fails to deliver in practice:
- Analytics has never had a problem alerting on 'true' suspects. It's fairly easy to alert against a person crossing your fence or smashing in your window. That 80% number is certainly achievable.
- The problem remains the number of false alerts triggered by wind, rain, leaves, small animals, sunlight, shadows, etc. This tends to happen a lot with '80%' analytic systems. Operators can then find themselves responding to dozens or hundreds of false alerts every day. In our experience, this is the number 1 reason why analytic systems get shut down.
- 'Boy Who Cried Wolf': When faced with so many false alarms relative to valid alerts, operators tend to give up. If you have 100 false alerts for every valid one, motivation declines significantly. Academic research confirms this.
Imagine trying real time facial recognition across every Wal-Mart. Even if the system was 80% accurate in identifying me, the number of times it would falsely alert against people who look like me is astronomical (given the hundreds of thousands of Wal-Mart shoppers daily and variances in lighting, camera positioning, etc.).
Can analytics be 80% accurate? Absolutely. Can it scale and meet the operational requirements of large organizations? Highly highly unlikely.
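The Wal-Mart example above is a classic base-rate problem, and a back-of-envelope calculation makes the scale concrete. All numbers below are hypothetical illustrations, not measured results - note that even a very generous 1% false-positive rate buries the one true alert:

```python
# Hypothetical base-rate sketch: an 80%-accurate analytic in a busy store.
# Illustrative numbers only; the point is the ratio, not the specific values.

shoppers_per_day = 100_000   # assumed daily traffic across stores/cameras
true_targets = 1             # actual suspects present that day
detection_rate = 0.80        # "catches 80% of the bad guys"
false_positive_rate = 0.01   # misfires on just 1% of innocent shoppers

true_alerts = true_targets * detection_rate
false_alerts = (shoppers_per_day - true_targets) * false_positive_rate

print(f"Expected true alerts per day:  {true_alerts:.1f}")
print(f"Expected false alerts per day: {false_alerts:.0f}")
print(f"False alerts per true alert:   {false_alerts / true_alerts:.0f}")
```

Roughly a thousand false alerts chasing less than one real one per day - exactly the 'Boy Who Cried Wolf' dynamic that gets operators to tune the system out.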
Myth: Megapixel 'Virtually Eliminates' PTZ Cameras
False and not close. The optical zoom capabilities of PTZ cameras provide far more coverage area than even the very best megapixel camera. We debunk this in great detail and with images from our test results in our 'Debunking of PTZ Elimination Claims.'