Do Not Use Cameras With Higher Resolution Than The Monitor Displaying Them

We hear such sentiment fairly regularly so I am curious what the consensus of members is. I, for one, disagree.

The question perhaps should be rephrased. If the video has any potential forensic or storage value for future retrieval, we do not necessarily know the resolution of the monitor that will eventually display it.

The statement references "The Monitor Displaying Them." That could be a monitor for live or recorded video.

I haven't heard that one before, but I strongly disagree myself, for a number of reasons, including:

  • You can't display a full resolution 5MP image (such as 2592 X 1944 or 2560 X 1920) full size on a 1920×1080 monitor, but you can use the downsized image for activity monitoring, and use one or more windows displaying a key portion of the image in actual size (Live Digital Zoom). That's almost always workable, and leading VMS systems support it.
  • 4K UHD monitors (3,840×2,160) are available now at Best Buy and other retailers. If I had advised clients to use 1.3 MP (1280 x 1024) cameras only due to monitor limitations, we'd have to upgrade 50 or 100 cameras as opposed to a few monitors.
  • Today one can build a very workable and affordable 4-monitor video display (2 x 2) with consumer or professional grade video monitors and a professional quality 4-output video display card. Such a display would support a 5MP image displayed in actual size.
  • The level of detail in a high-megapixel camera displayed in actual size surpasses what human monitoring can process. In real time, how could one examine the multitude of fine details of the images of people entering a doorway captured at 5MP (for example)?
  • Not all video cameras serve the same type of purpose and function. If you are monitoring a crowd on a large monitor and looking for excellent facial recognition, then a high MP camera with a 4K UHD display would really help (i.e. monitor supports full resolution image display). But it would be a rare video system application that needs to display all of its video camera images in full resolution for monitoring purposes.
  • Some uses of cameras, such as monitoring food or pharmaceutical products, have their highest value during investigations, where—for example—it is a requirement to see serial numbers or lot numbers on small packages or vials. The lines move at too high a speed to see such data in real time. After the fact, typical usage is to zoom in to a particular spot on a high megapixel image to see the desired detail. So the monitor size would not be a hindering factor for 5MP or 10MP cameras at all.
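The monitor arithmetic in the bullets above can be sketched in a few lines. All of the resolutions come from the thread itself; the `fits` helper is purely illustrative.

```python
# Sketch of the monitor-resolution arithmetic discussed above.

def fits(image_wh, display_wh):
    """Return True if an image fits pixel-for-pixel on a display."""
    return image_wh[0] <= display_wh[0] and image_wh[1] <= display_wh[1]

CAM_5MP  = (2592, 1944)          # a common 5 MP sensor resolution
HD_1080  = (1920, 1080)          # single 1080p monitor
UHD_4K   = (3840, 2160)          # single 4K UHD monitor
WALL_2X2 = (2 * 1920, 2 * 1080)  # 2 x 2 wall of 1080p monitors

# A 5 MP image cannot be shown 1:1 on a single 1080p monitor...
assert not fits(CAM_5MP, HD_1080)
# ...but a 4K monitor or a 2 x 2 wall of 1080p monitors (both
# 3840 x 2160, ~8.3 MP of display area) shows it in actual size.
assert fits(CAM_5MP, UHD_4K) and fits(CAM_5MP, WALL_2X2)
```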

From what sources are you hearing such sentiment fairly regularly?

Ray, I agree with you. Thanks for elaborating the specific counterargument.

As for the sources, it is not from any manufacturer but we hear it in comments regularly. For instance, this actually just came up in the poll discussion about the value of 4K as a reason against 4K.

By this same logic, there's no point in having a car that will go over the posted speed limit... no sense buying a fridge that will store more than one day's food... and all liquor should be sold only in shot glasses (okay, maybe 2oz glasses), because what's the point of a 40-pounder of Grey Goose if you can't drink it all in one swig?

I got a kick out of this and see your point. The source material is what matters, not what's displaying it. Maybe the user won't get the full picture, but it's much easier to upgrade a monitor than a camera. Also, let's say you had tape of a robbery, and you had a really high-end camera but a terrible monitor, what would that matter? The source material is still good, so when it was exported, as long as the court or whoever had a good monitor, you'd get a good video... I think.

I would have to say that I don't see a need for a monitor of equal or better quality than the camera. However, the only argument I can come up with concerns image quality: if the tech does not have a spot monitor, only the low-resolution monitor, how can you be assured the image was clear and focused properly? Having stated this, I have had instances where the tech adjusts the camera image with his spot monitor, then closes the housing or dome, and the image on the VMS monitor is out of focus. So is this a result of a poor quality monitor...?

Whether or not it is a focus issue, we have never considered the monitor resolution.

Damon, focusing is one of the challenges I've seen with megapixel cameras. When I focus things for tests, I generally use a test chart, and digitally zoom onto the chart area, so it's enlarged. It makes it much easier to see focus. You still need to potentially check for other areas of the scene behind the chart or to the side to be sure, but it's a better start.

Obviously this is easy when using a laptop. When using an analog spot monitor, it's impossible. Tools like the Razberi/Axis/Dynacolor installation displays let you zoom in, but I'm not sure how user-friendly they are to operate, having never used one.
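The focus-checking problem above also has a quantitative angle: sharpness can be scored numerically rather than judged by eye on a small monitor. A minimal sketch using a variance-of-Laplacian focus metric, which is a common generic choice and not necessarily what any particular camera or VMS actually uses:

```python
import numpy as np

def focus_measure(gray):
    """Variance of a discrete Laplacian: higher means sharper.
    Illustrative only; real focus tools add noise handling, ROIs, etc."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var()

rng = np.random.default_rng(0)
sharp = rng.random((100, 100))  # stand-in for a focused test chart
# Crude 2x2 box blur simulates the same scene slightly out of focus
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(sharp, (1, 1), (0, 1))) / 4.0

assert focus_measure(sharp) > focus_measure(blurred)
```

The point is the same one made above: enlarging (or measuring) the chart region beats squinting at a low-resolution spot monitor.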

Digital zoom makes cameras with higher-than-screen resolution worthwhile.

Did I miss the product announcement for a monitor that matches the resolution of that fancy panoramic gigapixel camera? ;-)

I agree with most of the comments above - limiting camera selection to the display resolution is just silly. There are ways around the challenges of camera focus. In fact I lean VERY heavily toward cameras that have a quality auto-focus capability. I would love to see every manufacturer adopt Panasonic's approach (on certain cameras) of re-focusing every time it switches between day and night mode, but even just having a manually-triggered auto-focus capability in the UI is a huge improvement. I actually use that as a big selling point. All customers have been in stores and seen the poorly maintained camera systems with most of the cameras out of focus. When I tell them I can protect their investment by re-focusing their cameras for them without ever needing to roll a truck, their eyes light up almost every time.

Beyond the human interface issues, one should investigate performance issues associated with any in-place motion detection or video analytics, which may benefit from higher resolution than the operator needs to see.

Correct me if I am wrong, but as far as I know most analytics/motion analysis runs at CIF, except for VideoIQ which runs at D1 or 1080p.

My 5MP Arecont cameras perform motion detection and provide substantial spatial granularity for sensitivity and masking. You might also ask, does Arecont down-sample the 5MP image and conduct motion detection only on CIF? I don't know. You might also ask, would 5MP motion detection provide any advantage over standard CIF motion detection, for the same field of view? I don't know.

Now feeling a little defensive about my very informative answer (LOL), I will say that well written image processing algorithms are scalable and that unless a decision is made to fit within limited processing capacity, one would expect them to process at full resolution and frame rate. Alternatively, in the consumer business, since processing power is a cost driver, perhaps they've done analysis to find the knee in the curve and process at those "good enough" frame rates and CIF.

As to analytics, that's a specialized market that I have no insight into. For example, I would expect (again in ignorance of real-world implementations) that face recognition software would benefit from full resolution up to some limit of pixels per foot or pixels per face, and then there'd be little additional benefit.

Horace, analytics typically limit the resolution being analyzed. I bet you a steak dinner that Arecont is not analyzing at 5MP on a 5MP camera. This is not simply an Arecont thing either.

You say, "unless a decision is made to fit within limited processing capacity, one would expect them to process at full resolution and frame rate."

That's like saying "unless cost was no object, one would expect people to drive Bentleys."

Limited processing capacity is a very real issue for every camera manufacturer, but especially for Arecont, which uses FPGAs. Even for a manufacturer who could just go out and swap in a more powerful encoding SoC, that would substantially increase the cost of the BoM.

Thanks, I appreciate the clarification.

Funny thing... our favorite HDcctv hawksman over on LinkedIn is currently touting the superiority of uncompressed 1080p30 video for analytics, claiming the improved clarity makes for better/more accurate analytics performance...

That's an incredibly idiotic point, even for him. Just like 'uncompressed' analog, analytics need compressed video to work. Does he, or anyone else, really think that it would be practical to analyze uncompressed video?

I'm looking forward to Craig's input on that subject ;)

Arecont Vision User Manual "Megapixel IP Cameras and AV100 Video System Software" appears to provide some clarification on this matter.

The section on motion detection says, "Motion detection is achieved by analyzing inter-frame brightness changes on a pixel-by-pixel basis."
The section on motion detection control parameters says, "To provide accurate motion detection in low contrast and low light environments, EACH pixel of EACH frame is analyzed." The emphasis is as quoted from the manual.
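The pixel-by-pixel inter-frame brightness comparison that the manual describes can be sketched as follows. The threshold and frame sizes here are made up for illustration; real implementations add filtering, sensitivity controls, and masking zones on top of this core idea.

```python
import numpy as np

def motion_mask(prev, curr, threshold=25):
    """Mark pixels whose brightness changed by more than `threshold`
    between two frames (the 'inter-frame brightness changes on a
    pixel-by-pixel basis' idea quoted above)."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > threshold

prev = np.zeros((240, 352), dtype=np.uint8)  # CIF-sized dark frame
curr = prev.copy()
curr[100:120, 150:180] = 200                 # a bright object appears

mask = motion_mask(prev, curr)
assert mask.sum() == 20 * 30                 # only the object's pixels changed
```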

Actually John, regarding uncompressed video, the analytics algorithms run on frames/images/pixels, so in effect the actual processing is done on uncompressed, decoded frames.

Theoretically, HDcctv does have an advantage for analytics, but ONLY if the analytics are done on the DVR/encoder and not on a PC afterwards. This basically means you need a completely rewritten analytics system, which is unlikely to happen.

VideoIQ ICVR, UDP cameras, ISD's cameras and Axis cameras have the same advantage, as they provide edge analytics with algorithms running on the camera, which also access the raw uncompressed frames.

Bohan, no, cameras are not processing raw uncompressed frames. They have access to them which allows them to directly generate images from the stream, but those are typically in the order of CIF, VGA or maybe HD resolution. Nobody is analyzing raw uncompressed frames. This is still better than a centralized recorder getting an H.264 stream and then having to decode it and create bitmaps but it's neither raw nor uncompressed.
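To make the downsampling step described above concrete, here is a minimal block-average downsample. The factor-of-4 reduction and the frame sizes are illustrative only, not any vendor's actual pipeline.

```python
import numpy as np

def downsample(frame, factor):
    """Average non-overlapping factor x factor blocks of a grayscale frame."""
    h, w = frame.shape
    h, w = h - h % factor, w - w % factor
    blocks = frame[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

hd = np.ones((1080, 1920))   # 1080p frame: ~2.07 M pixels
small = downsample(hd, 4)    # 270 x 480: ~0.13 M pixels

assert small.shape == (270, 480)
# Pixel count drops 16x, i.e. ~94% of the spatial detail is discarded
# before analysis ever runs.
assert hd.size / small.size == 16
```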

Brian Karas: could you provide some commentary on the last two posts from me and John with reference to VideoIQ? I believe what I wrote is true for at least VideoIQ.

John: I believe that providers that process at CIF, like UDP, would downscale the 1/2MP raw pixel bitmap data to CIF raw bitmap data, then apply algorithms to that input. In VideoIQ's case, it should be processing the raw bitmap coming out of the ISP.

In either case, I do agree that in practice HDcctv does not bring real-world benefits to the table, but in theory it can if there is enough standardisation and support from the world's leading analytics providers - which is of course very unlikely to happen.

Regarding "CIF raw bitmaps", the issue is that it's already CIF, so you have lost a significant amount of detail by reducing the pixel count 75% or more.

As for Axis, my understanding is that they send frames to the analytic applications running, typically at CIF or VGA (regardless of the camera resolution - 1080p, 3MP, 5MP, etc.). When we did Agent VI tests, this was a practical issue especially depending on the Artpec chip used in the individual model (older chips forced lower resolution analysis, etc.).