We hear such sentiment fairly regularly so I am curious what the consensus of members is. I, for one, disagree.
The question perhaps should be rephrased. If the video has any potential forensic or storage value for future retrieval, we do not necessarily know what the resolution of the monitor used for that future viewing will be.
I haven't heard that one before, but I strongly disagree myself, for a number of reasons, including:
- You can't display a full resolution 5MP image (such as 2592 X 1944 or 2560 X 1920) full size on a 1920×1080 monitor, but you can use the downsized image for activity monitoring, and use one or more windows displaying a key portion of the image in actual size (Live Digital Zoom). That's almost always workable, and leading VMS systems support it.
- 4K UHD monitors (3,840×2,160) are available now at Best Buy and other retailers. If I had advised clients to use 1.3 MP (1280 x 1024) cameras only due to monitor limitations, we'd have to upgrade 50 or 100 cameras as opposed to a few monitors.
- Today one can build a very workable and affordable 4-monitor video display (2 x 2) with consumer or professional grade video monitors and a professional quality 4-output video display card. Such a display would support a 5MP image displayed in actual size (four 1920×1080 panels give 3840×2160, about 8.3MP, comfortably more than a 5MP frame).
- The level of detail in a high-megapixel camera displayed in actual size surpasses what human monitoring can process. In real time, how could one examine the multitude of fine details of the images of people entering a doorway captured at 5MP (for example)?
- Not all video cameras serve the same purpose and function. If you are monitoring a crowd on a large monitor and looking for excellent facial recognition, then a high MP camera with a 4K UHD display would really help (i.e. the monitor supports full resolution image display). But it would be a rare video system application that needs to display all of its video camera images in full resolution for monitoring purposes.
- Some uses of cameras, such as monitoring food or pharmaceutical products, have their highest value during investigations, where, for example, it is a requirement to see serial numbers or lot numbers on small packages or vials. The lines move at too high a speed to see such data in real time. After the fact, typical usage is to zoom in to a particular spot on a high megapixel image to see the desired detail. So monitor size would not be a hindering factor for 5MP or 10MP cameras at all (the sketch after this list illustrates the fit-versus-crop arithmetic).
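To make the arithmetic concrete, here is a minimal sketch of what a VMS viewer effectively does with a camera that out-resolves the monitor: downsize the whole frame to fit a 1080p tile for live monitoring, and separately pull a 1:1 crop of a region of interest (live or forensic digital zoom) so fine detail is seen at the camera's native resolution. This assumes Python with OpenCV and a hypothetical exported frame; it is not any particular VMS's implementation.

```python
# Sketch only: fit-to-window scaling vs. 1:1 "digital zoom" cropping.
# Assumes OpenCV (pip install opencv-python); file names are hypothetical.
import cv2

frame = cv2.imread("exported_5mp_frame.jpg")          # e.g. 2592 x 1944 (5MP)
cam_h, cam_w = frame.shape[:2]

# 1) Live monitoring: scale the whole image down to fit a 1920x1080 tile.
mon_w, mon_h = 1920, 1080
scale = min(mon_w / cam_w, mon_h / cam_h)             # ~0.56 for a 2592x1944 source
overview = cv2.resize(frame, (int(cam_w * scale), int(cam_h * scale)))

# 2) Digital zoom: pull a region of interest at actual size. Every pixel
#    shown comes straight from the sensor, so detail such as a face at a
#    doorway or a lot number on a vial is preserved.
x, y, w, h = 1200, 800, 640, 360                       # hypothetical region of interest
roi = frame[y:y + h, x:x + w]

cv2.imwrite("overview_1080p_tile.jpg", overview)
cv2.imwrite("roi_actual_size.jpg", roi)
```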
From what sources are you hearing such sentiment fairly regularly?
By this same logic, there's no point in having a car that will go over the posted speed limit... no sense buying a fridge that will store more than one day's food... and all liquor should be sold only in shot glasses (okay, maybe 2oz glasses), because what's the point of a 40-pounder of Grey Goose if you can't drink it all in one swig?
I would have to say that I don't see a need for a monitor of equal or better quality than the camera. The only justification I can come up with is verifying image quality: if the tech has no spot monitor and only the low resolution monitor, how can you be assured the image was clear and focused properly? Having said that, I have had instances where the tech adjusts the camera image with his spot monitor, then closes the housing or dome, and the image on the VMS monitor is out of focus. So is that the result of a poor quality monitor?
Whether or not it is an issue with focus, we have never considered the monitor resolution.
Digital zoom makes cameras with higher than screen resolution worthwhile.
Did I miss the product announcement for a monitor that matches the resolution of that fancy panoramic gigapixel camera? ;-)
I agree with most of the comments above - limiting camera selection to the display resolution is just silly. There are ways around the challenges of camera focus. In fact I lean VERY heavily toward cameras that have a quality auto-focus capability. I would love to see every manufacturer adopt Panasonic's approach (on certain cameras) of re-focusing every time it switches between day and night mode, but even just having a manually-triggered auto-focus capability in the UI is a huge improvement. I actually use that as a big selling point. All customers have been in stores and seen the poorly maintained camera systems with most of the cameras out of focus. When I tell them I can protect their investment by re-focusing their cameras for them without ever needing to roll a truck, their eyes light up almost every time.
Beyond the human interface issues, one should investigate performance issues associated with any in-place motion detection or video analytics, which may benefit from higher resolution that the operator needn't see.
Correct me if I am wrong, but as far as I know most analytics/motion analysis runs at CIF, except for VideoIQ which runs at D1 or 1080p.
Actually John, regarding uncompressed video, the analytics algorithms run on frames/images/pixels, so in effect the actual processing is done on uncompressed, decoded frames.
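As a rough illustration of that point (not any vendor's actual engine), here is a minimal Python/OpenCV sketch of the usual pipeline: the compressed stream is decoded into raw frames, often downscaled to something like CIF to save CPU (which is why analytics are commonly said to "run at CIF"), and the motion math operates on those decoded pixel arrays. The source name is hypothetical.

```python
# Sketch only: generic motion analysis on decoded frames, not a vendor's engine.
# Assumes OpenCV; "camera.mp4" stands in for a recorded or live stream source.
import cv2

cap = cv2.VideoCapture("camera.mp4")
prev = None
CIF = (352, 288)                      # many engines analyze a reduced-size frame

while True:
    ok, frame = cap.read()            # decode: compressed stream -> raw BGR pixels
    if not ok:
        break
    small = cv2.resize(frame, CIF)    # analysis commonly runs on a downscaled copy
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    if prev is not None:
        diff = cv2.absdiff(gray, prev)                 # pixel-level frame differencing
        mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
        if cv2.countNonZero(mask) > 0.02 * CIF[0] * CIF[1]:   # crude motion test
            print("motion detected")
    prev = gray

cap.release()
```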
Theoretically HDcctv does have an advantage for analytics, but ONLY if the analytics are done on the DVR/encoder and not on a PC afterwards. This basically means you need a completely rewritten analytics system, which is unlikely to happen.
VideoIQ ICVR, UDP cameras, ISD's cameras and Axis cameras have the same advantage, as they provide edge analytics with algorithms running on the camera which also access the raw uncompressed frames.