Ban Resolution

By John Honovich, Published Feb 18, 2013, 12:00am EST

The word 'resolution' should be banned in surveillance. What all IP manufacturers mean when they say 'resolution' is 'pixel count.' We should be clear and precise, calling it what it is - pixel count - to avoid confusion and costly mistakes.

Two Meanings

Traditionally, resolution meant the ability to resolve, or see, details. This focused on the user and the ability of the device to deliver meaningful visible benefits.

Now, resolution means the number of physical pixels that a sensor has - 1 million pixels 'resolution', 3 million pixels 'resolution', 10 million pixels 'resolution'.

What's Missing

Pixel count is only one element in a camera's capability to deliver visible details. Other critical ones include lenses, compression, frame rate, low light and WDR performance - some of which actually can be worse with more pixels. As such, resolution as 'pixel count' ignores critical elements in delivering real 'resolution'.

Height in Basketball

Pixels are like height in basketball. If you are too short or have too few pixels, you can never be the best at either. But being the tallest or having the most pixels does not ensure success. In basketball, a very tall person might lack coordination, athleticism, drive, intelligence, etc. just like a super high pixel count camera might be terrible in many other ways. More pixels can be useful but, just like height in basketball, smart 'scouts' should consider the whole package.

Call it Pixel Count

A simple solution would help a lot. Stop using 'resolution' and start saying 'pixel count'. Then users and specifiers can think more clearly about what more pixels actually deliver.

For more on image quality, see our tutorials on resolution, PPF, WDR, lenses and compression.

'Real' Resolution Tested

Comments (40)

Can't we just say frame size if we're talking mpeg4/h.264 ala the standard definitions?

By frame size, you mean 1920 x 1080, 1280 x 720, etc.?

Correct, that's where I would currently use the term that shall not be named

I have found in both IPVMU classes that, entering the class, many attendees do not understand the relationship between frame size and resolution - i.e., 1080p ~ 1920 x 1080 ~ 2.07 MP, etc.

It's certainly easy enough to teach and people get it quickly but my concern with using frame size is that many would be confused when someone says '1920 x 1080' and it would then need to be clarified that they mean '2MP resolution pixel count'.

Btw, for other readers, here's the chart we recommend memorizing:
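The arithmetic behind such a chart is simple: pixel count is just width times height. As a minimal Python sketch (the two frame sizes listed are common examples from this thread, not the full recommended chart):

```python
# Pixel count ("MP") from frame size: width x height / 1,000,000.
# The frame sizes below are illustrative examples, not an exhaustive chart.

def megapixels(width: int, height: int) -> float:
    """Return pixel count in megapixels (millions of pixels)."""
    return width * height / 1_000_000

frame_sizes = {
    "720p": (1280, 720),
    "1080p": (1920, 1080),
}

for name, (w, h) in frame_sizes.items():
    print(f"{name} = {w} x {h} ~ {megapixels(w, h):.2f} MP")
```

This reproduces the 1080p ~ 1920 x 1080 ~ 2.07 MP relationship mentioned above.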

That makes sense. I like using the 720p/1080p vs 1 or 2MP terms because they are CE product terms that help people understand what they're getting. Also, I've had multiple 10+MP point and shoot and DSLR cameras in the last 5 years and haven't sold a single camera over 5MP. 1MP sounds lame. I do say 5MP in those cases because 2K or 4K products are still limited to the prosumer market.

I would add that SD, in our experience from customers' requirements specifications, is considered to be 720 x 576 (576p), not the 640 x 480 PC resolution. Ref. PAL/NTSC.

I am sorry to say that will be almost impossible. The term 'resolution' is so widely used that trying to change it now will run into too many obstacles. Company marketing will always use 'resolution' because it is the well-known term. Maybe a better strategy would be to start using another term for the "real resolution", like "Definition" or something similar.

Awwww man! I have to memorize stuff!?!?!? ;-) I like frame size more than pixel count, but I agree with Ricardo. I think 'resolution' is too ingrained in the CCTV nomenclature.

Ricardo, I do recognize it is a quixotic quest. However, I think it is important for the community to think hard about the shortcomings of how 'resolution' is used in the surveillance industry. If people think twice when they hear 'resolution' and remember pixel count and its limitations, we should all be better off.

I don't see the problem as long as everyone recognizes what the term means.

And SD could be 640x480, 704x480 or 720x480 in NTSC, depending on the system. In practice, there is little difference between the three, especially between 704 and 720. In ongoing VMS testing, we have evaluated five standalone encoders and six systems' encoders that are/were rated at 4CIF/4SIF, 704x480 (not sure what that is called) or D1 720x480. Using the highest resolution analog cameras we've found (inMotion in11S3N2D) on gaming tables, we found that playing cards viewed at either resolution appeared just about the same.

We viewed encoder images side-by-side in salvo and used the VMS' digital zoom at typically 3x and evaluated our ability to differentiate the suits of cards. 720 received a very slight nod over 640 but we could see no improvement over 704. And even between 720 and 640, the difference is extremely subtle at 3x zoom and not apparent at all at 1x.

Noise, including I-frame pulsing, macroblocking, deinterlacing (or lack thereof), "shimmering" colors and even "ringing" (doubling or more of edges) were far more apparent differentiators of picture quality.

Perhaps we all should make a New Year's resolution pixel count to come up with a better term?

"I don't see the problem as long as everyone recognizes what the term means." Yes, if everyone understood what the term meant and didn't mean, we would be fine.

As for SD's resolution, there is variance amongst the pixel counts commonly labelled 'SD' but, as you say, from a practical perspective, it makes little difference.

Resolution IS pixel count. This field is littered with janky outdated terms related to image capture. What some silverback retired integrator did last century to bolt a repurposed TV camera on the side of an airport tower doesn't actually relate at all to the security video image collection devices used today. Yes, a straight answer on pixel count would be much better. Or, point at a spec which defines the term "resolution". Whatever. Get rid of the Tower of Babel; it hurts the marketplace.

Rodney, manufacturers clearly play it both ways, especially when it comes to defining 'camera coverage' or 'camera replacement'. For those uses, for them, resolution is not simply pixel count; it is visible details.

I believe that the nomenclature should be:

Sensor resolution = image sensor resolution = number of pixels

Image resolution = visible image resolution = ability to see details

A Swedish basketball player

The term resolution did not evolve in the security industry, nor did the application of it to pixel count (originally TV lines with regard to video).

The basis for the term can be found in the excellent Wikipedia article Image resolution. But directly to your point, John, this article even has two pictures showing that higher pixel count does not equal higher resolution.

At least this IPVM post will focus attention on the fact that there is more to image resolution than number of pixels.

Jan, good contrast between image and sensor resolution.

Ray, thanks for the link. Interestingly, they segment resolution into 'pixel resolution', i.e., pixel count vs. 'spatial resolution', i.e., "measure of how closely lines can be resolved in an image"

John, I think we missed an element of this discussion, that being aspect ratio.

Personally, I don't mind using the term "resolution" because I'm an expert and intimate with the definitions, but I certainly see your point. What I run into A LOT when training customers and sales people is getting them to understand that sometimes there is more than one way to get to a certain resolution.

The best example is 2 Megapixel. There's the older 4:3 aspect ratio of 1600x1200, and the newer 16:9 aspect ratio of 1920x1080 (1080p or 1080i). Both are the same "resolution" but provide very different solutions. This plays into your point that "pixel count" is a better definition, but it's also representative of the fact that understanding "Aspect Ratio" is also very important when defining camera performance.

Jason, good feedback. Btw, it seems that 1600 x 1200 is dying off. Are you seeing that? It seems most new 2MP cameras are now 1080p.


Yes, that's true, but I'd say it's more a function of tech change than market demand. What I mean is, getting 1600x1200 imagers is harder these days and getting 1920x1080 imagers is a lot easier, and all the new efforts and improvements imager manufacturers are achieving are on the 1920x1080 side.

But there are still a few key verticals that really like the 4:3 aspect ratio, including Critical Infrastructure and Retail.

2 MP won't be the only confusing area. Once we get into the "4K" cameras which will be 8 MP, the water will muddy again. Not because there are 4:3 8 MP cameras out there, but because it fits nicely between 5 and 10 MP cameras and all imagers at those two resolutions are 4:3.

For analog, resolution is NOT pixel count. TVL resolution is measured over a square area whose width equals the picture height. For instance, a 600TVL camera would actually need 600/3*4, or 800, horizontal pixels. Since only Sony Effio chips have more than 768 horizontal pixels, there can be no such thing as a 600TVL camera.

I believe manufacturers "cheat" when giving analog TVL specs and use the entire width. That would make sense since analog is not quite capable of delivering pixel-by-pixel resolution.
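Carl's arithmetic above can be sketched in a few lines of Python (assuming a 4:3 picture, as in his example):

```python
# TVL counts resolvable lines across a width equal to the picture
# HEIGHT, so full-width pixel count scales by the aspect ratio (4:3 here).

def tvl_to_horizontal_pixels(tvl: int, aspect_w: int = 4, aspect_h: int = 3) -> float:
    """Horizontal pixels implied by a TVL rating for the given aspect ratio."""
    return tvl * aspect_w / aspect_h

# A "600 TVL" rating implies 600 / 3 * 4 = 800 horizontal pixels,
# more than the 768 that common analog sensors actually provide.
print(tvl_to_horizontal_pixels(600))  # 800.0
```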

I agree Carl, but that practice precedes the IP revolution. I remember 10 years ago when manufacturers were putting "525 TVL" in their specs when NTSC only allows 480 TVL, and customers actually put 525 TVL in their specs even though the difference was indeterminate.

For me personally, the use of TVL in the IP camera world is a strong indicator of the immaturity of the manufacturer. I often correct people who use "TVL" terminology when speaking of IP cameras, as it is a useless metric there.

I think the definition gap between PPF and image pixels is getting lost here and most of us really do know the difference. We had this exact same discussion a while back.

In looking at Ray's Wiki example, note that while the images are technically different X:Y, the "higher resolution" image is actually 1/3rd the compressed size of the original.

We are confusing displayed-on-my-screen-pixels with actual-unique-original-picture-image-pixels. In that case the image on the left is not higher in resolvable picture elements and I agree with the description.

For the most part, look at John's examples of the photo image resolution video streams. For years lens and film producers have struggled to get the clearest image onto paper from the original scene. We are just chasing the digital motion version.

Compression choices and ultra cheap lenses are the two biggest culprits here, IMHO (ignoring light levels and night pictures for now).

We do not seem to define/agree on what we are going to measure.

Corey, I think most insiders appreciate the difference. However, I don't think most people in the industry do. That's why manufacturers continue to equate pixels with resolution with coverage area with camera replacement. I am sure the manufacturers know that's flawed, but they use it because they know many people do not understand this and can be tricked by it.

I agree. Most marketing is about convincing people they want your stuff, not showing them how your stuff is better for them.

The other problem is that with all the many, many bandwidth reduction and image compression options, apples-to-apples comparison is very difficult for the majority of users/consumers. I could build a system from only the very, very best components you have tested and still end up with poor performance in my environment (assuming we could even pick a "best", since that term is so fuzzy).


I thought that was the case too, until we tested an inMotion in11S3N2D camera. We use the ability to identify the suits of cards laid out on a gaming table as one criterion for choosing cameras. 470TVL, 480TVL, 520TVL, 540TVL, 600TVL and even higher rated resolutions had no effect on our ability to identify card suits - which was typically around 70% for non-face cards and nearer 0% for face cards. Over the course of at least ten years, that was the case. We assumed the higher resolution ratings were BS and gave up, adding a second camera on the tables of games where the card suit has a bearing on the outcome.

All that changed with the inMotion cameras. I have absolutely no idea why they do, since they use a Sony Super HAD II sensor, supposedly rated at the common 768x494 pixel count, but for some reason, those cameras (and only those) gave us the ability to determine the suits of 100% of the non-face cards and nearly 100% of the face cards. That's via totally analog transport and display (though with Orion and ViewZ LCD monitors). Obviously, once encoded, that capability is lost.

Go figure!


The example I often give again equates to 2MP. I point out to customers that if you're using 1600x1200, you can't get the same horizontal coverage area that you can with a 1080p camera, even though both are the same "Resolution"; they have very different horizontal resolutions and coverage areas.

Again, all of this is somewhat obvious to most on this thread, it's just up to us to keep preaching it to the market.
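The coverage difference described above can be quantified against a pixels-per-foot (ppf) detail target; a minimal Python sketch (the 40 ppf figure is an arbitrary example for illustration, not a recommendation):

```python
# At a fixed pixels-per-foot (ppf) detail target, horizontal coverage
# depends only on horizontal pixel count, so a 1920-wide 1080p camera
# covers a wider scene than a 1600x1200 camera of the same ~2MP count.

def horizontal_coverage_ft(horizontal_pixels: int, ppf: float) -> float:
    """Scene width (feet) achievable while maintaining the ppf target."""
    return horizontal_pixels / ppf

target_ppf = 40  # example detail target only
for w, h in [(1600, 1200), (1920, 1080)]:
    print(f"{w}x{h}: {horizontal_coverage_ft(w, target_ppf):.0f} ft wide at {target_ppf} ppf")
```

At 40 ppf, the 1920-wide camera covers 48 ft versus 40 ft for the 1600-wide one, despite the identical "2MP" label.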

Corey, good point about bandwidth reduction and image compression issues. We just finished a new "How to measure compression levels" report that addresses normalizing compression across cameras and checking how manufacturer default compression levels vary.

I would like to point out some things I believe are important for the further discussion (please excuse me if I use the wrong word etc.; not my native language).
First of all, it is important not to mix up TVL (i.e., horizontal resolvable television lines) with what comes in the PAL and NTSC specifications (the way a video image is built up, interlaced, on a monitor) - these are two different things. So it is not wrong to use a high resolution sensor (i.e., 525 TVL specified) to create a "good" NTSC "broadcast".

And to measure "visible" picture or video resolution, the "analog" test target actually works regardless of whether the camera is analog or an IP network camera - even multi-megapixel ones.

About cameras (analog and IP) and pixel count: there are "pixels" in the sensor in both cases (both CCDs and CMOS), and that's why I like to differentiate between Sensor Resolution and Image Resolution, as I mention above.

This is a really interesting and important discussion, and I do understand your approach, John, and I appreciate it; there is confusion and misunderstanding that makes it hard for some customers. But at the same time I believe we are missing the most important thing - namely, how to make sure that the camera (scheme) measures up to its purpose. How do you do that?

Jan, good question/point: "How to make sure that the camera (scheme) measures up to its purpose. How do you do that?"

I think ppf / ppm is a good starting point for rough estimation but specifiers need to then consider other factors:

  • Will their scene be dark? How dark? How well do the cameras under consideration perform at night?
  • Will their scene face bright direct sunlight? If so, how well does the camera perform in WDR?
  • Do they have the right angles of incidence to targets, or would they be better served with multiple lower resolution cameras?
  • If the camera's frame rate is less than 25/30fps, will that be a practical issue? Many super high res cameras are under 10fps.

It would be great if a single magic number guaranteed quality but it doesn't exist and trying to force resolution to be it creates more problems.

I'd use the above checklist and then try it out in my conditions before deploying (see these options for portable power / field tests).

Thanks John, you captured it perfectly and it sounds like most of the rest of us on here are singing from the same hymn book as well. I'm glad on this side of the fence all of us are on the same page.

Video image quality - suitability for the viewing task - is what the Video Quality in Public Safety initiative is about. The initiative has developed a number of resources, including an online Guide to Defining Video Quality Requirements, a tool that follows a defined process for developing camera requirements, including lighting.

The guide is great, John. I especially appreciate that you carefully point out that there is much more to consider than just the PPF calculation, and your approach to specifying and testing earns you respect. However, I'm aware the industry is moving towards PPF, and this puzzles me somewhat since the method lacks so much accuracy. There should be other methods or performance specs that camera manufacturers could develop or use.

Let me try this theory with you and the other forum readers.
The "old" testing method, with a test target showing the horizontal TVL resolution, actually gives a more or less exact measurement of the image resolution (both live and recorded). Wouldn't it be a perfect specification for manufacturers to present in product sheets the different TVL resolutions (total/whole width) the camera can deliver depending on which compression is used/set (this should be easy for manufacturers)? Then you would be able to calculate TVL per foot instead.
Then, of course, you still have to consider many of the other issues regarding facial angle, light, etcetera, but that is another problem to address…

Audio had the exact same problem when VoIP was introduced, and what we ended up with was the Mean Opinion Score (MOS).

Mean opinion score - Wikipedia

My point is that no matter what, video will remain subjective and the statistics and data will help, but the uses of video are so diverse... As always, your mileage will vary...

John, you can't change HISTORY. Resolution is here to stay; it's left over from the days of television and the Indian Head test pattern. It won't go away until all the old timers are gone or the manufacturers start using pixel count in their marketing literature.

Ray, thanks for sharing the VQIPS documents. They look revised in the last year. Unfortunately, their overall vagueness and scatteredness make them hard to understand. I think IPVM should prepare a free public 101 image quality guide to help end users.

Jan, you could use TVL for IP cameras, but there are still some important shortcomings: the results would still be dependent on light levels (just like analog), so a camera good in ideal lighting might be weak in low light or WDR scenes. Plus, it depends on the compression level set, and manufacturers would certainly game it, setting compression as low as possible to maximize their TVL rating, even if that level would rarely, if ever, be used in production.

Luis, it is history but it's more than history. The biggest part of the problem is that the word 'resolution' means so many things in the English language that it is inherently confusing to use.

It seems to me that this is just the tip of the iceberg. Unless I'm just not good at it, it seems almost impossible to get any real gauge of how good a camera is going to be in certain conditions just by reading the camera spec. Obviously you can normally get some idea of whether a camera is going to be completely unsuitable by looking at things like minimum scene illumination, resolution, colour only/day-night, etc. However, it seems that there are very few items on a standard spec sheet that are a real measure of how good the picture is going to be when you get the thing on the ceiling.

There are also a lot of other terms which are very confusing and marketing orientated. I use a lot of Axis cameras, I really like them and we get good results from them. I'm using them as an example here just because I know them and not because I think that they are particularly guilty of this.

Lightfinder - Isn't this just a super-sensitive sensor (partly because it's bigger than it was before) with a very good image processing chip? Wouldn't it be nicer if we could have a real gauge of how sensitive the sensor actually is? Something akin to ISO on a DSLR camera would be a good start.

P-iris - Isn't this just an accurate iris with some good algorithms for working out what f-stop is going to give the best image? Wouldn't it be nicer to say that the iris is accurate to 0.1 of an f-stop, with algorithms that take into account depth of field, image brightness and gain level?

As it is I have no way to know if an Axis lightfinder p-iris camera is going to be better or worse than a camera from another manufacturer because they don't have these terms. Don't even get me started on WDR!

In my experience you can't really tell if the camera is going to be great or merely acceptable (at least I can't) just by looking at the spec. I find that the only way that I can really get a good idea of what cameras to use is to stick to manufacturers that I trust and to try cameras out in a similar environment before I spec them.


I disagree only about the P-iris. The P-iris is conceptually different from a DC iris. With a DC iris, the camera does not know the iris position because it is based on a galvanometer without a feedback sensor. The camera only knows if the image level is too high or too low and tries to open or close the iris, but there is no way for the camera to know the actual aperture. With the P-iris, control is based on stepper motors whose steps can be counted, so the camera has more precision: it knows the actual position and can use algorithms to choose the best configuration.

I hear what you're saying, John, but I think there is more to say about it.

While I do not know if this is the right forum/thread to discuss what anyone should do or what options are available, based on the topic "Ban Resolution" I would be inclined to agree with you if there were no other options - but I am sure there are. Those of us working on this at a professional level must somehow ensure that manufacturers do what they can to improve the situation. Not least you, John, and IPVM have in an exemplary manner pointed out the shortcomings and misleading marketing of manufacturers, distributors and others, which I think is very important, and I encourage you to keep doing so.

And forgive me now because I'm stubborn, but I think you need to split the elephant into smaller pieces. First, determine the visible resolution a camera can deliver - then you have basic conditions to relate to (the horizontal TVL resolution is visually exact, and light conditions don't affect that; pixel count is not). Thereafter you can take lighting conditions, etc. into account. And there are standards and norms that support my argument (though actually the latest European standard also uses pixel count).

You will find, for example, IEEE standard for measuring video resolution here (almost for free ;-).

Neil, thanks for raising a number of very interesting points. My feedback:

  • I agree: "It seems almost impossible to me to get any real gauge on how good a camera is going to be in certain conditions just by reading the camera spec." A few metrics can help - if the sensor is too small (1/4" or less) or the F stop is too high (f/2.0 or higher) - but overall it's hard to tell just by reading. You need to test - either yourself, via IPVM tests, etc.
  • On Lightfinder, Axis' low light enhancements are legitimately impressive (see our Q1602 and Q1604 test results that compare side by side to competitors). That said, lux ratings just can't be trusted - even if Axis was 'truthful', many other manufacturers are not, making it impossible to compare (see Don't Trust Lux Ratings).
  • As for P-Iris, our P-iris test results show it does not make much of a difference, even if its implementation is technically more precise than auto iris.
  • For WDR, we have 3 test reports that can help you - the 2013 WDR shootout, the Megapixel WDR shootout and our original 2011 SD vs MP WDR shootout.

Let us know if you have other questions or feedback.

John, I agree about the VQIPS web pages not having intuitive user interaction. You said, "I think IPVM should prepare a free public 101 image quality guide to help end users." I'd be in favor of that.

One of the things I like about the VQIPS initiative is that they are collecting image examples and I think a collection of real scene images at various example resolutions (oops!) would really help. Like you, I am a big fan of field testing and each time I am considering a new technology I do a field test on it in the target environment. But I like to have a good starting point and be able to do the test only once! So that's where a good set of example images to use as standards would help, I think.

Focus is a big issue, along with depth of field, and I have seen many projects where cameras that were capable of doing the job were focused in bright daylight (long depth of field) and had target scene areas out of focus once the light levels started dropping and the auto-iris opened up. I think that good "before" and "after" examples of right focus and other elements would be very helpful.
