Wrong: Why Imager Size Is NOT Key To Low Light

By Ethan Ace, Published Feb 06, 2014, 12:00am EST

We hear it all the time: bigger imager means better low light performance. But is it really true? In this note we compare imager size to low light performance from our industry leading testing in order to settle this once and for all.

The Truth

1/3" imagers are by far the most common in our tests (and in IP cameras), followed by the marginally larger 1/2.8" and 1/2.7" sensors. Cameras with these imager sizes, similar resolutions, and similar F-stops ranged from best to worst and everywhere in between, illustrating the point: better gain control, noise removal, and other image improvements resulting from increased processor performance in recent cameras far outweigh simple imager size.

Here is our test chart with highlights showing how the same hardware specs (such as imager size) delivered radically different low light performance:

Imager **** ***** ***....

****** **** ******* **** *** **** but:

  • ** **** ****** ********* ** *** larger ******* (********** */*" ** *****), both ** *** **** ** *** imagers *** ****** ** *** **** them. Aside **** ********'* *** ****** (*** **** ****** ******* ** ****** 35mm ******* ** ************) *** *** new */*" ****** **** ***-*****, *** ******** *** ****** ** surveillance.
  • ******** ********* ** ****** **** (*/*" to */*.*") *** *** **** ********* than ******** ** **** ******* / image ********** *** ***** ****** (**** size *** ******). *** ***** ***** shows *** **** ****** ***** ********** radically ********* *** ***** *********** *** directly ** *** ****** **********.

Image **********

*** *** **** ***** *** '*****' low ***** *********** - ***********, ************, StarLight, ***** ***** ********, ***. - what **** *** **** ** ****** is (*) ******** ***** ********** *** (2) ******* **** *******.

What ** **?

** ********** **** ** ***** ** easier ** ****** **** ** *** imager **** ** * **** ***** and ******** **** *** ****** ****** is ****. *** *******, ******, ** that ** * *** ****** ** use. 

Comments (21)

I think the mistake that gets made a lot, is people get the idea that any ONE factor is the be-all and end-all to low light performance... or just about any other "quality comparison" metric (and to be sure, this applies in just about any area of life). Comparing apples and oranges is one thing, but people start comparing Red Delicious apples to Granny Smiths, and the simple fact is, some are better for eating and some are better for baking pies.

No, a larger imager doesn't automatically mean better low light... but ALL ELSE being equal, it TENDS to be better, just as a wider aperture lens will TEND to be better for low light as well (both examples based simply on the laws of physics).

Problem is, all else is never equal... yet people translate "TENDS to be" into "WILL be".

Agreed. What we also see is that the impact of image processing / advanced gain control is something ignored or unknown to many industry people, especially since it lacks an easy metric like imager size has.

I agree with both your and John's comments. Everything else (including sensor technologies) being equal, a bigger imager would have better low light performance.

However, the ISP indeed plays a huge role in defining the overall system's low light performance. E.g., digital gain, noise reduction (2D, 3D, Bayer domain, etc.), WDR, gamma correction, and a good H.264 encoder are all important parameters for good low light performance.

I mention the encoder since more noise typically in low light would generally mean higher bit rate as well.

"I mention the encoder since more noise typically in low light would generally mean higher bit rate as well."

Huzaifa, good feedback. Historically, that was our experience as well. In the last year, however, it has changed. A lot of the new cameras have really good low light image quality and very low bit rates. Bosch Starlight is one that comes to mind, but they are not the only one.

This too looks to be a result of advanced processing / analysis that is better at filtering out / detecting what is real motion.

Because we are seeing such a shift here, we are currently working on a new report to analyze and contrast bandwidth consumption from leading cameras.

Bosch Starlight is a good benchmark camera, though not necessarily the best. It has 3D noise reduction, but the ghosting effect is not so good (better than before, but still not enough).

Good 3D NR definitely helps in reducing the high frequency noise - something that helps the encoders tremendously to achieve low bit rate.

The key here is how good your 3D NR is: how much detail/edges do you lose, and what is the ghosting effect like?

My point is not whether the Bosch camera was the 'best' but that it was an example against the historic pattern of low light cameras having very high bit rates.

Since you brought up the topic, which camera do you think is 'best' in this aspect?

Axis Lightfinder is one of the better cameras, I feel. Of course, with newer sensors that have better low light performance, it is now unfair to compare an old camera against those that can rely on the latest sensor improvements.

I am not 100% sure, but I understand that both Bosch Starlight and Axis use the same sensor. Bosch Starlight is good in static low light, but with motion their performance is not as good, while with Axis Lightfinder the performance is good with motion but the bitrate jump is huge (their H.264 is only main profile, not high profile).

The spike in bandwidth for Axis cannot be explained by main vs high profile. Those Lightfinder cameras are frequently 10Mb/s+ at full res / 30fps, compared to Starlight with the newest firmware frequently at 1/10th that.

Bosch claims one of the reasons for their lower bandwidth performance is using their video analytics to differentiate real motion from visible noise.

Indeed, Axis' main profile is not the main reason (just one of the reasons) for high bandwidth. It indicates that they have not done enough catching up on their proprietary ASIC H.264 encoder and/or video processing pipe.

For Bosch, I am not too sure if it's really analytics; maybe it is only related to their usage/tuning of 3D NR, something like motion-adaptive 3D noise reduction. The concept is now common, but the tuning of such blocks is what differentiates one camera from another.

Don't pay me no never mind, but I spy a wolf in sheep's clothing with that there Avigilon model in the mix with its 1/2.7" imager size. Or is that one different on purpose to illustrate something in particular?

Why were the two matching Sonys not red-boxed?

Nope, that's a misfire on my red rectangle. Let me fix that right up, thanks.

I agree with the statement. I'm not seeing any reference here, however, to pixel size. Typically (not always), larger pixel size equates to better low light performance, as more light is captured by each pixel. Not all 720p / 1.3MP image sensors have the same pixel size.

On top of that, technologies for getting more light to the photosites will help as well, I believe. Take the example of back-illuminated sensors, which on the surface appear to have better low light performance than traditional image sensors, which require the light to pass through a lot of tiny components before reaching the photosites, reducing the final amount of light captured. The more processing of the light that is required, the worse the final image will be, regardless of how much technology you apply to improve it.
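Since actual pixel pitch is rarely published, it can be roughly estimated from the nominal optical format and resolution. A minimal sketch; the active-area widths below are common approximate figures for these formats, not manufacturer data:

```python
# Rough pixel pitch from nominal optical format and horizontal resolution.
# Active-area widths (mm) are approximate, commonly cited figures.
SENSOR_WIDTH_MM = {
    '1/4"': 3.6,
    '1/3"': 4.8,
    '1/2.8"': 5.1,
    '1/2.7"': 5.4,
    '1/2"': 6.4,
}

def pixel_pitch_um(fmt: str, h_pixels: int) -> float:
    """Approximate pixel pitch in microns: active width / horizontal pixels."""
    return SENSOR_WIDTH_MM[fmt] * 1000.0 / h_pixels

for fmt, h in [('1/3"', 1280), ('1/3"', 1920), ('1/2.8"', 1920)]:
    print(f'{fmt} at {h} px wide: ~{pixel_pitch_um(fmt, h):.2f} um pitch')
```

By this estimate, 720p on a 1/3" sensor works out to roughly 3.75 µm pitch, while 1080p on the same format drops to about 2.5 µm, which is why sensors sharing an optical format can still differ substantially in per-pixel light gathering.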

Those are good points, regarding pixel size and pixel 'technologies'. Most likely they have an impact, though even to the extent they do, it is not something one will be able to determine from looking at the sheer imager size (i.e., 1/2.7", 1/4"). Unfortunately, actual pixel pitch and imager technology are usually not disclosed.

This was a good topic with some great testing to back it up. IPVM testing suggests that the primary sensitivity drivers are likely only weakly related (if at all) to imager size.

This discussion raises several interesting questions:

Does IPVM observe any greater challenge focusing on smaller imagers than on larger imagers? Theoretically the lens matches incoming photons to appropriate pixel size, but are there practical issues?

Typically mature fabrication runs demonstrate predictable costs per square millimeter which depend upon number of masks and the chip material. I have claimed that smaller is cheaper and that cheaper is better if performance is comparable, but in the real world, to what degree does imager cost drive camera cost? Does IPVM see a correlation between imager size and camera cost?

Also, optical path considerations are only the first steps in video capture. Are there customer-accessible imaging and H.264 settings which are important to effective low light performance? If there are, does choosing them tend to degrade performance in other imaging conditions? For example, IPVM has extensively discussed the relationship of frame rate to sensitivity at the cost of moving object ghosting. I wonder if IPVM has hosted a presentation of these sorts of settings and trades as they relate to low light performance?

The key to low light performance is photonic capture and conversion.

All other things being equal:

  • Larger lens apertures capture more incident light
  • Higher performance lenses focus more of the incident light onto the active sensing area
  • Chips which have a lot of space between each pixel ignore some of the light which was directed onto the sensor
  • Sensors with higher photon-to-electron conversion efficiency are more sensitive

These indicated primary performance drivers are at best weakly related to pixel size, and hence, imager size.
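The drivers listed above can be folded into a back-of-the-envelope relative-sensitivity figure. A sketch with illustrative numbers only; the aperture diameters, fill factors, and quantum efficiencies below are assumptions, not measured values:

```python
import math

def relative_sensitivity(aperture_mm: float, fill_factor: float, qe: float) -> float:
    """Relative photon signal: aperture area (light captured) x fill factor
    (fraction landing on active silicon) x quantum efficiency (conversion)."""
    aperture_area = math.pi * (aperture_mm / 2.0) ** 2
    return aperture_area * fill_factor * qe

fast = relative_sensitivity(8.0, 0.9, 0.6)   # wide aperture, good fill/QE (assumed)
slow = relative_sensitivity(4.0, 0.5, 0.4)   # narrow aperture, poorer sensor (assumed)
print(fast / slow)  # ~10.8x more signal; note imager size appears nowhere
```

Notice that imager size never enters the calculation directly; it only matters through its loose correlation with these actual drivers.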

With a lens matched to sensor size, smaller pixels deliver comparable performance to larger pixels, until pixels become so small that they approach the Rayleigh limit and an Airy disc takes up more than a small fraction (say 10% or so) of a pixel. Until approaching the Rayleigh limit, the lens directs incident photons onto the pixels, matched to pixel size, and each pixel converts light into a signal; these signals constitute the fundamental building blocks of video.

Any chip fabrication process has errors which invalidate some proportion of the run, and smaller chip sizes improve yield by a squared factor. Also, chip material and processing costs are generally proportional to chip size. If smaller chips perform equally well yet cost substantially less than larger chips, it would seem that smaller chips would be more desirable.

An oft-cited claim that larger chips perform better seems unjustified by optical path considerations and by these IPVM tests. At the microelectronics level, if a smaller chip could not fit all the necessary devices into each pixel, and fit all overarching devices onto the chip, then a larger chip would simply demonstrate better performance. Since these test results show little correlation between chip size and low light performance, it seems that chip manufacturers have figured out how to arrange chip-scale devices within smaller pixel and chip sizes without compromise.
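The yield point above can be made concrete with the classic Poisson die-yield model, Y = exp(-D*A). The defect density and die areas below are illustrative assumptions, not figures from any actual fab:

```python
import math

def poisson_yield(defects_per_cm2: float, die_area_cm2: float) -> float:
    """Poisson die-yield model: fraction of dies with zero fatal defects."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

D = 0.5  # assumed defect density, defects per cm^2
small_die = poisson_yield(D, 0.17)  # roughly a 1/3" active area (~0.17 cm^2)
large_die = poisson_yield(D, 0.58)  # roughly a 2/3" active area (~0.58 cm^2)
print(small_die, large_die)  # the smaller die yields noticeably better
```

Because defects hit larger dies more often, shrinking the die improves yield exponentially in area, on top of the per-wafer material savings.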

The one confounding factor in all of this is the age of the sensor being used. The larger imagers usually touted by manufacturers now are, relatively speaking, ancient sensors. Pixel technology has improved remarkably over the years, which now means that the best 1/2" sensor of 2006 can't hold a candle to a 1/4" sensor of 2014 (no pun intended).

Categorically speaking, a larger pixel *will* gather more light than a smaller one. Larger pixels translate to larger imagers.

Large pixels are also expensive and not at all appropriate for the #1 consumer of imagers - cell phones.

The fact that cell phones are ubiquitous, hand held and targeted to normal people is a wonderful thing for security applications. This fact alone has driven sensor companies to develop unholy technologies to create a damn good looking image from a hand held, semi-drunk selfie in a hip NY bar.

As has been mentioned in other comments, the ISP - or Image Signal Processor - plays a huge role in teasing every bit of information out of these pixels. ISP, or 'sensor pipeline' is also traditionally sold separately from the imager manufacturer and bundled along with your favorite H.264 encoder (Ambarella, TI, HiSilicon, etc, etc).

This is also one of the best means for people to differentiate themselves in the security space. ISPs are often pretty generic and targeted for the mass population. Props to Axis for all the work and money they've spent to create their 'Lightfinder' algorithms.

Axis' dominance and ability to design and manufacture their own ASICs enabled them to aggressively design and optimize their ISP for one particular sensor. They have some very smart people working there and they have put their money into the right places technology wise.

It's a great time to be in the industry!

Suppose you have a 1" lens aperture that focuses all of its light onto a 2/3", 1-mega-pixel sensor.

Suppose you have a 1" lens aperture that focuses all of its light onto a 1/3", 1-mega-pixel sensor.

Is it the case, categorically speaking, that the larger pixels *will* gather more light than the smaller ones?

If your answer is yes, can you be clear about what it is that limits the light gathering capacity of the smaller pixels?


Smaller pixels have less surface area for the light reception.

That's a great answer! Since it is both simple and intuitive, I believe this simple fact forms the basis for statements such as,

"...a larger pixel *will* gather more light than a smaller one."

"The brute force method to improving light sensitivity... is to just use a more expensive, larger sensor."

"The larger image sensors provide larger pixel sizes allowing the camera to absorb more light..."

And, it's absolutely true, FOR PIXELS EXPOSED TO THE SAME LIGHT INTENSITY, which is simply not the case within optical cameras.

What if we amend these statements just slightly?

"...a larger LENS *will* gather more light than a smaller one."

"The brute force method to improving light sensitivity... is to just use a more expensive, larger LENS."

"The larger LENSES provide larger APERTURE AREA allowing the camera to absorb more light..."

LENSES are exposed to the same light intensity from a given scene, and larger LENSES capture more of the scene's light.

Standard optical design directs the light from a field of view onto the active sensor area.

Since the amount of incident light was determined by the size of the lens aperture, then two sensors with the same number of pixels have the same amount of light per pixel, regardless of pixel size.
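The accounting in this argument is simple enough to write down. A sketch, with the photon total purely illustrative:

```python
def light_per_pixel(total_photons: float, pixel_count: float) -> float:
    """If the lens focuses all captured light onto the active area,
    per-pixel light depends on pixel COUNT, not pixel or sensor size."""
    return total_photons / pixel_count

total = 1e9  # photons admitted by a given 1" aperture (illustrative figure)
on_two_thirds = light_per_pixel(total, 1_000_000)  # 2/3", 1-megapixel sensor
on_one_third = light_per_pixel(total, 1_000_000)   # 1/3", 1-megapixel sensor
print(on_two_thirds == on_one_third)  # True: same count, same photons per pixel
```

Pixel size cancels out entirely; only the aperture (which sets the total) and the pixel count (which divides it) appear in the arithmetic.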

That leaves other factors to distinguish sensor performance.

Will a lens that must focus all the scene's light onto 1/4 the area perform more poorly? This comes into play when a lens approaches the Rayleigh limit, which is not generally the case (yet) for surveillance camera designs.
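The Rayleigh-limit check is a one-line formula: the Airy disc diameter, out to the first dark ring, is about 2.44 * wavelength * f-number. A sketch for green light across common apertures:

```python
def airy_disc_diameter_um(wavelength_um: float, f_number: float) -> float:
    """Diameter of the Airy disc to the first minimum: 2.44 * lambda * N."""
    return 2.44 * wavelength_um * f_number

# Green light (~0.55 um) through typical surveillance apertures:
for n in (1.2, 2.0, 2.8):
    print(f'f/{n}: {airy_disc_diameter_um(0.55, n):.2f} um')
```

At a fast f/1.2 the disc (~1.6 µm) remains a fraction of a typical 2-4 µm pixel, while by f/2.8 (~3.8 µm) it is already comparable to the finest pitches, so the limit is approached mainly at slow apertures and very small pixels.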

Is there a lot more inactive area between pixels on one sensor than the other? Then the sensor with the poor pixel fill factor will in fact receive less light per pixel than the sensor with the better pixel fill factor, regardless of sensor size.

Is the conversion efficiency of one sensor substantially better than the other? Then the sensor with poorer conversion efficiency will perform more poorly in low light, regardless of sensor size.

All other things being equal, I would prefer the less expensive, smaller imager. However, as indicated above, I wonder if smaller sensor sizes are harder to focus, and also how much sensor cost is a driver in camera cost.

Clever question. But I wonder whether a lens that focuses a certain size image circle could focus to a smaller circle without increasing the effective focal length?

Just on the assumption that the rays would need more refraction, and so more glass, and would therefore lose more light. My experience lies more with telescopy than CCTV lenses, so I'm not quite certain myself. :-O


"Lose more light" is an interesting notion: while lens material and quality is certainly a cost driver, is the absorption and scattering of lens glass so significant that thickness measured in inches is relevant to transmissivity? One data point: glass fibers carry light for kilometers.



Probably not much light lost through absorption or scattering. Poking around on the web, estimates for uncoated lens transmittance range from 0.90 to 0.96; coated lenses go right up to 0.98. For the whole lens.

But I was also saying that, since you are essentially boosting the magnification by making the image circle smaller, the lens focal length would be longer (all other things being the same), and therefore so would the f-number. Otherwise, what changes do you make to a lens assembly to reduce the image circle? But by now I'm definitely pushing the limits of my understanding, so I'll defer to you.

So let's assume you are right: how do you explain the (usually) dismal low light performance of >3MP CCTV cameras?

Maybe the same light spread out over more pixels doesn't create as much usable information, because the noise floor increases due to the sum total of more individual photoreceptors?
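That hypothesis can be sketched with a shot-noise model; the photon total and read-noise figure below are illustrative assumptions, not measurements from any camera:

```python
import math

def pixel_snr(total_photons: float, pixel_count: float, read_noise_e: float = 5.0) -> float:
    """Per-pixel SNR with photon shot noise plus a fixed per-pixel read-noise floor."""
    signal = total_photons / pixel_count           # photons landing on each pixel
    noise = math.sqrt(signal + read_noise_e ** 2)  # shot noise + read noise, in quadrature
    return signal / noise

total = 2e7  # photons captured from a dim scene (illustrative figure)
print(pixel_snr(total, 1_300_000))  # fewer, bigger buckets: higher per-pixel SNR
print(pixel_snr(total, 5_000_000))  # same light over ~4x the pixels: lower SNR
```

Splitting the same light over roughly 4x the pixels pushes each pixel toward the fixed read-noise floor, which is one plausible reason higher-resolution cameras look worse at the same scene illumination.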

Light goes kilometers in fiber only because it is 'trying' to go through as little glass as possible, not because it is unaffected by it. Water goes miles through steel pipe just fine, but it doesn't go through it very well at all. But that's the idea, right?
