
Testing The Resolution of the Human Eye

by Ethan Ace, IPVM posted on Dec 04, 2013

All sorts of wild guesses and theoretical calculations exist about what the resolution of the human eye is, with 576MP a common claim.

No one we can find has ever actually tested it . . . until now.

In this report, we share our findings from testing IP cameras of various resolutions against the human eye.

Results Summary

Here's a summary of the full test results, explained in detail in the report below:

                 160 lux           3 lux             1 lux
Human (20/20)    Line 8 (20/20)    Line 6 (20/30)    Line 5 (20/40)
720p             Line 4 (20/50)    Line 3 (20/70)    Line 2 (20/100)
1080p            Line 5 (20/40)    Line 3 (20/70)    Line 3 (20/70)
5MP              Line 6 (20/30)    Line 4 (20/50)    Unusable
10MP             Line 8 (20/20)    Line 5 (20/40)    Unusable

The Baseline

We used the Snellen Eye chart as a baseline. A human is considered to have 20/20 vision if they can read line 8 on the chart from 20 feet away. 

 

The goal, then, is to find which camera, at which resolution, could 'read' / 'see' the same line on that same chart from the same distance, and thus match the human eye.
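For reference throughout the results, the standard Snellen chart progression maps line numbers to acuity values. Here is a minimal lookup sketch in Python, assuming the common 8-line chart layout used for 20/20 testing (the exact chart used in the test may differ slightly):

# Standard Snellen progression, assumed: line 1 is the large top letter,
# line 8 corresponds to 20/20 vision at 20 feet.
SNELLEN_LINES = {
    1: "20/200",
    2: "20/100",
    3: "20/70",
    4: "20/50",
    5: "20/40",
    6: "20/30",
    7: "20/25",
    8: "20/20",
}

def acuity_for_line(line_number):
    """Return the acuity equivalent of the lowest line a viewer can read."""
    return SNELLEN_LINES[line_number]

print(acuity_for_line(5))  # a camera that resolves down to line 5 is roughly 20/40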

The Test

We took a series of IP cameras (720p, 1080p, 5MP, and 10MP) and set them to a 60° angle of view. While humans have wider peripheral vision, in our testing this 60° range represented the area of main vision.

Just like a human, we placed the cameras 20 feet away from the Snellen eye chart (pictured below) to see what the camera's "eyesight" would be and whether any cameras could match or beat the human eye.
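To give a sense of the pixel density each camera puts on the chart in this setup, here is a rough pixels-per-foot sketch. The per-class pixel counts (1280x720, 1920x1080, 2592x1944 for 5MP, 3648x2752 for 10MP) are typical values assumed for illustration; the exact sensors used may differ:

import math

# Width of the scene covered by a 60 degree horizontal field of view at 20 feet.
DISTANCE_FT = 20.0
HFOV_DEG = 60.0
width_ft = 2 * DISTANCE_FT * math.tan(math.radians(HFOV_DEG / 2))  # ~23.1 ft

# Assumed horizontal pixel counts for each resolution class.
cameras = {"720p": 1280, "1080p": 1920, "5MP": 2592, "10MP": 3648}

for name, h_pixels in cameras.items():
    print(f"{name}: ~{h_pixels / width_ft:.0f} pixels per foot on the chart")
# Roughly 55, 83, 112, and 158 ppf respectively.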

The Results

We began with the four cameras in a well-lit room, about 160 lux. The image below shows the full field of view in this scene.

160 Lux

First, the 720p camera was only able to resolve about line 4 on the Snellen chart, equal to about 20/50 vision.

1080p resolution was able to read only one line further, to line 5, 20/40 vision.

Moving to 5MP, one more line is clearly gained (line 6, 20/30), though one may contend that the next line (20/25) is also readable.

Finally, 10MP provides a further jump in resolving power, able to read line 8, equal to 20/20 vision in humans.

So that's it. In ideal, evenly bright lighting, a 10MP camera can match (or maybe even slightly beat) the human eye.
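As a rough sanity check on that result, assuming the common definition that 20/20 vision resolves detail of about one arcminute:

# A 60 degree field of view spans 60 * 60 = 3600 arcminutes.
# Matching one-arcminute detail therefore needs on the order of 3600 horizontal pixels,
# which is close to the ~3648 horizontal pixels of a typical 10MP (4:3) sensor.
hfov_deg = 60
arcmin_per_deg = 60
required_h_pixels = hfov_deg * arcmin_per_deg
print(required_h_pixels)  # 3600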

3 Lux

Dimming the lights in the room to approximately 3 lux, we tested again. At this light level, multiple human test subjects with 20/20 vision were able to make out line 6, 20/30 (EDFCZP), a two line reduction in ability from the brightly lit room.

720p resolution was only able to resolve down to line 3 (20/70) in this case, due to increased noise and a dimmer image.

The 1080p camera did not fare any better, with even more noise and artifacting obscuring the lines below line 3, actually making letters slightly less legible than in 720p.

At 5MP resolution, line 4 (20/50) was legible. Note that some letters of line 5 are clear, but noise and artifacts do not allow the full line to be easily read.

Finally, the 10MP camera was able to read down to line 5, 20/40. Again, some letters on line 6 are easily legible, but we cannot conclusively claim all of them are clear.

1 Lux

For our final test, we lowered the lights to approximately 1 lux, a very dark room. At this light level, only the 720p and 1080p cameras produced usable images of the chart. Human subjects were able to make out line 5 on the chart, equal to 20/40 vision (PECFD).

The 720p camera was able to resolve line 2, equal to 20/100 vision, with others below becoming unclear.

And moving up to 1080p resolution, line 3, 20/70, can be read.

The 5 and 10 megapixel cameras produced only a nearly black image and noise. So let's call them 'blind' in low light.

    




Comments (31)


The eyes are an amazing machine when you consider that the image is upside down when transmitted along the optic nerve, is turned upright again in the brain, and then the signals from the two eyes are compared to calculate depth perception. However, the eyes only seem to have a small area of sharp focus in the center, with reduced bandwidth on the periphery. This is meant to keep consumption of bandwidth down on areas that are not important. Is that the principle that Avigilon realized and that prompted their interface development?

How does the eye stack up on a WDR test?

Has anyone done a "detailed" comparison of the eye / optic nerve / brain design to modern camera design? I would find that interesting.

It would be very interesting to see how it compares with a WDR test.

These guys over at Cambridge in Colour seem to back you up:

... Away from the center, our visual ability decreases dramatically, such that by just 20° off-center our eyes resolve only one-tenth as much detail. At the periphery, we only detect large-scale contrast and minimal color

Qualitative representation of visual detail using a single glance of the eyes.
Taking the above into account, a single glance by our eyes is therefore only capable of perceiving detail comparable to a 5-15 megapixel camera (depending on one's eyesight).

Too bad a human eye cannot zoom in on the subject.

Not exactly accurate as you can't take a "snapshot" of what the eye sees and zoom in digitally.

Hi John:

Would you explain the meaning of the table in the article?

Human: 20/20, 20/30

And how do I know the lux level in a real environment?

For example, what does 1 lux mean? What does 100 lux mean?

Jerry Chang 12/5

James, we had multiple 'humans' test their vision in full, low light and dark conditions. The 20/20 means that they could read the 8th line. The 20/30 in low light means they can only read the 6th line, etc.

Lux is a measurement of visible light. We used a lux meter. 1 lux is roughly moonlight. 100s of lux are what you would find in an office. 1000s of lux is typical outdoors, etc.

John

Very cool. It's fun to see how well electronics can do against their human counterparts. It looks like we, humans, are still a ways off from perfecting this technology...

I do have a question regarding your test. What were your criteria for picking the cameras that you chose, and what cameras did you actually use? In other words, did you choose cameras that you knew would perform the best in low light conditions? Also, what did you end up setting the focal length of the lens at, 3mm, 2.8mm? I think that is equally as interesting. I would love to know what the focal length of the human eye would be versus any given lens and sensor size. The reason behind the first question is that I was a little surprised that the 1080p cameras outperformed their lower megapixel counterparts as the light levels dropped. Usually we see the opposite in the field.

The focal length of the cameras would be about 4mm (depending on the imager size - 1/3", 1/2.7", etc.). The focal length of the human eye would differ, because the size of its 'imager'/'sensor' is different.
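As a quick sketch of where that ~4mm figure comes from, assuming a nominal 1/3" imager roughly 4.8mm wide (the exact width varies by sensor):

import math

# Focal length for a given horizontal angle of view:
# f = (sensor_width / 2) / tan(HFOV / 2)
sensor_width_mm = 4.8   # assumed width of a nominal 1/3" imager
hfov_deg = 60.0
f_mm = (sensor_width_mm / 2) / math.tan(math.radians(hfov_deg / 2))
print(f"{f_mm:.1f} mm")  # ~4.2 mm, in line with the ~4mm estimate above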

In lower light, newer 1080p cameras tend to be about the same as 720p ones. They might be slightly darker, but the extra pixels help when looking at small lines of text.

Obviously, though, the point here was not to say that 1080p was definitely better or worse than 720p but to give people a sense of the tradeoffs between the human eye and cameras, in general.

We used a mix of Axis, Bosch and Arecont in the testing.

As usual, I like your 'nuts and bolts' approach to your testing methods. Not a sterile laboratory type of experimental methodology, but real-world approaches to basic comparisons of technology. I think this is a very effective way for you to communicate what you are trying to convey to your audience.

Keep up the good work!

Great test to show visually how these cameras really work. Camera specs don't tell the true story. Basically we see contrast (the difference from dark to light). As the light gets lower, the signal to noise ratio will drive that contrast ratio lower. Pixel size, read circuit noise, pixel quantum efficiency, dark current noise, etc. all affect this performance. Marketing specs on sensors or cameras will not tell you the story. That's why independent tests like these help show the difference in products. The biggest problem in IPV is illumination, in that you can't control it.

This is clearly not a test of the resolution of the human eye, if anything it is a test of the pixel density of the human eye at the focal point.

Dustin, good point.

Earlier this year, I argued that we should ban the term resolution in surveillance because resolution is now so commonly used to mean pixel count. To that end, and since the industry overwhelmingly means pixel count when they say resolution, that's how we are using the term.

Quite an interesting exercise...

  • There are about 126 million photo-receptors in the eye contributing to normal vision - 120 million rods and 6 million cones. So one could naively state: a resolution of 126 million pixels (or rather, the angular equivalent in seconds of arc).
  • However, the receptor density is not uniform: it is significantly greater in the central vision, so we have much better "resolution" for things we are looking straight at. Also, vertical and horizontal "resolution" are asymmetric.
  • Dynamic range is exceptional: there are four types of receptors contributing to normal vision - rods and three types of cones. The rods are exceptionally sensitive to light and provide all of our low light vision (without colour), as well as contributing across the whole range. The three types of cones need more light but respond differently to different wavelengths / bands and hence jointly provide colour.
  • Additionally, the cones (colour sensors) are highly concentrated near the "centre" of the visual field, so colour vision is better there too.

I didn't know we had different cones. Apparently they use 3 different proteins that respond to the light spectrum, one for red, one for blue, and the other for yellow. Apparently 60% of the cones are red sensitive, while only 2% of them are sensitive to blue. For some unknown reason the brain boosts the blue signal to make the response similar.

Our night vision takes almost 30 minutes to take full effect. The rods (which use a 4th light-sensitive protein) are not sensitive to red light; therefore, ship control panels use red indicator lights so that our night vision is not impaired.

The other thing that goes along with Henry's description is how lenses work in general. The dead center has the best true resolution (the real kind). Resolving accuracy falls off as one moves out from the center. I used to design optical testing machines and this was an aspect we had to measure. The human lens and the central rod/cone density work together.

Excellent article. Good use of scientific methodology.

It would be interesting to know the maximum exposure setting on the cameras. A longer exposure would render clearer text in low light, but as we all know, in most security applications, a longer exposure is not acceptable due to motion blur. But there are certainly applications where 1/15 or 1/10 of a second is acceptable.

Vance, good point / question. We always use 1/30s unless otherwise explicitly noted. As you say, a slower shutter would make those cameras 'see' the chart a lot better in low light.
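To put rough numbers on that tradeoff, here is a simple motion-blur estimate at those shutter speeds. The walking speed and the ~55 pixels-per-foot figure (roughly 720p at this 60 degree / 20 foot setup) are illustrative assumptions:

# Blur in pixels = subject speed * exposure time * pixels per foot on target.
speed_ft_per_s = 4.0   # assumed casual walking speed
ppf = 55.0             # assumed pixel density, roughly 720p in this test's field of view

for exposure in (1/30, 1/15, 1/10):
    blur_px = speed_ft_per_s * exposure * ppf
    print(f"1/{round(1 / exposure)}s exposure -> ~{blur_px:.0f} pixels of motion blur")
# Slower shutters gather more light but smear moving subjects across more pixels.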

Great article and discussion!

It would be interesting to see an OCR/LPR software's ability to read the letters. One may argue that the results are subjective, depending on the human eye's ability to read the letters off the screen...

Certainly, there's a degree of subjectivity for humans to read letters off the screen. For instance, some people will certainly look at the images displayed above and argue that they can make out a line lower than we picked. Even going to the eye doctor, such judgments are always going to be a little fuzzy as it depends on people guessing what the letters are, etc.

From what I have seen with OCR/LPR, they need more details / greater ppf / etc., to get the same accuracy as a human looking at an image. Humans still seem to be better at guessing fuzzy, small characters than computers.

I would have agreed with you a year ago, but doesn't it seem to you that some of the captchas out there have gotten at least twice as hard as they used to be? There was a Facebook one today, the kind with two words, one a reference word, the other a twisted jumble of glyphs straight out of the Black Forest! It took three attempts, and I was really trying...

That says to me computers must be at least chompin' at the bit of human recognition otherwise they wouldn't be so tough, right?

So I'll try downloading one of the captcha buster hack programs and see how it does on license plates...

There are two very important differences between those applications:

  • Expectations of accuracy: A captcha cracker that works 1 out of every 3 times is a goldmine. An LPR system that works 1 out of every 3 times gets everyone involved fired.
  • Load: LPR systems need to work on X images per second per camera, while a captcha cracker does one image per site.

In sum, captcha crackers can throw more resources at the problem and accept lower accuracy results than LPR.

Robert, thanks for clearing up the rod/red relationship. I remember learning in Boy Scouts to use red light at night so your night vision wasn't messed up, but never asked why. I have read theories about blue light though, suggesting that the reason why we are relatively insensitive to it is the abundance of blue light due to the diffractive qualities of our atmosphere.

Alright then, here is my proof that the human eye has amazing abilities. In the middle of the night, in pitch black conditions, I can walk to the bathroom and delineate the edges of the door casement, and not kill myself walking through the door!

Interesting article.

Eyes have various advantages that cameras don't. Firstly, WDR, as others have mentioned. Mythbusters did a test with a very dark room full of junk - an obstacle course. The test subjects, after adjusting to the dark, were able to easily navigate the room safely.

The second thing eyes have going for them is that there are two of them. Using both, we can see a lot better than with just one.

Third thing eyes have is the brain. The brain does all sorts of processing to improve image quality where it matters.

The advantage of the human eye is that it has two types of sensors: rods and cones. One is for bright light and color vision, the other for dimly lit situations. This is the direction, I feel, that camera sensors should move in as well, which is why Pixon Imaging is exploring, and has patented, adaptive-binning CCD cameras. On the question of WDR performance, correctly designed adaptively binned sensors can probably do one better than the human eye by effectively taking multiple exposures at the same time with pixels of different sizes, big ones for high sensitivity, and small ones for the bright areas of the scene. Concerning the human eye, however, I am not sure if the rods and the cones can work together on this, so WDR tests of the human eye would be highly interesting.

Great article!

Is Pixon where Arecont gets their binning technology?

Hi Luis. No. We pioneered some similar technology for many years called the Pixon method, which is spatially adaptive, but this was never picked up by any manufacturers. This is after-the-fact binning, i.e., after the sensor is read out and every pixel has suffered read noise. What we at Pixon are advocating now is the use of multiple binning schemes on-chip, before reaching the output amplifier. That is why this needs to be done on CCDs rather than on CMOS devices. The advantage is that if you bin on-chip, say a group of 4x4 pixels, you suffer only one unit of read noise. If this is done after reading out all the pixels, you get 16 units of read noise, which gives an effective noise for the sum of the 16 pixels of 4 units of noise (noise adds in quadrature). So the CCD pre-readout binning approach is 4x more sensitive than what can be achieved on a CMOS device.
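A minimal numeric sketch of that quadrature argument, using the 4x4 group and one 'unit' of read noise from the comment above:

import math

read_noise = 1.0        # one unit of read noise per readout
pixels_in_bin = 4 * 4   # a 4x4 group of pixels

# Binning on-chip before readout: the summed charge is read once -> one unit of noise.
on_chip_noise = read_noise

# Binning after readout: each of the 16 pixels is read separately, and independent
# read noise adds in quadrature -> sqrt(16) = 4 units of noise on the sum.
post_readout_noise = math.sqrt(pixels_in_bin) * read_noise

print(on_chip_noise, post_readout_noise)  # 1.0 vs 4.0 -> roughly a 4x sensitivity advantage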

Vsauce has an amazing video on this.

Also - I encourage you to try the "blind spot" exercise he describes, I found it quite surprising when my thumb disappeared in front of me.



