Testing The Resolution of the Human Eye
By Ethan Ace, IPVM, posted on Dec 04, 2013
All sorts of wild guesses and theoretical calculations exist about what the resolution of the human eye is, with 576MP a common claim.
No one we can find has ever actually tested it... until now.
In this report, we share our findings of testing various resolution IP cameras against the human eye.
Here's a summary of the full test results, explained inside the report below.
We used the Snellen Eye chart as a baseline. A human is considered to have 20/20 vision if they can read line 8 on the chart from 20 feet away.
The goal then is to find what camera, with what resolution, could 'read' / 'see' the same line, same distance on that same chart to see if it could be as good as the human eye.
We took a series of IP cameras (720p, 1080p, 5MP, and 10MP) and set them to a 60° angle of view. While humans have wider peripheral vision, in our testing this 60° range represented the area of main vision.
Just like a human, we placed the cameras 20 feet away from the Snellen eye chart (pictured below) to see what the camera's "eyesight" would be and whether any cameras could match or beat the human eye.
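Before the results, a back-of-envelope calculation (ours, not part of the original test) of how many pixels a camera would theoretically need to match 20/20 vision across a 60° view. The standard Snellen convention is that a line-8 optotype's strokes subtend 1 arcminute at 20 feet; a strict Nyquist criterion would demand 2 pixels per finest detail. The 3648 px sensor width below is an assumed figure for a typical 10MP camera.

```python
# Back-of-envelope: pixels needed to resolve 20/20 detail across a 60° view.
# Assumptions (not from the article): 20/20 acuity = resolving ~1 arcminute
# of detail, and a strict Nyquist criterion of 2 pixels per resolvable detail.

fov_deg = 60                   # horizontal field of view used in the test
fov_arcmin = fov_deg * 60      # 3600 arcminutes
detail_arcmin = 1.0            # stroke width of a Snellen line-8 optotype

nyquist_px = 2 * fov_arcmin / detail_arcmin  # 2 samples per finest detail
print(f"Strict Nyquist estimate: {nyquist_px:.0f} px horizontally")

# An assumed 10MP sensor (~3648 px wide) gives about 1 px per arcminute:
px_10mp = 3648
print(f"Assumed 10MP density: {px_10mp / fov_arcmin:.2f} px/arcmin")
```

By this strict criterion a camera would need ~7200 px across, yet the test below finds 10MP (about 1 px per arcminute) sufficient to read line 8, so real-world legibility evidently needs less than full Nyquist sampling.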
We began with the four cameras in a well-lit room, at about 160 lux. The image below shows the full field of view in this scene.
First, the 720p camera was only able to resolve about line 4 on the Snellen chart, equal to about 20/50 vision.
1080p resolution was able to read only one line further, to line 5, 20/40 vision.
Moving to 5MP clearly gains one more line, to line 6 (20/30), though one might contend that the next line (20/25) is also readable.
Finally, 10MP provides a further jump in resolving power, able to read line 8, equal to 20/20 vision in humans.
So that's it. In ideal, evenly bright lighting, a 10MP camera can match (or maybe even slightly beat) the human eye.
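The bright-room results above can be tabulated with the conventional mapping from Snellen chart line to 20/x acuity (the same mapping the article uses throughout):

```python
# Conventional Snellen chart mapping: line number -> 20/x acuity.
SNELLEN = {1: 200, 2: 100, 3: 70, 4: 50, 5: 40, 6: 30, 7: 25, 8: 20}

# Lowest line each subject could read in the 160 lux test, per the article.
results_160lux = {"720p": 4, "1080p": 5, "5MP": 6, "10MP": 8, "human": 8}

for subject, line in results_160lux.items():
    print(f"{subject:>6}: line {line} -> 20/{SNELLEN[line]}")
```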
Dimming the lights in the room to approximately 3 lux, we tested again. At this light level, multiple human test subjects with 20/20 vision were able to make out line 6, 20/30 (EDFCZP), a two-line reduction from the brightly lit room.
720p resolution was only able to view down to line 3 (20/70) in this case, due to increased noise and a dimmer image.
The 1080p camera did not fare any better, with even more increased noise and artifacting obscuring lines below 3, actually making letters slightly less legible than in 720p.
At 5MP resolution, line 4 (20/50) was legible. Note that some letters of line 5 are clear, but noise and artifacts do not allow the full line to be easily read.
Finally, the 10MP camera was able to read down to line 5, 20/40. Again, some letters on line 6 are easily legible, but we cannot conclusively claim all of them are clear.
For our final test, we lowered the lights to approximately 1 lux, a very dark room. At this light level, only the 720p and 1080p cameras produced usable images of the chart. Human subjects were able to make out line 5 on the chart, equal to 20/40 vision (PECFD).
The 720p camera was able to resolve line 2, equal to 20/100 vision, with the lines below it unclear.
And moving up to 1080p resolution, line 3, 20/70, can be read.
The 5 and 10 megapixel cameras produced only a nearly black image and noise. So let's call them 'blind' in low light.
More Test Reports
If you liked this, IPVM has over 200 other similar test reports.
The eyes are an amazing machine when you consider that the image is transmitted upside down along the optic nerve, turned upright again in the brain, and the signals from the two eyes are then compared to calculate depth perception. However, the eye only seems to have a small area of sharp focus in the center, and at the periphery it has reduced bandwidth. This keeps bandwidth consumption down in areas that are not important. Is that the principle that Avigilon realized, and what prompted their interface development?
How does the eye stack up on a WDR test?
Has anyone done a "detailed" comparison of the eye/ optic nerve/ brain design and compared it to modern camera design - I would find that interesting.
It would be very interesting to see how it compares with a WDR test.
These guys over at Cambridge in Colour seem to back you up:
... Away from the center, our visual ability decreases dramatically, such that by just 20° off-center our eyes resolve only one-tenth as much detail. At the periphery, we only detect large-scale contrast and minimal color
Qualitative representation of visual detail using a single glance of the eyes.
Taking the above into account, a single glance by our eyes is therefore only capable of perceiving detail comparable to a 5-15 megapixel camera (depending on one's eyesight).
Too bad a human eye cannot zoom in on the subject.
Not exactly accurate as you can't take a "snapshot" of what the eye sees and zoom in digitally.
Would you explain the meaning of the table in the article?
Human: 20/20, 20/30
And how do I know the lux level in a real environment? For example, what does 1 lux mean? 100 lux?
Jerry Chang 12/5
Very cool. It's fun to see how well electronics can do against their human counterparts. It looks like we, humans, are still a ways off from perfecting this technology...
I do have a question regarding your test. What was your criteria for picking the cameras that you chose, and what cameras did you actually use? In other words, did you choose cameras that you knew would perform the best in low light conditions? Also, what did you end up setting the focal length of the lens at, 3mm, 2.8mm? I think that is equally interesting. I would love to know what the focal length of the human eye would be versus any given lens and sensor size. The reason behind the first question is that I was a little surprised that the 1080p cameras outperform their lower megapixel counterparts as the light levels drop. Usually we see the opposite in the field.
As usual, I like your 'nuts and bolts' approach to your testing methods. Not a sterile laboratory type of experimental methodology, but real-world approaches to basic comparisons of technology. I think this is a very effective way for you to communicate what you are trying to convey to your audience.
Keep up the good work!
Great test to show visually how these cameras really work. Camera specs don't tell the true story. Basically we see contrast (the difference between dark and light). As the light gets lower, the signal-to-noise ratio will drive that contrast ratio lower. Pixel size, read circuit noise, pixel quantum efficiency, dark current noise, etc. all affect this performance. Marketing specs on sensors or cameras will not tell you the story. That's why independent tests like these help show the difference between products. The biggest problem in IP video is illumination, in that you can't control it.
This is clearly not a test of the resolution of the human eye, if anything it is a test of the pixel density of the human eye at the focal point.
Quite an interesting exercise...
- There are about 126 million photo-receptors in the eye contributing to normal vision - 120 million rods and 6 million cones. So one could naively state: resolution of 126 million pixels (or rather, the angular equivalent in seconds of arc).
- However, the receptor density is not uniform: significantly greater in the central vision, so we have much better "resolution" for things we are looking straight at. Also, vertical and horizontal "resolution" are asymmetric
- Dynamic range is exceptional: there are four types of receptors contributing to normal vision: rods and three types of cones. The rods are exceptionally sensitive to light and provide the entire low light vision (without colour) as well as through the whole range. The three types of cones need more light but respond differently to different wavelengths / bands and hence jointly provide colour.
- Additionally, the cones (colour sensors) are highly concentrated near the "centre" of the visual field, so colour vision is better there too.
It would be interesting to know the maximum exposure setting on the cameras. A longer exposure would render clearer text in low light, but as we all know, in most security applications, a longer exposure is not acceptable due to motion blur. But there are certainly applications where 1/15 or 1/10 of a second is acceptable.
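The exposure trade-off raised above can be made concrete with a rough sketch (our illustration, with made-up but plausible numbers: a walking subject and an assumed on-target pixel density): blur in pixels is simply distance moved during the exposure times pixel density.

```python
# Hedged sketch (illustrative numbers, not from the article): how much
# motion blur a given shutter speed produces, in pixels, for a moving subject.
def blur_px(speed_m_s, exposure_s, px_per_m):
    """Pixels of blur = distance moved during the exposure * pixel density."""
    return speed_m_s * exposure_s * px_per_m

# A person walking at ~1.4 m/s, imaged at an assumed ~100 px per meter:
for exposure in (1 / 250, 1 / 30, 1 / 15, 1 / 10):
    print(f"1/{round(1 / exposure)} s -> {blur_px(1.4, exposure, 100):.1f} px of blur")
```

At 1/10 s the walker smears across ~14 pixels, which is why long exposures help a static chart test but fail in most security scenes.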
Great article and discussion!
It would be interesting to see an OCR/LPR software's ability to read the letters. One may argue that the results are subject to the human observer's ability to read the letters off the screen.
Alright then, here is my proof that the human eye has amazing abilities. In the middle of the night, in pitch black conditions, I can walk to the bathroom and delineate the edges of the door casement without killing myself walking through the door!
Eyes have various advantages that cameras don't. Firstly, WDR as others have mentioned. Mythbusters did a test with a very dark room full of junk - an obstacle course. The test subjects, after adjusting to the light, were able to easily navigate the room, safely.
The second thing that eyes have is two of them. Using two of them, we can see a lot better than just one.
Third thing eyes have is the brain. The brain does all sorts of processing to improve image quality where it matters.
The advantage of the human eye is that it has two types of sensors: rods and cones. One is for dimly lit situations, the other for bright light and color vision. This is the direction, I feel, that camera sensors should move in as well, which is why Pixon Imaging is exploring, and has patented, adaptive-binning CCD cameras. On the question of WDR performance, correctly designed adaptively binned sensors can probably do one better than the human eye by effectively taking multiple exposures at the same time with pixels of different sizes: big ones for high sensitivity, and small ones for the bright areas of the scene. Concerning the human eye, however, I am not sure if the rods and the cones can work together on this, so WDR tests of the human eye would be highly interesting.
Is Pixon where Arecont gets its binning technology?
Hi Luis. No. For many years we pioneered some similar technology called the Pixon method, which is spatially adaptive, but this was never picked up by any manufacturers. That is after-the-fact binning, i.e., after the sensor is read out and every pixel has suffered read noise. What we at Pixon are advocating now is the use of multiple binning schemes on-chip, before reaching the output amplifier. That is why this needs to be done on CCDs rather than on CMOS devices. The advantage is that if you bin on-chip, say a group of 4x4 pixels, you suffer only one unit of read noise. If this is done after reading out all the pixels, you get 16 units of read noise, which gives an effective noise for the sum of the 16 pixels of 4 units of noise (noise adds in quadrature). So the CCD pre-readout binning approach is 4x more sensitive than what can be achieved on a CMOS device.
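The quadrature argument in the comment above can be sketched in a few lines: summing n independently read-out pixels accumulates n read-noise contributions that add as the square root of the sum of squares, while binning on-chip before readout incurs only one.

```python
import math

# Sketch of the read-noise argument: summing n pixels that were each read
# out separately adds their read noise in quadrature (sqrt of sum of squares),
# while binning the same n pixels on-chip incurs a single unit of read noise.
def post_readout_noise(n_pixels, read_noise=1.0):
    return math.sqrt(n_pixels * read_noise ** 2)

n = 16  # a 4x4 bin
print(f"off-chip sum of {n} px: {post_readout_noise(n):.1f} units of read noise")
print("on-chip bin of 16 px : 1.0 unit of read noise -> 4x lower noise floor")
```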