The Resolution of the Human Eye Tested Is 10MP

Published Dec 04, 2013 05:00 AM

*** ***** ** **** ******* *** theoretical ************ ***** ***** **** *** resolution ** *** ***** *** **, with ***** * ****** *****.

** **** ******, ** ***** *** findings ** ******* ******* ********** ** cameras ******* *** ***** ***.

Results *******

****'* * ******* ** *** **** test *******, ********* ****** *** ****** below.

The ********

** **** ********** *** ******* * ********. * ***** ** considered ** **** **/** ****** ** they *** **** **** * ** the ***** **** ** **** ****.

*** **** **** ** ** **** what ******, **** **** **********, ***** 'read' / '***' *** **** ****,**** ******** ** **** **** ***** to *** ** ** ***** ** as **** ** *** ***** ***.

The ****

** **** * ****** ** ** cameras (****, *****, ***, *** ****) and *** **** ** * **° angle ** ****. ***** ****** **** wider ********** ******, ** *** ******* this **° ***** *********** *** **** of **** ******.

**** **** * *****, ** ****** the ******* ** **** **** **** the******* *** *****(******** *****) ** *** **** *** camera's "********" ***** ** *** ******* any ******* ***** ***** ** **** the ***** ***.

The *******

** ***** **** *** **** ******* in **** *** ****, ***** *** lux. *** ***** ***** ***** *** full ***** ** **** ** **** scene.

*** ***

*****, *** **** ****** *** **** able ** ******* ***** **** * on *** ******* *****, ***** ** about **/** ******.

***** ********** *** **** ** **** only *** **** *******, ** **** 5, **/** ******.

*** ****** ** ***, *** **** is ******* ******, **/** ****** *** may ******* **** *** **** **** 20/25 ** **** ********.

*******, **** ******** * ******* **** in ********* *****, **** ** **** line *, ***** ** **/** ****** in ******.

** ****'* **. ** ***** **** bright ********, * **** *** ***** (or ***** **** ******** ****) * human ***.

* ***

******* *** ****** ** *** **** to ************* * ***, ** ****** again. ** **** ***** *****, ******** human **** ******** **** **/** ****** were **** ** **** *** **** 6, **/** (******), * *** **** reduction ********* **** *** ******** *** ****.

**** ********** *** **** **** ** view **** ** **** * (**/**) in **** ****, *** ** ********* noise *** * ****** *****.

*** ***** ****** *** *** **** any ******, **** **** **** ********* noise *********************** ***** ***** *, ******** ****** letters ******** **** ******* **** ** 720p.

** *** **********, **** * (**/**) was *******. **** **** **** ******* of **** * *** *****, *** noise *** ********* ** *** ***** the **** **** ** ** ****** read.

*******, *** **** ****** *** **** to **** **** ** **** *, 20/40. *****, **** ******* ** **** 6 *** ****** *******, *** ** cannot ************ ***** *** ** **** are *****.

* ***

*** *** ***** ****, ** ******* the ****** ** ************* * ***, a **** **** ****. ** **** light *****, **** *** **** *** 1080p ******* ******** ****** ****** ** the *****. ***** ******** **** **** to **** *** **** * ** the *****, ***** ** **/** ****** (PECFD).

*** **** ****** *** **** ** resolve **** *, ***** ** **/*** vision, **** ****** ***** ******** *******.

*** ****** ** ** ***** **********, line *, **/**, *** ** ****.

*** * *** ** ********* ******* produced **** * ****** ***** ***** and *****. ** ***'* **** **** 'blind' ** *** *****.

More **** *******

** *** ***** ****, **** *** over*** ***** ******* **** *******, ********* ********* ****:

Comments (32)
Robert Baxter
Dec 05, 2013

The eye is an amazing machine when you consider that the image is upside down when transmitted along the optic nerve, is turned upright again in the brain, and then the signals from the two eyes are compared to calculate depth perception. However, the eye only has a small area of sharp focus in the center, with reduced bandwidth in the periphery. This keeps bandwidth consumption down in areas that are not important. Is that the principle that Avigilon recognized and that prompted their interface development?

How does the eye stack up on a WDR test?

Has anyone done a "detailed" comparison of the eye / optic nerve / brain design against modern camera design? I would find that interesting.

Vincent Tong
Dec 05, 2013

It would be very interesting to see how it compares with a WDR test.

Chris Dearing
Dec 05, 2013

These guys over at Cambridge in Colour seem to back you up:

... Away from the center, our visual ability decreases dramatically, such that by just 20° off-center our eyes resolve only one-tenth as much detail. At the periphery, we only detect large-scale contrast and minimal color

[Image: Qualitative representation of visual detail using a single glance of the eyes.]
Taking the above into account, a single glance by our eyes is therefore only capable of perceiving detail comparable to a 5-15 megapixel camera (depending on one's eyesight).
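
A back-of-envelope version of that arithmetic, sketched in Python. The 1 arcminute figure for 20/20 acuity is standard, but the 40° single-glance field and 2-pixels-per-cycle sampling are illustrative assumptions, not numbers from the article or the quote:

```python
def naive_eye_megapixels(fov_deg: float, acuity_arcmin: float = 1.0,
                         px_per_cycle: float = 2.0) -> float:
    """Naive pixel count for a square field of view, assuming uniform acuity.

    20/20 vision resolves detail about 1 arcminute across; a digital sensor
    needs ~2 pixels per resolvable cycle (Nyquist), so the required pixel
    pitch is acuity_arcmin / px_per_cycle arcminutes.
    """
    arcmin_total = fov_deg * 60.0                       # field width in arcminutes
    px_per_side = arcmin_total / (acuity_arcmin / px_per_cycle)
    return px_per_side ** 2 / 1e6                       # megapixels

# Uniform 20/20 acuity over a 40 x 40 degree glance would need ~23 MP...
print(f"{naive_eye_megapixels(40):.0f} MP (uniform acuity)")
# ...but acuity drops ~10x by 20 degrees off-center, which is why
# single-glance estimates like the one quoted land far lower, at 5-15 MP.
```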

Karoly Turoczi
Dec 05, 2013

Too bad a human eye cannot zoom in on the subject.

Stephen Holtzhausen
Dec 05, 2013

Not exactly accurate, as you can't take a "snapshot" of what the eye sees and zoom in digitally.

James Chang
Dec 05, 2013

Hi John,

Would you explain the meaning of the table in the article?

Human: 20/20, 20/30

And how do I know the lux data in a real environment?

For example, what does 1 lux mean? What does 100 lux mean?

Jerry Chang 12/5

John Honovich
Dec 05, 2013
IPVM

James, we had multiple 'humans' test their vision in full light, low light, and dark conditions. The 20/20 means that they could read the eighth line. The 20/30 in low light means they could only read the 6th line, etc.

Lux is a measurement of visible light. We used a lux meter. 1 lux is roughly moonlight. 100s of lux are what you would find in an office; 1000s of lux are typical outdoors, etc.
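
A minimal sketch of the two scales John describes. The Snellen line-to-acuity mapping is the standard chart layout (consistent with line 8 = 20/20 and line 6 = 20/30 above); the lux figures are rough orders of magnitude, not measurements from this test:

```python
# Standard Snellen chart lines mapped to acuity (line 8 = 20/20, as above).
SNELLEN_LINE_TO_ACUITY = {
    1: "20/200", 2: "20/100", 3: "20/70", 4: "20/50",
    5: "20/40",  6: "20/30",  7: "20/25", 8: "20/20",
}

# Rough lux reference points, order of magnitude only.
LUX_REFERENCE = {
    1: "moonlight, roughly",
    100: "typical office lighting",
    1_000: "outdoors in daylight (often far higher)",
}

print(SNELLEN_LINE_TO_ACUITY[6])  # -> 20/30, i.e. read down to the 6th line
```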

Phil Coppola
Dec 05, 2013

John

Very cool. It's fun to see how well electronics can do against their human counterparts. It looks like we humans are still a ways off from perfecting this technology...

I do have a question regarding your test. What were your criteria for picking the cameras that you chose, and which cameras did you actually use? In other words, did you choose cameras that you knew would perform the best in low light conditions? Also, what did you end up setting the focal length of the lens at, 3mm, 2.8mm? I think that is equally interesting. I would love to know what the focal length of the human eye would be versus any given lens and sensor size. The reason behind the first question is that I was a little surprised that the 1080p cameras outperformed their lower megapixel counterparts as the light levels dropped. Usually we see the opposite in the field.

John Honovich
Dec 05, 2013
IPVM

The focal length of the cameras would be about 4mm (depending on the imager size - 1/3", 1/2.7", etc.). The focal length of the human eye would differ, because the size of its 'imager'/'sensor' is different.
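
That relationship can be written down directly with the pinhole-lens formula. A sketch, with approximate sensor widths that are assumptions for illustration, not the specific cameras tested:

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal angle of view from sensor width and focal length (pinhole model)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Approximate sensor widths (mm) for common imager formats.
SENSOR_WIDTH_MM = {'1/3"': 4.8, '1/2.7"': 5.4}

for fmt, width in SENSOR_WIDTH_MM.items():
    print(fmt, f"{horizontal_fov_deg(width, 4.0):.0f} deg at 4mm")
# Same 4mm lens, different imager -> different field of view, which is why the
# 'focal length' of the eye isn't directly comparable to a camera lens.
```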

In lower light, newer 1080p cameras tend to be about the same as 720p ones. They might be slightly darker, but the extra pixels help when looking at small lines of text.

Obviously, though, the point here was not to say that 1080p was definitely better or worse than 720p but to give people a sense of the tradeoffs between the human eye and cameras, in general.

We used a mix of Axis, Bosch and Arecont in the testing.

Joel Kriener
Dec 05, 2013
IPVMU Certified

As usual, I like your 'nuts and bolts' approach to your testing methods. Not a sterile laboratory type of experimental methodology, but real-world approaches to basic comparisons of technology. I think this is a very effective way for you to communicate what you are trying to convey to your audience.

Keep up the good work!

Mark Fiscella
Dec 05, 2013

Great test to show visually how these cameras really work. Camera specs don't tell the true story. Basically we see contrast (the difference between dark and light). As the light gets lower, the signal-to-noise ratio will drive that contrast ratio lower. Pixel size, read circuit noise, pixel quantum efficiency, dark current noise, etc. all affect this performance. Marketing specs on sensors or cameras will not tell you the story. That's why independent tests like these help show the difference in products. The biggest problem in IP video is illumination, in that you can't control it.

Dustin Graybill
Dec 05, 2013

This is clearly not a test of the resolution of the human eye; if anything, it is a test of the pixel density of the human eye at the focal point.

John Honovich
Dec 05, 2013
IPVM

Dustin, good point.

Earlier this year, I argued that we should ban the term resolution in surveillance because resolution is now so commonly used to mean pixel count. To that end, and since the industry overwhelmingly means pixel count when they say resolution, that's how we are using the term.

Henry Detmold
Dec 05, 2013

Quite an interesting exercise...

  • There are about 126 million photoreceptors in the eye contributing to normal vision - 120 million rods and 6 million cones. So one could naively state a resolution of 126 million pixels (or rather, the angular equivalent in seconds of arc) - see the rough arithmetic sketched after this list.
  • However, the receptor density is not uniform: it is significantly greater in the central vision, so we have much better "resolution" for things we are looking straight at. Also, vertical and horizontal "resolution" are asymmetric.
  • Dynamic range is exceptional: there are four types of receptors contributing to normal vision - rods and three types of cones. The rods are exceptionally sensitive to light and provide all of our low light vision (without colour), as well as contributing through the whole range. The three types of cones need more light but respond differently to different wavelengths / bands and hence jointly provide colour.
  • Additionally, the cones (colour sensors) are highly concentrated near the "centre" of the visual field, so colour vision is better there too.
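
The rough arithmetic behind that first bullet, deliberately crude (treating the visual field as a 180° square is a simplification for illustration, not Henry's claim):

```python
import math

# Henry's numbers: ~126 million photoreceptors (120M rods + 6M cones).
RECEPTORS = 120e6 + 6e6

# Very rough: treat the visual field as a ~180 x 180 degree square and ask
# what uniform angular sampling 126M receptors would give. (Real receptor
# density is far higher in the fovea and far lower in the periphery.)
field_deg = 180
receptors_per_side = math.sqrt(RECEPTORS)                # ~11,225
arcmin_per_receptor = field_deg * 60 / receptors_per_side

print(f"{arcmin_per_receptor:.2f} arcmin per receptor on average")  # ~0.96
# Averaged out, that is close to the ~1 arcmin detail of 20/20 vision -- but
# only because foveal oversampling and peripheral undersampling cancel out.
```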
Robert Baxter
Dec 05, 2013

I didn't know we had different cones. Apparently they use three different proteins that respond to different parts of the light spectrum - roughly one for red, one for blue, and the other for yellow-green. Apparently 60% of the cones are red sensitive while only 2% of them are sensitive to blue. For some unknown reason the brain boosts the blue signal to make the response similar.

Our night vision takes almost 30 minutes to take effect. The rods (which use a fourth light-sensitive protein) are not sensitive to red light; that is why ship control panels use red indicator lights, so that night vision is not impaired.

Mike Dotson
Dec 10, 2013
Formerly of Seneca • IPVMU Certified

The other thing that goes along with Henry's description is how lenses work in general. The dead center has the best true resolution (the real kind), and resolving accuracy falls off as one moves out from the center. I used to design optical testing machines, and this was an aspect we had to measure. The human lens and the central rod/cone density work together.

Paul Covey
Dec 05, 2013

Excellent article. Good use of scientific methodology.

Vance Kozik
Dec 05, 2013

It would be interesting to know the maximum exposure setting on the cameras. A longer exposure would render clearer text in low light, but as we all know, in most security applications, a longer exposure is not acceptable due to motion blur. But there are certainly applications where 1/15 or 1/10 of a second is acceptable.

John Honovich
Dec 05, 2013
IPVM

Vance, good point / question. We always use 1/30s unless otherwise explicitly noted. As you say, a slower shutter would make those cameras 'see' the chart a lot better in low light.
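
To put rough numbers on that trade-off (the walking speed and pixels-per-foot density below are illustrative assumptions, not values from this test):

```python
def motion_blur_px(speed_ft_s: float, exposure_s: float, ppf: float) -> float:
    """Pixels of smear from subject motion during one exposure.

    speed_ft_s: subject speed across the field of view (ft/s)
    exposure_s: shutter time (s)
    ppf:        pixel density on target (pixels per foot)
    """
    return speed_ft_s * exposure_s * ppf

# A person walking ~4 ft/s imaged at 40 ppf (illustrative numbers):
for shutter in (1/30, 1/15, 1/10):
    print(f"1/{round(1/shutter)}s -> {motion_blur_px(4, shutter, 40):.1f} px of blur")
# 1/30s -> ~5 px; 1/10s -> 16 px: slower shutters gather light but smear motion.
```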

Brice Sloan
Dec 06, 2013

Great article and discussion!

Sagy Amit
Dec 08, 2013

It would be interesting to see an OCR/LPR software's ability to read the letters. One may argue that the results are subject to the human eye's ability to read the letters off the screen.

John Honovich
Dec 08, 2013
IPVM

Certainly, there's a degree of subjectivity for humans to read letters off the screen. For instance, some people will certainly look at the images displayed above and argue that they can make out a line lower than we picked. Even going to the eye doctor, such judgments are always going to be a little fuzzy as it depends on people guessing what the letters are, etc.

From what I have seen with OCR/LPR, they need more detail / greater ppf / etc. to get the same accuracy as a human looking at an image. Humans still seem to be better at guessing fuzzy, small characters than computers.

Chris Dearing
Dec 11, 2013

I would have agreed with you a year ago, but doesn't it seem to you that some of the captchas out there have gotten at least twice as hard as they used to be? There was a Facebook one today, the kind with two words, one a reference word, the other a twisted jumble of glyphs straight out of the Black Forest! It took three attempts, and I was really trying...

That says to me computers must be at least chompin' at the bit of human recognition, otherwise they wouldn't be so tough, right?

So I'll try downloading one of the captcha buster hack programs and see how it does on license plates...

John Honovich
Dec 11, 2013
IPVM

There are two very important differences across those applications:

  • Expectations of accuracy: A captcha cracker that works 1 out of every 3 times is a goldmine. An LPR system that works 1 out of every 3 times gets everyone involved fired.
  • Load: LPR systems need to work on X images per second per camera, while a captcha cracker does 1 image per site.

In sum, captcha crackers can throw more resources at the problem and accept lower accuracy than LPR.

Jonathan Lawry
Dec 09, 2013
Trecerdo, LLC

Robert, thanks for clearing up the rod/red relationship. I remember learning in Boy Scouts to use red light at night so your night vision wasn't messed up, but never asked why. I have read theories about blue light though, suggesting that the reason why we are relatively insensitive to it is the abundance of blue light due to the diffractive qualities of our atmosphere.
Joel Kriener
Dec 09, 2013
IPVMU Certified

Alright then, here is my proof that the human eye has amazing abilities. In the middle of the night, in pitch black conditions, I can walk to the bathroom and delineate the edges of the door casement and not kill myself walking through the door!

Michael Peele
Dec 09, 2013

Interesting article.

Eyes have various advantages that cameras don't. First, WDR, as others have mentioned. Mythbusters did a test with a very dark room full of junk - an obstacle course. The test subjects, after adjusting to the light, were able to easily navigate the room safely.

The second advantage is that there are two of them; using both eyes, we can see a lot better than with just one.

The third advantage is the brain, which does all sorts of processing to improve image quality where it matters.

Rick Puetter
Dec 10, 2013

The advantage of the human eye is that it has two types of sensors: rods and cones. One is for bright light and color vision, the other for dimly lit situations. This is the direction, I feel, that camera sensors should move in as well, which is why Pixon Imaging is exploring, and has patented, adaptive-binning CCD cameras. On the question of WDR performance, correctly designed adaptively binned sensors can probably do one better than the human eye by effectively taking multiple exposures at the same time with pixels of different sizes - big ones for high sensitivity, and small ones for the bright areas of the scene. Concerning the human eye, however, I am not sure if the rods and the cones can work together on this, so WDR tests of the human eye would be highly interesting.

Great article!

Luis Carmona
Dec 10, 2013
Geutebruck USA • IPVMU Certified

Is Pixon where Arecont gets their binning technology?

Rick Puetter
Dec 10, 2013

Hi Luis. No. We pioneered some similar technology for many years called the Pixon method, which is spatially adaptive, but this was never picked up by any manufacturers. That is after-the-fact binning, i.e., after the sensor is read out and every pixel has suffered read noise. What we at Pixon are advocating now is the use of multiple binning schemes on-chip, before reaching the output amplifier. That is why this needs to be done on CCDs rather than on CMOS devices. The advantage is that if you bin on-chip, say a group of 4x4 pixels, you suffer only one unit of read noise. If this is done after reading out all the pixels, you get 16 units of read noise, which gives an effective noise for the sum of the 16 pixels of 4 units of noise (noise adds in quadrature). So the CCD pre-readout binning approach is 4x more sensitive than what can be achieved on a CMOS device.
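
Rick's read-noise arithmetic, written out as a quick sketch (illustrative only, not Pixon's actual implementation):

```python
import math

# Read noise adds in quadrature: combining N independent readouts, each with
# noise sigma, gives an effective noise of sqrt(N) * sigma.
def combined_read_noise(reads: int, sigma: float = 1.0) -> float:
    return math.sqrt(reads) * sigma

# Binning a 4x4 block on-chip (CCD): charge is summed before the output
# amplifier, so the whole block costs a single readout -> 1 unit of noise.
print(combined_read_noise(reads=1))    # 1.0

# Summing the same 4x4 block after readout: 16 independent reads ->
# sqrt(16) = 4 units of noise, i.e. 4x worse, just as Rick describes.
print(combined_read_noise(reads=16))   # 4.0
```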

James Talmage
Jul 09, 2014
IPVMU Certified

Vsauce has an amazing video on this.

Also - I encourage you to try the "blind spot" exercise he describes; I found it quite surprising when my thumb disappeared in front of me.

Brad Peterson
Dec 22, 2017

Many impressive responses to this review by IPVM. Some terminology had my head spinning (captcha cracker, rod/red relationships) but nonetheless quality questions.

That said, this IPVM report is exactly why I am a member. Good job, John!
