Subscriber Discussion

Why Use A Live Model For Tests?

Rukmini Wilson
Mar 18, 2014

First off I would like to say thanks to Ethan for a useful test!

I also want to apologize for commenting only when I see what I think is a problem. Having said that, I feel compelled to comment on something that I simply can't understand about the way this test, and many of the recent ones I've seen, are done.

To wit: Why is there a live model in these tests that are simply straight-on, motionless tests? No offense to the uncredited assistant who obviously toils for hours in good spirits, but is it necessary for these types of tests? Of course I understand the difficulty in testing video with motion or 3d aspects, like the one on camera height, but these 2d frame tests have no reason to take this approach. If the model is adding his visage for comparative purposes, we might be better served by a high-def gamut of several faces printed at 4800 dpi and attached to the top of the Snell.

Furthermore it would seem to be counter-productive, since despite best efforts, the model is only human, and therefore can only hold the eye chart at more or less the same angle, and more or less the same height, and more or less the same orientation. And that's leaving aside differences in the face presented. Taken together, although the shots are all similar, across their total range they are quite variable.

Why does it matter, aren't they close enough? No, and Margaret's comment made me confront it once again. The fact is that inevitably we end up scrutinizing and ranking the images by the measure of a pimple on a flea's derrière! And at that minute level, the differences in angle and level can have a noticeable impact.

What differences? Like the amount of incident light on the target, for one. Also, pixel arrays, whether sensors or monitors, don't like anything but straight lines. Not just any straight lines either: to a typical array rendering device, any geometric primitive other than a straight line running top to bottom or left to right cannot be represented perfectly by square pixels. Hence the whole sub-science/art of anti-aliasing.
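
To make that concrete, here's a quick toy sketch in Python (the line length and tilt are numbers I made up, nothing measured from the test): a perfectly vertical line stays in a single pixel column, but give it a one-degree tilt and it starts hopping columns partway down, which is the sawtooth I'm talking about.

```python
import numpy as np

height = 100                    # rows of pixels the line spans
angle_deg = 1.0                 # a barely perceptible tilt

rows = np.arange(height)
# column each row of the line lands in, for a level chart vs. a tilted one
straight_cols = np.zeros(height, dtype=int)
tilted_cols = np.round(rows * np.tan(np.radians(angle_deg))).astype(int)

print("columns touched, level chart:", np.unique(straight_cols))
print("columns touched, 1-degree tilt:", np.unique(tilted_cols))
print("rows where the tilted line jumps to the next column:",
      rows[1:][np.diff(tilted_cols) != 0])
```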

Take a good look at Margaret's example (or the garage next to the open door), and see where and on what line of the Snell chart the letters basically go from legible to illegible. Where a K might start to look like a Z or a straight line. These differences are all because of a handful of pixels that could be significantly different if the level or angle is changed slightly.

Not all differences can be eliminated, of course, but why not eliminate 99% of the variability by just using a calibrated mechanical stand, like the one in the low-light shoot-out? What am I missing?

John Honovich
Mar 18, 2014
IPVM

For measuring how well a camera captures a face, a real face is preferable to a 2d picture of one.

All of these images in the daylight and WDR tests are taken at the same time for each camera in a batch, so there would not be differences in how the chart is being held.

"These differences are all because of a handful of pixels that could be significantly different if the level or angle is changed slightly."

If you have evidence supporting this theory, please share.

Matt Ion
Mar 18, 2014

If all installations had even lighting, your idea of simply using a photo of a face might work... however, when it comes to identifying faces in public areas, there's often harsh lighting from a variety of angles to deal with. When you're comparing how cameras deal with those differences in lighting, you have to have a 3D face. I suppose one could use a mannequin, but then someone has to go up and move it around when you want to change the scene... it's easier and far more efficient to simply have the flunky stand in front of the camera, than have him stand by to move a dummy around.

Rukmini Wilson
Mar 18, 2014

I agree the human head/mannequin part of the discussion is not the most straightforward, and not the main thrust of my observation, so let's drop that until later. Which leaves us with the Snell...

Why not just have the model stand behind the stand? Is it that much more work? Certainly it would make the tests appear less random. But would it make any real difference? Not usually, no. But what I am saying is this: when it's close and we are trying to decide which camera is 'better', do we or don't we try to see how far down we can read the letters? Maybe it's not the only criterion, but it's arguably the most important one. Otherwise, what's the eye chart there for? If I'm doing this differently than you are, let me know!

So when you are trying to decide, like Margaret was, whether the Pelco or the Samsung is 'better', you end up looking at the sixth or seventh line of the eye chart to decide which letters you can discern or not. You could tell me what the actual ppl (pixels per letter) is, but I bet on line seven it's ten or less? If so, then when a straight up-and-down or left-to-right line segment (which letters are full of) is picked up by a sensor's adjacent pixels in the same row or column, then voilà, you get a perfectly straight line on the screen. But as anybody that's done a little Photoshop in his day (Marty, Matt) can tell you, a line that just angles slightly down looks like a lightning bolt or a sawtooth saw blade, unless of course it's anti-aliased, but even then you need some spare pixels to work with. So it's just my intuition, but I guess that at that level of scrutiny, the levelness (among other things) is going to make the difference between legibility and illegibility.

So that's my intuition... What's yours?

I'll throw a Snell in front of a camera and then rotate it a nudge, and see at what Snell line number it makes a difference in how the letters are represented.
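
In the meantime, here's a rough simulation of that same 'nudge' idea (the block-letter shape, the pixel sizes, and the two-degree tilt are all my own toy assumptions, not IPVM's chart or method): render a letter at a few pixel heights, re-sample it with a slight rotation, and count how many pixels change value noticeably. The smaller the letter, the more of its legibility lives in those edge pixels.

```python
import numpy as np

def render_E(px, deg=0.0, supersample=16):
    """Block-capital 'E' on a px x px grid, tilted by deg degrees.
    Rendered at high resolution first, then box-averaged down, so each
    output pixel holds the fraction of it covered by ink (0..1)."""
    n = px * supersample
    yy, xx = np.mgrid[0:n, 0:n] / n             # sample coordinates in [0, 1)
    # inverse rotation about the centre: where does each sample come from?
    t = np.radians(deg)
    u = np.cos(t) * (xx - 0.5) + np.sin(t) * (yy - 0.5) + 0.5
    v = -np.sin(t) * (xx - 0.5) + np.cos(t) * (yy - 0.5) + 0.5
    bar = 0.2                                   # stroke width, fraction of height
    ink = ((u < bar) |                          # vertical spine
           (v < bar) |                          # top bar
           (np.abs(v - 0.5) < bar / 2) |        # middle bar
           (v > 1 - bar))                       # bottom bar
    ink &= (u >= 0) & (u <= 1) & (v >= 0) & (v <= 1)
    # box-average the supersampled image down to px x px
    return ink.reshape(px, supersample, px, supersample).mean(axis=(1, 3))

for px in (40, 20, 10):                         # letter height in pixels
    level = render_E(px, 0.0)
    tilted = render_E(px, 2.0)                  # the two-degree "nudge"
    changed = np.count_nonzero(np.abs(level - tilted) > 0.1)
    print(f"{px:2d} px letter: {changed}/{px * px} pixels shift by more than 10%")
```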

Richard Lavin
Mar 18, 2014
Salas O'Brien • IPVMU Certified

John, correct me if I'm wrong but my understanding of your typical test procedure is that you have all of your test cameras recording simultaneously. When you show 5, 6 or however many images you are comparing, side by side, those images were all captured at exactly the same time (or at least as close to the same time as is possible without frame-synchronizing the cameras). If that is the case, then it is not possible that the model has rotated the chart between the images. [Edit - I see that you state exactly that in your comment upthread. I would have saved myself some typing if I had read that comment before posting my own.]

Now, there will be slight differences in the angle of the view due to the physical distance between the cameras on your test rig. If you want to have them all recording simultaneously, I don't see any way to avoid that. My bachelor's degree is in physics, and I have to admit that I don't know how to get 5 cameras into exactly the same point in space at exactly the same point in time.

My point to Mr. Wilson being that there are always going to be tradeoffs. If you want all of the test cameras recording simultaneously, then there will be slight differences in the viewing angles. If you want exactly the same viewing angle, then you can't record all cameras simultaneously. In doing any comparisons involving natural sunlight, simultaneous recording would have to take precedence over having the same viewing angle. Otherwise, there is no way to ensure that all cameras are dealing with the same lighting conditions.

Undisclosed Integrator #1
Mar 25, 2014

Well, to Mr. Wilson's point, you do not need to have two objects occupy the same space at the same time if you have a rig at both ends to eliminate more of the test variance. You can clearly see slight shifts in the chart between images so we know the chosen images displayed are not taken at the same time even if the video was recorded at the same time. Mr. Lavin - you and your bachelor's degree in physics will also have to admit that even the slightest variances in testing do affect results - especially when scrutinizing to a very detailed level.

John Honovich
Mar 25, 2014
IPVM

"You can clearly see slight shifts in the chart between images so we know the chosen images displayed are not taken at the same time even if the video was recorded at the same time."

No, the 'slight shifts' are due to the cameras being placed next to each other in a row and therefore having ever so slightly different angles of incidence.

I still do not believe this is causing any material difference in image quality, nor is there another way to test that would be fairer.

Richard Lavin
Mar 25, 2014
Salas O'Brien • IPVMU Certified

"have to admit that even the slightest variances in testing do affect results"

I don't dispute that at all. My point was that there are always going to be variances. It is not possible to minimize all of the variables. You have to choose which parameters to focus on, optimize those, do the best you can with the other parameters, and accept variances caused by those parameters that you are not able to optimize. There is no way around it. In the test being referenced, WDR performance was being tested and natural sunlight was part of the test. In that case, it is most important to minimize variances in lighting conditions. John does that by recording all cameras simultaneously, viewing the same test subject. With that being done, it must be accepted that there will be slight differences in the angle of incidence for each camera on the test rig.

Carl Lindgren
Mar 25, 2014

IPVM could theoretically do one test, scramble the order of the cameras, then do another test - all while the model stands perfectly still.

Carl Lindgren
Mar 25, 2014

"Why Use A Live Model For Tests?"

Because dead models have trouble holding the chart! <Drum Roll>
