First off I would like to say thanks to Ethan for a useful test!
I also want to apologize for commenting only when I see what I think is a problem. Having said that, I feel compelled to comment on something I simply can't understand about the way this test, and many of the recent ones I've seen, is done.
To wit: Why is there a live model in what are simply straight-on, motionless tests? No offense to the uncredited assistant who obviously toils for hours in good spirits, but is it necessary for these types of tests? Of course I understand the difficulty in testing video with motion or 3D aspects, like the one on camera height, but these 2D frame tests have no reason to take this approach. If the model is adding his visage for comparative purposes, we might be better served by a high-def gamut of several faces printed at 4800 dpi and attached to the top of the Snellen chart.
Furthermore, it would seem to be counter-productive: despite best efforts, the model is only human, and therefore can only hold the eye chart at more or less the same angle, more or less the same height, and more or less the same orientation. And that's leaving aside differences in the face presented. Each setup is similar to the others, but taken together the total range of variation is considerable.
Why does it matter? Aren't they close enough? No, and Margaret's comment made me confront it once again. The fact is that we inevitably end up scrutinizing and ranking the images by the measure of a pimple on a flea's derrière! And at that minute level, the differences in angle and level can have a noticeable impact.
What differences? Like the amount of incident light on the target, for one. Also, pixel arrays, whether sensors or monitors, don't like anything but straight lines. Not just any straight lines, either: to a typical array rendering device, any geometric primitive other than a straight line running exactly top to bottom or left to right cannot be represented perfectly by square pixels. Hence the whole sub-science/art of anti-aliasing.
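To make that concrete, here's a small sketch of my own (nothing to do with the test itself, just an illustration): it rasterizes a slightly tilted edge twice, shifted by a fraction of a pixel, the way a hand-held chart shifts between takes. With hard per-pixel thresholding (aliased), whole pixels flip fully on or off; with supersampled coverage (anti-aliased), the same shift only nudges gray levels.

```python
def rasterize(offset, width=12, height=12, slope=0.35, samples=1):
    """Render the region below the line y = slope*x + offset.

    samples=1  -> hard threshold at each pixel centre (aliased)
    samples=4  -> 4x4 supersampling per pixel (anti-aliased coverage)
    Returns a height x width grid of coverage values in [0, 1].
    """
    grid = []
    for py in range(height):
        row = []
        for px in range(width):
            hits = 0
            for sy in range(samples):
                for sx in range(samples):
                    # Sub-sample positions inside the pixel square.
                    x = px + (sx + 0.5) / samples
                    y = py + (sy + 0.5) / samples
                    if y > slope * x + offset:
                        hits += 1
            row.append(hits / (samples * samples))
        grid.append(row)
    return grid

# Same edge, shifted 0.3 px vertically, as if the chart moved slightly.
a, b = rasterize(3.0, samples=1), rasterize(3.3, samples=1)
flipped = sum(1 for r1, r2 in zip(a, b)
              for v1, v2 in zip(r1, r2) if v1 != v2)

aa, bb = rasterize(3.0, samples=4), rasterize(3.3, samples=4)
max_delta = max(abs(v1 - v2) for r1, r2 in zip(aa, bb)
                for v1, v2 in zip(r1, r2))

print(f"aliased: {flipped} pixels flip fully on/off")
print(f"anti-aliased: largest per-pixel change is {max_delta:.2f}")
```

The point isn't the specific numbers; it's that on an aliased edge a sub-pixel change in angle or height remaps whole pixels, which is exactly the scale at which a barely legible Snellen letter lives or dies.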
Take a good look at Margaret's example (or the Garage next to the open door): see where, and on which line of the Snellen chart, the letters basically go from legible to illegible, where a K might start to look like a Z or a straight line. Those differences all come down to a handful of pixels that could be significantly different if the level or angle changes slightly.
Not all differences can be eliminated, of course, but why not eliminate 99% of the variability by just using a calibrated mechanical stand, like the one in the low-light shoot-out? What am I missing?