Fragility of Facial Recognition

By: John Honovich, Published on Dec 22, 2009
The December 2009 uproar over 'racist' facial recognition from HP demonstrates an important risk for real world video analytic deployments. For background, watch the viral video with almost 3 million views below:
It's fairly clear that neither facial recognition nor HP is purposefully racist. As HP [link no longer available] and most commentators note, this is almost certainly a lighting issue.
Here's the key practical problem: if users can see the face, they expect computers to see the face. This is almost a law in real world video analytics performance: anything a user can detect, they expect a computer to be able to detect. This may be unfair and is certainly technically incorrect, but it's basically a sociological 'fact.'
Video analytics are much more fragile to real world conditions (lighting, weather, etc.) than the human eye. That's why video analytics can demo very well in the lab and the office yet struggle so frequently in the field.
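To see why dim lighting trips up detection while a human still sees the face, consider a simplified sketch. This is a hypothetical illustration, not HP's actual algorithm: classic face detectors score regions by contrast between adjacent patches (e.g. brighter cheeks next to a darker eye region). When illumination drops, every pixel value scales down together, so the contrast that the detector thresholds on shrinks below a fixed cutoff, even though the relative pattern a person perceives is unchanged. All names and numbers below are invented for illustration.

```python
# Hypothetical sketch of a contrast-based detection feature. Real
# detectors are more sophisticated (and often normalize for lighting),
# but this shows the basic fragility: a fixed threshold tuned for
# office lighting fails when the whole scene darkens.

def contrast_response(cheek_brightness, eye_brightness):
    """Difference in mean intensity between two adjacent face regions."""
    return cheek_brightness - eye_brightness

THRESHOLD = 30  # hypothetical fixed detection threshold

# Well-lit scene: strong contrast between the two regions.
well_lit = contrast_response(cheek_brightness=180, eye_brightness=100)

# Same face at 25% illumination: every pixel scales down,
# and the feature response shrinks with it.
dim = contrast_response(cheek_brightness=180 * 0.25, eye_brightness=100 * 0.25)

print(well_lit, well_lit >= THRESHOLD)  # 80 True  -> detected
print(dim, dim >= THRESHOLD)            # 20.0 False -> missed
```

A human viewer compensates for the darker scene effortlessly; the fixed-threshold feature does not, which is one simplified way to understand how a webcam can track one face and miss another under the same uneven lighting.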
If you want proof of this tragicomedy, watch the video below of Consumer Reports showing how to make facial recognition work with black people. They fail at first and visibly struggle to get it working.
Most people do not care about the technical limitations, nor do they want to optimize their lighting or scenes to accomplish such recognition (contrary to the demanding requirements we examined for facial recognition in October 2009).
While facial recognition in professional video surveillance is certainly far more sophisticated than HP's consumer face tracking, as SSN points out, this is yet another black eye for video analytics.
Video analytics tend to be fragile to a wide variety of real world conditions. The free flow of information on the Internet only makes these problems more visible and broadly understood.