Subscriber Discussion

Detect A Criminal Based On Facial Features Alone?

U
Undisclosed #1
Nov 23, 2016
IPVMU Certified

Common sense says no, right?  

The Men

Two Chinese researchers from Shanghai Jiao Tong University, who wrote a paper titled "Automated Inference on Criminality using Face Images," say differently.

The authors stated, "We are the first to study automated face-induced inference on criminality free of any biases of subjective judgments of human observers. By extensive experiments and vigorous cross validations, we have demonstrated that via supervised machine learning, data-driven face classifiers are able to make reliable inference on criminality."

The Method

...they used standard ID photographs (not mugshots) of Chinese males between the ages of 18 and 55. The men did not have facial hair, facial scars, or other markings. ("We stress that the criminal face images in Sc are normal ID photos not police mugshots," wrote the researchers.) MIT Technology Review picked up on their methods: "They then used 90 percent of these images to train a convolutional neural network to recognize the difference and then tested the neural net on the remaining 10 percent of the images." Results? They said that the classifiers performed "consistently well" and produced "evidence for the validity of automated face-induced inference on criminality."

MIT Technology Review's discussion of their findings in "Emerging Technology from the arXiv," said that the pair found that "the neural network could correctly identify criminals and noncriminals with an accuracy of 89.5 percent."
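
For anyone curious what that protocol amounts to mechanically, here is a minimal sketch: a 90/10 split and a toy CNN trained on synthetic stand-in images. The paper's actual data, network, and training setup are not reproduced here, so on this random data the printed accuracy will hover around chance.

    # Minimal sketch of the described protocol: hold out 10% of the photos,
    # train a small CNN on the other 90%, and report held-out accuracy.
    # Synthetic stand-in data; not the paper's dataset or architecture.
    import torch
    import torch.nn as nn
    from torch.utils.data import TensorDataset, DataLoader, random_split

    # Stand-in for 1856 face photos (1 channel, 64x64) with binary labels.
    images = torch.randn(1856, 1, 64, 64)
    labels = torch.randint(0, 2, (1856,))

    dataset = TensorDataset(images, labels)
    n_test = len(dataset) // 10                       # the 10% test slice
    train_set, test_set = random_split(dataset, [len(dataset) - n_test, n_test])

    model = nn.Sequential(                            # toy CNN, not the paper's network
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, 2),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(5):
        for x, y in DataLoader(train_set, batch_size=32, shuffle=True):
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()

    correct = total = 0
    with torch.no_grad():
        for x, y in DataLoader(test_set, batch_size=64):
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    print(f"held-out accuracy: {correct / total:.3f}")   # ~0.5 on random data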

The MIT Conclusion

"Of course, this work needs to be set on a much stronger footing. It needs to be reproduced with different ages, sexes, ethnicities, and so on. And on much larger data sets." Also, said the report, "All this heralds a new era of anthropometry, criminal or otherwise," and there is room for more research "as machines become more capable."

Comments?

UE
Undisclosed End User #2
Nov 23, 2016

Yes you can.

U
Undisclosed #1
Nov 23, 2016
IPVMU Certified

As is standard procedure, OCR was disabled for this test.

JH
John Honovich
Nov 24, 2016
IPVM

"the neural network could correctly identify criminals and noncriminals with an accuracy of 89.5 percent."

I'd be curious as to what that means in practical usage. Does that mean that for every 10 times they use this, 1 innocent person is identified as a criminal, or?
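
To put rough numbers on what I mean (all figures below are illustrative assumptions, not numbers from the paper): suppose the classifier catches 89.5% of criminals, clears 89.5% of non-criminals, and criminals make up 1% of the people scanned.

    # Back-of-the-envelope: what an "89.5% accurate" classifier implies when the
    # base rate of criminals in the scanned population is low.
    # All numbers are illustrative assumptions, not figures from the paper.
    sensitivity = 0.895    # assumed P(flagged | criminal)
    specificity = 0.895    # assumed P(not flagged | non-criminal)
    prevalence  = 0.01     # assumed fraction of criminals among those scanned
    population  = 100_000

    criminals     = population * prevalence
    non_criminals = population - criminals

    true_pos  = criminals * sensitivity
    false_pos = non_criminals * (1 - specificity)

    precision = true_pos / (true_pos + false_pos)
    print(f"people flagged:        {true_pos + false_pos:,.0f}")
    print(f"flagged but innocent:  {false_pos:,.0f} ({1 - precision:.0%} of all flags)")

Under those assumptions roughly nine out of ten people the system flags would be innocent - which is why the headline figure alone doesn't tell me much.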

U
Undisclosed #3
Nov 24, 2016

Sam Biddle of The Intercept thinks this claim is pure snake oil - and I agree with him.

U
Undisclosed #1
Nov 24, 2016
IPVMU Certified

I would think, as I said, no way. And I think the study is probably flawed.

But I can't say Mr. Biddle has shed any light on the matter for me.

Basically, the article is mostly hand-waving like this:

The bankrupt attempt to infer moral qualities from physiology was a popular pursuit for millennia, particularly among those who wanted to justify the supremacy of one racial group over another. But phrenology, which involved studying the cranium to determine someone’s character and intelligence, was debunked around the time of the Industrial Revolution, and few outside of the pseudo-scientific fringe would still claim that the shape of your mouth or size of your eyelids might predict whether you’ll become a rapist or thief.

also,

Though long ago rejected by the scientific community, phrenology and other forms of physiognomy have reappeared throughout dark chapters of history. A 2009 article in Pacific Standard on the racial horrors of colonial Rwanda might’ve been good background material for the pair:

and then

This can’t be overstated: The authors of this paper — in 2016 — believe computers are capable of scanning images of your lips, eyes, and nose to detect future criminality. It’s enough to make phrenology seem quaint.

All of that to tell us that we should simply reject it out of hand, because we know better:

They conclude that “all four classifiers perform consistently well and produce evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic.”

But of course scientific conclusions should not be suppressed because of historical controversy.

Indeed, it's just a result that is statistically unexpected. No theory as to why is required.

He can attack the study based on its methods, but one should not reject the conclusion a priori because it conflicts with current scientific thinking.

U
Undisclosed #3
Nov 24, 2016

And how would you expect Mr. Biddle to be able to prove a negative?

What position - besides pointing to the historical failure of phrenology when scrutinized via scientific methodology - can be taken?

You state your belief that the study is probably flawed - and he agrees, in a comment at the end of his piece (which you didn't quote) where he discounts their claim that (I'm paraphrasing) they have removed any kind of testing bias because algorithms can't be biased the way human testers can.

"This misses the fact that no computer or software is created in a vacuum. Software is designed by people, and people who set out to infer criminality from facial features are not free from inherent bias."

I will refrain from interacting with your last 3 statements as they are correct and not being debated (nor do they support your argument about Mr. Biddle's article not 'proving' anything).

Phrenology is junk science and this 'study' is bullshit.

U
Undisclosed #1
Nov 24, 2016
IPVMU Certified

Let's say I asked you to obtain 2000 head shots, half of them criminals, and then we separated them randomly into two groups of 200 and 1800.

First we would take the 1800 and import them into our empty facebase. You would also indicate for each of the 1800 in our database whether the face belonged to a criminal or not.

Then you would give me the other 200 to import without any indication of whether they were criminals or not.

And then I would analyze them and come up with a prediction for each of the 200 unknowns.

Finally, let's say I predicted criminality correctly for 180 out of the 200 faces.

This is essentially the claim made by the paper.

My question is, do you believe that they are lying about this, or do you believe that the result is to be expected, or?
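
For scale, 180 out of 200 is nowhere near what coin-flip guessing would produce. A quick sketch of that binomial arithmetic, using my hypothetical numbers above rather than anything from the paper:

    # Probability of getting at least 180 of 200 predictions right by pure
    # coin-flip guessing (hypothetical numbers from the example above).
    from math import comb

    n, k, p = 200, 180, 0.5
    p_at_least_k = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
    print(f"P(>= {k}/{n} correct by chance) = {p_at_least_k:.3g}")   # roughly 1e-33

So if the claimed result is real, it is not a fluke of guessing; the live question is whether the data or the method is biased.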

U
Undisclosed #3
Nov 24, 2016

Here is the gist of the data that they are using to differentiate between 'criminals' and 'non-criminals' (from page 6 of their 11-page submitted study):

"We apply the Feature Generating Machine (FGM) of Tan et al. [29] to the task; it identifies the red-marked regions in Figure 8 (a) as the most critical parts for the separation of criminals and non-criminals. Guided by FGM, we discover that the following three structural measurements in the critical areas around eye corners, mouth and philtrum that have significantly different distributions for the two populations in Sc and Sn: the curvature of upper lip denoted by ρ; the distance between two eye inner corners denoted by d; and the angle enclosed by rays from the nose tip to the two corners of the mouth denoted by θ. The three discriminating structural features ρ, d and θ are shown in Figure 8 (b). We stress that the upper lip curvature ρ is measured on standard ID photos taken with the person in neutral facial expression"

.....

"the angle θ from nose tip to two mouth corners is on average 19.6% smaller for criminals than for non-criminals and has a larger variance. Also, the upper lip curvature ρ is on average 23.4% larger for criminals than for noncriminals. On the other hand, the distance d between two eye inner corners for criminals is slightly narrower (5.6%) than for non-criminals."

======================================

This is what they claim allows their machine learning to differentiate between criminals (Sc) and non-criminals (Sn):

Average distances and angles among the nose, mouth and eyes of test subjects, which were 'normalized' via histogram.
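
For readers unfamiliar with these measurements, here is a minimal sketch of how two of them, d and θ, can be computed from 2D facial landmark coordinates. The coordinates below are invented for illustration, and the upper-lip curvature ρ, which requires a fitted lip contour, is omitted; the paper does not publish its extraction code.

    # Illustration of two of the structural measurements quoted above:
    #   d     - distance between the two inner eye corners
    #   theta - angle at the nose tip subtended by the two mouth corners
    # Landmark coordinates are made up for illustration.
    import math

    left_eye_inner  = (120.0, 140.0)   # hypothetical pixel coordinates (x, y)
    right_eye_inner = (176.0, 140.0)
    nose_tip        = (148.0, 190.0)
    mouth_left      = (126.0, 225.0)
    mouth_right     = (170.0, 225.0)

    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def angle_at(vertex, p1, p2):
        """Angle in degrees at `vertex` formed by rays toward p1 and p2."""
        v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
        v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
        cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

    d     = distance(left_eye_inner, right_eye_inner)
    theta = angle_at(nose_tip, mouth_left, mouth_right)
    print(f"d = {d:.1f} px, theta = {theta:.1f} degrees")

Their claim, per the quote above, is that the distributions of these few measurements differ between the two photo sets.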

Note that they claim their 'study' is verified by their own data (based on the subject pictures they used), yet they don't take the next obvious step to 'prove' their machine learning method of Automated Inference of Criminality - they don't test their algorithms on face pictures outside their own test subjects.

If what they claim is true, they should then be able to use the same methodology to infer criminality at a 90% accuracy rate on any face picture they run the algorithm on.

Question: Why would they not do that before releasing their 'study' results?

Answer (imo): Their study is flawed and won't produce the same results when this algorithm is empirically tested by others (hopefully across a larger pool of test subject face pictures).
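
In concrete terms, the missing step is an external validation pass: freeze the trained classifier and score it on photos collected somewhere else entirely, with no re-fitting. A minimal sketch of that protocol, using synthetic stand-in feature vectors and a plain logistic regression standing in for their classifiers:

    # Sketch of an external validation pass: fit on one pool, then score the
    # frozen classifier on a second, independently collected pool.
    # All data here is synthetic stand-in; the point is the protocol.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, confusion_matrix

    rng = np.random.default_rng(0)

    # stand-in for the original 1856-photo pool (feature vectors + labels)
    X_internal = rng.normal(size=(1856, 3))      # e.g. the (rho, d, theta) features
    y_internal = rng.integers(0, 2, size=1856)

    # stand-in for a pool collected elsewhere, never seen during training
    X_external = rng.normal(size=(500, 3))
    y_external = rng.integers(0, 2, size=500)

    clf = LogisticRegression().fit(X_internal, y_internal)

    # no re-fitting on the external pool; if the claimed ~90% only holds
    # in-sample, this is where it would fall apart
    pred = clf.predict(X_external)
    print("external accuracy:", accuracy_score(y_external, pred))
    print(confusion_matrix(y_external, pred))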

U
Undisclosed #1
Nov 24, 2016
IPVMU Certified

What's your opinion of this study by Cornell:

U
Undisclosed #3
Nov 24, 2016

My opinion is that it is not complete. There are other ways to try and remove bias besides 'removing all of the really criminal-looking pictures from the mix' (which they claim they did, and that this action somehow eliminates any claims of testing bias).

I'd be interested in the results of a complementary test using the exact same methodology - except using pictures of all non-criminals.

Then run another complementary test using pictures of just criminals.

In both of these tests, the subjects should be told the same thing as they were from the original study testing: that some are pics of criminals and some aren't.

Do you surmise that either of these two complementary tests would show that test subjects are capable of choosing these percentages correctly? i.e. that subjects would choose non-criminal with a high degree of accuracy when they were shown all non-criminal pics? or that they would choose criminal with a similarly high degree of accuracy when shown all-criminal pictures?

If so, then this data would support the inference they claim their data indicates.

If not, then their data is skewed (at least in part) by bias that their methodology does not account for.

U
Undisclosed #1
Nov 26, 2016
IPVMU Certified

...they don't test their algorithms on face pictures outside of their own test subjects...

The people in the pictures weren't their subjects per se; they were just pictures collected for the test.

In order to conduct our experiments and draw conclusions with strict control of variables, we collected 1856 ID photos that satisfy the following criteria: Chinese, male, between ages of 18 and 55, no facial hair, no facial scars or other markings, and denote this data set by S. Set S is divided into two subsets Sn and Sc for non-criminals and criminals...

Sure, they can do it again and find another sample set, but how would that satisfy you, since whatever they choose can always be suspected of unintended bias or outright deception?

What will happen now, if anyone cares to devote the time and energy, is that some other independent facility will try to replicate their findings with a similarly constructed setup.

Remember, there is no claimed secret sauce here, unlike various commercial undertakings which you have no way of verifying. Neural nets are well understood, and they are explicit about how they went about the experiment.

Any detail not in the paper would likely be provided to another team trying to replicate. Because that's how it works in academia.

Still, I'm skeptical, but this is the right way to go about such a claim.

Ari Erenthal
Nov 24, 2016

So basically phrenology?

UM
Undisclosed Manufacturer #4
Nov 24, 2016
U
Undisclosed #1
Nov 24, 2016
IPVMU Certified

Good memory!

I was a bit more pessimistic about that one, mainly because of non-scholarly commercial use and hinky founders...
