A new peer-reviewed study says that facial recognition technology can accurately "read" a person's political affiliation just by looking at their face.
So, in conclusion: If your face is large, you’re a conservative; if it’s skinny, you’re a liberal; and facial recognition is bad—we all know that. That seems to be all you need to know.
The paper:
Our results, suggesting that stable facial features convey a substantial amount of the signal, imply that individuals have less control over their privacy. The algorithm studied here, with a prediction accuracy of r = .22, does not allow conclusively determining one’s political views, in the same way as job interviews, with a predictive accuracy of r = .20, cannot conclusively determine future job performance.
r=0.22 is a weak to moderate correlation, btw. An actual predictor will need more data than just one’s face in order to have a decent chance.
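For a rough sense of scale: assuming balanced classes and Gaussian classifier scores (my assumption, not anything the paper specifies), a point-biserial correlation of r ≈ 0.22 works out to only about 59% classification accuracy against a 50% coin-flip baseline. A quick simulation sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Balanced binary labels (0 = one affiliation, 1 = the other).
y = rng.integers(0, 2, size=n)

# Gaussian score whose mean separation d is chosen so that its
# point-biserial correlation with y is about 0.22:
#   r = d / sqrt(d^2 + 4)  =>  d = 2r / sqrt(1 - r^2)
r_target = 0.22
d = 2 * r_target / np.sqrt(1 - r_target**2)
x = d * y + rng.standard_normal(n)

print("observed r:", np.corrcoef(x, y)[0, 1])   # ~0.22

# With equal priors, the best threshold sits halfway between the means.
accuracy = np.mean((x > d / 2) == y)
print("accuracy:", accuracy)                    # ~0.59
```

Better than chance, sure, but nowhere near "accurately reads your politics".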
What’s amusing to me is that they point to job interviews having similar predictive accuracy, but never say whether that accuracy is any good. They simply let the framing imply that interviews are highly reliable.
sigh
So the headline is 100% wrong.
Only the words between “AI” and “find”. The rest of the headline is fine.
So, like only 95% is bad? That’s certainly not 100%!
I can say with 95% confidence that the headline is bad.
It almost seems like someone did a linear regression, when a logistic regression model would be more appropriate.
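For anyone curious what that distinction looks like in practice, here’s a minimal sketch on toy data (my own made-up feature, not the paper’s): a linear regression on a binary outcome naturally hands you a Pearson r to quote, while a logistic regression models class probabilities and gives you an accuracy you can actually interpret.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Toy stand-in for "facial features": one weakly informative feature.
X = rng.standard_normal((n, 1))
y = (rng.random(n) < 1 / (1 + np.exp(-0.5 * X[:, 0]))).astype(int)

# Linear regression on a 0/1 outcome: the fit runs fine, but the natural
# thing to report is a correlation, not a classification rate.
lin = LinearRegression().fit(X, y)
r = np.corrcoef(lin.predict(X), y)[0, 1]
print(f"linear model, Pearson r = {r:.2f}")

# Logistic regression models P(y = 1 | x) directly, so you get class
# probabilities and an accuracy figure out of the box.
log = LogisticRegression().fit(X, y)
print(f"logistic model, accuracy = {log.score(X, y):.2f}")
```

Both models see the same signal; the difference is that the logistic one answers the question the headline actually asks, namely how often the prediction lands on the right side.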