Friday, October 21, 2016

"Study Finds Racial Bias In Facial Recognition Software Used By Police"

Earlier this week, the Center on Privacy & Technology at Georgetown Law released a study examining the unregulated facial recognition technology used by police departments across the country. While the report lists 11 “key findings,” one in particular is getting the most attention: “Police face recognition will disproportionately affect African Americans.” While this doesn’t necessarily suggest a conscious racial bias on the part of the software’s developers, a number of factors may lead facial recognition programs to be less accurate at identifying black men and women.

There have long been studies showing that “cross-racial identification” is less reliable than identification within one’s own ethnic group. Facial recognition systems are still designed by humans, after all, so the “training” of those systems can be flawed. Much of the time, the final match from the possible candidates a facial recognition system turns up is left to a human operator, so even if the software works as intended, there is plenty of room for human bias. Two facial recognition companies interviewed for the study admitted that their software had not been tested for racial bias.