As an MIT grad student, Joy Buolamwini discovered that face recognition technology failed to register faces that weren’t, in her words, “pale males,” leading her to research this gender and racial gap in artificial intelligence technologies. Her resulting 2018 thesis project, titled “Gender Shades,” found that facial recognition technologies from Microsoft, IBM, and China-based Megvii (aka FACE++) struggled to identify dark-skinned women as accurately as light-skinned men. Although those tools have improved since then, they still carry inherent technological flaws, and their deployment demands careful consideration.
But first, how did this bias come to be? The flaws in the code are embedded in the very roots of human society. Machines learn facial recognition from data sets, and those data sets are assembled by a largely homogeneous group of programmers, most of them white men, who select from existing information in a world riddled with inequalities and biases. (Just think of the top names in tech for the last several decades: Bill Gates, Steve Jobs, Mark Zuckerberg, Elon Musk, Jeff Bezos. See a trend?) Given that historical data can be biased, one can understand how the models trained on it perpetuate inequality and discrimination.
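To make that mechanism concrete, here is a small, hypothetical sketch in Python. The data, the two groups, and the 95/5 split are all invented for illustration; this is not code from any of the systems named above. It shows how a training set dominated by one group yields a model that works well for that group and barely better than chance for the other.

```python
# Hypothetical toy example: a classifier trained on a skewed data set
# performs far worse on the group it rarely saw during training.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def sample(n, group):
    """Synthetic stand-in for face data: the informative signal lives in a
    different feature for each group, so a model has to see both groups in
    training in order to serve both groups at test time."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int) if group == "A" else (X[:, 1] > 0).astype(int)
    return X, y

# Training set mirrors an unbalanced benchmark: 95% group A, 5% group B.
X_a, y_a = sample(1900, "A")
X_b, y_b = sample(100, "B")
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Equal-sized test sets expose the gap that a single aggregate score would hide.
for group in ("A", "B"):
    X_test, y_test = sample(1000, group)
    print(f"group {group} accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

Rebalancing the training split narrows the gap, which is the point: the composition of the data, not the arithmetic, decides whom the model serves.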
In her TED Talk, “The Era of Blind Faith in Big Data Must End,” data scientist and author Cathy O’Neil stated: “Algorithms don’t make things fair if you just blithely, blindly apply algorithms. They don’t make things fair. They repeat our past practices, our patterns. They automate the status quo. That would be great if we had a perfect world, but we don’t.”
We must remember that machines make mathematical judgments, not ethical ones, and even then, the math is often faulty. The built-in challenges of machine learning mean that facial recognition is still inaccurate today, often producing false positives (wrong matches) and false negatives (missed matches) when identifying people. Other factors, such as image quality, facial expression, lighting, and pose, can further reduce accuracy. Imagine trying to identify someone wearing a hat and moving quickly across a grainy CCTV screen.
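To show what those error types look like when they are broken out by group, here is a hypothetical tally; the counts are invented purely to illustrate the arithmetic and are not results from Gender Shades or any real system. The takeaway is that overall accuracy can look respectable while one group’s false positive and false negative rates are many times higher than another’s.

```python
# Hypothetical audit of face-matching results: overall accuracy looks fine,
# but per-group false positive / false negative rates tell a different story.
from collections import defaultdict

# Each record: (group, was this person truly a match?, did the system say "match"?)
# The counts below are invented purely for illustration.
results = (
      [("light-skinned men", True, True)] * 470
    + [("light-skinned men", True, False)] * 5
    + [("light-skinned men", False, False)] * 520
    + [("light-skinned men", False, True)] * 5
    + [("dark-skinned women", True, True)] * 320
    + [("dark-skinned women", True, False)] * 110
    + [("dark-skinned women", False, False)] * 480
    + [("dark-skinned women", False, True)] * 90
)

counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
for group, actual, predicted in results:
    key = ("t" if predicted == actual else "f") + ("p" if predicted else "n")
    counts[group][key] += 1

for group, c in counts.items():
    fpr = c["fp"] / (c["fp"] + c["tn"])  # wrong matches, as a share of true non-matches
    fnr = c["fn"] / (c["fn"] + c["tp"])  # missed matches, as a share of true matches
    print(f"{group}: false positive rate {fpr:.1%}, false negative rate {fnr:.1%}")
```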
Harrowingly, Robert Julian-Borchak Williams, an African American Michigan resident, was misidentified as a theft suspect by police facial recognition software and falsely arrested in 2020. The computer (and the police officers) could not accurately differentiate among Black faces and pointed a proverbial finger at an innocent man.
Another thing to remember about artificial intelligence and facial recognition tools is that the main drivers of the technology in the West are corporations. Most AI technologies are being developed by companies like Amazon, Microsoft, and IBM, whose primary goal is their own financial benefit, rarely the social good. For lack of other sources, governments often buy these flawed and biased technologies and deploy them in the public sphere without the informed consent of citizens.
In 2017, South Wales Police in the UK began using automated facial recognition cameras to scan streets and make arrests based on matches against a watchlist of suspects. In the process, the force was gathering biometric data from passersby without their knowledge. This breach of privacy led to a landmark Court of Appeal ruling in 2020 that found the force’s use of the technology unlawful for lack of proper legal oversight.
In her TED Talk, “How I’m Fighting Bias in Algorithms,” Buolamwini points out that half of all adults in the US have their faces in facial recognition networks: “Police departments can currently look at these networks unregulated, using algorithms that have not been audited for accuracy. Yet, we know facial recognition is not fail-proof, and labeling faces consistently remains a challenge.”
However, governments are not the only ones using flawed technologies. Many institutions rely on artificial intelligence to make decisions rife with hidden biases. College admissions, job applications, employee performance reviews, mortgage applications, prison sentencing, recidivism assessments, and access to healthcare are only some of the areas where this discrimination plays out. If left unchecked, flawed AI will continue to uphold systems of exclusion and oppression.
What can be done? In the same TED Talk, Buolamwini presents three considerations: “Who codes matters, how we code matters and why we code matters.” It is important to have representative groups of programmers writing inclusive code. Having diverse teams means that one person’s biases and blind spots can be uncovered by another. Different voices and perspectives make for richer, more capable AI that reflects real-world variation more accurately.
Ethical algorithm auditing is also a burgeoning field that can make a real difference in managing the “black box” problem of artificial intelligence: programmers themselves acknowledge that they do not fully understand the behind-the-scenes processing an AI program performs to turn inputs into outputs. Because machines cannot be expected to make ethical judgments, their outputs may be faulty or exclusionary. Part of the solution lies in programmers who are mindful of ethics and who audit data sets and results for bias (a simplified sketch of such a check follows below). Prioritizing diversity, equality, and inclusion at the foundations of artificial intelligence is vital, and examining the past and present will help us write a better future.
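To close with a concrete, if deliberately simplified, picture of what such an audit can look like in code: the check below compares a model’s favorable-outcome rate across demographic groups and flags any group that falls too far behind the best-served one. The decision log, group names, and 80-percent threshold are all invented for illustration; real audits examine far more, but the principle is the same.

```python
# Minimal bias-audit sketch (hypothetical): compare a model's favorable-outcome
# rate across demographic groups and flag disparities beyond a threshold.
from collections import defaultdict

def audit_outcomes(decisions, min_ratio=0.8):
    """decisions: iterable of (group, favorable_outcome: bool) pairs.
    Flags any group whose favorable-outcome rate falls below `min_ratio`
    times the best-served group's rate (in the spirit of the 'four-fifths
    rule' used in employment-selection analysis)."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome
    rates = {g: favorable[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: {"rate": round(r, 3), "flagged": r < min_ratio * best}
            for g, r in rates.items()}

# Invented decision log (e.g., loan approvals), purely for illustration.
log = ([("group A", True)] * 720 + [("group A", False)] * 280
       + [("group B", True)] * 450 + [("group B", False)] * 550)
print(audit_outcomes(log))
# -> group A approved at a rate of 0.72 and not flagged; group B at 0.45 and flagged.
```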
Rose Ho | Assistant Editor