By extending the abilities of humankind beyond its natural limits, AI technology has made a powerful and life-altering entrance into our daily lives. Algorithms and machines are constantly being developed to predict and mimic human thought and action, turning what was once science fiction into something closer to a documentary; as Ray Kurzweil writes in The Singularity Is Near:
“The combination of human-level intelligence with a computer’s inherent superiority in speed, accuracy, and memory-sharing ability will be formidable.”
From spam email filtering to predicting our next favorite movie on a streaming service to the iPhone's Face ID, AI is everywhere, and we have already adapted parts of our lives to it. In particular, the development and use of facial recognition technology have sparked heated debate over its ethical implications and consequences.
For facial recognition technology to function accurately, it requires a database filled with billions of images of people scraped from the news, TV, and social media. Did the engineers and researchers who built these databases ask a couple billion people for their consent? No, and many people were unhappy with the way their pictures were being used. Faces are categorized based on skin color, hair color, the size of facial features, and so on, opening the door to discrimination, inaccuracy, and bias at scale. With technology that can identify people far faster and more broadly than any human, targeting becomes possible on an unprecedented scale.
Talk of deploying facial recognition technology in areas with pre-existing human bias has stirred a debate about bias and fairness. Will AI technology be less biased than humans, or will it exacerbate the problem? Alongside discussions of whether any given use of AI is ethical, debates over what "ethical AI" even means are equally prevalent.
AI(de) and Seek
For decades, the Chinese government has targeted Uighur Muslims, and more recently it has been persecuting, detaining, and forcibly sterilizing them. With a database of billions of faces, AI is capable of identifying specific ethnic groups. Huawei, a Chinese telecommunications and electronics company banned in the US, allegedly tested AI software in surveillance cameras that could recognize Uighur Muslims and report them to the police for detainment. Initially, Huawei partnered with the facial recognition startup Megvii to identify age, sex, and ethnicity within a crowd. Now the system has reportedly been programmed to trigger a "Uighur alarm" to authorities whenever a Uighur is identified.
Per Chinese officials, such technology is implemented to keep people safe; through the lens of any social rights activist, however, it is clearly being abused for social "control" and moderation. Such misuse of an evolving, groundbreaking technology could ultimately cause greater harm than the good it was meant to do, including its stated aim of eliminating bias.
Let’s FACE the Issue
Scientists should also reckon with the morally questionable roots of much of the research in this area, such as experiments that amassed massive data sets of people's faces without their permission, many of which have been used to fine-tune commercial or military surveillance algorithms. Bias can enter through the data selected to train the AI; whether the system behaves "well" or "badly" depends on how that data is introduced to it. If a majority of the data labels an ethnic or minority group as likely targets, bias is baked in. Field experiments and oversampled statistics, for example about who gets hired or which groups are arrested for felonies at higher rates, are all ways an AI can acquire a bias.
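To make that mechanism concrete, here is a minimal sketch (in Python, on entirely synthetic data) of how a skewed training set alone produces a skewed model. Nothing here is any vendor's real pipeline: the "embeddings," group names, and 80/20 split are invented assumptions, chosen only to mirror the kind of imbalance described below.

```python
# A minimal sketch, assuming synthetic data: train a toy classifier on a set
# where one demographic group is heavily under-represented, then measure the
# error rate separately per group. Everything here is fabricated for
# illustration; it is not any real facial recognition pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic 'face embeddings': two classes whose separating signal
    differs slightly per group, so a model tuned mostly on one group
    transfers poorly to the other."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 8))
    X[:, 0] += (2 * y - 1) * 1.5      # class signal shared by both groups
    X[:, 1] += (2 * y - 1) * shift    # group-specific part of the signal
    return X, y

# Training set mirrors the skew reported for common benchmarks (~80/20).
Xa, ya = make_group(8000, shift=1.0)   # over-represented group A
Xb, yb = make_group(2000, shift=-1.0)  # under-represented group B
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on balanced, unseen samples from each group.
for name, shift in [("group A", 1.0), ("group B", -1.0)]:
    Xt, yt = make_group(4000, shift)
    err = np.mean(model.predict(Xt) != yt)
    print(f"{name}: error rate = {err:.1%}")
```

Running this shows the under-represented group's error rate coming out several times higher, even though the classifier was never told anyone's group: the skew in the data alone does the damage.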
To quote the headline of New York Times journalist Steve Lohr: "Facial Recognition Is Accurate, if You're a White Guy." If the person is a white man, the facial recognition software is wrong less than 1 percent of the time; for darker-skinned women, the error rate rises to roughly 35 percent. But again, the AI is only as smart as the data used to train it. If there are far more images of white men than of women of color, the AI will be better at identifying white men. According to the Times, one data set widely used to train facial recognition software was "estimated to be more than 75 percent male and more than 80 percent white." Oversampling of this kind has deepened bias against minority groups and made them more vulnerable to false accusations.
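The finding Lohr reports is, at its core, a bookkeeping exercise: compute the error rate per subgroup instead of one flattering aggregate number. A small illustrative sketch, using fabricated placeholder records rather than the study's actual data:

```python
# A sketch of a per-subgroup audit in the spirit of the study Lohr reports
# on: tally errors separately for each intersection of skin type and gender.
# The records below are fabricated placeholders, not the study's data.
from collections import defaultdict

# Each record: (skin_type, gender, predicted_label, true_label)
results = [
    ("lighter", "male",   "male",   "male"),
    ("lighter", "female", "female", "female"),
    ("darker",  "female", "male",   "female"),  # misclassification
    ("darker",  "male",   "male",   "male"),
    ("darker",  "female", "male",   "female"),  # misclassification
    ("darker",  "female", "female", "female"),
]

totals, errors = defaultdict(int), defaultdict(int)
for skin, gender, predicted, actual in results:
    group = (skin, gender)
    totals[group] += 1
    errors[group] += predicted != actual

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.0%} error ({errors[group]}/{totals[group]})")

# An overall accuracy of ~67% on these six records would hide that every
# single error falls on darker-skinned women, which is exactly what the
# aggregate numbers behind the 1% vs. 35% gap concealed.
```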
With such a wide margin of error for minority groups, the chance of being flagged as, say, a suspect is greater. Even if the person is innocent, their image may be stored in the facial recognition database as a person of interest. This may not seem like an issue at first, but the image could end up stored alongside those of genuine criminals, teaching the algorithm that a majority of felons come from minority groups.
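To see how quickly that feedback loop compounds, consider a toy simulation. Every rate and count below is an assumption for illustration (only the 1 percent versus 35 percent gap echoes the figures above); it does not model any real police database:

```python
# A toy simulation (all numbers assumed) of the loop described above: false
# matches against the higher-error group get stored as "persons of
# interest," so the watchlist itself drifts toward that group even though
# both groups start at parity.
base_match_error = {"majority": 0.01, "minority": 0.35}  # echoes the reported gap
watchlist = {"majority": 100, "minority": 100}           # start from parity
searches_per_round = 1000

for round_num in range(1, 4):
    for group, error_rate in base_match_error.items():
        # Innocent people falsely matched this round get added to the list.
        false_matches = int(searches_per_round * error_rate)
        watchlist[group] += false_matches
    share = watchlist["minority"] / sum(watchlist.values())
    print(f"after round {round_num}: minority share of watchlist = {share:.0%}")

# Within a few rounds the minority group dominates the watchlist purely
# through accumulated false matches, with no change in actual behavior.
```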
In early 2020, police in Farmington Hills, Michigan used a facial-recognition system to identify a watch thief from blurry surveillance footage. The footage showed a hard-to-make-out image of a Black man, whom the computer identified as Robert Williams by matching it against his driver's license photo. Williams recounts his exchange with the police: "I picked that paper up, held it next to my face and said, 'This is not me. I hope y'all don't think all Black people look alike,' and then he [the detective] said: 'The computer says it's you.'" After this incident, the American Civil Liberties Union (ACLU) pushed for the technology to be banned. ACLU attorney Phil Mayor says the technology "doesn't work, and even when it does work, it remains too dangerous a tool for governments to use to surveil their own citizens for no compelling reason."
Because the technology demands constant training and testing on data, the human bias embedded in that data is passed on to the AI. With more balanced data, researchers across the globe are aiming for a technology that eliminates bias, offering a clear and impartial lens on any situation. Imagine how many people would be alive today if a police officer's bias had not instantly labeled them a criminal, terrorist, or "alien." Every new technology requires an evaluation of its limitations and applications. Whether AI facial recognition should be restricted to unlocking phones and computers or extended to other purposes, such as law enforcement, will be debated by ethicists for years to come. The word "artificial" derives from the Latin "artificium," meaning handicraft; like any human handicraft, artificial intelligence will continue to have its share of flaws and biases. The need of the hour is to recognize them and rework the algorithms that determine what these machines decide.
References
Castelvecchi, D. (2020, November 18). Is facial recognition too biased to be let loose? Retrieved March 18, 2021, from https://www.nature.com/articles/d41586-020-03186-4
Harwell, D. (2020, December 08). Huawei tested AI software that could recognize Uighur minorities and alert police, report says. Retrieved March 18, 2021, from https://www.washingtonpost.com/technology/2020/12/08/huawei-tested-ai-software-that-could-recognize-uighur-minorities-alert-police-report-says/
Jobin, A., Ienca, M., & Vayena, E. (2019, September 02). The global landscape of AI ethics guidelines. Retrieved March 18, 2021, from https://www.nature.com/articles/s42256-019-0088-2
Kuflinski, Y. (2019, April 11). How ethical is facial recognition technology? Retrieved March 18, 2021, from https://towardsdatascience.com/how-ethical-is-facial-recognition-technology-8104db2cb81b
Lohr, S. (2018, February 09). Facial recognition is accurate, if you’re a white guy. Retrieved March 18, 2021, from https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html
Ng, A. (2020, August 11). China tightens control with facial recognition, public shaming. Retrieved March 18, 2021, from https://www.cnet.com/news/in-china-facial-recognition-public-shaming-and-control-go-hand-in-hand/
Van Noorden, R. (2020, November 18). The ethical questions that haunt facial-recognition research. Retrieved March 18, 2021, from https://www.nature.com/articles/d41586-020-03187-3
Silberg, J., & Manyika, J. (2020, July 22). Tackling bias in artificial intelligence (and in humans). Retrieved March 18, 2021, from https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans#
2 Responses
Outstanding post. However, your comment, "people will be targeted at an unprecedented scale," is a big statement. Which people? What do you mean by targeting? This is so new an area, involving everything from metaphysics to the Supreme Court. Let me know what you are specifically referring to. Thank you for this engaging, thought-provoking article.
Thank you, John. Since researchers provide the data, oversampling of certain statistics and field experiments may lead the algorithm to develop its own bias, because the data comes from experiments run by naturally biased beings. AI data is based on a person's physical characteristics, so the sampling may be skewed. For example, if a facial recognition system were used to assist the police, its data would include statistics relevant to police work. What if that data categorized a certain community or ethnic group as more likely to be felons? Wouldn't the AI conclude that they are more likely to be suspects? For instance, if the data shows that more Black people are arrested for certain crimes, Caucasians may not be subject to suspicion for the same offenses. Eliminating bias in a technology fed data by biased beings is hard and will require an enormous amount of data.