Digital technologies are advancing at an exponential rate, making it important to examine their ethical implications. Facial recognition, one such technology, is software that identifies human faces. The software uses “deep learning” algorithms to determine the similarities and differences between images of faces ("What Facial Recognition Steals" 02:49-02:50). After being trained on millions of images, the algorithms catalog which measurements most efficiently distinguish one person from another ("What Facial Recognition Steals" 03:14-03:16). Facial recognition is used in technologies from Apple’s Face ID to social media filters to surveillance cameras, and it threatens to exacerbate pre-existing biases if not carefully monitored.
Facial recognition technology has the potential to make travel faster and more efficient in “airports, train stations, and border crossing” areas (Lewis and Crumpler 4). Over 90% of passengers state they would prefer its use over traditional security processes (Lewis and Crumpler 4). Additionally, the technology can promote safer travel by identifying known terrorists.
While facial recognition technology is advantageous because it increases safety, it is prone to false positives. In Detroit, a Black man named Robert Julian-Borchak Williams was arrested because facial recognition software used by the police matched his license photo with footage of a robbery suspect (Botkin-Kowacki). However, “a human eye could” see that Williams was not the suspect (Botkin-Kowacki). Other Black men, like Michael Oliver and Nijeer Parks, were also wrongfully arrested (“Coalition Letter” 2). In 2019, the National Institute of Standards and Technology found that facial recognition technology was more likely to “make mistakes [when] identifying people of color” (Botkin-Kowacki), as well as “women, and other marginalized” communities (“Coalition Letter” 1). This occurs because the people designing the software are “usually males,” and their views and ideals can influence the datasets they use to train it (Botkin-Kowacki). These datasets often lack diversity and contain images of people who “fit racial [and gender] stereotypes” (Botkin-Kowacki).
Obscurity is the idea that “information is safer…when it is hard to obtain,” but companies and the government using facial recognition technology can easily access faces stored on phones, on websites like Facebook and Google, and in documents like passports ("What Facial Recognition Steals" 05:10-05:11). People’s profiles on sites like Instagram, Twitter, and LinkedIn also allow easy access to their identities. Experts argue that people should be able to express themselves on social media and still expect their privacy to be protected ("What Facial Recognition Steals" 07:47-07:53). Furthermore, search engines like Yandex use facial recognition software to “reverse image search,” which allows anyone to take a picture of a person in public and learn about them without even knowing their name ("What Facial Recognition Steals" 00:15-00:16).
While some people argue that it is important not to let caution about the technology “stop innovation” and thereby allow other countries to gain a “technological advantage,” facial recognition technology needs to be strictly regulated (Lewis and Crumpler 7). Current facial recognition systems vary greatly in accuracy, from 30% to 90%, and most commercially available technology falls at the lower end of that range (Lewis and Crumpler 3).
The term “automation bias” refers to people relying on technology without verifying information on their own (Lewis and Crumpler 2). While police departments and the FBI claim to use facial recognition only to “produce investigative leads,” not to justify arrests, false positives like the wrongful arrests of Williams, Oliver, and Parks prove otherwise (Lewis and Crumpler 2). Many people, myself included, expect some degree of privacy in public, and facial recognition technology violates that expectation (“Coalition Letter” 2). Additionally, this “all-seeing” technology is often used to monitor peaceful protesters, which limits their “First Amendment rights” (“Coalition Letter” 1).
Facial recognition must be regulated at the “federal level,” and the public should choose whether they want to be subjected to it (Lewis and Crumpler 7). Policies on the use of government documents as image sources, “human review” of arrests, and current laws on privacy protection need to be reviewed (Lewis and Crumpler 3). Furthermore, when facial recognition technology is used in policing, officers need to be trained to prevent “automation bias” and to make arrests only when sufficient evidence has been gathered (Lewis and Crumpler 2).
Botkin-Kowacki, Eva. “Humans Are Trying to Take Bias out of Facial Recognition Programs. It's Not Working–Yet.” News@Northeastern, 22 Feb. 2021, https://news.northeastern.edu/2021/02/22/humans-are-trying-to-take-bias-out-of-facial-recognition-programs-its-not-working-yet/. Accessed 1 Mar. 2021.
“Coalition Letter to President Biden on Use of Facial Recognition Technology.” American Civil Liberties Union, 16 Feb. 2021, www.aclu.org/letter/coalition-letter-president-biden-use-facial-recognition-technology. Accessed 17 Mar. 2021.
Lewis, James A., and William Crumpler. “Questions about Facial Recognition.” JSTOR, 1 Feb. 2021. Center for Strategic and International Studies (CSIS), www.jstor.org/stable/resrep28766. Accessed 1 Mar. 2021.
“What Facial Recognition Steals from Us.” YouTube, uploaded by Vox, 10 Dec. 2019, www.youtube.com/watch?v=cc0dqW2HCRc. Accessed 1 Mar. 2021.