artificailbias
Assumptions in Automated Facial Recognition Tech
artificailbias · 3 years ago
Quote
my face when I submitted before the FASER deadline!
artificailbias · 3 years ago
Text
Blog Post 1
The use of automated facial recognition under the umbrella of artificial intelligence has severe human rights implications for marginalized groups who are in the custody of law enforcement. The individuals who enforce the law and implement problematic methods such as police surveillance (police officers or detectives, for instance) often operate with an unconscious bias, which increases the likelihood that discrimination will occur. The human rights implications at stake are severe. First, the process of “booking” a suspect is in itself dehumanizing. The use of automated facial recognition also fails to incorporate consent, elements of the Miranda rights (established in Miranda v. Arizona, a landmark U.S. case), and ethical best practices. The article “Assisted Facial Recognition and the Reinvention of Suspicion and Discretion,” written by Pete Fussey, Bethan Davies, and Martin Innes, supports this argument. Fussey et al. explicitly state that “the social biases of police activity that disproportionately focuses on young people and members of African Caribbean and other minority ethnic groups (inter alia The Lammy Review 2017) are further inflected by alleged technological biases deriving from how technical accuracy recedes for subjects who are older, female and for some people of colour” (338). The technological element of automated facial recognition therefore increases the likelihood that African Americans, Latinx people, South Asians, and women, groups already disproportionately likely to experience racism and profiling, will be misidentified when law enforcement treats them as a “person of interest.”
Furthermore, the conference paper Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, written by Joy Buolamwini and Timnit Gebru, reinforces the argument that, in the contemporary information age, the algorithms embedded in automated facial recognition are more likely to be turned against African Americans. According to Buolamwini and Gebru, “A year long research investigation across 100 police departments revealed that African-American individuals are more likely to be stopped by law enforcement and be subjected to face recognition searches than individuals of other ethnicities” (2). The misuse of automated facial recognition allows law enforcement to profile Black people more quickly. The technology exacerbates an already pervasive problem and contributes to the loss of African American lives in the United States. This is a human rights violation. An article titled “UN panel says the U.S. owes reparations to African Americans,” published by the Public Broadcasting Service, illustrates this by stating that “contemporary police killings and the trauma that they create are reminiscent of the past racial terror of lynching” (Mason, 2016). Prominent examples include Sandra Bland, Breonna Taylor, and George Floyd. Given these cases, it is clear that automated facial recognition reinforces unconscious bias and discrimination and perpetuates human rights violations.
Lastly, the research report The Perpetual Line-Up: Unregulated Police Face Recognition in America, published by Clare Garvie et al. at the Georgetown Law Center on Privacy and Technology, provides another critical perspective on how race and gender intersect with facial recognition technology. The report offers statistical evidence that, across jurisdictions ranging from as far west as Los Angeles to the state of Pennsylvania, African Americans are more likely to be arrested, interrogated, and investigated by law enforcement. The report explicitly states that “African Americans are disproportionately likely to be subject to police face recognition. A face recognition system can only “find” people who are in its database; in systems that rely on mug shot databases, racial disparities in arrest rates will make African Americans much more “findable” than others—even though those identifications may themselves be more likely to be erroneous” (56). Automated facial recognition technology thus accelerates the profiling of African Americans by law enforcement agencies and officials.
Overall, the development of automated facial recognition technology has both advantages and disadvantages. Internationally, and in the context of international criminal law, biometrics can be a useful tool for apprehending individuals who may commit acts of terror against governments and innocent civilians. The headshot photo on your passport is one example of how far artificial intelligence has come in verifying who is who across different airports and borders. Domestically in the United States, however, African Americans (both female and male) are more likely to be victimized at an accelerated rate because of problematic digital profiling in many jurisdictions' police databases. This accelerated form of digitized racial profiling puts Black people's lives at risk at the hands of law enforcement, an already prevailing issue that this particular component of artificial intelligence stands to exacerbate.