The Bias in AI: Case study of facial recognition technology

What is Facial Recognition Technology?

The history of facial recognition can be traced back to the 1960s, and research in the field has continued to develop ever since (Libby and Ehrenfeld, 2021). At Expo ’70 in Osaka, Japan, NEC demonstrated facial recognition technology that attracted a huge audience (Gates, 2016). The technology is still evolving toward unknown destinations and, like any other new technology, it has its share of controversial debates, arguments, and ethical dilemmas.

In this case study we will explore what facial recognition is and what the possible biases of this technology are. Long before such tools existed, border control relied on witnesses to identify a suspect, or even on a professional artist drawing a picture of the suspect from the witness's description so that it could be compared with an existing criminal database (Horkaew et al., 2020).

Facial recognition technology (FRT) is an artificial intelligence (AI) tool used to recognize and identify faces in digital images or videos and to cross-link the findings with existing databases using algorithms (Thomas, 2018; Raji et al., 2020; European Union Agency for Fundamental Rights (FRA), 2020). Facial detection and recognition are now widely used in our daily lives, from electronic gadgets such as smartphones, smart TVs, laptops, and closed-circuit television (CCTV) systems to application software.

A database is compiled every time a photo is taken with a smartphone, and public transport hubs such as train stations and airports are equipped with cameras in every corner that scan people's faces and compare them with existing databases, some acquired from major social media companies. Even commercial companies use facial recognition and detection, for example for marketing purposes. The more we use and indulge in these technologies, the more material we provide for the tools of facial recognition. Smartphone companies have recently pushed users to unlock their phones with a facial print, and as we know, even children have their own phones: they use facial prints, play with facial filters on Instagram, update profiles, and download new applications, all without any need for their parents' consent. Professional uses, such as diagnostics and profiling in the medical and psychological fields, also contribute to the growth of this market and create demand for it, which encourages companies to invest more in research (Lundh and Ost, 1996). The financing and purchase of private-sector facial recognition research by governmental institutions for different applications has raised the importance of this technology. For example, law enforcement entities utilize facial recognition technology for different purposes and in different methods and ways; this applies to democratic and authoritarian governments alike.

Law enforcement uses this technology mainly to identify potential suspects from CCTV images and videos, or to identify potential threats by monitoring for predefined facial features in order to categorize groups of suspects who may not be present in criminal record databases (Smith and Miller, 2021). Facial recognition technology is everywhere, and it is spreading in different shapes and forms, so we need to think about its misuses and unethical applications. The facial recognition industry is growing steadily: in the United States market alone, it is expected to reach USD 7 billion in 2024 (Mohsin, 2020). The growth is in both equipment and algorithms, and this growth and revenue lure many companies, big and small, and even individual programmers into the industry to try to get a share of the growing profit, in addition to the growing interest of governments (Lundh and Ost, 1996; Thomas, 2018; European Union Agency for Fundamental Rights (FRA), 2020; Lunter, 2020; Mohsin, 2020; Cavazos et al., 2021).

What are the biases in this technology?

The results of a test of facial recognition algorithms run by the National Institute of Standards and Technology (NIST) showed poor recognition of people with dark skin and people from East Asia compared with the recognition of people of Caucasian origin, with a much larger margin of error for people with dark skin and people from East Asia (Lunter, 2020).

Racial biases have also been reported in AI in general, as Cave and Dihal (2020) state: “Typing terms like “robot” or “artificial intelligence” into a search engine will yield a preponderance of stock images of white plastic humanoids. Perhaps more notable still, these machines are not only white in color, but the more human they are made to look, the more their features are made ethnically White”.

Studies have shown that racial and gender biases exist in the facial recognition technology used by law enforcement authorities in most countries (Lundh and Ost, 1996; Thomas, 2018; European Union Agency for Fundamental Rights (FRA), 2020; Lunter, 2020; Mohsin, 2020; Cavazos et al., 2021).

Facial recognition technology depends on algorithmic programs that analyze the images fed into them, comparing them with databases or trying to identify specific features such as facial color or skin tone, eye color, facial dimensions that may indicate age and gender, headwear, and so on. These algorithms use machine learning (ML) to continuously improve at their purpose and to accumulate familiarized databases for future use by the same algorithm when making judgements in the blink of an eye. The outcome of the recognition is therefore dependent on the accumulated data (Mohsin, 2020). There are two main examples of how law enforcement authorities use facial recognition technology: in the first, an acquired image of a suspect is compared with existing databases gathered from all sorts of official documents that contain personal photos, such as identification cards, driving licenses, and passports, to compile a list of names with a high matching percentage; in the second, video from live surveillance cameras spread through airports, train stations, and similar locations is used to find a suspect (Mohsin, 2020).
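To make the first of these two modes concrete, here is a minimal sketch in Python of a one-to-many search. It is an illustration under stated assumptions, not any vendor's actual system: the embed function is a hypothetical stand-in for a trained face-embedding model, and the gallery of named vectors is invented for the example.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Stand-in for a trained face-embedding model.

    A real system maps a face image to a fixed-length vector with a
    neural network; here a seeded random projection keeps the sketch
    self-contained and runnable.
    """
    rng = np.random.default_rng(int(image.sum()) % 2**32)
    vec = rng.standard_normal(128)
    return vec / np.linalg.norm(vec)

def search(probe, gallery, threshold=0.6):
    """Compare a probe embedding against every enrolled identity.

    Cosine similarity of unit vectors is their dot product; names
    scoring at or above the threshold are returned best-first.
    """
    hits = [(name, float(probe @ vec)) for name, vec in gallery.items()]
    return sorted([h for h in hits if h[1] >= threshold],
                  key=lambda pair: -pair[1])

# Hypothetical gallery enrolled from ID-card photos (plain arrays
# stand in for real images here).
gallery = {name: embed(np.full((8, 8), i))
           for i, name in enumerate(["Alice", "Bilal", "Chen"])}
probe = embed(np.full((8, 8), 1))  # e.g. a CCTV still of the same face as "Bilal"
print(search(probe, gallery))      # best hit: ('Bilal', ~1.0)
```

Everything such a system "knows" is in the gallery vectors and the threshold; if the embedding model represents some groups of faces less accurately, the candidate lists for those groups degrade accordingly.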

Cavazos et al. (2021) explain the “other-race effect”: the inhabitants of a geographical location, such as a country or group of countries, are more used to recognizing and distinguishing the faces of people from the same region, and they face difficulty distinguishing the faces of people from other geographical locations. Hence, the place, race, and ethnicity in which the programming and engineering of facial recognition algorithms take place affect the bias, its type, and its degree.

Smith and Miller's (2021) study sheds light on the fact that facial recognition algorithms engineered and programmed in the Far East showed higher error margins when recognizing Caucasian and African samples, while algorithms engineered and programmed in the West showed higher error margins when recognizing Far Eastern and African samples. Other studies have shown that facial recognition technology also has gender biases, particularly when the compared samples differ in race and gender; that is, Western facial recognition technology will show greater bias when the samples are of Far Eastern or African women. Recent studies have shown that the error margins and bias in facial recognition technology are increasing (Lundh and Ost, 1996; Mohsin, 2020).

One may ask whether the accuracy of facial recognition technology and its algorithms can be measured, and how. Some studies have assessed facial recognition accuracy based on three criteria: first, the specifications of the tested digital sample, such as color intensity, brightness, contrast, sharpness, and resolution; second, the personal context of the tested digital sample, from the perspective of ethnicity, race, gender, and age; and finally, the specifications of the algorithm, such as the program code, the architecture of the code, its complexity, and its use of databases (FRA, 2020; Mohsin, 2020; Cavazos et al., 2021). In June 2020, Arvind Krishna, CEO of IBM, announced that IBM had stopped selling facial recognition services, wondering whether facial recognition should be used at all and calling for a “national dialogue” (Coldewey, 2020). Other commercial facial recognition companies, such as Amazon and Microsoft, however, refuse to admit that there are gender, ethnic, and racial biases in this technology (Lunter, 2020). In November 2021, Facebook announced that it was shutting down its facial recognition software (CNN Business, 2021). The European Union Agency for Fundamental Rights (FRA) has acknowledged the presence of racial and gender biases in facial recognition technology, warning that “the risk of errors remains real” (European Union Agency for Fundamental Rights (FRA), 2020).
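How such accuracy comparisons work in principle can be sketched in a few lines. The example below, with invented scores and group labels, computes two standard error rates separately per demographic group: the false match rate (impostor pairs wrongly accepted) and the false non-match rate (genuine pairs wrongly rejected). Real audits such as NIST's do the same thing at scale, on large curated datasets.

```python
from collections import defaultdict

# Hypothetical verification trials: (group, same_person, score).
# Scores and group labels are invented for illustration only.
trials = [
    ("group_a", True, 0.91), ("group_a", True, 0.55),
    ("group_a", False, 0.30), ("group_a", False, 0.72),
    ("group_b", True, 0.88), ("group_b", True, 0.81),
    ("group_b", False, 0.20), ("group_b", False, 0.35),
]

def error_rates(trials, threshold):
    """Per-group false match rate (FMR) and false non-match rate (FNMR)."""
    counts = defaultdict(lambda: {"fm": 0, "imp": 0, "fnm": 0, "gen": 0})
    for group, same, score in trials:
        c = counts[group]
        if same:
            c["gen"] += 1
            if score < threshold:   # genuine pair rejected
                c["fnm"] += 1
        else:
            c["imp"] += 1
            if score >= threshold:  # impostor pair accepted
                c["fm"] += 1
    return {g: (c["fm"] / c["imp"], c["fnm"] / c["gen"])
            for g, c in counts.items()}

for group, (fmr, fnmr) in error_rates(trials, threshold=0.6).items():
    print(f"{group}: FMR={fmr:.2f}, FNMR={fnmr:.2f}")
```

A gap like the one this toy data produces between group_a and group_b is exactly the kind of disparity the studies above report.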

It is not only such biases that we should be cautious of, but also the intent behind the use of this technology, which may hinder, oppress, or violate the rights to privacy, equality, and freedom. The more the technology sits with the private sector, the more “transparency and accountability are under threat” (Cinnamon, 2019). This caveat does not apply to authoritarian governments, where such transparency is absent to begin with.

If misused, this technology may pose a tremendous threat. China's “Social Credit System”, a civic ranking system that is still under testing yet already widely deployed, with millions of CCTV cameras spread across some cities, may serve as an example (Smith and Miller, 2021). One may think that if such a system were implemented in a liberal country, protests and riots would spread all over the country, since it would violate privacy, freedom, and civic rights (Mohsin, 2020). Yet in the UK, South Wales Police has already used live surveillance cameras with facial recognition technology at sporting events (European Union Agency for Fundamental Rights (FRA), 2020).

The intentional targeting and categorizing of certain ethnic groups or races may occur because such ethnicities and races are depicted as linked to violence and terror, and hence treated as high-risk suspects (Raji et al., 2020; European Union Agency for Fundamental Rights (FRA), 2020). This collides with the third pillar of data justice: “The third pillar within this proposed framework is nondiscrimination. It is composed of two dimensions: the power to identify and challenge bias in data use, and the freedom not to be discriminated against” (Taylor, 2017).

The transportation business brings almost one million visitors to the United States annually. In the United States and the EU, there have been proposals to introduce passports containing a microchip that holds all the biometric data of the passport holder, including a photo (Brey, 2004).

Some airline carriers have implemented facial recognition technology in boarding since 2018, and border control in the US uses facial recognition to distinguish US citizens from others (Tucker, 2020).

“On-the-spot” facial recognition using drones to mitigate terrorist attacks is being implemented at a rising pace (Bálint, 2018). Border segregation of other ethnicities and races, ignoring the biases that occur with non-white samples, may increasingly follow: “Too high a matching threshold can result in missed matches or false negatives; too low a matching threshold can result in mismatches or false positives” (Gates, 2002). In 2021, the UAE began promoting the use of facial recognition as a means of identification in processing official documents (Husain, 2021). And although, as Eriksen (2014) explains, globalization brings people closer together on “a shrinking planet”, means of segregation are still present among us.
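Gates's point about thresholds can be illustrated directly. The short sketch below, again with invented similarity scores, sweeps the decision threshold and counts both error types, showing that pushing one kind of error down pushes the other up.

```python
# Invented similarity scores for illustration: (same_person, score).
pairs = [(True, 0.92), (True, 0.83), (True, 0.64), (True, 0.48),
         (False, 0.71), (False, 0.58), (False, 0.22), (False, 0.10)]

for threshold in (0.3, 0.5, 0.7, 0.9):
    # A genuine pair scoring below the threshold is a missed match
    # (false negative); an impostor pair scoring at or above it is a
    # mismatch (false positive).
    false_neg = sum(1 for same, s in pairs if same and s < threshold)
    false_pos = sum(1 for same, s in pairs if not same and s >= threshold)
    print(f"threshold={threshold:.1f}: "
          f"{false_neg} false negatives, {false_pos} false positives")
```

Where an operator places the threshold is therefore not a purely technical detail but a policy choice about which kind of error, and whose faces, the system will get wrong.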

Reflection

This was my first experiment with blogging; I had never posted a blog before. The experience gave me a wider perspective on blogging and on communicating with other colleagues in the field, and it gave me the chance to express myself and to respect others' thoughts and comments. The constructive feedback I received made me more daring and more positive about expressing my thoughts, and the ideas I got through the comments on my posts opened a window for improvement. I noticed that the field is full of creative people with ideas I could not even have thought of. Now I am more daring in expressing my ideas, knowing that I will be appreciated just as I appreciate others' opinions, and that my comments and contributions are as important to them as their constructive comments are to me.

The blogging experience also gave me a new language that I did not know before. All I had tried to learn previously was the academic style, with citations and professional writing, while now I know there is another language through which I can communicate with a much larger group of people. I intend to continue blogging, since I feel I have not yet learned how to use social media effectively to promote and spread my posts. I want to thank the group members I worked with to bring this blog to life; they were as busy as bees and managed time and tasks among us all very efficiently. The blogging experiment gave me threads I can pursue to gain better access to new knowledge. After all, I am studying communication for development, and I think blogging is a very useful tool for developing and contributing to this domain. Finally, I want to thank Malmö University and our teachers for this brilliant course idea.

References

Bálint, K. (2018) ‘UAVs with Biometric Facial Recognition Capabilities in the Combat Against Terrorism’, SISY 2018 – IEEE 16th International Symposium on Intelligent Systems and Informatics, Proceedings. IEEE, pp. 185–189. doi: 10.1109/SISY.2018.8524800.

Brey, P. (2004) ‘Ethical aspects of facial recognition systems in public places’, Journal of Information, Communication and Ethics in Society, 2(2), pp. 97–109. doi: 10.1108/14779960480000246.

CNN Business (2021) Facebook is shutting down its facial recognition software. [online] CNN. Available at: https://edition.cnn.com/2021/11/02/tech/facebook-shuts-down-facial-recognition/index.html [Accessed 8 Nov. 2021].

Cave, S. and Dihal, K. (2020) ‘The Whiteness of AI’, Philosophy & Technology. doi: 10.1007/s13347-020-00415-6.

Cavazos, J. G. et al. (2021) ‘Accuracy Comparison Across Face Recognition Algorithms: Where Are We on Measuring Race Bias?’, IEEE Transactions on Biometrics, Behavior, and Identity Science, 3(1), pp. 101–111.

Cinnamon, J. (2019). Data inequalities and why they matter for development. Information Technology for Development, pp.1–20. doi:10.1080/02681102.2019.1650244.

Coldewey, D. (2020). IBM ends all facial recognition business as CEO calls out bias and inequality. [online] TechCrunch. Available at: https://techcrunch.com/2020/06/08/ibm-ends-all-facial-recognition-work-as-ceo-calls-out-bias-and-inequality [Accessed 1 Nov. 2022].

Crawford, K. (2019) ‘Regulate facial-recognition technology’, Nature, 572, p. 565.

Raji, I. D. et al. (2020) ‘Saving face: Investigating the ethical concerns of facial recognition auditing’, in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES ’20).

Eriksen, T. H. (2014) Globalization: The Key Concepts. London: Bloomsbury.

European Union Agency for Fundamental Rights (FRA) (2020) ‘FRA Focus: Facial recognition technology: fundamental rights considerations in the context of law enforcement’, pp. 1–34. Available at: https://fra.europa.eu/sites/default/files/fra_uploads/fra-2019-facial-recognition-technology-focus-paper-1_en.pdf.

Gates, K. A. (2002) ‘Wanted Dead or Digitized: Facial Recognition Technology and Privacy’, Television & New Media, 3(2), pp. 235–238. doi: 10.1177/152747640200300217.

Gates, K. A. (2016) Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance. New York: NYU Press. doi: 10.18574/nyu/9780814732090.001.0001.

Horkaew, P. et al. (2020) ‘Eyewitnesses’ Visual Recollection in Suspect Identification by using Facial Appearance Model’, Baghdad Science Journal, 17(1), pp. 190–198. doi: 10.21123/bsj.2020.17.1.0190.

Husain, Z. (2021). UAE: Facial recognition instead of Emirates ID card readers will now verify identity. gulfnews.com. [online] 19 Oct. Available at: https://gulfnews.com/living-in-uae/ask-us/uae-facial-recognition-instead-of-emirates-id-card-readers-will-now-verify-identity-1.1634628283290 [Accessed 8 Nov. 2021].

Libby, C. and Ehrenfeld, J. (2021) ‘Facial Recognition Technology in 2021: Masks, Bias, and the Future of Healthcare’, Journal of Medical Systems, 45(4). doi: 10.1007/s10916-021-01723-w.

Lundh, L.-G. and Ost, L.-G. (1996) ‘Recognition bias for critical faces in social phobics’, Behaviour Research and Therapy, 34, pp. 787–794.

Lunter, J. (2020) ‘Beating the bias in facial recognition technology’, Biometric Technology Today. Elsevier Ltd, 2020(9), pp. 5–7. doi: 10.1016/S0969-4765(20)30122-3.

Mohsin, K. (2020) ‘Facial Recognition – Boon or Bane’, SSRN Electronic Journal. doi: 10.2139/ssrn.3666397.

Smith, M. and Miller, S. (2021) ‘The ethical application of biometric facial recognition technology’, AI and Society. Springer London. doi: 10.1007/s00146-021-01199-9.

Taylor, L. (2017) ‘What is data justice? The case for connecting digital rights and freedoms globally’, Big Data & Society, 4(2). doi: 10.1177/2053951717736335.

Thomas, D. (2018) ‘The digital’, The Silence of the Archive, pp. 65–100. doi: 10.29085/9781783301577.006.

Tucker, A. (2020) ‘The Citizen Question: Making Identities Visible Via Facial Recognition Software at the Border’, IEEE Technology and Society Magazine, 39(4), pp. 52–59. doi: 10.1109/MTS.2020.3031847.