Learning new or strengthening the old? The role of AI and learning algorithms in the formation of stereotypes through social media

Many tend to view social media as a way to expand their understanding of the world and its phenomena – whether these are completely new or something the user is already familiar with. In other words, social media enables exploring new information and content on topics that one would like to learn more about. In addition, social media serves as a platform that facilitates communication with like-minded people: through platforms such as Instagram and Twitter, it is relatively effortless to connect with people regardless of physical distance (Baldwin, 2016; Giddens, 1999). While there certainly are positive aspects to the possibilities social media offers its users for discovering new information – or building on the knowledge one already has – treating social media and the way it functions as purely unbiased and information-focused is highly problematic. It is therefore necessary to ask whether social media always serves as a platform for learning and expanding our knowledge. Moreover, instead of offering new information and perspectives on the topics we are interested in, can social media actually function in a way that reinforces the impressions and opinions we already hold – even to the point of offering us insufficient or poorly grounded information that ignores alternative perspectives on the topic?

The core function of artificial intelligence (AI) is to use computers in decision-making. Instead of relying only on human-run processes, AI automates decision processes, although humans can still be involved at some stages. AI often applies machine learning (ML), which allows computers to make generalizations from existing data; by applying ML, it is possible to predict what the data will look like in the future. Still, in light of the possibilities of finding new and unbiased information on social media platforms, there are at least a few downsides to using ML tools in AI. One of the most fundamental challenges is that data is certainly not available on everything – or everyone. In fact, some people may be left out of the data entirely. One possible explanation is a person's low socio-economic status, which can lead to a situation where they do not officially exist: the person does not own a passport and is not registered in any systems run by the country's authorities. In some cases, it is the people themselves who do not want to be recognized or registered in any government systems because of mistrust towards the authorities. The very fact that data does not always take all human groups into consideration equally means that conclusions drawn from the data are, in themselves, untrustworthy (Paul et al., accessed 3 Nov 2021; Taylor, 2017).
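
To make the point concrete, here is a minimal Python sketch of the data-gap problem. The feature names and toy data are invented for illustration; the point is only that a model trained exclusively on officially registered people will still produce a confident-looking score for someone the data has never contained:

```python
# Minimal, hypothetical sketch: a model can only generalize from the people
# present in its training data; anyone absent from the data is invisible to it.
from sklearn.linear_model import LogisticRegression

# Invented toy features: (years_of_registered_income, has_official_id).
# People who are unregistered, or who distrust the authorities, contribute no
# rows at all, so has_official_id is always 1 in the training data.
X_train = [[5, 1], [0, 1], [8, 1], [1, 1], [10, 1], [2, 1]]
y_train = [1, 0, 1, 0, 1, 0]  # e.g. whether some service was granted

model = LogisticRegression().fit(X_train, y_train)

# An undocumented person: no registered income, no official ID. The model has
# never seen anyone like this, yet it still returns a confident-looking
# probability -- a conclusion the underlying data cannot actually support.
print(model.predict_proba([[0, 0]]))
```

Nothing in the model flags that the input lies outside everything it was trained on; the unreliability is silent, which is exactly why conclusions drawn from incomplete data are untrustworthy per se.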

Algorithms also play a role in producing unfair and biased outcomes on social media platforms. So-called learning algorithms search for patterns and relationships in data in order to predict what kind of content a user might be interested in. For instance, if a person has frequently visited similar kinds of Instagram accounts, or visited them many times within a short period, it is likely that Instagram's algorithms will suggest accounts offering similar content. In other words, on the basis of the person's social media habits, learning algorithms predict that the person will probably be interested in similar content in the future as well. Instead of encouraging social media users to visit a variety of accounts that could offer alternative perspectives on the topics they are interested in, learning algorithms tend to steer us towards the kind of content we are already consuming (Paul et al., accessed 3 Nov 2021). This observation raises the question of what the consequences can be if a person only consumes content that offers a one-sided view of certain topics. Ultimately, can our social media habits and "easy access" to one-sided information actually affect the formation of stereotypes – or even discrimination?
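
As an illustration of this mechanism, the following Python sketch ranks candidate accounts by their similarity to what a user already visits. The account names and topic scores are invented, and real recommender systems are far more complex, but the core pattern is the same: "more of the same" always comes out on top.

```python
# Hypothetical content-based recommender: score each candidate account by its
# similarity to the user's existing taste profile, then suggest the closest.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two topic vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Invented topic profiles: (fitness, politics, travel).
accounts = {
    "fitness_daily":  (0.9, 0.0, 0.1),
    "gym_motivation": (0.8, 0.1, 0.1),
    "news_debate":    (0.1, 0.9, 0.0),
    "world_explorer": (0.1, 0.0, 0.9),
}

# The user's history is dominated by fitness content, so their profile is too.
visited = ["fitness_daily"]
profile = accounts["fitness_daily"]

# Rank unvisited accounts by similarity to the existing profile: the top
# suggestion is always the account most like those already consumed.
ranking = sorted(
    (name for name in accounts if name not in visited),
    key=lambda name: cosine(profile, accounts[name]),
    reverse=True,
)
print(ranking)  # ['gym_motivation', 'world_explorer', 'news_debate']
```

Note that nothing in this ranking step rewards novelty: an account on an unfamiliar topic scores low by construction, so it is never surfaced, and each accepted suggestion pulls the profile further towards what the user already consumes.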

One explanation for why people may easily end up consuming social media content that only provides one-sided information is our historically and socially recognized need for a group identity. Throughout history, humans have belonged to various distinct groups and units (Zur, 1991). Social media platforms have played a part in widening people's possibilities to join and belong to different groups, and several new group identities have formed since the emergence of social media. It could be argued that since people feel the need to be part of groups, this need is also reflected in the way they use social media: being part of a certain unit entails following certain people and their accounts. The number of social groups a person belongs to also affects the number and variety of social media accounts they follow, and who a given group consists of can vary greatly depending on the context. As mentioned, we all belong to many different social groups (Zur, 1991). This being said, AI and learning algorithms cannot take all the blame for offering one-sided or biased information – even though the way they function seems to reinforce the way we already use social media and look for new content on it.

In today's digital world, we are all surrounded by masses of information. Sometimes the amount of information we receive daily can be overwhelming, and understanding which information is relevant for gaining a comprehensive understanding of a certain topic can be rather challenging. This being said, we should consider whether it is too much to require of an individual social media user that they search for content from various sources and make sure that, together, those sources form as comprehensive a view of a topic as possible. If everyone looked for as much information as possible on every topic, simply to gain an all-encompassing understanding of it and of alternative approaches to it, we would all spend hours and hours on our electronic devices looking for information. Even then, there would be no guarantee of being aware of all possible information and all alternative perspectives on every topic.

From one perspective, it can be argued that AI is trying to make our information-searching processes lighter and more efficient – with the risk that algorithms encourage us to visit internet sites and social media accounts that provide one-sided information on a topic. It is therefore necessary to consider what role authorities have in making sure that we are given versatile information – if, indeed, they have any role in this at all. In some countries, authorities are heavily invested in monitoring what kind of internet and social media content their citizens consume, which is recognized as a serious issue not only by many human rights organizations but also by other states. Taking these notions into consideration, would any sort of authority-driven control over social media and the content we are encouraged to familiarize ourselves with be considered a restriction of people's access to information and a violation of their freedom? Could there be a middle-way solution to how much authorities can influence the information we consume? Of course, authorities have certain legal obligations to monitor whether illegal content is being spread through the internet. However, what authorities can do in this regard is limited by the sheer amount of information and web-based content that exists – and new content is being created every moment.

All in all, constructing concepts and definitions of subjects is to some extent necessary for all humans. To be able to communicate with each other, we need to formulate a common understanding of what a certain subject is and/or what it consists of (Dovidio et al., 2013). However, concepts can also become harmful if they turn into stereotypes. AI and algorithms can have a great effect on how we perceive things and other people; they can even reinforce the formation of stereotypes. Since many people tend to follow the kind of social media accounts that support their own worldview and values, it is very likely that learning algorithms will encourage them to visit accounts similar to the ones they already follow or visit. Conversely, it is very unlikely that algorithms will recommend content that users would not normally seek out (Paul et al., accessed 3 Nov 2021). Since the stereotypes we uphold tend to grow stronger the more information we receive that supports them, there is a danger in following social media accounts that have a great deal in common. Even if those accounts and the values they represent are something the user believes in, consuming content produced only by like-minded people and groups is one of the first stepping stones towards forming a strong sense of "us" and the self, which in turn leads to the formation of "them" and the other – a process known as othering. Essentially, othering refers to a process in which a person or group is recognized as not following the norms of "us", the so-called in-group, leading to the conclusion that the person or group belongs to the out-group (Brewer, 1996: 292-295; Dovidio et al., 2013; Keen, 1991).

It is worth acknowledging that separating "us" from "the other" does not necessarily involve hostility. "The other" can also have positive features, and some of those features may even be shared with the in-group, with "us" (Keen, 1991). At worst, however, othering can have drastic effects. As Petersson (2009) notes, othering can lead to perceiving the out-group as an enemy of the in-group. Furthermore, viewing the other as an enemy can influence the norms, ethics and values of a society – and even lead to legitimized political action (Petersson, 2009: 261). Potentially, it can lead to the formation of prejudice, to social segregation, and even to discrimination against certain people and groups. The fact that stereotypes can also arise from, and be reinforced by, discrimination creates an even more vicious and dangerous cycle (Dovidio et al., 2013). This being said, it is necessary for every social media user to critically examine the accounts they follow: do those accounts and the content they share have a lot in common, and what might the risks of that be?

References

Baldwin, R. (2016). The Great Convergence: Information Technology and the New Globalization. Cambridge: The Belknap Press of Harvard University Press.

Brewer, M. B. (1996). "When Contact is not Enough: Social Identity and Intergroup Cooperation". International Journal of Intercultural Relations, Vol. 20, No. 3-4, pp. 291-303.

Dovidio, J. F., Hewstone, M., Glick, P. and Esses, V. M. (2013). The SAGE Handbook of Prejudice, Stereotyping and Discrimination. London: SAGE.

Giddens, A. (1999). Runaway World: How Globalization is Reshaping Our Lives. Oxon: Routledge.

Keen, S. (1991). Faces of the Enemy: Reflections of the Hostile Imagination. New York: Harper & Row.

Paul, A., Jolley, C. & Anthony, A. Reflecting the Past, Shaping the Future: Making AI Work for International Development. USAID. Available at: https://www.usaid.gov/sites/default/files/documents/15396/AI-ML-in-Development.pdf (accessed 3 Nov 2021).

Petersson, B. (2009). "Hot Conflict and Everyday Banality: Enemy Images, Scapegoating and Stereotypes". Development, Vol. 52, No. 4, pp. 460-465.

Taylor, L. (2017). "What is data justice? The case for connecting digital rights and freedoms globally". Big Data & Society, December 2017. DOI: 10.1177/2053951717736335.

Zur, O. (1991). "The Love of Hating: The Psychology of Enmity". History of European Ideas, Vol. 13, No. 4, pp. 345-369.

3 Comments

  1. Samuel Hooper

    AI. What a drag. I’m pretty sure, in the very near future, AI will be able to do my current job as a copywriter as well (or even better ;)) than I do. Bummer… This was an interesting post and got me thinking about my kids and how their future online behaviour will be affected by algorithms. As an adult (and someone who works and studies communication) I feel like I’m aware of what is going on behind the scenes. I know the algorithms are feeding me what I “like” and pinning me into my own little world. But I’m aware of that. I try to keep my interests broad and my social media usage to a minimum. But when my kids reach the age where they’re online, I’m concerned that their young, potentially uncritical minds won’t see the bigger picture and their preferences online will be used to “shape” their personalities in ways that might be restricting or even harmful. That’s scary. Thanks for the post!

    1. Paul Denys

      Very interesting and reflective. The possibility of AI taking over presents an interesting dilemma: will it still have soul, or will it be merely a well put together composition? In the absence of thoughts, experiences and creativity, will it be a mere step-by-step assembly – like Ikea furniture – of whatever the AI detects as a hot topic? Will it run some metrics to determine whether the piece is supposed to be critical or supportive of the issue? Will it become sanitized, predictable and bland but well written, with perfectly chosen vocabulary, cadence and SEO score? I too worry for youngsters who will enter adulthood never having experienced anything outside of a digital reality. Will they know enough to rebel against the blandness and simulated creativity?

  2. It is interesting to read how artificial intelligence can be used in decision-making. It was also enriching to read about the concept of us and them, where you write about the formation of “them” and the other – a process known as othering – and how othering can potentially lead to prejudice, social segregation and discrimination against certain people and groups. I like the word othering; I hadn’t heard it before. This discussion has been an eye-opener for me in ways I was not aware of before.
