Hello and thank you for joining!
Today we feature a discussion revealing the catastrophic real-world consequences that biases in technology can have. Our panel reviews a report published by the independent consultancy and auditor Business for Social Responsibility (BSR), which examines Meta’s handling and misguided censoring of Palestinian activists’ online activities. (https://www.washingtonpost.com/technology/2022/09/22/facebook-censorship-palestinians/)

 

Listen or read the transcript below!

 

Podcast transcript:

Kristina: Welcome everyone. Welcome to the Big Data, Big Responsibility podcast. Allow me first to introduce our guests. With us today are Annika, Maja, Jenny, Francisco, and myself, Kristina, as the moderator of today’s discussion. In today’s episode, we’re going to discuss a theme that has caught our group’s particular attention, and we wanted to address the questions and thoughts we have in that regard. We aim for a fruitful discussion.

The central theme is an outside audit that revealed Facebook restricted Palestinian posts during the Gaza war, and how this company-commissioned audit became one of the first insider accounts of a social platform’s failures during wartime. One of the first questions that comes up is: what implications does this kind of online censorship and policing have for human rights? I would like to first ask Francisco; would you like to start this discussion?

Frank: Sure, sure, no problem. I think that’s an interesting question, and I don’t think we could talk about every implication this type of censorship in technology has. Whenever someone digs something up, we usually hear about it in the news. It seems new issues are popping up around the world daily as new technology spreads far and wide, and I think we’ve been seeing a lot of this for quite some time now. So, I don’t really think I can cover the many implications this has in this short time, but I’ll definitely try.

Something that I think is very important yet unfortunate to mention about human rights is that they are somewhat subjective around the world. As controversial as that may sound, while we hope that most humans have and believe in at least a broad sense of human rights, that does not always hold true. So, I think that’s an important point in this conversation. Even organizations that focus on unity have a very broad definition or declaration of what human rights really are. You know, for example, the United Nations. Their declaration says all human beings are born free and equal in dignity and rights. I’m paraphrasing, of course, but we hope that people believe this somewhat. But the fact of the matter is, unfortunately, some do not. So I think that knowing what your human rights are plays an important role when it comes to new forms of censorship in this vast world of technology. Because knowing this, you can almost predict certain implications that censorship could have on a society. Of course, if a person does not have a right, or does not know what their rights are, then they would not know whether it is being violated. And unfortunately, it seems that only a terrible situation like this, and the loss of life that took place at the Al Aqsa compound, brings these types of potential violations to light. In my blog post, kind of a shameless plug there, I’ve been concentrating on end-user license agreements and terms of service, because these can be a starting point for people to learn what their rights are in the world of technology, and this would be specifically important for instances of communication or outreach. Knowing how something could possibly be censored by a platform could lead a group to change their message, their methods of operation, their verbiage, their hashtags, etc. Of course, in this situation that was probably not the first thing people were thinking about.
But in this case, this specific censorship not only possibly cost lives, it created a major power dynamic, in a way giving one group power over the other. And from what I see, whether it was human or algorithmic error, it was unintended. At least I hope it was unintended. It created this terrible power dynamic. Unfortunately, it worked out the same way as if somebody high up in Meta had simply taken a side in the conflict and intentionally policed the posts. And that’s a very important factor here, I believe. If such a large company were to champion a country, a group of people, or a political party, it would of course affect the area in which it operates, pushing that information or agenda over human rights. And this is just another glimpse of what happens when that occurs, I believe. And I think we’ve seen these poor controls of information from Meta over and over in some of their biggest markets, whether it was the propaganda in 2019, the anti-Muslim hate speech in India, or the infamous Cambridge Analytica debacle in the US. This just echoes back to the age-old idea that information, or the lack thereof, creates an imbalanced power structure, and I think that is what happened here. It continues to happen, and it has a direct effect on human rights. Those who have the information can thrive. This is, of course, very similar to what we see in reclusive governments around the world, such as North Korea, where they essentially control society in many different ways, granting very few rights and using technology to create a digital divide, cutting off information through extreme Internet isolation, cyber repression, or whatever you would call it. And there are many examples of this in countries like Tunisia, where the government created and invested in technology, in things like Internet cafes where you could go sit down and use a computer.
But you would be given a list of websites that you could not visit, because they published information about human rights abuses by the Tunisian Government. So this is not exactly a new thing, but here it happened during a terrible period that cost lives, the ultimate sacrifice. And I think we’ve seen this throughout history. Those who have had information thrived, and we see it all around the world. It has been a mainstay in repressing people and denying them their human rights, whatever those may be. Essentially, those who had the information to make the bow and arrow, or the weapon, thrived and conquered, and those who lacked information were enslaved or starved. In my opinion, it’s the same concept. It’s just now available streaming in high definition.

Kristina: Thank you, Francisco, for this incredibly valuable input. This indeed sounds like an interesting view, and of course opens the floor to other interpretations and opinions. Maja, would you have something else to add to this question?

Maja: Well, I think Frank did a brilliant job of answering it. I can just touch upon maybe the opposite of this issue, where we can talk a little bit about what gets censored and what doesn’t, because there are a lot of instances where Facebook and Twitter seem to simply allow certain types of posts to exist on their platforms. For instance, I found some research that went through Twitter posts and found that from August 2019 to August 2021 there were 3.7 million Islamophobic posts made on Twitter. And what’s really concerning is that only 14% of those Tweets have been removed. I’m not sure where the power lies in these platforms. Who decides that racism is not OK while Islamophobia is? Who gets to decide which social media posts people in India have access to? Because we have probably all seen what’s been going on in Kashmir these last few years and how the Indian Government has been suppressing people who voiced anti-government opinions. And how big social media platforms like YouTube and Twitter played along. They agreed, based on the Indian government’s requests, to suppress some profiles and make them unavailable to Indian users. There has to be a way to regulate this and to make sure that private individuals – people who are not part of the UN or any larger governing body in the world – do not have so much power to influence so many millions and billions of people by simply picking a side.

Kristina: Thank you so much, Maja, this is indeed thought-provoking. I suggest we move on to the second question in today’s theme: who should set the rules and framework to ensure fair representation in the social media sphere? Jenny, what would you say to that?

Jenny: Yes, indeed, this is a very critical question to highlight. Today, as many have noticed, the big social media companies such as Meta, Google, Tencent, and ByteDance, to name a few, are the ones who set their own rules within their social media spheres. That is to say, we don’t really have any global conventional rules or frameworks to stand on. And in this case, Facebook’s action to restrict Palestinian posts during the Gaza war demonstrated how the company, sometimes together with a government, can set rules for the visibility of certain posts and expressions. Back in 2019, Mark Zuckerberg spoke at Georgetown University about the importance of protecting free expression. Giving everyone a voice was a core belief of Facebook, and what’s more, he argued that we should fend off the urge to define speech we don’t like as dangerous, and build new institutions so that companies like Facebook aren’t making so many important decisions about speech on their own. Unfortunately, the Facebook we see today is not living up to what it professed and promised. We keep seeing violations of the right to free expression. People are not being heard everywhere, and as indicated in the BSR report, an artificial-intelligence-based hate speech system used an updated list of terms from the US Treasury Department containing the term Al Aqsa Brigade, resulting in #AlAqsa, which has nothing to do with the terrorist organization, being hidden from search results. So not only is Meta setting rules for what information is OK; apparently the US government also plays a critical role in setting the rules here. Taking a closer look, one company owning several influential platforms increases the risk of an expanded range of limits on free expression. For instance, Meta owns Facebook, WhatsApp, Instagram, etc., allowing the company to capture a variety of audiences from all over the world. And with this access, they have a global audience and their data.
This company can have a huge influence on how and what the audience is allowed to do, see, and say on its platforms, and this certainly creates an imbalanced view of the world, thereby creating and maintaining a powerful discourse environment that benefits the company’s interests instead of a fair environment for the users. Simultaneously, this influence also allows the company, and sometimes the government, to use surveillance and to censor expressions that are classified as dangerous according to their standards. This power in itself creates imbalance and inequality. And it’s frightening. So, back to the question of who should set the rules or frameworks. Ideally, it should be an impartial, global organization that sets and regulates a fundamental standard for what social media companies, regardless of country, should do to protect the right to free expression and to maintain a healthy and equal social media environment. But we are not there yet. And ultimately, it should be the users who have the right to decide over their activities and data on the social media platforms they use. Thank you.

Kristina: Thank you, Jenny. I think this is both interesting and thought-provoking, the themes you raised and the points you are bringing up, especially naming Mark Zuckerberg and Meta’s companies and the power and influence they have owning so many platforms. As you noted correctly, to my knowledge no organization exists in the world today that can act as a kind of United Nations for social media, governing and protecting the rules of what should be allowed to be posted and how equal rights shall be supported for all the users accessing social media. This is definitely something to come, and something worth discussing. Francisco, what would you like to add to this theme?

Frank: Well, I think this was a very interesting question, and you brought up some very good points about there not being an international organization that can carry on this work. That’s a very serious topic. As far as working on censorship goes, though, I also want to play devil’s advocate a little bit, because if Meta, or any organization with the power Meta has in this case, actively took itself out of the equation as far as representation and censorship go, you have to think about what that might trigger. For instance, if they chose not to work on censorship or do the work of policing these different posts, if they chose not to use an algorithm or program in an attempt to, say, block communication by terrorist groups, that might invite governments to step in, and to what effect? What would fair representation look like with governments involved? We already know that in a lot of places it’s not great as far as fair representation goes. Getting governments involved before there is an organization that could take over would be kind of a slippery slope. I know that last year in the United States, as misinformation about COVID-19 spread, the press secretary of the White House came out and stated that their policy on misinformation was that users who post it should be banned from social media platforms. Once that was said, all hell broke loose; people came out of the woodwork. Once the government chimed in saying “you can’t say that”, it became “that’s a violation of my free speech”, so it’s kind of a double-edged sword here. If Meta is really doing the work and they’re not succeeding, we definitely need that watchdog or that organization. I think that is probably the best chance for something successful to come about.

Kristina: Thank you, Francisco. The next question I would like to bring up for discussion is: how does this kind of algorithmic bias tie in with unjust economic structures? Annika, would you like to take this one?

Annika: Sure, so the issue here was that Facebook had denied Palestinian users their freedom of expression by arbitrarily removing their content, and it seems that online censorship fell more heavily on Arabic-speaking users than on Hebrew speakers, particularly during the protests. Social media channels such as Instagram, WhatsApp, and Facebook began restricting content containing the hashtag #AlAqsa. At first, the company blamed the issue on an algorithm error; later it was said to be a human error. In the end, it turned out that the hashtag was mistakenly added to a list of terms associated with terrorism by a content moderator working for a third-party contractor. There are a lot of issues here tying into economic injustice, but I’ve actually decided to focus on the content moderators, as these people do incredibly important work for Facebook and other social media channels. Of the 7,000-plus languages spoken in the world, 90% of which are spoken in the Global South, almost none have automated systems (or classifiers, as they are called) available. So, Facebook and other social media channels rely heavily on humans to do the job for them. However, in 2021, the New York Times reported that there are only 15,000 content moderators sifting through posts from 2.85 billion Facebook users. And of course, content moderators find themselves on the lower end of the value chain here. In the Global North as well as in the Global South, these workers face harsh working conditions. They must view very graphic content for long hours every day for very little compensation. One example I can tell you about is in Nairobi. There’s a company called Sama, and they employ 200 content moderators speaking 11 African languages between them. These workers toil day and night and get paid as little as $1.50 an hour for their efforts, they receive very little support, and they are refused the right to unionize.
And in the Global South particularly, Facebook is chronically underinvesting in content moderation. So then we see a situation like the one in Palestine, where a content moderator mistook the Al Aqsa hashtag for the terrorist group, the Al Aqsa Brigade, and therefore it was banned. And often content moderators don’t speak the language or the local dialect. This whole situation not only denies people their freedom of expression, but also leaves millions of users exposed to misinformation, hate speech, and violent content. And again, we have a situation with economic power concentrated in the hands of the very few, and with the masses, in the Global South particularly, losing out once again. Content moderators have to watch horrific content for very little pay, and people, mostly in the Global South, have a hard time making their voices heard and are more exposed to misinformation and violent content, because there are just not enough of these automated systems and definitely not enough content moderators, which in turn is the result of underinvestment, especially in non-English-speaking markets in the Global South. So again we see the same old story that we have heard so many times before repeating itself, but this time in a digital world.

Kristina: Thank you, Annika. This was definitely enlightening to hear, and I think the examples you bring up in this discussion are definitely worth mentioning. I feel like they can open the floor to very different discussions and topics. And also the numbers that you refer to – it’s quite striking to see the mismatch between the number of moderators and the multibillion-user audience that social media platforms have. So yeah, thanks for sharing. As we approach the end of today’s episode, we have one final item to address, and the question is: what role does social media play in war and conflict? I could contribute from my side by adding that social media can play quite an influential role in a war or conflict. The way the messaging is phrased, and the way content is restricted, may sometimes leave readers with a limited or one-sided image of reality and of what’s really happening. Also, every individual’s interpretation varies, and that too can leave the audience in the dark about what’s really going on behind closed doors. It’s important, I think, for those in power at social media organizations to stay as objective as possible, to provide equal access to the full content of anything that is being posted, and of course to provide equal rights to all groups of people regardless of where they are based, regardless of their level of education, and regardless of whether one agrees with their personal views. I think social media can facilitate conflict resolution, but it can also, on the contrary, fan the flames of war, and that is where I think this sensitive line lies. What about you, Maja? Would you like to state your view on this?

Maja: Sure! For me the role of social media in warfare is simply incredible from so many different points of view, because you can look at how we, as bystanders of a war, sitting mostly in our peaceful Western countries, are spectating a war that’s happening in real time. We can take the example of Ukraine here, because I’m sure that every day, when any of us logs into any of the social media platforms, we will see at least one post relating to the Ukrainian war, and it’s shaping our perceptions in ways that we will probably not understand for decades to come. And I think it is eventually going to influence the outcome of this war, because the opinions we form based on social media are going to influence lawmakers, and they already have – all of the sanctions, everything that other countries are doing to try to level the odds in that conflict, comes from public outrage among their citizens, and the way those citizens have been influenced is through social media. It’s also very interesting to see the effect social media has internally on countries at war, because we have two sides of a coin here: Ukraine, which has completely open social media access and President Zelensky, who goes online every day to post a video addressing Ukrainians personally, giving them information, reassuring them, trying to energize the troops, trying to spread optimism, trying to rally everyone behind this idea, and also addressing leaders and citizens of other countries. And on the other side of the coin, we have Russia, where our outrage at the war has led a lot of companies, including social media platforms, to ban access for Russians, who are now left listening only to the propaganda that their state provides, because through all of our attempts to make this as fair as possible, we have in a way drawn a curtain over Russia and, through our support, partly masked what Russians get to see. And the social media platforms played a huge role in that.
And now everyone is talking about how a lot of Russians don’t understand the problem with this “special military operation” and how it’s just “denazifying” and all of those things. But the reason for that is their inability to access content from outside. And of course, some of that is down to Russia itself banning content coming from the outside, but so many companies also decided to boycott Russia as a market. So it’s really interesting to see in how many ways social media has shaped this war, and I think that after the Syrian war, this is the conflict most shaped by social media. I’m very curious how this is all going to unfold and how big an influence social media is going to end up having in all this.

Kristina: Thank you, Maja, for your input. You definitely covered some very important and, I would also say, very sensitive topics there with the Ukrainian and Russian example. We’ll definitely have to wait and see, perhaps for years, how this is going to unfold. But at a personal level, of course, it’s also up to us what we choose to promote as we look at different social media, whether from the Russian or the Ukrainian side. What do we choose as individuals to promote? Is it hatred, or is it peace – hopefully still respecting one another and trying to find ways to coexist on this planet. But thank you for that. This was really thoughtful and really thought-provoking. Jenny, do you have anything else to add to this?

Jenny: First of all, Maja made some very thoughtful points. I also had the Ukraine war in mind when thinking about this question. From my observation, people in the West are more supportive of Ukraine, while people in, for example, China are more understanding of Russia’s perspective, and this is shaped largely by the influence of social media. So, I imagine social media’s primary role is to be a place where information from diverse perspectives can be exchanged and protected. It can be first-hand information that people share directly from the battlefields, with these users acting as watchdogs in that sense. All in all, the most important role should be to protect everyone’s voice so it can be equally heard. No side should be weighted more heavily than the other, because that creates a distorted image of what is actually happening. More importantly, these protections should not align themselves with the government policies of the country where the social media company runs its business. Neither should the algorithms be biased. In this way, social media can perhaps give the world a more or less balanced and realistic view of conflict and war, instead of today’s extremely biased and divided understandings.

Kristina: Thank you, Jenny. Thank you so much. We are about to wrap up today’s discussion, and I’d like to ask if anyone has any closing comments before we do.

This was a very interesting discussion, with a lot of thought-provoking items covered today, and we can agree that there is a lot more that can be said regarding the central theme of today’s podcast. But we hope that our group of speakers has managed to give you a little insight into this challenging topic. We, of course, thank each guest once again for joining today and hope our listeners have benefited from today’s episode. We do not say goodbye, only see you and speak to you later. Thank you.
