HumanitarAI delves into the dynamic intersection of datafication, AI, and social media, exploring how these tools are reshaping the way we approach humanitarian efforts and communication for development.
 
Are we debunking fake news with fake news?

As I mentioned in my past blog posts, deepfakes pose a major threat to women everywhere. But unfortunately, deepfakes are not found only in the darkest corners of the internet. A common misconception is that deepfakes live exclusively on the dark web; nowadays, deepfakes are also infiltrating scientific publications to mislead their readers. The concerning part is that even experts in the field have difficulty differentiating between real and deepfake images. Videos of politicians and research publications that we believe to be true could be artificially engineered to spread ‘fake news’. Some of today’s most trusted world leaders have been victims of deepfakes, with fabricated videos showing them making derogatory and offensive remarks in order to wreak havoc among their populations.

Although there has been recent debate about the upsides of deepfakes, where do we draw the line in terms of the ethics of AI? The production of deepfake-altered publications and political campaigns widens the gap of inequality between creators and viewers. Censoring what is posted on social media would erode individual freedom, but keeping citizens and readers informed with valid, well-intentioned data is also a right they deserve.

What can we do to make sure we do not fall victim to this AI technology? 

Deepfakes in Research 

Deepfakes have unmasked themselves to be the latest parasite in the research industry. In 2014, a new form of technology was developed that posed an international threat few saw coming: the generative adversarial network (GAN), a form of artificial intelligence (AI) able to produce ‘highly realistic-looking synthetic contents’ (Wang et al., 2022, p. 1). A GAN pits two neural networks against each other: a generator that fabricates content and a discriminator that tries to tell real from fake, with each improving against the other until the synthetic output becomes convincing.
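To make that adversarial idea concrete, here is a minimal sketch of a GAN training loop in PyTorch. It is purely illustrative: the tiny fully connected networks, the 28x28 image size, and the hyperparameters are my own assumptions, not the architecture behind any of the deepfakes discussed in this post.

```python
# A toy GAN in PyTorch: a generator and a discriminator trained against
# each other. Sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim = 64        # size of the random noise vector the generator starts from
image_dim = 28 * 28    # a small flattened grayscale image (assumed size)

# Generator: learns to map random noise to a synthetic image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: learns to score an image as real (1) or synthetic (0).
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    """One adversarial round on a batch of real images."""
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, latent_dim))

    # 1) Update the discriminator to separate real from fake.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1))
              + loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Update the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
```

After enough rounds, the generator's fakes become good enough that the discriminator, and as the studies below show, even human experts, can barely do better than guessing.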

Unfortunately, not everyone on social media is an avid researcher like us. If a published research article looks even remotely legitimate, people will believe its content. And with this new deepfake technology, even experts in the relevant fields have a hard time recognizing illegitimate research results.

Picture from Wang et al. (2022, p. 2)

The picture above features an image of a cancer-free esophagus on the left; the two images in the middle and on the right were artificially generated by a GAN to show cancer spreading within the esophagus. Although GAN software has now been banned by several platforms across the internet, it has spawned many similar (but different enough) tools that can produce the same results (Wang et al., 2022, p. 2). According to a recent study, even experts are unable to differentiate between an original scientific image and its synthetic counterpart (Chen et al., 2021). Experts in ophthalmology were asked to identify synthetic images of the eye’s blood vessels, and only about half of them could correctly tell the real images from the fakes.

If even experts are unable to spot the deepfake, how can we protect our world from the technology? Even if a publication came from a trustworthy source, the content may have already been altered. Deepfakes are an epidemic in our society, causing readers to spiral, wondering whether what they are reading or viewing is real. There are technologies to identify and detect deepfakes in science publications; however, a popular champion deepfake detection algorithm scored less than 50% accuracy in detecting GAN-generated images (Wang et al., 2022, p. 3). Not to mention, these technologies are harder to access for those outside the field.
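To see why detection is so hard, consider what a detector is under the hood: just another classifier, as in the minimal sketch below. The network shape and 28x28 grayscale input size are my own illustrative assumptions, not the detector Wang et al. (2022) evaluated. A classifier like this learns the statistical artifacts of the specific generators in its training data, so images from a new, unseen GAN can push its accuracy back toward a coin flip.

```python
# A toy deepfake detector in PyTorch: a small convolutional classifier
# that labels an image as real (1) or synthetic (0). Purely illustrative.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 1), nn.Sigmoid(),   # assumes 28x28 grayscale inputs
)

@torch.no_grad()
def accuracy(images: torch.Tensor, labels: torch.Tensor) -> float:
    """Fraction of images the detector classifies correctly.

    On a balanced test set, a score near 0.5 means the detector is doing
    no better than a coin flip -- the kind of result Wang et al. (2022)
    report for images from generators the detector was not trained on.
    """
    preds = (detector(images) > 0.5).float()
    return (preds == labels).float().mean().item()
```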

Although GANs have acquired a relatively negative connotation surrounding the creation of synthetic images, they have also been trained in the livestock industry to monitor animal health and to recognize pathogens in cancer patients (Neethirajan, 2021). One of the most recent developments of deepfakes is regenerating the voice of a person who is no longer able to speak: GANs allow the user to upload an old recording of the patient’s voice, which the patient can then use to communicate through a list of common words and phrases (Neethirajan, 2021).

GANs, it appears, could have life-changing effects on the world; however, all it takes is for the algorithm to fall into the wrong hands. But does the good outweigh the bad?

Deepfakes in Politics 

As we approach the 2024 elections here in the United States, our ‘spider-senses’ need to be on high alert when scrolling through social media. Deepfakes are often used to sway voter preference. My TV will soon be filled with campaign ads from presidential candidates asking for our votes, and a common tactic is to undermine a rival by destroying their credibility.

As you can see in the table below, election-oriented deepfakes are meant to toy with viewers’ emotions (Diakopoulos & Johnson, 2021). Several of the scenarios featured are simply exaggerated situations. If a politician really does have a military background but creates a deepfake for his campaign advertisements placing him in the midst of war, does that qualify as misinformation or disinformation (Diakopoulos & Johnson, 2021)? And how can we differentiate the two in order to draw clear lines in the sand?

Table from Diakopoulos & Johnson (2021)

As long as viewers are informed that the video of their potential candidate (or rival) is fabricated, the intent is not necessarily unethical. However, no one wants to vote for a liar, so the likelihood of such a disclosure is very small. Deceiving the viewer is the main motivation for creating deepfakes within politics, and experts are concerned that fabricated videos are causing voters to vote ‘for the deceiver’ (Diakopoulos & Johnson, 2021). I can only imagine the downward spiral this could cause for my country, as someone malicious and dishonest could now take office.

Nowadays, it is hard to distinguish legitimate news from ‘fake news’, especially amid the polarized politics of my home country, the United States. In 2018, a video of former US President Barack Obama making very offensive statements was shared online (Lucas, 2022). The video spread like wildfire because everyone who saw it believed it to be real, and those who disagree with him may still claim it is. Jordan Peele, a famous American actor and producer, turned out to be the man behind the video, which he created as a PSA to us all (BuzzFeed, 2018). He showed us exactly how easy it is to look and sound like President Obama with the click of a button. As you can imagine, this video put experts on high alert, with the realization that the field of international and domestic politics is no longer safe.

View the video message here: You Won’t Believe What Obama Says In This Video! (BuzzFeed, 2018)

Now I ask you, my readers: did you spot the deepfake immediately? I’m sure you are mumbling to yourself, ‘Of course! How could anyone not see that it was AI?’ Well, Köbis, Doležalová, and Soraperra show that viewers are overconfident in their ability to spot a deepfake and fail most of the time. They concluded in their 2021 study that we cannot rely on ourselves to identify a fabricated video: their participants correctly recognized the deepfakes only about 50% of the time (Köbis, Doležalová, & Soraperra, 2021, p. 11).

Ethics of AI

Every aspect of deepfakes radiates unethical intent. The quality of the information and the intentions behind it are harmful, not to mention that the changes made by deepfakes are almost always biased, trying to sway the viewer’s opinion in one direction (Zwitter & Gstrein, 2020). How can we ensure that what we are viewing on the internet is true? Does censorship come into play if a government were to monitor and remove modified data?

The issue comes down to security versus freedom. As Zwitter and Gstrein (2020) elaborate, the need for governments to monitor and control what is posted online does collide with individual freedom. However, one of the key points of the Signal Code of the Harvard Humanitarian Initiative is the right to protection, which shields citizens from misuses of data; several such cases of misuse were covered above and in earlier blogs (Zwitter & Gstrein, 2020). With this in consideration, the government has a responsibility to protect us from any abuse of data, especially during a crisis. With deepfake technology now a free-for-all, this seems harder and harder to do.

I can imagine that here in the United States, crossing the line on individual freedom would cause an absolute uproar. The country’s polarization keeps tensions thick, and censoring anything on people’s social media would not go over well. I recall that as soon as Meta began flagging posts that contained misinformation, the far right took to its own platforms, where it could freely share posts claiming the Biden election was rigged.

There is also a growing gap between those who are familiar with data and technology and those who are not. As deepfakes continue to sweep the media, people who are unfamiliar with this technology and how it works are more likely to fall victim to false information. This is a major threat to international development, as many agencies use the spread of information as a tool. The term ‘information poverty’ was coined to explain these inequalities in access to data and information, all effects of community, society, and individual culture (Cinnamon, 2020). Are these tactics of spreading misinformation just widening the gap even further?

In this blog, we have discussed different forms of data produced by the public. These forms of ‘user-generated data’ tend to exclude those who are not involved and can negatively affect them as well, such as by leading them to fall for misinformation (Cinnamon, 2020). Perhaps the first step is to inform the public that this AI is alive and prominent across their social media.

At this point, the ethics of GANs and other deepfake algorithms are not in question: when wielded with malicious intent, they are clearly unethical. The new question is how we can separate the good from the bad. GANs are able to better the lives of those who are disadvantaged, but how can we limit the bad so that the good can continue to grow?

Concluding Reflections 

Throughout the last couple of months of my new blogging experience, I have unmasked the horrors of new developments in AI. As you can see, they are haunting almost every corner of the internet. In my first post, I introduced the term deepfake as a new form of AI technology. Then, in my interactive post, I uncovered the fears that women face every day, as Nina Jankowicz discussed her experience of being a victim of sexual harassment because of deepfakes. In this final blog post, I have delved into the realization that even when we believe something on the internet to be true, such as a scientific publication or a presidential address, it could already have been synthetically modified.

Of course, I do not mean to scare you. AI has brought a lot of good to the world, and that I cannot deny. However, unless restrictions and boundaries are put in place, AI poses a legitimate threat to our society. Developments in technology have a tendency to marginalize the marginalized (Taylor, 2017). As we covered in my interactive post, women in Africa are being shunned by their families and societies because of deepfakes.

We already knew that disinformation was common practice throughout politics. Now that deepfakes are running rampant through our society, how can we be sure we are not debunking fake news with fake news? The scariest part of all my studies on the subject is that we probably come across deepfakes every day and have no way of knowing.

Working with my group has been a rewarding experience, as we learned from each other throughout the entire process. As we wrap up our final post, we hope to have the opportunity to work together again in the future. Although this is my final post (on this blog), I would love to hear from you in the comments below! 

References

BuzzFeed. (2018). You Won’t Believe What Obama Says In This Video! YouTube. Retrieved from https://www.youtube.com/watch?v=cQ54GDm1eL0

Chen, J., Coyner, A., Chan, R., Hartnett, M., Moshfeghi, D., Owen, L., Kalpathy-Cramer, J., Chiang, M., & Campbell, J. (2021). Deepfakes in ophthalmology: Applications and realism of synthetic retinal images from generative adversarial networks. Ophthalmology Science, 1(4). https://doi.org/10.1016/j.xops.2021.100079

Cinnamon, J. (2020). Data inequalities and why they matter for development. Information Technology for Development, 26(2), 214–233. https://doi.org/10.1080/02681102.2019.1650244

Diakopoulos, N., & Johnson, D. (2021). Anticipating and addressing the ethical implications of deepfakes in the context of elections. New Media & Society, 23(7), 2072–2098.

Köbis, N., Doležalová, B., & Soraperra, I. (2021). Fooled twice – people cannot detect deepfakes but think they can. SSRN Electronic Journal, 1–17.

Lucas, K. T. (2022). Deepfakes and domestic violence: Perpetrating intimate partner abuse using video technology. Victims & Offenders, 17(5), 647–659. https://doi.org/10.1080/15564886.2022.2036656

Neethirajan, S. (2021). Is seeing still believing? Leveraging deepfake technology for livestock farming. Frontiers in Veterinary Science, 8. https://doi.org/10.3389/fvets.2021.740253

Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society, 4(2). https://doi.org/10.1177/2053951717736335

Wang, L., Zhou, L., Yang, W., & Yu, R. (2022). Deepfakes: A new threat to image fabrication in scientific publications? Patterns, 3(5), 1–4.

Zwitter, A., & Gstrein, O. J. (2020). Big data, privacy and COVID-19 – learning from humanitarian expertise in data protection. International Journal of Humanitarian Action, 5(4).