Twitter Takeover – The Meat and Potatoes of Musk’s Pledge to Tackle Content Moderation Issues

The Likely Future of Social Media Monitoring

Elon Musk’s journey to Twitter has been a ‘wild ride’. It should surprise few that his eventual takeover has begun on the heels of controversy, with massive changes likely in store for the platform.

Following the dismissal of several Twitter executives, Elon Musk tweeted the above, followed a day later by these tweets:

The banning of former United States President Donald Trump and former Ku Klux Klan leader David Duke, among others, has brought the issue of free speech to the forefront. Musk, however, has shown caution in approaching content moderation decisions by proposing a council. This signals a significant shift in how online censorship and free speech are handled.

We cannot deny the importance of these changes. As Taylor Hatmaker puts it, these actions may signal a new era. I believe this era will be a turning point for Twitter and other platforms, indeed for all digital spaces.

Shortly after this announcement, Musk seemed to undermine his claim about a formalised decision-making system. He replied to a tweet from Jordan Peterson’s daughter, saying, “Anyone suspended for minor & dubious reasons will be freed from Twitter jail”.

However, the key word to note is ‘minor’. Vittoria Elliott tempers expectations of Musk’s likely approach to content moderation, saying it could mean less free speech for many users across the globe, citing Musk’s May 9 tweet. Elliott notes that while such a content moderation system would benefit users in the US, whose right to free speech is protected by the First Amendment, it would not have the same effect on users outside the country, some of whom live in countries with weaker free speech protections.

For those of us who grew up in an era where digital platforms like Twitter have facilitated activism and public discourse on critical global and local issues, this shift could have harrowing outcomes.

This legal compliance will likely reduce or eliminate the ability of activists, journalists, intergovernmental organisations (IGOs) and local NGOs to post real-time content that is critical to getting their messages out to the world.

Elliott writes that it would risk users’ lives in countries with weak free speech protections and make them susceptible to ‘being punished for civil disobedience’. It could even lead to the platform being shut down or banned in these countries as governments expand censorship controls.

Therefore, the impact of Musk’s decision goes beyond the US, which appears set to benefit the most, in contrast to its effects on the developing world and on development communication.

Kristen Saloome’s report, ‘Musk plans to form content moderation council for Twitter’, with an appearance by Vittoria Elliott, for Al Jazeera English (October 29, 2022).

The Fundamentals of Content Moderation

According to Roger Brown, content moderation, also known as social media content moderation, is the practice of controlling what appears on online platforms by screening out content unsuitable for a general audience.

Spectrum Labs offers another definition: content moderation is the screening and monitoring of user-generated online content by platforms to provide a safe online environment for their users and brands. This monitoring ensures that content falls within the platform’s pre-established guidelines and rules of acceptable behaviour, together defining acceptable user-generated content (UGC). Although priorities vary by platform, the platform, its users and brands all benefit when tools and procedures exist to ensure UGC is not harmful or inappropriate.

Content moderation can be viewed as a complete and complex system involving visual interfaces, sociotechnical computational systems, and communication practices, according to Sarah West in her 2018 article in New Media & Society.

The moderation process depends on the type of content and online community, and covers both approved brand messages and user posts. It is often executed with the aid of artificial intelligence.
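
To make this concrete, the sketch below shows one way a platform might combine rule-based screening against pre-established guidelines with an AI-generated risk score. The blocked terms, thresholds and toy classifier are purely illustrative assumptions on my part, not any platform’s actual system.

    # Minimal, illustrative sketch of an AI-assisted moderation pipeline.
    # The guideline terms, thresholds and the toy "classifier" below are
    # placeholders for illustration only, not any platform's real system.
    from dataclasses import dataclass

    # Hypothetical pre-established guidelines: terms the platform disallows outright.
    BLOCKED_TERMS = {"slur_example", "threat_example"}

    # Hypothetical thresholds for an AI risk score between 0.0 and 1.0.
    REMOVE_THRESHOLD = 0.9   # very likely harmful: remove automatically
    REVIEW_THRESHOLD = 0.6   # uncertain: send to a human moderator

    @dataclass
    class Decision:
        action: str   # "approve", "human_review" or "remove"
        reason: str

    def toy_risk_score(text: str) -> float:
        """Stand-in for a machine-learning classifier.

        A real system would call a trained model; here we simply treat
        long, all-capitals posts as slightly riskier.
        """
        return 0.7 if text.isupper() and len(text) > 20 else 0.1

    def moderate(post: str) -> Decision:
        """Apply rule-based guidelines first, then the AI score."""
        lowered = post.lower()
        # 1. Rule-based screening against pre-established guidelines.
        for term in BLOCKED_TERMS:
            if term in lowered:
                return Decision("remove", f"matched blocked term '{term}'")
        # 2. AI-assisted screening for content the rules do not catch.
        score = toy_risk_score(post)
        if score >= REMOVE_THRESHOLD:
            return Decision("remove", f"risk score {score:.2f}")
        if score >= REVIEW_THRESHOLD:
            return Decision("human_review", f"risk score {score:.2f}")
        return Decision("approve", "within guidelines")

    if __name__ == "__main__":
        for post in ["Lovely sunset over the harbour",
                     "THIS IS AN ANGRY SHOUTED MESSAGE!!!"]:
            print(post, "->", moderate(post))

In a real system, the toy scoring function would be replaced by a trained model, and borderline cases would typically be escalated to human moderators rather than decided automatically.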

Content Management and Social Media for Development – The UNHCR Model

Social media platforms play an increasingly important role in civic discourse, creating a space for the public to ‘gather, discuss, debate, and share information’ and acting as communication support systems where social, political and economic life are deeply intertwined (West, 2018).

While many organisations strive to uphold the principles of freedom of expression, they must also consider content moderation and the use of social media for practical and effective dialogue. Entities like the United Nations work to ensure that the content moderation methods of each agency, including its development initiatives, reflect its social media goals, operational context and tools, recognising that this is a complex and sensitive process with several risks involved.

Therefore, while moderation processes and strategies vary by UN agency, based on my experience managing social media for the IOM UN Migration, UNHCR provides a sound explanation of the content moderation process and indicators for moderation. In its 2021 guide ‘Using Social Media in Community-Based Protection’, content moderation means applying guidelines to text, images and videos appearing on social media accounts or websites, often focusing on user submissions. Moderation involves:

  1. Monitoring and identifying potentially harmful content.
  2. Assessing content and ensuring it complies with relevant agency policies and guidance, the site or platform guidelines, Community Rules and Code of Conduct.
  3. Ensuring a positive influence on conversations involving the agency or partners, especially where those conversations could lead to harmful content or behaviour.
  4. Supporting peacebuilding, reconciliation, and countering xenophobia and racism (UNHCR, 2021).

Content moderation practices in development communication carry risks that agencies and brands are aware of. Risk assessments and sound strategies that include risk management are encouraged to minimise or prevent reputational and organisational harm. Protecting a brand’s reputation, including its credibility and trustworthiness, and removing harmful content and behaviour such as hate speech, Child Sexual Abuse Material (CSAM), misogyny, violence and radicalisation minimises destructive behaviour and improves the user experience.

However, while organisations can make every effort to ensure their content is moderated to foster an inclusive and safe community for their users and target audience, protect their reputation and that of their partners, and improve the message, they cannot control what other users do on these social media platforms.

Often, widespread disinformation on platforms can incite hatred and create a path to violent online and offline behaviour. However, such disinformation does not always violate social media companies’ terms of service or content rules (Article 19, 2022).

The Good, the Bad and the Ugly Future

Twitter’s former CEO, Dick Costolo, once referred to the platform as a “Global Town Square”, adding that Twitter is “a very public, live, in-the-moment conversational platform”. Before Musk’s acquisition, Twitter tried to balance curbing abuse or ‘orchestrated harassment’ with championing free speech by forming a Trust and Safety Council. Formed in 2016, the council included 40 organisations and experts, such as NGOs and activists, that advised Twitter on challenging policy areas. Unlike Meta’s Oversight Board, it served in a purely advisory capacity and could not make binding decisions.

There is pressure on platforms to remove harmful content. In 2021, the European Union revised its Code of Practice on Disinformation. Two statements stood out. Věra Jourová, Vice President for Values and Transparency, said that “threats posed by disinformation online are fast evolving”, while Thierry Breton, Commissioner for the Internal Market, pointed to the ongoing ‘infodemic’ and “the diffusion of false information putting people’s life in danger”.

Facebook, Twitter, YouTube and other major social media platforms have made considerable improvements and investments in policies and actions to detect and moderate harmful content.

Bone (2021) states that social media companies must ultimately aim to develop specific guidelines that may be applied consistently, irrespective of political preference or cultural values, suggesting these companies tie content guidelines to the UN Guiding Principles on Business and Human Rights. However, Bone (2021) recognises that private entities may reject the idea of a standard that binds them even though adopting rules applicable across all content regardless of viewpoint or user would ensure that content-moderation decisions are consistent.

Many organisations, particularly IGOs, have also created policies and practices to ensure their UGC responsibly supports the principles of freedom of expression when communicating on social media. However, Musk’s approach may create issues for IGOs, impeding their communications and those of their counterparts and partners in the developing world. The possibility that the platform could be banned in countries with weak free speech protections, putting at risk the lives of development aid workers who rely on real-time communication to harness support, is worrisome. It would also increase misinformation and propaganda, which, although they cannot be fully contained, will require organisations and brands to increase their capacity to deal with these issues.

It is time for those likely to be affected by impending changes in content moderation policies on digital platforms to review their own policies and communication strategies. It may also be the call to action needed for global powers to confront this threat to free speech and human rights as the internet enters a new phase.

I feel a sense of urgency is needed in considering the implications for free expression and in finding solutions before this threat becomes a reality.

Sources:

Article 19 (2022). Content Moderation and Freedom of Expression: Bridging the Gap between Social Media and Local Civil Society. Retrieved from https://www.article19.org/wp-content/uploads/2022/06/Summary-report-social-media-for-peace.pdf.

Bone, T. (2021). How Content Moderation May Expose Social Media Companies to Greater Defamation Liability. Washington University Law Review, 98(3). Retrieved from https://openscholarship.wustl.edu/law_lawreview/vol98/iss3/10.

West, S. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11), 4366-4383. Retrieved from https://doi.org/10.1177/1461444818773059.

UNHCR (2021). Using Social Media in Community-Based Protection. Retrieved from https://www.unhcr.org/handbooks/aap/documents/Using-Social-Media-in-CBP.pdf.

Photo by Alexander Shatov.
