HumanitarAI delves into the dynamic intersection of datafication, AI, and social media, exploring how these tools are reshaping the way we approach humanitarian efforts and communication for development.
 
Queer in the Machine: The Dark Side of Algorithms for LGBT+ Rights

I recently came across an article on the potential negative impacts of artificial intelligence on the sexual and gender diverse community. One of the most pressing concerns for LGBT individuals in the digital age is the erosion of privacy. Online communities can be important spaces for connecting with others, especially for those who may not be out to their offline social circles. However, these platforms often collect vast amounts of personal data, which can be misused or exposed. People may fear that sensitive information, such as their sexual orientation or gender identity, could be weaponised against them.

Hacking the Cis-tem

A paper from 2019 (cleverly named “Hacking the Cis-tem”) found that when transgender residents in Great Britain tried to correct their gender on government IDs, they encountered a new computerised system programmed to trigger “compatibility check failures.” The author, Mar Hicks, documents how this “failure” mode had been deliberately programmed so that trans people would not be allowed to exist, except on a rare, case-by-case basis. Though this practice was abandoned in 2011, it remains one of the earliest examples of algorithmic bias.

LGBT individuals already face historical and ongoing discrimination, and AI can unintentionally amplify these biases. For example, algorithms used in hiring processes or lending decisions might discriminate against LGBT applicants, perpetuating economic disparities. LGBT people, especially those living in regions with less accepting attitudes, may be more vulnerable to being tracked and surveilled because of their online activities. This surveillance can have chilling effects on their freedom and safety. The increasing availability of data, and in particular of data created as a by-product of people’s use of technological devices and services, has both political and practical implications for how people are seen and treated.

Credit: Alejandro Ospina

From online to offline

AI can escalate the prosecution tactics of homophobic governments, enabling them to monitor and punish individuals with unprecedented speed and sophistication. Sooner rather than later, prejudiced governments will have easy access to tools that allow them to target activists for prosecution by analysing their online activity, mobile phones, streaming history, ridesharing services and so on. The Russian government has already launched an AI-driven system aimed at identifying “illegal” content online to enforce the “gay propaganda” law. Their system (Oculus) will be able to read text and recognise illegal scenes in photos and videos, analysing more than 200,000 images per day at a rate of about three seconds per image.

The LGBT community isn’t the only group vulnerable to being exposed by datafication. International Safe Abortion Day was marked a few weeks ago. In the US, fitness apps that track menstrual cycles could be used to penalise someone seeking an abortion. The personal health data stored in these apps is among the most intimate types of information a person can share. It’s not uncommon for companies to share data with law enforcement, and if abortion or LGBT activism were banned, how would we know that these companies wouldn’t comply with police requests in those cases as well?

A rights-based approach to AI

Data and AI have so far been a technical revolution, not yet connected to a social justice agenda by the actors involved. Meanwhile, data-driven discrimination is advancing, but awareness of it and the tactics to combat it are not. An obvious starting point for addressing the issues surrounding AI and algorithms is more diverse representation among those who design AI systems, and more direct engagement with the communities that might be impacted.

We need to determine ethical paths through a datafying world and start taking a legal and rights-based approach to AI, rather than just talking about ethics. Concepts like fairness and transparency are vague and culturally relative: how would subjective ideas around “fairness” be applied by prejudiced governments and companies? Human rights, on the other hand, are universal. They apply to all individuals, regardless of their identity, location, or circumstances. This not only promotes consistency but also guarantees that AI respects fundamental rights everywhere, regardless of jurisdiction.

I will end this post with a quote from Mar Hicks, from the paper mentioned earlier:

“Technologies are never neutral, but sometimes seem so, usually right up to the point when we realise they’ve caused [an] irreversible change.”