
Hello everyone and welcome back!
While researching a new post for Re:Power Development, I stumbled upon an interesting TED Talk about AI and human bias by Kriti Sharma, an artificial intelligence technologist, business executive, and humanitarian. She asked a very important, yet often forgotten question:
How many decisions have been made about you today by AI? And how many of them have been based on your gender, your race, or your background?
As a self-proclaimed nerd whose passion lies not only in the digital ecosphere of artificial intelligence (AI) but also in a world where AI is used for the benefit of truly everyone, she has seen her fair share of prejudice, not only in the way algorithms are built, but also in the process that leads to that point.
Her talk made me wonder: as we embed our own biases into AI, can we truly expect it to work fairly and equally for everyone? Short answer: no. Long answer: a bit more optimistic.
Let’s take it back a notch. First, we need to understand…
How can AI have prejudice?
Like all technology, AI is built by humans. And because AI learns from humans, it learns from individuals’ experiences, their worldviews, and the environments they grew up in. The people who select the data an algorithm learns from also decide how its results will be applied. Their unconscious biases can thus easily slip into machine learning models, and in the absence of thorough testing and diverse teams, those biased models are then automated and perpetuated at scale by AI systems.
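To make the mechanism concrete, here is a minimal sketch in Python (with entirely made-up loan data and a hypothetical "group" attribute) of how a model trained on historically skewed decisions reproduces that skew without anyone intending it to:

```python
# A minimal sketch with synthetic data: the bias lives in the historical labels,
# and the model faithfully learns it. Nothing here is malicious by design.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

group = rng.integers(0, 2, n)      # hypothetical protected attribute: 0 = group A, 1 = group B
income = rng.normal(50, 10, n)     # income is distributed identically across both groups

# Historical approvals applied a harsher threshold to group B (the biased "ground truth")
threshold = np.where(group == 1, 60, 50)
approved = (income > threshold).astype(int)

# Train an ordinary classifier on the biased history
X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# At the same income, the model now approves group A far more often than group B
test_income = np.full(200, 55.0)
for g in (0, 1):
    X_test = np.column_stack([test_income, np.full(200, g)])
    print(f"group {'AB'[g]}: approval rate {model.predict(X_test).mean():.2f}")
```

In this toy example, removing the bias from the historical labels makes the disparity disappear, which is exactly why it matters who selects and audits the training data.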
Just look at some of the decisions AI has recently made, which Kriti mentions in her talk:
- A Black or Latino person is less likely to pay back their loan on time than a White person.
- A person called John makes a better programmer than a person called Mary.
- A Black man is more likely to be a repeat offender than a White man.
Pretty bad, right? If you are interested in this topic, you might also want to look into an interesting study by Georgina Curto and colleagues [1], Are AI systems biased against the poor?, which examined negative biases in AI and found “data evidencing the existence of bias against the poor within the three pre-trained word embeddings included in the study, namely Google Word2Vec, Twitter and Wikipedia GloVe.” Her findings have major implications for achieving the United Nations’ first Sustainable Development Goal (no poverty).
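If you want to poke at this yourself, here is a toy probe (not the paper’s actual methodology) that compares how close a few value-laden words sit to “poor” versus “rich” in a small pre-trained GloVe model, using the gensim library; the word lists are my own illustrative choices:

```python
# Toy probe of pre-trained embeddings: compare cosine similarity between
# "poor"/"rich" and a few value-laden attribute words. Illustrative only;
# the study cited above uses a far more rigorous machine learning analysis.
import gensim.downloader as api

# Small pre-trained GloVe vectors (downloads on first use)
vectors = api.load("glove-wiki-gigaword-50")

for target in ("poor", "rich"):
    for attribute in ("lazy", "criminal", "successful"):
        sim = vectors.similarity(target, attribute)
        print(f"similarity({target!r}, {attribute!r}) = {sim:.3f}")
```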
But not all is lost
The good news is, we are still in control of AI, so it is up to us to design it in a way that truly works for the benefit of everyone. Here are a few things Kriti suggests:
- Be aware of our own biases: Believe it or not, everyone has unconscious biases. Many prejudices are developed and retained subconsciously throughout life, mainly as a result of societal and familial training.
- Make sure a diverse team is building this technology: When the people building AI come from different backgrounds, the resulting systems and algorithms become more versatile. Geographical, religious, and gender diversity is key!
- We must give AI diverse experiences to learn from.
AI is (already) used for good
Below you can find three projects that are using AI to respond to development and humanitarian issues, and that we can all learn from:
- AI for Disaster Response: World Food Programme Project
The World Food Programme used the power of AI to develop strategies to forecast affected populations and produce tailored disaster assistance packages.
- AI as a solution to combat domestic violence in South Africa?
Sage Foundation has partnered with AI for Good and the Soul City Institute for Social Justice to launch rAInbow, an AI-powered smart companion that supports victims of domestic violence by sharing scenario-based stories about gender-based violence (GBV) with users.
- Using technology to stop Online Violence Against Children
Save the Children used AI as part of its transformation agenda, exploring why investment in digital technology and data can strengthen the positive impact of a not-for-profit.

Is there an AI project that was used for good that fascinated you? Share it in the comments below!
[1] Curto, G., Jojoa Acosta, M.F., Comim, F. et al. Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings. AI & Soc (2022). https://doi.org/10.1007/s00146-022-01494-z