OK, I’m back, and I’ve read some more. If you have a moment to spare for some more theoretical thought, please follow along as I try to widen the perspective from my previous post (“Responsible AI – Who’s in the Driver’s Seat?”) and look at how big data and AI relate to technology’s role in “development”. (I think you will find it worthwhile, and as before, I can’t wait for your comments.)
In my previous post, I was reflecting on responsible AI and who’s in the driver’s seat of the big data/AI transformation, especially considering the reliance on the private sector in the Global south. In this post, I’ll try to frame these concerns within critical social science, ICT4D (Information and Communication Technology for Development) research, and post-development thought. (See the reference list at the end if you want to read more about the different perspectives I bring up.)
In trying to be reflexive, I want you to bear in mind that this is a biased selection very much based on my own background and interests (which I hope you have gotten a sense of in my previous posts). This is important since we always need to understand research in this way – the research questions we ask, and how we interpret the answers, are filtered through the backgrounds, disciplines and assumptions we are all made of. This is also related to my interest in post-development, i.e., how could “development”, or rather, the future, be visualized beyond the paradigms that have formed us?
But let’s start from a critical social science perspective on technology and AI.
AI as a Social Construct and Site of Power
“Technology is neither good nor bad; nor is it neutral.”
(Kranzberg 1986, cited in Lindgren & Holmström 2020, p. 2).
The quote above is referenced in a special issue on AI from the Journal of Digital Social Research (open access) and works as a good introduction to how AI is approached from a social science perspective. One of the main points is that technology, such as an AI system, can’t be neutral since 1) it’s created and shaped by humans, and 2) its use and interaction with society has social and human consequences beyond its immediate purpose; consequences which differ depending on where and who you are (Lindgren & Holmström 2020, p. 2).
So, according to this, we could explore two interlinked processes when it comes to AI systems:
- How, by whom and why are they created?
- What happens when they get out and start “interacting” with society?
With AI being “imbued with social intentionality” in this way, it also represents a site of power (p. 4), something treated within the sociology of technology. Here, technology is understood as constructed through a social process of interaction and negotiation among different social groups, where the dominant groups imprint their interpretations on a given technological path, which is eventually taken for granted and conceived of as the “best” technological “solution”. From this perspective, processes of technological development are viewed as political struggles (Pozzebon & Arruda Fontenelle 2018, pp. 1758-1759).
With this in mind, we turn to the perspective of ICT4D research.
The ICT4D Perspective
From an ICT4D point of view, our interest regarding the second process outlined above especially concerns technology’s impact on “deliver[ing] some part of the international development agenda in a developing country” (Heeks 2017, p. 10), i.e., does the AI system’s interaction with society lead to development? To answer this question, we need to define development, and this definition will shift depending on the development paradigm (or discourse). Since the end of WWII, different paradigms have been emphasized, never fully replacing each other, and sometimes reappearing in a new shape. Let’s make a brief (and generalized) review of the rise of these paradigms and their relation to ICT4D before continuing (based on Heeks 2017, pp. 18-29):
- The 1950s: ICT4D 0.0 (data processing, government-driven) took shape under the modernization paradigm, in which the industrialized countries transferred technology to the “underdeveloped”.
- The 1960s: dependency theory shifted the focus to the “underdeveloped” breaking free and creating their own technology (although this was often led by local elites).
- The 1970s: a basic needs development focus emerged to compensate for the resulting technological deficiencies.
- The 1980s: the rise of the neo-liberal paradigm, where market forces would lead the way through a “free flow” of technology, eroding much of the local technological capability that had been built up.
- The 1990s: ICT4D 1.0 (internet, NGO-driven) arrived with an attempt to even out neo-liberalism’s lack of focus on the poor through the human development paradigm, prioritizing ICT4D in health, education and income opportunities for the least well off.
- The early 2000s: ICT4D 2.0 (mobile phones, greater involvement of the private sector) developed alongside the sustainable development paradigm, emphasizing ICT4D initiatives that contribute to reducing inequalities and to environmental sustainability.
ICT4D as a research field began within the information systems field in the mid-80s (focusing on identifying cultural barriers to ICT implementation in developing countries) and has since become progressively more interdisciplinary (including perspectives from e.g., anthropology and development studies), as well as more critical, questioning e.g., the nature of development and the role of new technologies (Walsham 2017, pp. 19-24). According to Walsham (2017, p. 24), more research is still needed on the political aspects of ICT4D to answer questions such as who is pushing what technologies, and why.
“[…] a major theoretical challenge for the ICT4D field [i]s to strengthen its capacity to associate ICT innovation with socio-economic development.”
(Avgerou 2010, cited in Walsham 2017, p. 30).
According to Heeks (2017, pp. 24-25), ICT4D is most strongly backed by the modernization and neo-liberal development paradigms, in which technology plays a key role in achieving development: it’s assumed that when technology starts to interact with society, it will be for good. Although the historical review above would place these paradigms in the past, they still have a strong influence today (p. 25). Next, we’ll have a look at a few examples.
AI, Modernization and Technological Determinism
The way big data and AI are believed to represent an objective truth through a new form of “higher knowledge” (Boyd & Crawford 2015, cited in Read et al. 2016, p. 10) forms part of modernization’s positivistic ideals, where a messy social world can be brought into the formal study of the natural sciences, leading the way to continued progress and development (Svensson & Poveda Guillen 2020, pp. 75-78).
This way of thinking easily leads to technological optimism (technology is for good) and determinism (there’s no other way), something cultural anthropologist Govia (2020) has identified through her ethnographic studies as central elements of the worldview of AI professionals and scientists. For example, on the topic of ethical practice, the CEO of an AI start-up rejected any type of regulation, explaining it would be better not to stop the “trajectory of technology” (p. 55). Similarly, among AI students and professors, the prevailing interpretation of the social implications of AI was that technology is a purely good thing with the potential to solve all problems (p. 51). In relation to this, Taylor and Broeders (2015, p. 236) have warned about a form of “solutionism” in which engineering solutions are believed to be able to deal with deep structural and political problems.
Another example comes from the humanitarian field, where Read and colleagues (2016, p. 2) identify a form of technological determinism in the way big data developments are driven by “what is possible rather than what is needed”. According to them, there’s a risk that new technologies become naturalized and insufficiently questioned, which excludes other methodologies and contributes to the further empowerment of technocratic specialists (p. 12).
“[…] the new aspiration towards hubristic big data processing is just another step in the same modernist process of the production of statistical truth.”
(Read et al. 2016, p. 12).
These examples imply that the modernization paradigm is still present in the corporations, academia and humanitarian organizations working with the development and implementation of AI systems, keeping alive the assumption of a linear development path to which everyone needs to catch up. ICT4D’s traditional focus on “digital divides” can also be said to follow this logic, since it frames the main problem as being more or less included on this same path (Mann 2018, cited in Cinnamon 2020, p. 215).
AI, Neoliberalism and the Role of Private Corporations
The private sector has steadily increased its funding of and involvement in ICT4D projects (Heeks 2017, pp. 82-83), not least when it comes to big data and AI in the Global south. Taylor and Broeders (2015, p. 232) describe a trend where private corporations take advantage of the development discourse underpinning big data public-private partnerships (PPPs) to justify their search for new markets. Through the logic of “informational capitalism” (more data = more power), private corporations are empowered within these partnerships and thereby become primary actors (rather than contractors) in development planning and implementation (pp. 229-236).
Although this “data-driven development” results in more visibility for populations in the Global south who previously were not covered in statistics, the original purpose of the data-gathering (i.e., profit) risks introducing biases and will not necessarily result in greater representation for these populations (pp. 230-232). Since most countries in the Global south don’t have privacy or data protection laws, it also increases the risk of misuse (pp. 232-236). Further consequences of these PPPs are more top-down projects, reducing the voice and influence of those who are supposed to benefit (Heeks 2017, p. 84), and (as with neo-liberal influence in general) the pushing out of smaller-scale actors with potentially better local knowledge (Taylor & Broeders 2015, p. 232).
This makes PPPs a crucial factor in establishing and amplifying digital data asymmetries (p. 3), i.e., a site of power.
Post-Development and Tecnologia Social
Based on the above, and in line with my previous post, I find myself joining some of the structural critique that has been directed at the idea of ICT4D, such as Pieterse’s (2010, cited in Heeks 2017, p. 25) argument that it operates within a modernization/neo-liberal development paradigm serving the economic and political interests of a narrow corporate elite.
“As we are thrown into a digital existence […], digital tech giants and data scientists are increasingly powerful centres around which our existence gravitates.”
(Svensson & Poveda Guillen 2020, p. 78).
The quote sounds dramatic, but it points to a future in which technological evolution brings us to a less democratic society, created by an adherence to the modernization/neo-liberal development paradigm. Let’s look at another, more optimistic, view:
“We should be able to determine our interactions with technology by debating and, if necessary, resisting and proposing different paths.”
(Taylor 2017, p. 12).
So, how could such “different paths” be imagined?
Since the early 2000s, in parallel with the sustainable development paradigm, an “anti-development paradigm” referred to as post-development has evolved (Heeks 2017, p. 24). Based on a view of the destructive consequences of development from colonialism to neoliberalism, with its focus on economic growth regardless of human suffering, post-development seeks a reorientation of imaginaries and practices that displaces development as a central organizational principle of social life (Pozzebon & Arruda Fontenelle 2018, p. 1759). What I find relevant for our discussion here is the critique of how modernization and neoliberalism continue to underpin the development discourse(s), including the idea of science as the only way to “progress”, leading us to focus on tackling our fears of the future with “smart” technologies (Kothari et al. 2019, pp. xxii-xxvii; Sachs 2019, p. xv). Post-development doesn’t reject technological innovation as such; depending on its application it can be sustainable and just (or not) (Heeks 2017, p. 24), but at the same time it stresses that technology is not what will save us, since what’s needed is a more fundamental socio-cultural transformation (Kothari et al. 2019, p. xxvii). As Ribeiro (2019, p. 53) argues, when technology is presented as the solution, it serves those who control these technologies, and she points to AI (among other technologies) as primarily a tool to increase profits for large corporations. Again, the solution to this problem is not foremost to make those corporations more responsible or accountable (Kothari et al. 2019, p. xxiv); instead, other actors and perspectives are needed.
With its roots in these post-development reasonings, but at the same time recognizing the critique post-development has received for overgeneralizing development history, romanticizing grassroots/indigenous potential, and failing to provide alternatives, I find the Latin American concept of tecnologia social (social technology) (Pozzebon & Arruda Fontenelle 2018, p. 1763) relevant for our discussion. Pozzebon and Arruda Fontenelle (2018, p. 1750) ground this concept in a view of the Western-based historical path of technology development as one of the main sources of social and economic inequalities: a path characterized by the quest for increased efficiency (in terms of economic value), intensive use of capital, resources and specialized knowledge, and increased mechanization, at a high environmental and human cost (p. 1753). In contrast, the primary objective of tecnologia social is well-being, forcing a rethink of technological development (pp. 1762-1763). It also calls into question the decision-making processes behind the implementation and use of technologies and tries to modify them in favour of local communities rather than private corporations (p. 1758). It doesn’t reject so-called “technological expertise”, but tries to integrate it from an indigenous/local knowledge point of view. In this way, it differs from dependency theory’s take on technological development (as described above) in two ways: it secures a political process that prevents local elites from taking ownership of technological development, and it takes advantage of external knowledge (without letting it take over) (p. 1761).
Tecnologia social converges with the concepts of buen vivir (development as “living well” instead of growth) and de-growth (downscaling production and consumption to achieve long-term environmental and social balance without risking well-being) (pp. 1761-1763). All three concepts represent alternative ways of envisioning the future by questioning the current development paradigms’ view of technological progress.
Conclusion (and a few reflections on my blogging)
So, to conclude: if we take a critical view of AI as a social construct and site of power and apply it to the field of development work, it opens up perspectives from which we can see how persistent ideals from the modernization and neo-liberal development paradigms lead to technological determinism and increased power for private corporations, with potential (and real) negative impacts, especially for the most vulnerable. With inspiration from post-development thought, I have argued that, in our quest to form a more just and sustainable future, we could benefit from other ways of envisioning technology development, and from there also the role of big data and AI.
Since this will be my last individual post on this blog, I would also like to leave you with some of my reflections on the experience. I’ll be honest: I had never blogged before, and it’s been quite challenging to find the right tone of voice, as well as to decide how much of my own opinions I’m ready to share. Because of my job, I’m used to writing more educational texts where I talk about the benefits of involving users/people in the innovation process, or reports outlining the results of a study I’ve conducted. In both cases, as opposed to blogging, I can easily back up my writing with my presentation skills, and in that way add a personal touch that (hopefully) makes it more interesting. I also have experience of academic writing, not least during this past year, in which I’ve been following two master’s programs (one in communication for development and the other in public health). Because of its conventions and rules (and because of the amount of academic reading I’m doing), I find that writing style easier than blogging. Especially in this post, I know I easily fall back into academic writing, which in many ways feels much “safer”. But I also think I’ve managed to step out of my comfort zone in some ways and tried to write in a more personal and (again, hopefully) more engaging way. Apart from your comments on the topics in this post, I would also love to hear your thoughts on this.
Thank you for reading.
References:
Cinnamon, J. (2020). Data inequalities and why they matter for development. Information Technology for Development, 26(2): 214-233.
Govia, L. (2020). Coproduction, ethics and artificial intelligence: A perspective from cultural anthropology. Journal of Digital Social Research, 2(3): 42-64.
Heeks, R. (2017). Information and Communication Technology for Development (ICT4D). Abingdon: Routledge.
Kothari, A., Salleh, A., Escobar, A., Demaria, F. & Acosta, A. (Eds.). (2019). Pluriverse – A Post-Development Dictionary. New Delhi: Tulika Books.
Lindgren, S. & Holmström, J. (2020). A social science perspective on artificial intelligence: Building blocks for a research agenda. Journal of Digital Social Research, 2(3): 1-15.
Pozzebon, M. & Arruda Fontenelle, I. (2018). Fostering the post-development debate: the Latin American concept of tecnologia social. Third World Quarterly, 39(9): 1750-1769.
Read, R., Taithe, B. & Mac Ginty, R. (2016). Data hubris? Humanitarian information systems and the mirage of technology. Third World Quarterly.
Ribeiro, S. (2019). Geo-Engineering. In Kothari, A., Salleh, A., Escobar, A., Demaria, F. & Acosta, A. (Eds.), Pluriverse – A Post-Development Dictionary. New Delhi: Tulika Books.
Sachs, W. (2019). Foreword: The Development Dictionary Revisited. In Kothari, A., Salleh, A., Escobar, A., Demaria, F. & Acosta, A. (Eds.), Pluriverse – A Post-Development Dictionary. New Delhi: Tulika Books.
Svensson, J. & Poveda Guillen, O. (2020). What is data and what can it be used for? Key questions in the age of burgeoning data-essentialism. Journal of Digital Social Research, 2(3): 65-83.
Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society, July-December: 1-14.
Taylor, L. & Broeders, D. (2015). In the name of development: Power, profit and the datafication of the Global south. Geoforum, 64: 229-237.
Walsham, G. (2017). ICT4D research: reflections on history and future agenda. Information Technology for Development, 23(1): 18-41.