The Hidden Impact of Global Tech Economy
AI: Ethics, Policies, and Social Change

By Elia and Andrei.

 

On October 23, 2024, the event AI: Ethics, Policies, and Social Change took place at Pompeu Fabra University (UPF) in Barcelona. The event was organized by the Political Theory Research Group of UPF, the host university; the Johns Hopkins University (JHU) Public Policy Center; and the Barcelona School of Management. The moderators, Beatriz Rodríguez-Labajos (JHU-UPF Public Policy Center) and Camil Ungureanu (Department of Political and Social Sciences, UPF), guided the discussion with the following panel:

  • Sara Suárez Gonzalo (Universitat Oberta de Catalunya), who researches AI from a feminist and critical-theoretical perspective, along with big data analysis, populism, communication, and mass media.
  • Adrián Almazán (Carlos III University of Madrid), with a background in physics and philosophy, who coordinates the research group on technology and the ecological humanities.
  • Cristina Astier Murillo (Universitat Pompeu Fabra), who focuses on analytical political philosophy, works within the Law Department, and studies deliberative democracy in relation to AI.
  • Carlos Castillo (Universitat Pompeu Fabra), specialising in web media, data retrieval, crisis data, and web search.

The event had an in-person audience and was also broadcast via Zoom; we participated online. Below, we summarise the main points and share our opinions on the key themes covered in the conference.

 

The Challenge of Accessibility and Understanding

Elia: One of the most impactful takeaways from the event was the discussion of the increasing inaccessibility of Artificial Intelligence (AI) systems, particularly generative AI. A clear example is a comment made at 20:30, noting that even experts like Geoffrey Hinton, widely considered the godfather of AI, struggle to understand how AI works. This highlights a critical issue: as AI becomes more complex, it alienates not only the general public but also those at the forefront of its development.

This lack of clarity is deeply problematic, especially when we consider how interconnected AI is becoming with governance, economy, and society. If the very architects of these systems cannot fully grasp their operations, how can we expect policymakers or the broader public to make informed decisions? Without addressing this gap, we risk creating a society governed by technologies that are neither accountable nor comprehensible.

 

Andrei: Professor Gonzalo’s counterpoint on this was also interesting. She argued that this opacity is not accidental: it favors the economic interests of corporations, as it is the basis of their business models.

She argued that since these technologies are taking center stage in our societies and are being used to make decisions that affect our lives (to diagnose disease, to organize urban transport, to decide in social services how to distribute social aid), the general population needs to be equipped with a minimal understanding of how the technology works: its basic logic, what types of problems it is useful for, and what types it is not.

I found her advocacy for a basic public understanding of this technology persuasive. She framed AI as a knowledge field: a technology that processes huge amounts of data to identify patterns and solve a specific task or problem in very controlled contexts, not a system capable of any task a human is capable of. She argued that there is currently a lot of myth-making and overly generalised use of the term AI, which only benefits the big tech companies.

I agree with her vision that it is both possible and desirable for the general public to understand the basic facts about AI: what it is for, for whom and for what it is useful, and whom it harms. None of these questions are currently part of the public debate.

 

AI and the Four Fractures of Democracy

Elia: Professor Almazán’s analysis of how AI is disrupting democracy in four key ways was another significant point of discussion. These “roturas”, meaning “fractures”, offer a framework to understand the multifaceted challenges AI poses to societal governance. 

First, the privatization and commodification of AI feeds directly into the mechanisms of a new form of capitalism, creating products and systems that serve corporate interests rather than public needs. 

Second, AI is intensifying social inequality by concentrating wealth and power in the hands of a few dominant players, further polarizing societies. 

Third, the ecological costs of AI development—through energy-intensive data centers and unsustainable digital practices—are contributing to the global environmental crisis. 

Finally, there’s the erosion of Enlightenment values, as AI reshapes information ecosystems in ways that prioritize engagement over truth, ultimately diminishing critical thinking and informed decision-making. These fractures illustrate the urgent need for a democratic approach to AI regulation that balances innovation with accountability, equity, and sustainability.

 

Andrei: I also found salient his point about fetishising technology: in our societies, we interpret technologies as objects that we use and control, as simple technological artefacts, missing the reality that they in fact embody a series of economic and political relationships.

From this perspective, he questioned the very framing of the ethics of AI use. While the question of use is relevant, the deeper social issues are the environmental impact of the technology and the social model of the infrastructure built around it.

He argued that a genuine ethical-political reflection on a technological process requires a deep investigation of all its socio-economic, infrastructural, and power preconditions, not merely a debate over where it should or should not be used. In this sense, the ethical-political questioning of AI is almost absent from public debate. We would need to talk, for instance, about the fact that AI is trained through the low-paid labor, under harsh conditions, of workers in the Global South. Without touching on this, we cannot have a real discussion about the ethics of AI.

 

What’s Next?

Elia: The conference left us with more questions than answers—appropriate, given the complexities at hand. How can policies evolve fast enough to regulate an industry that thrives on disruption? And how do we communicate these dynamics to broader audiences without alienating them?

Ultimately, the conference underscored that AI’s transformative potential comes with a dual responsibility: to innovate ethically and to distribute its benefits equitably. Whether this shift can occur within existing systems—or requires a complete overhaul—is a debate that continues.

 

Andrei: The most important point highlighted by all panelists was the focus on the structural preconditions of these technologies: who makes them, who they benefit, and at whose expense. Professor Castillo emphasised the importance of collective action (for example through social movements), because the interlocutor for this conversation is an entire industry. Therefore, individual action has little power, and expectations for change shouldn’t be focused on it.

Professor Gonzalo highlighted the importance of everyone understanding the role AI can play in society. Professor Almazán explained the damage these technologies do to the environment (the companies that produce them have ever-higher CO2 emissions, and some, like Microsoft, are resorting to nuclear plants to power them), and noted that the infrastructure, the most impactful aspect of the ethics question, is not part of the conversation.

 

Current Inequalities and Monopolistic Control

Elia: While much of the discussion around AI revolves around future risks—such as the possibility of AI surpassing human agency—I believe the real problem lies in the inequalities we are facing now. A few powerful corporations dominate the AI landscape, controlling the infrastructure, data, and decision-making processes that shape how these technologies are developed and used. These monopolistic practices not only concentrate wealth and power but also deepen existing social and economic disparities. We are facing a new version of an old system, in which technological elites wield disproportionate influence over societal outcomes. For instance, the prioritization of profit-driven AI systems often leads to the marginalization of vulnerable communities, who bear the brunt of automation’s negative effects, from job displacement to biased algorithmic decisions. If we fail to challenge the monopolies now, we risk cementing a future where technology serves the few at the expense of the many.

While it is important to carefully evaluate and address every potential risk related to artificial intelligence—such as its impacts on democracy or its hypothetical domination over humanity—the current inequality surrounding AI and the monopolistic control of big tech companies over this technology is a far more pressing concern. The dominance of a few large corporations in the development, deployment, and governance of AI (a form of technocracy) is exacerbating social and economic inequalities. These companies not only control the data and infrastructure needed for AI but also shape how it is integrated into societies, prioritizing profit over the public interest. While the potential risks of AI domination, such as its ability to overpower human agency, are worth exploring, these scenarios are speculative and distant. The panel’s discussion at 44:00 touched on these risks, but I believe the focus should remain on the tangible issues we face today. Inequality, ecological degradation, and social fragmentation driven by the monopolistic practices of tech giants are not hypothetical—they are happening now.

The monopolistic practices of tech giants are already having real, tangible impacts on our lives. The most urgent issue, therefore, is addressing this concentration of power and ensuring that AI development is more inclusive, equitable, and accountable. If we fail to tackle this now, the foundation we build for AI’s future will be flawed, making it harder to address those distant risks later.

 

Andrei: Given the current state of affairs in digital technologies—owned and controlled by big tech corporations, run with huge amounts of resources that harm the environment, and trained by workers under exploitative conditions—it should not be surprising how little regulation exists in the field.

The excuse that the novelty of these technologies accounts for the lack of guardrails becomes less persuasive as time passes. Professor Castillo noted at the conference that the sparse regulation of digital technology contrasts sharply with the extensive regulation of other engineering fields. The European General Data Protection Regulation (GDPR) is one of the first and only major regulations in the space of digital technologies. As these technologies, including AI, have an ever-greater impact on our daily lives, stronger regulation should follow.

On the topic of monopoly, Professor Almazán raised the point that digital industries have gone through a process of accumulation in unregulated spaces, expanding into monopolies that generate enormous wealth for the few. The digital tools produced by these corporations become indispensable for entire industries and institutions, under the noble guise of digitalisation. In the current configuration, there are now companies with the capacity to paralyze educational, health, judicial, and logistical systems that rely on their software.

Installing these tools in all public institutions, and making public actors dependent on the products of private big tech corporations, makes regulation less feasible and creates a great inequality of power.

I agree with Professor Murillo when she argues that, at this juncture, social movements need to expand their role from monitoring institutions to also monitoring the effects and impact of these technologies and the companies behind them. This is something social movements already do, of course, and it is a key point in this configuration. With states having limited ability or willingness to regulate the very systems they rely on in their infrastructure, with economic actors benefiting from the extractivist logic behind the development of digital technology, and with individual action being limited in the face of entire industries and structural issues, collective action and social movements remain the best-positioned actor to pressure all other stakeholders into reshaping this power structure into a fairer and more equitable one.

 

The conclusion we can reach from the conference is that the accumulation of power by tech companies is the most acute driver of injustice in the infrastructure of digital technologies. It should therefore become a primary topic for social movements seeking to drive change on this issue.

What are some ways in which you would tackle this topic?

What types of collective action do you think would be most effective to address this?

 

Image: NASA Earth Observatory.

One comment

  1. This is an insightful summary of such a complex and urgent topic. The focus on the “four fractures” of democracy and the structural inequalities driven by AI was especially eye-opening. I found Professor Gonzalo’s point about the opacity of AI being intentional, to benefit corporate interests, particularly striking. It really underscores how much work needs to be done to make AI systems more accessible and accountable to the public.

    The emphasis on collective action and the role of social movements as a way to challenge monopolistic control is incredibly important. In a world where states seem increasingly reliant on these tech giants, it’s clear that public pressure and grassroots efforts will be vital in pushing for equitable regulation.

    I also appreciated the discussion on the environmental impact of AI, which often gets overlooked. The fact that companies are resorting to nuclear power to sustain their operations is alarming and shows how unsustainable these technologies are under the current model.

    Thank you for bringing all these perspectives together. It’s a lot to think about, but I hope this conversation continues to grow and leads to tangible change.

    Xanthia Mavraki
