In my previous post I touched on some of the problems caused by AI and social media algorithms, ranging from rather small ones to outright existential ones. I argued that our free-for-all capitalism does not provide safeguards against AI derailments, and I argued for regulatory frameworks so that our common future can include coexistence with artificial intelligence.
What are the problems we need to address?
I believe these regulations are needed even for our most minute issues (minute only in comparison to global threats), such as intellectual property, data privacy, identity theft, and cross-country workers’ rights (Berg, 2022). But when we look at existential threats, such as AI in warfare (think of the killer robots of the Russo-Ukrainian War) or a superintelligence misaligned with human values – what if an AI tasked with solving climate change concludes that humans are the problem and must be eliminated? – the need for intervention becomes truly urgent.
Between these two ends of the personal and the existential we find further impacts of AI that might lead to societal disruptions. As Phaedra Boinodiris explains in her video, many people are unaware of how AI already affects their lives – her examples include loan decisions and university admissions. She goes further and explains that many assume AI is infallible and free of biased assumptions. Therein lies the issue of bias and discrimination: AI trained on biased data might reinforce our existing biases in hiring, policing, and lending. Another example that could shake societal structures is misinformation campaigns fuelled by AI content-generation tools – such as deepfakes – which can destabilise democracies and undermine trust; one recent example is the AI-generated robocalls mimicking the voice of President Biden (Hsu, 2024).
Shifting the focus from societal issues: advancements in AI have negative impacts on the environment too. My colleague Eva pointed out in a previous post on this blog that the energy consumption and water needs of the data centres powering AI tools further complicate the issues around water scarcity, especially when large international corporations establish these centres in already drought-prone areas.
In conclusion, the problems are many, they affect our lives on various levels from the personal to the existential, and they cut across various walks of life – and even regulatory jurisdictions. This means there may be no single regulatory solution that can untangle the issues we face now and, most importantly, in the future. I will argue later in this article that we need hybrid solutions to deal with these issues effectively.
Fostering responsible AI development?
If we accept as an axiom that artificial intelligence is part of our future (and let us be honest – it is: our existing capitalist structures see the potential profit margins in AI development), we might need to settle for the next best thing: a mix of robust regulation, international cooperation, and public participation in the policy discourse – with ethical AI development at the centre of it all.
IBM defines AI ethics as ‘a multidisciplinary field that studies how to optimize AI’s beneficial impact while reducing risks and adverse outcomes’ (2024). They then enumerate various issues that align with the problems outlined above, and more. They have also developed five pillars for the responsible adoption of AI technologies (2023), which I find a good starting point for fostering conversations:
- Fairness – ensuring outcomes are fair towards everyone, particularly minorities
- Explainability – disclosing what data set was used to train the model and what methods were used
- Robustness – the system cannot be manipulated to harm or benefit particular groups
- Transparency – the use of AI is disclosed, and people can learn how the model works
- Data privacy – ensuring the data rights of consumers
These ethical guidelines are only half of the solution, however – we also need to consider dangers that go beyond ethics by their sheer scale: the AI arms race currently under way, the possibility of AI being developed by ‘rogue actors’ (alongside the question of misaligned AI), or plain human error hidden in design, delivery, or usage.
Eerily, all these issues have appeared before in humanity’s history books – regarding nuclear weapons: think of the nuclear arms race of the Cold War; of how North Korea possesses WMDs (weapons of mass destruction); or of the so-called Broken Arrows – documented accidents involving nuclear weapons. The parallel is not accidental either: humanity can destroy itself, but we collectively recognised the responsibility of that burden and decided to act on a global scale (even if our action is not nearly robust enough). The question is: are we at the same level of consciousness in regard to a possible superintelligence?
As I write these lines, one question will not leave my mind, and it is probably hotly debated among AI developers too: can we even talk about responsible AI development and AI ethics when we discuss use-cases of artificial intelligence that cause human suffering or pose an outright existential threat?
I am no ethics expert, so I cannot settle this question, but I would point out one pillar from IBM’s list that is very much applicable here: robustness. It is paramount that any regulation require both private and state actors to develop, implement, and maintain fail-safe mechanisms that can shut down AI systems if needed, and to plan for failure scenarios in which advanced systems override those kill switches.
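To make the idea of ‘failing safe’ a little more concrete, here is a minimal, purely illustrative sketch in Python – none of this comes from IBM or from any existing regulation, and the file path and timeout are entirely hypothetical. The point of the pattern is that the system may only keep running while an external authorisation is actively renewed, so the default outcome of any failure (including someone deliberately withdrawing the authorisation) is shutdown, not continued operation:

```python
# Purely illustrative sketch of a fail-safe kill switch: the system runs
# only while an external authorisation file exists and is fresh. If the
# file is removed or goes stale, the system halts itself. The path and
# timeout below are hypothetical placeholders.

import os
import sys
import time

KILL_SWITCH_PATH = "/tmp/ai_authorisation"  # hypothetical file controlled by an auditor
MAX_STALENESS_SECONDS = 30                  # authorisation must be renewed this often

def authorised_to_run() -> bool:
    """Return True only if the external authorisation is present and fresh."""
    try:
        age = time.time() - os.path.getmtime(KILL_SWITCH_PATH)
    except OSError:
        return False  # file missing or unreadable: fail safe, not open
    return age < MAX_STALENESS_SECONDS

def main() -> None:
    while True:
        if not authorised_to_run():
            print("Authorisation missing or stale: shutting down.")
            sys.exit(1)  # default to halting, never to continuing
        # ... one bounded unit of the system's actual work would go here ...
        time.sleep(1)

if __name__ == "__main__":
    main()
```

Of course, the worry raised above still applies: a sufficiently capable system could simply stop running this check, which is exactly why regulation must also demand planning for such override scenarios rather than trusting the kill switch alone.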
Air-tight frameworks to live and survive
Having summarised the issues and explored what responsible AI development might look like under the guidance of ethics, we can now talk about regulatory frameworks for artificial intelligence systems.
Looking at cross-border opportunities: in our interconnected world, international standards, treaties, and agreements are not a novel concept, so there are several pre-existing examples we can use as blueprints in building our thought-exercise regulatory fortress.
The most burning, existential threats – possible misuses of and accidents with these technologies – might only be addressed by international bans, for example on (some) weaponised usages (O’Connell, 2023), on self-replicating modules, and, more broadly, on AI development that is not aligned with very basic human values and ethics, such as ‘do no harm’. Such bans could be overseen by a global AI governance body that monitors high-stakes AI development and mediates intergovernmental disputes (a possible blueprint for such a body is the IAEA and its role in nuclear energy).
As I argued above for ethical AI functionalities, these bans should be supplemented with accountability and transparency frameworks so that all AI systems (not just high-stakes ones) can be audited, explained, and – most importantly – shown to align with ethical guidelines, with heavy penalties for non-compliance. To accomplish these goals, as well as non-proliferation, the international community needs to draft several treaties that jointly agree on a way forward.
Therein lies our first issue, however, one that haunts all international treaties: political realities, tensions, and a lack of trust between parties. Looking at existing disarmament frameworks, while the CWC (Chemical Weapons Convention) and the BWC (Biological Weapons Convention) enjoy near-universal ratification among the world’s sovereign states, the same cannot be said of the CTBT (Comprehensive Nuclear-Test-Ban Treaty), which has not even entered into force because key countries have so far refused to become parties to it, or of the TPNW (Treaty on the Prohibition of Nuclear Weapons), which has been signed by only 94 countries.
A lack of trust and of true will for disarmament does not bode well for ethical AI development either – especially when artificial intelligence has sparked a new arms race, as mentioned above in regard to the war in Ukraine, or when state actors use misinformation campaigns abroad to shape the public opinion of other nations and to disturb democratic discourses.
What powers should a future governance body wield? In my opinion, such an organisation needs a legal mandate to audit and sanction both state and private entities that violate development standards. Nation-states must also agree to cede some of their sovereign powers to this body to give it binding authority. The body should focus on the more existential and catastrophic risks, so that even rival nations are incentivised to join and create cooperative safeguards against their shared risks.
As for functionality, I argue that (1) interdisciplinary collaboration should be at the centre of all its workings, to ensure that technical, environmental, societal, and ethical insights are all considered; and (2) states must fund the organisation so that it can enforce its policies.
Opportunities within the realm of opt-ins
Moving away from the core existential threats that, by their very nature, even rival nations might cooperate on, there remain several risks mentioned in the introduction that might need some form of international regulation rather than a patchwork of national rules.
I have mentioned above the general lack of trust between nations that might hinder cooperation. So how do we improve trust between these sovereign entities? At least within the realm of artificial intelligence, knowledge transfers between nations that ensure equitable access to beneficial technologies have the effect of both reducing global inequality (Khan et al., 2024) and building trust.
These knowledge transfers, as well as other economic incentives, might be made dependent on signing another AI treaty, one that addresses several other areas of life through optional opt-ins: international intellectual property, environmental issues, cross-country labour rights, a commitment to ethical AI development, or battling the misinformation campaigns of other states, to name a few.
These opt-ins might come with their own incentives, ranging from economic to research-based ones, as well as security guarantees such as access to shared cybersecurity resources against AI-driven cyberattacks. The possibilities are endless. Conversely, countries not joining such a treaty might face negative consequences in the form of sanctions that would hinder their own AI development programmes.
Such opt-in, incentive-based agreements are nothing new either; many of the previously mentioned treaties have some form of incentive built in – both positive motivators, such as technology access, and negative ones, such as the possibility of sanctions for non-compliance.
Further thoughts for national lawmakers
Attentive readers might notice that some problem areas have not been addressed yet. This is due to their already patchworked nature on the global stage, and to the fact that these issues are, to some degree, culturally bound, with each nation putting different aspects of the very same issue at the centre. This is most prominent in privacy-related matters – it would be prudent to leave their regulation to national legislatures, as devising a solution that satisfies both democratic and more authoritarian countries is hard, if not impossible.
Similarly, labour issues look starkly different in developed nations and in the Global South, and the two are very differently affected by AI advancements and possible job displacement. One area that does need to be addressed internationally is, as mentioned, cross-country work, to prevent the exploitation of workers in the Global South from perpetuating further inequalities on the international stage. My colleague Pree has summed up the situation of marginalised groups in the AI era in a post on this blog, so I would advise reading that entry as well!
… so what can I do?
Thinking about international regulation, frameworks, and solutions puts an enormous distance between the topic and me and you, the everyday people. That does not make it any less important to stay informed about it and to keep discussing it.
I have briefly nodded above to the importance of public participation in the debate around artificial intelligence and its regulation, and I would like to underline it here: by empowering ourselves, public pressure can force governments, corporations, and international bodies to act responsibly.
My advice: educate yourselves (reading this is a good first step!), talk to others, and keep yourself up to date. A healthy amount of scepticism helps too – as AlgorithmWatch (2024) puts it: ‘If you want to help humanity, don’t fall for “AI” hype’.
Feature image: Yeti Studios via Adobe
References
AlgorithmWatch. (2024). If you want to help humanity, don’t fall for “AI” hype. Retrieved from AlgorithmWatch: https://algorithmwatch.org/en/wp-content/uploads/2024/04/AlgorithmWatch-Stance-UN-AI-Regulation.pdf
Appel, G., Neelbauer, J., & Schweidel, D. A. (2023). Generative AI Has an Intellectual Property Problem. Retrieved from Harvard Business Review: https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem
Bank of America Institute. (2024). AI: From evolution to revolution? Retrieved from Bank of America Institute: https://institute.bankofamerica.com/content/dam/transformation/ai-evolution-to-revolution.pdf
Berg, J. (2022). An International Governance System for Digital Work in the Planetary Market. In M. Graham & F. Ferrari (Eds.), Digital Work in the Planetary Market. Cambridge, Massachusetts: The MIT Press.
Dib, D. (2024). Companies in Mexico embrace AI to resurrect the dead. Retrieved from Rest of World: https://restofworld.org/2024/ai-powered-resurrections-mexico-privacy/
DW Akademie. (2024). Generative AI is the ultimate disinformation amplifier. Retrieved from DW Akademie: https://akademie.dw.com/en/generative-ai-is-the-ultimate-disinformation-amplifier/a-68593890
Gomstyn, A., & Jonker, A. (2024). Exploring privacy issues in the age of AI. Retrieved from International Business Machines: https://www.ibm.com/think/insights/ai-privacy
Hsu, T. (2024). New Hampshire Officials to Investigate A.I. Robocalls Mimicking Biden. Retrieved from The New York Times: https://www.nytimes.com/2024/01/22/business/media/biden-robocall-ai-new-hampshire.html?ref=disinfodocket.com
International Business Machines. (2023). IBM Artificial Intelligence Pillars. Retrieved from International Business Machines: https://www.ibm.com/policy/ibm-artificial-intelligence-pillars/
International Business Machines. (2024). What is AI ethics? Retrieved from International Business Machines: https://www.ibm.com/topics/ai-ethics
Jonker, A., & Gomstyn, A. (2024). What is AI alignment? Retrieved from International Business Machines: https://www.ibm.com/think/topics/ai-alignment
Khan, M. S., Umer, H., & Faruqe, F. (2024). Artificial intelligence for low income countries. Humanities and Social Sciences Communications, 11. https://doi.org/10.1057/s41599-024-03947-w
Lautman, O. (2024). We’re Winning, Say Russia’s Fake News Manufacturers. Retrieved from Center for European Policy Analysis: https://cepa.org/article/were-winning-say-russias-fake-news-manufacturers/
Marr, B. (2024). How AI Is Used In War Today. Retrieved from Forbes: https://www.forbes.com/sites/bernardmarr/2024/09/17/how-ai-is-used-in-war-today/
Mozur, P., & Satariano, A. (2024). A.I. Begins Ushering In an Age of Killer Robots. Retrieved from The New York Times: https://www.nytimes.com/2024/07/02/technology/ukraine-war-ai-weapons.html
O’Connell, M. E. (2023). Banning Autonomous Weapons: A Legal and Ethical Mandate. Ethics & International Affairs, 37(3), 287-298. doi:10.1017/S0892679423000357