There is a phrase I like to use in my native Hungarian: even a broken clock is right twice a day. I find it very expressive, reflecting how even those we do not agree with can make good decisions; whether those decisions are genuine or just a gimmick is always the big question.
As a citizen of the European Union, I was delighted that even amidst our neoliberal reality the EU announced plans to protect my data and wellbeing online, at least to a certain degree. The reaction from many platforms was predictable: either I continue business as usual and hand over my data voluntarily, or I pay up to keep my privacy.
The question of national and supranational regulation arises at the intersection of free speech and misinformation as well. Recently, the Brazilian Supreme Court banned, then reinstated, access to X (formerly Twitter) after Musk's company paid its fines for refusing to ban profiles that spread misinformation. Combined with the massive fines levied by the European Union, this raises yet another question: are regulatory fines becoming just another calculated business expense?
Clearly, politicians should never have the right to silence dissenting voices, and any such attempt should be fought tooth and nail. However, freedom of speech is not absolute either: the anonymity the Internet provides is a double-edged sword, as it can fuel hatred. We learned that lesson when Facebook's algorithms knowingly fanned the flames of genocide in Myanmar.
Stepping back: the international community has created safeguards against existential risks to humanity before. We mended the ozone layer, we attempted to inch closer to global disarmament, and humanity now focuses on solving climate change.
Disclaimer: I do not possess a crystal ball to know where the future of AI is heading or how much of an existential threat it poses to humanity. However, if we argue against any state building up a nuclear arsenal, we must be equally mindful of states developing and weaponizing AI technologies. The questions are endless, and we must ask ourselves: are our regulatory frameworks merely reactions to yesterday's news, or can we build systems that stand the test of time?
Sziasztok! (Hello, everyone!) I am Daniel, one of the contributors to Artificial Inequality. Alongside my colleagues, I will be writing about the regulatory policing of technology, how we can build supranational systems, and frameworks for future technological advancements. I argue that the free-for-all tech capitalism we have evolved fails to provide safeguards against the threats to our daily lives, be they existential and global, localized in the echo chambers of social media platforms, or personal, in the form of inadequate data protection.
Coming from a political background, I understand the popular need for frameworks that protect us from external threats, while also being mindful of the very thin line between regulation and stifling advancements that would benefit humanity as a whole. I am sure it will take policymakers and their staff years to come up with a solution that ‘benefits everybody’. Do we have the luxury to wait that long?
Feature image: iStock
Thanks for sharing this very interesting blog, Daniel! I very much enjoyed reading it and took away so much valuable insight. I appreciate the nuanced look at technology regulation and the urgent need to balance individual freedoms with protection from digital harm.
You raise an important question about whether regulatory fines will become just another cost of doing business for tech giants — a calculated expense that may not actually restrain or curb harmful practices. In your view, how could regulatory frameworks be designed to create real change in the behavior of these companies rather than simply enforcing compliance on paper? And what role should supranational entities like the EU play in setting standards that can be both flexible enough to adapt to new technologies and firm enough to prevent abuse?
Looking forward to reading more of your blogs!