{"id":414,"date":"2024-12-15T15:53:10","date_gmt":"2024-12-15T15:53:10","guid":{"rendered":"https:\/\/wpmu.mau.se\/msm24group5\/?p=414"},"modified":"2024-12-15T15:53:46","modified_gmt":"2024-12-15T15:53:46","slug":"learning-from-ai-incidents","status":"publish","type":"post","link":"https:\/\/wpmu.mau.se\/msm24group5\/2024\/12\/15\/learning-from-ai-incidents\/","title":{"rendered":"Learning from AI incidents"},"content":{"rendered":"<div id=\"fb-root\"><\/div>\r\n<p><span style=\"font-weight: 400\">I recently tuned in to OECD\u2019s International Conference on AI in Work, Innovation, Productivity and Skills, which took place on December 12 and 13. The conference brought together many voices across disciplines and tackled the topic of AI from various angles. You can check out<\/span> <a href=\"https:\/\/www.oecd-events.org\/ai-wips2024\/eventagenda\"><span style=\"font-weight: 400\">the conference&#8217;s agenda<\/span><\/a><span style=\"font-weight: 400\"> and the recordings of each session online.<\/span><\/p>\n<p><span style=\"font-weight: 400\">For this article, I am covering one discussion: <\/span><i><span style=\"font-weight: 400\">\u2018AI Incidents: A look at past mistakes to inform future AI governance\u2019<\/span><\/i><span style=\"font-weight: 400\">. If you have read my<\/span> <a href=\"https:\/\/wpmu.mau.se\/msm24group5\/2024\/12\/11\/the-muddy-waters-of-ai-regulation\/\"><span style=\"font-weight: 400\">previous post<\/span><\/a><span style=\"font-weight: 400\">, then you know I am quite interested in governance options for AI development, so when I saw that this session would be part of the conference, I made sure to listen in and cover it on this blog for you. 
Their initial angle was that since 2022, OECD\u2019s AI Incident Monitor (AIM) has seen a considerable increase in reports \u2013 valuable data for policymakers making decisions about governance.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The panel discussion featured four speakers:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Elham Tabassi, Chief AI Advisor for the National Institute of Standards and Technology (NIST)<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Sean McGregor, Founding Director of the Digital Safety Research Institute<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Marko Grobelnik, AI Researcher and co-leader of the Artificial Intelligence Lab at the Jo\u017eef Stefan Institute (JSI)<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Jimena Viveros, Managing Director and CEO at IQuilibriumAI<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400\">This short discussion was moderated by Stephanie Ifayemi, a Senior Managing Director at Partnership on AI.<\/span><\/p>\n<p><span style=\"font-weight: 400\">\u00a0<\/span><\/p>\n<h6><b>How can we address the risks and incidents of AI?<\/b><\/h6>\n<p><span style=\"font-weight: 400\">Ms Ifayemi opened the discussion by setting the context \u2013 how work on governance frameworks is ongoing, and why incident reporting is important. 
She underlined that policymaking is slow and reactive \u2013 but policymakers do understand the risks of AI, even the most existential ones. Instruments such as the<\/span> <a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/library\/hiroshima-process-international-code-conduct-advanced-ai-systems\"><span style=\"font-weight: 400\">Hiroshima Code of Conduct<\/span><\/a><span style=\"font-weight: 400\"> and the<\/span> <a href=\"https:\/\/www.europarl.europa.eu\/topics\/en\/article\/20230601STO93804\/eu-ai-act-first-regulation-on-artificial-intelligence\"><span style=\"font-weight: 400\">EU\u2019s AI Act<\/span><\/a><span style=\"font-weight: 400\"> address some of these concerns, but like all such documents they contain loopholes \u2013 one example stems from something as simple as definitions: what counts as <\/span><i><span style=\"font-weight: 400\">meaningful<\/span><\/i><span style=\"font-weight: 400\"> information (i.e. the kind that must be reported)?<\/span><\/p>\n<p><span style=\"font-weight: 400\">Ms Tabassi called AI the most transformative technology there is, but underlined quite explicitly that it comes with negative consequences and harms \u2013 ones we cannot necessarily quantify right now. This is an extremely important point: humanity knows far less about these vulnerabilities than it should for responsible development. She argued for a general reporting framework that is <\/span><i><span style=\"font-weight: 400\">concise, simple, flexible, and easy to understand <\/span><\/i><span style=\"font-weight: 400\">for outsiders, and that uses a multi-stakeholder approach so people can understand what can go wrong. She raised the important questions: <\/span><i><span style=\"font-weight: 400\">Who is the system failing? Who is bearing the negative impact of AI? 
<\/span><\/i><span style=\"font-weight: 400\">\u2013 questions we have attempted to look at on this blog as well.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Ms Viveros then argued for the need for international governance to ensure global safety. Quoting UN officials, she noted that there is no sustainable development without global peace, while pointing out that our existing AI data is limited to the civilian domain <\/span><i><span style=\"font-weight: 400\">(countries might be quite reluctant to share their military findings, after all)<\/span><\/i><span style=\"font-weight: 400\">. She argued for incident reporting across jurisdictions for all systems that might be hazardous (that is, concerns for stability and peace). Here too, a multi-stakeholder approach is vital, as is collecting data to mitigate future risk and incorporating these findings in AI training \u2013 while she underlined that missing incident data is a risk in itself.<\/span><\/p>\n<p><span style=\"font-weight: 400\">\u00a0<\/span><\/p>\n<h6><b>What kind of incidents happen?<\/b><\/h6>\n<p><span style=\"font-weight: 400\">Mr Grobelnik explained OECD\u2019s AIM and how it collects information. Its reporting draws on a very wide scope of media monitoring \u2013 distilling 150,000 sources (amounting to a million articles a day!), which yield around 15\u201325 incidents each day. One issue is that media reporting might not pick up on smaller incidents, nor on ones not reported in English. Their observation was that a big increase in incidents came with the appearance of ChatGPT \u2013 so-called <\/span><i><span style=\"font-weight: 400\">soft AI incidents<\/span><\/i><span style=\"font-weight: 400\"> \u2013 where nobody got hurt <\/span><i><span style=\"font-weight: 400\">(although these include deepfakes, which can constitute actual harm, both personal and societal)<\/span><\/i><span style=\"font-weight: 400\">. 
He underlined that it is now relatively inexpensive to create problematic content, which is a concern for ethical AI.<\/span><\/p>\n<p><span style=\"font-weight: 400\">An interesting point was that there is no increase in incidents with casualties <\/span><i><span style=\"font-weight: 400\">yet<\/span><\/i><span style=\"font-weight: 400\">, speculating this might be due to the limited autonomy given to AI for the time being (autonomy that laxer regulation of, say, autonomous driving <\/span><i><span style=\"font-weight: 400\">or perhaps healthcare<\/span><\/i><span style=\"font-weight: 400\"> could extend). Mr Grobelnik pointed out, however, that their monitoring only extends to the civilian segment of society, so deaths caused by military uses of AI are not captured.<\/span><\/p>\n<p><span style=\"font-weight: 400\">As for the future of AIM, he noted the need to expand coverage to more languages, not just English, as well as to use LLMs to analyse the incidents. Further work is also needed to experiment with assessing the possible impact of these reported incidents.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Mr McGregor talked about the usefulness of AI incidents as data we can learn from \u2013 to figure out what might happen, and what should be done should these systems fail. He also pointed out that governance makes incidents less likely to occur, and that incidents should be at its centre for the time being.<\/span><\/p>\n<p><span style=\"font-weight: 400\">\u00a0<\/span><\/p>\n<h6><b>International effort for governance<\/b><\/h6>\n<p><span style=\"font-weight: 400\">At this point, Ms Ifayemi announced the results of an interactive audience poll \u2013 17 respondents reported having been negatively affected by AI, while 58 responded in the negative. 
I share the concern of one fellow audience member, though \u2013 the lack of an <\/span><i><span style=\"font-weight: 400\">I don\u2019t know<\/span><\/i><span style=\"font-weight: 400\"> option in the vote; they argued that the impact might well be unconscious.<\/span><\/p>\n<p><span style=\"font-weight: 400\">After this, the discussion refocused on the international nature of governance.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Ms Viveros summed up the existing reporting system as fragmented, with incomplete reporting mechanisms that both lag and overlap. She said the aim is a global framework with global governance, including <\/span><i><span style=\"font-weight: 400\">obligatory <\/span><\/i><span style=\"font-weight: 400\">incident reporting (in contrast to the voluntary reporting of today). Once again multi-stakeholderism was invoked, so we can prevent harm in all domains. She stressed that potential large-scale harm should be addressed with urgency.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Mr Grobelnik argued for the importance of international organizations \u2013 hubs where countries talk and build consensus, in contrast to individual nations that might not have the same impact. A concern was raised that AI is developing faster than policymakers can react \u2013 he mentioned the shifts in geopolitics and the publication of new AI models and functionalities in just the last <\/span><i><span style=\"font-weight: 400\">few weeks<\/span><\/i><span style=\"font-weight: 400\">, making it hard even for experts to keep track.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Ms Tabassi reminded everyone that AI does not know borders, and voiced a concern that a system trained on data from one part of the world might not be suitable everywhere, as societal realities differ from country to country. She argued for representative data (to avoid bias). 
The main questions for organisations working with incident reporting should be \u2018<\/span><i><span style=\"font-weight: 400\">What went wrong?\u2019 and \u2018Whom did the system fail?\u2019<\/span><\/i><span style=\"font-weight: 400\"> She argued for prioritising technologies with a low potential for negative impact (and even that impact should be easy to correct).<\/span><\/p>\n<p><span style=\"font-weight: 400\">\u00a0<\/span><\/p>\n<h6><b>Conclusions<\/b><\/h6>\n<p><span style=\"font-weight: 400\">A few audience questions were addressed \u2013 for instance, how to ensure reporting is factual: right now, through redundancies and manual checks. To prevent bad reporting, Ms Viveros argued for an agency with a mandate for audits and reporting that can be reliable in the long run.<\/span><\/p>\n<p><span style=\"font-weight: 400\">As Ms Ifayemi summed up the discussion, she argued that everyone should be thinking about incidents, and I can\u2019t help but echo that thought. Right now, AI development is not as high-stakes as it may eventually become, and we can learn a great deal from incidents so developers can avoid fatal mistakes in future development and ensure proper AI alignment and ethical consideration. I was quite happy to hear that experts agree on the need for international cooperation to oversee development and on the urgency of addressing the catastrophic outcomes of AI incidents.<\/span><\/p>\n<p><span style=\"font-weight: 400\">What do you think \u2013 how can we learn from our AI mistakes to inform future development?<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><em>Feature image: geralt, Creative Commons Zero, via Pixabay<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>I recently joined and listened to OECD\u2019s International Conference on AI in Work, Innovation, Productivity and Skills that took place on December 12 and 13. The conference brought in many voices in a multi-disciplinary manner and tackled the topic of AI from various angles. 
You can check out the conference&#8217;s agenda and the recording of [&hellip;]<\/p>\n","protected":false},"author":740,"featured_media":415,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[55],"tags":[],"class_list":["post-414","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-interactive-post"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.1.1 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\r\n<title>Learning from AI incidents - Artificial Inequality<\/title>\r\n<meta name=\"description\" content=\"AI incidents inform policymakers on how to create governance structures. What needs to be done so the reporting is accurate, safe and accessible?\" \/>\r\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\r\n<link rel=\"canonical\" href=\"https:\/\/wpmu.mau.se\/msm24group5\/2024\/12\/15\/learning-from-ai-incidents\/\" \/>\r\n<meta property=\"og:locale\" content=\"en_US\" \/>\r\n<meta property=\"og:type\" content=\"article\" \/>\r\n<meta property=\"og:title\" content=\"Learning from AI incidents - Artificial Inequality\" \/>\r\n<meta property=\"og:description\" content=\"AI incidents inform policymakers on how to create governance structures. 
What needs to be done so the reporting is accurate, safe and accessible?\" \/>\r\n<meta property=\"og:url\" content=\"https:\/\/wpmu.mau.se\/msm24group5\/2024\/12\/15\/learning-from-ai-incidents\/\" \/>\r\n<meta property=\"og:site_name\" content=\"Artificial Inequality\" \/>\r\n<meta property=\"article:published_time\" content=\"2024-12-15T15:53:10+00:00\" \/>\r\n<meta property=\"article:modified_time\" content=\"2024-12-15T15:53:46+00:00\" \/>\r\n<meta property=\"og:image\" content=\"http:\/\/wpmu.mau.se\/msm24group5\/wp-content\/uploads\/sites\/100\/2024\/12\/artificial-intelligence-3382507_1920.jpg\" \/>\r\n\t<meta property=\"og:image:width\" content=\"1920\" \/>\r\n\t<meta property=\"og:image:height\" content=\"1280\" \/>\r\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\r\n<meta name=\"author\" content=\"Daniel\" \/>\r\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\r\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Daniel\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\r\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/2024\/12\/15\/learning-from-ai-incidents\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/2024\/12\/15\/learning-from-ai-incidents\/\"},\"author\":{\"name\":\"Daniel\",\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/#\/schema\/person\/83ea69e2ff91cb1f1cab975518b46061\"},\"headline\":\"Learning from AI incidents\",\"datePublished\":\"2024-12-15T15:53:10+00:00\",\"dateModified\":\"2024-12-15T15:53:46+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/2024\/12\/15\/learning-from-ai-incidents\/\"},\"wordCount\":1352,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/#organization\"},\"image\":{\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/2024\/12\/15\/learning-from-ai-incidents\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/wpmu.mau.se\/msm24group5\/wp-content\/uploads\/sites\/100\/2024\/12\/artificial-intelligence-3382507_1920.jpg\",\"articleSection\":[\"Interactive post\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/wpmu.mau.se\/msm24group5\/2024\/12\/15\/learning-from-ai-incidents\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/2024\/12\/15\/learning-from-ai-incidents\/\",\"url\":\"https:\/\/wpmu.mau.se\/msm24group5\/2024\/12\/15\/learning-from-ai-incidents\/\",\"name\":\"Learning from AI incidents - Artificial 
Inequality\",\"isPartOf\":{\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/2024\/12\/15\/learning-from-ai-incidents\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/2024\/12\/15\/learning-from-ai-incidents\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/wpmu.mau.se\/msm24group5\/wp-content\/uploads\/sites\/100\/2024\/12\/artificial-intelligence-3382507_1920.jpg\",\"datePublished\":\"2024-12-15T15:53:10+00:00\",\"dateModified\":\"2024-12-15T15:53:46+00:00\",\"description\":\"AI incidents inform policymakers on how to create governance structures. What needs to be done so the reporting is accurate, safe and accessible?\",\"breadcrumb\":{\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/2024\/12\/15\/learning-from-ai-incidents\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/wpmu.mau.se\/msm24group5\/2024\/12\/15\/learning-from-ai-incidents\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/2024\/12\/15\/learning-from-ai-incidents\/#primaryimage\",\"url\":\"https:\/\/wpmu.mau.se\/msm24group5\/wp-content\/uploads\/sites\/100\/2024\/12\/artificial-intelligence-3382507_1920.jpg\",\"contentUrl\":\"https:\/\/wpmu.mau.se\/msm24group5\/wp-content\/uploads\/sites\/100\/2024\/12\/artificial-intelligence-3382507_1920.jpg\",\"width\":1920,\"height\":1280},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/2024\/12\/15\/learning-from-ai-incidents\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/wpmu.mau.se\/msm24group5\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Learning from AI incidents\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/#website\",\"url\":\"https:\/\/wpmu.mau.se\/msm24group5\/\",\"name\":\"Artificial 
Inequality\",\"description\":\"The Hidden Impact of Global Tech Economy\",\"publisher\":{\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/wpmu.mau.se\/msm24group5\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/#organization\",\"name\":\"Artificial Inequality\",\"url\":\"https:\/\/wpmu.mau.se\/msm24group5\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/wpmu.mau.se\/msm24group5\/wp-content\/uploads\/sites\/100\/2024\/10\/artificial-logo-1.jpg\",\"contentUrl\":\"https:\/\/wpmu.mau.se\/msm24group5\/wp-content\/uploads\/sites\/100\/2024\/10\/artificial-logo-1.jpg\",\"width\":400,\"height\":400,\"caption\":\"Artificial Inequality\"},\"image\":{\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/#\/schema\/logo\/image\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/#\/schema\/person\/83ea69e2ff91cb1f1cab975518b46061\",\"name\":\"Daniel\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/wpmu.mau.se\/msm24group5\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/fe9ef2681e0eb225b6883822853df5e717f9271fe467e71d1dfb7285859c9923?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/fe9ef2681e0eb225b6883822853df5e717f9271fe467e71d1dfb7285859c9923?s=96&d=mm&r=g\",\"caption\":\"Daniel\"},\"url\":\"https:\/\/wpmu.mau.se\/msm24group5\/author\/ao5681\/\"}]}<\/script>\r\n<!-- \/ Yoast SEO plugin. 
-->","_links":{"self":[{"href":"https:\/\/wpmu.mau.se\/msm24group5\/wp-json\/wp\/v2\/posts\/414","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/wpmu.mau.se\/msm24group5\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wpmu.mau.se\/msm24group5\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wpmu.mau.se\/msm24group5\/wp-json\/wp\/v2\/users\/740
"}],"replies":[{"embeddable":true,"href":"https:\/\/wpmu.mau.se\/msm24group5\/wp-json\/wp\/v2\/comments?post=414"}],"version-history":[{"count":1,"href":"https:\/\/wpmu.mau.se\/msm24group5\/wp-json\/wp\/v2\/posts\/414\/revisions"}],"predecessor-version":[{"id":416,"href":"https:\/\/wpmu.mau.se\/msm24group5\/wp-json\/wp\/v2\/posts\/414\/revisions\/416"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/wpmu.mau.se\/msm24group5\/wp-json\/wp\/v2\/media\/415"}],"wp:attachment":[{"href":"https:\/\/wpmu.mau.se\/msm24group5\/wp-json\/wp\/v2\/media?parent=414"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wpmu.mau.se\/msm24group5\/wp-json\/wp\/v2\/categories?post=414"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wpmu.mau.se\/msm24group5\/wp-json\/wp\/v2\/tags?post=414"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}