ICT, Datafication, Covid-19, Social Listening and AI Technology in Development
Empathy in the Machine-led World – AI and Ethics


Nick Bostrom’s book Superintelligence reads as a warning about the dangers AI could pose. Bostrom argues that over-indulgence in AI, and the over-concentration of power in machines, could threaten humankind itself: once machines hold supreme power, they may resist being switched off even when we need them to be. That is scary, to say the least! But what is the component whose absence can make this almost foolproof man-made technology go so wrong and berserk? Ethics. Yes, robots and machines can be taught and fitted with many task-related algorithms, but there is no algorithm for ethics and empathy. The relationship of care shared among all living beings in the biosphere – bioethics – is the one magical element that separates human intelligence from machine intelligence.

In our regular everyday tasks and responsibilities, there are moments when we have to make an impromptu decision that changes the plan. The need for such a decision depends on the situation, and the ability to make it depends on the empathy quotient of the people present. What if, in place of humans, robots are in charge? No matter what the situation needs, they will perform the task exactly as programmed; they do not have the ability to deviate from their planned routines. Executing a fixed program in a situation that calls for judgment may lead to chaos – and then who is to be blamed: the machines, their creators, or the people who are over-reliant on them?

The Ethics Guidelines for Trustworthy AI, published in 2019 by the European Union’s High-Level Expert Group on AI, call for AI systems that are accountable, explainable and unbiased. Emphasis is laid on three elements:

Lawful – respecting all applicable laws and regulations 

Ethical – respecting ethical principles and values 

Robust – being adaptive, reliable, fair, and trustworthy from a technical perspective, while taking into account its social environment. 

Certain prerequisites for the effective implementation of these key elements were also set out: AI systems should be supervised by human operators; human intervention should always be possible to prevent avoidable discrepancies; the systems should be secure and accurate; the services gained from this technology should be available to all regardless of age, sex or gender; the data and algorithms should be traceable and editable by human beings; and so on.

These guidelines accentuate the need to include the ‘human factor’ in AI systems.

To cite an example, take the Tesla car accident in Florida. That unfortunate night took the life of a young college student through no fault of her own. Shockingly, it was not even held to be the fault of the driver of the car that struck her: the AI-integrated self-driving technology was the main culprit (https://www.nytimes.com/2021/08/17/business/tesla-autopilot-accident.html).

Technology is advancing, and the world around us with it; but somewhere in the midst of it all we are losing our balance. Our environment is nature-driven, yet it is slowly becoming too man-made and artificial. AI has proven beneficial for the development of society at large, but integrating this technology into every possible aspect of our daily lives may disrupt the natural rhythm of human life. To maintain that balance, it becomes crucial to build empathy and accountability into technology – or simply to choose not to be over-reliant on artificial technology in every field of human activity, thereby upholding the empathy quotient of human beings wherever it is needed.

 

Do you agree with my perspective on AI? I would love to read your opinion in the comments section below!

 

 

Reference 

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press; 2014.

European Commission, High-Level Expert Group on AI. Ethics Guidelines for Trustworthy AI; 2019 (setting out three key requirements: lawful, ethical and robust).

Cellan-Jones, Rory. Stephen Hawking warns artificial intelligence could end mankind. BBC News; 2014. 

Artificial Intelligence. Wikipedia, the Free Encyclopedia.

https://www.nytimes.com/2021/08/17/business/tesla-autopilot-accident.html 

 

2 Comments

  1. As a fan of science fiction and dystopian novels, I find it uncannily fascinating how we are getting closer to that world of fiction that, for decades, has been whispering in our ears about the dangers of AI and high technologies.
    One of the main arguments coming from these narratives is exactly this incompatibility between the cold and “flawless” structure of AI and the empathic and imperfect system of human beings.
    As you pointed out, natural and artificial are mixed together into the one big cauldron of modern society. Can these two realms coexist peacefully?

    Lorenzo
  2. What appears to be a real challenge to me, in regard to creating ethical AI, is not developing the technology to make this happen in itself, but rather defining what ethics are and who can determine this. Given that most technology is developed in specific areas of the world, even if a way were found to programme AI to be ethical, the AI’s understanding of what is or isn’t ethical will most likely be influenced by contextualised sociocultural preconceptions. In this sense, it appears that while we are so worried about how AI could harm us due to its not being human, we overlook the elephant in the room: the ways in which AI could be harmful because of the elements it might inherit from humans as its creators. Case studies have already shown, for example, how AI contributes to perpetuating racial biases and racial discrimination (see the article ‘The Whiteness of AI’ (Cave & Dihal, 2020) or read about the Google scandal: https://www.theguardian.com/technology/2018/jan/12/google-racism-ban-gorilla-black-people). Should we worry less about AI and more about humans?

    Julia Zaremba
