Artificial Intelligence Risks in the Health Care Sector
This short article addresses the philosophical debate about the responsibility gap in the case where an AI is used to diagnose patients before they meet a medical doctor. What happens if the AI gives a wrong diagnosis because it does not recognise a new illness?
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Artificial Intelligence (AI) refers to machines that exhibit programmed intelligence. AI systems are already used for a variety of tasks in sectors such as health care, finance, education, and commerce, and they are developed further every day so that they can assist humankind even more in the future. In health care in particular, AI is used increasingly often, which is why it is important to discuss the ethical issues that could arise from its further development and use.
Consider a scenario in which an AI assists doctors with the diagnosis of patients in order to reduce their workload, as practices are often overcrowded and waiting times are long. A patient tells the robot their symptoms, and the robot, which is connected to the practice's database and therefore has access to all the data it needs, acts as an autonomous agent. It then diagnoses the patient and either sends them home with medication or refers them to the doctor if needed or wanted. This scenario is set in a situation much like the one we are in right now: a new virus emerges, and the robot misdiagnoses it as the flu or a cold because the symptoms are identical and the new virus is not yet registered in its database.
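To make the failure mode concrete, the following is a minimal sketch of how a symptom-matching triage step of this kind might work; it is not a description of any real diagnostic system, and the conditions, symptom names, and matching rule are purely illustrative assumptions. Because the hypothetical new virus shares its symptom profile with the flu, a lookup like this can only ever return "flu".

```python
# Minimal illustrative sketch of a symptom-lookup triage step.
# All conditions, symptoms, and the matching rule are hypothetical assumptions,
# not the behaviour of any real diagnostic system.

KNOWN_CONDITIONS = {
    "common cold": {"cough", "sore throat", "runny nose"},
    "flu":         {"cough", "fever", "fatigue", "sore throat"},
}

def diagnose(reported_symptoms: set[str]) -> str:
    """Return the known condition whose symptom set best overlaps the report."""
    best_match, best_overlap = "unknown", 0
    for condition, symptoms in KNOWN_CONDITIONS.items():
        overlap = len(reported_symptoms & symptoms)
        if overlap > best_overlap:
            best_match, best_overlap = condition, overlap
    return best_match

# A patient with the (hypothetical) new virus reports symptoms identical to the flu.
# Since the new virus is not in the database, the system necessarily answers "flu".
patient_symptoms = {"cough", "fever", "fatigue", "sore throat"}
print(diagnose(patient_symptoms))  # -> "flu"
```

The point of the sketch is simply that no amount of correct operation can produce the right answer here: the error is built into the limits of the database, not into any single decision made at diagnosis time.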
In this article, I analyse this imagined issue in light of the philosophical debate about the responsibility gap.
The problem of a wrong diagnosis by Artificial Intelligence
Matthias (2004) describes the responsibility gap as the situation that arises when an AI behaves faultily but nobody can be fully held responsible for it. In the described scenario, the AI acted wrongly because a new virus emerged that is not yet registered in any database, meaning there was no way the robot could have diagnosed the new virus as what it really is. Now imagine the virus is deadly and contagious: who will be held responsible if the patient infects others or even dies because they trusted the AI and therefore did not go to see the doctor?
To be held accountable for the matter, one must be at least causally responsible for it. That is, one must be part of the chain of events that led to that outcome. But having causal responsibility does not by itself make one accountable. For instance, if an individual has a disease in which their arm moves independently of their will, they can hardly be held accountable when that arm hits someone. To be held accountable for the act, they must also be considered to have control over it. Only then can they be said to be morally responsible for an outcome, and so accountable for it. In other words, to be held accountable for an outcome, an individual must be both causally and morally responsible for it.
The engineers who worked on the AI can hence only be held causally responsible: they did programme the AI, but they can easily argue that they bear no moral responsibility, since the emergence of a new virus and its misdiagnosis by the AI has nothing to do with the programming and lies outside their control. Furthermore, the AI acts as an autonomous agent, meaning it operates without human supervision and develops further through its environment, over which the engineers likewise have no control (Matthias, 2004). The doctor can argue along the same lines, adding that the virus is new and they could not have known about it, especially since its symptoms are identical to something as common as the cold or flu. The company selling the AI can in this case be held neither morally nor causally responsible, as all it does is sell the product; it is not responsible for the wrong diagnosis of a newly emerging virus and its consequences. In general, though, companies can often be held responsible, because they actively choose what they sell and know the consequences that come with it, such as selling weapons to people during a war: the company does not kill anyone in the war, but it very likely knows that people will die through its products. The patient technically could still have gone to talk to the doctor, but if the AI is the norm in the described scenario and an everyday occurrence, can they be blamed for trusting it? And would the doctor have diagnosed them differently, given that the virus is new and not yet within their knowledge? Lastly, the AI itself can be held neither morally nor causally responsible, simply because it is an artificial intelligence and hence not an actual being, which excludes it from being a party that could be held responsible in the first place.
Who is responsible for AI mistakes
The given scenario describes a typical occurrence of a responsibility gap, in which it is hard to determine who is responsible for a mistake an AI makes. It also shows that it is sometimes impossible to identify anyone who is responsible for situations like this. Partial responsibility can sometimes be attributed, but it is not enough to hold someone fully accountable.
References
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175-183.