The responsibility gap: Using robotic assistance to make colonoscopy kinder

Nowadays, technology and medicine go hand in hand: new technological devices have great potential to significantly improve medical care and to provide better, if not entirely new, treatments for patients. However, this is not always beneficial. Although technology might provide a better solution, it also comes with unpredictability and a lack of control.

In this paper, I address the newest development in performing colonoscopy: the use of robotic assistance to carry out the procedure. This raises the question: who bears the responsibility for any mistakes made during this new kind of intervention?

How does the robotic assistance work?

Scientists from the University of Leeds in the UK have created a “semi-autonomous robotic system to perform” colonoscopy, which they claim is easier for doctors and nurses to operate and less painful for patients (Mack, 2020). The system involves a “smaller, capsule-shaped device on the end of a cord” that is navigated into the patient by a robotic arm equipped with magnets, which the doctors can optionally control with a joystick (Mack, 2020) (Figure 1). The machine’s performance is semi-autonomous because the system itself is responsible for manipulating the capsule-shaped device inside the patient: even though the doctors can intervene with the joystick during the procedure, the robotic arm otherwise operates on its own.
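
To make this division of control concrete, below is a minimal sketch in Python of what such an arbitration loop could look like: the arm steers the capsule by default, and any joystick input from the clinician overrides it. All of the names here (Command, arbitrate, DEADZONE) are hypothetical illustrations of the idea, not the actual software of the Leeds system.

```python
# Hypothetical sketch of semi-autonomous control arbitration.
# Not the Leeds system's actual software; names and thresholds are invented.

from dataclasses import dataclass

@dataclass
class Command:
    dx: float  # desired capsule movement along the colon axis
    dy: float  # desired lateral adjustment

DEADZONE = 0.05  # joystick readings below this count as "no input"

def autonomous_step(capsule_pos, target_path):
    """Placeholder planner: nudge the capsule toward the next waypoint."""
    wp = target_path[0]
    return Command(dx=wp[0] - capsule_pos[0], dy=wp[1] - capsule_pos[1])

def arbitrate(joystick, capsule_pos, target_path):
    """Semi-autonomous arbitration: clinician input, when present, wins."""
    if abs(joystick.dx) > DEADZONE or abs(joystick.dy) > DEADZONE:
        return joystick  # manual override by the doctor
    return autonomous_step(capsule_pos, target_path)  # machine drives itself

# Example: the clinician is hands-off, so the planner drives the capsule.
cmd = arbitrate(Command(0.0, 0.0), capsule_pos=(0.0, 0.0),
                target_path=[(1.0, 0.5)])
```

The sketch only illustrates the point that matters for this paper: control is shared, with the machine acting by default and the human able to take over at any moment, which is exactly what blurs the line of responsibility examined below.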

This groundbreaking improvement in colonoscopy might sound ideal; however, the possibility of error remains, since a joystick, like any technical device, comes with the risk of malfunction. An unresponsive joystick in such a crucial situation could lead to serious physical harm to the patient, or even a threat to their life. Who, then, is responsible: the creator or the operator?

The responsibility gap

If the responsibility lies with the medical staff, then the mistakes made while using the robotic arm are theirs. This presupposes that they know how the device works. But since they are not the programmers of the device’s AI, this argument does not hold.

Then, it can be argued that the creators are responsible, since they created the device. However, the scientists have transferred control of the device to the medical staff “by specifying the precise set of actions and reactions the device is expected to undergo during normal operation”, along with the ability to handle the device in a predictable manner (Matthias, 2004). The scientists have therefore reduced their control over the finished product, which means that they “bear less or no responsibility” because they are unable to check for errors (Matthias, 2004). It seems that neither side is responsible for the errors, which is known as the problem of the “responsibility gap”.

Robotic responsibility: can we hold a machine accountable?

However, there is a third option, in which the AI itself bears the responsibility. This scenario is usually dismissed, because how can a machine be held accountable for its actions? In fact, robots can be moral agents if they fulfill three qualities: the robot has sufficient autonomy, its behavior is intentional, and it is in a position of responsibility (Sullins, 2006). In this case, firstly, the robotic arm does not have enough autonomy, since it is controlled by the doctors with a joystick. Secondly, behavior is “intentional” when a complex interaction between the robot’s programming and its environment makes the machine’s actions appear deliberately calculated (Sullins, 2006). This is lacking here, since following the commands of a joystick is not such a complex interaction. Thirdly, to be in a position of responsibility, the robot must be morally responsible to other agents, which is hardly the case, as the robotic system is designed to follow a step-by-step procedure rather than to make decisions concerning other people (Sullins, 2006). Since the robotic arm does not meet these criteria, it is not a moral agent and hence not responsible.

Conclusion

Looking at all the available options, the problem of the responsibility gap proves far more complex: in situations like the one presented in this paper, ascribing responsibility to anyone seems rather impossible. However, this gap can be addressed by identifying “a process in which the designer of a machine increasingly loses control over it, and gradually transfers this control to the machine itself. In a steady progression the programmer role changes from coder to creator of software organisms” (Matthias, 2004).

References

Mack, E. (2020). This Robot Could Perform Your Colonoscopy In The Near Future. Forbes.

Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183.

Sullins, J. (2006). When Is a Robot a Moral Agent? International Review of Information Ethics, 6.
