Promises and Challenges of Medical AI

Call for Papers for a Special Issue of Bioethics. Deadline: November 1, 2020

Guest Editors:

Mita Banerjee, Professor and Chair of American Studies, Department of English and Linguistics, University of Mainz, Germany; Norbert W. Paul, Professor and Director of the Institute for History, Theory and Ethics of Medicine, University of Mainz Medical Center, Germany; Nils-Frederic Wagner, Postdoctoral Fellow, GRK Life Sciences - Life Writing, Institute for History, Theory and Ethics of Medicine, University of Mainz Medical Center, Germany

CLOSING DATE FOR SUBMISSIONS: 1st November 2020

Artificial Intelligence (AI) has recently found its way into health care. Areas of application include prevention, monitoring, diagnosis, and treatment recommendations. When it comes to improving both the quality and efficiency of health care, the emergence of medical AI holds a lot of promise. However, it also faces serious ethical and conceptual challenges. For example, medical AI may reinvigorate traditional bioethical controversies about paternalism and respect for patients’ autonomy, and it may reinforce existing biases or create new ones. Moreover, the opacity of medical AI’s inner workings is likely to spark ethical discussions about informed consent as well as raise conceptual issues concerning artificial (moral) agency.

These and related issues appear even more pressing vis-à-vis a predominantly normativist conception in bioethics (endorsed by the WHO), according to which health is a sensitive, personal endeavor centered around what most people value highly: the duration and quality of their lives. What this means in detail, however, varies greatly between different people. Accordingly, medical decisions, including those generated by medical AI, are culturally embedded, involving personal values and preferences, not just clinical information. Providing and receiving health care, then, requires both medical expertise and a trusting relationship between patient and physician that fosters treatment shaped and informed by patients’ values.

Despite the rapid advance of medical AI and its increasingly pervasive clinical application, there is still little systematic bioethical work on fundamental ethical and conceptual issues (most current work focuses narrowly on the ethical implications of specific applications). Current bioethical work tends to shy away from, for example, discussing controversial conceptions of the potentially different meanings of medical AI in specific national and cultural contexts. Moreover, we may ask how medical AI relates to developments such as precision medicine, on the one hand, and personalized medicine, on the other. Generally, a thorough discussion of medical AI appears indispensable when it comes to the future of medicine as such.

This special issue seeks to contribute to a more comprehensive bioethical analysis of medical AI by addressing the following three main areas of inquiry, to be tackled in an empirically informed manner:

(1) Medical AI’s ethical challenges, both in theory and in practice with regard to concrete clinical applications;

(2) medical AI’s epistemic role in generating medical knowledge, including its relation to medical practice;

(3) medical AI’s conceptual footing, including resulting ethical implications.

Accordingly, topics to be addressed by submissions may include (but are not restricted to) questions such as:

• Does the rise of medical AI introduce artificial (moral) agents into the healthcare system?

• How can we simultaneously harness medical AI’s data-crunching power and respect patients’ diverse, value-laden preferences?

• Does the advent of medical AI require rethinking the canonical principles of bioethics?

• Is there a need to reevaluate the ideal of patient-centered care, with its emphasis on autonomy and dignity, in the age of medical AI?

• How does medical AI change the relationship between patients and physicians?

• How does medical AI impact the relationship between medical knowledge and medical practice?

• How do the opaque aspects of medical AI affect physicians’ and patients’ health literacy?

• How can we safeguard human oversight, data security, control, and responsibility in light of shared decision-making involving opaque medical AI?

• How can we measure medical AI’s performance, given the lack of a gold standard in many medical disciplines?

• How does medical AI fare in adequately assessing people’s subjective well-being and perceived quality of life?

• How is the use of medical AI different in countries and regions with poor health care infrastructure?

The guest editors invite contributions from scholars in medical ethics, bioethics, philosophy (of technology), health sciences, medicine, psychology, and other relevant areas to answer these and related questions. Consistent with Bioethics’ norms, we will, all else being equal, prefer theoretical works with practical suggestions and practical works that engage theory. Please refer to Bioethics’ Author Guidelines for more information.

The guest editors welcome early discussion of brief proposals and/or abstracts by email to: n.wagner(at)uni-mainz(dot)de.

Manuscripts should be submitted to Bioethics online at: http://mc.manuscriptcentral.com/biot.

Please make sure to select the manuscript type ‘Special Issue’ and state that your contribution is for the ‘Medical AI’ Special Issue when prompted.
