Oncology researchers raise ethics concerns posed by patient-facing AI
New York, Nov 4 (IANS) Patients with cancer are increasingly likely to find themselves interacting with artificial intelligence technologies to schedule appointments, monitor their health, learn about their disease and its treatment, and find support.
In a new paper published in the journal JCO Oncology Practice, bioethics researchers led by an India-origin scientist at Dana-Farber Cancer Institute in Boston, the US, call on medical societies, government leaders, clinicians, and researchers to work together to ensure AI-driven healthcare preserves patient autonomy and respects human dignity.
The authors said that while AI has immense potential for expanding access to cancer care and improving the ability to detect, diagnose and treat cancer, medical professionals and technology developers need to act now to prevent the technology from depersonalising patient care and eroding relationships between patients and caregivers.
“To date, there has been little formal consideration of the impact of patient interactions with AI programs that haven’t been vetted by clinicians or regulatory organizations,” said Amar Kelkar, a stem cell transplantation physician at Dana-Farber Cancer Institute.
“We wanted to explore the ethical challenges of patient-facing AI in cancer, with a particular concern for its potential implications for human dignity,” Kelkar added.
The authors focused on three areas in which patients are likely to engage with AI now or in the future.
Telehealth may use AI to shorten wait times and collect patient data before and after appointments.
Remote monitoring of patients’ health may be enhanced by AI systems that analyse information reported by patients themselves or collected by wearable devices.
Health coaching can employ AI — including natural language models that mimic human interactions — to provide personalised health advice, education and psychosocial support, they wrote.
For all its potential in these areas, AI also poses a variety of ethical challenges, many of which have yet to be adequately addressed.
The authors cite several principles to guide the development and adoption of AI in patient-facing situations — including human dignity, patient autonomy, equity and justice, regulatory oversight, and collaboration to ensure that AI-driven healthcare is ethically sound and equitable.
To ensure patient autonomy, patients need to understand the limits of AI-generated recommendations.
“The opacity of some patient-facing AI algorithms can make it impossible to trace the ‘thought process’ that led to a treatment recommendation.
It needs to be clear whether a recommendation came from the patient’s physician or from an algorithmic model raking through a vast amount of data,” said Kelkar.