Before doctors will accept the recommendation of an AI-driven clinical program, they must be able to see and examine the process by which the algorithm reaches its conclusions. Only then can they trust those conclusions enough to incorporate them into their own recommendations for patients.

AI-driven clinical programs could eliminate waste, fraud, and abuse in medical practice, but much progress is still needed to make their algorithms less opaque. Responsibility for the diagnosis remains with the doctor, and no finger will be pointed at the AI.

Stephanie Baum outlines the issue in this excerpt from MedCityNews:

Looking at the legal considerations for AI implementation in healthcare, (lawyer Ryan) Johnson noted that malpractice risk is an important issue for providers and healthcare professionals. Physicians cannot blindly rely on AI clinical recommendations.

“Physicians still have primary responsibility for clinical decision-making, and the standard of care does not yet treat AI-based recommendations as superior to physician judgment.” However, Johnson notes, “At some point in the future, when AI clearly outperforms human diagnoses and treatment recommendations, it might be malpractice per se not to follow the AI recommendation.” That raises the question: If the process showing how an algorithm reaches its recommended decision is not transparent, how does the clinician evaluate whether the recommendation is right?

Johnson noted that the FDA has been very supportive of digital health and has worked to provide rules that encourage, rather than stifle, innovation. Some software driving clinical decisions is treated as a medical device subject to FDA jurisdiction, and Johnson notes that the FDA has emphasized that clinical decision support software should allow licensed professionals to independently review the basis for the software's recommendations.

The black-box issue makes the use of AI even more challenging. Can the algorithm consistently reproduce its results? If the algorithm relies on incomplete information, for example a fuzzy image, it could reach a faulty conclusion, and it is the provider, not the algorithm, who takes on the malpractice risk.
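
To make the contrast concrete, here is a minimal, purely illustrative Python sketch of what a reviewable (non-black-box) recommendation could look like: instead of returning only a yes/no flag, the hypothetical `recommend()` function also returns the weighted contribution of each finding, so a clinician can inspect the basis for the output. The feature names, weights, and threshold below are invented for illustration and are not a real clinical model.

```python
import math

# Hypothetical feature weights for an illustrative sepsis-risk score.
# These values are made up for the example and are NOT a real clinical model.
WEIGHTS = {
    "heart_rate_elevated": 1.2,
    "temperature_abnormal": 0.9,
    "white_cell_count_high": 1.5,
    "lactate_elevated": 2.0,
}
BIAS = -3.0
THRESHOLD = 0.5


def recommend(findings: dict) -> dict:
    """Return a recommendation plus per-finding contributions,
    so a clinician can independently review the basis for it."""
    contributions = {
        name: WEIGHTS[name] * findings.get(name, 0.0) for name in WEIGHTS
    }
    score = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-score))  # logistic link
    return {
        "flag_for_sepsis_workup": probability >= THRESHOLD,
        "probability": round(probability, 3),
        # The "basis" a reviewer would inspect: how much each finding
        # pushed the score toward or away from the threshold.
        "contributions": {k: round(v, 2) for k, v in contributions.items()},
    }


if __name__ == "__main__":
    # Each finding is encoded crudely as 1.0 if present, 0.0 if absent.
    patient = {
        "heart_rate_elevated": 1.0,
        "temperature_abnormal": 1.0,
        "white_cell_count_high": 0.0,
        "lactate_elevated": 1.0,
    }
    print(recommend(patient))
```

A truly black-box system would return only the flag; exposing the contributions alongside it is one way, under these simplifying assumptions, that software could let a clinician evaluate whether a recommendation is right before acting on it.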