If the use of tools sets humans apart from other species, then the practice of medicine might be the most human of activities. But the introduction of new automated technologies—the dawn of artificial intelligence (AI) and deep learning algorithms—presents new relationships between physicians and their tools, which in turn raise new questions about the organization of medicine and the roles physicians play within it.

How Physicians Use Tools

Physicians have traditionally been the masters of their tools. Some tools, such as textbooks and digital research libraries, diagnostic tests and scans, stethoscopes and microscopes, and health records (whether electronic or paper), are informational. These tools convey information and accumulated knowledge that assist physicians in assessing the facts before them.

Some tools, such as scalpels, defibrillators, or lasers, are operational. These tools combine with a physician’s skill to execute therapeutic duties and provide direct care to patients.

With the advance of medical technologies, a third category, which we could call maintenance tools, has emerged and grown rapidly. These tools, commonly associated with medical devices and medications, are designed to perform a function long after the physician’s direct role has ended. These include heart pacemakers, artificial hips, insulin pumps, pharmaceuticals, and increasingly, digital apps.

Central to informational, operational, and maintenance tools is that their value derives from accompanying the physician. They inform a physician’s judgment, but the ultimate judgment still belongs to the physician. They assist a physician’s services, but the services still flow from the physician’s skills. They allow for ongoing care outside of a physician’s direct supervision, but the physician still prescribes the care regimen and monitors treatment.

Organization And Authority In Medicine

Understanding these categories of tools helps explain the organization and regulation of medicine.

Informational tools, for example, lie within the domain of physician self-regulation. Because informational tools inform a physician’s judgment, typically in the process of making diagnoses, only fellow physicians possess the relevant expertise and professional experience to scrutinize a colleague’s use of the provided information. Should an error occur, internal quality management has historically been organized through formal peer review, in which physicians gather to review adverse diagnoses and pursue paths for collective learning. State regulation of mistakes of judgment rests primarily on professional licensure boards, which (although arms of the state) are composed of fellow providers who articulate standards of professionalism. Even when physicians are sued, a plaintiff must call upon a physician expert to show that the defendant physician deviated from a professional standard. When a physician’s judgment is scrutinized, authority lies with peer review and self-regulation.

Maintenance tools that operate outside a physician’s direct control are governed by a different authority, as most come under the scrutiny of the Food and Drug Administration (FDA) or product liability law. Still, the lines of authority aim for a clear division of labor. Makers of these devices must convince FDA regulators that they will not malfunction, but determination of their proper use remains under physician control. The FDA aims to permit into the marketplace only the maintenance tools that are safe for patients, but it has steadfastly maintained that it does not regulate the practice of medicine and defers prescribing authority to physicians.

Oversight of operational tools presents a hybrid between informational and maintenance tools, but it rarely presents new questions. If a physician’s use of an operational tool is problematic, then redress falls under the domain of physician self-regulation. If the tool itself malfunctions, then it is scrutinized by the approving agencies or examined under product liability law.

The Unique Challenge Of AI And The Rethinking Of Physician Authority

The introduction of AI technologies poses a significant challenge to these familiar categories and, in turn, to medicine’s traditional divisions of authority. AI informational tools do not merely inform a physician’s judgment but instead assemble information and produce a judgment on their own. Algorithms mine databases of biomarkers, health records, and even consumer data to produce diagnoses and target useful preventive care. Digital apps offer “virtual health assistants” that take and monitor biometric data over time, particularly for those managing illnesses, and then automatically suggest necessary medical care.

AI is also expanding the role of operational tools. Artificial intelligence systems can examine notes and reports from a patient’s file to help select customized surgical techniques, rehabilitative treatments, or even mental health care regimens. And maintenance tools are exploding with AI technologies, from adaptive implants to lifestyle apps. These products take advantage not just of machine learning but also of an abundance of cloud-based data.

The overly simple conclusion from these technologies is that physicians are not, and cannot be, the sole source of medical information and judgment, and that their tools are assuming an ever-greater share of tasks that were previously the physician’s alone. But more is at stake than merely the displacement of the physician’s role. The most vexing, and perhaps the most prominent, feature of these AI technologies is that they operate within a black box. One can observe the data that are inputted and the outcome that is recommended, but it is virtually impossible to dissect the mechanism that connects the two.

This black-box algorithm not only supplants a physician’s judgment, but it also prevents the physician from reviewing the judgment itself. The health sector has not fully confronted the implications these products have for its traditional lines of authority and control.

To be sure, the black-box feature of AI-fueled medicine offers benefits. The creation of a closed-loop, self-learning algorithm allows for immediate recommendations, swift incorporation of new information, and real-time implementation of changes without delays in supervision. This opacity is a feature, not a bug, but it forecloses human supervision over activities that have traditionally fallen within the exclusive domain of physicians.

Alternative Paths Ahead

If physicians have traditionally monitored the diagnostic and operational roles of other physicians, must they also monitor the tasks that physicians have delegated to machines? Can they? And as maintenance tools become increasingly self-sufficient, what role can physicians play in a patient’s ongoing care? We formulate three alternative but not mutually exclusive approaches to the coming organizational, and perhaps existential, challenge that AI presents for modern medicine.

One strategy is to treat AI just as medicine has treated prior tools. Under this approach, the priority is to ensure that the tools serve the physician and that the physician remains in control. Technologies that make physicians’ jobs more taxing, alienating, or error-prone should be tamed to conform with physicians’ familiar relationship with their tools. Many point to the struggle over electronic health records (EHRs) as a model. EHRs, when improperly used, can become so unwieldy as to overwhelm physicians and undermine their effectiveness. A common strategy has been to tame EHRs so that they work alongside physicians as useful clinical decision support technologies.

A second strategy is to recognize that a radical transformation in health care is underway and to adapt accordingly. It is not unreasonable to imagine that interactive devices (Alexa, Siri, and so forth) will serve as a patient’s primary interface when seeking health care, and that AI technologies will inform, direct, and administer care to patients without relying on meaningful physician contact. To prepare for this world, physicians must learn to master these technologies or risk seeing their role as health care providers shrink significantly (perhaps even being displaced entirely by Silicon Valley products). Physicians therefore should team with computer scientists and other technologists, revisit the exclusivity that physicians have enjoyed in self-regulation, and reconstruct a delivery system that abandons traditional patterns of organization and authority.

A third strategy, perhaps a middle ground, is to think carefully about what health care providers can do that machines cannot. Many of the informational, operational, and maintenance services that physicians provide might be replaced by machines, but machines will never be human. Therein lies some kernel of irreplaceability that human providers (not just physicians) can and must offer. Ironically, physicians played this role historically. Before medical science armed them with a wealth of new tools, physicians tended to the dying, comforted the sick, and served as compassionate custodians of health. But the profession moved away from this role as medical science exploded with new capabilities.

Conclusions And Implications

In AI-guided medicine, we are approaching a point where we can discern neither what the machine did nor why it did it. That is a problem in and of itself, but it also is a problem for the organization and regulation of medicine. Should medical systems reconsider the role of peer review, perhaps having it led by computer technicians instead of physicians? Should the FDA and tort law regulate AI tools as they do other medical devices, or might that encroach on the practice of medicine?

Most important, what role should physicians play if their tools can do nearly everything that they can? If it is the use of tools that has made medicine consummately human, what if the tools start to expunge the need for humans?

An easier conclusion is that the traditional categories of tools and responsibilities are bound to get messier. In little time, the regulation and organization of medicine—of physicians supervising physicians and product agencies supervising devices—will need to be rethought, and it would benefit physicians to be at the forefront of that rethinking.