When I lived in Los Angeles, neither I, nor anyone I knew, watched the weather. I often wondered how the local meteorologist stayed engaged delivering either “it is going to be hot and sunny today” or “it is going to be kind of hot and sunny today.”

It wasn’t always like this: I grew up in Athens, Ohio, and later lived in Boston and Cleveland. In all three places, I spent time wondering and talking about what the weather would be, how it would affect my activities, and how to plan around it. Accurate weather forecasting made those adjustments possible, and it was incredibly helpful.

Now imagine you’re caught in a hurricane. You have a real-time predictive weather analytics service that grabs you by your drenched wool sweater, screaming, “It’s going to rain, you need to be prepared.”

As the monsoon soaks you, it’s hard to imagine you’d appreciate the update, or be willing to comply when that service demands you acknowledge its incredibly insightful and helpful warning. As clinicians, this describes our daily interactions with clinical decision support.

My academic background is in the decision-making process of experts, physicians specifically, so the promise of computational power to inform expert decisions in clinical medicine, as it does in meteorology, is overwhelmingly exciting to me.

We dream of the day when a caring and empathic physician, armed with the latest artificial intelligence systems, can pinpoint the exact cause of illness and deliver precise treatment and guidance. This would not just help anxious patients avoid catastrophe, but help them thrive in previously unimaginable ways.

Unfortunately, our day-to-day clinical practice realities don’t measure up. Clinical decision support systems provide 1 part useful insight and 99 parts frustrating distraction. Many of them fall into the “if you need to tell me, all is already lost” category. We’re told it’s raining while we’re already soaked.

For example: I am standing in the emergency department, having just received a code 3, high-acuity ambulance carrying a 47-year-old male with a fever of 102, a heart rate of 160, a blood pressure of 70, and an oxygen saturation of 70%. I intubate the patient, quickly place a central line, and then run to my computer to enter orders for antibiotics and the rest of the sepsis bundle. The computer then makes me click through alerts telling me this patient may be sick, complete with justification for why I should consider sepsis. If a clinician sees this patient and does not recognize septic shock, no computer popup will ever save that patient.

Another significant problem? Our clinical decision support systems provide information that does not lead to any action, intervention, or other trajectory change. This information is just more distraction demanding attention in a world where our attention is the most valuable resource.

Instead we get continuous requests to document our rationale for providing evidence-based care, popups detailing drug interactions based on a case report in an alternate dimension, and warnings of side effects already well known to us: the paralytic I need to give to intubate the patient may cause them to stop breathing and require airway management.

As a clinician, I do not want sepsis alerts firing constantly, sending people scrambling and adding chaos to an already chaotic environment. Similarly, I do not appreciate being told, while driving, to take the exit I just passed.

I want someone to tell me that the patient in front of me has a minor infection now but is likely to become septic in one, two, or even three days. I want, as Dan Heath details in his book Upstream, to “solve problems before they happen.” I would call that insight. Then I can prevent sepsis alerts, shock alerts, and other chaos. I can also have a clear pathway for each patient, so the ones who are going to become septic get special attention and closer monitoring in a calm, planned, and deliberate fashion.

Clinical decision support systems don’t need to be distracting. We can design them to stop pointing out patients already in the middle of catastrophe and start helping us prevent them from ever arriving there. We can predict the clinical weather the way meteorologists do, if not in LA, then in Cleveland or Boston. I want something to tell me, “It looks sunny outside now, but there is a 20% chance of rain … so just take an umbrella this morning.”
