Wednesday, November 28, 2012

What exactly is "Alert Fatigue"?

Clinical Decision Support (CDS) - It's the mythical creature that every healthcare administrator and informaticist is hunting, in hopes of reducing costs and improving care. Loosely, it can be broken into a few different areas :
  1. Electronic decision support (e.g. CPOE Alerts to help prevent errors)
  2. Order / order set design (e.g. to help prevent errors / guide docs to evidence-based care)
  3. Workflow/documentation redesign (e.g. tools used to standardize high-risk decisions, such as procedure checklists)
  4. Workflow/protocol design (e.g. tools used to automate high-risk procedures)
One of the hardest to tackle is #1 - CPOE Alerts. Are there too many, or too few? Everyone I know seems to be struggling with the same issue :
  • Wanting to provide CPOE alerts to avoid errors, but
  • Worrying that providing "too many alerts" will cause docs to ignore the "important alerts".
This phenomenon is loosely called alert fatigue, and it has been fairly well-documented in the literature as, paradoxically, a potential risk in itself.
When you hear Informatics professionals discuss alert fatigue, the challenging part is actually knowing when alert fatigue exists. Docs sometimes complain about it, but the response they get is often skepticism - After all, how can an alert be bad? Maybe the doc just complains too much? And who is going to turn off the alert? Is it safe to turn off the alert? What if this opens up other problems? When is it too much? When is it too little?

So when you ask docs to define alert fatigue, they typically use general, loose definitions, like :
  • "It's when the system gives me too much information and I miss the important stuff."
  • "It's when the system tells me about the Tylenol interacting with Colace, but I miss out on the Coumadin/Bactrim interaction."
  • "It's when I can't read all of the alerts."
  • "It's when I just keep clicking 'Bypass' without actually reading the alert."
  • "It's when I just keep clicking 'Acknowledge' without actually reading the alert."
  • "It's when I click 'bypass' within 3 seconds, so I know I didn't read the alert."
And recently, when I asked some informatics colleagues for their definition of alert fatigue, I again got a myriad of responses, followed by the same sort of response Supreme Court Justice Potter Stewart gave in 1964, when defining "obscenity" in the Jacobellis v. Ohio case : "I know it when I see it."

Unfortunately, this doesn't help much for those of us who are really working to combat alert fatigue.
The problem with all of these definitions is that they are fairly loose and subjective, and don't make a good litmus test to answer the question : Do you have alert fatigue?

So I'm going to use some reason and inference, to try to develop a better definition of alert fatigue that is quantifiable. (I used to be a mathematician/statistician, so please forgive the quasi-mathematical approach.)

It seems the "undesired scenario" nobody wants is made up of two parts :
  • An EMR providing a confusing alert environment, and
  • A doc displaying signs of poor response to that environment
So I'd like to submit two proofs, for two conditions, which then feed into a third proof. Here they are :

PROOF1 : "AlertOverload"
1. [AlertOverload] = [Bad] > [Good]
2. [AlertOverload] = [Noise] > [Signal] 
3. [AlertOverload] = [Low-value alerts] > [High-value alerts]
4. [AlertOverload] = [Low-risk alerts] > [High-risk alerts] 
5. [AlertOverload] = [# of low-risk alerts] > [# of high-risk alerts] 
6. [AlertOverload] = [Number of low-risk alerts in a time period] > [Number of high-risk alerts in a time period]
7. [AlertOverload] = When the number of low-risk alerts exceeds the number of high-risk alerts for a given physician in a given time period

PROOF2 : "AlertLoss"
1. [AlertLoss] = [Bad] > [Good]
2. [AlertLoss] = [BypassedAlert] > [AcknowledgedAlert] 
3. [AlertLoss] = [Number of bypassed alerts in a given time period] > [Number of acknowledged alerts in a given time period] 
4. [AlertLoss] = When the number of bypassed alerts exceeds the number of acknowledged alerts in a given time period

If one were to accept proofs #1 and #2 as true, then I would propose this final proof/definition of AlertFatigue :

PROOF3 : "AlertFatigue"
1. [Bad] = [Bad] 
2. [AlertFatigue] = [Bad]
3. [AlertFatigue] = [AlertOverload] + [AlertLoss] 
4. [AlertFatigue] = Exists when a given physician experiences [AlertOverload] and displays [AlertLoss] in a given time period

So voila - My proposed definitions :

  1. Alert Overload = When the number of low-risk alerts exceeds the number of high-risk alerts for a given physician in a given time period
  2. Alert Loss = When the number of bypassed alerts exceeds the number of acknowledged alerts in a given time period
  3. Alert Fatigue = When a given physician experiences alert overload and displays evidence of alert loss in a given time period
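
For anyone who wants to tinker, here's a minimal sketch in Python of how one might compute these three flags from a CPOE alert log. To be clear, this is just an illustration : the field names ("risk", "response") and the sample data are hypothetical stand-ins for whatever your EMR's audit log actually records.

# Flag AlertOverload, AlertLoss, and AlertFatigue for one physician
# over one time period, per the proposed definitions above.

def alert_overload(alerts):
    """True when low-risk alerts outnumber high-risk alerts."""
    low = sum(1 for a in alerts if a["risk"] == "low")
    high = sum(1 for a in alerts if a["risk"] == "high")
    return low > high

def alert_loss(alerts):
    """True when bypassed alerts outnumber acknowledged alerts."""
    bypassed = sum(1 for a in alerts if a["response"] == "bypassed")
    acknowledged = sum(1 for a in alerts if a["response"] == "acknowledged")
    return bypassed > acknowledged

def alert_fatigue(alerts):
    """True when the physician shows both AlertOverload and AlertLoss."""
    return alert_overload(alerts) and alert_loss(alerts)

# Hypothetical week of alerts for one physician
week = [
    {"risk": "low", "response": "bypassed"},
    {"risk": "low", "response": "bypassed"},
    {"risk": "high", "response": "acknowledged"},
]
print(alert_fatigue(week))  # True : 2 low > 1 high, and 2 bypassed > 1 acknowledged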

It's certainly not a universally-recognized definition, and I'm curious whether anyone is aware of other professional, practical, policy-grade definitions out there. Obviously, this definition still needs to be peer-reviewed, tested, validated, and professionally accepted, so please don't use it in your own organization without first consulting a legal professional, an informatics professional, and your local regulatory agencies.

Remember : As always, this discussion is for educational purposes only, and your mileage may vary! I always enjoy your thoughts, comments, and ideas!

9 comments:

JSW said...

Hi Dirk, I like having subjective (person's experience) as well as objective (measurable process) components to the definition. I also think applying a Human Factors Model approach helps identify key components: people engaged in processes using tools to accomplish desired goals in a community/policy/physical/social environment (full citation below).
-Jon
Jonathan S Wald, MD, MPH
RTI

Human Factors model - page 62 of a free report available from the National Research Council. (2011). Health Care Comes Home: The Human Factors. Committee on the Role of Human Factors in Home Health Care, Board on Human-Systems Integration, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

Unknown said...

Hi Dirk,

I understood the "quasi-math" but my first thought is about your presumption that to be undesirable, an alert must be "confusing". Obviously, a confusing alert is BAD and undesirable - but undesirable alerts for physicians can also be just TOO MANY alerts during a specified period of time, or ones that they see over and over again (such as when prescribing a specific drug).

Also agree that human factors have much to do with the definition of undesirable alerts. I am sure there are even some docs who believe there are not enough! (Is that possible?)

JoAnn Jordan, MPH

Unknown said...

Hi Dirk,

I understand your proof, and even appreciate the high school flashback :) My first thought is about the presumption that for an alert to be undesirable, it must be confusing. I believe that many docs simply find the volume of alerts undesirable, as well as specific types that "pop up" at them while prescribing certain meds, for example. I would suggest that many of these undesirable alerts are not confusing at all, just annoying :)

JoAnn Jordan, MPH

DeanSittig said...

Dirk: your proofs seem to assume that all alerts are "correct", i.e., representing some truth about the patient. This is not a good assumption. Many alerts are incorrect for a number of reasons, including: unavailable or inaccurate data, errors in logical processing (e.g., software bugs), and situation-specific clinical exceptions (e.g., a user request for blood transfusion denied by a computer-generated intervention that did not capture active bleeding since the last hemoglobin result). See: CMAJ 2012. DOI:10.1503/cmaj.111599.

I would also call incorrect an alert that suggests a drug-drug interaction exists for two medications the patient has previously been taking together successfully, i.e., without adverse effects.

Therefore, I postulate that alert fatigue is when the number of incorrect alerts is greater than the number of correct alerts.

I think that we should be striving for > 80% acceptance of our alerts. If we cannot reach that figure, then we should re-think our alert strategy. At LDS Hospital they have many alerts that reach this goal.
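
In code form, those two thresholds might look like this - a quick sketch in Python, with purely hypothetical counts :

# Dean's postulate (incorrect alerts outnumbering correct ones) and the
# > 80% acceptance target, computed from made-up counts for illustration.
incorrect, correct = 130, 45   # alerts judged incorrect vs. correct
accepted, fired = 620, 1000    # alerts accepted vs. total alerts fired

fatigued = incorrect > correct  # Dean's postulate of alert fatigue
acceptance = accepted / fired   # overall acceptance rate
print(f"Fatigue (incorrect > correct): {fatigued}")
print(f"Acceptance rate: {acceptance:.0%} (target: > 80%)")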

Heather Leslie said...

Your post made me smile! I love it when people approach things so differently.

I have been teasing out the notion of alerts and warnings from a clinical modelling point of view over the last few days, so this topic has been high in my thoughts, but you have put a completely different spin on what is, to me, a very complex topic. I may need to post my thoughts on my blog in a similar way soon.

Alert fatigue is real, and the subsequent risk is real.

The art of the modern EHR is how to create alerts sensibly, to ensure that alerts that need to be flagged are presented correctly (as Dean suggested) - correct and accurate from an evidence viewpoint; contextually correct from a patient point of view. We need to make clinicians want to read them because they know the alert will provide them with value. That is totally orthogonal to the current situation in most clinical systems - alerts are commonly: too numerous; irrelevant; confusing; not applicable to this patient; not timely; wrong tone; too detailed or not detailed enough; disruptive to thought processes or workflow (eg popups); etc.

The challenge is how to filter to ensure that only relevant alerts are presented to the clinician such that they will read them and act upon them.

So to me it is most important to define what is a 'good' alert. Then work out how to present them - and annoying pop-ups are to be avoided at all costs.

Interestingly, the National Prescribing Service here in Australia sends out proposed alerts for its Radar product to clinicians for review. These alerts are displayed the first few times a new drug might be prescribed, so they have a distinct education flavour and are opt-in for clinicians (so yes, you could argue that the intent is slightly different, but the approach is relevant). They develop a new alert for selected new and strategic medicines released to the market. NPS proposes some structured and key prescribing informative text about each medicine. For each piece of information they ask reviewers: Is the message clear or confusing? Is the content practical or unrealistic? Is it helpful or condescending? Should we include this or omit it? - Interesting metrics, I think, and certainly time consuming. But they are doing their market research to try to be useful to the clinicians, and I admire that.

I suspect that the majority of knowledge bases authoring alerts or content that might be used within alerts don't market test them in this kind of way. And the reality is that for large amounts of data it may not be sustainable.

Yet it is worth considering how to create quality alerts that are worth the clinicians taking the time to read them!

Dirk Stanley, MD, MPH said...

Hi Dean -

Appreciate your feedback about "correct" versus "incorrect" alerts -

I don't mean to sound like I'm trying to categorize "correct" versus "incorrect" alerts, since I'm not sure how one would define a "correct" alert -

For example, our EMR sometimes warns us with a "Fleet-enema-versus-_______" alert, which fires because Fleet Enemas help wash out the colon - So any drug that is still being absorbed in the colon may have a somewhat decreased absorption.

Most docs intuitively look at that as an "incorrect" alert, since it typically bears little clinical relevance, e.g. 1. Most patients only get a fleet enema for a day or two, and 2. The impact of reduced absorption of a drug for a day or two is typically minimal...

But rather than labeling it with the subjective "incorrect alert", my framework labels it with the more objective "low-risk alert" -

Which docs often subjectively see as an "incorrect alert" -

So I'm hoping this discussion bears the fruit of a mathematical "litmus test" that can help identify whether there is an objective measurement of alert fatigue.

And so, ideally, I think a CPOE alert strategy should demonstrate a relationship between exposure and outcome. That is :

EXPOSURE = High-risk vs. low-risk alert

OUTCOME = Accepted alert vs. bypassed alert

And so if there is no relationship between exposure and outcome, I would argue that there is alert fatigue.

By putting this into a 2x2 table, it would then allow someone to calculate an NPV, PPV, sensitivity, specificity, true positives, true negatives, a correlation coefficient, and a p-value - which are much more helpful in evaluating and guiding a CPOE alert strategy than physician anecdotes and subjective opinions alone.
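
For illustration, here's a rough sketch in Python of that 2x2 analysis, using SciPy's Fisher exact test for the p-value - the counts are entirely made up, and real numbers would come from the EMR's alert log :

# 2x2 table: EXPOSURE (high- vs. low-risk alert) by OUTCOME (accepted
# vs. bypassed). Counts are hypothetical, for illustration only.
from scipy.stats import fisher_exact

#            accepted  bypassed
high_risk = [40, 10]
low_risk = [30, 120]

tp, fn = high_risk  # accepted high-risk alerts are "true positives"
fp, tn = low_risk   # bypassed low-risk alerts are "true negatives"

sensitivity = tp / (tp + fn)  # P(accepted | high-risk)
specificity = tn / (fp + tn)  # P(bypassed | low-risk)
ppv = tp / (tp + fp)          # P(high-risk | accepted)
npv = tn / (fn + tn)          # P(low-risk | bypassed)

odds_ratio, p_value = fisher_exact([high_risk, low_risk])
print(f"Sens={sensitivity:.2f}  Spec={specificity:.2f}  "
      f"PPV={ppv:.2f}  NPV={npv:.2f}  p={p_value:.4f}")
# No exposure/outcome relationship (a p-value near 1.0) would suggest
# alert fatigue under this framework.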

I really like Richard Schreiber's idea of adding a coefficient to help magnify/reduce the exposure/outcome effect... Will chew on that before making a second blog post about this.

Thanks to everyone for your feedback and citations - They are all very-much appreciated. :)

- Dirk ;)

Eric said...

Hi Dirk,

I enjoyed this post. I shared it with the pharmacy informatics class that I teach; it happened to coincide with the in-class topic, and your examples worked well in the discussion. Thanks!

Eric

John Horn, PharmD, FCCP said...

Dirk, well stated. Alert fatigue is a problem that can be solved. We have done drug-interaction database customization for institutions that wish to reduce the number of inappropriate alerts. We do this by defining the potential interactions that can cause harm and thus have excessive risk/benefit ratios. We employ our ORCA classification system for drug interactions. Typically we reduce the severity rating of about 70% of the entries in the vendor's database. The result is fewer alerts, with few false positives.
John Horn, PharmD

Unknown said...

This was helpful. The problem is real and hard to define. I think EMRs should have a way of measuring this automatically for each user and reporting it in a dashboard to the admin team. One trick could be putting two or more options on the button that bypasses the alert: "acknowledged, helpful" & "acknowledged, not helpful." One could also study how long the alert delayed the response, as a fraction of the time the specific user takes to pass a similar screen in the same EMR, as a measure of user cognitive engagement.
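
A rough sketch of that last measure in Python - the cutoff and the numbers below are made-up illustrations :

# Time-to-dismiss for an alert, as a fraction of the same user's median
# time on similar screens. A very low ratio suggests the user clicked
# through without reading. The 0.25 cutoff is a made-up illustration.
from statistics import median

def engagement_ratio(dismiss_seconds, baseline_screen_seconds):
    """Dismissal time relative to the user's usual time on similar screens."""
    return dismiss_seconds / median(baseline_screen_seconds)

baseline = [12.0, 9.5, 15.0, 11.0]       # user's usual seconds per screen
ratio = engagement_ratio(2.1, baseline)  # this alert was dismissed in 2.1 s
if ratio < 0.25:
    print(f"Possible click-through: ratio={ratio:.2f}")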