
Monday, January 17, 2022

A Student's Take on A.I. in Healthcare

Hi fellow Clinical Informaticists, workflow designers, and other clinical architects,

Today's blog post is a slight deviation from my usual posts - It's actually a guest post from a smart young college student, Paul Lestz, whom I recently had the good fortune of working with on an educational internship.

Paul's particular interest is related to the use of Artificial Intelligence (A.I.), so we discussed the current state of A.I. in healthcare, and ways to bring this technology to a broader audience. 

So I'm very happy to report, after reading Paul's blog post below, that 'The Kids Are Alright' - If this is what our future leadership looks like, then I have great confidence in our future. 

Please enjoy Paul's post below : 

______________________________

If after an exhaustive examination of data, an artificial intelligence (A.I.) algorithm were to recommend termination of care for a relative - how would you react? How would you feel if this type of recommendation or decision was made solely by an A.I. algorithm, with no clear human oversight? Does it help to differentiate a recommendation from a decision?

There are few industry-wide reasons to be concerned - at least so far. While some healthcare institutions have begun the deployment of A.I. systems, we are not yet dependent on them for these types of high-risk decisions. Human doctors still have responsibility and remain in control - which means now is a good time to educate ourselves on A.I., including its many compelling benefits, potential risks, and ways to mitigate those risks.
 
While reading, please remember - A.I. is a complicated topic that warrants our attention. Turning a 'blind eye' to A.I. does not mean that the field will not continue to expand into every industry, including healthcare. I hope this post provides some helpful education - as a starting point for future discussions - and helps to reduce the initial intimidation that A.I. discussions often induce.

 

Why do I believe that A.I. will continue to expand into the healthcare industry? It's because of the many potential benefits of using A.I. to manage the high-risk scenarios that healthcare workers commonly encounter. Among others, here are some major benefits offered by A.I.:

 

Adapted from: Artificial Intelligence in Medicine | Machine Learning | IBM

  • Cutting through the noise - A.I. can help make sense of the overwhelming amount of clinical data, medical literature, and population and utilization data to inform decisions.

  • Providing contextual relevance - A.I. can help empower healthcare providers to see expansively by quickly interpreting billions of data points - both text and image data - to identify contextually relevant information for individual patients.

  • Reducing errors related to human fatigue - Human error is costly and human fatigue can cause errors. A.I. algorithms don’t suffer from fatigue, distractions, or moods. They can process vast amounts of data with incredible speed and accuracy, all of the time.

  • Identifying diseases more readily - A.I. systems can be used to quickly spot anomalies in medical images (e.g. CT scans and MRIs).

From my perspective as a student, these are all compelling examples of how A.I. could help develop healthcare into a more modern, efficient, and reliably data-driven patient-care system.

 

Doing this, however, also requires an examination of the challenges that A.I. can bring with it - unsurprisingly, extremely new technology sometimes brings unexpected issues. Some of the known challenges of A.I. implementation include: 

 

Adapted from: The Dangers of A.I. in the Healthcare Industry [Report] (thomasnet.com)

  • Distributional shift - A mismatch in data due to a change of environment or circumstance can result in erroneous predictions. For example, over time, disease patterns can change, leading to a disparity between training and operational data (one simple way to monitor for this is sketched just after this list).

  • Insensitivity to impact - A.I. doesn’t yet have the ability to weigh the real-world consequences of its errors - for example, the difference in harm between a false negative and a false positive.

  • Black box decision-making - With A.I., predictions are often not open to inspection or interpretation. For example, a problem with training data could produce an inaccurate X-ray analysis that the A.I. system cannot explain, and that clinicians cannot easily trace back to its cause.

  • Unsafe failure mode - Unlike a human doctor, an A.I. system may still output a diagnosis even when it has little confidence in its prediction, especially when working with insufficient information, rather than flagging that it does not know.

  • Automation complacency - Clinicians may start to trust A.I. tools implicitly, assuming all predictions are correct and failing to cross-check or consider alternatives.

  • Reinforcement of outmoded practice - A.I. can’t adapt when developments or changes in medical policy are implemented, as these systems are trained using historical data.

  • Self-fulfilling prediction - An A.I. machine trained to detect a certain illness may lean toward the outcome it is designed to detect.

  • Negative side effects - A.I. systems may suggest a treatment but fail to consider any potential unintended consequences.

  • Reward hacking - Proxies for intended goals sometimes serve as 'rewards' for A.I., and these clever machines are able to find hacks or loopholes in order to receive unearned rewards, without actually fulfilling the intended goal.

  • Unsafe exploration - In order to learn new strategies or get the outcome it is searching for, an A.I. system may start to test boundaries in an unsafe way.

  • Unscalable oversight - Because A.I. systems are capable of carrying out countless jobs and activities, including multitasking, monitoring such a machine can be extremely challenging.

  • Unrepresentative training data - A dataset lacking in sufficient demographic diversity may lead to unexpected, incorrect diagnoses from an A.I. system.

  • Lack of understanding of human values and emotions - A.I. systems lack the complexity to both feel emotions (e.g. empathy) and understand intangible virtues (e.g. honor), which could lead to decisions that humans would consider immoral or inhumane.

  • Lack of accountability for mistakes - Because A.I. systems cannot feel pain and have no ability to compensate monetarily or emotionally for their decisions, there is no way to hold them accountable for errors. Blame is therefore redirected onto the many people related to the incident, with no one person ever truly held liable. 
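To make the first challenge above (distributional shift) a little more concrete, here is a minimal sketch of how an analytics team might watch for it - comparing the distribution of a single input feature (e.g. patient age) in the original training data against recent operational data, using a two-sample Kolmogorov-Smirnov test. This is only an illustration; the function name, threshold, and sample values are hypothetical.

    # Minimal sketch: flag possible distributional shift between training data
    # and recent operational data for one input feature (hypothetical example).
    from scipy.stats import ks_2samp

    def check_feature_shift(training_values, operational_values, alpha=0.01):
        """Return (shifted, statistic, p_value) comparing the two samples."""
        statistic, p_value = ks_2samp(training_values, operational_values)
        return p_value < alpha, statistic, p_value

    # Example with made-up patient ages:
    train_ages = [34, 45, 52, 61, 70, 48, 55, 66, 59, 72]
    recent_ages = [72, 75, 81, 79, 68, 77, 83, 74, 80, 69]
    shifted, stat, p = check_feature_shift(train_ages, recent_ages)
    if shifted:
        print(f"Possible distributional shift (KS={stat:.2f}, p={p:.4f}) - review model performance.")

In practice, a team would run this kind of comparison on a schedule, across many features, and treat a flag as a prompt to re-validate (or retrain) the model rather than as a final verdict.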

Rather than feeling discouraged when weighing the benefits of A.I. against the risks above, I'd like to share that there are solutions to many, if not all, of these known risks - through commitment and detailed policy work.

 

For instance, let’s take a look at one of the challenges listed above: automation complacency. At first glance, one might think it would be too difficult to resolve this extremely conceptual issue, intrinsic to the mind of the clinician. However, automation complacency poses little to no problem if the following workflow is implemented:

 

(Sample policy/workflow for managing automation complacency - Click to enlarge)

 

I designed this visual to help simplify the complex process of reducing automation complacency to a few, easy-to-follow steps.

 

Resolving the issues related to A.I. does not mean instantly coming up with a single, lengthy procedure in the hopes that it will work. Instead, resolving challenges means breaking the problem down into pieces and isolating different steps in order to achieve the desired result.

 

When developing the flow chart above, I had to determine exactly what the root of the unwanted issue was: 

 

Q: How could a clinician be biased towards picking the A.I. algorithm’s result without considering alternatives?

A: It would most likely be because they knew the A.I.’s prediction before, or at the time, they made their initial diagnosis.

 

While we, as humans, might think that we are not biased by certain information, this assumption is often an illusion. Subconscious biases tend to be the most powerful because we do not realize how much they affect us.

 

In order to solve this problem, my workflow above mandates that the clinician record and lock in their initial opinion before being shown the A.I. algorithm’s prediction. By doing so, we resolve our first issue of initial, subconscious biases.
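
As a very rough illustration of that 'lock in first' rule, here is a minimal sketch (in Python, with hypothetical class and method names - not any real EHR's API) of how a system might enforce that the clinician's independent impression is recorded before the A.I. prediction can be displayed:

    # Hypothetical sketch: the A.I. prediction stays hidden until the clinician
    # has locked in an independent impression.
    class BlindedReviewCase:
        def __init__(self, ai_prediction):
            self._ai_prediction = ai_prediction   # hidden until the impression is locked in
            self.clinician_impression = None

        def record_clinician_impression(self, impression):
            if self.clinician_impression is not None:
                raise ValueError("Initial impression already locked in.")
            self.clinician_impression = impression

        def reveal_ai_prediction(self):
            if self.clinician_impression is None:
                raise PermissionError("Record your own impression before viewing the A.I. result.")
            return self._ai_prediction

    # Example usage:
    case = BlindedReviewCase(ai_prediction="Pneumonia, right lower lobe")
    case.record_clinician_impression("Atelectasis vs. early pneumonia")
    print(case.reveal_ai_prediction())   # only now is the A.I. output shown

The point is not the code itself, but the ordering it enforces: the independent human judgment comes first, and only then is the algorithm's output revealed for comparison.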

______________________________

 

As I have just demonstrated, solving A.I.-related issues is often a matter of breaking down problems and coming up with small solutions that, together, sum up to a working whole.

 

So, if there are often ways to mitigate the risks of these A.I.-related issues - are we good to go? The answer: it’s complicated.

 

Often, users (e.g. healthcare institutions) are not actually making their own algorithms. Instead, they purchase them. Therefore, one must consider various factors in deciding which A.I. algorithms to purchase. Unfortunately, after an extensive literature search, it doesn't appear that a helpful, cohesive guide exists for what factors to consider when purchasing A.I. solutions, so I would like to propose the following guidelines:

 

 

(Sample questions to consider in A.I. purchasing - Click to enlarge)


I created the infographic above to help frame some helpful questions to ask a vendor when considering the purchase of an A.I. solution.

______________________________

 

Generally, I hope that this piece helps to serve two primary purposes: 

  1. The first is to convince you that, with good understanding and planning - A.I. typically brings about more good than harm in the world. 

  2. (This second purpose assumes that you have already embraced the first) - The second purpose is to convince you not to take A.I. for granted, but to be thoughtful in your approach, so that institutions (and the people who work at them) solve problems, purchase algorithms, and engage with the world of A.I. responsibly.

It's generally important to prepare and 'do your homework' before engaging in A.I. discussions. This preparation is especially important if we want to maximize the benefits of A.I. and minimize the risks. This post’s goal, therefore, is to bring the focal point of A.I. not to its use, but to its purchase. After all, a well-considered purchase combined with a thoughtful implementation often leads to more responsible ownership and successful outcomes. Alternatively, inadequate preparation can lead to unexpected outcomes.

______________________________

 

As a student, and without a deeper knowledge of the exact workflow expectations for a particular circumstance, I am unfortunately unable to offer any more-detailed perspectives. However, I hope this initial post helps to 'get the ball rolling' on some important discussions related to proper A.I. planning, purchasing, and use. The right answers will still need to be evaluated and defined by planners, users, regulatory agencies, and society.

______________________________


Remember this blog is for educational and discussion purposes only - Your mileage may vary. Have any thoughts or feedback to share about A.I. in Healthcare? Feel free to leave in the comments section below!

Sunday, October 4, 2020

Great NEJM Catalyst piece about HealthIT Implementations

Hi fellow CMIOs, CNIOs, Clinical Informatics, and #HealthIT friends,

Short post - Just wanted to share this great piece about HealthIT implementation from the October 2nd New England Journal of Medicine (NEJM) Catalyst, by Christina Pagel, PhD, David W. Bates, MD, MSc, and Donald Goldmann, MD.

"How to Avoid Common Pitfalls of Health IT Implementation" 

LINK https://catalyst.nejm.org/doi/full/10.1056/CAT.20.0048 

This is a super-helpful piece that includes cartoons (!) to help explain common issues in HealthIT implementations.

Since many of you know I'm a big fan of educating using cartoons, I just had to share. Feel free to share this great piece with your colleagues. 

Remember, this site is for educational and discussion purposes only - Your mileage may vary. Have helpful tips or other lessons you've learned from HealthIT implementations? Feel free to share them in the comments section below! 

Sunday, February 17, 2019

Using CPOE Order Modes to Streamline Workflows

Hi fellow CMIOs, CNIOs, and other Clinical #Informatics enthusiasts,

This month, I thought I'd help demystify a common Computerized Provider Order Entry (CPOE) issue that actually has a big impact on clinical workflows - Order modes.


Having a good understanding of order modes is essential to resolving many clinical workflow issues. If you've ever asked yourself : 

  • When is it appropriate to use telephone orders?
  • When is it appropriate to use verbal orders?
  • When is it appropriate to use written orders?
  • When is it appropriate to use protocol orders?
... then you've shared in the very common struggle with CPOE order modes.

Order modes don't need to be confusing. One of the most common sources of confusion stems from the use of the term 'Computerized Provider Order Entry', or 'CPOE'. 
When selecting an EMR, some organizations assume that having a 'CPOE system' implies that all orders will be entered directly by a provider (the 'POE' in 'CPOE') - And that once it is up-and-running, there will no longer be any reason for anyone else to enter orders. Some of those organizations may recognize the need to maintain telephone and verbal orders for emergency purposes, but don't appreciate the same need for written or protocol orders. 
The truth is that while providers entering their own orders is a best practice - ideal and applicable in almost all ordering scenarios - it is not useful, or even possible, in every scenario. For this reason, out of necessity, most EMRs recognize a few different ways that orders can get entered into the EMR. 

I'm hoping this post will help generate more clarity around their use, and how they can help you streamline, and even improve, your clinical workflows. 

A. Order Mode Basics
To better understand order modes and how they help streamline and support workflows, it's first helpful to understand the difference between an order mode and an order status.


(Click image to enlarge)

Basically :
  • Order Status - Tells you whether or not you should be executing ('following') the order
  • Order Mode - Tells you how the order got into the computer
The following slide gives a basic summary of the common order statuses and order modes, found in most electronic medical records : 

(Click to enlarge image)
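
For those who like to think in data-model terms, here is one minimal, hypothetical sketch of the same idea (not any particular vendor's schema) - every order carries both a status (should it be followed?) and a mode (how it got entered):

    # Hypothetical data-model sketch - not any specific EMR vendor's schema.
    from dataclasses import dataclass
    from enum import Enum

    class OrderStatus(Enum):          # whether the order should be executed
        ACTIVE = "active"
        PENDED = "pended"
        HELD = "held"
        DISCONTINUED = "discontinued"

    class OrderMode(Enum):            # how the order got into the computer
        CPOE = "provider-entered"
        TELEPHONE = "telephone"
        VERBAL = "verbal"
        WRITTEN = "written"
        PROTOCOL_NO_SIGNATURE = "protocol (without signature)"
        PROTOCOL_WITH_SIGNATURE = "protocol (with signature)"

    @dataclass
    class Order:
        description: str
        status: OrderStatus
        mode: OrderMode

    # Example: a telephone order that is currently active
    order = Order("Ceftriaxone 1 g IV q24h", OrderStatus.ACTIVE, OrderMode.TELEPHONE)
    print(order.status.name, order.mode.name)

The two fields are deliberately independent: a telephone order can be active, held, or discontinued, just like a provider-entered one.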

It's again important to note that direct provider order entry ('CPOE') may be a best practice in almost all clinical scenarios - But the other order modes exist to support order entry in scenarios where it is impossible or even undesirable for the provider to enter the order directly. So to make sure you're only using those other order modes for the right scenarios, you'll want organizational policies in place to make sure they are being used appropriately and safely. The following policy discussion sheds more light on these scenarios, and at the end I've provided a nice summary table. 

B. Sample Policy Definitions
Since order statuses represent the different states that an order can have inside most EMRs, some [ DRAFTED ] policy-grade definitions for these four common order statuses ('states') might look like this (the movements allowed between these states are sketched just after the list) : 
  • ACTIVE orders - Orders which HAVE been submitted and signed by a licensed prescriber, or by a well-trained, delegated clinical team member on behalf of a licensed prescriber as part of a standardized, clear, well-developed protocol approved by legal, nursing, provider, and pharmacy leadership. These orders are ACTIVE and should be executed in a timely manner, according to the details contained inside the order. Outcomes from all active orders are attributed to the licensed prescriber.
  • PENDED orders - Future orders which HAVE been submitted and signed by a licensed prescriber, in anticipation of planned future release ('activation') at a future date/time by the licensed prescriber, or by a well-trained, delegated clinical team member on behalf of the licensed prescriber as part of a clear, standardized, well-developed protocol approved by nursing, provider, and pharmacy leadership. These PENDED orders are NOT ACTIVE and  SHOULD NOT be executed until they are released ('activated') into ACTIVE order status by a licensed prescriber, or by a well-trained, delegated clinical team member on behalf of a licensed prescriber as part of a standardized, clear, well-developed protocol approved by legal, nursing, provider, and pharmacy leadership. Outcomes from all pended orders are attributed to the licensed prescriber.
  • HELD orders - Previously ACTIVE orders which have been placed on hold ('paused') by a licensed prescriber, or by a well-trained, delegated clinical team member on behalf of a licensed prescriber as part of a standardized, clear, well-developed protocol approved by legal, nursing, provider, and pharmacy leadership. These HELD orders are NOT ACTIVE and SHOULD NOT be executed until they are again released back into ACTIVE order status by the licensed prescriber, or by a trained, delegated clinical team member on behalf of the licensed prescriber as part of a standardized, well-developed protocol approved by legal, nursing, provider, and pharmacy leadership. Outcomes from all held orders are attributed to the licensed prescriber.
  • DISCONTINUED orders - Previously ACTIVE, PENDED, or HELD orders which have been discontinued ('deactivated') by a licensed prescriber, or on behalf of the licensed prescriber by a well-trained, delegated clinical team member as part of a clear, standardized, well-developed protocol approved by legal, nursing, provider, and pharmacy leadership. These discontinued orders must be retained as part of the legal medical record but must NO LONGER be executed for patient care purposes. Outcomes from all discontinued orders are attributed to the licensed prescriber.
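As a rough illustration of how those four states relate, here is a minimal, hypothetical sketch of the status transitions implied by the draft definitions above (released, paused, and discontinued) - it is not any specific EMR's implementation:

    # Hypothetical sketch of order-status transitions implied by the draft
    # definitions above (not any specific EMR's implementation).
    ALLOWED_TRANSITIONS = {
        "PENDED":       {"ACTIVE", "DISCONTINUED"},   # released, or discontinued before release
        "ACTIVE":       {"HELD", "DISCONTINUED"},     # paused, or deactivated
        "HELD":         {"ACTIVE", "DISCONTINUED"},   # released back, or deactivated
        "DISCONTINUED": set(),                        # terminal; retained in the legal record
    }

    def change_status(current, requested):
        """Allow a status change only if the draft definitions above permit it."""
        if requested not in ALLOWED_TRANSITIONS[current]:
            raise ValueError(f"Cannot move an order from {current} to {requested}.")
        return requested

    # Example: a pended pre-op order is released, later held, then discontinued.
    status = "PENDED"
    for step in ["ACTIVE", "HELD", "DISCONTINUED"]:
        status = change_status(status, step)
        print(status)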
And if the order MODES include the different ways that those orders can get into the computer, then some [ DRAFTED ] policy-grade definitions for these different order modes might look like this : 
  1. CPOE ('PROVIDER') order MODE - Routine orders originated, entered directly, reviewed, and immediately signed (authenticated) by a licensed prescriber, allowing the prescriber to follow decision support rules and order designs that guide best practices and identify errors before they occur. 
  2. TELEPHONE order MODE - Orders originated by a licensed prescriber via direct telephone ('voice-to-voice') communication, and transcribed by a Registered Nurse, Registered Pharmacist, or other registered, licensed, and trained, delegated team member on behalf of the originating licensed prescriber according to a well-developed plan approved by legal, nursing, pharmacy, and provider leadership. Telephone orders must be signed by the originating licensed prescriber within _?12_?24_ hours.
  3. VERBAL order MODE - Orders originated by a licensed prescriber via direct verbal ('face-to-face') communication, transcribed by a Registered Nurse, Registered Pharmacist, or other registered, licensed, and trained, delegated team member, on behalf of the licensed prescriber, according to a well-developed plan approved by legal, nursing, pharmacy, and provider leadership. Verbal orders must be signed by the originating licensed prescriber within _?1_?2_?6_ hours.
  4. WRITTEN order MODE - Orders originated by a licensed prescriber via a pre-approved paper form (approved by legal, nursing, pharmacy, and provider leadership), and transcribed by a Registered Nurse, Registered Pharmacist, or other registered, licensed, and trained, delegated team member (according to a well-developed plan approved by legal, nursing, pharmacy, and provider leadership). Since these paper orders must be signed prior to transcription, they [ usually ] do not require re-authentication ('re-signing') after transcription. The original paper orders are part of the legal medical record and should be retained for quality-control purposes. 
  5. PROTOCOL - WithOUT SIGNATURE order MODE - LOW-risk patient care orders which are activated, modified, or discontinued by a Registered Nurse, Registered Pharmacist, or other registered, licensed, and trained, delegated team member, on behalf of an attending prescriber, as part of a standardized, clear, well-developed protocol approved by legal, nursing, pharmacy, and provider leadership. By policy, all child orders from these low-risk patient care protocols are attributed to the attending provider, and do not require signature.
  6. PROTOCOL - WITH SIGNATURE order MODE - HIGH-risk patient care orders which are activated, modified, or discontinued by a Registered Nurse, Registered Pharmacist, or other registered, licensed, and trained, delegated team member, on behalf of an ordering prescriber, as part of a standardized, clear, well-developed protocol approved by legal, nursing, pharmacy, and provider leadership. By policy, all child orders from these high-risk patient care protocols are attributed to the ordering provider, and require signature within __?12_?24__ hours.
You'll notice in the above [ DRAFT ] definitions : 
  • These are all just [ DRAFT ] definitions - You'll want to check with your own legal team before you consider them and approve them for use in your own organization.
  • There are several signature timeframes which are unidentified (E.g. "__?__ hours") - You will want to review them with your own risk, legal, nursing, provider, and pharmacy leadership to decide on an organizational standard for these. Since these orders all carry risks of miscommunication, you will want to set these timeframes to as short a time period as possible. 

COMMON QUESTION : 
Q: Will every provider sign these orders within the assigned timeframes? 
A: Probably not. But you will want to regularly monitor compliance with your organizational standard, and that probably includes provider report cards for CPOE compliance. Some organizations find that connecting these CPOE statistics to compensation helps improve compliance with organizational standards. 
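
As one hypothetical illustration of such a report card (the table and column names here are invented for the example, not pulled from any specific EMR), a simple analysis over the orders data might compute each provider's CPOE rate and count of late-signed telephone/verbal orders:

    # Hypothetical sketch of a provider 'report card' for CPOE compliance.
    # Column names (ordering_provider, order_mode, signed_within_window) are invented.
    import pandas as pd

    orders = pd.DataFrame({
        "ordering_provider":    ["Dr. A", "Dr. A", "Dr. B", "Dr. B", "Dr. B"],
        "order_mode":           ["CPOE", "TELEPHONE", "CPOE", "CPOE", "VERBAL"],
        "signed_within_window": [True,    False,       True,   True,   True],
    })

    report = orders.groupby("ordering_provider").agg(
        total_orders=("order_mode", "size"),
        cpoe_rate=("order_mode", lambda m: (m == "CPOE").mean()),
        late_signatures=("signed_within_window", lambda s: (~s).sum()),
    )
    print(report)

A real report card would of course pull from the EMR's reporting database and apply your organization's agreed-upon signature timeframes, but the basic arithmetic - percent of orders entered directly by the provider, and counts of orders signed late - is this simple.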

C. The Summary Table
Confused by the above definitions? Don't like the policy mumbo-jumbo? To help make more sense out of these order modes, and how they impact workflow, I've put together a little summary table which should help clarify them. It includes a summary of the order modes, WHEN to use them, their risks/benefits, and helpful ways to minimize the risks : 

(click to enlarge image)

Remember, it's all about safety and great patient care. Using the right order modes is essential to designing and implementing workflows that deliver that safe, great patient care. Once you have that good understanding of these modes, and the organizational policies to back them up, it becomes much easier to design clinical workflows that meet the needs of your patients, providers, nurses, pharmacists, and other ancillary staff. 

Hope this was a helpful summary! If you have any questions or feedback, please leave them in the comments section below!

Remember, this post is for educational and discussion purposes only - Your mileage may vary. Do not use any of these standards or definitions without first consulting with your informatics team and legal counsel!

Have your own tips for teaching CPOE order modes, or anecdotes about how they improved your workflows? Feel free to leave them in the comments section below!