Monday, January 17, 2022

A Student's Take on A.I. in Healthcare

Hi fellow Clinical Informaticists, workflow designers, and other clinical architects,

Today's blog post is a slight deviation from my usual posts - It's actually a guest post from a smart young college student, Paul Lestz, whom I recently had the good fortune of working with on an educational internship.

Paul's particular interest is the use of Artificial Intelligence (A.I.), so we discussed the current state of A.I. in healthcare, and ways to introduce this technology to a broader audience. 

So I'm very happy to report, after reading Paul's blog post below, that 'The Kids Are Alright' - If this is what our future leadership looks like, then I have great confidence in our future. 

Please enjoy Paul's post below : 

______________________________

If after an exhaustive examination of data, an artificial intelligence (A.I.) algorithm were to recommend termination of care for a relative - how would you react? How would you feel if this type of recommendation or decision was made solely by an A.I. algorithm, with no clear human oversight? Does it help to differentiate a recommendation from a decision?

There are few industry-wide reasons to be concerned - at least so far. While some healthcare institutions have begun deploying A.I. systems, we are not yet dependent on them for these types of high-risk decisions. Human doctors still have responsibility and remain in control - which means now is a good time to educate ourselves on A.I., including its many compelling benefits, potential risks, and ways to mitigate those risks.
 
While reading, please remember - A.I. is a complicated topic that warrants our attention. Turning a 'blind eye' to A.I. does not mean that the field will not continue to expand into every industry, including healthcare. I hope this post provides some helpful education - as a starting point for future discussions - and helps to reduce the initial intimidation that A.I. discussions often induce.

 

Why do I believe that A.I. will continue to expand into the healthcare industry? It's because of the many potential benefits of using A.I. to manage the high-risk scenarios that healthcare workers commonly encounter. Among others, here are some major benefits offered by A.I.:

 

Adapted from: Artificial Intelligence in Medicine | Machine Learning | IBM

  • Cutting through the noise - A.I. can help make sense of the overwhelming amount of clinical data, medical literature, and population and utilization data to inform decisions.

  • Providing contextual relevance - A.I. can help empower healthcare providers to see expansively by quickly interpreting billions of data points - both text and image data - to identify contextually relevant information for individual patients.

  • Reducing errors related to human fatigue - Human error is costly and human fatigue can cause errors. A.I. algorithms don’t suffer from fatigue, distractions, or moods. They can process vast amounts of data with incredible speed and accuracy, all of the time.

  • Identifying diseases more readily - A.I. systems can be used to quickly spot anomalies in medical images (e.g. CT scans and MRIs).

From my perspective as a student, these are all compelling examples of how A.I. could help develop healthcare into a more modern, efficient, and reliably data-driven patient-care system.

 

Doing this, however, also requires an examination of the challenges that A.I. can bring with it - unsurprisingly, very new technology sometimes brings unexpected issues. Some of the known challenges of A.I. implementation include: 

 

Adapted from: The Dangers of A.I. in the Healthcare Industry [Report] (thomasnet.com)

  • Distributional shift - A mismatch in data due to a change of environment or circumstance can result in erroneous predictions. For example, over time, disease patterns can change, leading to a disparity between training and operational data (one simple way to monitor for this is sketched just after this list).

  • Insensitivity to impact - A.I. doesn’t yet weigh the real-world impact of its errors - a false negative and a false positive can carry very different clinical consequences.

  • Black box decision-making - With many A.I. systems, predictions are not open to inspection or interpretation. For example, a problem with the training data could produce an inaccurate X-ray analysis that the A.I. system cannot explain and that clinicians cannot trace.

  • Unsafe failure mode - Unlike a human doctor, an A.I. system may still output a diagnosis even when it has little confidence in its prediction, especially when working with insufficient information, rather than failing safely and deferring to a human.

  • Automation complacency - Clinicians may start to trust A.I. tools implicitly, assuming all predictions are correct and failing to cross-check or consider alternatives.

  • Reinforcement of outmoded practice - A.I. can’t adapt when developments or changes in medical policy are implemented, as these systems are trained using historical data.

  • Self-fulfilling prediction - An A.I. machine trained to detect a certain illness may lean toward the outcome it is designed to detect.

  • Negative side effects - A.I. systems may suggest a treatment but fail to consider any potential unintended consequences.

  • Reward hacking - Proxies for intended goals sometimes serve as 'rewards' for A.I., and these clever machines are able to find hacks or loopholes in order to receive unearned rewards, without actually fulfilling the intended goal.

  • Unsafe exploration - In order to learn new strategies or get the outcome it is searching for, an A.I. system may start to test boundaries in an unsafe way.

  • Unscalable oversight - Because A.I. systems are capable of carrying out countless jobs and activities, including multitasking, monitoring such a machine can be extremely challenging.

  • Unrepresentative training data - A dataset lacking in sufficient demographic diversity may lead to unexpected, incorrect diagnoses from an A.I. system.

  • Lack of understanding of human values and emotions - A.I. systems lack the complexity to both feel emotions (e.g. empathy) and understand intangible virtues (e.g. honor), which could lead to decisions that humans would consider immoral or inhumane.

  • Lack of accountability for mistakes - Because A.I. systems cannot feel pain and have no ability to compensate monetarily or emotionally for their decisions, there is no way to hold them accountable for errors. Blame is therefore redirected onto the many people related to the incident, with no one person ever truly held liable. 
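To make one of these failure modes more concrete: below is a minimal, purely illustrative sketch in Python of how an institution could watch for distributional shift by comparing a feature's distribution in the training data against recent operational data. The feature (patient age), the made-up numbers, and the significance threshold are all assumptions for the sake of example, not clinical guidance.

```python
# A minimal, illustrative sketch of a distributional-shift check (not a clinical tool):
# compare the distribution of one numeric feature (here, patient age) in the model's
# training data against recent operational data, using a two-sample
# Kolmogorov-Smirnov test. The feature choice and the 0.01 threshold are assumptions.
from scipy.stats import ks_2samp

def feature_has_drifted(training_values, operational_values, alpha=0.01) -> bool:
    """Return True if the two samples look like they come from different distributions."""
    result = ks_2samp(training_values, operational_values)
    return result.pvalue < alpha

# Example with made-up numbers:
training_ages = [34, 45, 52, 61, 67, 70, 72, 75, 78, 81]
operational_ages = [22, 25, 28, 31, 33, 35, 38, 40, 44, 47]

if feature_has_drifted(training_ages, operational_ages):
    print("Possible distributional shift - consider review or retraining.")
```

A real monitoring process would look at many features and outcomes over time, but the basic idea is the same: routinely compare what the model was trained on with what it is now seeing.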

Rather than feel discouraged when weighing the benefits of A.I. against the risks above, I'd like to share that there are solutions to many, if not all, of these known risks - through commitment and detailed policy work.

 

For instance, let’s take a look at one challenge from the list above: automation complacency. At first glance, one might think it would be too difficult to resolve such a conceptual issue, intrinsic to the mind of the clinician. However, automation complacency poses little to no problem if the following workflow is implemented:

 

(Sample policy/workflow for managing automation complacency)

 

I designed this visual to help simplify the complex process of reducing automation complacency to a few, easy-to-follow steps.

 

Resolving the issues related to A.I. does not mean instantly coming up with a single, lengthy procedure in the hopes that it will work. Instead, resolving challenges means breaking the problem down into pieces and isolating different steps in order to achieve the desired result.

 

When developing the flow chart above, I had to determine exactly what the root of the unwanted issue was: 

 

Q: How could a clinician be biased towards picking the A.I. algorithm’s result without considering alternatives?

A: It would most likely be because they knew the A.I.’s prediction before/at the time they made their initial diagnosis.

 

While we, as humans, might think that we are not biased by certain information, this assumption is often an illusion. Subconscious biases tend to be the most powerful because we do not realize how much they affect us.

 

In order to solve this problem, my workflow above mandates that the clinician provide and lock in their initial opinion before being provided the A.I. algorithm’s prediction. By doing so, we resolve our first issue of initial, subconscious biases.
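For readers who like to see this rule expressed concretely, below is a minimal sketch in Python of how a decision-support tool could enforce that ordering. It is purely illustrative - the class, function, and field names are assumptions of mine, not a real system's interface - but it shows how the A.I. prediction can be withheld until the clinician's independent opinion is locked in.

```python
# A minimal sketch of the "lock in first, then reveal" rule at the heart of the
# workflow above. Purely illustrative - the class, function, and field names are
# assumptions, not part of any real EHR or decision-support system.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DiagnosisRecord:
    """Holds the clinician's locked-in opinion and, later, the A.I. prediction."""
    patient_id: str
    clinician_diagnosis: Optional[str] = None
    locked_at: Optional[datetime] = None
    ai_prediction: Optional[str] = None  # stays hidden until the clinician commits

def lock_in_diagnosis(record: DiagnosisRecord, diagnosis: str) -> None:
    """Record and timestamp the clinician's independent initial diagnosis."""
    if record.locked_at is not None:
        raise ValueError("Initial diagnosis has already been locked in.")
    record.clinician_diagnosis = diagnosis
    record.locked_at = datetime.now(timezone.utc)

def reveal_ai_prediction(record: DiagnosisRecord, prediction: str) -> str:
    """Only reveal the A.I. prediction after the clinician has committed."""
    if record.locked_at is None:
        raise PermissionError("The clinician must lock in an initial diagnosis first.")
    record.ai_prediction = prediction
    return prediction

# Example usage (hypothetical values):
record = DiagnosisRecord(patient_id="example-001")
lock_in_diagnosis(record, "community-acquired pneumonia")
reveal_ai_prediction(record, "pulmonary embolism")  # now safe to compare and reconcile
```

The key design choice is that the reveal step fails unless the lock-in step has already happened, so the ordering is enforced by the system rather than by clinician discipline alone.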

______________________________

 

As I have just demonstrated, solving A.I.-related issues is often a matter of breaking down problems and coming up with small solutions that, together, add up to a working whole.

 

So, if there are often ways to mitigate the risks of these A.I.-related issues - are we good to go? The answer: it’s complicated.

 

Often, users (e.g. healthcare institutions) are not actually making their own algorithms. Instead, they purchase them. Therefore, one must consider various factors in deciding which A.I. algorithms to purchase. Unfortunately, after an extensive literature search, I could not find a helpful, cohesive guide to the factors one should consider when purchasing A.I. solutions, so I would like to propose the following guidelines:

 

 

(Sample questions to consider in A.I. purchasing)


I created the infographic above to help frame some helpful questions to ask a vendor when considering the purchase of an A.I. solution.

______________________________

 

Generally, I hope that this piece helps to serve two primary purposes: 

  1. The first is to convince you that, with good understanding and planning, A.I. typically brings about more good than harm in the world. 

  2. The second (which assumes that you have already embraced the first) is to convince you not to take A.I. for granted, but to be thoughtful in your approach, so that institutions (and the people who work at them) solve problems, purchase algorithms, and engage with the world of A.I. responsibly.

It's generally important to prepare and 'do your homework' before engaging in A.I. discussions. This preparation is especially important if we want to maximize the benefits of A.I. and minimize the risks. This post’s goal, therefore, is to shift the focal point of the A.I. conversation not just to its use, but also to its purchase. After all, a well-considered purchase combined with a thoughtful implementation often leads to more responsible ownership and successful outcomes. Conversely, inadequate preparation can lead to unexpected outcomes.

______________________________

 

As a student, and without a deeper knowledge of the exact workflow expectations for a particular circumstance, I am unfortunately unable to offer any more-detailed perspectives. However, I hope this initial post helps to 'get the ball rolling' on some important discussions related to proper A.I. planning, purchasing, and use. The right answers will still need to be evaluated and defined by planners, users, regulatory agencies, and society.

______________________________


Remember this blog is for educational and discussion purposes only - Your mileage may vary. Have any thoughts or feedback to share about A.I. in Healthcare? Feel free to leave them in the comments section below!

Thursday, December 2, 2021

Engineering Healthcare : Through A Historical Lens

Hi fellow CMIOs, CNIOs, Clinical Informaticists, and other HealthIT friends,

I'm writing today to share a presentation I recently did, on engineering Healthcare through a historical lens.

Seems like a peculiar title - but it summarizes a lot of the lessons I've learned in my roughly 13 years of both direct clinical and clinical informatics experience.

Below is the slide deck I used - I'm sharing it in case any of the slides help you develop your presentations on clinical Change/Project Management or Applied Clinical Informatics.

First - My intro slide : 

... which brings me to a brief discussion of our human history of documentation

It was pretty profound to me when I first fully grasped the magnitude of this simple documentation loop, between reading and writing information : 

Unfortunately, despite being open for business for over 300 years - Healthcare has never really had an opportunity to 'pause' and 'fix the plane' - so a lot of changes have happened serendipitously over this long timeframe : 

... which tells us a few things : 

So how can we do better? We need to start thinking like designers and engineers, and plan our workflows and changes by examining those documents that users interact with every day : 


... and if we look at those documents more closely, we see that roughly half of them are contained inside an electronic medical record - And the other half are outside. This gives us the roughly 24 building blocks of all clinical workflows : 


So if we depend on those 24 documents to be the building blocks of all clinical workflows - How do we help make sure these documents are as functional as they need to be? It all starts with functional definitions - Both what it's called, and what it does.


Once you have those functional definitions, this helps you create a working glossary and document templates, to help you quickly develop high-quality documents to build your workflows from : 
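As a purely illustrative aside (this is not one of the slides, and the document names and definitions below are placeholder assumptions), here is a minimal sketch of how a functional definition and a small working glossary might be captured in structured form:

```python
# A minimal, illustrative sketch of a working glossary built from functional
# definitions: what each document is called, and what it does. The entries below
# are placeholders - your institution's glossary will differ.
from dataclasses import dataclass

@dataclass
class FunctionalDefinition:
    name: str      # what the document is called
    function: str  # what the document does

glossary = {
    "policy": FunctionalDefinition(
        "Policy", "States what the organization requires and who is accountable."),
    "procedure": FunctionalDefinition(
        "Procedure", "Describes the step-by-step method for carrying out a policy."),
    "order_set": FunctionalDefinition(
        "Order Set", "Groups related clinical orders to support a defined workflow."),
}

def print_template(term: str) -> None:
    """Print a skeleton document header built from a glossary entry."""
    entry = glossary[term]
    print(f"=== {entry.name} ===")
    print(f"Purpose: {entry.function}")
    print("Owner: ______   Effective date: ______   Review cycle: ______")

print_template("policy")
```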


And to help you further develop your documents, it helps to understand how to build them in the most robust way - Aligning the concepts > terminology > templates > documents > workflows > goals/regulations > mission/vision


Now that you know how to engineer these documents for maximum benefit, it's helpful to figure out how to move (change) from Point A (current) to Point B (future). The distance between these two points gives you a rough estimate of which tools you will need, and of the project scope - how much time, how many people, and what resources it will take to get there.



Once you have your stakeholders and deliverables identified, it's helpful to orchestrate your change in a linear, organized, thoughtful, and predictable manner. For this, I offer up a helpful general-purpose change management recipe : 


If you don't have an organized process for managing/engineering changes - you could fall into one of these engineering pitfalls, which can lead to unexpected outcomes : 



... all of which should be aligned to your policies and procedures, the standards of your organization : 



A few final tips and closing thoughts, about planning, infrastructure, and clinical operations :
 


... and my final thank you and advice : "Control your documents, before they control you."


I hope these slides help you develop your own presentations on Applied Clinical Informatics, and on the importance of solid clinical leadership and clinical change / project management.
Thank you!

Remember - This blog is for educational and discussion purposes only - Your mileage may vary!

Have any secrets about policy writing, workflow development, or project/change management? Feel free to share in the comments section below!!

Saturday, October 30, 2021

Optimizing your Intranet

Hi fellow CMIOs, CNIOs, Applied Clinical Informaticists, and other HealthIT friends,

It's been a while since my last post - As you know, healthcare is very busy adapting to changes brought about by our global COVID-19 pandemic. While the pandemic has been, and continues to be, a great source of sadness and tragedy, it also brings a lot of change - I think a lot of this change is going to be very good, and will facilitate lots of innovative, new ways to deliver care.

So for this post, I thought I'd piggyback onto my last post, "Welcome to Healthcare", by showing how helpful it can be to use a standardized index of healthcare to optimize your organizational Intranet.

Why optimize your Intranet? It's the one 'filing cabinet' that everyone has access to, on their desktop, usually with one click. Imagine... Could your Intranet become a silo-busting, high-value tool that your employees use regularly to quickly find helpful information that helps them troubleshoot problems, plan solutions, and easily learn about the people they work with? Could it also be an internal communication tool that invisibly teaches them about the structure of healthcare? I believe good indexing can do this, and I'll share why I believe this below.

But first - I'd like to provide some background, using one of my heroes, the brilliant Clinical Informatics pioneer Lawrence 'Larry' Weed, MD (1923 - 2017).

Dr. Weed and Dr. Stanley
A treasured photo of me with the great Dr. Larry Weed, 
at the 2014 HIMSS Conference.

If you've ever written a SOAP note, it's because of Larry Weed's 1968 New England Journal of Medicine article, "Medical Records that Guide and Teach" - This was the breakthrough article that changed the way the whole globe writes clinical documentation. A copy of his original article in .PDF format is available on the Washington University web site by clicking here.

It's a fantastic read. What amazes me is that his SOAP note template allowed us, as clinicians, to organize our thoughts and then share them with other clinicians. One could argue that the whole specialization of healthcare in the 1960s and 1970s was made possible through his contributions to clinical documentation! 

In short - Larry Weed was right. You can't separate reading, writing and thinking - They are intrinsically connected. How you read and write shapes how you think. (By the way, if you'd like to learn more about him, you can also see his 1971 Grand Rounds at Emory University by clicking here.)

Now, borrowing from Dr. Weed's lessons that what we read and write shapes how we think - let's look back at the sample index we discussed in my last post. (Remember, your mileage may vary, depending on your institution's needs...)

Sample Healthcare Index
Note : This [DRAFT] sample index may vary from institution to institution, depending on your needs. 
Also, for clarity and brevity, it does not reflect the Board of Directors.

This general-purpose index can help us make seven very helpful Intranet homepages that guide and teach (thank you Dr. Weed!), with landing pages specific to each operational area of your institution, yet connected to each other logically by strategically-designed links and news/announcement banners. For example, using this index :

1. The Administrative Enterprise (1) Homepage would look something like this : 

Administrative Enterprise Homepage (1)

Notice that in each of these pages, for institutional communication and awareness, there are three news banners for Administrative news local to this page, and also news from the other areas of the organization.

2. The Academic Enterprise (1.a) Homepage would look something like this : 

Academic Enterprise Homepage (1.a)

Here again, for awareness - there are three news banners, connecting Academic users with the events happening in the Administrative/Research/Clinical Enterprises, and also the clinical services.

3. The Research Enterprise (1.b) Homepage would look something like this : 

Research Enterprise Homepage (1.b)

Again, with its three news banners, the Research Enterprise Homepage connects users with Administrative, Academic, and Clinical Enterprise news. 

4. The Clinical Enterprise (1.c) Homepage would look something like this : 

Clinical Enterprise Homepage (1.c)

While the first level of news banners here is focused on Clinical Enterprise news, the second level connects with Hospital-Based, Ambulatory-Based, and Off-Campus Services, followed by a third with Administrative, Academic, and Research News. 


5. The Clinical Enterprise > Hospital-Based Services (1.c.i) Homepage would look something like this : 

Clinical Enterprise > Hospital-Based (1.c.i)

Here, the primary news links are related to Hospital-Based News, followed by General Clinical Enterprise and Ambulatory Clinical Service News, followed by Administrative, Research, and Academic News. 

6. The Clinical Enterprise > Ambulatory-Based Services (1.c.ii) Homepage would look something like this : 

Clinical Enterprise > Ambulatory (1.c.ii)

Here, the news links will help connect Ambulatory Users to Ambulatory News, followed by General Clinical Enterprise and Hospital-Based news, followed by Administrative, Research, and Academic news/announcements. 


7. Finally, the Clinical Enterprise > Off-Campus Services (1.c.iii) Homepage would look something like this : 

Clinical Enterprise > Off-Campus (1.c.iii)

Here, the news links help connect Off-Campus Clinical Services with Off-Campus News, followed by General Clinical Enterprise news, followed by Administrative, Research, and Academic News links. 
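To tie the seven pages together, here is a minimal, purely illustrative sketch (not an actual Intranet implementation) of how the index above could drive each landing page's news banners, ordered from local news out to the broader organization. The codes and names follow the seven landing pages described above; the function names and structure are my own assumptions, and only the Hospital-Based ordering is filled in as an example.

```python
# A minimal, illustrative sketch of the sample index and its news-banner ordering.
# The codes and names follow the seven landing pages described above; everything
# else (function names, structure) is an assumption for the sake of example.
healthcare_index = {
    "1":       "Administrative Enterprise",
    "1.a":     "Academic Enterprise",
    "1.b":     "Research Enterprise",
    "1.c":     "Clinical Enterprise",
    "1.c.i":   "Clinical Enterprise > Hospital-Based Services",
    "1.c.ii":  "Clinical Enterprise > Ambulatory-Based Services",
    "1.c.iii": "Clinical Enterprise > Off-Campus Services",
}

# Banner ordering for the Hospital-Based landing page, as described above:
# local news first, then the most closely related clinical areas, then the
# Administrative, Research, and Academic Enterprises.
news_banner_order = {
    "1.c.i": ["1.c.i", "1.c", "1.c.ii", "1", "1.b", "1.a"],
    # ...the other six landing pages would follow the same local-to-broad pattern.
}

def render_banners(page_code: str) -> None:
    """Print the ordered news banners for one landing page."""
    print(f"Landing page: {healthcare_index[page_code]}")
    for code in news_banner_order[page_code]:
        print(f"  [News] {healthcare_index[code]}")

render_banners("1.c.i")
```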

Creating this sort of framework is not easy, and would require a significant investment in time and resources to implement and maintain. One of the biggest challenges would be maintenance - How exactly would you maintain such a framework? Would there be one central 'webmaster' team, or would there be distributed 'webmasters' in different departments, each trained to maintain their area, links, news/announcements, and files?

That being said, I do believe there could be significant benefits to this sort of structure - it educates and empowers all of your employees to strategically find solutions within a few clicks of their landing page.

Either way - I hope this sample index and these designs help you think about how to strategically design and optimize your Intranet for your own institution.

Have any experience with Intranet optimization? See any areas for improvement? Feel free to leave them in the comments section below!

Remember, this blog is for educational purposes only - Your mileage may vary! Do not make any changes to your Intranet strategy without discussing, scoping, prioritizing, and approval from your own leadership teams!