Sunday, March 27, 2022
Hi fellow CMIOs, CNIOs, and other Applied Clinical Informatics and #HealthIT friends,
In today's post, I thought I'd help answer a common clinical terminology question I sometimes get asked, about four related concepts used in information management during inpatient hospitalizations :
- Specialty / Subspecialty
- Service
- (Nursing) Level-of-Care
- Geographic Location
Getting this terminology right is essential to good communication, good patient flow, good bed management, and good data reporting - so for clinical educational purposes, I figured I'd write this helpful primer on these terms, what they mean, and how to use them.
A. WHAT IS A SPECIALTY (and SUBSPECIALTY)?
Specialty (and subspecialty) is what a Provider is trained to do. While the Association of American Medical Colleges (AAMC) recognized the need to stratify medical training back in 1876, this specialty (and subspecialty) training has since continued to evolve.
Today, we recognize a number of training pathways :
- RESIDENCIES (SPECIALTY TRAINING)
- FELLOWSHIPS (SUBSPECIALTY TRAINING)
- Select ONE :
- ( ) INTERNAL MEDICINE (General Internal Medicine)
- ( ) INTERNAL MEDICINE > CARDIOLOGY
- ( ) INTERNAL MEDICINE > ENDOCRINOLOGY
- ( ) INTERNAL MEDICINE > GASTROENTEROLOGY
- ( ) INTERNAL MEDICINE > RHEUMATOLOGY
- ( ) INTERNAL MEDICINE > GERIATRIC MEDICINE (Geriatrics)
- ( ) INTERNAL MEDICINE > PULMONARY/CRITICAL CARE
- ( ) INTERNAL MEDICINE > HEMATOLOGY / ONCOLOGY
- ( ) PEDIATRICS (General Pediatrics)
- ( ) PEDIATRICS > EMERGENCY MEDICINE
- ( ) PEDIATRICS > NEONATOLOGY
- ( ) EMERGENCY MEDICINE (General emergency medicine)
- ( ) EMERGENCY MEDICINE > TRAUMATOLOGY
- ( ) EMERGENCY MEDICINE > TOXICOLOGY
- ( ) RADIOLOGY (General Radiology)
- ( ) RADIOLOGY > INTERVENTIONAL
- ( ) SURGERY (General Surgery)
- ( ) SURGERY > ORTHOPEDICS
- ( ) SURGERY > PLASTIC SURGERY
- ( ) SURGERY > NEUROSURGERY
- ( ) SURGERY > TRANSPLANT
- ( ) SURGERY > GYNECOLOGIC
- ( ) SURGERY > VASCULAR
- ( ) NEUROLOGY (General Neurology)
- ( ) NEUROLOGY > MOVEMENT DISORDERS
- ( ) NEUROLOGY > MULTIPLE SCLEROSIS
- ( ) OBGYN (General OBGYN)
- ( ) OBGYN > MATERNAL FETAL MEDICINE
- ( ) OBGYN > FERTILITY MEDICINE
- ( ) PSYCHIATRY (General Psychiatry)
- ( ) PSYCHIATRY > CHILD AND ADOLESCENT
Note : While there may be some occasional variation in how one ended up in a particular subspecialty (e.g. Pediatrics > Emergency Medicine, or Emergency Medicine > Pediatrics), this system of categorization has historically worked fairly well, and gives people a good sense of what training the provider has had.
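To make the "SPECIALTY > SUBSPECIALTY" convention above concrete, here's a minimal sketch (with hypothetical field and class names - not any particular EHR's data model) of how that taxonomy might be represented in a provider directory :

```python
# A minimal sketch (hypothetical names) of a specialty/subspecialty taxonomy.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Training:
    specialty: str                       # residency, e.g. "Internal Medicine"
    subspecialty: Optional[str] = None   # fellowship, e.g. "Cardiology"

    def label(self) -> str:
        # Render the "SPECIALTY > SUBSPECIALTY" display convention used above.
        if self.subspecialty:
            return f"{self.specialty} > {self.subspecialty}"
        return self.specialty


general_im = Training("Internal Medicine")
cards = Training("Internal Medicine", "Cardiology")

print(general_im.label())  # Internal Medicine
print(cards.label())       # Internal Medicine > Cardiology
```

The key design point is that subspecialty is optional : every provider has a specialty, but only fellowship-trained providers carry the second level.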
B. WHAT IS A SERVICE?
Service is what the provider actually does. It's typically one or more clinical functions that they have been assigned to deliver.
Services are commonly categorized as either INPATIENT, ED, or OUTPATIENT services, and again, a provider may function in one or more services :
- Select ALL THAT APPLY :
- [ ] OUTPATIENT Internal Medicine (Ambulatory Internal Medicine Clinic)
- [ ] INPATIENT Hospitalist
- [ ] INPATIENT Intensivist
- [ ] INPATIENT Labor and Delivery
- [ ] OUTPATIENT Psychiatry
- [ ] INPATIENT Psychiatry
- [ ] EMERGENCY MEDICINE (Emergency Services)
- [ ] INPATIENT Neurology
- [ ] OUTPATIENT Neurology (Ambulatory Neurology Clinic)
- [ ] INPATIENT Surgery
- [ ] OUTPATIENT Surgery (Ambulatory Surgery Clinic)
... and many other clinical services (functions) that have been designed to provide patient care services in various settings.
This is where confusion can sometimes arise, especially for scenarios where a provider might have one specialty but two services, e.g. :
- SPECIALTY/SUBSPECIALTY = INTERNAL MEDICINE (General Internal Medicine)
- SERVICE1 (Primary Service) = OUTPATIENT INTERNAL MEDICINE (General Internal Medicine)
- SERVICE2 (Secondary Service) = INPATIENT HOSPITALIST
Confusing specialty and service can lead to incorrectly targeted communications - e.g. let's say you want to introduce a new outpatient televideo service to your OUTPATIENT INTERNAL MEDICINE docs, then :
- [ WRONG WAY ] Mail to SPECIALTY = Internal Medicine ('Please mail this to all Internal Medicine Docs!')
- [ RIGHT WAY ] Mail to SERVICE = Outpatient Internal Medicine ('Please mail this to all docs who work in the Outpatient Internal Medicine Clinic/Service!')
If you accidentally did mail your announcement to SPECIALTY = Internal Medicine, then half of the recipients might wonder why you contacted them about this new outpatient tool :
- SPECIALTY = INTERNAL MEDICINE - Includes both :
- [ INTENDED AUDIENCE ] SERVICE = Outpatient Internal Medicine
- [ UNINTENDED AUDIENCE ] SERVICE = Inpatient Hospitalist
As you can see, it's very easy to get tripped up on this terminology, when it looks so similar.
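The wrong-way/right-way mailing example above can be sketched in a few lines of code. This is a hypothetical, minimal provider list - the point is only that a specialty filter over-reaches while a service filter hits the intended audience :

```python
# A sketch (hypothetical data) of why mailing by SPECIALTY over-reaches
# compared with mailing by SERVICE.
providers = [
    {"name": "Dr. A", "specialty": "Internal Medicine",
     "services": ["Outpatient Internal Medicine"]},
    {"name": "Dr. B", "specialty": "Internal Medicine",
     "services": ["Inpatient Hospitalist"]},
    {"name": "Dr. C", "specialty": "Neurology",
     "services": ["Outpatient Neurology"]},
]

# WRONG WAY : mail by specialty - also catches the inpatient hospitalist.
by_specialty = [p["name"] for p in providers
                if p["specialty"] == "Internal Medicine"]

# RIGHT WAY : mail by service - only the outpatient clinic docs.
by_service = [p["name"] for p in providers
              if "Outpatient Internal Medicine" in p["services"]]

print(by_specialty)  # ['Dr. A', 'Dr. B']  <- unintended audience included
print(by_service)    # ['Dr. A']           <- intended audience only
```

Note that `services` is a list, because (as described above) one provider may work in multiple services.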
One final note about SERVICE - This is often used during inpatient admissions to describe the "Admitting/Covering Service", as in, who should Nursing call when they identify something that needs a Physician's attention?
C. WHAT IS A (Nursing) LEVEL-OF-CARE?
The (Nursing) Level-of-Care is an important concept that basically answers the question, "What are the nursing standards that are required for a patient admitted in this hospital bed?" Typically, this is based on patient type and acuity, and is developed in conjunction with both Nursing Leadership and Physician Leadership. From a practical standpoint, this usually needs to include some agreements about :
- Patient Acuity - How active are the patient's medical problems, and how much care will they need? (Low/Medium/High?)
- Standard Frequency of Vitals - How often does a Nurse need to monitor the patient?
- Standard Nursing Skill Set - What are the Nurses trained/certified to do? Is it general care, or specialty care? On what patient population? Adults? Pediatric? Neonates?
- Standard Nurse Staffing Ratios - How many patients are Nurses routinely expected to manage concurrently for this Level-of-Care?
Because these are all important to establish a level-of-care, they are commonly laid out in a table that might look something like this :
- ADMIT TO ADULT MED/SURG :
- [ ] Vital Signs every 8 hours
- [ ] Vital Signs every 6 hours
- ADMIT TO ADULT ICU :
- [ ] Vital Signs every 1 hour
- [ ] Vital Signs continuously
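A level-of-care definition table like the one above can also be held as structured data, which makes it usable by order sets and reports. Here's a small sketch - the acuity values, vitals frequencies, and staffing ratios below are hypothetical examples, not recommendations :

```python
# A sketch (hypothetical values) of a level-of-care definition table :
# each level pins down acuity, standard vitals frequency, and staffing ratio.
LEVELS_OF_CARE = {
    "Adult Med/Surg": {
        "acuity": "low-to-medium",
        "vitals_every_hours": 8,      # e.g. vitals q8h
        "nurse_to_patient_ratio": 5,  # one nurse per ~5 patients
    },
    "Adult ICU": {
        "acuity": "high",
        "vitals_every_hours": 1,      # q1h (or continuous monitoring)
        "nurse_to_patient_ratio": 2,  # one nurse per 1-2 patients
    },
}


def vitals_per_day(level: str) -> int:
    """How many routine vitals checks per day does this level-of-care imply?"""
    return 24 // LEVELS_OF_CARE[level]["vitals_every_hours"]


print(vitals_per_day("Adult Med/Surg"))  # 3
print(vitals_per_day("Adult ICU"))       # 24
```

Because the agreements are explicit (acuity, vitals frequency, skill set, ratios), Nursing and Physician Leadership can review and version them the same way they would any other standard.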
D. WHAT IS A GEOGRAPHIC LOCATION?
Geographic location technically should be the easiest concept to manage - It's just the floor/room (and sometimes bed slot, E.g. Bed A or Bed B) that the patient's bed is geographically located in. Sometimes it also includes a temporary location, such as when a patient is being temporarily located in Radiology for an X-ray :
- Geographic Location = Room 401
- Temporary Location = Radiology
- Sometimes displayed as "Room 401 (Radiology)"
However, location can occasionally be confused with a (Nursing) Level of Care, especially when naming conventions sometimes combine these concepts, usually intended for convenience purposes. (E.g. "5th Floor Telemetry")
Note that there are two challenges that can sometimes occur when combining these concepts in the naming convention for your geographic locations/floors :
1. FIRST CHALLENGE : The first of these challenges is boarding - which is when a patient bed needs to be created in a non-standard location, usually for patient flow and/or surge purposes. For example -
- If you usually have ten (10) beds on your FOURTH floor, where you commonly care for up to ten (10) Med/Surg patients...
- One day, you have a patient surge, and need to be able to care for twelve (12) Med/Surg patients...
- ... then you will need to create two (2) extra Med/Surg beds, maybe on the FIFTH floor.
Assuming you are approved to 'surge' your bed capacity like this, and have the Med/Surg nurses available to support those two (2) extra Med/Surg beds on the FIFTH floor, then you can hypothetically create a bed with a defined (Nursing) level-of-care in any geographic location that can support the delivery of the necessary (Nursing) level-of-care.
For example, in a disaster scenario, you could hypothetically make a Med/Surg bed available in your cafeteria (assuming you had the available resources) :
- ADMIT TO = Med/Surg Level-of-care
- GEOGRAPHIC LOCATION = CAFETERIA Bed 2
- SERVICE = Inpatient Hospitalist
Or, if you are admitting a Med/Surg patient from the Emergency Room to your FOURTH floor (where you commonly care for Med/Surg patients) - if there is no bed available on the FOURTH floor, you could hypothetically admit and 'board' the Med/Surg patient (temporarily) in an Emergency Department location :
- ADMIT TO = Med/Surg Level-of-Care
- GEOGRAPHIC LOCATION = ED Bed 2
- SERVICE = Inpatient Hospitalist
... and then, once a bed opens up, transfer the patient to the standard location :
- ADMIT TO = Med/Surg Level-of-Care
- GEOGRAPHIC LOCATION = Fourth Floor Bed 401
- SERVICE = Inpatient Hospitalist
2. SECOND CHALLENGE : The second challenge that comes from naming conventions that combine concepts (e.g. "FOURTH Floor Med/Surg") is data reporting. Suppose that, when beds are needed, you :
- sometimes have to board MED/SURG patients on your FIFTH floor, or
- sometimes you have to board TELEMETRY patients on your FOURTH floor.
And then one day, you need to know, "How many Med/Surg patients did we see last month?"
- If you generate a report of 'How many patients were geographically admitted to the FOURTH floor?', you may miss any Med/Surg patients who were boarded in other locations, and over-count any Telemetry patients who were temporarily boarded on the FOURTH floor.
- If, instead, you generate a report of 'How many patients were admitted with a Level-of-Care = Med/Surg?', your report will be accurate, and will account for any patients who were temporarily boarded in non-standard locations.
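The reporting pitfall above is easy to demonstrate with a toy census. In this hypothetical data, one Telemetry patient is boarded on the FOURTH floor and two Med/Surg patients are boarded on the FIFTH floor - so the location-based count and the level-of-care-based count disagree :

```python
# A sketch (hypothetical data) of why census reports should group by
# level-of-care rather than geographic location when boarding happens.
admissions = [
    {"level_of_care": "Med/Surg",  "location": "Fourth Floor"},
    {"level_of_care": "Med/Surg",  "location": "Fourth Floor"},
    {"level_of_care": "Med/Surg",  "location": "Fifth Floor"},   # boarded
    {"level_of_care": "Med/Surg",  "location": "Fifth Floor"},   # boarded
    {"level_of_care": "Telemetry", "location": "Fourth Floor"},  # boarded
]

# Location-based count : misses the boarded Med/Surg patients AND
# over-counts the boarded Telemetry patient.
by_location = sum(1 for a in admissions if a["location"] == "Fourth Floor")

# Level-of-care-based count : accurate regardless of where beds were created.
by_level = sum(1 for a in admissions if a["level_of_care"] == "Med/Surg")

print(by_location)  # 3  (wrong answer to "How many Med/Surg patients?")
print(by_level)     # 4  (correct answer)
```

The same admission record drives both queries - only the grouping field differs, which is exactly why it matters that level-of-care is captured independently of location.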
This is why it often helps to structure admission orders so that level-of-care and service are required, while geographic location is optional :
- [ REQUIRED ] ADMIT TO = ________ (Nursing) Level-of-Care
- [ REQUIRED ] SERVICE = ___________
- [ OPTIONAL ] GEOGRAPHIC LOCATION = ________ (Use only if a particular location is necessary; otherwise Nursing may not have any flexibility about where to geographically locate the patient in a surge/boarding scenario.)
... and why it's also helpful to track doctors by both their specialty/subspecialty and their service(s) :
- SPECIALTY/SUBSPECIALTY = Internal Medicine (General Internal Medicine)
- SERVICE1 (Primary Service) = Inpatient Hospitalist
- SERVICE2 (Secondary Service) = Outpatient General Internal Medicine
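The REQUIRED/OPTIONAL admission-order structure above can be enforced with a simple validation check. This is a hypothetical sketch (the function and field names are illustrative, not from any particular EHR) :

```python
# A sketch (hypothetical field names) of validating an admit order :
# level-of-care and service are required; geographic location is optional,
# preserving Nursing's flexibility in surge/boarding scenarios.
def validate_admit_order(order: dict) -> list:
    """Return a list of problems; an empty list means the order is valid."""
    problems = []
    if not order.get("level_of_care"):
        problems.append("Missing required field: level_of_care")
    if not order.get("service"):
        problems.append("Missing required field: service")
    # geographic_location is intentionally NOT required.
    return problems


ok = {"level_of_care": "Med/Surg", "service": "Inpatient Hospitalist"}
bad = {"geographic_location": "Fourth Floor Bed 401"}

print(validate_admit_order(ok))   # []
print(validate_admit_order(bad))  # two 'Missing required field' problems
```

Note that the `bad` order fails validation even though it names a location - a location alone tells Nursing nothing about the standards of care the patient requires.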
While this may have been somewhat lengthy, I hope this helps you review and discuss this terminology with your own teams.
Remember, this blog is for academic/discussion purposes only - Your mileage may vary! Have any patient flow or bed management tips you'd like to share? Have any experiences managing this terminology with your teams, or any other feedback you'd like to share? Leave it in the comments section below!
Saturday, March 19, 2022
Hi fellow CMIOs, CNIOs, Clinical Informaticists, and other HealthIT friends,
Can growing up in a multicultural, bilingual (or polylingual) household help to prepare you for a career in Applied Clinical Informatics? In today's post, I'll explain why I believe the answer to this is "Yes".
Almost all of my Applied Clinical Informatics colleagues that I've met over the years have amazing educational and experiential backgrounds. However, I've noticed that a surprising number of them also come from multicultural backgrounds, where they grew up speaking multiple languages.
In full disclosure : I don't have great data to support this claim. And I might be biased (or more sensitive) to this issue because I grew up in a polylingual household myself, the son of a German immigrant mother and a polyglot American father, who counted German as one of his favorite and most fluent languages.
My father's passion for languages started as a high school student in Yonkers, NY, and would continue to develop until he became a Military Policeman (MP) for the US Army, in Germany, where he also served as a court interpreter. This would also eventually lead him to meet my mother (who had immigrated from Herford, Germany to Westchester County, NY), and to a future career as a high school language teacher at White Plains High School in White Plains, NY.
So with parents like these, I grew up in a multicultural, multilingual household, where we commonly spoke German at home, and then spoke English when other people came to visit our house. Vacations were often spent visiting relatives in Germany, immersed in German language and culture, before returning to America and resuming daily activities in English.
Given my father's interpreter experiences, he always took languages and translation very seriously. Growing up outside of NYC in the 1970s and 1980s, he would occasionally take me into the city to the United Nations, to learn about and watch the famous UN Interpreter pool at work. Over our dinner table, we would often discuss the inseparable bond between culture and language, the real responsibilities of professional interpreters, and the occasional fallibility of both written and spoken words.
This sort of cross-cultural upbringing led me to some frequent challenges, that most multicultural people can probably relate to :
- Having to explain "American things" to my German family.
- Having to explain "German things" to my American friends.
- Occasionally having to do real-time interpretation of English-to-German, and German-to-English, to facilitate discussions between my German family and American friends.
I didn't fully appreciate this sort of multicultural upbringing until I was older, and learned that not everyone struggled with (or learned to manage) these types of issues.
One of the things you learn from this sort of cross-cultural upbringing is that communication is actually much more frail and fragile than you might imagine. Success often depends on a number of factors helping you achieve a desired comprehension rate :

For most routine, practical, day-to-day communications, about 75%-80% comprehension is just fine. Typically, your brain fills in the gaps (without your awareness), and you usually don't even notice the small details you might have missed. It still gets you to work, gets you to dinner on time, lets you order food at restaurants, and lets you manage your typical day-to-day activities. Informally, I personally refer to this as "Kitchen Language", since it's what you'd typically hear in a kitchen when people are making dinner and talking about their day.

Failures sometimes happen, but when they do - they usually only result in some brief confusion, a wrong or forgotten birthday gift, or an impromptu discussion about 'ineffective communication' from a loved one. After a little more discussion - the error or conflict usually gets resolved. Failure is usually pretty well-tolerated.
*Interesting historical side-note :
Ever wonder about the June 1961 Vienna summit between Kennedy and Khrushchev? Viktor Sukhodrev was the interpreter between them - talk about responsibility for ensuring both accurate translation and comprehension!
So in closing - I'd say a bilingual (or polylingual), multicultural upbringing can serve as an excellent model for the same interpretation functions that Applied Clinical Informaticists provide in their daily work. It would be interesting to do some formal research into these concepts, to help confirm the value of this sort of early training.
Monday, January 17, 2022
Hi fellow Clinical Informaticists, workflow designers, and other clinical architects,
Today's blog post is a slight deviation from my usual posts - It's actually a guest post, from a smart young college student, Paul Lestz, who I recently had the good fortune of working with on an educational internship.
Paul's particular interest is related to the use of Artificial Intelligence (A.I.), so we discussed the current state of A.I. in healthcare, and ways to implement this technology to a broader audience.
So I'm very happy to report, after reading Paul's blog post below, that 'The Kids Are Alright' - if this is what our future leadership looks like, then I have great confidence in our future.
Please enjoy Paul's post below :
Currently, there are few industry-wide reasons to be concerned - at least so far. While some healthcare institutions have begun the deployment of A.I. systems, we are not yet dependent on them for these types of high-risk decisions. Human doctors still have responsibility and remain in control - which means now is a good time to educate ourselves on A.I., including its many compelling benefits, potential risks, and ways to mitigate those risks.
While reading, please remember - A.I. is a complicated topic, that warrants our attention. Turning a 'blind eye' to A.I. does not mean that the field will not continue to expand into every industry, including healthcare. I hope this post provides some helpful education - as a starting point for future discussions - and helps to reduce the initial intimidation that A.I. discussions often induce.
Why do I believe that A.I. will continue to expand into the healthcare industry? It's because of the many potential benefits of using A.I. to manage the high-risk scenarios that healthcare workers commonly encounter. Among others, here are some major benefits offered by A.I.:
Cutting through the noise - A.I. can help make sense of the overwhelming amount of clinical data, medical literature, and population and utilization data to inform decisions.
Providing contextual relevance - A.I. can help empower healthcare providers to see expansively by quickly interpreting billions of data points - both text and image data - to identify contextually relevant information for individual patients.
Reducing errors related to human fatigue - Human error is costly and human fatigue can cause errors. A.I. algorithms don’t suffer from fatigue, distractions, or moods. They can process vast amounts of data with incredible speed and accuracy, all of the time.
Identifying diseases more readily - A.I. systems can be used to quickly spot anomalies in medical images (e.g. CT scans and MRIs).
From my perspective as a student, these are all compelling examples of how A.I. could help develop healthcare into a more modern, efficient, and reliably data-driven patient-care system.
To do this, however, also requires an examination of the challenges that A.I. can bring with it - unsurprisingly, extremely new technology sometimes brings unexpected issues. Some of the known challenges of A.I. implementation include:
Distributional shift - A mismatch in data due to a change of environment or circumstance can result in erroneous predictions. For example, over time, disease patterns can change, leading to a disparity between training and operational data.
Insensitivity to impact - A.I. doesn't yet weigh the real-world clinical cost of its errors, such as the difference in harm between a false negative and a false positive.
Black box decision-making - With A.I., predictions are not open to inspection or interpretation. For example, a problem with training data could produce an inaccurate X-ray analysis that the A.I. system cannot factor in, and that clinicians cannot analyze.
Unsafe failure mode - Unlike a human doctor, an A.I. system may still output a diagnosis even when it has low confidence in its prediction, especially when working with insufficient information, rather than failing safely and deferring.
Automation complacency - Clinicians may start to trust A.I. tools implicitly, assuming all predictions are correct and failing to cross-check or consider alternatives.
Reinforcement of outmoded practice - A.I. can’t adapt when developments or changes in medical policy are implemented, as these systems are trained using historical data.
Self-fulfilling prediction - An A.I. machine trained to detect a certain illness may lean toward the outcome it is designed to detect.
Negative side effects - A.I. systems may suggest a treatment but fail to consider any potential unintended consequences.
Reward hacking - Proxies for intended goals sometimes serve as 'rewards' for A.I., and these clever machines are able to find hacks or loopholes in order to receive unearned rewards, without actually fulfilling the intended goal.
Unsafe exploration - In order to learn new strategies or get the outcome it is searching for, an A.I. system may start to test boundaries in an unsafe way.
Unscalable oversight - Because A.I. systems are capable of carrying out countless jobs and activities, including multitasking, monitoring such a machine can be extremely challenging.
Unrepresentative training data - A dataset lacking in sufficient demographic diversity may lead to unexpected, incorrect diagnoses from an A.I. system.
Lack of understanding of human values and emotions - A.I. systems lack the complexity to both feel emotions (e.g. empathy) and understand intangible virtues (e.g. honor), which could lead to decisions that humans would consider immoral or inhumane.
Lack of accountability for mistakes - Because A.I. systems cannot feel pain and have no ability to compensate monetarily or emotionally for their decisions, there is no way to hold them accountable for errors. Blame is therefore redirected onto the many people related to the incident, with no one person ever truly held liable.
Rather than feel discouraged when comparing the benefits of A.I. versus these risks above, I'd like to share that there are solutions to many, if not all, of these known risks above - through commitment and detailed policy work.
For instance, let’s take a look at one of the challenges listed above: automation complacency. At first glance, one might think it would be too difficult to resolve this extremely conceptual issue, intrinsic to the mind of the clinician. However, automation complacency poses little to no problem if the following workflow is implemented:
(Figure : Sample policy/workflow for managing automation complacency)
I designed this visual to help simplify the complex process of reducing automation complacency to a few, easy-to-follow steps.
Resolving the issues related to A.I. does not mean instantly coming up with a single, lengthy procedure in the hopes that it will work. Instead, resolving challenges means breaking the problem down into pieces and isolating different steps in order to achieve the desired result.
When developing the flow chart above, I had to determine what exactly was the root of the unwanted issue:
Q: How could a clinician be biased towards picking the A.I. algorithm’s result without considering alternatives?
A: It would most likely be because they knew the A.I.’s prediction before/at the time they made their initial diagnosis.
While we, as humans, might think that we are not biased by certain information, this assumption is often an illusion. Subconscious biases tend to be the most powerful because we do not realize how much they affect us.
In order to solve this problem, my workflow above mandates that the clinician provide and lock in their initial opinion before being provided the A.I. algorithm’s prediction. By doing so, we resolve our first issue of initial, subconscious biases.
As I have just demonstrated, solving A.I.-related issues is often a matter of breaking down problems and coming up with small solutions that together, sum up to a working whole.
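The "lock in before reveal" workflow described above can be sketched in code. This is a hypothetical illustration (the class and method names are mine, not from any real system) of the core constraint: the A.I. prediction stays hidden until the clinician has committed an independent impression :

```python
# A sketch (hypothetical API) of a "lock in before reveal" workflow :
# the clinician must record an independent impression before the A.I.
# prediction is disclosed, reducing subconscious anchoring bias.
class DiagnosisSession:
    def __init__(self, ai_prediction: str):
        self._ai_prediction = ai_prediction  # hidden until lock-in
        self.clinician_impression = None

    def lock_in(self, impression: str) -> None:
        """Record the clinician's initial opinion; it cannot be changed later."""
        if self.clinician_impression is not None:
            raise RuntimeError("Impression already locked in")
        self.clinician_impression = impression

    def reveal_ai_prediction(self) -> str:
        """Disclose the A.I. prediction only after the clinician commits."""
        if self.clinician_impression is None:
            raise RuntimeError("Lock in a clinician impression first")
        return self._ai_prediction


session = DiagnosisSession(ai_prediction="pneumonia")
try:
    session.reveal_ai_prediction()       # blocked : no impression locked yet
except RuntimeError as e:
    print(e)                             # Lock in a clinician impression first
session.lock_in("bronchitis")
print(session.reveal_ai_prediction())    # pneumonia
```

Making the ordering constraint part of the software (rather than a policy clinicians must remember) is what turns this from a training issue into a solved workflow issue.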
So, if there are often ways to mitigate the risks of these A.I.-related issues - are we good to go? The answer: it’s complicated.
Often, users (e.g. healthcare institutions) are not actually making their own algorithms. Instead, they purchase them. Therefore, one must consider various factors in deciding which A.I. algorithms to purchase. Unfortunately, after an extensive literature search, there doesn't appear to be a helpful, cohesive guide as to what factors to consider when purchasing A.I. solutions, so I would like to propose the following guidelines:
(Figure : Sample questions to consider in A.I. purchasing)
I created the infographic above to help frame some helpful questions to ask a vendor when considering the purchase of an A.I. solution.
Generally, I hope that this piece helps to serve two primary purposes:
The first is to convince you that, with good understanding and planning - A.I. typically brings about more good than harm in the world.
(This second purpose assumes that you have already embraced the first) - The second purpose is to convince you not to take A.I. for granted, but to be thoughtful in the approach so that institutions (and the people who work at them) solve problems, purchase algorithms, and engage with the world of A.I. responsibly.
It's generally important to prepare and 'do your homework' before engaging in A.I. discussions. This preparation is especially important if we want to maximize the benefits of A.I. and minimize the risks. This post’s goal, therefore, is to bring the focal point of A.I. not to its use, but to its purchase. After all, a well-considered purchase combined with a thoughtful implementation often leads to more responsible ownership and successful outcomes. Alternatively, inadequate preparation can lead to unexpected outcomes.
As a student, and without a deeper knowledge of the exact workflow expectations for a particular circumstance, I am unfortunately unable to offer any more-detailed perspectives. However, I hope this initial post helps to 'get the ball rolling' on some important discussions related to proper A.I. planning, purchasing, and use. The right answers will still need to be evaluated and defined by planners, users, regulatory agencies, and society.
Remember this blog is for educational and discussion purposes only - Your mileage may vary. Have any thoughts or feedback to share about A.I. in Healthcare? Feel free to leave in the comments section below!