AI and Human Augmentation in Healthcare

A Humanist Consideration of Healthcare's Tech-Enabled Future

December 1, 2018
by Toby Bottorf

All new technology is invested with unrealistic hopes and fears. Our judgment about how to understand it is further clouded by the curiosity that pushes us to try things simply because we can. AI promises to expand or augment human capabilities in how we deliver healthcare.

How exactly will human augmentation improve the quality of our healthcare and, ultimately, our lives? How can we ensure that it won’t diminish that fabled quality? And how should we be tracking this augmentation as technologies and health policy evolve? These are questions that we can’t help asking, again and again, as we ponder the future of AI-enhanced healthcare. We think it’s wise to consider such questions, and to tease out their implications, as artificial intelligence and machine learning make inroads into surgical suites, examination rooms, and, of course, our own homes. In the following, Toby Bottorf brings a much-needed humanist perspective to the often-panicked AI conversation. We suspect that his musings will strike a chord with patients, providers, and payers.

The impact of artificial intelligence will depend on how it intersects with existing trends in healthcare. Some trends will accelerate adoption, while others are headwinds. Some will point AI toward great medical benefits; others present ethically troubling opportunities. Here are five relevant trends to consider.

Outpatients Are the New Inpatients

Studies show that one of the best ways to bend the cost curve in healthcare is for patients with chronic, stable conditions to receive care at home. And in the last few years, dozens of hardware startups have emerged to reinvent common medical devices in affordable, consumer-facing iterations, reducing one of the biggest barriers to at-home care. This trend shifts responsibility to patients and their caregivers, who are usually non-clinicians, raising the barriers to behavior change.

The good news is that AI offers the potential for a dramatic shift: Patients don’t need to adopt new technologies, because the technologies can adopt patients. Conversational UI, for instance, is an accessible interface for some interactions and can help with medication adherence and even detecting and managing depression.
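
To make this concrete, here is a minimal sketch of the kind of scripted check-in a medication-adherence bot might run. Everything in it is hypothetical: the prompts, the three-missed-doses escalation rule, and the AdherenceBot class are illustrative assumptions, not any real product’s design.

```python
# A minimal sketch of a conversational medication-adherence check-in.
# Hypothetical throughout: the prompts, the schedule, and the escalation
# threshold stand in for what a real system would configure clinically.
from dataclasses import dataclass, field

@dataclass
class AdherenceBot:
    patient_name: str
    medication: str
    missed_in_a_row: int = 0
    history: list = field(default_factory=list)

    def check_in(self) -> str:
        return (f"Hi {self.patient_name}, just checking in: have you "
                f"taken your {self.medication} today? (yes/no)")

    def record(self, reply: str) -> str:
        took_it = reply.strip().lower().startswith("y")
        self.history.append(took_it)
        if took_it:
            self.missed_in_a_row = 0
            return "Great, logged it. Same time tomorrow."
        self.missed_in_a_row += 1
        if self.missed_in_a_row >= 3:
            # Assumed escalation rule: after three missed doses in a row,
            # loop in a human caregiver rather than keep nagging.
            return ("Noted. You've missed a few days, so I'm letting your "
                    "care team know, in case something is getting in the way.")
        return "No problem, noted. Is anything making it hard to take it?"

bot = AdherenceBot("Alex", "metformin")
print(bot.check_in())
print(bot.record("yes"))
```

The point of the sketch is the low barrier: the patient answers a plain-language question rather than learning a new device.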

Wellness Is the New Chronic Condition

The mindsets and behaviors that we used to associate with managing a chronic condition are becoming more pervasive, as people take a more engaged and proactive approach to their own wellness. People are increasingly comfortable with sensors and data-capturing devices. These devices feel recreational and personal, not medical, even though they have enormous potential for maintaining good health.

And even if we’re not aggressively managing our excellent health, our everyday devices can be put to surprising diagnostic uses too. For instance, a company called Mindstrong claims to be able to detect depression and other mental disorders in the patterns of thousands of little interactions with our smartphones.
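
For a flavor of what “patterns of little interactions” might mean, here is a hypothetical feature-extraction sketch: it reduces a log of keystroke timestamps to a few summary statistics of the kind a digital-phenotyping model could consume. It is not Mindstrong’s method, just an illustration of the idea.

```python
# Hypothetical digital-phenotyping features from keystroke timestamps.
# Illustrates mining everyday interactions for health signals; this is
# NOT a description of Mindstrong's actual pipeline.
from statistics import mean, stdev

def keystroke_features(timestamps: list[float]) -> dict[str, float]:
    """Summarize inter-keystroke intervals (seconds) for one typing session."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "mean_gap": mean(gaps),                      # overall typing speed
        "gap_jitter": stdev(gaps),                   # variability in rhythm
        "long_pauses": sum(g > 2.0 for g in gaps),   # hesitations over 2s
    }

# One toy session: a burst of typing with a couple of long pauses.
session = [0.0, 0.2, 0.4, 0.7, 3.1, 3.3, 3.5, 6.2, 6.4]
print(keystroke_features(session))
```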

AI Anxiety

However, pervasive tech makes people nervous, and not just Luddites. Elon Musk is worried; the late Stephen Hawking voiced similar concerns.

There may be times when talking to a conversational AI is better than talking with a doctor. There may be more effective therapies for PTSD that start with bots. If therapy requires that you “work through” some horrific trauma, that may be easier if you don’t have to share it with another feeling person.

Or a bot may be just as good, only much more convenient or lower-cost: a trial of Woebot showed success in delivering cognitive behavioral therapy to people suffering from depression.

Culture of Credibility

People may want to use technology to self-diagnose the difference between, say, the flu and meningitis. They still want experts for diabetes and cancer. Doctors are not thrilled to be competing with WebMD. Dr. Alexa will only further challenge their authority.

While AI and machine learning promise earlier diagnoses and better screening, they raise new questions: What does it mean for data to be good? Who should people trust?

Some evidence suggests that we trust only doctors: the wall of diplomas buys them that trust. We have worked on projects where skilled technicians had the expertise to do a certain procedure better than surgeons who performed it infrequently. Patient perceptions were a barrier to that clearly better solution. The emotional side of healthcare depends on trust, and systems can run smoothly only if patients are on board: if they believe they’re receiving the right care from the right hands.

Doctors are trusted in ways nurses and other clinicians aren’t. Where do AI systems rank? There is promise that the answer is more collaborative than competitive. In some cases, like the detection of skin cancer, it appears that AI can do the job better. But in Prediction Machines: The Simple Economics of Artificial Intelligence, the authors report that in reading films to diagnose other cancers, AI and humans together do better than either alone. This is encouraging because it reveals that people and bots are prone to different kinds of mistakes and have complementary strengths.
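
A toy simulation shows why complementary mistakes matter. The numbers below are assumptions for illustration, not figures from the book: if a human reader and a model each misread 10 percent of films, and their errors are independent, then a policy of accepting agreements and escalating disagreements for review lets only the roughly 1 percent of cases where both are wrong slip through.

```python
# Toy simulation of human + AI film reading with independent errors.
# The 10% error rates are illustrative assumptions, not data.
import random

random.seed(0)
HUMAN_ERR = AI_ERR = 0.10
N = 100_000

slipped_through = escalated = 0
for _ in range(N):
    truth = random.random() < 0.5                    # ground truth for one film
    human = truth if random.random() > HUMAN_ERR else (not truth)
    ai = truth if random.random() > AI_ERR else (not truth)
    if human == ai:
        if human != truth:                           # both wrong together: ~1%
            slipped_through += 1
    else:
        escalated += 1                               # disagreement: human review, ~18%

print(f"escalated for review:    {escalated / N:.1%}")
print(f"errors slipping through: {slipped_through / N:.1%}")
```

The price of that lower error rate is the roughly 18 percent of cases escalated for a second look, which is exactly where the collaborative, rather than competitive, framing earns its keep.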

False Belief

However, credibility everywhere is eroding. We live in the age, you might have read, of fake news. You can buy pretty much the same health aids from Gwyneth Paltrow at Goop and Alex Jones at InfoWars (well, you could until recently). These are strange times.

Who knows what’s best for us when it comes to our own health? From the opioid epidemic to the “controversies” surrounding vaccinations, people are increasingly skeptical of pharma and the western medical machine. What’s right isn’t necessarily based on medical precedent or clinical facts, but more on what “I feel is important.”

We—that is, all of us—are irrational. This is not a bug of human intelligence. It’s a feature. Everyone I’ve ever met believes in love. But we can’t explain it. I couldn’t program it. Maybe Spike Jonze could, though. (Dystopia never looked so sweet.)

Getting Things Right

Given this context, what goals should connected health be pursuing? How will AI extend our best human qualities and mitigate our worst? How will it change us as clinicians, caregivers, and patients?

There is a risk that we will get the next big breakthroughs wrong, for two reasons.

First, healthcare tends to be crisis-based. We are very good at fighting diseases, but much less good at caring for patients. We go to war against disease, with military metaphors and mindsets. It’s expensive, there’s collateral damage, but we win, dammit. As the grim joke goes: sometimes the surgery is a success, but the patient doesn’t survive. Better healthcare requires better care, not just excellent, if militaristic, treatment.

Second, as we look at a connected digital web overlaid on the system of care, we see the same biases repeated. We measure what’s easy to measure, not necessarily what’s important. As we define a role for AI systems, we need to recognize that these are android systems that have technical “intelligence” far superior to their social skills. We all know people like that. Hopefully that doesn’t remind you of your own doctor.

We need to design systems that have good manners, or at least social graces. Let all the parts of the system, including the non-human, at least be humane. I believe all of these factors point to one huge connection we need to make: between the emotional and the medical needs of patients.

We Are Harry Harlow’s Monkeys

Video: Harry Harlow’s monkey experiments (https://www.youtube.com/watch?v=vbEdNJ-e-Yc)

I watch this video to remind myself how important it is to take care of the emotional side of the equation.

Harry Harlow was a psychologist who did interesting and controversial experiments in the 1950s at the University of Wisconsin. What he learned about infant rhesus monkeys feels true of how people interact with the healthcare system. When given the choice between getting their physical needs met and feeling comforted, they’ll choose comfort: the soft cloth “mother” over the wire one that dispenses milk.

People navigate the healthcare system with very raw hopes and fears. The experience often robs them of their basic human dignity. This adorable baby monkey stands in for a patient who’s cold and half-naked in a gown that doesn’t fit, sitting alone in a small windowless room, waiting for news that may upend her life. The care in healthcare is for her.

To take better care of people, not just to fight diseases better, we should apply AI to helping us make connections:

Between people’s clinical and emotional needs.

Between crises and everyday baseline needs.

Between patients and their support network.

Between me today and future me.

That’s what we should do. Let’s get on it.

Photo by rawpixel on Unsplash.
