Does artificial intelligence spell the end of the world as we know it? Almost certainly, though depending on who you ask, we should all either retire now because automation will obviate our jobs, or we're in store for some excellent new ones. In a recent New Yorker article, “Why Doctors Hate Their Computers,” Atul Gawande describes how Mass General Hospital is preparing for the future of healthcare by ingesting all patient data and making it available for automated analytics and population health management. It’s not AI yet, but it lays the groundwork for deep learning and machine learning in the future of healthcare. The first generation of the system is, of course, brittle. It gets in the way of the physician-patient relationship, and doctors dislike it. Pondering the new computer system's hegemony over job satisfaction, Gawande wonders whether “the complaints of today’s health-care professionals may just be a white-collar, high-tech equivalent of the century-old blue-collar discontent with ‘Taylorization.’” Technology that breeds discontent will not be the way of the future of AI in healthcare.
Frederick Winslow Taylor’s Principles of Scientific Management has helped many achieve huge gains in productivity and profitability by compartmentalizing working apart from thinking, but it hasn’t been concerned with delivering on other important human values like dignity, freedom, hope, and joy. Taylor summed up his approach with the assertion that “In the past the man has been first; in the future the system must be first.” In the context of AI, “the system being first” feels especially objectionable as we invite AI to think things with us or for us. Perhaps nowhere are the stakes higher than in healthcare, where decisions like how to perform surgery or how to monitor, diagnose, and treat a patient involve quality of life, or even life and death. We all have the sense that we must become collectively wiser, so that humans don’t cede the wrong decisions to automation. We can’t leave design decisions entirely up to business thinking, which only asks: Is it operable? Is it legal? Is it profitable? We must also employ what we know about human relations and design thinking, and consider: Is it valuable? Is it relevant? Is it ethical? Thankfully, a handful of technologists involved in autonomous and intelligent systems are recognizing the need for Ethically Aligned Design. IEEE, for example, has inspired multiple standards working groups “for technologists, policy makers and academics to utilize right away” that prioritize human well-being.
Based on these recommendations, and the 20 years I’ve spent in human-centered business and technology development, here are four values I’d like to see built into AI and how we develop automation technologies in healthcare.
People Want Dignity. Inclusion Delivers It.
Humans value dignity. We have a right to feel honored and respected. Gawande chronicles the experience of a dedicated office assistant with a new electronic health record system. The siloed EHR software prevents her from lightening the load of overburdened physicians in the ways she used to. She misses the dignity she previously enjoyed in her role when she drafted letters to patients and prepped routine prescriptions. “It’s disempowering. It’s sort of like they want any cookie-cutter person to be able to walk in the door, plop down in a seat, and just do the job exactly as it is laid out.” By keeping a diverse group of humans in the loop during the design and implementation of decision-making automation, we can avoid affronts to human dignity, and even enhance it.
To deliver on dignity, we can focus on the behavior of inclusion. AI and robotics are efficient, but they have a diversity problem in how they are conceived and executed. (In the offline world, the same tactics that serve efficiency and productivity are bad at respecting and honoring the unique and original.) The benefit of automation should be that it gives humans more time and resources to consider the perspectives of stakeholders who don’t fit today’s business plan. Inclusion doesn’t happen by accident; it’s an action that must be planned and budgeted for at every stage of development and commercialization. Alex Ziegler, my EPAM colleague, has some great advice on how to drive inclusion in the AI design process. Developing diagnostics and treatments for rare diseases is a proving ground for inclusion, given the technical risk and small market size. FDNA uses AI to correlate facial features, genetic tests, and symptoms to help pediatric geneticists identify rare diseases in their patients. The company demonstrates that AI decision recommendations with human review can provide the performance and cost structure needed to serve those with rare diseases sustainably and meaningfully. Ultimately, this system raises the dignity of patients and caregivers, but only because it was trained by humans to do so.
People Want Freedom. Transparency Delivers It.
At the moment, AI is not designed to tell you why it decides what it does. Researchers and designers today understand how to make chatbots and robots more familiar by leveraging the non-verbal cues of communication we all use. But without the intimacy of understanding what’s going on in that artificial mindset, and the transparency of intent, we’re left with digital psychopaths that clearly serve someone with their collecting, deciding, and judging. The question is: Who? We don’t want to be deceived; we want the freedom to make up our own minds based on the truth. We all have experience with people in our lives who are trustworthy and believable. They invest in relationships with us through intimacy, familiarity, credibility, and selflessness. It’s harder with AI and robots.
To serve human freedom fully, we need AI and automation that offer ways for everyday people to look inside and disagree. How about, as Ken Goldberg calls for, a robotic driver that offers the transparency of consistent conversation so it can clue us in when it feels its performance might be impaired? Experiments with medical transparency have shown patients to have better outcomes when successes and failures of physicians and treatments are reported honestly. The good news is that AI creators are learning how to leave digital breadcrumbs in the deep learning neural networks they create so someday we’ll understand what’s going on inside the black box. For now, IDx’s system for autonomously diagnosing diabetic retinopathy gives feedback on poor image quality rather than leading caregivers astray with an opaque determination of disease presence. We should demand automation technologies that demonstrate transparent behaviors that support freedom. Let’s not focus on technology that just pretends to be human; instead let’s design technical tools to communicate, honestly and simply, the intent, brilliant ideas, and follies of those who create and operate it.
People Want Hope. Forbearance Delivers It.
When it comes to what ails us, we are all underdogs. Whether it’s a chronic disease, surgery, or recovery from infection, the human practice of medicine is all about beating the odds and improving the odds for those we love and those who will come after us. When our inherited or self-imposed medical histories are forgiven, and we’re treated with hope, our health outcomes can exceed expectations. Here, the lack of empathy and understanding in raw data, and the persistence of digitized memory, can put autonomous technologies at odds with our ability to transcend our circumstances. It’s easy to imagine that today’s Chinese Social Credit system is already, or will someday be, automated via AI, and that opaque punishments, like denying a plane ticket to a journalist who made a less-than-sincere apology in court, will impact people’s ability to transcend the limits placed on them by a system-wide bully. How can we build benevolence into a system with a planet-wide collective memory that is not designed to forgive or forget?
Will future health systems punish and penalize the sick or unwell, or give them the benefit of the doubt that their health can improve regardless of condition and history? In a human-centered future of automated medicine, as Hugh Harvey points out, forgiveness will mean that the brutal reality of algorithms never overrules the understanding of a dedicated physician: one who will, when necessary, look past the probabilities associated with clinical markers and believe in a patient’s ability to transcend their medical history and beat the odds.
People Want Joy. Engagement Delivers It.
Tech leaders in Silicon Valley know plenty about pleasure and positive emotions in digital interfaces, but they have only recently begun to turn toward healthy engagement. As a species, we humans are ready to demand more from our digital technologies. Martin Seligman founded the field of positive psychology and has spent most of his career studying it. He has learned that engagement, or deploying one’s greatest strengths and talents to meet the world in flow, is critical to authentic happiness and well-being. We get this at EPAM Continuum; my colleague Jen Ashman just penned a brilliant post on three types of play new mothers often engage in after giving birth. Citing practical examples, she shows how the skills and thought patterns they accumulate can translate to mastery, accomplishment, and joy in other parts of their lives, like work.
Today’s leading medtech companies are keenly interested in how to engage patients, and EPAM Continuum is helping them understand the many patient archetypes to design for. Treatment plans and solutions for a single disease should be multi-modal and configurable, with both human and digital interaction, to truly engage individual patients’ strengths, interests, and focus on well-being across a wide array of contexts. Our elderly parents don’t want to be left alone with robotic caregivers. Like Seniorlink’s VOICE for dementia care, tomorrow’s AI-enabled health solutions will have to find ways to keep and enhance human caregiver relationships to deliver the meaningful joy that comes from engaging and succeeding. It’s unlikely that true engagement will come from automated technologies without human support.
Becoming More Human
At EPAM Continuum, we have the privilege and responsibility of developing medical technologies in the emerging context of AI. Whether it’s through design, adoption, conversation, protest, or rejection, we all have some opportunity to teach artificial intelligence, in its adolescence, how to behave autonomously and collaboratively in the service of people. I’m not talking about an added, optional attention to ethics. It’s baked in: AI systems are trained systems, and we’re training them anyway.
In Haim Ginott’s Teacher and Child, the author makes a request of educators and parents that is apt for all of us today, whether we teach AI, or teach the people who will create, use it, or work with it in the future. “Help your children become human,” he writes. “Your efforts must never produce learned monsters, skilled psychopaths or educated Eichmanns. Reading, writing, and arithmetic are important only if they serve to make our children more human.”
What about you? What’s your request? I’d love to hear which values you’d suggest are central to ethical healthcare design and will lead to better jobs and experiences. Let’s talk.