Tech journalist John C. Havens has been my Twitter pal—follow him: @johnchavens—for about a year. In that time, we’ve had innumerable conversations about a constellation of great topics: artificial intelligence, ethics, personal data, and values-based design. He recently surprised me by covering all of this, with good humor and many footnotes, in a book called Heartificial Intelligence: Embracing Our Humanity to Maximize Machines. When the volume was published, I immediately knew that (1) I had to read it cover-to-cover; and (2) we needed to talk about it—and not in 140-character bursts. So I sent him some questions in a Google Doc, and he shot back some answers. Our edited dialog sits below. Get to know JCH, and his very personal, highly passionate, always interesting views…
Continuum: A few days ago, when Google announced that its AlphaGo program solidly defeated the human Go champ Lee Se-dol, I thought, “Somewhere, John C. Havens is frowning.” But then I said: “No, not frowning. Throughout Heartificial Intelligence, Havens suggests that AI is well on track to defeat humanity, and so AlphaGo’s success surely won’t come as a surprise.” Seems to me, you’re most interested in the human response to AI. Did you find anything universal, or telling, in Se-dol’s attitude before—confidence—and after—speechlessness—the match with AlphaGo? If you could talk to him, what would you say?
John C. Havens: I feel for you, brother. We’re all going to be speechless about AI, in one way or another, if we assume the skills or talents we hold dear could never be replicated by machines. The idea that certain tasks can’t be taken away from humans is a terrible fallacy that should no longer be communicated. You can’t determine people’s preferences, and many humans may well prefer machines to humans for some jobs. So any of us saying, “Machines will never replace humans” automatically offends people who don’t feel that way. It also ignores the fact that multiple companies are building machines that will replace many “irreplaceable” jobs.
The idea that certain tasks can’t be taken away from humans is a terrible fallacy that should no longer be communicated.
Continuum: Of the many disturbing sentences in Heartificial Intelligence, this was one of the most disturbing to me: “The field of AI is advancing so rapidly we may lose the opportunity for introspection unhindered by algorithmic influence within a few years.” Can you give us your most accurate estimation of when this AI inflection point will take place? And is there anything at all we can do to stave this off? Also: The idea that unencumbered introspection will become an impossibility is unsettling. I’d like you to say that there is a way to avoid this—or is that just denial talking?
John C. Havens: Hopefully it’s clear that I wrote the book to provide solutions to this central issue. I say in the book that you can’t automate well-being, and I mean that. Today, at least, most people can’t hack their brains and increase oxytocin to simulate genuine well-being. So we have to do that by practicing gratitude and altruism, and by finding our purpose and/or values.
In terms of an AI inflection point: I think many people today don’t understand how to be introspective. I just came back from SXSW, and everyone there was plugging into Virtual Reality. You can see in VR the potential for radically wiping away meditation and mindfulness. I do think we’ll learn how to meditate and such in virtual worlds, but who’s going to cultivate those skills when they can alternatively engage in thousands of games, social interactions, porn, or anything else? Brain hacking is probably the biggest risk to introspection. If you could press a button to see an image that would instantly make you happy—in terms of an increase of dopamine—or deal directly with depression, would you click that button? I bet a lot of people would. Similarly, if you could replicate a loved one who was dying, so you didn’t have to experience the grief of his or her death, would you? I wouldn’t, but many would. I think most of them would be devastated in that process, because the new being might look like their loved one but wouldn’t truly be him or her. So they’d experience a two-fold grief.
You can see in VR the potential for radically wiping away meditation and mindfulness.
Continuum: “Soon we won’t be irritated by irrelevant ads in our feeds but will do double takes when they start to seem more accurate,” you say. “Targeting will become nuanced to avoid ‘tumbling into the uncanny valley,’ and we will have lost the finite window of time in which we have awareness regarding our ads.” It seems like you’re suggesting that the not-quite-perfect online ads, which we suffer from constantly today, are, in a weird way, a positive thing. The fact that AI doesn’t know us that well means that part of us remains free, unknown, autonomous—but that this is really a short-lived phenomenon. What do audiences who encounter this idea for the first time say to this? I imagine that they might not be convinced at first.
John C. Havens: The ads are positive. For the time being, we can still see Oz behind the curtain—at least some of the time. In terms of our online identity, we’ve lost most of our autonomy since we can’t access our data. Audiences who hear my ideas along these lines, I think, largely don’t believe me, or get what I’m saying. The concept of a digital identity is very esoteric. At least until they strap on the VR headset and see six versions of themselves that they didn’t create, and can’t claim their own identity. Then they get pissed. I’m hoping to avoid that situation. I have no interest in being the DB who says, “I told you so,” while people suffer from humanity’s comprehensive identity theft.
Continuum: A major consequence of the coming AI revolution, you say, will be depression. “The same physical, emotional, and mental costs of unemployment felt during the Great Depression are something most humans will face in the coming wave of automation.” Indeed, the book seems to be preparing the pre-depressed for what awaits by educating readers about the power of positive psychology. Do you feel depressed about the future? Do you feel like your message is being heard? And are you trying to track the influence of your ideas on the attitudes of the general public and the designers of the future?
If our data is a commodity, our identities are as well. Are we okay with that? Are we okay with that in terms of our children? Then we need to stop saying that “Consumers don’t care about their data.”
John C. Havens: I do feel depressed about the future when people continue to say things like, “But consumers won’t want to try to control their data—it’s just too hard. They’ve been trained to not care about it.” For me, it’s the equivalent of saying, “Human trafficking is just too hard to deal with…so let’s just let it run its course.” Data is not esoteric to me. It is a living, breathing thing that encapsulates my identity and those around me. As a longtime fan of Augmented Reality—I’ve written about it since 2011—I envision what the real world will look like with digital data overlaid on it, and here’s what I see: some people walking around with “bad credit” notices floating above their heads. Grim stuff.
Will there be positive things as well? Sure, probably. But without individuals having control, there is no way to tell. Period. Full stop. Unless people can control at least a copy of their data, anything anybody says—including me—about the future is BS. It will be controlled, as it currently is, by Facebook, Google, and co. So if you know their plans, you could say with some accuracy what’s going to happen—but as outsiders, our knowledge of these companies is merely based on their PR messages, so it’s not exactly solid.
Fortunately, there are a lot of people who get the importance of personal data. The E.U. has passed legislation along these lines, and there’s a whole personal data ecosystem dealing with this as well. The U.S. is actually woefully behind in all of this, largely because Silicon Valley doesn’t want people to control their data. As a result, our data, and our identities in the process, become a commodity. And if our data is a commodity, our identities are as well.
Are we okay with that?
Are we okay with that in terms of our children?
Then we need to stop saying that “Consumers don’t care about their data.”
Make the case. Help me make them care.
Yes, designers need to design for a world in which humans have a say in their subjective identity. Even if we become one with machines, for the time being we’re different from them, and that’s okay. But if we also think about machines as being our children or progeny, why would we want to create creatures born from the hidden tracking of our lives? Do we want children built on the paradigm of consumerism and surveillance? Will future generations not gain the insights of holistic individuals, whose lives beyond buying things include parenting, spirituality, music, and philosophy?
Do we want to train the future of humanity based solely on our consumerist identities?
No. Of course not. But unless individuals can control their data enough to broadcast a portrait of their subjective identities, that’s what will happen. That’s what is happening.
But the genie’s not out of the bottle. We just have to stop thinking it is.