Designing for Modern Humans

A Conversation with Paul-Jervis Heath

September 17, 2017
by Zach Hyman

Paul-Jervis Heath presented the opening keynote at UX STRAT in Amsterdam in June. His talk, like many projects at Modern Human, the consultancy he founded, focuses on the challenge of designing humanity and sensitivity into the increasingly numerous and complex digital products that surround us today. While he feels strongly that this increasingly invisible technology still lacks proper input from customers, he exercised great tact and diplomacy when assessing the current smart home offerings.

After working as a designer at IBM and in other notable London design consultancies, followed by a stint as the Head of Innovation and Chief Designer at the University of Cambridge, Paul started Modern Human to embrace his true passions around inspiring design teams and fostering meaningful innovation in organizations. I was fortunate enough to speak with him about bringing design to the education sector, the evolving notions of “smartness” in products, and ethics in design.

One thing I’m interested in learning more about is the approach you took to understanding which interventions of “smartness” are welcomed across different products and services, and which ones are seen as intrusive. Our NXT research group focuses on understanding the trends shaping the future of different fields. In diving deep into the trends driving the future of financial services, we’ve found that people can be sensitive about things like being told they should be saving more money, or that they should be trying to improve their credit scores. How has your work sought to strike the right balance between “helpful” and “intrusive”?

It’s interesting how people conceptualize and understand intelligence in products and services. For example, in our ethnographic research study examining how early adopters are using smart home technologies, we wanted to witness how people were using the first wave of devices. We did diary studies and in-home visits to see how they were using them, how people related to them, and how the devices changed people’s relationships. The way our subjects related to these devices had a great deal to do with how they related to space in their homes in general, and to the people around them. Increasingly, these devices are becoming more intrusive because they’re insensitive to the etiquette of human interactions. For instance, they’ll spark up if they hear their name spoken aloud, interrupting any conversation you might have been having.

Our sense of ethics comes from the people for whom we design.

You mentioned financial services, which I think is an area ripe for innovation. How does a product really support my decision making? It’s not enough to tell me to save more money or improve my credit rating; I probably already knew I needed to do that. As a user, my question is really “How?” That’s tricky for financial services providers in the U.K. because of regulation concerning how they offer financial advice. But there’s nothing worse than being told to do something you already knew you needed to do. There is a real opportunity here.

I keep thinking about how this relates to life products, such as life insurance or pensions. When you choose a pension, or investments, there’s no meaningful way of rehearsing that series of decisions or trying out different options. So you only really know whether you made a good or bad decision 20 years down the line. You get a single chance to make a decision, and the outcome is completely hidden from you! Imagine how you might interact with your financial products: What should be automatic? What should remain manual?

Transparency is important, too. With the increasing power of machine learning, there’s a danger that we become divorced from the decisions we’re making and the outcomes connected to them. I spoke at an event recently about bounded rationality and choice architectures. In a world where computers are going to be making decisions for us, how do we retain control over how those decisions are made? In particular, at the moment of hand-off between yourself and the computer: How do you take control? How do you step into the situation to make these decisions for yourself?

Right, and if your ability to make such decisions has atrophied after a period of prolonged automation, whether it’s driving a car or making a financial decision, and suddenly you, as the responsible human, are given a problem that the algorithm can’t handle, you become a kind of consultant yourself. Companies today can solve their own easy problems, but when they encounter problems that they can’t easily solve, it’s time to call on a company like Continuum or Modern Human, right? We become very sharp at trafficking almost exclusively in very complicated problems. These problems are now the ones that require human intervention!

That is indeed correct. Companies like ours are essentially hired to solve the most difficult problems that a business faces.

Your point about abilities atrophying is a very good one, and it makes me think about how the opportunities to learn in the first place could potentially be removed. Let’s go back to financial services… with financial products, we face an inevitable process of lifelong learning. When we first leave home, for example, our financial situation is relatively simple. We learn by dealing with this “easy” situation so we can go on to deal with more “advanced” ones. If the easy things are automated away, removing the safe sandbox, it upsets that balance. I’m not sure that, as designers, we’re always cognizant that we might be “automating away” the sandboxes in people’s lives, reducing their ability to learn in the first place.

One of the projects you mentioned in your talk was about finding a balance between “augmenting” and “automating” when it comes to the Internet of Things making our lives easier. You shared your team’s design principle of “speeding up the tedious parts, and prolonging the pleasurable moments.” But since those things shift by person and by activity, what approach do you take to finding the different “lines” people have, and how do you keep from crossing them in what you design?

I think we’re at a particular stage of maturity with AI and machine learning where the technology is currently being applied to the things that are easiest to automate! Unfortunately, there are not a lot of informed decisions being made about when and where to automate, or not. The time savings we can give to people may be valuable to them in the short term, but if that time saving has a cost in terms of later skill acquisition, or if we’re automating something that they actually quite enjoyed doing... well, you take the pleasure out of life!

To give another example: There have been times when we’ve been working on autonomous vehicles and I’ve thought: “What are the situations where I actually want to drive? When would I prefer to sit back and read a book, or enjoy the scenery? If I don’t need to drive anymore, when will I want to?” Not everything that it is possible to automate should be automated. What are the “right” things to be automating? We need to be looking at people’s lives in detail. What are the opportunities to remove drudgery in order to maximize their experience?

I was looking at some of your and Modern Human’s past work, and was interested in hearing more about your work leading Cambridge University’s Innovation Team. Continuum has increasingly begun designing for clients in education; over the course of our engagement with Boston College to help update and redesign their core curriculum for the 21st century, we discovered many ways that clients in the education sector think and act differently. I’d love to hear more about your and Modern Human’s work in the education sector, how it has differed from your work with clients in other sectors, and what you’ve found “works” particularly well for them.

Not everything that it is possible to automate should be automated.

There's a different motivation within the higher education sector, one that sets academic institutions apart from commercial clients. For commercial clients, it's usually easy to understand their business model and understand what the organization needs to do in order to make money and succeed. There is a commercial imperative which makes things really clear. Decisions can be framed as "What are the business implications of doing A versus B?"

When you step into an 800-year-old organization, such as Cambridge, there is a longevity, which influences the culture. You know that the university has been around forever. That it will outlast you, your children, and probably your children’s children. This creates a particular perspective on change and a longer-term attitude to decision-making. Sometimes decisions that are easy in the commercial sector become quite complicated in the higher education sector because the aims are more diverse, and outcomes can’t be narrowed to a single factor.

Decisions are made very differently within higher-education institutions. To influence change you have to understand how the system of committees works. Now, the idea of a committee is anathema to a designer. How many times do designers talk about the perils of design by committee? Well, at an ancient university, that’s the way the place is run, so you have to adjust your own preconceptions. To make change happen you have to learn the system of statutes and ordinances, and how groups of academics make the best decisions for their discipline, their research, and their students.

Building that understanding was an interesting ethnographic exercise for me. You can imagine the culture clash between the very fast world that I come from and that of a university. I had to shed a lot of preconceptions and find a way of marrying my work methods to that of a university culture.

In your UX STRAT presentation, you quoted BJ Fogg, of Stanford’s Persuasive Technology Lab, who said: “Help people do the thing they want to do anyways.” As our reliance on technology in daily life continues to grow, you mentioned the importance of ethics, and of making sure that what our designs encourage people to do is in fact what is truly best for them (and not just for our bottom lines). Specifically, you said, “Ethics must be more than the ‘Band-Aid’ at the end… much like web accessibility standards, they can’t just be an afterthought. Like design, ethics has to be included from the very start, the very conception of the work.” What are some of the ways you ensure that you and your team build ethics into Modern Human’s work from the very beginning?

Our sense of ethics comes from the people for whom we design. You’ll know this as someone who engages in anthropology and ethnography: When you go out and meet people, understand their lives, understand what they’re doing, and why they’re doing it, you can’t help but build empathy for them. With that empathy comes the sense that you don’t want to do something to them that isn’t something they want done to themselves. That’s the source of our ethics. To give you an example: We’re currently doing a project in a call center, in financial services. There are various ways you can improve people’s productivity in a call center, but after you’ve sat next to them, spoken to them about their job, and watched them do it, you want to help them do a great job, and to do it in a way that’s sustainable and human.

Your presentation also brought up a few of the obvious shortcomings of objects whose creators pride themselves on calling their creations “smart.” You shared stories of home-lighting systems unusable by a friend stuck in the dark in your home because he or she doesn’t have the proper “user permissions” to operate the lights, and of whispering to a home-automation hub to turn off the lights after putting the baby to sleep, only for it to respond at the same shouted volume it always uses (hence waking the baby up again). Continuum’s Milan-based design team working with Medela found a similar set of challenges that new mothers face when trying to fit a breast pump into their daily routine (like the need for it to be as quiet as possible when used near a sleeping baby, or during a phone call at work). In the end, they designed a more “discreet” form of intelligence for Medela. Do you have advice for those manufacturers of “smart” objects that could stand to be just a bit smarter?

You have to really understand the context of use, and understand the etiquette and usual behavioral cues of people in that environment. Matching the user’s volume when you speak back to a voice command is a really simple example. It seems obvious that if you whisper a voice command, the device should lower its volume in response, but it’s something you might not even consider, and it’s a situation that would never occur in a controlled environment.

To be fair, it’s easy to criticize once these devices are being used in the wild. I do feel a bit guilty pointing at Google and Amazon for the shortcomings of their first wave of devices (like they care!). But those shortcomings lead people to move their Amazon Alexa or Google Home into the less trafficked parts of the house. This is one of the trends we identified in our recent ethnographic research with early adopters of smart home devices.

When you observe phenomena such as a mother feeding her baby, you instantly and intuitively understand so many things about that situation at a human level. What’s interesting then is to examine the phenomena in a wider context. To design a smart product that responds discreetly to human needs, you need a deep contextual understanding. I think that manufacturers of smart objects, or of any kind of technology really, need to get out into the places where their products are actually going to be used, not the places where they think they’re going to be used!
