I’m a big-picture, small-data guy. My organization, Continuum, has built an enduring reputation, and a resilient business model, by working at a consistently human-sized scale. We don’t seek understanding by ranging over mountains of analytics. Instead, we talk with a handful or two of relevant people, observing them in relevant situations. We don’t, of course, just talk: we listen. We watch. We focus to hear the unspoken subtext, to observe the accidental wince or frown, and to convert those moments of evanescent insight into the products and experiences that our clients need. Ethnography is inherently a small-data operation: to achieve real depth of understanding with each user (typically a two-to-three-hour interview), we simply don’t have the time or budget to meet a statistically significant sample of the population.
We are not unaware of the law of small numbers, and the limitations of working exclusively with qualitative data. In fact, recent transportation projects with Cambridge Systematics and subsequent conversations with CS’s big-data-brained Nate Higgins—who happens to be a friend—have revealed to me the value big data can bring to our tried-and-true methodology.
Our small-data work could be profitably augmented by the big stuff. The trick, however, is striking the proper balance between the two approaches. Neither can ultimately be subordinate to the other. Which is to say, small data and big data should be integrated to provide a system of checks and balances that will ensure excellence at every level of our projects.
Closing the Credibility Gap
One of the most common questions we encounter is: “How can you be sure we understand the needs of all our stakeholders if you’ve only spoken to a small number of them?” Ethnography is not intended to be all-encompassing; it’s intended to be generative. While we believe that this approach to research can quickly get us to a shared set of behaviors and values for our intended markets, we can’t be sure until we interrogate our findings against a larger set of information. For example, findings from our work with riders of the MBTA (that people will trade reliability for speed, and that it’s not just out-of-towners who need help learning the system) could be checked against a geographic data set of CharlieCard taps or Net Promoter Scores, which could add credibility to our qualitative research. Even better, we might find discrepancies between quant and qual—unanswered questions and contradictions that could form the basis for our discussion guides the next time we go out into the field.
In a recent project, designing a consumer product for millennials, we initially focused our research on respondents in urban centers. Despite what you might have heard—and what we ourselves have heard—statistical data on national home-buying activity clearly shows that most millennials are actually moving to the suburbs. While we might have anecdotal evidence that this group wants to live in higher-density areas (which is to say: cities), the numbers don’t bear this out. As urban dwellers ourselves, it’s easy to project our own experience onto that of others. In our next phase of research, we will be spending most of our time outside the city center in order to get a broader view of this group.
Recruiting people for Continuum’s fieldwork can be difficult. Laborious. And expensive. We often work with outside recruiters and often have to supplement this work by canvassing our own social networks for appropriate people to talk to. We spend an inordinate amount of time recruiting the right group of respondents, scheduling visits to their homes and places of work, and coordinating payments, non-disclosures, and follow-ups. A recent conversation with Nate suggested that, just possibly, access to masses of data might streamline the process and put us in direct contact with the right people far faster than non-data-enabled recruiting.
If data can show us who the ideal research subjects are, and where they live, we could save a great deal of time and money otherwise spent tracking them down in more manual ways. Could quantitative data sets help us get to the right people faster? Could rideshare and calendar integration help us reduce the uncertainty of travel time? Would a Lyft-Zillow-Facebook tool let us talk to better respondents, more of them, and more quickly? We’re itching to find out.
Analytics and Storytelling
Could quant improve our analysis of research subjects? Speech-to-text technologies could create transcripts that are simple to review and help us strategically tag our video interviews. We could use facial recognition to map emotional reactions, and biometric trackers to alert us to changes in heart rate, giving us more tools that highlight latent reactions to stimuli. Finally, machine learning could be employed to scan a large body of interviews for themes and patterns. OK, this last one scares us a bit, but in the spirit of pushing the boundaries of our craft, and expanding our ability to positively impact humans everywhere, it’s worth exploring.
The Benefits of Data Friendship
I’m confident that, given the chance, big data and small data can play well together. Continuum certainly has an interest in finding some middle ground between the two approaches, and so do Nate and Cambridge Systematics. I suspect that our clients, and future clients, would be happy to reap the benefits of this friendship.