Big Data, Evidence, and the Question “Why?”
Some Quality Thoughts About Qual
Even without specifying exactly what it is for, big data promises something that sounds incredibly valuable: a complete and unified record of a customer’s interactions with your company. It compiles both the patterns of behavior detected by a growing suite of sensors, and an inferred understanding of context from the digital exhaust or metadata around those interactions.
What is all this information for, though? To us human-centered designers, information about human behavior and intention is the first source of our ideas. We gather it by analog means: observing and talking to people. So, data offers another way to understand people—and it is especially helpful in two areas. First, it can reveal surprising patterns or unexpected truths, and in doing so be a check on our own biases and blind spots. Second, it can help us diagnose and fix problems with the handshakes and handoffs of the customer journey, as people move through modes and channels. Beyond the realm of human-centered innovation, the smart use of data can help optimize and reduce costs throughout a business's supply chain, from better inventory management to a clearer understanding of returned goods. For many industries, optimization of current business processes can represent enormous efficiency gains.
One of the best uses of data in innovation is to ground our research in a story as a starting point for investigation.
That said, in its short history, big data has already been through several stages. We've had the euphoric and unrealistic beginning, when it was promoted as a magical solution. We've had the trough of disillusionment, having realized it was, perhaps, over-sold (Gartner reports the failure rate of big data initiatives at 85%). We're now working our way back to a more pragmatic and realistic approach that understands the limits of data science and looks for complementary sources of evidence to fill in the gaps in the record.
Big data’s biggest limitation is that it offers us a view that faces backward. It presents a record of the past. As Roger L. Martin says: “There is no data about the future,” and then adds a mischievous “yet.” As a result, it can only predict the future by extrapolating. It does this well, but it is bad at anticipating non-linear changes. It can’t steer us through turbulence and uncertainty. This is a serious limitation for people in the innovation business.
At EPAM Continuum, we innovate by understanding people. Our strategists and designers develop ideas for the future by looking beyond objective measures. In order to understand motives and values, we ask why (which is, as Simon Sinek famously said, a great place to start). The question why looks for different things depending on who’s asking. Machines ask why to detect correlation and look for root causes; designers ask why to look for reasons. Consider E. M. Forster’s distinction between a story and a plot. “‘The king died and then the queen died’ is a story. ‘The king died, and then the queen died of grief’ is a plot.”
Things happen for emotional as well as material reasons. A queen can die of both cardiac arrest and grief. There is a crucial why that is focused on emotions. If you are looking to use design to innovate, you are inevitably looking to engage your customers’ emotions.
To design products and services that people love and choose, it’s not enough to understand the story. We need to understand the plot. Data can help: One of the best uses of data in innovation is to ground our research in a story as a starting point for investigation. We use it not to provide big answers, but to reveal interesting questions: What’s weird or confusing in the data? The things that big data cannot reconcile are interesting. They may be the sources of customer pain or uncaptured value. They may be the things on which the plot hinges.
The first use of big data is to point us to the anomalies and mysteries that call for closer scrutiny and different methods. To look closer, we rely on more qualitative, ethnographic methods. These promise something completely different: deep insight into values and human context.
This is the kind of work in which EPAM Continuum specializes. We learn about people’s values, their workarounds and attachments. We understand why. The why is hugely helpful in generating ideas, because while it focuses our ideas on ultimate goals, it allows for broad exploration of the possible means.
Qualitative insights have to work harder for their credibility than numbers do.
However, qualitative insights have to work harder for their credibility than numbers do. They are often perceived to be made of softer and more malleable stuff than numbers, because figuring out what things mean takes interpretation. But as I learned from Gary David, sociology professor at Bentley University: “All data is interpreted.” It is every bit as easy to be wrong (or manipulative) with numbers as with stories.
I prefer to describe what we learn as evidence, not data, because it more accurately reflects what we have. Data sounds too objective and carries connotations of accuracy that are often misunderstood in a dangerous way. When people see precision, they often interpret it as accuracy. Next time you see a political poll that shows results to the first decimal place (like 55.3%) with a margin of error of three percentage points, keep in mind that you have been given information that is 60 times more precise than it is accurate: the 0.1-point precision is dwarfed by the six-point window of uncertainty around the reported figure. This is the bogus credibility that people attach to numbers. Reality is more complicated. 128.6% more complicated.
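The arithmetic behind that "60 times" can be made explicit. A minimal sketch, using the hypothetical poll numbers from the example above (working in tenths of a percentage point keeps the calculation exact):

```python
# Hypothetical numbers from the poll example: a result reported as "55.3%"
# with a margin of error of +/- 3 percentage points.
# Work in tenths of a percentage point so the arithmetic stays exact.
precision_tenths = 1                    # "55.3%" implies 0.1-point precision = 1 tenth
margin_of_error_tenths = 30             # +/- 3 points = 30 tenths
accuracy_window_tenths = 2 * margin_of_error_tenths  # true value could lie in a 6-point span

ratio = accuracy_window_tenths // precision_tenths
print(ratio)  # 60: the figure is 60 times more precise than it is accurate
```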
We are looking for evidence that gives us confidence in understanding the future. All the rigor in our approach is in service of getting to the bedrock truth. We have to get it right because we intend to place bets on the ideas that come out of our insights. These insights inspire us to come up with vivid and testable ideas. They tell us what good looks like.
There is no proof of future ideas or things that don’t exist yet. Our insights only get us as far as testable hypotheses and ideas. With these, we can quickly build and test prototypes. Once ideas turn into things, even in rough prototype form, we can test our assumptions and validate or adjust our ideas. To do this, quantitative data again becomes incredibly valuable.
Because new ideas need to be provable. They need to be testable in lightweight and accurate ways. Prototyping is always about learning in the fastest and most responsible way possible. We make stuff in order to gather more data, or information, or evidence. To learn.
When we understand why we need to learn, we realize how big data supports innovation. It has the same job as qualitative research: To help us make better decisions and investments in possible futures. Healthcare’s push to value-based care offers a great example of the complementary roles of qualitative and quantitative understanding. Data-oriented decision-making helps ensure a focus on measurable outcomes, both at population and individual levels. But getting things right for providers requires empathic understanding of the social and relational dynamics of caregiving.
EPAM Continuum’s expertise lies in understanding people and translating that into better or entirely new products, services, and businesses. To de-risk innovation, we align greater investment with growing understanding and confidence. Leveraging both quantitative and qualitative evidence, we look for human insights to address the greatest risk of all: that of making something that nobody wants.