I have been reading a wonderful bestseller which you have probably already read: The Black Swan, by Nassim Nicholas Taleb. (If you haven't read it, please be warned that I am thinking of becoming Queen of the World so that I can require everyone who has a bank account or casts a vote to pass a test on its contents.)
Taleb is a "skeptical empiricist" philosopher who participates (profitably) in the financial sector. He explains that in our species' drive to predict what will happen, we fall into a number of potentially destructive information-processing traps, because we fail to understand much about the nature of information, or even much about how our minds actually work. Worse, according to studies he cites, experts are blinded by too much information and are even worse at prediction than ordinary people.
Oops. What is it that I do for a living, again? I extrapolate recommendations from data about how people think and feel. In doing that, I must presume that the data has some predictive value and that I am expert enough to interpret it correctly.
Not likely, according to Taleb, unless I am a physical scientist. Human behavior is just too unpredictable--prone to outlier events that he calls black swans. The type of research I do stumbles into two of Taleb's traps: the tendency to infer doubtful cause-and-effect relationships between data points (the narrative fallacy), and the tendency to assume that because something has happened over and over again for a while, it will continue to happen indefinitely (the confirmation fallacy).
Narratives are cherished by marketers, because they reinforce our illusion that the data we can afford to collect must mean something important. Piles of data are much easier to digest when they are marinated in emotional oomph and salted with just enough cause-effect to be plausible.
For example, if I "know" from a research study that people with less than a college education are more dependent on their doctors for their medication decisions, I am likely to construct the following narrative: "less educated people feel that they don't understand things as well as the more educated doctor, which makes them deferential."
But of course I don't know that at all; I have merely noticed a statistical correlation, and one that is probably not much better than 80%, if even that good. What percentage of the highly educated portion of my customers are dependent on their doctors? Is it 20%? 40%? And what about the 20% of the less educated people who are not dependent on their doctors? If I had constructed my analysis differently, I might find that the correlation with education masks some other factor, such as cultural background or type of profession. Or the correlation may not mean anything at all, in narrative terms. Who knows?
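To make the confounding worry concrete, here is a minimal simulation in Python (the numbers and the "hands-on profession" factor are invented for illustration, not drawn from any real study) showing how a hidden third factor can manufacture an education/dependency correlation that largely disappears once you stratify:

```python
# A toy model (hypothetical numbers, no real data) of a hidden confounder.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hidden factor: whether someone works in a "hands-on" profession.
hands_on = rng.random(n) < 0.5

# The hidden factor drives BOTH education and dependency on doctors;
# in this model, dependency does not depend on education at all.
college = rng.random(n) < np.where(hands_on, 0.2, 0.7)
dependent = rng.random(n) < np.where(hands_on, 0.8, 0.4)

def rate(mask):
    """Share of the masked group that is dependent on their doctors."""
    return dependent[mask].mean()

# Raw split: looks like education explains dependency.
print(f"dependent | no college: {rate(~college):.0%}")
print(f"dependent | college:    {rate(college):.0%}")

# Stratified by the hidden factor: the education "effect" melts away.
for mask, label in [(hands_on, "hands-on"), (~hands_on, "office")]:
    print(f"{label}: no college {rate(mask & ~college):.0%}, "
          f"college {rate(mask & college):.0%}")
```

In the raw split, the less-educated group looks markedly more dependent; within each profession group, education makes almost no difference. Nothing in the raw crosstab itself warns you that the narrative is wrong.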
What Taleb would probably point out is that it doesn't matter anyway, because of my confirmation fallacy. My use of data to predict the future assumes that there are no outlier events lurking over the horizon. In other words, I am assuming that because education level turned out to be correlated with dependency on doctors in a single segmentation study, this will continue to be true for some meaningful period of time.
Yet, at this moment in 2009, that is actually unlikely. Right now, our government is investing in medical technology that will help doctors to do a better job of determining which treatments will work best for which patients based on empirical data. I have reason to at least suspect that the publicity about this new technology will change how even our most educated patients view their doctors' expertise, and therefore undermine both my mental and quantitative models of how people behave.
And that's just one example.
So here's my plea. First, if you haven't already, read The Black Swan; it's both necessary and delightful. Second, ask yourself some serious questions about quantitative research. It may be--heresy though this is--that qualitative is nearly always a much better basis for the development of marketing messages.
I am not completely anti-quant. Segmentation and behavioral models can lift your results if they are narrative-free (i.e., embed no cause-and-effect assumptions), easy to validate in real time, and frequently refreshed. However, quantitative messaging studies over-complicate and even distort our understanding of human attitudes and behaviors.
There, I said it.
Why? Qualitative research, done properly (which means more interviews and fewer groups), forces you to deal with the complexity of human reactions in a way that humans are reasonably good at--face to face. In the qualitative setting:
1) You seek the simplest important conclusions. You look for a simple preponderance of evidence that seems consistent or reliable, not "data" connections between human attitudes that are either statistical phantoms or too complex to be replicated in the actual marketplace.
2) You treat the result as temporary. In interviewing actual people, you are confronted with the fact that they are responding to a specific stimulus at a specific point in time, and that as the marketplace or external factors change, their responses would probably change.
3) You are somewhat less likely to end up focused on the wrong data or ignoring surprising data. When people are able to speak at length, relevant facts emerge that you would never have considered incorporating in a quantitative study. (That's also why interviews are better than groups.)
To corroborate this, by the way, I have been told that in the case of branding research, decisions based on twelve in-depth interviews can produce better in-market results than a quantitative study. (I would infinitely prefer to write a creative brief based on twelve in-depth interviews than on a quantitative study, that's for sure.)
Allow me to repeat that segmentation and modeling can definitely lift your results. I have seen some excellent models lift results for my clients. However, the excellent models were refreshed frequently, sometimes based on real-time behavioral data from actual marketing activities. They avoided making long-term predictions based on data collected at a single point in time. Also, being purely statistical creatures, they contained no narratives: no assumptions of cause and effect, just correlations. That approach, I think, minimizes both the confirmation and narrative errors Taleb refers to.
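For what that discipline might look like in practice, here is a rough sketch (simulated data, an invented drift, and an off-the-shelf scikit-learn logistic regression standing in for whatever model you actually use): the model is purely correlational, is refit on a short rolling window, and every refreshed fit is scored against the newest period's real outcomes before anyone trusts it.

```python
# A sketch of a rolling-refresh, correlation-only model. All data here is
# simulated, and the window size is an arbitrary illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def simulate_period(t, n=500):
    """Fake behavioral data whose signal drifts over time -- the kind of
    shift a one-shot, single-point-in-time study would silently miss."""
    X = rng.normal(size=(n, 3))
    drift = np.array([1.5 - 0.3 * t, 0.5, -0.5])  # the "truth" changes
    y = (X @ drift + rng.normal(size=n)) > 0
    return X, y

window_X, window_y = [], []
for t in range(6):
    X, y = simulate_period(t)
    if window_X:
        # Benchmark discipline: score last period's fit on fresh outcomes.
        auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
        print(f"period {t}: holdout AUC of previous fit = {auc:.2f}")
    window_X.append(X); window_y.append(y)
    window_X, window_y = window_X[-3:], window_y[-3:]  # keep 3 periods
    model = LogisticRegression().fit(np.vstack(window_X), np.hstack(window_y))
```

In this toy setup the holdout score tends to sag as the old pattern fades, which is exactly the early warning that a model built once, from data collected at a single point in time, can never give you.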
You and I could take some comfort in the thought that we marketers might find it easier to correct our bad habits than the academic economists Taleb takes on. After all, we have to get real people to engage with our products over very short timeframes, so we are able to learn from our mistakes. Surely we are more practical and effective than university professors!
However, Taleb points out that everyone believes him or herself to be the exception to statistical rules. So humility is the best policy. I will gently push my clients to rely more on qualitative findings as bases for messaging decisions. And from now on, I pledge to assume that there is something potentially important missing from my analysis: I will recommend that my clients prepare for the possibility that I am wrong. For example, I will strongly recommend that we be rigorous about benchmark and tracking disciplines, and refresh our insights more regularly.
Hold me to it.
First of all, thanks for your blog; it's a wonderful addition to the MR blogs I follow, and I am glad there's another blog out there focusing on qualitative research.
Lately - and as a result of the community hype in MR - I've been struggling with a thought: in the near future, the MR industry may be faced with a new methodology, next to qualitative and quantitative research: the hybrid QualiQuant, too large a sample to be called qualitative and too in-depth to be called quantitative...
Would love to hear your thoughts on this!
Well, no arguments from me!
Great post and as Emiel said, a wonderful addition to the MR blogosphere!
Looking forward to reading more.
: )