You probably know the old fable about the three blind men asked to describe an elephant. The first, stationed near a leg, concludes, "The elephant is round like a tree." The second, who happens to feel the trunk, exclaims, "No! The elephant is long and thin like a snake!" The third, with his hands placed on the side of the elephant, scoffs, "You two are nuts. The elephant is large and flat like a house." The implication of the fable is that each of us has a limited perspective and we should learn from one another before forming conclusions.
Not so for marketing researchers. We want to understand what people think and do as individuals in the marketplace. Our approach is that despite the disagreement, each blind man is 100% right. For us, consensus generates error. You say snake, I say tree, it's all good.
But, well-known fact, people in focus groups tend to try to form a consensus. A typical solution is to ensure that each of the group participants has a chance to form an individual opinion before the group discusses it; for example, they may read quietly to themselves and jot down a few notes before the discussion, or fill out a rating form. I've found that to be helpful, but a recent blog post (about the financial crisis, as it happens) makes me wonder if it's really enough.
Naked Capitalism cited a story about "social validation". Apparently, people in group settings will hesitate to assess the nature of what they perceive until they have checked in with those around them. The story concerns a lifeguard who nearly let someone drown because the other lifeguards seemed unconcerned. What's disturbing is that he was unable to perceive that the swimmer was struggling because nobody else perceived it. This phenomenon has also been cited in the infamous 1964 killing of Kitty Genovese, in which 38 people reportedly witnessed the murder and did nothing to stop it, each concluding that because of a seeming lack of concern on the part of the other 37, the situation must somehow be OK!
I have always assumed that group-think was a kind of self-censorship due to social pressures. But in fact, people in group settings may postpone forming any perceptions until they check in with the group. No self-censorship is needed. Even disagreement with the group may just mean that a participant has an oppositional bias, and is reacting to the social consensus. It's still an artifact of group-think.
At long last, this explains a result I once saw in groups of technologically sophisticated business people. They were asked to develop categories for a set of product claims. They could come up with any categories they chose, but "service", "reliability" and so on would be obvious ones.
The exercise turned out to be largely pointless. Groups assigned claims to categories very idiosyncratically; there was little inter-group consensus. Frequently it seemed that as soon as any member of a group "perceived" a connection between ideas, the rest of the group also perceived it. ("Yep!" "Yes!" "That makes sense." Even if it obviously didn't.) We in the back room alternated between amusement and frustration.
I think on that occasion someone cited the old joke: the IQ of a group is the IQ of its stupidest member divided by the total number of members. That's harsh, but we need to at least keep in mind the possibility that participating in a group dulls the perceptual abilities of its members. Ouch.
What's a researcher to do? Groups will always have social validation issues, no matter how they are managed. Even if you make them take notes prior to discussion, participants are exposed to facial expressions, body language, and so on, which may influence their perceptions. There's no way out.
One solution? Think small. It may seem paradoxical, but in my experience, the opinions generated by smaller groups (three or four) are more diverse than those of larger groups (six or more). It may be that the smaller the group, the easier it is for the individual to feel that his or her perceptions are "equal" to everyone else's. (Would Kitty Genovese have been saved if there had been just a couple of neighbors looking out their windows that night?)
We probably should also address social validation issues more aggressively in our research approach. Thoughts:
1) Spend less energy on generating gut-level perceptions, which are questionable, and more time on the internal experience of perceiving. You may even want to avoid having the group articulate base perceptions because once these are formed, they're hard to change. Focus instead on the components of the decision-making process; e.g., emotions, thoughts, what other people say, what you say.
2) Role play. Create scenarios in which group members act out behaving as individuals, for example, telling a salesperson what they want or need.
3) An experiment I'd like to try: instead of the moderator playing "devil's advocate" to a consensus position, challenge group think by asking a group member to take this role. When the moderator does this, the group tends to close ranks around the consensus. It might encourage the group to reconsider consensus perceptions if a group member is the challenger.
You may have better ideas, and I'd love to hear them!
Tuesday, November 17, 2009
Sunday, April 5, 2009
The mirage of quantitative messaging
I have been reading a wonderful bestseller which you have probably already read: The Black Swan, by Nassim Nicholas Taleb. (If you haven't read it, please be warned that I am thinking of becoming Queen of the World so that I can require everyone who has a bank account or casts a vote to pass a test on its contents.)
Taleb is a "skeptical empiricist" philosopher who participates (profitably) in the financial sector. He explains that in our species' drive to predict what will happen, we fall into a number of potentially destructive information-processing traps, because we fail to understand much about the nature of information, or even much about how our minds actually work. Worse, according to studies he cites, experts are blinded by too much information and are even worse at prediction than ordinary people.
Oops. What is it that I do for a living, again? I extrapolate recommendations from data about how people think and feel. In doing that, I must presume that the data has some predictive value and that I am expert enough to interpret it correctly.
Not likely, according to Taleb, unless I am a physical scientist. Human behavior is just too unpredictable--prone to outlier events that he calls black swans. The type of research I do stumbles into two of Taleb's traps: the tendency to read doubtful cause-and-effect relationships into data points (the narrative fallacy), and the tendency to assume that because something has happened over and over again for a while, it will continue to happen indefinitely (the confirmation fallacy).
Narratives are cherished by marketers, because they reinforce our illusion that the data we are able to afford to collect must mean something important. Piles of data are much easier to digest when they are marinated in emotional oomph and salted with just enough cause-effect to be plausible.
For example, if I "know" from a research study that people with less than a college education are more dependent on their doctors for their medication decisions, I am likely to construct the following narrative: "less educated people feel that they don't understand things as well as the more educated doctor, which makes them deferential."
But of course I don't know that at all; I have merely noticed a statistical correlation, and one that is probably not much better than 80%, if even that good. What percentage of the highly educated portion of my customers are dependent on their doctors? Is it 20%? 40%? And what about the 20% of the less educated people who are not dependent on their doctors? If I had constructed my analysis differently, I might find that the correlation with education masks some other factor, such as cultural background or type of profession. Or the correlation may not mean anything at all, in narrative terms. Who knows?
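To make that masking point concrete, here is a minimal, purely illustrative sketch (invented numbers and a made-up "hidden factor"; nothing from a real study). It generates data in which an unmeasured factor drives both education and dependence on doctors, so the raw education gap looks meaningful even though education has no direct effect at all.

```python
# Illustrative only: hypothetical numbers, no real survey data.
import random

random.seed(0)

def simulate_respondent():
    # An unmeasured factor (say, type of profession).
    hidden = random.random() < 0.5
    # Education is influenced by the hidden factor.
    college = random.random() < (0.7 if hidden else 0.3)
    # Dependence on the doctor is driven ONLY by the hidden factor.
    dependent = random.random() < (0.2 if hidden else 0.6)
    return hidden, college, dependent

people = [simulate_respondent() for _ in range(100_000)]

def dependence_rate(rows):
    return sum(dep for _, _, dep in rows) / len(rows)

# Raw comparison: education appears to "explain" dependence...
print("dependent | no college:", round(dependence_rate([p for p in people if not p[1]]), 2))
print("dependent | college:   ", round(dependence_rate([p for p in people if p[1]]), 2))

# ...but within each level of the hidden factor, education does nothing.
for h in (False, True):
    grp = [p for p in people if p[0] == h]
    no_c = dependence_rate([p for p in grp if not p[1]])
    yes_c = dependence_rate([p for p in grp if p[1]])
    print(f"hidden={h}: no college {no_c:.2f} vs college {yes_c:.2f}")
```

In the simulated data the gap between education levels is real, but it is not a story about deference; condition on the hidden factor and it vanishes.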
What Taleb would probably point out is that it doesn't matter, anyway, because of my confirmation fallacy. My use of data to predict the future assumes that there are no outlier events lurking over the horizon. In other words, I am assuming that because in a single segmentation study, education level turned out to be correlated with dependency on doctors, that this will continue to be true for some meaningful period of time.
Yet, at this moment in 2009, that is actually unlikely. Right now, our government is investing in medical technology that will help doctors to do a better job of determining which treatments will work best for which patients based on empirical data. I have reason to at least suspect that the publicity about this new technology will change how even our most educated patients view their doctors' expertise, and therefore undermine both my mental and quantitative models of how people behave.
And that's just one example.
So here's my plea. First, if you haven't already, read The Black Swan; it's both necessary and delightful. Second, ask yourself some serious questions about quantitative research. It may be--heresy though this is--that qualitative is nearly always a much better basis for the development of marketing messages.
I am not completely anti-quant. Segmentation and behavioral models can lift your results if they are narrative-free (i.e., reflect no assumptions), easy to validate in real-time, and frequently refreshed. However, quantitative messaging studies over-complicate and even distort our understanding of human attitudes and behaviors.
There, I said it.
Why? Qualitative research, done properly (which means more interviews and fewer groups), forces you to deal with the complexity of human reactions in a way that humans are reasonably good at--face to face. In the qualitative setting:
1) You seek the simplest important conclusions. You look for a simple preponderance of evidence that seems consistent or reliable, not "data" connections between human attitudes that are either statistical phantoms or too complex to be replicated in the actual marketplace.
2) You treat the result as temporary. In interviewing actual people, you are confronted with the fact that they are responding to specific stimulus at a specific point in time, and that as the marketplace or external factors change, their responses would probably change.
3) You are somewhat less likely to end up focused on the wrong data or ignoring surprising data. When people are able to speak at length, relevant facts emerge that you would never have considered incorporating in a quantitative study. (That's also why interviews are better than groups.)
To corroborate this, by the way, I have been told that in the case of branding research, decisions based on twelve in-depth interviews can produce better in-market results than a quantitative study. (I would infinitely prefer to write a creative brief based on twelve in-depth interviews than on a quantitative study, that's for sure.)
Allow me to repeat that segmentation and modeling can definitely lift your results. I have seen some excellent models lift results for my clients. However, the excellent models were refreshed frequently, sometimes with real-time behavioral data, and always with actual data from actual marketing activities. They avoided making long-term predictions based on data collected at a single point in time. Also, being purely statistical creatures, they contained no narratives; no assumptions of cause and effect, just correlations. That approach, I think, minimizes both the confirmation and narrative errors Taleb refers to.
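As a purely conceptual sketch of that approach (not any client's actual system; the segment names and window size are invented), a narrative-free, frequently refreshed model can be as simple as this: keep only recent observed outcomes, recompute plain response rates from them on a schedule, and let old data age out instead of hardening into a long-term prediction.

```python
# Conceptual sketch: a "model" that is nothing but recent correlations.
# Segment names, window size, and numbers are invented for illustration.
from collections import deque, defaultdict

class RefreshedResponseModel:
    def __init__(self, window_size=10_000):
        # Only the most recent observations are kept; older ones age out.
        self.window = deque(maxlen=window_size)

    def record(self, segment, responded):
        # Log one actual outcome from an actual marketing activity.
        self.window.append((segment, bool(responded)))

    def refresh(self):
        # Recompute response rates per segment from the current window only.
        counts = defaultdict(lambda: [0, 0])  # segment -> [responses, total]
        for segment, responded in self.window:
            counts[segment][0] += responded
            counts[segment][1] += 1
        return {seg: resp / total for seg, (resp, total) in counts.items()}

# Usage: feed in outcomes as they arrive and refresh before each decision.
model = RefreshedResponseModel()
model.record("segment_a", True)
model.record("segment_b", False)
print(model.refresh())  # e.g. {'segment_a': 1.0, 'segment_b': 0.0}
```

The point is not the code; it is that nothing in it claims to know why a segment responds, and nothing in it survives longer than the window of recent evidence.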
You and I could take some comfort in the thought that we marketers might find it easier to correct our bad habits than the academic economists Taleb takes on. After all, we have to get real people to engage with our products over very short timeframes, so we are able to learn from our mistakes. Surely we are more practical and effective than university professors!
However, Taleb points out that everyone believes him or herself to be the exception to statistical rules. So humility is the best policy. I will gently push my clients to rely more on qualitative findings as bases for messaging decisions. And from now on, I pledge to assume that there is something potentially important missing from my analysis: I will recommend that my clients prepare for the possibility that I am wrong. For example, I will strongly recommend that we be rigorous about benchmark and tracking disciplines, and refresh our insights more regularly.
Hold me to it.
Taleb is a "skeptical empiricist" philosopher who participates (profitably) in the financial sector. He explains that in our species' drive to predict what will happen, we fall into a number of potentially destructive information-processing traps, because we fail to understand much about the nature of information, or even much about how our minds actually work. Worse, according to studies he cites, experts are blinded by too much information and are even worse at prediction than ordinary people.
Oops. What is it that I do for a living, again? I extrapolate recommendations from data about how people think and feel. In doing that, I must presume that the data has some predictive value and that I am expert enough to interpret it correctly.
Not likely, according to Taleb, unless I am a physical scientist. Human behavior is just too unpredictable--prone to outlier events that he calls black swans. The type of research I do stumbles into two of Taleb's traps: the tendency to attribute doubtful cause and effect relationships between data points (the narrative fallacy), and the tendency to assume that because something has happened over and over again for a while, it will continue to happen indefinitely (the confirmation fallacy).
Narratives are cherished by marketers, because they reinforce our illusion that the data we are able to afford to collect must mean something important. Piles of data are much easier to digest when they are marinated in emotional oomph and salted with just enough cause-effect to be plausible.
For example, if I "know" from a research study that people with less than a college education are more dependent on their doctors for their medication decisions, I am likely to construct the following narrative: "less educated people feel that they don't understand things as well as the more educated doctor, which makes them deferential."
But of course I don't know that at all, I have merely noticed a statistical correlation, and one that is probably not much better than 80%, if even that good. What percentage of the highly educated portion of my customers are dependent on their doctors? Is it 20%? 40%? And what about the 20% of the less educated people who are not dependent on their doctors? If I had constructed my analysis differently, I might find that the correlation with education masks some other factor, such as cultural background or type of profession. Or the correlation may not mean anything at all, in narrative terms. Who knows?
What Taleb would probably point out is that it doesn't matter, anyway, because of my confirmation fallacy. My use of data to predict the future assumes that there are no outlier events lurking over the horizon. In other words, I am assuming that because in a single segmentation study, education level turned out to be correlated with dependency on doctors, that this will continue to be true for some meaningful period of time.
Yet, at this moment in 2009, that is actually unlikely. Right now, our government is investing in medical technology that will help doctors to do a better job of determining which treatments will work best for which patients based on empirical data. I have reason to at least suspect that the publicity about this new technology will change how even our most educated patients view their doctors' expertise, and therefore undermine both my mental and quantitative models of how people behave.
And that's just one example.
So here's my plea. First, if you haven't already, read The Black Swan; it's both necessary and delightful. Second, ask yourself some serious questions about quantitative research. It may be--heresy though this is--that qualitative is nearly always a much better basis for the development of marketing messages.
I am not completely anti-quant. Segmentation and behavioral models can lift your results if they are narrative-free (i.e., reflect no assumptions), easy to validate in real-time, and frequently refreshed. However, quantitative messaging studies over-complicate and even distort our understanding of human attitudes and behaviors.
There, I said it.
Why? Qualitative research, done properly (which means more interviews and fewer groups), forces you to deal with the complexity of human reactions in a way that humans are reasonably good at--face to face. In the qualitative setting:
1) You seek the simplest important conclusions. You look for a simple preponderance of evidence that seems consistent or reliable, not "data" connections between human attitudes that are either statistical phantoms or too complex to be replicated in the actual marketplace.
2) You treat the result as temporary. In interviewing actual people, you are confronted with the fact that they are responding to specific stimulus at a specific point in time, and that as the marketplace or external factors change, their responses would probably change.
3) You are somewhat less likely to end up focused on the wrong data or ignoring surprising data. When people are able to speak at length, relevant facts emerge that you would never have considered incorporating in a quantitative study. (That's also why interviews are better than groups.)
To corroborate this, by the way, I have been told that in the case of branding research, decisions made based on twelve in-depth interviews can produce better in-market results than a quantitative study. (I would infinitely prefer to write a creative brief based on twelve in-depth interviews than on a quantitative study, that's for sure.)
Allow me to repeat that segmentation and modeling can definitely lift your results. I have seen some excellent models lift results for my clients. However, the excellent models were refreshed frequently, sometimes based on real-time behavioral data, using actual data from actual marketing activities. They avoided making long-term predictions based on data collected at a single point in time. Also, being purely statistical creatures, they contained no narratives; no assumptions of cause and effect, just correlations. That approach, I think, minimizes both the confirmation and narrative errors Taleb refers to.
You and I could take some comfort in the thought that we marketers might find it easier to correct our bad habits than the academic economists Taleb takes on. After all, we have to get real people to engage with our products over very short timeframes, so we are able to learn from our mistakes. Surely we are more practical and effective than university professors!
However, Taleb points out that everyone believes him or herself to be the exception to statistical rules. So humility is the best policy. I will gently push my clients to rely more on qualitative findings as bases for messaging decisions. And from now on, I pledge to assume that there is something potentially important missing from my analysis: I will recommend that my clients prepare for the possibility that I am wrong. For example, I will strongly recommend that we be rigorous about benchmark and tracking disciplines, and refresh our insights more regularly.
Hold me to it.
Friday, February 13, 2009
Lies and consequences
"Man will occasionally stumble over the truth, but most of the time he will pick himself up and continue on." --Winston Churchill
Let’s get right to the point: the old cynic was right, people lie.
I witnessed a great instance of this as a research neophyte on a project involving credit card use. Our group of heavy credit users (those who consistently owe close to the maximum amount available on their credit line) went around the room cheerfully estimating the number of credit cards they owned at “two or three”. These men and women appeared to be very proper people of the type who brush their teeth regularly and teach their children to be good honest citizens. They all sternly claimed to dislike credit cards, use them only when strictly necessary, and pay their bills as promptly as possible.
Then the moderator tripped them up: she asked them to pull out their wallets and count the actual number of credit cards they found there. They all had at least seven (this was the eighties, by the way). Self-righteousness gave way to red-faced laughter. They turned it into a joke, competing to see who had told the biggest lie: one particularly elegant woman who counted twenty-some separate cards was proclaimed the winner.
So why do people lie in research studies? What’s in it for them?
It’s obvious from their embarrassed laughter that my credit-line maxers lied mainly to avoid embarrassment. Although the focus group I describe was completely confidential, and although the others in the room were strangers who offered no obvious reward for being impressed, the participants lied to preserve an appearance of responsible frugality befitting their clean clothes. That appearance clearly mattered deeply to these people.
This kind of "irrational" lying is a common problem for more than just researchers. A study of Mexicans applying for state assistance found that though applicants under-reported their ownership of cars and other items they feared would disqualify them, an act of rational self-interest, they also over-reported having items like toilets and concrete vs. dirt floors in their homes—to avoid embarrassment about their abject living conditions. In this case, their embarrassment could potentially cost them dearly. (1)
What does it all mean? In keeping with my project in this blog, the fact that other people lie isn’t where this is going. In order to learn from this, I need to turn the two-way mirror on myself.
Urk. Um. Shouldn’t I be doing something important, like flossing, which I do after every meal? Or practicing my flute (I do that every day, religiously).
To make this a little easier, I am going to employ the pronoun “we” from here on.
As social animals, we need to trust each other, which you might think would lead us to be honest. But we also need to look acceptable to each other, and one of the main reasons for that is also our need to trust. Disapproval threatens social bonds—you can’t really trust someone who disapproves of you to be kind or helpful to you. Honesty is the ironic casualty of this need for trust.
(I’m not speaking about intimate relationships here—family, friends. The need for intimacy is in part the need to be loved as we are, and that will generally lead us to be more truthful about ourselves to our intimates. It’s imperfect, of course, but it’s generally true, which is why researchers often get more truthful results if they interview people in the company of friends or family.)
When it comes to honesty, we need to search our own souls and be as empathic as possible with other people. If you were put on the spot, would you reveal behavior that is generally unacceptable? While I like to believe that in that situation I would simply refuse to answer a question, I doubt that I am being honest with myself. I’m one of those unfortunates who turn red and choke up when I say something that I know is untrue, but I compensate for this social deficit with some pretty nimble rationalization and self-censorship.
Truth be told, the social interactions we have are not based on honesty, but on trust. Can I believe strangers or acquaintances when they tell me I'm looking great, or that they enjoyed my presentation? Not really. If I want honesty, I'm going to have to turn to those who trust me enough to give me honest feedback. But I can enjoy the sensation of belonging and safety I experience because they felt a desire to put me at ease.
All this dishonesty isn't such a terrible problem. You just have to face it honestly.
(1) César Martinelli and Susan W. Parker; Deception and Misreporting in a Social Program; http://ideas.repec.org/p/cla/levrem/321307000000000120.html; First Version: June 2006; This Version: May 2007
Thursday, February 5, 2009
Reasons to be cheerful
I recently had reason to check out a study on optimism done by a psychologist called Lisa Aspinwall back in 1996. The study compared how people deal with health risk and danger information, depending on their level of optimism. (1)
What it found may be counter-intuitive: the more optimistic people tend to be, the more attention they pay to that scary information about side effects and long-term risks. So your optimist isn't really some pie-in-the-sky, head-in-the-sand stereotype. Instead, optimists are people with the courage to face and deal with problems.
Here's how it works:
1) The optimist believes that problems are mostly solvable.
2) This leads her or him to seek information or resources that can help solve the problem.
3) Information leads to smarter actions, which create a more positive, successful outcome.
4) The initial optimistic tendency is confirmed and reinforced in a virtuous cycle.
I was mulling this over when considering whether to reduce my news consumption, which I have to admit is making me pretty jumpy, not to mention mad as hell. Since I'm an anxious type of person (see prior posts), I definitely don't count as an optimist.
But I wonder if despite my disposition, the optimist's strategy might help me to reduce anxiety even better than turning off the evening news. In general, I'll go out of my way to avoid even potentially unpleasant information (e.g., getting the right medical tests done on time). What if instead of avoiding the problem, I decided to face it square on and deal with it if it arises? What if I had more faith in my ability to cope well given the right information?
When you look at it this way, the optimist is a hard-core realist, and anxious doubters like me are the ones with our heads in the sand. Hmmm.....
(1) Distinguishing Optimism from Denial: Optimistic Beliefs Predict Attention to Health Threats. Lisa G. Aspinwall and Susanne M. Brunhart; Personality and Social Psychology Bulletin, 1996; 22; 993
Friday, January 9, 2009
Life, death, and doctors
I've been mulling over a recent project in which we interviewed people who are mainly old, and mainly very, very sick. As a fifty+ American, I have the usual anxieties over inevitable age-related illness and my no-doubt completely inadequate insurance coverage. So I'm hardly an objective observer!
Beyond sickness itself and the financial hell it entails, I'm concerned about the time when I will rely more on doctors than I do today. I've never gotten along very well with doctors, who seem to operate more like faith-healers than scientists much of the time. I know it's not their fault; diagnosis is frequently a guessing-game and if anything, all those expensive new diagnostic tests make the guessing more complicated. Unfortunately, knowing that doesn't help me to feel like a competent consumer of medical care; quite the opposite--I feel helpless and suspect that my wallet is being exploited.
Given my personal issues around health care, it was extraordinary to sit behind the mirror and watch a parade of very sick people talk about their medical care. These are folks with multiple conditions which, in the words of one of them, are going to "get me", sooner rather than later. Most had already encountered Mr. D in the course of prior heart attacks or strokes.
I observed two very different attitudinal groups. (Bear in mind that this is not a quantitative result, just an observation, as most of these blog posts will be.) One group tended to display a faint little smile, sit back in the chair, talk matter-of-factly about symptoms and outcomes, and speak of their doctors with faith. Why didn't they have all the facts about their condition? "My doctor tries to protect me." Why didn't they even know the name of their condition? "Probably he/she said it, but I don't remember." Are they curious about new treatments? "If my doctor thinks I need it he/she will tell me."
The other (a minority) were the angry ones. They sat up straight and did not smile. They wanted to believe they could be cured, but they didn't trust anyone's help very much. So they focus on all of the things they can control, from healthy behaviors to internet searches. When they saw the information we had for them, they asked the key question: "Why didn't my doctor tell me this!!??" (I have to admit I felt a little sorry for the doctors when these respondents left the room.)
Probably--actually, almost definitely--the angry patient receives and will continue to receive better care than the accepting group. That's because they act like consumers of health care, not parishioners in the Temple of Health. They expect and demand clear, accurate information, and that helps them to make better lifestyle decisions as well as treatment decisions. This is the group I identify with.
But which group would I rather belong to? If it came down to a complete breakdown of my health--the accepting patients. For one thing, they are completely right. Probably their doctor is trying to protect them from information that will only make them anxious. Probably their doctor would tell them about any treatments that could really help them, with help defined a little more loosely. Their diseases are going to "get" them, sooner rather than later, no matter what treatment they receive. So "help" doesn't just mean treatment, it means acceptance; being emotionally and physically comfortable enough to enjoy the time you have left.
All of this leaves me with a question about life, appropriate to my middle-aged lifestage and anxiety-prone character: wouldn't it be better to learn to let go a little? To accept that death is part of the narrative that is Me? That it shouldn't in any way prevent me from enjoying the wonders of the universe I've been born into? Would I want to spend my waning days fighting with doctors? Really? Because there are much better things to do with my limited time.
Anger inspires you to work on problems. Acceptance can let you live well despite problems. There is a season for each of them; I hope that somewhere along the way I gain the wisdom to know when the seasons turn.
Monday, January 5, 2009
What's Important?
As an advertising strategist, I have a strange window on humanity--the two-way mirror of the focus group room. I sit on one side, in the dark. On the other side, people of all types, ages, sexes, sizes talk about ads and products. We ask whether the photo of the middle-aged woman in this ad is "relevant". Or if the brochure we are planning to write is hefty enough to be "important". Thanks to trends and fashions, all of this learning is about as eternal and profound as dust in the wind.
Yet after twenty years lurking in the darkened back room, combing through surveys and dissecting research reports, it has finally occurred to me that I was searching for something that actually matters. No matter what I am researching, as a strategist I always want to understand what's important to people. And that's a profound question.
If you watch people's faces as they react and discuss ads (eew! yessss! *yawn*), you soon realize that people bring their full humanity to every decision, no matter how concrete or obvious their reasons are on the surface. Nonetheless, experts in advertising psychology have distilled the rich spectrum of human motivations down to short lists that include things like sex, self-esteem, self-preservation, and greed. Did I mention sex?
Maybe this is basically true, but it's also hopelessly reductionist. (It's easy to boil the soup until there's nothing but brown goop left, too, but you won't learn much about soup that way.) And from my personal point of view, it's sadly dismissive of the actual lives and concerns of all of the individual human beings I've been watching through the mirror all these years. What if you or I were on the other side of the mirror? Would we want our voices to be heard as the grunts and screeches of primitive impulse?
I've watched young people talk about buying their first car and though self-esteem was obviously a big part of that experience, so was everything from childhood memories to their dreams of having children of their own. I've watched older people in the last stages of crippling illness talk about how they manage their conditions and though they certainly still care about self-preservation, they could also teach every one of us about endurance, dignity, and humor.
So this blog isn't about the useful reductions and formulations common in my industry. It's about what I've actually learned about people throughout the years. It isn't scientific, because I am not a scientist. Call it a diary of revelations granted to me by my fellow beings; my personal antidote to the cynicism of the marketplace.