Summary: Consumer bias in experiential marketing surveys runs both ways. It can be created by circumstances, by the interviewer, or by the consumer. Gleaning meaningful results always depends on how well we control the potential for bias.
We go to great lengths to avoid producing interviewer bias (the impact of an interviewer knowing which answers they want to hear and pushing the consumer to answer accordingly). We are careful to write our surveys without leading questions. Then we ask our brand advocates to stick to the scripted survey to keep them from adding any hint of their motivation.
People have an innate tendency to please anyone giving them free stuff.
This is referred to as the Hawthorne effect. It takes its name from a series of experiments at Western Electric's Hawthorne Works, famously analyzed by Elton Mayo, a researcher at Harvard. He described the effect this way:
“The desire to stand well with one’s fellows, the so-called human instinct of association, easily outweighs the merely individual interest and the logic of reasoning.”
A shining example comes from a boxed dinner program I'm working on today.
After sampling our products, we asked consumers which of our brand's boxed dinners they had purchased in the past six months, as well as how many competing brands' boxed dinners they had purchased during the same period. The result: 70% reported having purchased our brand, while the closest competitor (a market leader in boxed dinners) got only 30%.
The implication is that the responses were skewed: the emotional desire to please the interviewer outweighed individual interest and the logic of reasoning.
How can we avoid this effect?
Unfortunately, with branded experiential marketing programs it’s extremely difficult.
Although we can ask consumers to set aside any bias they feel after eating free samples of our product and having a personal experience with the program, the Hawthorne effect is so strong that they can't help themselves. Likewise, removing the branding from these programs would defeat the purpose of a marketing campaign.
Our solution, and one we encourage our clients to use, is to gather year-over-year measurements and comparisons. The only surefire way to put these metrics into context is by measuring responses at similar events with similar patrons.
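To make that concrete, here is a minimal sketch of why year-over-year comparison helps. All figures are hypothetical; the point is that if the Hawthorne effect inflates responses by roughly the same amount each year, the year-over-year change is a more trustworthy signal than any single year's absolute number.

```python
# Hypothetical purchase-claim rates from the same survey question,
# asked at similar events with similar patrons in consecutive years.
# Every number here is invented for illustration.
results = {
    2012: {"our_brand": 0.70, "competitor": 0.30},
    2013: {"our_brand": 0.74, "competitor": 0.29},
}

# Each year's absolute figures carry similar interviewer-pleasing
# inflation, so the change between years is the cleaner metric.
for brand in ("our_brand", "competitor"):
    delta = results[2013][brand] - results[2012][brand]
    print(f"{brand}: {delta:+.0%} year-over-year")
```

The same logic extends to any metric the survey tracks: compare like with like across program years rather than reading a single wave's percentages at face value.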
We are developing a benchmarking product at PortMA that will establish metrics and averages for programs like these, which will be especially useful during the first year of a campaign. Even once it's finalized, however, we will still emphasize the importance of measuring programs year after year because, in the end, that yields the clearest results.
Photo Source: http://www.flickr.com/photos/roboppy/9625780/