For Better or for Worse: Comparison in Experiential Marketing Research
Yesterday I had a quarterly recap review and year-over-year comparison with a client I have been working with for over five years. I have worked with this client on multiple experiential marketing programs; the one we reviewed yesterday was a very large mobile tour (10+ legs) for a well-known insurance company that spanned a twelve-month period.
Over the five years that I have managed the research for this experiential tour, we have run many different research efforts (exit event research, psychographic profiling, and predictive modeling), and the methodology has always included a post-event survey. This survey has changed slightly a couple of times over the years, but in 2015 we gave it a pretty significant revamp. Our client loves historical data comparisons, so this year we decided to keep the survey identical to 2015.
The objectives of the post-event research effort were as follows:
• Understand perceptions of the brand.
• Measure message recall and retention.
• Determine the rate at which consumers have taken action three months post-experience (in case you are wondering, we chose three months because of the insurance industry’s six-month renewal cycle).
• Measure intent to take action in the future.
Now that we have covered some background, back to the quarterly recap comparison. I had some great news to share with the client:
• A new leg of the tour targeting younger males was able to reach its target demographic.
• Brand perceptions were just as high in 1Q16 as they were in 1Q15.
• Message recall on two key marketing messages was up.
This being said, I also had some not so great news:
• Intent to inquire and purchase in the future was down slightly but significantly.
Let’s face it: no one likes to be the bearer of bad news in any shape or form. As researchers, though, it is our job to do just that, IF that is the story the numbers are telling us. End of story, right? Wrong. It is also our job to provide insights and observations as to why the results look the way they do. I was able to do just that. It turned out that two of the tours with a lot of events in 1Q16 had spikes of exceptionally high intent numbers in 2015. Comparing the same quarters of each year made the numbers look “off” because of those two events. Instead of focusing on the two spikes, we looked at the average intent per quarter and were pleased with the results: 1Q16 intent ratings were on par with the tour average, and the 2015 spikes were offset by dips at other events. Once we did a little digging, things weren’t so bad. We ended the call with a decision to continue monitoring the intent ratings for those two tours before considering any further action to correct the decreases.
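To see why the two comparisons told different stories, here is a minimal sketch of the arithmetic. The intent ratings below are invented for illustration only (they are not the client's actual numbers): a couple of exceptionally high events in 1Q15 inflate that quarter, so a same-quarter year-over-year comparison shows a drop, while a comparison against the full-year tour average shows intent roughly on par.

```python
# Hypothetical intent ratings (percent) per event, grouped by quarter.
# The two 90-plus events in 1Q15 are the "spikes" that distort the
# same-quarter year-over-year comparison.
intent_2015 = {
    "Q1": [62, 91, 95, 60],  # two exceptionally high events
    "Q2": [60, 61, 59, 60],
    "Q3": [59, 61, 60, 58],
    "Q4": [60, 59, 61, 60],
}
intent_2016 = {
    "Q1": [63, 64, 65, 64],
}

def quarter_avg(year, quarter):
    """Average intent across the events in one quarter."""
    scores = year[quarter]
    return sum(scores) / len(scores)

def tour_avg(year):
    """Average intent across every event in the tour year."""
    scores = [s for q in year.values() for s in q]
    return sum(scores) / len(scores)

# Same-quarter comparison: looks like a sharp decline because of the spikes.
q1_change = quarter_avg(intent_2016, "Q1") - quarter_avg(intent_2015, "Q1")

# Comparison against the 2015 tour average: roughly on par.
vs_tour = quarter_avg(intent_2016, "Q1") - tour_avg(intent_2015)

print(f"1Q16 vs 1Q15:              {q1_change:+.1f} points")
print(f"1Q16 vs 2015 tour average: {vs_tour:+.1f} points")
```

With these made-up numbers, the quarter-to-quarter comparison shows a double-digit drop while the comparison against the tour average is within a fraction of a point, which is the same pattern described above.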
We all like a happy ending. Sometimes our stories have bumps in the road or a roadblock here or there. As long as we can go back and analyze what caused the issues, we can usually work our way through them and reach our happy ending. In research, it is our job to report the good with the not-so-good so that our clients can make the most educated decisions for their campaigns. Who knows, the bad news you are sharing may not be so bad after all.