March and April are busy months at PortMA. While many experiential programs don’t hit the road until the weather gets better in May, account managers all over the world are getting their ducks in a row during these two months. That includes the research. We are busy planning research strategies, drafting methodologies, exploring the best options to collect data, and designing surveys.

Designing surveys … let’s stop there. What’s the big deal about designing a survey? A well-designed survey is everything. A poorly designed survey is nothing – zip – zero – nada – even if you receive thousands of responses. To have a well-designed survey, you MUST START with the following:
Clear objectives of what you want to measure.
Your objectives should align with the overall goals of the experiential program. For example,
- If the goal of the program is to generate awareness, the survey should include a question that measures consumers’ prior experience with the brand.
- If the goal of the program is to generate sales, the survey should include a question that measures future purchase intent.
Other things come into play after this, such as:
- the KISS rule (Keep It Short & Simple),
- eliminating double-barreled questions,
- eliminating leading questions or questions worded with bias.
We discuss those in another blog post (so come back and visit!).
For example, last year I designed a survey for a new-to-market pasta brand that was using experiential only in select markets to introduce the brand to consumers. They were sure that once consumers tried the product, they would love it. (They were right, but I’m jumping ahead.) The goals of the program were to generate awareness and drive sales. No problem!
Designed with the goals in mind.
We designed a survey that measured past experience with the brand, future purchase intent, and future recommend intent. We wanted to understand the overall feeling towards the brand. We also included sample types in the survey so we could provide actionable insights regarding which varieties were creating the greatest impact on purchase and recommend intent. Knowing that, the experiential program could hand out more of the most favored sample types to increase sales.
At the end of the first round of data collection, our research showed that consumers generally loved their products – some products more than others. The client was pleased and wanted to learn more about consumer buying habits (what do they buy in the category, why do they buy what they do, etc.).
This ask was a little out of line with the primary objectives of the sampling program, but seemed reasonable. Why not throw a couple of consumer insight tidbits into the mix?
When the second round of data collection ended, we had the data to support an experiential program that was performing successfully, along with the bonus of a peek into the minds of the consumers.
When the third round of data collection started, we went into the field with more consumer insight questions and fewer questions that actually measured the goals of the program. At that point, the data no longer had the value it originally did.
Lessons Learned.
There is nothing wrong with consumer insights studies. In fact, we love to do them at PortMA. But call them what they are!
- If you want measurement of the performance of an experiential program, design your survey to measure the performance of the experiential program.
- If you want to understand the mind of the consumer, launch a consumer insights study.
We strongly recommend not mixing the two. A question or two about buying behavior may be fine if it can be tied to the objectives of the program. More than that, and you could lose sight of the objectives, collect meaningless data, and therefore fail to achieve the project’s original goals.