Now that we have discovered what worked last year, it’s time to tackle the second key question: What can we do to improve upon last year?
This program offered some important learning experiences that apply not only to this year’s program, but also to other experiential research projects we undertake. Those experiences helped us develop best practices for efficiently monitoring survey results and event recap data throughout the course of a program.
I think this is a good opportunity for readers to understand how seriously PortMA treats data collection.
Keeping track of data collection from multiple markets
First, teams in some markets did not collect a balanced number of surveys from consumers who sampled the brand at events and consumers who did not.
I know I said previously that field teams collected a balanced number, but that referred to the overall number of surveys across the program. At the market level, in the early stages of the program, some teams collected noticeably more surveys from samplers than from non-samplers, or vice versa.
While these imbalances were spotted early, communication gaps prevented us from resolving them in real time. As a result, we developed a more efficient communication process for weekly data collection updates, so that any data issues are spotted and resolved promptly.
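To make those weekly checks concrete, here is a minimal sketch of the kind of balance check we have in mind, assuming a hypothetical survey export with a market column and a sampled flag. The column names, the sample data, and the 15-point threshold are all illustrative, not our actual production setup.

```python
import pandas as pd

# Hypothetical weekly survey export: one row per completed survey,
# with the market it came from and whether the respondent sampled the brand.
surveys = pd.DataFrame({
    "market":  ["Boston", "Boston", "Boston", "Chicago", "Chicago", "Denver", "Denver", "Denver"],
    "sampled": [True, True, False, True, False, True, True, True],
})

# Count sampler vs. non-sampler surveys per market.
counts = pd.crosstab(surveys["market"], surveys["sampled"])
counts.columns = ["did_not_sample", "sampled"]  # False sorts before True

# Flag any market whose split drifts more than 15 points from 50/50.
share_sampled = counts["sampled"] / counts.sum(axis=1)
counts["flag"] = (share_sampled - 0.5).abs() > 0.15

print(counts)
```

A weekly table like this makes it obvious which markets need a reminder before the imbalance grows too large to correct.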
Working with the client to organize field staff reporting
Second, organizing the event recap data we received from the field proved very labor-intensive.
Data such as event attendance, interactions, and consumers sampled had to be reorganized each month by venue and market for reporting purposes, which pushed us well past the time we typically devote to recapping data.
The solution was simple: work together with the client and field staff to ensure the event recap data is easy for everyone to read and organize.
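As a rough illustration of the reorganization involved, the sketch below rolls a hypothetical recap file (one row per event, with illustrative column names and numbers) up to one row per market and venue, which is roughly the shape a monthly report needs.

```python
import pandas as pd

# Hypothetical event recap data: one row per event as reported from the field.
recaps = pd.DataFrame({
    "market":       ["Boston", "Boston", "Chicago", "Chicago"],
    "venue":        ["Bar A", "Bar B", "Bar C", "Bar C"],
    "attendance":   [120, 85, 200, 150],
    "interactions": [60, 40, 110, 90],
    "sampled":      [45, 30, 80, 70],
})

# Roll events up to one row per market/venue for the monthly report.
monthly = (
    recaps.groupby(["market", "venue"], as_index=False)
          .sum(numeric_only=True)
)

print(monthly)
```

When everyone reports in a consistent format, this roll-up takes minutes instead of the hours we spent cleaning mismatched recaps by hand.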
Developing hypotheses for testing at project launch
Third, it’s important to collaborate with the client on an ongoing basis to ensure that time spent on analysis stays focused on the research objectives proposed at project launch. To that end, we developed a set of hypotheses based on those objectives.
Each hypothesis is kept simple so that it is easy to test and easy to interpret. For example, to address the objective of reaching the right consumer, we developed the following hypothesis:
“The brand’s target consumer is more likely to purchase the product in the near future than others.”
By comparing the purchase intent of the brand’s target consumers to that of all other consumers surveyed, we’ll know whether or not to reject the hypothesis. This approach is valuable to all parties because it ties directly back to the research objectives and is easy to interpret.
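To show how simple that comparison can be, here is a minimal sketch of one way to run it, assuming purchase intent is measured as the share of respondents who say they are likely to purchase. The group counts are made up for illustration, and the one-sided two-proportion z-test is just one reasonable choice of test.

```python
from statsmodels.stats.proportion import proportions_ztest

# Illustrative counts: respondents who said they are likely to purchase,
# out of all respondents in each group.
likely_to_purchase = [180, 140]   # [target consumers, all other consumers]
respondents        = [300, 300]

# One-sided test: is purchase intent higher among target consumers?
z_stat, p_value = proportions_ztest(
    likely_to_purchase, respondents, alternative="larger"
)

print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g., below 0.05) supports the hypothesis that the target
# consumer is more likely to purchase; a large one means the data do not
# show higher purchase intent among target consumers.
```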
As always, I’ll keep you updated on how the program plays out, and whether or not the hypothesis holds true.