At PortMA we design data collection plans, collect data, analyze, and report for 30 to 40 brand/event marketing campaigns a year. In almost every case, the goal is to generate sales, service adoption, or some other revenue-generating behavior. Therefore, one of our primary objectives as the analysts on the project is to measure the revenue generated from this activity.
Actual Behavior Isn’t the Answer.
Many believe the best source of this data is actual sales. If it were attainable in a true and valid way, they would be correct. The problem is that it is never truly attainable.
- Coupons don’t always get used, or they are used outside of the domain being measured.
- Before-and-after shelf counts can be inaccurate and are subject to inventory issues or management re-stocking.
- Premiums based on proof-of-purchase are not always sought after a consumer buys.
These are the challenges if the program is retail-based. With so much experiential marketing happening on the street, at state fairs, through sponsorships, or at festivals, sales data seems almost unattainable.
Under the rare circumstances that actual purchase data is sought, it's typically available for analysis two to three months (at best) after the activation has ended. We welcome the chance to review and analyze this data, but, to be frank, no one cares about outdated data three or four months after the activation. The ships have sailed, and everyone, from the agency to the brand, has moved on to other things.
If you’re in-store at Walmart and have access to Retail Link, you might think you’re covered. And, truth be told, you’re closer than any other possible scenario. But, you still have to deal with significant challenges.
It’s the problem of hoarding. Hoarding is when current customers buy more now so they can buy less down the road (when you’re no longer looking at the data). You think you’re seeing a lift, when it’s really just folks taking advantage of short-term discounts.
Measuring Purchase Intent Gets You To True Event Impact.
Instead of all the work, cost, and challenges associated with capturing actual purchase behavior, we advocate a simple three- to four-question exit survey.
The survey has two key questions. The first measures whether the person is a current customer (or not). The second measures how likely they are to purchase (or visit, sign up, etc.) in the future.
Using the simple, yet vital, academic rules of research design and statistical rigor, you need to collect only a dozen or so responses during each day of activation. The result is immediate feedback on how well you’re creating an intention to buy where it didn’t previously exist.
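Those statistical rules can be made concrete: even a dozen responses a day support a defensible interval estimate of the intent rate. A minimal sketch in Python using a Wilson score interval, which behaves well at small sample sizes; the counts below are hypothetical, not from any actual program:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion -- robust at small n."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical day of activation: 12 exit surveys,
# 8 non-customers say they now intend to buy.
low, high = wilson_interval(8, 12)
print(f"Intent-to-buy: {8/12:.0%} (95% CI {low:.0%} to {high:.0%})")
```

The interval is wide on any single day, which is exactly why responses are pooled across days and markets before comparisons are made.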
You can, in near real-time, understand if your event marketing program is delivering greater sales at state fairs or street intercepts. This will allow you to shape future routing in an evidence-based manner.
You’ll understand if you’re generating the same impact in the southwest as you are in the northeast. Competitive pressures and a brand’s historical advertising efforts are almost always regionally based.
Different field teams are going to come up against different market challenges. You’ll be able to see exactly how you’re performing in those markets and identify areas where different talking points, or even full activation strategies, are warranted. This is where you identify event marketing best practices and deploy them across teams.
We See Evidence of Intent Co-Varying with Actual Behavior Every Day.
Purchase intent predicts actual purchase behavior any time we have the chance to compare the two.
Recently, I was reviewing a recap report in the Spirits industry. We’ve been measuring and reporting the performance of this brand for the past three years.
When we look at the rates of onsite sales and compare them to reported purchase intent, one clearly tracks with the other. Markets with high purchase intent have high on-site sales. Markets with low purchase intent have low on-site sales.
Likewise, when we go back to consumers two or three months later and ask them in a post-event survey what they did, we see reports of actual purchase behavior that track with what they said they’d do when they exited the event.
Sure, there is drop-off. It’s usually between 30% and 50%, depending on the brand category. But knowing this drop-off allows you to translate a measure of intended behavior to real revenue – real revenue, easily linked to differences in your activation strategy – in time to actually act on the results.
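With a known drop-off rate, translating stated intent into projected revenue is simple arithmetic. A sketch with illustrative numbers; every figure below is an assumption for the example, not PortMA’s actual model:

```python
# Hypothetical translation of stated intent into projected revenue.
surveyed_intent_rate = 0.60   # share of engaged consumers stating intent to buy
drop_off = 0.40               # intent-to-behavior drop-off (30-50% is typical)
engaged_consumers = 5_000     # consumer engagements across the activation
avg_purchase_value = 12.50    # assumed dollars per converted consumer

expected_buyers = engaged_consumers * surveyed_intent_rate * (1 - drop_off)
projected_revenue = expected_buyers * avg_purchase_value
print(f"{expected_buyers:.0f} projected buyers -> ${projected_revenue:,.0f}")
```

Because every input is observable during or shortly after the activation, this projection is available while there is still time to act on it.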
Photo Source: https://www.flickr.com/photos/walmartcorporate/