Not too long ago, an agency partner came to me about an analysis completed by the brand team. It showed their program didn’t drive sales, and they asked me to take a closer look. We found the problem. It had nothing to do with the experiential marketing program.
The Best On-premise Depletion Data Is in Its Most Raw Form
The first thing I recommended was that they go back and ask the client to look at the data by account on a monthly, weekly, or, ideally, daily basis. You can’t simply overlay on-premise activations that run four or five times a month onto sales data grouped for the year. You have to break it down to the individual day when there was activity on site, and perhaps a few days before and after. Marketing activity happens on specific days; the depletion data should match that granularity.
We know not all the accounts were active at the same time, with the same volume or fervor, or doing the same thing. You have to control for this.
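Here’s a minimal sketch, in pandas, of what that day-level alignment could look like. The file names, column names (account_id, date, cases), and the three-day halo are all assumptions for illustration:

```python
import pandas as pd

# Hypothetical inputs; file and column names are assumptions for illustration.
depletions = pd.read_csv("depletions_daily.csv", parse_dates=["date"])  # account_id, date, cases
activations = pd.read_csv("activations.csv", parse_dates=["date"])      # account_id, date

# Build the set of (account, day) pairs covered by an activation,
# plus a few days on either side to catch spillover.
HALO_DAYS = 3
window = {
    (row.account_id, row.date + pd.Timedelta(days=offset))
    for row in activations.itertuples()
    for offset in range(-HALO_DAYS, HALO_DAYS + 1)
}

# Flag each account-day as inside or outside an activation window.
depletions["in_window"] = [
    (acct, day) in window
    for acct, day in zip(depletions["account_id"], depletions["date"])
]

# Compare average daily depletion inside vs. outside the window, per account,
# not one blended number for the whole year.
summary = (
    depletions.groupby(["account_id", "in_window"])["cases"]
              .mean()
              .unstack()
              .rename(columns={False: "baseline", True: "activation_window"})
)
print(summary.head())
```

The halo width is a judgment call; widen or narrow it based on how long an activation’s effect plausibly lasts in that account.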
The Easy On-premise Depletion Data Analysis Isn’t Always the Best
The brand team was drawing a conclusion about the program based on the average across the accounts, but the standard deviation was significant. Account depletion ranged from -50% to +50% (when looking at the activation months). The suggestion was that we clean up the data to determine why half the accounts showed results on par with or far better than the average for all other accounts. And why did some of the accounts do so poorly? Did these include accounts with less-than-ideal on-premise dynamics or a heavier competitor presence?
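To illustrate, a small sketch of how that spread might be surfaced; the lift values below are made up to mirror the range described above, not actual client data:

```python
import pandas as pd

# Illustrative per-account lift for the activation months; these values are
# invented to mirror the spread described above, not the client's data.
lift = pd.Series({
    "account_01": 0.42, "account_02": -0.38, "account_03": 0.05,
    "account_04": 0.51, "account_05": -0.47,
}, name="activation_month_lift")

# The mean alone hides the story when the spread is this wide.
print(f"mean: {lift.mean():+.1%}  std dev: {lift.std():.1%}")
print(f"range: {lift.min():+.1%} to {lift.max():+.1%}")

# Split the accounts so the two tails can be investigated separately:
# what did the winners do, and what was working against the laggards?
winners = lift[lift >= lift.mean()].sort_values(ascending=False)
laggards = lift[lift < lift.mean()].sort_values()
print("at or above the program average:", list(winners.index))
print("below the program average:", list(laggards.index))
```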
I recommended we be more selective about what the control (“all other”) accounts are. It wasn’t entirely clear what defined “all other.” If there were large accounts in the control, they might be in a better position to take advantage of seasonal sales or promotions. Their service footprint, compared to the program accounts, could also introduce bias. The solution could be to define “all other” as accounts of similar monthly volume in a similar footprint. For example, if there were some college towns in “all other” but not, or not the same number, in the program accounts, that could skew the numbers.
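A sketch of one way to build that matched control set, assuming a hypothetical accounts table with monthly_volume, market_type, and a boolean in_program column; the 20% volume band is an arbitrary starting point:

```python
import pandas as pd

# Hypothetical account attributes; the column names are assumptions.
accounts = pd.read_csv("accounts.csv")  # account_id, monthly_volume, market_type, in_program

program = accounts[accounts["in_program"]]
pool = accounts[~accounts["in_program"]]

def match_controls(row, pool, volume_band=0.20):
    """Controls: non-program accounts in the same market type (e.g. college
    town vs. not) whose monthly volume is within +/-20% of the program
    account's volume."""
    lo = row.monthly_volume * (1 - volume_band)
    hi = row.monthly_volume * (1 + volume_band)
    matches = pool[
        (pool["market_type"] == row.market_type)
        & pool["monthly_volume"].between(lo, hi)
    ]
    return matches["account_id"].tolist()

controls = {row.account_id: match_controls(row, pool) for row in program.itertuples()}

# A program account with no comparable controls needs a wider band, or should
# be flagged rather than compared against a mismatched pool.
unmatched = [acct for acct, ctrl in controls.items() if not ctrl]
print(f"{len(unmatched)} program account(s) lack comparable controls")
```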
When it came to the actual data, their logic was to use nine months and six months prior as another comparison point. I’d rather get into the weeds a bit more and tighten it up: look at the program accounts by month, compare that data against the same months in the prior year, and then apply the same monthly view to the new “all other” control accounts for the program months and the same months the year before.
The analysis might look something like this (a code sketch of the comparison follows the list):
- What was the year-over-year change in program accounts by month?
- Was this (by month) statistically better, worse, or the same as the “all other” accounts’ year-over-year change?
- What can we learn from our records of actual activity in the program accounts to identify which activities drove positive on-premise depletion?
- Can we identify cross-account patterns to establish best (or worst) practices?
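A rough sketch of the first two questions, assuming hypothetical monthly data with a group column separating program accounts from the matched controls. The years are placeholders, and Welch’s t-test is one reasonable choice of significance test, not something the original analysis prescribed:

```python
import pandas as pd
from scipy import stats

# Hypothetical monthly depletions; the years and column names are placeholders.
monthly = pd.read_csv("depletions_monthly.csv")
# Expected columns: account_id, group ("program" or "control"), year, month, cases

# Year-over-year change per account per month.
pivot = monthly.pivot_table(index=["account_id", "group", "month"],
                            columns="year", values="cases")
pivot["yoy_change"] = (pivot[2024] - pivot[2023]) / pivot[2023]
yoy = pivot.reset_index()

# For each month, compare program vs. matched-control YoY change using
# Welch's t-test (one reasonable choice of test, not the only one).
for month, grp in yoy.groupby("month"):
    prog = grp.loc[grp["group"] == "program", "yoy_change"].dropna()
    ctrl = grp.loc[grp["group"] == "control", "yoy_change"].dropna()
    if len(prog) > 1 and len(ctrl) > 1:
        _, p = stats.ttest_ind(prog, ctrl, equal_var=False)
        print(f"month {month:>2}: program {prog.mean():+.1%} vs "
              f"control {ctrl.mean():+.1%} (p = {p:.3f})")
```

The last two questions in the list are qualitative: join this output back to the activity records and look for patterns across the accounts that beat their controls.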
Make Sure to Manage On-premise Depletion Sales Analysis Risks
Risks? Absolutely. It could be that the experiential marketing program didn’t work, but that’s doubtful. True, any use of free product in an account can drive down sales. There could also be a buyer holding off during the program to see if they could get a better deal or score some free product. (You always want to control for any early-program hoarding or post-program catch-up behavior among buyers.)
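One way to sketch that control, assuming hypothetical weekly data and illustrative program dates: compare each account’s average before, during, and after the program window.

```python
import pandas as pd

# Hypothetical weekly depletions; the dates and column names are illustrative.
weekly = pd.read_csv("depletions_weekly.csv", parse_dates=["week"])  # account_id, week, cases
PROGRAM_START, PROGRAM_END = pd.Timestamp("2024-05-01"), pd.Timestamp("2024-07-31")

def phase(week):
    if week < PROGRAM_START:
        return "pre"
    return "during" if week <= PROGRAM_END else "post"

weekly["phase"] = weekly["week"].map(phase)

# Average weekly depletion before, during, and after the program, per account.
by_phase = weekly.groupby(["account_id", "phase"])["cases"].mean().unstack()

# A pre-program dip followed by a post-program spike suggests buyers held off
# and then caught up; those accounts need a longer baseline window, not just
# a comparison against the program weeks alone. The 0.8 threshold is arbitrary.
suspect = by_phase[by_phase["pre"] < 0.8 * by_phase["post"]]
print(f"{len(suspect)} account(s) show a possible hold-off / catch-up pattern")
print(by_phase.head())
```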
When all is said and done, on-premise drives sales. It’s not a matter of if, but of how efficiently (i.e., whether there’s an ROI). The right sales analysis will get you there.