One component of experiential marketing we frequently measure is projected event metrics. Our agency clients usually have key performance indicators (KPIs) that they need to reach during the program. Knowing how close the brand teams are to their KPIs is valuable for projecting a program’s outcome accurately. The challenge with projecting event recap metrics is that you need adequate and appropriate sampling to avoid biasing the projections. I’d like to share an example of biased projections and how to overcome the issue.
Projections vs. KPI – Where the bias came from
We worked on an experiential program for an outdoor sporting goods retailer. The agency client had established KPIs for event impressions and consumer engagements.
In the first report we wrote for the program, the client provided impressions and engagements from each event. The event recap metrics, plus knowing the number of allocated events, gave me all the tools to run projections against their KPI.
Our projections model is a simple formula:
Projected Metric = Total Allocated Events x Average Metric per Event
I’ll use the program’s total engagements and allocated events as an example.
971 average engagements per event x 33 allocated events = 32,043 projected engagements
The KPI was 25,250, so the team was projecting roughly 6,800 engagements ahead of target. But there was one glaring problem.
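The formula and the worked example above can be sketched in a few lines of Python. The numbers are the ones from the example; the function name is just for illustration:

```python
def project_metric(avg_per_event, allocated_events):
    """Projected Metric = Total Allocated Events x Average Metric per Event."""
    return avg_per_event * allocated_events

avg_engagements = 971    # average engagements per executed event
allocated_events = 33    # total events allocated to the program
kpi = 25_250             # client's engagement KPI

projected = project_metric(avg_engagements, allocated_events)
print(projected)         # 32043 projected engagements
print(projected - kpi)   # 6793 ahead of the KPI
```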
The client had allocated a number of large, mid-sized, and small events, but at that point no small events had been executed. The average engagements figure was therefore based only on large and mid-sized events, while the allocated number of events included all three event types. Our projections were biased: they were higher than they should have been.
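A quick numerical illustration of this sampling bias, using entirely hypothetical per-event figures: if only the larger events have run, the running average overstates what the not-yet-executed small events will contribute.

```python
# Hypothetical engagements per event. Large and mid-sized events have
# executed; small events have not run yet.
executed_events = [1_600, 1_550, 150, 140]   # large + mid-sized, already run
small_events = [40, 45, 50]                  # small events still to come

# Average built only from executed events (what the first report used).
biased_avg = sum(executed_events) / len(executed_events)

# Average if every event type were represented in the sample.
all_events = executed_events + small_events
true_avg = sum(all_events) / len(all_events)

print(biased_avg > true_avg)   # True: the projection skews high
```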
You may be thinking the solution was to exclude the allocated number of small events from the formula, but then the projection wouldn’t be an accurate representation of the KPI. The projection would have no context.
Working around the bias – Segmenting the projections
We decided it would not be worthwhile to project against the KPI since the current version was biased upward. The alternative was to project engagements by the event type to see if anything actionable could be identified.
Projecting engagements at large events yielded 12,800 estimated engagements, while mid-sized events yielded 1,500. Large events are supposed to represent the largest proportion of engagements, so we expected that projection to be closer to 20,000.
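The segmented approach can be sketched as a per-type version of the same formula. The per-type averages and allocation counts below are hypothetical placeholders; only the projected totals (12,800 for large events, 1,500 for mid-sized) match the example above.

```python
# Hypothetical per-type inputs; no small events have executed,
# so there is no average to project from for that type.
avg_per_event = {"large": 1_600, "mid": 150}
allocated = {"large": 8, "mid": 10, "small": 15}

# Project only the event types that have data, instead of one
# blended (and biased) program-wide projection.
projections = {
    event_type: avg_per_event[event_type] * allocated[event_type]
    for event_type in avg_per_event
}
print(projections)   # {'large': 12800, 'mid': 1500}
```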
This identified an opportunity to present the challenge of maximizing engagements at large events. Perhaps the brand team was understaffed for handling that many engagements at large events.
We asked the client if they perceived this to be an issue. They were not concerned, because the engagements were generating positive purchase intent. In other words, the quality of the engagement mattered more to them than the quantity.
Ultimately, it’s important to ensure you have an accurate representation of events if you’re going to project event recap metrics. Otherwise, your projections could change drastically between reporting periods, and your current and previous recommendations might contradict each other.
Avoiding this kind of bias will increase the confidence of your results and your client’s confidence in you as a researcher.
Photo Source: StockMonkeys.com