It’s all relative. This is particularly true of program data, and it’s one of the major reasons we encourage doing research on each program every year. It’s one thing to say that 30% of people said they would buy a product. It’s an entirely different, and far more valuable, thing to say that 30% of people said they would buy a product, up from 19% in 2012.
I had the chance to do a slightly more in-depth version of this just last week. The client had both 2012 and 2013 data, with each respondent uniquely tagged. Because of this, I was able to look at changes in recommend intent across individuals. So not only did I get to look at the overarching trend, I could isolate the consumers who slipped from likely to unlikely and identify their unique traits. This, combined with the rest of the survey, allowed me to pinpoint exactly where the client was falling short in the eyes of those most strongly affected.
The data also worked the other way: by looking at those who went from unlikely to likely to recommend, we could identify the areas in which the company had appreciably improved.
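For anyone who wants to replicate this kind of individual-level comparison, here is a minimal sketch in Python using pandas. The file names, the column names (respondent_id, recommend_intent), and the likely/unlikely labels are all assumptions for illustration; your survey export will have its own structure. The idea is simply to join the two waves on the unique respondent tag and then flag the movers in each direction:

```python
import pandas as pd

# Hypothetical survey extracts: one row per respondent per year,
# each carrying the same unique respondent tag across waves.
df_2012 = pd.read_csv("survey_2012.csv")  # assumed columns: respondent_id, recommend_intent, ...
df_2013 = pd.read_csv("survey_2013.csv")

# Keep only respondents present in both waves, so we can track
# individual-level change rather than just the aggregate trend.
merged = df_2012.merge(
    df_2013,
    on="respondent_id",
    suffixes=("_2012", "_2013"),
)

# Flag the two groups of interest: those who slipped from likely
# to unlikely to recommend, and those who moved the other way.
slipped = merged[
    (merged["recommend_intent_2012"] == "likely")
    & (merged["recommend_intent_2013"] == "unlikely")
]
improved = merged[
    (merged["recommend_intent_2012"] == "unlikely")
    & (merged["recommend_intent_2013"] == "likely")
]

print(f"{len(slipped)} respondents slipped, {len(improved)} improved")
```

Once the two groups are isolated, the rest of each group’s survey answers can be profiled against everyone else’s to surface what those respondents have in common.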
I think comparing to historical data is something that is far too often neglected. People get so focused on what’s happening with a product this year that they forget the context offered by last year’s performance can turn a 70% from amazing to terrible, or vice versa.
Photo Source: http://www.flickr.com/photos/yyq123/193849032/