Making the Stretch With Survey Data Collection
You can only find out so much about a consumer within a single survey. It is simply a matter of the time available. If you are administering a 25-question survey in person, not only will people drop out partway through, but you will only reach so many people to begin with. To streamline this process, we try to get as much insight as possible out of each question, and ensure that the questions asked cover the largest range of analysis we might want to perform.
Enhancing a pre-existing survey
For one project, we knew the brand ambassadors were going to be at very busy fairs, giving away a variety of free samples, so time for the survey was limited.
We managed to trim the survey down to five questions that we thought would deliver the data we needed for a thorough analysis, and we launched the program. However, as is prone to happen with these types of programs, there was a shift in the insights the team wanted.
The team decided to look specifically at how consumers were reacting to each sample, which, due to the great variety, was not something we had captured in our original survey. Additionally, the team became concerned that some of the events might be generating different responses based on the time of day.
Luckily for us, time of day was easy. The tool we were using for data collection had an active internet connection and tagged each result with the time it was collected.
It was easy enough to code those timestamps into categories and look for significant differences. Interestingly, while there was a dip in consumer response between 5:00 PM and 7:00 PM, it was likely driven by the higher percentage of men interviewed in that time frame, who were less likely to want to purchase the product.
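The bucketing step above is straightforward to sketch. The snippet below is a minimal illustration, not our actual tooling: the record fields, category names, and cutoff hours are all hypothetical stand-ins for whatever the collection tool exported.

```python
from datetime import datetime

# Hypothetical survey results: each record carries the timestamp
# the collection tool attached automatically.
results = [
    {"respondent": 1, "would_purchase": True,  "collected_at": "2015-06-13 10:30"},
    {"respondent": 2, "would_purchase": False, "collected_at": "2015-06-13 18:15"},
    {"respondent": 3, "would_purchase": True,  "collected_at": "2015-06-13 13:45"},
]

def time_bucket(ts):
    """Map a timestamp string to a coarse time-of-day category."""
    hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
    if hour < 12:
        return "morning"
    elif hour < 17:
        return "afternoon"
    else:
        return "evening"  # includes the 5:00-7:00 PM window

# Tag every result with its category, ready for group comparisons.
for r in results:
    r["time_of_day"] = time_bucket(r["collected_at"])
```

Once each result carries a category, comparing response rates across the buckets is an ordinary group-by analysis.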
Linking survey data to field recap data
Next came the samples issue. At first, it seemed impossible to analyze, given how little we knew about what the consumers had sampled. But, as with the time issue, it was just a matter of incorporating data that was not strictly survey results into the dataset for analysis.
We knew what events all the results came from, and we had a list of which samples were distributed. It was simply a matter of tagging those results with the samples that had been given out at the event.
If we couldn’t see it at the consumer level, the event level would have to suffice. From there, because of the small differences in varieties sampled at each event, we were able to offer some insight about how they were performing.
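The event-level tagging described above amounts to a simple join between the field recap and the survey results. The sketch below uses made-up event and variety names purely for illustration; the actual data structures would depend on how the recap was recorded.

```python
# Hypothetical field recap: which sample varieties each event handed out.
samples_by_event = {
    "county_fair": ["original", "spicy"],
    "street_fest": ["original", "citrus"],
}

# Survey results tagged only with the event they came from.
survey_results = [
    {"respondent": 1, "event": "county_fair", "rating": 4},
    {"respondent": 2, "event": "street_fest", "rating": 5},
]

# Tag each result with the samples distributed at its event --
# event-level attribution when consumer-level data isn't available.
for row in survey_results:
    row["samples"] = samples_by_event[row["event"]]
```

Because the tag is applied per event rather than per respondent, any conclusions drawn from it are about the mix of samples at an event, not about what a given consumer actually tried.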
What value did this add to the project?
It was unexpected, but the integration of outside data into the survey results offered insights we had not initially been able to produce. That is something I now keep in mind with each new project.
Photo Source: https://www.flickr.com/photos/fleur-design/