Have you ever conducted a survey to help make a decision? Perhaps you helped someone else build a survey to help them make a decision. I’m sure you have taken surveys. I seem to get several every week in my email from various companies and organizations. Then there are the surveys on the back side of your receipts from restaurants and stores. You might even get surveys in the mail. I remember when they used to include a nickel in the envelope with the survey to make you feel guilty about keeping the nickel and not filling in the survey. I guess a nickel doesn’t buy a lot of guilt today.
But how many of those surveys do you think they get back? Ninety percent? Seventy-five percent? How many people need to respond to a survey in order to make valid predictions?
I guess the answer depends on whom you send the survey to and what questions you ask. If you are an auto manufacturer and you want to evaluate customer satisfaction with a new model, you do not want to send the survey to all automobile owners. On the other hand, if you want to find out what features would entice owners of automobiles from other manufacturers to switch and buy one of your vehicles, you may want to exclude owners of your cars.
You see the dilemma? The type of question should be closely tied to the audience to whom you send the survey. If you cannot narrow down your audience, perhaps the questions in your survey are too broad and you should consider making two or more surveys to target specific audience groups with questions that would be important to them.
How many questions should you put on a survey? The more questions you include, the less likely someone will take the time to answer them. For myself, anything more than a half dozen questions and I’m bored and ready to stop taking the survey. One way you can counterbalance this tendency is to offer greater rewards for completing the survey. Restaurants often offer free appetizers or desserts or even menu items for completing their surveys. Stores may offer a certain percent off your next purchase. Internet surveys have offered everything from cash/gift cards to thumb drives and even iPads. Would you fill in a 100-question survey for the chance to win a 4 GB thumb drive? What if they offered an iPad to 5 lucky survey takers? You might want to guess at how many people are willing to take that survey. If you think only 500 people will take the survey, you might be more willing to spend the next half hour completing it than if fifty thousand people were to take it. On the other hand, a survey that offers nothing in exchange for my time will probably wind up filed in my circular temporary storage bin, otherwise known as a trash can.
So you have identified your survey questions and you have a targeted list of people you will be asking to take the survey. You even have a reward program set up to encourage people to trade their time for a chance at a ‘gift’. What percent of your target audience do you need to get a response from in order to have a reasonable chance that the survey will represent the total population? Take the last presidential election as an example. Did any of the survey takers ask you whom you might be voting for? Probably not. In all of the years that I’ve been voting, I’ve never once been asked by a survey taker whom I was going to vote for or whom I did vote for. So whom are they asking? Would it surprise you to know that they can get a pretty good idea of the way the election will turn out by asking only a few thousand people? Makes me wonder whether we couldn’t just randomly select a few thousand people from around the country to cast their ballots instead of trying to get all of us to drive to the polls and then wait in lines for hours to cast our votes. If the odds are that we would get the same results, imagine the time we would save. After all, isn’t that what manufacturers do when they conduct product surveys to determine what goods to make and sell to us?
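There is actually a simple piece of statistics behind the "few thousand people" claim: for a simple random sample, the margin of error on a percentage shrinks with the square root of the sample size and barely depends on the size of the total population at all. As a rough illustration (the function names here are mine, not from any particular polling tool), the standard 95% confidence formula looks like this:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated
    from a simple random sample of n respondents.
    p=0.5 is the worst case (it gives the widest interval)."""
    return z * math.sqrt(p * (1 - p) / n)

def sample_size(moe, p=0.5, z=1.96):
    """Smallest sample size whose margin of error is at most moe."""
    return math.ceil(z * z * p * (1 - p) / (moe * moe))

# About a thousand respondents already gets you to roughly +/-3
# percentage points, whether the population is a city or a country.
print(f"n=1000 -> +/-{margin_of_error(1000):.1%}")   # roughly +/-3.1%
print(f"+/-3 points needs n={sample_size(0.03)}")    # a bit over 1,000
```

This is why election pollsters can stop at a few thousand people: going from 1,000 to 100,000 respondents only narrows the interval from about 3 points to about 0.3 points, which rarely changes the decision.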
I was recently involved in a survey with a very well defined maximum population. The survey was open for several weeks, and I can evaluate the cumulative data as of any day along the way, from a response rate of only a percent or two all the way up to the final 60+%. The survey owners were still trying to get eligible people to complete the survey before the deadline, even though they already had results from over 60% of the eligible survey takers. Was this overkill? Would a few more percent make a difference in the outcome of the decision?
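You can get a feel for the answer before seeing any real data by simulating it. This toy sketch (made-up numbers, not the survey I was involved in) gives a fixed yes/no answer to everyone in a small eligible population and then watches the running estimate as responses trickle in:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

# Hypothetical eligible population of 2,000 people; by construction
# 60% of them would answer 'yes' if asked.
population = [1] * 1200 + [0] * 800
random.shuffle(population)  # responses arrive in random order

# Check the running 'yes' percentage at several cumulative
# response rates.
for pct in (2, 10, 25, 60):
    n = len(population) * pct // 100
    estimate = sum(population[:n]) / n
    print(f"{pct:>3}% responded ({n:>4} people): estimate = {estimate:.1%}")
```

In runs like this the estimate typically bounces around at the first percent or two of responses and then settles close to the true 60% well before the response rate is anywhere near 60% — which is the pattern I will be looking for in the actual survey data.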
Over the next couple of days (or weeks if I get busy doing something else), I’m going to pull the data into a PowerPivot model and evaluate the results for some of the questions over time as more and more people took the survey. I’m curious to see whether getting more people to take the survey really made a difference. I’ll report back to you what I discover in a future Tuesday blog.
Until then, think about it and try to reason out what the results will look like. C’ya!