Surveys: The Good, the Bad & the Ugly

Originally posted on Ryerson Student Affairs on February 24, 2016.

Even if you’re new to assessment techniques, I’ll wager you know all about surveys. They are the most common assessment tool used to provide data about our programs, students, and communities. They are cost-effective and easy to distribute, but while they are good for collecting information about perceptions and awareness that can contextualize other assessment data, they are not good as a sole tool of evaluation. Even the best-designed surveys have these shortcomings, and when there are problems with survey design (such as bad questions or a poor sample), the data becomes just plain unreliable. So even if we’re properly using data in our decision-making processes, bad survey data can make good decisions pretty unlikely. Beyond that, if you publicize bad data, you risk doing real harm to the community by promoting incorrect assumptions. This is why ethics and expertise are so important when carrying out assessment. Data is powerful and comes with responsibility.

There are many, many ways to collect data. Surveys are an important tool, but are definitely over-represented in our assessment toolkits. Using them over and over again not only gives us a whole lot of indirect data that is usually not well-used, but it contributes to the problem of survey exhaustion. Students get a lot of invitations to fill out surveys and it’s understandable that they eventually start to ignore them. It’s why we get excited when we break 20% in our response rates. So what can we do to limit survey exhaustion? Here are a few ideas:

  • Create a comprehensive assessment plan and ensure that surveys don’t make up a majority of the tools used.
  • Time surveys well. They should occur within 2 weeks of a program (unless measuring a long-term impact) and should not coincide with major events in the student calendar.
  • Share the assessment plan with other staff and coordinate timing so that multiple surveys don’t go out at once.
  • Try to combine surveys within units and even across departments. Chances are many of us are asking some similar questions. A single, longer survey (although not too long) is better than two.

The moral that I’d like to teach here is, “Don’t use a survey unless it’s absolutely necessary.” But since we’re all bound to find ourselves in the position where a survey is best, here are 7 strategies to help your survey get you the best data possible.

1) Don’t Start From Scratch

What’s the easiest way to create a good survey? Use someone else’s! I’m not going to lie— designing a good survey is a labour of love…and at times, it’s a long labour. Chances are someone has already put the time in to create an effective survey tool, one that will get you the data you need and has been tested and validated. Ask around and see if you can get a hold of one from your colleagues, listservs, associations, or Campus Labs. Why waste precious time if there’s already a survey out there that can work for you?

2) Organize Your Survey Structure

Think about the people who will be responding to your survey. You want them to answer as accurately as possible and above all, you want them to finish the entire survey. That means employing some good, old-fashioned psychology:

  • Keep it short. If the survey is too long, one of two things will happen, and both undermine the reliability of your data: 1) people will start to drop out, or 2) they’ll stop paying attention and just try to get to the end as quickly as possible. If you won’t be using the data for a specific purpose, don’t ask the question. Be ruthless when you’re editing and when in doubt, cut it out.
  • Write a good preamble that tells them who you are, why you’re conducting the survey, and how the data will be used. People care more about things that they see value in or connect to personally.
  • Group your questions together logically and use headings for longer surveys. People will be able to move through the questions more quickly if they don’t have to switch topics or recall different events with each question. Quicker survey = higher completion rates.
  • Always put your demographic questions at the end of the survey. Why? Because people understand that demographic questions are quick and easy to respond to. It gives the impression that they’re almost done and makes them more likely to reach the finish line. Plus, you really want them to answer the core questions of the survey; the demographics are less important.

Basically, no one wants to respond to a long, disorganized, confusing mess of a survey.

3) Choose The Right Types Of Questions

You can ask structured (fixed response) questions or unstructured (open-ended response) questions. Structured questions offer a selection of responses that the respondent chooses from. These are best when you know all the most likely responses, but consider adding “other: please explain”, “doesn’t apply”, or “none of the above” options unless the responses you list are truly exhaustive. These catch-all options improve the validity of your data because respondents aren’t forced into an inaccurate choice. Use “I don’t know” or “not sure” infrequently since it gives an easy out to indecisive respondents who might otherwise give you more information.

You can also include some unstructured questions (questions with open-ended response options). These can be useful when you’re trying to gather new information from the respondents. Use them sparingly since they take much longer to complete.

4) Write Good Questions

This sounds so straightforward, doesn’t it? It’s surprising how often the questions themselves are the downfall of survey data.

  • Use concise, specific language. Make sure that respondents can’t misunderstand your question because of the language used, and that you can say with certainty what the data means once you have it. Don’t ask “Did you like the program?” Instead, ask things like “Did you learn something new at this program?” or “Was this program a good use of your time?”
  • Only ask one question at a time! Don’t ask, “Did you find this program engaging and helpful?” because then you can’t say for sure if they think it was engaging, helpful, both, or neither. Ask two questions instead.
  • Don’t ask leading questions. This is one surefire way to skew your data. If you ask, “Did you find this program totally awesome?” then you’re planting the response you want into the question, and your results are less valid.
  • Make sure that you have mutually exclusive response options. If you ask participants what their age bracket is and your responses are: 10-20, 20-30, 30-40, there are two possible answers for anyone who is 20 or 30. Use 10-19, 20-29, 30-39 instead (see the sketch at the end of this section).
  • Figure out how to structure your answer options. Do you want them to be able to make multiple selections (more accurate when assessing interest level in all options) or require participants to make one selection (useful to figure out top priorities), or rank options (best for figuring out how a number of options are prioritized)?

Above all, if you’re not using (or going to use) the data, don’t ask the question. This not only shortens the survey, it protects the privacy of respondents. On that note, make sure to add “prefer not to answer” to demographic questions and avoid asking personal questions if you don’t have to. Never ask for names, e-mails, student numbers, or other identifiers unless they can’t be linked back to the participants’ responses.
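
Because overlapping brackets and gaps are easy to miss by eye, here’s a minimal Python sketch of the mutual-exclusivity check described in the list above. The function and bracket values are my own illustration, not something from the original post:

```python
# A minimal sketch (hypothetical brackets) that checks whether numeric answer
# options are mutually exclusive and leave no gaps between them.

def check_brackets(brackets):
    """Each bracket is an inclusive (low, high) pair, e.g. (10, 19)."""
    problems = []
    ordered = sorted(brackets)
    # Compare each bracket with the next one in ascending order.
    for (lo1, hi1), (lo2, hi2) in zip(ordered, ordered[1:]):
        if hi1 >= lo2:
            problems.append(f"Overlap: {lo1}-{hi1} and {lo2}-{hi2} both contain {lo2}")
        elif lo2 > hi1 + 1:
            problems.append(f"Gap: no option covers {hi1 + 1} to {lo2 - 1}")
    return problems

# The overlapping brackets from the example above: flags 20 and 30.
print(check_brackets([(10, 20), (20, 30), (30, 40)]))

# A corrected version: returns an empty list.
print(check_brackets([(10, 19), (20, 29), (30, 39)]))
```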

5) Ask The Right Questions

Make sure the question you’re asking will get you the data you’re looking for. You can ask great questions, but if they’re not designed to get the information you need, you’re likely to end up with a whole lot of useless data.

  • As I’ve written already, assessment starts with your outcomes. When you’re writing your survey, go back and read them so you can design your questions to find out whether they’ve been achieved.
  • Avoid asking negative questions (e.g., “Did you dislike the program?”) and instead ask about what you wanted participants to gain or understand by the end of the experience. If students indicate, “No, I didn’t dislike the program,” that doesn’t necessarily mean that they did like it.
  • Ask only questions that a reasonable person would be able to answer. If you send out a survey asking exactly how much money students spent on textbooks last year, it’s pretty unlikely that you’ll get accurate responses. (That, and you’ll just irritate participants.)

6) Make Proper Use Of Likert Scales!

You can choose to use a Likert scale to get more nuanced information about your program, but beware of using too many, and don’t place several of them back-to-back on one page or your completion rate may drop.

  • Even or odd point scales? The debate about using a mid-point in Likert scales is heated. There are benefits to both, but my advice is to include a neutral option. The reason for doing so is that there are respondents who truly feel neutral and by forcing them to choose either a positive or negative response, you are inherently skewing your data. Better to allow some indecisive participants to opt out of choosing and know that the positive and negative responses are truly valid.
  • Keep the scales consistent throughout the survey. The direction of the scales should all be the same (e.g., always highest to lowest, or always lowest to highest). This helps you avoid mistakes from participants who start to anticipate the scales, and it helps you avoid confusion on the analysis side. Also, the wording of the scales should be as consistent as possible.
  • Use ordinal response scales. The answer options should be obviously different to anyone answering. Answers like “Very little”, “Little”, “Sometimes”, and “Often” are pretty subjective depending on the participants’ understanding of those terms, and the order of responses may not be evident. You’re better off making the distinctions between options very clear, so rather than using “Very good” and “Good” as options, you might use “Excellent” and “Good” (see the analysis sketch after this list).
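
To make the ordinal point concrete, here is a short Python sketch of how such responses might be summarized at analysis time. It’s my own illustration, assuming a five-point scale; the labels and sample responses are hypothetical:

```python
# A minimal sketch (hypothetical labels and data) that summarizes Likert
# answers as ordinal data: report the distribution and the median category
# rather than averaging numeric codes, which would assume the options are
# evenly spaced.

from collections import Counter
import statistics

SCALE = ["Poor", "Fair", "Good", "Very good", "Excellent"]  # one consistent direction
CODE = {label: i for i, label in enumerate(SCALE)}

responses = ["Good", "Excellent", "Good", "Fair", "Very good",
             "Good", "Excellent", "Good", "Poor", "Very good"]

# Frequency distribution, printed in scale order so the shape is visible.
counts = Counter(responses)
for label in SCALE:
    print(f"{label:>10}: {counts[label]}")

# median_low always returns an actual category, which is an appropriate
# central-tendency measure for ordinal scales.
median_code = statistics.median_low(CODE[r] for r in responses)
print("Median response:", SCALE[median_code])
```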

7) Use Incentives (Just Not Really Big Ones)

Incentives will increase your response rate. It’s as simple as that. If you use a really significant incentive (like an iPad or a few hundred dollars), however, then you run the risk of skewing your data. Respondents may feel that they need to answer positively in order to have a better chance at receiving the incentive, or may look at the incentive as a kind of bribe and feel negative about your brand. You’ll also need to decide if each participant will receive an incentive, or if they’re entered into a lottery. Data shows that the best option to get a great response rate without damaging the integrity of your data is to give each participant a nominal incentive (e.g., a $1 iTunes gift code). This, however, may significantly increase the cost of your survey (500 completed surveys at $1 each is $500), so you’ll have to decide what works best for you.

Bonus — Test Your Survey!

Get someone else to try out your survey and find out what they think. Were the questions clear? Did they feel it was too long? Were they reasonably able to answer the questions? All of this information will help you refine your survey tool to get you the data you want.
