Agree/disagree scales are used far too often in our questionnaires. They ask respondents to assess a statement and then indicate the extent to which they agree or disagree with it. Over the past fifty years, many studies have clearly demonstrated problems with responses to agree/disagree statements. The primary problem is that they lead to a positive bias, but there are other problems as well.
Agree/disagree scales make writing questions easy. All we have to do is write down a bunch of statements and slap an agree/disagree scale onto them. Yet it is critical that all of us who do survey research know that agree/disagree scales have inherent problems. And when we want to measure something well, there is always a better way to measure it.
Let’s illustrate with two examples. Here is an example from a survey given to employees of a large company where respondents are asked to agree or disagree with a statement:
Q: The Company has a clear business strategy.
__ Strongly agree
__ Agree
__ Neither agree nor disagree
__ Disagree
__ Strongly disagree
The problem with this question is that it is really asking respondents to do several things. First, they have to interpret the literal meaning of the question stem: do they think the company has a business strategy, and if so, how clear is that business strategy? Then they have to translate their interpretation into an agree/disagree format.
This statement, like so many others, is written positively. Therefore, it sets up conditions for a positive bias. Again, there have been many experiments that have shown this.
The solution is to measure the construct directly, sometimes referred to as using questions with item-specific response options.
Let’s assume we want to know whether respondents think the company has a clear business strategy. We could ask a yes/no question:
Q: Do you believe the company has a clear business strategy?
__ Yes
__ No
Or suppose we want to know how clear the business strategy is to employees. We could ask:
Q: How clear or unclear is the company’s business strategy?
Very unclear 1 2 3 4 5 6 Very clear
We could think through other possible ways to ask the question and to scale the responses. The point is that we are now measuring the construct directly.
Agree/disagree questions are often the result of a failure to think through what information is really needed and how to ask for it. It is better to measure concepts directly because agree/disagree statements have an inherent bias, along with a few additional problems that I will discuss using a second example.
Here is a second example where respondents are asked to agree or disagree with a statement:
Q: We have developed an excellent business management approach.
__ Strongly agree
__ Agree
__ Neither agree nor disagree
__ Disagree
__ Strongly disagree
In addition to positive bias, there are other forms of measurement error in this question.
One additional kind of measurement error comes from whether respondents answer the question literally or make an assumption about the question's intention. In this example, a literal interpretation would lead those who thought the business management approach was "good" but not "excellent" to disagree with the statement. Yet others with the same opinion, who assumed the intention of the question was simply to rate the business management approach, would probably agree. Identical opinions would yield different results depending on whether respondents answered the question literally or made an assumption about its intention.
Another kind of measurement error stems from differing interpretations of the statement. In this example, some respondents might focus on the word "developed." A respondent might believe the company had an excellent business management approach well before new management took over a couple of years ago, and therefore disagree with the claim that "we have developed" it. Others might focus solely on the business management approach itself, ignore the development issue, and agree. The point is that the statement is subject to differing interpretations.
In this case, think about what you want to measure, ask the question directly, and take out any clutter (e.g., “we have developed”).
Improved Question:
Q: How would you rate the current business management approach?
Poor 1 2 3 4 5 6 Excellent
If you want information about the development of the business management approach, ask for it in a separate question.
Think carefully about what information you really need and ask a direct question about it. By making sure your question is really only one question and that both positive and negative answers are equally acceptable, you will have made the question clear and unbiased.
This example is related to the following three guidelines for writing questions:

  1. Replace agree/disagree scales with direct questions about what you really want to measure.
  2. Make sure the question is really asking only one question.
  3. Make clear that either a positive or a negative answer is equally acceptable.

This post is part of a larger initiative on my part to outline the top ten experiments all survey researchers should know. The first four are listed below.

  1. The effect of category ranges on responses
  2. Why open-ended and closed-ended questions yield vastly different responses (and how people do not use the “other” response)
  3. How open-ended responses may not be more valid than closed-ended responses
  4. Forced-choice yields better data than check-all-that-apply