Key Takeaways
- Every question needs a decision: Before you add a question, name the action you will take based on the answer. If you cannot, cut it.
- One idea per question: Double-barreled questions (asking two things at once) are the most common mistake and the easiest to fix.
- Neutral wording, specific timeframes: Remove leading adjectives, and replace vague words like "often" or "recently" with concrete ranges.
- Show, then tell: This guide gives you good and bad examples side by side so you can see the difference, not just read about it.
- Test before you send: Run every question through the checklist at the end of this guide. Five minutes of review prevents weeks of unusable data.
What makes a good survey question?
Research on survey methodology consistently identifies four qualities that separate useful questions from ones that produce noise. A question does not need to be clever or comprehensive. It needs to be clear enough that every respondent interprets it the same way, and specific enough that the answer tells you what to do next.
Which question type should you use?
| Question Type | Best For | Example | Mobile Friendly | Effect on Completion |
|---|---|---|---|---|
| Multiple choice (single) | Quick categorization, demographics | "What is your role?" | Yes | Baseline (fastest) |
| Checkbox (select all) | Multi-factor questions | "Which features do you use?" | Yes | 5-10% slower than single MC |
| Likert scale (agree/disagree) | Attitudes, perceptions | "I feel valued at work" | Yes | Similar to MC |
| Rating scale (0-10, 1-5) | NPS, CSAT, CES benchmarks | "How likely to recommend?" | Yes | Similar to MC |
| Open-ended (short text) | Specific qualitative feedback | "What would you change?" | Yes | 15-25% lower completion |
| Open-ended (long text) | Detailed explanations | "Describe your experience" | Partial | 20-30% lower completion |
| Matrix / grid | Rating multiple items on same scale | "Rate each feature 1-5" | No | 10-20% lower completion |
| Ranking / ordering | Prioritization | "Rank these features" | No | 5-15% lower completion |
1. Specific
A good question names exactly what it is asking about, including the timeframe, the product, or the interaction. "How satisfied are you with our service?" is vague -- service could mean speed, accuracy, friendliness, or something else entirely. "How satisfied are you with the speed of your last support interaction?" is specific.
2. Neutral
The wording should not steer the respondent toward a particular answer. Adjectives like "excellent," "innovative," or "easy" inside a question are red flags. So is citing what "most people" think. Strip the question down to a plain, factual prompt and let the answer options carry the range.
3. Answerable in one read
If a respondent has to re-read the question to parse it, the data will suffer. Double negatives ("Do you disagree that the policy is not flexible enough?"), jargon, and long compound sentences all increase cognitive load. Keep questions short, use plain language, and ask one thing at a time.
4. Tied to an action
The most overlooked quality. A question is only worth asking if someone will act differently based on the answer. "What is your favourite colour?" in a product feedback survey wastes a respondent's time unless colour genuinely drives a product decision. Before adding any question, name the specific action the answer will inform.
The rest of this guide shows you what these principles look like in practice -- first with examples of good questions, then with common mistakes and how to fix them.
Good survey question examples
What makes a survey question "good"? It is specific, neutral, answerable in one read, and tied to a decision you will act on. Here are some of the strongest examples from across this guide, chosen because they follow all four of those principles.
- How likely are you to recommend [Company] to a friend or colleague? (0 = Not likely, 10 = Very likely; see the scoring sketch after this list)
- How easy was it to resolve your issue today? (1 = Very difficult, 7 = Very easy)
- What is the primary reason you decided to leave?
- What is one thing we should start, stop, or continue? (open text)
- How confident are you that you can apply what you learned to your daily work? (0 = Not confident, 10 = Very confident)
- What prevented you from completing your task?
- Which session was most valuable?
- At what price would you consider [Product] too expensive to purchase? (open text)
- How disappointed would you be if you could no longer use [Product]?
- In a typical week, how many days do you exercise for at least 20 minutes?
- Overall, how satisfied are you with the service you received today? (1 = Very dissatisfied, 5 = Very satisfied)
- How well did our product meet your expectations for this purchase?
- In the past week, how often did you feel motivated to go beyond what was asked of you?
- How strongly do you agree: "I have the tools and resources I need to do my job well"? (Strongly disagree / Disagree / Neutral / Agree / Strongly agree)
- How would you rate the accuracy of the product description compared to what you received? (1 = Very inaccurate, 5 = Very accurate)
- How satisfied are you with the time it took for your order to arrive? (1 = Very dissatisfied, 5 = Very satisfied)
- Were you able to complete the task you came here to do?
- How easy was it to find the information you were looking for on our website? (1 = Extremely difficult, 7 = Extremely easy)
- How relevant was the content of this session to your current role? (1 = Not relevant, 5 = Highly relevant)
- How would you rate the balance between presentation length and Q&A time?
- Which of the following features would be most valuable to you? Select up to three.
- When you think of online survey tools, which brands come to mind? List as many as you recall. (open text)
- After your first two weeks, how clear are you on what success looks like in your role? (1 = Not clear at all, 5 = Completely clear)
- I felt supported by my team during my first week. (Strongly disagree / Disagree / Neutral / Agree / Strongly agree)
- How easy was it to set up your account and complete your first task? (1 = Extremely difficult, 7 = Extremely easy)
- During this visit, how well did your care provider explain your treatment options? (1 = Not well at all, 5 = Extremely well)
- How long did you wait past your scheduled appointment time before being seen?
- How confident are you that you understand the next steps in your care plan? (1 = Not confident, 5 = Very confident)
Each of these questions is specific (names the timeframe or context), neutral (no leading adjectives), and designed to produce actionable data. Browse the full sections below for 100+ more examples organized by survey goal.
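The first item above is the standard Net Promoter Score question. Scoring it is a fixed formula: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). A minimal scoring sketch in Python (the sample responses are hypothetical):

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 8, 7, 6, 3]))  # 2 promoters, 2 detractors -> 0.0
```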
Examples of bad survey questions (and how to fix them)
Even experienced researchers write bad survey questions. Research has cataloged at least 48 distinct types of questionnaire bias (Choi & Pak, 2005), and experiments show that small wording changes can swing responses by 12 to 28 percentage points (Pew Research Center, 2019). The examples below cover the six most common patterns. For each, we show the problem, why it matters, and a ready-to-use fix. For a deeper look at survey bias, see response bias.
Leading questions
A leading question steers respondents toward a particular answer through loaded language, social pressure, or embedded assumptions. Studies show that different personality types react differently to leading questions, making the resulting data both biased and unpredictably variable (Springer, 2024).
Bad: How satisfied are you with our award-winning customer support team?
Fix: How would you rate the customer support you received?
Why: "Award-winning" flatters the team before the respondent can form an independent opinion, nudging answers positive.

Bad: Most of our users say onboarding was easy. How easy did you find it?
Fix: How easy or difficult was the onboarding process?
Why: Citing what "most users" think creates social conformity pressure. Respondents do not want to be the outlier.

Bad: What problems have you experienced with our checkout process?
Fix: How would you describe your experience with our checkout process?
Why: The question presumes problems exist. A respondent with no issues is forced into an awkward non-answer.

Bad: How much did you enjoy our newly redesigned dashboard?
Fix: How would you rate your experience with the current dashboard?
Why: "Newly redesigned" implies the team worked hard, pressuring respondents to be positive. Let the experience speak for itself.

Bad: Given that 95% of customers renew, how likely are you to renew your subscription?
Fix: How likely are you to renew your subscription when your current term ends?
Why: Quoting a high renewal rate creates bandwagon pressure. Respondents at risk of churning will mask their true intent.

Bad: How helpful was our knowledgeable support team in resolving your issue?
Fix: Was your issue resolved during this support interaction?
Why: "Knowledgeable" pre-frames the team positively. Asking about the resolution outcome produces data you can actually act on.

Bad: How great was our customer service?
Fix: How would you rate the customer service you received today?
Why: Leading ("great") pressures positive answers.
Double-barreled questions
A double-barreled question asks about two things at once. The respondent might feel one way about the first topic and differently about the second, making any single answer uninterpretable.
Bad: How satisfied are you with the speed and accuracy of our order fulfillment?
Fix: How satisfied are you with the speed of order delivery?
Why: An order can arrive fast but wrong, or slow but correct. One answer cannot represent both.

Bad: Was the training session informative and engaging?
Fix: How informative was the training session?
Why: A lecture can be packed with useful content yet delivered in a monotone. Splitting the question reveals which dimension needs work.

Bad: Do you find our mobile app fast and easy to navigate?
Fix: How would you rate the loading speed of our mobile app?
Why: Performance and usability are separate attributes. A respondent who finds the app quick but confusing cannot answer honestly.

Bad: How satisfied are you with the speed and accuracy of your deliveries?
Fix: How satisfied are you with the accuracy of your most recent delivery?
Why: Speed and accuracy are independent. A fast but wrong delivery would leave the respondent unsure how to answer.

Bad: Does your manager provide clear direction and constructive feedback?
Fix: How often does your manager provide feedback you can act on?
Why: Direction-setting and feedback delivery are two distinct management skills. Separating them tells HR which competency needs development.

Bad: How satisfied are you with our pricing and product quality?
Fix: How satisfied are you with our pricing?
Why: Double-barreled (pricing and quality are different topics).
Loaded and presumptive questions
A loaded question assumes something about the respondent that may not be true, forcing them into a false frame.
Bad: Where do you like to eat when you go out for dinner?
Fix: How often do you eat at restaurants?
Why: Presumes the respondent eats out. People who cook at home every night have no valid way to respond.

Bad: How much time do you waste on social media each day?
Fix: On a typical day, approximately how many minutes do you spend on social media?
Why: "Waste" frames social media use as inherently unproductive, shaming respondents into under-reporting.

Bad: How much time do you waste in unnecessary meetings each week?
Fix: Approximately how many hours of meetings did you attend last week?
Why: "Waste" and "unnecessary" presume all meetings are unproductive, biasing respondents toward inflated estimates.

Bad: Since you prefer working from home, how many days should the office be optional?
Fix: How many days per week would you prefer to work on-site?
Why: The original assumes every employee prefers remote work. The fix is neutral and lets office-first employees answer honestly too.

Bad: What do you dislike about our competitor's product?
Fix: How does your experience with [Competitor] compare to your experience with [Our Product]?
Why: Asking what someone "dislikes" presumes negative feelings. A comparative scale captures the full spectrum of opinion.

Bad: Have you stopped using our competitor's product?
Fix: Which tools, if any, did you use before [Product]?
Why: Presumes the respondent used a competitor. If they never did, both "yes" and "no" are misleading.
Vague or ambiguous wording
Vague questions produce unreliable data because each respondent interprets them against a different mental benchmark.
Bad: Do you exercise regularly?
Fix: In a typical week, how many days do you exercise for at least 20 minutes?
Why: "Regularly" means daily to one person and twice a month to another. Specify the timeframe and threshold.

Bad: How do you feel about our new system?
Fix: How satisfied are you with the new inventory tracking system introduced last month?
Why: "New system" is unspecified (which system?), and "feel about" is too broad to produce actionable data.

Bad: Are our prices reasonable?
Fix: Compared to similar products you have used, how would you rate our pricing?
Why: "Reasonable" is subjective and unanchored. Give respondents a comparison frame so answers are consistent.

Bad: How do you feel about our company?
Fix: How likely are you to apply for another role at this company in the future?
Why: "Feel about" is unmeasurably broad. A behavioral intent question produces a concrete metric tied to retention strategy.

Bad: Is our product good?
Fix: How well does [Product] solve the problem you purchased it for?
Why: "Good" means different things to different people. Anchoring to the purchase intent gives the product team specific improvement direction.

Bad: Do you use our app often?
Fix: How often have you used the app in the last 30 days?
Why: Vague term ("often") means different things to different people.

Bad: Did our training help you?
Fix: How helpful was the training for your day-to-day work?
Why: Yes/no hides degree and what to improve.

Bad: How satisfied are you with the website?
Fix: How easy was it to find the information you needed today?
Why: Too general; unclear what "website" means.

Bad: Do you like using our CRM platform's API integrations?
Fix: How easy is it to connect our tool with the other software you use?
Why: Jargon-heavy; many respondents will not know what "CRM," "API," or "integrations" mean.
Poor answer options
Even a well-written question fails if the answer choices overlap, leave gaps, or omit common responses.
Bad: What is your annual household income? ($0-$50,000 / $50,000-$100,000)
Fix: What is your annual household income? (Under $50,000 / $50,000-$99,999 / $100,000 or more / Prefer not to answer)
Why: The original has overlapping boundaries ($50,000 appears in two ranges) and no option above $100,000. Always use non-overlapping ranges, include the full spectrum, and add "Prefer not to answer."
Bad: Which device do you use to access our website?
Fix: Which devices do you use to access our website? Select all that apply.
Why: No smartphone option, even though mobile traffic typically accounts for over half of web visits. Also changed to "select all" since people use multiple devices.

Bad: What is your annual household income? ($0-$25k / $25k-$50k / $50k-$100k / $100k+)
Fix: What is your annual household income before taxes?
Why: The original has overlapping ranges ($25k appears in two brackets), no "prefer not to answer" for a sensitive topic, and the $100k+ bucket lumps together very different income levels.

Bad: How do you commute to work? (Drive / Public transit / Walk)
Fix: How do you typically commute to work? Select all that apply.
Why: The original omits biking, remote work, and carpooling. It also forces a single selection when many people use multiple modes.

Bad: Rate your stress level: Low / Medium / High
Fix: On a typical workday, how would you rate your stress level?
Why: Three options lack the granularity to detect meaningful changes over time. Adding a timeframe anchor and a 5-point scale yields more sensitive trend data.

Bad: Why did you choose us? (Price, Quality, Other)
Fix: What was the primary reason you chose us?
Why: Options are too broad and not exhaustive; hard to act on.

Bad: What is your age? (18-25, 25-35, 35-45, 45+)
Fix: What is your age?
Why: Overlapping ranges (25 and 35 each appear in two brackets) and the top bracket is too broad.
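If you build answer brackets programmatically, half-open intervals make "exhaustive and exclusive" a property you can test rather than a judgment call. A minimal sketch in Python (the bracket boundaries are illustrative, not recommendations):

```python
# Map a reported income to exactly one bracket using half-open
# intervals [low, high): every value lands in one and only one range.
BRACKETS = [
    (0, 25_000, "Under $25,000"),
    (25_000, 50_000, "$25,000-$49,999"),
    (50_000, 100_000, "$50,000-$99,999"),
    (100_000, float("inf"), "$100,000 or more"),
]

def income_bracket(income: float) -> str:
    for low, high, label in BRACKETS:
        if low <= income < high:  # half-open: $50,000 matches exactly one range
            return label
    return "Prefer not to answer"  # fallback for declined or missing values

assert income_bracket(50_000) == "$50,000-$99,999"  # no overlap at boundaries
```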
Double negatives
Double negatives force respondents to mentally untangle two layers of negation. Most people guess rather than parse, producing random noise instead of data.
Bad: Do you disagree that the refund policy is not flexible enough?
Fix: How would you rate the flexibility of our refund policy?
Why: "Disagree" plus "not flexible enough" creates a double negative. Replace with a straightforward rating scale.

Bad: Would you not recommend against switching to annual billing?
Fix: How likely are you to switch to annual billing?
Why: "Not" and "against" cancel each other out logically, but most readers will stumble. Rewrite as a single, positive-direction question.

Bad: Do you not disagree with the proposed schedule change?
Fix: Do you support the proposed schedule change?
Why: "Not disagree" requires mental arithmetic to parse. Most respondents will guess rather than untangle the logic.

Bad: I would not be unwilling to participate in future training sessions.
Fix: How willing are you to participate in future training sessions?
Why: "Not unwilling" is a triple-processing task (willing, unwilling, not unwilling). The rewrite asks the question in one clear direction.

Bad: I don't disagree that the process isn't too slow.
Fix: The process is fast enough for my needs.
Why: Double negative ("don't disagree" + "isn't"); respondents cannot parse the intended meaning.
How to write unbiased survey questions
Even good question ideas fail when the wording is vague or biased. A few checks catch most problems, and several are mechanical enough to automate (see the sketch after this list).
- Specify the timeframe: "in the last 30 days" beats "recently".
- Use concrete nouns and verbs: "complete checkout" beats "use the site".
- Avoid leading phrasing: remove adjectives like "great" or "easy" from the question.
- One idea per question: do not combine two topics with "and".
- Offer a legitimate "out": "Not applicable" and "Prefer not to answer" reduce forced guesses.
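A minimal pre-send lint pass in Python (the word lists are illustrative starting points, not an exhaustive bias detector):

```python
import re

# Words that commonly signal a problem; extend these lists for your domain.
LEADING = {"great", "easy", "excellent", "innovative", "award-winning", "amazing"}
VAGUE = {"recently", "often", "regularly", "sometimes", "frequently"}

def lint_question(question: str) -> list[str]:
    """Return a list of warnings for one survey question."""
    warnings = []
    words = set(re.findall(r"[a-z-]+", question.lower()))
    if words & LEADING:
        warnings.append(f"possibly leading: {sorted(words & LEADING)}")
    if words & VAGUE:
        warnings.append(f"vague timeframe: {sorted(words & VAGUE)}")
    if " and " in question.lower():
        warnings.append("possible double-barreled question ('and' joins topics)")
    if re.search(r"\bnot\b.*\b(disagree|unwilling|against)\b", question.lower()):
        warnings.append("possible double negative")
    return warnings

print(lint_question("How great was our award-winning support, and was it fast?"))
```

These are heuristics: "and" inside a single topic ("start, stop, or continue") is fine, so treat hits as prompts for human review, not automatic rejections.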
Bias can also come from context: question order, social desirability, and answer option framing. If you want a practical overview, see response bias and how to reduce it.
For open-ended items, keep the prompt focused and avoid stacking multiple asks into one text box. The University of Florida's Savvy Survey guidance provides practical patterns for constructing open-ended items (see Constructing open-ended items for a questionnaire).
Best survey question order and flow
Even with great wording, poor flow can reduce completion and distort answers. The basics are consistent across survey guidance (for example, Community Tool Box's overview of survey conduct and planning: Conducting surveys).
Start easy and relevant
Open with a simple, non-threatening item (role, use case, or "what were you trying to do?") so respondents feel oriented.
Group by topic, not by format
Keep related items together (support, product, billing). Switching topics too often increases drop-off and "satisficing" (picking the first acceptable answer instead of the most accurate one).
Ask for ratings before explanations
Example: ask the 0-10 satisfaction first, then "What is the main reason for your score?" This preserves the metric while still capturing context.
Put sensitive items later
Demographics, compensation, and identity items go near the end and should be optional when possible.
Add follow-ups only where you will act
Use targeted follow-ups ("Which part was unclear?") instead of a general "Any other comments?" on every page.
When you ask a rating or multiple choice question, add one optional open-ended follow-up: "What is the main reason for your answer?" This often gives enough context without turning the survey into an interview.
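One way to keep these ordering rules enforced is to store questions as data and sort them before rendering: group by topic in the intended order, keep each rating ahead of its open-ended follow-up, and push sensitive items to the end. A minimal sketch in Python (the question set and fields are hypothetical):

```python
# Each question carries the metadata the ordering rules need.
questions = [
    {"topic": "demographics", "open": False, "sensitive": True,
     "text": "What is your annual household income?"},
    {"topic": "support", "open": True, "sensitive": False,
     "text": "What is the main reason for your score?"},
    {"topic": "support", "open": False, "sensitive": False,
     "text": "How satisfied are you with the speed of your last support interaction?"},
    {"topic": "context", "open": False, "sensitive": False,
     "text": "What were you trying to do today?"},
]

TOPIC_ORDER = ["context", "support", "demographics"]  # easy opener first

# Sort: sensitive items last, topics grouped in the intended order,
# and each closed rating ahead of its open-ended follow-up.
ordered = sorted(questions,
                 key=lambda q: (q["sensitive"], TOPIC_ORDER.index(q["topic"]), q["open"]))
for q in ordered:
    print(q["text"])
```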
How question design affects response rates
Writing clear, unbiased questions is only half the challenge. The way you structure a survey, the number of questions you include, and the formats you choose all have a measurable impact on whether people finish.
Survey length and completion
Research consistently shows an inverse relationship between survey length and completion rate. A study published in the Journal of Clinical and Translational Science found that completion rates dropped from 63% for 13-question surveys to 54% for 25-question surveys, and fell to just 37% for 72-question surveys (Kost & Correa da Rosa, 2018). The sweet spot for most feedback surveys is 5-10 questions.
- Transactional surveys (CSAT, CES): 1-3 questions. Capture the metric and one follow-up.
- Pulse surveys: 3-5 questions. Short enough to run weekly without fatigue.
- Feedback surveys (post-event, product): 5-10 questions. Enough depth without losing respondents.
- Annual engagement surveys: 15-25 questions maximum. Beyond this, data quality degrades even if people finish.
Question format and effort
Closed-ended questions (multiple choice, rating scales, Likert) take 10-15 seconds each on average. Open-ended questions take 45-90 seconds and require significantly more cognitive effort. Surveys with more than two or three open-ended questions see measurably lower completion rates and shorter, lower-quality text responses as respondents fatigue.
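Before sending, you can sanity-check length by estimating completion time from these per-format averages. A minimal sketch in Python (the draft question mix is hypothetical; the per-question seconds are rough midpoints of the ranges cited above):

```python
# Rough midpoints of the per-question timings cited above.
SECONDS_PER_QUESTION = {"closed": 12, "open": 68}

# A hypothetical 7-question draft: five closed items, two open-ended.
draft = ["closed", "closed", "open", "closed", "closed", "closed", "open"]

total_seconds = sum(SECONDS_PER_QUESTION[fmt] for fmt in draft)
open_count = draft.count("open")

print(f"Estimated completion time: ~{total_seconds / 60:.1f} minutes")
if open_count > 2:
    print("Warning: more than two open-ended questions tends to hurt completion.")
```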
5 rules for higher response rates
- Lead with a closed-ended question: An easy first question (rating scale or single-select) creates momentum. Do not open with a text box.
- Limit open-ended questions to two: Place them after the quantitative items. One mid-survey ("What is the main reason for your score?") and one at the end ("Anything else?") is usually enough.
- Skip matrix grids on mobile: Matrix questions require horizontal scrolling on phones. Split them into individual rating items or use a simple dropdown instead.
- Show progress early: A progress bar that moves quickly through the first few questions reassures respondents the survey is short. Even perceived progress reduces drop-off.
- Cut every question that fails the action test: If no one will act on the answer, the question is adding length without adding value. Shorter surveys consistently outperform longer ones.
Question quality checklist
Run every question through this checklist before you send your survey. Most problems are caught by the first three items.
- Single idea: Does this question ask about exactly one thing? If you see "and" joining two topics, split it.
- Neutral phrasing: Remove any adjective that suggests a "right" answer (great, easy, innovative, award-winning). Remove any social pressure ("most users say...").
- Specific timeframe: Replace "recently," "often," or "regularly" with a concrete period (in the last 30 days, this week, since your last visit).
- No assumptions: Does the question presume the respondent has done something, owns something, or holds an opinion? Add a filter question first.
- Answer options are exhaustive and exclusive: Every respondent can find exactly one answer that fits. Ranges do not overlap. Include "Other" or "Not applicable" when needed.
- No double negatives: Read the question aloud. If you stumble, rewrite it as a positive statement with a scale.
- Plain language: Replace jargon and acronyms with words your least technical respondent would understand.
- Action test: Can you name the decision this answer will inform? If not, cut the question.
Print this checklist and tape it next to your screen. Before launching any survey, read each question aloud and check it against all eight items. This catches the majority of wording problems before they reach respondents.
References
- Conrad, F. G., Tourangeau, R., & Sun, H. (2017). Examples in open-ended survey questions. International Journal of Public Opinion Research, 29(4), 690-702.
- O'Leary, J. L., & Israel, G. D. (n.d.). The Savvy Survey #6b: Constructing open-ended items for a questionnaire (PD067). University of Florida IFAS Extension.
- University of Kansas. (n.d.). Conducting surveys. Community Tool Box.
- Purdue Online Writing Lab. (n.d.). Creating good interview and survey questions. Purdue University.
- Kansas State University. (n.d.). Survey questions. Ethics in Science Communication.
- Sharma, H., & Ruikar, M. (2025). Crafting an effective questionnaire: An essential prerequisite of engaging surveys. Perspectives in Clinical Research, 16(3), 118-126.
- Gariton, C. E., & Israel, G. D. (n.d.). The Savvy Survey #6e: Understanding how question type impacts future analysis (PD083). University of Florida IFAS Extension.
- Choi, B. C. K., & Pak, A. W. P. (2005). A catalog of biases in questionnaires. Preventing Chronic Disease, 2(1), A13.
- Pew Research Center. (2019). Survey experiments can measure effects of question wording and more.
- Leading question susceptibility in surveys. (2024). Quality & Quantity. Springer Nature.