How to write good survey questions

A practical guide to writing clear, unbiased questions -- with examples of what works, what fails, and how to fix common mistakes.

Key Takeaways

  1. Every question needs a decision: Before you add a question, name the action you will take based on the answer. If you cannot, cut it.
  2. One idea per question: Double-barreled questions (asking two things at once) are the most common mistake and the easiest to fix.
  3. Neutral wording, specific timeframes: Remove leading adjectives, and replace vague words like "often" or "recently" with concrete ranges.
  4. Show, then tell: This guide gives you good and bad examples side by side so you can see the difference, not just read about it.
  5. Test before you send: Run every question through the checklist at the end of this guide. Five minutes of review prevents weeks of unusable data.

What makes a good survey question?

Research on survey methodology consistently identifies four qualities that separate useful questions from ones that produce noise. A question does not need to be clever or comprehensive. It needs to be clear enough that every respondent interprets it the same way, and specific enough that the answer tells you what to do next.

Question wording alone can swing responses by 12 to 28 percentage points. In controlled experiments, changing a single qualifier ("jobs" vs. "good jobs") shifted agreement by 12 points. Adding specific numbers changed "major problem" responses by 28 points (Pew Research Center, 2019).

Which question type should you use?

What kind of data do you need? Match the decision to a question format:

  • Categories / groups: one answer? Use multiple choice. Select many? Use checkboxes.
  • Opinions / attitudes: agreement? Use a Likert scale. Satisfaction? Use a rating scale.
  • Priorities: use ranking.
  • Open feedback: brief? Use short text. Detailed? Use long text.
  • Comparing items on the same scale: use a matrix grid.
How each question type affects your survey
Question type | Best for | Example | Mobile friendly | Effect on completion
--- | --- | --- | --- | ---
Multiple choice (single) | Quick categorization, demographics | "What is your role?" | Yes | Baseline (fastest)
Checkbox (select all) | Multi-factor questions | "Which features do you use?" | Yes | 5-10% slower than single choice
Likert scale (agree/disagree) | Attitudes, perceptions | "I feel valued at work" | Yes | Similar to multiple choice
Rating scale (0-10, 1-5) | NPS, CSAT, CES benchmarks | "How likely to recommend?" | Yes | Similar to multiple choice
Open-ended (short text) | Specific qualitative feedback | "What would you change?" | Yes | 15-25% lower completion
Open-ended (long text) | Detailed explanations | "Describe your experience" | Partial | 20-30% lower completion
Matrix / grid | Rating multiple items on one scale | "Rate each feature 1-5" | No | 10-20% lower completion
Ranking / ordering | Prioritization | "Rank these features" | No | 5-15% lower completion

1. Specific

A good question names exactly what it is asking about, including the timeframe, the product, or the interaction. "How satisfied are you with our service?" is vague -- service could mean speed, accuracy, friendliness, or something else entirely. "How satisfied are you with the speed of your last support interaction?" is specific.

2. Neutral

The wording should not steer the respondent toward a particular answer. Adjectives like "excellent," "innovative," or "easy" inside a question are red flags. So is citing what "most people" think. Strip the question down to a plain, factual prompt and let the answer options carry the range.

3. Answerable in one read

If a respondent has to re-read the question to parse it, the data will suffer. Double negatives ("Do you disagree that the policy is not flexible enough?"), jargon, and long compound sentences all increase cognitive load. Keep questions short, use plain language, and ask one thing at a time.

4. Tied to an action

The most overlooked quality. A question is only worth asking if someone will act differently based on the answer. "What is your favourite colour?" in a product feedback survey wastes a respondent's time unless colour genuinely drives a product decision. Before adding any question, name the specific action the answer will inform.

The rest of this guide shows you what these principles look like in practice -- first with examples of good questions, then with common mistakes and how to fix them.

Good survey question examples

What makes a survey question "good"? It is specific, neutral, answerable in one read, and tied to a decision you will act on. Here are some of the strongest examples from across this guide, chosen because they follow all four of those principles.

Quick test: Before adding any question to your survey, ask: "What will I do differently based on this answer?" If you cannot answer that, cut the question.
  • How likely are you to recommend [Company] to a friend or colleague?
    Scale: 0 (Not likely) to 10 (Very likely)
  • The standard Net Promoter Score question. One number that tracks loyalty over time and benchmarks against your industry (a scoring sketch follows at the end of these examples).
  • How easy was it to resolve your issue today?
    Scale: 1 (Very difficult) to 7 (Very easy)
  • The Customer Effort Score format. Low-effort experiences drive retention more reliably than delight.
  • What is the primary reason you decided to leave?
    Career growth
    Compensation
    Management
    Work-life balance
    Relocation
    Company culture
    Other
  • Specific options surface the real driver behind turnover. Open-ended "why" questions rarely get honest answers in exit surveys.
  • What is one thing we should start, stop, or continue?
    (Open text)
  • Forces concreteness. The start/stop/continue frame prevents vague positivity and surfaces actionable changes.
  • How confident are you that you can apply what you learned to your daily work?
    Scale: 0 (Not confident) to 10 (Very confident)
  • Directly measures training transfer. If confidence is low, the training content or delivery needs work before you run it again.
  • What prevented you from completing your task?
    Confusing navigation
    Technical error or bug
    Could not find information
    Process was too long
    Needed to contact support
    Other
  • Named barriers are easier to act on than open-ended complaints. "Other" catches edge cases without bloating the list.
  • Which session was most valuable?
    Opening keynote
    Hands-on workshop
    Panel discussion
    Breakout groups
  • Identifies what to repeat and what to cut. Concrete session names prevent vague "it was all great" answers.
  • At what price would you consider [Product] too expensive to purchase?
    (Open text)
  • The Van Westendorp price sensitivity format. Open numeric input avoids anchoring respondents to pre-set ranges.
  • How disappointed would you be if you could no longer use [Product]?
    Not disappointed
    Somewhat disappointed
    Very disappointed
  • The Sean Ellis product-market fit test. If fewer than 40% say "very disappointed," you have not reached product-market fit.
  • In a typical week, how many days do you exercise for at least 20 minutes?
    0 days
    1-2 days
    3-4 days
    5-6 days
    7 days
  • Specific timeframe and threshold ("at least 20 minutes") mean every respondent interprets the question the same way.
  • Overall, how satisfied are you with the service you received today?
    Scale: 1 (Very dissatisfied) to 5 (Very satisfied)
  • A single-question CSAT score. Asking "today" anchors the response to a specific interaction, which makes trends over time meaningful.
  • How well did our product meet your expectations for this purchase?
    Fell short of expectations
    Met expectations
    Exceeded expectations
  • Expectations framing avoids leading language while revealing the gap between what was promised and what was delivered.
  • In the past week, how often did you feel motivated to go beyond what was asked of you?
    Never
    Rarely
    Sometimes
    Often
    Every day
  • A weekly pulse question that tracks discretionary effort. The one-week window keeps responses concrete rather than aspirational.
  • How strongly do you agree: "I have the tools and resources I need to do my job well"?
    Strongly disagree
    Disagree
    Neutral
    Agree
    Strongly agree
  • Resource adequacy is one of the strongest predictors of engagement. Phrasing it as a direct statement makes agreement scales work naturally.
  • How would you rate the accuracy of the product description compared to what you received?
    Scale: 1 (Very inaccurate) to 5 (Very accurate)
  • Pinpoints listing quality issues before they become returns. Teams can trace low scores back to specific SKUs and fix descriptions.
  • How satisfied are you with the time it took for your order to arrive?
    Scale: 1 (Very dissatisfied) to 5 (Very satisfied)
  • Isolates delivery speed from product quality so logistics and merchandising teams each get actionable data.
  • Were you able to complete the task you came here to do?
    Yes, completely
    Yes, partially
    No
  • Task completion rate is the single most important usability metric. This question surfaces blockers that analytics alone may miss.
  • How easy was it to find the information you were looking for on our website?
    Scale: 1 (Extremely difficult) to 7 (Extremely easy)
  • Findability is a leading indicator of site architecture problems. A 7-point scale gives enough resolution to detect small design improvements.
  • How relevant was the content of this session to your current role?
    Scale: 1 (Not relevant) to 5 (Highly relevant)
  • Relevance scores help event organizers match topics to audience segments and improve future session curation.
  • How would you rate the balance between presentation length and Q&A time?
    Too much presentation
    About right
    Too much Q&A
  • A balanced three-option scale reveals format preferences without leading respondents toward either extreme.
  • Which of the following features would be most valuable to you? Select up to three.
    Offline mode
    Team collaboration
    Custom branding
    API access
    Advanced analytics
    Priority support
  • Capping selections at three forces prioritization, producing cleaner signal than "select all that apply" for roadmap decisions.
  • When you think of online survey tools, which brands come to mind? List as many as you recall.
    (Open text)
  • Unaided brand recall measures true top-of-mind awareness. Open text avoids the recognition bias that a brand list introduces.
  • After your first two weeks, how clear are you on what success looks like in your role?
    Scale: 1 (Not clear at all) to 5 (Completely clear)
  • Role clarity within the first 14 days is a strong predictor of new-hire retention. Low scores trigger immediate manager check-ins.
  • How strongly do you agree: "I felt supported by my team during my first week"?
    Strongly disagree
    Disagree
    Neutral
    Agree
    Strongly agree
  • Social integration drives early engagement. This question flags onboarding experiences where new hires feel isolated.
  • How easy was it to set up your account and complete your first task?
    Scale: 1 (Extremely difficult) to 7 (Extremely easy)
  • First-task completion ease predicts long-term activation. Product teams use this score to identify onboarding friction points.
  • During this visit, how well did your care provider explain your treatment options?
    Scale: 1 (Not well at all) to 5 (Extremely well)
  • Communication quality is a core CAHPS domain. Scores on this question help clinics identify providers who may need communication coaching.
  • How long did you wait past your scheduled appointment time before being seen?
    Seen on time
    1-15 minutes
    16-30 minutes
    31-60 minutes
    More than 60 minutes
  • Concrete time brackets eliminate recall bias and give operations teams precise data for scheduling improvements.
  • How confident are you that you understand the next steps in your care plan?
    Scale: 1 (Not confident) to 5 (Very confident)
  • Patient confidence in next steps correlates with treatment adherence. Low scores signal a need for clearer discharge instructions.

Each of these questions is specific (names the timeframe or context), neutral (no leading adjectives), and designed to produce actionable data. Browse the full sections below for 100+ more examples organized by survey goal.
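
Several of the rating questions above feed standard benchmark metrics. As a quick illustration, here is a minimal Python sketch of how NPS (promoters minus detractors on the 0-10 scale) and CSAT (share of 4s and 5s on the 1-5 scale) are typically computed; the response lists are made-up illustration data, not benchmarks.

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)


def csat(scores: list[int]) -> float:
    """CSAT: share of respondents who answered 4 or 5 on a 1-5 scale."""
    return 100 * sum(s >= 4 for s in scores) / len(scores)


# Made-up illustration data, not real survey results.
print(nps([10, 9, 8, 7, 6, 10, 9, 3]))  # 25.0 -- 4 promoters, 2 detractors, n=8
print(csat([5, 4, 3, 5, 2, 4]))         # 66.7 -- 4 of 6 answered 4 or 5
```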


Examples of bad survey questions (and how to fix them)

Even experienced researchers write bad survey questions. Research has cataloged at least 48 distinct types of questionnaire bias (Choi & Pak, 2005), and experiments show that small wording changes can swing responses by 12 to 28 percentage points (Pew Research Center, 2019). The examples below cover the six most common patterns. For each, we show the problem, why it matters, and a ready-to-use fix. For a deeper look at survey bias, see response bias.

Leading questions

A leading question steers respondents toward a particular answer through loaded language, social pressure, or embedded assumptions. Studies show that different personality types react differently to leading questions, making the resulting data both biased and unpredictably variable (Springer, 2024).

How satisfied are you with our award-winning customer support team?

How would you rate the customer support you received?

Very poor
Poor
Fair
Good
Very good

"Award-winning" flatters the team before the respondent can form an independent opinion, nudging answers positive.

Most of our users say onboarding was easy. How easy did you find it?

How easy or difficult was the onboarding process?

Scale: 1 (Very difficult) to 5 (Very easy)

Citing what "most users" think creates social conformity pressure. Respondents do not want to be the outlier.

What problems have you experienced with our checkout process?

How would you describe your experience with our checkout process?

(Open text)

The question presumes problems exist. A respondent with no issues is forced into an awkward non-answer.

How much did you enjoy our newly redesigned dashboard?

How would you rate your experience with the current dashboard?

Scale: 1 (Very poor) to 5 (Excellent)

"Newly redesigned" implies the team worked hard, pressuring respondents to be positive. Let the experience speak for itself.

Given that 95% of customers renew, how likely are you to renew your subscription?

How likely are you to renew your subscription when your current term ends?

Scale: 1 (Very unlikely) to 5 (Very likely)

Quoting a high renewal rate creates bandwagon pressure. Respondents at risk of churning will mask their true intent.

How helpful was our knowledgeable support team in resolving your issue?

Was your issue resolved during this support interaction?

Yes, fully resolved
Partially resolved
Not resolved

"Knowledgeable" pre-frames the team positively. Asking about resolution outcome produces data you can actually act on.

How great was our customer service?

How would you rate the customer service you received today?

Scale: 1 (Very poor) to 5 (Very good)

Leading ("great") pressures positive answers.

Double-barreled questions

A double-barreled question asks about two things at once. The respondent might feel one way about the first topic and differently about the second, making any single answer uninterpretable.

How satisfied are you with the speed and accuracy of our order fulfillment?

How satisfied are you with the speed of order delivery?

Scale: 1 (Very dissatisfied) to 5 (Very satisfied)

An order can arrive fast but wrong, or slow but correct. One answer cannot represent both.

Was the training session informative and engaging?

How informative was the training session?

Scale: 1 (Not at all) to 5 (Extremely)

A lecture can be packed with useful content yet delivered in a monotone. Splitting the question reveals which dimension needs work.

Do you find our mobile app fast and easy to navigate?

How would you rate the loading speed of our mobile app?

Scale: 1 (Very slow) to 5 (Very fast)

Performance and usability are separate attributes. A respondent who finds the app quick but confusing cannot answer honestly.

How satisfied are you with the speed and accuracy of your deliveries?

How satisfied are you with the accuracy of your most recent delivery?

Scale: 1 (Very dissatisfied) to 5 (Very satisfied)

Speed and accuracy are independent. A fast but wrong delivery would leave the respondent unsure how to answer.

Does your manager provide clear direction and constructive feedback?

How often does your manager provide feedback you can act on?

Never
Rarely
Sometimes
Often
Always

Direction-setting and feedback delivery are two distinct management skills. Separating them tells HR which competency needs development.

How satisfied are you with our pricing and product quality?

How satisfied are you with our pricing?

Scale: 1 (Not satisfied) to 5 (Very satisfied)

Double-barreled (pricing and quality are different topics).

Loaded and presumptive questions

A loaded question assumes something about the respondent that may not be true, forcing them into a false frame.

Where do you like to eat when you go out for dinner?

How often do you eat at restaurants?

Never
A few times a year
Monthly
Weekly
Multiple times a week

Presumes the respondent eats out. People who cook at home every night have no valid way to respond.

How much time do you waste on social media each day?

On a typical day, approximately how many minutes do you spend on social media?

0
1-15
16-30
31-60
More than 60

"Waste" frames social media use as inherently unproductive, shaming respondents into under-reporting.

How much time do you waste in unnecessary meetings each week?

Approximately how many hours of meetings did you attend last week?

0-2 hours
3-5 hours
6-10 hours
11-15 hours
More than 15 hours

"Waste" and "unnecessary" presume all meetings are unproductive, biasing respondents toward inflated estimates.

Since you prefer working from home, how many days should the office be optional?

How many days per week would you prefer to work on-site?

0 days
1 day
2 days
3 days
4 days
5 days

The original assumes every employee prefers remote work. The fix is neutral and lets office-first employees answer honestly too.

What do you dislike about our competitor's product?

How does your experience with [Competitor] compare to your experience with [Our Product]?

Competitor is much better
Competitor is somewhat better
About the same
Our product is somewhat better
Our product is much better

Asking what someone "dislikes" presumes negative feelings. A comparative scale captures the full spectrum of opinion.

Have you stopped using our competitor's product?

Which tools, if any, did you use before [Product]?

Select all that apply

Presumes the respondent used a competitor. If they never did, both "yes" and "no" are misleading.

Vague or ambiguous wording

Vague questions produce unreliable data because each respondent interprets them against a different mental benchmark.

Do you exercise regularly?

In a typical week, how many days do you exercise for at least 20 minutes?

0
1-2
3-4
5-6
7

"Regularly" means daily to one person and twice a month to another. Specify the timeframe and threshold.

How do you feel about our new system?

How satisfied are you with the new inventory tracking system introduced last month?

Scale: 1 (Very dissatisfied) to 5 (Very satisfied)

"New system" is unspecified (which system?), and "feel about" is too broad to produce actionable data.

Are our prices reasonable?

Compared to similar products you have used, how would you rate our pricing?

Much lower
Somewhat lower
About the same
Somewhat higher
Much higher

"Reasonable" is subjective and unanchored. Give respondents a comparison frame so answers are consistent.

How do you feel about our company?

How likely are you to apply for another role at this company in the future?

Scale: 1 (Very unlikely) to 5 (Very likely)

"Feel about" is unmeasurably broad. A behavioral intent question produces a concrete metric tied to retention strategy.

Is our product good?

How well does [Product] solve the problem you purchased it for?

Scale: 1 (Not well at all) to 5 (Extremely well)

"Good" means different things to different people. Anchoring to the purchase intent gives the product team specific improvement direction.

Do you use our app often?

How often have you used the app in the last 30 days?

0
1-2
3-5
6-10
11+

Vague term ("often") means different things to different people.

Did our training help you?

How helpful was the training for your day-to-day work?

Scale: 1 (Not at all helpful) to 5 (Extremely helpful)

Yes/no hides degree and what to improve.

How satisfied are you with the website?

How easy was it to find the information you needed today?

Scale: 1 (Very difficult) to 5 (Very easy)

Too general; unclear what "website" means.

Do you like using our CRM platform's API integrations?

How easy is it to connect our tool with the other software you use?

Scale: 1 (Very difficult) to 5 (Very easy)

Jargon-heavy; many respondents will not know what "CRM," "API," or "integrations" mean.

Poor answer options

Even a well-written question fails if the answer choices overlap, leave gaps, or omit common responses.

What is your annual household income?

Under $25,000
$25,000-$50,000
$50,000-$75,000
$75,000-$100,000

What is your annual household income?

Under $25,000
$25,000-$49,999
$50,000-$74,999
$75,000-$99,999
$100,000-$149,999
$150,000 or more
Prefer not to answer

The original has overlapping boundaries ($50,000 appears in two ranges) and no option above $100,000. Always use non-overlapping ranges, include the full spectrum, and add "Prefer not to answer."

Which device do you use to access our website?

Laptop
Desktop
Tablet

Which devices do you use to access our website? Select all that apply.

Smartphone
Tablet
Laptop
Desktop
Other

No smartphone option, even though mobile traffic typically accounts for over half of web visits. Also changed to "select all" since people use multiple devices.

What is your annual household income? ($0-$25k / $25k-$50k / $50k-$100k / $100k+)

What is your annual household income before taxes?

Under $25,000
$25,000 - $49,999
$50,000 - $74,999
$75,000 - $99,999
$100,000 - $149,999
$150,000 or more
Prefer not to answer

The original has overlapping ranges ($25k appears in two brackets), no "prefer not to answer" for a sensitive topic, and the $100k+ bucket lumps together very different income levels.

How do you commute to work? (Drive / Public transit / Walk)

How do you typically commute to work? Select all that apply.

Drive alone
Carpool
Public transit
Bicycle
Walk
Work from home
Other

The original omits biking, remote work, and carpooling. It also forces a single selection when many people use multiple modes.

Rate your stress level: Low / Medium / High

On a typical workday, how would you rate your stress level?

Scale: 1 (No stress) to 5 (Extreme stress)

Three options lack the granularity to detect meaningful changes over time. Adding a timeframe anchor and a 5-point scale yields more sensitive trend data.

Why did you choose us? (Price, Quality, Other)

What was the primary reason you chose us?

Specific feature
Lower total cost
Faster delivery
Recommendation
Contract requirement
Previous experience
Other

Options are too broad and not exhaustive; hard to act on.

What is your age? (18-25, 25-35, 35-45, 45+)

What is your age?

18-24
25-34
35-44
45-54
55-64
65+
Prefer not to answer

Overlapping ranges (25 and 35 each appear in two brackets) and the top bracket is too broad.

Double negatives

Double negatives force respondents to mentally untangle two layers of negation. Most people guess rather than parse, producing random noise instead of data.

Do you disagree that the refund policy is not flexible enough?

How would you rate the flexibility of our refund policy?

Very inflexible
Somewhat inflexible
Neutral
Somewhat flexible
Very flexible

"Disagree" plus "not flexible enough" creates a double negative. Replace with a straightforward rating scale.

Would you not recommend against switching to annual billing?

How likely are you to switch to annual billing?

Scale: 1 (Very unlikely) to 5 (Very likely)

"Not" and "against" cancel each other out logically, but most readers will stumble. Rewrite as a single, positive-direction question.

Do you not disagree with the proposed schedule change?

Do you support the proposed schedule change?

Yes
No
Unsure

"Not disagree" requires mental arithmetic to parse. Most respondents will guess rather than untangle the logic.

I would not be unwilling to participate in future training sessions.

How willing are you to participate in future training sessions?

Scale: 1 (Not willing) to 5 (Very willing)

"Not unwilling" is a triple-processing task (willing, unwilling, not unwilling). The rewrite asks the question in one clear direction.

I don't disagree that the process isn't too slow.

The process is fast enough for my needs.

Strongly disagree
Disagree
Neutral
Agree
Strongly agree

Double negative ("don't disagree" + "isn't"); respondents cannot parse the intended meaning.

How to write unbiased survey questions

Even a well-chosen question format fails when the wording is vague or biased. A few checks catch most problems.

  • Specify the timeframe: "in the last 30 days" beats "recently".
  • Use concrete nouns and verbs: "complete checkout" beats "use the site".
  • Avoid leading phrasing: remove adjectives like "great" or "easy" from the question.
  • One idea per question: do not combine two topics with "and".
  • Offer a legitimate "out": "Not applicable" and "Prefer not to answer" reduce forced guesses.

Bias can also come from context: question order, social desirability, and answer option framing. If you want a practical overview, see response bias and how to reduce it.

For open-ended items, keep the prompt focused and avoid stacking multiple asks into one text box. The University of Florida's Savvy Survey guidance provides practical patterns for constructing open-ended items (see Constructing open-ended items for a questionnaire).

Best survey question order and flow

Even with great wording, poor flow can reduce completion and distort answers. The basics are consistent across survey guidance (for example, Community Tool Box's overview of survey conduct and planning: Conducting surveys).

  1. Start easy and relevant

    Open with a simple, non-threatening item (role, use case, or "what were you trying to do?") so respondents feel oriented.

  2. Group by topic, not by format

    Keep related items together (support, product, billing). Switching topics too often increases drop-off and "satisficing" (quick, low-effort answers).

  3. Ask for ratings before explanations

    Example: ask the 0-10 satisfaction first, then "What is the main reason for your score?" This preserves the metric while still capturing context.

  4. Put sensitive items later

    Demographics, compensation, and identity items go near the end and should be optional when possible.

  5. Add follow-ups only where you will act

    Use targeted follow-ups ("Which part was unclear?") instead of a general "Any other comments?" on every page.

A simple follow-up pattern that works

When you ask a rating or multiple choice question, add one optional open-ended follow-up: "What is the main reason for your answer?" This often gives enough context without turning the survey into an interview.
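
To make the pattern concrete, here is a minimal sketch of the rating-plus-follow-up pair as a survey definition in Python. The field names are illustrative, not any particular survey tool's schema:

```python
# Rating question followed by one optional open-ended follow-up.
# Field names here are illustrative, not a real tool's schema.
survey = [
    {
        "id": "csat",
        "type": "rating",
        "scale": (1, 5),
        "labels": {1: "Very dissatisfied", 5: "Very satisfied"},
        "prompt": "Overall, how satisfied are you with the service you received today?",
        "required": True,
    },
    {
        "id": "csat_reason",
        "type": "open_text",
        "prompt": "What is the main reason for your answer?",
        "required": False,  # optional, so the follow-up never blocks completion
    },
]
```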


How question design affects response rates

Writing clear, unbiased questions is only half the challenge. The way you structure a survey, the number of questions you include, and the formats you choose all have a measurable impact on whether people finish.

Surveys with more than 12 questions see completion rates drop by roughly 10-20% compared to shorter surveys. Beyond 20 questions the decline accelerates sharply, with some studies reporting abandonment rates above 40% (Kost & Correa da Rosa, 2018).

Survey length and completion

Research consistently shows an inverse relationship between survey length and completion rate. A study published in the Journal of Clinical and Translational Science found that completion rates dropped from 63% for 13-question surveys to 54% for 25-question surveys, and fell to just 37% for 72-question surveys (Kost & Correa da Rosa, 2018). The sweet spot for most feedback surveys is 5-10 questions.

  • Transactional surveys (CSAT, CES): 1-3 questions. Capture the metric and one follow-up.
  • Pulse surveys: 3-5 questions. Short enough to run weekly without fatigue.
  • Feedback surveys (post-event, product): 5-10 questions. Enough depth without losing respondents.
  • Annual engagement surveys: 15-25 questions maximum. Beyond this, data quality degrades even if people finish.

Question format and effort

Closed-ended questions (multiple choice, rating scales, Likert) take 10-15 seconds each on average. Open-ended questions take 45-90 seconds and require significantly more cognitive effort. Surveys with more than two or three open-ended questions see measurably lower completion rates and shorter, lower-quality text responses as respondents fatigue.

5 rules for higher response rates

  1. Lead with a closed-ended question: An easy first question (rating scale or single-select) creates momentum. Do not open with a text box.
  2. Limit open-ended questions to two: Place them after the quantitative items. One mid-survey ("What is the main reason for your score?") and one at the end ("Anything else?") is usually enough.
  3. Skip matrix grids on mobile: Matrix questions require horizontal scrolling on phones. Split them into individual rating items or use a simple dropdown instead.
  4. Show progress early: A progress bar that moves quickly through the first few questions reassures respondents the survey is short. Even perceived progress reduces drop-off.
  5. Cut every question that fails the action test: If no one will act on the answer, the question is adding length without adding value. Shorter surveys always outperform longer ones.

Question quality checklist

Run every question through this checklist before you send your survey. Most problems are caught by the first three items.

  • Single idea: Does this question ask about exactly one thing? If you see "and" joining two topics, split it.
  • Neutral phrasing: Remove any adjective that suggests a "right" answer (great, easy, innovative, award-winning). Remove any social pressure ("most users say...").
  • Specific timeframe: Replace "recently," "often," or "regularly" with a concrete period (in the last 30 days, this week, since your last visit).
  • No assumptions: Does the question presume the respondent has done something, owns something, or holds an opinion? Add a filter question first.
  • Answer options are exhaustive and exclusive: Every respondent can find exactly one answer that fits. Ranges do not overlap. Include "Other" or "Not applicable" when needed.
  • No double negatives: Read the question aloud. If you stumble, rewrite it as a positive statement with a scale.
  • Plain language: Replace jargon and acronyms with words your least technical respondent would understand.
  • Action test: Can you name the decision this answer will inform? If not, cut the question.
The five-minute review

Print this checklist and tape it next to your screen. Before launching any survey, read each question aloud and check it against all eight items. This catches the majority of wording problems before they reach respondents.
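
If you draft questions in bulk, the first three checklist items can even be rough-checked automatically. The Python sketch below is illustrative only -- the word lists are examples, not an exhaustive rule set, and a flagged question still needs a human read:

```python
import re

# Illustrative word lists drawn from this guide's examples -- not exhaustive.
LEADING = {"great", "excellent", "innovative", "award-winning", "easy", "knowledgeable"}
VAGUE_TIME = {"recently", "often", "regularly", "frequently"}


def lint(question: str) -> list[str]:
    """Apply three checklist items (single idea, neutral phrasing,
    specific timeframe) as rough heuristics."""
    issues = []
    text = question.lower()
    words = set(re.findall(r"[a-z]+(?:-[a-z]+)?", text))
    if " and " in text:
        issues.append("possible double-barreled question: two topics joined by 'and'")
    if words & LEADING:
        issues.append(f"leading adjective(s): {sorted(words & LEADING)}")
    if words & VAGUE_TIME:
        issues.append(f"vague timeframe word(s): {sorted(words & VAGUE_TIME)}")
    return issues


for q in [
    "How satisfied are you with the speed and accuracy of our deliveries?",
    "How great was our customer service?",
    "Do you exercise regularly?",
]:
    print(q, "->", lint(q) or ["no obvious issues"])
```

A heuristic like the "and" check will also flag some harmless questions, so treat matches as prompts for review, not verdicts.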

References