Our Author & Independent Review Process

Overview

Every SuperSurvey template is crafted by a single, accountable author and verified by an independent reviewer. Our goal is simple: instruments that are clear, fair, accessible, and privacy‑first—backed by a documented process and visible author credits.

Below you’ll find our author profile (with specialties, credentials, and example templates), the core principles we apply to every questionnaire, and the exact eight steps each template passes before publication.

About the Author

Michael Hodge · Survey Methodologist & Editor

A longtime survey designer, Michael specializes in bias‑aware questionnaire design, practical sampling advice, and decision‑ready reporting for teams that ship fast. He has led the build‑out of SuperSurvey’s reusable question patterns, response‑scale conventions, and template QA checklists.

  • Focus areas: wording & flow, response scales, mobile ergonomics, reliability/validity checks, action‑oriented reporting.
  • Methods used: cognitive interviewing, light pilots (α/ω targets), DIF screening, time‑to‑complete & dropout diagnostics, branch/skip logic audits.
  • Typical outcomes: higher response rates, cleaner data, and survey results that map directly to product, CX, HR, healthcare, and education decisions.

Favorite book: Psychometric Theory (Nunnally & Bernstein) — the north star for building defensible, reliable scales that stand up to scrutiny.

Principles of Good Survey Design

Great surveys are built backwards from the decisions they will inform. The following principles guide every SuperSurvey template authored by Michael:

  1. Start with decisions & constructs
    Define the decisions you’ll make and the constructs you must measure to make them (e.g., satisfaction, ease, trust). Create a 1‑page brief that names audiences, KPIs, and reporting cuts. See examples in Market Research Surveys.
  2. Write plain, single‑purpose questions
    Favor everyday words over jargon, and test for double‑barreled items. Replace hypotheticals with specific time frames and contexts. Keep reading level around grade 8 for accessibility (a rough readability check is sketched after this list).
  3. Choose the right response scale
    Use consistent, labeled anchors and keep directionality constant. Prefer 5‑ or 7‑point Likert‑type scales for attitudes; use 1‑to‑5 ratings for quick evaluations. Include “Not applicable” when appropriate so respondents aren’t forced into inaccurate answers.
  4. Reduce respondent burden
    Ask only what you’ll act on. Group related items, funnel from general to specific, and aim for a concise time‑to‑complete. Use progress indicators and save/resume for longer instruments.
  5. Design for mobile‑first accessibility
    Ensure large touch targets, clear focus states, and logical screen‑reader order. Avoid long grids on small screens. Write labels that work when read aloud.
  6. Minimize bias
    Avoid leading phrasing, loaded terms, and order effects. Randomize answer options where sensible (see the shuffling sketch after this list) and separate measurement from collection of identifiable information.
  7. Be respectful with sensitive data
    Collect the minimum necessary. Offer “Prefer not to say” and explain why data is being gathered and how it will be protected. Keep consent language clear and specific.
  8. Pilot, then iterate
    Run a small pilot with your target audience. Check reliability (aim α ≥ .70), time‑to‑complete, missingness patterns, and ceiling/floor effects. Tweak or drop weak items before wide launch.
  9. Align questions to action
    Tie each item to a decision, owner, or metric. For example, map satisfaction items to specific close‑the‑loop workflows in CSAT programs or improvement backlogs for website UX.
  10. Provide clear scoring & reporting
    Document how each scale is scored and how results will be reported (benchmarks, segments, cadence). Use plain‑English insights and next steps so stakeholders know what to do next. A minimal scoring sketch follows this list.
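
Principle 2’s grade‑8 target can be checked mechanically before review. Below is a minimal Python sketch of a Flesch‑Kincaid grade estimate; the vowel‑group syllable counter is a rough heuristic, and none of these function names come from SuperSurvey’s tooling.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups; real syllable counting needs a dictionary.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    # Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59

item = "How satisfied are you with the checkout process you used today?"
print(f"FK grade: {fk_grade(item):.1f}")  # flag drafts that land above ~8
```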
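
Principle 6’s randomization has one common pitfall: terminal anchors such as “Not applicable” must not move. A sketch, assuming options are shuffled per respondent and seeded by a respondent ID (the seeding convention is an assumption, not a documented one):

```python
import random

def randomized_options(options, pinned=("Not applicable",), seed=None):
    # Shuffle substantive options per respondent; keep pinned anchors (e.g., N/A) last.
    rng = random.Random(seed)          # seed with a respondent ID for a stable order
    movable = [o for o in options if o not in pinned]
    fixed = [o for o in options if o in pinned]
    rng.shuffle(movable)
    return movable + fixed

opts = ["Price", "Ease of use", "Support quality", "Not applicable"]
print(randomized_options(opts, seed="respondent-123"))
```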
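
For principle 10, documented scoring usually means two explicit rules: how reverse‑keyed items flip and how skipped answers are treated. A minimal sketch under those assumptions, with hypothetical item IDs:

```python
def score_scale(responses, reverse_items=frozenset(), points=5):
    # Mean-score a set of 1..points Likert responses; None marks a skipped item.
    values = []
    for item_id, value in responses.items():
        if value is None:                  # skipped or "Prefer not to say"
            continue
        if item_id in reverse_items:
            value = points + 1 - value     # flip reverse-keyed items (2 -> 4 on 5 points)
        values.append(value)
    return sum(values) / len(values) if values else None

answers = {"sat_1": 4, "sat_2_rev": 2, "sat_3": None}     # hypothetical item IDs
print(score_scale(answers, reverse_items={"sat_2_rev"}))  # (4 + 4) / 2 = 4.0
```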

Independent 8‑Step Review Process

  1. Define the measurement goal
    Create a 1‑page Measurement Brief before any writing: purpose, constructs with operational definitions, target audience/segments, key decisions & KPIs the data will inform, sampling frame, and a reporting plan (cuts, cadence, owners).
  2. Draft to the style guide
    Apply plain‑language guardrails (aim ≤ 8th‑grade reading level), neutral wording, balanced response sets, and reusable Likert patterns with consistent anchors. Add standard metadata to each item (ID, construct tag, rationale, intended analysis).
  3. Internal author QA
    Run a checklist: remove leading or double‑barreled items; verify skip/branch logic (a small branch‑logic audit is sketched after these steps); confirm scale direction consistency; estimate time‑to‑complete; preview on mobile and with a screen reader; validate translations/placeholders if applicable.
  4. Independent cross‑functional review
    Named (not anonymous) reviewers from Methods, Content, Accessibility, and Privacy evaluate clarity, fairness/bias, cognitive load, and data minimization. Findings and approvals are recorded in the change log. (Authors never review their own work.)
  5. Pilot & psychometrics
    Run a light pilot (n≈50–100) with target users. Targets: α ≥ 0.70 (or ω ≥ 0.70), item‑total ≥ 0.30, ceiling/floor < 20%, and completion time within target. Revise or drop weak items; document all changes and rationales (see the pilot‑diagnostics sketch after these steps).
  6. Compliance & accessibility gate
    Confirm consent language and data minimization; classify data (PII/PHI); validate WCAG 2.2 AA (keyboard access, contrast, focus order, labels, error messaging); localize examples, dates, and units where needed.
  7. Publish with versioning
    Ship each template with author credit, an independent reviewer sign‑off, a version ID, scoring/benchmark notes where relevant, a data dictionary (question IDs, scales, scoring), and a public changelog for full transparency (an illustrative data‑dictionary entry appears after these steps).
  8. Monitor & re‑validate
    Track response rate, drop‑off, item non‑response, reliability drift (±0.05), and potential DIF across key segments. Re‑review annually or sooner if standards change; material updates create a new version and update the changelog (a simple drift check is sketched after these steps).
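
The skip/branch verification in step 3 can be partly automated as a reachability check over the survey flow. A sketch, assuming the flow is available as a mapping from question ID to possible next questions (a hypothetical export format):

```python
from collections import deque

def audit_branching(flow, start, end):
    # flow: question ID -> list of possible next IDs (a hypothetical export format).
    seen, queue = {start}, deque([start])
    while queue:                               # breadth-first walk from the first question
        for nxt in flow.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    unreachable = set(flow) - seen             # questions no branch can reach
    dead_ends = {q for q in seen if q != end and not flow.get(q)}
    return unreachable, dead_ends

flow = {"q1": ["q2", "q3"], "q2": ["end"], "q3": ["end"], "q4": ["end"], "end": []}
print(audit_branching(flow, "q1", "end"))      # ({'q4'}, set())
```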
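
Step 5’s numeric targets (α ≥ 0.70, corrected item‑total ≥ 0.30, ceiling/floor < 20%) reduce to a few array computations. A minimal NumPy sketch, assuming complete‑case pilot data coded 1..points; the random stand‑in data is for illustration and will naturally miss the targets:

```python
import numpy as np

def pilot_diagnostics(data, points=5):
    # data: respondents x items array of 1..points responses, complete cases only.
    n, k = data.shape
    total = data.sum(axis=1)
    # Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of totals)
    alpha = (k / (k - 1)) * (1 - data.var(axis=0, ddof=1).sum() / total.var(ddof=1))
    # Corrected item-total correlation: each item vs. the sum of the other items.
    item_total = np.array([np.corrcoef(data[:, j], total - data[:, j])[0, 1]
                           for j in range(k)])
    ceiling = (data == points).mean(axis=0)    # share of responses at the top anchor
    floor = (data == 1).mean(axis=0)           # share at the bottom anchor
    return alpha, item_total, ceiling, floor

rng = np.random.default_rng(0)
sample = rng.integers(1, 6, size=(60, 4))      # random stand-in pilot data
alpha, item_total, ceiling, floor = pilot_diagnostics(sample)
print(f"alpha={alpha:.2f}", "weak items:", np.where(item_total < 0.30)[0])
```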
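
Step 7’s data dictionary ties back to the item metadata added in step 2. The entry below is illustrative only; the field names are assumptions, not SuperSurvey’s published schema:

```python
# Illustrative data-dictionary entry; field names are assumptions, not a published schema.
ITEM = {
    "id": "csat_01",
    "construct": "satisfaction",
    "wording": "Overall, how satisfied are you with your purchase?",
    "scale": {"type": "likert", "points": 5,
              "anchors": {1: "Very dissatisfied", 5: "Very satisfied"}},
    "reverse_keyed": False,
    "rationale": "Headline CSAT metric, reported monthly by segment.",
    "analysis": "Top-2-box share and mean, benchmarked quarter over quarter.",
    "version_added": "v2025.09.12",
}
```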
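
Step 8’s ±0.05 reliability‑drift rule is a tolerance comparison between the published baseline and the latest monitoring wave. A minimal sketch with hypothetical metric names:

```python
def drift_flags(baseline, current, tol=0.05):
    # Flag metrics that moved more than tol from the values published with the template.
    return {metric: (baseline[metric], current[metric])
            for metric in baseline
            if metric in current and abs(current[metric] - baseline[metric]) > tol}

baseline = {"alpha": 0.82, "completion_rate": 0.91}   # hypothetical published values
current = {"alpha": 0.74, "completion_rate": 0.90}    # latest monitoring wave
print(drift_flags(baseline, current))                 # {'alpha': (0.82, 0.74)}
```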

Versioning & Credits

Author: Michael Hodge (Survey Methodologist & Editor)

Independent review: Conducted by a cross‑functional reviewer independent of the author. Authors never review their own work.

Versioning: Each template carries a date‑stamped version ID (e.g., v2025.09.12) and a public changelog summarizing edits (item wording changes, scale updates, accessibility fixes, scoring adjustments).

Transparency pack: All published templates include a data dictionary, scale definitions, and clear guidance on reporting and action planning. For examples, see CSAT survey questions, Product satisfaction, and Employee engagement.
