Our Author & Independent Review Process
Overview
Every SuperSurvey template is crafted by a single, accountable author and verified by an independent reviewer. Our goal is simple: instruments that are clear, fair, accessible, and privacy‑first—backed by a documented process and visible author credits.
Below you’ll find our author profile (with specialties, credentials, and example templates), the core principles we apply to every questionnaire, and the exact eight steps each template passes before publication.
About the Author
Principles of Good Survey Design
Great surveys are built backwards from the decisions they will inform. The following principles guide every SuperSurvey template authored by Michael:
- Start with decisions & constructs: Define the decisions you'll make and the constructs you must measure to make them (e.g., satisfaction, ease, trust). Create a 1-page brief that names audiences, KPIs, and reporting cuts. See examples in Market Research Surveys.
- Write plain, single-purpose questions: Favor everyday words over jargon, and avoid double-barreled items (two questions in one). Replace hypotheticals with specific time frames and contexts. Keep the reading level around grade 8 for accessibility.
- Choose the right response scale: Use consistent, labeled anchors and keep directionality constant. Prefer 5- or 7-point Likert-type scales for attitudes; use 1-to-5 ratings for quick evaluations. Include "Not applicable" when appropriate to reduce forced-choice error.
- Reduce respondent burden: Ask only what you'll act on. Group related items, funnel from general to specific, and aim for a concise time-to-complete. Use progress indicators and save/resume for longer instruments.
- Design for mobile-first accessibility: Ensure large touch targets, clear focus states, and logical screen-reader order. Avoid long grids on small screens. Write labels that work when read aloud.
- Minimize bias: Avoid leading phrasing, loaded terms, and order effects. Randomize answer options where sensible, and separate measurement from the collection of identifiable information.
- Be respectful with sensitive data: Collect the minimum necessary. Offer "Prefer not to say" and explain why data is being gathered and how it will be protected. Keep consent language clear and specific.
- Pilot, then iterate: Run a small pilot with your target audience. Check reliability (aim for α ≥ 0.70), time-to-complete, missingness patterns, and ceiling/floor effects. Revise or drop weak items before wide launch.
- Align questions to action: Tie each item to a decision, owner, or metric. For example, map satisfaction items to specific close-the-loop workflows in CSAT programs or improvement backlogs for website UX.
- Provide clear scoring & reporting: Document how each scale is scored and how results will be reported (benchmarks, segments, cadence). Use plain-English insights and next steps so stakeholders know what to do next.
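The "clear scoring" principle above can be made concrete in a few lines. This is a minimal sketch, not SuperSurvey's actual scoring code: it assumes a 5-point Likert scale, and the item IDs and the reverse-keyed set are hypothetical.

```python
# Minimal sketch of documented Likert scoring (assumed 5-point scale).
# Item IDs and the reverse-keyed set are illustrative, not a real template.

REVERSE_KEYED = {"q3"}  # negatively worded items get flipped (assumption)
SCALE_MAX = 5           # 5-point Likert anchors

def score_response(item_id: str, value: int) -> int:
    """Return the scored value, flipping reverse-keyed items."""
    if not 1 <= value <= SCALE_MAX:
        raise ValueError(f"{item_id}: response {value} outside 1-{SCALE_MAX}")
    return (SCALE_MAX + 1 - value) if item_id in REVERSE_KEYED else value

def scale_score(responses: dict[str, int]) -> float:
    """Mean of scored items; documents exactly how the scale is computed."""
    scored = [score_response(i, v) for i, v in responses.items()]
    return sum(scored) / len(scored)

# Example: q3 is reverse-keyed, so a raw 1 scores as 5 -> mean (4+5+5)/3
print(scale_score({"q1": 4, "q2": 5, "q3": 1}))
```

Keeping the reverse-keyed set and scale bounds in one place is what lets a data dictionary state scoring unambiguously.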
Independent 8‑Step Review Process
1. Define the measurement goal: Create a 1-page Measurement Brief before any writing: purpose, constructs with operational definitions, target audience/segments, key decisions & KPIs the data will inform, sampling frame, and a reporting plan (cuts, cadence, owners).
2. Draft to the style guide: Apply plain-language guardrails (aim for an 8th-grade reading level or below), neutral wording, balanced response sets, and reusable Likert patterns with consistent anchors. Add standard metadata to each item (ID, construct tag, rationale, intended analysis).
3. Internal author QA: Run a checklist: remove leading or double-barreled items; verify skip/branch logic; confirm scale direction consistency; estimate time-to-complete; preview on mobile and with a screen reader; validate translations/placeholders if applicable.
4. Independent cross-functional review: Named (not anonymous) reviewers from Methods, Content, Accessibility, and Privacy evaluate clarity, fairness/bias, cognitive load, and data minimization. Findings and approvals are recorded in the change log. (Authors never review their own work.)
5. Pilot & psychometrics: Run a light pilot (n ≈ 50–100) with target users. Targets: α ≥ 0.70 (or ω ≥ 0.70), corrected item-total correlations ≥ 0.30, ceiling/floor rates < 20%, and completion time within target. Revise or drop weak items; document all changes and rationales.
6. Compliance & accessibility gate: Confirm consent language and data minimization; classify data (PII/PHI); validate WCAG 2.2 AA (keyboard access, contrast, focus order, labels, error messaging); localize examples, dates, and units where needed.
7. Publish with versioning: Ship each template with author credit, an independent reviewer sign-off, a version ID, scoring/benchmark notes where relevant, a data dictionary (question IDs, scales, scoring), and a public changelog for full transparency.
8. Monitor & re-validate: Track response rate, drop-off, item non-response, reliability drift (α shifts of more than ±0.05), and potential differential item functioning (DIF) across key segments. Re-review annually, or sooner if standards change; material updates create a new version and update the changelog.
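The pilot checks in step 5 are straightforward to compute. Below is a standard-library sketch of the three numeric targets (Cronbach's α, corrected item-total correlation, ceiling/floor rates); the toy data is illustrative, not real pilot data, and production work would typically use a psychometrics library instead.

```python
# Step-5 pilot checks sketch: alpha >= 0.70, corrected item-total >= 0.30,
# ceiling/floor < 20%. Standard library only; data below is illustrative.
from statistics import mean, variance

def cronbach_alpha(items: list[list[float]]) -> float:
    """items: one score list per item, respondents aligned by index."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))

def pearson(x: list[float], y: list[float]) -> float:
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx) ** 2 for a in x) ** 0.5
                  * sum((b - my) ** 2 for b in y) ** 0.5)

def corrected_item_total(items: list[list[float]], idx: int) -> float:
    """Correlate item `idx` with the sum of the remaining items."""
    rest = [sum(scores) - scores[idx] for scores in zip(*items)]
    return pearson(items[idx], rest)

def ceiling_floor_rates(values: list[int], lo: int = 1, hi: int = 5) -> tuple[float, float]:
    """Share of responses at the top and bottom of the scale."""
    n = len(values)
    return sum(v == hi for v in values) / n, sum(v == lo for v in values) / n

# Toy pilot: 3 items x 5 respondents (rows are items).
pilot = [[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [2, 2, 3, 4, 4]]
print(f"alpha={cronbach_alpha(pilot):.2f}",
      f"item0 r_it={corrected_item_total(pilot, 0):.2f}")
```

Items failing these thresholds are the "weak items" step 5 says to revise or drop, with the change and rationale recorded in the changelog.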
Versioning & Credits
Author: Michael Hodge (Survey Methodologist & Editor)
Independent review: Conducted by a cross‑functional reviewer independent of the author. Authors never review their own work.
Versioning: Each template carries a version ID (e.g., v2025.09.12) and a public changelog summarizing edits (item wording changes, scale updates, accessibility fixes, scoring adjustments).
Transparency pack: All published templates include a data dictionary, scale definitions, and clear guidance on reporting and action planning. For examples, see CSAT survey questions, Product satisfaction, and Employee engagement.
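For readers unfamiliar with data dictionaries, here is a sketch of what a single entry might contain. The field names and values are hypothetical illustrations, not SuperSurvey's actual schema.

```python
# Hypothetical data-dictionary entry for one template question.
# Field names and values are illustrative, not a real published schema.
import json

entry = {
    "question_id": "csat_q1",          # stable ID used in exports (assumed)
    "construct": "satisfaction",       # construct tag from the brief
    "text": "Overall, how satisfied are you with your purchase?",
    "scale": {
        "type": "likert",
        "points": 5,
        "anchors": {"1": "Very dissatisfied", "5": "Very satisfied"},
    },
    "scoring": "item value; reported as mean and top-2-box %",
    "version_added": "v2025.09.12",    # matches the template's version ID
}

print(json.dumps(entry, indent=2))
```

An entry like this, one per question, is what lets analysts score and report results without guessing at scale direction or anchor wording.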