Questionnaire construction

 

  • There are two types of questions that survey researchers use when writing a questionnaire: free-response questions and closed questions.[26]

  • According to the three-stage theory (also called the sandwich theory), questions should be asked in three stages: (1) screening and rapport questions; (2) product-specific questions; (3) demographic questions.
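
As a minimal illustration of this staging, the following Python sketch sorts a hypothetical question pool into the three stages before administration; the question texts and stage assignments are invented for the example.

```python
from enum import IntEnum

class Stage(IntEnum):
    SCREENING = 1    # screening and rapport questions
    PRODUCT = 2      # product-specific questions
    DEMOGRAPHIC = 3  # demographic questions

# Hypothetical question pool; texts and stage tags are illustrative only.
questions = [
    ("What is your age group?", Stage.DEMOGRAPHIC),
    ("How often do you buy coffee?", Stage.SCREENING),
    ("How satisfied are you with Brand A?", Stage.PRODUCT),
]

# Present the questions in three-stage ("sandwich") order.
for text, stage in sorted(questions, key=lambda q: q[1]):
    print(f"[{stage.name.lower()}] {text}")
```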

  • Clear, detailed instructions are needed in either case, matching the needs of each audience. There are a number of channels, or modes, that can be used to administer a questionnaire.

  • Multi-item scales: within social science research and practice, questionnaires are most frequently used to collect quantitative data using multi-item scales in which multiple statements or questions (minimum 3; usually 5) are presented for each variable being examined.[8]
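
As a sketch of how such scales are typically scored, assuming a 1–5 agreement scale and hypothetical construct names, each variable's items can be averaged into a composite score:

```python
# Hypothetical multi-item scales: each construct gets several items.
scales = {
    "job_satisfaction": [4, 5, 3, 4, 4],  # five items (the usual number)
    "engagement":       [3, 4, 4],        # three items (the stated minimum)
}

for construct, items in scales.items():
    assert len(items) >= 3, "a multi-item scale needs at least 3 items"
    score = sum(items) / len(items)       # composite score per construct
    print(f"{construct}: {score:.2f}")
```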

  • Pretesting (see also: pilot experiment) is testing and evaluating whether a questionnaire causes problems that could affect data quality and data collection for interviewers or survey respondents.

  • A common method is to “research backwards” in building a questionnaire: first determine the information sought (e.g., “Brand A is more/less preferred by x% of the sample vs. Brand C”), then be certain to ask all the questions needed to obtain the metrics for the report.
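
A minimal sketch of the final step, with invented response data, computes the target metric directly from the collected answers:

```python
# Hypothetical brand-preference answers gathered to support the report
# metric ("Brand A is more/less preferred by x% of the sample vs. Brand C").
responses = ["A", "C", "A", "A", "C", "A"]

pct_a = 100 * responses.count("A") / len(responses)
pct_c = 100 * responses.count("C") / len(responses)
print(f"Brand A preferred by {pct_a:.0f}%, Brand C by {pct_c:.0f}%")
```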

  • Matrix questions – Identical response categories are assigned to multiple questions (see scale for further information).

  • Initial advice may include consulting subject-matter experts and using questionnaire construction guidelines to inform drafts, such as the Tailored Design Method[1] or those produced by National Statistical Organisations.

  • Types of questions: questions, or items, may be closed-ended, where respondents’ answers are limited to a fixed set of responses.

  • The types of questions (e.g., closed, multiple-choice, open) should fit the data analysis techniques available and the goals of the survey.

  • A respondent’s answer to an open-ended question can be coded into a response scale afterwards[27] or analysed using more qualitative methods.[26]
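
One simple way such post-hoc coding can work is keyword matching; the categories, keywords, and numeric codes below are hypothetical, and real coding schemes are usually developed and applied by trained coders:

```python
# Illustrative post-hoc coding of free-text answers onto a response scale.
CODES = {"positive": 3, "neutral": 2, "negative": 1}

def code_answer(text: str) -> int:
    """Assign a scale code to an open-ended answer (toy keyword rules)."""
    t = text.lower()
    if any(word in t for word in ("great", "love", "good")):
        return CODES["positive"]
    if any(word in t for word in ("bad", "hate", "poor")):
        return CODES["negative"]
    return CODES["neutral"]

print(code_answer("I love this product"))  # -> 3
```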

  • A yes/no question will only reveal how many of the sample group answered yes or no, lacking the resolution to determine an average response.
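
The contrast is easy to see in code: a yes/no item only supports a proportion, while a rating item (here a hypothetical 1–5 scale) also supports an average.

```python
yes_no = ["yes", "no", "yes", "yes"]  # hypothetical yes/no answers
ratings = [4, 2, 5, 4]                # same construct asked on a 1-5 scale

# A yes/no question only yields a proportion of "yes" answers...
print(sum(a == "yes" for a in yes_no) / len(yes_no))  # 0.75
# ...whereas a rating scale also yields an average response.
print(sum(ratings) / len(ratings))                    # 3.75
```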

  • If multiple questions are being used to measure one construct, some of the questions should be worded in the opposite direction to avoid response bias.[26]
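
Before aggregation, the reverse-worded items must be re-coded so that all items point in the same direction; a common formula, assuming a fixed numeric scale, is sketched below.

```python
SCALE_MIN, SCALE_MAX = 1, 5  # assuming a 5-point response scale

def reverse_code(score: int) -> int:
    """Flip a reverse-worded item so all items run in the same direction."""
    return SCALE_MAX + SCALE_MIN - score

# Hypothetical scores for three items; the second item is reverse-worded.
raw = [4, 2, 5]
aligned = [raw[0], reverse_code(raw[1]), raw[2]]
print(sum(aligned) / len(aligned))  # construct score after re-coding
```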

  • Question wording: the way that a question is phrased can have a large impact on how a research participant will answer it.

  • Answer format: The manner in which the respondent provides an answer, including options for multiple-choice questions.

  • Help or instructions can be dynamically displayed with the question as needed, and automatic sequencing means the computer can determine the next question, rather than relying
    on respondents to correctly follow skip instructions.
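
A minimal sketch of such automatic sequencing, with hypothetical question IDs and routing rules, keeps the skip logic in the program rather than with the respondent:

```python
# Each entry maps a question ID to its text and a routing rule that picks
# the next question from the answer; None ends the questionnaire.
QUESTIONS = {
    "q1": ("Do you own a car? (yes/no)", lambda a: "q2" if a == "yes" else "q3"),
    "q2": ("Which brand?", lambda a: "q3"),
    "q3": ("What is your age group?", lambda a: None),
}

current = "q1"
while current is not None:
    text, route = QUESTIONS[current]
    answer = input(text + " ").strip().lower()
    current = route(answer)  # the computer, not the respondent, skips
```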

  • However, respondents are often limited by their working memory; specially designed visual cues (such as prompt cards) may help in some cases.

  • Free-response questions are beneficial because they allow the respondent greater flexibility, but they are also very difficult to record and score, requiring extensive coding.[26]

  • The order or grouping of questions is also relevant; early questions may bias later questions.
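
One common mitigation, which goes beyond what the text above prescribes, is to randomize item order within a block for each respondent so that no single ordering systematically biases the answers; a sketch with hypothetical item IDs:

```python
import random

block = ["item_a", "item_b", "item_c", "item_d"]  # hypothetical item IDs

# Draw a fresh order per respondent so order effects average out.
rng = random.Random()
print(rng.sample(block, k=len(block)))
```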

  • Different methods can be useful for checking a questionnaire and making sure it is accurately capturing the intended information.

  • What is often referred to as “adequate questionnaire construction” is critical to the success of a survey.

  • The list of prepared responses should be collectively exhaustive; one solution is to use a final write-in category for “other”.
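
A sketch of such a question, with hypothetical option labels, reserves a final write-in category so every respondent has a valid answer:

```python
# The final "Other" write-in keeps the option list collectively exhaustive.
OPTIONS = ["Newspaper", "Television", "Internet", "Other (please specify)"]

def record_answer(choice: int, write_in: str = "") -> str:
    label = OPTIONS[choice]
    return f"{label}: {write_in}" if label.startswith("Other") else label

print(record_answer(2))                   # -> Internet
print(record_answer(3, "word of mouth"))  # -> Other (please specify): word of mouth
```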

  • Thus, survey researchers must be conscious of their wording when writing survey questions.[26]

  • These items serve as fundamental components within questionnaires and psychological tests, often tied to a specific latent psychological construct (see operationalization).

  • Inappropriate questions, incorrect ordering of questions, incorrect scaling, or a bad questionnaire format can make the survey results valueless, as they may not accurately
    reflect the views and opinions of the participants.

  • The research objective(s) and frame-of-reference should be defined beforehand, including the questionnaire’s context of time, budget, manpower, intrusion and privacy.

  • Free-response questions are open-ended, whereas closed questions are usually multiple-choice.[26]

  • However, initial set-up costs can be high for a customised design due to the effort required in developing the back-end system or programming the questionnaire itself.

  • Questionnaires are a valuable method of collecting a wide range of information from a large number of individuals, often referred to as respondents.

  • By asking a sample of potential respondents about their interpretation of the questions and their use of the questionnaire, a researcher can identify problems before the survey is fielded; carrying out a small pretest of the questionnaire, using a small subset of target respondents, serves the same purpose.

  • Questionnaire construction refers to the design of a questionnaire to gather statistically useful information about a given topic.

  • The respondent supplies their own answer without being constrained by a fixed set of possible responses.

  • A biased question or questionnaire encourages respondents to answer one way rather than another.

  • Topics should fit the respondents’ frame of reference, as their background may affect their interpretation of the questions.

  • Questionnaires used to collect quantitative data usually comprise several multi-item scales, together with an introductory and concluding section.

 

Works Cited

1. Dillman, Don A.; Smyth, Jolene D.; Christian, Leah Melani (2014). Internet, Phone, Mail and Mixed-Mode Surveys: The Tailored Design Method (4th ed.). Hoboken, NJ: John Wiley.
2. Lord, F.; Novick, M. R. (1968). Statistical Theories of Mental Test Scores. Addison-Wesley.
3. Heise, D. R. (1969). "Separating reliability and stability in test-retest correlation". American Sociological Review, 34, 93–101. https://dx.doi.org/10.2307/2092790
4. Andrews, F. M. (1984). "Construct validity and error components of survey measures: a structural modelling approach". Public Opinion Quarterly, 48, 409–442. https://dx.doi.org/10.1086/268840
5. Saris, W. E.; Gallhofer, I. N. (2014). Design, Evaluation and Analysis of Questionnaires for Survey Research (2nd ed.). Hoboken: Wiley.
6. Osterlind, S. J. (2005). Constructing Test Items: Multiple-Choice, Constructed-Response, Performance and Other Formats. Kluwer Academic Publishers. https://books.google.de/books?id=IpMRBwAAQBAJ&pg=PA19
7. Haladyna, T. M.; Rodriguez, M. C. (2013). Developing and Validating Test Items. Taylor & Francis.
8. Robinson, M. A. (2018). "Using multi-item psychometric scales for research and practice in human resource management". Human Resource Management, 57(3), 739–750. https://dx.doi.org/10.1002/hrm.21852 (open access)
9. Presser, Stanley (March 2004). "Methods for Testing and Evaluating Survey Questions". Public Opinion Quarterly, 68(1): 109–130. doi:10.1093/poq/nfh008
10. Rothgeb, Jennifer (2008). "Pilot Test". In Lavrakas, Paul (ed.), Encyclopedia of Survey Research Methods. Sage Publishing. doi:10.4135/9781412963947. ISBN 9781412918084.
11. Tourangeau, Roger (2019). "A Framework for Making Decisions About Question Evaluation Methods". In Advances in Questionnaire Design, Development, Evaluation and Testing. Wiley Publishing. pp. 47–69. doi:10.1002/9781119263685.ch3
12. Willis, Gordon (2005). Cognitive Interviewing: A Tool for Improving Questionnaire Design. Sage Publishing. ISBN 9780761928041.
13. "Web Probing". GESIS – Leibniz Institute for the Social Sciences. Retrieved 2023-10-24.
14. Martin, Elizabeth (2004). "Vignettes and Respondent Debriefing for Questionnaire Design and Evaluation". In Presser, Stanley; Rothgeb, Jennifer M.; Couper, Mick P.; Lessler, Judith T.; Martin, Elizabeth; Martin, Jean; Singer, Eleanor (eds.), Methods for Testing and Evaluating Survey Questionnaires (1st ed.). Wiley. doi:10.1002/0471654728. ISBN 978-0-471-45841-8.
15. Sha, Mandy (2016). "The Use of Vignettes in Evaluating Asian Language Questionnaire Items". Survey Practice, 9(3): 1–8. doi:10.29115/SP-2016-0013
16. Ongena, Yfke; Dijkstra, Wil (2006). "Methods of Behavior Coding of Survey Interviews" (PDF). Journal of Official Statistics, 22(3): 419–451.
17. Kapousouz, Evgenia; Johnson, Timothy; Holbrook, Allyson (2020). "Seeking Clarifications for Problematic Questions: Effects of Interview Language and Respondent Acculturation" (Chapter 2). In Sha, Mandy; Gabel, Tim (eds.), The Essential Role of Language in Survey Research. RTI Press. pp. 23–46. doi:10.3768/rtipress.bk.0023.2004. ISBN 978-1-934831-23-6.
18. Yan, T.; Kreuter, F.; Tourangeau, R. (December 2012). "Evaluating Survey Questions: A Comparison of Methods". Journal of Official Statistics, 28(4): 503–529.
19. Aizpurua, Eva (2020). "Pretesting Methods in Cross-Cultural Research" (Chapter 7). In Sha, Mandy; Gabel, Tim (eds.), The Essential Role of Language in Survey Research. RTI Press. pp. 129–150. doi:10.3768/rtipress.bk.0023.2004. ISBN 978-1-934831-23-6.
20. Graeff, Timothy R. (2005). "Response Bias". In Encyclopedia of Social Measurement, pp. 411–418. ScienceDirect.
21. Pan, Yuling; Sha, Mandy (2019). The Sociolinguistics of Survey Translation. London: Routledge. doi:10.4324/9780429294914. ISBN 978-0-429-29491-4.
22. Wang, Kevin; Sha, Mandy (2013). "A Comparison of Results from a Spanish and English Mail Survey: Effects of Instruction Placement on Item Missingness". Survey Methods: Insights from the Field (SMIF). doi:10.13094/SMIF-2013-00006. ISSN 2296-4754.
23. Kreuter, Frauke; Presser, Stanley; Tourangeau, Roger (2008). "Social Desirability Bias in CATI, IVR, and Web Surveys: The Effects of Mode and Question Sensitivity". Public Opinion Quarterly, 72(5): 847–865. doi:10.1093/poq/nfn063
24. Holbrook, Allyson L.; Green, Melanie C.; Krosnick, Jon A. (2003). "Telephone versus Face-to-Face Interviewing of National Probability Samples with Long Questionnaires: Comparisons of Respondent Satisficing and Social Desirability Response Bias". Public Opinion Quarterly, 67(1): 79–125. doi:10.1086/346010
25. Respicius, Rwehumbiza (2010).
26. Shaughnessy, J.; Zechmeister, E.; Zechmeister, J. (2011). Research Methods in Psychology (9th ed.). New York, NY: McGraw Hill. pp. 161–175. ISBN 9780078035180.
27. Mellenbergh, G. J. (2008). "Chapter 9: Surveys". In Adèr, H. J.; Mellenbergh, G. J. (eds.), Advising on Research Methods: A Consultant's Companion (pp. 183–209). Huizen, The Netherlands: Johannes van Kessel Publishing.