Revolutionizing Survey Design: Addressing Methodological Challenges with AI-driven Item Generation at the Decision Support Center in Saudi Arabia
Ghadah Alkhadim (Decision Support Center) - Saudi Arabia
Norah Alhomiedan (Decision Support Center) - Saudi Arabia
Bashyer Alshahrani (Decision Support Center) - Saudi Arabia
Amer Basha (Decision Support Center) - Saudi Arabia
Keywords: Psychometric Item Generator (PIG), Survey Design, Artificial Intelligence (AI), Survey Design Efficiency, Survey Quality
Abstract
This proposal integrates the Psychometric Item Generator (PIG), a research-based artificial intelligence (AI) tool, into the survey design process at the Decision Support Center (DSC), affiliated with the Royal Court of Saudi Arabia. The DSC systematically collects and analyzes public opinion to ensure that policies align with societal needs, thereby improving decision-making and policy development. Surveys are the principal tools the DSC uses to collect public opinion data. Currently, the DSC requires four business days to develop its three monthly surveys manually. This conventional survey design methodology poses two challenges: 1) low survey design efficiency and 2) poor survey quality, both of which hinder the DSC's ability to collect high-quality data and respond promptly to urgent policy needs.
This proposal investigates the impact of integrating PIG into the survey design process to address existing methodological challenges at the DSC. The proposal focuses on answering the following research questions:
1. How can PIG improve survey design efficiency when integrated into the design process?
2. How does PIG improve survey quality with respect to psychometric properties (i.e., validity and reliability)?
Developed by Götz et al. (2023), the Psychometric Item Generator (PIG) represents a significant step forward in survey design methodology. PIG utilizes GPT-2, a transformer-based natural language processing (NLP) model; in this proposal, however, we will pair PIG with GPT-4 to enhance its functionality. To answer the research questions, two surveys assessing the same construct will be developed: one through conventional methods (i.e., researchers will write the survey items manually) and the other using PIG (i.e., researchers will provide the input prompts and parameters that guide PIG in generating items).
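As a rough illustration of the PIG-assisted workflow, the following Python sketch shows how GPT-4 could be prompted to draft candidate items for a given construct. The prompt wording, construct name, function name, and item count are hypothetical placeholders for this sketch, not the prompts or parameters the study will actually use.

```python
# Illustrative sketch only: prompting GPT-4 to draft survey items in the spirit
# of PIG-style item generation. Construct name, prompt text, and item count are
# hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def generate_items(construct: str, n_items: int = 10) -> list[str]:
    """Ask the model for candidate Likert-type items measuring `construct`."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a psychometrician drafting survey items."},
            {"role": "user",
             "content": (f"Write {n_items} clear, single-barreled Likert-type "
                         f"items measuring '{construct}'. "
                         "Return one item per line.")},
        ],
        temperature=0.7,
    )
    text = response.choices[0].message.content
    return [line.strip() for line in text.splitlines() if line.strip()]


# Example: draft candidate items for a hypothetical construct
for item in generate_items("satisfaction with public services", n_items=8):
    print(item)
```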
This study will employ a mixed-methods approach, integrating quantitative and qualitative methodologies to comprehensively evaluate the efficiency and quality of surveys designed manually and with PIG. To assess survey design efficiency, the time required to develop each survey and the number of revisions made will be recorded and analyzed. Researchers involved in both the manual and PIG-assisted processes will provide qualitative feedback on the items' content validity, clarity, and appropriateness. To assess the psychometric properties of the two surveys, data will be collected from 1,000 participants randomly assigned to complete either the manually developed or the PIG-generated survey. Factor analysis and internal consistency analysis (e.g., Cronbach's alpha) will be conducted to assess validity and reliability, respectively. Data will be collected in February 2025.
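A minimal analysis sketch of how the psychometric evaluation could be carried out in Python is shown below, assuming item responses are stored with one column per item. The file name, factor count, and use of the third-party factor_analyzer package are assumptions for illustration, not the study's finalized analysis plan.

```python
# Minimal sketch of the psychometric checks described above, assuming a CSV of
# item responses (one column per item). File name and factor count are
# illustrative assumptions.
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor-analyzer


def cronbach_alpha(df: pd.DataFrame) -> float:
    """Internal consistency (Cronbach's alpha) across item columns."""
    k = df.shape[1]
    item_variances = df.var(axis=0, ddof=1)
    total_variance = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)


responses = pd.read_csv("pig_survey_responses.csv")  # hypothetical file

# Exploratory factor analysis to inspect construct validity of the item set
fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(responses)
print("Factor loadings:\n", fa.loadings_)

# Reliability of the item set
print("Cronbach's alpha:", round(cronbach_alpha(responses), 3))
```

The same analysis would be run separately on the manually developed and PIG-generated surveys so that their loadings and reliability coefficients can be compared.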
The study is expected to yield several significant findings:
1. The PIG-generated survey will require significantly less time to develop than the manually created survey, and it is expected to undergo fewer revisions.
2. The validity and reliability of the PIG-generated survey will be higher than those of the manually developed survey.
The findings will address a key debate in the literature about the validity and reliability of AI-generated survey items compared with traditionally developed items, providing empirical evidence to inform discussions on AI applications in public policy and decision-making.