A closer examination of the gender gap in knowledge assessments: Are women more likely than men to say, “I don’t know”?

Kirsten Lesage (Pew Research Center) - United States

Keywords: Knowledge assessments, quizzes, gender


Abstract

Previous research on knowledge assessments has found gender differences in some domains, such as politics, where men tend to score higher than women in political knowledge (see Dow, 2009). Academic research suggests that this gender difference is driven by men being more likely to guess than women (Lizotte & Sidman, 2009). For example, Mondak and Anderson (2004) found that approximately 50% of the gender gap in political knowledge is an artifact of men simply being more likely to choose one of the multiple-choice options, whereas women are more likely to opt out if they are not certain. Is this also the case in other domains, such as science or religion?

The current study uses a meta-analytic approach to examine knowledge assessments from Pew Research Center surveys of nationally representative U.S. samples conducted over the last 15+ years. These surveys contain knowledge assessments from a variety of domains – including politics, science, history, religion and international affairs – allowing us to answer the following questions:
1. Do men consistently score higher than women in knowledge assessments across several domains?
2. On surveys that offer an explicit response option of “don’t know,” are women more likely than men to choose this response option? Does this vary by domain?
3. On surveys that do not offer an explicit response option of “don’t know,” are women more likely than men to opt out and not answer the question? Does this vary by domain?
4. If we recode the data in such a way that respondents are rewarded for accurate answers (+1), penalized for inaccurate answers (-1) and neither penalized nor rewarded for not being sure (0), do men still score higher than women in knowledge assessments across different domains?
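The recoding described in question 4 can be sketched as follows. This is a minimal illustration only; the response labels and function names are hypothetical and do not reflect Pew Research Center's actual coding scheme.

```python
# Hypothetical sketch of the +1 / -1 / 0 scoring scheme from question 4.
# The response labels ("correct", "incorrect", "dont_know") are assumptions
# for illustration, not the survey's actual codes.

def score_response(response: str) -> int:
    """Reward accurate answers (+1), penalize inaccurate ones (-1),
    and neither reward nor penalize "don't know" / no answer (0)."""
    if response == "correct":
        return 1
    if response == "incorrect":
        return -1
    return 0  # "don't know" or item nonresponse


def total_score(responses) -> int:
    """Sum the per-item scores for one respondent's assessment."""
    return sum(score_response(r) for r in responses)


# Example: a respondent with 3 correct answers, 1 incorrect,
# and 2 "don't know" responses scores 3 - 1 + 0 = 2.
answers = ["correct", "correct", "incorrect", "dont_know", "correct", "dont_know"]
print(total_score(answers))  # → 2
```

Under this scheme, a respondent who guesses wrongly scores lower than one who opts out, so any gender gap driven purely by differential guessing should shrink.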

Results will be discussed with attention to their implications for how to assess knowledge in global surveys – especially when considering item nonresponse across different modes of data collection.