Even when researchers play by the rules, how stable are their findings? A new study shows how and why even honest research findings can flip, and what to do about it

Tim Vlandas

Study highlights fragility of research findings in political science and social policy

New research, ‘Estimating the Extent and Sources of Model Uncertainty in Political Science’, published in the Proceedings of the National Academy of Sciences (PNAS), addresses a fundamental challenge in empirical social science: the extent to which published findings depend on modelling decisions.

Written by Dr Tim Vlandas, Associate Professor of Comparative Social Policy at DSPI, and Dr Michael Ganslmeier, Assistant Professor in Computational Social Science at the University of Exeter, the study examines how results vary based on common methodological choices such as sample selection, time period, model structure, and how key outcomes are defined.     

By analysing more than 3.6 billion regression estimates across four prominent topics in political science and social policy (democratisation, welfare generosity, public goods provision, and institutional trust), the researchers systematically tested how different modelling decisions affect research results.

Key findings

The study highlights how sensitive research findings are to different, equally defensible modelling decisions.

There are three key lessons for social scientists as well as anyone writing, interpreting or reading scientific research:  

  1. Variation in results is driven more by decisions about sampling and outcome measurement than by which control variables are included.  
  2. Even variables that have been widely examined in past research can yield both positive and negative significant results, depending on modelling decisions.  
  3. Scientific reliability depends on more than integrity. It requires transparent and extensive robustness checks across several modelling decisions.  

Practical implementation of research  

Assessing model uncertainty is crucial in quantitative political science. However, most analyses vary only a few modelling choices at a time, neglecting to consider several equally important choices jointly.

The research findings can be used to assess how robust honest results are, as well as to help prevent fraud. The study also provides a practical tool: an open-source R package that enables researchers to assess the fragility of their findings across a wide range of model specifications.
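To give a flavour of what such a tool does, here is a minimal R sketch (not the authors' package) of a multiverse-style robustness check: the same regression is refitted under many defensible specification choices, and the sign and significance of the key coefficient are tallied across them. The dataset `df` and all variable names (`outcome_a`, `treatment`, `gdp`, and so on) are hypothetical placeholders.

```r
# Grid of defensible specification choices: outcome measure, sample window, controls.
specs <- expand.grid(
  outcome  = c("outcome_a", "outcome_b"),
  sample   = c("all", "post_1990"),
  controls = c("", "+ gdp", "+ gdp + population"),
  stringsAsFactors = FALSE
)

# Fit one model per specification and record the key coefficient.
results <- lapply(seq_len(nrow(specs)), function(i) {
  s    <- specs[i, ]
  dat  <- if (s$sample == "post_1990") subset(df, year >= 1990) else df
  form <- as.formula(paste(s$outcome, "~ treatment", s$controls))
  fit  <- lm(form, data = dat)
  est  <- summary(fit)$coefficients["treatment", ]
  data.frame(s, estimate = est["Estimate"], p_value = est["Pr(>|t|)"])
})
results <- do.call(rbind, results)

# How often is the key estimate positive or negative, and significant at 5%?
table(sign(results$estimate), results$p_value < 0.05)
```

In a full analysis, the grid would cover far more dimensions (samples, time periods, model structures, outcome definitions), which is how the number of estimates quickly runs into the billions.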

“Even when researchers act in good faith and follow standard practices, the results they report can vary dramatically depending on which defensible modelling choices they make,” commented Dr Vlandas. “We think this isn’t just a technical point for specialists. It’s an important societal issue with real consequences for how we interpret and trust scientific findings.”  


Read the full article in PNAS