Module 3: Causal Inference
Much of social science is about causality. We might ask questions like whether voter registration increases political participation, whether bottom-up accountability can improve health outcomes, or whether personal narratives of immigrants help reduce prejudicial attitudes towards them.
Over the past decade, social science has become much more serious about how causal claims are made, building on a long history of work on causality dating back to the classic writings of Fisher and Rubin. We make greater use of experiments, and randomization has become the gold standard for addressing causal questions.
In this module, we introduce the counterfactual approach to causal inference and how causal claims can be interpreted. We introduce the potential outcomes framework and how random assignment helps us make claims about what would have happened in the absence of the policy, action, or program we study. We discuss the three core assumptions for causal inference: random assignment of subjects to treatment, non-interference, and excludability.
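The potential outcomes framework can be made concrete with a small simulation. The sketch below uses invented numbers (a population of 1,000 subjects with a constant treatment effect of 5) purely for illustration; the key point is that while we can never observe both potential outcomes for the same subject, random assignment lets the difference in means recover the average treatment effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Hypothetical potential outcomes: Y0 is a subject's outcome without
# the program, Y1 the outcome with it. In real data we only ever
# observe one of the two for each subject.
Y0 = rng.normal(50, 10, n)
Y1 = Y0 + 5  # a constant treatment effect of 5, for illustration

true_ate = np.mean(Y1 - Y0)  # average treatment effect = 5

# Random assignment: half the subjects receive the treatment.
Z = rng.permutation(np.repeat([0, 1], n // 2))

# The observed outcome reveals Y1 for treated subjects, Y0 for control.
Y_obs = np.where(Z == 1, Y1, Y0)

# Under random assignment, the difference in means estimates the ATE.
estimate = Y_obs[Z == 1].mean() - Y_obs[Z == 0].mean()
print(true_ate, round(estimate, 2))
```

Any single estimate differs from the true effect only by chance, not by any systematic feature of who was treated; that is the sense in which randomization solves the missing-counterfactual problem.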
3.1 Core Content
What do we mean when we say “cause”? And why does it matter to be clear about the meaning of causal claims?
An introduction to potential outcomes as a way to think about alternative states of the world.
Randomization helps us learn about counterfactual causal claims in a particularly useful way.
The three key core assumptions for causal inference: random assignment of subjects to treatment, non-interference, and excludability.
Comparison of randomized studies with observational studies.
Randomization brings high internal validity, but it can’t promise external validity.
Your causal question is closely linked to your research design.
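The internal validity that randomization delivers can be seen by re-randomizing the same hypothetical study population many times: the difference-in-means estimates scatter around the true average treatment effect, with no systematic bias. The numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Fixed hypothetical potential outcomes for a small study population,
# with heterogeneous treatment effects averaging about 2.
Y0 = rng.normal(10, 3, n)
Y1 = Y0 + rng.normal(2, 1, n)
true_ate = (Y1 - Y0).mean()

# Re-randomize many times and record each difference-in-means estimate.
estimates = []
for _ in range(5000):
    Z = rng.permutation(np.repeat([0, 1], n // 2))
    Y_obs = np.where(Z == 1, Y1, Y0)
    estimates.append(Y_obs[Z == 1].mean() - Y_obs[Z == 0].mean())

# Averaged over re-randomizations, the estimates center on the true
# ATE: randomization removes systematic bias, even though any single
# estimate carries sampling error.
print(round(np.mean(estimates), 2), round(true_ate, 2))
```

Note that this property says nothing about external validity: the simulation fixes one study population, and nothing guarantees that its average effect carries over to a different population.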
Below are slides with the core content that we cover in our lecture on causality. You can use these slides directly or make your own local copy to edit.
You can also see the slides used in previous EGAP Learning Days:
3.3.1 EGAP Methods Guides
3.3.2 Books, Chapters, and Articles
Ronald A. Fisher, The Design of Experiments (Edinburgh: Oliver & Boyd, 1935). Fisher introduces the idea of randomization and hypothesis testing as a way to learn about causal inference.
Donald B. Rubin, “Estimating the Causal Effects of Treatments in Randomized and Nonrandomized Studies,” Journal of Educational Psychology 66 (1974): 688–701. Rubin introduces the idea of potential outcomes and links counterfactual conceptualizations of causality to statistical inference.
3.3.2.1 Contemporary Overviews
Henry E. Brady, “Causation and Explanation in Social Science,” in The Oxford Handbook of Political Science, 2008, https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199286546.001.0001/oxfordhb-9780199286546-e-10.
Alan S. Gerber and Donald P. Green, Field Experiments: Design, Analysis, and Interpretation (New York: W. W. Norton, 2012), Chapter 1. This book is a great resource for many topics in experimental design.
Stephen L. Morgan and Christopher Winship, Counterfactuals and Causal Inference: Methods and Principles for Social Research (Cambridge University Press, 2007), Chapter 1. This book includes nice examples of thinking through making causal claims from observational data.
Rachel Glennerster and Kudzai Takavarasha, Running Randomized Evaluations: A Practical Guide (Princeton: Princeton University Press, 2013). This is a great introduction to running field experiments and discusses many examples.
3.3.3 EGAP Policy Briefs
Some examples of causal questions: