Module 5 Hypothesis Testing
We cannot directly observe causal effects because of the fundamental problem of causal inference (see the causal inference module). So how can we learn about these unobserved causal effects from what we do observe? In a randomized experiment, we can assess a guess or hypothesis about the unobserved causal effects by comparing what we observe in a given experiment to what we would expect to observe if we repeated the experimental manipulation many times and the hypothesis were true.
In this module we introduce hypothesis testing, how it relates to causal inference, \(p\)-values, and what to do when we have multiple hypotheses to test.
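The idea of comparing what we observe to what we would observe under repeated randomization can be sketched in a few lines of code. The following is a minimal illustration of randomization inference under the sharp null hypothesis of no effect for any unit; the outcome values and the test statistic (difference in means) are made up for the example, not taken from this module.

```python
import numpy as np

rng = np.random.default_rng(12345)

# Hypothetical data: 8 units, half assigned to treatment.
# These outcome values are invented purely for illustration.
outcomes = np.array([12.0, 7.5, 9.0, 14.2, 6.1, 8.8, 10.3, 7.9])
treated = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)

def mean_diff(y, z):
    """Test statistic: difference in mean outcomes, treated minus control."""
    return y[z].mean() - y[~z].mean()

observed = mean_diff(outcomes, treated)

# Under the sharp null of no effect, every unit's outcome is fixed and only
# the random assignment varies, so we can recompute the test statistic under
# many re-randomizations of the same design (here, random permutations of
# the assignment vector).
n_reps = 10_000
sims = np.array(
    [mean_diff(outcomes, rng.permutation(treated)) for _ in range(n_reps)]
)

# Two-sided p-value: the share of re-randomizations that produce a statistic
# at least as extreme as the one we observed.
p_value = np.mean(np.abs(sims) >= np.abs(observed))
print(round(p_value, 3))
```

A small p-value would mean that a difference as large as the observed one rarely arises from the randomization alone when the null hypothesis is true.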
5.1 Core Content
What is a good hypothesis?
The relationship between hypothesis testing and causal inference.
Hypothesis tests.
Null hypotheses.
Estimators versus test statistics.
In an experiment, a reference distribution for a hypothesis test comes from the experimental design and the randomization.
\(p\)-values and how to interpret the results of hypothesis tests.
A good hypothesis test should (1) rarely cast doubt on true hypotheses (i.e., have a controlled, low false positive rate), and (2) easily distinguish signal from noise (i.e., often cast doubt on false hypotheses; have high statistical power).
How would we know when our hypothesis test is doing a good job? (Power analysis is its own module).
False positive rates.
Correct coverage of a confidence interval.
Assessing the false positive rate of a hypothesis test for a given design and choice of test statistic; the case of cluster-randomized trials and cluster-robust standard errors.
Be careful when testing many hypotheses, such as when you have more than two treatment arms or you are assessing the effects of a treatment on multiple outcomes. Adjust the \(p\)-values or confidence intervals to reflect the number of tests/intervals produced.
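Two of the points above, checking a test's false positive rate and adjusting for multiple comparisons, can be illustrated by simulation. The sketch below (all sample sizes and numbers of outcomes are assumed for the example) simulates experiments in which every true effect is zero, tests a treatment against several outcomes per experiment, and compares the family-wise error rate with and without a Bonferroni adjustment. A simple two-sided z-test stands in for whatever test one would actually use.

```python
import math
import numpy as np

rng = np.random.default_rng(7)

def z_test_pvalue(treated, control):
    """Two-sided z-test for a difference in means, assuming known unit
    variance (fine here because we simulate the outcomes as N(0, 1))."""
    z = (treated.mean() - control.mean()) / math.sqrt(
        1 / len(treated) + 1 / len(control)
    )
    return 1 - math.erf(abs(z) / math.sqrt(2))

# Illustrative settings (assumed, not from the module): each simulated
# experiment tests one treatment against 10 independent outcomes, and every
# true effect is zero, so any rejection is a false positive.
n_sims, n_outcomes, n_per_arm, alpha = 2000, 10, 50, 0.05

any_unadjusted = 0
any_bonferroni = 0
for _ in range(n_sims):
    pvals = np.array([
        z_test_pvalue(rng.normal(0, 1, n_per_arm), rng.normal(0, 1, n_per_arm))
        for _ in range(n_outcomes)
    ])
    any_unadjusted += (pvals < alpha).any()               # at least one rejection
    any_bonferroni += (pvals < alpha / n_outcomes).any()  # Bonferroni threshold

fwer_unadjusted = any_unadjusted / n_sims  # well above alpha with 10 tests
fwer_bonferroni = any_bonferroni / n_sims  # close to alpha after adjustment
print(fwer_unadjusted, fwer_bonferroni)
```

With 10 independent true-null tests, the chance of at least one false rejection at \(\alpha = 0.05\) without adjustment is about \(1 - 0.95^{10} \approx 0.40\); the Bonferroni threshold of \(\alpha / 10\) brings the family-wise rate back near 0.05, at a cost in power.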
5.2 Slides
Below are slides with the core content that we cover in our lecture on hypothesis testing. You can use these slides directly or make your own local copy to edit.
You can also see the slides used in previous EGAP Learning Days:
5.3 Resources
5.3.1 EGAP Methods Guides
EGAP Methods Guide 10 Things to Know about Hypothesis Testing
EGAP Methods Guide 10 Things You Need to Know about Multiple Comparisons
5.3.2 Books, Chapters, and Articles
Gerber and Green, Field Experiments. Chapter 3: Sampling Distributions, Statistical Inference, and Hypothesis Testing.
Paul R. Rosenbaum, Design of Observational Studies (Springer Series in Statistics, 2010). Chapter 2: Causal Inference in Randomized Experiments.
Paul R. Rosenbaum, Observation and Experiment: An Introduction to Causal Inference (Harvard University Press, 2017). Part I: Randomized Experiments.
References
Rosenbaum, Paul R. Design of Observational Studies. Springer Series in Statistics. Springer, 2010.
Rosenbaum, Paul R. Observation and Experiment: An Introduction to Causal Inference. Harvard University Press, 2017.