HHS Public Access Author Manuscript

Am J Public Health. Author manuscript; available in PMC 2017 August 01.
Published in final edited form as: Am J Public Health. 2016 August; 106(8): 1388–1389. doi:10.2105/AJPH.2016.303294.

Capitalizing on natural experiments to improve our understanding of population health

Jacob Bor, ScD, Assistant Professor, Departments of Global Health and Epidemiology, Boston University School of Public Health


Natural experiments have a long history in public health research, the most famous being John Snow’s 1853 study of cholera in Londoners supplied with drinking water by companies drawing upstream and downstream of the city’s sewage outflows. Dr. Snow had a causal theory (cholera is spread by contaminated water), he identified a natural experiment, and he set out to collect the appropriate data and analyze them. There is much to learn from this classic example. And yet, current teaching and research suggest that natural experiments do not hold the place of prominence they once did. After a brief introduction to Dr. Snow, the modal introductory methods course segues into the more mundane aspects of observational study design and analysis: When (if at all) should a cohort be called retrospective? When (if ever) should Mantel-Haenszel standardization be used in place of regression?


Something is lost here. We want to understand the determinants of population health and how to change them. We want to train students to pose research questions of consequence and to design causally rigorous studies to answer them. Instead, we give them Mantel-Haenszel. Implicitly, we are asking students to consider a different question: given access to a large cohort, how could you best model the association between two or more variables in that dataset? Causal inference then proceeds under the typically strong assumption of no residual confounding. This standard approach is outlined as the “seven foundational steps in conducting an epidemiologic study” by Keyes & Galea (2014) [1] (Table, left).


The problem is not limited to teaching. Keyes & Galea (2015) point to the proliferation of studies that assess risk factor associations with little theoretical basis and which are quickly contradicted in the next study [2]. Their critique is the latest in a long debate on the merits of “risk factor epidemiology”. There are likely many causes, but I suspect part of the problem is the focus of training on predictive and descriptive analysis and the (mis-)application of those methods to causal questions. There is an important role for predictive modeling in targeting high-risk populations for interventions. But imputing causality onto such models is fraught. In any case, observational public health research has developed something of a credibility problem. A 2015 New York Times op-ed described the state of affairs: “How did experts get [diet advice] so wrong? Certainly, the food industry has muddied the waters through its lobbying. But the primary problem is that nutrition policy has long relied on a very weak kind of science: epidemiological, or ‘observational,’ studies in which researchers follow large groups of people over many years. But even the most rigorous epidemiological studies suffer from a fundamental limitation. At best they can show only association, not causation” [3].

Correspondence should be sent to Jacob Bor, Boston University School of Public Health, 801 Massachusetts Avenue, 3rd Floor, Boston, MA 02118 USA ([email protected]).


And yet, a retreat to randomized trials would be an unfortunate response to the current impasse. Observational studies are critical to understanding the production of population health across the full range of exposures and contexts, including: exposures that cannot be randomized (e.g. social position) or cannot be randomized ethically (e.g. smoking), interventions in real-world settings at scale or in different study populations, and downstream effects of exposures on health and non-health outcomes revealed long after primary endpoints are reached and trials stopped. However, too much observational public health research is simply not generating new knowledge. Although it has been suggested that public health research has focused too much on causality and not enough on consequence[4], I would argue that the two are inextricably linked: a deeper focus on causal questions and designing rigorous studies to answer them will improve the quality of evidence, our confidence in policy recommendations, and the impact of our research.


Natural experiments – like Snow’s cholera study – can help fill the gap. Natural (or quasi-) experimental studies exploit quasi-random variation in the exposure of interest to identify causal effects. Rather than controlling for observed confounders and hoping that there are no unobserved confounders (as in multiple regression, matching, and reweighting), natural experiments identify variation in the exposure that is known to be (or can be persuasively argued to be) independent of potential confounders. Quasi-random variation in the exposure may arise from naturally occurring random variation (e.g. Mendelian randomization), from eligibility criteria (e.g. diagnostic clinical thresholds [5]), or from policy changes (e.g. the rollout of a new intervention [6]). Study designs to evaluate natural experiments, including interrupted time series, difference-in-differences, regression discontinuity, and instrumental variables techniques, are increasingly used in public health research [5,7].
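To make one of these designs concrete, the sketch below simulates a regression discontinuity around a hypothetical clinical threshold. All numbers here are invented for illustration (the cutoff of 350, the true effect of 5.0); the estimator is a simple local linear regression with separate intercepts and slopes on each side of the cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: patients with a biomarker below a clinical
# threshold (cutoff = 350) receive treatment; treatment shifts the
# outcome by a true effect of 5.0. Assignment near the cutoff is
# as-good-as-random, which is what the design exploits.
n = 5000
cutoff = 350.0
biomarker = rng.uniform(200, 500, n)
treated = (biomarker < cutoff).astype(float)
true_effect = 5.0
outcome = 0.02 * biomarker + true_effect * treated + rng.normal(0.0, 1.0, n)

# Local linear regression within a bandwidth around the cutoff,
# allowing a separate slope on each side of the threshold.
bandwidth = 50.0
mask = np.abs(biomarker - cutoff) < bandwidth
x = biomarker[mask] - cutoff
d = treated[mask]
y = outcome[mask]
X = np.column_stack([np.ones_like(x), d, x, d * x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated effect at cutoff: {beta[1]:.2f}")  # close to 5.0
```

The coefficient on the treatment indicator estimates the jump in the outcome at the cutoff, which has a causal interpretation if individuals cannot precisely manipulate which side of the threshold they fall on.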


Importantly, however, natural or quasi-experimental methods are not (just) a bag of statistical tricks. Rather, quasi-experimental methods represent an epistemological approach steeped in the scientific process of posing counterfactuals – states of the world in which a population with the same distribution of potential outcomes is exposed/unexposed – and designing experiments, or finding natural experiments, that reveal both counterfactuals. Indeed, thinking about all causal studies as “experiments” can be a useful way to improve the transparency of assumptions and the rigor of observational research in general, even where a source of quasi-random variation is not identified. As with randomized trials, natural experiments cannot answer all questions of policy relevance because they do not exist in all situations. But when they are available, they should be used, given their potential for internal validity and transparency of assumptions.

The Table juxtaposes the standard approach to observational inquiry with the approach typical of the quasi-experimental literature. In the latter, a source of quasi-random variation is identified, an analytical plan is developed, and the assumptions under which the proposed estimator has a causal interpretation are clearly stated. The researcher then goes to the data to assess an association already believed, by design, to have a causal interpretation. This inverts the common practice of finding an association in the data and then exploring whether it might be causal.

Consequential public health research is research that engages with questions that matter; generates robust causal inferences; and has potential for translation into policy [4]. Natural experiments are uniquely suited to address these three aims. They can assess important but hard-to-randomize exposures. They have the potential for greater internal validity, reducing reliance on tenuous assumptions about residual confounding. Finally, if consequence is to be measured by the potential of findings to improve population health, then an approach that links scientific discovery closely to the evaluation of policy changes and administrative rules facilitates precisely such translation.
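As a minimal illustration of this design-first logic, the sketch below works through a difference-in-differences comparison for a hypothetical policy rollout; the regions, periods, and true effect of 2.0 are all invented. The identifying assumption (parallel trends) is stated before the data are touched, and the double difference removes both the fixed regional difference and the common time trend.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: a policy takes effect in a "treated" region
# (region = 1) in the post period (period = 1) but not in the
# control region. Outcomes combine a fixed regional difference,
# a common time trend, the policy effect, and noise.
n = 4000
region = rng.integers(0, 2, n)
period = rng.integers(0, 2, n)
true_effect = 2.0
y = (1.0 * region                      # fixed difference between regions
     + 0.5 * period                    # common time trend
     + true_effect * region * period   # policy effect (treated, post only)
     + rng.normal(0.0, 1.0, n))

# Difference-in-differences:
#   (treated post - treated pre) - (control post - control pre)
cell = {(r, t): y[(region == r) & (period == t)].mean()
        for r in (0, 1) for t in (0, 1)}
did = (cell[1, 1] - cell[1, 0]) - (cell[0, 1] - cell[0, 0])
print(f"DiD estimate: {did:.2f}")  # close to 2.0
```

If parallel trends fails (for example, the treated region was already on a steeper trajectory), the estimate is biased; this is exactly the kind of assumption the quasi-experimental workflow forces the researcher to state and probe.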


I have argued here that public health would be well served by a return to its roots in natural experiments. Innovations in computing and novel opportunities to link big datasets only enhance the opportunities to evaluate natural experiments. Over the last twenty years, an active literature has used natural experiments to study the determinants of population health. Yet most of this work has occurred outside schools of public health, e.g. in economics and sociology departments, and surprisingly little of it is taught in core courses at schools of public health. Further embrace of natural experiments as an approach to scientific inquiry has the potential to lead to more creative, valid, and impactful scholarship on what really matters for population health.

Acknowledgments

Dr. Bor acknowledges financial support from NIH grant K01-MH105320-01A1 and helpful feedback from Sandro Galea, Jeremy Barofsky, and Noah Haber. The contents are the responsibility of the author and do not necessarily reflect the views of the US Government.

References

1. Keyes KM, Galea S. Current practices in teaching introductory epidemiology: how we got here, where to go. Am J Epidemiol. 2014;180:661–668. [PubMed: 25190677]
2. Keyes K, Galea S. What matters most: quantifying an epidemiology of consequence. Ann Epidemiol. 2015;25:305–311. [PubMed: 25749559]
3. Teicholz N. The Government’s Bad Diet Advice. New York Times. 21 Feb 2015:A19. Available: http://www.nytimes.com/2015/02/21/opinion/when-the-government-tells-you-what-to-eat.html?_r=0.
4. Galea S. An argument for a consequentialist epidemiology. Am J Epidemiol. 2013;178:1185–1191. [PubMed: 24022890]
5. Bor J, Moscoe E, Mutevedzi P, Newell M-L, Bärnighausen T. Regression discontinuity designs in epidemiology: causal inference without randomized trials. Epidemiology. 2014;25:729–737. [PubMed: 25061922]
6. De Neve J-W, Fink G, Subramanian SV, Moyo S, Bor J. Length of secondary schooling and risk of HIV infection in Botswana: evidence from a natural experiment. Lancet Glob Health. 2015;3:e470–e477.
7. Hernán MA, Robins JM. Instruments for causal inference: an epidemiologist’s dream? Epidemiology. 2006;17:360–372. [PubMed: 16755261]


Table. Seven foundational steps in quantitative public health research

A traditional observational study*
1. Define population of interest
2. Define exposures and outcome
3. Take a sample from population
4. Estimate measures of association
5. Assess whether observed associations suggest a causal relationship
6. Assess evidence for multi-causation
7. Assess external validity

A (quasi-)experimental study
1. State causal study question / theory
2. Find quasi-random variation in exposure
3. Plan analysis that exploits (2) to answer (1); state assumptions required for causal inference
4. Collect new or existing data
5. Assess effect of quasi-random variation in exposure on outcome
6. Assess robustness to assumption violations
7. Interpret results with respect to theory and policy

* Adapted from [1].
