The counterfactual, or the estimated course of events that would have taken place in the absence of the research output assessed (i.e. the “without” scenario), is the analytical core of an impact assessment. Every epIA is premised on this element, as the “impact” is derived from the difference between observed events (i.e. the factual scenario) and the counterfactual. If the counterfactual is unrealistic or its estimation strongly biased, the results of an impact assessment will have little credibility.
Constructing a realistic and accurate counterfactual is a far from simple task. Agriculture is a dynamic sector that is influenced by a multitude of exogenous factors, including government policies, conflicts, resource changes, social changes, and climate dynamics, in addition to the effects of technical change. Technical change itself is the product of many innovations, and the contribution of any single one of these is difficult to isolate. Each innovation is the product of collaborative efforts among scientists and institutions, which are also difficult to attribute. Among these many drivers of change, it is a considerable challenge to determine what the course of events would be if a single research contribution were removed.
Before vs. After
The “before” adoption scenario is not an accurate counterfactual for the “after” scenario, because the context for agricultural production and resource management is constantly in flux: many things other than adoption may have changed in the intervening period that would influence the outcomes of interest. Thus, even in the absence of research interventions, measures of welfare may be rising or falling, and research benefits may even accrue by slowing rates of loss over time. As a result, before-after (“reflexive”) comparisons are of limited use for impact assessment.
With vs. Without
A simple comparison of adopters and non-adopters in the period after adoption has taken place on a significant scale is flawed because adoption is endogenous: farmers choose to adopt a technology based on their own expectations of whether adoption will profit them. A choice not to adopt therefore suggests either that some factor not obvious to outsiders (e.g. soil quality) would limit the profitability of adoption for that farmer, or that the farmer is risk-averse or holds some objective other than maximising profit. Either way, adopters and non-adopters are fundamentally different in ways that may covary strongly with the outcomes that impact assessment tries to attribute causally to research. This problem is known as selection bias.
Collecting data from adopters both before and after adoption, and comparing these with data from non-adopters over the same periods - the “double difference” (difference-in-differences) approach - is an improvement over naïve comparisons of before and after adoption or of adopters and non-adopters. However, this method assumes that differences in unobservable factors between the two groups are constant over time (the “parallel trends” assumption).
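The double-difference calculation itself is simple arithmetic. The following sketch uses hypothetical mean yields (all numbers are illustrative, not from any real study) to show how the estimate nets out a change common to both groups:

```python
# Minimal double-difference (DID) sketch with illustrative, made-up yields.

def double_difference(adopter_before, adopter_after,
                      nonadopter_before, nonadopter_after):
    """DID = (adopters' change over time) - (non-adopters' change over time).

    Subtracting the non-adopters' change removes any trend common to both
    groups, under the assumption that the groups would otherwise have
    moved in parallel.
    """
    return (adopter_after - adopter_before) - (nonadopter_after - nonadopter_before)

# Hypothetical mean yields (t/ha): both groups improve over the period,
# but adopters improve more.
impact = double_difference(adopter_before=2.0, adopter_after=2.9,
                           nonadopter_before=1.8, nonadopter_after=2.2)
print(round(impact, 2))  # 0.5 t/ha attributed to adoption
```

Note that a naïve before-after comparison of adopters alone would have credited the full 0.9 t/ha gain to adoption, including the 0.4 t/ha that non-adopters also gained.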
A number of econometric techniques attempt to control for selection bias (see Khandker et al., 2010 for a review), but by definition they can only do so for observable differences between adopters and non-adopters. For example, Propensity Score Matching can be applied to a cross-sectional dataset comparing adopters and non-adopters (as in Mendola, 2007). However, the likely presence of unobservable differences between the two groups means that these methods remain imperfect: estimates will tend to over-state effect sizes.
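To make the matching idea concrete, the sketch below implements only the matching step of Propensity Score Matching: in practice the propensity scores would be predicted from a logit or probit of adoption on observable covariates, but here they are simply made-up values, and all outcomes are hypothetical:

```python
# Sketch of nearest-neighbour matching on a (pre-computed) propensity score.
# Scores and yields below are illustrative, not from any real dataset.

def nearest_neighbour_att(adopters, nonadopters):
    """Average treatment effect on the treated (ATT).

    Each adopter is matched to the non-adopter with the closest propensity
    score, and the outcome differences are averaged.
    adopters / nonadopters: lists of (propensity_score, outcome) pairs.
    """
    diffs = []
    for score, outcome in adopters:
        _, match_outcome = min(nonadopters, key=lambda sn: abs(sn[0] - score))
        diffs.append(outcome - match_outcome)
    return sum(diffs) / len(diffs)

# Hypothetical (propensity score, yield in t/ha) pairs.
adopters = [(0.8, 3.0), (0.6, 2.6), (0.7, 2.8)]
nonadopters = [(0.75, 2.5), (0.55, 2.3), (0.4, 2.0)]
print(round(nearest_neighbour_att(adopters, nonadopters), 2))  # 0.37
```

The matching makes the comparison group similar to adopters on observables, but any unobservable difference (e.g. farmer skill) that influenced both adoption and yields would still contaminate this estimate.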
Randomised Experiments, Natural Experiments, Instrumental Variables and Regression Discontinuities
This family of methods simulates the counterfactual by relying on exogenous variation in adoption: situations in which a difference in adoption rates between two populations is generated that does not covary with the outcomes of interest. Such a situation can be created deliberately (a randomised experiment) or may arise inadvertently, as an accident of nature or policy (natural experiments and regression discontinuity approaches). Instrumental variables are a useful analytical device but hard to find: a valid instrument is strongly correlated with adoption but affects the outcomes of interest only through adoption.
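A small simulation can show why exogenous variation helps. In the sketch below (entirely synthetic data; the instrument is a hypothetical example such as proximity to an extension office), an unobservable factor raises both adoption and yields, so the naïve regression over-states the true effect of 1.0, while a two-stage least squares estimate using the instrument recovers it:

```python
import random

random.seed(0)

def ols_slope(x, y):
    """Simple OLS slope of y on x (both demeaned, so no intercept needed)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

# Synthetic data: unobservable u (e.g. farmer skill) raises both adoption
# and yields (selection bias); the instrument z shifts adoption but
# affects yields only through adoption (the exclusion restriction).
n = 5000
z = [random.gauss(0, 1) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]
adopt = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
yields = [1.0 * ai + 2.0 * ui + random.gauss(0, 1)   # true effect of adoption = 1.0
          for ai, ui in zip(adopt, u)]

naive = ols_slope(adopt, yields)                 # biased upward by u (about 1.7 here)
stage1 = ols_slope(z, adopt)                     # first stage: adoption on instrument
fitted = [stage1 * zi for zi in z]               # predicted adoption, purged of u
iv = ols_slope(fitted, yields)                   # second stage: close to the true 1.0
print(round(naive, 2), round(iv, 2))
```

The two-stage estimate works because the fitted adoption values vary only with the instrument, which by construction is unrelated to the unobservable; everything then hinges on whether a real-world instrument truly satisfies that exclusion restriction.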