Timing is key! Photo: Jörg Weingrill

Let me explain in a little more detail the relationship between types of methods for assessments (ex ante versus ex post) and the diffusion process of new technologies.

In a previous post, I indicated that the portfolios of methods for ex ante and ex post assessments can be different. What this statement really means is that the mix of methods will be different. Certainly, the number and complexity of methods available for ex post assessment (after deliberate release approval and subsequent diffusion of an LMO technology) is much larger than for ex ante assessment (before anything is in the hands of farmers; before deliberate release). The reason we have a larger portfolio of methods for ex post assessments is simply that we as practitioners have (or can obtain) hard data on yields, pesticide use, labor, other inputs, and prices, and can thus conduct robust assessments.

In fact, some specific methods and approaches for data collection and assessment can be used in both ex ante and ex post situations. Examples include cost/benefit analysis, net present value, economic surplus, expert consultations, focus groups, and assessments of gender and generational dimensions.

We are not talking about two discrete events in time (ex ante and ex post) as related to technology adoption; in reality, we are working within a continuum of time. This has interesting implications for SEC assessments and regulatory design, as one needs to be clear about what is expected from an assessment – implying that in a regulatory setting we are talking about a time-delimited exercise.

Let me illustrate briefly with an example. We conducted a social and economic assessment of the adoption of Bt maize in the Philippines and Honduras. Our national partners at the University of the Philippines Los Baños were able to draw a stratified random sample of farmers and to conduct quite sophisticated statistical analysis using complex models. In contrast, in Honduras, as this was only the second year of availability and the number of adopters was quite small, we could not implement a statistically random sample; rather, we had to conduct a census in the study sites. This, of course, has implications for what we could do in our subsequent assessments and analysis. From a personal standpoint, when the alternative is a complete desk study, I would rather have some field data, no matter how imperfect, than no data at all.

The other option, of course, is to conduct data collection over several years, but this has important implications in terms of time and funding. As I discuss in other posts, it also has implications for long-term innovation, technology diffusion, gains in knowledge, and other issues.