Many environmental problems are difficult to evaluate because they are beset with scientific uncertainty. Obvious examples include mass extinction of species (how many species are we losing per year, and how many will we lose within the next fifty years?), the ultimate impacts of pollutants (notably endocrine disrupters), and the problem that is probably subject to the most scientific uncertainty of all, climate change. In all these areas, scientific uncertainty bedevils the question of costs. We are generally aware of the costs of action, but we know far less about the costs of inaction. Hence inaction rules the day.
A key question arises: what is "legitimate scientific caution" in the face of uncertainty—especially when uncertainty can cut both ways? Some observers may consider that, in the absence of conclusive evidence and assessment, it is better to stick with low estimates of environmental impacts on the grounds that they are more "responsible." But there is an asymmetry of evaluation at work. A low estimate, ostensibly "safe" because it takes a conservative view of such limited evidence as is documented in detail, may fail to reflect the real situation just as much as an "unduly" high estimate that is more of a best-judgement affair, based on all available evidence with varying degrees of demonstrable validity. A minimalist calculation with apparently greater precision may in fact amount to spurious accuracy. In a situation of uncertainty where not all factors can be quantified to conventional satisfaction, let us not become preoccupied with what can be precisely counted if that is to the detriment of what ultimately counts.
This applies especially to issues with policy implications of exceptional scope, as in the case of climate change. Suppose a policy maker hears scientists stating that they cannot legitimately offer final guidance about a problem because they have not yet completed their research with conventionally conclusive analysis in all respects. Or suppose the scientists simply refrain from going public about the problem because they feel, in accord with certain traditional canons of science, that they cannot validly say anything much before they can say all. In these circumstances, the policy maker may well assume there is little to worry about for the time being: absence of evidence about a problem is taken to imply evidence of absence of a problem. As a consequence, the policy maker may decide to do nothing—and to do nothing in a world of unprecedentedly rapid change can be to do a great deal. In these circumstances, undue caution from scientists can become undue recklessness in terms of the policy fallout: their silence can send a resounding message, however unintentional. As in other situations beset with uncertainty, it will be better for us to find we have been roughly right than precisely wrong.