Four Steps For A Climate Policy Beyond Scenarios And Fear
A very interesting review by Tim Lewens in the London Review of Books, with (explicit) reference to a “new” way to select a rational climate policy, beyond the usual soup of worst-case scenarios and of a precautionary principle applied so as to stifle innovation and institutionalize killjoyfulness. In summary:
- We should aim for “concrete recommendations that are thoroughly in accordance with precautionary thinking in remaining humble about our state of knowledge, while taking into account the full range of scientific evidence”
- However, the precautionary principle on its own offers no guidance for policy decisions in the face of great complexity and uncertainty, as both action and inaction might lead to disaster
- Cost-benefit analysis is not much better, as it simply collapses complexity and provides “a bland expression of uncertainty” that strongly depends on the (lack of) knowledge and understanding of the system at hand
- Instead, the first step of a good policy is to “examine how our proposed interventions will fare under a range of different plausible scenarios for the unfolding of a complex system, picking the strategy which has a satisfactory outcome across the largest range of future scenarios”
- The second step is to “assume that the world may not behave in a manner we expect it to, and therefore make sure that the strategy we choose can be undone or altered with reasonable ease”
- A good policy must also avoid falling victim to “optimism bias” (overestimating the likelihood of outcomes one favours) and “affiliation bias” (the dependence of a researcher’s results on their affiliation)
- The third step is therefore “to be attentive to the institutional sources of the data”, in order to understand and perhaps even remove the biases from the policymaking “picture”
- The fourth step goes even further for the same aim, and involves “broad public participation”
In short: know your science, know its limits, know its biases, involve as many people as possible, and pick a policy that looks best across many scenarios and can be easily changed.
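The first two steps amount to a simple selection rule, and can be sketched as toy code. A minimal Python sketch, in which every policy name, scenario name, score, and threshold is invented purely for illustration (none of it comes from Mitchell or Lewens):

```python
# Toy illustration of "pick the strategy which has a satisfactory
# outcome across the largest range of future scenarios".
# All names and numbers below are invented for illustration.

SATISFACTORY = 0.5  # hypothetical threshold for an acceptable outcome

# Hypothetical outcome scores (higher = better) for each policy
# under each plausible scenario.
outcomes = {
    "carbon_tax":     {"slow_warming": 0.8, "fast_warming": 0.6, "no_warming": 0.4},
    "geoengineering": {"slow_warming": 0.3, "fast_warming": 0.9, "no_warming": 0.1},
    "do_nothing":     {"slow_warming": 0.4, "fast_warming": 0.1, "no_warming": 0.9},
}

def robust_choice(outcomes, threshold):
    """Return the policy that is satisfactory in the most scenarios."""
    def satisfactory_count(policy):
        return sum(score >= threshold for score in outcomes[policy].values())
    return max(outcomes, key=satisfactory_count)

print(robust_choice(outcomes, SATISFACTORY))  # carbon_tax (satisfactory in 2 of 3)
```

Note that this sketch covers only the first step (robustness across scenarios); the second step would enter, for instance, as a tie-breaker preferring the more easily reversible policy.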
Now, it is pretty easy to argue that the IPCC has failed on all fronts: by fixating on worst-case analysis, thereby restricting the range of scenarios; by not allowing that the world may not behave as expected, steering quite clear of any sign of humility about anything; by refusing to consider the biases of its own authors and editors, through its flawed review system; and by consistently keeping the public at bay, with countless elitist “summits” only good for people on expenses and/or without a day job.
And now for some quotes from Lewens’ review of “Unsimple Truths: Science, Complexity and Policy” by Sandra Mitchell, ISBN 978 0 226 53262 (available at Amazon.com with the “Look Inside” feature enabled):
[…] on the important matter of what decision-makers can do to handle complexity […] Mitchell’s book is at its best. Nearly all the systems we care about – the global climate, the human body, the international financial system – exhibit the various forms of complexity she dissects.
[…] A typical reaction, displayed in many policy documents, is that when dealing with scientific uncertainty in relation to important systems, policy-makers should adopt a precautionary approach. […] Both unintentional vandalism and irresponsible dithering can lead to disaster. Those who oppose precautionary thinking often argue that it becomes incoherent or dangerous when spelled out in detail. The problem is that precautionary thinking is supposed to help in situations of uncertainty; that is, in situations where we lack knowledge, or where our knowledge is imprecise. But since decisions under such conditions tend to have the potential for grave outcomes whichever option we choose, we need guidance on how to err on the side of caution.
High-profile opponents of the precautionary principle, such as Barack Obama’s new regulation tsar, Cass Sunstein, have argued [for] a form of cost-benefit analysis as the best way to ensure that the potential costs and benefits of all courses of regulatory action – including inaction – are placed ‘on screen’.
Mitchell’s critique of cost-benefit analysis is a familiar one. It is suitable for well-understood systems, unfolding over short time periods, where we can assign probabilities with confidence. But the probability of a given outcome – financial profit, the extinction of species, an increase in sea levels, high blood pressure – in whatever system we are analysing will often vary significantly with small changes in the starting conditions, with our assumptions about the causal interactions within the system, and with variation in background conditions as the system evolves over long periods of time. Our estimates of these conditions will often be imprecise, or thoroughly conjectural, in spite of the apparent precision of the cost-benefit methodology. The question is how to turn uncertainty of this sort into trustworthy policy recommendations.
Mitchell’s stance on these matters is not new […] but her way of justifying it is particularly crisp and compelling. Simple cost-benefit analysis will tend to collapse a rich understanding of the complexity of a system into a single set of all-things-considered probability estimates for its likely end-states. In so doing, Mitchell says, we mask our grasp of complexity, and replace it with a bland expression of uncertainty.
[…] once we do acknowledge complexity, two strategies become available. First, we can examine how our proposed interventions will fare under a range of different plausible scenarios for the unfolding of a complex system, picking the strategy which has a satisfactory outcome across the largest range of future scenarios. Second, we can assume that the world may not behave in a manner we expect it to, and therefore make sure that the strategy we choose can be undone or altered with reasonable ease. The end result should be a set of concrete recommendations that are thoroughly in accordance with precautionary thinking in remaining humble about our state of knowledge, while taking into account the full range of scientific evidence.
[…] The question of how good a particular outcome would be, were it to arise, should be wholly independent of the question of how likely that outcome is. And yet it turns out that we tend to overestimate the likelihood of outcomes we favour, while underestimating the likelihood of outcomes we don’t want. This is known as ‘optimism bias’. And ‘affiliation bias’ results in (for example) the conclusions of studies on the effects of passive smoking varying according to the authors’ affiliation with the tobacco industry. Needless to say, these psychological results suggest that policy-makers need to be attentive to the institutional sources of the data they use. And this, in turn, underlines a long-standing theme of work among social scientists, who have claimed that broad public participation in risk planning may increase the quality of risk analysis. Mitchell’s stance on policy isn’t complete, but perhaps that is to be expected in a complex world.
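Mitchell’s complaint that cost-benefit analysis collapses complexity into “a bland expression of uncertainty” can be made concrete with a toy calculation (every number here is invented for illustration): a single, precise-looking expected value hides how easily the recommendation flips under a modest change in the assumed probabilities.

```python
# Toy sensitivity check for a cost-benefit expected value.
# All probabilities and payoffs are invented for illustration.

def expected_value(p_bad, cost_bad, benefit_good):
    """Expected net benefit, given an assumed probability of the bad outcome."""
    return (1 - p_bad) * benefit_good - p_bad * cost_bad

# One set of assumptions yields a precise-looking negative verdict...
print(expected_value(p_bad=0.10, cost_bad=1000, benefit_good=50))  # -55.0

# ...which flips to positive under a modestly different guess at p_bad.
print(expected_value(p_bad=0.04, cost_bad=1000, benefit_good=50))  # 8.0
```

The point of the sketch is Mitchell’s: when the probability estimate is itself conjectural, the sign of the answer, and hence the policy recommendation, is an artefact of assumptions the single number no longer displays.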