For future reference, a collection of statements about the mistreatment of climate-related uncertainties by climate scientists.
Notably, there is not a skeptic in sight among the authors of the statements below.
- Mike Hulme about the BBC’s recent “Science under attack” docufiction
I do not recognise [Nurse's] claim that “climate science is reducing uncertainty all the time”. There remain intractable uncertainties about future predictions of climate change. Whilst Nurse distinguishes between uncertainty arising from incomplete understanding and that arising from irreducible stochastic uncertainty, he gives the impression that all probabilistic knowledge is of the latter kind (e.g. his quote of average rates of success for cancer treatments). In fact with climate change, most of the uncertainty about the future that is expressed in probabilistic terms (e.g. the IPCC) is Bayesian in nature. Bayesian probabilities are of a fundamentally different kind to those quoted in his example. And when defending consensus in climate science – which he clearly does – he should have explained clearly the role of Bayesian (subjective) expert knowledge in forming such consensus.
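Hulme's distinction between frequency-based and Bayesian probabilities can be made concrete with a toy calculation. This sketch is purely illustrative (the numbers are invented, not from the quote): a frequentist probability is a long-run success rate, while a Bayesian probability is a degree of belief, here updated from a uniform Beta prior on the same evidence.

```python
# Frequentist: probability as an observed long-run frequency
# (e.g. "7 in 10 patients respond to this treatment").
successes, trials = 7, 10
freq_prob = successes / trials  # 0.7

# Bayesian: probability as a degree of belief.  Start from a
# uniform Beta(1, 1) prior over the unknown success rate, then
# update on the same 7-of-10 evidence; the posterior is
# Beta(1 + 7, 1 + 3), whose mean is (1 + 7) / (2 + 10).
alpha, beta = 1 + successes, 1 + (trials - successes)
posterior_mean = alpha / (alpha + beta)  # 8/12 ≈ 0.667

print(freq_prob, round(posterior_mean, 3))
```

The two numbers differ because the Bayesian figure carries prior belief along with the data; IPCC-style probabilities are of this second, subjective kind.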
- John Beddington, Vicky Pope, Myles Allen, Hans von Storch, Robert Watson, extensively quoted at Bishop Hill
- Lord Oxburgh, during one of the hearings by the UK Parliament’s Science and Technology Committee
Q40 Chair: So you concluded that the approach that Professor Jones had adopted was one of dealing with presentation of the data rather than an attempt to deceive?
Lord Oxburgh: Absolutely. I think when you come to the presentation of complicated scientific observations and making them available to a much wider audience, you come up against some very tough “honesty” decisions. How much do you simplify? It is the same when you are teaching undergraduates. How much do you simplify in order to get a general idea across? I, personally, think that in various publications for public consumption those who have used the CRU data and those who have used other climatic data have not helped their case by failing to illuminate the very wide uncertainty band associated with it.
- Lord Oxburgh and his Scientific Assessment Panel, in their report about Climategate
Recent public discussion of climate change and summaries and popularizations of the work of CRU and others often contain over-simplifications that omit serious discussion of uncertainties emphasized by the original authors. For example, CRU publications repeatedly emphasize the discrepancy between instrumental and tree-based proxy reconstructions of temperature during the late 20th century, but presentations of this work by the IPCC and others have sometimes neglected to highlight this issue. While we find this regrettable, we could find no such fault with the peer-reviewed papers we examined.
- Lord Oxburgh’s Panel member Michael Kelly, Professor of Electronics at Cambridge
Up to and throughout this exercise, I have remained puzzled how the real humility of the scientists in this area, as evident in their papers, including all these here, and the talks I have heard them give, is morphed into statements of confidence at the 95% level for public consumption through the IPCC process. This does not happen in other subjects of equal importance to humanity, e.g. energy futures or environmental degradation or resource depletion. I can only think it is the ‘authority’ appropriated by the IPCC itself that is the root cause.
- Sir Muir Russell and his Independent Climate Change E-mails Review, in their report about Climategate
On the allegation that the references in a specific e-mail to a ‘trick’ and to ‘hide the decline’ in respect of a 1999 WMO report figure show evidence of intent to paint a misleading picture, we find that, given its subsequent iconic significance (not least the use of a similar figure in the IPCC Third Assessment Report), the figure supplied for the WMO Report was misleading. We do not find that it is misleading to curtail reconstructions at some point per se, or to splice data, but we believe that both of these procedures should have been made plain – ideally in the figure but certainly clearly described in either the caption or the text.
Understanding requires proper statistical interpretation, i.e. to determine the confidence level associated with a statement such as “the present is likely warmer than the past”. To do this as objectively as possible would require a complex (and difficult) study to perform hypothesis testing in a mathematically rigorous way, taking proper account of all of the uncertainties and their correlations. We are not aware that this has been done in the production of IPCC reports to date, but instead qualitative statements have been made based on definitions of ‘likely’, ‘very likely’ etc. according to criteria laid down by the IPCC (‘Likely’ means a probability greater than 66%, and ‘Very Likely’ means a probability greater than 90%).
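The kind of calculation the Review says has not been done — attaching a confidence level to "the present is likely warmer than the past" while propagating the uncertainties — can at least be sketched as a Monte Carlo comparison. Everything numeric below is invented for illustration (the anomalies, the error bars, and the Gaussian, uncorrelated error model); a rigorous study would also have to handle the correlations the Review mentions.

```python
import random

random.seed(0)

# Hypothetical temperature anomalies (deg C) with 1-sigma errors;
# these numbers are illustrative, not real reconstructions.
present_mean, present_sd = 0.6, 0.10   # instrumental period
past_mean, past_sd = 0.4, 0.25         # proxy reconstruction (wider error)

# Draw both quantities many times and count how often the present
# exceeds the past; the resulting fraction is a Monte Carlo
# probability that could then be compared with the IPCC's verbal
# scale ('likely' > 66%, 'very likely' > 90%).
n = 100_000
warmer = sum(
    random.gauss(present_mean, present_sd) > random.gauss(past_mean, past_sd)
    for _ in range(n)
)
print(f"P(present > past) ~= {warmer / n:.2f}")
```

With these invented numbers the fraction comes out near 0.77 — ‘likely’ on the IPCC scale, but not ‘very likely’, which is exactly the kind of distinction a quantified treatment makes visible.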
The best one might hope for the future of peer review is to be able to foster an environment of continuous critique of research papers before and after publication. Many writers on peer review have made such a proposal, yet no journal has been able to create the motivation or incentives among scientists to engage in permanent peer review (50-52). Some observers might worry that extending opportunities for criticism will only sustain maverick points of view. However, experience suggests that the best science would survive such intensified peer review, while the worst would find its deserved place at the margins of knowledge.
This process of weeding out weak research from the scientific literature can be accelerated through more formal mechanisms, such as the systematic review. A systematic approach to selecting evidence focuses on the quality of scientific methods rather than the reputations of scientists and their institutions. This more rigorous approach to gathering, appraising, and summing up the totality of available evidence has been profoundly valuable to clinical medicine. There may be useful lessons here for the IPCC. Climate sceptics and climate scientists, along with their colleagues in other scientific disciplines, would likely welcome this greater rigour and scrutiny. It would certainly promote quality and strengthen accountability to a more critical public (and media) with higher expectations of science. More importantly, intensified post- as well as pre-publication review would put uncertainty – its extent and boundaries – at the centre of the peer review and publication process. This new emphasis on uncertainty would limit the rhetorical power of the scientific paper (53), and offer an opportunity to make continuous but constructive public criticism of research a new norm of science.
- The InterAcademy Council, in their Climate Change Assessment, Review of the Processes & Procedures of the IPCC
Characterizing and communicating uncertainties. IPCC’s guidance for addressing uncertainties in the Fourth Assessment Report urges authors to consider the amount of evidence and level of agreement about all conclusions and to apply subjective probabilities of confidence to conclusions when there was ‘high agreement, much evidence.’ However, such guidance was not always followed, as exemplified by the many statements in the Working Group II Summary for Policymakers that are assigned high confidence but are based on little evidence. Moreover, the apparent need to include statements of ‘high confidence’ (i.e., an 8 out of 10 chance of being correct) in the Summary for Policymakers led authors to make many vaguely defined statements that are difficult to refute, therefore making them of ‘high confidence.’ Such statements have little value. Scientific uncertainty is best communicated by indicating the nature, amount, and quality of studies on a particular topic, as well as the level of agreement among studies. The IPCC level-of-understanding scale provides a useful means of communicating this information.
Chapter Lead Authors should provide a traceable account of how they arrived at their ratings for level of scientific understanding and likelihood that an outcome will occur.
In addition, IPCC’s uncertainty guidance should be modified to strengthen the way in which uncertainty is addressed in upcoming assessment reports. In particular, quantitative probabilities (subjective or objective) should be assigned only to well-defined outcomes and only when there is adequate evidence in the literature and when authors have sufficient confidence in the results. Assigning probabilities to an outcome makes little sense unless researchers are confident in the underlying evidence (Risbey and Kandlikar, 2007), so use of the current likelihood scale should suffice.
Studies suggest that informal elicitation measures, especially those designed to reach consensus, lead to different assessments of probabilities than formal measures. (Protocols for conducting structured expert elicitations are described in Cooke and Goossens.) Informal procedures often result in probability distributions that place less weight in the tails of the distribution than formal elicitation methods, possibly understating the uncertainty associated with a given outcome (Morgan et al., 2006; Zickfeld et al., 2007).
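The point about under-weighted tails can be put in numbers. In this sketch both distributions are invented for illustration: a narrower normal stands in for an informal, consensus-seeking elicitation and a wider one for a formal protocol, and the comparison shows how much probability mass beyond an extreme threshold the narrow version loses.

```python
from math import erf, sqrt

def tail_above(threshold, mean, sd):
    """P(X > threshold) for a normal X — the upper-tail mass."""
    z = (threshold - mean) / (sd * sqrt(2))
    return 0.5 * (1 - erf(z))

# Hypothetical elicited distributions for some outcome, e.g. a
# sensitivity in degrees C; all numbers are illustrative.
mean, extreme = 3.0, 6.0
informal_sd = 0.8   # consensus-seeking elicitation: narrow
formal_sd = 1.5     # structured formal protocol: wider

print(tail_above(extreme, mean, informal_sd))  # tiny tail mass
print(tail_above(extreme, mean, formal_sd))    # orders of magnitude larger
</imports>```

With these numbers the informal distribution puts roughly 0.01% of its mass above the threshold versus about 2.3% for the formal one — the same central estimate, but a very different statement about extreme outcomes.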
► The likelihood scale should be stated in terms of probabilities (numbers) in addition to words to improve understanding of uncertainty.
► Where practical, formal expert elicitation procedures should be used to obtain subjective probabilities for key results.
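The first recommendation — stating the scale in numbers as well as words — amounts to a simple lookup against the AR4 guidance-note thresholds. The thresholds below are the published ones; the formatting helper is our own illustration, not anything the IAC proposes.

```python
# Lower probability bounds of the AR4 likelihood terms.
LIKELIHOOD_SCALE = {
    "virtually certain": 0.99,
    "extremely likely": 0.95,
    "very likely": 0.90,
    "likely": 0.66,
}

def with_numbers(term: str) -> str:
    """Render a verbal likelihood term with its numeric threshold."""
    bound = LIKELIHOOD_SCALE[term]
    return f"{term} (> {bound:.0%} probability)"

print(with_numbers("likely"))       # likely (> 66% probability)
print(with_numbers("very likely"))  # very likely (> 90% probability)
```

The point of the recommendation is exactly this pairing: readers demonstrably interpret the bare words very differently, but not the numbers.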
According to the IPCC uncertainty guidance, quantitative scales should be used when the results are themselves quantified and when there is ‘high agreement, much evidence.’ For many of the Working Group III conclusions, this is clearly not the case.
The IPCC uncertainty guidance provides a good starting point for characterizing uncertainty in the assessment reports. However, the guidance was not consistently followed in the fourth assessment, leading to unnecessary errors. For example, authors reported high confidence in statements for which there is little evidence, such as the widely quoted statement that agricultural yields in Africa might decline by up to 50 percent by 2020. Moreover, the guidance was often applied to statements that are so vague they cannot be disputed. In these cases the impression was often left, incorrectly, that a substantive finding was being presented.
The Working Group II Summary for Policymakers has been criticized for various errors and for emphasizing the negative impacts of climate change. These problems derive partly from a failure to adhere to IPCC’s uncertainty guidance for the fourth assessment and partly from shortcomings in the guidance itself. Authors were urged to consider the amount of evidence and level of agreement about all conclusions and to apply subjective probabilities of confidence to conclusions when there was high agreement and much evidence. However, authors reported high confidence in some statements for which there is little evidence. Furthermore, by making vague statements that were difficult to refute, authors were able to attach ‘high confidence’ to the statements. The Working Group II Summary for Policymakers contains many such statements that are not supported sufficiently in the literature, not put into perspective, or not expressed clearly.
(whole section: Use the appropriate level of precision to describe findings)
The quantitative scales used by Working Group I raise four additional issues:
1. It is unclear what the use of separate confidence and likelihood scales accomplishes. For example, one could have very high confidence that obtaining two sixes when rolling a pair of fair dice is extremely unlikely. But why not just say that obtaining two sixes when rolling a pair of fair dice is extremely unlikely? This suggests that the confidence scale is redundant when the likelihood scale is used, a point also made by Risbey and Kandlikar (2007).
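The Committee's dice example can be checked directly. The arithmetic is exact; mapping the result onto the verbal scale uses the AR4 threshold for ‘extremely unlikely’ (below 5%), and the code itself is our illustration.

```python
from fractions import Fraction

# Probability of rolling two sixes with a pair of fair dice:
# the rolls are independent, so multiply the per-die probabilities.
p_two_sixes = Fraction(1, 6) * Fraction(1, 6)  # exactly 1/36

print(float(p_two_sixes))         # 0.0277... — about 2.8%
print(float(p_two_sixes) < 0.05)  # True: 'extremely unlikely' on the AR4 scale
```

Because the probability is exact, a separate confidence rating adds nothing — which is precisely the Committee's redundancy point.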
It is well-documented in the literature that people interpret the terms ‘very unlikely,’ ‘likely’ etc. in Table 3.3 in different ways (Patt and Schrag, 2003; Budescu et al., 2009; Morgan et al., 2009). Specifically, the use of words alone may lead people to underestimate the probability of high- probability events and to overestimate the probability of low-probability events (see also Lichtenstein et al., 1978).
More consistency is called for in how IPCC Working Groups characterize uncertainty.
The extent to which results are quantified and measurement or model uncertainty is presented differs significantly across the chapters of the Working Group II report.
The extent to which results are quantified also differs in the Working Group II Summary for Policymakers and the Technical Summary. The Summary for Policymakers presents quantitative information on the extent of agreement between different physical and biological trends and trends in temperature. Conclusions about observed impacts of climate on the natural and human environments and about future impacts (Sections B and C of the Summary for Policymakers) are usually stated in qualitative terms using the confidence and likelihood scales. No additional information is presented to characterize the uncertainty in the results of individual studies or to indicate the range of estimates across studies. In contrast, the Technical Summary includes more quantitative information about uncertainty.
In the Committee’s view, assigning probabilities to imprecise statements is not an appropriate way to characterize uncertainty. If the confidence scale is used in this way, conclusions will likely be stated so vaguely as to make them impossible to refute, and therefore statements of ‘very high confidence’ will have little substantive value. More importantly, the use of probabilities to characterize uncertainty is most appropriate when applied to empirical quantities (Morgan et al., 2009).
(the whole chapter 3. IPCC’s evaluation of evidence and treatment of uncertainty)