11 February 2010

Unpublished Paper on Problems in Scientific Assessments

Last week I alluded to a paper that I had submitted for publication and which was ultimately rejected. The journal was MTS Journal and the occasion was that a colleague was putting together a special issue and had asked me to contribute something. I was not too bothered by the rejection at the time, as I was getting ready to move to Oxford for a sabbatical. In light of recent events regarding the IPCC, the paper now appears a bit more significant than it did back then. Below are links to (a) the original submission, (b) the reviewer comments and my responses, and (c) the revised resubmission, which was rejected for publication.

If nothing else the review comments indicate how criticisms of the IPCC were received circa late 2006. Comments welcomed.

(a) Original submission (PDF)
(b) Reviewer comments and response (PDF)
(c) Revised submission (PDF)
Pielke Jr., R. A. 2010 (2006). Effective Science Arbitration: Some Lessons from Recent Scientific Assessments, unpublished manuscript, December, 2006.
For those not interested in all the details here are the three lessons that I draw:
The three cases discussed here were not selected through some random procedure, but happened to be instances in which I observed problems in the assessment process while doing research. Thus it is difficult to assess how widespread the issues discussed here might be in the assessment literature. However broad the problem is, as the IPCC prepares to publish its fourth assessment report, and scientists and policy makers continue to emphasize the importance of assessments, it seems critical to carefully evaluate procedures for accuracy, and for users of assessments to understand the strengths and limits of assessments. . .

Each of the three cases discussed in this paper reinforces the continuing importance of the conventional peer-reviewed literature. . . While assessments can serve as a useful “shortcut” for researchers, particularly for areas outside their direct expertise, it is appropriate for researchers to continue to rely on the original literature in their scientific work, rather than to simply depend on assessments as accurate means to convey scientific findings. Inevitably, assessments must simplify, in the process losing much of the nuance and uncertainties that characterize any complex scientific study. . .

Asking an assessment to distill the potential relevance for action, or at a minimum to specify criteria of policy relevance, would not necessarily require abandonment of a focus on positive questions. An assessment built upon questions provided by policymakers would create a close tie between the information demanded by decision makers and that being produced in assessments . . . In each of the three cases discussed here, shortfalls in credibility have the potential to threaten the assessment's legitimacy. And both credibility and legitimacy could be enhanced through a more explicit focus on assessment salience, which was lacking in all three instances.


  1. I just had a quick skim of the paper and the review comments. It seems to me that the expert reviewers were not....mmmm....expert?

  2. You say in your response, The IPCC is designed to be “policy neutral” which means that it shouldn’t in practice color its presentation to, for instance, favor mitigation over adaptation.

    How salient do you think a literature review is as a decision support product? Wouldn't a detailed and specific technical analysis of policy options be more appropriate?

    It seems to me that there's plenty of science-based decision support provided and acted on in government and industry every day, and it rarely, if ever, consists of a lit review.

  3. In addition to the problems you noted, AR4 in many places uses other assessments as references rather than the primary source.

    When looking to see if the Himalayan Glacier errors made it into peer reviewed literature I found that there are peer reviewed papers that then use AR4 as a reference.

    So a reader of the latest peer reviewed paper has to wade through at least two layers of papers to find the original statement.

    Papers quoting or paraphrasing assessments that quote or paraphrase other assessments that quote or paraphrase other papers doesn't mean that anything is incorrect, but it is a hell of a way to run a railroad.

  4. It's getting a bit tiresome reading all these so-called experts who tell us the conclusions are solid despite the peer-reviewed science not actually supporting them. In the review comments, though, it becomes 'well, these may indeed be errors, but who cares?'.

    Reviewer 3 in particular brings up the argument that it's impossible to separate facts from judgments, either in the IPCC report or in the underlying literature. Yes, quite. Not the impression given to the press or politicians, is it?

    Not that it matters much if nobody reads the report. I was amused when the UK scientific advisor, asked recently by Andrew Neil if he had read the report, spluttered "it's 6000 pages". To which the reply was "but you're the current scientific advisor and you were an IPCC chairman". He then explained that he was extremely busy working for the World Bank environmental team at the time these errors were put in. Clearly that job didn't require him to read the IPCC report either. It doesn't inspire any confidence whatsoever!

    Conclusion: It doesn't matter whether it is based on sound science because only the simplistic sound-bite message is important.

    Somewhat like those financial reports that said, in 1000 words of legalese, that the sliced-and-diced mortgage-based structured investment vehicles were hugely risky, but with a neat summary that said they were rated AAA and were therefore a recommended buy. Of course those clever bankers only read the summary. Not forgetting that our politicians ritually vote on enormous policy documents that they haven't had a chance or even a desire to read.

  5. The entire IPCC evaluation process is flawed to the point of fraudulence. The Summary for Policymakers was finalised and published before the WG1 (Science) section. The editors of the latter were under implicit pressure and, in some cases I believe, explicit instructions to make the latter fit the former, instead of the other way around as should have been the case.
    Where this was not done the conclusions of WG1 were simply ignored by the editors of the Summary. The most egregious case goes to the heart of and in fact destroys the entire AGW paradigm.
    The key part of the science is in section WG1 8.6, which deals with forcings, feedbacks and climate sensitivity. The conclusions are in section 8.6.4, which deals with the reliability of the projections. It concludes:

    "Moreover it is not yet clear which tests are critical for constraining the future projections, consequently a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed"

    What could be clearer? The IPCC says that we don't even know what metrics to put into the models to test their reliability, i.e. we don't know what future temperatures will be and we can't calculate the climate sensitivity to CO2.
    This also begs a further question of what mere assumptions went into the "plausible" models to be tested anyway.

    Nobody ever seems to read or quote the WG1 report, certainly not the compiler of the Summary. In spite of the WG1 8.6.4 conclusion, the Summary says:

    "The understanding of anthropogenic warming and cooling influences on climate has improved since the TAR, leading to very high confidence that the global average net effect of human activities since 1750 has been one of warming, with a radiative forcing of +1.6 [+0.6 to +2.4] W m–2"

    This statement is fraudulent on its face when compared to 8.6.4.

    Those of us interested in objective science should try to see that the 8.6.4 conclusion gets as much exposure as possible. It deserves to be on the front page of the NY Times and The Guardian, quoted by the BBC, and read into the Congressional Record in the USA.
    Roger - see what you can do. Regards Dr Norman Page.

  6. I don't have an opinion on the quality of the article, but it is both interesting and scary that the politicization of climate research is evident in some of the reviewers' comments.