Thursday 27 January 2011

Scepticism versus Denialism - Delingpole Part II

IMPORTANT CORRECTION: I have corrected an erroneous statement about the makeup of IPCC Working Group II, left in with strikeouts. My apologies for this sloppy error.

Last night I wrote a piece about James Delingpole's unfortunate appearance on the BBC program Horizon on Monday. In that piece I referred to one of his own Telegraph articles in which he criticizes renowned sceptic Dr Ben Goldacre for betraying the principles of scepticism in his treatment of the climate change debate. That article turns out to be rather instructive, as it highlights perfectly the difference between real scepticism and the false scepticism commonly described as denialism. I don't know whether James is aware of the difference. Perhaps he hasn't stepped back far enough to see the intellectual trap he has set for himself. I will endeavour to explain by using his own examples.

This Telegraph piece is by far the mildest of his that I've ever read. It appears that James has tremendous respect for Ben Goldacre, who is a qualified medical doctor and has written a best-selling book about science scepticism called Bad Science and continues to write a popular Guardian science column. Here's what Delingpole has to say about Dr Goldacre:
Many of Goldacre’s campaigns I support. I like and admire what he does. But where I don’t respect him one jot is in his views on ‘Climate Change,’ for they jar so very obviously with supposed stance of determined scepticism in the face of establishment lies.
Okay, first of all we need to examine the meaning of scepticism because it seems that Delingpole doesn't get it. Scepticism is not some sort of rebellion against the establishment as Delingpole claims. It is not in itself an ideology. It is merely an approach to evaluating new information. There are varying definitions of scepticism, but Goldacre's variety goes like this: A sceptic does not support or promote any new theory until it is proven to his or her satisfaction that the new theory is the best available. Evidence is examined and accepted or discarded depending on its persuasiveness and reliability. Sceptics like Ben Goldacre have a deep appreciation for the scientific method of testing a hypothesis through experimentation and are generally happy to change their minds when the evidence supports the opposing view. Sceptics are not true believers, but they search for the truth. Far from challenging the established scientific consensus, Goldacre in Bad Science typically defends the scientific consensus against alternative medical views that fall back on untestable positions. In science the consensus is sometimes proven wrong, and while this process is imperfect it eventually results in the old consensus being replaced with a new one.

That's scepticism. So the question becomes "what is denialism?" Denialism is a mindset that chooses to deny reality in order to avoid an uncomfortable truth. Denialism creates a false sense of truth through the subjective selection of evidence (cherry picking). Unhelpful evidence is rejected and excuses are made, while supporting evidence is accepted uncritically - its meaning and importance exaggerated. It is a common feature of denialism to claim the existence of some sort of powerful conspiracy to suppress the truth. Rejection by the mainstream of some piece of evidence supporting the denialist view, no matter how flawed, is taken as further proof of the supposed conspiracy. In this way the denialist always has a fallback position.

In the next paragraph Delingpole makes the following claim:
Whether Goldacre chooses to ignore it or not, there are many, many hugely talented, intelligent men and women out there – from mining engineer turned Hockey-Stick-breaker Steve McIntyre and economist Ross McKitrick to bloggers Donna LaFramboise and Jo Nova to physicist Richard Lindzen….and I really could go on and on – who have amassed a body of hugely powerful evidence to show that the AGW meme which has spread like a virus around the world these last 20 years is seriously flawed.
I'm glad Delingpole has done this because it gives me the opportunity to debunk. So he mentions a bunch of people who are intelligent and talented and have amassed evidence to the effect that the consensus of AGW (Anthropogenic Global Warming) is a myth. Should I take his word for it? No. I am a sceptic. I will examine the evidence and the people behind it.

First there are McIntyre and McKitrick. These guys are heroes to the climate "sceptic" movement because they co-authored two papers that the denialists claim refute the consensus that average temperatures on earth have increased at an alarming rate during the last half of the 20th century. These two manuscripts were written in 2003 and 2004 and are both referred to as MM. The second of the two was submitted to and rejected by the journal Nature. Both make the same claim that the main feature of an earlier academic paper by Mann et al in 1998 (the "hockey stick" shape of the historical temperature record) is an artifact. MM claims that global temperatures are not accelerating. The claims have however been roundly disproved as explained here. It is worth noting at this point that neither man is a climate scientist. McKitrick is an economist and McIntyre is a mining industry policy analyst. It is clear from the very detailed rebuttal article that McIntyre and McKitrick have no qualifications to critique the earlier paper and betray fundamental misunderstandings of methodologies employed in that study. It should come as no surprise that the peer review process discredited them. This is not a global conspiracy. This is how science works. This Wikipedia article explains in better layman's terms how the MM claims are faulty. Here we can see lead author Michael Mann explain away the MM corrections:
"...so-called 'correction' was nothing more than a botched application of the MBH98 procedure, where the authors (MM) removed 80% of the proxy data actually used by MBH98 during the 15th century period... Indeed, the bizarre resulting claim by MM of anomalous 15th century warmth (which falls within the heart of the "Little Ice Age") is at odds with not only the MBH98 reconstruction, but, in fact the roughly dozen other estimates now published that agree with MBH98 within estimated uncertainties..."
It is difficult for me to find out much about blogger Donna LaFramboise. As far as I can see she runs her own blog at http://nofrakkingconsensus.wordpress.com and is the founder of another site here http://www.noconsensus.org/. It's not very clear to me what her credentials are or if she has any. If you search for her name you will see what I mean. She seems to be a critic of the so-called climate bible, a comprehensive report issued by the UN Intergovernmental Panel on Climate Change (IPCC) every five years or so. I am familiar with some of the criticisms of this panel. Working Group 2 famously overstated the estimated rate of disappearance of the Himalayan glaciers in 2007 and was forced to admit the error. [Struck out: Working Group 2 is a panel of biologists and sociologists whose job is to evaluate the impact of climate change. These people are not climate scientists.] The expertise on glaciers in the IPCC rests mostly within Working Group I (The Physical Basis), not Working Group II, which looks at the effects. Their report takes for granted the scientific basis of climate change, which has been delivered by Working Group I and is regarded as sound (of course this is just a conspiracy, right?). At any rate, I don't know why I should pay attention to this blogger. Anyone can write a blog and anyone with money can own a domain. She may be intelligent, but I don't know anything about her, and with all the millions of blogs out there I'm not convinced hers is of any special significance. Oh, and she's also apparently writing a book called Decoding the Climate Bible: Almost Nothing You've Heard about the UN's Uber Report is True. Catchy. But I'm not buying it.

Jo Nova. Another blogger? I really can't be arsed. Sorry.

Richard Lindzen. Okay, there's information about this guy. He has a wiki page, which is more than I can say for the previous two. He is an atmospheric physicist and Professor of Meteorology at MIT. In 2007 he had this to say on Larry King Live:
"we're talking of a few tenths of a degree change in temperature. None of it in the last eight years, by the way. And if we had warming, it should be accomplished by less storminess. But because the temperature itself is so unspectacular, we have developed all sorts of fear of prospect scenarios -- of flooding, of plague, of increased storminess when the physics says we should see less.
I think it's mainly just like little kids locking themselves in dark closets to see how much they can scare each other and themselves."
According to Wikipedia, it would seem that Lindzen is well respected in his field and represents the 3% of the climate science community who disagree with the 97% consensus. Fair enough. Interestingly there are other climate scientists that Delingpole could have mentioned such as Willie Soon and Sallie Baliunas. I'll let the link speak for itself as to why he probably left them out. The second to last paragraph of Delingpole's article asks this:
If  Goldacre really wants to stick his neck out, why doesn’t he try arguing against a rich, powerful, bullying Climate-Change establishment which includes all three British main political parties, the National Academy of Sciences, the Royal Society, the Prince of Wales, the Prime Minister, the President of the USA, the EU, the UN, most schools and universities, the BBC, most of the print media, the Australian Government, the New Zealand Government, CNBC, ABC, the New York Times, Goldman Sachs, Deutsche Bank, most of the rest of the City, the wind farm industry, all the Big Oil companies, any number of rich charitable foundations, the Church of England and so on?
I hope Ben won't mind if I take this one for him (first of all, Big Oil companies? Are you serious?) The answer is a question and the question is "Where is your evidence?"

72 comments:

  1. "These two manuscripts..."

    There were three actually.

    "...were written in 2003 and 2004..."

    Umm, 2003 and 2005 actually

    "...and are both referred to as MM. The second of the two was submitted to and rejected by the journal Nature."

    The Nature submission (which was a fourth article) was rejected on grounds of space.

    "Both make the same claim that the main feature of an earlier academic paper by Mann et al in 1998 (the "hockey stick" shape of the historical temperature record) is an artifact."

    No, the first paper (MM03) outlines errors in the Mann et al database - use of old versions, extrapolations, truncations, mislocations. The second paper may be the one you are thinking of (MM05 -GRL).


    "MM claims that global temperatures are not accelerating."

    No, they have never claimed that the world has not warmed.

    "The claims have however been roundly disproved as explained here."

    The NAS report and the Wegman report suggest otherwise. Do you really want to argue that a small bunch of trees in NW USA can be used to reconstruct global temperatures? (This is the essence of Mann's argument you link to) With all due respect that is a bunch of woo.

    "It is worth noting at this point that neither man is a climate scientist. McKitrick is an economist and McIntyre is a mining industry policy analyst. It is clear from the very detailed rebuttal article that McIntrye and McKitrick have no qualifications to critique the earlier paper and betray fundamental misunderstandings of methodologies employed in that study. "

    Ad hominem.

    "It should come as no surprise that the peer review process discredited them."

    It hasn't. See the NAS report and the Wegman report.

    ReplyDelete
  2. Very good post Matt. Your essay also nicely underlines the fact that it takes a good deal more work for a sceptic to research and hence either refute or support an argument, than for a denialist to simply give a list of others who share their prejudices.

    Cheers
    Alan
    http://alanwinfield.blogspot.com

    ReplyDelete
  3. Bishop Hill, please forgive me. If I make factual errors I will endeavour to correct them. I link to a source that suggests there were two MM manuscripts from 2003 and 2004. As for the accusation of ad hominem, no. It is my opinion after gathering evidence that the men who wrote those papers are not qualified. Even within a broad discipline of science there have been publications where authors have been outside their areas of expertise and have been discredited. Often these are industry funded and the industry can't find a champion within the narrow field. Here is an outline of prominent scientists opposed to the consensus.

    http://en.wikipedia.org/wiki/List_of_scientists_opposing_the_mainstream_scientific_assessment_of_global_warming

    I am a sceptic not a scientist, but I spent several hours researching this piece. I don't get paid for my time. I leave it to others to evaluate those positions.

    ReplyDelete
  4. @ Bishop Hill

    Ah - the Wegman report.

    You do realise that Wegman is in a bit of trouble for academic dishonesty, and for cutting and pasting stuff he clearly didn't understand?
    Where's the scepticism about that?

    http://www.usatoday.com/weather/climate/globalwarming/2010-11-21-climate-report-questioned_N.htm

    As for the absurd claim that the only reason M&M weren't published in Nature was 'lack of space' - that's funny, because that's exactly the same reason they don't publish my paper on the perpetual motion machine. Evidence? Why, a link to the following blog should be sufficient, shouldn't it - you can't possibly be sceptical about claims you find on blogs, can you??

    http://crispian-jago.blogspot.com/2011/01/james-delingpole-hockey-stick.html

    ReplyDelete
  5. There's nothing to forgive. People make mistakes. It's fine so long as they are corrected when uncovered.

    This story about there being two papers, one of which was rejected, is one I come across occasionally, and we can see where it originated from. It should have been corrected.

    The arguments that are put forward in that RealClimate piece don't rise much above the level of homeopathy to my mind. The idea that you can take a handful of tree ring series (the bristlecones) from one part of the US, known to be contaminated with a non-climatic signal, and then use them to reconstruct global temperatures is an outrage against science. (Arguing that a PC4 needs to be included is a recognition that the hockey stick shape is not a common pattern in the database).

    Can I recommend a good book on the subject? (Snip the self-advertising if you like). :-)

    ReplyDelete
  6. Deano

    Yes I've seen that. As I said on my blog post at the time, the allegations lead to two possible conclusions:

    1. That Wegman et al are guilty of plagiarism and that Mannian short-centred PCA is biased.
    2. That Wegman et al are not guilty of plagiarism and that Mannian short-centred PCA is biased.

    Wegman's conclusions are unaffected. Nobody is arguing that short-centring is a valid technique any longer. Ian Jolliffe, the world's leading expert on PCA says it's wrong. David Hand of the Royal Statistical Society has said it's wrong.

    This is a sceptic blog, right? I hope that everyone therefore recognises that you are playing the man and not the ball. (BTW, that was why I raised the ad hominem argument with our host - arguing credentials is an ad hominem argument. The expression has become synonymous with "rude", but it literally means criticising the man and not his arguments.)

    ReplyDelete
  7. Deano again

    Sorry, are you saying that McIntyre has fabricated the rejection letter from Nature?

    ReplyDelete
  8. Bishop Hill. Your suggestion that global climate reconstructions depend entirely on 'a handful of tree ring series' from one part of the US is false, isn't it?

    Let's say to satisfy you we discard them completely.

    What do the climate reconstructions from ice cores, boreholes, other tree ring series, and many other sources say?

    Precisely the same thing.

    You're using the old creationist tactic of trying to pick a hole in the science, rather than provide any evidence for your position. Even if you managed to discredit one line of evidence, there'd still be hundreds of others, and there would be no reason to revert to your preferred default position

    ReplyDelete
  9. @ Bishop Hill

    Ah, so you're the author of The Hockey Stick Illusion. Your plug is accepted. I will perhaps give it a go.

    ReplyDelete
  10. Flay

    Great. I'd love to know what you make of it.

    ReplyDelete
  11. Deano

    If one looks at the AR4 spaghetti graph, pretty much all the reconstructions either use bristlecones or don't extend back to the putative medieval warm period - i.e. the boreholes.

    In terms of other tree ring series, take a look at Mann's database. (google for pcproxy.txt). Not many hockey sticks in there! (I can email a copy if you can't find it, but you should get it from Mann's website).

    While we're on the subject of logical fallacies, comparing me to a creationist is the fallacy known as the "abusive analogy". We do logic here, right? :-)

    ReplyDelete
  12. Re the Wegman stuff

    I actually agree with Bishop Hill that the Wegman plagiarism is less interesting than the quality of the Wegman report itself.

    But I think the Wegman report is highly misleading.

    see the Deep Climate post

    http://deepclimate.org/2010/11/16/replication-and-due-diligence-wegman-style/

    ReplyDelete
  13. Peter

    Are you disputing the bit about short-centred PCA? This frankly is the only bit that matters.

    ReplyDelete
  14. Bishop - you really need to read that post, but everyone now agrees short centred PCA is a poor technique, and could theoretically give misleading results. But when the effect is quantified for MBH98, its effect is negligible.

    ReplyDelete
  15. Here, from the National Research Council’s report on paleoclimatology: “MBH methodology does not appear to unduly influence reconstructions of hemispheric mean temperature” (Surface Temperature Reconstructions for the Last 2,000 Years, p. 113)

    you can download the pdf :

    http://www.nap.edu/catalog.php?record_id=11676

    ReplyDelete
  16. Peter

    I agree with the post that the hockey stick pattern is still there when the data is correctly centred. It must be there, because it's there in the raw data. The problem is it's there as a PC4. This is the point I made in my first post on this thread. Using data from a small group of trees in NW USA to reconstruct global temperatures, particularly when those trees are known to be contaminated with a non-climatic signal, is woo.

    ReplyDelete
  17. Peter

    Thanks, I've read the NRC report. They say that although Mann used incorrect methodology and inappropriate data (bristlecones), his results were still "plausible" (damning with faint praise?) because of their (alleged) similarity to other reconstructions. The chairman of the panel, Gerald North, has admitted that the panel didn't examine the other studies to see if they used the same inappropriate data as Mann. They did - one used short-centring too, IIRC.

    ReplyDelete
  18. Bishop,

    Do you accept that for MBH98 the short centring makes a negligible difference?

    That is not what the Wegman report says.

    ReplyDelete
    Not really. If you centre properly and bring in the PC4 I'm sure you don't get much difference, but do you see why using the PC4 is a problem? Using data known to be contaminated with a non-climatic signal would make the result invalid anyway, surely? (The bristlecones dominate the reconstruction because the (non-climatic) uptick correlates well with the instrumental uptick - the weighting is based on how well they correlate.)

    I feel sure we must be able to agree that using contaminated data is wrong.

    ReplyDelete
  20. OK - but you are conflating issues - I was talking purely about the short centring issue and trying to get to the bottom of that one before moving on to choice of data.


    "I feel sure we must be able to agree that using contaminated data is wrong."

    I don't think it is necessarily as black and white as that.

    I don't know this issue in detail, but if all you have is 'contaminated data' then it depends whether you can still get useful information from it.

    Obviously the contamination is likely to affect the precision, but you need to quantify the effect.

    ReplyDelete
  21. OK, if I try to frame my argument only with reference to the short-centring, why would you want your reconstruction dominated by a PC4? It's by definition, an obscure pattern in the data. There just aren't many hockey stick series in the dataset. Why should the result look like a hockey stick?

    The problem is that nobody knows what the effect is or how big it is. It's not therefore possible to get any meaningful information out at all.

    ReplyDelete
  22. OK, if you do the analysis without short centring, I agree the hockey stick is PC4.

    So when you apply what Mann et al says is the 'standard' selection rule to the data without short centring, it indicates 5 PCs should be included in the analysis.

    Is there another alternative selection rule that when you apply it to the data without short centring indicates less than 4 PCs should be included ? (if you've got a link to this I would be interested)

    ReplyDelete
    There are no rules in this area. McIntyre cites Jackson [1993] as saying "there are few guidelines". Rule N is one of several rules of thumb that could have been used. Other possibilities include the Bootstrapped Kaiser–Guttman Criterion, the Broken Stick, and Bartlett’s Test of the Equality of Variance. (These are the ones I mention in the Hockey Stick Illusion). Original discussion here.
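
    For anyone curious what those rules of thumb actually look like, here is a generic Python sketch of two of them (Kaiser-Guttman and the broken stick) applied to an arbitrary data matrix. This is purely my own illustration - it makes no attempt to reproduce the MBH98 data or anybody's actual retention decision.

    import numpy as np

    def retained_pcs(data):
        """Number of PCs kept under two common rules of thumb."""
        # eigenvalues of the correlation matrix, largest first
        corr = np.corrcoef(data, rowvar=False)
        eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
        p = len(eigvals)

        # Kaiser-Guttman: keep components whose eigenvalue exceeds the average eigenvalue
        kaiser = int(np.sum(eigvals > eigvals.mean()))

        # Broken stick: keep PC k while its variance share beats the broken-stick expectation
        share = eigvals / eigvals.sum()
        broken_stick = np.array([np.sum(1.0 / np.arange(k, p + 1)) / p for k in range(1, p + 1)])
        keep = 0
        while keep < p and share[keep] > broken_stick[keep]:
            keep += 1
        return kaiser, keep

    rng = np.random.default_rng(1)
    demo = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 20))  # arbitrary correlated data
    print(retained_pcs(demo))

    Different rules of thumb can happily give different answers on the same eigenvalue spectrum, which is part of the point about there being "few guidelines".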

    McIntyre has pointed out that Mann can't have actually used Rule N because it doesn't explain the number of PCs retained in the other PC calculations. What criteria were used for PC retention in the original paper remains the great unsolved mystery of the Hockey Stick.

    Again though, I come back to the point that a reconstruction that ends up with its shape based on just a small fraction of its data is not robust. So even if you could identify a "rule" that would include the PC4, would you find the result credible (particularly as we know the bristlecones have a non-climatic signal)?

    ReplyDelete
  24. Peter

    Sorry for the delay. Posted a response an hour or so ago, but can't have filled in the captcha.

    There are no rules in this area. McIntyre quotes one expert in the area as saying there are "few guidelines". There are lots of rules of thumb (see same link), but no evidence that Mann used Rule N in practice. What criteria he did use is one of the great remaining mysteries of the Hockey Stick.

    That said, even if you could find a rule that allowed retention of the PC4, would you think that a reconstruction dominated by such a low-order pattern was robust? Especially when you know that the PC4 contains a non-climatic signal too?

    ReplyDelete
  25. @ Bishop Hill, your comment was too long and went into Spam. I've corrected that.

    ReplyDelete
  26. OK - Thanks for the link (and your time) - it doesn't really answer my question which is "Is there any alternative PC selection rule that you legitimately could use which would justify excluding the PC4 ?"

    From the RC post http://www.realclimate.org/index.php/archives/2005/02/dummies-guide-to-the-latest-hockey-stick-controversy/ it basically asserts that
    1) you should use all PCs that have a significant fractional difference from a set of randomly generated PCs that have the same noise characteristics
    2) If your answer depends on the number of PCs included, then you haven’t included enough.

    If that is true that would seem to argue that PC4 should be included.

    I'm not sure if this point is disputed or not.

    (That still leaves the arguments about the verification skill and the 'contaminated data')

    ReplyDelete
  28. Bishop - I was interested in your overall impression of palaeoclimatic studies in general (I'm no expert), but I have the impression that MBH98 was the first attempt, and that later studies (eg Moberg) found a more pronounced MWP but still led to the IPCC conclusion
    "Average Northern Hemisphere temperatures during the second half of the 20th century were very likely higher than during any other 50-year period in the last 500 years and likely the highest in at least the past 1,300 years"
    (with likely meaning >66%)

    This seems quite a modest conclusion given none of the 'spaghetti graphs' show a temperature approaching the temperatures in the 2nd half of the 20th Century.

    ReplyDelete
  29. I find it amazing that we are still discussing this issue in 2011. If you go to McIntyre's 2005 FAQs at http://climateaudit.org/faq/ you will find the following statement

    "There is surprising agreement between Mann and ourselves on the effect of differing assumptions on the NH temperature reconstruction – if the assumptions are specified exactly. For example, both of us get high early 15th century results with centered PC calculations and 2 PCs in the AD1400 North American network and both of us get low early 15th century results with centered PC calculations and 5 PCs in the North American network.
    We have tried to canvass these matters in an evenhanded way in our E&E article (see pages 75-76) to show what is agreed and what is not agreed. Mann has categorically denied that his PC method generates hockey stick shaped series from red noise (realclimate#Temperature question #5), but we see no way that he will be able to sustain this argument, in the face of the compelling evidence to the contrary in our GRL paper."

    There is of course more. The real issue is whether the bristlecones are picking up temperature or whether they are - as the original collectors of the data speculated - picking up CO2 fertilisation, or a subset of bark-damaged trees whose rings have been distorted by the mechanical damage, as McIntyre suggests.

    ReplyDelete
  30. Peter

    You could buy my book for a detailed account of these issues, but in brief:

    MBH98 was not the first multiproxy study.

    Moberg does indeed find a more pronounced MWP, although there are some very troubling aspects to his study too - for example he uses increased concentrations of cold water foraminifera in Arabia as evidence of warming.

    The studies shown in AR4 and the problems with them are (IIRC):

    MBH99 (bristlecones, short-centring)
    Mann & Jones 03 (bristlecones, short-centring)
    Briffa 00 (Yamal/Polar Urals, divergence)
    Esper 02 (bristlecones)
    Jones 98 (bodged data, etc)
    Rutherford 05 (bristlecones)
    Moberg (bristlecones, Yamal see also above)
    D'Arrigo (Yamal)
    Hegerl (bristlecones)
    Pollack & Smerdon (doesn't cover Medieval Warm Period)
    Oerlemanns (Doesn't cover MWP)

    Many, if not most of these studies also have the question of cherrypicking of series hanging over them.

    That is why the medieval temps all appear lower than modern temps - the shapes of all the relevant graphs are being driven by the bristlecones and a handful of equally problematic series - Yamal/Polar Urals and Tornetrask being the best known (and most alarming from a scientific point of view).

    ReplyDelete
  31. Bishop - OK, thanks - so you think that the whole body of scientific literature in this area is flawed and that none of the participants in the peer reviewed literature have raised these issues? That doesn't ring true to me - there is the opportunity for someone to really 'make their name' scientifically by pointing this out in the peer reviewed literature, and this has been going on for over 10 years, plenty of time for the 'self correcting' nature of science.

    I found Von Storch / Zorita's commentary on this quite convincing http://blogs.nature.com/climatefeedback/2007/05/the_decay_of_the_hockey_stick.html

    "science itself has indeed corrected claims of premature knowledge"

    He also seems to agree that the short centring / PC4 was not an issue

    "we do not think that McIntyre has substantially contributed in the published peer-reviewed literature to the debate about the statistical merits of the MBH and related method. They have published one peer-reviewed article on a statistical aspect, and we have published a response – acknowledging that they would have a valid point in principle, but the critique would not matter in the case of the hockey-stick."

    ReplyDelete
  32. Peter

    The von Storch and Zorita study involved making pseudo proxies - artificial proxies constructed from a climate trend plus noise to make them look like a real proxy. Then they fed these into Mann's algorithm and found the answer wasn't very different to the original study. The problem was that VS and Z assumed that there should be a reasonable correlation between the local temperature and the proxy value - not unreasonable you would say, because the tree growth should respond to local temperature, right?
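
    (For concreteness, here is a toy Python version of what a pseudo proxy is - my own sketch with invented numbers. The real von Storch and Zorita experiments drew the temperature series from a climate model run rather than the made-up series below.)

    import numpy as np

    rng = np.random.default_rng(2)

    def pseudo_proxy(local_temp, target_corr, rng):
        """Local temperature plus white noise, scaled so corr(proxy, temp) is roughly target_corr."""
        noise = rng.standard_normal(local_temp.shape)
        snr = target_corr / np.sqrt(1.0 - target_corr ** 2)   # signal-to-noise ratio giving that correlation
        signal = (local_temp - local_temp.mean()) / local_temp.std()
        return snr * signal + noise

    years = np.arange(1000, 2001)
    local_temp = 0.002 * (years - 1000) + 0.3 * np.sin(2 * np.pi * years / 70.0)  # invented 'local climate'
    proxy = pseudo_proxy(local_temp, 0.5, rng)
    print(round(np.corrcoef(proxy, local_temp)[0, 1], 2))      # should come out near 0.5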

    The problem is that in Mann's database, there was essentially no correlation at all between local temperature and proxy value. He claims the trees are responding to larger-scale climate signals by means of "teleconnections". This explanation is better known in skeptic circles as "woo". In reality he had a big database and his algorithm mined for anything that correlated with temperature in the 20th Century, giving the blade of the stick. In earlier periods the correlations no longer held and the proxies all cancelled out, giving the long flat handle.

    ReplyDelete
  33. Peter

    Re your second point, it is clear from the climategate emails that the scientific literature is largely closed to sceptics. There is mention in the emails as far back as 2003 that only GRL and Climate Research would accept sceptic papers. Then there was the Soon and Baliunas affair and the coup at GRL after they published McIntyre's paper (see my book), so it is pretty much closed now. Mann says in the emails that the hole at GRL had been "plugged".

    It's not watertight, of course. We saw the O'Donnell et al paper the other week, but it took a year and over a hundred pages of correspondence to get past the peer review. It's clear though that the playing field is not level.

    ReplyDelete
  34. Bishop Hill.

    You seem desperate to reject all of the data from other sources, including independent data from ice cores, corals, stalagmites and boreholes, as well as from historical records such as grape harvests and the very long record of cherry blossom in China.
    Then of course there is the instrumental record, which goes back beyond 1850 - and here in England goes back to 1659.

    By picking an argument about a very minor statistical point, you are trying to claim that this invalidates the entire body of evidence, and that your position wins by default. I'm afraid that is precisely the tactic used by creationists, and it's clear to me you've borrowed it from them. However if you really claim that there is no possibility of reconstructing past temperature by proxy, you destroy your own preferred default position, which is that there was a medieval warm period, which was warmer than today. You can't have your cake and eat it in science I'm afraid. If you insist we have to rely on the instrumental record, then guess what, it too shows a 'hockey stick'.
    You may be able to pull the wool over the eyes of fools like Delingpole, but I'm afraid that your tricks don't work on scientists, Mr Bishop.

    ReplyDelete
  35. You'll also be aware that Wegman C&P'd stuff straight from M&M Bishop, without understanding what he was doing?

    http://deepclimate.org/2010/11/16/replication-and-due-diligence-wegman-style/

    Wegman's 'report' isn't only contaminated by plagiarism, it's contaminated by academic sloppiness and political influence. It's hardly surprising it reflects the findings of M&M since it amounts to little more than an uncritical parroting of their 'work', bulked out with some stuff his students found on the internet. Call that 'Ad Hominem' if you like - but since he's under investigation by his university for academic misconduct on this paper, I'm afraid that that defence is rather premature. In the meantime his plagiarism is there for all to see, and you don't need a scientific education to spot it.
    You seem blind to the errors of the handful of people on your side of the argument, whilst claiming that the entire global science community have got things wrong. I suggest that you've got a rather large plank in your eye, and your sceptic vision has a rather large blind spot. Conveniently perhaps, that blind spot has enabled you to write a polemical book which you can sell to fools like Delingpole, but I'm afraid your contribution to science is zero.

    ReplyDelete
  36. Keep it clean boys. Keep it clean.

    ReplyDelete
  37. I think there is a problem with the way these proxy reconstructions are done that many people don't get. Under normal circumstances, you'd think that the more data was used - the better. And the more confidence you could have in the result.

    And if you used a really simple technique, say just taking the average of all the datasets being used, that would be true. All the data would have the same weight/importance. The more data you had the better the result. If one or two datasets turned out to be - well rather questionable, it wouldn't affect the final result. They'd be outweighed by the majority of the datasets.

    But that doesn't apply to the way these proxy reconstructions are done. And I think many people don't realise how fragile the results therefore are.

    In these reconstructions, some datasets are more equal than others. Some technique (usually statistical) is used to attach a lot of weight to one or two datasets. So in the case of the first hockey stick papers MBH98/99, although several dozen proxies are used from all over the northern hemisphere, the published curve is entirely dependent on one or two tree ring proxies, both from mountainous parts of the western United States.

    Without the bristlecone pines, there is no hockey stick curve. With them there is. And all the other data is largely irrelevant. You could throw most of the rest of the data away - or replace it with random numbers. You could add a few hundred more random datasets and it wouldn't matter.

    So it's not what you might expect. Sure these studies use lots of datasets - but the answer depends on just one or two datasets.
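
    A toy illustration of that point, with entirely made-up numbers (this is not anybody's real method or data): take fifty trendless noise series plus two 'hockey stick' series, and compare a plain average with a composite where each series is weighted by how well it matches a rising modern 'instrumental' record.

    import numpy as np

    rng = np.random.default_rng(3)
    n_years, n_flat, n_modern = 600, 50, 100
    instrumental = np.linspace(0.0, 1.0, n_modern)             # invented rising modern record

    flat = 0.5 * rng.standard_normal((n_years, n_flat))        # fifty proxies with no trend at all
    blade = np.zeros(n_years)
    blade[-n_modern:] = np.linspace(0.0, 3.0, n_modern)        # the 'hockey stick' shape
    proxies = np.column_stack([flat,
                               blade + 0.2 * rng.standard_normal(n_years),
                               blade + 0.2 * rng.standard_normal(n_years)])

    simple = proxies.mean(axis=1)                               # every series gets equal weight

    # weight each series by its correlation with the modern record (negative weights floored at zero)
    weights = np.array([max(np.corrcoef(p[-n_modern:], instrumental)[0, 1], 0.0) for p in proxies.T])
    weighted = proxies @ (weights / weights.sum())

    print("modern rise, simple mean:  ", round(simple[-n_modern:].mean() - simple[:-n_modern].mean(), 2))
    print("modern rise, weighted mean:", round(weighted[-n_modern:].mean() - weighted[:-n_modern].mean(), 2))

    With equal weights the two odd series are diluted fifty-to-two; with correlation weighting they carry most of the composite.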

    And it gets worse. Although IPCC AR4 includes a load of different reconstructions, they pretty much all rely on the same two or three datasets to get the hockey stick.

    And there are good reasons for thinking that the key datasets should not be included in this kind of reconstruction. The bristlecones have much of their bark missing. Most dendro experts would not use them. The Yamal data has a problem, something like there is one tree with a very strange response that normally you would discount but in this case has been included. In a more recent paper, Mann has used some Scandinavian lake sediments. Only they are used upside down, and the person who obtained the data says you can't trust the 20th century bit (the blade of the hockey stick) because it's caused by changes in land use in the area.

    It looks like lots of different scientists have independently done research using many hundreds of different datasets. And that is true - well maybe. But it's misleading. They've used hundreds of datasets that are just padding and don't make any contribution to the results. They have all used, possibly had to use, the same two or three datasets to get their hockey stick shaped results.

    Get under the skin of the statistics and the methodology used, and it turns out there isn't a huge amount of data supporting the claims being made.

    ReplyDelete
  38. Deano

    I think I've responded to the point re other types of proxy elsewhere. Of the studies in AR4 that are not primarily tree ring based none go back to the MWP. I don't reject these other proxies per se though - I think they are far likelier to be usable than tree rings, which are contaminated by many other factors. However, the IPCC doesn't rely on them so I'm assuming they must have their own doubts about their reliability.

    The instrumental record in central England back to 1659 can tell us nothing about the MWP on a hemispheric or global scale.

    You say I am arguing "about a very minor statistical point...to claim that this invalidates the entire body of evidence". I'm sorry but you haven't been following the argument. I state quite clearly that, in the AR4 studies, the short-centred PCA argument applies to MBH99 and MJ03. It indeed discredits these two papers, but the idea that I think it somehow discredits any of the other papers is an invention of your own.

    I haven't said it is impossible to recreate temperatures from proxies. Again, this is something that you have made up. As a science though, temperature reconstruction is in its infancy and we have yet to see whether anything reliable will come out of it.

    We have already discussed Wegman. You don't seem to understand what an ad hominem argument is. Even if the inquiry finds he or his co-authors guilty of plagiarism it doesn't disprove his argument that short-centring is wrong. Suggesting it does is fallacious. Everybody, but everybody who has looked at this issue agrees that it is not a valid procedure. PCA only works with centred data. It is intrinsic to the underlying maths.

    You say my book is polemical. I assume then that you haven't read it. Something else you made up I guess.

    ReplyDelete
  39. If you claim that none of the data goes back to the 'Medieval Warm Period' Bishop Hill - how do you know there was a 'Medieval Warm Period'?

    What data are you using?

    ReplyDelete
    As for Wegman, if he is found guilty of academic misconduct, I'm afraid that casts doubt on the piece of work in question - as we will no longer be able to assume that he was acting in good faith, or that he hasn't indulged in other deceitful activity to come to his claimed conclusions. Indeed, upon deeper investigation this would seem to be the case. It may be there are valid criticisms of the statistical methods used, but if Wegman was just uncritically parroting the work of M&M, as appears to be the case, that does not amount to validation of that work, and his report can be discounted as worthless.

    ReplyDelete
  41. Nick - you clearly don't know what you're talking about - the independent data sets show hockey sticks before they're combined, there is no question of them only showing a hockey stick if they are combined with data sets that Bishop Hill disapproves of.

    ReplyDelete
  42. Deano - Perhaps I didn't make myself clear. When I say dataset I am referring to some proxy dataset, something like a tree ring reconstruction, or boreholes, or sediment deposits, or slices through stalagmites. Something like MBH98 merges several dozen of those to produce a single multi-proxy reconstruction, which I've tried to refer to as a reconstruction, but I might not have been totally consistent in my post.

    Many separate reconstructions show some kind of hockey stick curve. The official collection of these is shown in a graph in IPCC AR4, and is sometimes known colloquially as the spaghetti graph. Each of these is sort of hockey stick shaped, and if you averaged them together you would probably still get something hockey stick-ish shaped.

    But that wasn't what I was trying to say. One of these multi-proxy reconstructions is based on several separate datasets. And many/most of those aren't, in fact, hockey stick shaped.

    What I wanted to convey, was that the techniques being used to get a hard to locate signal out of very noisy data, are clever. But they have a downside. They aren't very robust. If 5% of the data turns out to be suspect, it might have no effect on the result - or it might wipe out the results completely.

    ReplyDelete
  43. Okay Nick - unlike Bishop you concede that the data from independent sources does show hockey sticks. If you wished to find a signal in other data that is more messy, then Principal Components Analysis is the appropriate method to use. There are a number of different approaches you can take, and different parameters that you can set - but these still reveal a hockey stick signal, and provide additional confirmation of what we know from independent sources, and the instrumental record. There are claims that the application of PCA to random data produces a hockey stick - it doesn't, and attempts to do so depend on the transparently dishonest technique of making a lot of data runs, and cherry picking graphs which suit your purpose.

    This has been exposed at 'Deep Climate' and shows that M&M were cheating and Wegman was guilty of lack of due diligence in simply parroting them:

    http://deepclimate.org/2010/11/16/replication-and-due-diligence-wegman-style/

    "It turns out that the sample leading principal components (PC1s) shown in two key Wegman et al figures were in fact rendered directly from McIntyre and McKitrick’s original archive of simulated “hockey stick” PC1s. Even worse, though, is the astonishing fact that this special collection of “hockey sticks” is not even a random sample of the 10,000 pseudo-proxy PC1s originally produced in the GRL study. Rather it expressly contains the very top 100 – one percent – having the most pronounced upward blade. Thus, McIntyre and McKitrick’s original Fig 1-1, mechanically reproduced by Wegman et al, shows a carefully selected “sample” from the top 1% of simulated “hockey sticks”. And Wegman’s Fig 4-4, which falsely claimed to show “hockey sticks” mined from low-order, low-autocorrelation “red noise”, contains another 12 from that same 1%!,

    ReplyDelete
  44. Bishop,

    1) The O'Donnell paper looks good quality to me. Maybe he could take some of McIntyre's points on palaeoclimatology and strip out the extraneous issues (e.g. short centring and PC4) and distil the real issue (contamination of data ?), state it clearly, quantify it so it can be debated properly in the peer reviewed literature. I note that Ryan O’Donnell in his post at the Air Vent said of his recent paper

    " I am quite satisfied that the review process was fair and equitable, although I do believe excessive deference was paid to this one particular reviewer at the beginning of the process"

    2) My impression is that the peer reviewed journals are bending over backwards to represent the 'sceptic' viewpoint, even if the quality of the papers are very poor. Some examples (this might get caught by the 'spam filter'):

    McLean, J. D., C. R. de Freitas, and R. M. Carter (2009), Influence of the Southern Oscillation on tropospheric temperature, J. Geophys. Res., 114, D14104, doi:10.1029/2008JD011637.

    "Thermal pollution causes global warming" Bo Nordell, Global and Planetary Change 38 (2003) 305–312.

    G. V. Chilingar, L. F. Khilyuk, O. G. Sorokhtin. Cooling of Atmosphere Due to CO2 Emission, Energy Sources, Part A: Recovery, Utilization, and Environmental Effects, 30(1), 1 - 9 (2008).

    Christopher Essex, Ross McKitrick, Bjorne Andresen; Does a Global Temperature Exist? Journal of Non-EquilibriumThermodynamics, 32(1) 1-27.

    Ferenc Miskolczi; Greenhouse effect in semi-transparent planetary atmospheres. Quarterly Journal of the Hungarian Meteorological Service 111(1), January–March 2007, 1–40.

    Gerhard Gerlich and Ralf Tscheuschner, Falsification Of The Atmospheric CO2 Greenhouse Effects Within The Frame Of Physics, International Journal of Modern Physics B, 23(3), 275-364 (2009).

    Heat capacity, time constant, and sensitivity of Earth's climate system. Schwartz S. E., J. Geophys. Res., D24S05 (2007).

    Spencer, R.W., and W.D. Braswell "Potential biases in cloud feedback diagnosis: A simple model demonstration", J. Climate, vol. 21, p. 5624 (2008)

    Nicola Scafetta and Bruce J. West, “Estimated solar contribution to the global surface warming using the ACRIM TSI satellite composite,” Geophys. Res. Lett., 32(24), doi:10.1029/2005GL023849 (2005)

    Nicola Scafetta and Bruce J. West , “Phenomenological solar signature in 400 years of reconstructed Northern Hemisphere temperature record,” Geophys. Res. Lett., 33, doi:10.1029/2006GL027142. (2006).

    Nicola Scafetta, and Bruce J. West, “Phenomenological reconstructions of the solar signature in the NH surface temperature records since 1600.” J. Geophys. Res., 112, D24S03, doi:10.1029/2007JD008437 (2007)

    http://onlinelibrary.wiley.com/doi/10.1002/joc.1651/abstract

    ReplyDelete
  45. Deano - I'm not sure you aren't putting words into my and Bishop's mouth. In a previous post Bishop listed in brackets after each named reconstruction the key data-series used in each reconstruction that gives each one its hockey stick shape.

    I am sure if you cut down a large enough number of trees or stalactites; or pull up enough drill samples from lake beds, or count enough cherry blossom dates in Japanese temples, you'll get some hockey stick curves. You'll also get some elephant shaped curves, some camel shaped curves (bactrian and dromedary) and even some flat lines.

    Whether any of these is significant or has any relationship to temperature is harder to work out.

    I'm not sure PCA is the way to do this sort of thing - it's certainly one. Jeez, I thought this whole hockey stick argument had been settled years ago. I'm not sure I can remember the details. To be honest when it comes to statistics I'm with Lord Rutherford (he said something along the lines of: if your experiment needs statistics to understand the result - design a better experiment).

    PCA isn't actually needed. I think the latest Mann proxy reconstruction avoids using it completely, and McShane & Wyner use other methods but the same data as Mann.

    The thing about the original Mann paper - and most of the ones that use a statistical technique - is they try to use a correlation with the instrumental temperature record to justify that one proxy is 'better' than another. Since the instrumental record only exists for the last hundred or so years, and in general shows a steep warming, this tends to prefer any dataset that has a steep hockey stick blade.

    At first glance, this seems like a reasonable thing to do. But I do wonder. In the case of the Mann paper it means that we end up trusting some bristlecone pines that happened to grow in sync with the instrumental temperature record during a large chunk of the 20th century. Although that means they probably didn't grow in sync with the actual temperature where they were actually growing. That's a bit physically implausible - though not impossible.

    But what worries me more is that it's suspiciously like one of those circular proofs. Like those for the existence of God put forward by theologians. You start by selecting data that shows an 'unprecedented warming in the 20th Century'. By selecting on a correlation with instrumental temperature that is what you are doing. And then after a lot of statistical analysis you prove that there has been 'unprecedented warming in the 20th century'. And also look how well our data correlates with the instrumental record.
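
    Here is a rough sketch of that worry in Python, with my own made-up numbers (it is not a claim about what any particular paper did): screen pure noise 'proxies' by their correlation with a rising modern record, then average the survivors. The screened composite picks up a 20th-century blade by construction, even though there is no signal anywhere in the data.

    import numpy as np

    rng = np.random.default_rng(5)
    n_proxies, n_years, n_modern = 1000, 600, 100
    instrumental = np.linspace(0.0, 1.0, n_modern)              # invented rising modern record

    proxies = np.cumsum(rng.standard_normal((n_proxies, n_years)), axis=1)   # pure noise, no climate in it
    corr = np.array([np.corrcoef(p[-n_modern:], instrumental)[0, 1] for p in proxies])
    screened = proxies[corr > 0.3]                               # keep only series that 'pass screening'

    all_mean = proxies.mean(axis=0)
    screened_mean = screened.mean(axis=0)
    print(len(screened), "of", n_proxies, "series pass screening")
    print("modern uptick, all series:     ", round(all_mean[-n_modern:].mean() - all_mean.mean(), 2))
    print("modern uptick, screened series:", round(screened_mean[-n_modern:].mean() - screened_mean.mean(), 2))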

    I've not followed the Wegman plagiarism thing in detail. And I get the impression that deep climate has an agenda of his own. The science is the science - whoever actually wrote it.

    But from what I can tell, whenever anyone who knows more about statistics than Mann or McIntyre has looked at this, they have - at least on the technical aspects of the statistics - backed McIntyre.

    That includes Wegman, the NAS, Jolliffe and more recently the McShane and Wyner paper. That last one is actually surprisingly readable considering it's a statistics paper. Jolliffe wrote the textbook on PCA and was one of the people who peer reviewed the original Mann hockey stick paper. He's since backed McIntyre's criticisms and has said he wouldn't have let the Mann paper through so easily if he'd known about the non standard centering issues.

    ReplyDelete
  46. Nick,

    1) forget the Wegman plagiarism issue for a minute and look at http://deepclimate.org/2010/11/16/replication-and-due-diligence-wegman-style/

    2) All Jolliffe said is short centred PCA is a poor technique - everyone agrees with that - he didn't quantify the effect on MBH98 (which is minimal - I think everyone that has looked at that agrees - except McIntyre)

    ReplyDelete
  47. Peter

    Are you basing your comments on the impact of short-centring on von Storch? If so, I think you need to say where my analysis of their paper is wrong. This is a very fancy, and as I have indicated, rather flawed approach. The question really should be what happens when you take out the bristlecones (the NOAMER PC1)? There is an absolutely gobsmacking story of what Mann had to do to get the MWP cooler than the current era.

    1. Use the bristlecones
    2. Use short-centring
    3. Use a series called Gaspe, but extrapolate the dodgy early part of the series (based on a single tree) and then include the series in his database twice (although only extrapolated in one version).

    Good eh?

    ReplyDelete
  48. Deano

    I'm not quite sure what you mean by "data from independent sources does show hockey sticks". Are you talking about individual series or multiproxy reconstructions?

    I think I've covered the multiproxy reconstruction angle already - the big issue is the use of bristlecones in so many of them.

    In terms of the data though, take a look at the datasets for Mann 2008, which I think I'm right in saying has the biggest database of proxies to date. There's some nice animated GIFs of the proxies linked below, and you'll see that there are not many hockey sticks in there.

    Mann's non-tree-ring MWP proxies are here.

    The tree ring ones are here.

    ReplyDelete
  49. Peter - I wrote a reply to your last comment and then lost it. I'm not sure I can be bothered to redo the whole thing.

    I don't much like deepclimate's style of writing - he seems to take every opportunity to make snide comments or smear anyone on the sceptic side.

    However, I had a brief look and I also quickly skimmed a copy of the Wegman report.

    Wegman used McIntyre's code because McIntyre provided it. This is stated in the Wegman report and is highlighted in the summary. It didn't work straight away - but once they fixed a few file paths it did. Though looking at the deep climate post you'd think this was somehow important. Wegman didn't use Mann's code because Mann couldn't/wouldn't provide something that actually worked.

    That means that at some places Wegman was probably using not Mann's code but McIntyre's best guess at what he thought Mann had done. e.g. in the code where McIntyre labels something as Mannomatic.

    McIntyre, I think claimed that Mann's algorithm selectively mined for hockey sticks. If there was a hockey stick in the source data - somewhere - the algorithm would pull it out and give it more weight. I think this is now accepted by everyone - even Mann, who at least now knows that he must normalise and center his data before doing PCA.

    I'm pretty sure that McIntyre has also said that if there is no hockey stick anywhere in the source data - then you don't get a hockey stick. Mann's code was good at finding hockey sticks but there had to be one there - somewhere.

    To test this McIntyre/McKitrick used red noise to generate some random data. And they had to generate 10000 series in order to get around 100 hockey stick shaped dummy datasets. I kind of think: so what? I mean it would be a pretty broken kind of random noise source if 100% of the datasets had been hockey sticks.

    But hey, does anyone care? Mann's code was wrong - it may not matter to the result. But it was wrong, it should be acknowledged and should be fixed, so people don't make that mistake again.

    The Steig et al and O'Donnell et al papers show that this sort of thing can be handled better.

    ReplyDelete
  50. Nick. Sorry your earlier comment did go through. It went into Spam because of its length. I've put it through now.

    ReplyDelete
  51. Deano

    I don't think you understand why Mann used PCA. Here's the quote from MBH98:

    "Certain densely sampled regional dendroclimatic data sets have been represented in the network by a smaller number of leading principal components (typically 3–11 depending on the spatial extent and size of the data set). This form of representation ensures a reasonably homogeneous spatial sampling in the multiproxy network (112 indicators back to 1820)."

    North America in particular was overrepresented in the database. So instead of throwing data away or averaging he used PCA to summarise it before calibration.
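
    In code terms, that data-reduction step is just the kind of thing below (a generic sketch on random stand-in data - not Mann's actual network, retention counts or code): a dense regional block of series is replaced by its first few principal components before the calibration stage.

    import numpy as np

    rng = np.random.default_rng(6)
    dense_region = rng.standard_normal((500, 70))        # stand-in for a densely sampled regional network

    centred = dense_region - dense_region.mean(axis=0)   # conventional full-period centring
    u, s, _ = np.linalg.svd(centred, full_matrices=False)
    leading_pcs = u[:, :3] * s[:3]                       # keep a handful of PCs as the region's 'representatives'
    print(dense_region.shape, "->", leading_pcs.shape)   # (500, 70) -> (500, 3)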

    As for the rest of your comment, are you saying the NRC also cherrypicked their red noise simulations when they presented a hockey stick from red noise? I mean, wow! :-)

    ReplyDelete
  52. Nick,

    I also find deepclimate's style grating at times (as I do with McIntyre's), however:

    Look at figure 4.1 comparing stationary trend noise with MBH98 - they look identical.

    They fail to mention this was carefully selected from the top 100 upward bending PC1s, a mere 1% of all the PC1s. Misleading?

    ReplyDelete
  53. Bishop,

    As far as the short centring is concerned and whether PC4 should be included and whether the short centring makes any difference

    a) I asked further up the thread "Is there any alternative PC selection rule that you legitimately could use which would justify excluding the PC4 ?" - you pointed me to a post on CA where SM said he couldn't work out what rule Mann had used and there were several other possible rules, and PC4 included data that he didn't like anyway (which is a different argument). That doesn't really answer my question.

    b) I posted the link from RC which asserted
    1) you should use all PCs that have a significant fractional difference from a set of randomly generated PCs that have the same noise characteristics
    2) If your answer depends on the number of PCs included, then you haven’t included enough.

    c) the NRC concluded “MBH methodology does not appear to unduly influence reconstructions of hemispheric mean temperature”

    d) Von Storch / Zorita argued "the critique would not matter in the case of the hockey-stick."

    I know this is partially an argument from authority, but I think these are relevant authorities - and McIntyre seems to have failed to convince anyone that he is right on this issue.

    ReplyDelete
  54. Peter - "...a mere 1% of all the PC1s. Misleading?"

    Dunno. If they had used the last 100 randomly generated data-series, I would guess that the graph would have pointed the other way, it would have been an upside down version of the MBH graph. I'm not sure what happens if you use data from the middle of the 10000 samples.

    But what exactly is the point at issue? Deep climate thinks he's done something clever by discovering all this stuff. But the Wegman report seems pretty clear that they used the McIntyre code to reproduce the graphs - they are labelled clearly as such. And deep climate thinks he's discovered something in McIntyre's code - but he didn't have to look very hard, all the code is available on McIntyre's site. It probably isn't as 'turnkey' as McIntyre's later stuff. But it's all fairly clear. (Well clear if you happen to be a whizz at statistics - I admit I have to take it pretty slow.)

    ReplyDelete
  55. @ Nick Moon:

    I am sure if you cut down a large enough number of trees or stalactites, or pull up enough drill samples from lake beds, or count enough cherry blossom dates in Japanese temples, you'll get some hockey stick curves. You'll also get some elephant shaped curves, some camel shaped curves (bactrian and dromedary) and even some flat lines.

    Nope, it's all hockey sticks - "Hockey sticks all the way down", to paraphrase Terry Pratchett.

    If you've got raw data for camel or elephant shaped curves, I'm sure the Oil Industry will chuck you a few dollars to post that data on the internet. I don't think they are too broke to do that, are they??? ;)

    ReplyDelete
  56. Hi Bishop,

    I'm still waiting for you to get back to me on why you are so sure there was a global 'medieval warming period' - when at the same time you are so sure that it is impossible to make climate reconstructions on the basis of data.

    Could you clarify your position please?

    ReplyDelete
  57. Deano

    I referred to it as "putative" up the thread somewhere in a response to one of your questions. My position is we don't know. I think I say this in the book somewhere, but I can't find the reference at the moment.

    ReplyDelete
  58. Had to change ID because OpenID is falling over (this morning).

    ReplyDelete
  59. Peter

    Re the Wegman thing, McIntyre shows in his GRL paper that with red noise and short-centred PCA you get a hockey stick 95% of the time (an HS being defined as a series whose 20th-century mean is more than 1 SD from the long-term mean). But again, nobody disputes any longer that short-centring is biased.
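
    Just to make the mechanics concrete, here is a minimal sketch of that result - my own illustration with assumed parameters, not McIntyre's GRL code:

        import numpy as np

        rng = np.random.default_rng(1)
        years = np.arange(1400, 1981)
        calib = years >= 1902                  # the MBH98 calibration period
        n_series, phi = 70, 0.5                # pseudo-proxies; the AR(1) coefficient is my assumption

        def ar1(n):
            x = np.zeros(n)
            for t in range(1, n):
                x[t] = phi * x[t - 1] + rng.standard_normal()
            return x

        proxies = np.column_stack([ar1(len(years)) for _ in range(n_series)])

        def pc1(data):
            U, s, _ = np.linalg.svd(data, full_matrices=False)
            return U[:, 0] * s[0]

        def hs_index(series):
            # hockey-stick index: 20th-century mean vs long-term mean, in standard deviations
            return abs(series[calib].mean() - series.mean()) / series.std()

        full_centred = proxies - proxies.mean(axis=0)           # conventional centring
        short_centred = proxies - proxies[calib].mean(axis=0)   # MBH98-style short centring

        print(hs_index(pc1(full_centred)), hs_index(pc1(short_centred)))
        # the short-centred PC1 typically shows the larger offset, i.e. looks more hockey-stick-like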

    ReplyDelete
  60. Peter again:

    I don't know whether a different retention rule would have led to different numbers of PCs being retained. But I don't think it is credible for Mann to argue after the fact that Rule N is the way to do it, when he didn't use Rule N in his tree ring PC calculations in the first place. If he had done, then he might have a case for retaining four PCs. Since his now-released code doesn't include the code for PC retention, it looks as though it was done ad hoc.
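
    For anyone who hasn't met it, Rule N is (roughly) a Monte Carlo significance test on the eigenvalue spectrum: keep the leading PCs whose share of variance beats what random data of the same dimensions would give. A rough sketch of the idea - my own simplification, not Preisendorfer's or Mann's actual procedure:

        import numpy as np

        def rule_n(data, n_sims=500, pct=95, seed=0):
            """Crude Rule N: retain leading PCs whose variance fraction beats random data."""
            rng = np.random.default_rng(seed)
            centred = data - data.mean(axis=0)
            eig = np.linalg.svd(centred, compute_uv=False) ** 2
            frac = eig / eig.sum()                        # observed variance fractions
            null = np.empty((n_sims, len(eig)))
            for i in range(n_sims):
                noise = rng.standard_normal(data.shape)
                noise -= noise.mean(axis=0)
                e = np.linalg.svd(noise, compute_uv=False) ** 2
                null[i] = e / e.sum()
            threshold = np.percentile(null, pct, axis=0)  # e.g. the 95th percentile at each rank
            n_retain = 0
            for above in frac > threshold:                # count only the unbroken leading run
                if not above:
                    break
                n_retain += 1
            return n_retain

    Whether any such rule was applied consistently in the first place is, of course, the point at issue.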

    And even then, you still can't get away from the fact that the bristlecones are contaminated anyway and shouldn't be in the database in the first place.

    I wonder whether you have misinterpreted the sentence from the NRC. They are simply saying that short-centring is not a commonly used technique. This is true. Only Mann has ever used it.

    I've discussed von Storch and Zorita already - they got it wrong.

    ReplyDelete
  61. Bishop / Andrew

    The problem is that when Steve writes he often:

    1) Conflates a load of issues together
    2) Doesn't quantify the effect

    I'm not saying he is stupid or incompetent or that he is wrong on everything, but this stuff has to be tested and thrashed out in the peer-reviewed journals, not just blogged. I'm sure he has hit 'gatekeeping' issues, but if somebody could come up with a credible analysis showing that the MWP was warmer than late 20th-century temperatures, or that the whole field of palaeoclimatology is flawed, that would be a massive coup for a journal as well as for the author of the paper. If you look back at some of the early CA posts, von Storch, Zorita and quite a few others got completely frustrated trying to help Steve turn his points into something publishable in mainstream journals. Contrast that with the finished O'Donnell paper, which won universal praise.

    ReplyDelete
  62. Peter

    The impact of the various issues, including short-centring, is discussed in MM05 (EE), so it is in the literature.

    Steve has been very clear that he is not claiming that the MWP was warmer than today - he does not think it is possible to do this from tree rings. He has demonstrated that MBH98 was not robust (and that there are enormous problems with the studies relied on by the IPCC).

    ReplyDelete
  63. "Working Group 2 is a panel of biologists and sociologists whose job is to evaluate the impact of climate change. These people are not climate scientists."

    ??

    To the author: how come you are so ill-informed, and yet have waded into the climate debate?

    Your statement above is really wrong, and it shows you to be a beginner. There is a lot of background that needs to be covered before your criticism can be considered valid.

    ReplyDelete
  64. @nigguraths This was a mistake and I was sloppy. Sorry. I read something to that effect but could not find a source. I will correct. This is why we blog, isn't it? I have waded into the debate in order to become better informed. Thank you for helping me.

    ReplyDelete
  65. Hi flay,
    I did not mean to come across too strongly, and I apologize if I did. :)

    I wrote a long comment, but blogspot chewed it up. Aargh!!

    If you want to understand the skeptics' case, you'll have to expend a lot of time and effort. Take the Soon and Baliunas affair, for example. It is clear, today, from all available information, that climate scientists acted in an utterly motivated and dishonest fashion. And I am saying this as a researcher myself (in a completely different field).

    And yet, to reach this conclusion there is no single well-written source to go to. It requires the reader to read all the source documents and the relevant emails directly, and even then the story is not fully clear. But you realize that the 'official account' cannot hold true.

    It is painful, time-wasting, and frankly useless being a climate skeptic.

    ReplyDelete
  66. Andrew,

    But it is no good publishing in Energy and Environment; it will be ignored. It needs to be in the proper peer-reviewed literature (i.e. http://en.wikipedia.org/wiki/ISI_Web_of_Knowledge).

    He has published in GRL, and he was a co-author of the O'Donnell paper. I know it is hard to get stuff published, and scientific authors often feel hard done by at the hands of reviewers, e.g.

    http://www.youtube.com/watch?v=-VRBWLpYCPY

    but that is the only way the scientific establishment will take points on board

    ReplyDelete
  67. I know they will ignore it - but this is just the genetic fallacy, isn't it? The question is: is it right?

    Yes, he has published in GRL, but they gave the editor who accepted the paper the shove (they took away responsibility for the paper from him). What editor would accept one of his papers?

    I don't know if you are aware of the problems Anastassia Makarieva has been having even getting someone to peer review her paper.

    ReplyDelete
  68. Hi flay,
    I was hoping I could point out a few other aspects you could re-consider.

    If you could, please pay close attention to the language of the statement from the IPCC you link to above. The IPCC *does not* acknowledge the error it has made with respect to the Himalayan glaciers. This is particularly notable because the IPCC has no mechanism for recognizing and correcting errors.

    Secondly, the IPCC, to date, does not have clearly established guidelines on the types of evidence considered admissible for use. The IPCC has only very general guidelines, which it considers to be 'clear and well-established standards of evidence'.

    ReplyDelete
  69. Hi @nigguraths. Thanks again for correcting me earlier; I did not take it personally. The IPCC may not have acknowledged the error, but the point I was trying to make is that the report of Working Group I is much less controversial. WG2 is concerned with the effects of climate change, not the physical causes, although I admit that this is integral to the debate.

    ReplyDelete
  70. Bishop,

    But the only way something gets properly examined is if it gets published in the peer-reviewed literature. A good example is the Steig 'Antarctic warming' paper: O'Donnell used a slightly different methodology and found an overall trend of about half that of Steig. Steig has examined it and agreed that O'Donnell suggested several improvements in the methodology, but argues that O'Donnell made three important decisions that underestimate the warming trend, and asserts that the warming O'Donnell found does not match the trend at one of the stations. Steig asserts that once these decisions are corrected the warming trend will match his original paper, and he is going to publish this finding. Now I don't have enough expertise to know who is right, but they need to 'duke it out' in the peer-reviewed literature.

    ReplyDelete
  71. Peter,
    Mann seems to avoid duking it out in the literature. He has never replied to the M&M critiques in either GRL or E&E (which is peer reviewed, by the way). The only replies by him are on Real Climate, to which he is a main contributor and which controls debate tightly. I think it's a deliberate strategy to try and marginalise his critics.
    Moreover there are several examples of attempted suppression of debate that can be documented. Take for example the papers by Michaels and McKitrick on the deficiencies of the land surface temperature record. AR4 included a statement that purported to be a refutation, but was not supported by any published paper. McKitrick tried to get a refutation of the "refutation" into the published literature. He did eventually succeed - see http://rossmckitrick.weebly.com/uploads/4/8/0/8/4808045/ac.preprint.pdf - Statistics, Politics and Policy, Vol 1 No. 1, July 2010.
    But the travails along the way were many. The story is told here http://rossmckitrick.weebly.com/uploads/4/8/0/8/4808045/gatekeeping.pdf
    I'll break this post here for fear of it getting too long.

    ReplyDelete
  72. Continuing,
    Gavin Schmidt published a reply to the Michaels and McKitrick paper, but not in the same journal; he chose to publish in the International Journal of Climatology (2009). The paper made two flawed criticisms of M&M. The first was the accusation that the results were suspect because of spatial autocorrelation. In itself that would be a fair criticism if the RESIDUALS were autocorrelated. The flaw was that the only evidence he produced was of spatial autocorrelation in the dependent and explanatory variables. One of the peer (pal?) reviewers was none other than Phil Jones, and his cursory review can be found here: http://rossmckitrick.weebly.com/uploads/4/8/0/8/4808045/review_schmidt.doc
    A good reviewer who understood statistics would have asked for evidence of residual autocorrelation, but Jones missed the issue entirely. McKitrick and Michaels were not invited to comment at all.
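
    To illustrate the statistical point (a toy example of my own with made-up data, not Schmidt's or McKitrick's actual analysis), the autocorrelation test has to be run on the regression residuals, not on the variables themselves:

        import numpy as np

        def morans_i(x, w):
            """Moran's I for values x given a spatial weights matrix w (zero diagonal)."""
            z = x - x.mean()
            return (len(x) / w.sum()) * (z @ w @ z) / (z @ z)

        rng = np.random.default_rng(2)
        n = 100
        coords = rng.uniform(0, 10, size=(n, 2))              # hypothetical grid-cell locations
        d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
        w = np.where((d > 0) & (d < 2), 1.0, 0.0)             # simple distance-band weights

        x = rng.standard_normal(n)                            # explanatory variable
        y = 2.0 * x + rng.standard_normal(n)                  # dependent variable
        slope, intercept = np.polyfit(x, y, 1)
        residuals = y - (slope * x + intercept)

        print(morans_i(y, w), morans_i(residuals, w))         # the residuals are what matter
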
    The second criticism Schmidt made was that the M&M results were just a fluke. The evidence for this was supposed to be that temperatures produced by model runs yielded similar coefficients when the M&M analysis was applied with model output as the dependent variable. But the coefficients so produced were in fact significantly different from the M&M results, so that argument fails.
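
    And on the "fluke" point, whether two estimated coefficients are significantly different is itself a simple calculation - a toy check with hypothetical numbers, not the actual M&M or model-run values:

        import numpy as np

        b_obs, se_obs = 0.5, 0.1   # hypothetical observed-data coefficient and standard error
        b_mod, se_mod = 0.1, 0.1   # hypothetical model-run coefficient and standard error
        z = (b_obs - b_mod) / np.sqrt(se_obs ** 2 + se_mod ** 2)
        print(abs(z) > 1.96)       # True -> significantly different at roughly the 5% level
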
    McKitrick wrote a response for the IJOC, but it was rejected on incompetent referee reports. See here

    http://rossmckitrick.weebly.com/uploads/4/8/0/8/4808045/response_to_ijoc.pdf

    However a full response has now been published here

    http://rossmckitrick.weebly.com/uploads/4/8/0/8/4808045/final_jesm_dec2010.formatted.pdf

    in the Journal of Economic and Social Measurement.
    Note that the issues are generally statistical, and that the statistical journals accepted the papers, but the climatology journals seemed to get the statistical issues thoroughly confused.

    ReplyDelete
