To quote John Reisman, “Science is not a democracy. It is a dictatorship. It is evidence that does the dictating.” It’s this evidence-based ‘dictatorship’ that forms the basis of a scientific consensus. Based on this ‘dictatorship’ of evidence we know that global warming is real, that we’re causing it, and that it will be a problem if we don’t act. This presents a real problem for those who deny that there is a problem or who want to minimize its consequences.
In science, careers are made by overturning existing ideas and findings. This is why a consensus can only arise in science when other scientists cannot find flaws in earlier findings. As Richard Alley has said about showing that global warming isn’t a problem: “Is there any possibility that [among] tens of thousands of scientists there isn’t one of them that got the ego to do that? It’s absurd!”
Falsifying a well-established scientific theory or concept advances the career and reputation of a scientist far more than confirming it does.
Attacking the consensus
This is why the paper published in 2013 by Cook et al., which analysed the scientific consensus on human-caused global warming in the scientific literature, is attacked so much. Finding a 97% consensus in the scientific literature that we’re causing most of the rise in temperature is inconvenient for deniers. If they manage to discredit these studies, or sow doubt about them, they can prevent the public from acting.
As Oreskes has said, spreading doubt is the most effective strategy a science denier has. This type of attack is crucial for maintaining a gap between what scientists agree on and what the public thinks scientists agree on, so that action can be delayed. This tactic is how the tobacco industry successfully delayed action against the harmful effects of smoking for decades.
A consensus is dangerous for those who deny human-caused global warming, or minimize its consequences, as it shows that they are in the minority. They are a very small group compared to the scientists who say, based on the evidence, that human-caused global warming is a reality.
That’s why one of the most frequently used tactics is trying to make the consensus seem tiny. During an interview with me, Dana Nuccitelli, one of the authors of Cook 2013, explained how this tactic is used against Cook 2013.
Which brings me to the latest attack by one of the most persistent attackers of Cook 2013: Richard Tol.
He has a history of attacking Cook 2013 with strange claims about flawed methodologies and assertions that the data doesn’t show a 97% consensus. This is rather odd, as the 97% figure was also found when Cook et al. asked the papers’ authors to rate the position of their own papers.
What makes Tol’s persistence in attacking Cook 2013 even stranger is that Tol himself has said that “There is no doubt in my mind that the literature on climate change overwhelmingly supports the hypothesis that climate change is caused by humans. I have very little reason to doubt that the consensus is indeed correct” and that “The consensus is of course in the high nineties.”
But then he publishes this chart:
He published this graph in his blog post More nonsenus [sic] (archived here), in which he announced a response to Cook 2013 that he has written (a comment on Cook 2013 that is currently being reviewed by the journal Environmental Research Letters). The graph strongly implies that the consensus found by Cook 2013 is an outlier, when it isn’t.
I’ve already pointed out to Tol that context is everything when talking about consensus percentages. It’s this context that makes clear that what Tol has produced is a perfect example of nonsensus (not the other way around). He uses the tactics I mentioned earlier, and many more that I haven’t mentioned yet, to make the consensus look lower than it is.
So how did Tol manage to make Cook 2013 appear to be an outlier? When I asked Nuccitelli this question, he gave me the following answer:
What Tol has essentially done is to define “consensus” in a number of different ways, most of them making no sense whatsoever, and put the incomparable results together on a single chart in a grossly misleading manner. For the literature surveys, in some cases he’s included papers that don’t take a position on what’s causing global warming (i.e. in Oreskes’ 75%), and in others he’s omitted them (i.e. in Cook’s 97%). For the scientist surveys, in some cases he’s included both expert and non-expert opinion, sometimes just experts, and in some cases he’s even included numbers that only represent the ‘consensus’ among those who reject human-caused global warming, most of whom have no expertise in climate science. These are not comparable numbers, and putting them together on one chart makes no sense.
Yes, it really is this bad. It’s something I noticed immediately when I looked at the graph and the studies he cited. But you don’t have to take Nuccitelli’s word or mine for this. I contacted the authors of the cited consensus studies and got back some scathing responses.
When I asked Naomi Oreskes if Tol had used her data and findings correctly, she wasn’t happy with Tol turning a consensus of 100% into one of 75% (bolding and link mine):
No it is not accurate. As usual, he is misrepresenting scientific work, in this case mine.
Obviously he is taking the 75% number below and misusing it. The point, which the original article made clear, is that we found no scientific dissent in the published literature. This demonstrates that the “dissent” that was being reported in the media was politically-driven, not scientifically driven, which was, of course, the point of the paper, and led to our book, Merchants of Doubt, which explains where the political dissent comes from.
Oreskes is referring to a passage in her article where she says that of the papers she investigated, 75% either explicitly or implicitly accept the consensus view (defined in her paper as: most of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gases). The remaining 25% of the papers didn’t say anything about human-caused global warming, but none disagreed with the consensus position.
This is how Tol turned a consensus of 100% into 75%: by counting papers that did not say anything about the question Oreskes was trying to answer. Bart Verheggen, another cited author, publicly told Tol that “You can’t just divide the number of affirmative statements by all papers in the sample, if many papers didn’t actually stake out any position on the question at hand. The latter should logically be excluded, unless you want to argue that of all biology papers, only 0.5% take an affirmative position on evolution, hence there is low consensus on evolution.”
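The arithmetic behind this disagreement is simple enough to write down. The sketch below uses the percentages from Oreskes’ paper (75% endorse, 25% no position, 0% reject); the counts themselves are illustrative, not her raw paper tallies:

```python
# Illustrative counts matching Oreskes' published percentages:
# 75% endorse, 25% take no position, 0% reject the consensus.
endorse, no_position, reject = 75, 25, 0

# Tol's approach: divide endorsements by ALL papers in the sample,
# including those that never addressed the attribution question.
share_of_all_papers = endorse / (endorse + no_position + reject)

# The approach the study authors argue for: only papers that actually
# take a position on the question count toward the denominator.
share_of_position_takers = endorse / (endorse + reject)

print(share_of_all_papers)       # 0.75
print(share_of_position_takers)  # 1.0
```

The same 75 endorsing papers yield either 75% or 100% depending purely on whether silent papers are put in the denominator, which is the whole of Verheggen’s biology-papers analogy.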
The effect of an evidence-based consensus is that what is already established will seldom be restated in a paper, an effect Cook 2013 found very clearly with their large sample. Verheggen gave more detail when I corresponded with him about Tol’s use of his research (bolding mine):
Tol selectively quotes results from our survey. We provided results for different subsamples, based on different questions, and based on different types of calculating the level of agreement, in the Supporting Information with our article in ES&T. Because we cast a very wide net with our survey, we argued in our paper that subgroups based on a proxy for expertise (the number of climate related peer reviewed publications) provide the best estimate of the level of scientific consensus. Tol on the other hand presents all subsamples as representative of the scientific consensus, including those respondents who were tagged as “unconvinced”. This group consists to a large extent of signatories of public statements disapproving of mainstream climate science, many of whom are not publishing scientists. For example, some Heartland Institute staffers were also included. It is actually surprising that the level of consensus in this group is larger than 0%. To claim, as Richard Tol does, that the outcome for this subsample is somehow representative of the scientific consensus is entirely nonsensical.
Another issue is that Richard Tol bases the numbers he uses on just one of the two survey questions about the causes of recent climate change, i.e. a form of cherry picking. Moreover, we quantified the consensus as a fraction of those who actually answered the question by providing an estimate of the human greenhouse gas contribution. Tol on the other hand quantifies the consensus as a fraction of all those who were asked the question, including those who didn’t provide such an estimate. We provided a detailed argument for our interpretation in both the ES&T paper and in a recent blogpost.
Cherry picking is the tactic of focussing on specific pieces of data, often out of context, while excluding any data that conflicts with the desired conclusion. Verheggen’s research was misrepresented in a similar way by Rick Santorum during an interview.
Verheggen also clearly raised a point I had already made to Tol: expertise matters. It’s these detailed questions that made it possible for Verheggen to find the interesting result that attribution experts, the scientists who investigate what is changing temperatures and by how much, say that humanity has caused more than 100% of the warming (natural trends and factors would have made temperatures drop slightly if we weren’t increasing greenhouse gases).
The problem with Tol ignoring expertise in his consensus percentages was spelled out by William Anderegg when I asked him whether Tol had used his results correctly (bolding mine):
This is by no means a correct or valid interpretation of our results. For our sampling strategy, we bent over backwards to include as many doubters as possible within our sample, so analyzing the whole sample is completely misleading and misrepresenting our study. We showed that 50% of the doubter group had *zero* publications in the peer-reviewed climate literature whatsoever, and 80% had fewer than 20 publications, which was our cut-off to be included as an expert. The basic premise of analyzing expert consensus is that you should only count the views of true experts in the subject. You wouldn’t count the opinions of astronomers on the best heart surgery technique. Thus it makes no sense at all to count the vast numbers of non-experts doubters included in our sample. We showed in a follow up example that a large fraction didn’t have a Ph.D. at all and those that did were primarily in fields almost entirely unrelated to climate science.
Neil Stenhouse, another cited author, had the same issue with how Tol calculates his consensus percentages, and he highlights a reason why not all studies can be directly compared (bolding mine):
Tol’s description omits information in a way that seems designed to suggest -- inaccurately -- that the consensus among relevant experts is low. This is contrary to the conclusion we took from our data, which is that there is a high level of consensus among actively publishing climate experts. Tol’s reference to “subgroups” generally, as if the nature of the subgroups were irrelevant, omits the fact that the subgroup with the highest level of expertise is also the subgroup with the highest level of agreement that global warming is human-caused.
Tol also omits something else we mentioned -- that our estimates of consensus may be conservative, given that (due to an oversight) we asked about the past 150 years of global warming, rather than the past 50 or so years (the period commonly studied for signs of human causation). Several respondents emailed to suggest their answer would have been different if we had asked about the last 50 years.
Because he omits this kind of information that is centrally relevant to interpreting the numbers correctly, despite clear discussion of it in the article text, I have to wonder about his commitment to clarifying these matters for readers -- his claimed motive for writing the comment.
Peter Doran really hammered home the point that Tol is incorrectly comparing datasets and results. He also spells out very clearly why expertise matters (bolding and link mine):
Well, I would never express it that way. I’ve attached the EOS paper which is very short and readable.
We sent a survey to 10,257 Earth Scientists listed in the AGI directory. [Of these scientists] 3146 people responded, 90% of this group answered that they agreed temperatures were increasing, 82% expressed the view that humans have played a significant role (exact questions in the attachment). […]
Our results showed, that when you focus on the most knowledgeable group with regards to climate -- those who self-identify as climate scientists and are active in climate research and publication […] -- this subset has the strongest response to Q2 about the human influence. They are >97% in support.
But all 97% are not equal. The 97% in the Anderegg study and the 97% in the Cook study all address slightly different things and groups. To properly state our 97% it would be “>97% climate scientists, who are actively publishing in the field, think human activity is a significant contributing factor in changing mean global temperatures since the industrial revolution.”
All the other numbers Tol throws out are subsets from less qualified people in our survey. It’s also not all of them. The EOS paper was based on my student’s thesis (which is referenced in the EOS paper) where the full survey and details are kept. It was much too big for an EOS article. So Tol lists the opinion of our lowest groups only with equal weight to our highest group and overall result. In fact, in the survey there were 25 categories of expertise. So if he wants to take the approach he’s taken, he should be pulling 27 numbers from our survey (these 25 categories, plus the overall number, plus the 97% for publishing climatologists). But wait, why stop there, to be complete you would have to include another 25 numbers for all of these groups for people who are not active publishers, and another 25 for people who are active publishers, and another 25 for those who have PhD’s, another 25 for those who have MS degrees, etc. That’s not how you do statistics. Our study focused on expertise. The conclusion was that the more expertise you have in climate science and/or being an active researcher, the stronger your support for humans playing a significant role. To pull out a few of the less expert groups and give them the same weight as our most expert group is a completely irresponsible use of our data. It would be like me having a medical team tell me I need surgery to remove a life-threatening malignant growth, but going to my local Starbucks to get the opinion of the team of baristas and giving both recommendations the same weight.
The two citations of the work of Hans von Storch and Dennis Bray are the least problematic (one of the percentages is correct). Though here, too, Tol didn’t account for non-responses or adjust the data in any way so that the different consensus percentages could be compared.
There are more issues in the comment Tol has written. The cited authors responded to just one paragraph of Tol’s comment; there are four more.
Same refuted claims
These paragraphs contain old accusations that have already been refuted by the authors of Cook 2013 or by fellow skeptics, some of them by me.
I first wrote about the strange claims made by Tol back in 2013. The last time I wrote about him was when he got a severely flawed paper criticising Cook 2013 published. The authors highlighted 24 errors, one of them being Tol misrepresenting stolen private correspondence. More details can be found in my article Richard Tol’s 97% Scientific Consensus Gremlins.
Several major issues I mentioned in that article I had already highlighted in Richard Tol Versus Richard Tol On The 97% Scientific Consensus. You can also read more about one of the dubious papers Tol cited in 97% Climate consensus ‘denial’: the debunkers again not debunked.
You can read more about Tol misrepresenting research by the authors of Cook 2013 in their articles 97% global warming consensus meets resistance from scientific denialism, Climate contrarians accidentally confirm the 97% global warming consensus, and of course in their 24 errors document (full disclosure: I was one of the reviewers of the 24 errors document and I’ve contributed a couple of small sections to it).
Fellow skeptics also engaged with Tol in comment sections to point out a myriad of issues and mistakes in his claims. You can read those comments below the articles Deconstructing the 97% self-destructed Richard Tol, The fall and fall of Gish galloping Richard Tol’s smear campaign, The Evolution of a 97% Conspiracy Theory -- The Case of the Abstract IDs and More nonsense – sorry, nonsensus – from Richard Tol.
There’s far more, but this is enough to establish a pattern of refusing to correct mistakes and incorrect claims.
I think Tol made a big mistake by trying to paint Cook 2013 as an outlier, as shown by the responses I got from the authors of the cited consensus studies. Two days before Tol published his comment on Cook 2013, I had already warned him that Verheggen, Doran, and Anderegg would not agree with the conclusions he was drawing from their papers and data (he was publishing previews of the graph on Twitter). He dismissed that warning.
I don’t know why Tol has such a dislike for Cook 2013, but it has driven him to reject evidence showing that his claims have no merit whatsoever. This type of behaviour is nothing new. When Andrew Gelman, a statistical heavyweight, analysed and critiqued one of Tol’s papers, Tol became very defensive. To the point that Gelman saw the need to say the following:
There’s no shame in being confused—statistics is hard. But if your goal is to do science, you really have to move beyond this sort of defensiveness and reluctance to learn […] I’m sure you can go the rest of your career in this manner, but please take a moment to reflect. You’re far from retirement. Do you really want to spend two more decades doing substandard work, just because you can? You have an impressive knack for working on important problems and getting things published. Lots of researchers go through their entire careers without these valuable attributes. It’s not too late to get a bit more serious with the work itself.
Gelman wrote that in May 2014. I truly hope that Tol’s current attempt at critiquing Cook 2013 is not an affirmative answer to Gelman’s question about two more decades of substandard work.