08 November 2012

I Am "Roughly" 18 Feet Tall: A Critique of Grinsted et al. 2012

UPDATE 18 March 2013: Today Grinsted et al. have another paper out in PNAS in which they follow up the one discussed below. They make the fantabulous prediction of a Katrina every other year. They say in the new paper:
[W]e have previously demonstrated that the most extreme surge index events can predominantly be attributed to large landfalling hurricanes, and that they are linked to hurricane damage (20). We therefore interpret the surge index as primarily a measure of hurricane surge threat, although we note that other types of extreme weather also generate surges such as hybrid storms and severe winter storms. . .
As I showed in this post, which Grinsted commented on, the surge record does not accurately reflect hurricane incidence or damage. Another poor showing for PNAS in climate science.

Last month the Proceedings of the National Academy of Sciences published a paper by Grinsted et al. titled, “Homogeneous record of Atlantic hurricane surge threat since 1923.” In what follows I provide a critique of their paper and offer my argument for why it does not actually tell us much about hurricanes, much less about damage.

The paper looked at 6 tide gauge stations along the US Gulf and Atlantic coasts to develop an annual index of storm surges in the United States. The paper explains why this is important:
[F]rom the economic damage perspective the hurricanes that remain far away from shore in the Atlantic are much less important than those closer to land. Hence in constructing an unbiased record of storms we need to ask what we want to measure. The strong winds and intense low pressure associated with tropical cyclones generate storm surges. These storm surges are the most harmful aspect of tropical cyclones in the current climate (1, 12), and wherever tropical cyclones prevail they are the primary cause of storm surges. A measure of storm surge intensity would therefore be a good candidate measure of cyclone potential impact.
My attention was drawn to the paper because unlike other studies and data, which have found no trends in US landfalling hurricane numbers or intensities, Grinsted et al. do find a trend. They write:
We have constructed a homogeneous surge index on the basis of instrumental records from six long tide-gauge records. We demonstrate that the surge index correlates with other measures of Atlantic cyclone activity and that it responds in particular to major landfalling cyclones. The surge index can be used to identify and estimate potential remaining biases in other records of cyclone activity.

We detect a statistically significant increasing trend in the number of moderately large surge index events since 1923.
They also compare their surge index with our record of normalized damage (Pielke et al. 2008). Because their dataset has an upward trend and the normalized loss dataset has no trend they conclude that our dataset is “suspect.”

I contacted Dr. Grinsted, who has been extremely responsive in providing data and exchanging views, for which I thank him. He and his co-authors (along with Kerry Emanuel, who edited the paper for PNAS) were provided a chance to offer comments on a first draft of this blog post -- they have not as yet, though the offer will remain open.

The first thing I noticed about their paper is that their surge dataset contains 465 surge events from 1923 to 2008 (July through November) yet over that same time period there were only 147 landfalling hurricanes (data from NOAA). You can see a comparison of the two datasets in the following graph.

Clearly, there has been no trend in hurricane events, yet there has been an increase in surges. I am not sure what this means, but logically it does seem pretty obvious that there have not been more hurricane-related surges, as there have not been more landfalling hurricanes.

However, the paper says something different:
To estimate the trend in landfalling storm counts, we count the number of large surge events greater than 10 units in 1 y, which is roughly equivalent to hurricane categories 0–5. This threshold was chosen as a compromise between looking at large events and having sufficiently many events to obtain robust statistics. Since 1923 the average number of events crossing this threshold has been 5.4/y …
The actual number of hurricanes to make landfall averaged 1.7 per year over 1923-2008. So to claim that their selection threshold is “roughly equivalent” to hurricane landfalls is to provide a very generous interpretation of the term “roughly.” It is like me saying that I am "roughly" 18 feet tall.
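The arithmetic behind the analogy can be sketched as follows. This is my own back-of-envelope illustration, not from the paper, and the ~5.7 ft "typical height" is an assumed figure used only to make the scaling concrete:

```python
# The paper's >10-unit surge threshold yields 5.4 events/year, while NOAA's
# record shows an average of 1.7 landfalling hurricanes/year over 1923-2008.
surge_events_per_year = 5.4   # from Grinsted et al.
landfalls_per_year = 1.7      # NOAA landfalling hurricane average, 1923-2008

overshoot = surge_events_per_year / landfalls_per_year
print(f"Surge threshold overshoots landfalls by {overshoot:.1f}x")  # ~3.2x

# Applying the same overshoot factor to an assumed typical height of ~5.7 ft
# gives roughly the 18 feet in the title.
typical_height_ft = 5.7  # assumption for illustration only
print(f"'Roughly' equivalent height: {typical_height_ft * overshoot:.0f} ft")
```

That is, calling a threshold that fires more than three times as often as hurricane landfalls "roughly equivalent" is the same stretch as tripling one's height.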

The lack of precision in event specification is a point that Dr. Grinsted admitted is a weakness of the study, as discussed in a news article covering the paper:
There’s one obvious caveat about the new results: not every hurricane creates a storm surge, since they don’t always hit land. And not every storm surge is caused by a hurricane. “The storm surge index,” Grinsted said, “is sensitive to strong winter storms as well.” And it’s quite possible, he said, that the intensity of a given storm surge could be made greater or less by the angle at which a hurricane hits land.

Surges aren’t, in short, a perfect stand-in for hurricanes, but Grinsted said that they’re pretty good. In cases where they could do so, the team has lined up hurricane data with surge data, and, he said, “there are clear correlations. So while our paper might not explain everything, it is still useful."
I reject the notion that the 465 surges used in the paper are a "pretty good" stand-in for 147 hurricanes, and the data supports such a rejection. That this claim made it through the PNAS review process does not help this journal's reputation with respect to the quality of recent papers on climate. But I digress.

To address the discrepancy between surges and hurricanes, Dr. Grinsted performed, during our exchange, an additional analysis that did not appear in the paper: he looked only at the top 150 events in the database, to explore whether this subset would better capture landfalling hurricanes. The result is shown in the following graph.

You can see that the scale is obviously more appropriate and the trend is reduced (it is significant at the 0.1 level but not at 0.05), but the datasets still do not match up well. The correlation between the 465 events and the 147 hurricanes over 1923 to 2008 is 0.54, but when the surge dataset is reduced to the top 150 events the correlation with the 147 hurricanes actually drops to 0.49. This means that hurricane events explain only about 25% of the variance in the surges, telling us that there is a lot more going on in this database than just hurricane-related surges.
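The "25% of the variance" figure follows from squaring the correlation coefficient, since for a Pearson correlation r the fraction of variance explained is r squared. A quick check, using the two correlations reported above:

```python
# Pearson correlation r implies fraction of variance explained = r**2.
r_all_465 = 0.54   # all 465 surge events vs. 147 landfalling hurricanes
r_top_150 = 0.49   # top 150 surge events vs. 147 landfalling hurricanes

print(f"Variance explained, all events: {r_all_465**2:.0%}")  # ~29%
print(f"Variance explained, top 150:    {r_top_150**2:.0%}")  # ~24%
```

So even the better of the two correlations leaves more than 70% of the surge variance unexplained by hurricane counts.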

The situation with damage is similar: it is a further subset of the 147 hurricanes that causes most damage, far removed from the 465 surge events, the vast majority of which are not associated with any damage at all. However, the top 15 years in terms of surge events from Grinsted et al. account for 46% of the normalized damage from 1923-2008. So there is some valuable information in the surge dataset at the most extreme end of the scale.

It is very important to note that the median date of these top 15 years is 1969, almost exactly the mid-point of the 1923 to 2008 period examined by Grinsted et al., which actually supports the finding of no trend in the normalized loss dataset. Thus, a closer reading of the data presented in Grinsted et al. finds that the normalized loss dataset, rather than being “suspect,” is actually pretty robust.

To summarize, Grinsted et al. have created a dataset of storm surges which they have sought to associate with landfalling hurricanes and then further link to hurricane damage. Unfortunately, the connection between the surge dataset and hurricanes is, in their words, “rough,” and as shown here, tenuous. A further linkage to damage does not stand up. However, a closer look at the most extreme years in the surge dataset and their relation to normalized losses does find value. Here the surge dataset actually helps to confirm the finding of no trend in normalized losses.

Thanks again to Dr. Grinsted for engaging.

UPDATE: After writing this post I was pointed to similar criticisms of Grinsted et al. by Tom Knutson and Gabe Vecchi of NOAA at DotEarth.


  1. A couple thoughts:

    1. “The storm surge index,” Grinsted said, “is sensitive to strong winter storms as well.”

    Does this mean they looked at surges outside the hurricane season? If so, I'm scratching my head.

    2. They have this data, and they also have reports of storm damage available through various sources. Before I accept surge data as a proxy for storm damage, I'd want to see it compared to actual, known damage. Why use data to imply damaging storms when you already have data on damaging storms?

    As always, it's frustrating to discuss a paper without reading it first. Depending on the location of their surge gauges, they could be tracking Nor'easters, which are not tropical cyclones. Certainly here in New England, we get more storm surges from Nor'easters than from tropical cyclones/hurricanes. But then Kerry Emanuel would know that.

  2. Hi Roger,

    They wrote, "...which is roughly equivalent to hurricane categories 0–5."

    You responded, "The actual number of hurricanes to make landfall averaged 1.7 per year over 1923-2008. So to claim that their selection threshold is “roughly equivalent” to hurricane landfalls is to provide a very generous interpretation of the term “roughly.” It is like me saying that I am "roughly" 18 feet tall."

    But perhaps the difference is related to "Category 0" hurricanes. Maybe "Category 0" is code for "tropical storm." So maybe their storm surge data includes tropical storms, of which there were probably a lot.

    Just a thought.


    P.S. It seems better to me to try to figure out if there's a legitimate reason for the difference rather than to ridicule the paper.

    P.P.S. Not that I always follow that advice myself. But do as I say, not as I do. ;-)

  3. -2-Mark Bahner

    Thanks for your comment, good questions.

    1. No, the results are not sensitive to inclusion of tropical storms (39 mph-74 mph). (You can see this in the correlations in their Table 1).

    2. One could also ask the same question about storms of <39 mph strength. This sort of classification of the characteristics of the 465 events would have made good sense for the paper.

    3. TS are in fact included in the damage data (and as you might guess do not add much damage at all to the normalized hurricane losses).

    4. If the focus is on extreme events, damage and hurricanes, which is what the paper focuses on, the lesser strength storms are not relevant (though scientifically may be of broader interest, of course).


  4. From a layman's perspective, the logic of the paper is puzzling - akin to attempting the invalidation of a data set of oranges by comparing it to a data set of apples. That said:

    1. Is it a robust conclusion that surges of greater than ten units [feet?] have trended upward in a statistically significant manner?

    2. If so:
    a. Can a similar upward trend be seen in normalized losses if extratropical cyclones are included? [It seems that there is a paper in that, if it has not already been written!]
    b. Is this confirmation of a signal correlating rising temperatures and storm surges associated with extratropical, rather than tropical cyclones?

  5. What problems do changes in coastal geography -landfill construction, estuary depth (especially given the catastrophic loss of the East Coast's oyster reefs during this period), bulkheading, etc. present to comparing storm surge data over time? How do we compare a surge measured in say NYC's Battery Park in 1920 with the fundamentally different landscape found there today?

  6. Couple of thoughts on this. First, it is not clear to me that storm surges are the most harmful aspect of tropical cyclones in the current climate. Certainly, they can do enormous damage. But for just a recent example, the rainfall was certainly the most harmful aspect of last year's Irene.

    As a geologist, this quote kind of surprised me - "And it’s quite possible, he said, that the intensity of a given storm surge could be made greater or less by the angle at which a hurricane hits land." "Quite possible" is not the right term to use here, each and every storm surge is absolutely affected by the angle of the coastline relative to the cyclone. Sandy made that quite clear in the New York Bight.

    Finally, somewhat in the authors defense, measurable storm surges can and do occur even with non-landfalling cyclones.

  7. Hi Roger,

    You write, "Thanks for your comment, good questions."

    I had no questions. :-) Just comments.

    My main comment was that the discrepancy between your 147 landfalling hurricanes and their 465 storm surge events was possibly due to them counting "Category 0" hurricanes (which I assume might mean tropical storms).

    I'm not very interested in the whole subject of whether or not storm surges are getting worse. It seems to me they have already been bad enough over the last 100 years. I'm much more interested in promoting/developing a portable storm surge protection system that would actually reduce the damage from storm surges to a small fraction of what they currently are. (For example, developing a portable storm surge protection system that would have prevented virtually all of the storm surge damage from Katrina and Sandy. And that will protect Miami, Tampa, Virginia Beach, etc. when they get hit in the future. Which they will, regardless of whether or not global warming continues.)

    I find it very depressing that virtually all the time/effort/money in the U.S. seems to be dedicated to analyzing problems, while virtually none is dedicated to solving them.

    But I digress. ;-)


  8. Roger,

    Good analysis. This is similar to the comments I recently made at Skeptical Science when they offered Grinsted as an example of landfalling hurricane trends and evidence that your normalization of damages is faulty. It seems obvious to me that when a proxy (like storm surge index) shows a trend where the actual data (# of landfalling hurricanes) does not, the problem is with the proxy and not the data. Besides, it's worth noting that their seasonally averaged index shows no significant trend, so it's the selection of "large events" that's screwy. Noting that their number of events was much too large was a good catch.

  9. All of this is very interesting but still seems to leave the problem of a trend in surges.

    "Clearly, there has been no trend in hurricane events, yet there has been an increase in surges."

    Which makes me wonder, if I am run over by a bus but there has been no increase in dump truck traffic, should I feel ok? :-)

    For some of us the significance of Grinsted's paper isn't about past storm damage or storm nomenclature but instead an increase of hazard and hence risk.

    But as long as we're focusing on hurricanes, one of the notable things about Sandy was the arrival of damaging surge while the storm was 500 miles offshore. If Sandy had tracked up the coast and never made landfall we'd still have counted damage from the storm. This effect seems due to the storm's unusually large size and sluggish pace until shortly after making its final turn west. Is it possible Grinsted's paper is telling us something about whether the single metric of a storm's making landfall or not is perhaps going to become less useful as time passes?

  10. dbostrom,

    No, you are missing some crucial details about the Grinsted paper. There is in fact NO trend in surges. The trend was only in surges of 10 units per year or more. And as Roger pointed out above, that trend is reduced to below statistical significance when looking at the top 150 surge events. This alone suggests the detection of the trend is likely spurious. The fact that landfalling hurricanes show no trend merely confirms this.

    Now, you are trying to suggest that the surge statistic is giving us important information about hurricanes that don't hit land but still cause damage. But Grinsted went to great pains to show that the surge index they put together captures LANDFALLING events and not Atlantic hurricanes in general. Your claim turns Grinsted's intent on its head.

    Finally, while damage from surges is not to be dismissed, let me point out again that there are no significant trends in Grinsted's surge index in general, nor in the most powerful (top 150) events. Consequently, there is no justification for claiming increased damage risk from surge events.

  11. Brian: "There is in fact NO trend in surges."

    Brian: "The trend was only in surges of 10 units per year or more."

    Roger: "Clearly, there has been no trend in hurricane events, yet there has been an increase in surges."

    Sounds like disagreement. You guys will have to sort it out between yourselves.

  12. Brian,

    Roger writes that the trend using the top 150 surge events is significant at the 0.1 level. This is different from your statement that there "is in fact NO trend in surges".


    What drove the choice of the top 15 years and why not the top 20 years? Surely the top 20 years account for more than half of the normalized damage from 1923-2008 and would give a better sample size for calculating the median year.

  13. Unknown (#12),

    Yes, perhaps I should have said there is no trend in the seasonally averaged surge data, which is what I was referring to. Grinsted et al. says "We do not find a statistically significant trend in the seasonal average surge index."

    Even with the top 150 and the > 10-unit surge data, the trend is not real. The Grinsted surge data goes from the 1920s, which was a low point of the AMO cycle, up to the 2010s, which is a high point. The trend may be statistically significant at the 0.1 level (but not the standard 2-sigma) for the top 150 data, but it's likely capturing the change in the AMO cycle from low to high rather than a real trend. Just to put this into perspective, the trend in ACE for the Atlantic basin is significant at about the 3-sigma level for 1925 to 2011, but no longer significant when calculated over two full AMO cycles.

  14. dbostrom (#11),

    No disagreement in what Roger and I said. I just stated things less precisely than I could have. Grinsted et al. themselves report no significant trend in the seasonal average surge index, which is what I was referring to. And the reported trend in > 10-unit events is likely not real, as I explain in my response to Unknown (#12).

  15. So why does the surge index track much better with other measures of Atlantic tropical cyclones than the damage index?

  16. -15-Eli Rabett

    "So why does the surge index track much better with other measures of Atlantic tropical cyclones than the damage index?"

    Thanks for asking. The answer is that normalized damage systematically outperforms the surge index. The fact that the surge index is not well correlated with other measures of hurricane activity should be a first tip-off that it is not adding much value.


  17. Hmmm, no trend in damages?

    Energy and Environment
    Going to Extremes: The $188 Billion Price Tag from Climate-Related Extreme Weather

    February 12, 2013

    "The United States was subjected to many severe climate-related extreme weather over the past two years. In 2011 there were 14 extreme weather events—floods, drought, storms, and wildfires—that each caused at least $1 billion in damage. There were another 11 such disasters in 2012. These extreme weather events reflect part of the unpaid bill from climate change—a tab that will only grow over time.

    CAP recently documented the human and economic toll from these devastating events in our November 2012 report “Heavy Weather: How Climate Destruction Harms Middle- and Lower- Income Americans.” Since the release of that report, the National Oceanic and Atmospheric Administration, or NOAA, has updated its list of “billion-dollar”-damage weather events for 2012, bringing the two-year total to 25 incidents."