Playing with Data

Personal Views Expressed in Data

Tornado Warning Seminar

FIG 1: Yearly mean tornado warnings on a 1 km grid, derived from the 10-year period 2002 through 2011. Only polygon coordinates were used. (Note: this figure is not shown in the presentation. The figures shown in the presentation are split at the Storm-Based Warning switch date: 01 October 2007.)
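For readers curious how a figure like this can be built, here is a minimal sketch of one way to accumulate warning polygons onto a regular grid and average over the period. It uses Python with shapely, the polygon data is entirely hypothetical, and it illustrates the general technique only, not the actual processing behind FIG 1.

    import numpy as np
    from shapely.geometry import Point, Polygon

    # Hypothetical warning polygons as (lon, lat) vertex lists; not real data.
    warning_polygons = [
        [(-97.5, 35.0), (-97.0, 35.0), (-97.0, 35.4), (-97.5, 35.4)],
    ]
    n_years = 10

    # Coarse grid for illustration (FIG 1 uses roughly 1 km spacing).
    lons = np.arange(-98.0, -96.0, 0.05)
    lats = np.arange(34.0, 36.0, 0.05)
    counts = np.zeros((lats.size, lons.size))

    # Count how many warnings cover each grid point.
    for coords in warning_polygons:
        poly = Polygon(coords)
        for i, lat in enumerate(lats):
            for j, lon in enumerate(lons):
                if poly.contains(Point(lon, lat)):
                    counts[i, j] += 1

    # Divide the total counts by the number of years to get a yearly mean.
    yearly_mean = counts / n_years

In practice one would rasterize the polygons with a vectorized routine rather than a point-in-polygon loop over every grid cell, but the loop keeps the idea explicit.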

Today I gave a version of my presentation on tornado warnings. This presentation was originally given earlier this month at the University of Alabama in Huntsville. We recorded today’s presentation so that others could see it, but I will warn you that my delivery this time did not go as smoothly as it did in Huntsville. (I stumbled over my words a couple of times and missed a few points I wanted to make.) But as a good sport, and as someone who wants to see the conversation continue, I’m posting the link to the recording so that others may watch it and share their feedback.

While watching the presentation, here are a few questions I would love for you to keep in mind:

  • I am not an operational forecaster. No matter what I may say, and no matter what I may think, I have never been in the position of actually having to issue a warning. Until I am in that position, everything I say should be considered my opinion. This seminar is in no way an attack on operational forecasters. They do a tremendous job in extremely stressful situations. This seminar is aimed at fostering a discussion on policy, not on specific actions a forecaster should or should not take.
  • Current tornado warning metrics center around Probability of Detection, False Alarm Ratio, and other contingency table measures. However, not every detection and not every false alarm is created equal. Are there better metrics that could be used to measure tornado warning performance? If so, what would they look like? (A short sketch of how the standard metrics are computed appears after this list.)
  • As mentioned above, not all false alarms are created equal. Furthermore, issues such as areas within a warning never being impacted by a severe event, and broadcast meteorologists interrupting regular programming to cover warnings within demographic areas, give rise to the notion of a perceived false alarm ratio. How can we adequately measure this, and, maybe more importantly, is there anything we as a community can do to address the issues arising from it?
  • Warning philosophies (severe and tornado) vary from office to office, leading to the sometimes-asked question, “Do we have a single National Weather Service or 122 Local Weather Services?” Are these differing warning philosophies a good thing or a bad thing? If they are a good thing, how can we better communicate the different philosophies to users, or is that even necessary? If they are a bad thing, how do we determine which philosophy (or philosophies) to standardize around? Or is there a third option here that we’re (I’m) missing?
  • Should warnings be meteorology-centric or people-centric? Although population centers appear to show up in the datasets, is this a reflection of being people-centric, or merely a reflection that radar locations tend to be co-located with population centers and that our understanding of thunderstorm phenomena is inherently tied to radars?
  • Instead of moving toward an Impact-Based Warning paradigm, or a tiered warning paradigm, is it time to consider including probabilities or other means of communicating certainty/uncertainty information in the warning process? If so, how do we go about doing this in a manner that does not leave the users of these products behind? In other words, how do we move toward an uncertainty paradigm that average citizens can understand?
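To make the contingency-table metrics in the second bullet concrete, here is a minimal sketch (in Python, with purely hypothetical counts) of how Probability of Detection, False Alarm Ratio, and the related Critical Success Index are computed from a 2x2 warning verification table.

    # Hypothetical verification counts, purely for illustration.
    hits = 120          # tornado occurred and a warning was in effect
    misses = 30         # tornado occurred with no warning in effect
    false_alarms = 280  # warning issued but no tornado occurred

    # Probability of Detection: fraction of tornadoes that were warned.
    pod = hits / (hits + misses)

    # False Alarm Ratio: fraction of warnings that did not verify.
    far = false_alarms / (hits + false_alarms)

    # Critical Success Index: a combined score penalizing both error types.
    csi = hits / (hits + misses + false_alarms)

    print(f"POD = {pod:.2f}, FAR = {far:.2f}, CSI = {csi:.2f}")

Note how these scores treat every hit, miss, and false alarm identically, which is exactly the limitation the bullets above are probing.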

I firmly believe that the warning system in place has undoubtedly saved thousands of lives throughout its history. I also believe, however, that it has problems and stands to be improved, even though I cannot yet put into words exactly what the problem (or problems) is. Addressing these problems will require the efforts of the entire severe weather community: research meteorologists, operational meteorologists, NWS management, emergency managers, broadcast meteorologists, and, maybe the most overlooked piece, social scientists.

Lastly, I must apologize to Greg Blumberg for coming across much more harshly than I intended when addressing a comment he made during the presentation. My response was intended in jest, since I know Greg, but that didn’t come across to everyone in the audience, which tells me I shouldn’t have said it. Greg, my sincerest apologies, and I hope you understand that my response was entirely in jest.

With that said, I hope you enjoy the presentation, and I look forward to hearing your ideas!
