Doomcasting and the snowstorm that wasn’t
Graphic courtesy Jay Searles
Through the weekend before the snowstorm was scheduled to hit, The Weather Channel was still forecasting six inches of snow for a large portion of the state.
In early March, nearly a week before the Snowstorm that Wasn’t, commercial forecasters and media outlets alike jumped on the snowpocalypse bandwagon for a storm that turned out to be a dud.
Why did they alert the public days before the models settled down on a different forecast? What kinds of forces drive doomcasting, and how can we avoid it?
Upon arriving at my son’s day care on the morning of Thursday, February 27, 2014, I was quickly asked about the giant snowstorm heading our way. I was puzzled. Giant snowstorm? What were they talking about? Local radio broadcasters, it turned out, had been announcing a major snowstorm on March 3 that would affect a large portion of Pennsylvania.
I knew there was a potential for a winter storm, but not that big, and not necessarily here. Sitting at my PC later, I got a ping on Facebook alerting me to a message. It was from a neighbor with a link to a picture: the ECMWF (European Centre for Medium-Range Weather Forecasts) model output from the previous night.
This image was apparently the source of the radio forecasts, which claimed that anywhere from 6 to 22 inches of snow would fall across the state by the end of Monday the 3rd, about five days away.
A review of all the weather models did show a fairly significant storm for the eastern portion of the U.S., but exactly where the snow would fall and how heavy it would be varied greatly from model to model. So, nearly a week out, there was already considerable uncertainty about who would get what.
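This kind of model-to-model disagreement is often summarized as an ensemble mean and spread. A minimal sketch of the idea, using invented snowfall totals for a single location (the model names are real, but the numbers are hypothetical, not from this event):

```python
import statistics

# Hypothetical snowfall forecasts (inches) for one location from
# several models. Values are invented for illustration only.
forecasts = {"ECMWF": 10.0, "GFS": 4.0, "CMC": 2.0, "NAM": 7.0}

values = list(forecasts.values())
mean = statistics.mean(values)     # central estimate across models
spread = statistics.stdev(values)  # sample std. dev. = disagreement

print(f"ensemble mean:   {mean:.2f} in")
print(f"ensemble spread: {spread:.2f} in")
```

A large spread relative to the mean, as here, is exactly the situation the article describes: no single model's number deserves to be broadcast as a confident headline.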
On Thursday night the model solutions had already begun to show a trend of the storm moving to the south, with the Canadian Meteorological Centre (CMC)’s model leading the predictions. This forecast showed that a parcel of Arctic air would rotate around the strong polar vortex over eastern Canada and push the energy arriving from the West Coast southward into Virginia.
By Saturday, March 1, the consensus of most of the models continued this southerly trend. They also indicated that a fairly large portion of Pennsylvania would receive substantial snow, with 4 to 5 inches indicated by the ECMWF for State College.
However, by the time the storm arrived it was a far cry from the modeled predictions, and combined solutions from the GFS (the U.S.’s Global Forecast System) and the CMC had the storm hitting Virginia and areas well south of Pennsylvania as early as Saturday afternoon.
All through the weekend many media sources were still calling for “ground zero” to be in Pennsylvania, and the Weather Channel was still forecasting six inches of snow for a large portion of the state.
Further, it was stated as early as Friday night that confidence in the forecast was very low and that “this storm could easily move south and bypass Pennsylvania altogether.”
As early as Saturday morning, I had reduced my prediction for State College to 3 to 6 inches, and then reduced it again the next morning to 2 to 4 inches. Finally on Sunday afternoon I informed people that no significant snow would fall in State College. So what led to this seemingly endless supply of inaccurate predictions so far in advance? Here are some thoughts.
One major problem is the commercialization of weather information. The hyping process begins with ratings: February is a ratings month, and the survey books are not sent out until well into March.
What is the best way to crank up ratings? Make whatever is being talked about exciting — and nothing excites (or panics) a large number of people more than bad weather, especially a storm that could dump 20 inches of snow.
The media outlets pounced on this potential storm so hard that they couldn’t let it go, even after it became obvious that Pennsylvania wasn’t going to get hit as badly as predicted and that the storm would, indeed, weaken and shift south.
What about the National Weather Service, which shouldn’t be affected by ratings and market forces? That system is a bureaucracy caught up in rules and procedures (consider their various manuals and thousands of pages on how to do things, including forecasting the weather).
The highly trained meteorologists on the front line are also caught up in rotating shift work — every 7 to 14 days or so, your body is on a different schedule, including overnights. I experienced it when I was part of this crowd.
Some people claim they can handle it, but no one handles rotating shift work; it handles you! The quality of product suffers, sometimes dramatically, though it’s still a step better than the media sources.
Graphic courtesy Jay Searles
The European Centre for Medium-Range Weather Forecasts (ECMWF) weather model created this accumulated snow forecast on Feb. 28, days before the snowstorm was scheduled to hit. It called for a prodigious amount of snow, a forecast which some media outlets and commercial weather organizations were quick to adopt — despite the large uncertainties inherent in forecasting snow accumulations days in advance.
Graphic courtesy Jay Searles
This graphic shows the total accumulated snowfall for the 24-hour period ending 11:28 a.m. on Mar. 4, 2014. Central Pennsylvania, previously forecast to receive more than a dozen inches of snow, received none, while points that did receive snow got far less than predicted.
But in this storm event, advisories and warnings for PA were still posted well into Sunday evening, after the models had shown the storm moving far to the south of the area.
The other problem behind the forecasting issues is the rip-and-read fiasco: Local radio stations, and to some degree television stations, do what the business calls “rip and read” for weather forecasting.
This means that the station takes the NWS forecast and reads it verbatim to its listeners.
There is little or no information about the accuracy of the forecast product being delivered. It is common for the forecast to be out of date, sometimes by several days or more, and it often does not match real-world conditions.
That can happen for many reasons beyond the scope of this article, but it is a major source for the “stone throwing” at meteorologists for poor forecasts.
A final problem comes from mobile technology. Inexpensive or free smartphone apps do not give you the best forecast.
Those predictions actually come directly from what is called MOS, Model Output Statistics (that is, raw computer output), or some geographical derivative of that output.
Human beings rarely interact with this data, much less experienced meteorologists, except occasionally in the largest cities.
These forecasts are not vetted for precision or accuracy, and contribute to the poor perception of meteorological forecasts.
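At its core, MOS is a statistical correction layer: regression equations fitted over long training periods that map raw model output to observed conditions at a station. A minimal one-variable sketch of that idea, using invented numbers and an ordinary least-squares fit (real MOS uses many predictors and years of data):

```python
# Toy sketch of the Model Output Statistics (MOS) idea: fit a linear
# correction from raw model temperature forecasts to observed station
# temperatures, then apply it to a new raw forecast.
# All numbers are invented for illustration.
raw_model = [30.0, 35.0, 40.0, 45.0, 50.0]  # raw model forecasts (°F)
observed = [28.0, 34.0, 38.0, 44.0, 48.0]   # what was actually observed (°F)

n = len(raw_model)
mean_x = sum(raw_model) / n
mean_y = sum(observed) / n
# Ordinary least-squares slope and intercept
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(raw_model, observed)) / sum(
                (x - mean_x) ** 2 for x in raw_model)
intercept = mean_y - slope * mean_x

def mos_forecast(raw):
    """Statistically corrected forecast from a raw model value."""
    return slope * raw + intercept

print(f"corrected forecast: {mos_forecast(42.0):.1f} °F")
```

The point of the sketch is that the correction is purely statistical: nothing in it knows whether the underlying model run was a good one, which is why unvetted app forecasts built on such output can go badly wrong.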
And if you value your privacy at all, these are not the way to go. ■
By JAY SEARLES
VOICES Staff Writer
Jay Searles has been a member of the American Meteorological Society for 26 years and is a professional forecaster for Weather Ranger.