What to expect this winter: NOAA’s 2016-17 Winter Outlook
Will we or won’t we see La Niña emerge this year? Does it even matter? Shoot, if one of the strongest El Niño episodes in history didn’t deliver much drought relief to California last winter, what are the chances for significant improvement this year? I’ll attempt to answer these and other questions here in my 5th blog post. If you'd rather watch a video recap of the winter outlook, we have that, too.
This blog is brought to you by the letter p, for “probability”
So what might influence our climate this winter? As you know from the ENSO blog, there is a link between the fall and winter conditions across the tropical Pacific and the average winter climate in the U.S. If the likely La Niña develops, certain patterns of temperature and precipitation would be favored across the country. Over the past few years, we’ve discussed the patterns favored by El Niño, but haven’t really discussed what we often see during La Niña winters. Roughly speaking, La Niña impacts are opposite to what is observed during El Niño winters (see the figure).
So while the southern (and especially southeastern) part of the U.S. is often wetter and colder than average during El Niño winters, La Niña generally favors below-average precipitation and above-average temperatures in those same regions. We also often see the opposite pattern across the northern part of the nation: warmer and drier conditions during El Niño winters, and colder and wetter conditions during La Niña years.
Before discussing the actual winter outlook, I want to remind readers that these are probabilities (% chance) for below-, near-, or above-average seasonal climate outcomes, with the maps showing only the most likely temperature or precipitation outcome (footnote 1). Because the probabilities shown are less than 100%, there is no guarantee that the temperature or precipitation departures you see will match the color on the map. As we've explained in earlier blog posts, even when one outcome is more likely than another, there is always a chance that a less favored outcome will occur (witness precipitation last winter over the western United States).
Given the potential La Niña, it's not surprising that both the temperature and precipitation outlooks are consistent with typical La Niña impacts. However, because there is still some uncertainty about whether La Niña will develop and persist through the winter, the probabilities on this year's maps are fairly conservative, smaller than those in last year's outlook.
As shown in the figure above, the winter precipitation outlook favors below-normal precipitation across the entire southern U.S. and southern Alaska, with probabilities greatest (exceeding 50%) along the Gulf Coast from Texas to Florida. The favored area also includes southern California and the Southwest, although the shift in the probabilities in these locations is very small, barely tilting the odds toward below average. In contrast, above-average precipitation is favored in the northern Rockies, around the Great Lakes, in Hawaii, and in western Alaska. This forecast does not bode well for drought relief: drought will likely persist in central and southern California and the Southwest and potentially expand in the Southeast. In short, the likely weak La Niña means California drought relief is not likely.
The temperature outlook (see the figure below) favors above-average temperatures across the southern U.S., extending northward through the central Rockies, as well as in Hawaii, western and northern Alaska, and northern New England. Chances are highest (greater than 50%) in an area extending from the desert Southwest to central and southern Texas, with a greater than 6-in-10 chance in southern New Mexico and western Texas. Odds favor colder-than-normal temperatures along the northern tier from Washington eastward to the Great Lakes, although the likelihood of below-average temperatures there is modest, with no regions reaching 50%.
Both maps include blank regions where none of the three outcomes is favored. These areas (shown in white and labeled EC for "equal chances") have the same chance (33.33%) of above-, near-, or below-normal seasonal climate conditions. This doesn't mean that near-average temperature or precipitation is favored this winter in those regions, but rather that there's no tilt in the odds toward any seasonal outcome.
Last winter’s outlook
And before I close, let’s take a quick look back at how last winter’s CPC outlooks fared, since we neglected to do that last spring. Because we had such a strong El Niño last winter, we issued forecasts of temperature and precipitation with relatively high probabilities compared to our past seasonal outlooks. And that certainly worked out well for those parts of the nation where we favored above-normal temperatures, as shown in the figure below.
In fact, most of the temperature outlook verified, with a Heidke Skill Score (footnote 2) approaching +70, meaning the forecast was correct at about 80% of the locations where a forecast was made. With above-average temperatures largely blanketing the nation from coast to coast, only the forecasts in the southern Plains and along the Pacific Northwest coast were missed.
The precipitation outlook, on the other hand, did not do well at all, scoring near zero. As shown in the figure above, while the forecast favored above-normal precipitation across most of the South and along the East Coast, the observed pattern was shifted somewhat north: much of the northern part of the country had a wetter-than-average winter, while the southwestern and south-central U.S. were either near normal or drier than normal. Only along the East Coast did the observations match the favored forecast category.
This closing brought to you by the letter l, for “long game”
Making seasonal forecasts remains a very challenging endeavor. Seasonal climate forecasts are not as skillful as weather predictions, and phenomena like El Niño or La Niña provide only clues, not certainty, about what might occur during an upcoming season. Longer-term trends are an important player as well.
And while we are aware of and participating in ongoing research on new strategies for seasonal predictability—for example, the possible influence of fall Arctic sea ice extent and Siberian snow cover on subsequent Northern Hemisphere winter climate—at this point, these relationships are still being tested. It is not yet clear how they might improve predictions beyond the current set of tools that we already consider. (Note: Many state-of-the-art climate models are run using recent conditions of sea ice and snow).
CPC issues probabilistic seasonal forecasts so users can take risks and opportunities into account when making climate-sensitive decisions. Where there is greater confidence, the maps show the most likely outcome, but it is not the only possible outcome. As we saw last winter in California, the less likely outcome can and sometimes does occur.
It’s natural to wonder what good a seasonal forecast is if it winds up with a skill score of zero. However, keep in mind that these outlooks will primarily benefit those who play the long game. That’s because even though some seasons are a bust, over the span of many years, the forecasts are right more often than you’d expect due to chance.
(1) The three possible categories of outcome are below normal, near normal, and above normal. These categories are defined by the boundaries separating them, called terciles. Technically, the terciles are the 33.33 and 66.67 percentile positions in the distribution: the boundary between the lower and middle thirds, and the boundary between the middle and upper thirds. The distribution consists of the observations, for the season and location in question, over the 30 years of 1981-2010. The CPC maps show the probability of the favored category only when there is a favored category; otherwise, they show EC ("equal chances"). Often, the near-normal category remains at 33.33%, and the category opposite the favored one falls below 33.33% by the same amount that the favored category rises above it. When the probability of the favored category becomes very large, such as 70% (which is very rare), this rule no longer works: because the three probabilities must sum to 100%, the opposite category cannot absorb the entire shift on its own, and the near-normal probability must also be reduced below 33.33%.
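The tercile boundaries described in this footnote can be sketched in a few lines of Python. The 30 values below are made-up, stand-in numbers rather than actual station data, and the function name is my own; the point is just to show how a 30-year record gets split into thirds:

```python
import statistics

# Hypothetical 30-year record (1981-2010) of seasonal mean temperatures (°F)
# for one location -- invented numbers for illustration only.
obs = [30.1, 31.5, 29.8, 33.2, 28.7, 32.0, 30.9, 34.1, 27.5, 31.1,
       29.2, 32.8, 30.5, 28.9, 33.7, 31.8, 30.0, 29.5, 32.4, 31.0,
       28.1, 33.0, 30.7, 29.9, 31.4, 32.2, 28.5, 30.3, 33.5, 29.0]

# n=3 splits the distribution at the 33.33 and 66.67 percentile positions,
# giving the two tercile boundaries.
lower, upper = statistics.quantiles(obs, n=3)

def category(value):
    """Classify one seasonal observation against the tercile boundaries."""
    if value < lower:
        return "below normal"
    if value > upper:
        return "above normal"
    return "near normal"

print(f"tercile boundaries: {lower:.2f}, {upper:.2f}")
print(category(34.0))  # a warm winter for this record: above normal
```

Over the 1981-2010 base period, each category contains a third of the years by construction, which is why a forecast with no other information assigns 33.33% to each.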
(2) As stated in one of Tom's previous blogs about verification measures, the Heidke Skill Score (or HSS) is computed as the number of grid points with a correct categorical forecast, minus the number of grid points expected to be correct by pure chance. That difference is then divided by the total number of grid points scored, minus the number expected to be correct by chance, and the result is multiplied by 100 so that it is expressed as a percentage instead of a proportion. Note that we only score those grid points that did not have an "equal chances" (EC) forecast. As a formula:
HSS = 100 × (hits – expected hits) / (total – expected hits)
For example, suppose there are 220 grid squares across the U.S., but 120 of them have the EC forecast. We ignore those points and score only the 100 that are not EC, i.e., only those points with a forecast tilted toward below-, near-, or above-normal. Suppose 40 of those points turn out to have a correct forecast, meaning the observations matched the category the forecast favored, even if the forecast probability was only weakly greater than 33.33%. With 100 points being verified, we expect 33.3 of them to be matches just by chance: with three equally likely categories, we would guess correctly one-third of the time even if we knew nothing. Plugging the numbers into the formula:
HSS = (40 – 33.3) / (100 – 33.3) ≈ 0.10, and multiplying by 100 gives a score of 10%.
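The footnote's arithmetic can also be written as a short Python function. The function name and the three-category default are my own choices; the formula is the one given in the footnote:

```python
def heidke_skill_score(hits, total, n_categories=3):
    """Heidke Skill Score for categorical forecasts, as a percentage.

    `hits` is the number of scored grid points whose observed category
    matched the favored forecast category; `total` is the number of grid
    points scored (EC points are excluded before calling this).
    """
    expected = total / n_categories  # hits expected by pure chance
    return 100 * (hits - expected) / (total - expected)

# The worked example above: 220 grid squares, 120 of them EC,
# leaving 100 scored points, 40 of which verified correctly.
score = heidke_skill_score(hits=40, total=100)
print(round(score, 1))  # prints 10.0
```

A perfect forecast scores 100, a forecast no better than random guessing scores 0, and a forecast worse than chance goes negative, which is how last winter's precipitation outlook could land "near zero."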
Lead reviewer: Tony Barnston, IRI
The ENSO blog is written, edited, and moderated by Michelle L’Heureux (NOAA CPC), Emily Becker and Tom DiLiberto (contractors to CPC), Anthony Barnston (IRI), and Rebecca Lindsey (contractor to NOAA CPO). Posts reflect the views of the bloggers themselves and not necessarily Climate.gov, NOAA, or Columbia University/IRI.