
Why Past ENSO Cases Aren’t the Key to Predicting the Current Case

Lately, many of us are wondering if a 2014-15 El Niño is going to materialize, and if so, how strong it might become and how long it will last. It might cross some folks’ minds that the answer to these questions can be found by collecting past ENSO cases that are similar and seeing what happened. Such an approach is known as analog forecasting, and on some level it makes intuitive sense.

In this post, I’ll discuss why the analog approach to forecasting often delivers disappointing results. Basically, it doesn’t work well because there are usually very few, if any, past cases on record that mimic the current situation sufficiently closely. The scarcity of analogs is important because dissimilarities between the past and the present, even if seemingly minor, amplify quickly so that the two cases end up going their separate ways.

Past Cases (Analogs) Similar to 2014

The current situation is interesting because it seems we have been teetering on the brink of El Niño, as our best dynamical and statistical models keep delaying the onset yet continue to predict the event starting in fairly short order. Which raises the question: have there been other years that have behaved similarly to 2014? Before we check, let’s talk for a minute about how we find good analogs for the current situation.

The set of criteria by which the closest analogs are selected is a contested issue in forecasting.  One can select years based on time series, maps, or among many variables across different periods of time. There are also many different ways to measure similarity, and one has to select the appropriate level of closeness to past cases—in other words, decide how close is close enough.

One main criticism of analog forecasting is the subjectivity of making such choices, which can lead to different answers.  Here, I will use one method based on similarities of sea surface temperature (SST) in the Nino3.4 region (1). Figure 1 shows six other years during the 1950-2013 period that have behaved similarly to this year in terms of SST, and also shows what happened in the seven months following September.

In checking out the analog forecast possibilities in Fig. 1, it is clear that the outcomes are diverse. Out of the six selected cases, three indicate ENSO-neutral for the coming northern winter season, while the other three show El Niño (at least 0.5˚C anomaly)—and all three attain moderate strength (at least 1˚C anomaly) for at least one 3-month period during the late fall or winter (2). For the coming January, the 6 analogs range from -0.3 to 1.4˚C, revealing considerable uncertainty in the forecast (3).

Although this uncertainty in outcomes is somewhat smaller than what we would have if we selected years completely at random from the historical record, it is larger than that from our most advanced dynamical and statistical models. This is one reason analog forecasting systems have been largely abandoned over the last two decades as more modern prediction systems have proven to provide better accuracy.

Why Analogs Often Don’t Work Well

The large spread among the six analog cases selected for a current ENSO forecast is not unusual, and it would be nearly as large even if we came up with a more sophisticated analog ENSO forecast system (4). The big problem in analog forecasting is the lack of close enough analogs in the pool of candidates.

Furthermore, the criteria by which we select cases always ignore some relevant information, and this missed information introduces differences between the current case and the past analog cases. Even if we knew and included everything that did matter, the fact that the ocean and atmosphere are fluids means that tiny differences between the current and past cases often quickly grow into larger differences.

Van den Dool (1994)’s “Searching for analogues, how long must we wait?” calculates that we would have to wait about 10³⁰ years to find two observed atmospheric flow patterns that match to within observational error over the Northern Hemisphere. While the ocean is not as changeable as the atmospheric flow, it is clear that finding closely matching analogs would also require a very long historical dataset.

Even finding good matches with the relatively simple Nino3.4 time series is an obstacle (see the left side of Fig. 1).  In the case of ENSO, there are only ~60 years in the “well observed” historical record of tropical Pacific sea surface temperatures and even fewer cases of El Niño or La Niña years. The severe shortness of the past record prevents an analog approach from bearing much fruit. More complex statistical and coupled ocean-atmosphere dynamical models can make better predictions than analogs.

Yet, despite the above warning, forecasters continue to enjoy identifying analogs when considering what ENSO might have up its sleeve for the forthcoming seasons. For example, Fig. 2 shows what happened with each of the 6 analogs nine months farther into the future than shown in Fig. 1. Notice that in early autumn 1986 (medium red line), a late-starting El Niño attained moderate strength during 1986-87, but also continued for a second year and reached even greater strength in 1987-88.

Do late-starting El Niño events tend to endure into a second ENSO cycle and take two years instead of one to run their course? While that is a topic for another post, let’s just say that there have been so few cases of two-year events that it would be foolhardy to actually predict one without more evidence. Interestingly, though, two of the models on this month’s IRI/CPC ENSO forecast plume (Fig. 3) do suggest the possibility of an El Niño both this year and a second year (2015-16). One of those models is NOAA/NCEP’s own CFSv2, and another is the Lamont-Doherty Earth Observatory (LDEO) intermediate model. Could they be on to something?

[Figure 3: line graph showing ENSO predictions from numerous dynamical and statistical models]

Figure 3. ENSO prediction plume from September 2014 (see official version of this graphic), for SST anomaly out to Jun-Jul-Aug 2015. The orange lines show predictions of individual dynamical models, and blue lines those of statistical models; the thicker lines show the averages of the predictions from those two model types. The black lines and dots on the left side show recent observations. A weak El Niño (SST of at least 0.5˚C, but less than 1˚C anomaly) continues to be predicted for late fall and winter 2014-15 by many of the dynamical and statistical models. The NCEP CFSv2 and the LDEO dynamical models, highlighted in brighter orange, suggest continuation and intensification of El Niño in spring 2015, presaging a possible 2-year event, as occurred in 1986-87-88 (but the LDEO model holds back on a full-fledged El Niño for the first year).

Footnotes

(1) This method uses the 13 months prior to (and including) the most recently completed month, but weights the more recent months more heavily than the less recent ones. In the case of Fig. 1, the relative weights from September 2013 through September 2014 are .02, .02, .03, .05, .06, .06, .07, .09, .10, .11, .12, .14, and .13. This weighting pattern was based on correlations between the earlier and current SSTs, averaged over all starting/ending times of the year. In a more refined system, the weighting pattern would change noticeably depending on these starting/ending times of the year. This variation exists due to the typical seasonal timing of ENSO events. For example, the correlation between SST in February with that in July is low, while the correlation between SST in September with that in February is quite a bit higher, as many ENSO events begin during summer and last through the following winter.

So, as we might expect from the weighting pattern, the similarity between this year and the selected analog years is seen in Fig. 1 to be greatest over the most recent 3 to 5 months. The closeness of the match is calculated as the square root of the sum of the weighted squared differences between the Nino3.4 SST this year and the candidate year. This metric is often called the Euclidean distance, and the smaller the number, the better the analog match.
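As a sketch, the matching metric described in this footnote can be written in a few lines of Python. The weights are the ones given above; the two SST anomaly series are made-up illustrative values, not actual Nino3.4 data:

```python
import numpy as np

# Relative weights for the 13 months from September 2013 through
# September 2014, as given in footnote 1 (recent months count more).
weights = np.array([.02, .02, .03, .05, .06, .06,
                    .07, .09, .10, .11, .12, .14, .13])

def analog_distance(current, candidate, w=weights):
    """Weighted Euclidean distance between two 13-month Nino3.4 SST
    anomaly series; the smaller the value, the better the analog match."""
    diff = np.asarray(current) - np.asarray(candidate)
    return float(np.sqrt(np.sum(w * diff**2)))

# Illustrative (invented) anomaly series, in degrees C:
this_year = np.array([-0.2, -0.3, -0.3, -0.2, 0.0, 0.1, 0.2,
                      0.3, 0.2, 0.1, 0.2, 0.3, 0.4])
candidate_year = this_year + 0.1   # offset by 0.1 C in every month

# Since the weights sum to 1, a uniform 0.1 C offset gives a
# distance of exactly 0.1.
print(analog_distance(this_year, candidate_year))
```

In practice one would compute this distance against every candidate year in the 1950-2013 record and keep the handful of smallest values, which is how the six analogs of Fig. 1 were selected.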

(2) The time unit used in this analog prediction system is 1-month, in contrast to the 3-month averages usually used by NOAA in ENSO diagnostics and prediction.

(3) This spread is occurring at a time of year when persistence (i.e., maintenance over time of either positive or negative anomalies of SST) is typically strong and so forecasts using just recent observations generally do better than at other times of the year.
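For concreteness, one common flavor of such a forecast is "damped persistence," in which the latest anomaly is forecast to decay toward zero at the rate implied by the month-to-month autocorrelation. The numbers below are illustrative assumptions, not observed values:

```python
# Damped persistence sketch, with assumed, illustrative numbers.
r = 0.9        # assumed lag-1 (month-to-month) autocorrelation of Nino3.4 SST
latest = 0.4   # deg C, assumed most recent observed anomaly

# Forecast anomaly at leads of 1 through 7 months: latest * r**lead.
# The forecast relaxes toward zero (climatology) as lead time grows.
forecasts = [round(latest * r**lead, 3) for lead in range(1, 8)]
print(forecasts)
```

When the autocorrelation is seasonally high, as in the situation this footnote describes, a baseline like this is hard to beat, which is why large analog spread at such a time of year is especially telling.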

(4) More sophisticated analog systems used in earlier decades for seasonal climate forecasting are documented in Barnett and Preisendorfer 1978, Livezey and Barnston 1988, and Barnston and Livezey 1989.

References

Barnett, T. P., R. W. Preisendorfer, 1978: Multifield analog prediction of short-term climate fluctuations using a climate state vector, J. Atmos. Sci., 35, 1771–1787.

Barnston, A. G., and R. E. Livezey, 1989: An Operational Multifield Analog/Anti-Analog Prediction System for United States Seasonal Temperatures. Part II: Spring, Summer, Fall and Intermediate 3-Month Period Experiments. J. Climate, 2, 513–541.

Livezey, R. E., and A. G. Barnston, 1988: An operational multifield analog/antianalog prediction system for United States seasonal temperatures: 1. System design and winter experiments. J. Geophys. Res., Atmospheres, 93, D9, 10953–10974. DOI: 10.1029/JD093iD09p10953

Van den Dool, H. M., 1994: Searching for analogues, how long must we wait?  Tellus A, 46, 314-324.

Comments

Couple of quick points:

1. With regard to subjectivity in selecting criteria for analog forecasts, how is this any different from the subjective determinations made in developing numerical models? Sure, there are many different ways one can construct an analog forecast technique, but there are just as many ways one can construct a dynamical model. Various approaches in each camp use common starting points: the former looking at physically relevant observed (potential) boundary conditions, the latter at numerical integration of the only equations known to govern the coupled system.

2. Any statistical forecasting system uses the same premise as an analog technique, namely, using empirical relationships between potential predictors and the predictand. So to differentiate between 'advanced' statistical methods and analogs is a non sequitur. Any of the complaints raised against analogs here could just as well be applied to any other statistical technique. While the author may not have been implicating all statistical techniques, that is an implicit takeaway.

The forecast plume the author developed using limited time and computer resources compares rather well to the spread- and PDF-corrected plume from the CFSv2 from earlier in the month: http://www.cpc.ncep.noaa.gov/products/people/wwang/cfsv2fcst/images1/nino34SeaadjPDFSPRDC.gif So in attempting to dissuade the reader from believing analog methods, the author shows that they are actually pretty darn good! If any forecast model showed a plume like that for the next 15 months or so, we'd be very happy indeed.

Very good comments. Let's take them one at a time, and in order.

To be sure, dynamical modelers must also make subjective decisions, such as about which processes are to be included in the model. Presumably, scale analyses can help determine the relative priorities. Also, not everything can be included explicitly, and some important processes (like tropical convection) must be abbreviated using statistical parameterization schemes. In that sense, the choice of which physics to include, and how to include it, in dynamical models parallels the choice of the criteria for matching in analog forecasting. However, the way in which analogs are composited into a single forecast is likely to be more crude than the way in which a full ensemble of model runs is used for a climate forecast. In the example shown here, only 6 past cases were selected, and each of them had obvious deviations from the current case over the selection period. Most climate model ensemble runs manage to include a lot more than 6 ensemble members. Perhaps a more important difference between the two methods is the level of explicitness in representing the physics in dynamical models, compared with the black-box style of making an analog forecast. Although this in itself does not necessarily mean the long-term forecast skill will be higher in dynamical models, it does mean that part of the forecast job is nailed down in a more precise manner by representing the underlying physics explicitly. Yet the bottom-line skill averages are what really determine the relative worth of the two approaches. So let's look at that. Since analog methods are generally no longer used to develop climate forecasts, it is hard to compare their skill with that of today's leading dynamical models. It is assumed that the dynamical models deliver higher skill, on average, than analog systems.
Perhaps this needs to be demonstrated in a formal way, and of course it is possible that the analog approach would still match the dynamical approach, or that the skill difference would not be statistically significant. But I tend to doubt it. In a recent study comparing the real-time forecasts of ENSO forecast models of different types since 2002, the dynamical models tended to have the highest skills. (See "Skill of real-time seasonal ENSO model predictions during 2002-11: Is our capability increasing?" by A. G. Barnston, M. K. Tippett, M. L. L'Heureux, et al., Bulletin of the AMS, 93, 631-651.)

My comparison between the analog method and more sophisticated statistical methods was in the context of the simple analog method used here, where only SST observations over the previous year were used, and where only about 60 years of record are available, so that only 6 cases passed as being at least minimally similar to the current case. The lack of enough close matches (or even a single VERY close match) is emphasized. Basing a forecast on a limited set of analogs does allow for some nonlinearity to enter into the forecast, but the luck of the draw on the selected similar cases leads to questionable stability in the forecast implications. In the case of Huug van den Dool's constructed analog, ALL years in the available record are used and given weights (positive or negative), so that the method becomes somewhat like a linear regression method, showing a bridge between analogs and multiple linear regression. Unless the nonlinear component in seasonal climate predictability is substantial, which it has not been shown to be, I favor regression (using all past years of data) over using a limited set of analogs to try to capture nonlinearity in addition to the linear components of the variability. I believe that in using a small set of analogs, sampling variability usually outweighs the beneficial incorporation of nonlinearity. This is why I think that more traditional statistical methods are likely to have higher skills than a limited set of analogs, none of which match the current case extremely well.
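The constructed-analog idea mentioned in the reply can be sketched as a least-squares problem. This is a toy illustration with random stand-in data, not van den Dool's actual implementation (which works with truncated EOFs of observed fields):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the record: 60 past years, each a 13-month
# Nino3.4 anomaly series (random numbers, purely illustrative).
past_years = rng.normal(0.0, 0.8, size=(60, 13))
current = rng.normal(0.0, 0.8, size=13)

# Constructed analog: weights (positive or negative) on ALL past years
# whose linear combination best reproduces the current series --
# effectively a linear regression, as noted in the reply above.
alpha, *_ = np.linalg.lstsq(past_years.T, current, rcond=None)

reconstruction = past_years.T @ alpha
residual = float(np.max(np.abs(reconstruction - current)))

# The "forecast" would then apply these same weights to whatever each
# past year did in the months that followed.
```

With more unknowns (years) than equations (months), the fit here is essentially exact; a real system would truncate or regularize to avoid overfitting the weights to noise.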

About the set of analog-derived forecasts shown here for 2014-2015, just because it currently looks similar to the spread of the NCEP CFSv2 model doesn't mean it is as good a method. To compare the skills of the two systems, a hindcast test covering all years (hopefully over 30) would need to be conducted. A single case rarely tells us very much about what level of skill to expect over a long term.

Thanks for your comments. I entirely agree with almost everything you wrote, except that you must see that the simple example you provided, though a single case, does not back up the thesis of the post. It is very important for people to understand and be skeptical of an overly simplified analog approach (we see this a lot in the field); I understand that is the purpose of the post. However, even a simple method like the one you designed, if done on the entire data set, could be a useful tool with a quantified estimate of forecast skill. As you said, it seems it shouldn't be as good as a multiple linear regression, though.

I agree that using one signal - ENSO - to make near-term climate predictions has credible validity; it is also a key in CPC long-lead predictions, primarily because of its relative predictability and statistical correlations to 'weather' in many areas of the globe. For local, downscaled outlooks I have found a multi-signal approach works far better. While the ENSO suggests the SSTA component, others such as SOI, AO, EP/NP and NAO reflect the atmospheric response. By filtering analog years via a multi-signal approach we have found useful skill in locally generated, seasonal outlooks. The basic premise is simple: certain ENSO events trigger certain atmospheric responses; these responses are seen in the various atmospheric signals, typically at a lagged interval. By looking at the trends in ENSO and the trends in the atmospheric response, one can give a range of outcomes that enhance the ENSO-only approach. The moderate El Nino of 2009/10 is a good example. While the tools did not predict the record negative AO levels achieved that winter, they did suggest the negative AO/NAO pattern that would destructively influence the classic El Nino signal. And that it did. The Red River Valley of the North was colder and snowier than an El Nino-only prediction would have suggested. While the predictability of short-term signals such as AO, NAO, EP/NP etc. is undeniable, there are statistical approaches that can help 'hedge one's bets' as to the gross phase for the next few seasons. Obviously not perfect, but an enhancement to the ENSO-signal approach alone. It obviously goes well beyond simply looking at a signal's value as calculated at CPC; actually looking at the various patterns of surface pressure, mid-level height anomalies and jet structure associated with these values is crucial. So, there is help in diagnosing the ENSO signal and the impact on climate & weather in your area. It takes a lot of time, effort - and yes, good old-fashioned synoptic reasoning.
Basic attribution at its finest. It is as imperfect as any technique, but does have utility when used with the limitations in mind.

The main message here is well taken -- namely, that ENSO is not the only game in town when it comes to winter climate prediction in the U.S. Other well-known climate patterns such as AO, NAO, EP/NP, and others can play important roles also. In fact, we had two blog pieces about some of these other patterns: http://www.climate.gov/news-features/blogs/enso/other-climate-patterns-… and http://www.climate.gov/news-features/blogs/enso/how-much-do-climate-pat…. Although it was not clear whether the comment implied that these other sources of predictability are somewhat triggered or controlled by ENSO or not, it should be said that some of them may at least partially be so controlled, although they also likely have their own independent component. A point of differentiation between ENSO and these other phenomena is that ENSO tends to be better predicted than the other patterns, and MUCH better predicted once it has locked into one of its two phases (El Nino or La Nina), normally by September or October. (Note that this year is an exception, where even in November it is not entirely clear whether we will have a weak El Nino or not this winter, as the atmosphere has not completely been playing ball even if the SST has recently clearly exceeded the minimum threshold). Applying the idea of multi-faceted controls to analog forecasting, such a forecasting tool should be more effective if it were able to capture several phenomena instead of just ENSO alone. Such analog forecast systems have been developed. The main problem, however, is that the period of record from which to find analog matches is usually only several decades, and looking for more ways to define a good match (not just for ENSO) makes it even harder to find such a match. So the basic flaw in analog forecasting (lack of enough possible past cases to choose from) bites us even more severely when we look for a match in several dimensions instead of just one or two. 
As for localized, downscaled forecasts, I believe they are possible to the extent that the local data are built into the forecast system. Once the analog year(s) are picked, the resulting forecast can be applied to anything, whether the predictability is good or poor. To summarize my response, I say that analog forecasts may become slightly better (for both large-scale climate anomaly predictions or more downscaled, localized ones) when more dimensions are included in the analog search (e.g., more than just ENSO), but that the increment in value is not large because of the lack of predictability in the non-ENSO phenomena (with the possible exception of long-term climate change-related trends) and particularly because of the lack of a huge sample of past cases.

The model forecast regarding the two-year El Niño starting out weak in 2014-15 and then going strong for 2015-16 definitely falls in line with my theory that for every La Niña that begins an ENSO cycle, it must go El Niño at the trailing end of the respective cycle to balance it out. 2007-08, for example, was a moderate La Niña, then 2008-09 was ENSO-neutral, and 2009-10 was an El Niño whose SOI readings were almost exactly just as negative as 2007-08's SOIs were positive. 2002-03, as another example, had SOI readings that, with the exception of one single spike, were almost identically negative to 2000-01's positive values. 2004-05 appears to have been the same as the above cases when compared to 2003-04 on the Australian BoM's graphs (fluctuating SOIs usually mean very weak ENSO amplitude), and 2005-06 SOIs completely mirror 2006-07 across the neutral line. With the above cases considered and examined, I was able to conclude that this current cycle began with the exceptionally strong 2010-11 La Niña event, which is by many standards the strongest La Niña on record. That was followed by weak La Niña (which began as strong, then faded) for 2011-12, then ENSO-neutral for 2012-13 and 2013-14, followed by 2014-15, which, just like the 2011-12 La Niña, began as high-amplitude and faded to low-amplitude. These models of a double-dip El Niño that would end up weak for 2014-15 only to go (some might argue unprecedentedly) strong for 2015-16 to offset the 2010-11 La Niña, then, only back up this theory of mine.

First, I am Remy, I am 16 almost 17 going into Junior year in High School. I live in NY near NYC, and have been working with Joe D'Aleo, Bill Gray and other members of wxbell for a year now researching ocean circulations, teleconnections, oscillations and other large-scale weather patterns and their relation to each other and local weather. I also have my own website which I run with my friend, we do weather forecasts and outlooks. Go take a look! It's pretty sweet :) www.weatherinthehud.com I wrote this a few weeks ago: Regarding the El Nino, and the upcoming winter…Since I got back this is basically all I have been looking at. At camp I read 2 books, one called El Nino by J. Madeleine Nash, a fantastic book, and another book To Follow The Water by Dallas Murphy, also a fabulous book. I learned a ton in both these books on how the El Nino works and the oceans. So, using what I learned and then looking at the hundreds of available resources I looked at the El Nino. **I am not including the graphics here because it would take up too much room, and they are not 100% necessary** Right now we have the presence of the El Nino, obviously. We have the warm waters in the Eastern Pacific along the equator, east of the International Dateline. Most of the warmest waters right now can be seen off the coast of western South America with values above 3°C anomaly just off the coast. It is interesting to note that during the ’97-’98 Super Nino, we saw equally warm waters in the same region, but much more spread out throughout the Eastern Pacific. In addition, SST’s worldwide were a much different scenario, with cold pools in the Gulf of Alaska and Western Pacific and a cold Atlantic (although warming). With this 2015 Nino the warmest waters are much more concentrated to the area closest to the South American coast. In addition to this, the SST setup worldwide is completely different as well.
We have a very warm pool in the Gulf of Alaska and around Alaska, cold pool in western Pacific, very warm gulf, very warm Pacific in general from Alaska to South America, and a warm Atlantic with warm pools off the NE Coast and Greenland in the Labrador sea. As we all know, this past winter the main driver was not the NAO, as we had a positive NAO for the most part, it was the SST’s in the Pacific which set up the ridge in the West and trough in the East. Regardless of the El Nino situation this Winter, we still do have much of the same SST setup as we did last winter. This will still play a role, although influenced by the El Nino. One thing that I have been noticing in the forecasts for the winter I have seen so far, which in and of itself is very far away, is that the maps look identical to that of a “typical” El Nino year. There is never a “typical” El Nino year because there are other things that will affect the El Nino and will change the look. The El Nino is not the only player out there. IMHO. So, anyway, the warm +PDO is still very much entrenched in the Pacific, as it has been. The NAO for this winter will again, likely be + as the AMO goes – in its multi-decadal cycle. Looking back at El Nino years where we had a positive NAO we can get 1982-83, ’72-’73, and ’65-’66. These stayed positive DEC-JAN and turned Negative in February, except for ’72-’73. If you recall, last winter we had a similar situation where the NAO did go negative for a brief time in February. In ’82-’83 we also had a warm PDO for DJF along with +NAO DJ, negative in F. The Models for the last 2 months or so have been flirting around with the strength of the ENSO as we get into fall and Winter, however they have been more or less consistent with its ending. Important to note that the overall Model Plume of ENSO predictions from Mid July with 16 dynamic models and 8 statistical models still have a mean strength for the El Nino only getting up to a maximum of around +1.9-2.0°C anomaly in NINO3.4.
This has been more or less consistent for a while. The height of the Event should peak in around OND/NDJ at this point, in the middle of fall. I am going to keep stressing, the +PDO will have a big effect on the winter pattern, as it has for the last 2 seasons. In ’97-’98 we had a +EPO and +NAO/AO, which allowed the Pacific Air to entrench itself in the eastern portion of the USA. However, with a +PDO which will want to keep ridging in the west, and a –EPO, this will be more difficult. That being said, the NAO/AO will still be positive. It is very likely that we will have a warm start to winter before it changes on a dime, like last year in late January. 1966 and 2007 are good examples with weakening in the eastern basins for SST’s which led to a pattern flip in late January into February of those winters allowing for cold air and some good snows. 1983 was also a good year with very good snows in the NE and Mid Atlantic. The bottom line, I think, is that this is a hit or miss winter. If we continue the El Nino warming and develop a more traditional look like in 97-98 then the idea of a great winter for snow lovers will dwindle more. However, I think this will be like last winter (not as cold!) and turn on a dime in the latter half as the El Nino dies down and the PDO has a better chance of really taking hold. I like 1983, and 2007 for this. Winter is still very far off and as we all know things change fast and many times, plenty of time for forecasting it. If we can get the –NAO for a brief period, even if as brief as last winter, then this will help get snows and cold. The PDO should help with that as well. Let's hope the snow in the northern reaches stays and falls hard, and let's get epic Eurasian snowfall to boost the cold air a bit….lots of things to look at, lots of time. But anyway, just my opinion for now after 4 days of looking at the El Nino and tons of plots which I unfortunately cannot put in here for lack of space.
