
3398 VOLUME 17 JOURNAL OF CLIMATE

Long-Lead Seasonal Temperature and Precipitation Prediction Using Tropical Pacific SST Consolidation Forecasts

R. W. HIGGINS, H.-K. KIM, AND D. UNGER

Climate Prediction Center, NOAA/NWS/NCEP, Washington, D.C.

(Manuscript received 2 April 2003, in final form 2 March 2004)

ABSTRACT

Objective seasonal forecasts of temperature and precipitation for the conterminous United States are produced using tropical Pacific sea surface temperature forecasts for the Niño-3.4 region in conjunction with composites of observed temperature and precipitation keyed to phases of the ENSO cycle. The objective seasonal forecasts are validated against observations for the period February–March–April (FMA) 1995 to September–October–November (SON) 2002, and compared to NOAA's Official Seasonal Forecasts issued by the Climate Prediction Center (CPC) for the same period. The objective forecasts are shown to produce skill that is comparable to (and even exceeding) that achieved by the Official Seasonal Forecasts at all leads out to 12.5 months. The forecasts are divided into high-frequency (HF) and trend-adjusted (TA) components in order to show that seasonal forecasters could achieve higher skill in both temperature and precipitation forecasts by taking full advantage of trend information, especially at longer leads. The objective forecasts are fully automated and available each month as a tool for use in preparation of the Official Seasonal Forecasts. (The latest objective forecasts are available on the CPC homepage at http://www.cpc.ncep.noaa.gov/products/precip/CWlink/ENSO/total.html.)

1. Introduction

Improved understanding of the El Niño–Southern Oscillation (ENSO) cycle and its impacts has contributed to significant advances in seasonal prediction over the past two decades (e.g., Barnston et al. 1999). Many studies have documented relationships between the phase of the ENSO cycle and important climate phenomena, such as the Indian and Australasian monsoons, rainfall patterns along the Pacific coast of South America, and the temperature and precipitation patterns in North America (e.g., Horel and Wallace 1981; van Loon and Madden 1981; Rasmusson and Carpenter 1983; Ropelewski and Halpert 1986, 1987, 1989). Many studies have relied on composites keyed to a particular index, such as the Southern Oscillation index or the Niño-3.4 sea surface temperature (SST) index, to highlight regions of strong, consistent relationships to the ENSO cycle (e.g., Ropelewski and Halpert 1996).

On a monthly basis the National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Center (CPC) issues official seasonal (90 day) forecasts for United States temperature and precipitation for leads out to 12.5 months. Statistical input for the seasonal forecasts is obtained from the optimal climate normals (OCN; Huang et al. 1996) and the canonical correlation analysis (CCA; e.g., Barnston and Ropelewski 1992), while dynamical input comes from the National Centers for Environmental Prediction (NCEP) coupled model (Ji et al. 1998). Each input consists of graphical percentage anomaly (i.e., departure from random chance, which is 33.3% in a three-class system) maps of temperature and precipitation, together with appropriate skill masks. CPC forecasters use objectively weighted averages of the input, together with last month's forecasts, the latest information on the phase of the ENSO cycle, and SST forecasts to generate seasonal temperature and precipitation forecast maps for the nation. Annual reviews of the skill of CPC's seasonal forecasts have shown at least marginal skill at all leads, with much of the success in recent years due to the OCN (e.g., van den Dool et al. 1996, 1998).

Corresponding author address: Dr. R. W. Higgins, Analysis Branch, Climate Prediction Center, NOAA/NWS/NCEP, 5200 Auth Road, Camp Springs, MD 20746. E-mail: [email protected]

In addition to the statistical and dynamical inputs mentioned above, composites of United States temperature and precipitation keyed to the warm and cold phases of the ENSO cycle are frequently used as an additional tool of opportunity, especially for Northern Hemisphere (NH) winter and early spring forecasts, when the impacts of the ENSO cycle on the United States are the strongest. Typically these composites are based on sets of El Niño, La Niña, and ENSO-neutral years in the historical record. There is some subjectivity in the choice of index used to define a particular phase of the ENSO cycle, but the Niño-3.4 index (defined later) is a common choice.


1 SEPTEMBER 2004 HIGGINS ET AL. 3399

TABLE 1. Warm (El Niño) episodes during the period 1950–2002.

DJF JFM FMA MAM AMJ MJJ JJA JAS ASO SON OND NDJ

[Year entries for each season are garbled in this extraction and are not reproduced here.]

The use of composites always leads to a discussion about sample size (usually too small) and noise (high for small sample size). The compositing approach has been quite successful in elucidating general aspects of the relationship between the ENSO cycle and its impacts on a regional basis. However, composites can be misleading if they are dominated by one or two strong events or if there are not enough events in the composite to produce statistically meaningful results. Moreover, relationships between the ENSO cycle and impacts are not linear (e.g., van den Dool 2000), so one should not assume that there is a one-to-one correspondence between the strength of a particular warm (cold) episode and the magnitude of the associated impacts.

Despite the nonlinear relationship between the intensity of a warm or cold episode and the associated impacts, it is nevertheless useful to examine the fraction of episodes that are associated with temperature or precipitation anomalies in a three-class system (i.e., above, normal, or below) as a measure of the range of possibilities for a particular phase of the ENSO cycle. While this approach still does not clarify the relationship between the intensity of a warm or cold episode and the magnitude of its impacts, it does cast the composite information in a probabilistic form that is more directly useful to seasonal forecasters.

In this study we will evaluate the skill of a new seasonal forecast tool that combines forecasts of tropical Pacific SSTs and composites of United States precipitation and temperature keyed to the phase of the ENSO cycle to produce objective seasonal temperature and precipitation forecasts at leads out to 12.5 months. El Niño and La Niña episodes during the period 1950–2002 are selected using a historical sea surface temperature dataset and suitable definitions of El Niño and La Niña. As in NOAA's official seasonal forecasts, this tool indicates where probabilities of the above-normal, near-normal, or below-normal categories are increased above the climatological level of 1/3 (33.3%) for each category. Forecasts produced using this tool are validated against observations for the period February–April (FMA) 1995 to September–November (SON) 2002, and the skill of these forecasts is compared to that of NOAA's official seasonal forecasts. The objective forecasts are divided into high-frequency (HF) and trend-adjusted (TA) components in order to examine the relative skill of each component of the forecast.

Section 2 discusses the definitions used to select El Niño and La Niña events, the observed temperature and precipitation datasets used in the composites, and a local significance test for the composites. United States temperature and precipitation composite anomalies and percentage anomalies keyed to the phase of the ENSO cycle are presented in section 3. CPC's consolidated forecasts for Niño-3.4 SSTs and the objective temperature and precipitation forecasts for the United States are discussed in section 4. An evaluation of the skill of our objective forecasts and a comparison to NOAA's Official Seasonal Forecasts are presented in section 5. We summarize in section 6.

2. Methodology and data

a. Definitions of El Niño and La Niña

NOAA recently established operational definitions for El Niño and La Niña based on sea surface temperature departures from normal (for the 1971–2000 base period) in the Niño-3.4 region (5°N–5°S, 120°–170°W):

• El Niño: 3-month averages of SST departures in the Niño-3.4 region greater than or equal to +0.5°C;

• La Niña: 3-month averages of SST departures in the Niño-3.4 region less than or equal to −0.5°C.

These definitions are used to select events for the composites in this study. No distinction is made between weak, moderate, and strong events. The warm (El Niño) and cold (La Niña) episodes that satisfy these definitions during the period 1950–2002 are given in Tables 1 and 2, respectively.
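Under these definitions, event selection reduces to thresholding 3-month running means of Niño-3.4 SST anomalies. A minimal sketch of the selection rule follows; the anomaly values are fabricated for illustration (the real input is the SST dataset described in section 2b), and the function name is our own:

```python
def classify_enso(sst_anom, warm=0.5, cold=-0.5):
    """Classify each overlapping 3-month season from monthly Nino-3.4
    SST anomalies (departures from the 1971-2000 base period).

    Returns a list of (start_index, label) pairs, where label is
    'El Nino', 'La Nina', or 'neutral'."""
    labels = []
    for i in range(len(sst_anom) - 2):
        season_mean = sum(sst_anom[i:i + 3]) / 3.0
        if season_mean >= warm:
            labels.append((i, "El Nino"))
        elif season_mean <= cold:
            labels.append((i, "La Nina"))
        else:
            labels.append((i, "neutral"))
    return labels

# Illustrative (fabricated) monthly anomalies for one year:
anoms = [0.1, 0.4, 0.8, 1.0, 0.9, 0.2, -0.4, -0.6, -0.7, -0.5, 0.0, 0.1]
seasons = classify_enso(anoms)
```

In an operational setting the same rule would simply be applied season by season over the 1950–2002 record to build the event lists of Tables 1 and 2.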


TABLE 2. Cold (La Niña) episodes during the period 1950–2002.

DJF JFM FMA MAM AMJ MJJ JJA JAS ASO SON OND NDJ

[Year entries for each season are garbled in this extraction and are not reproduced here.]

b. Data

SST data for the period 1950–2002 are from the Extended Reconstructed Sea Surface Temperature dataset of Smith and Reynolds (2003). SST anomalies were defined as departures from base period (1971–2000) mean values. Data were averaged over the Niño-3.4 (5°N–5°S, 120°–170°W) region, and the El Niño and La Niña definitions were applied to choose events.

Observed United States surface air temperature data are from the analysis of Janowiak et al. (1999). Daily data are gridded to a horizontal resolution of 1.0° × 1.0° latitude–longitude for the period 1950–2002. Observed U.S. precipitation data are from the unified precipitation reanalysis of Higgins et al. (2000). Daily data are gridded to a horizontal resolution of 1.0° × 1.0° for the period 1950–2002. Both temperature anomalies and precipitation anomalies are defined as departures from base period (1971–2000) mean values for forecasts after May 2001 and as departures from base period (1961–90) mean values for forecasts prior to May 2001. This is done so that the comparison to NOAA's official seasonal forecasts is valid, since this is the manner in which the normal conversion was carried out in CPC's operations. Time series of the seasonal anomalies for each 3-month season (JFM, FMA, MAM, AMJ, . . .) were generated for the temperature and precipitation fields prior to the composite analysis.

c. Local significance test

In the composites we show the percentage of cases in the appropriate tercile class (i.e., the temperature and precipitation distributions are divided into thirds), expressed as departures from random chance (33.3%) of the indicated category, and use shading starting with percentage anomalies at 5%. It is important to know whether these anomalies are significant. In order to determine whether a particular tercile class has more events than expected (since this is the situation that is of interest to forecasters), we can apply a simple binomial test (H. van den Dool 2003, personal communication):

p = 1/3 (expected probability),
q = 2/3 = (1 − p),
n = number of cases,
sd = √(npq),
E = expected number of cases.

For a one-sided test, assuming that the distribution is approximately normal, E + (1.65 × sd) would be the 95% confidence threshold to conclude that the tercile class of interest has more cases than expected. In the composites of section 3 and the forecasts of section 4, the percentage anomalies satisfying the above confidence threshold are indicated in each figure caption.
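For the n = 13 JFM El Niño events, this test reproduces the 21.6% shading threshold quoted in the captions of Figs. 3 and 4. A short sketch of the computation (the function name is our own):

```python
import math

def percentage_anomaly_threshold(n, p=1.0 / 3.0, z=1.65):
    """One-sided 95% threshold for 'more cases than expected' in a
    tercile class, returned as a percentage-anomaly departure from
    random chance (33.3%)."""
    q = 1.0 - p
    sd = math.sqrt(n * p * q)     # binomial standard deviation
    e = n * p                     # expected number of cases
    threshold_cases = e + z * sd  # minimum case count at 95% confidence
    return 100.0 * (threshold_cases / n - p)

thresh = percentage_anomaly_threshold(13)  # about 21.6 for n = 13
```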

3. High-frequency and trend-adjusted composites

There have been significant trends in precipitation and temperature in the United States in recent decades, so it is worthwhile to examine the influence of trends on the ENSO composites. For this purpose, two basic types of composites are used:

• high-frequency composites, and
• trend-adjusted composites.

High-frequency (or detrended) composites are obtained after first removing 11-yr (15-yr) running means from the raw temperature (precipitation) seasonal time series. The 11- and 15-yr averages are used for the low-frequency signal to bring the time series in line with seasonal temperature prediction using OCN at the Climate Prediction Center (Huang et al. 1996). In particular, Huang et al. (1996) experimented with all possible averages and found that 10-yr (15-yr) averages are optimal (minimize root-mean-square error) for forecast-designed OCN. Because an odd number centers the running filter, 11-yr averages are used for temperature in this study. At the beginning and end of the time series, the closest possible approximation to the low-frequency signal is used. The HF composites are then computed directly. The TA composites are obtained after adding the most recent 11-yr (15-yr) average value (computed from the raw data after removing the base period 1971–2000 climatological mean) to the detrended time series. In this way the detrended time series are "adjusted" to account for the most recent estimate of the low-frequency climate signal.
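The HF/TA decomposition can be sketched for a single seasonal anomaly series, assuming a centered running mean with windows truncated near the ends (the "closest possible approximation" mentioned above); the helper names are illustrative:

```python
def running_mean(x, window):
    """Centered running mean; windows are truncated near the series
    ends so the closest possible low-frequency estimate is used."""
    half = window // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def hf_and_ta(series, window=11):
    """Split a seasonal anomaly series into a high-frequency (detrended)
    part and a trend-adjusted part: TA = HF + most recent low-frequency
    running-mean value."""
    lowfreq = running_mean(series, window)
    hf = [s - lf for s, lf in zip(series, lowfreq)]
    ta = [h + lowfreq[-1] for h in hf]
    return hf, ta

# A fabricated linearly trending series: the HF part vanishes in the
# interior, and the TA part carries the most recent trend level.
series = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2]
hf, ta = hf_and_ta(series)
```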

a. Anomalies

High-frequency and TA composites of United States temperature anomalies and precipitation anomalies for El Niño, La Niña, and ENSO-neutral events during 1950–2002 are available on the CPC homepage (http://www.cpc.ncep.noaa.gov/products/precip/CWlink/ENSO/total.html). Examples for JFM El Niño events are shown in Figs. 1 and 2. Composites on the CPC homepage are available for each season of the year (i.e., JFM, FMA, MAM, etc.).

b. Percentage anomalies

In order to determine the percentage of events that occur in a particular tercile class, it is necessary to determine class limits. Tercile class limits for the HF composites are determined from ranked values of the HF time series for the entire period of record. Tercile class limits for the TA composites are determined from ranked values of the total temperature (precipitation) field fit to a normal (gamma) distribution, using base period 1961–90 means prior to May 2001 and base period 1971–2000 means thereafter. Two base periods are used so that the results can be compared to the CPC official forecasts (also see section 2b).
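For the HF composites, the class limits come directly from ranked values of the time series. A minimal empirical sketch (the TA limits instead require fitting normal or gamma distributions, which is not shown here; the function name is our own):

```python
def tercile_limits(values):
    """Lower and upper tercile boundaries from the ranked values of a
    time series: the first elements of the middle and upper thirds."""
    ranked = sorted(values)
    n = len(ranked)
    return ranked[n // 3], ranked[(2 * n) // 3]

# Nine fabricated values 1..9 split cleanly into thirds {1,2,3},
# {4,5,6}, {7,8,9}, so the limits fall at 4 and 7.
lower, upper = tercile_limits(list(range(1, 10)))
```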

Maps of the fraction of El Niño, La Niña, and ENSO-neutral events during 1950–2002 that occurred in each temperature and precipitation class are available on the CPC homepage (http://www.cpc.ncep.noaa.gov/products/precip/CWlink/ENSO/total.html). Examples for JFM El Niño events are shown in Figs. 3 and 4. Results are expressed as departures from random chance (33.3%) of the indicated category. The events are those listed in Tables 1 and 2. ENSO-neutral years are those not listed in Tables 1 and 2. Composites on the CPC homepage are available for each season of the year (i.e., JFM, FMA, MAM, etc.).

4. Objective forecasts

a. Consolidated forecasts for Niño-3.4 sea surface temperatures

CPC issues consolidated forecasts for tropical Pacific SSTs (Barnston et al. 1999) once each month near midmonth, coincident with 30- and 90-day seasonal forecasts for temperature and precipitation in the United States. The consolidated forecast is currently a four-way linear regression, using as predictors the forecasts of one dynamical model [the NCEP Coupled Model (CMP; Ji et al. 1998)] and three statistical models [CCA (Barnston and Ropelewski 1992), Constructed Analog (CA; van den Dool 1994; van den Dool and Barnston 1995), and the Markov Model (Xue et al. 2000)]. The purpose is to combine several forecasts having different strengths into a consensus forecast with higher overall skill than each of its components. Because the NCEP coupled model forecasts go out to 9 months in advance, the consolidated forecasts for longer lead times are made from the other models.

In this study we employ the historical archive of consolidation forecasts that are available for the period from January 1957 to the present. CCA and CA components were used from January 1957 to May 1981, and CCA, CA, and CMP components were used from May 1981 to August 1996. From September 1996 to the present, the linear regression equations were derived from the complete set of forecasts available up to the forecast issuance time. The Markov component has been used since October 2002.

Forecast probabilities (fractional form) for the below-normal, near-normal, and above-normal terciles of Niño-3.4 SSTs are used. Terciles are estimated from the Niño-3.4 climatologies (1961–90 prior to 2001, and 1971–2000 afterward). The thresholds separating the above-normal (below-normal) and near-normal classes are determined by the seasonal mean plus (minus) 0.431 seasonal standard deviations. Probabilities are obtained by comparing the forecast distribution (assumed normal) against the thresholds. A sample forecast for Niño-3.4 SSTs made 6 December 2002 is shown in Fig. 5. Target months at each lead (central month of the 3-month season) are indicated along the abscissa.
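The conversion from a normal forecast distribution to tercile probabilities can be sketched with the normal CDF; 0.431 is approximately the z value that splits a normal distribution into equal thirds, so a forecast at climatology should return roughly 1/3 per class. Function names and arguments here are illustrative, not CPC's:

```python
import math

def normal_cdf(x, mu, sigma):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def tercile_probabilities(fcst_mean, fcst_sd, clim_mean, clim_sd, z=0.431):
    """Probabilities of below-, near-, and above-normal Nino-3.4 SST
    given a normal forecast distribution and climatological tercile
    thresholds at clim_mean -/+ z * clim_sd."""
    lo = clim_mean - z * clim_sd
    hi = clim_mean + z * clim_sd
    p_below = normal_cdf(lo, fcst_mean, fcst_sd)
    p_above = 1.0 - normal_cdf(hi, fcst_mean, fcst_sd)
    return p_below, 1.0 - p_below - p_above, p_above

# A forecast exactly at climatology: each class near 1/3.
probs = tercile_probabilities(0.0, 1.0, 0.0, 1.0)
```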

b. Temperature and precipitation forecasts

Objective seasonal forecasts of U.S. temperature and precipitation are obtained for lead times out to 12.5 months using CPC's consolidation forecast for Niño-3.4 SSTs (section 4a). In particular, the standardized anomaly version of the consolidation forecasts is used to obtain weights (i.e., a projection fraction) for El Niño, La Niña, and ENSO-neutral conditions at each lead. The weights are applied to the El Niño, La Niña, and ENSO-neutral composites (percentages are used, as described in section 3b) and summed to obtain the forecasts at each lead. The latest U.S. temperature and precipitation forecasts for all leads are available on


FIG. 1. The HF and TA temperature anomalies (°C) for El Niño events during JFM 1950–2002.


FIG. 2. The HF and TA precipitation anomalies (mm day⁻¹) for El Niño events during JFM 1950–2002.


FIG. 3. The HF and TA temperature percentage anomalies by tercile class for El Niño events during JFM 1950–2002. Shading indicates departures from random chance (33.3%) of the indicated category. Percentage anomalies greater than 21.6% satisfy the 95% confidence limit (n = 13).

the CPC homepage (http://www.cpc.ncep.noaa.gov/products/precip/CWlink/ENSO/total.html). These forecasts are automatically updated during midmonth, in time to be used in the preparation of the official seasonal forecasts. The projection fractions for La Niña, ENSO-neutral, and El Niño at each lead are also available in tabular format.

A sample of the HF and TA forecasts for JFM 2003 (made in mid-December 2002) is shown in Figs. 6 and 7, respectively. The projection fractions used for this forecast were 0.000, 0.010, and 0.990 for La Niña, ENSO-neutral, and El Niño. Percentage anomalies satisfying the 95% confidence limit are indicated in the figure captions.
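The forecast construction of section 4b is then a weighted sum of the three composites at each grid point, with the projection fractions as weights. A sketch using the JFM 2003 projection fractions quoted above; the composite percentage anomalies at the single grid point are hypothetical, and the function name is our own:

```python
def combine_composites(weights, composites):
    """Weighted sum of El Nino / ENSO-neutral / La Nina composite
    percentage anomalies at one grid point.  weights: dict mapping
    phase -> projection fraction (summing to 1); composites: dict
    mapping phase -> composite percentage anomaly."""
    assert abs(sum(weights.values()) - 1.0) < 1e-6
    return sum(weights[k] * composites[k] for k in weights)

# JFM 2003 projection fractions from the text:
weights = {"la_nina": 0.000, "neutral": 0.010, "el_nino": 0.990}
# Hypothetical composite percentage anomalies at one grid point:
composites = {"la_nina": -12.0, "neutral": 1.0, "el_nino": 20.0}
forecast = combine_composites(weights, composites)
```

With the El Niño fraction near 1, the combined value is dominated by the El Niño composite, as in the sample forecast.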


FIG. 4. The HF and TA precipitation percentage anomalies by tercile class for El Niño events during JFM 1950–2002. Shading indicates departures from random chance (33.3%) of the indicated category. Percentage anomalies greater than 21.6% satisfy the 95% confidence limit (n = 13).

5. Skill evaluation

In order to assess the skill of the objective seasonal forecasts, we produced HF and TA forecasts for FMA 1995 to SON 2002. For this period there are 92 overlapping 3-month seasons at the 0.5-month lead. It is important to note that the composites are generated at each forecast time using data up to (but not beyond) the forecast time. For example, the forecasts issued in January 2002 are based on data through December 2001. Since future ENSO states are not used in the hindcast period, the hindcasts do not involve dependent data, so the skill evaluation is fair.

As in the official CPC forecasts, each objective forecast was made for leads out to 12.5 months. The verification presented here is done against observed temperature and precipitation datasets (section 2b) at a horizontal resolution of 1° × 1°. The primary verification measure is the Heidke skill score (e.g., van den Dool et al. 1996; van den Dool et al. 1998), as in CPC operations. We also consider the ranked probability skill score (e.g., Wilks 1995, 2000), which is more suitable for evaluation of probability forecasts.

FIG. 5. Consolidation forecast for Niño-3.4 SSTs made 6 Dec 2002. The forecast is expressed in terms of standardized anomalies (°C). Target months are indicated along the abscissa.

a. Heidke skill score

At each grid point, the category with the maximum percentage is chosen (based on tercile classes for above-, below-, or near-normal). If all three classes have percentage anomalies less than 5%, then equal chances (EC) are assigned, as in CPC operations. The class limits of the observed temperature (precipitation) distribution are determined by fitting to a normal (gamma) distribution (see section 3b). The Heidke skill scores are calculated for the above- and below-normal forecasts when departures from random chance (33.3%) exceed 5%. We define two skill scores, as in van den Dool et al. (1996). The Heidke score on non-EC grid points, referred to as SS1, is

SS1 = 100 × (H1 − T1/3) / [(2/3)T1],

where T1 is the total number of non-EC forecasts (number of grid points times the number of forecasts), and H1 is the number of hits for non-EC forecasts. For a score on all grid points, we assume that 1/3 of the EC forecasts are right and 2/3 are wrong, so that

SS2 = SS1 × (T1/T),

where T is the total number of forecasts (number of grid points times the number of seasons). SS2 is useful for intercomparing different tools, or for the same tool at different leads, when the coverage is not the same.
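The two score definitions translate directly into code; the counts used here are hypothetical, chosen only to exercise the formulas:

```python
def heidke_ss1(hits, non_ec_forecasts):
    """SS1 = 100 * (H1 - T1/3) / ((2/3) * T1): skill relative to the
    1/3 random-chance hit rate, over non-EC forecasts only."""
    t1 = float(non_ec_forecasts)
    return 100.0 * (hits - t1 / 3.0) / ((2.0 / 3.0) * t1)

def heidke_ss2(ss1, non_ec_forecasts, total_forecasts):
    """SS2 = SS1 * (T1/T): SS1 discounted by the non-EC coverage rate,
    crediting EC forecasts with a 1/3 hit rate."""
    return ss1 * (non_ec_forecasts / float(total_forecasts))

# Hypothetical counts: 500 hits among 1200 non-EC forecasts, 3000 total.
ss1 = heidke_ss1(500, 1200)
ss2 = heidke_ss2(ss1, 1200, 3000)
```

A perfect non-EC forecast set gives SS1 = 100, random guessing gives SS1 = 0, and SS2 shrinks toward 0 as the non-EC coverage rate T1/T falls.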

For comparison, the Heidke skill scores of NOAA's Official Seasonal Forecasts are also computed for the same period (FMA 1995–SON 2002). Before the Heidke skill scores are calculated, the official forecasts are gridded from 102 climate divisions to a horizontal resolution of 1° × 1° using a Cressman (1959) scheme with modifications (Glahn et al. 1985; Charba et al. 1992) so that the same observed class limits can be used as described above.

The Heidke skill scores of the objective TA temperature forecasts (solid lines) and the official temperature forecasts (dashed lines) at the 0.5-month lead during the period FMA 1995–SON 2002 are compared in Fig. 8. Note that the coverage rate (C = T1/T) of the official forecasts is somewhat smaller than might be expected because 1) non-EC forecasts are for departures from random chance greater than 5% (as for the objective forecasts above) and 2) the gridding process may have a tendency to smooth the forecasts.

FIG. 6. Objective HF and TA seasonal temperature forecasts for JFM 2003 based on CPC's consolidation forecast for Niño-3.4 SSTs. Results are expressed as percentage anomalies by tercile class. Shading indicates departures from random chance (33.3%) of the indicated category. The projection fractions for this forecast were 0.000, 0.010, and 0.990 for La Niña, ENSO-neutral, and El Niño. Percentage anomalies greater than 21.6% satisfy the 95% confidence limit (n = 13).

Although the official forecasts have higher SS1 than the objective forecasts, the objective forecasts usually have higher SS2 because of the larger coverage rate of the non-EC forecasts, particularly during the 1995–97 period. The SS2 of the objective forecasts is the highest during the winters of the major El Niño in 1997–98 and the subsequent La Niña in 1998–99 and 1999–2000. The SS2 of the objective HF temperature forecasts also shows similar behavior, with lower scores than for the TA forecasts (not shown).

FIG. 7. Objective HF and TA U.S. seasonal precipitation forecasts for JFM 2003 based on CPC's consolidation forecast for Niño-3.4 SSTs. Results are expressed as percentage anomalies by tercile class. Shading indicates departures from random chance (33.3%) of the indicated category. The projection fractions for this forecast were 0.000, 0.010, and 0.990 for La Niña, ENSO-neutral, and El Niño. Percentage anomalies greater than 21.6% satisfy the 95% confidence limit (n = 13).

The mean value of SS2 during FMA 1995–SON 2002 for the objective TA temperature forecasts at the 0.5-month lead is slightly higher than that of the official forecasts (9.5 versus 7.1). However, at longer leads the TA temperature forecasts maintain significantly higher SS2 skill scores than either the official forecasts or the objective HF forecasts (Fig. 9b). For example, the SS2 skill scores for the objective TA temperature forecast and the official forecast at the 12.5-month lead are 10.0 and 1.9, respectively. This result implies that additional consideration of recent trends in NOAA's official seasonal forecasts would lead to significant increases in skill, particularly at longer leads. The SS2 skill scores for precipitation show similar results to those for temperature, though the scores are lower (Fig. 10).

In order to determine where the skill in the seasonalforecasts is coming from, we examined the geographical

Unauthenticated | Downloaded 01/24/22 06:39 PM UTC

1 SEPTEMBER 2004 3409H I G G I N S E T A L .

FIG. 8. Heidke skill scores (a) SS1 and (b) SS2 for the 0.5-monthlead objective TA temperature forecasts (solid lines) and NOAA’sofficial temperature forecasts (dashed lines) for the conterminousUnited States during FMA 1995–SON 2002. The horizontal linesindicate mean values for the period. Coverage rates (5T1/T) areshown for (c) the objective TA temperature forecasts and (d) NOAA’sofficial temperature forecasts.

FIG. 9. Heidke skill scores (a) SS1 and (b) SS2 as a function of forecast lead time in months for the objective TA temperature forecasts (open circles), objective HF temperature forecasts (open squares), and NOAA's official temperature forecasts (closed circles).

FIG. 10. Heidke skill scores (a) SS1 and (b) SS2 as a function of forecast lead time in months for the objective TA precipitation forecasts (open circles), objective HF precipitation forecasts (open squares), and NOAA's official precipitation forecasts (closed circles).

distribution of the number of hits (H1) for non-EC forecasts with percentage anomalies at or above 5%. The 0.5-month lead official temperature forecasts have the highest counts in the U.S. Southwest, where forecasters have most consistently taken advantage of recent temperature trends (Fig. 11a). The objective TA temperature forecasts also have the highest counts in this region (Fig. 11c), but show a higher number of hits in other regions of the country as well. The number of hits drops dramatically in the official forecasts at 12.5-month lead (Fig. 11b), but changes very little in the TA temperature forecasts (Fig. 11d). Similar results are obtained for precipitation (Fig. 12), though the locations of the maximum numbers of hits are different in the official and objective forecasts at both leads. We also examined the seasonality of the geographical distribution of H1 and obtained the same basic set of patterns for both temperature and precipitation, but with an annual cycle tied to observed trends (not shown). Overall, these results confirm that the TA objective forecasts produce higher skill over much of the country, especially at longer leads. Since the number of hits for the HF objective forecasts (not shown) is lower than that for the TA, these results also underscore the importance of separating the objective forecasts into HF and TA components.
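The hit-count verification described above can be made concrete with a short sketch. The function name is ours, and only the generic three-class Heidke form is shown, with H hits out of T verified forecasts and E = T/3 hits expected by chance; the SS1 and SS2 variants used in the paper (defined earlier in the text) differ in how EC forecasts enter the counts.

```python
import numpy as np

def heidke_score(forecast_class, observed_class):
    """Generic Heidke skill score for three-class (tercile) forecasts.

    With T verified forecasts, H hits, and E = T/3 hits expected by
    random chance, SS = 100 * (H - E) / (T - E).  Illustrative only;
    the paper's SS1/SS2 variants handle EC forecasts differently.
    """
    f = np.asarray(forecast_class)
    o = np.asarray(observed_class)
    T = f.size                 # number of verified (non-EC) forecasts
    H = np.sum(f == o)         # number of hits (H1 in the text)
    E = T / 3.0                # hits expected at random chance (33.3%)
    return 100.0 * (H - E) / (T - E)

# A perfect categorical forecast scores 100; random guessing scores ~0.
print(heidke_score([0, 1, 2, 1], [0, 1, 2, 1]))  # 100.0
```

A forecast that misses every case scores -50 under this form, the floor of the generic three-class Heidke score.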

b. Ranked probability skill score

The ranked probability skill score (RPSS) is intended for probability forecasts, while the Heidke skill score is more suitable for categorical forecasts. Since CPC will convert from categorical (e.g., Heidke skill score) to probabilistic (e.g., RPSS) skill measures in the near future, in this section we provide an evaluation of the objective forecasts using RPSS.

To obtain RPSS, we calculate a ranked probability score (RPS) at each gridpoint:


FIG. 11. Number of hits (H1) in (a) NOAA's official seasonal temperature forecasts at 0.5-month lead, (b) NOAA's official seasonal temperature forecasts at 12.5-month lead, (c) the objective TA temperature forecasts at 0.5-month lead, and (d) the objective TA temperature forecasts at 12.5-month lead. At 0.5-month lead there are 92 forecasts and at 12.5-month lead there are 80 forecasts for the period FMA 1995–SON 2002. Percentage anomalies greater than 5% are shaded.

\mathrm{RPS} = \sum_{m=1}^{J} (Y_m - O_m)^2,

where Y_m and O_m are cumulative forecasts (y_j) and observations (o_j), respectively, and they are defined as

Y_m = \sum_{j=1}^{m} y_j \quad \text{and} \quad O_m = \sum_{j=1}^{m} o_j, \qquad m = 1, \ldots, J.

The RPSS measures the skill with respect to climatology (i.e., EC forecasts), and is defined as

\mathrm{RPSS} = 1 - \frac{[\mathrm{RPS}]}{[\mathrm{RPS}_{cl}]},

where [ ] denotes the average over the gridpoints in the forecast area, and RPS_cl represents the ranked probability score of the climatology. The RPSS of a perfect forecast is 1. Negative RPSS values imply that the forecast is less skillful than an EC forecast. Note that, according to the definition of the ranked probability score, the RPSS carries a heavy penalty for two-category errors, unlike the Heidke skill score, which is insensitive to the size of a miss.
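Under the definitions above, the RPS/RPSS computation can be sketched as follows. The function and array names are ours; each forecast y and observation o is a vector of probabilities over the J ordered categories (summing to 1 at each gridpoint), and the EC forecast assigns 1/3 to each tercile.

```python
import numpy as np

def rps(y, o):
    """Ranked probability score at one gridpoint.

    y: forecast probabilities for the J ordered categories (sum to 1)
    o: observed probabilities (1 for the verifying category, else 0)
    RPS = sum_m (Y_m - O_m)^2, with Y_m and O_m the cumulative sums.
    """
    Y = np.cumsum(y)
    O = np.cumsum(o)
    return np.sum((Y - O) ** 2)

def rpss(forecasts, obs, clim):
    """RPSS = 1 - [RPS]/[RPS_cl], with [ ] averaging over gridpoints."""
    mean_rps = np.mean([rps(y, o) for y, o in zip(forecasts, obs)])
    mean_rps_cl = np.mean([rps(clim, o) for o in obs])
    return 1.0 - mean_rps / mean_rps_cl

clim = np.array([1/3, 1/3, 1/3])                 # EC (climatology) forecast
obs = [np.array([0, 0, 1]), np.array([1, 0, 0])]  # verifying terciles
fcst = [np.array([0.1, 0.3, 0.6]), np.array([0.5, 0.3, 0.2])]
print(rpss(fcst, obs, clim))                     # > 0: beats climatology
```

A perfect forecast set gives RPSS = 1, and a forecast set identical to climatology gives RPSS = 0, matching the definition in the text.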

Figure 13 shows the RPSS of the objective TA temperature forecasts (solid line) and NOAA's official temperature forecasts (dashed line) at the 0.5-month lead during the period FMA 1995–SON 2002. Although the


FIG. 12. Number of hits (H1) in (a) NOAA's official seasonal precipitation forecasts at 0.5-month lead, (b) NOAA's official seasonal precipitation forecasts at 12.5-month lead, (c) the objective TA precipitation forecasts at 0.5-month lead, and (d) the objective TA precipitation forecasts at 12.5-month lead. At 0.5-month lead there are 92 forecasts and at 12.5-month lead there are 80 forecasts for the period FMA 1995–SON 2002. Percentage anomalies greater than 5% are shaded.

details are different, the characteristics of the RPSS are similar to those of the Heidke skill score (compare Fig. 13 with Fig. 8b). Again, the RPSS of the objective forecast is highest during the winters of major El Niño (1997–98) and La Niña (1998–99, 1999–2000) episodes, and lowest during the winter of 2000–01, when midlatitude variability (e.g., the Arctic Oscillation) was unusually large. Discussion of this latter case is deferred to the next section.

The RPSS of the official forecasts is relatively flat compared to that of the objective forecasts, that is, closer to 0 (Fig. 13). This feature is consistent with the fact that EC is used more frequently in the official forecasts (cf. Fig. 8d, which shows the coverage rates). The objective forecasts are made at each gridpoint, and there is a tendency for fewer EC forecasts.

For our skill evaluation, we separated the full set of objective forecasts into two sets: those with positive RPSS values and those with negative RPSS values. The same procedure was used for the official forecasts. Figure 14 shows the average RPSS values for the forecasts with positive (Figs. 14a and 14c) and negative (Figs. 14b and 14d) skill. The numbers of forecasts included in each average are given in Table 3. The impression


FIG. 13. RPSS for the 0.5-month lead objective TA temperature forecasts (solid line) and NOAA's official temperature forecasts (dashed line) for the conterminous United States during FMA 1995–SON 2002.

FIG. 14. RPSS as a function of forecast lead time in months for the temperature forecasts (a) with positive skill and (b) with negative skill, and for the precipitation forecasts (c) with positive skill and (d) with negative skill. The lines with open circles, open squares, and closed circles represent the objective TA forecasts, the objective HF forecasts, and NOAA's official forecasts, respectively.

in Fig. 14 does not change if the selection criterion is based on either the official or the objective forecasts (i.e., for most forecasts the RPSS values are of the same sign in both the objective and the official forecasts; cf. Fig. 13).

For forecasts with positive RPSS values, the average skill is higher in the objective forecasts (Figs. 14a and 14c). Conversely, for forecasts with negative RPSS values, the average skill is lower in the objective forecasts (Figs. 14b and 14d). In the latter case this is mainly because the RPSS includes a heavy penalty for two-category errors, which are more likely in the objective forecasts, as explained above.
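The two-category penalty is easy to verify from the RPS definition itself: with three ordered classes, a confident below-normal forecast verified against an above-normal observation accrues exactly twice the penalty of a miss into the adjacent near-normal class. A minimal numeric sketch (array values and the function name are ours):

```python
import numpy as np

def rps(y, o):
    """RPS: sum of squared cumulative-probability errors over categories."""
    return float(np.sum((np.cumsum(y) - np.cumsum(o)) ** 2))

below = [1.0, 0.0, 0.0]    # confident below-normal forecast
one_off = [0.0, 1.0, 0.0]  # observation in the adjacent (near-normal) class
two_off = [0.0, 0.0, 1.0]  # observation two classes away (above normal)

print(rps(below, one_off))  # 1.0
print(rps(below, two_off))  # 2.0 -> double the penalty
```

The Heidke score counts both cases identically as single misses, which is why the two measures can rank the same forecast sets differently.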

6. Summary and discussion

Objective seasonal forecasts of temperature and precipitation for the conterminous United States were produced using tropical Pacific SST forecasts for the Niño-3.4 region in conjunction with composites of United States temperature and precipitation keyed to phases of the ENSO cycle.

The objective seasonal forecasts were validated against observations for the period FMA 1995–SON 2002, and compared to NOAA's official seasonal forecasts for the same period. When the Heidke skill measure was used, the objective forecasts were shown to


TABLE 3. Number of cases with positive (negative) RPSS during the period FMA 1995–SON 2002.

Lead (months)            0.5      1.5      2.5      3.5      4.5      5.5      6.5      7.5      8.5      9.5     10.5     11.5     12.5
TA temperature         46 (46)  45 (46)  44 (46)  45 (44)  41 (47)  41 (46)  39 (47)  40 (45)  40 (44)  39 (44)  34 (48)  34 (47)  34 (46)
HF temperature         33 (59)  34 (57)  40 (50)  39 (50)  34 (54)  30 (57)  27 (59)  32 (53)  30 (54)  30 (53)  26 (56)  26 (55)  26 (54)
Official temperature   54 (38)  58 (33)  55 (35)  57 (32)  53 (35)  49 (38)  50 (36)  47 (38)  49 (35)  49 (34)  48 (34)  45 (36)  45 (35)
TA precipitation       42 (50)  39 (52)  38 (52)  38 (51)  38 (50)  37 (50)  36 (50)  36 (49)  34 (50)  35 (48)  33 (49)  34 (47)  34 (46)
HF precipitation       44 (48)  44 (47)  47 (43)  43 (46)  44 (44)  42 (45)  45 (41)  41 (44)  41 (44)  34 (49)  36 (46)  39 (42)  39 (41)
Official precipitation 49 (43)  39 (52)  34 (56)  39 (50)  40 (48)  40 (47)  41 (45)  36 (49)  32 (52)  35 (48)  32 (50)  25 (56)  28 (52)

produce skill that is comparable to (and at times exceeding) that achieved by the official seasonal forecasts at all leads out to 12.5 months. However, when the RPSS skill measure was used, on average, the objective forecasts were better only when the RPSS was positive.

The forecasts were divided into high-frequency (HF) and trend-adjusted (TA) components. Based on a comparison of the objective forecasts and NOAA's official seasonal forecasts, we found that the TA objective forecasts generally do as well as or better than the operational forecasts. This implies that forecasters could achieve higher skill in both temperature and precipitation forecasts by using the TA objective tool as a benchmark when producing their seasonal outlooks. An examination of where the skill is coming from showed that the number of hits drops dramatically in the official seasonal forecasts at 12.5-month lead, whereas it drops very little in the objective TA forecasts. These results underscore the importance of separating the objective forecasts into HF and TA components.

The objective forecasts have been fully automated at CPC since the autumn of 2002 and are available each month as a tool for use in the preparation of seasonal forecasts. The objective forecasts could be used as a first guess, and a seasonal forecaster could then use other tools and current conditions to adjust them. The latest objective forecasts are available on the CPC homepage (http://www.cpc.ncep.noaa.gov/products/precip/CWlink/ENSO/total.html).

Use of the Heidke and RPSS skill measures reveals several limitations of the objective forecast method.

1) The forecast skill relies heavily on the SST forecasts (i.e., the consolidation SST forecasts for the Niño-3.4 region in this case). Thus, when the SST forecast is poor, the objective forecasts tend to be poor (e.g., the spring of 1995; see Fig. 13). This implies that considerable effort should be applied to improving the SST forecasts.

2) As in other forecast methods, the skill during ENSO transition periods decreases dramatically for the objective forecasts (e.g., the spring of 1998; see Figs. 8b and 13).

3) The objective forecasts are primarily based on the impacts of forcing from the tropical Pacific. In cases when ENSO influences are absent and midlatitude variability dominates (e.g., the winter of 2000–01), the skill of the objective forecasts decreases (see Figs. 8b and 13).

In the future, we will consider some additional issues, including forecasts for the states of Alaska and Hawaii, the geographical distribution of skill, and the seasonality of skill. In an effort to improve our seasonal forecast skill, as a next step we intend to focus on the impact of midlatitude variability (e.g., the Arctic Oscillation and the Pacific–North American pattern) on the seasonal forecasts over the United States.

Acknowledgments. We gratefully acknowledge John Janowiak for help in accessing the United States surface air temperature data, Huug van den Dool for providing the significance test, and Vern Kousky for providing the El Niño and La Niña episodes in Tables 1 and 2, respectively. We also acknowledge Ed O'Lenic and Vern Kousky for many useful conversations during the course of this investigation, and two anonymous reviewers for suggestions that led to significant improvements in the manuscript.

REFERENCES

Barnston, A., and C. Ropelewski, 1992: Prediction of ENSO episodes using canonical correlation analysis. J. Climate, 5, 1316–1345.

——, M. H. Glantz, and Y. He, 1999: Predictive skill of statistical and dynamical climate models in SST forecasts during the 1997–98 El Niño episode and the 1998 La Niña onset. Bull. Amer. Meteor. Soc., 80, 217–243.

Charba, J. P., A. W. Harrell III, and A. C. Lackner III, 1992: A monthly precipitation amount climatology derived from published atlas maps: Development of a digital data base. TDL Office Note 92-7, NOAA, U.S. Department of Commerce, 20 pp.

Cressman, G. P., 1959: An operational objective analysis system. Mon. Wea. Rev., 87, 367–374.

Glahn, H. R., T. L. Chambers, W. S. Richardson, and H. P. Perrotti, 1985: Objective map analysis for the local AFOS MOS Program. NOAA Tech. Memo. NWS TDL 75, NOAA, U.S. Department of Commerce, 34 pp.

Higgins, R. W., W. Shi, E. Yarosh, and R. Joyce, 2000: Improved United States precipitation quality control system and analysis. NCEP/Climate Prediction Center Atlas Number 7, 40 pp. [Available online at http://www.cpc.ncep.noaa.gov/researchppapers/nceppcpcpatlas/7/index.html, and from Climate Prediction Center, World Weather Building, Room 605, Camp Springs, MD 20746.]

Horel, J. D., and J. M. Wallace, 1981: Planetary-scale atmospheric phenomena associated with the Southern Oscillation. Mon. Wea. Rev., 109, 813–829.

Huang, J., H. M. van den Dool, and A. G. Barnston, 1996: Long-lead seasonal temperature prediction using optimal climate normals. J. Climate, 9, 809–817.

Janowiak, J. E., G. D. Bell, and M. Chelliah, 1999: A gridded data base of daily temperature maxima and minima for the conterminous United States: 1948–1993. NCEP/Climate Prediction Center Atlas No. 6, 50 pp. [Available from Climate Prediction Center, World Weather Building, Room 605, Camp Springs, MD 20746.]

Ji, M., D. W. Behringer, and A. Leetmaa, 1998: An improved coupled model for ENSO prediction and implications for ocean initialization. Part II: The coupled model. Mon. Wea. Rev., 126, 1022–1034.

Rasmusson, E. M., and T. H. Carpenter, 1983: The relationship between eastern equatorial Pacific sea surface temperatures and rainfall over India and Sri Lanka. Mon. Wea. Rev., 111, 517–528.

Ropelewski, C. F., and M. S. Halpert, 1986: North American precipitation and temperature patterns associated with the El Niño/Southern Oscillation (ENSO). Mon. Wea. Rev., 114, 2352–2362.

——, and ——, 1987: Global and regional scale precipitation patterns associated with the El Niño/Southern Oscillation. Mon. Wea. Rev., 115, 1606–1626.

——, and ——, 1989: Precipitation patterns associated with the high index phase of the Southern Oscillation. J. Climate, 2, 268–284.

——, and ——, 1996: Quantifying Southern Oscillation–precipitation relationships. J. Climate, 9, 1043–1059.

Smith, T. M., and R. W. Reynolds, 2003: Extended reconstruction of global sea surface temperatures based on COADS data (1854–1997). J. Climate, 16, 1495–1510.

van den Dool, H. M., 1994: Searching for analogues, how long must we wait? Tellus, 46A, 314–324.

——, 2000: Recent trends in U.S. temperature: Diagnostics and prediction. Proc. 25th Annual Climate Diagnostics and Prediction Workshop, Palisades, NY, NOAA/CPC, 157–160.

——, and A. G. Barnston, 1995: Forecasts of global sea surface temperature out to a year using the constructed analogue method. Proc. 19th Annual Climate Diagnostics Workshop, College Park, MD, NOAA/CPC, 416–419.

——, and Coauthors, 1996: 1st annual review of skill of CPC long lead seasonal predictions. Proc. 21st Climate Diagnostics and Prediction Workshop, Huntsville, AL, NOAA/CPC, 13–16.

——, and Coauthors, 1998: 3rd annual review of skill of CPC real time long lead predictions: How well did we do during the great ENSO event of 1997–1998? Proc. 23d Climate Diagnostics and Prediction Workshop, Miami, FL, NOAA/CPC, 9–12.

van Loon, H., and R. Madden, 1981: The Southern Oscillation. Part I: Global associations with pressure and temperature in northern winter. Mon. Wea. Rev., 109, 1150–1162.

Wilks, D. S., 1995: Statistical Methods in the Atmospheric Sciences: An Introduction. International Geophysics Series, Vol. 59, Academic Press, 464 pp.

——, 2000: Diagnostic verification of the Climate Prediction Center long-lead outlooks, 1995–98. J. Climate, 13, 2389–2403.

Xue, Y., A. Leetmaa, and M. Ji, 2000: ENSO prediction with Markov models: The impact of sea level. J. Climate, 13, 849–871.
