Quarterly Journal of the Royal Meteorological Society Q. J. R. Meteorol. Soc. 137: 673–689, April 2011

Bridging the gap between weather and seasonal forecasting: intraseasonal forecasting for Australia

Debra Hudson,* Oscar Alves, Harry H. Hendon and Andrew G. Marshall
Centre for Australian Weather and Climate Research (CAWCR), Melbourne, Victoria, Australia

*Correspondence to: D. Hudson, Bureau of Meteorology, CAWCR, GPO Box 1289, Melbourne, Victoria 3001, Australia. E-mail: [email protected]

This study examines the potential use of the Predictive Ocean Atmosphere Model for Australia (POAMA), the Bureau of Meteorology's dynamical seasonal forecast system, as an intraseasonal prediction tool for Australia. This would fill the current prediction capability gap between weather forecasts and seasonal outlooks for Australia. The intraseasonal forecast skill of a 27-year hindcast dataset is investigated, focusing on precipitation and minimum and maximum temperatures over Australia in the second fortnight (average days 15–28 of the forecast). Most of the skill for forecasting precipitation and maximum temperature in the second fortnight is focused over eastern Australia, during austral winter and spring for precipitation and during spring for maximum temperature. For this region and seasons the forecast of the second fortnight performs generally better than using climatology, persistence of observed, or persistence of the forecast for the first fortnight (average days 1–14). The model has generally poor skill in predicting minimum temperatures. The role of key drivers of Australian climate variability for providing predictability at intraseasonal time-scales is investigated. This is done for the austral winter and spring seasons, when POAMA's skill for predicting precipitation is highest. Forecast skill is found to be increased during extremes of the El Niño Southern Oscillation, the Indian Ocean Dipole and the Southern Annular Mode. The regions of impact of these modes of climate variability on forecast skill are similar to those regions identified in observed studies as being influenced by the respective drivers. In contrast, there is no significant relationship between intraseasonal forecast skill for precipitation and the amplitude of the Madden Julian Oscillation (MJO) in winter and spring, although the analysis does not distinguish between the phases of the MJO. The results indicate that the use of POAMA for intraseasonal forecasting is promising.

Key Words: monthly forecasts; prediction skill; coupled ocean-atmosphere model

Received 31 May 2010; Revised 24 November 2010; Accepted 10 December 2010; Published online in Wiley Online Library 21 March 2011

Citation: Hudson D, Alves O, Hendon HH, Marshall AG. 2011. Bridging the gap between weather and seasonal forecasting: intraseasonal forecasting for Australia. Q. J. R. Meteorol. Soc. 137: 673–689. DOI:10.1002/qj.769

1. Introduction

The Bureau of Meteorology has been providing weather forecasts since 1908 and seasonal climate prediction since 1989 (Day, 2007). However, there is a notable gap in prediction capability beyond 1 week and shorter than a season. This is because it is notoriously difficult to provide skilful predictions for this intraseasonal or monthly time-scale, particularly from the second week to the first month of the forecast. As noted by Vitart (2004), after about the first week the forecast system has typically lost most of the information from the atmospheric initial conditions, which are the basis for weather forecasts. Also, in the first month the ocean state probably has not changed much since the start of the forecast; hence it is difficult to beat persistence as a forecast (Vitart, 2004). However, over the past few years, with the improvement of numerical prediction models, ensemble prediction techniques and initialization, skilful intraseasonal predictions based on general circulation models are now being delivered operationally (e.g. by the European Centre for Medium-Range Weather Forecasts, ECMWF; Vitart et al., 2008) and there is an increasing focus on dynamical intraseasonal prediction (e.g. Toth et al., 2007; Gottschalck et al., 2008).

Forecast information on intraseasonal time-scales is potentially useful for a range of sectors of society, such as agriculture (e.g. Hammer et al., 2000; Meinke and Stone, 2005), energy (e.g. Roulston et al., 2003; Taylor and Buizza, 2003), water management (e.g. Sankarasubramanian et al., 2009) and the financial markets and insurance (e.g. Zeng, 2000; Jewson and Caballero, 2003). In Australia there has been increasing demand for intraseasonal forecasts from the agricultural community in particular (e.g. CliMag, 2009). Many farmers currently respond to climate variability through flexibility in their practices. Reliable intraseasonal forecasts may be valuable for decision making related to the scheduling of planting and harvesting, as well as within-season decisions, such as those related to fertilizer or pesticide application (Meinke and Stone, 2005; CliMag, 2009). Some farmers are already integrating existing intraseasonal forecasts into their decision-making framework. For example, some cotton growers in Queensland schedule the timing of their cotton harvests based on the expected passage of the next Madden Julian Oscillation (MJO) (Meinke and Stone, 2005). Forecasts on the intraseasonal time-scale would add to existing climate information available to farmers, assisting in the development of better risk management strategies.

Currently at the Bureau of Meteorology dynamical seasonal prediction is based on the Predictive Ocean Atmosphere Model for Australia (POAMA), which is a coupled ocean–atmosphere climate model and data assimilation system (Alves et al., 2003; Wang et al., 2008; Hendon et al., 2009; Lim et al., 2009; Spillman and Alves, 2009; Zhao and Hendon, 2009; Hudson et al., 2011). Although the real-time version of POAMA routinely produces an intraseasonal forecast from realistic atmospheric initial conditions, the skill of these intraseasonal forecasts has not, until recently, been assessed. This is because the atmosphere and land components of the hindcasts of the original POAMA-1 system were initialized from unrealistic atmospheric initial conditions; the atmospheric initial conditions were derived from Atmospheric Model Intercomparison Project (AMIP)-style atmosphere-only simulations. These initial atmosphere conditions, although appropriate for seasonal prediction where ocean initial conditions dominate, did not capture the true intraseasonal atmospheric/land surface state. However, the most recent version of POAMA (version 1.5) has a new Atmosphere and Land Initialization scheme (ALI; Hudson et al., 2011), which provides realistic atmospheric initial conditions for both the hindcasts and real-time forecasts. Thus this updated POAMA-1.5 system has the potential to bridge the gap between weather and seasonal forecasting, since forecasts in the 10- to 60-day range are influenced by initial conditions of the atmosphere and land, as well as the ocean.

In this paper, intraseasonal forecast skill of the hindcasts from the POAMA-1.5 system is examined, focusing on fortnightly precipitation and maximum and minimum temperature anomalies over Australia. Section 2 describes the POAMA-1.5 forecast system, the hindcast dataset and the verification method. Section 3 documents the intraseasonal skill for precipitation, maximum and minimum temperature over Australia. Section 4 examines how some of the drivers of intraseasonal variability of rainfall over Australia are related to POAMA's skill in predicting intraseasonal variations of Australian rainfall. Conclusions are presented in section 5.

2. Methods

2.1. POAMA-1.5 forecast system and hindcast experiments

The atmospheric model component of POAMA-1.5 is the Bureau of Meteorology's atmospheric model (BAM version 3.0) (Colman et al., 2005; Wang et al., 2005; Zhong et al., 2006), which has a T47 horizontal resolution and 17 levels in the vertical. This horizontal resolution, together with the grid configuration, means that the southernmost state of Australia, the island of Tasmania, is not resolved as land in POAMA; therefore our analysis is restricted to mainland Australia. The land surface component is a simple bucket model for soil moisture (Manabe and Holloway, 1975) and has three soil levels for temperature. The ocean model is the Australian Community Ocean Model version 2 (ACOM2) (Schiller et al., 1997, 2002) and is based on the Geophysical Fluid Dynamics Laboratory Modular Ocean Model (MOM version 2). The ocean grid resolution is 2° in the zonal direction and in the meridional direction it is 0.5° at the Equator and gradually increases to 1.5° near the poles. The atmosphere and ocean models are coupled using the Ocean Atmosphere Sea Ice Soil (OASIS) coupling software (Valcke et al., 2000).

POAMA-1.5 obtains ocean initial conditions from the POAMA ocean data assimilation system (PODAS; Smith et al., 1991) and atmospheric initial conditions from the ALI scheme (Hudson et al., 2011). ALI involves the creation of a new reanalysis dataset by continuously nudging the atmospheric model of POAMA toward a global atmospheric analysis. ALI nudges to the analyses from ERA-40 (Uppala et al., 2005) for the period 1980 to August 2002, and to the Bureau of Meteorology's operational global NWP analysis thereafter. The ALI scheme thus generates realistic atmospheric initial conditions that are more balanced for the POAMA atmospheric model, as well as producing land surface initial conditions that are in balance with the atmospheric forcing.

The hindcast dataset is a 10-member ensemble starting on the first day of every month for 1980–2006. The ensemble is generated through perturbing the atmospheric initial conditions by successively initializing each member with the atmospheric analysis 6 h earlier (i.e. the 10th member was initialized 2.25 days earlier than the first member). The ocean initial conditions are from the analyses provided by PODAS for the first of each month and are not perturbed. Forecast skill is assessed using anomalies from the hindcast climatology. These anomalies are created by producing a lead-time dependent ensemble mean climatology from the hindcasts. The ensemble mean forecast (or individual ensemble member) is compared against this climatology to create anomalies, and in so doing a first-order linear correction for model bias or drift is made (e.g. Stockdale, 1997).
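A minimal sketch (not the authors' processing code) of this lead-time-dependent climatology correction, assuming the hindcasts for one start month and grid point are held in a hypothetical array of shape (years, members, leads):

import numpy as np

def hindcast_anomalies(hindcast):
    # hindcast: array (n_years, n_members, n_leads) of raw forecast values
    # for a fixed start month and grid point (hypothetical layout).
    # The climatology is a function of lead time only: average over all
    # years and ensemble members, then subtract (first-order bias correction).
    clim = hindcast.mean(axis=(0, 1))        # shape (n_leads,)
    return hindcast - clim                   # broadcasts over years and members

# Example with synthetic data: 27 years x 10 members x 28 daily lead times
rng = np.random.default_rng(0)
anoms = hindcast_anomalies(rng.normal(size=(27, 10, 28)))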

2.2. Verification methodology

Hindcast performance of the POAMA-1.5 system for seasonal time-scales has been thoroughly assessed (e.g. Wang et al., 2008; Hendon et al., 2009; Lim et al., 2009; Spillman and Alves, 2009; Zhao and Hendon, 2009; Hudson et al., 2011). Some aspects of the forecast skill of the MJO have also been determined (Rashid et al., 2011; Marshall et al., 2011). In this paper, the skill of POAMA in predicting the first and second fortnight (average of days 1–14 and 15–28 respectively) of the forecast for precipitation and temperature anomalies over Australia is assessed. The Bureau of Meteorology National Climate Centre gridded daily analyses (averaged into fortnights) of precipitation, maximum and minimum temperature are used for the verification. These gridded analyses are produced from quality-controlled station data by the application of a three-pass Barnes successive-correction analysis (Mills et al., 1997). They are available on a 0.25° grid and are averaged onto the POAMA T47 atmospheric grid.

Since POAMA uses realistic atmosphere initial conditions, derived from nudging towards a high-quality global analysis (i.e. using the ALI scheme), deterministic forecast skill in the first week is high. That is, in the first week POAMA is essentially behaving as a global NWP system. After the first week, forecast spread is large and the forecast needs to be delivered and assessed in a probabilistic fashion. Deterministic verification is, however, shown in the form of correlation, which measures the linear correspondence between the ensemble mean forecast and observed, and root mean square error (RMSE), which provides information on errors in forecast amplitude. For probabilistic verification, focus is placed on probabilistic forecasts of exceeding tercile thresholds, and the relative operating characteristic (ROC) score, ROC curve, reliability diagram and Brier skill score metrics are used (e.g. Mason and Graham, 1999, 2002; Jolliffe and Stephenson, 2003; Wilks, 2006; Mason and Stephenson, 2008). For calculation of the tercile thresholds, anomaly data from all the ensemble members are used. Calculation of the terciles from both the model and observations and the verification of the forecasts are subject to leave-one-out cross-validation. ROC scores, Brier scores and reliability diagrams are used for verifying the performance of dichotomous (yes/no) predictions, e.g. whether the forecast precipitation anomaly falls within the upper tercile or not. They are based on contingency tables of the number of observed occurrences and non-occurrences of the event in predefined forecast probability bins. In this study, five probability bins are defined: 0–0.2, 0.2–0.4, 0.4–0.6, 0.6–0.8 and 0.8–1.0. This was done, rather than using the full set of 11 probability values that are available from the 10-member ensemble, in order to avoid sparseness of some of the probability categories.
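As a concrete illustration of this probabilistic set-up, the forecast probability of an upper-tercile event can be taken as the fraction of the 10 members exceeding the tercile threshold, and then assigned to one of the five probability bins. A minimal sketch under those assumptions (array names are hypothetical; the leave-one-out cross-validation of the thresholds is omitted):

import numpy as np

def upper_tercile_probability(member_anoms, upper_threshold):
    # member_anoms: array (n_forecasts, n_members) of forecast anomalies
    # upper_threshold: upper-tercile threshold estimated from the ensemble data
    return (member_anoms > upper_threshold).mean(axis=1)

def probability_bin(probs):
    # Assign each probability to one of the five bins used in the paper:
    # 0-0.2, 0.2-0.4, 0.4-0.6, 0.6-0.8, 0.8-1.0 (returned as indices 0..4).
    return np.clip(np.digitize(probs, [0.2, 0.4, 0.6, 0.8]), 0, 4)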

The ROC score (also referred to here as the ROC area) measures the ability of the forecasting system to discriminate between events and non-events, thereby providing information on forecast resolution. The ROC curve is produced by plotting the hit rate (fraction of observed events that were correctly forecast) against the false alarm rate (fraction of non-events that were incorrectly forecast as events) calculated for each probability bin, and by definition passes through the points (0.0, 0.0) and (1.0, 1.0). The no-skill line on an ROC curve is the diagonal, where hit rates equal false alarm rates. A forecast system with positive skill has a curve which lies above the diagonal and bends towards the top left corner (0.0, 1.0), such that hit rates exceed false alarm rates. The area under the ROC curve is often used to summarize the skill. It is normalized such that a perfect forecast system has an area of 1 (i.e. the curve passes through (0.0, 0.0), (0.0, 1.0) and (1.0, 1.0)) and a curve lying on the diagonal (no skill) has an area of 0.5. Statistical significance of the area under the ROC curve is determined using the Mann–Whitney U-statistic (Mason and Graham, 2002; Wilks, 2006). The ROC area is rescaled into a Mann–Whitney U-statistic and the statistical significance is evaluated in the context of a normal distribution (for large samples, the distribution of the U-statistic approximates the normal distribution) (Mason and Graham, 2002; Wilks, 2006).
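These ROC quantities can be sketched as follows; this is an illustrative calculation from binned probabilities, not the verification code used in the study (the Mann–Whitney significance test is omitted):

import numpy as np

def roc_curve_and_area(probs, occurred, bin_edges=(0.2, 0.4, 0.6, 0.8)):
    # probs: forecast probabilities of the event (e.g. upper-tercile rainfall)
    # occurred: boolean array, True where the event was observed
    probs = np.asarray(probs, dtype=float)
    occurred = np.asarray(occurred, dtype=bool)
    # Treat each bin edge as a decision threshold: forecast "yes" if prob >= edge.
    hit = [1.0] + [np.mean(probs[occurred] >= t) for t in bin_edges] + [0.0]
    fal = [1.0] + [np.mean(probs[~occurred] >= t) for t in bin_edges] + [0.0]
    hit, fal = np.array(hit[::-1]), np.array(fal[::-1])      # order from (0,0) to (1,1)
    area = np.sum(0.5 * (hit[1:] + hit[:-1]) * np.diff(fal))  # trapezoidal rule
    return fal, hit, area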

A reliability diagram shows the conditional relative frequency of occurrence of an event (observed relative frequency) as a function of the forecast probability, thereby providing information on forecast reliability. The diagonal line on a reliability diagram indicates perfect reliability, the horizontal line represents the observed climatological frequency and the vertical line the model climatology. If a set of forecasts is not reliable, then the corresponding curve will lie away from the diagonal. If the curve is shallower (steeper) than the diagonal, then the forecast system is overconfident (underconfident). A curve lying on or near the horizontal line indicates a forecast system that has no resolution. Deviations from the perfect reliability diagram can be due to sampling limitations rather than necessarily true deviations from reliability (Jolliffe and Stephenson, 2003). As such, a reliability diagram is usually accompanied by an indication of the sample size in each probability bin, such as a histogram. The histogram can therefore be used to indicate the confidence associated with the result for each probability bin, as well as the sharpness of the forecast system. If all the forecasts fell in the model climatology probability bin, then the system would have no sharpness (sharpness is the tendency to forecast extreme values).
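The quantities plotted in a reliability diagram can be sketched as below (mean forecast probability, observed relative frequency and forecast count per bin); an illustrative calculation only, using the same five bins:

import numpy as np

def reliability_points(probs, occurred, edges=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)):
    # probs: forecast probabilities of the event; occurred: boolean observations
    probs = np.asarray(probs, dtype=float)
    occurred = np.asarray(occurred, dtype=float)
    idx = np.clip(np.digitize(probs, edges[1:-1]), 0, len(edges) - 2)
    points = []
    for b in range(len(edges) - 1):
        in_bin = idx == b
        if in_bin.any():
            # (mean forecast probability, observed frequency, number of forecasts)
            points.append((probs[in_bin].mean(), occurred[in_bin].mean(), int(in_bin.sum())))
        else:
            points.append((np.nan, np.nan, 0))   # empty probability bin
    return points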

The Brier score measures the squared difference between predicted and observed probabilities (Brier, 1950). The Brier score can be decomposed into three components, two of which measure forecast resolution and reliability (Murphy, 1973). The third term is a function of the climatological frequency of the occurrence of the event. The relative quality of the score is measured with the Brier skill score (BSS), which is defined as the improvement of the probabilistic forecast relative to a reference forecast, usually the climatological frequency of the event. Positive BSS values (with a maximum value of 1) indicate forecasts that are better than climatology. In contrast, BSS values below zero mean that there is no skill and the forecast is worse than a climatological forecast. The BSS with reference to climatology is, however, negatively biased for hindcasts with small ensemble sizes, such as in the present study, due to sampling errors in the forecast probabilities (Muller et al., 2005). Essentially, imperfectly estimated probabilities from the sample of forecasts are unfairly compared to perfectly estimated climatological probabilities (Mason and Stephenson, 2008). This problem can be addressed by the addition of an uncertainty term to the BSS calculation which takes into account the number of ensemble members (Mason and Stephenson, 2008; Weigel et al., 2007). This new formulation of the BSS is referred to as the debiased BSS (BSSD) and is used in the current study.
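A minimal sketch of the debiased BSS, assuming the ensemble-size correction of Weigel et al. (2007), in which an uncertainty term D = p(1 − p)/M (p the climatological event probability, M the ensemble size) is added to the reference Brier score:

import numpy as np

def debiased_brier_skill_score(probs, occurred, n_members=10, p_clim=1.0 / 3.0):
    # probs: forecast probabilities of the event (e.g. upper-tercile rainfall)
    # occurred: boolean array of observed occurrences
    # n_members: ensemble size M (10 in the POAMA-1.5 hindcasts)
    # p_clim: climatological probability of the event (1/3 for tercile events)
    probs = np.asarray(probs, dtype=float)
    obs = np.asarray(occurred, dtype=float)
    bs = np.mean((probs - obs) ** 2)            # Brier score of the forecasts
    bs_clim = np.mean((p_clim - obs) ** 2)      # Brier score of a climatological forecast
    d = p_clim * (1.0 - p_clim) / n_members     # sampling-uncertainty term (assumed form)
    return 1.0 - bs / (bs_clim + d)             # debiased BSS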

3. Intraseasonal skill

3.1. Precipitation

Figure 1(a)–(d) shows the temporal correlation for precipitation in the first and second fortnights for all forecast start months over the hindcast period 1980–2006. The degradation in skill in the second half of the month is very clear (Figure 1(c)), and most of the skill in the first fortnight (Figure 1(a)) comes from the first week of the forecast (not shown). Model forecast skill for both fortnights beats persistence of observed (Figure 1(a)–(d)). The latter is calculated by persisting the average of the observed fortnight immediately prior to the forecast start date. Correlation skill in the second fortnight varies a great deal as a function of region and forecast start month. It is highest in winter and spring (June–November, JJASON) over southern and eastern Australia (Figure 1(e)). The winter months (JJA) contribute more to the skill over southwestern Australia and the southern regions of eastern Australia, whereas the spring months (SON) contribute more to the skill over the northern regions of eastern Australia (not shown). The correlations over these regions and at these times of the year are clearly greater than those from forecasts of persistence of observed (Figure 1(f)). The average RMSE over Australia in the second fortnight in JJASON is 0.79 mm per day, compared to 1.05 mm per day for persistence forecasts.
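The deterministic scores and the 'persistence of observed' baseline used here can be sketched as follows (illustrative only; inputs are hypothetical one-dimensional series of fortnightly anomalies over the hindcast start dates):

import numpy as np

def persistence_forecast(obs_prior_fortnight):
    # Persist the observed fortnightly anomaly immediately prior to the start date.
    return np.asarray(obs_prior_fortnight, dtype=float)

def correlation_and_rmse(forecast, observed):
    # Temporal correlation of the (ensemble-mean) forecast anomalies with observed,
    # and the root mean square error of the forecast amplitude.
    f = np.asarray(forecast, dtype=float)
    o = np.asarray(observed, dtype=float)
    corr = np.corrcoef(f, o)[0, 1]
    rmse = np.sqrt(np.mean((f - o) ** 2))
    return corr, rmse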

ROC curves and ROC areas provide information on forecast resolution by measuring the ability of a forecast to discriminate between the occurrence and non-occurrence of an event. An ROC area or score below 0.5 implies a skill lower than climatology. Figure 2 shows the ROC area (normalized area under the ROC curve) of the probability that the precipitation anomaly averaged over the first or second fortnight is in the lower or upper tercile, for the winter and spring forecast start months (i.e. the same months shown in Figure 1(e) and (f); JJASON). The skill over the east and southeast is apparent in both fortnight and tercile categories. In the second fortnight, for both categories, much of the southeast has skill greater than the climatological value of 0.5, suggesting that the model has some ability in distinguishing between the occurrence of rainfall falling in the upper (lower) tercile and its non-occurrence. In Figure 1 'persistence of observed' is used as a baseline of comparison of the skill of the second fortnight forecasts. Perhaps a stricter baseline of comparison would be the skill from the persistence of the probabilities obtained from the first forecast fortnight, given the high skill that occurs in the first week of the forecast. This follows the approach of Vitart (2004). In other words, does the extended dynamical forecast provide anything useful over and above what we could get from persistence of the first 2 weeks of the forecast? The model does indeed perform better over certain regions, particularly the southeast, in the second fortnight compared to persisting forecast probabilities of the first fortnight for this time of year (compare Figure 2(c) and (e) and Figure 2(d) and (f)).

ROC and reliability curves for southeastern Australia for the first and second fortnights are displayed in Figure 3. The curves are obtained by averaging the contingency tables obtained from each grid box (29 boxes in the region) falling within the masked area shown, for forecasts starting in JJASON months. The ROC curves exhibit a decline in skill from the first fortnight (ROC area, A, for the upper tercile = 0.68; lower tercile A = 0.71) to the second fortnight (upper tercile A = 0.66; lower tercile A = 0.63) for precipitation falling in the upper and lower tercile respectively, although skill (ROC area greater than 0.5) still prevails in the second fortnight (Figure 3(a) and (b)). As shown in Figure 2, the model provides more skill in the second fortnight than simply persisting the probabilities from the first fortnight (Figure 3(b), solid versus dashed lines). The reliability diagrams show that the forecast system is over-confident, with over-forecasting biases associated with large forecast probabilities (Figure 3(c) and (d)). The forecasts have some reliability; they correctly indicate increases and decreases in the probability of precipitation falling in the lower (upper) tercile, but the changes in probability are exaggerated. This is a common situation for seasonal forecasting (Mason and Stephenson, 2008). There may be some potential for improving this forecast reliability through statistical calibration, e.g. by inflating the ensemble spread (Doblas-Reyes et al., 2005; Johnson and Bowler, 2009). Currently, the only post-processing performed is the removal of model bias using the hindcast model climatology. The histogram accompanying the reliability diagram for the forecast in the second fortnight indicates that the forecast probabilities peak in frequency at the climatological probability (Figure 3(d)). This is in contrast to the results from the first fortnight, where the frequency of forecasts peaks in the lowest and highest probability bins for both the upper and lower tercile cases (Figure 3(c)). This indicates that as the forecast lead time progresses the forecasts become less sharp (i.e. there is a tendency towards forecasts of climatology).
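A brief sketch of how per-bin contingency counts could be accumulated over the grid boxes of the region before drawing the regional ROC and reliability curves (illustrative; inputs are hypothetical lists with one entry per grid box):

import numpy as np

def regional_contingency(probs_by_box, occurred_by_box):
    # Returns a (5, 2) table: rows are the five probability bins,
    # columns are counts of [event observed, event not observed].
    table = np.zeros((5, 2), dtype=int)
    for probs, occ in zip(probs_by_box, occurred_by_box):
        idx = np.digitize(np.asarray(probs, dtype=float), [0.2, 0.4, 0.6, 0.8])
        for b, o in zip(idx, np.asarray(occ, dtype=bool)):
            table[b, 0 if o else 1] += 1
    return table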

The BSS incorporates reliability and resolution attributes of forecast quality and measures the accuracy of the forecast relative to a reference forecast, taken here as climatology. The debiased BSSs for the reliability curves for the first fortnight, shown in Figure 3(c), indicate increases in skill over climatology of 13% and 2% for precipitation falling in the lower and upper terciles, respectively. For the second fortnight the corresponding values are 7% and 8% for the lower and upper terciles, respectively (for the reliability curves in Figure 3(d)). Figure 4 shows spatial plots of the debiased BSS for the second fortnight over Australia. Regions where the BSS is positive indicate that the model is more skilful than climatology. The skill in both cases (upper and lower tercile) is modest, with a 10–15% improvement in skill over climatology in parts of the southeast.

3.2. Temperature

Correlation skill for maximum temperature is generally higher than that for precipitation, particularly for the first fortnight (Figure 5 compared to Figure 1). However, it is clear that the skill for 'persistence of observed' is also higher for maximum temperature compared to precipitation; thus it is more difficult to beat the persistence forecasts (e.g. over northern Australia in the first fortnight for maximum temperature, Figure 5(a) and (b)). For the second fortnight, correlation skill is greatest during spring (SON) (Figure 5(e)). This skill is focused over eastern Australia and beats persistence of observed (Figure 5(f)). The average RMSE over Australia in the second fortnight during spring (SON) is 1.89°C, compared to 2.35°C for persistence forecasts.

Figure 1. Correlation of precipitation anomalies with observed for the first (top row) and second (middle and bottom rows) fortnights of the forecast from POAMA (left column) and from persistence of observed (i.e. persisting the observed fortnight prior to the start date; right column). Plots (a)–(d) show the skill from all forecast start months (n = 324), and (e) and (f) are for winter and spring (JJASON) forecast start months (n = 162). Correlations significantly different from zero are shaded (t-test, n = 324 (162), r > 0.1 (0.2) is significant at p = 0.05) and the contour interval is 0.1.

The second fortnight ROC area and curves for maximum temperature falling in the upper tercile show some skill for the spring forecast start months (SON, Figures 6 and 7(a)). The ROC area is highest over eastern and southeastern Australia (ROC areas > 0.7), but a large proportion of the country exhibits ROC scores significantly (95% confidence) greater than 0.5, suggesting that the model performs better than climatology (Figure 6). In addition, for the second fortnight, the model is able to provide more skill than persisting the forecast probabilities from the first fortnight (Figures 6 and 7(a); in the latter, the ROC area is 0.70 for fortnight 2 and 0.59 for persistence of fortnight 1). The reliability diagram for southeastern Australia indicates an over-forecasting bias, particularly for large forecast probabilities (Figure 7(b)). The histogram showing the frequency of forecasts in different forecast probability bins suggests that the forecasts are still relatively sharp in the second fortnight (frequencies do not peak at the climatological frequency) (Figure 7(b)), although they are still much less sharp than those of the first fortnight, where the histogram exhibits a characteristic 'U' shape (not shown). According to the debiased BSS the skill improvement over climatology over eastern Australia at this skilful time of year (SON) ranges from 10% to 25% (Figure 8).

Minimum temperature is the least skilful of the three variables analysed here, particularly for the second fortnight (Figure 9). This is also true for the probabilistic forecasts (not shown). An analysis of 3-month rolling seasons shows that the correlation at any time or grid point for the second fortnight is mostly less than 0.3, and there are no times or regions of cohesive appreciable skill (not shown). An investigation of the reasons why the model has less skill for minimum temperature compared to maximum temperature is beyond the scope of this paper. It may be related to model error, but it is possible that it may also reflect reduced predictability for minimum temperature. In an examination of the relationship between the El Niño Southern Oscillation (ENSO) and Australian land surface temperature, Jones and Trewin (2000) found that statistically significant correlations between the Southern Oscillation Index (SOI) and seasonal mean minimum temperature were less widespread than for maximum temperature (although there were locally high correlations). Similarly, Hendon et al. (2007) found that the relationship of the Southern Annular Mode (SAM) with daily minimum temperatures over Australia was weaker than the relationship with daily maximum temperatures.

Figure 2. ROC area of the probability that precipitation averaged over the first (a, b) and second fortnight (c, d) is in the lower (left column) or upper (right column) tercile for winter and spring (JJASON) forecast start months. (e) and (f) show the ROC areas obtained by persisting the probabilities from the first forecast fortnight. ROC areas significant at the 5% significance level are shaded (Mann–Whitney U-test) and the contour interval is 0.05.

4. Sources of predictability

The impact of key drivers of Australian rainfall variability on the skill of forecasting intraseasonal rainfall in POAMA is investigated. This investigation is motivated by the need to understand what is providing the predictability diagnosed in section 3, as well as to provide insight into how model shortcomings may be limiting forecast skill. We focus on the effect of ENSO, the Indian Ocean Dipole (IOD), the SAM and the MJO on Australian precipitation during winter and spring (JJASON), when POAMA's skill for predicting precipitation is highest (Figure 1(e)).

Figure 3. ROC (a, b) and reliability (c, d) diagrams of the probability that precipitation averaged over the first (left column) or second (right column) fortnight is in the upper (black lines with star symbols) or lower (grey lines with square symbols) tercile. The dashed lines in (b) are the ROC curves obtained by persisting the probabilities from the first forecast fortnight. The diagonal line in the ROC diagrams (a, b) represents the no-skill line. In the reliability diagrams (c, d), the histogram represents the frequency of forecasts, i.e. the relative population of each forecast probability bin. The dashed horizontal line represents a no-resolution forecast (observed climatology) and the dashed vertical line represents a no-sharpness forecast (model climatology). The solid diagonal line represents perfect reliability. Both the ROC and reliability diagrams are obtained from winter and spring (JJASON) forecast start months for southeastern Australia (map inset).

Figure 4. Debiased BSS (reference score is climatology) for the probability that precipitation averaged over the second fortnight is in the (a) lower or (b) upper tercile for winter and spring (JJASON) forecast start months. The contour interval is 0.05. Grid boxes with positive (negative) BSSs greater (less) than 0.05 (−0.05) are dark (light) shaded.

4.1. ENSO and the IOD

ENSO is the primary driver of predictable interannual variations of Australian rainfall, particularly over eastern regions during winter (JJA) and spring (SON) (e.g. Risbey et al., 2009). POAMA can skilfully predict tropical sea surface temperature (SST) anomalies associated with ENSO two to three seasons in advance (Wang et al., 2008) and, on a seasonal time-scale, can depict the teleconnection to Australian rainfall (Lim et al., 2009). Here, the impact of ENSO on intraseasonal rainfall prediction skill in POAMA is examined by stratifying forecasts into El Niño/La Niña cases and neutral cases. The observed monthly (3-month running mean imposed) ENSO index from the US National Weather Service Climate Prediction Center (CPC; http://www.cpc.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml) is used to classify forecast start months as El Niño/La Niña or neutral events, using thresholds of ±0.5°C. The analysis is performed for all forecast start times in June–November (JJASON). The stratification used in this paper does not distinguish between El Niño and La Niña cases because the sample size is too small. Hence we focus on the impact of ENSO being in an extreme versus being neutral.

Figure 5. Correlation of maximum temperature anomalies with observed for the first (top row) and second (middle and bottom rows) fortnights of the forecast from POAMA (left column) and from persistence of observed (i.e. persisting the observed fortnight prior to the start date; right column). Plots (a)–(d) show the skill from all forecast start months (n = 324), and (e) and (f) are for spring (SON) forecast start months (n = 81). Correlations significantly different from zero are shaded (t-test, n = 324 (81), r > 0.1 (0.2) is significant at p = 0.05) and the contour interval is 0.1.
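The ENSO stratification just described can be sketched as below; the index and start-month arrays are hypothetical inputs, and whether the ±0.5°C threshold is inclusive is an assumption of this illustration:

import numpy as np

def stratify_enso(enso_index, start_months):
    # enso_index: 3-month running-mean ENSO index (deg C) at each forecast start month
    # start_months: calendar month (1-12) of each forecast start date
    enso_index = np.asarray(enso_index, dtype=float)
    in_jjason = np.isin(start_months, [6, 7, 8, 9, 10, 11])   # June-November starts
    extreme = in_jjason & (np.abs(enso_index) >= 0.5)         # El Nino or La Nina
    neutral = in_jjason & (np.abs(enso_index) < 0.5)
    return extreme, neutral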

The correlation skill of predicting rainfall in the second fortnight is substantially higher over eastern-coastal, northeastern, northern and southwestern Australia in El Niño and La Niña years (Figure 10(a)) compared to neutral years (Figure 10(c)). These regions of higher skill are also where ENSO tends to have an impact on Australian rainfall on a seasonal time-scale, particularly in spring (e.g. Risbey et al., 2009) and where POAMA successfully simulates the teleconnection to rainfall (Lim et al., 2009). In these ENSO extreme cases, the model provides additional skill in the second fortnight compared to that obtained from persistence of observed (Figure 10(a) and (b)). Persistence is also a better forecast in extreme ENSO cases than in neutral cases over northeastern Australia (Figure 10(b) and (d)). It thus appears that some of the enhanced skill during ENSO extreme cases may stem from increased persistence, which POAMA faithfully replicates.

Figure 6. ROC area of the probability that maximum temperature averaged over the second fortnight is in the upper tercile for spring (SON) months from (a) POAMA and (b) that obtained for the second fortnight by persisting the probabilities from the first forecast fortnight. ROC areas significant at the 5% significance level are shaded (Mann–Whitney U-test) and the contour interval is 0.05.

Figure 7. (a) ROC and (b) reliability diagrams of the probability that maximum temperature averaged over the second fortnight is in the upper tercile. The dashed line in (a) is the ROC curve obtained by persisting the probabilities from the first forecast fortnight. The diagonal line in the ROC diagram represents the no-skill line. In (b) the histogram represents the frequency of forecasts, i.e. the relative population of each forecast probability bin. The dashed horizontal line represents a no-resolution forecast (observed climatology) and the dashed vertical line represents a no-sharpness forecast (model climatology). The solid diagonal line represents perfect reliability. Both the ROC and reliability diagrams are obtained for spring (SON) forecast start months for the southeast of Australia (map inset).

Figure 8. Debiased BSS (reference score is climatology) for the probability that maximum temperature averaged over the second fortnight is in the upper tercile for the spring (SON) forecast start months. The contour interval is 0.05. Grid boxes with positive (negative) BSSs greater (less) than 0.05 (−0.05) are dark (light) shaded.

Low-frequency coupled ocean–atmosphere variability in the Indian Ocean, namely the IOD, has been shown to affect rainfall over Australia (Ansell et al., 2000; Saji and Yamagata, 2003; Meyers et al., 2007; Risbey et al., 2009; Ummenhofer et al., 2009). The IOD is defined as the SST anomaly difference between the western (50–70°E; 10°S–10°N) and eastern (90–110°E; 10°S–0°N) tropical Indian Ocean (Saji et al., 1999). POAMA can skilfully predict the peak phase of the occurrence of the IOD in austral spring (SON) with about 4 months lead time (Zhao and Hendon, 2009) and the teleconnection to rainfall across southern Australia is faithfully represented in POAMA (Lim et al., 2009). To investigate the impact of the IOD on the skill of predicting precipitation in the second fortnight in POAMA, the IOD at the initial forecast time is classified based on monthly data (JJASON months). The IOD is calculated from the Reynolds OI.v2 SSTs (Reynolds et al., 2002) for June 1982 to November 2006 and prior to this (June 1980 to November 1981) from the HadISST dataset (Rayner et al., 2003). The classification stratifies months into those when the IOD is strong or extreme (greater than the mean ±0.5 standard deviation; n = 91) and those when it is weak or neutral (within the mean ±0.5 standard deviation; n = 71). The mean and standard deviation are calculated from all JJASON months in the analysis period (1980–2006).

Figure 9. Correlation of minimum temperature anomalies with observed for all forecast start months for the first (top row) and second (bottom row) fortnights of the forecast. (a) and (c) are the skill from POAMA and (b) and (d) are the skill obtained from persistence of observed (i.e. persisting the observed fortnight prior to the start date). Correlations significantly different from zero are shaded (t-test, n = 324, r > 0.1 is significant at p = 0.05) and the contour interval is 0.1.
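A minimal sketch of the IOD index and its stratification, assuming monthly SST anomaly fields on a regular latitude-longitude grid with coordinate vectors (cosine-latitude area weighting is omitted for brevity):

import numpy as np

def box_mean(field, lats, lons, lat_range, lon_range):
    # Simple mean of a 2-D anomaly field (lat x lon) over a lat/lon box.
    sel_lat = (lats >= lat_range[0]) & (lats <= lat_range[1])
    sel_lon = (lons >= lon_range[0]) & (lons <= lon_range[1])
    return field[np.ix_(sel_lat, sel_lon)].mean()

def iod_index(sst_anom, lats, lons):
    # Dipole mode index: west (50-70E, 10S-10N) minus east (90-110E, 10S-0N) box mean.
    west = box_mean(sst_anom, lats, lons, (-10.0, 10.0), (50.0, 70.0))
    east = box_mean(sst_anom, lats, lons, (-10.0, 0.0), (90.0, 110.0))
    return west - east

def stratify_iod(iod_monthly):
    # Strong/extreme months: index outside the mean +/- 0.5 standard deviation band.
    iod = np.asarray(iod_monthly, dtype=float)
    strong = np.abs(iod - iod.mean()) > 0.5 * iod.std()
    return strong, ~strong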

There is a marked increase in forecast skill in the second fortnight obtained when forecasts are initialized in months when the magnitude of the IOD is large (Figure 11(a)) compared to when it is small (Figure 11(c)) (note that there is no discrimination between positive and negative IOD events). In addition, there is increased skill in POAMA's second fortnight over southeastern and southwestern regions compared to the skill from persistence of observed (Figure 11(a) and (b)). Again, as for ENSO-extreme forecasts, some of the increased skill for IOD-extreme forecasts may stem from increased persistence, particularly over southern Australia (Figure 11(b) and (d)).

The pattern of the correlations in fortnight 2 for the strong IOD case (Figure 11(a)) is similar to that obtained for the El Niño/La Niña case (Figure 10(a)). This may be related to the fact that the IOD is not independent of ENSO (e.g. Saji et al., 2006; Meyers et al., 2007). In order to examine the impact of the IOD with the effect of ENSO 'removed', those cases that are associated with El Niño or La Niña events are removed. As before, the observed monthly (3-month running mean imposed) ENSO index from the US National Weather Service CPC (http://www.cpc.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml) is used, and warm and cold events are defined based on thresholds of ±0.5°C. In the updated IOD classification, IOD-extreme months are analysed only if the corresponding ENSO index falls within −0.5 and +0.5°C (i.e. a neutral ENSO event). The results show that most of the rainfall skill in POAMA that is attributable to the IOD is located over the southeast and southern regions of the country (Figure 12(a)). However, the skill from POAMA over central southern regions is not appreciably higher than that obtained from persistence of observed (Figure 12(a) and (b)). Removing the effect of ENSO removes significant correlations over the eastern and northern regions (everything north of 25°S) as well as over the southwest (Figure 11(a) compared to Figure 12(a)). Similar results were obtained by Risbey et al. (2009) and Lim et al. (2009) when looking at the observed correlation between Australian rainfall and the IOD. They found that when the IOD was considered without the effects of ENSO, then significant correlations in regions where ENSO dominates disappear (primarily the northeast), but the IOD influence on rainfall remains over a broad part of southern Australia.
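Restricting the IOD-extreme sample to ENSO-neutral start months, as described above, then amounts to a simple combined mask (a sketch, reusing the hypothetical arrays from the earlier snippets):

import numpy as np

def iod_extreme_enso_neutral(strong_iod, enso_index):
    # Keep IOD-extreme months only where the ENSO index lies within -0.5 to +0.5 deg C.
    return np.asarray(strong_iod, dtype=bool) & (np.abs(np.asarray(enso_index, dtype=float)) < 0.5)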

From the above analysis of skill during ENSO and IOD extremes, it is clear that some of the skill in the second fortnight can be attributed to seasonal variability appearing on the intraseasonal time-scale. In other words, the week to month persistence of ENSO and IOD SST anomalies creates a tendency for equally persistent precipitation anomalies that appear as skill on intraseasonal time-scales, but which actually arise from longer time-scale phenomena. This is evident to some extent from the skill seen in the persistence forecasts (Figures 10(b), 11(b) and 12(b)). However, it would be interesting to know how much of the intraseasonal skill to which ENSO and the IOD contribute is genuinely from intraseasonal variability or processes. To try and address this issue we have high-pass filtered the forecast and observed cases in order to isolate the intraseasonal anomalies. For each event identified (e.g. an 'IOD large' event for a particular forecast start month and year) we remove the mean of the first 2 months of the forecast from the forecast for the second fortnight. The same is done for the corresponding observed data and the correlation skill is obtained using this filtered data.

Figure 10. Correlation skill of forecasting precipitation in the second fortnight in JJASON forecast start months for El Niño and La Niña cases (n = 76) (top row) from (a) POAMA and (b) persistence of observed, versus in neutral cases (n = 86) (bottom row) from (c) POAMA and (d) persistence of observed. Correlations significantly different from zero are shaded (t-test, n = 76 (86), r > 0.2 is significant at p = 0.05) and the contour interval is 0.1.
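The high-pass filtering described above can be sketched as follows; the choice of 60 days to represent the first 2 months of the forecast is an assumption of this illustration:

import numpy as np

def highpass_second_fortnight(daily_series):
    # daily_series: daily values (ensemble-mean forecast or the matching observations)
    # for one case, at least ~60 days long.
    x = np.asarray(daily_series, dtype=float)
    second_fortnight = x[14:28].mean()    # mean of days 15-28
    first_two_months = x[:60].mean()      # mean of the first 2 months (approx. 60 days)
    return second_fortnight - first_two_months

def filtered_correlation(forecast_cases, observed_cases):
    # Correlation skill computed from the filtered forecast and observed anomalies.
    f = np.array([highpass_second_fortnight(c) for c in forecast_cases])
    o = np.array([highpass_second_fortnight(c) for c in observed_cases])
    return np.corrcoef(f, o)[0, 1]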

Figure 13 shows the skill in the second fortnight from this filtered data during extremes of ENSO (Figure 13(a)) and the IOD (with the effect of ENSO 'removed') (Figure 13(b)) and can be compared to the skill obtained from the unfiltered data (Figures 10(a) and 12(a), respectively). Under the extremes of ENSO, much of the skill seen in the unfiltered data (Figure 10(a)) disappears in the filtered data (Figure 13(a)), particularly over northern and northeastern Australia. This suggests that much of that skill in the second fortnight has its origins in longer-time-scale, seasonal phenomena. This is confirmed by the skill of the persistence forecast (Figure 10(b)). Figure 13(a) suggests that some of the skill over southwestern Australia, in particular, may be due to intraseasonal processes which operate differently, and are more predictable under ENSO extreme cases compared to ENSO neutral cases (the skill is lower over this region for neutral events in the filtered data; not shown).

Under extremes of IOD, the difference in skill between the filtered (Figure 13(b)) and unfiltered (Figure 12(a)) data is less dramatic. There is a slight reduction in the magnitude and spatial extent of the skill in the filtered data, but in general the skill over southern and southeastern Australia remains (Figure 13(b)). The skill of the persistence forecasts shows that there is less skill from the persistent anomalies under IOD extremes (Figure 12(b)) compared to ENSO extremes (Figure 10(b)), and that the main region of skill from persistent anomalies under IOD extremes is over northwestern Australia (Figure 12(b)). Correspondingly, we do see a reduction in skill over this region in the filtered data (Figure 13(b)) compared to the unfiltered data (Figure 12(a)), although the skill in both cases is poor. The results suggest that, in contrast to ENSO, a significant proportion of the skill under extremes of IOD can be attributed to the predictability of intraseasonal processes, rather than just skill from predicting seasonal variability which manifests at the intraseasonal time-scale. It is beyond the scope of this study to actually identify which intraseasonal processes are operating under the extremes of ENSO or IOD to bring about improved intraseasonal prediction skill, but this will form part of future work.

4.2. SAM

The SAM (also known as the Antarctic Oscillation or the High Latitude Mode) is an important mode of variability for high and middle latitudes and is characterized by shifts in the strength of the zonal flow between about 55–60°S and 35–40°S (e.g. Gong and Wang, 1999; Thompson and Wallace, 2000). SAM has been shown to be an important contributor to rainfall variability over Australia (Hendon et al., 2007; Risbey et al., 2009). Although the decorrelation time of the SAM is relatively short (∼15 days), its long time-scale relative to synoptic weather may be a source of multi-week predictability. To stratify our dataset based on extremes of the SAM, the observed daily SAM index for JJASON months is examined. The daily SAM index was obtained upon request from the US National Weather Service CPC (http://www.cpc.ncep.noaa.gov/products/precip/CWlink/daily_ao_index/aao/aao_index.html). The index is constructed by projecting daily 700 hPa height anomalies onto the leading empirical orthogonal function (EOF) of monthly mean 700 hPa height poleward of 20°S (data available extended to May 2005; thus the stratification is from 1980–2004). The classification here stratifies the hindcast dataset into forecast start months where the observed SAM index, averaged over the first 7 days of the month, is strong (greater than the mean ±0.5 standard deviation; n = 84) and when it is weak (within the mean ±0.5 standard deviation; n = 66). The mean and standard deviation of the observed SAM index, used in the threshold calculations, are calculated from all the days falling in JJASON months from 1980 to 2004. The classification does not distinguish between positive and negative SAM events.

Figure 11. Correlation skill of forecasting precipitation in the second fortnight in JJASON forecast start months when the magnitude of the IOD is large (i.e. greater than the mean ±0.5 standard deviations, n = 91) (top row) from (a) POAMA and (b) persistence of observed, versus when the magnitude of the IOD is small (i.e. within the mean ±0.5 standard deviations, n = 71) (bottom row) from (c) POAMA and (d) persistence of observed. Correlations significantly different from zero are shaded (t-test, n = 91 (71), r > 0.2 is significant at p = 0.05) and the contour interval is 0.1.

There has not been any previous research on the skill of POAMA in predicting the daily SAM index. In order to investigate the capability of POAMA for predicting the SAM, the daily SAM index is calculated for each POAMA hindcast in a similar fashion to that observed: daily anomalies are projected onto the observed EOF pattern of monthly mean 700 hPa height poleward of 20°S. Using the observed EOF for calculating the predicted SAM index allows for a direct comparison with the observed SAM index. The correlation is computed between the daily observed SAM index and the ensemble mean SAM index, stratified for large and small SAM events at the initial time (defined above), for lead times out to 30 days. Figure 14 shows that for large SAM events the correlation remains above 0.5 out to about 13 days. The correlation drops below 0.5 after about 6 days for forecasts initialized during neutral SAM. Because of the shorter time-scale and prediction lead time for the SAM than for ENSO and the IOD, we assess the prediction skill during extremes of the SAM for the fortnight comprising weeks 2 and 3, rather than weeks 3 and 4.
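A minimal sketch of the SAM-index calculation and its lead-time correlation, assuming flattened 700 hPa height anomaly fields poleward of 20°S and the observed leading EOF pattern (area weighting and any normalization of the index are omitted):

import numpy as np

def sam_index(z700_anom, eof_pattern):
    # z700_anom: array (n_days, n_points) of daily 700 hPa height anomalies
    # eof_pattern: array (n_points,) observed leading EOF of monthly mean height
    return np.asarray(z700_anom, dtype=float) @ np.asarray(eof_pattern, dtype=float)

def leadtime_correlation(fcst_sam, obs_sam):
    # fcst_sam: array (n_cases, n_members, n_leads) of predicted SAM indices
    # obs_sam: array (n_cases, n_leads) of the observed SAM index
    ens_mean = np.asarray(fcst_sam, dtype=float).mean(axis=1)
    obs = np.asarray(obs_sam, dtype=float)
    return np.array([np.corrcoef(ens_mean[:, lead], obs[:, lead])[0, 1]
                     for lead in range(obs.shape[1])])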

There is clearly more skill in forecasting precipitation in weeks 2 and 3 over eastern and southwestern Australia when the SAM is strong compared to when it is weak (Figure 15(a) and (c)). When the SAM is strong, most of the increased skill over a persistence forecast occurs over southwestern and central-eastern Australia (Figure 15(a) and (b)), which are regions where the SAM has an appreciable impact in winter and spring, respectively (e.g. Hendon et al., 2007; Risbey et al., 2009). In contrast to what might be expected from observed studies, our results do not show improved rainfall skill over the extreme southeast (Figure 15(a)). This may be related to the relatively low resolution of POAMA, such that the Victorian Alps are poorly resolved.

Figure 12. As Figure 11, but for non-ENSO cases (see text for details). Correlations significantly different from zero are shaded (t-test, n = 48 for the IOD large case and n = 38 for the IOD small case, r > 0.3 is significant at p = 0.05) and the contour interval is 0.1.

Figure 13. Correlation skill of forecasting precipitation in the second fortnight in JJASON forecast start months for (a) El Niño and La Niña cases (i.e. the same cases as for Figure 10(a), n = 76) and (b) when the magnitude of the IOD is large (with the effect of ENSO 'removed'; i.e. the same cases as for Figure 12(a), n = 48). The data are high-pass filtered: the mean of the first 2 months of the forecast is removed from the second fortnight for each case (see text for details). Correlations significantly different from zero are shaded and the contour interval is 0.1.

In winter and spring the observed SAM is uncorrelated with ENSO and removal of ENSO variation does not affect the observed correlation of SAM with Australian rainfall (Hendon et al., 2007; Risbey et al., 2009). However, this is not necessarily the case in the forecast model. To test this, only non-ENSO cases are examined, as done in section 4.1. The resultant effect was to remove the high correlations over northeastern Australia (Figure 16(a) compared to Figure 15(a)), which makes the results more comparable with observed studies. The corresponding 'SAM neutral' stratification with the effect of ENSO 'removed' shows hardly any significant correlations for both POAMA and persistence of observed (not shown).

4.3. MJO

The MJO is possibly a significant source of predictability on intraseasonal time-scales (Waliser et al., 2006) and is important for Australia, given its direct and remote impacts on tropical and extratropical rainfall (Hendon and Liebmann, 1990; Hall et al., 2001; Wheeler and Hendon, 2004; Risbey et al., 2009; Wheeler et al., 2009). The MJO has its greatest impact on Australian rainfall over northern regions in summer (DJF). During winter (JJA) and spring (SON) the MJO has an impact on extratropical Australian rainfall (particularly the southeast), associated with remotely forced vertical motion occurring within anomalous extratropical highs and lows (Wheeler et al., 2009). The POAMA model is able to simulate reasonably realistic MJO events (Zhang et al., 2006; Marshall et al., 2008) and can predict the large-scale structure of the MJO in the Tropics out to about 21 days (measured by the bivariate correlation of an MJO index exceeding 0.5; Rashid et al., 2011). To investigate the impact of the MJO on the skill of predicting Australian rainfall in winter and spring, the hindcast dataset is stratified based on the existence or absence of an MJO (independent of the phase of the MJO) using the daily observed real-time multivariate MJO (RMM) index of Wheeler and Hendon (2004). These data are available online at http://cawcr.gov.au/staff/mwheeler/maproom/RMM/. An MJO is defined as being strong if it has an RMM amplitude greater than 1 (based on the average amplitude of the first 7 days of the month) and weak or absent if it is less than 1. The impact on the skill of predicting Australian rainfall in the second fortnight of the forecast, as well as the fortnight obtained from averaging weeks 2 and 3, is examined.

Figure 14. Correlation skill of the SAM index for large SAM events (solid line) (see text for details) in JJASON months (n = 84) and small SAM events (dashed line) (n = 66) as a function of lead time (days).
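The MJO stratification can be sketched as below; the RMM amplitude is assumed here to be sqrt(RMM1^2 + RMM2^2), averaged over the first 7 days of the start month, as described above:

import numpy as np

def mjo_is_strong(rmm1_daily, rmm2_daily):
    # rmm1_daily, rmm2_daily: daily RMM components for the start month
    # (Wheeler and Hendon 2004 index); phase information is not used here.
    rmm1 = np.asarray(rmm1_daily, dtype=float)[:7]
    rmm2 = np.asarray(rmm2_daily, dtype=float)[:7]
    amplitude = np.sqrt(rmm1 ** 2 + rmm2 ** 2)
    return amplitude.mean() > 1.0          # strong MJO if the average amplitude exceeds 1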

The correlation results obtained are inconclusive: in many regions the significant correlations (i.e. rainfall prediction skill) are higher in the 'weak MJO' case than in the 'strong MJO' case (not shown). There is no clear significant relationship between precipitation skill in winter and spring and the amplitude of the MJO. Unfortunately, the hindcast dataset is too small to allow useful stratification of cases by, for example, the phase of the MJO and season. Such stratification may highlight phases of the MJO for which skill in predicting Australian rainfall exists. Marshall et al. (2011) examined the relationship between rainfall and the MJO in POAMA and found that in JJA and SON the model was unable to simulate the teleconnection between the MJO and extratropical (or tropical) Australian rainfall across the different MJO phases. Given their result, it is not surprising that POAMA does not have any extra skill in forecasting rainfall over Australia during strong MJO periods. Even if POAMA forecasts the large-scale tropical structure of the MJO perfectly, at week 2 it does not simulate the correct relationship between the MJO and rainfall over most of Australia during winter and spring (Marshall et al., 2011).

5. Summary and conclusions

This paper investigates the potential of the POAMA seasonal forecast system to fill the current prediction capability gap between weather forecasts and seasonal outlooks in Australia. This initial examination of the intraseasonal skill of POAMA is promising. There are definite indications of useful skill for certain regions at certain times of the year. Most of the skill for forecasting precipitation and maximum temperature in the second fortnight (average days 15–28 of the forecast) is focused over eastern and southeastern Australia, during austral winter and spring for precipitation and during spring for maximum temperature. Over these regions at these times, the forecast of the second fortnight from the model performs generally better than forecasts of persistence of observed, better than persistence of the forecast of the first fortnight (average days 1–14) and better than climatology. The model shows very little skill in forecasting minimum temperature in the second fortnight.

In the second half of the paper, the source of the intraseasonal predictability of rainfall is explored. The analysis is performed for the austral winter and spring seasons, when POAMA has demonstrable rainfall skill. Results from this initial investigation indicate that ENSO, the IOD and the SAM are all important contributors to the intraseasonal rainfall forecast skill evident in winter and spring. There are other possible drivers of intraseasonal variability that may provide intraseasonal predictability but have not been considered in this study, for example the roles of antecedent soil moisture, blocking and stratosphere–troposphere interactions.

Even though ENSO is an interannual phenomenon, our results show that it has a clear impact on intraseasonal time-scales. There is significantly higher skill in predicting rainfall over eastern-coastal, northeastern, northern and southwestern Australia in the second fortnight during El Niño/La Niña cases compared to neutral cases. These regions of higher skill are also where ENSO tends to have an impact on Australian rainfall on a seasonal time-scale, particularly in spring (e.g. Risbey et al., 2009), and where POAMA successfully simulates the teleconnection to rainfall (Lim et al., 2009). In addition, during El Niño/La Niña cases the model beats the skill obtained from forecasts of persistence of observed. Some of the higher skill during ENSO extremes compared to neutral years, particularly over northern and northeastern Australia, stems from enhanced persistence.

The IOD has been shown to influence rainfall over the southern portion of Australia (Ansell et al., 2000; Saji and Yamagata, 2003; Meyers et al., 2007; Risbey et al., 2009; Ummenhofer et al., 2009). Since the IOD is not entirely independent of ENSO (e.g. Saji et al., 2006; Meyers et al., 2007), the impact of the IOD is assessed both with and without the effect of ENSO removed. Removing the effect of ENSO mainly removes the significant skill found over eastern and northern regions. Most of the rainfall skill in POAMA that is attributable to the IOD (for non-ENSO cases) is located over the southeast and southern regions of the country, as evidenced by the marked increase in forecast skill in the second fortnight in months when the magnitude of the IOD is large compared to when it is small. Again, this is a region where POAMA faithfully simulates the teleconnection between Indo-Pacific SST variations and Australian rainfall (Lim et al., 2009). There are indications that some of the enhanced skill over southern Australia in IOD-extreme cases may be related to increased persistence.

Figure 15. Correlation skill of forecasting precipitation in the fortnight comprising the average of weeks 2 and 3 in JJASON forecast start months when the magnitude of the SAM is large (i.e. greater than the mean ±0.5 standard deviations, n = 84) (top row) from (a) POAMA and (b) persistence of observed, versus when the magnitude of the SAM is small (i.e. within the mean ±0.5 standard deviations, n = 66) (bottom row) from (c) POAMA and (d) persistence of observed. Correlations significantly different from zero are shaded (t-test, n = 84 (66), r > 0.2 is significant at p = 0.05) and the contour interval is 0.1.

Figure 16. As Figure 15, but for non-ENSO cases. Only the stratification of 'SAM large' is shown (n = 46). Correlations significantly different from zero are shaded (t-test, n = 46, r > 0.3 is significant at p = 0.05) and the contour interval is 0.1.
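The 'SAM large' versus 'SAM small' stratification used in Figures 15 and 16 is a simple threshold on the index relative to its mean and standard deviation. The sketch below is illustrative only and assumes a one-dimensional array of index values aligned to the JJASON forecast start months (how that array is obtained is not part of this example).

    import numpy as np

    def stratify_by_magnitude(index_values, n_std=0.5):
        """Boolean masks for 'large' cases (index outside the mean +/- n_std
        standard deviations) and 'small' cases (index within that band)."""
        x = np.asarray(index_values, dtype=float)
        lower = x.mean() - n_std * x.std()
        upper = x.mean() + n_std * x.std()
        large = (x < lower) | (x > upper)
        return large, ~large

    # Hypothetical usage with a SAM index series for the JJASON start months:
    # sam_large, sam_small = stratify_by_magnitude(sam_index)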

The SAM and the MJO are processes that operate on intraseasonal time-scales and have been shown to be important drivers of intraseasonal Australian rainfall variability (Hendon et al., 2007; Wheeler et al., 2009). Results from this study show that the SAM does contribute to intraseasonal rainfall skill in winter and spring in POAMA, specifically over the southwest and over New South Wales, but the influence is at shorter lead times than for the IOD and ENSO. These regions have been identified in observed studies of the impact of the SAM on Australian rainfall (Hendon et al., 2007; Risbey et al., 2009). In contrast to the results for the SAM, the model does not appear to have any extra skill for forecasting rainfall over Australia during strong MJO periods in winter and spring. There is no clear significant relationship between precipitation skill in these seasons and the existence of an MJO. This analysis does not distinguish between the various phases of the MJO, and is consistent with the results of Marshall et al. (2011), which show the difficulty that POAMA has in reproducing the observed MJO relationship with Australian rainfall at these times of year.

The current version of the POAMA seasonal forecast system was not designed with intraseasonal forecasting in mind. As such, it has deficiencies related to its use in this regard. Firstly, one forecast start date per month is not sufficient to adequately sample the various modes of intraseasonal variability, such as the MJO (e.g. a larger hindcast dataset would increase the number of forecasts started per month so that each phase of the MJO is better sampled). Secondly, the ensemble is constructed from lagged atmospheric initial conditions such that the 10th ensemble member is initialized 2.25 days earlier than the first member. Such large lags in the initial conditions may not be optimal for the intraseasonal time-scale. The new hindcast dataset planned for the next version (POAMA version 2) will be designed to incorporate intraseasonal prediction, including optimally perturbing the atmospheric initial conditions. Another significant limitation of the current version of POAMA is its simple bucket-type land surface model. There may be benefits for intraseasonal forecasting from improving the initialization and simulation of the land surface, primarily due to soil moisture memory in the Earth–atmosphere system. The future transition of POAMA (version 3) to the new Australian Community Climate and Earth-System Simulator (ACCESS) model (Puri, 2006) may be advantageous in this regard. This model has a more complex and flexible land surface model: the CSIRO Atmosphere Biosphere Land Exchange model (CABLE) (Wang et al., 2006).
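To make the lagged-ensemble point concrete: if the stated 2.25-day offset of the 10th member is spread evenly across the ensemble (an assumption; the text only gives the total lag), the atmospheric initial conditions are staggered by 6 hours per member, as in the sketch below.

    from datetime import datetime, timedelta

    def lagged_init_times(first_init, n_members=10, lag_hours=6):
        """Initialization times for a lagged ensemble: member 1 starts at
        first_init and each later member starts lag_hours earlier, so with
        10 members and 6-hour lags member 10 is 2.25 days earlier (even
        spacing is assumed; only the total 2.25-day lag is stated in the text)."""
        return [first_init - timedelta(hours=lag_hours * m) for m in range(n_members)]

    # Example: a forecast nominally started at 0000 UTC on 1 June
    for t in lagged_init_times(datetime(2010, 6, 1)):
        print(t)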

The intraseasonal forecast skill reported here is reasonably modest. However, there is clearly useful skill at certain times of the year and for certain regions of Australia. For precipitation, the skill is focused over the east and southeast in winter and spring. This represents a major agricultural region of Australia, incorporating the Murray–Darling Basin, and skilful rainfall prediction at this time of year would be potentially beneficial for water management and agriculture. This paper represents an initial investigation of the processes and drivers involved in intraseasonal prediction over Australia with POAMA. It also provides a baseline of comparison for future work planned to look at aspects such as the effect of initialization shock and ensemble design on intraseasonal prediction.

Acknowledgements

This work was supported by the Managing Climate Variability Program of the Grains Research and Development Corporation, Australia. Matthew Wheeler, Andrew Watkins and two anonymous reviewers are thanked for their useful comments in the preparation of this manuscript.

References

Alves O, Wang G, Zhong A, Smith N, Tzeitkin F, Warren G, Shiller A, Godfrey S, Meyers G. 2003. POAMA: Bureau of Meteorology Operational Coupled Model Forecast System. National Drought Forum, 15–16 April, Brisbane.

Ansell T, Reason CJC, Meyers G. 2000. Variability in the tropical southeast Indian Ocean and links with southeast Australian winter rainfall. Geophys. Res. Lett. 27: 3977–3980.

Brier GW. 1950. Verification of forecasts expressed in terms of probability. Mon. Weather Rev. 78: 1–3.

CliMag. 2009. 'Multi-week forecasts will help bridge the gap'. In CliMag (Managing Climate Variability Newsletter) 18: December. Available from the Grains Research and Development Corporation, Australia, [email protected]

Colman R, Deschamps L, Naughton M, Rikus L, Sulaiman A, Puri K, Roff G, Sun Z, Embury G. 2005. BMRC Atmospheric Model (BAM) version 3.0: comparison with mean climatology. BMRC Research Report No. 108, Bureau of Meteorology: Melbourne.

Day D. 2007. The Weather Watchers: 100 years of the Bureau of Meteorology. Melbourne University Publishing and Australian Bureau of Meteorology: Melbourne.

Doblas-Reyes FJ, Hagedorn R, Palmer TN. 2005. The rationale behind the success of multi-model ensembles in seasonal forecasting. II. Calibration and combination. Tellus 57A: 234–252.

Gong D, Wang S. 1999. Definition of Antarctic Oscillation index. Geophys. Res. Lett. 26: 459–462.

Gottschalck J, Wheeler M, Weickmann K, Waliser D, Sperber K, Vitart F, Savage N, Lin H, Hendon H, Flatau M. 2008. 'Madden–Julian Oscillation forecasting at operational modelling centres'. CLIVAR Exchanges 13: October.

Hall JD, Matthews AJ, Karoly DJ. 2001. The modulation of tropical cyclone activity in the Australian region by the Madden–Julian Oscillation. Mon. Weather Rev. 129: 2970–2982.

Hammer G, Nicholls N, Mitchell C (eds). 2000. Applications of Seasonal Climate Forecasting in Agricultural and Natural Ecosystems: An Australian Experience. Kluwer Academic: Dordrecht.

Hendon HH, Liebmann B. 1990. A composite study of onset of the Australian summer monsoon. J. Atmos. Sci. 47: 2227–2240.

Hendon HH, Thompson DWJ, Wheeler MC. 2007. Australian rainfall and surface temperature variations associated with the Southern Hemisphere annular mode. J. Clim. 20: 2452–2467.

Hendon HH, Lim E, Wang G, Alves O, Hudson D. 2009. Prospects for predicting two flavors of El Niño. Geophys. Res. Lett. 36: L19793, DOI: 10.1029/2009GL040100.

Hudson D, Alves O, Hendon HH, Wang G. 2011. The impact of atmospheric initialisation on seasonal prediction of tropical Pacific SST. Clim. Dyn. 36: 1155–1171.

Jewson S, Caballero R. 2003. The use of weather forecasts in the pricing of weather derivatives. Meteorol. Appl. 10: 377–389.

Johnson C, Bowler N. 2009. On the reliability and calibration of ensemble forecasts. Mon. Weather Rev. 137: 1717–1720.

Joliffe I, Stephenson D. 2003. Forecast Verification: A Practitioner's Guide in Atmospheric Science. Wiley: New York.

Jones DA, Trewin BC. 2000. On the relationships between the El Niño–Southern Oscillation and Australian land surface temperature. Int. J. Climatol. 20: 697–719.

Lim E-P, Hendon HH, Hudson D, Wang G, Alves O. 2009. Dynamical forecast of inter-El Niño variations of tropical SST and Australian spring rainfall. Mon. Weather Rev. 137: 3796–3810.

Manabe S, Holloway J. 1975. The seasonal variation of the hydrological cycle as simulated by a global model of the atmosphere. J. Geophys. Res. 80: 1617–1649.

Marshall AG, Alves O, Hendon HH. 2008. An enhanced moisture convergence–evaporation feedback mechanism for MJO air–sea interaction. J. Atmos. Sci. 65: 970–986.

Marshall AG, Hudson D, Wheeler MC, Hendon HH, Alves O. 2011. Assessing the simulation and prediction of rainfall associated with the MJO in the POAMA seasonal forecast system. Clim. Dyn. DOI: 10.1007/s00382-010-0948-2.

Mason SJ, Graham NE. 1999. Conditional probabilities, relative operating characteristics, and relative operating levels. Weather Forecast 14: 713–725.

Mason SJ, Graham NE. 2002. Areas beneath the relative operating characteristics (ROC) and relative operating levels (ROL) curves: statistical significance and interpretation. Q. J. R. Meteorol. Soc. 128: 2145–2166.

Mason SJ, Stephenson D. 2008. How do we know whether seasonal climate forecasts are any good? In Seasonal Climate: Forecasting and Managing Risk, Troccoli A, Harrison M, Anderson DLT, Mason SJ (eds). NATO Science Series. Springer: Berlin; 259–289.

Meinke H, Stone R. 2005. Seasonal and inter-annual climate forecasting: the new tool for increasing preparedness to climate variability and change in agricultural planning and operations. Clim. Change 70: 221–253.

Meyers G, McIntosh P, Pigot L, Pook M. 2007. The years of El Niño, La Niña, and interactions with the tropical Indian Ocean. J. Clim. 20: 2872–2880.

Mills GA, Weymouth G, Jones DA, Ebert EE, Manton MJ. 1997. A national objective daily rainfall analysis system. BMRC Techniques Development Report No. 1, Bureau of Meteorology: Australia.

Muller WA, Appenzeller C, Doblas-Reyes FJ, Liniger MA. 2005. A debiased ranked probability skill score to evaluate probabilistic ensemble forecasts with small ensemble sizes. J. Clim. 18: 1513–1523.

Murphy AH. 1973. A new vector partition of the probability score. J. Appl. Meteorol. 12: 595–600.

Puri K. 2006. Overview of ACCESS. BMRC Research Report No. 123, Bureau of Meteorology: Australia.

Rashid HA, Hendon HH, Wheeler MC, Alves O. 2011. Predictability of the Madden–Julian Oscillation in the POAMA dynamical seasonal prediction system. Clim. Dyn. 36: 649–661.

Rayner NA, Parker DE, Horton EB, Folland CK, Alexander LV, Rowell DP, Kent EC, Kaplan A. 2003. Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century. J. Geophys. Res. 108: 4407.

Reynolds R, Rayner NA, Smith TM, Stokes DC, Wang W. 2002. An improved in situ and satellite SST analysis for climate. J. Clim. 15: 1609–1625.

Risbey JS, Pook MJ, McIntosh PC, Wheeler MC, Hendon HH. 2009. On the remote drivers of rainfall variability in Australia. Mon. Weather Rev. 137: 3233–3253.

Roulston MS, Kaplan DT, Hardenberg J, Smith LA. 2003. Using medium-range weather forecasts to improve the value of wind energy production. Renew. Energy 28: 585–602.

Saji NH, Yamagata T. 2003. Possible impacts of Indian Ocean dipole mode events on global climate. Clim. Res. 25: 151–169.

Saji NH, Goswami BN, Vinayachandran PN, Yamagata T. 1999. A dipole mode in the tropical Indian Ocean. Nature 401: 360–363.

Saji NH, Xie S, Yamagata T. 2006. Tropical Indian Ocean variability in the IPCC twentieth-century climate simulations. J. Clim. 19: 4397–4417.

Sankarasubramanian A, Lall U, Devineni N, Espinueva S. 2009. The role of monthly updated climate forecasts in improving intraseasonal water allocation. J. Appl. Meteorol. Climatol. 48: 1464–1482.

Schiller A, Godfrey J, McIntosh P, Meyers G. 1997. A global ocean general circulation model for climate variability studies. CSIRO Marine Research Report No. 227, CSIRO: Australia.

Schiller A, Godfrey J, McIntosh P, Meyers G, Smith N, Alves O, Wang O, Fiedler R. 2002. A new version of the Australian community ocean model for seasonal climate prediction. CSIRO Marine Research Report No. 240, CSIRO: Australia.

Smith NR, Blomley JE, Meyers G. 1991. A univariate statistical interpolation scheme for subsurface thermal analyses in the tropical oceans. Prog. Oceanogr. 28: 219–256.

Spillman C, Alves O. 2009. Dynamical seasonal prediction of summer sea surface temperatures in the Great Barrier Reef. Coral Reefs 28: 197–206.

Stockdale TN. 1997. Coupled ocean–atmosphere forecasts in the presence of climate drift. Mon. Weather Rev. 125: 809–818.

Taylor JW, Buizza R. 2003. Using weather ensemble predictions in electricity demand forecasting. Int. J. Forecast. 19: 57–70.

Thompson DWJ, Wallace JM. 2000. Annular modes in the extratropical circulation. Part I: Month-to-month variability. J. Clim. 13: 1000–1016.

Toth Z, Pena M, Vintzileos A. 2007. Bridging the gap between weather and climate forecasting: research priorities for intraseasonal prediction. Bull. Am. Meteorol. Soc. 88: 1427–1429.

Ummenhofer CC, England MH, McIntosh PC, Meyers GA, Pook MJ, Risbey JS, Gupta AS, Taschetto AS. 2009. What causes southeast Australia's worst droughts? Geophys. Res. Lett. 36: L04706, DOI: 10.1029/2008GL036801.

Uppala SM, Kallberg PW, Simmons AJ, Andrae U, Da Costa Bechtold V, Fiorino M, Gibson JK, Haseler J, Hernandez A, Kelly GA. 2005. The ERA-40 re-analysis. Q. J. R. Meteorol. Soc. 131: 2961–3012.

Valcke S, Terray L, Piacentini A. 2000. Oasis 2.4, Ocean atmosphere sea ice soil: user's guide. TR/CMGC/00/10, CERFACS: Toulouse.

Vitart F. 2004. Monthly forecasting at ECMWF. Mon. Weather Rev. 132: 2761–2779.

Vitart F, Buizza R, Balmaseda MA, Balsamo G, Bidlot J-R, Bonet A, Fuentes M, Hofstadler A, Molteni F, Palmer TN. 2008. The new VarEPS-monthly forecasting system: a first step towards seamless prediction. Q. J. R. Meteorol. Soc. 134: 1789–1799.

Waliser D, Weickmann K, Dole R, Schubert S, Alves O, Jones C, Newman M, Pan H-L, Roubicek A, Saha S, Smith C, Van den Dool H, Vitart F, Wheeler M, Whitaker J. 2006. The Experimental MJO prediction project. Bull. Am. Meteorol. Soc. 87: 425–431.

Wang G, Alves O, Smith N. 2005. BAM3.0 tropical surface flux simulation and its impact on SST drift in a coupled model. BMRC Research Report No. 107, Bureau of Meteorology: Australia.

Wang G, Alves O, Hudson D, Hendon HH, Liu G, Tseitkin F. 2008. SST skill assessment from the new POAMA-1.5 System. BMRC Research Letters No. 8, 2–6, Bureau of Meteorology: Australia.

Wang Y, Kowalczyk E, Law R, Abramowitz G. 2006. The CSIRO atmosphere biosphere land exchange model and future development for ACCESS. BMRC Research Report No. 123, 84–87, Bureau of Meteorology: Australia.

Weigel AP, Liniger MA, Appenzeller C. 2007. The discrete Brier and ranked probability skill scores. Mon. Weather Rev. 135: 118–124.

Wheeler MC, Hendon HH. 2004. An all-season real-time multivariate MJO index: development of an index for monitoring and prediction. Mon. Weather Rev. 132: 1917–1932.

Wheeler MC, Hendon HH, Cleland S, Meinke H, Donald A. 2009. Impacts of the Madden–Julian oscillation on Australian rainfall and circulation. J. Clim. 22: 1482–1498.

Wilks D. 2006. Statistical Methods in Atmospheric Sciences (2nd edn). Academic Press: Burlington, MA.

Zeng L. 2000. Weather derivatives and weather insurance: concept, application, and analysis. Bull. Am. Meteorol. Soc. 81: 2075–2082.

Zhang C, Dong M, Gualdi S, Hendon H, Maloney E, Marshall A, Sperber K, Wang W. 2006. Simulations of the Madden–Julian oscillation in four pairs of coupled and uncoupled global models. Clim. Dyn. 27: 573–592.

Zhao M, Hendon HH. 2009. Representation and prediction of the Indian Ocean dipole in the POAMA seasonal forecast model. Q. J. R. Meteorol. Soc. 135: 337–352.

Zhong A, Alves O, Hendon H, Rikus L. 2006. On aspects of the mean climatology and tropical interannual variability in the BMRC Atmospheric Model (BAM 3.0). BMRC Research Report No. 121, Bureau of Meteorology: Australia.
