Quality Aspects of HARMONIE - NetFAM 2005-2009 (netfam.fmi.fi/harmonietrain/Quality_XY.pdf) · Yang, HARMONIE Training 2011


Yang, HARMONIE Training 2011

Quality Aspects of HARMONIE

• HARMONIE quality monitoring
• Pre-release validation (36h1.3)
• Verification inter-comparison
• Trunk monitoring
– Challenges with meso-scale verification


– See Carl and Ulf's presentation about methods and tools

– The presentation skips some of the material; the latter is included as reference

– This talk covers only meteorological quality. Technical quality of HARMONIE is not unimportant; one example is computational efficiency and stability, and hence feasibility for frequent and early delivery


Pre-release validation of 36h1.3

(most of materials presented at the ASM 2011)


• Prior to official tagging, multi-month validation for historical episodes is organised for meteorological quality assurance
– Compared to previous taggings (35h1.3, 36h1.2)
– 'Traditional observation verification'
– Participation by a wide developer group is very important

Pre-release validation: HARMONIE 36h1.3


Summer (201008), ~10 km

[Score plots of PMSL, T2m and W10 for ALADIN, E11, M09 and RCR]


201008, ~3 km

[Score plots for AROME, ALARO 5.5, S03, G05]


Winter (201001), ~10 km

[Score plots for ALADIN, E11, M09 and RCR]


201001, ~3 km

[Score plots for AROME, ALARO 5.5, S03, G05]


Aug 14 2010: Copenhagen torrential rain


Aug 17 2010: Bornholm flash flood


Aug 18 2010: Billund torrential rain


Aug 14: AROME 3h precipitation: +21h

[Precipitation maps: no DA, surface DA, with DA]

Aug 17: AROME 3h precipitation: +18h

[Precipitation maps: no DA, surface DA, with DA]

Aug 18: AROME 3h precipitation: +21h


Impact of data assimilation: T2

[T2 score plots: no DA, surface DA, 3D-Var + DA]


Impact of data assimilation: T2

[T2 score plots: no DA, surface DA, 3D-Var + DA]


T2 over Denmark, Jan 2010

[T2 score plots: AROME, ALARO 5.5, ALADIN 5.5]


T2 over FINLAND, Jan 2010

[T2 score plots: AROME, ALARO 5.5, ALADIN 5.5]


[T2 scores: “Denmark”, 36h1.3, Jan 2011; “Finland”, 35h1, Dec 2010 - Feb 2011]


T2 over the Netherlands and Scandinavia

[Score plots: ALADIN 10, ALADIN 5.5, HIRLAM E11, M09, RCR]


W10 over the Netherlands and Scandinavia

[Score plots: ALADIN 10, HIRLAM M09]


W10 over mountains, Jan 2010

[Score plots: AROME, ALARO 5.5, ALADIN 5.5]


Conclusions from 36h1.3 Validation

• HARMONIE (AROME, ALARO and ALADIN) forecasts have, grossly speaking, meteorological performance comparable to that of HIRLAM
– This refers mainly to average model properties (PMSL, T2m, cloud, precipitation)
– Good potential shown for strong summer convection

• Several obvious shortcomings were identified during the validation studies
– Severe wind bias in AROME, corrected in 36h1.4
– Severe problems in producing cold Nordic winter temperatures, even though the bias on average is strongly negative
– Generally too weak wind over mountain areas


Model inter-comparison in HIRLAM


• Portal: https://hirlam.org/trac/wiki/oprint
• Operational HIRLAM suites in all member services participate
• HARMONIE suites:
– DMI: 36h1.3+, AROME (Denmark)
– SMHI: 36h1.2+?, ALARO (Scandinavia 5.5)
– KNMI: 36h1.3+, AROME (Netherlands)
– Met Eireann: 36h1.3, AROME (Ireland 2.5)
– Real-time monitoring runs at ECMWF: 36h1.4 (Denmark, AROME; GLAMEPS_v0, ALADIN), trunk (Denmark, AROME; Scandinavia 5.5, ALARO)
– HARMONIE runs from FMI, met.no, LHMS and AEMET promised to be included

Observation Verification Intercomparison


• Given a unified tool and definition/method, it is easier to spot differences between models

• A way to follow the quality trend and evolution of operational models

• An additional means for the community to detect model and implementation problems

• Promote common tools and practices in verification

Why multi-model inter-comparison


GLAMEPS ALADIN-control member vs HARMONIE 36h1.4-ALADIN: mslp


GLAMEPS ALADIN-control member vs HARMONIE 36h1.4-ALADIN: v10m


GLAMEPS ALADIN-control member vs HARMONIE 36h1.4-ALADIN: t2m


There is strong variability among operational HIRLAM systems...

PMSL standard deviation and bias for 200904, comparing 8 operational HIRLAM models plus ECMWF


(PMSL, Xynthia episode)

(Yang, HIRLAM-MG visit to DMI, 2010)

Quality of host model (same model, different BC coupling)


Same model/configuration, but the “DMI” (red) domain is more than double the “EMHI” (green) domain, indicating dominance of host model quality/lateral boundary data

“Smaller domain => better PMSL?”


Importance of stratified verification

Upper: Average W10m for EWGLAM station list with HIRLAM-7.3 RCR (red) and HARMONIE-ALADIN (green)

Right: same but for mountain stations above 500 m altitude


Question: does the “green” model provide a better wind forecast than the “red” one?

Not for strong wind conditions, nor for mountain stations!
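The effect of stratification can be shown with a minimal sketch. The station records, field layout and altitude threshold below are invented for illustration; this is not code from the HIRLAM verification package:

```python
# Sketch of stratified verification: the same forecasts are scored over all
# stations and again over a subset (here, mountain stations), because an
# aggregate score can hide a conditional failure such as missed strong winds.
# All station data and numbers are illustrative.

def bias_and_std(pairs):
    """Mean error (bias) and its standard deviation for (fcst, obs) pairs."""
    errors = [f - o for f, o in pairs]
    n = len(errors)
    bias = sum(errors) / n
    var = sum((e - bias) ** 2 for e in errors) / n
    return bias, var ** 0.5

# (forecast W10m, observed W10m, station altitude in m) -- hypothetical
records = [
    (4.0, 4.5, 10), (6.0, 5.5, 50), (3.0, 3.5, 120),
    (5.0, 9.0, 800), (6.0, 11.0, 1200),   # mountain stations: strong winds missed
]

all_pairs = [(f, o) for f, o, _ in records]
mountain_pairs = [(f, o) for f, o, alt in records if alt > 500]

print("all stations:      bias=%.2f std=%.2f" % bias_and_std(all_pairs))
print("mountain stations: bias=%.2f std=%.2f" % bias_and_std(mountain_pairs))
```

The aggregate bias looks moderate, while the mountain stratum alone reveals a large negative wind bias, which is exactly the point of the slide.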


Upper left: Scatter plot of HIRLAM RCR-7.3 W10m in 201011 for EWGLAM station list

Upper right: Scatter plot of Harmonie-ALADIN W10m in 201011 for EWGLAM station list

Right: Scatter plot of Harmonie-ALADIN W10m in 201011 for MOUNTAIN stations


Factors Behind Verification Results?

– Model characteristics
• model realisations such as versions, resolution, components (DA, initialisation, dynamics, physics...); coupling strategy, host model
– Parameter definition
• PMSL, T2m, RH2m, ...
– Verification method
• QC criteria; sampling (time, area...); classification


• On ECMWF ecgate, quasi-real-time cycling using the AROME and ALARO options from the latest technically running HARMONIE versions is maintained; it provides gross monitoring information about the technical and meteorological aspects of the trunk

• General data portal on HARMONIE/HIRLAM trunk: https://hirlam.org/trac/wiki/trunkmonitoring

Monitoring of HARMONIE trunk


HIRLAM's steep learning curve with new cycles...


Trunk: arome


Trunk: aladin


Trunk: alaro


HARMONIE steps toward final release

do i=1,n (n=?)
• Porting and adaptation of source codes, bugfixes
• Build
• Scripts and namelist adaptations
• Test runs (the most common -c options)
• Debug
• Tagging (alpha, beta, rc, official, bf)
• Error reports (member services, staff...)
end do


Challenges in verification of meso-scale NWP systems


Perspectives in NWP verification

– How does model A compare to model B? Does A offer added value over B? (authorities, consortia, ...)

– How is the quality of an NWP system evolving? (the “Director's curve”)

– How does the new model behave in comparison to the old one? (users' concern)

– Diagnosis of performance and detection of deficiencies (modellers' concern)


What's particular with quality measure of HARMONIE?

• Remember what the meso-scale model is for:
– Severe, high impact weather
– Higher spatial and temporal resolution
– Still, an all-weather application as before

• Hence the quality checks on:
– How it does for 'average' weather
– What it does about high resolution features
– How it performs on high impact weather


“Conventional” observation verification, such as HIRLAM's reference verification method, relies largely on data from the GTS surface observation network for verification of surface parameters. Such data typically have quite coarse resolution.

How does such data reflect high impact weather?

Limitation in representativity of GTS surface observation data


• The resolution of the surface synoptic observation network is normally insufficient to reflect meso-scale weather phenomena, which are often associated with high impact events

Deficiencies with in-situ data

e.g., the DMI surface observation network includes 83 synoptic stations, 9 radiosonde stations (including 2 ASAP units), 4 weather radars, 80 automatic precipitation stations, and 500 voluntary rainfall stations


A local strong convection event - Aug 20, 2007, South Jutland, Denmark


Rain sum derived from radar (Flemming Vejen, DMI)


Observed “truth” from GTS data: 2 mm in 12 h. Validation of precipitation forecasts using GTS gauge data is highly unreliable for mesoscale convective events.


July 3 2011: Copenhagen after the flash flood


Verification in Meso-scale NWP

• Data
– The usual 'synoptic network' has insufficient resolution for meso-scale weather
– Model analysis is not suitable for validation either
– New methods, with a focus on smaller-scale high impact weather, often require 2D or 3D data (usually remote sensing data)

• Algorithms
– Limited predictability and the double penalty due to higher sensitivity to phase errors require new methods suitable for validating non-synoptic, high impact, local events instead of “average” weather and parameters
– Spatial and upscaling verification appear better suited to high resolution features
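The upscaling idea can be sketched in a few lines: average the fields onto coarser blocks before applying the event threshold, so that a correctly predicted but slightly displaced feature is not punished twice. The fields, threshold and block size below are invented for illustration, not taken from any HIRLAM tool:

```python
# Sketch of upscaling verification: block-average a high-resolution field
# before thresholding. A forecast rain cell displaced by one fine-grid point
# still scores a hit on the coarse grid, softening the "double penalty" of
# point-wise verification. Purely illustrative fields and numbers.

def upscale(field, block):
    """Block-average a 2D list-of-lists onto a coarser grid."""
    ny, nx = len(field), len(field[0])
    out = []
    for j in range(0, ny, block):
        row = []
        for i in range(0, nx, block):
            vals = [field[jj][ii]
                    for jj in range(j, j + block)
                    for ii in range(i, i + block)]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

# Observed and forecast rain: the same cell, shifted by one fine-grid point.
obs = [[0, 0, 0, 0],
       [0, 0, 8, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
fc  = [[0, 0, 0, 0],
       [0, 0, 0, 8],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]

thr = 1.0
# Point-wise: no collocated exceedance -> a miss AND a false alarm.
pointwise_hit = any(fc[j][i] >= thr and obs[j][i] >= thr
                    for j in range(4) for i in range(4))
# After 2x2 upscaling both events land in the same coarse box -> a hit.
obs2, fc2 = upscale(obs, 2), upscale(fc, 2)
upscaled_hit = any(fc2[j][i] >= thr and obs2[j][i] >= thr
                   for j in range(2) for i in range(2))
print(pointwise_hit, upscaled_hit)
```

Note that the displacement here stays within one coarse box; upscaling only forgives phase errors smaller than the chosen block scale, which is why the block (or neighbourhood) size is itself a verification parameter.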

SWS = (1 + Σ_{j=1}^{K} J_meso,j) / (1 + Σ_{j=1}^{K} J_ref,j)

SWS (B. Sass): “severe weather score”, a measure of the relative skill of two models in correctly forecasting defined events, based on upscaling principles.
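A minimal reading of the SWS formula can be sketched as follows. The interpretation of J as a per-case correct-forecast flag, and the event lists, are assumptions for illustration, not B. Sass's actual implementation:

```python
# Sketch of the severe weather score
#   SWS = (1 + sum_j J_meso,j) / (1 + sum_j J_ref,j)
# assuming J_.,j is 1 if the model correctly forecast the defined severe
# event in case j, else 0. The "+1" terms keep the ratio defined when a
# model has no correct forecasts. Event lists below are invented.

def sws(meso_hits, ref_hits):
    """Relative skill of the meso-scale model vs the reference model."""
    return (1 + sum(meso_hits)) / (1 + sum(ref_hits))

# Per-case correct-forecast flags for, say, heavy-rain events (hypothetical).
meso = [1, 1, 0, 1, 1]   # meso-scale model: 4 correct forecasts
ref  = [1, 0, 0, 0, 1]   # reference model: 2 correct forecasts

print("SWS = %.2f" % sws(meso, ref))  # SWS > 1: meso-scale model more skilful
```

Under this reading, SWS = 1 means equal skill on the defined events, and values above 1 favour the meso-scale model.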

First results



Heavy Rain


T2m > 25 C


• Please remember the difference between official releases of the HIRLAM/HARMONIE system and those from the trunk or tagged alpha, beta or rc.

• The fact that the HIRLAM programme puts high priority on quality assuring each official HARMONIE release is no guarantee that the system works “out of the box” for a member service and your application. This is especially so with a meso-scale model with limited area coverage.

• So, do your own quality assurance, but share experiences.

!! Do your own quality assurance !!


!! Severe shortcoming with in-situ data for HARMONIE verification !!

• Verification of mesoscale NWP cannot rely solely on in-situ observation data, due to representativity. This is especially true for moist variables.

• More effort is needed in meso-scale verification: high resolution 2D or 3D observation data or retrievals; algorithms for spatial verification and upscaling, with a probabilistic view

• Nevertheless, the existing reference verification package provides a useful tool for regular monitoring (and sanity checking) of the HARMONIE system