ShakeAlert Testing Procedure Discussion
Philip Maechling
26 March 2010
1
SCEC has the opportunity to define a testing approach for the CISN ShakeAlert System.
– Testing approach should be consistent with USGS interests in the ShakeAlert System
– CTC effort should provide a longitudinal study of ShakeAlert capabilities
– A science-oriented testing focus (rather than an engineering focus) is more consistent with the CSEP model
– CTC effort provides SCEC with an opportunity to demonstrate the general capabilities of CSEP infrastructure on other problems
2
ShakeAlert Testing
CTC plan must be implemented within the funded level of effort, approximately 12 hours per month.
– SCEC should establish a scientific framework for ShakeAlert testing
– Initial testing approach should be simple
– Initial testing should provide value to USGS and ShakeAlert developers
– Initial testing should communicate the value of EEW testing to the SCEC community and CISN
3
Scale of SCEC CTC Activity
Bridging the gap between science and engineering: avenues for collaborative research
Christine Goulet, PhD
Sr. Geotechnical Engineer, URS
Lecturer, UCLA
2009 Annual Meeting: Palm Springs, CA
5
Conclusion
• Collaboration is an outcome-driven process (mission, vision, etc.)
• We can benefit from collaboration if we commit to:
– Spending time and effort in the process
– Keeping an open mind
– Keeping an eye on the goal
• Benefit for engineers: a better understanding and integration of seismological phenomena = better design
• Benefit for scientists: the application and dissemination of their results into the built world = greater impact
6
On collaboration
Collaboration is a process through which people work together, pooling their resources to achieve a shared desired result or outcome.
The collaboration process:
• Involves a catalyst (common interest, reaction to an event)
• Provides a broader insight into a problem and its potential solutions
• Allows a knowledge transfer by which each participant's specialty benefits the group (knowledge optimization)
• Gives access to new problems and ideas
Successful collaboration requires:
• Effective communication
• A clearly defined goal or vision
Collaboration is an outcome-driven process
7
On communication
To communicate is human…
…it does not mean we’re naturally good at it.
Key elements for better communication:
• Sharing a common language
• Saying what you mean
• Developing improved active listening skills
• Using feedback techniques ("What I understood is… Is this correct?")
• Keeping an open mind
8
A shared vision?
                       Scientists       Engineers
Interest               Earthquakes      Earthquakes
Goal/desired outcome   Understanding    Design a product
[Diagram: the shared interest joins scientists and engineers into a group]
9
Interface(s)
• Source effects (fault mechanism, magnitude and location; recurrence models): Geologists & Seismologists
• Travel paths: Seismologists & Engineers
• Site effects (wave propagation to the surface, basin effects, topographic effects, directivity): Geotechnical Engineers & Seismologists
• Structural response, including foundation: Geotechnical & Structural Engineers
• Loss analysis: Engineers, loss modelers
Establish Testing Emphasis with USGS and CISN Development Groups
10
ShakeAlert Forecast Evaluation Problems:
– Scientific publications provide insufficient information for independent evaluation
– Data to evaluate forecast experiments are often improperly specified
– Active researchers are constantly tweaking their codes and procedures, which become moving targets
– Difficult to find resources to conduct and evaluate long-term forecasts
– Standards are lacking for testing forecasts against reference observations
11
Problems in Assessing Forecasts
SCEC Annual Meeting, Palm Springs, Sept. 14-16, 2009
Warner Marzocchi
INGV, Istituto Nazionale di Geofisica e Vulcanologia, Rome, Italy
In collaboration with: Anna Maria Lombardi (INGV), Gordon Woo (RMS), Thomas van Stiphout (ETH), Stefan Wiemer (ETH)
Long- and short-term operational earthquake forecasting in Italy: the case of the April 6, 2009, L'Aquila earthquake
Design of Testing Experiment
13
The EEW tests we implement should be valid for CISN and any other EEW implementation, including commercial systems and community contribution-based systems.
14
Additional Goal for Testing
Many CSEP testing principles are applicable to CISN EEW testing. The following definitions need to be made to evaluate forecasts (see the sketch below):
– Exact definition of testing area
– Exact definition of a forecast
– Exact definition of input data used in forecasts
– Exact definition of reference observation data
– Measures of success for forecasts
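One way to make these definitions concrete is to capture them in a machine-readable specification. The sketch below is a hypothetical Python structure, not an agreed CTC format; every field name here is an illustrative assumption.

```python
# Hypothetical specification capturing the five definitions above.
# All field names are illustrative assumptions, not a CTC standard.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class EEWExperimentSpec:
    # Exact definition of testing area: polygon of (lon, lat) vertices
    testing_area: List[Tuple[float, float]]
    # Exact definition of a forecast: parameters every forecast must contain
    forecast_fields: List[str]          # e.g. ["magnitude", "lat", "lon", "pgv"]
    # Exact definition of input data used in forecasts
    input_data_sources: List[str]       # e.g. ["CISN real-time waveforms"]
    # Exact definition of reference observation data
    reference_data_sources: List[str]   # e.g. ["ANSS catalog", "ShakeMap"]
    # Measures of success: scoring functions (forecast, observed) -> float
    success_measures: List[Callable] = field(default_factory=list)
```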
15
Design of an Experiment
Design of EEW science testing introduces elements that CSEP has not had to consider:
– Must decide whether to test both forecasts and "alerts"
– Different algorithms produce different forecasts (see the sketch below):
• Some (e.g. On-site) produce site-specific information (PGA) and event magnitude, but no origin time or distance to the event
• Some (e.g. VS) produce full event parametric information
• Some (e.g. ElarmS) produce site-specific ground motion estimates on a regular grid
• Some produce single values (On-site)
• Some produce time series with updates (VS, ElarmS)
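As a sketch of how these heterogeneous outputs might be normalized for testing, the structure below uses optional fields so each algorithm fills in only what it produces. Field names and grouping are assumptions for illustration.

```python
# Sketch of a normalized forecast message; field names are assumptions.
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class EEWForecastMessage:
    algorithm: str                      # e.g. "on-site", "vs", "elarms"
    issue_time: float                   # epoch seconds when forecast was issued
    update_number: int = 0              # stays 0 for single-value algorithms;
                                        # increments for time-series algorithms
    magnitude: Optional[float] = None   # event magnitude, if produced
    origin_time: Optional[float] = None # None for e.g. On-site (no origin time)
    epicenter: Optional[Tuple[float, float]] = None  # (lon, lat), if available
    site_pga: Optional[Dict[str, float]] = None      # station code -> predicted PGA
    grid_pgv: Optional[List[List[float]]] = None     # regular grid of predicted PGV
```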
16
Design of an Experiment
Design of EEW science testing introduces elements that CSEP has not had to consider:
– More difficult to determine the information used in a forecast, especially once a Bayesian approach is fully implemented
– More difficult to determine what data is used in a forecast at any time
– Time basis of a forecast (forecast term, e.g. 60 seconds to 1 second) varies by event
– Greater interest in a summary of performance on an event-by-event basis; should support push-based distribution of results after significant events
17
Design of an Experiment
Example of stations that could contribute to forecasts.
18
Design of an Experiment
The 1-day forecasts (the palette represents the rate of M 4+). Daily forecasts released at 8:00 AM (no overlaps).
Testing the forecasts (using M 2.5+ events)
N-test Spatial test
21
2. GMPE prediction, distance-scaling term
[Figure: CB (2008) GMPE spectral acceleration Sa (g) versus rupture distance Rrup (km), shown for PGA and for SA at T = 1 s and T = 10 s; strike-slip earthquake, VS30 = 540 m/s. Image: J. Stewart, L. Star]
Propose time-dependent tests as forecasts issued before origin time (or before peak ground motion at a site):
– Could produce a peak ground motion map at origin time and later. Forecasts produce ground motion maps, and any region that has not yet received its peak ground motion contributes to the forecast. Each algorithm produces a series of forecast maps as it generates them. Any region in any map that has not yet experienced its time of PGV is credited. The number of contributing map regions falls over time, eventually reaching zero forecasts to be evaluated for the event (see the sketch below).
– For the next test, maybe we can ignore whether sites receive a warning.
– Plot the forecast by time, as in slide 15, showing improvement in the forecast with shorter forecast times.
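A minimal sketch of this map-based test, assuming hypothetical inputs: a time series of forecast PGV maps from one algorithm, the observed PGV grid, and the time each grid cell experienced its peak ground velocity.

```python
# Minimal sketch of the time-dependent map test; inputs are assumptions.
import numpy as np

def score_forecast_series(forecast_maps, observed_pgv, pgv_time):
    """forecast_maps: list of (issue_time, 2D array of predicted PGV).
    observed_pgv: 2D array of observed peak ground velocity.
    pgv_time: 2D array giving when each cell reached its PGV.
    Only cells whose peak motion has not yet occurred at issue time
    are credited; the credited region shrinks to zero over time."""
    scores = []
    for issue_time, predicted in forecast_maps:
        pending = pgv_time > issue_time        # cells still contributing
        if not pending.any():                  # nothing left to forecast
            break
        err = np.abs(predicted[pending] - observed_pgv[pending]).mean()
        scores.append((issue_time, int(pending.sum()), float(err)))
    return scores
```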
22
Design of an Experiment
23
• First test is to reproduce the ShakeMap
24
Design of an Experiment
• Map of reporting stations used in ShakeMap
Propose time-dependent tests as forecasts issued before origin time (or before peak ground motion at a site):
– Introduce the use of the first provided estimate as an important measure.
– Introduce the use of announcers as a new class of system that provides forecasts. Announcers would be easy to add and easy to remove.
– On which side of the interface is the probability set? Do forecasters provide forecasts and probabilities, or do we set tests at a probability level and let them figure out whether their output meets the specified level?
25
Design of an Experiment
Point to bring home on short-term forecasts:
We perform daily aftershock forecasts in real time. From tests over the first months, the forecast seems well calibrated, correctly describing the space-time evolution of the aftershock sequence.
The same model (retrospectively) detected an increase in probability before the main event; however, the (daily) probability did not reach a value of 1%.
The challenge is for scientists to articulate uncertainty without losing credibility and to give public officials the information they need for decision-making.
[Diagram: Scientists <-> Public officials]
This requires bridging the gap between scientific output (probability) and the Boolean logic (YES/NO) of decision-makers.
Introducing the problem
Design of EEW science testing introduces elements that CSEP has not had to consider:
– CISN seems to be distinguishing an event module (which produces event parameters) from a user module (which produces site-specific ground motion estimates)
– User modules are likely to vary by tolerance for false alarms and by conversion from location/magnitude to site-specific ground motion estimates
– I recommend we make it easy to add new forecast sources and remove old ones, so that we can support experimentation on forecasters by CISN
28
Design of an Experiment
New Waveform Processing Library

Algorithm             Code                        Memory buffers   Import from                      Delays
On-site               compact                     internal         Multicast network or Earthworm   < 0.01 seconds
Virtual Seismologist  compact                     internal         Waveform Data Area (WDA)         3-5 seconds
ElarmS                4 modules + ElarmS program  shared           Waveform Data Area (WDA)         3-5 seconds + delays caused by writing/reading to shared memory buffers
Development of a new Waveform Processing Library (based on the same idea already used by the On-site algorithm): the old framework used the GCDA (Generic Continuous Data Area) to store waveforms, which slowed down read/write access to the waveforms and the overall processing thread. To avoid that problem, the new version will use internal memory buffers and work in a single-process, multi-threaded environment.
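The sketch below illustrates the internal-buffer idea in a single-process, multi-threaded setting: import threads append packets while algorithm threads take snapshots, with no shared-memory file I/O. This is only a conceptual illustration, not the CISN library's actual design.

```python
# Conceptual sketch of an internal waveform memory buffer; not CISN code.
import threading
from collections import deque

class WaveformBuffer:
    """Bounded per-channel sample buffer shared by threads in one process."""

    def __init__(self, max_samples=100_000):
        self._samples = deque(maxlen=max_samples)  # oldest samples drop off
        self._lock = threading.Lock()

    def append(self, packet):
        """Called by the import thread for each incoming waveform packet."""
        with self._lock:
            self._samples.extend(packet)

    def snapshot(self):
        """Called by algorithm threads; copies out the current samples."""
        with self._lock:
            return list(self._samples)
```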
Decision Module (DM)
• The Decision Module is expected to:
– receive short, independent messages from the three Event Detectors
– run on different machines than the Event Detectors
• The passing of messages from the three Event Detectors to the DM, as well as the broadcast of the outputs of the DM to users, will likely be based on Apache ActiveMQ (publish-subscribe messaging system; asynchronous message passing and persistent message storage).
• Preliminary API is almost finished
• Challenging: association and updates of messages
– Update the DM event if possible; if the misfit is too large, disassociate all messages of the event and create a new DM event (similar to Binder)
– Requires that the On-site algorithm provide eventIDs (done)
Messages contain:
– most probable Mw, location, origin time, ground motion, and uncertainties
– probability of false trigger, i.e. no earthquake
– a CANCEL message if needed
Bayesian approach, updated with time (see the sketch below)
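A sketch of what one detector-to-DM message might look like; the JSON field names, topic name, and the publish stub are hypothetical illustrations. Per the slide above, the actual transport is expected to be ActiveMQ publish-subscribe.

```python
# Hypothetical detector-to-DM message; field and topic names are illustrative.
import json
import time

def publish(destination, body):
    """Stand-in for an ActiveMQ publish call (e.g. via a STOMP client)."""
    print(f"-> {destination}: {body}")

message = {
    "eventID": "onsite-000123",         # On-site must provide eventIDs (done)
    "version": 3,                       # Bayesian estimate, updated with time
    "Mw": 5.1,                          # most probable magnitude
    "lat": 34.05, "lon": -118.25,       # most probable location
    "origin_time": time.time() - 4.0,   # most probable origin time
    "pgv_cm_s": 6.2, "pgv_sigma": 2.0,  # ground motion and uncertainty
    "p_false_trigger": 0.08,            # probability of no earthquake
    "cancel": False,                    # set True for a CANCEL message
}
publish("/topic/eew.dm.input", json.dumps(message))
```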
[Diagram: the τc-Pd On-site Algorithm (single sensor), Virtual Seismologist (VS) (sensor network), and ElarmS (sensor network) feed the Bayesian Decision Module]
Task 1: increase reliability
Task 2: demonstrate & enhance
[Diagram: CISN ShakeAlert (τc-Pd On-site Algorithm on a single sensor; Virtual Seismologist (VS) and ElarmS on sensor networks; Bayesian Decision Module) feeds a USER Module (single-site warning, map view) and the CISN EEW Testing Center with test users, which provide feedback on predicted and observed ground motions, available warning time, probability of false alarm, and more]
Methodology development
slide courtesy of Holly Brown
Presented 23 June 2009 at the Joint Meeting of MeteoAlarm and the WIS CAP Implementation Workshop on Identifiers
by Eliot Christian <[email protected]>
Identifiers and the Common Alerting Protocol (CAP)
World Meteorological Organization (WMO), Observing and Information Systems Department
WMO Information System (WIS)
Outline
• What is CAP?
• Why and how would MeteoAlarm use CAP?
• What are the issues with identifiers?
What is CAP?
The Common Alerting Protocol (CAP) is a standard message format designed for all-media, all-hazard communications:
• over any and all media (television, radio, telephone, fax, highway signs, e-mail, Web sites, RSS "blogs", ...)
• about any and all kinds of hazard (weather, fires, earthquakes, volcanoes, landslides, child abductions, disease outbreaks, air quality warnings, beach closings, transportation problems, power outages, ...)
• to anyone: the public at large; designated groups (civic authority, responders, etc.); specific people
Structure of a CAP Alert
CAP Alert messages contain:
• Text values for human readers, e.g., "headline", "description", "instruction", "area description", etc.
• Coded values useful for filtering, routing, and automated translation to human languages
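To make the structure concrete, the sketch below builds a minimal CAP alert with Python's standard library, showing both kinds of content. All field values are hypothetical examples, not a real alert.

```python
# Minimal CAP 1.1 alert built with the standard library; values are examples.
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.1"
ET.register_namespace("", CAP_NS)

def elem(parent, tag, text):
    e = ET.SubElement(parent, f"{{{CAP_NS}}}{tag}")
    e.text = text
    return e

alert = ET.Element(f"{{{CAP_NS}}}alert")
elem(alert, "identifier", "example-2010-0001")      # hypothetical identifier
elem(alert, "sender", "[email protected]")    # hypothetical sender
elem(alert, "sent", "2010-03-26T12:00:00-08:00")
elem(alert, "status", "Test")      # coded: Actual, Exercise, System, Test
elem(alert, "msgType", "Alert")    # coded: Alert, Update, Cancel, Ack, Error
elem(alert, "scope", "Public")     # coded: Public, Restricted, Private

info = ET.SubElement(alert, f"{{{CAP_NS}}}info")
elem(info, "category", "Geo")          # coded event category
elem(info, "event", "Earthquake")
elem(info, "urgency", "Immediate")     # coded timeframe for responsive action
elem(info, "severity", "Severe")       # coded level of threat
elem(info, "certainty", "Likely")      # coded probability of occurrence
elem(info, "headline", "Strong shaking expected")   # text for human readers
elem(info, "description", "An earthquake has been detected; shaking is imminent.")
area = ET.SubElement(info, f"{{{CAP_NS}}}area")
elem(area, "areaDesc", "Southern California")

print(ET.tostring(alert, encoding="unicode"))
```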
Filtering and Routing Criteria
• Date/Time
• Geographic Area (polygon, circle, geographic codes)
• Status (Actual, Exercise, System, Test)
• Scope (Public, Restricted, Private)
• Type (Alert, Update, Cancel, Ack, Error)
Filtering and Routing Criteria
• Event Categories (Geo, Met, Safety, Security, Rescue, Fire, Health, Env, Transport, Infra, Other)
• Urgency: timeframe for responsive action (Immediate, Expected, Future, Past, Unknown)
• Severity: level of threat to life or property (Extreme, Severe, Moderate, Minor, Unknown)
• Certainty: probability of occurrence (Very Likely, Likely, Possible, Unlikely, Unknown)
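As a sketch of routing on these coded values, the function below forwards only actual, public alerts at or above a severity threshold; the alert dict layout and the policy itself are assumptions for illustration.

```python
# Sketch of filtering on CAP coded values; the policy is an example only.
SEVERITY_ORDER = ["Unknown", "Minor", "Moderate", "Severe", "Extreme"]

def route_alert(alert, min_severity="Severe"):
    """Forward only actual, public alerts at or above min_severity."""
    severe_enough = (SEVERITY_ORDER.index(alert.get("severity", "Unknown"))
                     >= SEVERITY_ORDER.index(min_severity))
    return (alert.get("status") == "Actual"
            and alert.get("scope") == "Public"
            and severe_enough)

print(route_alert({"status": "Actual", "scope": "Public", "severity": "Extreme"}))  # True
print(route_alert({"status": "Test", "scope": "Public", "severity": "Extreme"}))    # False
```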
Typical CAP-based Alerting System
http://www.weather.gov/alerts
Existing proposals for EEW Testing Agreements
42
We propose that initial CTC testing support science groups first, engineering second:
– Accuracy and timeliness of event-oriented parameters (location, magnitude)
– Accuracy and timeliness of ground motion forecasts (PGV, PSA, intensity) for both site-specific and grid-based forecasts
43
Design of an Experiment
Many CSEP testing principles are applicable to CISN EEW testing. The following definitions need to be made to evaluate forecasts:
– Exact definition of testing area
– Exact definition of a forecast
– Exact definition of input data used in forecasts
– Exact definition of reference observation data
– Measures of success for forecasts
44
Design of an Experiment
Are the 3 CSEP regions valid for EEW?
• Region under test
• Catalog event region
• Buffer to avoid catalog issues
45
Design of an Experiment
Many CSEP testing principles are applicable to CISN EEW testing. The following definitions need to be made to evaluate forecasts:
– Exact definition of testing area
– Exact definition of a forecast
– Exact definition of input data used in forecasts
– Exact definition of reference observation data
– Measures of success for forecasts
46
Design of an Experiment
Caltech τc-Pd RT/AL:
For each triggered station ≤ Dist-max, send one alert of:
– M-est with Talert and Talgorithm
– PGV-est with Talert and Talgorithm
For each M ≥ M-min, send one alert of:
– Number of reporting and non-reporting stations ≤ Dist-max as a function of Talert and Talgorithm

UC Berkeley ElarmS RT and ETH VS:
For each triggered event, send one alert of:
– M-est as a function of Talert
– Loc-est as a function of Talert
– PGA-est at each station ≤ Dist-max without S-wave arrival as a function of Talert
– PGV-est at each station ≤ Dist-max without S-wave arrival as a function of Talert
– Number of reporting and non-reporting stations ≤ Dist-max as a function of Talert
47
Design of an Experiment
Many CSEP testing principles are applicable to CISN EEW testing. The following definitions need to be made to evaluate forecasts:
– Exact definition of testing area
– Exact definition of a forecast
– Exact definition of input data used in forecasts
– Exact definition of reference observation data
– Measures of success for forecasts
48
Design of an Experiment
Input to forecasts is based on CISN real-time data:
– If system performance (e.g. missed events) is to be evaluated, CTC will need the station list in use at any given time (see the sketch below)
– Existing CISN often has problems keeping track of which stations are being used in forecasts
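One way the CTC could record this is with timestamped station-list snapshots, as sketched below; the snapshot scheme is an assumption, not an existing CISN facility.

```python
# Sketch of tracking the station list in effect at any forecast time.
import bisect

class StationListHistory:
    def __init__(self):
        self._times = []   # sorted times at which the station list changed
        self._lists = []   # set of station codes in effect from that time

    def record(self, t, stations):
        """Record the station list in effect starting at time t."""
        i = bisect.bisect(self._times, t)
        self._times.insert(i, t)
        self._lists.insert(i, set(stations))

    def in_use_at(self, t):
        """Return the station list in effect at time t (empty set if none)."""
        i = bisect.bisect(self._times, t) - 1
        return self._lists[i] if i >= 0 else set()
```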
49
Design of an Experiment
Many CSEP testing principles are applicable to CISN EEW testing. The following definitions need to be made to evaluate forecasts:
– Exact definition of testing area
– Exact definition of a forecast
– Exact definition of input data used in forecasts
– Exact definition of reference observation data
– Measures of success for forecasts
50
Design of an Experiment
Two authorized data sources have been integrated into the current CTC:
– ANSS Catalog
• Earthquake catalog
– ShakeMap Shake_RssReader
• Event-based observed ground motions delivered in Stationlist.xml files
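A sketch of pulling observed peak motions out of a ShakeMap Stationlist.xml file is shown below, assuming the typical layout of station elements containing comp elements with vel values; actual files vary by ShakeMap version.

```python
# Sketch of reading observed PGV from Stationlist.xml; layout is assumed.
import xml.etree.ElementTree as ET

def read_station_pgv(path):
    """Return {station code: max PGV over components} from a stationlist file."""
    root = ET.parse(path).getroot()
    pgv = {}
    for sta in root.iter("station"):
        code = sta.get("code")
        vels = [float(v.get("value"))
                for comp in sta.findall("comp")
                for v in comp.findall("vel")
                if v.get("value")]
        if code and vels:
            pgv[code] = max(vels)
    return pgv
```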
51
Design of an Experiment
Summary reports for each M ≥ M-min: the key document is the 3 March 2008 document, which specifies six types of tests.
– Summary 1: Magnitude
– Summary 2: Location
– Summary 3: Ground Motion
– Summary 4: System Performance
– Summary 5: False Triggers
– Summary 6: Missed Triggers
53
Proposed Performance Measures
Design of Testing Experiment
54
Use CSEP Forecast Groups to test different EEW information:
– Event parameters
• Magnitude
• Location
– Site-specific parameters
• Site-specific ground motion intensity
55
Design of an Experiment
Forecast Groups for different EEW forecasting systems:
– Event parameters
• Magnitude
• Location
– Site-specific parameters
• Site-specific ground motion intensity
56
Design of an Experiment
Forecast Group   Forecast Producer                         Example Forecasters                   Forecast Parameters
T1               P-wave detector                           Commercial Alarm                      Peak Site Intensity
T2               On-Site                                   Commercial Alarm, On-Site             Magnitude, Peak Site Intensity
T3               Event Parameter System                    Network System                        Location, Magnitude
T4               Event Parameter System with User Module   Network System feeding User Modules   Location, Magnitude, Grid-based Peak Site Intensities
Summary reports for each M ≥ M-min: the key document is the 3 March 2008 document, which specifies six types of tests.
– Summary 1: Magnitude
– Summary 2: Location
– Summary 3: Ground Motion
– Summary 4: System Performance
– Summary 5: False Triggers
– Summary 6: Missed Triggers
57
Proposed Performance Measures
Summary 1.1: Magnitude X-Y Diagram
Measure of Goodness: Data points fall on diagonal line
Relevant: T2,T3,T4
Drawbacks: Timeliness element not represented
Which estimate in the series of magnitude estimates should be used in the plot?
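A sketch of producing this X-Y diagram with matplotlib; the magnitude pairs below are made-up illustration data.

```python
# Sketch of the Summary 1.1 X-Y diagram; magnitudes are made-up examples.
import matplotlib.pyplot as plt

catalog_mag  = [3.2, 4.1, 4.8, 5.5, 6.0]   # reference (ANSS) magnitudes
estimate_mag = [3.5, 3.9, 5.0, 5.2, 6.3]   # EEW magnitude estimates

fig, ax = plt.subplots()
ax.scatter(catalog_mag, estimate_mag)
ax.plot([3, 7], [3, 7], "k--", label="perfect agreement")  # the diagonal
ax.set_xlabel("Catalog magnitude")
ax.set_ylabel("EEW estimated magnitude")
ax.set_title("Summary 1.1: Magnitude X-Y Diagram")
ax.legend()
plt.show()
```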
58
Experiment Design
Summary 1.2: Initial magnitude error by magnitude
Measure of Goodness: Data points fall on horizontal line
Relevant: T2,T3,T4
Drawbacks: Timeliness element not represented
59
Experiment Design
Summary 1.3: Magnitude accuracy by update
Measure of Goodness: Data points fall on horizontal line
Relevant: T3,T4
Drawbacks: Timeliness element not represented
60
Experiment Design
Summary reports for each M ≥ M-min: the key document is the 3 March 2008 document, which specifies six types of tests.
– Summary 1: Magnitude
– Summary 2: Location
– Summary 3: Ground Motion
– Summary 4: System Performance
– Summary 5: False Triggers
– Summary 6: Missed Triggers
61
Proposed Performance Measures
62
Experiment Design
Summary 2.1: Cumulative Location Errors
Measure of Goodness: Data points fall on vertical zero line
Relevant: T3, T4
Drawbacks: Does not consider magnitude accuracy or timeliness
Summary 2.2: Magnitude and Location error by time after origin
Measure of Goodness: Data points fall on horizontal zero line
Relevant: T3, T4
Drawbacks: Event-specific, not cumulative
63
Experiment Design
Summary reports for each M ≥ M-min: the key document is the 3 March 2008 document, which specifies six types of tests.
– Summary 1: Magnitude
– Summary 2: Location
– Summary 3: Ground Motion
– Summary 4: System Performance
– Summary 5: False Triggers
– Summary 6: Missed Triggers
64
Proposed Performance Measures
65
Experiment Design
Summary 3.1: Intensity Map Comparisons
Measure of Goodness: Forecast map matches observed map
Relevant: T4
Drawbacks: Not a quantitative result
Summary 3.2: Intensity X-Y Diagram
Measure of Goodness: Data points fall on diagonal line
Relevant: T1,T2,T4
Drawbacks: Timeliness element not represented
Which estimate in the series of intensity estimates should be used in the plots for T3?
66
Experiment Design
Summary 3.3: Intensity Ratio by Magnitude
Measure of Goodness: Data points fall on horizontal line
Relevant: T1,T2,T4
Drawbacks: Timeliness element not represented
Which intensity estimate in the series should be used in the plot?
67
Experiment Design
Summary 3.3: Predicted to Observed Intensity Ratio by Distance and Magnitude
Measure of Goodness: Data points fall on horizontal line
Relevant: T1,T2,T4
Drawbacks: Timeliness element not represented
Which intensity estimate in the series should be used in the plot?
68
Summary 3.3: Evaluate Conversion from PGV to Intensity
The group has proposed to evaluate algorithms by comparing intensities, and it provides a formula for converting PGV to intensity.
69
Summary 3.4: Evaluate Conversion from PGV to Intensity
The group has proposed to evaluate algorithms by comparing intensities, and it provides a formula for converting PGV to intensity (see the sketch below).
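The conversion formula itself is not reproduced on this slide, so the sketch below uses placeholder coefficients in a generic linear-in-log(PGV) relation; substitute the group's proposed formula before use.

```python
# Generic PGV-to-intensity sketch; A and B are placeholders, not the
# group's proposed coefficients.
import math

A, B = 3.0, 2.0  # placeholder coefficients

def pgv_to_intensity(pgv_cm_s):
    """Generic relation I = A*log10(PGV) + B, for PGV in cm/s."""
    return A * math.log10(pgv_cm_s) + B

# Compare a forecast and an observation on the intensity scale:
print(pgv_to_intensity(10.0) - pgv_to_intensity(8.0))
```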
70
71
Experiment Design
Summary 3.5: Statistical Error Distribution for Magnitude and Intensity
Measure of Goodness: No missed events or false alarms in testing area
Relevant: T4
Drawbacks:
72
Experiment Design
Summary 3.6: Mean time to first location or intensity estimate (small blue plot)
Measure of Goodness: Peak of measures at zero
Relevant: T1,T2,T3,T4
Drawbacks: Cumulative and does not involve accuracy of estimates
Summary reports for each M ≥ M-min: the key document is the 3 March 2008 document, which specifies six types of tests.
– Summary 1: Magnitude
– Summary 2: Location
– Summary 3: Ground Motion
– Summary 4: System Performance
– Summary 5: False Triggers
– Summary 6: Missed Triggers
73
Proposed Performance Measures
74
Experiment Design
No examples yet defined for the System Performance Summary:
Summary 4.1: Ratio of reporting versus non-reporting stations
Summary reports for each M ≥ M-min: the key document is the 3 March 2008 document, which specifies six types of tests.
– Summary 1: Magnitude
– Summary 2: Location
– Summary 3: Ground Motion
– Summary 4: System Performance
– Summary 5: False Triggers
– Summary 6: Missed Triggers
75
Proposed Performance Measures
76
Experiment Design
Summary 5.1: Missed Event and False Alarm Map
Measure of Goodness: No missed events or false alarms in testing area
Relevant: T3, T4
Drawbacks: Must develop definitions for missed events and false alarms (one possibility is sketched below); does not reflect timeliness
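One possible pair of definitions is sketched below: an alert counts as matching a catalog event if issued within a fixed window of the origin time; unmatched alerts are false alarms and unmatched events are missed. The window and matching rule are assumptions to be agreed upon.

```python
# Sketch of hypothetical missed-event / false-alarm definitions.
def classify(catalog_origins, alert_times, window=60.0):
    """Match alerts to catalog events by origin-time proximity."""
    matched = set()
    false_alarms = []
    for t_alert in alert_times:
        hits = [t0 for t0 in catalog_origins if abs(t_alert - t0) <= window]
        if hits:
            matched.update(hits)
        else:
            false_alarms.append(t_alert)
    missed = [t0 for t0 in catalog_origins if t0 not in matched]
    return missed, false_alarms

# One matched event, one missed event, one false alarm:
print(classify([100.0, 500.0], [110.0, 900.0]))  # ([500.0], [900.0])
```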
77
Experiment Design
Summary 5.2: Missed Event and False Alarm Map
Measure of Goodness: No missed events or false alarms in testing area
Relevant: T3, T4
Drawbacks: Must develop definitions for missed events and false alarms; does not reflect timeliness
Summary reports for each M ≥ M-min: the key document is the 3 March 2008 document, which specifies six types of tests.
– Summary 1: Magnitude
– Summary 2: Location
– Summary 3: Ground Motion
– Summary 4: System Performance
– Summary 5: False Triggers
– Summary 6: Missed Triggers
78
Proposed Performance Measures
79
Experiment Design
Summary 6.1: Missed Event Map
Measure of Goodness: No missed events in testing region
Relevant: T3, T4
Drawbacks: Must define missed event. Does not indicate timeliness
End
80
SCEC: An NSF + USGS Research Center
Application of the CSEP Testing Approach to Earthquake Early Warning and other Seismological Forecasts
Philip Maechling
Information Technology Architect
Southern California Earthquake Center (SCEC)
24 September 2009
Premise: EEW In California Is Imminent
EEW in Use in Japan - JMA Issued Ground Motion Alerts
EEW in Use in Japan – Emerging commercial market for ground motion alarms
Testing of Earthquake Forecasts and Earthquake Early Warning is often Retrospective, without Comparison to Other Approaches
Can we Apply the CSEP Testing Approach to other Seismological Forecasts?
CISN and SCEC recently received funding from USGS to develop and evaluate prototype network-based EEW:
CISN Earthquake Early Warning (EEW) Testing Center, which evaluates the system and seismological performance of the CISN real-time earthquake monitoring system.
Discussions at SCEC Annual Meeting about Needed Test Center:
Ground Motion Modeling Testing Center, which verifies and validates 3D wave propagation simulations by comparing observational data against synthetic seismograms.
Testing Center System Requirements
The goals of both an EEW and an Earthquake Forecast Testing Center (as outlined by Schorlemmer and Gerstenberger (2007)) describe what is needed to build trust in results:
• Controlled environment
• Transparency
• Comparability
• Reproducibility
Applying CSEP Style Testing To Other Seismological Forecasts
The CSEP collaboration has worked to define how short-term earthquake forecast models can produce comparable results:
– Define standard problems
– Define a standard forecast definition
– Define standard regions under test
– Define standard evaluation criteria
– Testing performed independently of forecast developers
The CSEP testing approach helps build acceptance and trust in forecast evaluations through independent and transparent testing.
We believe that other seismological forecasting groups can benefit from the CSEP testing approach, including:
(a) Earthquake Early Warning (EEW) forecasts of final magnitude or peak ground intensity.
(b) Computer modeling of 3D earthquake wave propagation which produces synthetic seismograms.
SCEC3 Organization
[Organization chart: SCEC Director; Board of Directors; Planning Committee; External Advisory Council; Center Administration; Information Architect. Disciplinary Committees: Earthquake Geology, Tectonic Geodesy, Seismology. Focus Groups: Fault & Rupture Mechanics, Earthquake Forecasting & Predictability, Lithospheric Architecture & Dynamics, Crustal Deformation Modeling, Unified Structural Representation, Seismic Hazard & Risk Analysis, Ground Motion Prediction, Earthquake Early Warning. CEO Activities: Public Outreach, K-12 & Informal Education, USEIT/SURE Intern Programs, Knowledge Transfer. Special Projects: PetaShake, PetaSHA-1, PetaSHA-2, Broadband Platform, CSEP, ACCESS Forum. Highlighted: PetaShake, PetaSHA-1, PetaSHA-2, Broadband Platform, Earthquake Early Warning, CSEP]
California Integrated Seismic Network (CISN) Earthquake Early Warning Evaluation
• Funded by USGS NEHRP: $120K over 3 years (ending 2012)
• Science thrust areas:
– CISN development of a single integrated real-time earthquake alerting system
– Evaluation of system performance
• Computer science objectives:
– Unified CISN EEW system
– Independent testing and analysis
Testing of EEW and STEF Uses Similar Science Techniques
Comparison between algorithms encourages scientists to produce results in a common and comparable format:
• CSEP:
– e.g. RELM testing region defined for testing
– CSEP Standard Grid and forecast statement
– Standard evaluation test (N,L,R tests)
• EEW:
– PGA or PGV converted to Intensity for comparison
– Defined evaluation tests (CISN EEW document March 2008)
Evaluation of Earthquake Predictions
[Workflow diagram: Earthquake Catalog -> Retrieve Data -> Filter Catalog -> Filtered Earthquake Catalog; Earthquake Forecast -> Forecast EQs -> Evaluate Forecast]
Evaluation of CSEP Forecasts
[Workflow diagram: within the CSEP Collaboratory, Earthquake Catalog -> Retrieve Data -> Filter Catalog -> Filtered Earthquake Catalog feeds forecast evaluation]
CISN EEW Performance Summary Processing
[Workflow diagram: the UCB/ElarmS EEW data source and the CIT/OnSite EEW data source send EEW trigger reports, and the ANSS earthquake catalog supplies observed data, to the CISN EEW Testing Center and web site, which loads the reports (observed ANSS data, CISN EEW trigger data) and produces web summaries]
CSEP evaluation of two one-day forecasts, STEP and ETAS, using the R (log-likelihood ratio) test
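For context, the quantity behind this style of comparison is a log-likelihood ratio between two gridded rate forecasts given the observed catalog. The sketch below shows the Poisson-likelihood core on made-up bins; the full CSEP R-test additionally assesses significance by simulation.

```python
# Log-likelihood ratio core of an R-test style comparison; data are examples.
import math

def log_likelihood(rates, counts):
    """Sum of log Poisson probabilities over forecast bins."""
    return sum(-lam + n * math.log(lam) - math.lgamma(n + 1)
               for lam, n in zip(rates, counts))

model_a  = [0.2, 0.05, 0.6, 0.1]   # expected event counts per bin (example)
model_b  = [0.1, 0.10, 0.4, 0.3]
observed = [0, 0, 1, 0]            # observed counts per bin (example)

R = log_likelihood(model_a, observed) - log_likelihood(model_b, observed)
print(f"R = {R:.3f}  (R > 0 favors model A on this catalog)")
```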
EEW Testing Center Provides On-going Performance Evaluation
Can CSEP Be Adapted to Support Ground Motion Synthetics?
Synthetic Seismograms are in use by engineering communities:
• Development of hybrid attenuation relationships
• Seismograms for studying Tall Building Response to Strong Ground Motions
• Probabilistic Seismic Hazard Maps using 3D wave propagation as Ground Motion Prediction Equation (GMPE)
Fig. 11. IM SA3.0 at POE 2% in 50 Years. Base is UCERF2 and average of 4 attenuation relationships
Fig. 11. IM SA3.0 at POE 2% in 50 Years. CyberShake 1.0 Map based on 224 Hazards curves at 10km spacing
Fig. 11. IM SA3.0 at POE 2% in 50 Years. Difference between base map and CyberShake map, showing an increase of hazard in the LA Basin and in Riverside.
Fig. 6. Comparable Vs profiles across the Los Angeles Basin are shown for CVM4.0 (top) and CVM-H (bottom). The differences between the CVM4.0 and CVM-H velocity models contribute to uncertainties in high-frequency simulations. The CME collaboration is working with both velocity models to determine which produces the best match to observations, or whether a new combined or merged model will be required for 2.0 Hz and higher frequency deterministic wave propagation simulations for Southern California.
Dalguer et al (2008) Implications of the ShakeOut Source Description for Rupture Complexity and Near-Source Ground Motion
Ensemble Dynamic Rupture ShakeOut Simulations
An ensemble of dynamic ruptures for the ShakeOut scenario produced a set of kinematic source descriptions called the ShakeOut-D ruptures.
Fig. 7. Validating regional-scale wave propagation simulation results against observed data may require thousands of comparisons between observed and simulated data. The CME has developed an initial implementation of a Goodness of Fit (GOF) measurement system and is applying these new tools to help evaluate the 2 Hz Chino Hills simulations. In this GOF scale, 100 is a perfect fit. The maps show how GOF values vary geographically for AWP-Olsen simulations of the Chino Hills M5.4 event with two different SCEC Community Velocity Models, CVM4.0 (left) and CVM-H 5.7 (right).
Assertions for Discussion
1. The broad impact of seismological technologies (EEW, STEF, GMPE) is great enough to warrant significant effort for evaluation.
2. Independent evaluation of STEF, EEW, and GMPE provides a valuable service to agencies including CISN, USGS, CEPEC, NEPEC, and others.
3. Prospective testing must be done before techniques will be accepted.
4. Similarities between problems lead to similar scientific techniques.
5. Similarities between problems lead to similar technology approaches and potentially common infrastructure.
6. "Neutral" third-party testing has significant benefits for the science groups involved in forecasting.
7. CSEP infrastructure can be adapted for use in CISN EEW Testing Centers.
8. A GMPE (Ground Motion Prediction Equation) Testing Center using techniques similar to CSEP would have value for both seismologists and building engineers.