
11/23/2001

ZMAP

A TOOL FOR ANALYSES OF SEISMICITY PATTERNS

TYPICAL APPLICATIONS AND USES: A COOKBOOK

MAX WYSS, STEFAN WIEMER & RAMÓN ZÚÑIGA


Table of Contents

INTRODUCTION

CHAPTER I
What's going on with this earthquake catalog? Which parts are useful? What scientific problems can be tackled?

CHAPTER II
Are there serious problems with heterogeneous reporting in a catalog? What is the starting time of the high-quality data?

CHAPTER III
Measuring Changes of Seismicity Rate

CHAPTER IV
Measuring Variations in b-value

CHAPTER V
Stress Tensor Inversions
   Introduction
   Data Format
   Plotting focal mechanism data on a map
   Inverting for the best fitting stress tensor
   Inverting on a grid
   Plotting stress results on top of topography
   Using Gephart's code
   The cumulative misfit method
   References

CHAPTER VI
Tips and tricks for making nice figures
   Editing ZMAP graphs
   Exporting figures from ZMAP
   Working with interpolated color maps


CHAPTER I

What's going on with this earthquake catalog? Which parts are useful? What scientific problems can be tackled?

Step 1: Read the catalog of interest into ZMAP. The Alaska catalog used in this analysis can be downloaded in *.mat format from the ZMAP resources page. First click on load *.mat and go in the menu window, and select the mat-file containing your catalog.

Review the limits of the basic catalog parameters in the general parameters window

Figure 1.1: General Parameters window 

Notice at a glance: (1) This catalog contains 78028 events, (2) covers the period 1898.4

to 1999.5, (3) contains a strange flag in the field of magnitudes for some events (-999),

(4) the largest shock has M=8.7, and (5) the depth ranges from –3 to 600 km.

First decision: Decide that you are not interested in earthquakes whose M is not known. Therefore, replace the value for Minimum Magnitude with 0.1 by typing this into the

yellow window spot. Then click on Go.

The epicenter map appears (Figure 1.2), sometimes displaying scales that look strange because they are not taking into account that X and Y are coordinates on a globe. For a nicer map with more appropriate scaling, try the Tools -> Plot map using m-map option from the ZTools menu of the seismicity map (Figure 1.2). The large events (within 0.2 units of the largest one) are labeled, because you left the default option in the button labeled Plot Big Events with M>.


 

Figure 1.2: Seismicity map of the entire catalog. Top: normal ZMAP display. Bottom: plotted in Lambert projection using M-Map, plus topography.

Rough selection of the area of interest: Based on your experience, you know that the coverage in the Aleutians is much inferior to that of mainland Alaska, so you decide to concentrate on the seismicity in central and southern Alaska. Click on the button Select in the Seismicity Window and choose your method of selection. Select EQ inside polygon may be the most convenient. Cross hairs appear. Click on the corners of the polygon you like with the left mouse button, and with the right one for the last point. The Cumulative Number window will open (Figure 1.3) and display the selected events as a function of time.


Rough selection of the period of interest: It is evident from Figure 1.3a that only very large events are reported before the mid 1960s. Because this is not the subject of your quest and you want to concentrate on the small magnitude events, you delete all data before the reporting increase in the 1980s by selecting Cuts in Time Cursor in the ZTools button of the Cumulative Number window and clicking with the appearing crosshairs at the beginning and end of the period you want. This re-plots the cumulative number plot (Figure 1.3b) for a period during which the rate of earthquakes reported was approximately constant with time. This suggests that the reporting may have been homogeneous from 1989 on. Because this is the type of catalog you want, you now click on Keep as newcat, which re-plots the epicenter map as seen in Figure 1.4.

Figure 1.3: Cumulative number of the selected earthquakes as a function of time.

Figure 1.4:  Epicenters after rough selection of area and period.


Save new mat-file: At this point it is time to save the culled catalog in a mat-file by

clicking on the Save selected Catalog (mat) button in the Catalog menu of the seismicity map. It is a good idea, but not necessary, to now reload this new catalog.

Inspecting the catalog: From the ZTools menu select Histograms, then select Depth. The resulting Figure 1.5 shows that there must be an erroneously deep event in the data and that there exists a minimum at 35 km depth, which might be the separation between the crustal and the intraslab activity.

Figure 1.5: Histogram as a function of depth.

Next, select Hour of Day from the Histogram button in the ZTools menu. The resulting Figure 1.6 shows that the data are not contaminated by explosions (or at least not much), because the reporting is uniform through day and night.

Figure 1.6: The absence of smokestacks at certain hours of the day suggests there are few or no explosions.

Finally, check out the distribution as a function of magnitude by plotting the appropriate histogram. From Figure 1.7 one learns that magnitudes near zero are sometimes reported, but that the maximum number is near M2, suggesting that the magnitude of completeness is generally larger than M2, but that it may be near M2 in some locations.

Figure 1.7: Frequency as a function of magnitude.

An alternative presentation in the cumulative form can be obtained by first plotting the cumulative number as a function of time (the button to do this is found in ZTools of the seismicity map), and then selecting the button Mc and b-value estimate (with the proper sub-choice). An automatic estimate yields Mc=2.0 for the overall catalog (Figure 1.8).

Figure 1.8: Frequency magnitude distribution of the overall catalog. Plotted are both the cumulative (squares) and non-cumulative form (triangles).
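If you want to check such an automatic estimate outside the GUI, a minimal sketch of one common estimator is given below: the maximum-curvature style approach, which simply takes the magnitude bin with the largest non-cumulative count. This is shown only as an assumption for illustration; ZMAP offers several Mc estimators and its internal implementation may differ, and the toy magnitudes below are made up.

  % Minimal sketch of a maximum-curvature style Mc estimate (assumption:
  % one of several possible estimators; ZMAP's internal code may differ).
  dM      = 0.1;                                           % magnitude bin width
  mag     = [0.8 1.2 1.5 1.9 2.0 2.0 2.1 2.1 2.2 2.3 2.5 2.8 3.0 3.4];  % toy data
  centers = 0:dM:4;                                        % magnitude bin centers
  counts  = hist(mag, centers);                            % non-cumulative FMD
  [cmax, imax] = max(counts);
  Mc = centers(imax)                                       % bin with the largest count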


Narrowing the target of investigation: At this point one might decide to study just the shallow seismicity. Because of the minimum of the numbers at 35 km depth (Figure 1.5), this offers itself as the natural cut. The bulk Mc for the shallow events can be estimated by the same steps as outlined above. It turns out to be 1.9. Therefore, you may want to map the Mc for the shallow seismicity with a catalog without the events below M1.5, because we know that not enough parts of the catalog can be complete at that level. The catalog for the period and area with depths shallower than 35 km and M>1.4 contains 15078 events. The map of Mc (Figure 1.9) is obtained by selecting Calculating Mc and b-value Map from the Mapping b-values menu in the ZTools menu of the seismicity map. The cross hairs that appear are used to click the desired polygon apexes with the left mouse button, terminating the process by clicking the right mouse button. Once the computation is completed, you can save the resulting grid (which also contains the catalog used) for reloading later on. Pressing Cancel will just move on without saving.

Figure 1.9: Map of magnitude of complete reporting.

It might be fun to interpret the b-value map that is presented first after the calculation, but first we should examine the Mc map. We call for it by selecting mag of completeness map in the menu of Maps in the seismicity window (Figure 1.9). Here the symbols for the epicenters are selected as none, such that they do not interfere with the information on Mc. In it, we see that the offshore catalog is inferior since Mc>3.5.

Before we accept the Mc map, it is a good idea to sample a number of locations to see if we agree with the algorithm's choice of Mc by visual inspection of the FMD plots.


For this quality control, we open the select menu in the seismicity map, click on select in circle and place the cross hairs into the red zone offshore, where we click, to learn if the resolution really is as bad as the algorithm shows. Then we repeat the selection process, only this time we Select Earthquakes in Circle (Overlay existing plot), such that we can click on a deep blue area in the interior of Alaska and compare its FMD with the one we already have. The two FMDs are indeed vastly different, and we see that the algorithm has defined Mc correctly in both cases (Figure 1.10).

Figure 1.10: Comparison of frequency magnitude distributions for offshore (squares) and central (dots) Alaska.

After accepting the Mc map, we limit the study area further to the part of the catalog that is of high quality; let's use Mc=2.2 for the boundary. Selecting the area by the same method as before, we create a new and final catalog for study. The polygon we just clicked to select the final area can be saved by typing into the MATLAB command window "save filename xy -ascii".
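A minimal sketch of how such a saved polygon could later be reloaded and applied to a catalog from the MATLAB command line is given below. The file name myarea.txt is hypothetical, and the assumption that the catalog matrix holds longitude in column 1 and latitude in column 2 (the usual ZMAP column convention) is stated in the comments.

  % Minimal sketch (assumptions: the polygon was saved with "save myarea.txt xy -ascii",
  % and the catalog matrix has longitude in column 1 and latitude in column 2).
  xy  = load('myarea.txt', '-ascii');                 % n-by-2 polygon [lon lat]
  epi = [-150.2 61.3; -147.9 63.1];                   % toy lon/lat pairs; in ZMAP this
                                                      % would be the catalog variable
  in     = inpolygon(epi(:,1), epi(:,2), xy(:,1), xy(:,2));
  newcat = epi(in, :);                                % events inside the polygon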


Figure 1.11: Resolution map with the scale in km set from 5 to 100, such that radii larger than 100 km (they reach 231 km), which are of no interest, are lumped together by setting the scale limits in the Display menu of the b-value map.

Parameters for Analysis: Now that we have an Mc map, we might as well check the resolution map (Figure 1.11) by selecting it from the Maps menu. From this map we can learn what the range of radii is with the selected N=100 events. Of course, this is still with Mmin=1.5, which means that in many samples there are events which are not used in the estimate of b, but it provides an approximate assessment of the radius we may choose if we decide to calculate a b-value map with constant radius, from which a local recurrence time map or, equivalently, a local probability map can be constructed. One can see that to cover the core of Alaska with a probability map one would have to select R=40 km.

For each map that we select there is a histogram option available (from the Maps menu). For the radii mapped in Figure 1.11, the distribution is shown in Figure 1.12. One sees that 35 km is the most common radius.

Figure 1.12: Histogram of radii in Figure 1.11.

A further means of quality control is offered by the standard error map for the b-value estimates (Figure 1.13). This map allows the investigator to select samples from areas where problems may exist with straight-line fits of the magnitude frequency distribution.

Figure 1.13: Map of the standard error of the b-value estimate.

Often, these errors are due to the presence of a single large event that does not fit the rest of the distribution, as in the case of the red spot near 63.3°/-145.8° (Figure 1.13). But sometimes they flag volumes with families of events of approximately constant size.

Figure 1.14: Frequency magnitude distribution from 63.3°/-145.8°, where a poor fit is flagged in the error map (Figure 1.13).


Articles in which tools discussed in this chapter were used:

Zuniga, R., and M. Wyss, Inadvertent changes in magnitude reported in earthquake catalogs: Influence on b-value estimates, Bull. Seismol. Soc. Am., 85, 1858-1866, 1995.

Zuniga, F.R., and S. Wiemer, Seismicity patterns: are they always related to natural causes?, Pageoph, 155, 713-726, 1999.

Wiemer, S., and M. Baer, Mapping and removing quarry blast events from seismicity catalogs, Bull. Seismol. Soc. Am., 90, 525-530, 2000.

Wiemer, S., and M. Wyss, Minimum magnitude of complete reporting in earthquake catalogs: examples from Alaska, the Western United States, and Japan, Bull. Seismol. Soc. Am., 90, 859-869, 2000.


CHAPTER II

Are there serious problems with heterogeneous reporting in a catalog? What is the starting time of the high-quality data?

Work done already: We assume that you have acquainted yourself with the general properties of the catalog. You deleted the hypocenters outside the periphery of the network and those of erroneously large depth, as well as the M=0 events, if they are meaningless, and the explosions. For this case study, we use the seismicity on the San Andreas fault near Parkfield.

Preliminary Declustering:  If you want to evaluate whether or not the catalog contains

rate changes that are best interpreted as artificial, it may be that aftershocks and swarms

get in the way. If you feel that is the case, please decluster, leaving all earthquakes with meaningful magnitudes in the data. The earthquakes smaller than Mc contain important

information on operational changes in the network.

Running GENAS: Once you have loaded the catalog of interest, select RunGenas from the ZTools menu in the seismicity map window. Enter the desired values into the Genas Control Panel (Figure 2.1).

Figure 2.1: Genas Control Panel. Select the minimum and maximum magnitudes such that you calculate

rate changes for magnitude bins that have enough earthquakes in them to warrant an analysis. Base your

judgment on the distribution you saw in the histogram of magnitudes. It is not worthwhile skimping on the increment.

Start the calculation by activating the button Genas. Habermann's algorithm now searches

for significant breaks in slope, starting from the end of the data, and for all magnitude bins for M<Mi and M>Mi. The purpose of separately investigating magnitude bins is to

isolate the magnitude band in which individual reporting changes occur.


 

Figure 2.2:  Genas1 window. The cumulative numbers of earthquakes with M>Mi and with M<Mi are

shown as a function of time. Vertical bars mark changes in rate judged significant by the algorithm.

The output of Genas consists of two plots in the windows Genas1 and Genas2 (Figures 2.2 and 2.3).

Figure 2.3: Genas2 window. (A) Times of significant rate changes (decreases in red, increases in blue) as a function of time. This window contains a button at the top right labeled BW-display, which offers three options, of which the plus-minus display, if activated, produces Figure 2.3B. The button Save out generates an ASCII file containing the dates of the rate changes shown in the figure. (B) Same as (A) with symbols proportional to the significance of the rate change.

For 1970 to 1972, Figures 2.2 and 2.3 show increased reporting in magnitude bands for small and large events. A decrease of larger events is noticeable in 1978. After this time, only small events show changes of reporting. A strong increase of reporting of small events is indicated in 1980, a mild decrease occurred in 1985, followed by another increase in 1988, and the last change is a decrease in 1995.5.

First decision: The catalog was in a phase of buildup till 1972. This part is inferior to what follows. We will not use it in further analysis.

What happened in 1995? Figure 2.3 shows a clean period (without changes) between 1988 and 1995.5, and a clean period from there to the end.


The compare two rates (no fit) button is found in the cumulative number window, in the menu offered by the ZTools button. Clicking on it opens the Time selection window shown in Figure 2.4. In this figure we type in the limits of the periods we wish to compare, in this case the limits of the clean periods before and after the change in 1995.

Figure 2.4: Time selection window. Limits of periods in which to compare rate changes can be selected by typing in the times, or by cursor on the cumulative number window.

The result of the comparison is presented in two windows. The compare two rates window (Figure 2.5A, 2.5B and 2.5C) compares the earlier and later data in a cumulative and logarithmic-scale plot, a non-cumulative plot and a magnitude signature (Habermann, 1988), each as a function of magnitude. The frequency-magnitude distribution window (Figure 2.5D) shows the same change in the usual FMD format, and not normalized.


 

Figure 2.5:  Comparison of the rates as a function of magnitude for two periods, which are printed at the

top. The rate change took place in 1995.5 along the Parkfield segment of the San Andreas fault (35.3° to 36.4°). The numbers are normalized by the duration of the periods. (A) Frequency-magnitude curve. (B) Non-cumulative numbers of events as a function of magnitude. (C) Magnitude signature. (D) The rate

comparison before and after 1995.5 in the usual FMD format. The three lines below the graph give the

results of fits by two methods to the FMD of the first period (black) and the result by the WLS method for

the second period. Inside the figure, at the top, appears the summary of the data, numbers of events used,

and b-values found. Also, the probability, p, that the two sets are drawn from an indistinguishable common set is given (Utsu, 1992).

Although the magnitude signature is an informative plot for the experienced analyst, the non-cumulative FMD (Figure 2.5B) is probably the most helpful to understand the rate change. It shows that the two periods experienced approximately the same rate of events in their respective top-reporting magnitude bands, only these bands were shifted. Before 1995, the maximum number of events was reported at Mmax(pre)=1.0, afterward at Mmax(post)=1.2. Many more events were reported for 0.7≤M≤0.9 before 1995.5 than afterward. In contrast, the rate in the magnitude band 1.2≤M≤1.6 was substantially lower before than afterward. This type of opposite behavior for the smaller and the larger events is demonstrated by positive and negative peaks on opposite sides of the magnitude signature plot (Figure 2.5C).

That nature would produce fewer larger events, but balance this by more smaller events after a certain date without a major tectonic event, is not likely. Thus, we propose that the rate change found by GENAS in Figure 2.4, and analyzed in Figure 2.5, should be interpreted as a magnitude shift (e.g. Wyss, 1991). If we look at it in the presentation of Figure 2.5D, we see that it is also a mild magnitude stretch (e.g. Zuniga and Wiemer, 1999). This result is not good news, since it happened in the Parkfield catalog at a recent date, introducing an obstacle for rate analyses at Parkfield. The amount of shift in 1995 appears to be close to +0.2 units, but it would be necessary to determine the optimal value by a quantitative method. This can be accomplished by selecting "Compare two rates (fit)" from the "ZTools" pulldown menu of the cumulative number window. This starts a comparison of the seismicity rates in the two time periods which this time includes the fitting of possible magnitude shifts, or stretches of the magnitude scale, by means of synthetic b-value plots. Also provided are estimates for the b-value, minimum magnitude of completeness, mean rate for each period, and z-test values comparing the rates of the two periods. For a more detailed description of magnitude stretches, see Zuniga and Wyss (1995).

After selecting "Compare-fit" in the cumulative number plot window, you will be

 prompted for the limits of the time periods under investigation, just as for the “Compare

two rates (no-fit)" option. A frequency-magnitude relation (normalized to a year) for each interval is plotted with different symbols. The first interval is labeled as "background",

while the subsequent interval is the "foreground". You should select two magnitude end-

 points for each curve to obtain an estimated b-value for each interval; these have to be


chosen on the basis of the linearity of the observed curves. Once this selection has been

 performed, the routine attempts to fit the background to the foreground by assuming two

 possibilities:

(1) The background is first adjusted to fit the foreground by assuming a simple magnitude

shift. The shift is estimated from the separation between the two curves and by using the minimum magnitude at which the curve departs from a linear fit by more than one

standard deviation.

(2) The background, Mback, is matched to the foreground, Mfore, by assuming a linear

magnitude transformation (stretch or compression of magnitude scale) of the type

( Zuniga and Wyss, 1995):

Mfore = c * Mback + dM

where c and dM are constants.
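As a minimal numerical sketch, this transformation can be applied to a set of background magnitudes as shown below. The values of c and dM are hypothetical illustrations, not the values found for the Parkfield example.

  % Minimal sketch of the magnitude transformation Mfore = c*Mback + dM
  % (Zuniga and Wyss, 1995). The constants below are hypothetical examples.
  c     = 0.82;                          % stretch factor (c = 1 means a pure shift)
  dM    = 0.3;                           % magnitude shift
  Mback = [0.5 1.0 1.5 2.0 2.5 3.0];     % background magnitudes (synthetic)
  Mfore_syn = c * Mback + dM;            % corrected background ("synthetic foreground")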

Numerical results are given in a window which allows the possibility of interactively changing any of the shift, stretch or rate factor parameters (Figure 2.6). Results are also graphically displayed in a separate window which shows:

a) the frequency-magnitude distribution of the foreground and the frequency-magnitude distribution of the corrected background, using the values for c and dM from the latest run;

b) non-cumulative histograms for both foreground and corrected background;

c) magnitude signatures (if needed).

Figure 2.6: Results of compare-fit. The values given include the best fit for two separate possibilities: a simple magnitude shift (ideally one would like to work with this value) and a magnitude stretch. In this case, the routine found that a simple shift of +0.1 units applied to the background would best fit the foreground, while if a stretch is chosen, one needs to apply a shift of +0.3 and a multiplicative factor of 0.82. Notice that in the selection panels, a simple magnitude shift of +0.1 is input, which resulted in the plot shown in Figure 2.7A.


Figure 2.7. Comparison of rates as a function of magnitude for two time periods under the Compare-Fit

option. The three panels correspond to the Frequency-Magnitude curves, normalized, the non-cumulative

histogram as a function of magnitude, and the magnitude signatures for the original two periods (circles)

and for the synthetic foreground as compared to the original background (crosses). (A) Results of applying a simple magnitude shift of +0.1 units, without any rate change. (B) Same as in (A) but including a rate change of 0.78 (equivalent to the -22% change given in Figure 2.5).

The magnitude signature plot is useful for assessing the goodness of the fit from the latest run (bottom panel of Figures 2.7A and B). It shows the original magnitude signature, which results from comparing the two time periods, and a modeled signature obtained from comparing the synthetic foreground (i.e. the corrected background) to the original background. A close match between the two signatures indicates that we have been able to model the observed behavior by applying the given corrections to the background. In the example, we can see that the shape of the signature is correctly modeled by applying a simple magnitude shift (Figure 2.7A), while a rate decrease is still necessary to model the position of the signature (Figure 2.7B).

The Compare-fit option is also useful in case one needs to determine the relation between two different magnitude estimations for the same period and area. For this case, you would need to first concatenate the two data sets ("Combine two catalogs" option in the "Catalogs" pulldown menu from the Seismicity Map window) and treat them as separate time periods.


Another date of interest is 1980, because at approximately this time improvements in analysis techniques took place in all of California. The period before it is not clean (Figure 2.3), thus we compare the rate in the period 1972-1978 to that in 1980-1985 (the end of the clean period following the 1980 change, Figure 2.3). The rate comparison of these periods shows an even more dramatic magnitude shift (Figure 2.7) of at least dM=-0.5 units. In this case, the shift was accompanied by an increase in reporting of small earthquakes. These two phenomena, magnitude shifts and increased reporting of small events, are often seen at the same time, because a single change in the operating procedure generated both. The conclusion is that the earthquake catalog for Parkfield can hardly be used for seismicity rate studies. The fact that the two FMD curves show the same slope before and after the disastrous changes in 1980 (Figure 2.7D) suggests that the catalog can still be used for b-value studies.

Figure 2.7:  Comparison of the rates as a function of magnitude for two periods, which are printed at the

top. The rate change took place in 1978/80 along the Parkfield segment of the San Andreas fault. Details

same as in Figure 2.5. The magnitude shift was at least dM=-0.5 units.

Another method to evaluate the homogeneity of reporting as a function of time is to inspect cumulative number curves. In a network where no magnitude shifts have taken place, but more small earthquakes are reported in recent years because of improvements in the operations, the key to selecting the widest magnitude band which has been reported homogeneously is to define the smallest magnitude for which constant numbers have been reported. This assumes that in a rather large area the production of events is stationary, on average. Such a case is shown in the comparison for the Parkfield network (Figure 2.8). The improvement of reporting is seen to be restricted to M<1.0. Thus, a


cumulative number curve for M>1.0 events is approximately straight, whereas the curve for all magnitudes shows a kink upward at the time of the improvement (Figure 2.8).

Figure 2.8: Comparison of cumulative number of events for earthquakes M >= 1.0 (blue) and M < 1.0 (red). The legend was added manually.

It could be a mistake to rely solely on cumulative number curves for evaluating homogeneity, because in the Parkfield catalog the selection of an intermediate magnitude for cutoff (M=1.2, in this case) results in a cumulative curve with relatively constant slope, whereas the plots for M>1.5 and for M<1.6 show the telltale opposite kinks in 1980 characteristic for a magnitude shift (Figure 2.7).

Articles in which tools discussed in this chapter were used:

Habermann, R.E., Man-made changes of seismicity rates, Bull. Seismol. Soc. Am., 77, 141-159, 1987.

Wiemer, S., and M. Baer, Mapping and removing quarry blast events from seismicity catalogs, Bull. Seismol. Soc. Am., 90, 525-530, 2000.

Wiemer, S., and M. Wyss, Minimum magnitude of complete reporting in earthquake catalogs: examples from Alaska, the Western United States, and Japan, Bull. Seismol. Soc. Am., 90, 859-869, 2000.

Zuniga, R., and M. Wyss, Inadvertent changes in magnitude reported in earthquake catalogs: Influence on b-value estimates, Bull. Seismol. Soc. Am., 85, 1858-1866, 1995.

Zuniga, F.R., and S. Wiemer, Seismicity patterns: are they always related to natural causes?, Pageoph, 155, 713-726, 1999.


CHAPTER III

Measuring Changes of Seismicity Rate

Precondition:  You have already selected the part of an earthquake catalog that is

reasonably homogeneous in space, time and magnitude band. All inadequate parts of the catalog and explosions have been removed.

Measuring a Local Rate Change: Suppose you have selected earthquakes from some

volume, and, displaying it in a cumulative number curve, you notice a change in slope

(Figure 3.1a), which you want to measure.

Figure 3.1: (a) Cumulative number of earthquakes as a function of time, obtained by setting N=200 in the window that appears if one chooses select earthquakes in circle (menu) in the pull down menu of the select button in the seismicity map window. (b) Cumulative number of earthquakes with the AS(t) function, for which the Z-scale is on the right. The maximum of this function defines the time of maximum contrast between the rate before and after it.

First: One might want to define the time of greatest change quantitatively (especially in a case of change less obvious than the one in Figure 3.1). Open the ZTools pull-down menu in the cumulative number window, and point to the option Rate changes (z-values). Of the three options offered, choose AS(t) function. This will calculate the red curve in Figure 3.1b, which represents the standard deviate Z, comparing the rate in the two parts of the period before and after the point of division, which moves from (t0+tW) to (te-tW). Here t0 is the beginning, te the end, and tW, the window at the ends, can be adjusted by typing the desired value into the yellow button that appears in the figure. The maximal Z-value, and the time at which it is attained, is written in the top left corner of Figure 3.1b. (Alternatively, one could estimate the time of greatest change by eye, using the cursor. For this, one opens the ZTools menu, selects get coordinates with cursor, moves the cursor to the point of change, and, after clicking the mouse button, the coordinates appear in the MATLAB control window.)
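The standard deviate Z compared here can also be written down explicitly. Below is a minimal sketch, assuming the classical z-test form on binned event counts; the bin counts are synthetic, and ZMAP's internal binning and bookkeeping may differ.

  % Minimal sketch of the standard deviate Z comparing the mean rates in two
  % parts of the observation period (assumption: classical z-test form on
  % binned event counts; ZMAP's internal implementation may differ).
  r1 = [3 4 2 5 3 4];                    % events per bin before the dividing point
  r2 = [1 2 1 0 2 1];                    % events per bin after it
  Z  = (mean(r1) - mean(r2)) / sqrt(var(r1)/numel(r1) + var(r2)/numel(r2));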


Second: One may want to know the amount of change. For this, choose the option Compare two rates (no fit) from the list of the ZTools menu in the cumulative number window. A window will open, offering the opportunity for input of the end points of the periods you wish to compare (Figure 3.2).

Figure 3.2: Window for selection of two periods for the comparison of seismicity rates. Instead of typing in the end points, one may use the cursor to click at four points in the cumulative number plot. The two periods need not be contiguous.

After you activate the comparison, two windows appear (Figure 3.3). One of them displays the normalized (per year) frequency-magnitude distributions (FMD) of the two periods in cumulated and non-cumulated form (Figure 3.3a). The other shows the FMDs in absolute values (Figure 3.3b). The periods selected, and the symbols representing them, are given at the top of Figure 3.3a, together with the rate change, which is -78% in the example shown.

Figure 3.3: Frequency-magnitude distribution for two periods for which we seek a comparison of the seismicity rates. (a) Normalized (per year), (b) not normalized. From these FMD plots, one can judge in what magnitude bands the rate change takes place.


 

In the example shown in Figure 3.3, the rate change evenly affects all magnitude bands. This favors the interpretation that the rate change is real. In addition, this change took place at the time of the Landers M7.2 earthquake, at a distance of about 50 km to the east of it. Thus, we accept the change as real and due to this main shock. The button Magnitude Signature? was not activated in this case, because there was no reason to attempt to interpret the change as artificial.

The regular FMD plot (Figure 3.3b) allows a comparison of the b-values during the two periods. In this example, no change took place. The number of events used (n1 and n2), as well as the two b-values (b1 and b2), are written into the plot. The probability, estimated according to Utsu (1992), that the two samples come from the same, indistinguishable population of magnitudes is p=29%, as shown in the top right corner of Figure 3.3b.

Third: We may want to map the change of seismicity rate at the time of the Landers earthquake, for which Figures 3.1 and 3.2 show a local example. This is done by opening the ZTools menu in the window entitled seismicity map, and pointing to mapping z-values. From the several choices offered here, we select Calculate a z-value Map. This command opens a window designed to define the parameters of the grid (Figure 3.4).

Figure 3.4:  Window for the definition of the grid parameters to calculate a z-map. 

Pressing the button ZmapGrid places the cross hairs at our disposal. We now click with the left mouse button on a sequence of points on the map, thus defining the apexes of a polygon within which the z-values for rate changes will be calculated. The last point is identified by clicking the right mouse button. Depending on the number of points in the grid and the power of your computer, this calculation may take a while. At the end of this calculation, a window opens (not shown here) in which you must enter a file name to save this calculation of a z-map and which allows you to browse to the subdirectory where you want to store your result. As soon as you enter the file name, a window showing the z-menu opens (Figure 3.5).

For the example at hand, we press the button LTA under the heading Timecuts. As a consequence, the next window opens (Input Parameters, Figure 3.6) that requires the input of the beginning time and the duration of the time window whose rate we wish to compare with the background rate, using the LTA definition.


 

Figure 3.5: Z-menu window. Choosing LTA opens a window that asks for the definition of the time window for which a comparison with the background rate is to be mapped.

The window may be positioned anywhere within the observation period, and it may have any length that fits. The background rate in LTA is defined by the sum of the rate before and after the window selected for comparison. In our example, we defined the beginning time and the duration of the window such that we compare the rate before with the rate after the Landers main shock.

Figure 3.6:  Input parameter window for calculating a Z-map. 

The resulting z-map (Figure 3.7a) often appears in a distorted plot, because MATLAB does not know that the axes should be geographical coordinates. For a final map of the rate changes (Figure 3.7b), one can select the button Plot map in Lambert projection using m_map in the ZTools menu of the Z-value Map window.


Figure 3.7: Z-map of the rate change at the time of the Landers earthquake. (a) Automatic scales, (b) Lambert projection. Stars mark the epicenters of the Landers and Big Bear main shocks of June 1992.

Various tools are available to modify what is plotted and how it is plotted in the Z-map. For example, the epicenters, which are plotted automatically, have been suppressed in Figure 3.7a by selecting none from the choices of Symbol Type that appear if one selects the Symbol menu in the Z-Value-Map window. Also, the radius of volumes for which the calculated Z-value is plotted was limited by typing the number 25 into the yellow button labeled MinRad(in km) in the Z-Value-Map window and pressing Go afterward. This was done because in areas where the seismicity is too low for a local estimate of the rate change, it makes no sense to plot a value for Z that would be derived from what occurred in relatively distant volumes.

The number of events, ni, used for calculating the Z-values, appears in a gray button in

the upper right corner, below the button Go. When one uses the select button in this

window, the number of events selected equals the number visible next to the label ni. If one wishes to select a different number of events, one may replace the value in the ni button and then press the set ni button.

Fourth:  Finding the strongest rate change anywhere in time and space can be done by

calculating an alarm cube. From the ZTools menu in the seismicity map, select zmapmenu. The window shown in Figure 3.5 opens. Clicking on Alarm opens the window shown in Figure 3.8, in which one can define the window length of interest (7 years in our example) and the step width in units of bin length (14 days in our example, which was defined in the calculation of the z-map).

Figure 3.8:  Selection of alarm cube parameters.

Pressing the button LTA in the window shown in Figure 3.8 starts the calculation of the

alarm cube. The code slides a time window (of 7 years in the example) along the data at


each node and calculates for every position the Z-value comparing the rate in the window to that outside of it. The resulting array of Z-values is then sorted, and the locations in time and space of the largest Z-values are displayed in the alarm cube (Figure 3.9a). The locations of these alarms are also shown in the seismicity map as three red dots (Figure 3.9b).

Figure 3.9: (a) In the alarm cube the x- and y-axes are the longitude and latitude, the z-axis is time. Features like fault lines and epicenters of main shocks at the top and bottom are guides to find one's position. Red circles with blue lines following show the position in time and space of all occurrences of Z-values larger than or equal to the value given in the yellow button labeled Alarm Threshold. (b) The locations of the alarms selected in the alarm cube are shown as red dots.

The parameters that can be set in the alarm cube window include the maximum radius

allowed for samples to be displayed (MinRad(in km) at upper right; set at 25 km in the example). More importantly, in the button labeled Alarm Threshold, one can type any

value for Z, above which one wishes to see all occurrences (called alarms).

Before setting a different alarm level than the one selected automatically, one may want to inform oneself about the distribution of alarms. The distribution can be plotted by selecting Determin # Alarmgroups (zalarm) from the ZTools menu in the alarm cube window. Making this selection opens a small window into which one has to type the minimum alarm level to be plotted and the step (not shown; selected as 6 and 0.1 in the example). The resulting plot (Figure 3.10a) shows that in our example one alarm with Z=9.1 towers in significance above the others. The next two alarm groups appear at a value of 7.3 and a third appears at 7.1.


Figure 3.10: (a) Number of alarm groups as a function of alarm level. Alarm groups are defined as a

group of contiguous nodes at which an alarm starts at the same time. (b) Fraction of the alarm volume as a

function of alarm level.

In order to find the position of the three additional alarm groups, one could select 6.5 as the Alarm Threshold in the alarm cube window and repeat the calculation. In that case, the locations of the additional nodes with alarms above that level would appear in the seismicity map.

Alternatively, one may be interested in estimating the fraction of the study volume occupied by alarms at a given level (Figure 3.10b). This may be accomplished by selecting the option Determin Valarm/Vtotal(Zalarm) in the ZTools menu of the alarm display window.

This alarm cube routine with its various options is especially useful for determining the

uniqueness of a seismic quiescence that one wishes to propose as a precursor. Many authors proposing quiescence or other precursors do not show how often the proposed

 phenomenon occurs at a similar significance at other times and in other locations than the

one possibly associated with a main shock. If the proposed precursor occupies the

number one position in the alarms, the phenomenon can be accepted as unusual. If, however, the supposedly interesting anomaly occupies number 45, for example, in level

of significance, one has to accept that this phenomenon occurs often and most likely

appears associated with a main shock by chance.

Articles in which tools discussed in this chapter were used:

Wiemer, S., and M. Wyss, Seismic quiescence before the Landers (M=7.5) and Big Bear

(M=6.5) 1992 earthquakes, Bull. Seismol. Soc. Am., 84, 900-916, 1994.

Wyss, M., and A.H. Martyrosian, Seismic quiescence before the M7, 1988, Spitak

earthquake, Armenia, Geophys. J. Int., 124, 329-340, 1998.

Wyss, M., K. Shimazaki, and T. Urabe, Quantitative mapping of a precursory quiescence

to the Izu-Oshima 1990 (M6.5) earthquake, Japan, Geophys. J. Int., 127 , 735-743,

1996.

Wyss, M., A. Hasegawa, S. Wiemer, and N. Umino, Quantitative mapping of precursory seismic quiescence before the 1989, M7.1, off-Sanriku earthquake, Japan, Annali di Geophysica, 42, 851-869, 1999.


Wyss, M., and S. Wiemer, How can one test the seismic gap hypothesis? The case of repeated ruptures in the Aleutians, Pageoph, 155, 259-278, 1999.


CHAPTER IV

Measuring Variations in b-value

Precondition:  You have already selected the part of an earthquake catalog that is

reasonably homogeneous in space, time and magnitude band. All inadequate parts of the catalog and explosions have been removed. Also, you have culled the events with

magnitudes significantly below the Mc, such that the algorithm that finds Mc for the local

samples cannot mistakenly fit a straight line to a wide magnitude band below Mc.

Assumption: The b-value is relatively stable as a function of time. The first order variations are expected as a function of space.

Mapping b-values: In the seismicity map window, open the ZTools menu and point to

Mapping b-values. From the sub-menu select Calculate a Mc and b-value map. The

window for Grid Input Parameters  (Figure 4.1) will open. After defining the parameters and pressing Go, the cross hairs appear. Define the apexes of the polygon

within which you want to calculate a b-value map, using the left mouse button, until the

last point, for which you use the right mouse button. Once the calculation is done, a

window opens that allows you to save the grid in a file and place it in the subdirectory of your choice by browsing. (Bug: If an error results, type Prmap=0 in the MATLAB

window and repeat the calculation).

Figure 4.1:  Grid Input Parameters for b-value maps. In addition to the total number, N, of events (or

radius) of the samples, one must enter a minimum number of events above the local value of Mc estimated

for each sample. Although one does not expect this number to drop below about 80% of N, one wants to

eliminate the possibility that it drops to an unacceptably small number.

The b-value map that appears is calculated using the weighted least squares method (Figure 4.2b), but we mostly use the map calculated using the maximum likelihood method (Figure 4.2a). These two figures should be approximately the same. Volumes containing a main shock substantially larger than the rest of the events stand out with lower b-values in the WLS map.


 

Figure 4.2:  b-value maps of southern California for the period 1981-1992.42. (a) Maximum likelihood

method, (b) weighted least squares method.
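For reference, a minimal sketch of the maximum likelihood b-value estimate (Aki's formula with the usual correction for binned magnitudes) is given below. The synthetic magnitudes are generated only for illustration, and ZMAP's internal routine may differ in its details.

  % Minimal sketch of the maximum likelihood b-value estimate, assuming the
  % standard form b = log10(e) / (mean(M) - (Mc - dM/2)) for binned magnitudes.
  % The synthetic catalog below has a true b-value of 1 and Mc = 2.0.
  btrue = 1.0;  Mc = 2.0;  dM = 0.1;
  mag = (Mc - dM/2) - log10(rand(5000,1)) / btrue;   % Gutenberg-Richter distributed
  mag = round(mag/dM) * dM;                          % bin to 0.1 magnitude units
  b   = log10(exp(1)) / (mean(mag) - (Mc - dM/2))    % should come out close to 1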

The default scale for the b-values, with which the maps are presented, includes the minimum and the maximum values that are found. However, it is usually better to select limits that result in a map in which the blue and red are balanced. This is done by selecting Fix color (z) scale from the Display menu in the b-value map window.

The first item of business when viewing a b-value map is to check if the results can be trusted. For this, one can click on any location of interest and view the FMD plot. For example, the distribution at a location of high b-values is compared to that at a location of low ones in Figure 4.3a. This plot was obtained by first selecting Select EQ in Circle and then Select EQ in Circle (overlay existing plot) from the Select menu in the b-value map window. According to the Utsu test, the two distributions are different at a significance level of 99% (p=0.1 in the top right corner).


Figure 4.3:  Frequency-magnitude distributions for quality control, comparing distributions which were

 judged as different in the maps. (a) Comparison of data sets with a high and a low b-value. (b)

Comparison of datasets with a high and a low Mc.

For all of the maps (Figures 4.2 and later) a histogram showing the distribution of the

values can be plotted by selecting Histogram in the menu of Maps of the b-value map 

window. Figure 4.4, for example, shows the distribution of b-values based on the maximum likelihood method.

Figure 4.4:  Histogram of the b-values that appear in Figure 4.2a.

Important additional options in the Maps menu of the b-value map window are the mag of completeness map (Figure 4.5a) and the resolution map (Figure 4.5b). Using the information already stored in the array computed for the b-value maps (Figure 4.2), one can display the Mc. In the example (Figure 4.5a), the NE corner seems to show a higher Mc than the rest of southern California. To check if the algorithm estimates Mc correctly, one can open the Select menu and click first on Select EQ in Circle (placing the cross hairs near a brown node of Figure 4.5a), and then click on Select EQ in Circle (overlay existing plot), which results in the comparison of the two FMDs (Figure 4.3b). The Mc one would select by eye agrees with that estimated by the algorithm.


Figure 4.5:  (a) Magnitude of completeness map with scale in magnitude. (b) Resolution map with scale in

kilometers.

The resolution map  (Figure 4.5b) shows the radii necessary to sample the N eventsspecified in the grid computation (N=100 in the example). The radius is, of course,

inversely proportional to the a-value, a map of which can also be plotted using the Maps 

menu (not shown). The resolution map is more informative for the analyst than the a-value map, because it demonstrates how local the computed values are. From the

example (Figure 4.5b), one can see that volumes along some of the edges need large

radii, thus they are not worth studying. Also, near the center to the left side, one notices relatively large radii because this area contains few earthquakes. As a result of restricting

the b-value plot to R<25 km, this area appears white in Figure 3.7 (Bug: restriction not

offered in b-map).

The local recurrence time TL for a main shock with a given M (we will use M6.5) can be estimated by assuming that the observed FMD may be extrapolated to the M in question. Thus, for every node, TL may be calculated, based on the observed a- and b-values. However, one must be careful not to calculate TL from the b-value maps in Figure 4.2, because they were constructed using N=const, which means that the a-value was not measured. Therefore, we recalculated a grid, focusing on the San Jacinto-Elsinore fault zones, selecting the radius as 8 km in the window for grid input parameters (Figure 4.1) and requesting a finer node separation of 0.02 deg. The resulting TL map is plotted in Figure 4.5, which shows that about 10% of the area presents unusually short recurrence times.

Figure 4.5:  (a) Local recurrence time map, using R=8 km and Mmain=6.5. (b) Local

 probability per year and unit area for an M6.5 earthquake.

The local probability, PL, for a main shock with Mmain, can be calculated as the inverse of the local recurrence time and normalized by area (Figure 4.5b).
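The underlying arithmetic is the standard Gutenberg-Richter extrapolation. A minimal sketch follows; the a- and b-values and the area normalization shown here are hypothetical illustrations, not values taken from the maps above.

  % Minimal sketch of the local recurrence time TL and local probability PL
  % for a target main shock magnitude Mmain, extrapolating the observed FMD.
  % The a- and b-values below are hypothetical, not results from the text.
  a_annual = 2.3;            % annual a-value of the local sample (log10 of the
                             % yearly number of events with M >= 0)
  b        = 0.9;            % local b-value
  Mmain    = 6.5;            % target main shock magnitude
  area_km2 = pi * 8^2;       % area of the R = 8 km sample
  TL = 1 / 10^(a_annual - b*Mmain);      % local recurrence time in years
  PL = (1/TL) / area_km2;                % local probability per year and unit area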

Cross sections of b-values may be constructed by selecting Calculate a b-value cross section from the Mapping b-values options in the ZTools menu in the seismicity window. Activating this button brings up a seismicity map in Lambert projection in which one must set the width of the cross section in the lower right corner and the

method of defining the endpoints in the lower left corner (Figure 4.6a). Once one selects

the endpoints, the selected epicenters are highlighted (Figure 4.6a). A new window


opens, showing the hypocenters in cross section and offering several buttons at the top as

choices for the next step (Figure 4.6b). Because we wanted to calculate b-values, we selected the button at the left top labeled b and Mc grid. The two Figures 4.6 are ideal

for documenting the location and position of cross sections.

Figure 4.6: (a) Lambert projection of epicenters that appears when one chooses to work in a cross section in the seismicity window. The earthquakes selected by the choice of endpoints and cross-section width are

highlighted. (b) The hypocenters in cross section selected in (a), with buttons at the top designed for

executing the next step (mapping the b-value, in our example).

By selecting the topmost button on the right in Figure 4.6b, one opens a window like Figure 4.1 that allows the definition of the grid properties. After they have been selected, cross hairs appear, which must be used to click a polygon in the cross section within which the b-values are to be calculated, as in the case of calculating the b-value map. The result of this calculation is shown in Figure 4.7.

Figure 4.7:  b-value cross section of a 20 km wide section of the San Jacinto fault defined in Figure 4.6.

In Figure 4.7, one has again the option of limiting the radius, and setting the number  of

events in samples one may want to extract (top right corner). Also, this window has a

 button labeled Maps  that offers the same options as that button in the b-value map window discussed before. Also, as in the b-value map, the Display  button offers a

number of ways to modify the display.


 

Changes of b-values as a function of time may be identified by pointing to the option Mc and b-value estimation from the ZTools menu in the cumulative number window and selecting b with time from the possibilities offered. There are several other options offered, such as b with depth and b with magnitude. After the selection, a window opens (not shown) requesting input of the number of events to be used in the sliding time window. In the example for which the result is shown in Figure 4.10, we selected a volume around the Landers epicenter and used 400 events per b-estimate.

Figure 4.10: b-values in sliding time windows of 400 events as a function of time. WLS method above and maximum likelihood method below.

Both methods of estimating b-values show a brief decline of b after the Landers

earthquake, followed by a substantial increase. The result in Figure 4.10 does not

guarantee that the observed change of b is a change in time, because it could be that the

activity shifted from a volume of constant and low b-value to one of constant, but high value. In order to determine which of the two possibilities was the case, ZMAP offers the

option Calculate a differential b-value Map (const R)  in the sub-menu Mapping b-values that appears in the ZTools  menu of the seismicity map. Selecting this option brings up a window (not shown) in which the starting and ending times of the periods to

 be compared need to be defined. After that, another window of the type of Figure 4.1

opens for defining the grid parameters. Once these are defined, the cross hairs appear and the analyst has to click at the locations of the apexes, as usual.


Figure 4.11:  b-value changes at the time of the Landers 1992 earthquake in its vicinity (R=15 km).

The map of the b-value changes at the time of the Landers earthquake reveals that changes as a function of time have indeed taken place, but that they are positive as well

as negative (Figure 4.11). This demonstrates how important it is to map temporal

changes and not to rely on figures like Figure 4.10, showing b as a function of time only.

Articles in which tools discussed in this chapter were used:

Wiemer, S., and J. Benoit, Mapping the b-value anomaly at 100 km depth in the Alaska and New Zealand subduction zones, Geophys. Res. Lett., 23, 1557-1560, 1996.

Wiemer, S., and S. McNutt, Variations in frequency-magnitude distribution with depth in two volcanic areas: Mount St. Helens, Washington, and Mt. Spurr, Alaska, Geophys. Res. Lett., 24, 189-192, 1997.

Wiemer, S., S.R. McNutt, and M. Wyss, Temporal and three-dimensional spatial analysis of the frequency-magnitude distribution near Long Valley caldera, California, Geophys. J. Int., 134, 409-421, 1998.

Wiemer, S., and M. Wyss, Mapping the frequency-magnitude distribution in asperities: An improved technique to calculate recurrence times?, J. Geophys. Res., 102, 15115-15128, 1997.

Wyss, M., K. Nagamine, F.W. Klein, and S. Wiemer, Evidence for magma at intermediate crustal depth below Kilauea's East Rift, Hawaii, based on anomalously high b-values, J. Volcanol. Geotherm. Res., in press, 2001.

Wyss, M., D. Schorlemmer, and S. Wiemer, Mapping asperities by minima of local recurrence time: The San Jacinto-Elsinore fault zones, J. Geophys. Res., 105, 7829-7844, 2000.

Wyss, M., K. Shimazaki, and S. Wiemer, Mapping active magma chambers by b-values beneath the off-Ito volcano, Japan, J. Geophys. Res., 102, 20413-20422, 1997.

Wyss, M., and S. Wiemer, Change in the probability for earthquakes in Southern California due to the Landers magnitude 7.3 earthquake, Science, 290, 1334-1338, 2000.


CHAPTER V

Stress Tensor Inversions

Introduction

There have been significant changes in the way ZMAP performs stress tensor inversions from ZMAP5 to ZMAP6! To do the inversions, ZMAP now uses software by Andy Michael, USGS Menlo Park. The advantages are that (1) inversions can now be performed on a PC as well as on UNIX; precompiled Windows executables are included in the ZMAP distribution; and (2) the linearized inversion by Michael is much faster, taking only seconds rather than minutes to complete. Results of the two methods have been shown to be equivalent for the most part [Hardebeck and Hauksson, 2001]. A first application of the ZMAP tools to map stress can be found in [Wiemer et al., 2001].

When you use these codes included in ZMAP, please make sure to give credit to the author of the code, Andy Michael [Michael, 1984; Michael, 1987a; Michael, 1987b; Michael, 1991; Michael et al., 1990].

On a PC, the inversion should work without the need to compile the software. On other

 platforms, you will need to run the makefiles found in the ./external directory:

makeslick

makeslfast

makebtslw

This should compile the necessary executables.

Data Format

The input data format for stress tensor inversions remains identical to the ZMAP5

versions, and is compliant with the USGS hypoinverse output The data imported intoZMAP needs to contain three additional columns:

column 10: Dip-direction

column 11: Dip

column 12: Rakecolumn 13: Misfit - fault plane uncertainty assigned by hypoinverse (optional)

Shown below is an example of the input data.


Table 1: Fault plane solution input data format

dip-direction    dip        rake         misfit
230.0000         75.0000    137.3870     0.03
325.0000         90.0000    55.0000      0.04
145.0000         80.0000    -55.0000     0.12
140.0000         75.0000    50.0000      0.01
50.0000          50.0000    140.0000     0.10
45.0000          50.0000    -135.0000    0.03

Data import is only supported through the ASCII option. Select the EQ Datafile (+focal) option when importing your data into ZMAP. Several precompiled datasets are available through the online dataset web page (use the 'online data' button in the ZMAP menu).
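As a minimal, hypothetical sketch of reading such an ASCII catalog outside the import dialog (the file name is a placeholder, and the first nine columns are assumed to be the usual ZMAP catalog columns, so only columns 10-13 are used here):

a      = load('landers_focal.dat');   % plain ASCII, one event per row
dipdir = a(:,10);                     % dip direction (degrees)
dip    = a(:,11);                     % dip (degrees)
rake   = a(:,12);                     % rake (degrees)
misfit = a(:,13);                     % optional hypoinverse fault-plane uncertainty
% basic sanity checks before running an inversion:
assert(all(dip  >= 0    & dip  <= 90),  'dip outside 0-90 degrees');
assert(all(rake >= -180 & rake <= 180), 'rake outside -180..180 degrees');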

Plotting focal mechanism data on a map

Using the Overlay -> Legend by ... -> Legend by faulting type option from the seismicity map, a map differentiating the various faulting styles of the individual mechanisms by color can be plotted (Figure 5.1).

Figure 5.1: Map of the Landers region. Hypocenters are color-coded by faulting style.


Inverting for the best fitting stress tensor

Stress tensor inversions can be performed either for individual samples or on a grid. The inversion for individual samples is initiated from the cumulative number window. Select a subset from the seismicity window (generally 10 < N < 300), then select the ZTOOLS -> Stress Tensor Inversion -> Invert using Michael's Method option. The inversion is started and will take several seconds, depending on the sample size and the speed of your machine. The inversion is performed by first saving the necessary data into a file, then calling Michael's inversion program (unix('slfast data2')) to find the best solution. To estimate the confidence regions of the solution, Michael's bootstrap approach is used (unix(['bootslickw data2 2000 0.5'])). In the default setup, fault planes and auxiliary planes are assumed equally likely to be the rupture plane (expressed by the 0.5 in the bootslickw call). Results are displayed in a stereographic projection (Wulff net, Figure 5.2).
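The call sequence described above can be sketched as follows; the layout of the data2 file (one mechanism per line: dip direction, dip, rake) is an assumption made here for illustration, not a specification of Michael's programs:

fid = fopen('data2', 'w');
fprintf(fid, '%8.2f %8.2f %8.2f\n', [dipdir dip rake]');   % the selected subset
fclose(fid);
unix('slfast data2');                 % best-fitting stress tensor (Michael)
unix('bootslickw data2 2000 0.5');    % 2000 bootstrap samples; 0.5 = fault and
                                      % auxiliary planes equally likely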

Figure 5.2: Output of the stress tensor inversion

The faulting type is determined based on Zoback's (1992) classification scheme; the info button will link to a web page describing the faulting styles. See Michael's papers for details on variance and Phi.

In addition, the stress tensor can be investigated as a function of time and depth. Inversions will be performed for overlapping windows with a constant, user-defined number of events, and plotted against time or depth.
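A minimal sketch of how such overlapping constant-count windows could be built (window size, step, and variable names are assumptions; each window would then be written to data2 and inverted as in the previous sketch):

Nwin = 100;   step = 25;              % assumed events per window and window offset
[tsorted, idx] = sort(t);             % order events by origin time
starts = 1:step:(numel(t) - Nwin + 1);
tmid = nan(size(starts));
for k = 1:numel(starts)
    w = idx(starts(k):starts(k)+Nwin-1);            % events in this window
    tmid(k) = tsorted(starts(k) + round(Nwin/2));   % mid-time for plotting
    % invert dipdir(w), dip(w), rake(w) as above; store S1 and the variance
end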


Figure 5.4: Stress tensor inversion results plotted as a function of time

Inverting on a grid

From the seismicity window, the ZTOOLS -> Map stress tensor option will open an input window for the inversion on a grid. The input structure is similar to the b- and z-value mapping. Grid node spacing is given in degrees. You can either choose a constant number of events near each node, or a constant radius in kilometers. When choosing the latter, it might be a good idea to set the minimum number of events to a value above its default of zero (e.g., 10). A grid is defined interactively, excluding areas of low seismicity (left mouse button: new node; right mouse button: last node).
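The two sampling options at one grid node amount to the following minimal sketch; the node position, the rough distance conversion, and all variable names are assumptions for illustration only:

nodeLon = -116.5;  nodeLat = 34.3;          % hypothetical grid node
d = 111.2 * sqrt(((lon - nodeLon) .* cosd(nodeLat)).^2 + (lat - nodeLat).^2);  % ~km

% option 1: constant number of events (the Ni closest epicenters)
Ni = 50;
[dSorted, order] = sort(d);
selN = order(1:min(Ni, numel(d)));

% option 2: constant radius, with a minimum-number-of-events cutoff
R = 20;  Nmin = 10;
selR = find(d <= R);
if numel(selR) < Nmin
    selR = [];      % skip this node: too few events for a stable inversion
end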

Results of the inversion are displayed in two windows: a map that shows the orientation of S1 as a bar and color-codes the faulting style, and a map of the variance of the inversion at each node, again underlain by the orientation of S1 indicated by bars.

Figure 5.5: Stress tensor inversion results for the Landers region. The top frame shows the orientation of S1 (bars), differentiating various faulting regimes. The bottom plot shows in addition the variance of the stress tensor at each node.

Red areas are regions where only a poor fit to a homogeneous stress tensor could be obtained. The Select -> Select EQ in circle option will plot the cumulative number at the chosen node, then perform an inversion and plot the results in a Wulff net.


Figure 5.6: Typical inversion results for a 'red' region (i.e., high variance and a poor fit to a homogeneous stress field) and for a 'blue' region.

Plotting stress results on top of topography

A nice-looking map of the variance and orientation of S1, plotted on top of topography, can be obtained using the Maps -> Plot map on top of topography option from the variance map. However, you must have access to the Matlab Mapping Toolbox to use this option, and you must have already loaded/plotted a topography map using the options from the seismicity map. The script called to do the plotting is dramap_stress.m. It may be necessary to change this script in order to adjust the label spacing, color map, etc. Note that the map cannot be viewed from a perspective different from straight above, since the bars are all plotted at one height of 10 km.

Figure 5.7: Variance map plotted on top of topography.


Using Gephart’s code

An alternative code to compute the stress tensor was provided by Gephart [Gephart, 1990a; Gephart, 1990b; Gephart and Forsyth, 1984]. His code performs a complete grid search of the parameter space. The ZMAP6 version of the code is essentially unchanged from the ZMAP5 version, with the exception that a precompiled PC version is now also available. The data input format is identical to the one for Michael's inversion. Note that significant differences between the two methods have been observed in special cases.

For the UNIX or LINUX version, you need to compile a few files that are located in the external/src_unix directory. Check the INFO file in this directory for information on compiling.

To initiate a stress tensor inversion, select the "Invert for stress tensor" option from the Tools pull-down menu of the Cumulative Number window. The dataset currently selected in this window will be used for the inversion. The actual inversion is performed using a Fortran code based on the Gephart and Forsyth [1984] algorithm and modified by Zhong Lu.

The program is described and discussed by Gephart and Forsyth [1984], Gephart (1990), [Lu and Wyss, 1996; Lu et al., 1997] and [Gillard et al., 1995]. Two main assumptions are made: 1) the stress tensor is uniform in the crustal volume investigated; 2) on each fault plane, slip occurs in the direction of the resolved shear stress. In order to invert the focal mechanism data successfully for the directions of the principal stresses, one must have a crustal volume with faults representing zones of weakness with different orientations in a homogeneous stress field. If only one type of focal mechanism is observed, the directions of the principal stresses are poorly constrained (modified from Gillard and Wyss, 1995).


Figure 5.8. Schematic representation of the misfit angle (Figure provided by Zhong Lu)

To determine the unknown parameters, the difference between the prediction of the model and the observations needs to be minimized. This difference is called the misfit, and is defined as the minimum rotation about any arbitrary axis that brings the fault plane geometry into coincidence with a new fault plane. A grid search over the focal sphere is performed, at first with a 90-degree variance and 10-degree spacing (approximate method), then with a 30-degree variance and 5-degree spacing. Each inversion takes a significant amount of time to run, which depends mainly on the number of earthquakes to be inverted. As a rule of thumb, 30 earthquakes take about 15 minutes to invert on a SUN Sparc 20, and about 3 minutes on a 1.7 GHz PC. Please wait until the inversion is completed; do not attempt to continue using ZMAP. The inversion creates a number of temporary files in the directory ~/ZMAP/external. The final result can be found in the file stress.out.

Table 2: Output of the stress tensor inversion in files stress.out and out95

S1(az)   S1(plun)   S2(az)   S2(plun)   S3(az)   S3(plun)   PHI     R     Misfit
13       46         5        314        76       201        -5.6    0.9   3.597

For the definitions of the ratio R and of PHI, see Gephart (1990). The file out95 contains the entire grid search, where each line has the same format as shown in Table 2. To plot the best-fitting stress tensor (the one with the smallest misfit value), type plot95 in the Matlab command window. This will load the file plot95 and calculate the 95 percent confidence regions using the formula of Parker and McNutt (1980), where n is the number of earthquakes used in the inversion and MImin is the minimum achieved misfit. All grid points with a misfit MI below the resulting confidence limit will be plotted. The projection is now a stereographic one as well! Also, the 95% confidence regions are only calculated using the 30-degree grid. This is done in order to reduce computing time (by about a factor of four). As a result, in some situations there may be grid points outside the 30-degree variance that are significant at the 95% confidence limit but are not shown in the plot. To change this to a 90-degree variance search for the exact method, edit line 5 of the file stinvers/fmsiWindow_1.c:


#define VARIANCE_30 30

change to:

#define VARIANCE_30 90

and re-compile (cc -o msiWindow_1 msiWindow_1.c)

The cumulative misfit method

Stress tensor inversions are time consuming, and the resulting tensor is not easily visualized. To identify crustal volumes that satisfy one homogeneous stress tensor, Lu and Wyss (1995) and Wyss and Lu (1995) introduced the cumulative misfit method. The misfit, f, for each individual earthquake can be summed up in a number of different ways, for example along the strike of a fault or plate boundary. If the stress direction along strike is uniform within segments, but different from other segments, the cumulative misfit will show a constant, but different, slope for each segment (Figure 5.9). We can also study the cumulative misfit as a function of latitude, depth, time, or magnitude, and try to identify segments with constant but different slope.
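A minimal sketch of this summation (not ZMAP code); lat and f are assumed to be column vectors holding, for each event, its latitude and its individual misfit with respect to the assumed stress tensor:

[latSorted, order] = sort(lat);   % sum along latitude (could equally be strike, depth, time)
F = cumsum(f(order));             % cumulative misfit F
plot(latSorted, F);
xlabel('latitude'); ylabel('cumulative misfit F (deg)');
% segments with a constant but different slope suggest different stress regimes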

ZMAP allows taking the cumulative misfit method one step further: a grid (in map view or cross section) is used, and the average misfit of the n closest earthquakes in a Euclidean sense is calculated at each node. The distribution of this average misfit can be displayed using a color representation. Maps of this type, calculated for a number of different assumed homogeneous stress tensors, can identify homogeneous volumes, which can then be inverted using the stress tensor inversion method described earlier.

Figure 5.9. Schematic explanation of the cumulative misfit method. Changes in the slope of the cumulative misfit curve (blue) indicate a change in the stress regime. Figure courtesy of Zhong Lu.


In the cross-section view of individual events (Figure 5.12), the size and color of each symbol depicts the misfit value f. Please note that in order to show this cross-section view, a cross section must have been defined previously.

To calculate a map, the grid spacing needs to be defined (in degrees), as well as the number of earthquakes sampled around each grid node. The distribution of average misfit values will then be shown in a color image (Figure 5.14). A low average misfit will be indicated in red, a high misfit in blue. A study by Gillard and Wyss (1995) showed that in many cases average misfit values of F < 6 degrees indicate that the assumption of a uniform stress field is fulfilled.
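A minimal sketch of this gridded average-misfit calculation, assuming per-event misfits f (with respect to one assumed stress tensor) and event coordinates lon and lat already exist; the grid limits and n are placeholders:

n  = 50;                                          % events averaged per node
gx = -121.0:0.05:-120.0;   gy = 35.5:0.05:36.3;   % hypothetical grid (degrees)
Fmap = nan(numel(gy), numel(gx));
for iy = 1:numel(gy)
    for ix = 1:numel(gx)
        d = sqrt((lon - gx(ix)).^2 + (lat - gy(iy)).^2);    % Euclidean distance (degrees)
        [dSorted, order] = sort(d);
        Fmap(iy,ix) = mean(f(order(1:min(n, numel(f)))));   % average misfit of n closest
    end
end
imagesc(gx, gy, Fmap); axis xy; colorbar
% nodes with F below about 6 degrees are candidates for a uniform stress field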

Figure 5.12. Cross-section view of the misfit f of individual events. The size and color of each

symbol indicates the misfit f.

Figure 5.13. Cumulative misfit F as a function of Latitude


Figure 5.14. Image showing the distribution of average misfit values F in map view. Red colors indicate a low average misfit and thus good compliance with the assumed theoretical stress field. This map shows the Parkfield segment of the San Andreas fault. The theoretical stress field was given as (151 deg azimuth, 2 deg plunge, R=0.9, Phi=1).

Figure 5.15. Image showing the distribution of average misfit values F in cross-section

view.

References

Gephart, J.W., FMSI: A FORTRAN program for inverting fault/slickenside and earthquake focal mechanism data to obtain the original stress tensor, Comput. Geosci., 16, 953-989, 1990a.
Gephart, J.W., Stress and the direction of slip on fault planes, Tectonics, 9, 845-858, 1990b.
Gephart, J.W., and D.W. Forsyth, An Improved Method for Determining the Regional Stress Tensor Using Earthquake Focal Mechanism Data: Application to the San Fernando Earthquake Sequence, Journal of Geophysical Research, 89, 9305-9320, 1984.
Gillard, D., M. Wyss, and P. Okubo, Stress and strain tensor orientations in the south flank of Kilauea, Hawaii, estimated from fault plane solutions, Journal of Geophysical Research, 100, 16025-16042, 1995.
Hardebeck, J.L., and E. Hauksson, Stress orientations obtained from earthquake focal mechanisms: What are appropriate uncertainty estimates?, Bulletin of the Seismological Society of America, 91 (2), 250-262, 2001.
Lu, Z., and M. Wyss, Segmentation of the Aleutian plate boundary derived from stress direction estimates based on fault plane solutions, Journal of Geophysical Research, 101, 803-816, 1996.


Lu, Z., M. Wyss, and H. Pulpan, Details of stress directions in the Alaska subduction zone from fault plane solutions, Journal of Geophysical Research, 102, 5385-5402, 1997.
Michael, A.J., Determination of Stress From Slip Data: Faults and Folds, Journal of Geophysical Research, 89, 11517-11526, 1984.
Michael, A.J., Stress rotation during the Coalinga aftershock sequence, Journal of Geophysical Research, 92, 7963-7979, 1987a.
Michael, A.J., Use of Focal Mechanisms to Determine Stress: A Control Study, Journal of Geophysical Research, 92, 357-368, 1987b.
Michael, A.J., Spatial variations of stress within the 1987 Whittier Narrows, California, aftershock sequence: New techniques and results, Journal of Geophysical Research, 96, 6303-6319, 1991.
Michael, A.J., W.L. Ellsworth, and D. Oppenheimer, Co-seismic stress changes induced by the 1989 Loma Prieta, California earthquake, Geophysical Research Letters, 17, 1441-1444, 1990.
Wiemer, S., M.C. Gerstenberger, and E. Hauksson, Properties of the 1999, Mw 7.1, Hector Mine earthquake: Implications for aftershock hazard, Bulletin of the Seismological Society of America, in press, 2001.


CHAPTER VI

Tips and tricks for making nice figures

Most ZMAP figures are not publication or presentation quality right away. Below are some ideas on how to 1) tweak the ZMAP figures within Matlab so that they look nicer, 2) export the figures out of Matlab, and 3) post-process them in various editing programs.

Editing ZMAP graphs

The edit options in Matlab have improved dramatically. While Matlab 5.3 had some options that were not very stable, Matlab 6 now offers a full array of editing options. Therefore, I strongly recommend using Matlab 6 whenever possible. I personally create figures mostly on a PC, because editing tends to be more stable on a powerful PC than on HP or SUN workstations, and because using copy & paste, progress can be made very quickly.

Figure 6.1: Starting point of the edit

Let's start with a simple example: a cumulative number curve comparing seismicity above and below M1.5 in the Parkfield area. This plot was made using the ZTOOLS -> overlay another plot (hold) option. The original ZMAP version (left) is OK for display on the screen, but not useful for publication: the fonts are too small, there is no legend, the axes scales are not quite right, and the lines should be gray. First we activate the Edit option by clicking on the arrow next to the small printer symbol. Now we can click on any element and view or change its properties. Let's first change the lines. Select the line you want to change with a left mouse click, then view the available options with a right mouse click. You can change some options right there; for more advanced options, like MarkerType, you need to open the Properties menu.

Selecting the labels, we delete the title (park.mat) and change the size and position of the axis labels, setting them to bold. We also increase the size of the star and change its color.

Figure 6.3: First iteration


Now let's change the axes setup. Select the main axes and open the properties box. We will change the axes font size and the Y-axis tick labels. We could also change the axes background color. By selecting the axes and then unlocking them, we can resize the figure aspect ratio to our liking and then relock the axes. Finally, we select the axes and use the 'Show legend' option to plot a legend. Its axes can then be unlocked and moved, and we can edit the text in the legend by selecting it until a text edit cursor appears. We could also change the figure background color using the Figure Properties editor (double click in the figure, or use Edit -> Figure properties). You might add annotations using the "T" option, or lines and arrows. Below is the final result:


Figure 6.4. Final edited figures

Exporting figures from ZMAP

In Figure 6.4, we created a decent-looking figure. What to do next depends largely on what you need. The best option for making publication-quality figures of simple graphs such as Figure 6.4 is to print them into a postscript file. To do this, use the Print ... button from the File menu and select the Print to file option (you need to have a postscript printer driver installed). Note that the output may not have the same aspect ratio unless you use the PageSetup menu options "Use Screen size, centered on Page" or FixAspectRatio. You also want to select the right paper format (A4 or letter) to avoid later complications.


Alternatively, you can print from the command prompt in Matlab, using, for example:

print -dps -noui myplot.ps

The figure you want to print must be the active one. The -noui option avoids printing the menus. See help print for details on drivers, etc. The same page setup considerations apply.

In addition, you could keep a copy as a Matlab *.fig file. These figures can be reloaded into any Matlab session and edited within Matlab. See hgsave and hgload for details.
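For example (the file name is a placeholder):

hgsave(gcf, 'landers_cumnum.fig');   % store the current figure with all its objects
h = hgload('landers_cumnum.fig');    % reopen it in a later Matlab session
figure(h)                            % make it current, then continue editing as usual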

Postscript files print well and can be converted into PDF, but they cannot always be edited. On UNIX workstations, IslandDraw does a good job opening simple postscript figures from Matlab, but it fails with interpolated color maps, very large numbers of points, etc. FrameMaker works well for printing and text editing, but the postscript cannot be edited, only resized. Postscript generally cannot easily be edited in Microsoft Word or PowerPoint, but Designer or CorelDraw on a PC or Mac often works if the files are small.

If you work on a PC, an alternative to postscript is the EMF format (Enhanced Metafile). You can get this either by using the File -> Export option, or by setting the Edit -> Copy options to Windows metafile and then selecting "Copy Figure". The clipboard content can then simply be pasted into Word or PowerPoint. In PowerPoint (or other PC editing programs), the figures can be ungrouped and edited.


Let's assume, for example, that we want to give a colorful presentation using PowerPoint. The options are almost limitless … but it does take some time.


Working with interpolated color maps

Interpolated color maps in Matlab look nice, but they tend not to export readily into other programs. For quick documentation, such as this document, the copy-as-bitmap and paste option, or the Alt-PrintScreen option, works out well. If a higher resolution is needed, I often end up using the following approach: 1) finalize the figure as much as possible in Matlab; 2) export it to a jpeg file. The resolution can be set when printing from the command line:

print -djpeg -r300 -noui myfig

Also, if you want a dark background and Matlab defaults to white, try:

set(gcf,'InvertHardcopy','off')

The resolution for fonts is OK at 300 dpi.

In PowerPoint, the imported figure will be too large for a page, but it can be resized while keeping the dpi resolution. You will often notice that thicker lines and bigger, bold fonts work better. Make sure that the background in Matlab is the same one you will use in the slide, since it cannot easily be changed later. To switch a Matlab figure background from black to white and vice versa, use the command line option whitebg(gcf). In PowerPoint, it is then readily possible to add elements on top, add captions or figure numbers, etc. These figures are generally of high enough quality for publication, if they have been saved with at least 300 dpi.
