2013 FRM Study Notes


The following is a review of the Quantitative Analysis principles designed to address the AIM statements set forth by GARP®. This topic is also covered in:

Review of Statistics

Topic 13

Exam Focus

Statistical inference involves two areas: estimation and hypothesis testing. This topic provides insight into how risk managers make portfolio decisions on the basis of statistical analysis of samples of investment returns or other random economic and financial variables. We first focus on the construction of confidence intervals for population parameters based on sample statistics. We then discuss hypothesis testing procedures used to conduct tests concerned with population means and population variances. Specific tests reviewed include the z-test and the t-test. For the exam, you should be able to construct and interpret a confidence interval and you should know when and how to apply each of the test statistics discussed when conducting hypothesis testing.

Statistical inference is the process of making estimates and testing claims about the characteristics of a population. There are two areas of statistical inference: estimation and hypothesis testing. In the previous topic, we introduced confidence intervals. In this topic, we will expand the discussion of confidence intervals as a tool to evaluate population parameters on the basis of sample statistics. We will then examine hypothesis testing, which is the testing of claims about population parameters.

Estimator vs. Parameter

AIM 13.1: Describe and interpret estimators of the sample mean and their properties.

In many real-world statistics applications, it is impractical (or impossible) to study an entire population. When this is the case, a subgroup of the population, called a sample, can be evaluated. Based on this sample, the parameters of the underlying population can be estimated.

For example, rather than attempting to measure the performance of the U.S. stock market by observing the performance of all 10,000 or so stocks trading in the United States at any one time, the performance of the subgroup of 500 stocks in the S&P 500 can be measured. The results of the statistical analysis of this sample can then be used to draw conclusions about the entire population of U.S. stocks.

Ideally, we would like our estimator to be as accurate as possible (compared to the unknown true value). As we will discuss shortly, there are three desirable properties that we would like our estimator to display: unbiasedness, consistency, and efficiency.


Topic 13: Cross Reference to GARP Assigned Reading – Stock & Watson, Chapter 3

Point Estimates

AIM 13.4: Define, calculate and interpret a confidence interval. AIM 13.5: Describe the properties of point estimators:

Point estimates are single (sample) values used to estimate population parameters. The formula used to compute the point estimate is called the estimator. For example, the sample mean, x̄, is an estimator of the population mean and is computed as follows:

\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}

The value generated with this calculation for a given sample is called the point estimate of the mean.

Recall the discussion of confidence intervals from the previous topic. Since we are evaluating sample data we replace the standard deviation, s, in the formula from the previous topic with the standard error of the point estimate.

Confidence interval estimates result in a range of values within which the actual value of a parameter will lie, given the probability of 1 – α. Here, alpha, α, is called the level of significance for the confidence interval, and the probability 1 – α is referred to as the degree of confidence. For example, we might estimate that the population mean of random variables will range from 15 to 25 with a 95% degree of confidence, or at the 5% level of significance.

Confidence intervals are usually constructed by adding or subtracting an appropriate value from the point estimate. In general, confidence intervals take on the following form:

point estimate ± (reliability factor × standard error)

where:
point estimate = value of a sample statistic of the population parameter
reliability factor = number that depends on the sampling distribution of the point estimate and the probability that the point estimate falls within the confidence interval, (1 – α)
standard error = standard error of the point estimate


Regardless of whether we are concerned with point estimates or confidence intervals, there are certain statistical properties that make some estimates more desirable than others. These desirable properties of an estimator are unbiasedness, efficiency, and consistency.

An unbiased estimator is one for which the expected value of the estimator is equal to the parameter you are trying to estimate. For example, because the expected value of the sample mean is equal to the population mean [E(x̄) = μ], the sample mean is an unbiased estimator of the population mean. If this is not the case, the estimator is biased.

An unbiased estimator is also efficient if the variance of its sampling distribution is smaller than that of all the other unbiased estimators of the parameter you are trying to estimate. The sample mean, for example, is an unbiased and efficient estimator of the population mean.

A consistent estimator is one for which the accuracy of the parameter estimate increases as the sample size increases. As the sample size increases, the standard error of the sample mean falls, and the sampling distribution bunches more closely around the population mean. In fact, as the sample size approaches infinity, the standard error approaches zero.

Another property of a point estimate is linearity. A point estimate should be a linear estimator (i.e., it can be expressed as a linear function of the sample data). If the estimator is the best available (i.e., has the minimum variance), exhibits linearity, and is unbiased, it is said to be the best linear unbiased estimator. The acronym used to describe this type of estimator is BLUE.

AIM 13.2: Describe and interpret the least squares estimator.

In regression analysis, the line of best fit through a scatter plot of sample data attempts to minimize the sum of the squared differences between the actual data points and the values predicted by the regression line. The values estimated in the line of best fit in regression analysis are termed least squares estimators. Least squares estimation, therefore, will find the intercept and slope of the line through a data set that minimizes the variance of the residual terms (i.e., the differences between each observation and its corresponding conditional expectation).
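To make the idea concrete, here is a minimal least squares sketch in Python using NumPy and simulated data; the variable names and the assumed true intercept (2.0) and slope (0.5) are illustrative only and are not part of the assigned reading.

```python
import numpy as np

# Simulated data for illustration only; the true intercept (2.0) and slope (0.5) are assumptions.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=200)

# Least squares: choose the intercept and slope that minimize the sum of squared residuals.
X = np.column_stack([np.ones_like(x), x])          # design matrix [1, x]
(intercept, slope), *_ = np.linalg.lstsq(X, y, rcond=None)

residuals = y - (intercept + slope * x)
print(f"intercept ~ {intercept:.3f}, slope ~ {slope:.3f}")
print(f"sum of squared residuals = {(residuals ** 2).sum():.3f}")
```

The estimated intercept and slope should land close to the assumed true values, which is the sense in which the least squares estimators recover the underlying linear relationship.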

T-Values

AIM 13.3: Define, interpret, and calculate the t-statistic.

As introduced in the previous topic, the t-distribution is a bell-shaped probability distribution that is symmetrical about its mean. It is the appropriate distribution to use when constructing confidence intervals based on small samples (n < 30) from populations with unknown variance and a normal, or approximately normal, distribution. It may also be appropriate to use the t-distribution when the population variance is unknown and the sample size is large enough that the central limit theorem will assure that the sampling distribution is approximately normal.

The table in Figure 1 contains one-tailed critical values for the t-distribution at the 0.05 and 0.025 levels of significance with various degrees of freedom (df). Note that, unlike the z-table, the t-values are contained within the table, and the probabilities are located at the column headings. Also note that the level of significance of a t-test corresponds to the one-tailed probabilities, p, that head the columns in the t-table.

Figure 1: Table of Critical t-Values

One-Tailed Probabilities, p

df      p = 0.05    p = 0.025
5       2.015       2.571
10      1.812       2.228
15      1.753       2.131
20      1.725       2.086
25      1.708       2.060
30      1.697       2.042
40      1.684       2.021
50      1.676       2.009
60      1.671       2.000
70      1.667       1.994
80      1.664       1.990
90      1.662       1.987
100     1.660       1.984
120     1.658       1.980
∞       1.645       1.960

Professor’s Note: For the exam, expect critical t-values to be provided. You will most likely be given a partial t-table and asked to utilize the appropriate information. The only critical values you need to memorize for the exam are the most commonly used z-values at 90%, 95%, and 99% for both one-tailed and two-tailed tests. Those values are shown in Figure 7 later in this topic.

Population vs. Sample Mean

To compute the population mean, all the observed values in the population are summed (∑X) and divided by the number of observations in the population, N. Note that the population mean is unique in that a given population only has one mean. The population mean is expressed as:

\mu = \frac{\sum_{i=1}^{N} X_i}{N}


The sample mean is the sum of all the values in a sample of a population, ∑X, divided by the number of observations in the sample, n. It is used to make inferences about the population mean. The sample mean is expressed as:

\bar{X} = \frac{\sum_{i=1}^{n} X_i}{n}

Note the use of n, the sample size, versus N, the population size.

Sample Variance, Sample Standard Deviation, and Standard Error

AIM 13.7: Define, calculate, and interpret the sample variance, sample standard deviation, and standard error.

The population variance is defined as the average of the squared deviations from the mean. The population variance (σ²) uses the values for all members of a population and is calculated using the following formula:

\sigma^2 = \frac{\sum_{i=1}^{N}(X_i - \mu)^2}{N}

The computed variance is in terms of squared units of measurement. How does one interpret squared percents, squared dollars, or squared yen? This problem is mitigated through the use of the standard deviation. The population standard deviation, σ, is the square root of the population variance.

The sample variance, s2, is the measure of dispersion that applies when we are evaluating a sample of n observations from a population. The sample variance is calculated using the following formula:

s^2 = \frac{\sum_{i=1}^{n}(X_i - \bar{X})^2}{n-1}

The most noteworthy difference from the formula for population variance is that the denominator for s² is n – 1, one less than the sample size n, whereas σ² uses the entire population size N. Another difference is the use of the sample mean, X̄, instead of the population mean, μ. Based on the mathematical theory behind statistical procedures, the use of the entire number of sample observations, n, instead of n – 1 as the divisor in the computation of s², will systematically underestimate the population parameter, σ², particularly for small sample sizes. This systematic underestimation causes the sample variance to be what is referred to as a biased estimator of the population variance. Using n – 1 instead of n in the denominator, however, improves the statistical properties of s² as an estimator of σ². Thus, s², as expressed in the equation above, is considered to be an unbiased estimator of σ².


Example: Sample variance

Assume the following annualized total returns represent a sample of the managers at a large investment firm (30%, 25%, 23%, 20%, 12%). What is the sample variance of these returns?

Answer:

\bar{X} = \frac{30 + 12 + 25 + 20 + 23}{5} = 22\%

s^2 = \frac{(30-22)^2 + (12-22)^2 + (25-22)^2 + (20-22)^2 + (23-22)^2}{5-1} = 44.5(\%^2)

Thus, the sample variance of 44.5(%²) can be interpreted to be an unbiased estimator of the population variance.

As with the population standard deviation, the sample standard deviation can be calculated by taking the square root of the sample variance. The sample standard deviation, s, is defined as:

s = \sqrt{\frac{\sum_{i=1}^{n}(X_i - \bar{X})^2}{n-1}}

Example: Sample standard deviation

Compute the sample standard deviation based on the result of the preceding example.

Answer:

Since the sample variance for the preceding example was computed to be 44.5(%²), the sample standard deviation is:

s = \sqrt{44.5(\%^2)} = 6.67\%

The results shown here mean that the sample standard deviation, s = 6.67%, can be interpreted as an estimate of the population standard deviation, σ.
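For reference, the sample variance and sample standard deviation from the two preceding examples can be reproduced in a few lines of Python; statistics.variance and statistics.stdev both use the n – 1 divisor described above.

```python
import statistics

returns = [30, 25, 23, 20, 12]            # sample of annualized returns, in percent

mean = statistics.mean(returns)           # 22.0
variance = statistics.variance(returns)   # n - 1 divisor -> 44.5 (%^2)
std_dev = statistics.stdev(returns)       # sqrt(44.5) ~ 6.67%

print(mean, variance, round(std_dev, 2))
```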

The standard error of the sample mean is the standard deviation of the distribution of the sample means.


When the standard deviation of the population, σ, is known, the standard error of the sample mean is calculated as:

\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}

where:
\sigma_{\bar{x}} = standard error of the sample mean
σ = standard deviation of the population
n = size of the sample

Example: Standard error of sample mean (known population variance)

The mean hourly wage for Iowa farm workers is $13.50 with a population standard deviation of $2.90. Calculate and interpret the standard error of the sample mean for a sample size of 30.

Answer:

Because the population standard deviation, σ, is known, the standard error of the sample mean is expressed as:

\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}} = \frac{\$2.90}{\sqrt{30}} = \$0.53

Professor’s Note: On the TI BAII Plus, the use of the square root key is obvious. On the HP 12C, the square root of 30 is computed as: [30] [g] [√x].

This means that if we were to take all possible samples of size 30 from the Iowa farm worker population and prepare a sampling distribution of the sample means, we would get a distribution with a mean of $13.50 and standard error of $0.53.
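A quick arithmetic check of this example in Python, using the values given above:

```python
import math

sigma = 2.90            # known population standard deviation (dollars per hour)
n = 30                  # sample size

standard_error = sigma / math.sqrt(n)
print(round(standard_error, 2))   # ~0.53
```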

Practically speaking, the population’s standard deviation is almost never known. Instead, the standard error of the sample mean must be estimated by dividing the sample standard deviation, s, by √n:

s_{\bar{x}} = \frac{s}{\sqrt{n}}

Professor’s Note: Use this when the population variance is unknown.

where:
s_{\bar{x}} = standard error of the sample mean
s = standard deviation of the sample = \sqrt{\frac{\sum_{i=1}^{n}(X_i - \bar{X})^2}{n-1}}
n = size of the sample


Example: Standard error of sample mean (unknown population variance)

Suppose a sample contains the past 30 monthly returns for McCreary, Inc. The mean return is 2% and the sample standard deviation is 20%. Calculate and interpret the standard error of the sample mean.

Answer:

Since σ is unknown, the standard error of the sample mean is:

s_{\bar{x}} = \frac{s}{\sqrt{n}} = \frac{20\%}{\sqrt{30}} = 3.6\%

This implies that if we took all possible samples of size 30 from McCreary’s monthly returns and prepared a sampling distribution of the sample means, the mean would be 2% with a standard error of 3.6%.

Professor’s Note: Regarding when to use σ and σ/√n, just remember that the standard deviation of the mean of many observations is less than the standard deviation of a single observation. If the standard deviation of monthly stock returns is 2%, the standard error (deviation) of the average monthly return over the next six months is 2%/√6 = 0.82%. The average of several observations of a random variable will be less widely dispersed (have lower standard deviation) around the expected value than will a single observation of the random variable.

Confidence Intervals

AIM 13.8: Define, calculate, and interpret confidence intervals for the population mean.

If the population has a normal distribution with a known variance, a confidence interval for the population mean based on a given confidence level (a.k.a. confidence coefficient) can be calculated as:

\bar{x} \pm z_{\alpha/2}\frac{\sigma}{\sqrt{n}}

where:
\bar{x} = point estimate of the population mean (sample mean)
z_{\alpha/2} = reliability factor, a standard normal random variable for which the probability in the right-hand tail of the distribution is α/2. In other words, this is the z-score that leaves α/2 of probability in the upper tail
\frac{\sigma}{\sqrt{n}} = the standard error of the sample mean, where σ is the known standard deviation of the population and n is the sample size


The most commonly used standard normal distribution reliability factors are:

zα/2 = 1.645 for 90% confidence intervals (the significance level is 10%, 5% in each tail)

zα/2 = 1.960 for 95% confidence intervals (the significance level is 5%, 2.5% in each tail)

zα/2 = 2.575 for 99% confidence intervals (the significance level is 1%, 0.5% in each tail)

Do these numbers look familiar? They should! In our previous topic, we found the probability under the standard normal curve between z = –1.96 and z = +1.96 to be 0.95, or 95%. Owing to symmetry, this leaves a probability of 0.025 under each tail of the curve beyond z = –1.96 or z = +1.96, for a total of 0.05, or 5%—just what we need for a significance level of 0.05, or 5%.
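If SciPy is available, these two-tailed reliability factors can be recovered from the inverse standard normal CDF (a minimal sketch; note that the 99% value is 2.576 before the conventional rounding to 2.575 used in the text):

```python
from scipy.stats import norm

for confidence in (0.90, 0.95, 0.99):
    alpha = 1 - confidence
    z = norm.ppf(1 - alpha / 2)          # z that leaves alpha/2 in the upper tail
    print(f"{confidence:.0%} confidence: z = {z:.3f}")
# Prints approximately 1.645, 1.960, and 2.576.
```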

Example: Confidence interval

Consider a practice exam that was administered to 100 FRM candidates. The mean score on this practice exam was 80 for all 36 of the candidates in the sample who studied at least ten hours a week in preparation for the exam. Assuming a population standard deviation equal to 15, construct and interpret a 99% confidence interval for the mean score on the practice exam for 36 candidates who study at least ten hours a week. Note that in this example the population standard deviation is known, so we don’t have to estimate it.

Answer:

At a confidence level of 99%, zα/2 = z0.005 = 2.575. So, the 99% confidence interval is calculated as follows:

\bar{x} \pm z_{\alpha/2}\frac{\sigma}{\sqrt{n}} = 80 \pm 2.575\frac{15}{\sqrt{36}} = 80 \pm 6.4

Thus, the 99% confidence interval ranges from 73.6 to 86.4.

Confidence intervals can be interpreted from a probabilistic perspective or a practical perspective. With regard to the outcome of the FRM practice exam example, these two perspectives can be described as follows:

Probabilistic interpretation. After repeatedly taking samples of FRM candidates who studied ten hours or more per week, administering the practice exam, and constructing confidence intervals for each sample’s mean, 99% of the resulting confidence intervals will, in the long run, include the population mean.

Practical interpretation. We are 99% confident that the population mean score is between 73.6 and 86.4 for candidates who study more than ten hours per week.
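The arithmetic behind this example can be checked in a few lines of Python, using the numbers given above:

```python
import math

x_bar, sigma, n = 80, 15, 36
z = 2.575                                # reliability factor for 99% confidence

half_width = z * sigma / math.sqrt(n)    # 2.575 * 15 / 6 ~ 6.4
print(x_bar - half_width, x_bar + half_width)   # ~73.6 to ~86.4
```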


Confidence Intervals for the Population Mean: Normal With Unknown Variance

If the distribution of the population is normal with unknown variance, we can use the t-distribution to construct a confidence interval. This is known as a random interval since it will vary from one sample to the next.

\bar{x} \pm t_{\alpha/2}\frac{s}{\sqrt{n}}

where:
\bar{x} = the point estimate of the population mean
t_{\alpha/2} = the t-reliability factor (a.k.a. t-statistic or critical t-value) corresponding to a t-distributed random variable with n – 1 degrees of freedom, where n is the sample size. The area under the tail of the t-distribution to the right of t_{\alpha/2} is α/2
\frac{s}{\sqrt{n}} = standard error of the sample mean
s = sample standard deviation

Unlike the standard normal distribution, the reliability factors for the t-distribution depend on the sample size, so we can’t rely on a commonly used set of reliability factors. Instead, reliability factors for the t-distribution have to be looked up in a table of Student’s t-distribution, like the one at the back of this book.

Owing to the relatively fatter tails of the t-distribution, confidence intervals constructed using t-reliability factors (tα/2) will be more conservative (wider) than those constructed using z-reliability factors (zα/2).

Example: Confidence intervals

Assume we took a sample of the past 30 monthly stock returns for McCreary, Inc. and determined that the mean return is 2% and the sample standard deviation is 20%. Since the population variance is unknown, the standard error of the sample is estimated to be:

s_{\bar{x}} = \frac{s}{\sqrt{n}} = \frac{20\%}{\sqrt{30}} = 3.6\%

Now, let’s construct a 95% confidence interval for the mean monthly return.


Answer:

Here we will use the t-reliability factor because the population variance is unknown. Since there are 30 observations, the degrees of freedom are 29 = 30 – 1. Remember, because this is a two-tailed test at the 95% confidence level, the probability under each tail must be α/2 = 2.5%, for a total of 5%. So, referencing the one-tailed probabilities for Student’s t-distribution at the back of this book, we find the critical t-value (reliability factor) for α/2 = 0.025 and df = 29 to be t29, 2.5 = 2.045. Thus, the 95% confidence interval for the population mean is:

2\% \pm 2.045\frac{20\%}{\sqrt{30}} = 2\% \pm (2.045)(3.6\%) = 2\% \pm 7.4\%

Thus, the 95% confidence interval has a lower limit of –5.4% and an upper limit of +9.4%.

We can interpret this confidence interval by saying that we are 95% confident that the population mean monthly return for McCreary stock is between –5.4% and +9.4%.

Professor’s Note: You should practice looking up reliability factors (a.k.a. critical t-values or t-statistics) in a t-table. The first step is always to compute the degrees of freedom, which is n – 1. The second step is to find the appropriate level of alpha or significance. This depends on whether the test you’re concerned with is one-tailed (use α) or two-tailed (use α/2). When confidence intervals are designed to compute an upper and lower limit, we will use α/2. To look up t29, 2.5, find the 29 df row and match it with the 0.025 column; t = 2.045 is the result. We’ll do more of this in our study of hypothesis testing.
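If SciPy is available, the reliability factor lookup and the interval itself can also be reproduced directly (a sketch using the McCreary numbers; the 2.045 in the table is the same value rounded to three decimals, and the text’s –5.4% to +9.4% reflects rounding the standard error to 3.6%):

```python
import math
from scipy.stats import t

x_bar, s, n = 0.02, 0.20, 30             # sample mean, sample std dev, sample size
alpha = 0.05

t_crit = t.ppf(1 - alpha / 2, df=n - 1)  # ~2.045 for 29 degrees of freedom
half_width = t_crit * s / math.sqrt(n)   # ~0.0747
print(x_bar - half_width, x_bar + half_width)   # ~ -5.5% to ~ +9.5% before rounding
```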

Confidence Interval for a Population Mean When the Population Variance Is Unknown Given a Large Sample From Any Type of Distribution

We now know that the z-statistic should be used to construct confidence intervals when the population distribution is normal and the variance is known, and the t-statistic should be used when the distribution is normal but the variance is unknown. But what do we do when the distribution is nonnormal?

As it turns out, the size of the sample influences whether or not we can construct the appropriate confidence interval for the sample mean.

If the distribution is nonnormal but the population variance is known, the z-statistic can be used as long as the sample size is large (n ≥ 30). We can do this because the central limit theorem assures us that the distribution of the sample mean is approximately normal when the sample is large.

If the distribution is nonnormal and the population variance is unknown, the t-statistic can be used as long as the sample size is large (n ≥ 30). It is also acceptable to use the z-statistic, although use of the t-statistic is more conservative.


This means that if we are sampling from a nonnormal distribution (which is sometimes the case in finance), we cannot create a confidence interval if the sample size is less than 30. So, all else equal, make sure you have a sample of at least 30, and the larger, the better.

Figure 2: Criteria for Selecting the Appropriate Test Statistic

                                                  Test Statistic
When sampling from a:                             Small Sample (n < 30)   Large Sample (n ≥ 30)
Normal distribution with known variance           z-statistic             z-statistic
Normal distribution with unknown variance         t-statistic             t-statistic*
Nonnormal distribution with known variance        not available           z-statistic
Nonnormal distribution with unknown variance      not available           t-statistic*

*The z-statistic is theoretically acceptable here, but use of the t-statistic is more conservative.
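The selection criteria in Figure 2 can be expressed as a small helper function (an illustrative sketch; the function name is an assumption, not GARP or Kaplan notation):

```python
def select_test_statistic(normal: bool, variance_known: bool, n: int) -> str:
    """Return the appropriate test statistic per the criteria in Figure 2."""
    if normal:
        return "z-statistic" if variance_known else "t-statistic"
    if n < 30:
        # Nonnormal population and a small sample: no appropriate statistic.
        return "not available"
    # Large sample: the central limit theorem makes the sampling distribution ~normal.
    return "z-statistic" if variance_known else "t-statistic"

print(select_test_statistic(normal=False, variance_known=False, n=50))   # t-statistic
```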

All of the preceding analysis depends on the sample we draw from the population being random. If the sample isn’t random, the central limit theorem doesn’t apply, our estimates won’t have the desirable properties, and we can’t form unbiased confidence intervals. Surprisingly, creating a random sample is not as easy as one might believe. There are a number of potential mistakes in sampling methods that can bias the results. These biases are particularly problematic in financial research, where available historical data are plentiful, but the creation of new sample data by experimentation is restricted.

Hypothesis Testing

AIM 13.6: Explain and apply the process of hypothesis testing:

Hypothesis testing is the statistical assessment of a statement or idea regarding a population. For instance, a statement could be as follows: “The mean return for the U.S. equity market is greater than zero.” Given the relevant returns data, hypothesis testing procedures can be employed to test the validity of this statement at a given significance level.

A hypothesis is a statement about the value of a population parameter developed for the purpose of testing a theory or belief. Hypotheses are stated in terms of the population parameter to be tested, like the population mean, μ. For example, a researcher may be interested in the mean daily return on stock options. Hence, the hypothesis may be that the mean daily return on a portfolio of stock options is positive.

Hypothesis testing procedures, based on sample statistics and probability theory, are used to determine whether a hypothesis is a reasonable statement and should not be rejected or if it is an unreasonable statement and should be rejected. The process of hypothesis testing consists of a series of steps shown in Figure 3.

Figure 3: Hypothesis Testing Procedure¹

The Null Hypothesis and Alternative Hypothesis

The null hypothesis, designated H0, is the hypothesis that the researcher wants to reject. It is the hypothesis that is actually tested and is the basis for the selection of the test statistics. The null is generally stated as a simple statement about a population parameter. Typical statements of the null hypothesis for the population mean include H0: μ = μ0, H0: μ ≤ μ0, and H0: μ ≥ μ0, where μ is the population mean and μ0 is the hypothesized value of the population mean. The null hypothesis always includes the = sign.

The alternative hypothesis, designated HA, is what is concluded if there is sufficient evidence to reject the null hypothesis. It is usually the alternative hypothesis that you are really trying to assess. Why? Since you can never really prove anything with statistics, when the null hypothesis is discredited, the implication is that the alternative hypothesis is valid.

One-Tailed and Two-Tailed Tests of Hypotheses

The alternative hypothesis can be one-sided or two-sided. A one-sided test is referred to as a one-tailed test, and a two-sided test is referred to as a two-tailed test. Whether the test is one- or two-sided depends on the proposition being tested. If a researcher wants to test whether the return on stock options is greater than zero, a one-tailed test should be used. However, a two-tailed test should be used if the research question is whether the return on options is simply different from zero. Two-sided tests allow for deviation on both sides of the hypothesized value (zero). In practice, most hypothesis tests are constructed as two-tailed tests.

A two-tailed test for the population mean may be structured as:

H0: μ = μ0 versus HA: μ ≠ μ0

1. Wayne W. Daniel and James C. Terrell, Business Statistics, Basic Concepts and Methodology, Houghton Mifflin, Boston, 1997.


Since the alternative hypothesis allows for values above and below the hypothesized parameter, a two-tailed test uses two critical values.

The general decision rule for a two-tailed test is:

Reject H0 if: test statistic > upper critical value or test statistic < lower critical value

Let’s look at the development of the decision rule for a two-tailed test using a z-distributed test statistic (a z-test) at a 5% level of significance, α = 0.05.

At α = 0.05, the computed test statistic is compared with the critical z-values of ±1.96. The values of ±1.96 correspond to ±zα/2 = ±z0.025, which is the range of z-values within which 95% of the probability lies. These values are obtained from the cumulative probability table for the standard normal distribution (z-table), which is included at the back of this book.

If the computed test statistic falls outside the range of critical z-values (i.e., test statistic > 1.96, or test statistic < –1.96), we reject the null and conclude that the sample statistic is sufficiently different from the hypothesized value.

If the computed test statistic falls within the range ±1.96, we conclude that the sample statistic is not sufficiently different from the hypothesized value (μ = μ0 in this case), and we fail to reject the null hypothesis.

The decision rule (rejection rule) for a two-tailed z-test at α = 0.05 can be stated as:

Reject H0 if test statistic < –1.96 or if test statistic > 1.96.

Figure 4 shows the standard normal distribution for a two-tailed hypothesis test using the z-distribution. Notice that the significance level of 0.05 means that there is 0.05 / 2 = 0.025 probability (area) under each tail of the distribution beyond ±1.96.

Figure 4: Two-Tailed Hypothesis Test Using the Standard Normal (z) Distribution


Example: Two-tailed test

A researcher has gathered data on the daily returns on a portfolio of call options over a recent 250-day period. The mean daily return has been 0.1%, and the sample standard deviation of daily portfolio returns is 0.25%. The researcher believes that the mean daily portfolio return is not equal to zero. Construct a hypothesis test of the researcher’s belief. Assume the test statistic is equal to 6.33.

Answer:

First we need to specify the null and alternative hypotheses. The null hypothesis is the one the researcher expects to reject.

H0: μ0 = 0 versus HA: μ0 ≠ 0

Since the null hypothesis is an equality, this is a two-tailed test. At a 5% level of significance, the critical z-values for a two-tailed test are ±1.96, so the decision rule can be stated as:

Reject H0 if +1.96 < test statistic or if test statistic < –1.96

Our test statistic is 6.33.

Since 6.33 > 1.96, we reject the null hypothesis that the mean daily option return is equal to zero. Note that when we reject the null, we conclude that the sample value is significantly different from the hypothesized value. We are saying that the two values are different from one another after considering the variation in the sample. That is, the mean daily return of 0.001 is statistically different from zero given the sample’s standard deviation and size.
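The test statistic quoted in the example can be reproduced from the sample figures (a sketch; the small difference from 6.33 is rounding):

```python
import math

x_bar, s, n = 0.001, 0.0025, 250      # mean daily return, sample std dev, sample size
mu_0 = 0.0                            # hypothesized mean under the null

standard_error = s / math.sqrt(n)
test_statistic = (x_bar - mu_0) / standard_error
print(round(test_statistic, 2))       # ~6.32
print(abs(test_statistic) > 1.96)     # True -> reject H0 at the 5% significance level
```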

For a one-tailed hypothesis test of the population mean, the null and alternative hypotheses are either:

Upper tail: H0: μ ≤ μ0 versus HA: μ > μ0, or
Lower tail: H0: μ ≥ μ0 versus HA: μ < μ0

The appropriate set of hypotheses depends on whether we believe the population mean, μ, to be greater than (upper tail) or less than (lower tail) the hypothesized value, μ0. Using a z-test at the 5% level of significance, the computed test statistic is compared with the critical values of 1.645 for upper tail tests (i.e., HA: μ > μ0) or –1.645 for lower tail tests (i.e., HA: μ < μ0). These critical values are obtained from a z-table, where –z0.05 = –1.645 corresponds to a cumulative probability equal to 5%, and z0.05 = 1.645 corresponds to a cumulative probability of 95% (1 – 0.05).


Let’s use the upper tail test structure where H0: μ ≤ μ0 and HA: μ > μ0.

If the calculated test statistic is greater than 1.645, we conclude that the sample statistic is sufficiently greater than the hypothesized value. In other words, we reject the null hypothesis.

If the calculated test statistic is less than 1.645, we conclude that the sample statistic is not sufficiently different from the hypothesized value, and we fail to reject the null hypothesis.

Figure 5 shows the standard normal distribution and the rejection region for a one-tailed test (upper tail) at the 5% level of significance.

Figure 5: One-Tailed Hypothesis Test Using the Standard Normal (z) Distribution

Example: One-tailed test

Perform a z-test using the option portfolio data from the previous example to test the belief that option returns are positive.

Answer:

In this case, we use a one-tailed test with the following structure:

H0: μ ≤ 0 versus HA: μ > 0

The appropriate decision rule for this one-tailed z-test at a significance level of 5% is:

Reject H0 if test statistic > 1.645

The same test statistic is used, regardless of whether we are using a one-tailed or two-tailed test. From the previous example, we know that the test statistic for the option return sample is 6.33. Since 6.33 > 1.645, we reject the null hypothesis and conclude that mean returns are statistically greater than zero at a 5% level of significance.


The Choice of the Null and Alternative Hypotheses

The most common null hypothesis will be an “equal to” hypothesis. Combined with a “not equal to” alternative, this will require a two-tailed test. The alternative is often the hoped-for hypothesis. When the null is that a coefficient is equal to zero, we hope to reject it and show the significance of the relationship.

When the null is less than or equal to, the (mutually exclusive) alternative is framed as greater than, and a one-tail test is appropriate. If we are trying to demonstrate that a return is greater than the risk-free rate, this would be the correct formulation. We will have set up the null and alternative hypotheses so that rejection of the null will lead to acceptance of the alternative, our goal in performing the test.

Tests of Significance

Hypothesis testing involves two statistics: the test statistic calculated from the sample data and the critical value of the test statistic. The value of the computed test statistic relative to the critical value is a key step in assessing the validity of a hypothesis.

A test statistic is calculated by comparing the point estimate of the population parameter with the hypothesized value of the parameter (i.e., the value specified in the null hypothesis). With reference to our option return example, this means we are concerned with the difference between the mean return of the sample (i.e., x̄ = 0.001) and the hypothesized mean return (i.e., μ0 = 0). As indicated in the following expression, the test statistic is the difference between the sample statistic and the hypothesized value, scaled by the standard error of the sample statistic.

\text{test statistic} = \frac{\text{sample statistic} - \text{hypothesized value}}{\text{standard error of the sample statistic}}

The standard error of the sample statistic is the adjusted standard deviation of the sample. When the sample statistic is the sample mean, x̄, the standard error of the sample statistic for sample size n is calculated as:

\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}

when the population standard deviation, σ, is known, or

s_{\bar{x}} = \frac{s}{\sqrt{n}}

when the population standard deviation, σ, is not known. In this case, it is estimated using the standard deviation of the sample, s.

Professor’s Note: Don’t be confused by the notation here. A lot of the literature you will encounter in your studies simply uses the term σx̄ for the standard error of the test statistic, regardless of whether the population standard deviation or sample standard deviation was used in its computation.


Topic 39: Cross Reference to GARP Assigned Reading – Tuckman, Chapter 5

Key Concepts

1. The DV01 (equivalently, the PVBP) is the absolute value of the price change of a bond from a one basis point change in yield. The DV01 is an adequate measure of price volatility or interest rate sensitivity since it corresponds to such a small change in yield.

2. The hedge ratio when hedging a bond with another bond is calculated as:

HR = \frac{\text{DV01 (initial position)}}{\text{DV01 (hedging instrument)}}

3. The formula for effective duration is:

\text{duration} = \frac{BV_{-\Delta y} - BV_{+\Delta y}}{2 \times BV_0 \times \Delta y}

4. Convexity is a measure of the degree of curvature or convexity in the price/yield relationship:

\text{convexity} = \frac{BV_{-\Delta y} + BV_{+\Delta y} - 2 \times BV_0}{BV_0 \times \Delta y^2}

5. Combining the standard convexity measure with a measure of duration provides more accurate estimates of bond price changes, particularly when the change in yield is relatively large (see the code sketch after this list):

\text{percentage price change} \approx \text{duration effect} + \text{convexity effect} = \left[-\text{duration} \times \Delta y \times 100\right] + \left[\tfrac{1}{2} \times \text{convexity} \times \Delta y^2 \times 100\right]

6. The use of convexity allows asset-liability management managers to better match the characteristics of their asset stream with that of their liabilities.

7. The portfolio duration of a bond portfolio is the value-weighted average of the durations of the bonds in the portfolio.

8. When hedging a bond that exhibits positive (negative) convexity with a bond that exhibits negative (positive) convexity, be cautious of interest rate movements. The bond’s hedge may not be adequate after interest rates change.
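The effective duration, convexity, and price-change approximations in points 3–5 can be written as small Python functions (an illustrative sketch; the function names are assumptions). The sample call reproduces Concept Checker 4 below.

```python
def effective_duration(bv_down: float, bv_up: float, bv_0: float, dy: float) -> float:
    """Effective duration from repricings at -dy (bv_down) and +dy (bv_up)."""
    return (bv_down - bv_up) / (2 * bv_0 * dy)

def effective_convexity(bv_down: float, bv_up: float, bv_0: float, dy: float) -> float:
    """Effective convexity from the same repricings."""
    return (bv_down + bv_up - 2 * bv_0) / (bv_0 * dy ** 2)

def pct_price_change(duration: float, convexity: float, dy: float) -> float:
    """Duration-plus-convexity estimate of the percentage price change for a yield change dy."""
    return -duration * dy * 100 + 0.5 * convexity * dy ** 2 * 100

# Duration 7, convexity 243, yield up 25 bp (Concept Checker 4 below).
print(round(pct_price_change(7, 243, 0.0025), 2))   # ~ -1.67 (a 1.67% decrease)
```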


Concept Checkers

Use the following information to answer Questions 1 and 2.

An investor has a short position valued at $100 in a 10-year, 5% coupon, T-bond with a YTM of 7%. Assume discounting occurs on a semiannual basis.

1. Which of the following is closest to the dollar value of a basis point (DV01)?
A. 0.065.
B. 0.056.
C. 0.047.
D. 0.033.

2. Using a 20-year T-bond with a DV01 of 0.085 to hedge the interest rate risk in the 10-year bond mentioned above, which of the following actions should the investor take?
A. Buy $130.75 of the hedging instrument.
B. Sell $130.75 of the hedging instrument.
C. Buy $76.50 of the hedging instrument.
D. Sell $76.50 of the hedging instrument.

3. The duration of a portfolio can be computed as the sum of the value-weighted durations of the bonds in the portfolio. Which of the following is the most limiting assumption of this methodology?
A. All the bonds in the portfolio must change by the same yield.
B. The yields on all the bonds in the portfolio must be perfectly correlated.
C. All the bonds in the portfolio must be in the same risk class or along the same yield curve.
D. The portfolio must be equally weighted.

4. Estimate the percentage price change in bond price from a 25 basis point increase in yield on a bond with a duration of 7 and a convexity of 243.
A. 1.67% decrease.
B. 1.67% increase.
C. 1.75% increase.
D. 1.75% decrease.

5. An investor is estimating the interest rate risk of a 14% semiannual pay coupon bond with 6 years to maturity. The bond is currently trading at par. The effective duration and effective convexity of the bond for a 25 basis point increase and decrease in yield are closest to:
     Duration   Convexity
A.   3.970      23.20
B.   3.740      23.20
C.   3.970      20.80
D.   3.740      20.80


Concept Checker Answers

1. A For a 7% bond, N = 10 × 2 = 20; I/Y = 7/2 = 3.5%; PMT = 5/2 = 2.5; FV = 100; CPT PV = –85.788

For a 7.01% bond, N = 20; I/Y = 7.01/2 = 3.505%; PMT = 2.5; FV = 100; CPT PV = –85.723

For a 6.99% bond, N = 20; I/Y = 6.99/2 = 3.495%; PMT = 2.5; FV = 100; CPT PV = –85.852

DV01+Δy = | 85.788 – 85.723 | = 0.065

DV01-Δy = | 85.788 – 85.852 | = 0.064

2. C The hedge ratio is 0.065 / 0.085 = 0.765. Since the investor has a short position in the bond, this means the investor needs to buy $0.765 of par value of the hedging instrument for every $1 of par value for the 10-year bond.

3. B A significant problem with using portfolio duration is that it assumes all yields for every bond in the portfolio are perfectly correlated. However, it is unlikely that yields across national borders are perfectly correlated.

4. A  \Delta V\% \approx \left[-7 \times 0.0025 \times 100\right] + \left[\tfrac{1}{2} \times 243 \times (0.0025)^2 \times 100\right] = -1.67\%

5. C N = 12; PMT = 7; FV = 100; I/Y = 13.75/2 = 6.875%; CPT PV = 100.999

N = 12; PMT = 7; FV = 100; I/Y = 14.25/2 = 7.125%; CPT PV = 99.014

Δy = 0.0025

Duration = \frac{100.999 - 99.014}{2 \times 100 \times 0.0025} = 3.970

Convexity = \frac{100.999 + 99.014 - 200}{100 \times (0.0025)^2} = 20.8


Challenge Problems

1. An investor enters a short position in a gold futures contract at $318.60. Each futures contract controls 100 troy ounces. The initial margin is $5,000 and the maintenance margin is $4,000. At the end of the first day the futures price rises to $329.22. Which of the following is the amount of the variation margin at the end of the first day?
A. $0.
B. $62.
C. $1,000.
D. $1,062.

2. A large-cap U.S. equity portfolio manager is concerned about near-term market conditions and wishes to reduce the systematic risk of her portfolio from 1.2 to 0.90. Her portfolio value is $56 million, and the S&P 500 futures index is currently trading at 1,050 and has a multiplier of 250. How can the portfolio manager’s objective be achieved?
A. Sell 47 contracts.
B. Buy 47 contracts.
C. Sell 64 contracts.
D. Buy 64 contracts.

3. Suppose you observe a 1-year (zero-coupon) Treasury security trading at a yield to maturity of 5% (price of 95.2381% of par). You also observe a 2-year T-note with a 6% coupon trading at a yield to maturity of 5.5% (price of 100.9232). And, finally, you observe a 3-year T-note with a 7% coupon trading at a yield to maturity of 6.0% (price of 102.6730). Assume annual coupon payments. Use the bootstrapping method to determine the 2-year and 3-year spot rates.
     2-year spot rate   3-year spot rate
A.   5.51%              5.92%
B.   5.46%              5.92%
C.   5.51%              6.05%
D.   5.46%              6.05%

4. Former Treasury Secretary Robert Rubin decided to stop issuing 30-year Treasury bonds in 2001 and to replace them by borrowing more with shorter-maturity Treasury bills and notes (although the U.S. Treasury has since resumed issuing 30-year bonds). Which of the following statements concerning this decision is most accurate?
A. If the pure expectations hypothesis of the term structure is correct, this decision will reduce the government’s borrowing cost.
B. If the liquidity theory of the term structure is correct, this decision will reduce the government’s borrowing cost.
C. If the liquidity theory of the term structure is correct, this decision will not change the government’s borrowing cost.
D. If the pure expectations hypothesis of the term structure is correct, this decision will increase the government’s borrowing cost.


5. A portfolio manager owns Macrogrow, Inc., which is currently trading at $35 per share. She plans to sell the stock in 120 days but is concerned about a possible price decline. She decides to take a short position in a 120-day forward contract on the stock. The stock will pay a $0.50 per share dividend in 35 days and $0.50 again in 125 days. The risk-free rate is 4%. The value of the trader’s position in the forward contract in 45 days, assuming in 45 days the stock price is $27.50 and the risk-free rate has not changed, is closest to:
A. $7.17.
B. $7.50.
C. $7.92.
D. $7.00.

6. A 6-month futures contract on an equity index is currently priced at 1,276. The underlying index stocks are valued at 1,250 and pay dividends at a continuously compounded rate of 1.70%. The current continuously compounded risk-free rate is 5%. The potential arbitrage is closest to:
A. 5.20.
B. 8.32.
C. 16.58.
D. 26.00.

7. All things being equal, which of the following statements would increase the value of a European put option on a nondividend paying stock?

I. A decrease in the risk-free rate over the life of the option.
II. An increase in the time to expiration of the option.

A. I only.
B. II only.
C. Both I and II.
D. Neither I nor II.

Use the following information to answer Questions 8 and 9.

Stock ABC trades for $60 and has 1-year call and put options written on it with an exercise price of $60. The annual standard deviation estimate is 10%, and the continuously compounded risk-free rate is 5%. The value of the call is $4.09.

Chefron, Inc. common stock trades for $60 and has a 1-year call option written on it with an exercise price of $60. The annual standard deviation estimate is 10%, the continuous dividend yield is 1.4%, and the continuously compounded risk-free rate is 5%.

8. The value of the put on ABC stock is closest to:
A. $1.16.
B. $3.28.
C. $4.09.
D. $1.00.


GARP FRM Practice Exam Questions: Financial Markets and Products

Professor’s Note: The following questions are from the 2008–2011 GARP FRM Practice Exams.

1. Alan bought a futures contract on a commodity on the New York Commodity Exchange on June 1. The futures price was USD 500 per unit and the contract size was 100 units per contract. Alan set up a margin account with initial margin of USD 2,000 per contract and maintenance margin of USD 1,000 per contract. The futures price of the commodity varied as shown below. What was the balance in Alan’s margin account at the end of day on June 5?

Day Futures Price (USD)

June 1 497.30

June 2 492.70

June 3 484.20

June 4 471.70

June 5 468.80

A. –USD 1,120
B. USD 0
C. USD 880
D. USD 1,710

2. In late June, Simon purchased two September silver futures contracts. Each contract size is 5,000 ounces of silver and the futures price on the date of purchase was USD 18.62 per ounce. The broker requires an initial margin of USD 6,000 and a maintenance margin of USD 4,500. You are given the following price history for the September silver futures:

Day Futures Price (USD) Daily Gain (loss)

June 29 18.62 0

June 30 18.69 700

July 1 18.03 –6,600

July 2 17.72 –3,100

July 6 18.00 2,800

July 7 17.70 –3,000

July 8 17.60 –1,000

On which days did Simon receive a margin call?
A. July 1 only
B. July 1 and July 2 only
C. July 1, July 2 and July 7 only
D. July 1, July 2 and July 8 only


3. Which of the following statements about basis risk is incorrect?
A. An airline company hedging exposure to a rise in jet fuel prices with heating oil futures contracts may face basis risk.
B. Choices left to the seller about the physical settlement of the futures contract in terms of grade of the commodity, location, chemical attributes may result in basis risk.
C. Basis risk exists when futures and spot prices change by the same amount over time and converge at maturity of the futures contract.
D. Basis risk is zero when variances of both the futures and spot process are identical and the correlation coefficient between spot and futures prices is equal to one.

4. On Nov 1, Jimmy Walton, a fund manager of an USD 60 million US medium-to-large cap equity portfolio, considers locking up the profit from the recent rally. The S&P 500 index and its futures with the multiplier of 250 are trading at USD 900 and USD 910, respectively. Instead of selling off his holdings, he would rather hedge two-thirds of his market exposure over the remaining 2 months. Given that the correlation between Jimmy’s portfolio and the S&P 500 index futures is 0.89 and the volatilities of the equity fund and the futures are 0.51 and 0.48 per year respectively, what position should he take to achieve his objective?
A. Sell 250 futures contracts of S&P 500
B. Sell 169 futures contracts of S&P 500
C. Sell 167 futures contracts of S&P 500
D. Sell 148 futures contracts of S&P 500

5. Which of the following is not a source of basis risk when using futures contracts for hedging?
A. Differences between the asset whose price is being hedged and the asset underlying the futures contract.
B. Uncertainty about the exact date when the asset being hedged will be bought or sold.
C. The inability of managers to forecast the price of the underlying.
D. The need to close the futures contract before its delivery date.

6. On Nov 1, Dane Hudson, a fund manager of an USD 50 million US large cap equity portfolio, considers locking up the profit from the recent rally. The S&P 500 index and its futures with the multiplier of 250 are trading at USD 1,000 and USD 1,100, respectively. Instead of selling off his holdings, he would rather hedge his market exposure over the remaining 2 months. Given that the correlation between Dane’s portfolio and the S&P 500 index futures is 0.92 and the volatilities of the equity fund and the futures are 0.55 and 0.45 per year respectively, what position should he take to achieve his objective?
A. Sell 40 futures contracts of S&P 500
B. Sell 135 futures contracts of S&P 500
C. Sell 205 futures contracts of S&P 500
D. Sell 355 futures contracts of S&P 500


Book 3 Formulas

Financial Markets and Products

call option payoff: CT = max (0, ST – X)

put option payoff: PT = max (0, X – ST)

forward contract payoff: payoff = ST – K

where:
ST = spot price at maturity
K = delivery price

basis = spot price of asset being hedged – futures price of contract used in hedge

hedge ratio: HR = \rho_{S,F} \times \frac{\sigma_S}{\sigma_F}

beta: \beta_{S,F} = \frac{\text{Cov}_{S,F}}{\sigma_F^2}

correlation: \rho_{S,F} = \frac{\text{Cov}_{S,F}}{\sigma_S \sigma_F}

hedging with stock index futures:

\text{number of contracts} = \beta_{\text{portfolio}} \times \frac{\text{portfolio value}}{\text{value of futures contract}} = \beta_{\text{portfolio}} \times \frac{\text{portfolio value}}{\text{futures price} \times \text{contract multiplier}}

adjusting the portfolio beta: \text{number of contracts} = (\beta^* - \beta_P) \times \frac{\text{portfolio value}}{\text{value of futures contract}}

discrete compounding: FV = A\left(1 + \frac{R}{m}\right)^{m \times n}

continuous compounding: FV = Ae^{R \times n}

forward rate agreement:
cash flow (if receiving R_K) = L \times (R_K - R) \times (T_2 - T_1)
cash flow (if paying R_K) = L \times (R - R_K) \times (T_2 - T_1)

forward price: F_0 = S_0e^{rT}
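Two of these formulas expressed in Python for reference (a sketch with made-up inputs; the function names and example values are assumptions, not tied to any problem above):

```python
import math

def contracts_to_adjust_beta(target_beta: float, portfolio_beta: float,
                             portfolio_value: float, futures_price: float,
                             multiplier: float) -> float:
    """Number of index futures contracts needed to shift a portfolio's beta to the target."""
    return (target_beta - portfolio_beta) * portfolio_value / (futures_price * multiplier)

def forward_price(spot: float, r: float, t: float) -> float:
    """Forward price on a non-dividend-paying asset under continuous compounding: F0 = S0 * e^(rT)."""
    return spot * math.exp(r * t)

# Illustrative inputs only.
print(round(contracts_to_adjust_beta(0.9, 1.2, 10_000_000, 1_000, 250), 1))  # -12.0 -> sell 12 contracts
print(round(forward_price(100, 0.05, 0.5), 2))                               # ~102.53
```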
