Financial Market Analytics


  • FINANCIAL MARKET

    ANALYTICS

    JOHN L. TEALL

    Quorum Books

    Westport, Connecticut London

  • Library of Congress Cataloging-in-Publication Data

    Teall, John L., 1958–
    Financial market analytics / John L. Teall.
    p. cm. Includes bibliographical references and index.
    ISBN 1-56720-198-9 (alk. paper)
    1. Investments--Mathematics. 2. Business mathematics. I. Title.

    HG4515.3.T43 1999 332.6/0151--dc21 98-23975

    British Library Cataloguing in Publication Data is available.

    Copyright 1999 by John L. Teall

    All rights reserved. No portion of this book may be reproduced, by any process or technique, without the express written consent of the publisher.

    Library of Congress Catalog Card Number: 98-23975 ISBN: 1-56720-198-9

    First published in 1999

    Quorum Books, 88 Post Road West, Westport, CT 06881 An imprint of Greenwood Publishing Group, Inc. Printed in the United States of America

    The paper used in this book complies with the Permanent Paper Standard issued by the National Information Standards Organization (Z39.48-1984).

    In order to keep this title in print and available to the academic community, this edition was produced using digital reprint technology in a relatively short print run. This would not have been attainable using traditional methods. Although the cover has been changed from its original appearance, the text remains the same and all materials and methods used still conform to the highest book-making standards.

  • CONTENTS

    Preface ix

    1 Introduction and Overview 1 1.A Analytics and the Scientific Method in Finance 1 1.B Financial Models 3 1.C Empirical Studies 4 1.D Research in Finance 5 1.E Applications and Organization of this Book 13

    2 Preliminary Analytical Concepts 15 2.A Time Value Mathematics 15 2.B Geometric Series and Expansions 17

    Application 2.1 Annuities and Perpetuities 18 Application 2.2 Growth Models 19 Application 2.3 Money and Income Multipliers 20

    2.C Return Measurement 21 2.D Mean, Variance and Standard Deviation 23

    Application 2.4 Risk Measurement 24 2.E Comovement Statistics 26

    Application 2.5 Security Comovement 27 2.F Introduction to Simple OLS Regressions 29

    Application 2.6 Relative Risk Measurement 30 Exercises 32


    3 Elementary Portfolio Mathematics 37 3.A Introduction to Portfolio Analysis 37 3.B Single Index Models 40 3.C Multi-Index Models 43

    Exercises 45

    4 Matrix Mathematics 49 4.A Matrices, Vectors and Scalars 49

    Application 4.1 Portfolio Mathematics 50 4.B Addition, Subtraction and Transposes of Matrices 50 4.C Multiplication of Matrices 52

    Application 4.1 (continued) Portfolio Mathematics 52 4.D Inversion of Matrices 54 4.E Solving Systems of Equations 55

    Application 4.2 Coupon Bonds and Deriving Yield Curves 57 Application 4.3 Arbitrage with Riskless Bonds 59 Application 4.4 Fixed Income Portfolio Dedication 60

    4.F Vectors, Vector Spaces and Spanning 61 Application 4.5 The State Preference Model 65 Application 4.6 Binomial Option Pricing 69 Application 4.7 Put-Call Parity 71

    4.G Orthogonal Vectors 73 Application 4.8 Arbitrage Pricing Theory 74 Exercises 78

    5 Differential Calculus 83 5.A Functions and Limits 83

    Application 5.1 The Natural Log 84 5.B Slopes, Derivatives, Maxima and Minima 84

    Application 5.2 Utility of Wealth 87 5.C Derivatives of Polynomials 89

    Application 5.3 Marginal Utility 91 Application 5.4 The Baumol Cash Management Model 91 Application 5.5 Duration 94 Application 5.6 Bond Portfolio Immunization 97 Application 5.7 Portfolio Risk and Diversification 97

    5.D Partial Derivatives 99 Application 5.8 Deriving the Simple OLS Regression Equation 99 Application 5.9 Deriving Multiple Regression Coefficients 101

    5.E The Chain Rule, Product Rule and Quotient Rule 102 Application 5.10 Plotting the Capital Market Line 104

    5.F Taylor Series Expansions 112 Application 5.11 Convexity and Immunization 113 Application 5.12 Risk Aversion Coefficients 115


    5.G The Method of Lagrange Multipliers 116 Application 5.13 Optimal Portfolio Selection 118 Application 5.14 Plotting the Capital Market Line, Second Method 119 Application 5.15 Deriving the Capital Asset Pricing Model 122 Application 5.16 Constrained Utility Maximization 124 Exercises 127 Appendix 5.A Derivatives of Polynomials 131 Appendix 5.B Rules for Finding Derivatives 132 Appendix 5.C Portfolio Risk Minimization on a Spreadsheet 133

    6 Integral Calculus 137 6.A Antidifferentiation and the Indefinite Integral 137 6.B Definite Integrals and Areas 138

    Application 6.1 Cumulative Densities 142 Application 6.2 Expected Value and Variance 144 Application 6.3 Stochastic Dominance 145 Application 6.4 Valuing Continuous Dividend Payments 149 Application 6.5 Expected Option Values 150

    6.C Differential Equations 151 Application 6.6 Continuous Time Security Returns 152 Exercises 155 Appendix 6.A Rules for Finding Integrals 157

    7 Introduction to Probability 159 7.A Random Variables and Probability Spaces 159 7.B Distributions and Moments 160 7.C Binomial Distributions 161

    Application 7.1 Estimating Probability of Option Exercise 164 7.D The Normal Distribution 166 7.E The Log-Normal Distribution 167

    Application 7.2 Common Stock Returns 167 7.F Conditional Probability 169

    Application 7.3 Option Pricing Conditional Exercise 169 Application 7.4 The Binomial Option Pricing Model 170 Exercises 173

    8 Statistics and Empirical Studies in Finance 175 8.A Introduction to Hypothesis Testing 175

    Application 8.1 Minimum Acceptable Returns 176 8.B Hypothesis Testing: Two Populations 179

    Application 8.2 Bank Ownership Structure 179 8.C Interpreting the Simple OLS Regression 181

    Application 8.3 The Capital Asset Pricing Model 184


    Application 8.4 Analysis of Weak Form Efficiency 189 Application 8.5 Portfolio Performance Evaluation 191

    8.D Multiple OLS Regressions 193 Application 8.6 Estimating the Yield Curve 198

    8.E Event Studies 199 Application 8.7 Analysis of Merger Returns 201

    8.F Models with Binary Variables 208 Exercises 211

    9 Stochastic Processes 213 9.A Random Walks and Martingales 213 9.B Binomial Processes 214 9.C Brownian Motion, Wiener and Ito Processes 215 9.D Ito's Lemma 218

    Application 9.1 Geometric Wiener Processes 221 Application 9.2 Option Prices: Estimating Exercise Probability 222 Application 9.3 Option Prices: Estimating Expected Conditional Option Prices 223 Application 9.4 Deriving the Black-Scholes Option Pricing Model 224 Exercises 230

    10 Numerical Methods 233 10.A Introduction 233 10.B The Binomial Method 233

    Application 10.1 The Binomial Option Pricing Model 235 Application 10.2 American Put Option Valuation 237

    10.C The Method of Bisection 240 Application 10.3 Estimating Bond Yields 241 Application 10.4 Estimating Implied Variances 242

    10.D The Newton-Raphson Method 245 Application 10.4 (continued) Estimating Implied Variances 246 Exercises 248

    Appendix A Solutions to End-of-Chapter Exercises 249 Appendix B Statistics Tables 293 Appendix C Notation Definitions 295 Glossary 299 References 305 Index 313

  • PREFACE

    Evolution of highly sophisticated financial markets, innovation of specialized securities and increasingly intense competition among investors have driven the development and use of highly rigorous mathematical modeling techniques. The investment community has unleashed a plethora of complicated financial instruments, mathematical models and computer algorithms, often created by so-called rocket scientists. Practitioners and researchers have learned that mathematical models are crucial to financial decision making; yet the quantitative skills of practitioners and researchers are often "rusty." University students enrolled in finance courses often feel overwhelmed when their mathematical preparation is inadequate. They all too frequently suffer difficulty with mathematics to the point where they are unable to grasp even the intuition of financial techniques. Many books, manuals and instructors have responded by watering down quantitative content. Yet, mathematics is necessary to understand the implications, variations and limitations of financial techniques. The purpose of this book is to provide background reading in a variety of elementary mathematics topics used in financial analysis. It assumes that the reader has limited or no exposure to statistics, calculus and matrix mathematics. Broad coverage includes discussions related to portfolio management, derivatives valuation, corporate finance, fixed income analysis and other issues as well.

    This book's organization by quantitative topic differs from that of other financial mathematics books, which tend to be organized by financial topic. Readers experiencing difficulty with quantitative technique typically need more review of mathematical technique to master financial technique. This book is intended to provide an informal introduction to a given mathematics topic, which is then reinforced through application to a variety of topics in finance. Coverage is broad, both in mathematics and in finance. This heterogeneity of coverage has compelled separation and spreading of the various finance topics throughout the book, but they are linked through Background Readings suggestions at the beginnings of most sections and applications. Exercises provided at the ends of chapters are intended to be completed with the assistance of a basic calculator, though a computer-based spreadsheet may be helpful in some cases.

    This book is designed to provide prerequisite or parallel reading for other books such as Elton and Gruber [1995], Hull [1997], Copeland and Weston [1988], Alexander and Francis [1986], Cox and Rubinstein [1985] and Martin, Cox and MacMinn [1988]. The book may serve as a quantitative foundation for more advanced or specialized texts such as Campbell, Lo and MacKinlay [1997], Baxter and Rennie [1996] and Neftci [1996]. Another goal of this book is to enable readers to read less technical academic and professional articles. This book should serve several purposes in financial and academic communities:

    1. As a reference book in the library of a finance practitioner likely to encounter problems of a quantitative nature.

    2. To provide quantitative support to the researcher in finance without the mathematics skills necessary to master current finance methodology. Among the researchers who may benefit from this book are academics specializing in strategic management, accounting, business policy and certain fields of economics. In addition, financial engineers, systems analysts in financial services firms and policy makers may benefit from material presented in this book.

    3. As a supplemental text for undergraduate and M.B.A. students likely to experience difficulties with the quantitative technique in finance courses. Material covered in this text will parallel coverage of several courses, including Financial Management, Principles of Investments, Portfolio Analysis, Options and Futures and Fixed Income Management. In some cases, this book may be appropriate as a supplement to doctoral students in finance.

    4. As a primary text for a course such as "Applied Analytical Methods in Finance."

    5. As a primary or secondary text for a prerequisite mathematics course offered by most M.B.A. programs covering linear mathematics, calculus and statistics. Most graduate business schools offer such courses to prepare students without adequate mathematics background for more quantitative aspects of their programs.

    6. As a manual for a continuing education course intended to provide coverage of finance from an analytical perspective. In addition, participants in certain certification programs may benefit from this book.

    Objectives and organizations of individual chapters are described in Section 1.E and in introductions to the chapters themselves.


    ACKNOWLEDGMENTS

    I am fortunate to have had a number of students, colleagues and friends assist and provide guidance in the preparation of this book. Steve Adams, Larry Bezviner, Michael Dang, Hyangbab Ku, Joe Mazzeo and Jay Pandit all contributed useful comments and corrections on earlier versions of the manuscript. My production editor, Deborah Whitford, was most helpful in the preparation and editing of the manuscript, as were Sang Kim, Daniel Terfossa and Philip Wong, who also furnished valuable assistance in the preparation of tables and figures. My old friends Iftekhar Hasan and John Knopf provided encouragement and advice throughout the various stages of writing this book. Ed Downe and Ken Sutrick contributed most useful comments regarding specific sections of the manuscript. I am particularly grateful for the gracious and varied contributions of Peter Knopf and T. J. Wu. And most importantly, I never get anything accomplished without constant prodding and needling from Miriam Vasquez. I would like to attribute to my friends the various errors and shortcomings that will inevitably surface in this book, but I'm afraid that they are already on the verge of casting me off. Since I feel more secure in my relationships with my lovely Anne and my lovely Emily, I'll blame them.


  • 1

    Introduction and Overview

    1.A: ANALYTICS AND THE SCIENTIFIC METHOD IN FINANCE

    Traders, merchants, farmers and financiers have used mathematics to conduct business for many centuries. Throughout most of the past two millennia, commerce has been conducted with the use of simple arithmetic, integers and fractions. Western business accounts were maintained without the use of the number zero or decimals until after the twelfth century when the Hindu-Arabic numerical system was introduced to the West. Obviously, methods for performing routine computations have improved substantially over the years. The primary mechanical device for performing computations prior to the twelfth century was the abacus; even the simplest of arithmetic operations were cumbersome to perform with the Roman numerical system. For example, suppose that a merchant needed to borrow LXXXVIII denarii from one lender and XLIV from a second. What is the total sum to be borrowed? Next, compute the sum to be repaid, assuming a IX percent interest rate compounded monthly for VII years. Obviously, the Roman numerical system is less useful for performing routine arithmetic operations than for recording numbers of units.
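The merchant's problem above can be worked in modern notation. The following sketch (the helper names are this edition's, not the author's) converts the Roman figures and compounds the loan monthly, exactly the sort of computation the Roman system made cumbersome:

```python
# Sketch of the merchant's problem: LXXXVIII = 88 and XLIV = 44 denarii,
# borrowed at IX (9) percent compounded monthly for VII (7) years.
ROMAN = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(s):
    """Convert a Roman numeral; a digit preceding a larger one is subtracted."""
    total = 0
    for i, ch in enumerate(s):
        v = ROMAN[ch]
        total += -v if i + 1 < len(s) and ROMAN[s[i + 1]] > v else v
    return total

principal = roman_to_int('LXXXVIII') + roman_to_int('XLIV')  # 88 + 44 = 132
rate, years = 0.09, 7
repaid = principal * (1 + rate / 12) ** (12 * years)         # monthly compounding
print(principal, round(repaid, 2))
```

The sum borrowed is 132 denarii; the compound-interest step anticipates the time value mathematics of Chapter 2.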

    The thirteenth-century Italian mathematician Leonardo Fibonacci (born Leonardo Pisano) introduced Arabic numerical notation to Europe in his book Liber Abaci. This treatise on arithmetic and algebra was enthusiastically received by his contemporaries, in part because it contained a wealth of practical applications. Fibonacci discussed numerous applications of the Arabic numerical system to commerce, including interest calculations, weights and measures, exchange rates and bookkeeping.

    Use of the Hindu-Arabic numerical system and simple mathematics slowly worked its way into business and finance over the centuries. However, the ability to compile and manipulate data with simple arithmetic was insufficient to properly analyze most types of financial decisions. In addition, prior to 1950, financial studies had drawn little attention from the scholarly communities. Neither academic nor professional financial literature benefited from rigorous scientific discipline. The literature tended to be primarily descriptive and anecdotal in nature, based largely on the experience and common sense of practitioners.1 Very few analytics beyond arithmetic and simple algebra were used in this literature.2 Furthermore, many of the earlier writings were inconsistent and contradictory, with no methodology to resolve inconsistencies. A more axiomatic approach to financial research and technique, based on the methodology frequently used in the physical sciences, worked its way into the financial literature in the 1950s and early 1960s. The scientific method applied to financial problem solving might be described as follows:

    1. Observe, describe and measure financial phenomena.
    2. Use previously obtained knowledge and experience to exclude all but those factors most relevant to the problem under consideration.
    3. Describe, measure and model the causes, processes and implications of these financial phenomena.
    4. Place the results of the model into some known law, framework or generalization and/or construct testable hypotheses or generalizations to explain the phenomena. This reasoning process of generalization from specific observations to form hypotheses or theories is called induction.
    5. Observe and test these descriptions, measures, models, hypotheses and generalizations empirically. Derive and test predictions of models.
    6. Revise and improve models to make better predictions.
    7. Accept or continue to revise the models.

    This axiomatic approach to the study of finance, like the study of physical sciences, requires extensive use of mathematics. Mathematics provides us with a means of representing and simplifying complex financial systems in a concise and rigorous manner; it makes the study of finance far more exciting, enabling us to better understand investor motivations and behavior. Mathematics brings us closer to comprehending how and why investors behave in high-risk environments where they face stress and a variety of constraints. Many important developments in the financial industry owe their existence to the development of improved quantitative techniques. Financial engineering, techniques of option valuation, portfolio insurance, fixed income hedging strategies and index arbitrage are just a few of the modern financial developments which are highly dependent on mathematical technique. In fact, many of the quantitative developments in finance were initiated in the industry as analysts became more sophisticated in their pursuit of increased profits and improved risk management techniques.

    Financial analysis takes place in a highly competitive, uncertain environment that includes many individuals and institutions. These players interact with one another over many periods of time, which can be presumed to be infinitely divisible. While this environment is fascinating and exciting, its analysis requires application of many branches of mathematics, ranging from simple arithmetic, algebra and calculus to stochastic processes, numerical methods and probability theory. This book presents essential mathematical technique and its applications to financial analysis.

    1.B: FINANCIAL MODELS

    A model might be characterized as an artificial structure describing the relationships among variables or factors. Practically all of the methodology in this book is geared toward the development and implementation of financial models to solve financial problems. For example, the simple valuation models in Chapter 2 provide a rudimentary foundation for investment decision making, while the more sophisticated models in Chapter 9 describing stochastic processes provide an important tool to account for risk in decision making.

    The use of models is important in finance because "real world" conditions that underlie financial decisions are frequently extraordinarily complicated. Financial decision makers frequently use existing models or construct new ones that relate to the types of decisions they wish to make. Models proposing decisions which ought to be made are called normative models.3

    The purpose of models is to simulate or behave like real financial situations. When constructing financial models, analysts exclude the "real world" conditions that seem to have little or no effect on the outcomes of their decisions, concentrating on those factors which are most relevant to their situations. In some instances, analysts may have to make unrealistic assumptions in order to simplify their models and make them easier to analyze. After simple models have been constructed with what may be unrealistic assumptions, they can be modified to match more closely "real world" situations. A good financial model is one which accounts for the major factors that will affect the financial decision (a good model is complete and accurate), is simple enough for its use to be practical (inexpensive to construct and easy to understand), and can be used to predict actual outcomes. A model is not likely to be of use if it is not able to project an outcome with an acceptable degree of accuracy. Completeness and simplicity may directly conflict with one another. The financial analyst must determine the appropriate trade-off between completeness and simplicity in the model he wishes to construct.

    This book emphasizes both theoretical and empirical models as well as the mathematics required to construct them. Theoretical models are constructed to simulate or explain phenomena; empirical models are intended to evaluate or measure relationships among "real world" factors. The financial analyst may construct and use a theoretical model to provide a framework for decision making and then use an empirical model to test the theory. Analysts also use empirical models to measure financial phenomena and to evaluate financial performance.

    In finance, mathematical models are usually the easiest to develop, manipulate and modify. These models are usually adaptable to computers and electronic spreadsheets. For example, the matrix-based models in Chapter 4 are easily accommodated by popular spreadsheet programs. Numerical techniques discussed in Chapter 10 are essentially models which are used to obtain numerical solutions for other models, usually with the aid of computer software. Mathematical models are obviously most useful for those comfortable with math; the primary purpose of this book is to provide a foundation for improving the quantitative preparation of the less mathematically oriented analyst. Other models used in finance include those based on graphs and those involving simulations. However, these models are often based on or closely related to mathematical models.

    Computers have played an important role in many types of financial analysis for a number of years. Since the early 1980s, computer-based spreadsheets have been used with increasing frequency, largely due to their ease of use. Among the better-known computer-based spreadsheets are Lotus 1-2-3, Excel and Quattro Pro. An electronic spreadsheet appears on the user's computer screen as a matrix or array of columns and rows where numbers, formulas or labels are entered. These entries may be related in a number of ways. Advantages of a computer-based spreadsheet over a paper-based spreadsheet include the speed and ease of computations and revisions offered by the computer. Spreadsheet use does not require mastery of a programming language; in fact, one can begin to use a spreadsheet within a few minutes after learning a small number of elementary commands and procedures. In addition, the models in this book are perfectly adaptable to more structured programs such as BASIC, FORTRAN, PASCAL, C++, and so on.
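As a minimal illustration of how such matrix-based models map onto rows and columns, the sketch below (hypothetical weights and returns, not from the text) computes a portfolio's expected return as a weighted sum, the spreadsheet SUMPRODUCT idiom:

```python
# A sketch of the kind of matrix arithmetic in Chapter 4 that a spreadsheet
# row-and-column layout handles naturally. All numbers are hypothetical.

def dot(u, v):
    """Inner product of two equal-length vectors (SUMPRODUCT in a spreadsheet)."""
    return sum(a * b for a, b in zip(u, v))

weights = [0.5, 0.3, 0.2]          # portfolio weights; must sum to 1
exp_returns = [0.08, 0.12, 0.05]   # expected security returns

portfolio_return = dot(weights, exp_returns)
print(round(portfolio_return, 4))  # 0.086
```

The same computation in a spreadsheet would occupy one row of weights, one row of returns, and a single SUMPRODUCT cell; the programmed form scales to the full matrix models of Chapter 4.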

    1.C: EMPIRICAL STUDIES

    Empirical studies are intended to measure financial phenomena and performance and to test theories and models. Because financial studies usually concern measurements involving large numbers of firms or securities, empirical analysis makes extensive use of statistics. Financial analysts are fortunate in that they frequently have access to significant data resources. Securities markets record enormous quantities of prices and other trading statistics, firms create detailed accounting statements and various business and government agencies generate huge volumes of data pertaining to economics and commerce. Statisticians have developed highly sophisticated means of analyzing such data.

    Finance, unlike many of the social sciences, has not made extensive use of experimental methodologies. However, important contributions to the finance literature have been made by behavioral psychologists such as Daniel Kahneman and Amos Tversky, who demonstrated that individuals in their decision making tend to overemphasize recent information and trends and underemphasize prior information. Dale Griffin collaborated with Tversky on work arguing that experts tend to be more prone to overconfidence than novices while maintaining reputations for their expertise. They suggest that overconfident traders tend to be more aggressive in their trading strategies. Although use of experimental methodology is increasing in finance, the vast majority of empirical research in finance is highly dependent on statistical analysis of data.

    Many empirical tests are conducted for the purpose of testing theories and models. Scholars are concerned with testing the validity of their theories to explain the behavior and performance of financial markets. Practitioners in the financial industry benefit from testing their theories and models on historical or hypothetical data before actually investing money to implement them. Several important methodologies for empirical testing are presented in Chapter 8. This chapter discusses how financial theories can be tested based on either examination of the validity of underlying assumptions or the accuracy of predictions implied by the theories.

    1.D: RESEARCH IN FINANCE

    This section briefly reviews scholarly research in financial economics in order to provide readers lacking strong academic backgrounds in finance with resources that may prove useful for solving financial problems. It emphasizes the literature which either introduced or made extensive use of the quantitative concepts presented in this book. It is important to know how an existing body of literature can be used to solve financial problems. The reviews that follow also mention sections or applications in this book that discuss techniques related to the reviewed research.

    Early Research

    As discussed in Section 1.A, financial literature prior to 1950 was primarily descriptive, anecdotal and prescriptive in nature. However, there were some important exceptions. One early exception was Daniel Bernoulli [1738], who wrote on diminishing marginal utility and risk aversion. At a meeting of mathematicians, he proposed a problem commonly referred to as the St. Petersburg Paradox. This problem was concerned with why gamblers would pay only a finite sum for a gamble with an infinite expected value. Louis Bachelier [1900] wrote his doctoral dissertation at the Sorbonne on the distribution of stock prices and option valuation. He provided a derivation for a probability density function which was later to be known as a Wiener process (Brownian motion process with drift; see Section 9.C). The option valuation model based on this process was quite similar to the better known and more recently developed Black-Scholes pricing model (Applications 9.3 and 9.4). His Brownian motion derivation predated the better-publicized derivation of Brownian motion by Albert Einstein. Unfortunately, his research was ignored until the early 1950s, when Leonard Savage and Paul Samuelson discussed the distribution of security prices and Case Sprenkle [1961] wrote his doctoral dissertation on option valuation.

    Irving Fisher [1896, 1907, 1930] wrote important treatises on the theory of interest rates and the internal rate of return (See Section 2.C). His 1896 paper set forth the Expectations Hypothesis of the term structure of interest rates (see Application 4.2 on deriving spot and forward rates). The well-known Fisher Separation Theorem demonstrates that the individual investment decision can be made independently of consumption preferences over time. This important theorem extends Application 5.16 on constrained utility maximization by introducing different asset investment classes. Alfred Cowles [1933] and Holbrook Working [1934] were statisticians who were concerned with capital markets efficiency, or more specifically, the random movement of stock prices. Their tests were somewhat similar to those described in Application 8.4 on weak form market efficiency.

    The axiomatic approach to financial research was in its infancy during the 1950s and early 1960s. Harry Markowitz [1952, 1959] is regarded as the originator of Modern Portfolio Theory. His research is the basis for Section 4.A and Application 5.7 regarding portfolio mathematics. Writing his doctoral dissertation in statistics, Markowitz described the impact on portfolio diversification of increasing the number of securities in a portfolio. His model also detailed the importance of selecting uncorrelated securities for portfolio management. Treynor [1961] used the results of Markowitz to value securities. Lintner [1956] and Gordon [1959] provided important research on corporate dividend policy and the valuation of corporate shares. Application 2.2 includes a derivation of the Gordon Stock Pricing Model.
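Markowitz's point about uncorrelated securities can be sketched for two assets. The numbers below are hypothetical; the point is that portfolio risk falls as the correlation between the securities falls, the diversification effect developed formally in Application 5.7:

```python
import math

# Two-security portfolio standard deviation (hypothetical inputs):
# var = w1^2*s1^2 + w2^2*s2^2 + 2*w1*w2*rho*s1*s2
def portfolio_std(w1, s1, s2, rho):
    w2 = 1.0 - w1
    var = w1**2 * s1**2 + w2**2 * s2**2 + 2 * w1 * w2 * rho * s1 * s2
    return math.sqrt(var)

# Equal weights, each security with 20% volatility:
print(round(portfolio_std(0.5, 0.20, 0.20, 1.0), 4))  # perfectly correlated: 0.2
print(round(portfolio_std(0.5, 0.20, 0.20, 0.0), 4))  # uncorrelated: 0.1414
```

With perfect correlation there is no diversification benefit; with zero correlation the same two securities produce a portfolio roughly 29 percent less risky.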

    Modigliani and Miller [1958, 1961, 1963] were major innovators in corporate finance, particularly on issues related to dividend policy and capital structure. They were the first to offer a proof based on the Law of One Price, a concept used throughout this book, particularly in Chapters 4 and 9. Their papers demonstrated the irrelevance of corporate capital structure and dividend policies in perfect markets. Kenneth Arrow and Gerard Debreu [1954] and Debreu [1959] published in the economics literature a model for pricing commodities. This model was applied to the valuation of corporate assets and securities by Arrow [1964] and Hirshleifer [1964, 1965]. Application 4.5 on State Preference Theory is derived from technique presented in these papers. Tobin [1958] derived the Efficient Frontier and Capital Market Line based on the work of Markowitz. His model, which forms the basis of Applications 5.10 and 5.14, suggests that all investors in a market, no matter how differently they feel about risk, will hold the same stocks in the same proportions as long as they maintain identical expectations regarding the future. Investor portfolios will differ only in their relative proportions of stocks and bonds.


    Statisticians and econometricians have long been fascinated by the tremendous amounts of financial data made available by the various data services. For example, Kendall [1953], Muth [1961] and Eugene Fama [1965] all wrote on capital markets efficiency and the random nature of stock price returns. Samuelson [1965a] and Fama [1965] modeled asset price dynamics as a submartingale (See Section 9.A), where the best forecast for a future asset price is the current price with a "fair" return; price histories are otherwise irrelevant in forming forecasts.
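The submartingale idea can be illustrated with a one-step binomial sketch (hypothetical numbers, anticipating Sections 9.A and 9.B): under the risk-neutral probability, the discounted expected future price equals the current price, so the price history adds nothing to the forecast beyond the current price and a "fair" return:

```python
# One-step binomial illustration of the submartingale property.
# All inputs are hypothetical.
s0, u, d, r = 100.0, 1.10, 0.95, 0.02   # current price, up/down factors, fair return

q = ((1 + r) - d) / (u - d)             # risk-neutral up probability
expected_s1 = q * u * s0 + (1 - q) * d * s0

# Discounting the expected future price at the fair return recovers s0.
print(round(expected_s1 / (1 + r), 6))
```

Whatever path the price took to reach s0 is irrelevant: the best discounted forecast of tomorrow's price is today's price.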

    The Major Breakthroughs

    The major breakthroughs in financial research were in large part due to the more scientific approaches to financial analysis. As suggested earlier, Markowitz provided the major breakthrough in portfolio analysis. The Capital Asset Pricing Model (CAPM) extended the work of Markowitz and Tobin to provide an important theory of capital markets equilibrium, enabling investors to value securities. The model states that security returns are linearly related to returns on the market and that firm-specific risks do not affect security prices. Developed independently by Sharpe [1964], Lintner [1965] and Mossin [1966], the model remains a hotly disputed issue in the financial literature. The derivation of Sharpe is the basis for Application 5.15, and Application 8.3 provides an example for its computation.
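The linear relation the CAPM asserts can be stated as a one-line sketch (the inputs below are hypothetical; the derivation itself is Application 5.15):

```python
# CAPM: expected security return = riskless rate + beta * market risk premium.
# Hypothetical inputs: 5% riskless rate, beta of 1.2, 11% expected market return.
def capm_expected_return(rf, beta, rm):
    return rf + beta * (rm - rf)

print(round(capm_expected_return(0.05, 1.2, 0.11), 3))  # 0.122
```

Only the security's sensitivity to the market (beta) is priced; firm-specific risk, being diversifiable, earns no premium in this framework.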

    The Black-Scholes Options Pricing Model is based on the construction of perfectly hedged portfolios (See Applications 4.6 and 9.4) and has been applied to the valuation of corporate securities. The perfect hedge and the equilibrium pricing framework are important features distinguishing the Black-Scholes paper from earlier ones by Sprenkle [1961] and Samuelson [1965b]. The publication of the Black and Scholes paper coincided with the 1973 opening of the Chicago Board Options Exchange, the first and still largest stock options market. In addition, Black and Scholes applied their model to the valuation of risky debt and equity securities in the limited liability firm.

    A third model of equilibrium in financial markets, the Arbitrage Pricing Theory (APT), was provided by Stephen Ross [1976]. This model is based on a form of the Law of One Price, which, in general, states that investments generating identical cash flow structures should be valued identically. The APT provided a simple and more general theory of equilibrium in capital markets than the CAPM. The Arbitrage Pricing Theory is derived in Application 4.8.

    Utility Analysis and Risk Measurement

    Utility analysis is concerned with how people make and rank choices (See Applications 5.1 and 5.2). Bernoulli [1738] wrote the first paper on the relationships among diminishing marginal utility, risk aversion and expected value in an uncertain environment. John von Neumann and Oskar Morgenstern [1944] set forth axioms of utility analysis and maximization. John Pratt [1964], following Arrow [1964] and Markowitz [1952], discussed determination of risk premiums (See Application 5.12) based on utility functions of risk-averse individuals. These concepts were significant in the development of the Efficient Frontier and the Capital Asset Pricing Model.

    Financial analysts have long realized that forecasting security returns is quite difficult. Furthermore, estimating security risk can also be time consuming and problematic. For example, analysts often use the volatility of historical returns as a surrogate for ex-ante risk as in Application 2.4. Two difficulties associated with the traditional sample estimator procedure, time required for computation and arbitrary selection of returns from which to compute volatilities, may be dealt with through the use of extreme value estimators (e.g., Parkinson [1980]). Latane and Rendleman [1976] suggest using volatilities implied by option pricing models, as in Application 10.4.

    Portfolio Analysis

    John Lintner [1965] performed the first empirical test of the Capital Asset Pricing Model using a two-stage regression. He rejected the CAPM based on his tests; however, his two-stage regression procedure was performed on individual stocks rather than portfolios, allowing beta estimation errors to cloud his results. Black, Jensen and Scholes [1972] found evidence to support the CAPM based on their test of portfolios. Fama and MacBeth [1973] found that while the riskless rate and beta explained the structure of security returns, beta squared and unsystematic variances did not. This lent support to the validity of the CAPM.

    In a widely quoted paper, Richard Roll [1977] presented an important criticism of the earlier CAPM tests. Essentially, he concluded that CAPM tests are flawed in that the market portfolio has not been properly specified. Market indices which have been used in tests are not identical to the actual market portfolio, and CAPM tests are very sensitive to the selected index. Furthermore, the linear relationship between security returns and betas predicted by the CAPM must hold if the selected index is mean-variance efficient. Hence, according to Roll, the only valid CAPM test is whether the market portfolio is efficient, though performance of such a test is complicated due to the inability to properly specify the market.

    Several studies of multifactor and APT models, including Chen, Roll and Ross [1986], have suggested that more than one index is needed to explain the correlation structure of security returns (see Section 3.C and Application 4.8). Fama and French [1992] found that stock betas did not explain long-term return relationships, although firm size and market-to-book ratios did. A number of more recent studies have been published and are underway (e.g., Kothari, Shanken and Sloan [1995], who found that the relationship between portfolio returns and beta is much stronger when annual rather than monthly returns are used).

    The APT equilibrium asset pricing model of Ross does not require assumptions as restrictive as the CAPM requires. The APT states that security returns will be linearly related to a series of factors, but does not state what those factors are. Roll and Ross [1980] and Chen, Roll and Ross [1986] use factor analysis in their tests, finding that the APT was supported by their data.

    CAPM makes very restrictive assumptions regarding investor utility functions and/or security return distributions, while APT requires that security returns be linearly related to index values. The concept of stochastic dominance (Application 6.3) may be used without such restrictive assumptions by investors in choosing among portfolios (see Whitmore and Findlay [1978], Hadar and Russell [1969] and Meyer [1977]). First order stochastic dominance rules apply to all investors who prefer more wealth to less. Hanoch and Levy [1969] prove that second order stochastic dominance rules can be used by risk-averse investors who prefer more wealth to less.

    Fixed Income Analytics

    The term structure of interest rates is concerned with the relationship between fixed income instrument yields and their terms to maturity. Term structure models can be used to project future interest rates and to construct hedges when interest rates are ex-ante unknown. The Expectations Hypothesis regarding the term structure of interest rates (Fisher [1896], Lutz [1940] and Application 4.2) states that long-term interest rates are a geometric mean of current and projected short rates. This hypothesis is supplemented by the Liquidity Premium Hypothesis (Keynes [1936], Hicks [1946]) and the Market Segmentation Hypothesis (Walker [1954], Modigliani and Sutch [1966]). The Liquidity Premium Hypothesis states that long rates tend to exceed the geometric mean of current and projected short rates due to investor preferences to invest short term to avoid risk. The Market Segmentation Hypothesis states that long and short rates depend on supply and demand conditions for short term and long term debt.

    Macaulay [1938] and Hicks [1946] developed the simple duration model, which measures the sensitivity of a bond's price to interest rates. Bierwag [1977] discusses fixed income portfolio immunization techniques (see Applications 5.5, 5.6 and 5.11).

    Derivative Securities

    A derivative security may be defined simply as an instrument whose payoff or value is a function of that of another security, index or value. There exist a huge variety of derivative securities, including (but not limited to) options, futures contracts and swap contracts. Stock options are one of the more popular types of derivative securities. The model of Black and Scholes [1973] was unique in that it is based on the construction of perfectly hedged portfolios. The perfectly hedged portfolio should earn the riskless rate of return. Thus, unlike earlier models, that of Black and Scholes is an equilibrium asset pricing model. They also applied their model to the valuation of limited liability corporation debt and equity securities, realizing that the equity position in a limited liability stock firm is analogous to a call option to purchase the firm's assets. Later, this concept was applied to a variety of other types of assets and contracts.

    Black and Scholes [1972] performed the first empirical test of the Black-Scholes model on over-the-counter dividend-protected call options. Although their model seemed to work quite well, Black and Scholes suggested that a large fraction of the deviation of actual options prices from formula values could be explained by transactions costs. Also, they found that the model overestimated values of calls on high risk stocks. Stoll [1969] found that the put-call parity relation (see Application 4.7) performed reasonably well. Smith [1976] and Whaley [1982] provide excellent reviews of the early empirical literature on option pricing. Essentially, they found that the Black-Scholes model, with various modifications, works quite well in determining option values.

    Cox, Ross and Rubinstein [1979] derive the Binomial Option Pricing Model, which has proven particularly useful in the valuation of American options (see Applications 4.6, 7.1, 7.4, 10.1 and 10.2). As the number of time periods in the lattice approaches infinity, the results of the Binomial Model approach those of the Black-Scholes Model.
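This convergence is easy to verify numerically. The sketch below is only an illustration under assumed parameter values (a one-year at-the-money call with a 5% riskless rate and 20% volatility); the function names are mine, not the text's:

```python
import math

def binomial_call(S, K, T, r, sigma, n):
    """European call value on an n-step Cox-Ross-Rubinstein lattice."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))    # up-move factor
    d = 1 / u                              # down-move factor
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral probability of an up move
    total = 0.0
    for j in range(n + 1):                 # j = number of up moves over n periods
        prob = math.comb(n, j) * p ** j * (1 - p) ** (n - j)
        total += prob * max(S * u ** j * d ** (n - j) - K, 0.0)
    return math.exp(-r * T) * total        # discount the expected terminal payoff

def black_scholes_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes value of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

# As the number of lattice periods grows, the two values converge
for n in (10, 100, 500):
    print(n, binomial_call(100, 100, 1.0, 0.05, 0.2, n))
print("Black-Scholes:", black_scholes_call(100, 100, 1.0, 0.05, 0.2))
```

With a few hundred steps the lattice value agrees with the closed-form value to within about a cent for these parameters.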

    Forward markets provide for future transactions to buy or sell assets. Futures contracts might be regarded as standardized forward contracts, which are normally traded on exchanges and provide for margin requirements and marking to the market. Why do futures markets exist? Keynes [1923] and Hicks [1946] argue that producer risk aversion provides an incentive for producers to sell their products in advance in futures markets to avoid price uncertainty. Speculators may receive more favorable commodity prices in the process of resolving producer price uncertainty.

    The development of large numbers of other types of derivative contracts, including a variety of swaps and "exotic" options, has led to new bodies of literature. Creation of new contracts forms an important basis of what is often referred to as Financial Engineering. This area of finance deals largely with risk management. Useful review papers in this area include Smith, Smithson and Wilford [1990], Smith and Smithson [1990], Finnerty [1988] and Damodaran and Subrahmanyam [1992].

    Capital Markets Efficiency

    An Efficient Capital Market is defined as a market where security prices reflect all available information. The level of efficiency existing in a market might be characterized by the speed with which security prices reflect information of a particular type. Fama [1970] classified these types of information and defined three types of market inefficiency. Weak form inefficiency exists when security prices do not reflect historical price information; that is, an investor can generate an abnormal profit by trading based on historical price information. Semistrong form inefficiencies exist when investors can generate abnormal returns based on any publicly available information. Strong form inefficiencies exist when any information, public or private, can be used to generate abnormal trading profits.

    Granger [1968], in one of the earlier weak form efficiency tests, found a very weak relationship between historical and current prices: only .057% of a given day's variation in the log of the price relative, ln(1 + Rt), is explained by the prior day's change in the log of the price relative. His methodology is somewhat similar to that discussed in Application 8.4.

    Numerous studies, including Roll [1981], Keim [1983] and Reinganum [1981, 1983], have confirmed a January Effect. One explanation is year-end tax selling, when investors sell their "losers" at the end of the year to capture tax write-offs. Year-end tax selling bids down prices at the end of the year; prices then recover early the following year, most significantly during the first five days in January.

    There also exists substantial evidence that smaller firms outperform larger firms (Banz [1981], Barry and Brown [1984] and Reinganum [1983]). For example, if one were to rank all NYSE, AMEX and NASDAQ firms by size, the smaller firms would be likely to outperform the larger firms. This effect holds after adjusting for risk as measured by beta. There is evidence that these abnormally high returns are most pronounced in January.

    Basu [1977] and Fama and French [1992] find that firms with low price-to-earnings (P/E) ratios outperform firms with higher price-to-earnings ratios. Fama and French find that the P/E ratio and firm size predict security returns significantly better than the Capital Asset Pricing Model.

    Semistrong form efficiency tests are concerned with whether security prices reflect all publicly available information. For example, how much time is required for a given type of information to be reflected in security prices? In one well-known test of market efficiency, Fama, Fisher, Jensen and Roll [1969] examined the effects of stock splits on stock prices. They argued that splits were related to more fundamental factors that affected prices. The importance of their paper stems from the development and use of the now standard event study methodology to test semistrong form efficiency. Brown and Warner [1980] compare and contrast various event study methodologies, including those discussed in Section 8.E.

    Selected Topics in Corporate Finance

    Generally, corporate finance is concerned with three types of decisions within the firm: the corporate investment decision, the corporate financing decision and the dividend decision (Van Horne [1981]). The investment or capital budgeting decision concerns how the firm will use its financial resources. Dean [1951] recommended acceptance of capital budgeting projects whose internal rates of return (IRR) exceed market-determined costs of capital. Lorie and Savage [1955] and Hirshleifer [1958] criticized the IRR rule, suggesting the Net Present Value (NPV) rule as an alternative. Sections 2.A and 2.B of this book deal with these issues. Mason and Merton [1985] demonstrated how certain options embedded in capital budgeting projects can be evaluated.

    Major developments in cash management were contributed by Baumol [1952] (see Application 5.4) and Miller and Orr [1966]. These models determine cash balances under conditions of certainty and uncertainty with respect to cash usage. Kim and Atkins [1978] modeled the accounts receivable decision as an investment, using the NPV technique.

    Beaver [1966] used a series of univariate tests of ratios to distinguish firms eventually filing for bankruptcy from those which did not. His tests were unable to make use of more than one ratio at a time. Altman [1968] extended this analysis, using the methodology of multiple discriminant analysis to forecast default on corporate debt issues. This paper had an enormous impact on the methods used by lending institutions and credit rating agencies in determining creditworthiness. However, the assumptions underlying the use of multiple discriminant analysis usually do not apply very closely. Furthermore, the numerical scoring system used by Altman has little intuitive meaning. Hence, other prediction models based on logit or probit analysis, including Ohlson [1980] and Zavgren [1985], have been provided in the literature (See Section 8.F). John and John [1992] provided an extensive review of the literature in the more general area of financial distress and corporate restructuring.

    Research in the Profession

    Although most of the research in the financial profession is of a proprietary nature, there is much communication among researchers in the profession. For example, journals such as Risk consist of articles written by and for researchers in the derivatives and risk management professions. There are also substantial amounts of information exchanged between members of the profession and members of the academic communities. Among the journals and magazines specializing in applied financial research are The Financial Analysts Journal, Financial Management, The Journal of Investing, The Journal of Portfolio Management, The Review of Derivatives Research, The Journal of Applied Corporate Finance, The Midland Corporate Finance Journal, and The Journal of Financial Engineering. The journals Mathematical Finance and Applied Mathematical Finance specialize in applications of mathematics to finance. In addition, there are a number of useful books that focus on practitioner-oriented research, including Stern and Chew [1986] and Baxter and Rennie [1996].


    1.E: APPLICATIONS AND ORGANIZATION OF THIS BOOK

    The primary purpose of this text is to ensure that the reader obtains a reasonable degree of comfort and proficiency in applying mathematics to a variety of types of financial analysis. Chapter 2 provides a brief review of elementary mathematics of time value, return and risk. Specific applications often follow the description of the mathematical topic in this and in other chapters. A particularly large number of exercises is provided at the end of Chapter 2 for readers requiring substantial review. Chapter 3 discusses elementary portfolio return and risk measures along with a description of index models. The quantitative sophistication required for Chapters 2 and 3 does not extend beyond high school algebra. Chapter 4 delivers an introduction to matrix algebra along with a number of applications in finance. This chapter and Chapter 5 on differential calculus are probably the most important in the book. Chapter 5 also provides a large number of applications of differential calculus to finance. Chapter 6 discusses the rudiments of integral calculus, differential equations and a few simple applications. In large part, it is intended to set the stage for Chapters 7 and 9 on probability and stochastic processes. Chapters 7 and 8 provide a review of probability and statistics along with applications to analyzing risk, valuing securities and performing financial empirical studies. Event studies are emphasized in Chapter 8. Next, Chapter 9 discusses stochastic processes and continuous time mathematics. Particular emphasis is given to the analysis of options in both Chapters 9 and 10. Chapter 10 provides an introduction to some of the numerical methods most commonly used in finance. There are appendices at the ends of Chapters 5 and 6 and at the end of the text. Included in the end-of-text appendices are detailed solutions to end-of-chapter exercises, statistics tables and a list of notation definitions. A glossary of terms follows the text appendices.

    This book is designed such that it will not be necessary for most readers to start at the beginning and read all the material prior to a given topic. Generally, reading the previous section (except for the first section in each chapter) will be sufficient background for the reader to comprehend any given section unless additional "Background Readings" are listed. Comprehending the section preceding an application should be sufficient to ensure understanding of that application, except, again, where additional "Background Readings" are listed.

    NOTES

    1. For example, see Dewing [1920], which describes the life cycle of the firm. See also Weston [1981, 1994], Megginson [1996], Martin, Cox and MacMinn [1988], Elton and Gruber [1995] and Copeland and Weston [1988], who provide wide-ranging overviews of financial literature in the academic realm. Part of this literature review was based on these earlier reviews.

    2. Exceptions to this are discussed in Section 1.D.


    3. Normative models, proposing what "ought to be," are distinguished from positive models which intend to describe "what is." Academicians are often most interested in positive models to describe various financial phenomena. Members of the profession are often interested in both types of models.

    SUGGESTED READINGS

    Weston [1981, 1994], Megginson [1996], Martin, Cox and MacMinn [1988], Elton and Gruber [1995] and Copeland and Weston [1988] all provide wide-ranging overviews of financial literature in the academic realm. Bernstein [1992] provides a very readable discussion of financial literature in the academic realm and discusses many applications for practitioners; the focus of Bernstein's book is on academics who generated important discoveries in finance. Merton [1995] has prepared an overview of mathematics usage in finance along with a discussion of the vital role of mathematics in financial analysis.

  • 2 Preliminary Analytical Concepts

    2.A: TIME VALUE MATHEMATICS

    Interest is a charge imposed on borrowers by lenders for the use of the lenders' money. The interest cost is usually determined as a percentage of the principal (the sum borrowed). Interest is computed on a simple basis if it is paid only on the principal of the loan. Compound interest accrues on accumulated loan interest as well as on the principal. Thus, if a sum of money (X0) were borrowed at an annual simple interest rate (i) and repaid at the end of n years with accumulated interest, the total sum repaid (FVn, or future value at the end of year n) is determined as follows:

    (2.1) FVn = X0(1 + n·i)

    Interest is computed on a compound basis when a borrower must pay interest on not only the loan principal, but on accumulated interest as well. If interest must accumulate for a full year before it is compounded, the future value of such a loan is determined as follows:

    (2.2) FVn = X0(1 + i)^n

    This compound interest formula can be derived easily from the simple interest formula by adding accumulated interest to principal at the end of each year to form the basis of the subsequent year's computations:

    (A) FV1 = X0(1 + i); FV2 = [X0(1 + i)](1 + i) = X0(1 + i)^2; ...; FVn = X0(1 + i)^n

    If interest is to be compounded m times per year (or once every fractional 1/m part of a year), the future value of the loan is determined as follows:

    (2.3) FVn = X0(1 + i/m)^(m·n)

  • 16 Chapter 2

    Many continuous time financial models allow for continuous compounding of interest. As m approaches infinity (m → ∞), the future value of a loan or investment can be defined as follows:

    (2.4) FVn = X0·e^(i·n)

    where e, the base of the natural logarithm, can be approximated as 2.718. (See Application 5.1 for more details on this derivation.)

    Cash flows realized at the present time have a greater value to investors than cash flows realized later. The purpose of the present value concept is to provide a means of expressing the value of a future cash flow in terms of current cash flows. That is, the present value concept is used to determine how much an investor would pay now for the expectation of some cash flow CFn to be received at a later date:

    (2.5) PV = CFn/(1 + k)^n

    where PV is the present value of a single cash flow to be received at time n and k is an investor-determined discount rate accounting for risk, inflation and the investor's time value of money. This present value formula is easily derived from the compound interest formula by noting that X0 (principal) and PV are analogous, as are FVn and CFn. The present value function is merely the inverse of the future value function. The continuously compounded version of Equation 2.5 is PV = CFn·e^(-kn). This variation is generally used when the analyst does not wish to arbitrarily select a compounding interval. In addition, this continuously compounded variation enables the analyst to continuously adjust for interest or returns not withdrawn from the asset being evaluated.
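These time value rules translate directly into code. The following sketch is illustrative (the function names are mine, not the book's):

```python
import math

def fv_simple(x0, i, n):
    """Future value under simple interest (Equation 2.1)."""
    return x0 * (1 + n * i)

def fv_compound(x0, i, n, m=1):
    """Future value with interest compounded m times per year (Equations 2.2 and 2.3)."""
    return x0 * (1 + i / m) ** (m * n)

def fv_continuous(x0, i, n):
    """Future value under continuous compounding (Equation 2.4)."""
    return x0 * math.exp(i * n)

def pv_single(cf_n, k, n):
    """Present value of a single cash flow received at time n (Equation 2.5)."""
    return cf_n / (1 + k) ** n

# More frequent compounding always yields a higher future value
assert fv_simple(100, 0.10, 2) < fv_compound(100, 0.10, 2) \
    < fv_compound(100, 0.10, 2, m=12) < fv_continuous(100, 0.10, 2)
```

Discounting inverts compounding: `pv_single(fv_compound(x0, k, n), k, n)` recovers the original principal.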

    If an investor wishes to evaluate a series of cash flows, he needs only to discount each separately and then sum the present values of each of the cash flows:

    (2.6) PV = CF1/(1 + k) + CF2/(1 + k)^2 + ... + CFn/(1 + k)^n

    Consider a cash flow series where the cash flows were expected to grow at a constant annual rate of g. The amount of the cash flow generated by that investment in year t (CFt), reflecting t-1 years of growth, would be:

    (2.7) CFt = CF1(1 + g)^(t-1)

    where CF1 is the cash flow generated by the investment in year one.
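A short sketch of these two rules, with illustrative function names of my own:

```python
def pv_series(cash_flows, k):
    """Discount each cash flow separately and sum the present values (Equation 2.6)."""
    return sum(cf / (1 + k) ** t for t, cf in enumerate(cash_flows, start=1))

def growing_cash_flow(cf1, g, t):
    """Cash flow in year t after t - 1 years of growth at rate g (Equation 2.7)."""
    return cf1 * (1 + g) ** (t - 1)

# A two-year series paying 110 then 121, discounted at 10%, is worth 200 today
assert abs(pv_series([110, 121], 0.10) - 200) < 1e-9
```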


    2.B: GEOMETRIC SERIES AND EXPANSIONS

    A geometric expansion is an algebraic procedure used to simplify a geometric series. Suppose we wished to solve the following finite geometric series for S:

    (A) S = c·x + c·x^2 + c·x^3 + ... + c·x^n

    where c is a constant or parameter and x is called a quotient. If n is large, computations may be time consuming and repetitive. Simplifying the series may save substantial amounts of computation time. Essentially, the geometric expansion is a two-step process:

    1. First, multiply both sides of the equation by the quotient:

    (B) S·x = c·x^2 + c·x^3 + ... + c·x^(n+1)

    2. Second, to eliminate repetitive terms, subtract the above product from the original equation and simplify:

    (C) S - S·x = c·x - c·x^(n+1)

    (D) S = (c·x - c·x^(n+1))/(1 - x)

    for x ≠ 1.

    For example, if x were to equal (1 + i), the following two equations would be equal:

    (E) S = c(1 + i) + c(1 + i)^2 + ... + c(1 + i)^n

    (F) S = [c(1 + i) - c(1 + i)^(n+1)]/[1 - (1 + i)] = c[(1 + i)^(n+1) - (1 + i)]/i

    Thus, for any geometric series where x ≠ 1, the following summation formula holds:

    (2.8) S = (c·x - c·x^(n+1))/(1 - x)

    Such geometric series and expansions are very useful in time value mathematics and problems involving series of probabilities.
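As a quick sanity check, the closed form can be compared with brute-force summation. The sketch below assumes the series runs S = cx + cx^2 + ... + cx^n:

```python
def geometric_sum(c, x, n):
    """Closed form of S = c*x + c*x**2 + ... + c*x**n via the geometric expansion."""
    if x == 1:
        return c * n          # the expansion formula does not apply at x = 1
    return c * (x - x ** (n + 1)) / (1 - x)

# Brute-force verification of the expansion
c, x, n = 2.0, 1.05, 40
assert abs(geometric_sum(c, x, n) - sum(c * x ** t for t in range(1, n + 1))) < 1e-9
```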


    APPLICATION 2.1: ANNUITIES AND PERPETUITIES
    (Background reading: Section 2.A)

    An annuity is defined as a series of identical payments made at equal intervals. If payments are to be made into an interest bearing account, the future value of the account will be a function of interest accumulating on deposits as well as the deposits themselves. The future value annuity factor may be derived through the use of the geometric expansion. Consider the case where we wish to determine the future value of an account (FVA) based on a payment of X made at the end of each year (t) for n years, where the account pays an annual interest rate equal to i:

    (A) FVA = X(1 + i)^(n-1) + X(1 + i)^(n-2) + ... + X(1 + i) + X

    Thus, the payment made at the end of the first year accumulates interest for a total of (n-1) years, the payment at the end of the second year accumulates interest for (n-2) years and so on. The first step in the geometric expansion is to multiply both sides of Equation A by (1 + i):

    (B) FVA(1 + i) = X(1 + i)^n + X(1 + i)^(n-1) + ... + X(1 + i)^2 + X(1 + i)

    Then we subtract Equation A from Equation B to obtain:

    (C) FVA(1 + i) - FVA = X(1 + i)^n - X

    and rearrange to obtain:

    (D) FVA·i = X[(1 + i)^n - 1]

    which simplifies to:

    (2.9) FVA = X·[(1 + i)^n - 1]/i

    A similar procedure is used to arrive at a formula for finding the present value of an annuity (PVA):

    (A) PVA = X/(1 + k) + X/(1 + k)^2 + ... + X/(1 + k)^n

    (B) PVA/(1 + k) = X/(1 + k)^2 + X/(1 + k)^3 + ... + X/(1 + k)^(n+1)

    (C) PVA - PVA/(1 + k) = X/(1 + k) - X/(1 + k)^(n+1)

    (D) PVA·k/(1 + k) = X[1/(1 + k) - 1/(1 + k)^(n+1)]

    which simplifies to the following:

    (2.10) PVA = X·[1/k - 1/(k(1 + k)^n)]

    As the value of n approaches infinity in the annuity formula, the value of the right-hand-side term in the brackets, 1/(k(1 + k)^n), approaches zero. Thus, the present value of a perpetual annuity, or perpetuity, is determined as follows:

    (2.11) PVA = X/k
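The three annuity results can be sketched and cross-checked against first principles (the function names are illustrative, not the book's):

```python
def fv_annuity(x, i, n):
    """Future value of n end-of-year payments of X at annual rate i (Equation 2.9)."""
    return x * ((1 + i) ** n - 1) / i

def pv_annuity(x, k, n):
    """Present value of an n-year annuity of X per year (Equation 2.10)."""
    return x * (1 / k - 1 / (k * (1 + k) ** n))

def pv_perpetuity(x, k):
    """Present value of a perpetual annuity (Equation 2.11)."""
    return x / k

# Cross-checks against direct compounding and discounting of each payment
assert abs(fv_annuity(100, 0.10, 3) - (100 * 1.1 ** 2 + 100 * 1.1 + 100)) < 1e-9
assert abs(pv_annuity(100, 0.10, 3) - sum(100 / 1.1 ** t for t in (1, 2, 3))) < 1e-9
```

For long horizons the annuity value approaches the perpetuity value: `pv_annuity(100, 0.10, 500)` is effectively `pv_perpetuity(100, 0.10)`.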

    APPLICATION 2.2: GROWTH MODELS

    Suppose that we wished to value a cash flow series where the cash flow each year is expected to have grown at rate g over the prior year's cash flow. Thus, the cash flow in any year t (CFt) is CFt-1(1 + g), or equivalently CF1(1 + g)^(t-1). We can derive a present value growing annuity (PVGA) model as follows:

    (A) PVGA = CF1/(1 + k) + CF1(1 + g)/(1 + k)^2 + ... + CF1(1 + g)^(n-1)/(1 + k)^n

    (B) PVGA·(1 + g)/(1 + k) = CF1(1 + g)/(1 + k)^2 + ... + CF1(1 + g)^n/(1 + k)^(n+1)

    (C) PVGA - PVGA·(1 + g)/(1 + k) = CF1/(1 + k) - CF1(1 + g)^n/(1 + k)^(n+1)

    (D) PVGA·[(1 + k) - (1 + g)]/(1 + k) = [CF1/(1 + k)]·[1 - (1 + g)^n/(1 + k)^n]

    (E) PVGA·(k - g) = CF1·[1 - (1 + g)^n/(1 + k)^n]

    (F) PVGA = [CF1/(k - g)]·[1 - (1 + g)^n/(1 + k)^n]

    which simplifies to the following Present Value Growing Annuity formula:

    (2.12) PVGA = CF1·[1/(k - g) - (1 + g)^n/((k - g)(1 + k)^n)]


    When k > g, the Present Value Growing Annuity formula can be used to derive the Present Value Growing Perpetuity formula by allowing n to approach infinity:

    (2.13) PVGP = CF1/(k - g)

    When applied to stocks, this model is often referred to as the Gordon Stock Pricing Model.
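The growing annuity and perpetuity results can be sketched and checked by brute force (illustrative function names; the parameter values are made up):

```python
def pv_growing_annuity(cf1, k, g, n):
    """PV of n annual cash flows growing at rate g, discounted at k (k != g)."""
    return cf1 * (1 / (k - g) - (1 + g) ** n / ((k - g) * (1 + k) ** n))

def pv_growing_perpetuity(cf1, k, g):
    """Gordon-style growing perpetuity; defined only for k > g."""
    if k <= g:
        raise ValueError("the growing perpetuity requires k > g")
    return cf1 / (k - g)

# Brute-force check: discount each growing cash flow individually
cf1, k, g, n = 100.0, 0.10, 0.04, 25
direct = sum(cf1 * (1 + g) ** (t - 1) / (1 + k) ** t for t in range(1, n + 1))
assert abs(pv_growing_annuity(cf1, k, g, n) - direct) < 1e-9
```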

    APPLICATION 2.3: MONEY AND INCOME MULTIPLIERS

    Suppose that the central bank of a country issues a fixed amount of currency K to the public and permits commercial banks to loan funds left by the public in the form of demand deposits of amount DD. The public obtains the currency and deposits it with the commercial banking system. Further, suppose that the central bank requires that commercial banks hold on reserve a proportion r of their demand deposits; that is, all commercial banks must leave with the central bank nonloanable reserves totaling r·DD. Whenever funds are loaned by a commercial bank, they are spent by the borrower. The borrower purchases goods from a seller; the seller then deposits its receipts into the commercial banking system, creating more funds available to loan. However, each deposit requires that the commercial bank increase its reserves left with the central bank by the proportion r:

    (A) DD = K + (1 - r)K + (1 - r)^2·K + (1 - r)^3·K + ...

    Here, K is the currency originally issued by the central bank to the public and deposited in the commercial banking system. The amount rK is used to meet the reserve requirement while (1 - r)K is loaned to the public, then redeposited into the commercial banking system. Of the (1 - r)K redeposited into the banking system, (1 - r)(1 - r)K is available to loan after the reserve requirements are fulfilled on the second deposit. This process continues into perpetuity. Based on the currency issued by the central bank and its reserve requirement, what is the total money supply (MS) for this economy? We can determine the total money supply through the following geometric expansion:

    (B) MS = K + (1 - r)K + (1 - r)^2·K + (1 - r)^3·K + ...

    (C) MS·(1 - r) = (1 - r)K + (1 - r)^2·K + (1 - r)^3·K + ...

    (D) MS - MS·(1 - r) = K, so that MS·r = K

    (2.14) MS = K/r

    where we assume that K is positive and 0 < r < 1. Thus, the money multiplier here equals 1/r, and the total money supply equals K/r. A central bank issuing $100 in currency with a reserve requirement equal to 10% will have a total money supply equal to $1,000.

    A similar sort of multiplier exists in the relationship between autonomous consumption (consumption expenditures independent of income) and total income. Suppose that the following depicts the relationships among income Y, autonomous consumption C0 and income-dependent consumption cY:

    (A) Y = C0 + cY

    If autonomous consumption were to increase by a given amount, this would increase income, resulting in an increase in income-dependent consumption. This would further increase income and consumption, and the process would replicate itself perpetually:

    (B) ΔY = ΔC0 + c·ΔC0 + c^2·ΔC0 + c^3·ΔC0 + ...

    We can derive an income multiplier to determine the full amount of the change in income resulting from a change in autonomous consumption:

    (C) c·ΔY = c·ΔC0 + c^2·ΔC0 + c^3·ΔC0 + ...

    (D) ΔY - c·ΔY = ΔC0

    (2.15) ΔY = ΔC0/(1 - c)

    Thus, the income multiplier equals 1/(1 - c) = 1/s, where s represents the proportion of marginal income saved by individuals.
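The deposit-expansion process is easy to verify by brute force. The sketch below computes the closed-form money supply K/r and simulates the redeposit rounds directly (function names are mine):

```python
def money_supply(K, r):
    """Closed-form total money supply from currency K and reserve requirement r."""
    return K / r

def simulate_redeposits(K, r, rounds=5000):
    """Trace the process: (1 - r) of every deposit is loaned out and redeposited."""
    total, deposit = 0.0, float(K)
    for _ in range(rounds):
        total += deposit
        deposit *= 1 - r      # the loanable portion returns as a new deposit
    return total

# The text's example: $100 of currency and a 10% reserve requirement
assert abs(money_supply(100, 0.10) - 1000) < 1e-9
assert abs(simulate_redeposits(100, 0.10) - 1000) < 1e-6
```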

    2.C: RETURN MEASUREMENT

    The purpose of measuring investment returns is simply to determine the economic efficiency of an investment. Thus, an investment's return expresses the cash flows generated by an initial cash outlay relative to the amount of that outlay. There exist a number of methods for determining the return of an investment. One can compute a holding period return on investment (ROIH) as follows:

    (2.16) ROIH = [CF1 + CF2 + ... + CFn - P0]/P0

    where CFt represents the cash flow paid by the investment in time t and P0 is the initial investment outlay. One may standardize this holding period return by annualizing it as follows:

    (2.17) ROIA = ROIH/n

    Although its concept is quite simple, this arithmetic mean return accounts for neither the timing of cash flows nor the compounding of investment profits. An alternative average return is the geometric mean return, computed as follows:

    (2.18) rg = [(1 + r1)(1 + r2) ... (1 + rn)]^(1/n) - 1

    where rt is the return on investment for a single period t. Another return measure which more appropriately accounts for the timing of investment cash flows is internal rate of return (IRR), which is that value for r which solves the following:

    (2.19) P0 = CF1/(1 + r) + CF2/(1 + r)^2 + ... + CFn/(1 + r)^n

    However, one should note that internal rate of return can be more difficult to compute than the other return measures. In addition, there may be multiple values for r (multiple IRRs) which satisfy equation 2.19 and no rule which consistently tells us which is appropriate. This may occur when there are negative cash flows following positive cash flows.

    One important variation of internal rate of return is the yield to maturity for a bond:

    (2.20)

    where P0 is the bond's purchase price, F its face value, y its yield to maturity (IRR) and INT its annual interest payments. One obtains yield to maturity by solving this equation for y. While this expression is appropriate for bonds making annual interest payments, the following can be used for bonds making semiannual interest payments (INT -r 2):

    (2.21)
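Because y appears inside every discount factor, yield to maturity is also found numerically. A minimal sketch, with our own function names and a bisection search bracket of (0, 100%) as illustrative assumptions; `freq=2` handles the semiannual case of Equation 2.21:

```python
def bond_price(y, face, coupon_rate, years, freq=1):
    # Present value of all coupons plus face value, discounted at the
    # periodic yield y/freq over years*freq periods (Equations 2.20-2.21).
    n = years * freq
    c = face * coupon_rate / freq
    per = y / freq
    return sum(c / (1 + per) ** t for t in range(1, n + 1)) + face / (1 + per) ** n

def ytm(price, face, coupon_rate, years, freq=1, lo=1e-9, hi=1.0):
    # Bisection: price is a decreasing function of yield, so if the model
    # price at mid exceeds the market price, the trial yield is too low.
    for _ in range(200):
        mid = (lo + hi) / 2
        if bond_price(mid, face, coupon_rate, years, freq) > price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

A 3-year, 12% coupon bond priced at its $1,000 face value should recover a yield of 12% under either payment frequency.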

  • Preliminary Analytical Concepts 23

    2.D: MEAN, VARIANCE AND STANDARD DEVIATION

The purpose of this and the following two sections is to introduce the reader to several important, though elementary, concepts from probability and statistics. These concepts are applied to the measurement of risk in the applications following these sections, then defined and discussed more rigorously in Chapters 6 and 7.

    Suppose we wish to describe or summarize the characteristics or distribution of a single population of values (or sample drawn from a population). Important characteristics include central location (measured by average, mean, median, expected value or mode), dispersion (measured by range, variance or standard deviation), asymmetry (measured by skewness) and clustering of data about the mean and extrema (measured by kurtosis).

In many instances, we will be most interested in the typical value (if it exists) drawn from a population or sample; that is, we are interested in the "location" of the data set. Mean (often referred to as average) or expected values (sometimes referred to as weighted average) are frequently used as measures of location (or central tendency) because they account for all relevant data points and the frequency with which they occur. The arithmetic mean value of a population μ is computed by adding the values x_i associated with each observation i and dividing the result by the number of observations n in the population:

    (2.22)

Future events whose actual outcomes are not certain may have associated with them numerous potential outcomes. Some of these potential outcomes may be more likely to be realized than others. The more likely outcomes are said to have higher probabilities associated with them. Probabilities are analogous to frequencies as a proportion of a population and may be measured as percentages summing to 100%. The expected value of a population E[x] is computed as a weighted average of the potential outcomes x_i, where probabilities P_i serve as weights:

(2.23)

    Other measures of location include median and mode. If we were to rank values in a data set from highest to lowest, that value with the middle rank would be regarded as the median value. Usually, ties are averaged. The value occurring with the highest frequency in a data set is referred to as the mode.
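The three location measures can be sketched briefly in Python (function names are our own; the median follows the "ties are averaged" convention above for an even number of observations):

```python
def mean(xs):
    # Arithmetic mean (Equation 2.22): sum of values over number of observations.
    return sum(xs) / len(xs)

def median(xs):
    # Middle-ranked value; with an even count, the two middle ranks are averaged.
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def mode(xs):
    # The value occurring with the highest frequency in the data set.
    return max(set(xs), key=xs.count)
```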

    Variance is a measure of the dispersion (variability and sometimes volatility or uncertainty) of values within a data set. In a finance setting, variance is also used as an indicator of risk. Variance is defined as the mean of squared deviations of actual data points from the mean or expected value of a data set.


    Deviations are squared to ensure that negative deviations do not cancel positive deviations, resulting in zero variances. High variances imply high dispersion of data. This indicates that certain or perhaps many data points are significantly different from mean or expected values. Population, sample and expected variances are computed as follows:

    (2.24)

    (2.25)

    (2.26)

Standard deviation is simply the square root of variance. It is also used as a measure of dispersion, risk or uncertainty. Standard deviation is sometimes easier to interpret than variance because its value is expressed in terms of the same units as the data points themselves rather than their squared values. High standard deviations, like high variances, imply high dispersion of data. Standard deviations are computed as follows:

    (2.27)

    (2.28)

    (2.29)
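A short sketch of the dispersion measures above (function names are our own; the `sample` flag switches from the population divisor n of Equation 2.24 to the sample divisor n − 1 of Equation 2.25):

```python
from math import sqrt

def variance(xs, sample=False):
    # Mean of squared deviations from the mean; squaring prevents negative
    # deviations from cancelling positive ones.
    m = sum(xs) / len(xs)
    ss = sum((x - m) ** 2 for x in xs)
    return ss / (len(xs) - 1) if sample else ss / len(xs)

def std_dev(xs, sample=False):
    # Square root of variance, expressed in the same units as the data
    # (Equations 2.27 and 2.28).
    return sqrt(variance(xs, sample))
```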

APPLICATION 2.4: RISK MEASUREMENT

When an individual or firm invests, it subjects itself to uncertainty regarding the amounts and timing of future cash flows. Expected return is defined and used as a return forecast in this section. Expected return is expressed as a function of the investment's potential return outcomes and associated probabilities. The riskiness of an investment is simply the potential for deviation from the investment's expected return. Thus, the risk of an investment is defined here as the uncertainty associated with returns on that investment. Expected return is defined mathematically as a function of returns R_i resulting from any one of n potential outcomes i with probability P_i:

    (2.30)

    The statistical concept of variance is an indicator of uncertainty associated with the investment. It accounts for all potential outcomes and associated probabilities:

    (2.31)

    Unfortunately, in many real-world scenarios, it is very difficult to properly assign probabilities to potential outcomes. However, if we are able to claim that historical volatility indicates future variance (or, similarly, volatility or uncertainty is constant over time), we can use historical variance as our indicator of future uncertainty:

    (2.32)

    where "R" is the mean return over the n year sampling period. Standard deviation a is simply the square root of variance. It has the convenient property of being expressed in the same units of measurement as the mean.

There are two primary difficulties associated with the traditional historical sample estimator procedure for variance: the time required for computation and the arbitrary selection of returns from which to compute volatilities (returns based on prices from end of day, week, quarter, etc.). The time required for computation may be quite large when the sample selected must be large enough for statistical significance (60 monthly returns is a commonly used data set for variance computations). Extreme value indicators (based on security high and low prices) such as that derived by Parkinson [1980] can be very useful for reducing the amount of data required for statistically significant standard deviation estimates:

    (2.33)

where HI designates the stock's high price for a given period and LO designates the low price over the same period. This estimation procedure is based on the assumption that underlying stock returns are log-normally distributed with zero drift and constant variance over time. Garman and Klass [1980] and Ball and Torous [1984] provide more efficient extreme value estimators using opening and closing prices, while Rogers and Satchell [1991] and Kunitomo [1992] provide drift-adjusted (nonzero average return) models.
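As a sketch of the Parkinson estimator (the function name is our own), the per-period volatility is the square root of the mean squared log high-low range scaled by 1/(4 ln 2):

```python
from math import log, sqrt

def parkinson_volatility(highs, lows):
    # Parkinson [1980] extreme-value estimator: assumes log-normal returns
    # with zero drift and constant variance over the sample.
    n = len(highs)
    s = sum(log(h / l) ** 2 for h, l in zip(highs, lows))
    return sqrt(s / (4 * n * log(2)))
```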

    A problem shared by both the traditional sample estimating procedures and the extreme value estimators is that they require the assumption of stable variance estimates over time; that is, historical variances equal future variances. A third procedure, first suggested by Latane and Rendleman [1976], is based on market prices of options, which may be used to imply variance or volatility estimates. For example, the Black-Scholes Option Pricing Model and its extensions provide an excellent means to estimate underlying stock variances if call prices are known. Essentially, this procedure determines market estimates for underlying stock variance based on known market prices for options on the underlying securities (see Chapter 10).

    Brenner and Subrahmanyam [1988] provide a simple formula to estimate an implied standard deviation (or variance) from the value c0 of a call option whose striking price equals the current market price S0 of the underlying asset:

    (2.34)

where T is the number of time periods prior to the expiration of the option. As the market price deviates further from the option striking price, the accuracy of this estimation formula worsens.
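The Brenner-Subrahmanyam approximation reduces to a one-line calculation; for an at-the-money call, σ ≈ (c0/S0)·√(2π/T). A minimal sketch (the function name is our own):

```python
from math import pi, sqrt

def implied_std_dev(call_price, spot, t):
    # Brenner-Subrahmanyam [1988] at-the-money approximation: implied
    # per-period standard deviation from call price c0, spot S0 and time t.
    return (call_price / spot) * sqrt(2 * pi / t)
```

For example, a one-year at-the-money call worth $4 on a $100 stock implies a standard deviation of roughly 10%.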

2.E: COMOVEMENT STATISTICS

A joint probability distribution is concerned with probabilities associated with each possible combination of outcomes drawn from two sets of data. Covariance measures the mutual variability of outcomes selected from each set; that is, covariance measures the relationship between variability in one data set relative to variability in the second data set, where variables are selected one at a time from each data set and paired. If large values in one data set seem to be associated with large values in the second data set, covariance is positive; if large values in the first data set seem to be associated with small values in the second data set, covariance is negative. If data sets are unrelated, covariance is zero. Covariance between data set x and data set y may be measured as follows, depending on whether one is interested in covariance of a population, of a sample or expected covariance:

    (2.35)


    (2.36)

    (2.37)

The sign associated with covariance indicates whether the relationship between the data in the two sets is direct (positive sign), inverse (negative sign) or independent (covariance is zero). The absolute value of covariance measures the strength of the relationship between the two data sets. However, the absolute value of covariance is more easily interpreted when it is expressed relative to the standard deviations of each of the two data sets. That is, when we divide covariance by the product of the standard deviations of each of the data sets, we obtain the coefficient of correlation ρ_xy as follows:

    (2.38)

A correlation coefficient equal to +1 indicates that the two data sets are perfectly positively correlated; that is, their changes are always in the same direction, by the same proportions, with 100% consistency. Correlation coefficients will always range between −1 and +1. A correlation coefficient of −1 indicates that the two data sets are perfectly inversely correlated; that is, their changes are always in the opposite direction, by the same proportions, with 100% consistency. The closer a correlation coefficient is to −1 or +1, the stronger is the relationship between the two data sets. A correlation coefficient equal to zero implies independence (no relationship) between the two sets of data.

The correlation coefficient may be squared to obtain the coefficient of determination (also referred to as r² in some statistics texts and here as ρ²). The coefficient of determination is the proportion of variability in one data set that is explained by or associated with variability in the second data set. For example, ρ² equal to .35 indicates that 35% of the variability in one data set is explained in a statistical sense by variability in the second data set.
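The comovement statistics above chain together naturally; a short sketch (function names are our own, population divisors assumed):

```python
from math import sqrt

def covariance(xs, ys):
    # Population covariance (Equation 2.35): mean of paired deviation products.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def correlation(xs, ys):
    # Coefficient of correlation (Equation 2.38): covariance over the
    # product of the two standard deviations; always in [-1, +1].
    sx = sqrt(covariance(xs, xs))
    sy = sqrt(covariance(ys, ys))
    return covariance(xs, ys) / (sx * sy)

def determination(xs, ys):
    # Coefficient of determination: the squared correlation coefficient.
    return correlation(xs, ys) ** 2
```

Note that the covariance of a data set with itself is just its variance, which is why `covariance(xs, xs)` supplies the standard deviations.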

APPLICATION 2.5: SECURITY COMOVEMENT

Standard deviation and variance provide us with measures of the absolute risk levels of securities; such absolute measures indicate the potential for deviation from the variable's expected value. However, in many instances, it is useful to measure the risk of one security relative to the risk of another, relative to the market as a whole or relative to an index. The concept of covariance is integral to the development of relative risk measures. Covariance (COV[R_k,R_j] or σ_k,j) provides us with a measure of the relationship between the returns of two securities. That is, given that two securities' returns are likely to vary, covariance indicates whether they will vary in the same direction or in opposite directions. The likelihood that two securities will covary similarly (or, more accurately, the strength of the relationship between returns on two securities) is measured by Equation 2.39:

    (2.39)

where R_k,i and R_j,i are the returns of stocks k and j if outcome i is realized and P_i is the probability of outcome i. E[R_k] and E[R_j] are simply the expected returns of securities k and j. The concept of covariance is also crucial to the development of models of diversification and portfolio risk (see Chapter 3).

    Historical covariance can be used to measure security comovement or relative risk if one is willing to assume that historical comovement indicates future comovement:

    (2.40)

The coefficient of correlation provides us with a means of standardizing the covariance between returns on two securities. For example, how large must covariance be to indicate a strong relationship between returns? Covariance will be smaller given low returns on the two securities than given high security returns. The coefficient of correlation ρ_k,j between returns on two securities will always fall between −1 and +1.¹ If security returns are directly related, the correlation coefficient will be positive. If the two security returns always covary in the same direction by the same proportions, the coefficient of correlation will equal one. If the two security returns always covary in opposite directions by the same proportions, ρ_k,j will equal negative one. The stronger the inverse relationship between returns on the two securities, the closer ρ_k,j will be to negative one. If ρ_k,j equals zero, there is no relationship between returns on the two securities. The coefficient of correlation ρ_k,j between returns is simply the covariance between returns on the two securities divided by the product of their standard deviations:

    (2.41)


    2.F: INTRODUCTION TO SIMPLE OLS REGRESSIONS

Regressions are used to determine relationships between a dependent variable and one or more independent variables. A simple regression is concerned with the relationship between a dependent variable and a single independent variable; a multiple regression is concerned with the relationship between a dependent variable and a series of independent variables. A linear regression fits the relationship between the dependent and independent variable(s) to a linear function or line (or a hyperplane in the case of a multiple regression).

    The simple Ordinary Least Squares regression (simple OLS) takes the following form:

    (2.42)

The ordinary least squares regression coefficients a and b are derived by minimizing the variance of errors in fitting the curve (or m-dimensional surface for multiple regressions involving m variables). Since the expected value of error terms equals zero, this derivation is identical to minimizing the sum of squared error terms (see the OLS derivation in Application 5.8). Regression coefficient b_1 is simply the covariance between y and x divided by the variance of x; b_1 and b_0 are found as follows:

    (2.43)

(2.44)

Appropriate use of the OLS requires the following assumptions:

1. Dependent variable values are distributed independently of one another.
2. The variance of x is approximately the same over all ranges for x.
3. The variance of error term values is approximately the same over all ranges of x.
4. The expected value of each disturbance or error term equals zero.

Violations of these assumptions will weaken the validity of the results obtained from the regression and may necessitate either modifications to the OLS regression or different statistical testing techniques. The derivation of the OLS model is discussed in Sections 5.8 and 5.9 and numerous applications will be discussed in Chapters 3 and 8.
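The coefficient formulas in Equations 2.43 and 2.44 can be sketched directly (function names are ours; the code computes the coefficients only and does not check the four assumptions above):

```python
def simple_ols(xs, ys):
    # Slope b1 = cov(x, y) / var(x) (Equation 2.43);
    # intercept b0 = mean(y) - b1 * mean(x) (Equation 2.44).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov_xy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    var_x = sum((x - mx) ** 2 for x in xs) / n
    b1 = cov_xy / var_x
    b0 = my - b1 * mx
    return b0, b1
```

Fitting y = 2x + 1 exactly, for instance, recovers an intercept of 1 and a slope of 2.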


APPLICATION 2.6: RELATIVE RISK MEASUREMENT

A portfolio is simply a collection of investments. The market portfolio is the collection of all investments that are available to investors. That is, the market portfolio represents the combination or aggregation of all securities (or other assets) that are available for purchase. Investors may wish to consider the performance of this market portfolio to determine the performance of securities in general. Thus, the return on the market portfolio is representative of the return on the "typical" asset. An investor may wish to know the market portfolio return to determine the performance of a particular security or his entire investment portfolio relative to the performance of the market or a "typical" security.

Determination of the return on the market portfolio requires the calculation of returns on all of the assets available to investors. Because there are hundreds of thousands of assets available to investors (including stocks, bonds, options, bank accounts, real estate, etc.), determining the exact return of the market portfolio may be impossible. Thus, investors generally make use of indices such as the Dow Jones Industrial Average or the Standard and Poor's 500 to gauge the performance of the market portfolio. These indices merely act as surrogates for the market portfolio; we assume that if the indices are increasing, then the market portfolio is performing well. For example, performance of the Dow Jones Industrial Average depends on the performance of the thirty stocks that comprise this index. Thus, if the Dow Jones market index is performing well, the thirty securities, on average, are probably performing well. This strong performance may imply that the market portfolio is performing well. In any case, it is easier to measure the performance of thirty or five hundred stocks (for the Standard and Poor's 500) than it is to measure the performance of all of the securities that comprise the market portfolio.

    Beta measures the risk of a given security relative to the risk of the market portfolio of all investments. Beta is determined by Equations 2.45 and 2.46:

    (2.45)

    (2.46)

    Beta may also be described as the slope of an Ordinary Least Squares regression line fit to data points comprising returns on Security i versus returns on some index such as one representing the market portfolio:

(2.47)

where α_i is the vertical intercept of this regression. Again, β_i is computed based on Equations 2.45 and 2.46. The vertical intercept α_i of the regression line is simply E[R_i] − β_iE[I]. The slope term β_i may be interpreted as the change in the return of the security induced by a change in the index; β_i is the risk of the asset relative to the risk of the market. The term ε_i,t might be interpreted as the security's return associated with firm-specific factors and unrelated to I_t. The concepts of Beta, relative risk models and index models are discussed in much greater detail in Chapters 3, 5 and 8.
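Computed from historical data, beta is simply the covariance of security and index returns divided by the index variance, which is also the OLS slope of Equation 2.47. A minimal sketch (the function name is our own):

```python
def beta(asset_returns, index_returns):
    # Beta = cov(R_i, I) / var(I): the security's risk relative to the index.
    n = len(index_returns)
    ma = sum(asset_returns) / n
    mi = sum(index_returns) / n
    cov = sum((a - ma) * (i - mi)
              for a, i in zip(asset_returns, index_returns)) / n
    var = sum((i - mi) ** 2 for i in index_returns) / n
    return cov / var
```

A security whose returns always move twice as far as the index, in the same direction, has a beta of 2.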

    NOTE

1. Many statistics textbooks use the notation r_ij to designate the correlation coefficient between variables i and j. Because the letter r is used in this text to designate return, we will use the lowercase rho (ρ_ij) to designate the correlation coefficient.

SUGGESTED READINGS

Many of the topics in this chapter, including time value of money, return, risk and comovement statistics, are discussed in Brealey and Myers [1996]. An even more elementary presentation of these topics is provided by Brealey, Myers and Marcus [1995]. The presentation in Brealey, Myers and Marcus is suitable as background reading for this chapter. Brown and Kritzman [1990] also discuss many of these topics, including the Parkinson extreme value variance estimator. Mayer, Duesenberry and Aliber [1987] and other texts in money and banking and in macroeconomics discuss money multipliers and their derivations. The textbook by Ben-Horim and Levy [1984] provides an excellent introductory presentation of statistics with numerous applications to finance.


    EXERCISES

2.1 The Doda Company has borrowed $10,500 at an annual interest rate of 9%. How much will a single lump-sum repayment be in eight years, including both principal and interest, if interest is computed on a simple basis; that is, what is the future value of this loan?

    2.2 What would be the lump-sum loan repayment made by the Doda Company in Problem 2.1 if interest were compounded

    a. annually? b. semi-annually? c. monthly? d. daily? e. continuously?

    2.3 Assume that you are advising a twenty-three-year-old client with respect to personal financial planning. Your client wishes to save, become a millionaire, and then retire. Your client intends to open and contribute to a tax deferred Individual Retirement Account each year until he retires with $1,000,000 in that account.

    a. If your client were to deposit $2,000 at the end of each year into his I.R.A., how many years must he wait until he retires with his $1,000,000? Assume that the account will pay interest at an annual rate of 10%, compounded annually.

b. What would your answer to a. be if the interest rate were 12%?
c. What would the client's annual payment have to be if he wished to retire at the age of forty with $1,000,000? Assume that the client will make deposits at the end of each year for 17 years at an annual interest rate of 10% and that his I.R.A. will be supplemented with another type of retirement account known as a 401(k), so that his total annual tax deferred deposits can exceed $2,000.

    d. What would your answer to c. be if your client were willing to wait until he is fifty to retire?

    e. What would your answer to d. be if your client were able to make deposits into an account paying interest at an annual rate of 12%?

f. What would your answers to a., c. and d. be if the annual interest rate were only 4%?

g. If the annual inflation rate for the next fifty years were expected to be 3%, what would be the purchasing power of $1,000,000 in 17 years? In 27 years?

h. What would your answers to g. be if the inflation rate were expected to equal 9%?

    2.4 The Starr Company has the opportunity to pay $10,000 for an investment paying $2,000 in each of the next nine years. Would this be a wise investment if the appropriate discount rate were

    a. 5%? b. 10%? c. 20%?

2.5 An investor has the opportunity to purchase for $4,900 an investment which will pay $1,000 at the end of six months, $1,100 at the end of one year, $1,210 at the end of eighteen months, $1,331 at the end of two years, and $1,464.10 at the end of thirty months. Assuming that the investor discounts all of his cash flows at an annual rate of 20%, should he purchase this investment? Why or why not?

2.6 The Traynor Company is selling preferred stock which is expected to pay a $50 annual dividend per share. What is the present value of dividends associated with each share of stock if the appropriate discount rate were 8% and its life expectancy were infinite?

    2.7 The Lajoie Company is considering the purchase of a machine whose output will result in a $10,000 cash flow next year. This cash flow is projected to grow at the annual 10% rate of inflation over each of the next 10 years. What will be the cash flow generated by this machine in

    a. its second year of operation? b. its third year of operation? c. its fifth year of operation? d. its tenth year of operation?

    2.8 What would be the present value of a fifty-year annuity whose first cash flow of $5,000 is paid in 10 years and whose final (fiftieth) cash flow is paid in 59 years? Assume that the appropriate discount rate is 12% for all cash flows.

    2.9 An employee expects to make a deposit of $1,000 into his pension fund account in one year, with additional deposits to follow for a total of 40 years when he retires. The amount to be deposited in each year will be 5% larger than in the prior year (e.g., $1,050 deposited in the second year, $1,102.50 in the third year, etc.). Furthermore, the retirement account will accrue interest on accumulated deposits at an annual rate of 8%, compounded annually. What will be the terminal (future) value of the account at the end of the 40-year period? Show how to derive a computationally efficient expression to solve this problem.

    2.10 The Chesbro Company is considering the purchase of an investment for $100,000 that is expected to pay off $50,000 in 2 years, $75,000 in 4 years and $75,000 in 6 years. In the third year, Chesbro must make an additional payment of $50,000 to sustain the investment. Calculate the following for the Chesbro investment:

    a. return on investment using an arithmetic mean return b. the investment internal rate of return c. describe any complications you encountered in part b

    2.11 A $1,000 face value bond is currently selling at a premium for $1,200. The coupon rate of this bond is 12% and it matures in 3 years. Calculate the following for this bond assuming its interest payments are made annually:

    a. its annual interest payments b. its current yield c. its yield to maturity

    2.12 Work through each of the calculations in Problem 2.11 assuming interest payments are made semiannually.


    2.13 The Galvin Company invested $100,000