Source: shodhganga.inflibnet.ac.in/bitstream/10603/27645/10/10... · 2018-07-09


CHAPTER – III

STOCK MARKET VOLATILITY: THE MEASUREMENT

3.1. Introduction

The development of the theory of portfolio selection by Markowitz (1952) is the foundation of various practices in today's financial markets. He translated a basic idea of economics ("in order to obtain something we have to forego something else") into finance theory, which implies that there exists a risk-return tradeoff. In simple terms, if an investor wants a higher return on a project, he has to accept a higher degree of risk as well. As risk is measurable with respect to some benchmark, the investor defines his utility function according to his attitude towards risk.

A measure that immediately arises in doing so is the dispersion of the various possible outcomes. Thus, the notion of risk is related to variance. It is well known that variance is a statistic that can be estimated from past observations (returns). In almost all financial models, variance is used as a proxy for variability. Variability may be measured from past prices so that it conforms to present variability as closely as possible. This is what has been conceptualized as 'volatility' in financial markets. Thus volatility, as a concept, may be treated as synonymous with variability in general or variance in particular.

There have been many empirical studies of volatility in stock markets globally. Research suggests that stock markets have become more volatile in recent times due to the emergence of "New Economy" stocks, which are valued highly compared to their "Old Economy" counterparts on the expectation of very high future returns. These high expectations have brought about wide fluctuations in prices, making the markets turbulent.

Volatility is fluctuation, sometimes significant fluctuation, and it is closely tied to risk. There is risk involved in investing, and fluctuation is a part of it. Significant volatility, which many investors have experienced recently, may have both negative and positive effects on investment portfolios. As investors'


fundamental assumptions about stocks change, stock prices can move quickly,

especially in today’s wired marketplace. Much of today’s volatility is simply a result

of the marketplace reacting to fundamental overvaluations.

In short, volatility in and of itself is relatively benign. It is the consequence of the

volatility that matters. If an investor has time to ride out the market’s ups and downs,

volatility may be of little consequence. The shorter the time frame, however, the

more harmful volatility can be.

The factors that affect volatility may be categorized as those affecting long-term volatility and those affecting short-term volatility. Several economic factors may cause slow changes in stock market volatility, where the changes become noticeable over many months or years. These are known as the long-term factors, such as corporate leverage, personal leverage and the condition of the economy.

Several bursts in stock volatility in some markets around the world have spurred the

interest in stock volatility during the past few years. The boom and subsequent crash

in the Indian stock market in 1992 and 2001, the US stock market crashes of October

1987 and October 1989, the Mexican currency crisis in 1994, the Asian currency crisis in

1997, the Russian crisis of 1998, the 1999 Brazilian crisis, the 2001 Argentinean

crisis, the 2002 Turkish crisis, the subprime financial crisis in 2008 and the European

Debt crisis in 2009 are prominent examples. These bursts of volatility are hard to

relate to longer-term phenomena such as recessions or leverage. Instead, most people

have tried to relate them to the structure of securities trading. Hence, these are

categorized as the factors that may cause short-term volatility, a few of which are

trading volume, trading halts (circuit breakers and circuit filters), computerized

trading, noise trading, international linkages, market makers, takeovers, supply of

equities, the press and other factors.

Volatility per se is not unnatural or unwanted. However, excessive volatility caused

by irrational or speculative behavior of the traders and investors, trading mechanism

imperfections and lack of information transparency is not desirable. If stock market

volatility increases, it may have important consequences for investors and policy-makers. Investors may equate higher volatility with greater risk and may alter their

investment decisions due to increased volatility. Policy-makers may be concerned

that stock market volatility would spill over into the real economy and harm economic

performance. Alternatively, policy-makers may feel that increased stock volatility

threatens the viability of financial institutions and the smooth functioning of financial

markets.

Stock return volatility hinders economic performance through its effect on consumer spending (Garner 1988), and may also affect business investment spending (Gertler and Hubbard 1989), where investors may perceive a rise in stock market volatility as an increase in the risk of equity investments. Hence, investors may shift their funds to less risky assets. Such a shift could raise the cost of capital of firms (Arestis et

al 2001). According to Bekaert (1995), in segmented capital markets, a country’s

volatility is a critical input in the cost of capital. Volatility may also be used as a

decision making criterion, where one would invest in those assets that yield the

highest return per unit of risk (Wessels 2006).

Further, extreme stock return volatility could disrupt the smooth functioning of the

financial system and lead to structural or regulatory changes. Systems that work well

with normal return volatility may be unable to cope with extreme price changes.

Changes in market rules or regulations may be necessary to increase the resiliency of

the market in the face of greater volatility.

However, increase in volatility per se cannot be criticized. Increased volatility may

simply reflect fundamental economic factors or information and expectations about

them. In fact, the more quickly and accurately prices reflect new information, the more efficient the pricing of securities, and thereby the allocation of resources, will be.

A market in which prices always “fully reflect” available information is called

“efficient” where share prices fluctuate randomly about their “intrinsic” values.

The stock market in India has had its fair share of crises engendered by excessive

speculation resulting in excessive volatility. Undoubtedly, the enthusiasm of

investors in the early 1990s has to some extent been replaced by a growing concern


about the excessive volatility of the Indian stock market in recent years. The

widespread concern of the exchange management, brokers and investors alike has

underlined the importance of being able to measure and predict stock market

volatility. Only then can effective monitoring mechanisms be put in place, which

would help in avoidance of such episodes in future.

The industrial development of a nation largely depends on the allocative efficiency of the stock market, which acts as a barometer of a country's economic health. The Indian stock market has a long history, but as an agent of development it has only a few glorious occasions to its credit. A number of committees were appointed to review the functioning of the stock market, and they submitted innumerable suggestions to minimize speculative activities and thereby the fluctuation of share prices. Ironically, all these efforts by and large failed to control the irrational share price movements that have long been crippling the functioning of the market.

3.2. Review of Literature

Volatility is the degree to which asset prices tend to fluctuate. It is the variability or

randomness of asset prices, i.e. the dispersion of returns of an asset from its mean

return. Stock market volatility measures the size and frequency of fluctuations in a

broad stock market price index (Madhusudan Karmakar 2006; Mishra et al 2010).

It also has a significant forecasting power for real GDP growth (Campbell et al 2001).

Stock market returns follow a deterministic path, implying that stock returns oscillate between excess returns and under-returns, passing through the mean stock return (Seth and Saloni 2005).

According to Poon and Granger (2003) volatility has a very wide sphere of influence

including investment, security valuation, risk management and policy making,

emphasizing the importance of volatility forecasting. Research has also shown

that capital market liberalization policies too, are likely to affect volatility.

Rao and Tripathy (2008) found that the market would react very sharply to economic,

political and policy issues.


Batra (2004) examined the economic significance of changes in the pattern of stock

market volatility in India during the period of financial reforms.

According to Schwert (1989 a, b) time variation in market volatility can often be

explained by macroeconomic and micro structural factors.

Volatility in national markets is partly determined by world factors and partly by

local market effects, assuming that the national markets are globally linked (Pratip

Kar et al 2000).

Using a time-varying market integration parameter, Bekaert and Harvey (1995) showed

that world factors have an increased influence on volatility with increased market

integration.

Mandelbrot (1963) observed volatility clustering and leptokurtosis as common

observations in financial time series. Moreover, a highly significant Jarque-Bera (JB) statistic confirms that a return series is not normally distributed.

Harvey (1995) points out that in many emerging markets, time series return data do

not follow normal distribution.

But according to Obaidullah (1991), time series return data in Indian Stock Markets

are normally distributed.

Bollerslev (1986) introduced the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model, which allows the conditional variance to depend upon its own lags. He estimated a GARCH(1,1) model on the quarterly

data set of U.S. inflation for the period 1948-II to 1983-IV. The results suggested the

presence of a GARCH effect in the inflation data, indicating that the volatility of

inflation was persistent.
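The GARCH(1,1) recursion described above can be sketched in a few lines. The following is an illustrative Python implementation, not the study's estimation (which was done in EViews), and the parameter values and residuals are hypothetical, chosen only to show how a shock raises the conditional variance and then decays:

```python
# Illustrative GARCH(1,1) conditional-variance recursion:
#   sigma2[t] = omega + alpha * eps[t-1]**2 + beta * sigma2[t-1]
# Parameters and residuals below are hypothetical.

def garch11_variance(residuals, omega, alpha, beta):
    """Return the conditional variance series implied by GARCH(1,1)."""
    # Initialize with the unconditional variance omega / (1 - alpha - beta),
    # which exists when alpha + beta < 1 (covariance stationarity).
    sigma2 = [omega / (1.0 - alpha - beta)]
    for eps in residuals[:-1]:
        sigma2.append(omega + alpha * eps ** 2 + beta * sigma2[-1])
    return sigma2

# A single large shock at t = 1 raises next-period conditional variance,
# after which it decays back at rate (alpha + beta): "volatility persistence".
eps = [0.0, 3.0, 0.0, 0.0, 0.0]
var = garch11_variance(eps, omega=0.1, alpha=0.1, beta=0.8)
```

With alpha + beta = 0.9, the shock's effect on variance dies out slowly, which is the persistence Bollerslev found in the inflation data.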


To compute the conditional variance of sample return series, the GARCH(1,1) model has

been applied by Susan Thomas (1995), Madhusudan Karmakar (2003) and Puja Padhi

(2005).

Brailsford and Faff (1996) found that GARCH models are superior to other models for forecasting Australian monthly stock index volatility.

Brooks (1998) found that GARCH models outperform other techniques in modeling volatility.

According to Jean and Peters (2001), GJR and APARCH give better forecasts than

symmetric GARCH. But increased performance of the forecasts could not be clearly

observed when using non-normal distributions.

Jayanth R Varma (1999) tested the relevance of GARCH-GED (Generalized Auto-

Regressive Conditional Heteroskedasticity with Generalized Error Distribution

residuals) model and the EWMA (Exponentially Weighted Moving Average) model

and evaluated their performance in the VaR framework in Indian stock market.

David X Li (1999) presented a new approach to calculating Value at Risk (VaR) using

skewness, kurtosis and the standard deviation explicitly. The new approach was

found to capture the extreme tail much better than the standard VaR calculation

method used in RiskMetrics.

When a time-varying risk premium is incorporated into the analysis, the view that

“historical prices have been consistently too volatile and their returns too high” cannot

be supported (Angela Black and Patricia Fraser 2003).

Harvinder Kaur (2004) studied the extent and pattern of stock return volatility of the

Indian Stock Market and concluded that the highest volatility, observed in April followed by March and February, could be due to the presentation of the Union Budget.


The GARCH (1, 1) model has been found to be the overall superior model based on

most of the symmetric loss functions, though ARCH has been found to be better than

the other models for investors who are more concerned about under predictions than

over predictions (Deb et al 2003).

According to Ravi Madapati (2005), the conditional heteroskedastic models fit the

Indian data quite satisfactorily, providing good forecasts of volatility.

Madhusudan Karmakar (2005) reported the presence of leverage effect in the Indian

stock market, but which model can best capture the leverage effect has been left for

further research.

Bhaskkar Sinha (2006) found the EGARCH for BSE Sensex and GJR-GARCH for

NSE Nifty best for modeling volatility clustering and persistence of shock.

Mahajan and Singh (2008) examined the empirical relationship between return,

volume and volatility in Indian stock market using GARCH (1,1) and EGARCH (1,1)

estimated for Nifty index.

Rao, Kanagaraj and Tripathy (2008) found that stock futures derivatives were not responsible for increases or decreases in spot market volatility and concluded that other market factors could have contributed to the increase in Nifty volatility.

Atanu Das et al (2009) compared the predictive power of Stochastic Volatility Model

(SVM) and Kalman Filter (KF) based approach vis-à-vis EWMA and GARCH based

approaches with data from Indian security indices.

Somsanker Sen (2010) explored the movements of volatility in the S&P CNX Nifty.

Vipul Singh and Ahmad (2011) compared several GARCH family models in order to

model and forecast the conditional variance of S&P CNX Nifty Index with special


focus on the fitting of first order GARCH models to Nifty financial daily return series

and explaining financial market risk.

Rajan (2011) discussed different mathematical models of stock market volatility in general and applied them to the Indian context to pin down one that captures the irregular behavior of the Indian stock market.

Asad Ahmad and Rana (2012) attempted to determine the forecasting performance of

symmetric and asymmetric volatility forecasting models in terms of error estimators

using intra-day data of highly liquid stocks in the Indian stock market. The superiority of the forecasting performance of asymmetric GARCH models over symmetric models was established.

The issue of changes in volatility of stock returns in emerging markets, in particular,

has received considerable attention in recent years for various reasons. Market

participants need this measure for reasons like portfolio management, pricing of

options, predicting asset return series, forecasting confidence intervals, financial risk

management, etc. The issue of volatility and risk has also become increasingly

important in recent times to financial practitioners, regulators and researchers. In this

context, the present study is carried out to understand the volatility behavior of the

Indian stock markets.

3.3. Objectives of the Chapter

The objective of the chapter is to understand the volatility behavior and to measure

the volatility levels of the Indian stock market, through the CNX Nifty. The following are set as the sub-objectives of the chapter:

1. To compute the historical volatility levels of CNX Nifty using the classical,

range-based and drift-independent volatility estimators

2. To subject the CNX Nifty prices to autocorrelation tests

3. To estimate the conditional variance of sample return series through GARCH

model


3.4. Database and Methodology

The National Stock Exchange captures 83 per cent of the transactions in the cash segment and 79 per cent in the derivatives segment; thus the prominent index of the NSE, the CNX Nifty, is treated as the principal stock index of the country. Hence, the CNX Nifty was selected for analysis.

The classical estimator is calculated using

a. The daily close prices of Nifty for a period of 20 years from January 4, 1993

to December 31, 2012 totaling 4922 trading days.

b. The daily open prices of Nifty for a period of 17 years from November 3, 1995

to December 31, 2012 totaling 4283 trading days.

The range-based and drift-independent volatility estimators are calculated using the

daily open, high, low and close prices of Nifty for a period of 17 years from

November 3, 1995 to December 31, 2012 totaling 4283 trading days.

The daily open, high, low and close prices of Nifty are obtained from the official

website of NSE ignoring the days when there was no trading.

The price changes are calculated from the last day the market was open.

The autocorrelation tests and the AR(1) and GARCH(1,1) models were all estimated using the statistical package EViews 6.
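The autocorrelation tests themselves were run in EViews. Purely as an illustration of what a lag-1 (first-order) autocorrelation coefficient measures, a minimal Python sketch with hypothetical data is:

```python
# Illustrative lag-1 (first-order) sample autocorrelation of a return series.
# The data below are hypothetical; the study itself used EViews 6.

def autocorr_lag1(returns):
    """Sample autocorrelation of a series with its own first lag."""
    n = len(returns)
    mean = sum(returns) / n
    # Denominator: total sum of squared deviations from the mean
    denom = sum((r - mean) ** 2 for r in returns)
    # Numerator: cross-products of deviations one period apart
    num = sum((returns[t] - mean) * (returns[t - 1] - mean)
              for t in range(1, n))
    return num / denom

# A perfectly alternating series is strongly negatively autocorrelated:
alternating = [1.0, -1.0] * 50
rho = autocorr_lag1(alternating)
```

A value of rho near zero is consistent with returns that are serially uncorrelated, which is the null hypothesis such tests examine.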

3.5. Volatility and its Measure

A stock’s price moves in accordance with investors’ continuously changing and

contradictory expectations about the company’s future financial performance. There

is a perpetual uncertainty associated with every stock. But it is considered normal

behavior, and is reflected in the share price movement. However, the extent of this

movement is not the same for all stocks. Some stocks tend to move more, and this

difference has implications for their investment potential. In order to gauge how such

behavior impacts share prices, its movement – specifically, the volatility intrinsic to it

– needs to be measured.

There are two basic ways to measure volatility: historical and implied. Historical

volatility is calculated by using the standard deviation of underlying asset price


changes from close to close (or open to open) of trading for the past few months.

Implied volatility is a computed value that measures an option’s volatility, rather than

the underlying asset. A fair value of an option may be calculated by entering the

historical volatility of the underlying asset into an option-pricing model. The

computed fair value may differ from the actual market price of the option. In brief,

historical volatility gauges price movement in terms of past performance and implied

volatility approximates how much the marketplace thinks prices will move. An

important advantage of estimating the implied volatility is that it requires no historical

data collection. The volatility is readily available with merely one day’s price

information.

3.5.1. Historical Volatility

The simplest way to estimate volatility is to use the basic definition that volatility is

the standard deviation of logarithmic asset returns. Given daily or weekly asset

prices, it is a simple matter to compute the corresponding daily or weekly returns and

compute their standard deviation. The mean and the standard deviation of a set of

data are usually reported together. In a certain sense, the standard deviation is a

"natural" measure of statistical dispersion if the center of the data is measured about

the mean. This is because the standard deviation from the mean is smaller than from

any other point. Volatility is traditionally estimated using closing price data, hence

this method is also called the classical estimator or the optimal (maximum likelihood)

estimator, which is obtained from the random walk model.

The current value of the standard deviation may be used to estimate the importance

of a move or set expectations. This assumes that price changes are normally

distributed with a classic bell curve. Even though price changes for securities are not

always normally distributed, chartists may still use normal distribution guidelines to

gauge the significance of a price movement. In a normal distribution, 68% of the

observations fall within one standard deviation. 95% of the observations fall within

two standard deviations. 99.7% of the observations fall within three standard

deviations. Using these guidelines, traders may estimate the significance of a price

movement. A move greater than one standard deviation would show above average

strength or weakness, depending on the direction of the move.
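The 68/95/99.7 guideline above can be sketched as a simple classification of a move by its z-score; the move size and standard deviation used in the example are hypothetical:

```python
# Express a price move as a multiple of the standard deviation (a z-score)
# and classify it against the normal-distribution guidelines in the text.
# The move and the standard deviation below are hypothetical.

def classify_move(move_pct, stdev_pct):
    """Classify a move by how many whole standard deviations it spans."""
    z = abs(move_pct) / stdev_pct
    if z <= 1:
        return "within 1 st. dev (about 68% of observations)"
    if z <= 2:
        return "within 2 st. devs (about 95% of observations)"
    if z <= 3:
        return "within 3 st. devs (about 99.7% of observations)"
    return "beyond 3 st. devs (rare under normality)"

# A 3.5% daily move when the daily standard deviation is 1.6%:
label = classify_move(3.5, 1.6)
```

Under this guideline, such a move (a z-score of about 2.2) would be read as showing above-average strength or weakness.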


Figure 3.1: Normal Distribution guidelines

3.5.1a. Return

Daily stock returns were calculated as the logarithmic difference in the index value, using the following formula, where r_t and I_t indicate the return and the index value respectively at time t:

    r_t = ln(I_t / I_(t-1)) × 100        (eqn 3.1)

Arithmetic and logarithmic returns are not equal, but are approximately equal for

small returns. The difference between them is large only when percent changes are

high. For example, an arithmetic return of +50% is equivalent to a logarithmic return

of 40.55%, while an arithmetic return of -50% is equivalent to a logarithmic return of

-69.31%.

Logarithmic returns are often used by academics in their research. The main

advantage is that the continuously compounded return is symmetric, while the

arithmetic return is not; positive and negative percent arithmetic returns are not equal.

This means that an investment of $100 that yields an arithmetic return of 50%

followed by an arithmetic return of -50% will result in $75, while an investment of

$100 that yields a logarithmic return of 50% followed by a logarithmic return of -50%

will remain $100. Given this consistency and symmetry of log returns, the study also used log returns for the analysis.


3.5.1b. Average Annual Returns

The average annual returns are calculated and presented in Table 3.1, where it may be observed that the returns were negative in several of the years of the 1990s, whereas during the next decade the average returns were positive in almost all the years.

The important points of consideration are:

a. The market showed negative returns in several of the years from 1995 to 2000, which may be attributed to the boom and subsequent crash of the Indian stock market in 1999-2000

b. Though the market could recover during the year 1997, it could not maintain it

for the next year due to the Asian Financial Crisis which started in 1997 and

caused global stock markets to crash

c. During the year 1999 the market surged to a very high level of return due to the effect of the boom, but subsequently dropped when the Indian stock markets crashed with the bursting of the Dot-com bubble in the year 2000 and was impacted by the economic effects of the September 11 attacks in the year 2001

d. Since the year 2002 the markets have shown considerably positive returns,

though fluctuating

e. The sudden drop of the returns to their lowest level during the year 2008 could be attributed to the subprime financial crisis that affected countries across the globe

f. The market withstood this crisis and recovered quickly, giving the maximum returns during the year 2009

g. Stock markets around the world plummeted during late July and early August

2011, and were volatile for the rest of the year resulting in a negative return

h. The market is found to recover quickly from the various shocks it faced, resulting in positive average returns

i. It could be concluded that the market is improving in its ability to withstand and recover from any sort of crisis


Table 3.1: Annual Average Returns from Nifty close values

Year         Return (%)    No. of Obs.
1993          0.147616     213
1994          0.054668     230
1995         -0.111598     236
1996         -0.004190     249
1997          0.074904     244
1998         -0.079768     250
1999          0.202898     254
2000         -0.063368     250
2001         -0.071191     248
2002          0.012754     251
2003          0.213290     254
2004          0.039949     254
2005          0.123498     251
2006          0.134108     250
2007          0.175397     249
2008         -0.296624     246
2009          0.232082     243
2010          0.065503     252
2011         -0.114414     247
2012          0.097407     251
1993-2012     0.041620     4922

Source: Compiled Data


3.5.1c. Standard Deviation

It is the simplest and most common type of calculation that benefits from only using

reliable prices from closing auctions. It may also be calculated using only the

opening auction values for each transacting day. This classical estimator is calculated

using the following formula, where r_t denotes the logarithmic return (either open or close) on day t and E denotes the average return for the period:

    σ = √[ (1 / (T − 1)) Σ_(t=1..T) (r_t − E)² ]        (eqn 3.2)

Following are the advantages of the Classical Estimator:

a. It has well understood sampling properties

b. It is very simple to use

c. It is free from obvious sources of error and bias on the part of market activity

d. It is easy to convert to a form involving typical daily moves

Following are the disadvantages of the Classical Estimator:

a. It makes inadequate use of the readily available intraday information

b. It converges to the true volatility very slowly
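Putting eqns 3.1 and 3.2 together, the classical estimator can be sketched as follows; the closing prices used here are hypothetical, not Nifty data:

```python
import math

# Classical (close-to-close) volatility estimator, eqns 3.1-3.2:
# daily log returns in per cent, then their sample standard deviation.
# The closing prices below are hypothetical.

def classical_volatility(closes):
    """Sample standard deviation of daily percentage log returns."""
    returns = [math.log(c1 / c0) * 100.0
               for c0, c1 in zip(closes, closes[1:])]
    t = len(returns)
    mean = sum(returns) / t
    # (T - 1) in the denominator gives the unbiased sample variance
    variance = sum((r - mean) ** 2 for r in returns) / (t - 1)
    return math.sqrt(variance)

closes = [100.0, 101.0, 99.5, 100.5, 100.0]
sigma = classical_volatility(closes)
```

The same function applied to open-to-open prices yields the O-O estimator reported alongside the C-C figures in Table 3.2.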

3.5.1d. Volatility Measure: Classical Estimators

The classical estimators, open-open standard deviation and close-close standard

deviation, are calculated using

- The daily close prices of Nifty for a period of 20 years from January 4, 1993

to December 31, 2012 totaling 4922 trading days.

- The daily open prices of Nifty for a period of 17 years from November 3, 1995

to December 31, 2012 totaling 4283 trading days.

The classical estimators, using the standard deviation method as stated above, are

obtained and the changes in standard deviation over the study period are tabulated in

Table 3.2. A closer examination of the standard deviations of the closing and opening prices of Nifty reveals two phases, 1999-2000 and 2008-2009, in which the standard deviation was highest, representing very high volatility in the index prices, both open and close.

Table 3.2: Classical Estimators

Year         C-C St. dev   O-O St. dev   No. of Obs.
1995         1.242143      1.481455      236
1996         1.527411      1.566099      249
1997         1.798488      1.843867      244
1998         1.777206      1.762365      250
1999         1.837389      2.035663      254
2000         2.001889      2.032186      250
2001         1.630300      1.643363      248
2002         1.060891      1.067565      251
2003         1.232220      1.251100      254
2004         1.762618      1.778010      254
2005         1.113604      1.116737      251
2006         1.650139      1.673507      250
2007         1.601395      1.613973      249
2008         2.808280      2.789615      246
2009         2.142748      2.125210      243
2010         1.024103      1.008786      252
2011         1.321311      1.445137      247
2012         0.954654      1.000615      251
1993-2012    1.640411      1.692721      4922

Source: Compiled Data

These two phases may be attributed to (a) the boom and subsequent crash in the Indian stock market during 1999-2000, and (b) the sub-prime financial crisis in 2008, both of which affected the Indian market during the period selected for the study.


3.5.2. Advanced Volatility Measures

Close-to-close volatility is usually used as it has the benefit of relying on the closing auction prices only; were other prices used, they could be vulnerable to manipulation or a “fat fingered” trade. However, a large number of samples needs to be used to get a good estimate of historical volatility, and using a large number of closing values may obscure short-term changes in volatility. There are, however, different methods of calculating volatility that use some or all of the open, high, low and close values.

The advanced volatility models may further be categorized as

i. Range based volatility models

ii. Drift Independent volatility models

3.5.3. Range Based Volatility Models

Instead of depending on only one representative value (a snapshot price) of a day's trading, the range based volatility models embody more information in the calculation methodology and hence include some or all of the intraday information available. Thus these models use the open price, high price, low price and close price for estimating the volatility of returns on the selected scrip or index.

3.5.3a. Parkinson’s High-Low Volatility Measure

Building on the work of Feller (1951), Parkinson (1980) suggested a range estimator as a counterpart to the traditional one. The Parkinson number, or High-Low Range Volatility, developed by the physicist Michael Parkinson in 1980, aims to estimate the volatility of returns for a random walk using the high and low prices over the entire day, instead of just a ‘snapshot’ price at the end of the day, thereby embodying more information. It was claimed to attain the same accuracy with about 80% less data, with a relative efficiency of 5.2 when the traditional estimator was taken as the benchmark; thus it was said to be “far superior to” the traditional estimator. It estimates the volatility of returns for an asset following a diffusion process (geometric random walk) by using only the high and low of the period. Essentially,


the simple formula gives the distribution of the maxima and minima of the asset returns. Parkinson's number p for an asset is given by:

$$p = \sqrt{\frac{1}{4T\ln 2}\sum_{t=1}^{T}\left[\ln\left(\frac{H_t}{L_t}\right)\right]^2} \qquad (\text{eqn 3.3})$$

An important use of the Parkinson number is the assessment of the distribution of prices during the day, as well as a better understanding of market dynamics. Comparing the Parkinson number with periodically sampled volatility helps in understanding the tendency towards mean reversion in the market as well as the distribution of stop-losses. This estimator has an efficiency of 5.2 times that of the classic close-to-close estimator (standard deviation).

Following are the advantages of the Parkinson’s Estimator:

a. Using the daily range seems sensible and provides completely separate information from time-based sampling such as closing prices

Following are the disadvantages of the Parkinson’s Estimator:

a. It is appropriate only for measuring volatility of a Geometric Brownian

Motion process

b. It particularly cannot handle trends and jumps

c. It systematically underestimates volatility
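Eqn 3.3 translates directly into code; a minimal sketch (the function name is illustrative):

```python
import math

def parkinson_vol(highs, lows):
    # Eqn 3.3: p = sqrt( 1 / (4 * T * ln 2) * sum_t [ln(H_t / L_t)]^2 )
    T = len(highs)
    s = sum(math.log(h / l) ** 2 for h, l in zip(highs, lows))
    return math.sqrt(s / (4.0 * T * math.log(2.0)))
```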

3.5.3b. Garman-Klass Open Close Volatility Measure

The relative efficiency of an estimator is defined as the ratio of variance of the

benchmark estimator to the variance of the estimator under consideration. Garman

and Klass (1980) suggested several estimators and tested their relative efficiencies in

their paper. The preferred estimator was constructed from the high, low and closing prices normalized relative to the opening price. This estimator combined the traditional estimator and Parkinson's estimator, thus incorporating more intraday information.

$$gk = \sqrt{\frac{1}{T}\sum_{t=1}^{T}\left[\frac{1}{2}\left(\ln\frac{H_t}{L_t}\right)^2 - (2\ln 2 - 1)\left(\ln\frac{C_t}{O_t}\right)^2\right]} \qquad (\text{eqn 3.4})$$


This estimator is up to 7.4 times as efficient as the close-to-close estimator (the exact efficiency improvement depends on the sample size), but it is biased because discrete sampling leads to an underestimate of the true range. Indeed, this estimator is even more biased than the Parkinson estimator.

Following are the advantages of the Garman-Klass Estimator:

a. It is up to 7.4 times more efficient than the close-to-close estimator

b. It makes the best use of commonly available price information

Following are the disadvantages of the Garman-Klass Estimator:

a. It is even more biased than Parkinson’s estimator
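A minimal sketch of eqn 3.4 in Python (the function name is illustrative):

```python
import math

def garman_klass_vol(opens, highs, lows, closes):
    # Eqn 3.4: average of 0.5*ln(H/L)^2 - (2 ln 2 - 1)*ln(C/O)^2 over T days
    T = len(opens)
    total = sum(0.5 * math.log(h / l) ** 2
                - (2.0 * math.log(2.0) - 1.0) * math.log(c / o) ** 2
                for o, h, l, c in zip(opens, highs, lows, closes))
    return math.sqrt(total / T)
```

For valid OHLC data (L ≤ O, C ≤ H) each day's term is non-negative, so the square root is always defined.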

3.5.3c. Yang Zhang extension to Garman-Klass Volatility Measure

Yang and Zhang offered an extension to the Garman and Klass historical volatility estimator. The equation was modified to include the logarithm of the opening price divided by the preceding closing price. This modification allows the volatility estimator to account for opening jumps but, like the original function, it assumes that the underlying follows a Brownian motion with zero drift (the historical mean return should be equal to zero). The estimator tends to overestimate the volatility when the drift is different from zero; however, for a zero-drift motion, this estimator has an efficiency of 8 times that of the classic close-to-close estimator (standard deviation).

$$yz = \sqrt{\frac{1}{T}\sum_{t=1}^{T}\left[\left(\ln\frac{O_t}{C_{t-1}}\right)^2 + \frac{1}{2}\left(\ln\frac{H_t}{L_t}\right)^2 - (2\ln 2 - 1)\left(\ln\frac{C_t}{O_t}\right)^2\right]} \qquad (\text{eqn 3.5})$$

where

yz = volatility

T = total number of trading days

C_{t-1} = the previous day's closing price

C_t = the closing price

O_t = the opening price


H_t = the high price

L_t = the low price

ln = the natural logarithm
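Using these definitions, eqn 3.5 can be sketched as follows (a minimal illustration; the function name is our own):

```python
import math

def gk_yang_zhang_vol(prev_closes, opens, highs, lows, closes):
    # Eqn 3.5: Garman-Klass terms plus the opening-jump term ln(O_t / C_{t-1})^2
    T = len(opens)
    total = sum(math.log(o / pc) ** 2
                + 0.5 * math.log(h / l) ** 2
                - (2.0 * math.log(2.0) - 1.0) * math.log(c / o) ** 2
                for pc, o, h, l, c in zip(prev_closes, opens, highs, lows, closes))
    return math.sqrt(total / T)
```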

Parkinson (1980) and Garman and Klass (1980) assumed that the underlying asset follows a continuous Brownian motion process, which is a shortcoming of their estimators. The fact that prices of financial instruments are only observable at discrete time intervals contradicts this assumption and creates possible sources of bias; the fidelity of the observed high and low prices therefore becomes questionable.

In an empirical study of stock price variability, Beckers (1983) reinforced this idea

that Parkinson’s estimator would be biased downward by non-continuous prices,

because when sampling discretely, it is common for the observed low price to be

higher than the “true” lowest price and for a similar situation to apply to the observed

high price. Similar findings were made by Edwards (1988), studying the S&P 500

and Value Line cash indices, and Wiggins (1991) investigating individual stocks.

Wiggins (1992) stated that this bias was not particularly serious for an actively traded

instrument with small price increments, however. He took S&P 500 futures prices as

an example in his empirical analysis.

3.5.3d. Volatility Measure: Range Based Estimators

The range-based estimators, Parkinson’s volatility measure, Garman-Klass volatility

measure and Yang-Zhang’s correction to Garman-Klass volatility measure are

obtained and the results are tabulated below. An examination of the results obtained

in Table 3.3 indicates that the three range based volatility measures also depict the

same trend as the close-to-close volatility. These measures too show higher volatility during the two phases of

(a) the boom and subsequent crash in the Indian stock market during 1999-2000, and

(b) the sub-prime financial crisis of 2008,

which affected the Indian market during the period selected for the study.


Table 3.3: Range Based Estimators

Year Parkinson GK GK-YZ No. of Obs.

1995 0.977948 0.860752 0.934904 236

1996 1.147219 1.075719 1.219436 249

1997 1.308221 1.219637 1.447892 244

1998 1.515057 1.420693 1.438012 250

1999 1.580481 1.516513 1.685942 254

2000 2.016158 2.024442 2.036289 250

2001 1.569462 1.546917 1.552033 248

2002 1.01514 1.002154 1.006063 251

2003 1.199727 1.181123 1.191931 254

2004 1.63223 1.581951 1.599249 254

2005 1.084088 1.068069 1.074678 251

2006 1.610891 1.591949 1.596541 250

2007 1.514174 1.477489 1.482379 249

2008 2.599011 2.517717 2.524942 246

2009 1.84509 1.722613 1.72747 243

2010 0.933496 0.903525 0.934773 252

2011 1.103295 1.085559 1.330034 247

2012 1.032948 1.102186 1.219126 251

1993-2012 1.505712 1.46364 1.516674 4922

Source: Compiled Data

It may be observed that the Garman-Klass model shows a lower overall volatility than Parkinson's volatility measure. This is because Parkinson assumed continuous trading and considers only the high and low values of the daily prices of the index, whereas the Garman-Klass model considers the open and close values in addition. Hence the overall volatility for a year according to the Garman-Klass model is lower than the volatility shown by Parkinson's model.


Yang-Zhang's correction to the Garman-Klass model of volatility estimation considers the opening jumps as well, along with the open, high, low and close values of the index prices. Even so, as the model is not drift independent, the volatility obtained using Yang-Zhang's correction is similar to that obtained from the Parkinson and Garman-Klass models.

3.5.4. Drift Independent Volatility Models

These models are also range based, but differ in their treatment of drift. All of the range based volatility models above assume the drift (the average return) of the series under consideration to be zero. The volatility of securities that exhibit drift, i.e. a non-zero mean return, is not measured appropriately by the classical models or by any of the above range based models; such securities require a more sophisticated measure of volatility.

3.5.4a. Rogers-Satchell Volatility Measure

The Rogers-Satchell function [Rogers and Satchell (1991) and Rogers, Satchell and

Yoon (1994)] is a volatility estimator that properly measures the volatility for

securities with non-zero mean. As a result, it provides better volatility estimation

when the underlying is trending.

$$rs = \sqrt{\frac{1}{T}\sum_{t=1}^{T}\left[\ln\left(\frac{H_t}{C_t}\right)\ln\left(\frac{H_t}{O_t}\right) + \ln\left(\frac{L_t}{C_t}\right)\ln\left(\frac{L_t}{O_t}\right)\right]} \qquad (\text{eqn 3.6})$$

However, this estimator does not account for opening jumps in price (gaps) and hence underestimates the volatility. The function uses the open, close, high and low price series in its calculation, and it has only one parameter: the period used to estimate the volatility. This estimator has an efficiency of 8 times that of the classic close-to-close estimator (standard deviation).

Following are the advantages of the Rogers-Satchell Estimator:

a. It allows for the presence of trends


Following are the disadvantages of the Rogers-Satchell Estimator:

a. It still cannot deal with opening jumps
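A minimal sketch of eqn 3.6 (the function name is illustrative):

```python
import math

def rogers_satchell_vol(opens, highs, lows, closes):
    # Eqn 3.6: ln(H/C)*ln(H/O) + ln(L/C)*ln(L/O), averaged over T days
    T = len(opens)
    total = sum(math.log(h / c) * math.log(h / o)
                + math.log(l / c) * math.log(l / o)
                for o, h, l, c in zip(opens, highs, lows, closes))
    return math.sqrt(total / T)
```

A day that moves in a straight line from open to close (O = L and C = H) contributes zero, which illustrates how the estimator discounts pure drift.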

3.5.4b. Yang Zhang Drift Independent Volatility Measure

In 2000, Yang and Zhang created a volatility measure that handles both opening jumps and drift. It is the sum of the overnight variance (close-to-open volatility) and a weighted average of the Rogers-Satchell volatility and the open-to-close volatility. The assumption of continuous prices means the measure still tends to slightly underestimate the volatility.

$$yz = \sqrt{\sigma_o^2 + k\,\sigma_c^2 + (1-k)\,\sigma_{rs}^2} \qquad (\text{eqn 3.7})$$

where

$$\sigma_o^2 = \frac{1}{T-1}\sum_{t=1}^{T}\left[\ln\left(\frac{O_t}{C_{t-1}}\right) - \overline{\ln\left(\frac{O_t}{C_{t-1}}\right)}\right]^2$$

$$\sigma_c^2 = \frac{1}{T-1}\sum_{t=1}^{T}\left[\ln\left(\frac{C_t}{O_t}\right) - \overline{\ln\left(\frac{C_t}{O_t}\right)}\right]^2$$

$$k = \frac{0.34}{1.34 + \frac{T+1}{T-1}}$$

$\sigma_{rs}^2$ = Rogers-Satchell's variance (the square of eqn 3.6)

In some simulations it may have an efficiency 14 times that of the close-to-close volatility, but this is highly dependent on the proportion of volatility caused by opening jumps. If these jumps dominate, the estimator performs no better than the close-to-close estimator.

Following are the advantages of the Yang-Zhang Estimator:

a. It is specifically designed to have minimum estimation error

b. It can handle both drift and jumps

c. It is most efficient in its use of available data

Following are the disadvantages of the Yang-Zhang Estimator:

a. The performance degrades to that of close to close estimator when process is

dominated by jumps
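Eqn 3.7 and its components can be sketched as follows (a minimal illustration; names are our own):

```python
import math

def yang_zhang_vol(prev_closes, opens, highs, lows, closes):
    # Eqn 3.7: sqrt( sigma_o^2 + k*sigma_c^2 + (1-k)*sigma_rs^2 )
    T = len(opens)
    overnight = [math.log(o / pc) for o, pc in zip(opens, prev_closes)]
    open_close = [math.log(c / o) for c, o in zip(closes, opens)]

    def sample_var(xs):
        # variance about the sample mean, with a T-1 divisor
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    rs_var = sum(math.log(h / c) * math.log(h / o) + math.log(l / c) * math.log(l / o)
                 for o, h, l, c in zip(opens, highs, lows, closes)) / T
    k = 0.34 / (1.34 + (T + 1) / (T - 1))
    return math.sqrt(sample_var(overnight) + k * sample_var(open_close) + (1 - k) * rs_var)
```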


3.5.4c. Volatility Measure: Drift Independent Estimators

The Drift Independent Estimators, Rogers-Satchell’s volatility measure and Yang

Zhang’s volatility measure are obtained and the results are tabulated below.

Table 3.4: Drift Independent Estimators

Year R-S Y-Z No. of Obs.

1995 0.860370 0.796883 236

1996 1.063661 0.795669 249

1997 1.180066 0.795671 244

1998 1.435104 0.795673 250

1999 1.497977 0.795656 254

2000 2.098462 0.795662 250

2001 1.571860 0.795660 248

2002 1.003531 0.795650 251

2003 1.161203 0.795653 254

2004 1.558296 0.795645 254

2005 1.047189 0.795654 251

2006 1.596040 0.795656 250

2007 1.505478 0.795671 249

2008 2.603048 0.795747 246

2009 1.659962 0.795664 243

2010 0.906830 0.795663 252

2011 1.073396 0.795707 247

2012 1.309246 0.795652 251

1993-2012 1.481189 0.795459 4922

Source: Compiled Data

The results in Table 3.4 reveal that the Rogers-Satchell measure is not different from the other measures, and shows the higher volatility during the two said phases. Though the Rogers-Satchell measure handles drift, it does not handle opening jumps; hence the results from this model are quite similar to those of the other models.


It becomes imperative to look at the results from the Yang-Zhang model, which handles both drift and opening jumps. This is evident from the volatility values produced by this method: they are similar and consistent through each year of the study period, and even for the whole duration taken at a time there is not much difference in the volatility displayed by this method.

From the above observations it is concluded that Yang-Zhang's measure of volatility is the more consistent one for calculating the overall risk of the index under consideration, as it accounts for overnight jumps as well as drift in the return series. This measure also shows a maximum efficiency of 14 times that of the classical close-to-close volatility measure.

Table 3.5: Summary of Volatility Estimates

Estimator                      Prices taken   Handles drift?   Handles overnight jumps?   Efficiency (max)
Close to close                 C              No               No                         1
Parkinson                      HL             No               No                         5.2
Garman-Klass                   OHLC           No               No                         7.4
Rogers-Satchell                OHLC           Yes              No                         8
Garman-Klass Yang-Zhang ext.   OHLC           No               Yes                        8
Yang-Zhang                     OHLC           Yes              Yes                        14

Source: http://www.todaysgroep.nl/media/236846/measuring_historic_volatility.pdf

All these estimators are built on the strict assumption that the asset price follows a Geometric Brownian Motion, which is certainly not the case in real markets.


3.6. Skewness

The first thing usually noticed about a distribution's shape is whether it has one mode (peak) or more than one. If it is unimodal (has just one peak), like most data sets, the next thing noticed is whether it is symmetric or skewed to one side. If the bulk of the data is at the left and the right tail is longer, the distribution is said to be skewed right, or positively skewed; if the peak is toward the right and the left tail is longer, the distribution is said to be skewed left, or negatively skewed.

Figure 3.2: Different forms of Skewness

It may be noted that the mean and standard deviation have the same units as the

original data, and the variance has the square of those units. However, the skewness

has no units; it is a pure number, like a z-score.

$$\text{Skewness} = \frac{n}{(n-1)(n-2)}\sum_{t=1}^{n}\left(\frac{x_t - \bar{x}}{s}\right)^3 \qquad (\text{eqn 3.8})$$
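The sample skewness of eqn 3.8 can be computed as follows (a minimal sketch; the function name is illustrative):

```python
import math

def sample_skewness(xs):
    # Eqn 3.8: n / ((n-1)(n-2)) * sum(((x_t - mean) / s)^3)
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    return n / ((n - 1) * (n - 2)) * sum(((x - m) / s) ** 3 for x in xs)
```

Symmetric data yields zero; a long right tail yields a positive value.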

3.7. Kurtosis

The height and sharpness of the peak relative to the rest of the data are measured by a number called kurtosis. Higher values indicate a higher, sharper peak; lower values indicate a lower, less distinct peak. This occurs because, as Wikipedia's article on kurtosis explains, higher kurtosis means that more of the variability is due to a few extreme deviations from the mean, rather than to many modest deviations.


$$\text{Kurtosis} = \left\{\frac{n(n+1)}{(n-1)(n-2)(n-3)}\sum_{t=1}^{n}\left(\frac{x_t - \bar{x}}{s}\right)^4\right\} - \frac{3(n-1)^2}{(n-2)(n-3)} \qquad (\text{eqn 3.9})$$

In the words of Kevin P. Balanda and H.L. MacGillivray “Increasing kurtosis is

associated with the movement of probability mass from the shoulders of a distribution

into its center and tails.” The mean and standard deviation have the same units as the

original data, and the variance has the square of those units. However, the kurtosis

has no units, but is a pure number, like a z-score.

Figure 3.3: Different forms of Kurtosis

The reference standard is a normal distribution, which has a kurtosis of 3. Accordingly, the excess kurtosis, defined simply as kurtosis − 3, is often presented instead.

A normal distribution has kurtosis exactly 3 (excess kurtosis exactly 0). Any

distribution with kurtosis ≈ 3 (excess ≈ 0) is called mesokurtic.

A distribution with kurtosis < 3 (excess kurtosis < 0) is called platykurtic.

Compared to a normal distribution, its central peak is lower and broader, and

its tails are shorter and thinner.

A distribution with kurtosis > 3 (excess kurtosis > 0) is called leptokurtic.

Compared to a normal distribution, its central peak is higher and sharper, and

its tails are longer and fatter.
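Eqn 3.9 is the bias-corrected sample formula whose expected value is approximately zero for normal data, i.e. it reports kurtosis in excess form; a minimal sketch (the function name is illustrative):

```python
import math

def excess_kurtosis(xs):
    # Eqn 3.9: ~0 mesokurtic, < 0 platykurtic, > 0 leptokurtic
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    fourth = sum(((x - m) / s) ** 4 for x in xs)
    return (n * (n + 1) / ((n - 1) * (n - 2) * (n - 3)) * fourth
            - 3.0 * (n - 1) ** 2 / ((n - 2) * (n - 3)))
```

For example, evenly spread data such as [1, 2, 3, 4, 5] comes out negative (platykurtic).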


From Table 3.6 it may be observed that the skewness (distribution shape) of the Nifty return series shifted between positive and negative for the individual years taken into consideration, but the overall period of 20 years showed a negative skewness in the distribution of Nifty returns.

Table 3.6: Skewness and Kurtosis

Year Skewness Kurtosis No. of Obs.

1993 -0.33569 0.33082 213

1994 0.63399 1.96098 230

1995 -0.10597 0.75154 236

1996 0.70025 1.09674 249

1997 0.05845 7.56417 244

1998 -0.09430 1.61662 250

1999 0.04450 2.24681 254

2000 -0.10528 1.49105 250

2001 -0.46151 2.26191 248

2002 0.07770 1.45728 251

2003 -0.33654 0.47011 254

2004 -1.80181 14.39703 254

2005 -0.51667 0.59184 251

2006 -0.61989 2.73131 250

2007 -0.25819 1.55815 249

2008 -0.28344 1.68816 246

2009 1.50835 12.62100 243

2010 -0.27696 0.67026 252

2011 0.27044 0.05748 247

2012 0.07563 0.66164 251

1993-2012 -0.13085 5.99630 4922

Source: Compiled Data

It may also be observed from this table that Kurtosis was beyond 3 in the years 1997,

2004 and 2009 when individual years are taken into consideration. For the overall


period of 20 years, the kurtosis came to 5.9963, almost double the reference value of 3. This indicates that the Nifty return distribution is leptokurtic in nature. As this is one of the important characteristics of financial time series data, it supports the applicability of ARCH / GARCH models.

3.8. Jarque Bera Test

In statistics, the Jarque-Bera test is a goodness-of-fit test of whether sample data have the skewness and kurtosis matching a normal distribution. The test is named after Carlos Jarque and Anil K. Bera. It is used to test the hypothesis that a given sample comes from a normally distributed random variable with unknown mean and variance. As a rule, this test is applied before using methods of parametric statistics which require distribution normality.

The test is based on the fact that the skewness and excess kurtosis of a normal distribution equal zero. Therefore, the absolute values of these parameters are a measure of the deviation of the distribution from normality. The test statistic JB is defined as

$$JB = \frac{n}{6}\left(S^2 + \frac{1}{4}(K - 3)^2\right) \qquad (\text{eqn 3.10})$$

where n is the number of observations (or degrees of freedom in general); S is the

sample skewness, and K is the sample kurtosis.

The hypotheses are set as:

H0: The return series are normally distributed

H1: The return series are not normally distributed

If the data come from a normal distribution, the JB statistic asymptotically has a chi-

squared distribution with two degrees of freedom, so the statistic may be used

to test the hypothesis that the data are from a normal distribution. The null

hypothesis is a joint hypothesis of the skewness being zero and the excess

kurtosis being zero. Samples from a normal distribution have an expected skewness of

0 and an expected excess kurtosis of 0 (which is the same as a kurtosis of 3). As the

definition of JB shows, any deviation from this increases the JB statistic.
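Eqn 3.10 can be sketched directly from the sample moments (a minimal illustration; the function name is our own):

```python
def jarque_bera(n, skew, kurt):
    # Eqn 3.10: JB = n/6 * (S^2 + (K - 3)^2 / 4); kurt is the raw kurtosis K
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
```

Under H0 the statistic is compared with the chi-squared critical value for two degrees of freedom, 5.991 at the 5% level.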


3.8a. Empirical Investigation: Jarque Bera

The descriptive statistics of Log Returns of Nifty close values for the 20 year period

are given below.

Figure 3.4: Histogram of Log Returns of Daily Nifty Close Values

Table 3.7: Descriptive Statistics of Log Returns of

Nifty Close Values from 4 Jan 1993 to 31 Dec 2012

Mean 0.041620

Median 0.076938

Standard Deviation 1.640411

Kurtosis 5.996300

Skewness -0.130850

Minimum -13.05386

Maximum 16.33432

Count 4922

Jarque-Bera 7369.966

Probability 0.000000

Source: Compiled Data

According to Table 3.7, the basic statistics indicate that the mean return (0.041620) is close to zero relative to the standard deviation (1.640411). The return series is negatively skewed over the 20 year period. The Kurtosis, which



measures the magnitude of the extremes, is greater than three, which means that the

return series are leptokurtic in shape, with higher and sharper central peak, and longer

and fatter tails than the normal distribution. The daily stock returns are thus not

normally distributed. The null hypothesis of the Jarque-Bera test, that the return series is normally distributed, is rejected (JB = 7369.966, p < 0.001), confirming that the returns are not normally distributed. Hence, ARCH / GARCH modeling is suggested.

3.9. ARCH Models

The econometric challenge is to specify how the information is used to forecast the

mean and variance of the return, conditional on the past information. The main

constraint of the ARIMA model is the assumption that the disturbance term εt in a model has a constant conditional variance through time. The studies of Mandelbrot

(1963) and Fama (1965) have recognized that such an assumption will not be valid

when studying stock returns. Hence, a more flexible model is required to describe the

volatility of the data.

Conventional econometric analysis assumes the variance of the disturbance terms to be constant over time. This assumption is very limiting for analyzing financial series because of volatility clustering; models capable of dealing with a changing variance of the series are required. Researchers engaged in forecasting time series such as stock prices and foreign exchange rates observed autocorrelation between the variance at time t and its values lagged one or more periods. When the error variance is related to the squared error term of the previous period, such autocorrelation is known as autoregressive conditional heteroskedasticity (ARCH).

While many specifications have been considered for the mean return and have been

used in efforts to forecast future returns, virtually no methods were available for the

variance before introduction of ARCH models. The primary description tool was the

rolling standard deviation. This is a standard deviation calculated using a fixed

number of the most recent observations. It assumes that the variance of tomorrow’s

return is an equally weighted average of the squared residuals from the previous days.

The assumption of equal weights seems unattractive; as one would think that the more


recent events would be more relevant and therefore should have higher weights.

Furthermore, the assumption of zero weights for observations more than one month

old is also unattractive (Engle, 2001).
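The rolling standard deviation described above can be sketched as follows (a minimal illustration; the function name is our own):

```python
import math

def rolling_std(returns, window):
    # Equally weighted standard deviation over the most recent `window` observations
    out = []
    for i in range(window, len(returns) + 1):
        w = returns[i - window:i]
        m = sum(w) / window
        out.append(math.sqrt(sum((x - m) ** 2 for x in w) / (window - 1)))
    return out
```

Every observation inside the window gets the same weight and everything outside it gets zero weight, which is exactly the feature Engle (2001) criticizes.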

The ARCH model proposed by Engle (1982) allows the data to determine the best

weights to use in forecasting the variance. A useful generalization of this model is the

GARCH introduced by Bollerslev (1986). This model is also a weighted average of

past squared residuals, but it has declining weights that never go completely to zero.

It gives models that are easy to estimate and has proven successful in predicting

conditional variances (Engle and Patton, 2001).
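The declining-weights idea can be illustrated with the GARCH(1,1) conditional-variance recursion; a sketch with arbitrary illustrative parameters (with beta = 0 it reduces to ARCH(1)):

```python
def garch_variance_path(residuals, omega, alpha, beta, initial_var):
    # sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1}
    # Recent squared residuals get weight alpha; older ones decay
    # geometrically through beta, never reaching exactly zero.
    sigma2 = [initial_var]
    for eps in residuals:
        sigma2.append(omega + alpha * eps ** 2 + beta * sigma2[-1])
    return sigma2
```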

Stock returns are characterized by statistical distributions. Most high frequency

financial time series are found deviating from normality. One of the key assumptions

of the ordinary regression is that variance of the errors is constant throughout the

sample which is known as homoskedasticity. Violation of this assumption indicates

the problem of heteroskedasticity. Findings of heteroskedasticity in stock returns are

well documented by Fama (1965), Engle (1982) and Bollerslev (1986). These studies

have found that stock return data is typically characterized by the following empirical regularities:

a. Serial Correlation in the Returns

It is a measure of relationship between successive errors. Serial correlation in

the returns indicates that successive returns are not independent.

b. Thick Tails

Skewness and kurtosis measure the shape of a probability distribution. Skewness measures the degree of asymmetry, with symmetry implying zero skewness. Positive skewness indicates a relatively long right tail compared to the left tail, and negative skewness indicates the opposite. Kurtosis indicates the extent to which probability is concentrated in the center and especially at the tails of the distribution rather than in the shoulders, the regions between the center and the tails. Every normal distribution has skewness equal to 0 and kurtosis of 3; kurtosis in excess of 3 indicates heavy tails, an indicator of leptokurtosis.


Asset returns tend to be leptokurtic, i.e. have too many values near the mean and in the tails of the distribution when compared with the normal distribution. The documentation of this empirical regularity is presented in Mandelbrot (1965). Such regularity has also been observed in the present study (Table 3.6).

c. Volatility Clustering

Large changes tend to be followed by large changes, of either sign, and small changes tend to be followed by small changes [(Engle, 1982) and (Bollerslev,

1986)]. This is known as volatility clustering. Statistically, volatility

clustering implies a strong autocorrelation in squared returns.

d. Leverage Effect

Volatility seems to react differently to a big price increase or a big price drop.

This property is known as ‘leverage effect’; and plays an important role in the

development of volatility models. The negative asymmetry in the distribution

of return questions the assumption of an underlying normal distribution. The

so-called ‘leverage effect’ first observed by Black (1976) refers to the

tendency for stock prices to be negatively correlated with changes in stock

volatility. A firm with outstanding debt and equity typically becomes more

highly leveraged when value of the firm falls. This raises the equity return

volatility.

e. Forecastable Events

Forecastable releases of important information are associated with high

volatility. There are also important predictable changes in volatility across the

trading day.

The main weaknesses of ARCH model are

a. The model assumes that positive and negative shocks have the same effects on

volatility because it depends on the square of the previous shock

b. It over-predicts the volatility because it responds slowly to large isolated

shocks in time series data (Tsay, 2005).


c. The ARCH model hardly provides any new insight into the source of variations of a financial time series; it only provides a mechanical way of describing the behavior of the conditional variance.

d. The ARCH model is rather restrictive. For instance, α₁² of an ARCH(1) model must lie in the interval [0, 1/3] if the series is to have a finite fourth moment. The constraint becomes complicated for higher order ARCH models.

3.10. Stationarity in Series

A stationary time series is one whose statistical properties such as mean, variance,

autocorrelation, etc. are all constant over time. Most statistical forecasting methods

are based on the assumption that the time series may be rendered approximately

stationary through the use of mathematical transformations. A stationarized series is

relatively easy to predict. A simple prediction that its statistical properties will be the

same in the future as they have been in the past may be made. The predictions for the

stationarized series may then be "untransformed," by reversing whatever

mathematical transformations were previously used, to obtain predictions for the

original series. Thus, finding the sequence of transformations needed to stationarize a

time series often provides important clues in the search for an appropriate forecasting

model.

Another reason for trying to stationarize a time series is to be able to obtain

meaningful sample statistics such as means, variances, and correlations with other

variables. Such statistics are useful as descriptors of future behavior only if the series

is stationary. For example, if the series is consistently increasing over time, the

sample mean and variance will grow with the size of the sample, and they will always

underestimate the mean and variance in future periods. And if the mean and variance of a series are not well defined, then neither is its correlation with other variables. For this reason, caution is required when extrapolating regression models fitted to non-stationary data.

Most business and economic time series are far from stationary when expressed in

their original units of measurement, and even after deflation or seasonal adjustment

they will typically still exhibit trends, cycles, random-walking, and other non-


stationary behavior. If the series has a stable long-run trend and tends to revert to the

trend line following a disturbance, it may be possible to stationarize it by de-trending

(e.g., by fitting a trend line and subtracting it out prior to fitting a model, or else by

including the time index as an independent variable in a regression or ARIMA

model), perhaps in conjunction with logging or deflating. Such a series is said to

be trend-stationary.

However, sometimes even de-trending is not sufficient to make the series stationary,

in which case it may be necessary to transform it into a series of period-to-period

and/or season-to-season differences. If the mean, variance, and autocorrelations of the

original series are not constant in time, even after detrending, perhaps the statistics of

the changes in the series between periods or between seasons will be constant. Such a

series is said to be difference-stationary. Sometimes it may be hard to tell the

difference between a series that is trend-stationary and one that is difference-

stationary, and a so-called unit root test may be used to get a more definitive answer.

Thus, before estimating ARCH models for a financial time series, two steps are necessary: first, check for unit roots in the residuals; second, test for ARCH effects. The input series for ARMA needs to be stationary before the Box-Jenkins methodology may be applied, so the series must first be differenced until it is stationary; log-transforming the data also helps stabilize the variance. Since the raw data are likely to be non-stationary, applying an ARCH test to them is not valid. For this reason, it is usual practice to work with the logs of the changes of the series rather than the series itself.
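The log-return transformation described above can be sketched in a few lines (the price list here is hypothetical, used only for illustration):

```python
import math

# Hypothetical closing prices, for illustration only.
prices = [100.0, 102.0, 101.0, 104.0, 103.5]

# Log prices, then first differences: each log return is ln(P_t / P_{t-1}).
log_prices = [math.log(p) for p in prices]
log_returns = [log_prices[t] - log_prices[t - 1] for t in range(1, len(log_prices))]

print([round(r, 6) for r in log_returns])
```

Differencing the log prices once removes a random-walk trend in the log-price level, which is why log returns are the usual starting point for ARMA/ARCH modeling.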

3.10.1. Unit root test process

The presence of unit root in a time series is tested using Augmented Dickey- Fuller

test. It tests for a unit root in the univariate representation of time series. For a return

series R_t, the ADF test consists of a regression of the first difference of the series against its lagged level and p lagged differences as follows:

∆r_t = α + δ r_{t−1} + Σ_{i=1}^{p} β_i ∆r_{t−i} + ε_t (eqn 3.11)


Where ∆r_t = r_t − r_{t−1} and r_t = ln(R_t).

The null and alternative hypotheses are as follows:

H0: The series contains a unit root

H1: The series is stationary

Acceptance of the null hypothesis implies non-stationarity. If the ADF test rejects the null hypothesis of a unit root in the return series, that is, if the absolute value of the ADF statistic exceeds the MacKinnon critical value, the series is stationary and we may continue to analyze the series. Before estimating a full ARCH model for a financial time series, it is necessary to check for the presence of ARCH effects in the residuals.
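As a rough illustration of eqn 3.11, the sketch below runs the Dickey-Fuller regression with no augmentation terms (p = 0) on a simulated stationary AR(1) series; the AR coefficient, seed, and sample size are arbitrary choices for the demonstration. A real analysis would use a library routine (for example, statsmodels' adfuller), which adds the lagged-difference terms and supplies the MacKinnon critical values.

```python
import math
import random

random.seed(42)

# Simulated stationary AR(1) series: r_t = 0.5 * r_{t-1} + noise (illustrative).
r = [0.0]
for _ in range(499):
    r.append(0.5 * r[-1] + random.gauss(0.0, 1.0))

dy = [r[t] - r[t - 1] for t in range(1, len(r))]  # first differences
x = r[:-1]                                        # lagged level

# OLS of dy on a constant and the lagged level (one-regressor regression).
n = len(dy)
mx, my = sum(x) / n, sum(dy) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, dy))
delta = sxy / sxx                                 # coefficient on r_{t-1}
alpha = my - delta * mx
resid = [yi - alpha - delta * xi for xi, yi in zip(x, dy)]
s2 = sum(e * e for e in resid) / (n - 2)
t_stat = delta / math.sqrt(s2 / sxx)

# For a stationary series delta is clearly negative and the t-statistic
# falls far below the (roughly -2.86) 5% critical value.
print(round(delta, 3), round(t_stat, 2))
```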

3.10.2. Empirical Investigation: Unit Root Test

Augmented Dickey-Fuller Test is conducted on the Nifty close values at “Level” and

“First Difference” which resulted in the attainment of stationarity in the series at first

difference level. The results are presented in Table 3.8.

Table 3.8: Result of ADF Test

At Level t-Statistic Prob.*

Augmented Dickey-Fuller test statistic -0.267402 0.9272

Test critical values: 1% level -3.431497

5% level -2.861932

10% level -2.567021

At First Difference t-Statistic Prob.*

Augmented Dickey-Fuller test statistic -24.38754 0.000

Test critical values: 1% level -3.431503

5% level -2.861935

10% level -2.567023

*MacKinnon (1996) one-sided p-values.

Source: Compiled Data


Figure 3.5: Graph of NSE Nifty Close Prices

Figure 3.6: Graph of NSE Nifty Log Returns

From Figure 3.6, the series is found to have a constant mean showing the stationarity

of the data. It is seen that the returns fluctuated around the mean value, which is close

to zero. The series has a non-constant variance, i.e. heteroskedasticity, which is a typical feature of financial time series data. Volatility clustering in the returns was

observed, where periods of low volatility are followed by periods of low volatility and

periods of high volatility are followed by periods of high volatility. Statistically,

volatility clustering implies a strong autocorrelation in squared returns.

As the series follows the characteristics of financial time series data, i.e.

heteroskedasticity, leptokurtosis and serial correlation, a linear model would not be



able to capture the volatility of the series. Hence non-linear models such as ARCH / GARCH have been used for modeling the volatility of the Indian stock market.

3.11. Durbin-Watson Statistic

The Durbin–Watson statistic is a test statistic used to detect the presence of

autocorrelation (a relationship between values separated from each other by a given

time lag) in the residuals (prediction errors) from a regression analysis. It is named

after James Durbin and Geoffrey Watson. The small sample distribution of this ratio

was derived by John von Neumann (von Neumann, 1941). Durbin and Watson (1950,

1951) applied this statistic to the residuals from least squares regressions, and

developed bounds tests for the null hypothesis that the errors are serially uncorrelated

against the alternative that they follow a first order autoregressive process. Later, John

Denis Sargan and Alok Bhargava developed several von Neumann–Durbin–Watson

type test statistics for the null hypothesis that the errors on a regression model follow

a process with a unit root against the alternative hypothesis that the errors follow a

stationary first order auto regression (Sargan and Bhargava, 1983).

If 𝑒𝑡 is the residual associated with the observation at time t, then the test statistic is

d = Σ_{t=2}^{T} (e_t − e_{t−1})² / Σ_{t=1}^{T} e_t² (eqn 3.12)

Where, T is the number of observations. Since d is approximately equal to 2(1 − r),

where r is the sample autocorrelation of the residuals, d = 2 indicates no

autocorrelation. The value of d always lies between 0 and 4. If the Durbin–Watson

statistic is substantially less than 2, there is evidence of positive serial correlation. As

a rough rule of thumb, if Durbin–Watson is less than 1.0, there may be cause for

alarm. Small values of d indicate successive error terms are, on average, close in

value to one another, or positively correlated. If d > 2, successive error terms are, on

average, much different in value from one another, i.e., negatively correlated. In

regressions, this may imply an underestimation of the level of statistical significance.

To test for positive autocorrelation at significance 𝛼, the test statistic d is compared to

lower and upper critical values (𝑑𝐿 and 𝑑𝑈):


If d < 𝑑𝐿 , there is statistical evidence that the error terms are positively

autocorrelated.

If d > 𝑑𝑈, there is no statistical evidence that the error terms are positively

autocorrelated.

If 𝑑𝐿 < d < 𝑑𝑈, the test is inconclusive.

Positive serial correlation is serial correlation in which a positive error for one

observation increases the chances of a positive error for another observation.

To test for negative autocorrelation at significance 𝛼 , the test statistic (4−d) is

compared to lower and upper critical values (𝑑𝐿and 𝑑𝑈):

If (4−d) < 𝑑𝐿, there is statistical evidence that the error terms are negatively

autocorrelated.

If (4−d) > 𝑑𝑈 , there is no statistical evidence that the error terms are

negatively autocorrelated.

If 𝑑𝐿 < (4−d) < 𝑑𝑈, the test is inconclusive.

Negative serial correlation implies that a positive error for one observation increases

the chance of a negative error for another observation and a negative error for one

observation increases the chances of a positive error for another.

The critical values, 𝑑𝐿 and 𝑑𝑈 , vary by level of significance (α), the number of

observations, and the number of predictors in the regression equation. Their

derivation is complex—statisticians typically obtain them from the appendices of

statistical texts.
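Eqn 3.12 is simple enough to compute directly; the residual series below are made up solely to show the two extremes of the statistic:

```python
# Durbin-Watson statistic, eqn 3.12: d = sum((e_t - e_{t-1})^2) / sum(e_t^2).
def durbin_watson(residuals):
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e * e for e in residuals)
    return num / den

# An alternating pattern (strong negative serial correlation) pushes d
# towards 4; a slowly drifting pattern (positive correlation) towards 0.
alternating = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
drifting = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]

print(round(durbin_watson(alternating), 3))  # 3.333
print(round(durbin_watson(drifting), 3))     # 0.005
```

The alternating residuals give d well above 2 (negative correlation), while the drifting ones give d near 0 (positive correlation), matching the decision rules above.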

3.12. Ljung – Box Q Statistic

The Ljung–Box test is commonly used in autoregressive integrated moving

average (ARIMA) modeling. It is important to note that this test is applied to

the residuals of a fitted ARIMA model, and not the original series. In such

applications the hypothesis actually being tested is that the residuals from the ARIMA

model have no autocorrelation. When testing the residuals of an estimated ARIMA

model, the degrees of freedom need to be adjusted to reflect the parameter estimation.


For example, for an ARIMA(p,0,q) model, the degrees of freedom should be set to h –

p – q.

The Ljung–Box test, named after Greta M. Ljung and George E. P. Box, is a type

of statistical test of whether any of a group of autocorrelations of a time series is

different from zero. Instead of testing randomness at each distinct lag, it tests the

"overall" randomness based on a number of lags, and is therefore a portmanteau test.

This test is sometimes known as the Ljung–Box Q test, and it is closely connected to

the Box–Pierce test, which is named after George E. P. Box and David A. Pierce. In

fact, the Ljung–Box test statistic was described explicitly in the paper that led to the

use of the Box-Pierce statistic, and from which that statistic takes its name. The Box-

Pierce test statistic is a simplified version of the Ljung–Box statistic for which

subsequent simulation studies have shown poor performance.

Simulation studies have shown that the Ljung–Box statistic is better for all sample

sizes including small ones. The Ljung–Box test is widely applied in econometrics and

other applications of time series analysis.

The null and alternative hypotheses of the Ljung–Box test may be defined as follows:

H0: The data are independently distributed (i.e. the correlations in the population

from which the sample is taken are 0, so that any observed correlations in the

data result from randomness of the sampling process).

Ha: The data are not independently distributed.

The test statistic is:

Q = n(n + 2) Σ_{k=1}^{h} ρ̂_k² / (n − k) (eqn 3.13)

where n is the sample size, ρ̂_k is the sample autocorrelation at lag k, and h is the number of lags being tested. For significance level α, the critical region for rejection of the hypothesis of randomness is

Q > χ²_{1−α,h}


where χ²_{1−α,h} is the (1−α)-quantile of the chi-squared distribution with h degrees of freedom.
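A hand-rolled version of eqn 3.13, applied to simulated white noise and to a simulated AR(1) series (both hypothetical data with arbitrary seed and parameters); statsmodels' acorr_ljungbox provides the same statistic together with p-values:

```python
import random

def sample_acf(x, k):
    # Sample autocorrelation at lag k.
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    return sum((x[t] - m) * (x[t - k] - m) for t in range(k, n)) / c0

def ljung_box_q(x, h):
    # Eqn 3.13: Q = n(n+2) * sum over lags of acf_k^2 / (n - k).
    n = len(x)
    return n * (n + 2) * sum(sample_acf(x, k) ** 2 / (n - k)
                             for k in range(1, h + 1))

random.seed(7)
white = [random.gauss(0.0, 1.0) for _ in range(500)]
ar1 = [0.0]
for _ in range(499):
    ar1.append(0.5 * ar1[-1] + random.gauss(0.0, 1.0))

q_white = ljung_box_q(white, 15)
q_ar1 = ljung_box_q(ar1, 15)
print(round(q_white, 1), round(q_ar1, 1))
```

Q for the white-noise series is compared with the chi-squared critical value (about 25.0 at the 5% level for h = 15) and typically falls below it, while the autocorrelated series produces a far larger Q and a clear rejection.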

3.12.1. ARCH effect test process

Consider the k-variable linear regression model.

y_t = β_1 + β_2 x_{2t} + … + β_k x_{kt} + u_t (eqn 3.14)

In addition, assume that conditional on the information available at time (t−1), the disturbance term is distributed as

u_t ~ N[0, (α_0 + α_1 u²_{t−1})] (eqn 3.15)

That is, ut is normally distributed with zero mean and

Var(u_t) = α_0 + α_1 u²_{t−1} (eqn 3.16)

That is the variance of ut follows an ARCH (1) process. The variance of u at time t is

dependent on the squared disturbance at time (t-1), thus giving the appearance of

serial correlation. The error variance may depend not only on one lagged term of the

squared error term but also on several lagged squared terms as follows:

Var(u_t) = σ_t² = α_0 + α_1 u²_{t−1} + α_2 u²_{t−2} + … + α_p u²_{t−p} (eqn 3.17)

If there is no autocorrelation in the error variance, we have

𝐻0 ∶ α1 = α2 = ⋯ = α𝑝 = 0 (𝑒𝑞𝑛 3.18)

In such a case, 𝑉𝑎𝑟 (𝑢𝑡) = α0, and we do not have the ARCH effect.

The null hypothesis is tested by the usual F test, but the ARCH-LM test of Engle (1982)

is a common test in this regard. Under ARCH-LM test the null and alternative

hypotheses for Nifty stock index are as follows:

H0: α_1 = 0 and α_2 = 0 and α_3 = 0 and … α_q = 0 (eqn 3.19)

H1: at least one α_i ≠ 0, i = 1, …, q (eqn 3.20)

Null hypothesis in this case is homoskedasticity or equality in the variance.

Acceptance of this hypothesis implies that there are no ARCH effects in the series under study. In other words, the data do not show volatility clustering, i.e. there is no heteroskedasticity or time-varying variance in the data.

Since an ARCH model may be written as an AR model in terms of squared residuals,

a simple Lagrange Multiplier (LM) test for ARCH effects may be constructed based

on the auxiliary regression as in equation 3.17. Under the null hypothesis that there are no ARCH effects: H0: α_1 = α_2 = … = α_p = 0

The test statistic is

LM = T·R² ~ χ²(p) (eqn 3.21)

where T is the sample size and R² is computed from the auxiliary regression in equation 3.17 using the estimated residuals. That is, in a large sample, TR² follows the chi-square

distribution with degrees of freedom equal to the number of autoregressive terms in

the auxiliary regression. The test statistic is defined as TR² (the number of observations multiplied by the coefficient of determination) from the last regression, and it is distributed as χ²(q) (Gujarati, 2007).

Thus, the test is one of a joint null hypothesis that all q lags of the squared residuals

have coefficient values that are not significantly different from zero. If the value of

the test statistic is greater than the critical value from the χ2 distribution, then one can

reject the null hypothesis. The test may also be thought of as a test for autocorrelation

in the squared residuals.

If P-value is smaller than the conventional 5% level, the null hypothesis that there are

no ARCH effects will be rejected. In other words, the series under investigation

shows volatility clustering or persistence (Brooks, 2002). If the LM test for ARCH

effects is significant for a time series, one could proceed to estimate an ARCH model

and obtain estimates of the time varying volatility σ2 based on past history.
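The ARCH-LM recipe above (auxiliary regression of the squared residuals on their own lags, then LM = T·R² as in eqn 3.21) can be sketched for a single lag, where R² reduces to a squared sample correlation. The ARCH(1) parameters and seed below are illustrative; real work would use a packaged routine such as statsmodels' het_arch:

```python
import math
import random

def arch_lm_1lag(resid):
    # Regress e_t^2 on e_{t-1}^2; with one regressor R^2 = corr(x, y)^2,
    # and LM = T * R^2 is asymptotically chi-squared with 1 df under H0.
    sq = [e * e for e in resid]
    y, x = sq[1:], sq[:-1]
    n = len(y)
    my, mx = sum(y) / n, sum(x) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return n * (sxy * sxy) / (sxx * syy)

random.seed(1)
# ARCH(1) residuals: sigma_t^2 = 0.2 + 0.5 * e_{t-1}^2 (illustrative values).
e = [0.0]
for _ in range(1999):
    sigma = math.sqrt(0.2 + 0.5 * e[-1] ** 2)
    e.append(sigma * random.gauss(0.0, 1.0))

lm = arch_lm_1lag(e)
# A value far above the chi-squared(1) 5% critical value of 3.84 rejects
# homoskedasticity, i.e. the series shows ARCH effects.
print(round(lm, 1))
```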


3.12.2. Serial Correlation effect test process

In Ordinary Least Squares (OLS) regression, time series residuals are often found to

be serially correlated with their own lagged values. Serial correlation means

(a) OLS is no longer an efficient linear estimator,

(b) standard errors are incorrect, and typically understated when the serial correlation is positive, and

(c) OLS estimates are biased and inconsistent if a lagged dependent variable is used as a regressor.

This test is an alternative to the Q-Statistic for testing for serial correlation. It is

available for residuals from OLS, and the original regression may include

autoregressive (AR) terms. Unlike the Durbin-Watson Test, the Breusch-Godfrey

Test may be used to test for serial correlation beyond the first order, and is valid in the

presence of lagged dependent variables.

The null hypothesis of the Breusch-Godfrey Test is that there is no serial correlation

up to the specified number of lags. The Breusch-Godfrey Test regresses the residuals

on the original regressors and lagged residuals up to the specified lag order. The

number of observations multiplied by R2 is the Breusch-Godfrey Test statistic.
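A miniature Breusch-Godfrey test for the simplest possible case: a constant-only regression (so the residuals are just the demeaned series) with one lag of the residuals, giving a statistic n·R² with one degree of freedom. This is a deliberately stripped-down sketch; full implementations (e.g. statsmodels' acorr_breusch_godfrey) also include the original regressors and allow higher lag orders:

```python
import random

def breusch_godfrey_1lag(y):
    # Residuals of a constant-only model are the demeaned observations;
    # regress them on their first lag and form n * R^2.
    m = sum(y) / len(y)
    e = [v - m for v in y]
    x, z = e[:-1], e[1:]
    n = len(z)
    mx, mz = sum(x) / n, sum(z) / n
    sxz = sum((a - mx) * (b - mz) for a, b in zip(x, z))
    sxx = sum((a - mx) ** 2 for a in x)
    szz = sum((b - mz) ** 2 for b in z)
    return n * (sxz * sxz) / (sxx * szz)   # chi-squared(1) under H0

random.seed(3)
# AR(1) data with coefficient 0.4: serially correlated by construction.
y = [0.0]
for _ in range(999):
    y.append(0.4 * y[-1] + random.gauss(0.0, 1.0))

bg = breusch_godfrey_1lag(y)
print(round(bg, 1))  # far above the chi-squared(1) 5% value of 3.84
```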

3.12.3. Empirical Investigation: Autocorrelation tests

NSE Nifty is selected as proxy of the Indian stock market and data is collected for

Nifty for a period of 20 years from 4 January 1993 to 31 December 2012. According

to the Jarque-Bera test for normality, the Nifty series showed a p-value of 0 rejecting

the null hypothesis “Return series are normally distributed”.

Autocorrelation tests have been performed on the return series and presented in Table

3.9. As the DW statistic value (1.821049) of the log returns exceeds the upper DW table value (1.779), the null hypothesis of no positive autocorrelation in the error terms cannot be rejected. It may thus be concluded that there is no statistical evidence that the error terms are positively autocorrelated. Hence the DW statistic does not provide strong evidence of autocorrelation in the Nifty return series.


Table 3.9: Results of Auto Correlation Tests

DW Statistic 1.821049

Ljung Box Q Statistic 39.265

Probability of Q Statistic 0.000

Serial correlation LM test

Prob. of Chi square (2 df) 0.000

Heteroskedasticity test for ARCH effect

Prob. Of Chi square (1 df) 0.000

Source: Compiled Data

Q(k), the Ljung-Box statistic computed up to a lag of 15 days, identified the presence of autocorrelation in the returns. The Q statistic showed a p-value of 0, rejecting the null hypothesis "There is no autocorrelation in the log return series". Thus it may be concluded that there exists autocorrelation in the log return series.

3.13. ARIMA Estimation for Stock Returns

The general ARMA model was described in the 1951 thesis of Peter Whittle,

Hypothesis testing in time series analysis, who used mathematical analysis (Laurent

series and Fourier analysis) and statistical inference. ARMA models were

popularized by a 1971 book by George E. P. Box and Jenkins, who expounded an

iterative (Box–Jenkins) method for choosing and estimating them. This method was

useful for low-order polynomials of degree three or less.

There are two distinct steps in the estimation of the parameters of the model. The first

is to estimate mean equation with the help of Box-Jenkins methodology, and the

second step is to estimate the parameters of variance equation of ARCH and GARCH

class of models.

The general model introduced by Box and Jenkins (1976) includes autoregressive as

well as moving average parameters and includes differencing in the formulation of the

model. The important three types of parameters in the model are:


i. The autoregressive parameters (p)

ii. The number of differencing passes (d)

iii. Moving average parameters (q)

In the notation introduced by Box and Jenkins, models are summarized as ARIMA (p, d, q); so a model described as (0, 1, 2) contains zero autoregressive (p) parameters and 2 moving average (q) parameters, computed for the series after it was differenced once.

To estimate mean equation, the Autoregressive Integrated Moving Average (ARIMA)

model, as developed by Box and Jenkins, has been widely applied in a variety of

economic and financial time series. Box-Jenkins method consists of the following

steps to estimate ARIMA mean equation.

a. Identification

Here the appropriate values of p, d and q are found out using autocorrelation

(ACF) and partial autocorrelation (PACF) functions1

b. Estimation

Having identified the appropriate values of p and q, the next stage is to estimate

the parameters of the autoregressive and moving average terms included in the

model

c. Diagnostic Checking

After choosing particular ARIMA model and having estimated its parameters, the

next step is to see whether chosen ARIMA model fits the data reasonably well.

Residuals from this model are examined to see if they are white noise, and if they

are, then accept the particular fit of the model, otherwise refine the model.

1 ACF is correlation between observations of a stationary process as function of the time

interval between them. PACF is also a measure of correlation used to identify the extent of

relationship between current values of a variable with earlier values of that same variable

while holding the effects of all other time lags constant.
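The identification step can be illustrated by computing the ACF directly and the PACF with the Durbin-Levinson recursion for a simulated AR(1) series (coefficient 0.6, chosen arbitrarily): the ACF decays geometrically while the PACF cuts off after lag 1, the signature used to pick an AR(p) order.

```python
import random

def acf(x, max_lag):
    # Sample autocorrelations for lags 1..max_lag.
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    return [sum((x[t] - m) * (x[t - k] - m) for t in range(k, n)) / c0
            for k in range(1, max_lag + 1)]

def pacf(rho):
    # Durbin-Levinson recursion: returns partial autocorrelations,
    # one per lag, from the list of autocorrelations rho = [rho_1, ...].
    out = [rho[0]]
    prev = [rho[0]]
    for k in range(1, len(rho)):
        num = rho[k] - sum(prev[j] * rho[k - 1 - j] for j in range(k))
        den = 1.0 - sum(prev[j] * rho[j] for j in range(k))
        phi_kk = num / den
        cur = [prev[j] - phi_kk * prev[k - 1 - j] for j in range(k)] + [phi_kk]
        out.append(phi_kk)
        prev = cur
    return out

random.seed(11)
x = [0.0]
for _ in range(4999):
    x.append(0.6 * x[-1] + random.gauss(0.0, 1.0))

rho = acf(x, 5)
phi = pacf(rho)
print([round(v, 2) for v in rho])   # geometric decay, roughly 0.6, 0.36, ...
print([round(v, 2) for v in phi])   # roughly 0.6 at lag 1, then near zero
```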


If the Box–Jenkins model is a good model for the data, the residuals should satisfy

these assumptions.

a. Model diagnostics for Box–Jenkins models is similar to model validation for

non-linear least squares fitting.

b. That is, the error term At is assumed to follow the assumptions for a stationary

univariate process.

c. The residuals should be white noise (or independent when their distributions

are normal) drawings from a fixed distribution with a constant mean and

variance.

One way to assess if the residuals from the Box–Jenkins model follow the

assumptions is to generate statistical graphics (including an autocorrelation plot) of

the residuals. Plotting the mean and variance of residuals over time and performing a

Ljung-Box test or plotting autocorrelation and partial autocorrelation of the residuals

are also helpful to identify misspecification.

However, if these assumptions are not satisfied, one needs to go back to the model identification step and try to develop a better model; the analysis of the residuals may provide some clues to a more appropriate one.

3.13.1. Fitting ARIMA Model

ARMA models in general, after choosing p and q, can be fitted by least

squares regression to find the values of the parameters which minimize the error term.

It is generally considered good practice to find the smallest values of p and q which

provide an acceptable fit to the data. For a pure AR model the Yule-Walker

equations may be used to provide a fit.

Finding appropriate values of p and q in the ARMA(p,q) model may be facilitated by

plotting the partial autocorrelation functions for an estimate of p, and likewise using

the autocorrelation functions for an estimate of q. Further information may be gleaned


by considering the same functions for the residuals of a model fitted with an initial

selection of p and q. Brockwell and Davis (p. 273) recommend using AIC for

finding p and q.
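Following the Yule-Walker route mentioned above, the sketch below fits pure AR(p) models of increasing order with the Durbin-Levinson recursion, tracks the innovation variance v_p, and scores each order with AIC ≈ n·ln(v_p) + 2p. The data are simulated AR(1), so orders p ≥ 1 should score far better than p = 0; the exact AIC form and the data are illustrative only.

```python
import math
import random

def yule_walker_aics(x, max_p):
    # Sample autocovariances gamma_0..gamma_max_p, then Durbin-Levinson.
    n = len(x)
    m = sum(x) / n
    d = [v - m for v in x]
    gamma = [sum(d[t] * d[t - k] for t in range(k, n)) / n
             for k in range(max_p + 1)]
    rho = [g / gamma[0] for g in gamma[1:]]
    aics = {0: n * math.log(gamma[0])}       # AR(0): variance is gamma_0
    v = gamma[0]
    prev = []
    for k in range(max_p):
        if k == 0:
            phi_kk = rho[0]
            cur = [phi_kk]
        else:
            num = rho[k] - sum(prev[j] * rho[k - 1 - j] for j in range(k))
            den = 1.0 - sum(prev[j] * rho[j] for j in range(k))
            phi_kk = num / den
            cur = [prev[j] - phi_kk * prev[k - 1 - j] for j in range(k)] + [phi_kk]
        v *= (1.0 - phi_kk ** 2)             # innovation variance update
        aics[k + 1] = n * math.log(v) + 2 * (k + 1)
        prev = cur
    return aics

random.seed(5)
x = [0.0]
for _ in range(1999):
    x.append(0.6 * x[-1] + random.gauss(0.0, 1.0))

aics = yule_walker_aics(x, 3)
best = min(aics, key=aics.get)
print({p: round(a, 1) for p, a in aics.items()}, "-> choose p =", best)
```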

3.13.2. Application of the Model

ARMA is appropriate when a system is a function of a series of unobserved shocks

(the MA part) as well as its own behavior. For example, stock prices may be shocked

by fundamental information as well as exhibiting technical trending and mean-

reversion effects due to market participants.

3.14. ARCH Model Specification for Nifty

ARCH models are capable of modeling and capturing many of the stylized facts of the

volatility behavior usually observed in financial time series including time varying

volatility or volatility clustering (Zivot and Wang, 2006).

The serial correlation in squared returns, or conditional heteroskedasticity (volatility

clustering), may be modeled using a simple autoregressive (AR) process for squared

residuals. For example, let yt denote a stationary time series such as financial returns,

then yt is expressed as its mean plus a white noise if there is no significant

autocorrelation in yt itself:

𝑦𝑡 = 𝑐 + ε𝑡 (𝑒𝑞𝑛 3.22)

where c is the mean of y_t, and ε_t are the residuals, which are independent and identically distributed with mean zero.

To allow for volatility clustering or conditional heteroskedasticity, assume that

Var_{t−1}(ε_t) = σ_t² (eqn 3.23)

where Var_{t−1}(ε_t) denotes the variance conditional on information at time t−1, and

σ_t² = α_0 + α_1 ε²_{t−1} + … + α_p ε²_{t−p}

Since ε_t has a zero mean, the above equation may be rewritten as:

ε_t² = α_0 + α_1 ε²_{t−1} + … + α_p ε²_{t−p} + u_t (eqn 3.24)

where u_t = ε_t² − E_{t−1}(ε_t²) is a zero-mean white noise process.


The above equation represents an AR(p) process for ε_t², and the model is known as the autoregressive conditional heteroskedasticity (ARCH) model of Engle (1982), usually referred to as the ARCH(p) model. Before estimating

a full ARCH model for a financial time series, it is necessary to test for the presence

of ARCH effects in the residuals. If there are no ARCH effects in the residuals, then

the ARCH model is unnecessary and mis-specified.

In order to identify the ARCH characteristics in Nifty, the conditional return should

be modeled first; the general form of the return may be expressed as a process of

autoregressive AR (p), up to (p) lags, as follows:

R_t = α_0 + Σ_{i=1}^{p} α_i R_{t−i} + ε_t (eqn 3.25)

This general form implies that the current return depends not only on R_{t−1} but also on the previous p return values up to R_{t−p}.

The next step is to construct a series of squared residuals (ε_t²) based on the conditional return to derive the conditional variance. Unlike the OLS assumption of a constant variance of the ε_t's, ARCH models assume that the ε_t's have a non-constant variance, or heteroskedasticity, denoted by h_t². After constructing the time series of residuals, the conditional variance is modeled in a way that incorporates the ARCH process of ε² with q lags. The general form of the conditional variance, including (q) lags of the residuals, is as follows:

σ_t² = β_0 + Σ_{i=1}^{q} β_i ε²_{t−i} (eqn 3.26)

The above equation is what Engle (1982) referred to as the linear ARCH (q) model

because of the inclusion of the (q) lags of the (ε𝑡2) in the variance equation. This

model suggests that volatility in the current period is related to volatility in the past

periods.

For example, in the case of the ARCH(1) model, if β_1 is positive, it suggests that if volatility was high in the previous period, it will continue to be high in the current period, indicating volatility clustering. If β_1 is zero, then there is no volatility clustering.


To determine the value of q, i.e. the ARCH model order, a model selection criterion such as the AIC (Akaike Information Criterion) is used. The decision rule is to select the model with the minimum value of the information criterion. This condition is necessary but not sufficient: the estimate must also meet the general requirements of an ARCH model. To be adequate, the model should have coefficients that are significant. If this requirement is met, then the specified model is adequate and said to fit the data well.

In an ARCH(q) model, news that arrived at the market more than q periods ago has no

effect on current volatility at all. In the ARCH model, the variance of the current

error is an increasing function of the magnitude of lagged errors, irrespective of their

sign. ARCH model implies that a large (small) variance tends to be followed by a

large (small) variance.
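The stylized fact just described, large (small) variances followed by large (small) variances, can be reproduced by simulating the ARCH(1) recursion with illustrative parameters: the simulated returns show little lag-1 autocorrelation in levels but clear autocorrelation in their squares.

```python
import math
import random

def lag1_corr(x):
    # Lag-1 sample autocorrelation.
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    c1 = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, n))
    return c1 / c0

random.seed(9)
a0, a1 = 0.2, 0.5          # illustrative ARCH(1) parameters
e = [0.0]
for _ in range(9999):
    e.append(math.sqrt(a0 + a1 * e[-1] ** 2) * random.gauss(0.0, 1.0))

r_levels = lag1_corr(e)                   # close to zero: returns uncorrelated
r_squares = lag1_corr([v * v for v in e]) # clearly positive: clustering
print(round(r_levels, 3), round(r_squares, 3))
```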

3.14.1. Empirical Investigation: Estimation of ARIMA

In this section, estimation of mean equation for the Indian stock market, Nifty, is

discussed followed by volatility model specifications. The mean equation is

estimated for Nifty using Box-Jenkins methodology.

3.14.2. Step 1: Identification

The input series for ARIMA needs to be stationary, which has been attained for the select market, Nifty, at the first difference level (Table 3.8). The next step involves finding the appropriate values of p, d and q. The Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF) are computed and presented in Table 3.10 below. The Ljung-Box-Pierce Q statistic is highly significant, indicating first order serial correlation in the return series.

The Partial Autocorrelation Function of the correlogram of log return series of Nifty

in Table 3.10 shows significance at lag 1, which means that AR(1) would be

applicable for modeling the mean.


Table 3.10: Correlogram of Log Returns

Autocorrelation Partial Correlation AC PAC Q-Stat Prob

|* | |* | 1 0.089 0.089 39.265 0.0000

| | | | 2 -0.033 -0.041 44.613 0.0000

| | | | 3 0.000 0.007 44.613 0.0000

| | | | 4 0.015 0.013 45.75 0.0000

| | | | 5 -0.004 -0.007 45.829 0.0000

| | | | 6 -0.044 -0.043 55.494 0.0000

| | | | 7 0.009 0.017 55.935 0.0000

| | | | 8 0.024 0.018 58.794 0.0000

| | | | 9 0.024 0.022 61.691 0.0000

| | | | 10 0.038 0.037 68.673 0.0000

| | | | 11 -0.028 -0.035 72.576 0.0000

| | | | 12 -0.019 -0.013 74.318 0.0000

| | | | 13 0.015 0.017 75.497 0.0000

| | | | 14 0.036 0.033 81.971 0.0000

| | | | 15 0.005 0.002 82.087 0.0000

Source: Compiled Data


Figure 3.7: Actual and Theoretical Autocorrelation

For modeling the mean, two specifications may be followed: "with intercept" (where a constant is included) and "without intercept" (where it is not). Of these two, the one showing the lower Akaike information criterion (AIC) is used for running the AR(1) model. Table 3.11 shows that the AIC is lower for the "with intercept" case. Hence, the AR(1) model was run with intercept, and the model is presented in Table 3.12.

Table 3.11: Basis for taking intercept in the estimation

AR(1) Coeff. prob. Constant prob. Akaike Info.

With intercept 0.08929 0 0.042113 0.0997 3.820204

Without intercept 0.089883 0 - - 3.820349

Source: Compiled Data



3.14.3. Step 2: Estimation

An AR(1) model has been used within the Box-Jenkins methodology to model the conditional mean equation. Let Yt denote the first-differenced Nifty return series. The regression run with AR(1) resulted in the following tentatively identified AR model:

Yt = c + α1 Yt−1 + εt (eqn. 3.27)

Using EViews 6, the following estimates are obtained:

Yt = 0.042113 + 0.08929 Yt−1 + εt (eqn. 3.28)

From the estimated equation in Table 3.12, it may be observed that the AR(1) coefficient is significant, rejecting the null hypothesis that the residuals are not stationary. The AR(1) coefficient, α1 = 0.08929, is positive, which suggests that if volatility was high in the previous period it will continue to be high in the current period, and vice versa. Thus the estimated AR(1) equation indicates volatility clustering in the series.

Table 3.12: AR(1) model

Variable Coefficient Std. Error t-Statistic Prob.

C 0.042113 0.025575 1.646647 0.0997

AR(1) 0.08929 0.014198 6.288664 0.0000*

R-squared 0.007976 Mean dependent var 0.042071

Adjusted R-squared 0.007774 S.D. dependent var 1.640273

S.E. of regression 1.633885 Akaike info criterion 3.820204

Sum squared resid 13131.66 Schwarz criterion 3.822847

Log likelihood -9397.613 Hannan-Quinn criter. 3.821131

F-statistic 39.5473 Durbin-Watson stat 1.992952

Prob(F-statistic) 0.0000

Source: Compiled Data


3.14.4. Step 3: Diagnostic Checking

Having chosen the ARIMA model specified in equation 3.27, and having estimated its parameters in equation 3.28, the next step involves checking whether the chosen model fits the data reasonably well. One simple diagnostic is to obtain the residuals from equation 3.28 and compute the ACF and PACF of these residuals up to a lag of 15 days.

The estimated ACF and PACF of the residuals, as well as of the squared residuals, showed no significant serial correlation up to the 15th lag. The correlograms of autocorrelation and partial autocorrelation for both the residuals and the squared residuals, presented in Tables 3.13 and 3.14, give the impression that the residuals estimated from equation 3.28 are purely white noise.

These residuals were further tested for ARCH effects using the ARCH heteroskedasticity test and for serial correlation using the Serial Correlation LM test. The F-statistic is significant in both cases at the 5% level of significance, rejecting the null hypotheses of no heteroskedasticity and no serial correlation. Thus, these tests further suggest the use of a non-linear model for capturing volatility.
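The ARCH test applied here regresses the squared residuals on their own lags and compares n·R² with a chi-square critical value. Below is a minimal one-lag sketch of that idea, with simulated series as stand-ins for the actual Nifty residuals; it is not the EViews implementation.

```python
import math
import random

def arch_lm_stat(resid):
    """One-lag sketch of Engle's ARCH LM test: regress e_t^2 on e_{t-1}^2
    and return n*R^2, compared against the chi-square(1) 5% critical
    value of 3.841 under H0 of no ARCH effect."""
    e2 = [r * r for r in resid]
    x, y = e2[:-1], e2[1:]
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    ssr = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
    sst = sum((yi - my) ** 2 for yi in y)
    return n * (1.0 - ssr / sst)

# Simulated stand-ins: iid noise (no ARCH) versus an ARCH(1) process.
random.seed(3)
iid = [random.gauss(0, 1) for _ in range(3000)]
arch = [0.0]
for _ in range(2999):
    arch.append(random.gauss(0, 1) * math.sqrt(0.2 + 0.5 * arch[-1] ** 2))
```

A statistic above the critical value, as found for the Nifty residuals, rejects "no ARCH effect" and motivates a GARCH-type variance model.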


Table 3.13: Correlogram of Residuals

Q-statistic probabilities adjusted for 1 ARMA term(s)

Lag     AC       PAC      Q-Stat    Prob

 1      0.003    0.003    0.0586
 2     -0.042   -0.042    8.6853    0.003
 3      0.001    0.001    8.6906    0.013
 4      0.016    0.014    9.9453    0.019
 5     -0.001   -0.001    9.9522    0.041
 6     -0.045   -0.044   19.972     0.001
 7      0.012    0.012   20.656     0.002
 8      0.021    0.017   22.878     0.002
 9      0.019    0.020   24.640     0.002
10      0.039    0.041   32.013     0.000
11     -0.030   -0.030   36.529     0.000
12     -0.018   -0.017   38.104     0.000
13      0.014    0.013   39.128     0.000
14      0.035    0.035   45.264     0.000
15      0.002    0.005   45.277     0.000

Source: Compiled Data


Table 3.14: Correlogram of Squared Residuals

Q-statistic probabilities adjusted for 1 ARMA term(s)

Lag     AC       PAC      Q-Stat    Prob

 1      0.017    0.017    1.3498
 2      0.005    0.005    1.4620    0.227
 3      0.008    0.007    1.7484    0.417
 4     -0.008   -0.008    2.0384    0.564
 5      0.007    0.007    2.2467    0.690
 6     -0.005   -0.005    2.3752    0.795
 7     -0.003   -0.003    2.4252    0.877
 8     -0.016   -0.016    3.7079    0.813
 9     -0.018   -0.017    5.2907    0.726
10      0.001    0.001    5.2920    0.808
11     -0.007   -0.007    5.5624    0.851
12     -0.016   -0.016    6.8205    0.813
13      0.010    0.010    7.2719    0.839
14     -0.009   -0.009    7.7038    0.862
15     -0.014   -0.014    8.6274    0.854

Source: Compiled Data


3.15. GARCH Model

The problem with applying the original ARCH model is the non-negativity constraint on the coefficient parameters βi, needed to ensure the positivity of the conditional variance. When a model requires many lags to capture the process correctly, non-negativity may be violated. To avoid the long lag structure of the ARCH(q) model developed by Engle (1982), Bollerslev (1986) generalized the ARCH model, the so-called GARCH, by including lagged values of the conditional variance. Thus, the GARCH(p,q) model specifies the conditional variance as a linear combination of q lags of the squared residuals (εt−i²) from the conditional return equation and p lags of the conditional variance (σt−j²). The GARCH(p,q) specification may then be written as follows:

σt² = β0 + Σ(i=1 to q) β1 εt−i² + Σ(j=1 to p) β2 σt−j² (eqn 3.29)

where the coefficients β1, β2 > 0 and (β1 + β2) < 1.

The coefficients are all assumed to be positive to ensure that the conditional variance σt² is always positive. This model is known as the generalized ARCH or GARCH(p,q) model. When p = 0, so that no lagged conditional variance terms appear, the GARCH model reduces to the ARCH model.
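The variance recursion can be illustrated directly. The sketch below iterates the GARCH(1,1) case of the equation above; the coefficient values passed in are only placeholders in the style of the estimates reported later in this chapter.

```python
def garch11_variance(resid, beta0, beta1, alpha1):
    """Iterate the GARCH(1,1) recursion
    sigma2_t = beta0 + beta1*e_{t-1}^2 + alpha1*sigma2_{t-1},
    started at the unconditional variance beta0/(1 - beta1 - alpha1)."""
    sigma2 = [beta0 / (1.0 - beta1 - alpha1)]
    for e in resid[:-1]:
        sigma2.append(beta0 + beta1 * e * e + alpha1 * sigma2[-1])
    return sigma2

# Illustrative shocks and coefficients; with all coefficients positive,
# every sigma2_t stays positive.
path = garch11_variance([0.5, -1.2, 0.3, 0.0], 0.000891, 0.113218, 0.862069)
```

Keeping every coefficient positive guarantees a positive variance path, which is precisely the non-negativity requirement discussed above.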

To show the significance of one lag of both εt² and σt², i.e. εt−1² and σt−1², in explaining the conditional variance, the GARCH process is employed by first estimating the conditional return equation to derive εt, and then estimating the conditional variance using the following equation:

σt² = β0 + β1 εt−1² + α1 σt−1² (eqn 3.30)

The adequacy of the GARCH model may be examined through the standardized residuals (ε/σ), where σ is the conditional standard deviation calculated by the GARCH model and ε is the residual of the conditional return equation.

If the GARCH model is well specified, then the standardized residuals will be Independent and Identically Distributed (IID). To show this, a two-step test is needed. The first step is to calculate the Ljung-Box Q-statistics (LB) on the standardized residuals. This test may be used to test for remaining serial


correlation in the mean equation and to check the specification of the mean equation.

If the mean equation is correctly specified, all Q-statistics should not be significant.

The next step is to calculate the Q-statistics of the squared standardized residuals.

This test may also be used to test for remaining ARCH in the variance equation and to

check the specification of the variance equation. If the variance equation is correctly

specified, all Q-statistics should not be significant. Put another way, if the GARCH is

well specified, then the LB statistic of the standardized residuals will be less than the

critical value of the Chi-square statistic.

The test for mean equation specification can be thought of as a test for autocorrelation in the standardized residuals. The null hypothesis is that there is no autocorrelation up to order k in the residuals. If the value of the test statistic is greater than the critical value of the Q-statistic, the null hypothesis can be rejected. Alternatively, if the p-value is smaller than the conventional significance level, the null hypothesis of no autocorrelation is rejected. In other words, the series under investigation shows volatility clustering or volatility persistence. The same is true for the variance equation; the only difference is that the test is then done on the squared standardized residuals.

Under the GARCH(p,q) model, the conditional variance of εt, σt², depends on the squared residuals in the previous q periods and the conditional variance in the previous p periods. Usually a GARCH(1,1) model, with only three parameters in the conditional variance equation, is adequate to obtain a good model fit for financial time series (Zivot and Wang, 2006).

Many variants of the GARCH model have been proposed in the literature thereafter,

including the following:

a. GARCH (the simplest GARCH)

b. M-GARCH (GARCH in Mean)

c. E-GARCH (Exponential GARCH)

d. T-GARCH (Threshold GARCH)

e. C-GARCH (Component GARCH)


f. CT-GARCH (Asymmetric Component Threshold GARCH)

g. P-GARCH (Power GARCH)

h. I-GARCH (Integrated GARCH)

i. V-GARCH (Variance GARCH)

j. GJR-GARCH (Glosten, Jagannathan and Runkle)

k. GARCH-GED (Generalized Error Distribution residuals)

ARCH and GARCH models assume conditional heteroskedasticity with a homoskedastic unconditional error variance, i.e. the changes represent temporary and random departures from a constant unconditional variance.

The advantage of the GARCH model is that it captures the tendency in financial data for volatility clustering. It therefore enables us to make the connection between information and volatility explicit, since any change in the rate of information arrival to the market will change the volatility in the market. Thus, unless information arrival remains constant, which is hardly the case, volatility must be time varying even on a daily basis.

In a GARCH process, unexpected returns of the same magnitude (irrespective of their sign) produce the same amount of volatility. But Engle and Ng (1993) argue that if a negative return shock causes more volatility than a positive return shock of the same magnitude, the GARCH model under-predicts the amount of volatility following bad news and over-predicts it following good news.

3.15.1. Stationarity and Persistence

The GARCH(p,q) process is defined as stationary when {(α1 + α2 + … + αq) + (β1 + β2 + … + βp)} < 1. Large GARCH lag coefficients βp indicate that shocks to the conditional variance take a long time to die out, so volatility is "persistent". A large GARCH error coefficient αq means that volatility reacts intensely to market movements; if αq is relatively high and βp is relatively low, volatilities tend to be "spiky".


If (α + β) is close to unity, then a shock at time t will persist for many future periods; a high value implies a "long memory". Non-negativity constraints on αq, βp and α0 can create difficulties in estimating GARCH models. Furthermore, those non-negativity constraints imply that increasing εt² in any period increases ht+m for all m ≥ 1, ruling out random oscillatory behavior in the process.
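The stationarity condition above amounts to a one-line check. A small helper, assuming (as in this chapter's notation) that the α's are the error (ARCH) coefficients and the β's the lag (GARCH) coefficients:

```python
def garch_persistence(alphas, betas):
    """Stationarity check for GARCH(p,q): the process is covariance-stationary
    when the sum of all error and lag coefficients is below 1; the closer the
    sum is to 1, the more persistent the volatility."""
    s = sum(alphas) + sum(betas)
    return {"persistence": s, "stationary": s < 1.0}

# Error coefficient 0.113218 and lag coefficient 0.862069, echoing the
# estimates reported for the fitted model later in this chapter.
check = garch_persistence([0.113218], [0.862069])
```

A persistence near (but below) one signals long memory without violating stationarity, which is exactly the situation found for Nifty.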

3.15.2. Modeling the Conditional Variance

After confirming the presence of volatility clustering and the ARCH effect in the log return series, the GARCH model was run. The model was run separately with the error distribution functions "Student's t with fixed df" and "Student's t", and the results were compared to find which best fitted the model; the comparison is presented in Table 3.15. The best fit is identified on the basis of the Akaike information criterion (AIC), i.e. the function showing the smaller absolute value of AIC is chosen as the best fitted model.

Table 3.15: Comparison of both functions for the best fitted model

                           Student's t with fixed df   Student's t

@SQRT(GARCH)               significant                 significant
RESID(-1)^2                significant                 significant
GARCH(-1)                  significant                 significant
Akaike info criterion      -0.71204                    -0.71298
Schwarz criterion          -0.70412                    -0.70374

Residual Tests
ARCH effect                accept H0*                  accept H0*
Serial Correlation test    accept H0**                 accept H0**

* H0 is framed as "There is no ARCH effect"
** H0 is framed as "There is no Serial Correlation effect"

Source: Compiled Data


From an examination of the data presented in Table 3.15, we find that all requirements for the model run are fulfilled by both functions. It is observed that the absolute AIC value is smaller for the "Student's t with fixed df" function, which is therefore identified as the best fitted model for Nifty for the select period.

Thus, as per the specified equation 3.30, the best fitted model for Nifty for the select period according to the best-fit AIC function, i.e. "Student's t with fixed df", is presented in Table 3.16 below. The GARCH model estimated according to equation 3.30 is also given in this table.

Table 3.16: Estimated GARCH Equation at Student’s t with df

Dependent Variable: LNRET

Method: ML - ARCH (Marquardt) - Student's t distribution

Sample: 1 4922

Included observations: 4922

Convergence achieved after 12 iterations

Presample variance: backcast (parameter = 0.7)

t-distribution degree of freedom parameter fixed at 10

GARCH = C(4) + C(5)*RESID(-1)^2 + C(6)*GARCH(-1)

Variable Coefficient Std. Error z-Statistic Prob.

@SQRT(GARCH) 0.369844 0.051541 7.175735 0.0000

C 0.035825 0.008072 4.438016 0.0000

U 0.999837 0.001408 710.2163 0.0000

Variance Equation

C 0.000891 0.00015 5.954406 0.0000

RESID(-1)^2 0.113218 0.00994 11.39011 0.0000

GARCH(-1) 0.862069 0.010934 78.84313 0.0000

R-squared 0.986073 Mean dependent var 0.04162

Adjusted R-squared 0.986059 S.D. dependent var 1.640411

S.E. of regression 0.19369 Akaike info criterion -0.71204

Sum squared resid 184.4275 Schwarz criterion -0.70412

Log likelihood 1758.34 Hannan-Quinn criter. -0.70926

F-statistic 69611.96 Durbin-Watson stat 1.825068

Prob(F-statistic) 0.0000

Source: Compiled Data


Figure 3.8: Descriptive statistics of Residuals of fitted GARCH(1,1) Model

Figure 3.9: Volatility Clustering of fitted GARCH(1,1) Model

(Figure 3.8 reports the descriptive statistics of the standardized residuals, Sample 2 to 4922, 4921 observations: Mean -0.037177, Median -0.028802, Maximum 7.475823, Minimum -6.281666, Std. Dev. 1.019179, Skewness -0.095965, Kurtosis 5.524925, Jarque-Bera 1314.744 with Probability 0.000000. Figure 3.9 plots the residual, actual and fitted series over the sample.)


According to the GARCH model (eqn 3.30),

σt² = β0 + β1 εt−1² + α1 σt−1²

the value of RESID(-1)^2 represents β1 and the value of GARCH(-1) represents α1. According to the estimated GARCH equation, the values of β1 and α1 are identified as 0.113218 and 0.862069 respectively.

It is important to observe that the α1 value is large in magnitude, implying that volatility reacts intensely to market movements. The sum of these parameter estimates (0.975287) is very close to, but smaller than, unity. A sum smaller than unity indicates that the stationarity condition is not violated; a sum close to unity indicates a long persistence of shocks in volatility.

3.15.3. Half Life of Volatility Persistence

These parameter estimates of the GARCH(1,1) model are further used to calculate the half life of volatility persistence using the following formula:

Half life = ln(0.5) / ln(α + β) (eqn 3.31)

The half life of volatility persistence (shock) is calculated to be 27.699 days. This implies that shocks to volatility would die out within approximately 28 days.
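Equation 3.31 is straightforward to verify numerically. The sketch below plugs in the persistence of the fitted GARCH(1,1) model (0.113218 + 0.862069):

```python
import math

def half_life(persistence):
    """Half life of a volatility shock: ln(0.5) / ln(alpha + beta) (eqn 3.31)."""
    return math.log(0.5) / math.log(persistence)

# Persistence of the fitted model: 0.113218 + 0.862069 = 0.975287
hl = half_life(0.113218 + 0.862069)  # close to the 27.699 days reported above
```

The closer the persistence is to one, the longer the half life, which is why a sum of 0.975 translates into nearly a month of decay.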

It may be observed that the lag coefficient of the conditional variance, α1 (0.862069), is greater than the error coefficient β1 (0.113218), which implies that volatility is not spiky. It also indicates that volatility does not decay speedily and tends to die out slowly.

Thus, it may be concluded from this estimation that volatility in the Nifty reacts intensely to market movements, and the long persistence of volatility indicates that the Indian market is inefficient; hence information is not reflected in stock prices quickly.


3.16. Factors Affecting Volatility

Several economic factors may cause slow changes in stock market volatility. These

include corporate leverage, personal leverage, business conditions, trading volume

and trading halts, computerized trading, noise trading, international linkages and

others.

3.16.1. Corporate leverage

Financial and operating leverage affect the volatility of stock returns. Financial leverage refers to the use of debt financing to increase the expected return and risk of equity capital. The use of fixed assets to increase the expected profitability and risk of production and marketing activities is known as operating leverage. The standard deviation of the stock returns of an all-equity firm simply equals the standard deviation of the returns on its assets. If the firm issues debt to buy back its shares, the volatility of its stock returns will increase because the stockholders still have to bear most of the risk. Similarly, large amounts of operating leverage make the value of the firm more sensitive to economic conditions. When demand for the company's products falls off unexpectedly, the profits of a firm with large fixed costs will fall more than the profits of a firm that avoids large capital investments, causing higher stock return volatility.

3.16.2. Personal Leverage

It refers to the use of personal debt to increase the expected return and risk of an

individual’s investment portfolio. Much recent debate has focused on the effects of

margin requirements on the volatility of aggregate stock prices.

3.16.3. Business Conditions

There is strong evidence that stock volatility increases during economic recessions. This relationship may in part reflect operating leverage, as recessions are typically associated with excess capacity and unemployment. Fixed costs for the economy would then have the effect of increasing the volatility of stock returns during periods of low demand.


3.16.4. Trading Volume & Trading Halts

It has been argued that increased trading activity and stock return volatility occur together. With high trading volume, prices fluctuate rapidly, causing volatility. Stock exchanges apply market-wide or scrip-specific trading halts to provide a "cooling-off" period that allows investors to re-evaluate market information and re-formulate their investment strategies.

A trading halt may be applied in the form of a circuit breaker or a circuit filter. Circuit filters are also referred to as price limits, since they are applied whenever the stock price change exceeds a preset percentage limit in either direction. There is no theoretical basis for determining whether the imposition of circuit breakers will have the desired effect of reducing stock market volatility, but it is believed that they pacify volatility.

3.16.5. Computerized Trading

The sophistication of information technology has made it much easier for a larger number of people to learn about and react to information very quickly, thereby increasing liquidity. The liquidity of an organized securities market plays an important part in supporting the value of traded securities, allowing prices to change quickly. This induces variation in prices, causing volatility.

3.16.6. Noise Trading

Noise trading is trading on noise as if it were information. It is essential to the existence of liquid markets, providing an otherwise missing ingredient: the more noise trading there is, the more liquid markets will be, in the sense of having frequent trades that allow us to observe prices. But noise trading actually puts noise into prices. The price of a stock reflects both the information that information traders trade on and the noise that noise traders trade on. The farther stocks get from their fundamental values, the more aggressive the noise traders become, which creates more volatility in the market.


3.16.7. International Linkages

The movements of Indian stock prices are influenced by the movements of stock prices in other overseas markets, and vice versa. Due to the greater integration of the world's stock markets, international developments, like the recent financial crisis of the USA and Iraq and the crisis in the European countries, would have varying degrees of impact on the Indian stock market and the other stock markets of the world.

3.16.8. Other factors

The extensive coverage of stock market news by the media indicates that it is an important source of information and comment on the stock market. This brings about changes in the buying and selling decisions of investors, thereby affecting stock prices and creating volatility.


3.17. Conclusions

This study was carried out to understand the volatility behavior of the Indian stock market by computing historical volatility levels of Nifty using classical, range-based and drift-independent volatility estimators. The study also performs autocorrelation tests and estimates the conditional variance of the sample return series through a GARCH(1,1) model.

On the whole, the analysis of 20 years of data, from January 1993 to December 2012, established two phases of heightened volatility in Nifty: the boom and subsequent crash of the Indian stock market during 1999-2000, and the subprime financial crisis that spread across the globe during 2008-2009. Excepting these two phases, the 20-year period exhibited a trend closer to stationarity.

The 28-day persistence in the volatility of the Nifty return series indicates that volatility in the Indian stock market reacts intensely to market movements and that the shocks it faces take a long time to die out. This may mean that the Indian market is inefficient; hence information is not reflected in stock prices quickly.


References

52. Angela Black, Patricia Fraser, 2003, Are Stock Prices too Volatile and Returns

too High? A Reassessment of the Empirical Evidence Using a Dynamic

Version of the CAPM, ICFAI Journal of Applied Finance, 9(1), January, 38-

57

53. Arestis P., Demetriades P.O., Luintel K.B., 2001, Financial Development and

Economic Growth: The Role of Stock Markets, Journal of Money, Credit and

Banking, 33(2), 16-41

54. Asad Ahmad, Rana U.S., 2012, Forecasting Performance of Various Volatility

Models on Intra-Day Equity Price in the Indian Stock Market, Indian Journal

of Finance, 6(6), June, 21-29

55. Atanu Das, Pramatha Nath Basu, Tapan Kumar Ghoshal, 2009, Stochastic

Volatility Model for Indian Security Indices: VaR Estimation and Back

Testing, Indian Journal of Finance, 3(9), September, 43-47

56. Beckers S., 1983, Variances of Security Price Return based on High, Low and

Close Prices, Journal of Business, 56, 97-112

57. Bekaert G., Harvey C.R., 1995, Time Varying World Market Integration,

Journal of Finance, American Finance Association, 50(2), June, 403-444

58. Bhaskkar Sinha, 2006, Modeling Stock Market Volatility in Emerging

Markets: Evidence from India, The ICFAI Institute of Management Teachers

(IIMT), Working paper series

59. Black F., 1976, Studies of Stock Price Volatility Changes, Proceedings of

1976 Meetings of the American Statistical Association, Business and

Economics Statistics Section, Washington, DC, American Statistical

Association, 177-181

60. Bollerslev Tim, 1986, Generalized Autoregressive Conditional

Heteroskedasticity, Journal of Econometrics, 31, April, 307-327

61. Bradford De Long J., Marco Becht, 1992, Excess Volatility and the German

Stock Market, 1876-1990, NBER Working Paper 4054, March

62. Brailsford T.J., Faff R.W., 1996, An Evaluation of Volatility Forecasting

Techniques, Journal of Banking and Finance, 20, 419-438


63. Brooks Chris, 2002, Introductory Econometrics for Finance, Cambridge

University Press

64. Brooks, 1998, Predicting Stock Market Volatility: Can Market Volume Help?,

Journal of Forecasting, 17, 59-80

65. David X Li, 1999, Value at Risk Based on the Volatility, Skewness and

Kurtosis, Riskmetrics Group, March 4, available at [email protected]

66. Deb S.S, Vuyyuri S, Roy B, 2003, Modelling Stock Market Volatility in India:

A Comparison of Univariate Deterministic Models, ICFAI Journal of Applied

Finance, October, 19-33

67. Dr. Som Sankar Sen, 2010, On the Volatility of S&P CNX NIFTY, Indian

Journal of Finance, 4(5), May, 53–57

68. Durbin J, Watson G.S., 1950, Testing for Serial Correlation in Least Square

Regression-I, Biometrika, 37, 409-428

69. Durbin J, Watson G.S., 1951, Testing for Serial Correlation in Least Square

Regression-II, Biometrika, 38, 159-179

70. Durbin J, Watson G.S., 1971, Testing for Serial Correlation in Least Square

Regression-III, Biometrika, 58(1), 1-19

71. Engle R., 1982, Autoregressive Conditional Heteroskedasticity with Estimates

of the Variance of UK Inflation, Econometrica, 50(4), 987-1008

72. Engle R.F., Ng V.K., 1993, Measuring and Testing the Impact of News on

Volatility, Journal of Finance, 48, 1749-1778

73. Fama E., 1965, The Behavior of Stock Market Prices, Journal of Business,

38(1), 34-105

74. Feller W., 1951, The Asymptotic Distribution of the Range of Sums of

Independent Random Variables, The Annals of Mathematical Statistics, 22(3),

427-432

75. Franklin R. Edwards, 1988, Futures Trading and Cash Market Volatility:

Stock Index and Interest Rate Futures, Journal of Futures Markets, 8(4), 421-

439

76. Garman M.B., Klass M.J., 1980, On the Estimation of Security Price

Volatilities from Historical Data, Journal of Business, 53(1), January, 67-78

77. Garner A., 1988, Has the Stock Market Crash reduced consumer spending?,

Federal Reserve Bank of Kansas City Economic Review, April, 3-16


78. Gertler M., Hubbard R.G., 1989, Factors in Business Fluctuations, Financial

Market Volatility, Federal Reserve Bank of Kansas City, 33-72

79. Goudarzi H., C.S. Ramanarayanan, 2010, Modeling and Estimation of

Volatility in the Indian Stock Market, International Journal of Business and

Management, 5(2), 85-98, available at www.ccsenet.org/ijbm

80. Gujarati Damodar N., Sangeeta, 2007, Basic Econometrics, The McGraw Hill

Publishing Company

81. Harvey C.R., 1995, Predictable Risk and Return in Emerging Stock Markets,

Review of Financial Studies, 8(3), 773-816

82. Harvinder Kaur, 2004, Stock Market Volatility in India, The Indian Journal of

Commerce, 57(4), October-December, 55-70

83. Jayanth R. Varma, 1999, Value at Risk Models in the Indian Stock Market,

Working paper No.99-07-05, Indian Institute of Management, Ahmedabad,

July

84. Jean, Philippe Peters, 2001, Estimating and forecasting volatility of stock

indices using asymmetric GARCH models and (Skewed) Student-t densities,

Ecole d’Administration des Affaires, University of Liege, Belgium, 20 March

85. John Campbell, Martin Lettau, Burton Malkiel, Yexiao Xu, 2001, Have

Individual Stocks Become More Volatile? An Empirical Exploration of

Idiosyncratic Risk, Journal of Finance, 56(1), 1-43

86. Karmakar M., 2003, Heteroskedastic Behavior of the Indian Stock Market:

Evidence and Explanation, Journal of Academy of Business and Economics,

1(1), 27-36

87. Karmakar M., 2005, Modeling Conditional Volatility of the Indian Stock

Markets, Vikalpa, 30(3), July-September, 21-35

88. Karmakar M., 2006, Stock Market Volatility in the Long Run, 1961-2005,

Economic and Political Weekly, 1796-2000

89. Kevin P. Balanda, H.L. MacGillivray, 1988, Kurtosis: A Critical Review, The

American Statistician, 42(2), May, 111–119

90. Mahajan S. and Singh B., 2008, Return, Volume and Volatility Analysis in

Indian Stock Market, Paradigm, XII(1), January-June

91. Mandelbrot B., 1963, The Variation of Certain Speculative Prices, Journal of

Business, 36, 394-419


92. Markowitz H., 1952, Portfolio Selection, Journal of Finance, 7(1), March, 77-

91

93. Mehta B.C., Kranti Kapoor, 2005, Fundamentals of Econometrics, Himalaya

Publishing House, Mumbai

94. Michael V.P., 2000, Research Methodology in Management, Himalaya

Publishing House, Mumbai

95. Mishra P.K., Das K.B., Pradhan B.B., 2010, Global Financial Crisis and Stock

Return Volatility in India, Indian Journal of Finance, 4(6), June, 21–26

96. Neumann V.J., 1941, Distribution of the Ratio of the Mean Square Successive

Difference to the Variance, The Annals of Mathematical Statistics, 12, 367-

395

97. Obaidullah M., 1991, The Distribution of Stock Returns, Chartered Financial

Analyst

98. Parkinson M., 1980, The Extreme Value Method for Estimating the Variance

of the Rate of Return, The Journal of Business, 53(1), 61-65

99. Poon S., Granger C., 2003, Forecasting Volatility in Financial Markets: A

Review, Journal of Economic Literature, 41, 478-539

100. Puja Padhi, 2005, Stock Market Volatility in India: A Case of Select Scripts,

SSRN – Id 873985, www.ssrn.com

101. Rajan M.P., 2011, Volatility Estimation in the Indian Stock Market Using

Heteroskedastic Models, Indian Journal of Finance, 5(6), June, 26-32

102. Ramana Rao S.V., and Tripathy, 2008, Volatility Tests and Efficient Stock

Markets: A Study in India, Journal of International Business and Economics

103. Ramana Rao S.V., Kanagaraj A., Naliniprava Tripathy, 2008, Does Individual

Stock Futures Affect Stock Market Volatility in India?, Journal of the Indian

Institute of Economics, 50(1), 125 – 135

104. Ravi Madapati, 2005, Forecasting Volatility of BSE-30 Index Using

Econometric Models, The Journal of Derivatives Markets, ICFAI University

Press, January, 54-80

105. Robert F Engle and Andrew J Patton, 2001, What good is a volatility model?,

Quantitative Finance, Institute of Physics Publishing, 1, January, 237-245

106. Rogers L.C.G., Satchell S.E., 1991, Estimating Variance from High, Low and

Close Prices, Annals of Applied Probability, 1(4), January, 504-512


107. Rogers L.C.G., Satchell S.E., Yoon Y., 1994, Estimating the Volatility of Stock Prices: A Comparison of Methods that Use High and Low Prices, Applied Financial Economics, 4(3), January, 241-247

108. Tsay R.S., 2005, Analysis of Financial Time Series, 2nd Edition, John Wiley and Sons, New York

109. Saichev A., Sornette D. and Filimonov V., 2009, Most Efficient Homogeneous Volatility Estimators, CCSS Working Paper Series, No. CCSS-09-007, 12 August, http://arxiv.org/pdf/0908.1677.pdf

110. Sargan J.D., Bhargava A., 1983, Testing Residuals from Least Squares Regression for Being Generated by the Gaussian Random Walk, Econometrica, 51, 153-174

111. Schwert G.W., 1989(a), Business Cycles, Financial Crises and Stock Volatility, Carnegie-Rochester Conference Series on Public Policy, Elsevier, 31(1), January, 83-125

112. Schwert G.W., 1989(b), Why Does Stock Market Volatility Change Over Time?, Journal of Finance, 44, 1115-1154

113. Seth A.K., Saloni G., 2005, Understanding Volatility at BSE: A Quality Control Approach, Decision, 32, January-June

114. Thomas S., 1995, Heteroskedasticity Models on the Bombay Stock Exchange, July, available from the author at [email protected]

115. Singh V.K. and Ahmad N., 2011, Modeling S&P CNX Nifty Index Volatility with GARCH Class Volatility Models: Empirical Evidence from India, Indian Journal of Finance, 5(2), February, 34-47

116. Wessels D.R., 2006, The Characteristics of Stock Market Volatility, www.indexinvestor.co.za

117. Wiggins J.B., 1991, Empirical Tests of the Bias and Efficiency of the Extreme Value Variance Estimator for Common Stocks, The Journal of Business, 64(3), 417-432

118. Wiggins J.B., 1992, Estimating the Volatility of S&P 500 Futures Prices Using the Extreme Value Method, Journal of Futures Markets, 12, 265-273

119. Yang D. and Zhang Q., 2000, Drift Independent Volatility Estimation Based on High, Low, Open and Close Prices, Journal of Business, 73(3), 477-491


120. Zivot E. and Wang, 2006, Modeling Financial Time Series with S-PLUS, 2nd Edition, Springer

121. http://traderfeed.blogspot.in/2008/10/stock-market-volatility-historical.html

122. http://www.investmentu.com/2005/July/stock-market-volatility.html

123. www.caspur.it/risorse/softappl/doc/sas_docs/ets/chap8/sect8/htm

124. www.mathworks.in/access/helpdesk/help/toolbox/econ/garchfit.html

125. www.mathworks.in/access/helpdesk/help/toolbox/econ/garchsim.html

126. www.mysmu.edu/faculty/yujun/econ604_proj_2006/mgarch-slides-LB-print.pdf

127. www.pages.stern.nyu.edu/~churvich/timeseries/Handouts/GARCH-churvich

128. www.stockmarkettrivia.com

129. www.voxeu.org

130. www.wabash.edu

131. www.wiki.answers.com

132. www.wwcap.com

133. www.xycoon.com