The Dawn of Complex Risk Management
Continuing Education
Part 1: Risk methodologies
Risk Trees in the Woods
By Peter Davies
Sailfish Systems
With multiple risk methodologies to choose from, treasurers should know what each does and then apply the one most appropriate within the context of their company's business objectives.
In going public with its RiskMetrics methodology, JP Morgan has put a very bright spotlight on risk evaluation methods. As attractive as RiskMetrics is, particularly to corporate treasurers, there are other methodologies that treasurers should be aware of. This overview of the three primary types of risk measurement (deterministic, probabilistic, and analytic) should help treasury risk managers to quickly identify the trees (alternate risk measurement methodologies) and keep their eyes on the forest (managing the risks arising from their company's business objectives).
Deterministic methods
The deterministic methodologies include scenarios, sensitivity, the Greeks (delta, gamma, etc.), and stress testing. These methods are those most widely supported in traditional trading systems, as they are often used by financial traders.
Deterministic risk measurement methods require those employing them to specify what they expect future market rates (scenarios) to be and value their positions accordingly. The risk measured, therefore, is that arising if the expected rates indeed occur.
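As a sketch of this scenario approach, consider revaluing a single foreign-currency position under a handful of specified future rates. The currency amount and the spot and scenario rates below are invented for illustration; nothing here comes from the article itself.

```python
# Hypothetical sketch of scenario-based (deterministic) risk measurement.
# The currency amount and spot/scenario rates are invented for illustration.

def scenario_pnl(fc_amount, spot_rate, scenario_rates):
    """Revalue a foreign-currency position under each specified future rate.

    Returns the gain or loss (in base currency) at each scenario rate,
    relative to the position's value at today's spot rate.
    """
    return [fc_amount * (rate - spot_rate) for rate in scenario_rates]

# A treasurer holding 1m units of foreign currency at a spot rate of 1.55
# specifies three expected future rates and values the position at each:
pnl = scenario_pnl(1_000_000, 1.55, [1.50, 1.55, 1.60])
```

The risk measured is exactly, and only, the loss that appears if one of the specified scenarios occurs; rates outside the list are never examined.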
Traditionally, deterministic methods were used by traders to measure the risk of products with the same risk characteristics, e.g., spot FX or US treasury notes. The simplicity of the risk structure in such instruments makes it easy for traders to judge how the relatively few risk factors affecting their positions may behave; these opinions are inherent in their trading.
The problem with deterministic methods is that they are highly transaction oriented. They focus on the pricing and execution of a stream of individual deals, accounting for the fact that each one may be misvalued before it can be hedged or closed. The Greeks gained popularity, for example, because they alert traders to marginal sensitivity to mispricing in any price variable.
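The idea of marginal sensitivity can be sketched numerically: rather than deriving the Greeks analytically, one can bump a pricing function up and down and difference the results. The pricing function below is a made-up toy chosen so the answers are easy to check, not a real instrument model.

```python
# Hypothetical sketch: estimating delta and gamma by finite differences.
# price_fn stands in for any pricing model; "toy" is an invented payoff.

def delta(price_fn, s, h=1e-4):
    """Central-difference estimate of first-order sensitivity to the price s."""
    return (price_fn(s + h) - price_fn(s - h)) / (2 * h)

def gamma(price_fn, s, h=1e-3):
    """Central-difference estimate of second-order (curvature) sensitivity."""
    return (price_fn(s + h) - 2 * price_fn(s) + price_fn(s - h)) / (h * h)

def toy(s):
    # Toy payoff 0.5 * s^2, so analytically delta = s and gamma = 1.
    return 0.5 * s * s
```

Traders read these numbers as the marginal P&L impact of a small move in the underlying price variable.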
Deterministic methods are not as well suited to assessing the risk of a portfolio of diverse instruments (e.g., derivatives). This is particularly true in a corporate treasury context where treasurers are not the direct expert traders, i.e., where operating management creates the underlying positions. Even very good traders cannot assess the reasonableness of scenarios that cover many risk factors (market rates as well as complex positions), let alone the positions of others. Thus, additional risk measurement methodologies should be used to help manage risk beyond the tactical level.
Even with tactical risks, however, traders should not be left to manage their own positions. Stress-testing, highlighted in the G-30 recommendations, is an important part of an overall risk measurement policy. It looks at the extremes, the unthinkables, and the intentionally irrational. It balances the trader's mentality that positions can always be traded out of, even in extreme stress. Management can take some comfort in knowing what kinds of extreme markets will have the most dire consequences for their positions.
Probabilistic methods
The probabilistic methods include JP Morgan's RiskMetrics, Monte Carlo simulation, and historical simulation. Generally speaking, probabilistic methodologies forgo the certainty of deterministic methods and measure risks resulting from an uncertain range of rates or prices.
The key to most probabilistic methodologies is the distribution function. It describes the range of uncertainty in the future rate or price variable being dealt with. Once a distribution function is chosen, a computer can fill in the actual samples within the distribution with as much detail as demanded.
While probabilistic methods eliminate the need for traders to make assumptions about future rates/prices, they do require statisticians to make assumptions about how future rates/prices are distributed.
The power of the distribution function is that a single function can embody many details, e.g., any probability for any rate, thereby reducing the number of choices or assumptions needed. Unfortunately, the remaining choices are much more obscure. Assumptions in probabilistic methodologies require a familiarity with statistical descriptions of an increasingly complex and dynamic financial world, and with their fallibility.
There is also a second dimension when probabilistic methods are used to measure the risk in a portfolio of positions. Even if an appropriate distribution function for all the variables affecting a portfolio can be specified, each distribution function is merely defining the range of expected future variables affecting each position. To accurately measure the risk in the portfolio, statisticians must also describe the relationship between these variables.
The relationship between rate/price movements is usually expressed in terms of covariance. The covariance approach quantifies these relationships by taking historical data samples and calculating the distributions and relationships from past observations. Covariance models are attractive because they enable risk measurers to reduce a lot of sample data into a few numbers. If the historical sample is believed to be a good indicator of expected future rates, they can generate whole ranges of expected future rates based on the calculated probabilities of a single distribution function.
International Treasurer/November 28, 1994
RiskMetrics is a specific implementation of a covariance model. JP Morgan has invested its research efforts into developing a set of finely honed variance (volatility) and covariance data that treasury can use without the effort of gathering numbers and paying for analysts.
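A minimal sketch of how a covariance model turns its "few numbers" into a risk figure: given per-asset volatilities and a correlation matrix, portfolio risk follows from a quadratic form. All figures below are invented; this is in the spirit of a variance-covariance calculation, not a reproduction of the RiskMetrics data set or methodology.

```python
import math

# Hypothetical variance-covariance sketch. The position sizes, daily
# volatilities, and correlation are invented numbers, not market data.

def portfolio_volatility(positions, vols, corr):
    """Portfolio standard deviation from position values, per-asset daily
    volatilities, and a correlation matrix: the few statistical descriptors
    a covariance model reduces the historical sample to."""
    n = len(positions)
    variance = 0.0
    for i in range(n):
        for j in range(n):
            variance += positions[i] * vols[i] * corr[i][j] * positions[j] * vols[j]
    return math.sqrt(variance)

positions = [1_000_000, -500_000]    # e.g., long one exposure, short another
vols = [0.006, 0.004]                # assumed daily return volatilities
corr = [[1.0, 0.3], [0.3, 1.0]]      # assumed correlation between the two
sigma = portfolio_volatility(positions, vols, corr)
daily_var_95 = 1.65 * sigma          # one-tailed 95% daily value-at-risk
```

Once the volatilities and correlations exist, any portfolio's risk is a small calculation; that convenience is exactly what the data set is selling.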
The weakness in covariance methods comes from the very thing that makes them powerful: the reduction of a lot of data into a few descriptors. To use a covariance risk model is to accept that rates have one distribution and that combinations of rates have one relationship between them. Even elementary experience suggests that these assumptions are not true in many or even most cases.
Monte Carlo simulations take the same ingredients as a covariance model (distribution functions and covariances) but introduce a degree of uncertainty in estimating the expected rates. Instead of a trader or a statistician defining a likely risk parameter, a computer makes a large number of random draws from the specified distributions. The pattern of resulting rates approximates (but does not exactly match) the underlying distributions.
We can think of this as though the distribution and covariance data describe some underlying process in the financial markets. The randomness of the draws reflects the "noise" level that overlays the underlying process.
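A small sketch of the draw-and-observe idea, assuming (purely for illustration) that daily returns follow a normal distribution with a known volatility. The position size and volatility are invented numbers.

```python
import random
import statistics

# Hypothetical Monte Carlo sketch: random draws from an assumed normal
# return distribution. Position size and volatility are invented.
random.seed(42)  # fixed seed so the run is repeatable

def monte_carlo_pnl(position, daily_vol, n_draws=10_000):
    """Simulate daily P&L outcomes by drawing returns from N(0, daily_vol)."""
    return [position * random.gauss(0.0, daily_vol) for _ in range(n_draws)]

pnl = monte_carlo_pnl(1_000_000, 0.006)
# The sample approximates, but never exactly matches, the specified inputs:
sample_vol = statistics.stdev(pnl) / 1_000_000
```

The gap between sample_vol and the 0.006 input is the "noise" overlaying the specified underlying process; more draws shrink it but never remove it.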
To use and understand a Monte Carlo risk estimate, treasurers must be comfortable with the statistics behind the distribution and covariance estimates, as well as have an opinion about how dominant the underlying process is versus the random risk observed day-to-day. In most cases, Monte Carlo techniques are restricted to the handful of "quants" who have the answers to enter into the machine and can subsequently understand the results.
Historical simulation is quite different from the other probabilistic methods and has almost opposite strengths and weaknesses. The probabilities used in a historical simulation are taken directly from the observed events of the past. A covariance model would use the same historical data, but reduce it from, say, 500 observations to 3 statistical descriptions. A historical simulation would use all 500 observations and sidestep the issue of how to describe them. This gets around the biggest drawback of covariance and, to a lesser degree, Monte Carlo models: the assumption that there is one description for the distribution of market variables and one description for the relationships between them.
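A sketch of the historical-simulation idea: replay each observed past return directly against today's position, with no distribution function or covariance assumed. The return series below is a short invented sample, standing in for the hundreds of observations a real run would use.

```python
# Hypothetical historical-simulation sketch. Each observed past daily return
# is applied directly to today's position; the series itself is invented.

def historical_pnl(position, past_returns):
    """One P&L outcome per historical observation, sorted worst-first."""
    return sorted(position * r for r in past_returns)

past_returns = [0.004, -0.012, 0.001, -0.003, 0.007, -0.008, 0.002]
outcomes = historical_pnl(1_000_000, past_returns)
worst = outcomes[0]   # the largest past loss, replayed on today's position
```

Because every observation is kept, any skew, fat tails, or regime shifts in the sample survive untouched; the price is that nothing beyond the sample can be described.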
The lack of abstraction allows historical simulations to discriminate, for example, between high-volatility days and low-volatility days, or up markets from down markets, if that is the way it is in the observed data. This is most important where treasurers have very specific price relationships at risk, as is the case with derivatives. The downside to historical simulation is that it does not give the knowledgeable user the ability to describe the underlying processes and see their effects independently from a sample time series. Rather, it uses changes in past periods to show the effects in today's markets if they were repeated.
Analytic methods
The analytic methods include linear and non-linear regression techniques. The analytic approaches are extensions of probabilistic models in that they attempt to describe the way market rates behave. Unlike the simpler probabilistic methodologies, the analytic methodologies are predictive: their purpose is to estimate actual future rates.
The linear approaches are all based on more powerful statistical methods than covariance. The most popular approach is to use variations of auto-correlation analysis (ARCH, GARCH, etc.).
Auto-correlation describes how rates within a series are related to each other, rather than between series. These effects are often described by market participants as trends, cycles, etc. GARCH has attracted a great deal of attention in academic circles and is well recognized for its validity when applied to higher-order processes, especially volatility. Volatility is persistent: jumps stay, with a gradual decay over time.
While the underlying prices may be moving around at random, we can use GARCH to make a good estimate of what the volatility level will be in the future based on where it is now.
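The persistence-and-decay behavior can be sketched with the standard GARCH(1,1) recursion, in which each day's variance blends a long-run level, the latest squared return shock, and the prior variance. The omega, alpha, and beta values below are invented for illustration, not estimated from any data.

```python
# Hypothetical GARCH(1,1) sketch. The omega/alpha/beta parameters are
# invented; in practice they are fitted to a return history.

def garch_update(prev_var, prev_return, omega=0.000002, alpha=0.08, beta=0.90):
    """One-step GARCH(1,1) variance forecast: a weighted blend of a long-run
    level (omega), the latest squared return (alpha), and yesterday's
    variance (beta)."""
    return omega + alpha * prev_return ** 2 + beta * prev_var

var = 0.0001                       # starting daily variance (about 1% volatility)
var = garch_update(var, 0.03)      # a 3% shock day: the variance jumps
decayed = garch_update(var, 0.0)   # a quiet day: the jump decays gradually
```

Absent further shocks, the forecast decays back down at rate beta; the unconditional long-run variance is omega / (1 - alpha - beta). This is exactly the jump-then-gradual-decay pattern described above.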
These statistical techniques suffer from the same problem as Monte Carlo simulation in that they demand a significant degree of understanding on the part of the user. Furthermore, GARCH-type analysis may be a good estimator of volatility levels, but it is not effective for estimating price directions. Most treasurers are as concerned about the direction of rates/prices as they are about the volatility. In fact, movement in the right direction may not even be considered a risk.
The more obscure, non-linear approaches are really predictive trading systems. These systems are developed to predict the future path of rates, taking into account that the future is not simply based on the present by either correlation or auto-correlation. The basic technologies used include neural networks and fuzzy matrices. As their names imply, these are esoteric tools, and their predictive (i.e., biased) estimator role makes them questionable for risk measurement purposes.
In considering any risk measurement methodology, it is important for treasurers to keep in mind that risk measurement is not their main objective; risk management is. In Part 2, I shall provide some examples showing which risk measures are more appropriate in meeting specific business objectives. •
Mr. Davies is reached at 212 587-0007.