
MULTIPLE TRIANGLE MODELLING ( or MPTF )

APPLICATIONS

• MULTIPLE LINES OF BUSINESS- DIVERSIFICATION?

• MULTIPLE SEGMENTS

– MEDICAL VERSUS INDEMNITY

– SAME LINE, DIFFERENT STATES

– GROSS VERSUS NET OF REINSURANCE

– LAYERS

• CREDIBILITY MODELLING

– ONLY A FEW YEARS OF DATA AVAILABLE

– HIGH PROCESS VARIABILITY MAKES IT DIFFICULT TO ESTIMATE TRENDS

BENEFITS

• Level of Diversification- optimal capital allocation by LOB

• Mergers/Acquisitions

• Writing new business- how is it related to what is already on the books?

More information will be available at the St. James Room

4th floor 7 – 11 pm.

ICRFS-ELRF will be available for FREE!

Model Displays for LOB1 and LOB3

LOB1 and LOB3 Weighted Residual Plots for Calendar Year

The plots shown above correspond to two linear models, described by the following equations:

$$y_1 = X_1\beta_1 + \varepsilon_1, \qquad y_2 = X_2\beta_2 + \varepsilon_2, \tag{1}$$

where $E(\varepsilon_i) = 0$ for $i = 1, 2$, $E(\varepsilon_1\varepsilon_2^{T}) = \operatorname{cov}(\varepsilon_1, \varepsilon_2) = C$, and $\operatorname{corr}(\varepsilon_1, \varepsilon_2) = R$.

Without loss of generality, the two models in (1) can be treated as a single linear model:

$$\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} X_1 & 0 \\ 0 & X_2 \end{pmatrix} \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix} + \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \end{pmatrix}. \tag{2}$$

To illustrate the simplest case, suppose that the response vectors $y_1$ and $y_2$ in (1) have the same length $n$, and that

$$E(\varepsilon_i \varepsilon_i^{T}) = \operatorname{var}(\varepsilon_i) = \sigma_i^2 I_n, \quad i = 1, 2, \qquad C = \sigma_{12} I_n.$$

In this case model (2) reads $y = X\beta + \varepsilon$ with

$$\operatorname{var}(\varepsilon) = \Sigma = \begin{pmatrix} \sigma_1^2 I_n & \sigma_{12} I_n \\ \sigma_{12} I_n & \sigma_2^2 I_n \end{pmatrix}.$$

This can be written out explicitly; for example, when n = 3,

$$\Sigma = \begin{pmatrix}
\sigma_1^2 & 0 & 0 & \sigma_{12} & 0 & 0 \\
0 & \sigma_1^2 & 0 & 0 & \sigma_{12} & 0 \\
0 & 0 & \sigma_1^2 & 0 & 0 & \sigma_{12} \\
\sigma_{12} & 0 & 0 & \sigma_2^2 & 0 & 0 \\
0 & \sigma_{12} & 0 & 0 & \sigma_2^2 & 0 \\
0 & 0 & \sigma_{12} & 0 & 0 & \sigma_2^2
\end{pmatrix}.$$
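As a side note (not from the original slides), this block structure is just the 2x2 covariance matrix of the paired errors expanded by a Kronecker product with the identity. A minimal numpy sketch, with purely illustrative values for the variances and the between-dataset covariance:

```python
import numpy as np

# Purely illustrative values (not from the slides) for the two process
# variances and the between-dataset covariance.
sigma1_sq, sigma2_sq, sigma12 = 1.0, 2.0, 0.6
n = 3  # length of each response vector y_i

# 2x2 covariance of (eps_1j, eps_2j) for a single observation j.
Sigma0 = np.array([[sigma1_sq, sigma12],
                   [sigma12,   sigma2_sq]])

# Covariance of the stacked error vector (eps_1, eps_2):
# kron(Sigma0, I_n) reproduces the 2n x 2n block matrix shown above.
Sigma = np.kron(Sigma0, np.eye(n))
print(Sigma)
```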

There is a big difference between the separate linear models in (1) and the combined linear model (2): in (1) the models are treated separately, so the additional information contained in the dependency between them cannot be used, whereas in model (2) it can. To extract this additional information we need an appropriate method of estimating the parameter vector $\beta$. The ordinary least squares (OLS) estimator

$$\hat{\beta} = (X^{T}X)^{-1}X^{T}y$$

provides no advantage, because the covariance matrix does not enter the estimation. Only the generalized least squares (GLS) estimator

$$\tilde{\beta} = (X^{T}\Sigma^{-1}X)^{-1}X^{T}\Sigma^{-1}y$$

can achieve better results.

However, it must be stressed immediately that the elements of $\Sigma$ are unknown and have to be estimated as well. In practice, therefore, we build an iterative sequence of estimates

$$\tilde{\beta}^{(m)}, \; \tilde{\Sigma}^{(m)},$$

and the process stops when the estimates have satisfactory statistical properties.
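A minimal sketch of one such iterative (feasible GLS) scheme is given below, using simulated data and equal-length response vectors. The design matrices, parameter values and the fixed number of iterations (standing in for a proper convergence check) are assumptions for illustration only; this is not the estimation procedure used in ICRFS/MPTF.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulated data standing in for the two datasets (assumption) ---
n = 50
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = np.column_stack([np.ones(n), rng.normal(size=n)])
true_cov = np.array([[1.0, 0.6], [0.6, 2.0]])
eps = rng.multivariate_normal([0, 0], true_cov, size=n)
y1 = X1 @ np.array([1.0, 0.5]) + eps[:, 0]
y2 = X2 @ np.array([2.0, -0.3]) + eps[:, 1]

# --- stacked design, as in model (2) ---
y = np.concatenate([y1, y2])
X = np.block([[X1, np.zeros_like(X2)],
              [np.zeros_like(X1), X2]])

beta = np.linalg.lstsq(X, y, rcond=None)[0]      # start from OLS
for _ in range(10):                              # iterate beta <-> Sigma
    res = (y - X @ beta).reshape(2, n)           # rows: residuals of each dataset
    Sigma0 = np.cov(res)                         # 2x2 between-dataset covariance
    Sigma_inv = np.kron(np.linalg.inv(Sigma0), np.eye(n))
    XtSi = X.T @ Sigma_inv
    beta = np.linalg.solve(XtSi @ X, XtSi @ y)   # GLS update

print("GLS estimates:", beta.round(3))
```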

There are cases in which model (2) gives the same results as the separate models in (1):

1. The design matrices in (1) have the same structure (they are identical or proportional to each other).

2. The models in (1) are uncorrelated, in other words $\sigma_{12} = 0$.

However, when the two models in (1) have common regressors, model (2) again has advantages, even if the design matrices have the same structure.

Correlation and Linearity

The idea of correlation arises naturally for two random variables that have a joint distribution that is bivariate normal. For each individual variable, two parameters, a mean and a standard deviation, are sufficient to fully describe its probability distribution. For the joint distribution, a single additional parameter is required: the correlation.

If X and Y have a bivariate normal distribution, the relationship between them is linear: the mean of Y, given X, is a linear function of X, i.e.

$$E(Y \mid X) = \alpha + \beta X.$$

The slope $\beta$ is determined by the correlation $\rho$ and the standard deviations $\sigma_Y$ and $\sigma_X$:

$$\beta = \rho \, \frac{\sigma_Y}{\sigma_X}.$$

The correlation between Y and X is zero if and only if the slope is zero.

Also note that, when Y and X have a bivariate normal distribution, the conditional variance of Y, given X, is constant, i.e. not a function of X:

$$\operatorname{Var}(Y \mid X) = \sigma^2_{Y \mid X}.$$

This is why, in the usual linear regression model $Y = \alpha + \beta X + \varepsilon$, the variance of the "error" term $\varepsilon$ does not depend on X. However, not all variables are linearly related. Suppose we have two random variables related by the equation

$$S = T^2,$$

where T is normally distributed with mean zero and variance 1. What is the correlation between S and T? (It is zero: $\operatorname{cov}(S, T) = E(T^3) = 0$, even though S is completely determined by T.)
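A quick numerical check of this example (a sketch, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
t = rng.standard_normal(1_000_000)  # T ~ N(0, 1)
s = t ** 2                          # S = T^2, fully determined by T
# cov(S, T) = E[T^3] = 0 for the symmetric normal, so the sample linear
# correlation is (up to simulation noise) zero despite perfect dependence.
print(np.corrcoef(s, t)[0, 1])
```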

Linear correlation is a measure of how close two random variables are to being linearly related.

In fact, if we know that the linear correlation is +1 or -1, then there must be a deterministic linear relationship $Y = \alpha + \beta X$ between Y and X (and vice versa).

If Y and X are linearly related, and f and g are functions, the relationship between f( Y ) and g( X ) is not necessarily linear, so we should not expect the linear correlation between f( Y ) and g( X ) to be the same as between Y and X.

A common misconception with correlated lognormals

Actuaries frequently need to find covariances or correlations between variables, such as when finding the variance of a sum of forecasts (for example in P&C reserving, when combining territories or lines of business, or computing the benefit from diversification).

Correlated normal random variables are well understood. The usual multivariate distribution used for analysis of related normals is the multivariate normal, where correlated variables are linearly related. In this circumstance, the usual linear correlation ( the Pearson correlation ) makes sense.

However, when dealing with lognormal random variables (whose logs are normally distributed), if the underlying normal variables are linearly correlated, then the correlation of lognormals changes as the variance parameters change, even though the correlation of the underlying normal does not.

All three lognormals below are based on normal variables with correlation 0.78, as shown left, but with different standard deviations.

We cannot measure the correlation on the log-scale and apply that correlation directly to the dollar scale, because the correlation is not the same on that scale.
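A short sketch of this effect, using the closed-form correlation of two lognormal variables whose underlying normals are correlated. The correlation 0.78 is the value quoted above; the standard deviations looped over are illustrative assumptions:

```python
import numpy as np

def lognormal_corr(rho, s1, s2):
    """Linear correlation of exp(X), exp(Y) when (X, Y) is bivariate
    normal with correlation rho and standard deviations s1, s2."""
    return (np.exp(rho * s1 * s2) - 1.0) / np.sqrt(
        (np.exp(s1 ** 2) - 1.0) * (np.exp(s2 ** 2) - 1.0))

rho = 0.78  # correlation of the underlying normals (from the slide)
# Illustrative standard deviations (assumed, not the slide's values):
for s in (0.1, 0.5, 1.0, 2.0):
    print(s, round(lognormal_corr(rho, s, s), 3))
# The log-scale correlation stays at 0.78, but the dollar-scale
# correlation falls as the standard deviations grow.
```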

Additionally, if the relationship is linear on the log scale (the normal variables are multivariate normal) the relationship is no longer linear on the original scale, so the correlation is no longer linear correlation. The relationship between the variables in general becomes a curve:

Weighted Residual Plots for LOB1 and LOB3 versus Calendar Years

What does correlation mean?

Model Displays for LOB1 and LOB3 for Calendar Years

Model for individual iota parameters

The individual models give

$$\hat{\iota}_1 \sim N(\mu_1, \sigma_1^2), \qquad \hat{\mu}_1 = 0.1194, \; \hat{\sigma}_1 = 0.0331,$$

$$\hat{\iota}_2 \sim N(\mu_2, \sigma_2^2), \qquad \hat{\mu}_2 = 0.0814, \; \hat{\sigma}_2 = 0.0321,$$

while jointly

$$\begin{pmatrix} \hat{\iota}_1 \\ \hat{\iota}_2 \end{pmatrix} \sim N(\hat{\mu}, \hat{\Sigma}), \qquad \hat{\mu} = \begin{pmatrix} 0.1194 \\ 0.0814 \end{pmatrix}, \qquad \hat{\Sigma} = \begin{pmatrix} 0.001097 & 0.000344 \\ 0.000344 & 0.001027 \end{pmatrix},$$

and the weighted residual correlation between the two datasets is

$$\widehat{\operatorname{corr}}(\varepsilon_1, \varepsilon_2) = 0.359013.$$

There are two types of correlations involved in the calculation of reserve distributions.

Weighted residual correlations between datasets: 0.359013 is the weighted residual correlation between datasets LOB1 and LOB3.

Correlations in parameter estimates: 0.324188 is the correlation between the iota parameters in LOB1 and LOB3.

These two types of correlations induce correlations between triangle cells and within triangle cells.
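As a rough sketch of how these two sources combine, one can simulate a single future cell per LOB on the log scale. Only the two correlations and the iota estimates come from the slides (with the pairing as reconstructed above); the base level, the trend multiplier and the unit process variances are placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sims = 100_000

# Quantities quoted on the slides:
rho_resid = 0.359013                         # residual correlation LOB1/LOB3
iota_mean = np.array([0.1194, 0.0814])       # iota estimates
iota_cov = np.array([[0.001097, 0.000344],   # their covariance
                     [0.000344, 0.001027]])  # (correlation ~ 0.324)

# Source 1: shared uncertainty in the iota (calendar trend) parameters.
iota = rng.multivariate_normal(iota_mean, iota_cov, size=n_sims)

# Source 2: correlated process error in a future cell
# (unit variances are a placeholder assumption).
eps = rng.multivariate_normal([0.0, 0.0],
                              [[1.0, rho_resid], [rho_resid, 1.0]],
                              size=n_sims)

# Hypothetical log-scale forecast cells, k calendar periods ahead
# (base level 10.0 and k = 5 are placeholder assumptions).
k = 5.0
cell_1 = 10.0 + k * iota[:, 0] + eps[:, 0]
cell_3 = 10.0 + k * iota[:, 1] + eps[:, 1]

# Both sources contribute to the correlation between the forecast cells.
print(np.corrcoef(cell_1, cell_3)[0, 1])
```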

Common iota parameter in both triangles

Two effects:

Same parameter for each LOB increases correlations and CV of aggregates

Single parameter with lower CV reduces CV of aggregates

$$\hat{\iota} \sim N(\mu, \sigma^2), \qquad \hat{\mu} = 0.0996, \; \hat{\sigma} = 0.0267.$$

Forecasted reserves by accident year, calendar year and total are correlated

Indicates dependency through residuals’ and parameters’ correlations

Indicates dependency through parameter estimate correlations only

Dependency of aggregates in aggregate table

Var(Aggregate) >> Var(LOB1) + Var(LOB3).

In each forecast cell and in aggregates by accident year and calendar year (and total).

Correlation between reserve distributions is 0.833812

[Figure: Density Comparisons (Acc. Year: Total): histogram, kernel, lognormal and gamma densities; 1 unit = $1,000,000,000]

Simulations from lognormals correlated within LOB and between LOBs
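A minimal sketch of such a simulation for two LOBs, with assumed lognormal parameters and an assumed log-scale correlation (the actual MPTF simulation uses the full within- and between-LOB correlation structure):

```python
import numpy as np

rng = np.random.default_rng(3)
n_sims = 200_000

# Assumed log-scale parameters for the two LOB reserve distributions
# (placeholders, not the slides' fitted values).
mu = np.array([7.0, 6.5])      # means of log reserves
sd = np.array([0.25, 0.30])    # standard deviations of log reserves
rho = 0.8                      # assumed correlation on the log scale

cov = np.array([[sd[0] ** 2,          rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1] ** 2         ]])
z = rng.multivariate_normal(mu, cov, size=n_sims)
lob1, lob3 = np.exp(z[:, 0]), np.exp(z[:, 1])
agg = lob1 + lob3

print("corr(LOB1, LOB3) on the dollar scale:", np.corrcoef(lob1, lob3)[0, 1])
print("Var(Aggregate):                      ", agg.var())
print("Var(LOB1) + Var(LOB3):               ", lob1.var() + lob3.var())
# With positive correlation, Var(Aggregate) exceeds the sum of the
# stand-alone variances -- the dependency shown in the aggregate table.
```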

GROSS VERSUS NET

Model Display – 1

Is your outward reinsurance program optimal?

6% stable trend in calendar year

Model Display – 2

Zero trend in calendar year. Therefore the part being ceded is the part that is growing!

Weighted Residual Covariances Between Datasets

Weighted Residual Correlations Between Datasets

Weighted Residuals versus Accident Year

Gross

Net

Weighted Residual Normality Plot

Accident Year Summary

CV Net > CV Gross – why? Is that not bad for the cedant?

MODELING LAYERS

• Similar Structure
• Highly Correlated
• Surprise Finding!

CV of reserves limited to $1M is the same as CV of reserves limited to $2M !

Model Display for All 1M: PL(I)

Model Display for All 2M: PL(I)

Model Display for All 1Mxs1M: PL(I)

Note that All 1Mxs1M has zero inflation, All 2M has lower inflation than All 1M, and the distributions of the parameters going forward are correlated.

Residual displays versus calendar years show high correlations between the three triangles.

Weighted Residual Covariances Between Datasets

Weighted Residual Correlations Between Datasets

Compare Accident Year Summary

Consistent forecasts based on composite model.

If we compare the forecasts by accident year for Limited 1M and Limited 2M, it is easy to see that the CV is the same.
