
AMBIT RISK MANAGEMENT & COMPLIANCE

BACK TESTING INTEREST RATE RISK MODELS

While model error is unavoidable, input and output tests can make a difference.

Prudent risk managers should not make any decisions about their rate-risk exposure unless they have confidence that their models are at least reasonably accurate. Of course, most risk managers are quite confident. The question is, do they have sufficient reason to be so confident?

All too often, confidence in one's model results seems to correlate with the cost of the model. Managers who have spent hundreds of thousands on models like to believe that they have Ferraris. Managers who have spent tens of thousands on models may feel envious. Experience tells us that both impressions can be right or wrong, depending on how well the models are used.

Confidence in one's model forecasts should be based on well-structured testing. In its 2004 guidelines for interest-rate risk (IRR), the Bank for International Settlements (BIS) notes that:

Reviews of the interest rate risk measurement system should include assessments of the assumptions, parameters, and methodologies used. Such reviews should seek to understand, test, and document the current measurement process, evaluate the system's accuracy, and recommend solutions to any identified weaknesses. If the measurement system incorporates one or more subsidiary systems or processes, the review should include testing aimed at ensuring that the subsidiary systems are well-integrated and consistent with each other in all critical respects.1

Collectively, those assessments are called back testing. Back testing is an activity that gets lots of favorable press but not enough practical use.

    THE SCOPE OF THE PROBLEM

In 2007, a U.S. bank with total assets in the range of $8 billion to $12 billion had its interest rate risk measured, as of the same date, using three different models. The actual results2 are shown in Table 1.

Table 1: Interest Rate Risk Measures: Three Different Models

                                                Model A   Model B   Model C
Change in EVE if interest rates rise 200 bp       -21%      -18%      -24%
Change in EVE if interest rates fall 200 bp       -21%       +3%       +1%

The dispersion in the measurements for a 200 basis point (bp) rise in rates is six percent of the economic value of equity (EVE). The dispersion in the measurements for a 200 bp fall in rates is 22 percent of EVE.

Consider limits. Most banks would likely be in violation of EVE limits for sensitivities greater than 20 percent. If the bank used only Model A, it would report a limit violation. But if the bank used either Model B or C, management would think that the bank was well within its limits for the 200 bp falling rate shock. Which is correct?
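
To make the arithmetic concrete, the minimal Python sketch below recomputes the model-to-model dispersion from the Table 1 figures and applies an assumed 20 percent EVE limit. It is illustrative only; because the published numbers were slightly modified (see footnote 2), the computed spread for the falling-rate scenario differs slightly from the 22 percent cited above.

```python
# Illustrative only: recompute model-to-model dispersion from the (modified)
# Table 1 figures and apply an assumed 20% EVE-sensitivity limit.

EVE_LIMIT = 0.20  # assumed absolute limit on the change in EVE (20% of EVE)

# Change in EVE as a fraction of EVE, by scenario and model (Table 1)
sensitivities = {
    "+200 bp": {"Model A": -0.21, "Model B": -0.18, "Model C": -0.24},
    "-200 bp": {"Model A": -0.21, "Model B": +0.03, "Model C": +0.01},
}

for scenario, by_model in sensitivities.items():
    spread = max(by_model.values()) - min(by_model.values())
    print(f"{scenario}: dispersion across models = {spread:.0%} of EVE")
    for model, change in by_model.items():
        status = "limit violation" if abs(change) > EVE_LIMIT else "within limit"
        print(f"  {model}: {change:+.0%} -> {status}")
```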

Consider risk management costs. Many banks in this size range use rate swaps to hedge their IRR exposures. If the bank used only Model B, it might underhedge and lose an unexpectedly large amount of value in the event of a major increase in rates. If the bank used only Model C, it might spend too much on hedges and suffer unexpected losses in the other direction.

    WHAT CAN BE DONE?

Earnings at risk (EAR) back testing is always easier than EVE back testing. Forecasts of either net-interest income (NII) or net income from EAR models can be compared to the actual results subsequently observed. Common output back tests include the following (a brief illustrative sketch follows the list):

• Compare the forecasted NII for a subsequent period to the normalized NII actually observed for that period. The normalized NII is the NII adjusted for nonrecurring income or expense.



• Use rate/volume/mix variance analysis to isolate the variances observed between forecasted and actual earnings that result only from the actual rate changes. Then see how closely the observed variance resulting from the actual rate changes compares to the forecasted changes.

• Save prior model runs. At a later date, rerun the model with the actual observed market rates instead of the forecasted rates used in the original model run. Keep the data, the assumptions and everything else the same as in the original model run. Then see how closely the income reported from the rerun with actual rates compares to actual income for that period.
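
The sketch below illustrates the first two tests in Python. The data shapes, figures and function names are assumptions for illustration, and the simple rate/volume/mix split shown is one common convention rather than a prescribed methodology.

```python
# Sketch of two output back tests. Figures, names and the variance split
# convention are illustrative assumptions, not a standard.

def normalized_nii(actual_nii, nonrecurring_items):
    """Strip nonrecurring income (+) and expense (-) from actual NII."""
    return actual_nii - sum(nonrecurring_items)

def nii_forecast_error(forecast_nii, actual_nii, nonrecurring_items):
    """Test 1: forecast NII vs. normalized actual NII for the same period."""
    return forecast_nii - normalized_nii(actual_nii, nonrecurring_items)

def rate_volume_mix_variance(fcst_rate, fcst_volume, act_rate, act_volume):
    """Test 2: split the interest-income variance for one balance-sheet
    category into rate, volume and mix components (one common convention)."""
    rate_var = (act_rate - fcst_rate) * fcst_volume      # due to rate changes only
    volume_var = (act_volume - fcst_volume) * fcst_rate  # due to volume changes only
    mix_var = (act_rate - fcst_rate) * (act_volume - fcst_volume)  # joint effect
    return {"rate": rate_var, "volume": volume_var, "mix": mix_var}

# Hypothetical quarterly figures in $ millions
error = nii_forecast_error(forecast_nii=25.0, actual_nii=26.4,
                           nonrecurring_items=[1.8, -0.2])  # one-off gain, expense
print(f"Forecast NII error vs. normalized NII: {error:+.1f}M")

split = rate_volume_mix_variance(fcst_rate=0.055, fcst_volume=1_200,
                                 act_rate=0.060, act_volume=1_150)
print("Variance due to rate changes only:", round(split["rate"], 1), "M")
```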

Output testing for EVE models is far more difficult. Obviously, EVE cannot be compared to any independent value. EVE is not the same as book equity, regulatory capital, economic capital or market cap.

EVE model users can, and should, compare their model's calculated market values to observed market values for actively traded securities owned by the bank.
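
A minimal sketch of that comparison appears below. The identifiers, prices and the 1 percent tolerance are illustrative placeholders, not values from the article.

```python
# Sketch: compare model-calculated values to observed market values for
# actively traded positions. Identifiers, prices and tolerance are illustrative.

TOLERANCE = 0.01  # flag anything off by more than 1% of market value (assumed)

model_values = {"SEC-001": 98.42, "SEC-002": 101.10, "SEC-003": 95.60}
market_values = {"SEC-001": 98.55, "SEC-002": 101.05, "SEC-003": 97.20}

for sec_id, market in market_values.items():
    model = model_values[sec_id]
    rel_error = (model - market) / market
    flag = "INVESTIGATE" if abs(rel_error) > TOLERANCE else "ok"
    print(f"{sec_id}: model {model:.2f} vs market {market:.2f} "
          f"({rel_error:+.2%}) {flag}")
```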

    BEYOND OUTPUT TESTING

Notice that the BIS quotation cited above does not explicitly refer to output testing. Instead, it lists "assessments of the assumptions, parameters and methodologies used." This boils down to data testing and assumption testing.

Data inputs into both EAR and EVE models should be tested. This does not mean that all inputs from bank records must be reconciled to the penny. It does mean that inputs from other bank records should be reconciled, with some allowance for error, to general ledger values to ensure that all material asset or liability volumes are included in the model.
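
A minimal sketch of such a reconciliation follows; the category names, balances and the 0.5 percent tolerance are assumptions chosen for illustration.

```python
# Sketch: reconcile model input volumes to general ledger balances with an
# allowance for error. Categories, balances and tolerance are assumptions.

GL_TOLERANCE = 0.005  # allowable relative difference vs. the general ledger

model_volumes = {"Commercial loans": 2_410.2, "Residential mortgages": 1_980.5,
                 "Investment securities": 1_505.0, "NMD deposits": 4_890.0}
gl_balances = {"Commercial loans": 2_413.0, "Residential mortgages": 1_979.8,
               "Investment securities": 1_560.0, "NMD deposits": 4_892.5}

for category, gl in gl_balances.items():
    model = model_volumes.get(category, 0.0)
    diff = (model - gl) / gl
    status = "RECONCILE" if abs(diff) > GL_TOLERANCE else "ok"
    print(f"{category}: model {model:,.1f} vs GL {gl:,.1f} ({diff:+.2%}) {status}")
```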

Data inputs to both EAR and EVE models also require scrubbing. Much of the data important to rate-risk managers, such as loan rate caps and securities calls, cannot be reconciled. Errors tend to accumulate. Checking data inputs to find and fix such errors (data scrubbing) is a full-time process for most medium-sized and all large banks.

Assumptions used in both EAR and EVE models should also be back tested. Observed changes in variables, such as changes in loan prepayment speeds following a change in market rates, should be regularly compared to the assumed change employed in the modeling process.
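
As one hedged example, a prepayment assumption could be back tested along the lines below; the CPR figures and the assumed sensitivity per 100 bp are illustrative, not figures from the article.

```python
# Sketch: back test a prepayment-speed assumption. All figures are illustrative.

assumed_cpr_change_per_100bp = -3.0   # model assumption: CPR falls 3 pts per +100 bp
rate_move_bp = 200                    # observed change in market rates
cpr_before, cpr_after = 14.0, 9.5     # observed conditional prepayment rates (%)

assumed_change = assumed_cpr_change_per_100bp * rate_move_bp / 100
observed_change = cpr_after - cpr_before

print(f"Assumed CPR change:  {assumed_change:+.1f} pts")
print(f"Observed CPR change: {observed_change:+.1f} pts")
print(f"Assumption error:    {observed_change - assumed_change:+.1f} pts")
```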

    SUMMARY

Model error is unavoidable. After all, models are, by definition, simplifications of reality. Both EAR and EVE models have huge quantities of data inputs and heavily depend on rate and volume assumptions.

It is often noted that the skill of the craftsman, not the tools, makes the difference between a job done and a job well done. For every rate-risk manager, input and output tests are essential tools. Wise rate-risk managers go beyond mere policy or regulatory compliance and use these tools to carefully assess the size of their model error before making risk management decisions.

This article was contributed by Leonard Matz, Director, Liquidity and Interest Rate Risk Consulting, SunGard.

    For more information [email protected]

    www.sungard.com/ambit

© 2008 SunGard. Trademark Information: SunGard, the SunGard logo and Ambit, Apsys, BancWare, STeP and System Access are trademarks or registered trademarks of SunGard Data Systems Inc. or its subsidiaries in the U.S. and other countries. All other trade names are trademarks or registered trademarks of their respective holders.

First published in Bank Accounting & Finance.

Footnotes:
1 Principle 10, Paragraph 66.
2 The numbers shown are slightly modified to protect the anonymity of the source. The relationships between the figures remain almost unchanged.
