
Transcript of riskprofessional201112-dl

Page 1: riskprofessional201112-dl

PROFESSIONAL

A GARP Publication DECEMBER 2011

On the Same Page
Principal Financial CRO Greg Elming and

CEO Larry Zimpleman on risk management as an integrated, collaborative enterprise

ALSO INSIDE

Metrics: a company risk ranking and a statistical aid for corporate directors. End-to-end stress testing

Page 2: riskprofessional201112-dl


FROM THE EDITOR

Education in Governance

On the cover of our October issue was the question, "Can risk culture be taught?" Lots of people are giving it the old college try, but the course is too new and subjective to yield a yes-or-no answer – yet. So in this issue, and in articles online (see the table of contents for links), we move on to matters of risk governance,

particularly to consider whether corporate boards of directors can make a constructive difference, and if so, how.

"The Credit Crisis Around the Globe: Why Did Some Banks Perform Better?", a 2010 paper co-authored by Ohio State University professor and GARP trustee René Stulz, did not instill much confidence in board efforts. Banks that rewarded their shareholders the most in 2006, presumably overseen by "shareholder-friendly" directors, fared worse during the subsequent crisis, as earlier risk-taking came back to haunt them.

Out of necessity, directors have embarked on their own risk management studies and have yet to graduate. Deepa Govindarajan of ICMA Centre at the University of Reading in the U.K., who advocates more systematic, proactive oversight of risk appetites, says directors are "unable to enunciate formally the risks associated with the strategic choices they pursue."

Thus is the learning curve defined. No one is going to argue that governance is futile, and boards are more engaged in the risk conversation than ever before. The Dodd-Frank Wall Street Reform and Consumer Protection Act requires board-level risk committees at bank holding companies with at least $10 billion in assets. That may be a welcome indicator of the enhanced importance of risk management. But, Jim DeLoach of consulting firm Protiviti writes, it will force a realignment of responsibilities traditionally assigned to audit committees.

Tools and techniques are coming into the breach. As reported in the Upfront section, GMI is publishing a cautionary list of risky companies, and Protiviti is touting a risk index that distills the "avalanche of data" in board-meeting packages down to a single risk value. But management still comes down to human collaboration and execution. As James Stewart of JHS Risk Strategies writes, amid all the metrics and complexity, "traditional leadership roles and responsibilities are arguably more relevant and important than emerging best practice."

Editor-in-Chief: Jeffrey [email protected]

Executive Editor: Robert [email protected]

Design Director: J-C Suaré[email protected]

Art Director: Jennifer [email protected]

Risk Professional Editorial Board

Peter Tufano, Dean, Said Business School, University of Oxford

Aaron Brown, Risk Manager, AQR Capital Management

Tim Davies, Director for Risk, Lloyds TSB Bank

Mike Gorham, Director, IT Center for Financial Markets, Stuart Graduate School of Business

David Lee, CRO and Managing Director, Liquid Alternatives, Credit Suisse

Peter Manning, Head of Risk Management, China Construction Bank (London)

Robert Merton, School of Management Distinguished Professor of Finance, MIT Sloan School of Management

Joe Pimbley, Principal, Maxwell Consulting

Riccardo Rebonato, Global Head of Market Risk and Quantitative Analytics, RBS

Peruvemba Satish, Managing Director and Chief Risk Officer, Allstate Investments

David Shimko, President, Winhall LLC

Victor Yan, Managing Director and Head of Market Risk, China, Standard Chartered Bank

Senior Vice President, Publisher: David S. [email protected]

Director, Global Advertising and Business Development: Michael [email protected]

International Advertising and Event Sales, Asia: David [email protected]

Director, Career Center: Mary Jo Roberts

Manufacturing & Distribution: Kristina Jaoude

Production Coordinator: Hanna Park

RISK PROFESSIONAL is published six times a year by the Global Association of Risk Professionals. It is circulated to all GARP dues-paying members. To subscribe, please go to https://www.garp.org/news-and-publications/risk-professional.aspx. Basic annual rate: $150.00. For customer service inquiries, please call: London, UK, +44 207 397 9630; Jersey City, NJ, +1 201 719 7210. For reprints and e-prints of RISK PROFESSIONAL articles, please contact PARS International at 212-221-9595.


Page 3: riskprofessional201112-dl


research, Kelly joined New York-based Quantifi in 2009 and has over 18 years of financial industry experience. He has worked for Citigroup, JPMorgan Chase & Co., American International Group and Credit Suisse First Boston.

The Risk Strategy article on page 38 was written by Gaurav Kapoor, chief operating officer of Metricstream, a supplier of governance, risk and compliance systems; and Connie Valencia, principal of Elevate Consulting, which specializes in process improvement, controls and technology risk management. Kapoor, a Wharton School MBA, spent 10 years with Citibank in the U.S. and Asia, helped lead ArcadiaOne from inception to acquisition, founded the ComplianceOnline.com GRC community and until 2010 was Palo Alto, California-based Metricstream's CFO and general manager of GRC. Valencia brought Big 4 accounting and consulting experience to Miami-based Elevate, where her work includes corporate governance consultation, policy and procedure enhancement and Sarbanes-Oxley compliance.

CONTRIBUTORS

President and CEO: Richard [email protected]

Managing Director, Research Center: Christopher Donohue, [email protected]

Managing Director, Global Head of Training Services: Alastair [email protected]

Managing Director, Global Head of Marketing: Kathleen [email protected]

Chief Operating Officer: Michael [email protected]

Controller: Lily [email protected]

Board of Trustees

William Martin, Chairman; Chief Risk Officer, Abu Dhabi Investment Authority

Richard Apostolik, President and CEO, GARP

Kenneth Abbott, Managing Director, Morgan Stanley

Robert Ceske, Chief Risk Manager, Treasury and Global Funding Operations, GE Capital

Sebastian Fritz, Senior Advisor, Frankfurt School of Finance and Management

Bennett Golub, Senior Managing Director and CRO, BlackRock

Michael Hofmann, Chief Risk Officer, Koch Industries

Donna Howe, Chief Risk Officer, Hartford Investment Management

Frederick Lau, Head of Group Risk Management, Dah Sing Bank

Jacques Longerstaey, Executive Vice President, CRO, State Street Global Advisors

Victor Makarov, Head, Market Risk Analytics, Metropolitan Life

Michelle McCarthy, Director of Risk Management, Nuveen Investments

Riccardo Rebonato, Global Head of Market Risk and Quantitative Analytics, RBS

Jacob Rosengarten, Chief Risk Officer, XL Capital

Robert Scanlon, Group Chief Credit Officer, Standard Chartered Bank

David Shimko, President, Winhall LLC

René Stulz, Reese Chair in Banking and Monetary Economics, Ohio State University

Peter Tufano, Dean, Said Business School, University of Oxford

Mark Wallace, Managing Director, Chief Risk Officer, Credit Suisse Asset Management

RISK PROFESSIONAL, ©2011, Global Association of Risk Professionals, 111 Town Square Place, Suite 1215, Jersey City, NJ 07310. All rights reserved. Reproduction in any form is prohibited without written permission from GARP. Printed in the United States by RR Donnelly.

A team of six shares credit for the Credit Risk article on page 19. Five are analysts and economists from HSBC, including Alessandra Mongiardino, a past contributor to Risk Professional when she was affiliated with Moody's Investors Service. The sixth is Andrea Serafino (below), senior econometrician at the U.K.'s Financial Services Authority, who was at HSBC, specializing in macroeconomic stress testing, before joining the regulator in 2010.

Mongiardino is now head of risk strategy and a member of the risk management executive team for HSBC Bank (HBEU), responsible for the strategic framework that ensures risk management is effectively embedded throughout the bank and across all risk types. Her HSBC collaborators: Zoran Stanisavljevic, head of wholesale risk analytics; Evguenia Iankova, senior quantitative economist; Bruno Eklund, senior quantitative analyst; and Petronilla Nicoletti, senior quantitative economist.

Counterparty risk management is a current focus of trading and risk software company Quantifi, whose director of credit products, David Kelly, contributed the article on that subject on page 27. Well versed in derivatives trading and quantitative

Page 5: riskprofessional201112-dl


CONTENTS

FEATURES / COVER STORIES

13 CRO-CEO DIALOGUE: On the Same Page at the Principal

Few companies can trace an enterprise risk management heritage back as far as the 1970s, as does the Principal Financial Group. The Des Moines, Iowa-based insurer and asset manager also boasts a chief executive officer and a chief risk officer with nearly 70 years of combined service to the organization. In an exclusive joint interview, CEO Larry Zimpleman and CRO Greg Elming describe their implementation of ERM and its profit-enhancing and loss-mitigating paybacks.

19 CREDIT RISK/STRESS TESTING: A Robust End-to-End Approach

Alessandra Mongiardino and colleagues at HSBC and the Financial Services Authority put forward a factor-model approach to stress testing that can identify corporate portfolios that are particularly vulnerable to stress, improve the accuracy of default-probability and loss-given default forecasts and produce transparent results.

27 How the Financial Crisis Changed Counterparty Risk
In the face of dramatically increased capital requirements for counterparty credit risk, banks have created credit value adjustment desks to centralize the monitoring and hedging of counterparty exposures in a more active and timely way. Despite the improvements in this area, banks still face data and analytical challenges, according to David Kelly of Quantifi.

38 A Strategy-Based Approach
A better alignment of risk management with company goals and objectives can take into account all risks, including black swans, while enabling balanced, focused and more cost-effective analyses and responses to potential threats, write Gaurav Kapoor and Connie Valencia.


Cover photo by © 2011 Scott Sinklier

Page 6: riskprofessional201112-dl


DEPARTMENTS / EXCLUSIVELY ONLINE

1 From the Editor Education in Governance

2 Contributors

7 UPFRONT
Ten North American companies with GRC issues appear on research firm GMI's inaugural Risk List; Protiviti's single-number risk index "snapshot"; risk managers' slow road to the top job; wind power and weather derivatives; and can a World Currency Unit stabilize forex volatility?

22 QUANT CLASSROOM
Attilio Meucci shows how to estimate VaR and other risk numbers for arbitrary portfolios by emphasizing historical scenarios similar to today's environment.


WEBCASTS
Risk Appetite, Governance and Corporate Strategy

The Ramifications of the Dodd-Frank Act: Derivatives

VIEWPOINTS
Should a Board of Directors Have A Separate Risk Committee?

Demonstrating the Effectiveness Of Compliance and Ethics Programs

Risk Culture and Tone at the Top
Increasingly accountable to regulators and shareholders on risk management matters, executives and directors should not lose sight of traditional goal-setting, leadership and communications as they become conversant in the technicalities and analytics of financial market complexity, according to consultant James Stewart.

Governance and the Risk Appetite Statement
Any investor weighs risk-return trade-offs, expressed in terms of risk appetite. But the differences between individual investment decisions and those of a collective board acting on behalf of stakeholders need to be better understood, says ICMA Centre's Deepa Govindarajan.


Page 7: riskprofessional201112-dl

With so much uncertainty in the global financial markets, it's more important than ever to have the right skill set. Because Certified FRMs have proven they have the knowledge to manage a broad cross-section of financial risk, they are qualified to help solve today's complex challenges. That's why Certified FRMs are in demand by the leading banks, consulting firms, corporations, asset management firms, insurance firms, and government and regulatory bodies worldwide. And that's why there's never been a better time to be an FRM.

Registration for the May 2012 FRM Exam opens on December 1. To learn more, visit www.garp.org/frm

Creating a culture of risk awareness.TM

© 2011 Global Association of Risk Professionals. All rights reserved.

There’s never been a better time to be an FRM®.

Page 8: riskprofessional201112-dl


Naming Names

Risk managers, investors and regulators seeking guidance about publicly-held companies with troublesome risk profiles are getting some help from the New York-based corporate governance research organization GMI.

The independent provider of environmental, social and governance (ESG) ratings on 4,200 actively traded companies worldwide released its inaugural GMI Risk List in October. It consists of 10 North American corporations in alphabetical order that exhibit "specific patterns of risk . . . representing an array of ESG and accounting transparency issues."

The methodology places emphasis on non-financial considerations that are increasingly seen as factors contributing to corporate failures. Companies that lag tend to have compensation or pay-per-performance issues, questionable board independence, ownership structures that disadvantage large numbers of public shareholders, or poor environmental, health and safety risk disclosures. GMI bases its assessment on the likelihood of such problems' resulting in such negative outcomes as regulatory actions, shareholder litigation and material financial restatements.

“Anybody focused on risk management efforts, including contingency planning, supply


A top 10 of risky situations
By Katherine Heires

chain risk or environmental, social and corporate governance risks, will find this research of great interest," says Jack Zwingli, CEO of GMI, which was formed in December 2010 with the merger of GovernanceMetrics International, the Corporate Library and Audit Integrity.

Zwingli notes that in the last decade, meltdowns such as those of Adelphia Communications

only heightened investor, regulator and risk manager interest in monitoring and understanding cultural and qualitative as well as financial issues that can drive companies into bankruptcy, regulatory discipline or costly litigation.

Multiple Concerns
One GMI Risk List company, for example, was cited for the fact that investors outside the founding family had little power to influence the company's course, while GMI research showed it may also be inappropriately capitalizing assets and smoothing earnings. Another had a board deemed to

was highlighted for lack of attention to environmental risk while also capitalizing some expenses to inflate reported earnings.

Aside from the top 10 release, GMI offers more in-depth analytics about each company's weaknesses through its GMI Analyst research platform, including a complete list of the red flags or factors seen as having a material impact on investment returns. Not more than 5% of the list of 4,200 companies GMI tracks fall into the riskiest category, receiving the lowest grade of "F."

"It has been one factor after another that has only served to heighten awareness of the importance of these non-financial issues at corporations and the need to measure and tighten these types of risks," says Zwingli.

Kimberly Gladman, GMI's director of research and risk analytics, states, "Fiduciaries increasingly consider an evaluation of ESG issues a core part of their investment due diligence. Our Risk List research is intended to assist investors in identifying particular ESG factors that may correlate with negative events."

GMI literature points out that ESG scrutiny is on the rise, for example, among institutional investors and other signatories to the United Nations Principles for Responsible Investment, which currently represents over $25 trillion in managed assets.

Combined Competencies
Zwingli says GMI is particularly well positioned to supply risk-related research because its own scope and synergies have broadened: The Corporate Library was known mainly for its expertise in governance issues, Governance-

The Risk List

UPFRONT

Company  Exchange  Ticker  Industry

Apollo Group Inc.  NASD  APOL  Personal Services

Comstock Resources Inc.  NYSE  CRK  Oil and Gas Exploration/Production

Comtech Telecomm. Corp.  NASD  CMTL  Communications Equipment

Discovery Communications  NASD  DISCA  Broadcasting

EZCORP Inc.  NASD  EZPW  Consumer Financial Services

K-Swiss Inc.  NASD  KSWS  Footwear

M.D.C. Holdings  NYSE  MDC  Homebuilding

News Corp.  NASD  NWSA  Media Diversified

SandRidge Energy Inc.  NYSE  SD  Oil and Gas Exploration/Production

Scientific Games Corp.  NASD  SGMS  Casinos / Gaming

Corp., Enron Corp., Tyco International and Worldcom, followed by ESG breakdowns at American International Group, BP, Massey Energy Co. and others, have

be dominated by insiders, while accounting indicated rapidly increasing leverage, low cash levels and concerns about inadequate expense recognition. Yet another

Source: GMI, October 2011

Page 9: riskprofessional201112-dl


UPFRONT


Benchmarks, metrics and other mathematics are hardly new to risk managers or to the discipline of risk management. If anything, quantitative analysis, and too much faith in it, threw financial risk management off course and helped precipitate the credit market debacles of 2007 and 2008.

To risk management practitioners and experts like Cory Gunderson, head of consulting firm Protiviti's U.S. financial services practice, the industry has gotten a better handle on risk in the aftermath of the crisis, aided by a new generation of sophisticated analytical tools and reporting mechanisms. At the same time, financial institution managements and boards of directors are inundated by data that taxes the capacity of the systems that they depend on for high-level risk assessments, oversight and strategic guidance.

"An avalanche of data and not a lot of analysis" is how Gunderson describes the situation. For that complex and hard-to-manage problem, Gunderson

is developing a solution, described as a risk index. It boils hundreds of pages of risk data that board members receive on a monthly or quarterly basis down to a single number.

Protiviti is not out to create a global indicator of risk levels, as has been done, for example, by GARP, explains Gunderson. This is a tool to produce a "single-number snapshot" of the state of risk at a given financial institution. The index is customized, scalable to any size of firm, and designed to enable an interpretation of "how it is doing with regard to its risks at any given moment," Gunderson tells Risk Professional.

Deliberate Rollout
It is about a year since Gunderson began rolling the concept out, establishing the beginnings of a track record with selected, undisclosed client firms. Protiviti has not made an official release of the product or of the methodology underlying it, though it published a fairly thorough, four-page overview in its FS Insights newsletter.

By Jeffrey Kutler

It revealed, for instance, the steps taken to aggregate and link an organization's strategic objectives to quantifiable risk measures and an algorithm that ultimately calculates the index. "An index can be broken down into different component parts, allowing for drill-down capability and a focus on core issues that drive an entity's risk profile," the article said.
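The newsletter stops short of disclosing the algorithm, so the following is only a minimal sketch, in Python, of how such an index might roll normalized risk measures up into a single number with a per-category breakdown for drill-down. The categories, weights and 0-100 scaling are assumptions made for illustration, not Protiviti's methodology.

# Illustrative sketch only: aggregate normalized risk measures into one index
# value with a per-category breakdown for drill-down. Categories, weights and
# the 0-100 scale are assumptions, not Protiviti's actual algorithm.
def risk_index(measures, weights):
    """measures: {category: {metric: value normalized to 0-1}};
    weights: {category: weight}, summing to 1.0 across categories.
    Returns (overall index on a 0-100 scale, per-category scores)."""
    category_scores = {
        category: 100.0 * sum(metrics.values()) / len(metrics)
        for category, metrics in measures.items()
    }
    overall = sum(weights[c] * score for c, score in category_scores.items())
    return overall, category_scores

measures = {
    "credit":      {"concentration": 0.40, "watchlist_trend": 0.60},
    "market":      {"var_utilization": 0.70},
    "operational": {"open_audit_issues": 0.30, "loss_events": 0.20},
}
weights = {"credit": 0.40, "market": 0.35, "operational": 0.25}

index, breakdown = risk_index(measures, weights)
# The breakdown supports the drill-down the article describes: it shows which
# category is driving a rise or fall in the overall number.
print(round(index, 1), breakdown)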

Gunderson says a more detailed white paper is forthcoming in early 2012.

The newsletter noted that "two basic yet crucial questions remain difficult for board members and management to answer easily: Is our organization riskier today than it was yesterday? Is our organization likely to become riskier tomorrow than it is today?"

The index promises to give boards "an accurate picture of the organization's current and future risk profile in a single, discrete number." The tool would similarly be applied by managers enterprise-wide and at senior, business-line and departmental levels, as well as geographically. What's more, "Executives and boards can use the information . . . to take action and run what-if scenarios."

Wider Applicability
Gunderson, a University of Chicago MBA who is based in New York and has been with Protiviti since 2002 after 11 years with Arthur Andersen, says, "We believe we are onto a cutting-edge idea." The index has been positively received where it has been implemented and has attracted "a ton of interest," not just from the financial industry. Finance is "my background," and an area that "lends itself to early adoption," notes Gunderson, but over time the index can work in other sectors too.

"We have not solved risk management," he adds. "We created a tool – a very efficient tool – to boil down where risks are coming from." He stresses that rather than replacing or replicating any existing risk management measures, the index is "another tool in the toolkit."

Although Gunderson and Protiviti are not ready to take the wraps off, he says that mul-

Metrics International for its ESG metrics and analysis, and Audit Integrity for forensic accounting analysis and risk models.

Future Risk List offerings will examine non-U.S. firms and industrial sectors where high risk factors are in play for all participants. GMI aims to publish its

lists twice a year, Zwingli said, adding that GMI is looking to expand beyond 10 companies, to possibly as many as 50.

Zwingli stresses GMI's independent status and the fact that it does not provide consulting services to firms seeking to improve their ESG ratings, which differentiates it from other research and ratings firms. GMI competes in some areas with the likes of Glass, Lewis & Co. and Institutional Shareholder Services, a unit of MSCI.

“We have been looking at risk measures for several decades,” says Zwingli, “and have been

able to identify those metrics and patterns of behavior most predictive of negative events. We believe risk managers need to make sure they are aware of the risks that are most critical at a given company and make them a part of their everyday risk assessment process."

Cory Gunderson

Page 10: riskprofessional201112-dl


UPFRONT

tiple risk measures and time series go into the index calculation; and that it has a quantitative component while also having to be "responsive to the culture of a company" so that it resonates with, and has value to, the people within it.

As stated in the Protiviti newsletter, "By equipping companies with a more comprehensive, comprehensible and actionable snapshot of their organizations' risk management progress, risk indices can help risk officers, board members and shareholders become more confident that they understand the business's risk profile – and whether risk levels are rising or falling."

For a Protiviti commentary on boards and risk committees, click here.

Slow Road to the Top

Warren Buffett is known as the sage of Omaha as much for his straight talk and common-sense approach as for his legendary investment successes. A case in point was what he said in a CNBC interview two years ago: "The CEO has to be the chief risk officer for a bank."

As much as he may have been on point, the billionaire did not mean that the CEO must literally or simultaneously wear the official hat of CRO. But his remark does prompt a question: Why have there not been more CROs or others with risk management backgrounds rising, as has been anticipated for the last few years, into CEO positions at major banks?

The February 2009 Preview Edition of this magazine headlined on its cover, "The Rise of the Risk Executive," alongside a photo of Matthew Feldman, president and CEO of the Federal Home Loan Bank of Chicago. Feldman, a former commercial banker who had earlier been senior vice president of risk management, remains one of the few to climb from that rank to the top of a financial institution of any


The rise of the CRO is taking time
By L.A. Winokur

prominence. Also spotlighted in that early

issue of Risk Professional was Mau-reen Miskovic, a career risk man-ager who made the rare move from board of directors to CRO at Boston-based State Street Corp. in 2008. In early 2011, she became group CRO of Switzer-land-based UBS, which was sub-sequently hit by a costly rogue trading scandal. UBS’s previous CRO, Philip Lofts, is now CEO of UBS Group Americas.

Why haven't more CROs followed that path?

According to Kevin Buehler, who co-founded McKinsey & Co.'s global risk management practice and co-leads its corporate and investment banking practice, the answer is that it will just take time. One factor working against CROs is their lack of corporate longevity. "Whenever unexpected losses occur at large financial institutions, the CRO is often in the line of fire," Buehler explains. This results in "tremendous turnover in the CRO position, particularly at institutions suffering a high level of losses at a time of financial turmoil."

Hollywood portrayed this very trend in Margin Call, a recently released, critically praised film by J.C. Chandor (see the October Risk Professional, page 8). When a financial crisis threatened the collapse of a Wall Street securities firm, senior risk managers took the fall.

Another impediment has to do with the role CROs play in the C-suite. In some organizations, says Buehler, it is viewed principally as a control function not unlike that of the head of internal auditing – which is not an easy launching pad for a prospective CEO.

Becoming Strategic
On the other hand, where the CRO plays more of a strategic role – for instance, helping to balance risk and return similar to what a chief financial officer would do, or heading what is regarded as a key business unit – there is a greater likelihood of moving up the ladder and contending for CEO. This is especially true, says Buehler, for candidates who have a proven track record in both good times and bad.

One more piece to the puzzle: A greater risk consciousness at the board level of large financial services companies, which results from having more directors with risk and regulatory backgrounds. Corporate boards primarily care about "strategy; compensation and succession planning for the management team; and risk issues that could undermine the company," Buehler says.

Because of the financial crisis and the regulatory response to it, senior bank executives have been spending increasing amounts of time interacting with regulators, the McKinsey director continues. In those instances where a board enjoys a close relationship with its CRO, he says, it would increasingly value the skill set that a CRO brings to the table.

Buehler points out that CROs who play a central, increasingly more strategic role working with the board and on senior executive teams will be "one step closer to the CEO spot."

L.A. Winokur, a veteran business journalist based in the San Francisco area, wrote the February 2009 article, "The Rise of the Risk Executive."

[February 2009 special preview edition cover: "The Rise of the Risk Executive – Risk managers gain prestige and prominence, and take more heat, as market pressures mount." Pictured: Matthew Feldman, CEO, Federal Home Loan Bank of Chicago. Plus: State Street's Top Management Brings Its CRO Into The Inner Circle; Will Too Many Controls Kill The Credit Default Swap?; A New Technical Standard Proposed For Risk Reporting.]

Page 11: riskprofessional201112-dl


UPFRONT

A Basket in Reserve

WDX Organisation Ltd. in London has a new managing director, and he has high aspirations for what WDX is selling: the World Currency Unit, aka Wocu. Ian Hillier-Brook, a veteran financial industry and technology executive who succeeded WDX founder Michael King this summer, says he is aiming by mid-2012 to have a dozen exchanges around the world actively using the Wocu and for global banks to be posting Wocu rates and prices.

"We're attending conferences, writing articles in the U.K. for the Association of Corporate Treasurers and working with a number of global banks," Hillier-Brook, who had previously served as a director and consultant to WDX, tells Risk Professional.

It has been as much an educational pursuit as it is a commercial enterprise. A separate WDX Institute promotes the intellectual case for a currency unit.

The Wocu was conceived in 1996 as a weighted basket of currency values – the top 20 currencies based on gross domestic product are in the basket – that would lessen the volatility associated with conventional trading in currency pairs.
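As a rough illustration of the arithmetic of such a basket (the actual Wocu constituents, weights and algorithm belong to WDX and are not reproduced here), a fixed-amount basket can be valued against a reference currency as in the Python sketch below; the currencies, amounts and exchange rates are invented for the example.

# Illustrative sketch only: value a fixed-amount currency basket in USD.
# At each rebalancing the per-currency amounts would be reset in proportion
# to GDP weights. Currencies, amounts and rates are invented and are NOT
# the actual Wocu constituents or the WDX algorithm.
def basket_value_usd(units_per_basket, usd_per_unit):
    """units_per_basket: fixed amount of each currency in one basket unit;
    usd_per_unit: spot USD price of one unit of each currency."""
    return sum(amount * usd_per_unit[ccy] for ccy, amount in units_per_basket.items())

units_per_basket = {"USD": 0.30, "EUR": 0.19, "JPY": 11.5, "GBP": 0.06, "CNY": 1.25}
usd_per_unit = {"USD": 1.00, "EUR": 1.35, "JPY": 0.013, "GBP": 1.55, "CNY": 0.16}

# Diversification is the point: a move in any one currency pair changes the
# basket value only by that currency's weighted contribution.
print(round(basket_value_usd(units_per_basket, usd_per_unit), 4))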

Wocu calculations have been issued continuously since 2000, and the commercial launch on January 1, 2010, coincided with concerns in emerging markets and among some economists about the continued, long-term

By Juliette Fairley

viability of the U.S. dollar as a reserve currency.

Aid in Trade
When his appointment was announced in September, Hillier-Brook stated that "there is growing consensus among initial target users that the Wocu will be an invaluable component of future international trade."

He also said, "A balanced basket is the answer to dampening volatility, while including the U.S. dollar and the currencies of fast-growing developing countries reflects the real world." WDX also pointedly noted that the Wocu would "continue to function effectively, and uninterrupted, if the euro were to break up."

Says Hillier-Brook: "Corporate treasurers are saying they will happily use the Wocu if their banks are offering Wocu rates and prices." The Warsaw Commodities Exchange in Poland announced in June that it intended to use Wocu values in commodity trading for exchange-rate risk reduction.

Hurdles to Adoption
Market observers say the concept has merits, but achieving wide-scale adoption will be a challenge.

"I like this idea, but if you look at competitors, such as a stable currency index, they've struggled to keep traction. The Wocu is more complex because it is more diversified," says Axel Merk, president and chief investment officer of Palo Alto, California, currency fund manager Merk Investments.

Having raised private capital to commercialize its product, WDX “spent the last two years building infrastructure, getting technology in place and licensing the Wocu,” says Hillier-Brook.

It is not the only potential alternative for the dollar-dominated currency reserve system. One proposal, articulated by China's central bank chief in 2009, among others, was for the International Monetary Fund's synthetic measure for reserve assets, Special Drawing Rights, to become the calculation mechanism. Hillier-Brook sees disadvantages in the SDR: It is exchangeable with only four major currencies and is updated every five years, versus the Wocu's twice-yearly rebalancing.

"Volatility of currencies is a major concern at the moment, but simply dumping the U.S. dollar does not make sense," says Hillier-Brook. "We have the countries whose economies are strongest in the basket, including the U.S. dollar. A balanced basket reduces volatility and helps to manage the problem, rather than solve it by creating a derivative."

Persistent Dollar
Despite the knocks that the dollar has taken, currency traders still have a high opinion of it, asserts Dev Kar, a former IMF senior economist who is lead economist of the Washington, D.C.-based research and advocacy organization Global Financial Integrity. "The dollar has the dominant position in the invoicing of imports and exports because it is used as a third-party currency between countries, such as India and China," says Kar. "To supplant the dollar is a difficult task. All I can say is good luck."

Merk adds that the Wocu will have a hard time rivaling the major currencies' liquidity. "There are non-deliverable currencies in the basket," he points out. "You have to come up with a deliverable currency. The drawback is that with emerging market currencies, there is often no full convertibility."

WDX believes its time will come and its formula is tested. As its Web site (www.wocu.com) explains, "modern risk theory and mathematics" have gone into the Wocu, "a step-changing currency instrument, free of political influence. Unlike the outdated SDR, the Wocu is distributed from the WDX algorithm and modern, real-time technology infrastructure. The Wocu truly reflects exchange rates and economic power, calculated on a sub-second basis. [It] will herald lower risk, lower volatility and an answer to the allocation puzzle of reserve currencies."

Page 12: riskprofessional201112-dl


Wind Shift

Who has seen the wind? Neither I nor you, wrote the 19th century poet Christina Rossetti. Nor did wind farm operators of RES Americas in Texas during the winter of 2009-'10 when energy production sagged.

But the weather changes. This year saw “the windiest July in Texas in a long time,” says Michael Grundmeyer, head of global business development strategy for Seattle-based 3Tier, which quantifies and manages exposure to renewable energy risks.

As wind power, increasing 26% annually, becomes a growing source of electricity generation, so does the demand for this form of weather risk management and weather derivatives for hedging.

"The natural variability of wind led to conservative financing and a relatively high cost of capital," says Martin Malinow, CEO of Galileo Weather Risk Management Advisors in New York, which introduced WindLock in May using 3Tier's data. "It is a tool to manage the uncertainty of wind-driven earnings."

Models Advance
Weather modeling generates historical time series – 3Tier has 30 years of data – resulting in global benchmarks for wind variability risk assessment and mitigation, and offering energy traders and


Weather derivatives make a mark
By Janice Fioravante

grid operators tools for scenario analysis.

Weather prediction modeling has evolved from the first weather derivatives of the late 1990s that addressed temperature differences in summer and winter for energy companies. These "are still traded, still structured today," Grundmeyer notes.

Standardized contracts trade on the Chicago Mercantile Exchange, customized ones over-the-counter. OTC deals are generally larger in dollar terms, up to $150 million or $200 million notional. "There is an increased balance between the exchange-traded market and OTC markets," says Bill Windle, managing director of RenRe Energy Advisors in The Woodlands, Tex. and president of the Weather Risk Management Association (WRMA). The product class has grown to include hurricane, rainfall and snowfall futures and options.

For the wind segment, using multiple clusters of high-powered computers for weather prediction models, 3Tier aims at "making data digestible for those with a financial stake in the numbers," explains Grundmeyer. Key questions for wind farm operators and developers concern variability of wind speeds at a given site, on what time scale and with how much predictability.

"We were involved in five projects in Texas that suffered" from the El Niño weather pattern, and "they couldn't get enough wind," recalls Grundmeyer. "They have exposure to a specific geographic area and want to lay off that risk from their balance sheets. The owners of wind power projects need to know the points where there are deviations from the norm that affect the larger issues of grid operability scheduling."

Enter Speculators
Operators want to avoid the impact of intermittent wind and having to buy replacement power at high prices on short notice.

The conventional "swap" approach would involve finding a counterparty to take the other side of the risk trade, perhaps a thermal power plant operator. "But maybe it doesn't have to be a two-sided market," says Jeff Hodgson, president of Chicago Weather Brokerage, who thinks speculators will make the market.

"With weather, your model tells you something is going to happen," says Hodgson, a former Merrill Lynch wealth management adviser who started CWB in 2009 when he was excited by the CME's introduction of snowfall-related products. "If you think it's going to snow over 60 inches in Chicago, you look for someone to take the other side of the trade – it is two speculators betting against each other. Someone will be right and someone will be wrong, a zero-sum game."
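A stylized numerical example of the zero-sum trade Hodgson describes, using a linearly cash-settled contract on a seasonal snowfall index, is sketched below in Python; the strike, dollars-per-inch and realized snowfall are invented for illustration and are not actual CME contract terms.

# Stylized example of two speculators on opposite sides of a cash-settled
# snowfall contract. Strike, dollars-per-inch and realized snowfall are
# invented for illustration, not actual CME contract specifications.
def settle(realized_inches, strike_inches, dollars_per_inch):
    """The long receives (realized - strike) * dollars_per_inch; the short pays it."""
    long_pnl = (realized_inches - strike_inches) * dollars_per_inch
    return long_pnl, -long_pnl  # zero-sum: one side's gain is the other's loss

long_pnl, short_pnl = settle(realized_inches=72, strike_inches=60, dollars_per_inch=500)
print(long_pnl, short_pnl)  # prints 6000 -6000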

Hodgson is usually the expert-educator on the subject, but he tells of attending a conference of snow-removal businesses and hearing how they understood derivatives as a way to off-load risk.

“They’d come up with ideas for using futures and options that I hadn’t considered yet.”

In only the second year of trading, volumes in snowfall futures and options at the CME have increased more than 400%. "We anticipate exponential growth again next season," says Hodgson.

These contracts' benefits, he says, include their non-correlation with stock and bond markets.

Global Interest
"With a tradable marketplace," adds Hodgson, "the farmer has the ability to trade out of a portion, or all, of the position prior to expiration. This is something that is unique to this market."

More than 1.4 million weather derivative contracts, a record, were written globally in the 12 months through March 2011, according to the Washington-based WRMA. The aggregate notional value of OTC contracts was $2.4 billion, out of a total market of $11.8 billion.

Temperature contracts are the most traded customized weather hedge. Growth was also seen in rainfall, snow, hurricane and wind contracts, reflecting end-user participation from a wider variety of industries such as agriculture, construction and transportation.

Weather risk is taking its place alongside financial and valuation risks as a factor in business revenues and bottom lines. Recent guidance from the Securities and Exchange Commission focuses on disclosure of climate-change impacts. Investors may not want to hear companies citing weather as an excuse for missed earnings.

UPFRONT

Page 13: riskprofessional201112-dl


If you manage risk, money or investments, you recognize how critical it is to stay on top of global industry developments — and to understand what they mean for your organization.

GARP provides its Members with the expert risk information and the global professional connections required to excel. From the latest research and premium risk content, to Members-only education and networking opportunities, you can benefit from the depth and breadth of GARP's global community.

Become a Member of GARP today and maintain your professional advantage.

Creating a culture of risk awareness.TM

© 2011 Global Association of Risk Professionals. All rights reserved.

A GLOBAL NETWORK.

Page 14: riskprofessional201112-dl


The ERM Difference

Principal Financial relied on enterprise risk management in an aggressive global expansion begun in the wake of the dot-com crash and the post-9/11 downturn.

By Michael Shari

CRO-CEO DIALOGUE

The Principal Financial Group was ahead of its time in relying on risk management as a strategic guide in insurance and retirement businesses launched more than three decades ago. Risk management also provided underpinnings of decisions such as the company's demutualization and initial public offering on October 26, 2001, just six weeks after the September 11 terrorist attacks.

While the U.S. was preparing for war amid continuing aftershocks from the previous year's dot-com bust, Principal set itself apart from its peers by making acquisitions in Asia, Australia, Europe and Latin America. Still aggressive on that front, the company announced three acquisitions this year: global equity investment firm Origin Asset Management of London in July; pension manager HSBC AFORE of Mexico City in April; and emerging markets fund manager Finisterre Capital of London, also in April.

Founded in 1897, Des Moines, Iowa-based Principal is the 11th largest life insurer in the U.S., as well as the biggest manager of bundled 401(k) retirement plans. It also owns Principal Bank, which opened in 1998 after being one of the first online banks approved by its then-regulator, the Office of Thrift Supervision.

Because of its focus on small and medium-size companies, which tend to outperform larger ones in an anemic economy, Principal’s retirement business has largely averted the risk of shrinking payrolls at a time of steadily high unemployment.

What's more, senior vice president and chief risk officer Greg Elming sees "a natural hedge" in the fact that Principal's $54 billion general account is divided into more than 10 distinct segments to match distinct businesses and the geographic diversity of those businesses. Principal reported $9.16 billion in revenue and $717 million in net income last year. It has offices in 15 countries, 16.5 million customers and $336 billion in assets under management.

Elming and CEO Larry Zimpleman are both Iowa natives

Photographs by © 2011 Scott Sinklier

Page 15: riskprofessional201112-dl


"Our net income held up better because of a greater focus on enterprise risk management, and that served us well during the financial crisis." ~ Larry Zimpleman, CEO, Principal Financial Group

Page 16: riskprofessional201112-dl


CRO-CEO DIALOGUE

who grew up within the organization. Elming, a 29-year Principal veteran and University of Iowa graduate who had been comptroller for 10 years, became CRO in April, succeeding Ellen Lamale, who had been with the insurer since 1976. Zimpleman, a 40-year veteran of the company, a Drake University MBA and now a trustee of the Des Moines school, has been president since 2006, CEO since 2008 and chairman since 2009.

Zimpleman, 60, and Elming, 51, agree and are acutely aware that the risks Principal faces are "growing geometrically" as the company expands. To manage those exposures, they rely on a force of 300 risk professionals in the CRO's office, in the business units and even in shared services like human resources. Their common touchstone is a multi-page grid in which all of those risks are painstakingly catalogued as financial or operational and then broken down into several levels of specificity, from 30,000-foot overviews to granular close-ups. The executives candidly walked Risk Professional through Principal's risk management framework and positioning relative to competitors in this recent interview.

How did risk management develop historically at Principal?
ZIMPLEMAN: In the late 1970s and early '80s, we began to take some pretty early and ground-breaking steps. We had moved from the 1950s, when interest rates were 3% to 4%, to the late 1970s, when they were 20%, and short rates were higher than long rates. That had tremendous ramifications for how we thought about how to manage a multiline company like we were. That "ah-ha!" moment told us a couple

things. You can't operate a single pool of investments attached to multiple business lines, because the characteristics of life insurance are very different from health insurance. We were one of the very early insurance companies to split up our general account into, I'd say, 10 to 12 investment pools. We began to adopt much more detailed studies of asset and liability management. This was all going on in the early 1980s, when that was just not the way that the business was done.

What role did risk management play in strategic moves such as demutualization and expansion in emerging markets?
ZIMPLEMAN: The reality of being a mutual company was that the value of the company was going to a small block of individual policyholders. That was not a fair or equitable sharing of the value that was being put on the books. Most of the customers were coming into the retirement and mutual fund spaces. They were not participating policyholders. So we outgrew the mutual form. Hand-in-hand with being a public company was a need to focus on an opportunity that would have a long shelf life to it. We said one of the things we do really well is that we have this focus on enterprise risk management, and we do really well in the U.S. retirement space. So let's do a global scan for countries that offer the best long-term opportunity, particularly for small and medium businesses. We started in the early 2000s in greenfield businesses in markets like Mexico, Brazil, Hong Kong, and Malaysia. We were 10 or 12 years ahead of our time in identifying that opportunity and getting ourselves into these markets.

Can you contrast Greg Elming's approach as CRO with that of his predecessor?
ZIMPLEMAN: When we started this ERM effort in the 1980s, it was virtually 100% focused on mitigating financial or economic risks. As we become more global, we are getting into new levels of currency risk and different levels of political risk. In the last three to five years, a whole new bucket of what I would call business or operational risks has been added to the suite of risks that the CRO is responsible for – reputational risks, compliance risk, IT risk, cybersecurity risk.
ELMING: It is not that Ellen [Lamale] was not necessarily looking at [those risks]. It's just perhaps placing a little more emphasis on them, given the environment today. A lot of those things were what I had to rock-and-roll on in my comptroller role. But the vast majority of my focus is to enhance and augment our ability around the operational risk efforts at the specific time that those things are becoming probably more voluminous, more complex, more involved than maybe they have been in our history.

How would you describe the empowerment of the CRO?
ELMING: I don't want to imply too much about the independence and oversight of the job. But I have access to a department that has all the professionals I need. Collaboration is huge here. We have no less than 100 different risk committees within the organization, dealing with all aspects of managing risk. Plus I get the pleasure of having quarterly discussions with our board and very active subcommittees of the board, whom I talk with frequently. It cascades from Larry, from the board,

Page 17: riskprofessional201112-dl


CRO-CEO DIALOGUE

and is accepted as a good, solid business practice by all the business leaders, which I think is the ultimate blessing of empowerment, if you will.

Do you see yourself as a peer of the CEO?
ELMING: I am not sure I'd put myself that high up, but I would consider myself as having the ear of anybody that I need to have the ear of.
ZIMPLEMAN: Relative to our chief information officer, our chief investment officer, our chief marketing officer, Greg's role, title and compensation are absolutely on the same plane.

Is risk management closer to the front office at Principal than it would be at other companies?
ELMING: We believe that risk management ought to be in three lines of defense. The first is the [300] business-unit risk professionals in our five key operations as well as corporate functions such as IT and human resources, which span various business units. We feel very strongly that the responsibility for identifying, prioritizing and mitigating all aspects of enterprise risk needs to sit with the business units. We also need to know we have oversight coordination, with the greater good and consolidation perspective in mind. That is where we have the CRO office, which is our second line of defense. I take an independent look at things with a little bit of a skeptical perspective. I report directly to Larry, so there isn't anybody clouding my judgment or tweaking my perspective. The third line of defense is internal audit, where we have 25 risk professionals from a variety of backgrounds, be it operations or finance or IT, who go in and actually kick the tires, independently assess

whether we are doing what we say we are doing, and make sure that some other risk isn't there that we haven't identified yet.
ZIMPLEMAN: What's key is that the CRO and the business units are in active discussions. This is not someone who is removed from those discussions and is then asked later to provide some sort of input or opinion about it. They are sitting right there at the time when discussions are going on, with an equal voice, if you will, as to whether this is something that the company does or does not want to move forward on. It's not someone being at a level higher than someone else saying, "I will pull the rank card." It would be more around, "Here's a competitor's product. If we are going to offer something similar, what would we do to make it commercially viable?" When that dialogue between risk professionals and businesspeople would get created, what would come back is, "You can't cover all those risks, so you will – excuse my English – have to go naked on those, which is not something you want to do. Or you might be able to do it, but the price is so outrageously high that the product is not commercially viable."

What pitfalls have you avoided?
ZIMPLEMAN: The poster child is variable annuities. I can't tell you how many meetings I went to in 2004, 2005 and 2006, where we were continuously asked, "Why aren't you selling variable annuities in the wealth advisory channel?" Our answer was, "Well, we are just not comfortable with the kind of contract provisions, the pricing and the subsequent hedging that we would have to do." It's not as if we aren't in those products. For us, variable annuities with living benefits is about $2 billion. At a company like MetLife, it would be well over $100 billion.

Any other areas avoided?
ZIMPLEMAN: If you look at our peers, many would probably have 15% to 20% of their general-account investment portfolio in some sort of asset backed by residential real estate. For us, you would find it at more like 2%. We never invested much in those securities because we were never that comfortable with the durational and economic make-whole conditions around them. And that is today a huge, huge positive for us because of the continuing trouble in residential real estate, which we completely avoided.

Sell-side analysts have described your exposure to commercial real estate as a bit outsized.
ZIMPLEMAN: With any investment portfolio, you have to decide where you want to take your incremental opportunities. Commercial mortgages have proven to be a much better source for investments in insurance company accounts than residential real estate. The long-term historical loss ratios are quite low. All of our commercial mortgages are underwritten by our own people. Very few companies have that level of commitment. We would have exposure in our general account in rough terms today that is 20% to 25% in commercial mortgages, as compared to peers at, let's say, 10%.

To what extent is Principal safe from experiencing what happened at Lehman Brothers, Bear Stearns, AIG and UBS?
ZIMPLEMAN: It is very hard for me to relate to companies where internal processes were such that CROs or risk

Page 18: riskprofessional201112-dl


CRO-CEO DIALOGUE

“Collaboration is huge here. We have 100 different risk committees.”

~ Greg Elming, CRO, Principal Financial Group

Page 19: riskprofessional201112-dl


CRO-CEO DIALOGUE

professionals were considered second class. We were as surprised and shocked as most people were when we found out how little attention was being paid to risk. To some degree, that exists even in the insurance industry. Manulife Financial is probably the best example of a company that was held out from 2005 to 2007 as having very astute enterprise risk management. Then you come to find out that their variable annuity business had absolutely no hedging and no programs in place. It has cost them very significantly.

How do you distinguish Principal's risk management from that of peers?
ZIMPLEMAN: Look at our net income, which shows not only how operating earnings performed, but also how the investment portfolio performed, from 2007 to 2009, from peak to trough. This is where many of the ERM elements get embedded – into the portfolio performance. In our case, net income dropped by roughly 50%. But for our peer group – Prudential, Lincoln Financial, AXA and others – the entire net income was wiped out, on average. We believe our net income held up better because we have been operating with a greater focus on enterprise risk management, and that it served us well during the financial crisis.

How do you integrate risk management with portfolio management?
ELMING: The way we view asset management risk, primarily as it relates to our asset management operations, is mostly around operational risk or business risk – the things associated with business practice, people, process, systems and non-economic factors that can influence

us. Our asset management professionals, including risk professionals, are closely integrated with our general account, or our Principal Life and investment management professionals, too, to make sure we are managing things consistently. We have both at the table every week when we discuss the general account investment strategies.

Does the weekly meeting have a name?
ELMING: It is the Principal Life Insurance Investment Committee, Friday mornings at 9:30. The primary members are Julia Lawler, chief investment officer, who chairs the committee; the CRO, CEO, and CFO [Terrance Lillis]; two or three key business unit leaders who are generating the liabilities that we invest from; and James McCaughan, president of Principal Global Investors.

Is risk management seen as having helped mitigate losses, or as a contributor to profits?
ELMING: Both. We view risk from both lenses and gather data, analyze, track, and set tolerances that are both lower-bounded and upper-bounded. We are not just looking to keep disasters from happening. We also have measurements to make sure we are taking enough risk, and taking advantage of situations that we need to take advantage of.

What is happening in the compliance area?
ELMING: One of the key things confronting the industry – and me in my job, personally – is the flood and flurry of regulatory, legislative, accounting and compliance activity. It is pretty easy to look at that stuff and get buried with, "Oh my gosh, I've got to find a way to

comply with all this stuff and avert disaster." But we also look at that as, "OK, where are the opportunities in this?"

What is your view on interest rates?
ZIMPLEMAN: That is the type of thing we have a very vigorous discussion around at the investment committee meetings. Two to three years from now, we are [still] going to be talking about relatively low interest rates. There is a view that, over time, there is some chance that they would migrate higher, primarily driven by inflation, which is coming out of the emerging economies moreso than the developed economies.
ELMING: When managing interest rate risk, you need to be pretty astute and controlled in derivative utilization, and that is something we monitor and control very tightly. Currently, we utilize over-the-counter derivatives. Of course, you've got to manage exposures to specific counterparties and make sure you don't get outside your limits. Ellen and Julia set up really good mechanisms on that front.

This has been an unpredictable environment in which to deploy $700 million in capital, as Principal had said it would. How are you handling the risks involved?
ZIMPLEMAN: Since that's a fairly large amount, you ought to expect us to be active in all areas. We have done about $350 million in M&A and about $450 million of share repurchases. And that $700 million, by the way, became about $1 billion because of favorable events during the year. That doesn't mean we have to spend it. This is not the drunken sailor syndrome. The board may simply choose to see what happens over time.


C R E D I T R I S K

Stress Testing: A Robust End-to-End Approach

There is one approach for stress testing corporate portfolios that can not only yield credible, transparent results but also lead to improved accuracy of probability of default and loss-given default forecasts.

By Bruno Eklund, Evguenia Iankova, Alessandra Mongiardino, Petronilla Nicoletti, Andrea Serafino and Zoran Stanisavljevic

Over the last few years, and in particular in the light of the financial crisis of 2007-2008, stress testing has become a key tool in the risk management "arsenal" of financial institutions and a crucial input into the decision-making process of a bank's board and top management. It is also a core component of the dialogue between banks and regulators as part of the Supervisory Review Process (Pillar 2 of Basel II).

In order to be a truly effective risk management tool, stress and scenario analysis must make clear how a given scenario impacts different portfolios, highlighting the assumptions on which the analysis is based. Equipped with this information, a firm will then be in a position to manage its portfolios proactively, ensuring that the impact of a given stress is within the firm's tolerance for risk. Ultimately, robust stress testing is about preparing for anything that might happen.

In this article, we specifically focus on stress testing for banks' corporate portfolios and describe an approach that allows a firm to assess in a transparent fashion how a given scenario affects banks' capital requirements for corporate portfolios, via estimating stressed PDs and LGDs.

Our framework has three main advantages:

• First, it links high-level scenarios to a more granular description of the economy, by estimating values for virtually any variable that can be considered to be a risk driver for a sector. As such, it improves the accuracy of probability of default (PD) and loss-given default (LGD) forecasts.

• Second, by considering a large set of macroeconomic variables in a sound econometric model, it contributes to producing credible and transparent results.

• Third, it facilitates the identification of portfolios, or parts of portfolios, that are particularly vulnerable to stressed conditions. As such, it provides a sound basis for proactive risk and portfolio management.

The main aim of stress testing is to evaluate the impact of severe shocks on a bank and assess its ability to withstand shocks and maintain a robust capital or liquidity position. If properly conducted, it sheds light on vulnerabilities otherwise not identified, informs senior management in the decision-making process, and underpins risk-mitigating actions to ensure the long-term viability of the firm.

When a scenario is set — for example, either by a bank's management or by the regulator — it is typically only articulated on the basis of a few variables, such as GDP and inflation. However, the determinants of the solvency of firms, which are likely to be sector specific, may not be included among the scenario variables. Trying to impose a relationship only between the given scenario variables and the PD and LGD may not be


appropriate, and may lead to biased and spurious results that undermine the practical use of the stress analysis.

This article will describe a factor-model approach that allows us to obtain any variables considered to be determinants of the corporate PDs for each sector, based on broadly defined scenarios. On the basis of this model, it is then possible to estimate the impact of the scenario on portfolios' PDs and LGDs and, through these, on the bank's capital requirements.

This feature underpins a transparent end-to-end approach for stress testing of corporate portfolios that links the setting of high-level scenarios all the way to the estimation of capital requirements under stress. It also provides a proactive risk management tool, as it helps to identify which portfolios are most affected by a given stress and what mitigating actions would be required, if any, to ensure a firm's financial strength under stress. The end-to-end process is summarized in the diagram below.

Diagram: From Scenarios to Mitigating Actions — A Robust End-to-End Approach [steps: high-level scenarios; full macroeconomic scenario (base case and stress); factor analysis and scenario replication; scenario-driven PDs and LGDs; capital and expected loss forecasts (base case and stress); stakeholder involvement and mitigating actions (if necessary)]

Our approach not only provides the tools to evaluate the impact of a scenario on sector-specific PDs, but is also less likely to suffer from under-specification, a problem shared by other macroeconomic models. In addition, unlike a purely judgmental approach to stress testing, it makes every step in the analysis transparent.

We should point out, though, that the model does not take into consideration firm-specific or idiosyncratic shocks, and is used only to evaluate the impact of macroeconomic stresses on PDs, LGDs and capital requirements. Furthermore, our

discussion focuses specifically on assessing the impact of stress and scenario analysis for banks' corporate portfolios. The extension of this approach to other types of portfolios — for example, sovereign — is an avenue for further work.

The article is divided into sections, loosely following the diagram above. Section I explores the link between base case, stress scenarios and losses. Section II discusses our analytical approach (the macroeconomic model, the construction of factors and their use to replicate scenarios), while Section III concentrates on modeling risk drivers to obtain scenario-driven PDs and LGDs. Section IV explains the usefulness of the approach as a basis not only for forecasting capital and expected loss (EL), but also for productive discussions involving different stakeholders and risk mitigating actions. Conclusions are found in Section V.

Section I: Why do Scenarios Matter? Base Case vs. Stress
Banks' profitability, liquidity and solvency depend on many factors, including, crucially, the economic environment they face. In the assessment of extreme scenarios, there are two pressing questions for risk management: (1) What are the consequences of the extreme scenario on a particular bank? (2) Under that scenario, what can be done to improve the bank's resilience to the shock?

To address the first question, the future impact of the economy under normal conditions (base case) is compared with that of the extreme scenario (stress). By affecting the risk drivers, the stress shifts the loss distribution (see Figure 1, pg. 21) and raises expected and unexpected losses and capital requirements. To understand the extent of the impact on those variables, it is important to consider the link between risk drivers and economic dynamics.




Figure 1: Scenarios and Losses

For corporate credit risk, the approach described allows for forecasting virtually any variable considered to be a risk driver given any scenario, and thus provides the basis for a robust answer to the first question about assessing extreme scenarios. The results are also useful in discussions on risk mitigating actions.

Section II: The Analytical Framework
The first building block of our approach is a UK macroeconomic model that uses 500+ UK and US quarterly macro and financial time series from 1980-Q1. The large number of variables considered over a fairly long sample period, which covers a few economic cycles, limits the risk of missing important drivers.

This model provides the background for stress and scenario analysis. To use an analogy, suppose that we want to evaluate the impact of a stone being dropped unexpectedly in the middle of a lake. The task is to predict the number of boats sinking and the number of fatalities. In this example, the probability of a boat sinking can be viewed as a “PD” and the mortality rate as a “LGD.”

The “PDs” and “LGDs” are affected by several factors – in-cluding, for example, the size of the stone; the size of the boat; the distance between the boat and the stone; the experience of the captain and crew; the availability of lifeboats; the strength of the wind; and the proximity to the shore. Assume now that we only use the first two variables, which, in this analogy, represent the set of scenario variables provided. On the basis of those two variables only, we may draw substantially wrong conclusions about the actual values of the “PDs” and “LGDs.”

On the other hand, the inclusion of too many variables can make a model unmanageable. Our preferred solution is to build a relatively small model, while still retaining most of the information contained in the dataset by constructing principal components.

The basic idea behind this method is that many economic variables co-move, as if a small set of common underlying and unobservable elements — the principal components (also called factors) — is driving their dynamics. Under this assumption, a small number of factors explain most of the variation in a large data set, and thus those few factors can be used to predict the variables in the dataset quite well.1

We believe that building a vector autoregressive (VAR) model based on the factors is a far better solution than the standard approach of specifying a small VAR model using a subset of the macro variables. A major drawback of a small VAR model is the high chance of under-specification, which may lead to unrealistic estimates and therefore significantly limits its practical use. For example, early VAR applications exhibited a price puzzle where a positive monetary shock was followed by a counterintuitive increase of the price level.2,3 Furthermore, it would be difficult to construct a meaningful VAR model that also would include the relevant determinants for the corporate sectors' PDs.

Constructing the Factors
After collecting the economic and financial data, we use them to derive factors, which are a parsimonious and manageable "description" of the state of the economy. All data are collected into a single matrix, X_t, where each variable has been transformed to be stationary and standardized to have a zero mean and unit variance. The matrix X_t corresponds to the "macro variables" box of historical data in Figure 2 below.

Figure 2: From Macro Variables to Factors, and Vice Versa
[Schematic: a "macro variables" panel and a "factors" panel, each split into historical data and a forecast horizon; arrow (1) maps the macro data to the factors, arrow (2) maps the factors back to the macro data, and box (3) marks the factor forecasts over the forecast horizon.]


The principal components or factors can then be constructed using the so-called singular value decomposition, which expresses the matrix X_t as a product of three separate matrices:

(1)  X_t = U_t L_t A_t^T,

where N is the number of observations and K is the number of variables in the standardized data set X.4 Using this decomposition of the matrix X, the factors are constructed as

(2)  F_t = U_t L_t.

Note that in this step, we obtain as many factors as variables included in the data set. Equation (2) corresponds to arrow (1) in Figure 2 (see pg. 21), linking the macro data to the factors. The data set can be retrieved by post-multiplying F_t with A_t^T, arrow (2) in Figure 2, and thus mapping the factors back to the macro data, as follows:

(3)  F_t A_t^T = U_t L_t A_t^T = X_t.

Since the general idea with using principal component factors is to reduce the number of variables that needs to be included in the analysis, we only use the first r < K factors. This limited number of factors summarizes the information contained in the underlying macroeconomic variables efficiently.5

When neglecting the remaining columns of F_t, a small error is normally made when transforming the factors back to the macro data set. The reconstructed matrix X_t can thus be expressed as a linear combination of the factors plus an error, as follows:

(4)  X_t = F̃_t Ã_t^T + ε_t,

where F̃_t and Ã_t contain the first five columns of F_t and A_t, respectively, and ε_t contains the estimated error made by not using all factors. This relationship is used to retrieve forecasts for the data matrix X_t from the factor forecasts over a given horizon.
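For readers who want to experiment with this construction, the following is a minimal numpy sketch of equations (1)-(4): it builds the factors from a standardized data matrix via the singular value decomposition and reconstructs the data from the first r factors. The data here are randomly generated placeholders, not the authors' 500+ series, and the function names are illustrative.

```python
import numpy as np

def construct_factors(X, r=5):
    """Principal-component factors via SVD, in the spirit of equations (1)-(4).

    X : (N, K) array of standardized (zero-mean, unit-variance) series.
    r : number of factors retained (five in the article).
    Returns the retained factors F_tilde (N, r), loadings A_tilde (K, r)
    and the reconstruction X_hat = F_tilde @ A_tilde.T.
    """
    U, s, At = np.linalg.svd(X, full_matrices=False)   # X = U L A^T
    L = np.diag(s)
    F = U @ L                                          # equation (2): F = U L
    F_tilde = F[:, :r]                                 # keep the first r factors
    A_tilde = At.T[:, :r]                              # corresponding loading vectors
    X_hat = F_tilde @ A_tilde.T                        # equation (4); error = X - X_hat
    return F_tilde, A_tilde, X_hat

# toy usage: 120 quarters of 50 standardized macro series (synthetic data)
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 50))
F, A, X_hat = construct_factors(X, r=5)
print(F.shape, A.shape, np.linalg.norm(X - X_hat))
```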

The factors can be compared to indices — i.e., they can be viewed as weighted averages constructed using a number of different variables. For our UK macroeconomic model, four out of the five estimated factors have direct interpretations that allow us to gain valuable insights into the final results. The first factor is highly correlated with GDP growth and related variables, and can be thought of as a proxy for economic activity. Similarly, the second factor is related to asset prices, the third to real interest rates and the fourth to productivity and employment costs.

Replicating Economic Scenarios
The framework previously described can be used to obtain estimates of the risk drivers, even if they are not included in the set of scenario variables provided, both under a base case and under a stress scenario. Good replication of a scenario is important for the overall performance of the stress testing procedure. Its purpose is to analyze how all other macro and financial variables — the potential risk drivers — are affected by the given stress scenario and, at a later stage, to analyze how this in turn affects the estimates of the PDs and LGDs.

As mentioned in the introduction, an important advantage of our framework is that it can be employed to forecast a wide variety of economic variables that can be used to derive PDs and LGDs. In our model, we can replicate a scenario by imposing the dynamics of the scenario on factors and on all other variables in the data set. The scenario usually consists of explicit values of the most common macro and financial variables over a given forecast horizon.

The macroeconomic scenario can be represented in Figure 2 (pg. 21, see the vertical lines in the macro data set box). Each column in this matrix is a separate variable. The lines are dotted over the known sample period and solid over the forecast horizon.

To be able to estimate the effect of the stressed macro variables on all other macro variables, we first need to obtain good estimates of the factor forecasts in box (3) of Figure 2. Once these forecasts have been estimated, we can use the back transformation, arrow (2) or equation 4, to reconstruct the forecasts of the remaining macro variables. This final step will fill in the


forecast horizon of all macro variables (not just for the variables that were included in the scenario), and will thus allow us to use any macro variable we desire in later steps of our stress testing process.

Because the replication of the scenario is subject to an approximation error, it is important that this process not only can replicate the given scenario variables but also gives rise to realistic forecasts of the other variables — especially the possible risk drivers. Below, we describe two alternative methods that can be used to replicate scenarios, and the conditions under which one is preferable to the other.

The first method calls for the inclusion of the variables in the stress scenario as exogenous variables in a VAR model (with factors as endogenous variables), specifying a so-called VARX model. This can be depicted as follows:

(5)  F_t = Φ(L) F_{t-1} + Λ(L) H_t + η_t,

where Φ(L) is a lag polynomial of finite order d applied to the lagged factors, Λ(L) is a lag polynomial of finite order s applied to the scenario variables, η_t is an error term and the matrix H_t contains the variables whose forecasts are given in the scenario — such as, for example, GDP growth and CPI inflation. The model should be specified to produce the best possible forecast performance and/or the best fit. Since the stressed variables have known paths over the forecast horizon, the factor forecasts (box (3) in Figure 2) can easily be estimated.6

The second method, an alternative to the VARX model, is to model the factors individually, which corresponds to setting the lag polynomial on the lagged factors, Φ(L), equal to zero in equation (5).

In some cases, the second method is better at replicating the scenario than the VARX, possibly because it is more parsimonious. Including lagged dependent variables, as in the VARX model, might put a lot of weight on the history of the factors and could thus be worse at capturing the given stress dynamics. Individual factor models will also guarantee that any given shock to a macro variable will feed into the factor forecasts directly.

However, this second method might miss key interactions between the factors. The VARX also performs better at forecasting factors with low correlation to the set of scenario variables given, and when the set of scenario variables available is very limited.
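As an illustration of the first method, the sketch below fits a stripped-down version of equation (5), a VARX with one lag of the factors and the contemporaneous scenario variables, by ordinary least squares and then iterates it forward along a stressed path for the scenario variables. The lag orders, names and data are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def fit_varx1(F, H):
    """Least-squares fit of a one-lag version of equation (5):
    F_t = c + Phi F_{t-1} + Lam H_t + eta_t."""
    Y = F[1:]                                                 # (T-1, r) left-hand side
    Z = np.hstack([np.ones((len(Y), 1)), F[:-1], H[1:]])      # constant, lagged factors, scenario vars
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return B                                                  # rows: constant, Phi', Lam'

def forecast_factors(B, F_last, H_path):
    """Iterate the fitted VARX forward along a given scenario path for H."""
    out, f = [], F_last
    for h in H_path:
        z = np.concatenate([[1.0], f, h])
        f = z @ B
        out.append(f)
    return np.array(out)

# toy usage: 5 factors, 2 scenario variables (e.g. GDP growth and CPI inflation), synthetic data
rng = np.random.default_rng(1)
F_hist = rng.standard_normal((120, 5))
H_hist = rng.standard_normal((120, 2))
B = fit_varx1(F_hist, H_hist)
H_stress = np.tile([-2.0, 1.5], (12, 1))      # 12-quarter stressed path for the scenario variables
F_stress = forecast_factors(B, F_hist[-1], H_stress)
print(F_stress.shape)
```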

One final point to remember is that a successful scenario replication requires an economic scenario derived using an analytical approach, so that the included variables are economically consistent with each other. This is especially important for stressed scenarios where variables might exhibit unexpected relationships between each other. Any ad hoc choices can severely affect any step of the stress testing process, introducing spurious results and unrealistic effects on macro and financial variables, and in turn on the forecasts for PDs and LGDs.

Section III: Scenario-driven PDs and LGDs
Once a scenario has been replicated, forecasts of the associated risk drivers can be obtained. These estimates are then used when modeling and deriving forecasts of the PDs and LGDs, which will then drive the capital and EL estimation.7 Our framework has two distinct advantages: (1) it produces estimates of the risk drivers by linking them to the macroeconomic environment; and (2) it takes into account the heterogeneity across business sectors.

Regarding the first aspect, our approach is original because it models the relationship between systematic factors, such as GDP growth and interest rates, and PDs and LGDs, while incorporating as much information as possible from the economic environment. Several papers estimate links between PDs and economic variables; however, our framework can use both macro variables and factors as explanatory variables for the PDs and LGDs.

As for the second aspect, our approach can differentiate between sectors in order to identify the specific vulnerabilities of a particular scenario. Different sectors are sensitive to sector-specific drivers and respond differently to the systematic risk. Therefore, we model each sector PD separately and use variables related to that particular sector — e.g., house prices for real estate — in order to capture the sector-specific dynamics.8
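A minimal sketch of this sector-level step, under simplifying assumptions: the logit of a sector's EDF-based PD is regressed on a couple of sector-specific drivers and the fitted relationship is applied to the scenario-consistent driver forecasts. The functional form (a logit regression) and all inputs are illustrative; the article does not prescribe a specific estimator.

```python
import numpy as np

def fit_sector_pd_model(edf, drivers):
    """OLS fit of logit(EDF_t) on sector-specific risk drivers (e.g. house prices for real estate)."""
    y = np.log(edf / (1.0 - edf))                        # logit transform keeps fitted PDs in (0, 1)
    Z = np.hstack([np.ones((len(y), 1)), drivers])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta

def stressed_pd(beta, stressed_drivers):
    """Map scenario-consistent driver forecasts into stressed PDs."""
    Z = np.hstack([np.ones((len(stressed_drivers), 1)), stressed_drivers])
    return 1.0 / (1.0 + np.exp(-(Z @ beta)))

# toy usage: quarterly EDFs for one sector and two drivers, synthetic data
rng = np.random.default_rng(2)
edf_hist = np.clip(0.02 + 0.01 * rng.standard_normal(120), 1e-4, 0.5)
drivers_hist = rng.standard_normal((120, 2))
beta = fit_sector_pd_model(edf_hist, drivers_hist)
pd_stress = stressed_pd(beta, rng.standard_normal((12, 2)))
print(pd_stress.round(4))
```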

Section IV: Estimation of Stressed Expected Loss and Capital, and Risk Mitigating Actions

After obtaining forecasts of PDs and LGDs, we can revert to our original question of assessing the impact of the stress on losses and capital requirements. The results of the stress test, usually compared to the results under the base case, will normally show an increase in EL and capital requirements. Figure 3 (next page) presents some stress test results in which the blue lines show the response to the base case and the red lines to the stress.
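To make this last step concrete, the sketch below compares expected loss (PD x LGD x EAD) and a Basel II IRB-style corporate capital charge under base-case and stressed PD/LGD values. The IRB formula is used purely as an illustration (maturity and size adjustments are omitted), and the numbers are hypothetical; the article does not specify which capital model the authors apply.

```python
import math
from scipy.stats import norm

def irb_corporate_capital(pd_, lgd, ead):
    """Illustrative Basel II IRB corporate capital charge (no maturity or size adjustment)."""
    w = (1 - math.exp(-50 * pd_)) / (1 - math.exp(-50))
    rho = 0.12 * w + 0.24 * (1 - w)                      # supervisory asset correlation
    k = lgd * norm.cdf((norm.ppf(pd_) + math.sqrt(rho) * norm.ppf(0.999))
                       / math.sqrt(1 - rho)) - pd_ * lgd
    return k * ead

def expected_loss(pd_, lgd, ead):
    return pd_ * lgd * ead

# base case vs. stress for a single corporate portfolio (hypothetical numbers)
ead = 100e6
for label, pd_, lgd in [("base", 0.015, 0.40), ("stress", 0.045, 0.55)]:
    print(label, round(expected_loss(pd_, lgd, ead)), round(irb_corporate_capital(pd_, lgd, ead)))
```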



Figure 3: An Example of Stress Test Results [two panels: Expected Loss (% of base quarter), Total, and Capital (% of base quarter), Total, plotted quarterly from 2010 Q2 to 2015 Q2]

The analysis is then followed by a comprehensive discussion in which stakeholders in different areas of the bank (e.g., risk and finance) challenge both assumptions and results. The results emerging from this debate can then be used to inform risk-mitigating actions.

In particular, our framework allows for the evaluation of corporate sector contributions to capital requirements and expected losses. If a sector under a certain stress, for example, drives a rise in capital requirements and/or impairments above the levels compatible with the bank's risk appetite, actions can be taken to limit the bank's exposure to that sector.

In this process, the involvement of senior management and the board of directors ensures that any decision taken is aligned with the bank’s risk appetite and is effectively incorporated into the wider portfolio and risk strategy.

Section V: Conclusions
The approach presented in this article shows how to establish a clear link between a broadly defined scenario, often defined only for a few macro variables, and a fully defined macroeconomic scenario. It also demonstrates how to assess the credit capital requirements for corporate portfolios via stressed PDs and LGDs.

The merits of our framework are multiple. First, by estimating values for virtually any variable that can be considered to be a risk driver for each corporate sector, it raises the accuracy of PD and LGD forecasts. Second, by efficiently considering the information contained in a large set of macroeconomic variables in a sound econometric model, it produces transparent results, which form the basis of discussion for proactive risk management. Third, it can derive estimates of PD and LGD determinants consistent with scenario variables produced by a bank's top management and regulators.

The approach is practical and transparent, and can be used as a key input to assess the bank's capital position at times of stress, with respect to its own risk appetite as well as regulatory requirements. Whenever mitigating actions need to be considered, the framework allows one to identify the specific portfolios toward which these actions should be targeted.

Of course, the output of the analytical framework should not be used in a mechanistic way. Rather, it has to be subject to a critical review based on sound judgment. The combination of robust modeling and sound management is, we believe, the basis for good risk management.

FOOTNOTES
1. See Jolliffe (2004) for details.
2. See Sims (1972) and Sims (1980).
3. For more technical information, please see Hamilton (1994), Lütkepohl (2005), Sims (1972, 1980), Stock and Watson (2002), and Bernanke, Boivin and Eliasz (2005).
4. U_t and A_t are (N x N) and (K x K) orthonormal matrices; U_t^T U_t = I_N, A_t^T A_t = I_K; L_t is a (N x K) diagonal matrix with nonnegative elements in decreasing order.
5. The methodology developed by Bai and Ng (2002) has been adopted to determine the optimal number of factors to use (in our case, five), which explains a major proportion of the variance in X_t.
6. A method to replicate scenarios exactly has been developed. However, when applying it, forecasts of many of the non-scenario variables are unrealistic and meaningless. There is an apparent trade-off


between realistic results and the degree of accuracy of the scenario replication.
7. Here we focus on modeling probabilities of default. However, a way forward would be to apply a similar methodology to LGDs.
8. A common problem is that internal time series of PDs are normally too short, which means, in most cases, that the sample period does not cover a full credit or business cycle. This is the case for us, which is why we use MKMV Expected Default Frequencies (EDF) as a proxy for the PDs.

REFERENCES
Bai, J. and S. Ng (2002). "Determining the number of factors in approximate factor models," Econometrica, 70, 1, 191-221.

Bernanke, B.S., Boivin, J. and P. Eliasz (2005). "Measuring the effects of monetary policy: A factor-augmented vector autoregressive (FAVAR) approach," Quarterly Journal of Economics, 120, 1, 387-422.

Forni, M., Giannone, D., Lippi, M. and L. Reichlin (2007). "Opening the black box: Structural factor models with large cross-sections," European Central Bank Working Paper, n.712.

Hamilton, J.D. (1994). Time Series Analysis, Princeton University Press.

Jolliffe, I.T. (2004). Principal Component Analysis, Springer.

Lütkepohl, H. (2005). New Introduction to Multiple Time Series Analysis, Springer.

Sims, C.A. (1972). "Money, income and causality," American Economic Review, 62, 540-552.

Sims, C.A. (1980). "Macroeconomics and reality," Econometrica, 48, 1-48.

Stock, J.H. and M.W. Watson (2002). “Macroeconomic forecasting using diffusion indexes,” Journal of Business & Economic Statistics, 147-162.

Alessandra Mongiardino (FRM) is the head of risk strategy and a member of the risk management executive team for HSBC Bank (HBEU). She is responsible for developing and implementing the risk strategic framework, to ensure that risk management is effectively embedded throughout the bank and across all types of risk.

Zoran Stanisavljevic is the head of wholesale risk analytics at HSBC. After working at Barclays Capital as a credit risk data manager and a quantitative credit analyst, he joined HSBC as a senior quantitative risk analyst in 2005. Shortly thereafter, he took over the leadership on the bank’s stress testing and economic capital projects, and was promoted to his current role in 2010.

Evguenia Iankova is a senior quantitative economist at HSBC, where she is currently involved in several projects, such as scenario building and stressing risk drivers. Following a stint as a quantitative economist in the economic research department at Natixis in Paris, she joined HSBC in 2008 to work on macroeconomic stress testing.

Bruno Eklund is a senior quantitative analyst at HSBC. Before joining HSBC, he worked at the Bank of England, where he built models for stress testing for the UK banking system. He also worked previously as a researcher at the Central Bank of Iceland, developing econometric models for forecasting the Icelandic business cycle.

Petronilla Nicoletti is a senior quantitative economist at HSBC, where she focuses on macroeconomic scenario development, replication for stress testing and the impact of shocks to risk drivers. Before joining HSBC, she worked as a senior risk specialist at the Financial Services Authority, the UK’s financial regulator.

Andrea Serafino is a senior econometrician at the FSA. Prior to joining the regulator in 2010, he worked at HSBC, specializing in macroeconomic stress testing. He has also served as a consultant for the SAS Institute in Milan.

This article reflects the work of HSBC employees at the time of their employment at HSBC, but does not represent the views of either HSBC or the FSA.


C O U N T E R P A R T Y R I S K

How the Credit Crisis Has Changed Counterparty Risk Management

CVA desks have been developed in response to crisis-driven regulations for improved counterparty risk management. How do these centralized groups differ from traditional approaches to managing counterparty risk, and what types of data and analytical challenges do they face?

By David Kelly

Page 29: riskprofessional201112-dl

www.garp.org D E C E M B E R 2 0 1 1 RISK PROFESSIONAL 28

C O U N T E R P A R T Y R I S K

The credit crisis and regulatory responses have forced banks to update their counterparty risk management processes substantially. New regulations in the form of Basel III, the Dodd-Frank Act in the U.S. and European Market Infrastructure Regulation (EMIR) have dramatically increased capital requirements for counterparty credit risk. In addition to implementing new regulatory requirements, banks are making significant changes to internal counterparty risk management practices.

There are three main themes inherent in these changes. First, better firmwide consolidated risk reporting has become a top priority. Second, centralized counterparty risk management groups (CVA desks) are being created to more actively monitor and hedge credit risk. Third, banks are making significant investments in technology to better support the firmwide risk reporting and CVA desk initiatives.

This article will explore some of the key changes to internal counterparty risk management processes by tracing typical workflows within banks before and after CVA desks, as well as how increased clearing due to regulatory mandates affects these workflows. Since CVA pricing and counterparty risk management workflows require extensive amounts of data, as well as a scalable, high-performance technology, it is important to understand the data management and analytical challenges involved.

Before CVA Desks
CVA desks, or specialized risk control groups tasked with more actively managing counterparty risk, are becoming more prevalent, partly because banks that had them generally fared better during the crisis. To establish a basis for comparison, it is important to review counterparty credit risk pricing and post-trade risk management before the advent of CVA desks. The case where a corporate end-user hedges a business risk through a derivative transaction provides a useful example.

The corporate treasurer may want to hedge receivables in a foreign currency or lock in the forward price of a commodity input. Another common transaction is an interest rate swap used to convert fixed rate bonds issued by the corporation into floaters to mitigate interest rate risk.

In all of these cases, the corporate treasurer explains the hedging objective to the bank's derivatives salesperson, who structures an appropriate transaction. The salesperson requests a price for the transaction from the relevant trading desk, which provides a competitive market price with "credit" and "capital" add-ons to cover the potential loss if the counterparty were to default prior to maturity of the contract. The credit and capital charges are based on an internal qualitative and quantitative assessment of the credit quality of the counterparty by the bank's credit officers.

The credit portion of the charge is for the expected loss — i.e., the cost of doing business with risky counterparties. (It is analogous to CVA, except that the CVA is based on market implied inputs, including credit spreads, instead of historical loss norms and qualitative analysis). The capital portion is for the potential unexpected loss. This is also referred to as "economic capital," which traditionally has been based on historical experience but is increasingly being calculated with market implied inputs. These charges go into a reserve fund used to reimburse trading desks for counterparty credit losses and generally ensure the solvency of the bank.
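As an aside, the market-implied analogue of that expected-loss charge is a unilateral CVA. A minimal sketch of the standard calculation, discounted expected exposure times marginal default probability times loss severity, is shown below; the flat hazard and discount curves and the exposure profile are illustrative assumptions, not figures from the article.

```python
import numpy as np

def unilateral_cva(expected_exposure, times, hazard_rate, recovery, discount_rate):
    """CVA ~ (1 - R) * sum_i DF(t_i) * EE(t_i) * PD(t_{i-1}, t_i), flat hazard and discount curves."""
    times = np.asarray(times, dtype=float)
    ee = np.asarray(expected_exposure, dtype=float)
    surv = np.exp(-hazard_rate * times)                  # survival probabilities
    surv_prev = np.concatenate([[1.0], surv[:-1]])
    marginal_pd = surv_prev - surv                       # default probability in each bucket
    df = np.exp(-discount_rate * times)
    return (1.0 - recovery) * np.sum(df * ee * marginal_pd)

# toy 5-year quarterly profile for an uncollateralized swap (illustrative, humped exposure)
times = np.arange(0.25, 5.25, 0.25)
ee = 2e6 * np.sqrt(times) * np.exp(-0.2 * times)
print(unilateral_cva(ee, times, hazard_rate=0.02, recovery=0.4, discount_rate=0.03))
```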

The trader on the desk works with the risk control group, which may deny the transaction if the exposure limit with that particular counterparty is hit. Otherwise, it provides the credit and capital charges directly or the tools to allow the trader to calculate them. The risk control group also provides exposure reports and required capital metrics to the bank's regulator, which approves the capital calculation methodology, audits its risk management processes and monitors its ongoing exposure and capital reserves according to the Basel guidelines.

Figure 1: Counterparty Risk Workflow Before CVA Desks

[Figure 1 box labels: Corporate Treasury; Bank Derivatives Sales; Derivatives Trading Desk; Derivative Pricing; Risk Control; Credit & Capital Reserves; Regulatory Capital & Reporting; Bank Regulator; Bank Treasury (funding); Collateral Management; Exchanges and Dealers. Flow labels: Derivative Transaction; Funding; Collateral Risk; Market Risk.]


While counterparty risk is managed through reserves in this example, the market risk of the transaction is fully hedged by the desk on exchanges or with other dealers. If the transaction is uncollateralized, the bank's treasury provides funding for what is effectively a loan to the customer in the form of a derivative line of credit. Some portion of the exposure may also be collateralized, in which case there are additional operational workflows around collateral management.

CVA Desks
One of the drivers for CVA desks was the need to reduce credit risk — i.e., to free capacity and release reserves, so banks could do more business. In the late 1990s and early 2000s, in the wake of the Asian financial crisis, banks found themselves near capacity and were looking for ways to reduce counterparty risk. Some banks attempted to securitize and redistribute it, as in JPMorgan's Bistro transaction. Another approach was to hedge counterparty risk using the relatively new CDS market.

The recent international (2005) and U.S. (2007) accounting rules mandating the inclusion of CVA in the mark-to-market valuations of derivatives positions provided additional impetus for more precisely quantifying counterparty risk. Some banks actively attempted to manage counterparty risk like market risk, hedging all the underlying risk factors of the CVA. Given the complexity of CVA pricing and hedging, these responsibilities have increasingly been consolidated within specialized CVA desks. Now, with the increased capital charges for counterparty default risk under Basel III and the new CVA VaR charges, there is even more incentive to implement CVA desks.

With a CVA desk, most of the trading workflow is the same, except the CVA desk is inserted between the derivatives trading desks and the risk control group. Instead of going to the risk control group for the credit and capital charges, the trader requests a marginal CVA price from the CVA desk, which basically amounts to the CVA desk selling credit protection on that counterparty to the derivatives desk. This is an internal transaction between the two desks, which can be in the form of a contingent credit default swap (CCDS).

The CVA charge is passed on to the external corporate customer by adding a spread to the receive leg of the trade. Unless the CVA is fully hedged, which is unlikely, the spread charged to the customer should also include some portion of economic capital to account for CVA VaR and potential unexpected loss from default.
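How large a spread such a charge implies can be approximated by dividing the upfront CVA by a (risky) annuity on the receive leg. The sketch below is a simplified illustration with flat curves and hypothetical numbers, not the pricing method of any particular desk.

```python
import numpy as np

def cva_to_running_spread(cva, notional, times, discount_rate, hazard_rate=0.0):
    """Approximate running spread (per annum) equivalent to an upfront CVA:
    spread = CVA / (notional * risky annuity)."""
    times = np.asarray(times, dtype=float)
    accruals = np.diff(np.concatenate([[0.0], times]))
    df = np.exp(-discount_rate * times)
    surv = np.exp(-hazard_rate * times)          # set hazard_rate=0 for a riskless annuity
    annuity = np.sum(accruals * df * surv)
    return cva / (notional * annuity)

# toy usage: spread an upfront CVA of 150,000 over a 5-year quarterly receive leg
times = np.arange(0.25, 5.25, 0.25)
spread = cva_to_running_spread(cva=150_000, notional=50e6, times=times,
                               discount_rate=0.03, hazard_rate=0.02)
print(f"{spread * 1e4:.1f} bp per annum")
```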

Since credit risk spans virtually all asset classes, the CVA desk may deal with multiple internal trading desks. The risk control group treats the CVA desk like other trading desks, imposing trading limits and monitoring market risks using traditional sensitivity metrics and VaR. The CVA desk executes credit and market risk hedges on exchanges or with other dealers and relies on the bank's internal treasury to fund positions. To the extent the CVA desk attempts to replicate (hedge) CVA fully, while the derivatives desks also hedge the underlying market risks, there may be some inefficiencies due to overlapping hedges.

Figure 2: Counterparty Risk Workflow with CVA Desk


[Figure 2 box labels: Corporate Treasury; Bank Derivatives Sales; Derivatives Trading Desk; CVA Desk; Risk Control; Regulatory Capital & Reporting; Bank Regulator; Bank Treasury; Collateral Management; Exchanges and Dealers. Flow labels: Derivative Transaction; Derivative Pricing; Marginal CVA Price; Exposure Limits; Funding; Collateral Risk; Market Risk; Funding Credit Risk.]


The key innovation introduced by CVA desks is quantifying and managing counterparty credit risk like other market risks, instead of relying solely on reserves. The main challenge is that not all CVA risk can be hedged due to insufficient CDS liquidity and unhedgeable correlation and basis risks. Therefore, some reserve-based risk management is unavoidable.

Based on trends among the top-tier banks, the optimal solution involves hedging as much of the risk as possible, considering the cost of rebalancing hedges in a highly competitive CVA pricing context, and then relying on experienced traders well-versed in structured credit problems to manage the residual exposures.

Clearing
The predominant issues with CVA desks are that they insert another operational layer into the workflow and add substantial analytical complexity. CVA models have to incorporate all the market risk factors of the underlying derivative plus the counterparty credit risk. Even the CVA on a plain vanilla FX forward is complex, because the counterparty effectively holds an American-style option to default. In addition to the option risk profile, the CVA trader must also consider the correlation between the counterparty's default probability and the underlying market risk factors — i.e., "wrong-way risk."

Margining formulas are much simpler than CVA and economic capital models, and new regulations are either mandating or heavily incentivizing banks to clear or fully collateralize derivative transactions. In cleared transactions, the clearinghouse effectively replaces the CVA desk in the workflow. Transactions are assigned or novated to the clearinghouse, which becomes the counterparty to both sides of the trade. Counterparty risk is virtually eliminated because the exposure is fully collateralized and ultimately backed by the clearinghouse and its members.

There is a remote risk that the clearinghouse fails, but the risk control group can focus on relatively simpler issues like collateral management, liquidity and residual market risks. Since cleared transactions are fully margined, the CVA charge is replaced by the collateral funding cost, which is typically based on an overnight rate.

Figure 3: Counterparty Risk Workflow with Clearing [box labels: Corporate Treasury; Bank Derivatives Sales; Derivatives Trading Desk; Clearinghouse; Risk Control; Regulatory Capital & Reporting; Bank Regulator; Bank Treasury (funding); Collateral Management; Exchanges and Dealers. Flow labels: Derivative Transaction; Derivative Pricing; Collateral Funding Cost; Funding; Collateral; Market Risk; Credit Risk.]

Not all trades can or will be cleared. Clearinghouses will only be able to handle standardized contracts; moreover, the Dodd-Frank and EMIR regulations specifically exempt corporate end-user hedge transactions from mandatory clearing, so that banks can continue to provide credit lines and risk transfer services through derivatives. It is estimated that at least a quarter of derivatives transactions will remain OTC, which means banks will need to maintain both CVA and clearing workflows.

Data & Technology Challenges
Reliable data feeds that facilitate regular updates and data integrity checks are absolutely fundamental to effective counterparty risk management. Banks typically maintain separate systems for their main trading desks, roughly aligned by the major asset classes — interest rates and foreign exchange, credit, commodities and equities. Since counterparty risk spans all asset classes, the counterparty risk system must


extract transactions, market data and reference data from all the various trading systems. In addition, supplemental reference data on legal entities and master agreements may need to come from other databases. These systems typically use different data formats, symbologies and naming conventions, as well as proprietary interfaces, adding significant complexity.

The next set of challenges involves balancing performance and scalability with analytical robustness. The simulation engine may have to value on the order of one million transactions over a hundred future dates and several thousand market scenarios.

Depending on the size of the portfolio and number of risk factors, some shortcuts on the modeling side may be necessary. However, given their role in the crisis, there are critical analytical aspects (such as wrong-way risk) that cannot be assumed away. Regulators are continuously raising the bar in terms of modeling every risk factor, such as potential shifts in basis spreads, volatilities and correlations. New regulatory requirements for back-testing and stress testing are designed to ensure the validity of the model.

Outputs of the simulation engine include current, expected and potential future exposures, CVA and economic capital by counterparty. The system should also capture counterparty exposure limits and highlight breaches. Given the complexity of the inputs and outputs, a robust reporting system is critical. The system should allow aggregation of exposure metrics along a variety of dimensions, including industry and legal jurisdiction. The system should also allow drilling down into results by counterparty netting set and individual transactions. By extension, users should have an efficient means to diagnose unexpected results.
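A minimal sketch of the aggregation step, assuming the simulation engine has already produced a cube of trade values by scenario and date: trades are netted within each netting set, and expected exposure (EE) and 95% potential future exposure (PFE) profiles are read off per set. Array shapes, names and data are illustrative.

```python
import numpy as np

def netting_set_exposure(mtm, netting_set_ids):
    """Aggregate simulated trade values into per-netting-set exposure profiles.

    mtm : array (n_trades, n_scenarios, n_dates) of simulated trade values.
    netting_set_ids : length n_trades array assigning each trade to a netting set.
    Returns a dict: netting set -> {"EE": ..., "PFE95": ...}, each of length n_dates.
    """
    out = {}
    for ns in np.unique(netting_set_ids):
        net_value = mtm[netting_set_ids == ns].sum(axis=0)     # net across trades in the set
        exposure = np.maximum(net_value, 0.0)                  # exposure = max(net MTM, 0)
        out[ns] = {"EE": exposure.mean(axis=0),                # expected exposure per date
                   "PFE95": np.percentile(exposure, 95, axis=0)}
    return out

# toy usage: 4 trades, 1,000 scenarios, 40 quarterly dates, 2 netting sets
rng = np.random.default_rng(3)
mtm = rng.normal(0.0, 1e6, size=(4, 1000, 40))
profiles = netting_set_exposure(mtm, np.array(["CP1-ISDA", "CP1-ISDA", "CP2-ISDA", "CP2-ISDA"]))
print(profiles["CP1-ISDA"]["EE"][:3])
```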

There may also be a separate system for marginal pricing of new trades and active management of CVA. The marginal pricing tools need to access results of the portfolio simulation, since the price of each new trade is a function of the aggregate exposure with that counterparty. Because of this, calculating marginal CVA in a reasonable time frame can be a significant challenge.

The CVA risk management system provides CVA sensitivities for hedging purposes. Hedges booked by the CVA desk should flow through the simulation engine, so that they are reflected in the exposure and capital metrics. Whereas the CVA desk may hedge credit and market risks, only approved credit hedges — including CDS, CCDS and, to some extent, credit indices — may be included for regulatory capital calculations.

Cleared transactions must be fed to the clearinghouses' systems and trade repositories. Since clearing involves daily (or more frequent) margining, the bank's collateral management system should be integrated with the clearinghouse. It should also be integrated with the counterparty risk system. Ideally, the simulation engine would upload current collateral positions and revalue them for each market scenario to determine net exposure. The simulation engine should also incorporate the "margin period of risk" — i.e., the risk of losses from non-delivery of collateral.
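One simple way to reflect a margin period of risk in such a simulation is to net, at each date, only the collateral that would have been called against the portfolio value observed one margin period earlier. The sketch below illustrates that idea under simplifying assumptions (a single threshold, no minimum transfer amount); it is not a complete collateral model.

```python
import numpy as np

def collateralized_exposure(net_value, threshold=0.0, mpr_steps=1):
    """Collateralized exposure with a margin period of risk (MPR).

    net_value : (n_scenarios, n_dates) simulated netting-set values.
    Collateral held at date t is assumed to have been called against the value
    observed mpr_steps earlier (lagged by the MPR), above a threshold.
    """
    lagged = np.roll(net_value, mpr_steps, axis=1)
    lagged[:, :mpr_steps] = 0.0                      # no collateral before the first call
    collateral = np.maximum(lagged - threshold, 0.0)
    return np.maximum(net_value - collateral, 0.0)

# toy usage on synthetic netting-set values
rng = np.random.default_rng(4)
net_value = rng.normal(0.0, 1e6, size=(1000, 40)).cumsum(axis=1) / 6
exp_coll = collateralized_exposure(net_value, threshold=250_000, mpr_steps=2)
print(exp_coll.mean(axis=0)[:5])
```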

Figure 4: Counterparty Risk Data Flows & Technology Infrastructure [elements: Trading System(s) for IR/FX, Credit, Commodities and Equities; inputs: transactions, market data, reference data; counterparties, legal agreements; Counterparty Exposure Simulation Engine; Reporting; Marginal CVA Pricing Tools; CVA Risk Management System; Hedges; Collateral Management System; Clearing Systems & Trade Repos.]

Even with the crisis-induced wave of improvements, the picture remains very complex. The most sophisticated global banks have targeted the CVA and clearing workflows and supporting technology infrastructure described in this article. It is expected that regional banks will follow suit over the next few years in order to optimize capital and manage risk more effectively, as well as comply with new regulations.

David Kelly is the director of credit products at Quantifi, a software provider specializing in analytics, trading and risk management solutions. He has extensive knowledge of derivatives trading, quantitative research and technology, and currently oversees the company's credit solutions, including counterparty credit risk. Prior to joining Quantifi in 2009, he held senior positions at Citigroup, JPMorgan Chase, AIG and CSFB, among other financial institutions. He holds a B.A. in economics and mathematics from Colgate University and has completed graduate work in mathematics at Columbia University and Carnegie Mellon.



THE QUANT CLASSROOM BY ATTILIO MEUCCI

Mixing Probabilities, Priors and Kernels via Entropy Pooling

How to emphasize certain historical scenarios for risk and portfolio management, according to their similarity with the current market conditions.

The Fully Flexible Probabilities framework discussed in Meucci (2010) represents the multivariate distribution f of an arbitrary set of risk drivers, X ≡ (X1, ..., XN)′, non-parametrically in terms of scenario-probability pairs, as follows:

f ⇐⇒ {x_j, p_j}, j = 1, ..., J,   (1)

where the joint scenarios x_j ≡ (x_{1,j}, ..., x_{N,j})′ can be historical realizations or the outcome of simulations. The use of Fully Flexible Probabilities permits all sorts of manipulations of distributions essential for risk and portfolio management, such as pricing and aggregation (see Meucci, 2011) and the estimation of portfolio risk from these distributions.
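As a minimal illustration of how the scenario-probability pairs in (1) feed risk estimation, the Python sketch below computes a probability-weighted mean and a value-at-risk from a vector of P&L scenarios and a vector of probabilities. It is not the reference code the author points to; the function name and the 99% confidence level are illustrative assumptions.

import numpy as np

def flexible_probability_stats(pnl_scenarios, probabilities, confidence=0.99):
    # pnl_scenarios: (J,) portfolio P&L on each joint scenario x_j
    # probabilities: (J,) the p_j in (1); normalized here for safety
    p = probabilities / probabilities.sum()
    mean_pnl = np.sum(p * pnl_scenarios)
    order = np.argsort(pnl_scenarios)                     # sort scenarios from worst to best P&L
    cum_prob = np.cumsum(p[order])
    tail_index = np.searchsorted(cum_prob, 1.0 - confidence)
    value_at_risk = -pnl_scenarios[order][tail_index]     # loss at the (1 - confidence) quantile
    return mean_pnl, value_at_risk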

The probabilities in the Fully Flexible Probabilities framework (1) can be set by crisp conditioning, kernel smoothing, exponential time decay, etc. (see Figure 1).


Figure 1: Fully Flexible Probabilities Specification via Entropy Pooling

Another approach to setting the probabilities in (1) is based on the Entropy Pooling technique of Meucci (2008). Entropy Pooling is a generalized Bayesian approach to process views on the market. It starts from two inputs, a prior market distribution f0 and a set of generalized views or stress tests V, and yields a posterior distribution f that is close to the prior but incorporates the views.

Entropy Pooling can be used in the non-parametric scenario-probability representation of the Fully Flexible Probabilities framework (1), in which case it provides an optimal way to specify the probabilities p ≡ (p1, ..., pJ)′ of the scenarios.

Alternatively, Entropy Pooling can be used with parametric distributions fθ that are fully specified by a set of parameters θ, such as the normal distribution.

In this article, we show that Entropy Pooling represents the most general approach to optimally specify the probabilities of the scenarios (1) and includes common approaches (such as kernel smoothing) as special cases, when the prior distribution f0 contains no information. We will also demonstrate how to use Entropy Pooling to overlay different kernels/signals onto an informative prior in a statistically sound way.

The remainder of the article will proceed as follows. In Section 2, we review common approaches to assign probabilities in (1), and we generalize such approaches using fuzzy membership functions.

In Section 3, we review the non-parametric implementation of Entropy Pooling. Furthermore, we show how Entropy Pooling includes fuzzy membership probabilities as a special case. Finally, we discuss how to leverage Entropy Pooling to mix the above approaches and overlay prior information onto different estimation techniques.

In Section 4, we present a risk management case study, where we model the probabilities of the historical simulations of a portfolio P&L. Using Entropy Pooling, we overlay onto an exponentially time-decayed prior a kernel that modulates the probabilities according to the current state of implied volatility and interest rates.

Time/State Conditioning and Fuzzy Membership

Here we review a few popular methods to specify the probabilities in the Fully Flexible Probabilities framework (1) exogenously. For applications of these methods to risk management, refer to Meucci (2010).

If the scenarios are historical and we are at time T, a simple approach to specify probabilities exogenously is to discard old data and rely only on the most recent window of time τ. This entails setting the probabilities in (1) as follows:

p_t ∝ 1 if t > T − τ, and p_t ∝ 0 otherwise.   (2)

An approach to assigning weights to historical scenarios different from the rolling window (2) is exponential smoothing:

p_t ∝ exp(−(ln 2 / τ) |T − t|),   (3)

where τ > 0 is a given half-life for the decay.
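In code, the two time-conditioning schemes (2) and (3) amount to a few lines. The sketch below assumes T historical scenarios ordered in time and returns normalized probability vectors; the function names are illustrative.

import numpy as np

def rolling_window_probs(T, tau):
    # equation (2): equal weight on the most recent tau observations, zero before
    t = np.arange(1, T + 1)
    p = (t > T - tau).astype(float)
    return p / p.sum()

def exponential_decay_probs(T, half_life):
    # equation (3): p_t proportional to exp(-(ln 2 / tau) * |T - t|)
    t = np.arange(1, T + 1)
    p = np.exp(-np.log(2) / half_life * (T - t))
    return p / p.sum()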

An approach to assigning probabilities related to the rolling window (2) that does not depend on time, but rather on state, is crisp conditioning: the probabilities are set as non-null, and all equal, as long as the scenarios of the market drivers x_j lie in a given domain χ. Using the indicator function 1_χ(x), which is 1 if x ∈ χ and 0 otherwise, we obtain

p_j ∝ 1_χ(x_j).   (4)

An enhanced version of crisp conditioning for assigning probabilities, related to the rich literature on machine learning, is kernel smoothing. First, we recall that a kernel k_ε(x) is defined by a positive non-increasing generator function k(d) ∈ [0,1], a target μ, a distance function d(x, y) and a radius, or bandwidth, ε, as follows:

k_ε(x) ≡ k(d(x, μ) / ε).   (5)

For example, the Gaussian kernel reads

k_ε(x) ≡ exp(−(1 / (2ε²)) (x − μ)′ σ⁻¹ (x − μ)),   (6)

where the additional parameter σ is a symmetric, positive definite matrix. The Gaussian kernel (6) is in the form (5), where d is the Mahalanobis distance d²(x, μ) ≡ (x − μ)′ σ⁻¹ (x − μ).

Using a kernel, we can condition the market variables X smoothly, by setting the probability of each scenario as proportional to the kernel evaluated on that scenario, as follows:

p_j ∝ k_ε(x_j).   (7)
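The state-conditioning counterparts (4) and (6)-(7) are equally compact. The sketch below assumes the conditioning variables are stored as a J x N array of scenarios; the function names are illustrative.

import numpy as np

def crisp_conditioning_probs(scenarios, in_domain):
    # equation (4): in_domain(x) returns True when the scenario x lies in the set chi
    p = np.array([1.0 if in_domain(x) else 0.0 for x in scenarios])
    return p / p.sum()

def gaussian_kernel_probs(scenarios, mu, sigma, bandwidth):
    # equations (6)-(7): p_j proportional to exp(-d^2(x_j, mu) / (2 * bandwidth^2)),
    # with d the Mahalanobis distance under the matrix sigma
    inv_sigma = np.linalg.inv(sigma)
    deviations = scenarios - mu                              # (J, N) deviations from the target
    d2 = np.einsum('ji,ik,jk->j', deviations, inv_sigma, deviations)
    p = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return p / p.sum()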

As we show in Appendix A.1, available at http://symmys.com/node/353, the crisp state conditioning (4) includes the rolling window (2) as a special case, and kernel smoothing (7) includes the time-decayed exponential smoothing (3) as a special case (see Figure 1).

We can generalize further the concepts of crisp conditioning and kernel smoothing by means of fuzzy membership functions. Fuzzy membership to a given set χ is defined in terms of a "membership" function m_χ(x) with values in the range [0,1], which describes to what extent x belongs to the given set χ.


Using a fuzzy membership m_χ for a given set χ of potential market outcomes for the market X, we can set the probability of each scenario as proportional to the degree of membership of that scenario to the set of outcomes, as follows:

p_j ∝ m_χ(x_j).   (8)

A trivial example of membership function is the following indicator function, which defines crisp conditioning (4):

m_χ(x) ≡ 1_χ(x).   (9)

With the indicator function, membership is either maximal and equal to 1, if x belongs to χ, or minimal and equal to 0, if x does not belong to χ.

A second example of membership function is the kernel (5), which is a membership function for the singleton χ ≡ {μ}:

m_μ(x) ≡ k_ε(x).   (10)

The membership of x to μ is maximal when x is μ. The larger the distance d of x from μ, the less x "belongs" to μ, and thus the closer to 0 the membership of x.

Entropy Pooling

The probability specification (8) assumes no prior knowledge of the distribution of the market risk drivers X. In this case, fuzzy membership is the most general approach to assign probabilities to the scenarios in (1). Here, we show how non-parametric Entropy Pooling further generalizes fuzzy membership specifications to the case when prior information is available, or when we must blend together more than one membership specification.

First, we review the non-parametric Entropy Pooling implementation. (Please refer to the original article, Meucci (2008), for more details and more generality, as well as for the code.)

The starting point for non-parametric Entropy Pooling is a prior distribution f0 for the risk drivers X, represented as in (1) by a set of scenarios and associated probabilities, as follows:

f0 ⇐⇒ {x_j, p_j^(0)}, j = 1, ..., J.   (11)

The second input is a view on the market X, or a stress test. Thus, a generalized view on X is a statement on the yet-to-be-determined distribution f, defined on the same scenarios, f ⇐⇒ {x_j, p_j}, j = 1, ..., J. A large class of such views can be characterized as expressions on the expectations of arbitrary functions of the market v(X), as follows:

V : E_p{v(X)} ≥ v*,   (12)

where v* is a threshold value that determines the intensity of the view.

To illustrate a typical view, consider standard views, a la Black and Litterman (1990), on the expected value μ_aX of selected portfolio returns aX, where X represents the returns of N securities and a is a K x N matrix, each row of which contains the weights of a different portfolio. Such a view can be written as in (12), where v(X) ≡ ((aX)′, −(aX)′)′ and v* ≡ (μ′_aX, −μ′_aX)′.
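Spelled out (a restatement under the notation just introduced, not an addition from elsewhere), the pair of inequality constraints in (12) generated by this choice of v(X) and v* enforces the equality view on the portfolio expectations:

\mathbb{E}_{p}\{a\mathbf{X}\} \ge \mu_{a\mathbf{X}}
\quad\text{and}\quad
\mathbb{E}_{p}\{-a\mathbf{X}\} \ge -\mu_{a\mathbf{X}}
\quad\Longleftrightarrow\quad
\mathbb{E}_{p}\{a\mathbf{X}\} = \mu_{a\mathbf{X}}.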

Our ultimate goal is to compute a posterior distribution f that departs from the prior to incorporate the views. The posterior distribution f is specified by new probabilities p on the same scenarios (11). To this purpose, we measure the "distance" between two sets of probabilities p and p0 by the relative entropy

E(p, p0) ≡ p′ (ln p − ln p0).   (13)

The relative entropy is a "distance" in that (13) is zero only if p = p0, and it becomes larger as p diverges away from p0. We then define the posterior as the closest distribution to the prior, as measured by (13), that satisfies the views (12), as follows:

p ≡ argmin_{q ∈ V} E(q, p0),   (14)

where the notation p ∈ V means that p satisfies the view (12).

Applications of Entropy Pooling to the probabilities in the Fully Flexible Probabilities framework are manifold. For instance, with Entropy Pooling, we can compute exponentially decayed covariances where the correlations and the variances are decayed at different rates. Other applications include conditioning the posterior according to expectations on a market panic indicator. (For more details, see Meucci, 2010.)
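Numerically, the optimization (14) can be solved directly once the prior, the view functions evaluated on the scenarios, and the thresholds are stacked into arrays. The sketch below is a minimal primal formulation using scipy's SLSQP solver, not the reference code that Meucci (2008) provides; the function name and the small regularization of the logarithm are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

def entropy_pooling(prior, view_matrix, view_thresholds):
    # solve (14): probabilities q closest to the prior in relative entropy (13),
    # subject to the views view_matrix @ q >= view_thresholds, q >= 0, sum(q) = 1;
    # view_matrix has one row per view k, one column per scenario j (v_k evaluated on x_j);
    # the prior is assumed strictly positive
    J = len(prior)

    def relative_entropy(q):
        q = np.clip(q, 1e-12, None)                 # keep the logarithms finite
        return np.sum(q * (np.log(q) - np.log(prior)))

    constraints = [
        {'type': 'eq',   'fun': lambda q: q.sum() - 1.0},
        {'type': 'ineq', 'fun': lambda q: view_matrix @ q - view_thresholds},
    ]
    result = minimize(relative_entropy, x0=prior, method='SLSQP',
                      bounds=[(0.0, 1.0)] * J, constraints=constraints)
    return result.x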

As highlighted in Figure 1, the Entropy Pooling posterior (14) also includes as special cases the probabilities defined in terms of fuzzy membership functions (8). Indeed, let us assume that in the Entropy Pooling optimization (14) the prior is non-informative, i.e.,


p0 ∝ 1.   (15)

Furthermore, let us assume that in the Entropy Pooling optimization (14) we express the view (12) on the logarithm of a fuzzy membership function m_χ(x) ∈ [0,1], as follows:

v(x) ≡ ln(m_χ(x)).   (16)

Finally, let us set the view intensity in (12) as

v* ≡ [Σ_{j=1,...,J} m_χ(x_j) ln m_χ(x_j)] / [Σ_{j=1,...,J} m_χ(x_j)].   (17)

Then, as we show in Appendix A.2, available at http://symmys.com/node/353, the Entropy Pooling posterior (14) reads

p_j ∝ m_χ(x_j).   (18)

This means that, without a prior, the Entropy Pooling posterior is the same as the fuzzy membership function, which in turn is a general case of kernel smoothing and crisp conditioning (see also the examples in Appendix A.3, available at http://symmys.com/node/353).

Not only does Entropy Pooling generalize fuzzy membership, it also allows us to blend multiple views with a prior. Indeed, let us suppose that, unlike in (15), we have an informative prior p(0) on the market X, such as the exponential time decay (3), which encodes the view that recent information is more reliable than old information. Suppose also that we would like to condition our distribution of the market on the state of the market, using a membership function as in (18), such as a Gaussian kernel. How do we mix these two conflicting pieces of information?

Distributions can be blended in a variety of ad hoc ways. Entropy Pooling provides a statistically sound answer: we simply replace the non-informative prior (15) with our informative prior p(0) in the Entropy Pooling optimization (14), driven by the view (12) on the log-membership function (16) with intensity (17). In summary, the optimal blend reads

p ≡ argmin_q E(q, p(0)), subject to E_q{ln m_χ(X)} ≥ v*.   (19)
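Under the same assumptions, the blend (19) amounts to calling the entropy_pooling routine sketched after equation (14) with a single view built from the membership values: the one row of the view matrix holds ln m_χ(x_j), and the threshold is the intensity (17). The function names are again illustrative, not the article's own code.

import numpy as np

def blended_probabilities(informative_prior, membership):
    # equation (19): entropy-pool the informative prior with the single view
    # E_q{ln m(X)} >= v*, where v* is the intensity (17) computed from the memberships
    m = np.clip(membership, 1e-300, None)           # avoid log(0) for far-away scenarios
    log_m = np.log(m)
    v_star = np.sum(m * log_m) / np.sum(m)          # equation (17)
    view_matrix = log_m.reshape(1, -1)              # one view, J scenarios, as in (16)
    # entropy_pooling is the solver sketched earlier in this article
    return entropy_pooling(informative_prior, view_matrix, np.array([v_star]))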

More generally, we can add to the view in (19) other views on different features of the market distribution (as in Meucci, 2008).

It is worth emphasizing that all the steps of the above process are computationally trivial, from setting the prior p(0) to setting the constraint on the expectation, to computing the posterior (19). Thus the Entropy Pooling mixture is also practical.

Case Study: Conditional Risk Estimates

Here we use Entropy Pooling to mix information on the distribution of a portfolio's historical simulations. A standard approach to risk management relies on so-called historical simulations for the portfolio P&L: current positions are evaluated under past realizations {x_t}, t = 1, ..., T, of the risk drivers X, giving rise to a corresponding history of portfolio P&L scenarios. To estimate the risk in the portfolio, one can assign equal weight to all the realizations. The more versatile Fully Flexible Probabilities approach (1) allows for arbitrary probability weights p_t, t = 1, ..., T.

Refer to Meucci, 2010, for more details.

In our case study, we consider a portfolio of options whose historically simulated daily P&L distribution over a period of ten years is highly skewed and kurtotic, and definitely non-normal. Using Fully Flexible Probabilities, we model the exponential-decay prior, which expresses the view that recent observations are more relevant for risk estimation purposes:

p_t^(0) ∝ exp(−(ln 2 / τ)(T − t)),

where τ is a half-life of 120 days. We plot these probabilities in the top portion of Figure 2 (below).

Figure 2: Mixing Distributions via Entropy Pooling
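For concreteness, the exponential-decay prior of the case study can be generated as follows. The figure of roughly 2,520 daily scenarios for ten years and the entropy-based effective-number-of-scenarios diagnostic printed at the end are assumptions added here for illustration, not taken from the article.

import numpy as np

T = 2520                              # roughly ten years of daily P&L scenarios (assumption)
half_life = 120                       # days, as in the case study
t = np.arange(1, T + 1)
prior = np.exp(-np.log(2) / half_life * (T - t))
prior /= prior.sum()                  # exponential-decay prior: most weight on recent days

# entropy-based effective number of scenarios implied by the decay (diagnostic, an addition)
effective_scenarios = np.exp(-np.sum(prior * np.log(prior)))
print(round(effective_scenarios))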

T H E Q U A N T C L A S S R O O M B Y AT T I L I O M E U C C I

The second input is a view on the market X, or a stress-test. Thus a gen-eralized view on X is a statement on the yet-to-be dened distribution denedon the same scenarios ⇐⇒ {x }=1 . A large class of such views canbe characterized as expressions on the expectations of arbitrary functions of themarket (X).

V : Ep { (X)} ≥ ∗, (12)

where ∗ is a threshold value that determines the intensity of the view.To illustrate a typical view, consider the standard views a-la Black and

Litterman (1990) on the expected value μaX of select portfolios returns aX,where X represents the returns of securities, and a is a × matrix, whoseeach row are the weights of a different portfolio. Such view can be written as in(12), where (X) ≡ (a0−a0)0 and ∗ ≡ (μaX−μ0aX)0.Our ultimate goal is to compute a posterior distribution which departs

from the prior to incorporate the views. The posterior distribution is speciedby new probabilities p on the same scenarios (11). To this purpose, we measurethe "distance" between two sets of probabilities p and p0 by the relative entropy

E (pp0) ≡ p0 (lnp− lnp0) . (13)

The relative entropy is a "distance" in that (13) is zero only if p = p0 and itbecomes larger as p diverges away from p0.Then we dene the posterior as the closest distribution to the prior, as

measured by (13), which satises the views (12)

p ≡ argminq∈V

E (qp0) , (14)

where the notation p ∈ V means that p satises the view (12).Applications of Entropy Pooling to the probabilities in the Fully Flexible

Probabilities framework are manifold. For instance, with Entropy Pooling, wecan compute exponentially decayed covariances where the correlations and thevariances are decayed at different rates. Other applications include conditioningthe posterior according to expectations on a market panic indicator. For moredetails see Meucci (2010).As highlighted in Figure 1, the Entropy Pooling posterior (14) also includes

as special cases the probabilities dened in terms of fuzzy membership functions(8). Indeed, let us assume that in the Entropy Pooling optimization (14) theprior is non-informative, i.e.

p0 ∝ 1. (15)

Furthermore, let us assume that in the Entropy Pooling optimization (14) weexpress the view (12) on the logarithm of a fuzzy membership functionX (x) ∈[0 1]

(x) ≡ ln (X (x)) . (16)

Finally, let us set the view intensity in (12) as

∗ ≡P

=1X (x) lnX (x)P

=1X (x)

. (17)

5


Then, as we show in Appendix A.2, available at http://symmys.com/node/353, the Entropy Pooling posterior (14) reads

p_t \propto m_X(x_t). \qquad (18)

This means that, without a prior, the Entropy Pooling posterior is the same as the fuzzy membership function, which in turn is a general case of kernel smoothing and crisp conditioning; see also the examples in Appendix A.3, available at http://symmys.com/node/353.
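As a minimal numerical illustration of (18) (our own sketch, with hypothetical membership values, not part of the original article or its companion code):

```python
import numpy as np

# Minimal sketch of the special case (18): under a non-informative prior,
# the Entropy Pooling posterior is just the membership function m_X(x_t)
# rescaled to sum to one. The values below are hypothetical.
m = np.array([0.05, 0.20, 0.90, 0.75, 0.10])   # hypothetical m_X(x_t) values
p = m / m.sum()                                # posterior probabilities, as in (18)
print(p)
```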

Not only does Entropy Pooling generalize fuzzy membership, it also allows us to blend multiple views with a prior. Indeed, let us suppose that, unlike in (15), we have an informative prior p(0) on the market X, such as for instance the exponential time decay predicate (3) that recent information is more reliable than old information. Suppose also that we would like to condition our distribution of the market based on the state of the market using a membership function as in (18), such as for instance a Gaussian kernel. How do we mix these two conflicting pieces of information?

Distributions can be blended in a variety of ad-hoc ways. Entropy Pooling

provides a statistically sound answer: we simply replace the non-informative prior (15) with our informative prior p(0) in the Entropy Pooling optimization (14) driven by the view (12) on the log-membership function (16) with intensity (17). In summary, the optimal blend reads

p \equiv \operatorname*{argmin}_{q}\, \mathcal{E}\big(q, p^{(0)}\big), \qquad (19)

where E_q{ln m_X(X)} ≥ μ*.

More generally, we can add to the view in (19) other views on different features of the market distribution, as in Meucci (2008).

It is worth emphasizing that all the steps of the above process are computationally trivial, from setting the prior p(0), to setting the constraint on the expectation, to computing the posterior (19). Thus the Entropy Pooling mixture is also practical.
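To make this concrete, below is a minimal sketch of the Entropy Pooling step for a single view of the form E_q{g(X)} ≥ μ*, worked through its one-dimensional dual. The function name entropy_pooling and the implementation choices are ours, meant only as an illustration under stated assumptions; Meucci's published code at http://symmys.com/node/353 is the reference implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def entropy_pooling(p0, g, mu_star):
    """Probabilities closest to the prior p0 in relative entropy, as in (13)-(14),
    subject to the single inequality view E_q[g(X)] >= mu_star, as in (12).
    Exploits the exponential-tilting form q_t ~ p0_t * exp(lam * g_t), lam >= 0,
    and maximizes the one-dimensional dual over lam."""
    log_p0 = np.log(p0)

    def neg_dual(lam_vec):
        lam = lam_vec[0]
        # dual objective: lam * mu_star - log sum_t p0_t * exp(lam * g_t)
        return -(lam * mu_star - logsumexp(log_p0 + lam * g))

    lam = minimize(neg_dual, x0=[0.0], bounds=[(0.0, None)]).x[0]
    log_q = log_p0 + lam * g
    return np.exp(log_q - logsumexp(log_q))
```

Calling entropy_pooling with a flat prior (15), the log-membership view (16) and the intensity (17) should recover, up to optimizer tolerance, the normalized membership probabilities (18); replacing the flat prior with an informative p(0) gives the blend (19).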

4 Case study: conditional risk estimates

Here we use Entropy Pooling to mix information on the distribution of a portfolio's historical simulations. A standard approach to risk management relies on so-called historical simulations for the portfolio P&L: current positions are evaluated under past realizations {x_t, t = 1, ..., T} of the risk drivers X, giving rise to a history of P&L's {π_t, t = 1, ..., T}. To estimate the risk in the portfolio, one can assign equal weight to all the realizations. The more versatile Fully Flexible Probabilities approach (1) allows for arbitrary probability weights

\Pi \Longleftrightarrow \{\pi(x_t),\, p_t\}_{t=1}^{T}, \qquad (20)

refer to Meucci (2010) for more details.



In our case study we consider a portfolio of options, whose historically simulated daily P&L distribution over a period of ten years is highly skewed and kurtotic, and definitely non-normal. Using Fully Flexible Probabilities, we model the exponential decay prior that recent observations are more relevant for risk estimation purposes

p^{(0)}_t \propto e^{-\frac{\ln 2}{\tau_{HL}}\,(T - t)}, \qquad (21)

where τ_HL is a half-life of 120 days. We plot these probabilities in the top portion of Figure 2.
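A minimal sketch of the decay prior (21); the scenario count T below is a hypothetical placeholder for roughly ten years of daily observations:

```python
import numpy as np

# Exponential-decay prior (21): scenario t = 1, ..., T (most recent last)
# gets weight proportional to exp(-ln(2) / half_life * (T - t)).
T, half_life = 2500, 120                    # T is a hypothetical scenario count
t = np.arange(1, T + 1)
p_prior = np.exp(-np.log(2) / half_life * (T - t))
p_prior /= p_prior.sum()                    # normalize into probabilities
```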

[Figure 2: Mixing distributions via Entropy Pooling. Panels show, against time: the prior (exponential time decay); the state indicators (2yr swap rate - current level, and 1>5yr ATM implied swaption vol - current level); the membership (Gaussian kernel); and the posterior (Entropy Pooling mixture).]

Then we condition the market on two variables: the five-year swap rate, which we denote by X_1, and the one-into-five swaption implied at-the-money volatility, which we denote by X_2. We estimate the 2×2 covariance matrix σ of these variables, and we construct a quasi-Gaussian kernel, similar to (6), setting as target the current values x_T of the conditioning variables

m_t \equiv \exp\!\Big(-\frac{1}{2h^{2}}\,\big[(x_t - x_T)'\,\sigma^{-1}\,(x_t - x_T)\big]^{\gamma/2}\Big). \qquad (22)

In this expression the bandwidth is h ≈ 10 and γ ≈ 0.4 is a power for the Mahalanobis distance, which allows for a smoother conditioning than γ = 2.
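The sketch below implements our reading of the membership levels (22); the function name kernel_membership and the array conventions are ours, and the bandwidth and power are left as parameters rather than fixed to the case-study values:

```python
import numpy as np

def kernel_membership(x, x_target, sigma, h, gamma):
    """Quasi-Gaussian membership levels as in (22): x is a (T, K) array of
    scenarios of the conditioning variables, x_target their current values,
    sigma their K x K covariance, h the bandwidth and gamma the power applied
    to the Mahalanobis distance (gamma = 2 recovers the Gaussian kernel)."""
    dev = x - x_target                                        # (T, K) deviations
    maha_sq = np.einsum("tk,kl,tl->t", dev, np.linalg.inv(sigma), dev)
    return np.exp(-maha_sq ** (gamma / 2.0) / (2.0 * h ** 2))
```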

If we used directly the membership levels (22) as probabilities p_t ∝ m_t, we would disregard the prior information (21) that more recent data is more valuable for our analysis. If we used only the exponentially decayed prior (21),





we would disregard all the information conditional on the market state (22). To overlay the conditional information to the prior, we compute the Entropy Pooling posterior (19), which we write here

p \equiv \operatorname*{argmin}_{\,q'\ln \mathbf{m}\,\ge\,\mu^{*}}\, \mathcal{E}\big(q, p^{(0)}\big). \qquad (23)
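Putting the pieces together, here is a sketch of how the posterior (23) might be assembled from the prior (21), the membership levels (22) and the intensity (17), reusing the entropy_pooling and kernel_membership helpers sketched earlier. The synthetic data, the random seed and the parameter values are illustrative placeholders, not the case-study inputs:

```python
import numpy as np

# Synthetic stand-ins for T daily scenarios of the two conditioning variables.
rng = np.random.default_rng(0)
T = 2500
x_hist = np.cumsum(rng.normal(size=(T, 2)) * [0.03, 0.5], axis=0)  # toy paths

sigma = np.cov(x_hist, rowvar=False)                     # 2 x 2 covariance
m = kernel_membership(x_hist, x_hist[-1], sigma, h=1.0, gamma=0.4)  # kernel (22)

t = np.arange(1, T + 1)
p_prior = np.exp(-np.log(2) / 120 * (T - t))             # decay prior (21)
p_prior /= p_prior.sum()

mu_star = (m * np.log(m)).sum() / m.sum()                # view intensity (17)
p_post = entropy_pooling(p_prior, np.log(m), mu_star)    # posterior (23)
```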

Notice that for each specification of the kernel bandwidth and radius depth in (22) we obtain a different posterior. Hence, a further refinement of the proposed approach lets the data determine the optimal bandwidth, by minimizing the relative entropy in (23) as a function of q as well as of (h, γ). We leave this step to the reader.

The kernel-based posterior (23) can be compared with alternative uses of Entropy Pooling to overlay a prior with partial information. For instance, Meucci (2010) obtains the posterior by imposing that the expected value of the conditioning variables be the same as the current value, i.e. E_q{X} = x_T. This approach is reasonable in a univariate context. However, when the number of conditioning variables X is larger than one, due to the curse of dimensionality, we can obtain the undesirable result that the posterior probabilities are highly concentrated in a few extreme scenarios. This does not happen if we condition through a pseudo-Gaussian kernel as in (22)-(23).

In Figure 2 we plot the Entropy Pooling posterior probabilities (23) in our

example. We can appreciate the hybrid nature of these probabilities, which share similarities with both the prior (21) and the conditioning kernel (22).

                      Time decay    Kernel    Entropy Pooling
Exp. value                  0.12      0.05               0.10
St. dev.                    1.18      1.30               1.26
Skew.                      -2.65     -2.56              -2.76
Kurt.                      12.58     11.76              13.39
VaR 99%                    -4.62     -5.53              -5.11
Cond. VaR 99%              -6.16     -6.70              -6.85
Effective scenarios          471     1,644                897

(24)

Using the Entropy Pooling posterior probabilities (23) we can perform all sorts of risk computations, as in Meucci (2010). In (24) we present a few significant statistics for our historical portfolio (20), and we compare such statistics with those stemming from the standard exponential decay (21) and the standard conditioning kernel (22). For more details, we refer the reader to the code available at http://symmys.com/node/353.

On the last row of (24) we also report the effective number of scenarios, a

practical measure of the predictive power of the above choices of probabilities, discussed in detail in Meucci (2012).
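For completeness, a sketch of the probability-weighted statistics reported in (24), together with the effective number of scenarios of Meucci (2012), computed here as the exponential of the entropy of the probabilities. The function weighted_stats and its tail conventions are our own illustration; the published code at http://symmys.com/node/353 is the reference.

```python
import numpy as np

def weighted_stats(pnl, p):
    """Probability-weighted summary statistics of a P&L scenario vector,
    in the spirit of table (24). pnl and p are (T,) arrays; p must be
    strictly positive and sum to one."""
    mean = p @ pnl
    std = np.sqrt(p @ (pnl - mean) ** 2)
    skew = p @ ((pnl - mean) / std) ** 3
    kurt = p @ ((pnl - mean) / std) ** 4
    # 99% VaR / conditional VaR read off the probability-weighted empirical tail
    order = np.argsort(pnl)
    cum = np.cumsum(p[order])
    cut = np.searchsorted(cum, 0.01)
    var99 = pnl[order][cut]
    tail_p = p[order][: cut + 1]
    cvar99 = tail_p @ pnl[order][: cut + 1] / tail_p.sum()
    ens = np.exp(-p @ np.log(p))    # effective number of scenarios, Meucci (2012)
    return {"mean": mean, "std": std, "skew": skew, "kurt": kurt,
            "VaR 99%": var99, "CVaR 99%": cvar99, "effective scenarios": ens}
```

Applied to the same P&L scenarios under the time-decay probabilities (21), the kernel probabilities (22) and the Entropy Pooling posterior (23), a function along these lines would produce the three columns of (24).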

References

Black, F., and R. Litterman, 1990. "Asset allocation: Combining Investor Views with Market Equilibrium," Goldman Sachs Fixed Income Research.

Meucci, A., 2008. "Fully Flexible Views: Theory and Practice," Risk 21, 97-102. Article and code available at http://symmys.com/node/158.

Meucci, A., 2010. "Historical Scenarios with Fully Flexible Probabilities," Risk Professional December, 40-43. Article and code available at http://symmys.com/node/150.

Meucci, A., 2011. "The Prayer: Ten-Step Checklist for Advanced Risk and Portfolio Management," Risk Professional April/June, 54-60/55-59. Available at http://symmys.com/node/63.

Meucci, A., 2012. "Effective Number of Scenarios with Fully Flexible Probabilities," working paper. Article and code available at http://symmys.com/node/362.

Attilio Meucci is the chief risk officer at Kepos Capital LP. He runs the 6-day "Advanced Risk and Portfolio Management Bootcamp" (see www.symmys.com). He is grateful to Garli Beibi and David Elliott.





A Fresh Assessment

Recently refined risk management approaches could be more realistically aligned with strategy and governance.

By Gaurav Kapoor and Connie Valencia

R I S K S T R A T E G Y

When tightrope walkers perform, they never look down. Even when their rope sways dangerously, they keep their eyes fixed on their destination. If they don't, they could lose their balance and fall.

Managing a business is a lot like walking a tightrope. The risks below are plentiful. But if you focus only on the risks, you could lose sight of the goal and fall.

Historically, few businesses focused on managing their risks. Most indulged in high-risk ventures without thinking of the consequences. Then a series of accounting scandals, security breaches and financial crises forced businesses to push risk management up front and center. In response to the public outcry, the auditing profession adopted a risk-based approach. Risks were assessed, categorized and prioritized based on their likelihood of occurrence and impact. As a result, maximum resources and attention became focused on high-impact, high-frequency risks.

The risk-based approach, though well accepted, has room for improvement. For example, current guidance is designed to analyze risks in isolation, when, in reality, risks usually occur in a series or in combination. In addition, and most importantly, the risk-based approach categorizes low-frequency, "not likely to occur" risks as low-priority. In other words, current guidance doesn't provide analysis on so-called black swan events or unexpected/unknown risks. However, over the last decade, headlines were filled with unexpected or unknown risk events such as the BP oil spill, the financial meltdown, the Toyota safety recall and the 2011 tsunami in Japan. These indicate that black swans occur, and with alarming frequency.

An old saying is appropriate: Don't fix your flat tire if your goal is to stay home. Risk management is no different. Assessing risks without understanding the impact each risk has on the company's goals is not very meaningful. The shotgun approach of analyzing and tracking every possible "what could go wrong" risk may lead an organization down the path of focusing on risks that may not have a significant impact on the company and its objectives.

There is a "new sound" to risk management that begins with identifying the company's goals. Ask the executive team, what are we trying to achieve (as opposed to, what keeps you up at night)? Once the goals and objectives have been clearly defined, the next step is to focus only on the risks that threaten the achievement of the goals.

Taking a strategy-based approach to risk management, like the tightrope walker, puts the focus on the goals and objectives. The strategy-based approach enables the risk management analysis to concentrate on all risks, including potential black swans, that threaten the achievement of the corporate goals; to analyze and measure the company's capacity to achieve the goals; and to approach risk mitigation efforts in a balanced, focused and more cost-effective way.

The Right Questions

Asking the right questions takes as much skill as getting the right answers. Doing it right creates a healthy dialogue among the executive and risk management teams to understand the relationships among, and pervasiveness of, the company's goals and objectives. Certain goals may conflict with others. Consider, for example, a cruise line's desires to enhance passenger ticket revenue, increase customer service and decrease costs. They can be in inherent conflict with each other.


Management might want to ask, how can we increase ticketing prices while maintaining customer satisfaction and reducing costs? Such thought-provoking questions can facilitate a healthy debate and, in turn, an exploration of relevant risks. In the cruise line example, executives might decide not only to raise passenger ticket prices, but also to improve customer service without incurring excessive costs, perhaps through incentives or streamlining staff.

Once the goals and related risks have been identified, the next step is a survey of management. Why a survey? It allows for any number of participants to weigh in, the data collection is automated and efficient, and the anonymity fosters open and honest communication regarding risks.

Management is best positioned to assess these impacts because it is responsible for managing the day-to-day risks of operations. Because senior executives set strategy and business-line managers have a handle on the risks, both constituencies are invested in the process and the outcome.

A true enterprisewide risk assessment solicits the feedback of those responsible for managing risks across the organization. Participants for the survey should be chosen wisely. They should have a macro view of the company and understand the intricacies of each goal, as well as the relationships of risks to each goal.

For a survey to have maximum effectiveness, the key is to strike the right balance in portraying the seeming conflicts among the goals and identifying the impact of each risk to each goal. Designing a "right fit" survey is part of the "art" of an effective risk management assessment. Again, using the example of the cruise line company, the survey question might read as follows:

How does increasing passenger ticket sales impact our capacity to increase guest satisfaction when considering the following risks (rate impact on scale of 1-5)?

[Exhibit: sample responses rate the five risks on the 1-5 scale: Limited Port Capacity, 3; Rising Fuel Costs, 2; Global Terrorism, 3; Overbooking Cabins, 4; Down Economy, 5.]

Analyzing Results

Once the responses are collected, the next challenge is analyzing the results. This is where the strategy-based approach suggests applying some science and technology to the traditional performance of a risk assessment. Technology does not predict the future; only humans can speculate on future events. However, technology can analyze data trends and patterns, which are useful in studying the assessment results.

Any mathematical model can determine the correlation (the relevancy) between the risks identified and the impact each risk has on each goal. In addition, most models can also analyze the compounding impact of several risks occurring at the same time. Analyzing the compounding risks as they occur simultaneously is important, as most risks do not occur in isolation. In fact, most emergency response professionals will agree that an accident is the result of a combination of multiple risks occurring at the same time.

We believe that corporate entities are no different. Most organizations can handle isolated occurrences such as a financial restatement. However, a combination of risks, triggering simultaneously within the same area, may have a catastrophic impact. Think of a company filing a financial restatement due to suspected foul play by its executives, compounded by a revised, reported loss during a down economy. This situation is dramatic, but not uncommon. Analyzing risk patterns is the most effective method for projecting the compounding impact of risks.
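As a toy illustration only (the respondents, ratings and scoring below are invented, and real models would be considerably richer), a few lines of code can compute the average rated impact of each risk on a goal and flag which risks tend to be rated high together, a crude first pass at the pattern analysis described above:

```python
import numpy as np

# Hypothetical survey: rows are respondents, columns are the five example risks,
# entries are 1-5 ratings of each risk's impact on the guest-satisfaction goal.
rng = np.random.default_rng(1)
risks = ["Limited port capacity", "Rising fuel costs", "Global terrorism",
         "Overbooking cabins", "Down economy"]
ratings = rng.integers(1, 6, size=(40, len(risks)))

mean_impact = ratings.mean(axis=0)                 # average rated impact per risk
co_movement = np.corrcoef(ratings, rowvar=False)   # which risks are rated high together

for name, score in zip(risks, mean_impact):
    print(f"{name}: {score:.1f}")
```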

Managers Involved

Running the analysis of risk patterns based on the survey responses is the easy part. The next step requires the involvement of managers enterprisewide to define and implement an appropriate risk mitigation strategy and various control measures.

Back to the cruise line example: The risk of overbooking is found to have a low impact versus the risk of not enhancing revenue. Alternatively, overbooking cabins has a 20% impact on threatening guest satisfaction. However, the combined impact of overbooking cabins when on a cruise that has to tender to port (due to limited port docking capacity) exponentially increases the impact of guest dissatisfaction by over 50%. A proper risk mitigation strategy based on this impact analysis would be for management to enhance the sophistication of the online booking system or to implement more accurate measures to forecast demand. In addition, management should actively focus on ensuring as many port-docking spaces as possible for its ships' itineraries.

To align the risk mitigation strategy to the company's risk


patterns properly, the following must be defined:

• The relationship between the company's risks and goals.

Risks that could potentially threaten the company's goals must be correctly identified. In other words, did we ask the right "what could go wrong" questions? It is also important to understand the strength of the correlation (or relationship) between the goals set by the executive team and the risks identified and assessed by management.

• The pervasive impact each risk has on your corporate strategy. How many goals are impacted by each risk, and how great is the impact of each risk on each goal?

Risks and controls should be monitored continually, as the impact of each risk on each goal varies over time. For instance, during peak season, the risk of overbooking a cabin may be greater than at any other time of the year. The ease of the survey approach allows for real-time updates as frequently as the organization demands.

Predictive Analytics and Technology

A benefit of analyzing risk patterns over a period of time is the emergence of predictive analytics. Mathematical models that analyze and track risk patterns can also identify recurring patterns. The more such patterns there are for the model to analyze, the stronger the correlation between risks and goals. The stronger the correlation, the greater the level of confidence in the model to be predictive or to detect black swans.

To have an effective risk pattern analysis over time, the goals of the organization must be consistent. Any newly set strategic goal will change the risk landscape and require a new assessment.

Together with the mathematical model, a well designed technology framework enables a systematic and focused approach to risk and optimizes the deployment of resources. As a result, enterprise risk-resilience can be strengthened, customers and stakeholders can be protected and profitability can be improved.

The key capabilities of technology that enable effective strategy-based risk management include:

• Integration. A centralized risk framework integrates risk management processes across the enterprise by providing a single point of reference to identify, assess and profile risks. This simplifies the mapping of risks to goals. Similarly, an integrated repository of risk assessments, controls, key risk indicators, loss events and other risk-related data enables managers to understand quickly the relationships among various risks, as well as their relationships to goals. The survey management

itself benefits from an integrated approach. Platform-based technology can facilitate a streamlined, systematic and efficient process of survey design, distribution, implementation and response collection across departments, business units and geographic locations.

• Automation. Automating certain critical processes, such as the scoring and tabulation of risk assessments, can accelerate risk-goal mapping and save precious time, resources and effort. It also eliminates the need for manual and cumbersome paper-based processes. Alerts can be configured to be sent out automatically to the relevant users when a risk threshold has been breached or when a control assessment is not up-to-date.

• Visibility. Displaying the quantified risks in color-coded charts or heat maps enables risk managers to understand the impact of risks on goals easily and quickly. If the maps can be drilled down into, managers can study each goal in relation to its corresponding risks. They can also help ascertain how each unit or department is managing risks. Technology can thus bring transparency to risks. It also simplifies the creation of risk reports, automatically generates trend analyses and enables managers to track risk management processes across the enterprise in real time.

• Sustainability. A technology-enabled framework can drive a proactive and sustainable approach to risk management by continually monitoring risks and their relationship to goals. The framework can also establish clear links between each goal, risk and control, thereby simplifying control assessments and monitoring.

If equipped with loss-event and incident management capabilities, the technology framework can automatically track near misses and other threat issues, along with their root causes and owners. Subsequently, the issues can be routed through a systematic process of investigation and resolution.

As we have seen, risk management in isolation is not reflective of real-world risk environments. Aligning risk management to strategic goals yields more accurate and focused results. Integrating a strategy-based risk model with compliance, audit and governance processes, and confidently walking the tightrope of risks, will build resilience and keep stakeholders and customers happy.

Gaurav Kapoor is chief operating officer of MetricStream (www.metricstream.com), a Palo Alto, California-based provider of governance, risk and compliance solutions to a broad range of industries. Connie Valencia, a consultant who specializes in process improvement and internal controls, is principal of Miami-based Elevate Consulting (www.elevateconsulting.com).