The PRM Handbook – Volume III


The Professional Risk Managers' Handbook
A Comprehensive Guide to Current Theory and Best Practices

Edited by Carol Alexander and Elizabeth Sheedy
Introduced by David R. Koenig

Volume III: Risk Management Practices
The Official Handbook for the PRM Certification

Published by PRMIA Publications, Wilmington, DE

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Publisher. Requests for permission should be addressed to PRMIA Publications, PMB #5527, 2711 Centerville Road, Suite 120, Wilmington, DE, 19808 or via email to [email protected].

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential or other damages.

This book is also available in a "Sealed" digital format and may be purchased as such by members of the Professional Risk Managers' International Association at www.PRMIA.org.

ISBN 0-9766097-0-3 (3 Volume Set)
ISBN 0-9766097-3-8 (Volume III)

Copyright © 2004 The Authors and the Professional Risk Managers' International Association
Contents

Introduction
Preface to Volume III: Risk Management Practices

III.0 Capital Allocation and RAPM
  III.0.1 Introduction
  III.0.2 Economic Capital
  III.0.3 Regulatory Capital
  III.0.4 Capital Allocation and Risk Contributions
  III.0.5 RAROC and Risk-Adjusted Performance
  III.0.6 Summary and Conclusions
  References

III.A.1 Market Risk Management
  III.A.1.1 Introduction
  III.A.1.2 Market Risk
  III.A.1.3 Market Risk Management Tasks
  III.A.1.4 The Organisation of Market Risk Management
  III.A.1.5 Market Risk Management in Fund Management
  III.A.1.6 Market Risk Management in Banking
  III.A.1.7 Market Risk Management in Non-financial Firms
  III.A.1.8 Summary
  References

III.A.2 Introduction to Value at Risk Models
  III.A.2.1 Introduction
  III.A.2.2 Definition of VaR
  III.A.2.3 Internal Models for Market Risk Capital
  III.A.2.4 Analytical VaR Models
  III.A.2.5 Monte Carlo Simulation VaR
  III.A.2.6 Historical Simulation VaR
  III.A.2.7 Mapping Positions to Risk Factors
  III.A.2.8 Backtesting VaR Models
  III.A.2.9 Why Financial Markets Are Not 'Normal'
  III.A.2.10 Summary
  References

III.A.3 Advanced Value at Risk Models
  III.A.3.1 Introduction
  III.A.3.2 Standard Distributional Assumptions
  III.A.3.3 Models of Volatility Clustering
  III.A.3.4 Volatility Clustering and VaR
  III.A.3.5 Alternative Solutions to Non-normality
  III.A.3.6 Decomposition of VaR
  III.A.3.7 Principal Component Analysis
  III.A.3.8 Summary
  References

III.A.4 Stress Testing
  III.A.4.1 Introduction
  III.A.4.2 Historical Context
  III.A.4.3 Conceptual Context
  III.A.4.4 Stress Testing in Practice
  III.A.4.5 Approaches to Stress Testing: An Overview
  III.A.4.6 Historical Scenarios
  III.A.4.7 Hypothetical Scenarios
  III.A.4.8 Algorithmic Approaches to Stress Testing
  III.A.4.9 Extreme-Value Theory as a Stress-Testing Method
  III.A.4.10 Summary and Conclusions
  Further Reading
  References

III.B.1 Credit Risk Management
  III.B.1.1 Introduction
  III.B.1.2 A Credit To-Do List
  III.B.1.3 Other Tasks
  III.B.1.4 Conclusions
  References

III.B.2 Foundations of Credit Risk Modelling
  III.B.2.1 Introduction
  III.B.2.2 What is Default Risk?
  III.B.2.3 Exposure, Default and Recovery Processes
  III.B.2.4 The Credit Loss Distribution
  III.B.2.5 Expected and Unexpected Loss
  III.B.2.6 Recovery Rates
  III.B.2.7 Conclusion
  References

III.B.3 Credit Exposure
  III.B.3.1 Introduction
  III.B.3.2 Pre-settlement versus Settlement Risk
  III.B.3.3 Exposure Profiles
  III.B.3.4 Mitigation of Exposures
  References

III.B.4 Default and Credit Migration
  III.B.4.1 Default Probabilities and Term Structures of Default Rates
  III.B.4.2 Credit Ratings
  III.B.4.3 Agency Ratings
  III.B.4.4 Credit Scoring and Internal Rating Models
  III.B.4.5 Market-Implied Default Probabilities
  III.B.4.6 Credit Rating and Credit Spreads
  III.B.4.7 Summary
  References

III.B.5 Portfolio Models of Credit Loss
  III.B.5.1 Introduction
  III.B.5.2 What Actually Drives Credit Risk at the Portfolio Level?
  III.B.5.3 Credit Migration Framework
  III.B.5.4 Conditional Transition Probabilities – CreditPortfolioView
  III.B.5.5 The Contingent Claim Approach to Measuring Credit Risk
  III.B.5.6 The KMV Approach
  III.B.5.7 The Actuarial Approach
  III.B.5.8 Summary and Conclusion
  References

III.B.6 Credit Risk Capital Calculation
  III.B.6.1 Introduction
  III.B.6.2 Economic Credit Capital Calculation
  III.B.6.3 Regulatory Credit Capital: Basel I
  III.B.6.4 Regulatory Credit Capital: Basel II
  III.B.6.5 Basel II: Credit Model Estimation and Validation
  III.B.6.6 Basel II: Securitisation
  III.B.6.7 Advanced Topics on Economic Credit Capital
  III.B.6.8 Summary and Conclusions
  References

III.C.1 The Operational Risk Management Framework
  III.C.1.1 Introduction
  III.C.1.2 Evidence of Operational Failures
  III.C.1.3 Defining Operational Risk
  III.C.1.4 Types of Operational Risk
  III.C.1.5 Aims and Scope of Operational Risk Management
  III.C.1.6 Key Components of Operational Risk
  III.C.1.7 Supervisory Guidance on Operational Risk
  III.C.1.8 Identifying Operational Risk – the Risk Catalogue
  III.C.1.9 The Operational Risk Assessment Process
  III.C.1.10 The Operational Risk Control Process
  III.C.1.11 Some Final Thoughts
  References

III.C.2 Operational Risk Process Models
  III.C.2.1 Introduction
  III.C.2.2 The Overall Process
  III.C.2.3 Specific Tools
  III.C.2.4 Advanced Models
  III.C.2.5 Key Attributes of the ORM Framework
  III.C.2.6 Integrated Economic Capital Model
  III.C.2.7 Management Actions
  III.C.2.8 Risk Transfer
  III.C.2.9 IT Outsourcing
  References

III.C.3 Operational Value-at-Risk
  III.C.3.1 The 'Loss Model' Approach
  III.C.3.2 The Frequency Distribution
  III.C.3.3 The Severity Distribution
  III.C.3.4 The Internal Measurement Approach
  III.C.3.5 The Loss Distribution Approach
  III.C.3.6 Aggregating ORC
  III.C.3.7 Concluding Remarks
  References

Introduction

If you're reading this, you are seeking to attain a higher standard. Congratulations!

Those who have been a part of financial risk management for the past twenty years have seen it change from an on-the-fly profession, with improvisation as a rule, to one with substantially higher standards, many of which are now documented and expected to be followed. It's no longer enough to say you know. Now, you and your team need to prove it.

As its title implies, this book is the Handbook for the Professional Risk Manager. It is for those professionals who seek to demonstrate their skills through certification as a Professional Risk Manager (PRM) in the field of financial risk management. And it is for those looking simply to develop their skills through an excellent reference source. With contributions from nearly 40 leading authors, the Handbook is designed to provide you with the materials needed to gain the knowledge and understanding of the building blocks of professional financial risk management. Financial risk management is not about avoiding risk. Rather, it is about understanding and communicating risk, so that risk can be taken more confidently and in a better way. Whether your specialism is in insurance, banking, energy, asset management, weather, or one of myriad other industries, this Handbook is your guide.

In this volume, the current and best practices of market, credit and operational risk management are described. This is where we take the foundations of Volumes I and II and apply them to our profession in very specific ways. Here the strategic application of risk management to capital allocation and risk-adjusted performance measurement takes hold.
After studying all of the materials in the PRM Handbook, you will have read the materials necessary for passage of Exam III of the PRM Certification program. Those preparing for the PRM certification will also be preparing for Exam I on Finance Theory, Financial Instruments and Markets (covered in Volume I of the PRM Handbook); Exam II on the Mathematical Foundations of Risk Measurement (covered in Volume II of the PRM Handbook); and Exam IV on Case Studies, Standards of Best Practice, Conduct and Ethics, and PRMIA Governance. Exam IV is where we study some failed practices, standards for the performance of the duties of a Professional Risk Manager, and the governance structure of our association, the Professional Risk Managers' International Association. The materials for this exam are freely available on our website (see http://www.prmia.org/pdf/Web_based_Resources.htm) and are thus outside of the Handbook.

At the end of your progression through these materials, you will find that you have broadened your knowledge and skills in ways that you might not have imagined. You will have challenged yourself as well. And, you will be a better risk manager. It is for this reason that we have created the Professional Risk Managers' Handbook.

Our deepest appreciation is extended to Prof. Carol Alexander and Prof. Elizabeth Sheedy, both of PRMIA's Academic Advisory Council, for their editorial work on this document. The commitment they have shown to ensuring the highest level of quality and relevance is beyond description. Our thanks also go to Laura Bianco, President of PRMIA Publications, who has tirelessly kept the work process moving forward and who has dedicated herself to demanding the finest quality output. We also thank Richard Leigh, our London-based copyeditor, for his skilful and timely work. Finally, we express our thanks to the authors who have shared their insights with us. The demands for sharing of their expertise are frequent. Yet, they have each taken special time for this project and have dedicated themselves to making the Handbook and you a success. We are very proud to bring you such a fine assembly.

Much like PRMIA, the Handbook is a place where the best ideas of the risk profession meet. We hope that you will take these ideas, put them into practice and certify your knowledge by attaining the PRM designation. Among our membership are several hundred Chief Risk Officers / Heads of Risk and tens of thousands of other risk professionals who will note your achievements. They too know the importance of setting high standards and the trust that capital providers and stakeholders have put in them. Now they put their trust in you and you can prove your commitment and distinction to them.

We wish you much success during your studies and for your performance in the PRM exams!

David R. Koenig, Executive Director, Chair, Board of Directors, PRMIA
Preface to Volume III: Risk Management Practices

Section III is the ultimate part of The PRM Handbook in both senses of the word. Not only is it the final section, but it represents the final aims and objectives of the Handbook. Sections I (Finance Theory, Financial Instruments and Markets) and II (Mathematical Foundations of Risk Measurement) laid the necessary foundations for this discussion of risk management practices – the primary concern of most readers. Here some of the foremost practitioners and academics in the field provide an up-to-date, rigorous and lucid statement of modern risk management.

The practice of risk management is evolving at a rapid pace, especially with the impending arrival of Basel II. Interest in risk management is at an unprecedented level as institutions gather data, upgrade their models and systems, train their staff, review their remuneration systems, adapt their business practices and scrutinise controls for this new era. Aside from these regulatory pressures, shareholders and other stakeholders increasingly demand higher standards of risk management and disclosure of risk. In fact, it would not be an overstatement to say that risk consciousness is one of the defining features of modern business. Nowhere is this truer than in the financial services industry.

Section III is itself split into three parts which address market risk, credit risk and operational risk in turn. These three are the main components of risk borne by any organisation, although the relative importance of the mix varies. For a traditional commercial bank, credit risk has always been the most significant. It is defined as the risk of default on debt, swap, or other counterparty instruments. Credit risk may also result from a change in the value of a security, contract or asset resulting from a change in the counterparty's creditworthiness. In contrast, market risk refers to changes in the values of securities, contracts or assets resulting from movements in exchange rates, interest rates, commodity prices, stock prices, etc. Operational risk – the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events – is not, strictly speaking, a financial risk. Operational risks are, however, an inevitable consequence of any business undertaking. For financial institutions and fund managers, credit and market risks are taken intentionally with the objective of earning returns, while operational risks are a by-product to be controlled. While the importance of operational risk management is increasingly accepted, it will probably never have the same status in the finance industry as credit and market risk, which are the chosen areas of competence. For non-financial firms, the priorities are reversed. The focus should be on the risks associated with the particular business – the production and marketing of the service or product in which expertise is held. Market and credit risks are usually of secondary importance as they are a by-product of the main business agenda.

The last line of defence against risk is capital, as it ensures that a firm can continue as a going concern even if substantial and unexpected losses are incurred. Accordingly, one of the major themes of Section III is how to determine the appropriate size of this capital buffer. How much capital is enough to withstand unusual losses in each of the three areas of risk? Chapter III.0 explores this fundamental idea at a general level, since it is relevant for each of the three risk areas that follow.

The measurement of risk has further important implications for risk management as it is increasingly incorporated into the performance evaluation process. Since resources are allocated and bonuses paid on the basis of performance measures, it is essential that they be appropriately adjusted for risk. Only then will appropriate incentives be created for behaviour that is beneficial for shareholders and other stakeholders.
Market Risk

Chapter III.A.1 introduces the topic of market risk as it is practised by bankers, fund managers and corporate treasurers. It explains the four major tasks of risk management (identification, assessment, monitoring and control/mitigation), thus setting the scene for the quantitative chapters that follow. These days one of the major tasks of risk managers is to measure risk using value-at-risk (VaR) models. VaR models for market risk come in many varieties. The more basic VaR models are the topic of Chapter III.A.2, while the advanced versions are covered in III.A.3, along with some other advanced topics such as risk decomposition. The main challenge for risk managers is to model the empirical characteristics observed in the market, especially volatility clustering. The advanced models are generally more successful in this regard, although the basic versions are easier to implement. Realistically, there will never be a perfect VaR model, which is one of the reasons why stress tests are a popular tool. They can be considered an ad hoc solution to the problem of model risk. Chapter III.A.4 explains the need for stress tests and how they might usefully be constructed.

Credit Risk

Chapter III.B.1 introduces the sphere of credit risk management. Some fundamental tools for managing credit risk are explained here, including the use of collateral, credit limits and credit derivatives. Subsequent chapters on credit risk focus primarily on its modelling. Foundations for modelling are laid in Chapter III.B.2, which explains the three basic components of a credit loss: the exposure, the default probability and the recovery rate. The product of these three, which can be defined as random processes, is the credit loss distribution. Chapter III.B.3 takes a more detailed look at the exposure amount. While relatively simple to define for standard loans, assessment of the exposure amount can present challenges for other credit-sensitive instruments such as derivatives, whose values are a function of market movements. Chapter III.B.4 examines in detail the default probability and how it can evolve over time, including the use of agency ratings and credit scoring models. It also discusses the relationship between credit ratings and credit spreads. Chapter III.B.5 tackles one of the most crucial issues for credit risk modelling: how to model credit risk in a portfolio context and thereby estimate credit VaR. Since diversification is one of the most important tools for the management of credit risk, risk measures on a portfolio basis are fundamental. A number of tools are examined, including the credit migration approach, the contingent claim or structural approach, and the actuarial approach. Chapter III.B.6 extends the discussion of credit VaR models to examine credit risk capital. It compares both economic capital and regulatory capital for credit risk as defined under the new Basel Accord.

Operational Risk

The framework for managing operational risk is first established in Chapter III.C.1. After defining operational risk, it explains how it may be identified, assessed and controlled. Chapter III.C.2 builds on this with a discussion of operational risk process models. By better understanding business processes we can find the sources of risk and often take steps to re-engineer these processes for greater efficiency and lower risk. One of the most perplexing issues for risk managers is to determine appropriate capital buffers for operational risks. Operational VaR is the subject of Chapter III.C.3, including discussion of loss models, standard functional forms, both analytical and simulation methods, and the aggregation of operational risk over all business lines and event types.

Elizabeth Sheedy
Member of PRMIA's Academic Advisory Council and co-editor of The PRM Handbook

III.0 Capital Allocation and RAPM

Andrew Aziz and Dan Rosen (Algorithmics Inc.)

III.0.1 Introduction

We introduce in this chapter the definitions and key concepts regarding capital, focusing on the important role that capital plays in financial institutions. We make the distinction between the various types of capital: book capital, economic capital and regulatory capital. Readers should already be familiar with Chapter I.5, which has presented the basic principles behind the capital structure of the firm. There it was argued that the actual capital – the physical capital that a firm holds – should be distinguished from the optimal level of capital. The optimal capital depends on many things, including capital targets that are associated with a desired level of 'capital adequacy' to cover the potential for losses made by the firm. Capital adequacy is a measure of a firm's ability to remain a going concern. It is assessed using internal models (for economic capital), and for banks a certain level of capital is imposed by external standards (regulatory capital).

In this chapter you will learn:
- the role of capital in financial institutions and the different types of capital;
- the definition and mechanics of economic capital, as well as the methods to calculate it;
- the key concepts and objectives behind regulatory capital, as well as the main calculation principles in the Basel I Accord and the current Basel II Accord;
- the use of economic capital as a management tool for risk aggregation, risk-adjusted performance measurement and optimal decision making through capital allocation.

III.0.1.1 Role of Capital in Financial Institutions

Banks generate revenue by taking on exposure to their customers and by earning appropriate returns to compensate for the risk of this exposure. In general, if a bank takes on more risk, it can expect to earn a greater return. The trade-off, however, is that the same bank will, in general, increase the possibility of facing losses to the extent that it defaults on its debt obligations and is forced out of business. Banks that are managed well will attempt to maximise their returns only through risk taking that is prudent and well informed. The primary role of the risk management function in a bank is to ensure that the total risk taken across the enterprise is no greater than the bank's ability to absorb worst-case losses within some specified confidence interval.

In contrast to a typical corporation, the key role of capital in a financial institution such as a bank is not primarily one of providing a source of funding for the organisation. Banks usually have ready access to funding through their deposit-taking activities, which can be increased fairly fluidly. Instead, the primary role of capital in a bank, apart from the transfer of ownership, is to act as a buffer to:
- absorb large unexpected losses;
- protect depositors and other claim holders;
- provide enough confidence to external investors and rating agencies on the financial health and viability of the firm.

Capital adequacy is thus a measure of a bank's ability to remain a going concern under adverse conditions. Typically, targeted levels of capital are direct functions of the riskiness of the business activities or, from a balance sheet perspective, the riskiness of the assets. In its pure form, capital represents the difference between the market value of a bank's assets and the market value of its liabilities. If we make the assumption that liabilities are riskless, then the credit rating of a firm becomes a function of the overall riskiness of its assets and the amount of capital that the bank holds. Firms which hold more capital are able to take on riskier assets than firms of similar credit rating which hold less capital. A firm's credit rating can be seen as a measure of its capital adequacy and is generally linked to a specific probability that the firm will enter into default over some period of time.

Because capital can be viewed as a buffer against insolvency, capital represents an ideal metric for aggregating risks both across different asset classes and across different risk types. In practice, the sources of risk within the assets of a firm are classified as follows:
- credit risk – losses associated with the default (or credit downgrade) of an obligor (a counterparty, borrower or debt issuer);
- market risk – losses associated with changes in market values;
- operational risk – losses associated with operating failures.

III.0.1.2 Types of Capital

We can broadly classify capital into three types. (This is a general classification; there are various alternative definitions of capital and terminology used to describe them.)

- Economic capital (EC) – an estimate of the level of capital that a firm requires to operate its business with a desired target solvency level. Sometimes this is also referred to as risk capital. (Some authors have used alternative definitions: for example, Matten (2000, pp. 222–223) defines economic capital as risk capital plus goodwill, while Perold (2001) defines risk capital in terms of insurance, as explained in Section III.0.2.6.) In general, EC is meant to reflect the true 'fair market' value differential between assets and liabilities, and thus it is limited by the ability to mark to market a balance sheet in a manner that is indisputable for all key constituencies – the financial institution, the regulators and the investors themselves. As such, the determination of EC has traditionally been highly institution-specific.

- Regulatory capital (RC) – the capital that a bank is required to hold by regulators in order to operate. In general, this is an accounting measure defined by the regulatory authorities to act as a proxy for economic capital. The foremost objective of regulation is to define an unarguable standard for capital comparison that creates a level playing field across all financial institutions, to reduce the overall riskiness of the international banking system. As such, regulatory capital has traditionally been defined with respect to accounting book value measures rather than market value measures (notwithstanding the fact that, in some cases, accounting practice allows balance sheet items to be reported on a market value basis). To capture the discrepancy between fair values and book values, regulatory capital measures incorporate, in some cases, ad hoc approaches to normalise asset book values to reflect differences in risk. While the general intention is to make regulatory capital more risk-sensitive and align it more closely to economic capital, it is recognised that regulatory capital calculations tend to contain a number of inconsistencies, which have led regulators to set prescribed levels on a conservative basis. In some cases, these inconsistencies have led to the notion of regulatory arbitrage, whereby investment is determined not on the basis of risk–reward optimisation but on the basis of regulatory capital–reward optimisation.

- Book capital (BC) – the actual physical capital held. While in its strictest definition this should be simply equity capital, more generally this might also include other assets like liquid debt or hybrid instruments. In practice, many firms hold book capital in excess of the required economic and even regulatory capital. This reflects both historical and practical business reasons (for example, given their size, they might be too slow to invest it effectively).

The combined forces of deregulation and the increased market volatility in the late 1970s motivated many banks to aggressively grow market share and to acquire increasingly riskier assets on their balance sheets. This emphasis on growth precipitated a decline of capital levels throughout the 1980s that led to fears of increasing instability in the international banking system. These concerns motivated the push for the creation of international capital adequacy standards such as those ultimately established by the Basel Committee on Banking Supervision (BCBS). The imposition of the Basel I Accord in 1988 proved to be successful in its objective of increasing worldwide capital levels to desired levels by 1993.

III.0.1.3 Capital as a Management Tool

Capital can be used as a powerful business management tool, since it provides a consistent metric to determine: risk aggregation; risk-adjusted performance measurement; and asset and business allocation, decision making and performance measurement.

Risk aggregation generally refers to the development of quantitative risk measures that incorporate multiple sources of risk – different types of risk (market risk, credit risk and operational risk) across different businesses and activities. The most common approach is to estimate the EC that is necessary to absorb potential losses associated with each of the risks. EC can thus be seen as a common measure that can be used to summarise and compare the different risks incurred by a firm.

The objective of risk-adjusted performance measurement (RAPM) is to define a consistent metric that spans all asset and risk classes, with returns adjusted appropriately in the context of the amount of risk taken on. Each asset can, therefore, be assessed on a consistent basis. By allocating the appropriate amount of EC to each asset, net expected payoffs can then be expressed as returns on capital, thereby providing an 'apples to apples' benchmark for evaluating the performance of alternative business opportunities. RAPM thus becomes an ideal tool for capital allocation purposes. (This is further discussed in Sections III.0.4 and III.0.5.)

According to a recent study (BCBS, 2003b), the application of risk aggregation and EC methods is still in the early stages of its evolution, and there remains a wide variation in the manner in which aggregated risk measures such as EC are used for risk management decision making in practice today. While some firms remain sceptical of the value of reducing all risks to a single number, many now believe that there is a need for a common metric that allows risk–return comparisons to be made systematically across business activities whose mix of risks may be quite different (e.g. insurance versus trading).
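The 'returns on allocated capital' idea above is easy to make concrete. Below is a minimal Python sketch of ranking activities by a naive return-on-EC ratio; the business lines, payoffs and EC figures are purely hypothetical, and this simple ratio merely stands in for the fuller RAROC measures developed in Section III.0.5.

    # Hypothetical illustration: comparing business lines on an
    # 'apples to apples' basis via return on allocated economic capital.
    # All figures below are invented for the example.

    businesses = {
        # name: (expected net payoff, allocated economic capital), in $m
        "corporate lending": (12.0, 150.0),
        "trading desk":      (9.0,  60.0),
        "retail banking":    (7.5,  90.0),
    }

    def return_on_ec(payoff: float, ec: float) -> float:
        """Net expected payoff expressed as a return on allocated EC."""
        return payoff / ec

    # Rank the activities by risk-adjusted return, highest first.
    for name, (payoff, ec) in sorted(businesses.items(),
                                     key=lambda kv: return_on_ec(*kv[1]),
                                     reverse=True):
        print(f"{name:18s} return on EC = {return_on_ec(payoff, ec):6.1%}")

Ranked this way, the trading desk (15.0%) outperforms retail banking (8.3%) and corporate lending (8.0%) even though its raw payoff is not the largest – which is exactly the point of adjusting returns for the capital the risk consumes.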
III.0.2 Economic Capital

Economic capital acts as a buffer that provides protection against all the credit, market, operational and business risks faced by an institution. EC is set at a confidence level that is less than 100% (e.g. 99.5%), since it would be too costly to operate at the 100% level. The confidence interval is chosen as a trade-off between providing high returns on capital for shareholders and providing protection to the debt holders (and achieving a desired rating), as well as confidence to other claim holders, such as depositors.

In so far as EC reflects the amount of capital required to maintain a firm's target capital rating, the confidence interval can be defined at a very high quantile of the loss distribution. For example, to achieve a target S&P credit rating of BB, the probability of default over the next year for the firm cannot be greater than 3.0%, so the quantile should be set at least at 97%. In contrast, for the same firm to achieve a target S&P credit rating of BBB, it must lower its probability of default to be at most 0.5%, corresponding to the 99.5% quantile of the loss distribution. That is, the firm must have enough capital to sustain a '0.5% worst-case loss' over a one-year time horizon. Example III.0.1 below gives a simple outline of how this could be achieved.

III.0.2.1 Understanding Economic Capital

Denote by At and Dt the market values (at time t) of the assets and liabilities, respectively. The available capital Ct for the current time, t = 0, and at the end of one year, t = 1, can be expressed as

    C0 = A0 – D0,    C1 = A1 – D1.    (III.0.1)

For ease of presentation, consider the case where credit risk is the sole source of business risk to which the firm is exposed. If the nominal returns on the assets and liabilities are equal to rA and rD, respectively, then a worst-case loss from all sources, l (i.e., the loss such that C1 = 0 for a given confidence interval), would result in the value of the assets at t = 1 just being sufficient to cover the value of the debt at t = 1:

    C1 = 0 = A0(1 + rA)(1 – l) – D0(1 + rD).

Thus the maximum amount of debt allowable to sustain solvency under the worst-case scenario cannot exceed

    D0 = A0(1 + rA)(1 – l)/(1 + rD).    (III.0.2)

Since EC0 is the minimum amount of capital required to sustain such a loss, it is given by

    EC0 = A0(1 – [(1 + rA)(1 – l)/(1 + rD)]).    (III.0.3)

Hence the minimum amount of EC a financial institution must take on in order to avoid insolvency increases as the level of the worst-case loss l increases.

The expected return on EC reflects the impact of leverage on risk and reward. The expected return on EC over the period from t = 0 to t = 1 is given by [E(EC1)/EC0] – 1. An increase in expected returns (to compensate for increased risk) is reflected in the numerator, while the increase in risk is reflected in the denominator (the current EC).

Equation (III.0.3) is often expressed in terms of value-at-risk notation in the following manner:

    EC0 = A0 – VaR/(1 + rD),    (III.0.4)

where VaR represents the A1 value associated with the worst-case loss, corresponding to the appropriate (x%) confidence interval. Under the simplifying assumption that the spread u between the nominal return on the assets and the return on the liabilities is roughly equal to the expected default loss – that is, writing (1 + rA) = (1 + rD)(1 + u) – equation (III.0.3) becomes

    EC0 = A0{1 – (1 + rD)(1 + u)(1 – l)/(1 + rD)} = A0{1 – (1 + u)(1 – l)}.    (III.0.5)

By then ignoring second-order effects, equation (III.0.5) simplifies to the following more familiar expression for economic capital:

    EC0 ≈ A0(l – u).    (III.0.6)

This relationship is illustrated with respect to a default loss distribution in Figure III.0.1.

[Figure III.0.1: Credit loss distribution: expected and unexpected losses. The figure plots the probability of credit losses against loss size, marking the expected loss, the loss volatility, the unexpected loss and the VaR quantile.]

Expressions (III.0.4) and (III.0.6) highlight the link between VaR measures and EC. Credit reserves are traditionally set aside to absorb expected losses (EL) over the period (i.e., A0u). Thus, in its most common definition, EC is defined to absorb only unexpected losses (UL) up to a certain confidence level (i.e., A0(l – u)). More precisely, equation (III.0.4) shows that the VaR measure appropriate for EC should in fact measure losses relative to the assets' initial mark-to-market (MtM) value and not relative to the EL in its end-of-period distribution. Also, the VaR measure should explicitly account for the interest payments on the funding debt. The simplifying assumption leading to equation (III.0.6) and illustrated in Figure III.0.1 is the approach commonly taken by practitioners and generally leads to conservative estimates (for a detailed discussion, see Kupiec, 2002). While the UL approximation has very little effect on market risk, where the horizon is short (and EL is small), it may have a higher impact on credit risk.

Example III.0.1

Consider a BBB-rated firm (or a firm that has targeted a BBB rating). Suppose the firm has liabilities consisting of D0 = $92 million in deposits, with a cost of debt of rD = 5%, which have been invested in A0 = $100 million of assets (40% at a nominal return of 6.75% and 60% at a nominal return of 7%). If the nominal values of the assets and liabilities are equal to the market values, then the current capital for this firm is calculated as C0 = $8 million (the difference between the market value of the assets and the market value of the liabilities). The weighted average nominal return across the $100 million in total assets is rA = 6.9%, representing a compounded spread of 1.81% over the cost of debt.

Assume that, in a '0.5% worst-case scenario', the firm has a potential for a loss of 15% in the value of total assets. Under this scenario, the firm will become insolvent, as the value of the assets will be A1 = $100 million × 1.069 × 0.85 = $90.9 million, while the value of the liabilities will be D1 = $92 million × 1.05 = $96.6 million. Therefore, the minimum amount of capital the firm must hold to avoid insolvency in the worst-case scenario is

    EC0 = A0(1 – [(1 + rA)(1 – l)/(1 + rD)]) = 100 × (1 – [1.069 × 0.85/1.05]) = $13.46 million,

giving a capital shortfall of $5.46 million. Thus, for the firm to improve its capital adequacy to the desired level, it must increase its capital from $8 million to $13.46 million. Given the desire to achieve a BBB rating, the shareholders should then be 99.5% sure that such an increase in capital will ensure solvency from t = 0 to t = 1; that is, 99.5% of the time the future value of the non-defaulted assets must be at least equal to the future value of the liabilities.
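To keep the algebra honest, here is a short Python sketch that transcribes equations (III.0.1), (III.0.3) and (III.0.6) and reproduces the figures in Example III.0.1. It is a direct transcription of the formulas above under the example's assumptions, not a general-purpose capital model.

    # Economic capital per equations (III.0.3) and (III.0.6),
    # using the data of Example III.0.1.

    A0 = 100.0       # market value of assets, $m
    D0 = 92.0        # market value of liabilities (deposits), $m
    rA = 0.069       # weighted average asset return (40% at 6.75%, 60% at 7%)
    rD = 0.05        # cost of debt
    l_worst = 0.15   # 0.5% worst-case scenario: 15% loss of asset value

    C0 = A0 - D0     # current capital, equation (III.0.1)

    # Equation (III.0.3): minimum EC to remain solvent in the worst case.
    EC0 = A0 * (1 - (1 + rA) * (1 - l_worst) / (1 + rD))

    # Equation (III.0.6): the unexpected-loss approximation, where u is
    # the compounded spread of asset returns over the cost of debt (1.81%).
    u = (1 + rA) / (1 + rD) - 1
    EC0_approx = A0 * (l_worst - u)

    print(f"current capital C0        = ${C0:.2f}m")       # $8.00m
    print(f"EC0, eq. (III.0.3)        = ${EC0:.2f}m")      # $13.46m
    print(f"capital shortfall         = ${EC0 - C0:.2f}m") # $5.46m
    print(f"EC0 approx, eq. (III.0.6) = ${EC0_approx:.2f}m")

Running the transcription confirms the text: EC0 of $13.46 million against current capital of $8 million, a shortfall of $5.46 million; the first-order approximation (III.0.6) gives $13.19 million, close to but not identical with the exact figure, since second-order terms are dropped.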
III.0.2.2 The Top-Down Approach to Calculating Economic Capital

EC can be seen as a common measure that can be used to summarise and compare the different risks incurred by a firm, across different businesses and activities, and across different types of risk: market, credit and operational risk. As the determination of EC is based on the ability to sustain a worst-case loss associated with a given confidence interval, EC can be estimated at the enterprise level based on aggregate information of the firm's performance. Such 'top-down approaches' generally use one of two types of information: earnings or stock prices.

III.0.2.2.1 Top-Down Earnings Volatility Approach

A top-down approach based on a firm's earnings makes the simplifying assumption that the market value of capital is equal to the value of a perpetual stream of expected earnings. Thus, by assuming that all expected future earnings of the firm are equal to the next period's expected earnings, the value of capital can be expressed as

    C0 = Expected earnings / k,

where k represents the required return associated with the riskiness of the earnings. Then,

    EC0 = EaR / k,

where EaR represents the difference between expected earnings and the earnings under the worst-case scenario, for a given confidence interval (see Saita, 2003). Often this approach relies on the additional assumption that earnings are normally distributed, and thus the confidence interval can be determined as a multiple of the standard deviation.

Limitations of the earnings volatility approach include the following:
- It requires historical performance data for reliable estimates of the mean and standard deviation of earnings; few companies have enough data to yield reliable estimates.
- It does not link EC directly to the sources of risk.
- It does not naturally allow capital to be separated out into its market, credit and operational risk components, nor across different business lines or activities, and thus it does not suggest how to allocate capital across them.

III.0.2.2.2 Top-Down Option-Theoretic Approach

A top-down approach based on the Black–Scholes–Merton (BSM) framework (see Chapter III.B.5) assumes that the market value of capital can be modelled as a call option on the value of the firm's assets, where the strike price is the notional value of the debt. If the value of assets at the end of the period (t = 1) is greater than the value of the debt, then the value of capital is equal to the difference between the value of the assets and the debt; otherwise (in the case of insolvency), it is equal to zero. The BSM model allows us to estimate the implied probability of insolvency for the firm over the period from t = 0 to t = 1. The EC can then be determined on the basis of reapplying the BSM model for a level of debt that ensures, even under the worst-case scenario (at a given confidence interval), that the firm remains solvent.

Using this approach assumes we have the following information available:
- the current market value and volatility of the company's net assets;
- the default threshold (the asset level at which the debt holders demand repayment and bankruptcy can occur);
- the risk-free interest rate (maturity corresponding to the time horizon);
- the time horizon (e.g. the average duration of the firm's assets).

An advantage of this approach over the one based on EaR is the availability of stock market data. However, several simplifications regarding the capital structure and model assumptions must be made to apply this tool in practice. Similar to the EaR approach, a key limitation is that it does not allow the separation of capital into different risks such as market, credit and operational risks, nor does it suggest how to allocate it across different business lines or activities.

III.0.2.3 The Bottom-Up Approach to Calculating Economic Capital

In this approach, EC is estimated by modelling individual transactions and businesses and then aggregating the risks using advanced statistical portfolio models and stress testing. The bottom-up approach has now become best practice and, in contrast to the top-down approaches, provides greater transparency with regard to isolating credit risk, market risk and operational risk capital. Furthermore, it naturally accommodates various methodologies to allocate capital to individual businesses, activities and transactions. For example, a firm might use an internal VaR model for market risk, a credit VaR methodology for credit risk and a loss-distribution approach for operational risk.

In essence, the estimation of enterprise EC requires consolidation of risks at two levels. First, the firm computes market risk, credit risk and operational risk capital at the enterprise level. Then, at the second level, the firm must consolidate the capital across these risks. It is currently common practice simply to add up the credit risk capital, market risk capital and operational risk capital. This produces a conservative capital measure (basically assuming that the risks are perfectly positively correlated). However, many firms are now devoting considerable effort to measuring the correlations between these risks (and hence the levels of diversification), as well as developing frameworks to measure these risks in a more integrated way.

III.0.2.4 Stress Testing of Portfolio Losses and Economic Capital

In addition to the statistical approaches inherent in credit portfolio models, practitioners usually use stress testing as an important part of their EC methodology (see Chapter III.A.4). In essence, the stress-testing methodology involves the development of one or several specific adverse scenarios, which are judged to be extreme (falling beyond the desired confidence level). Current portfolio losses are then assessed against these specific scenarios. Stress scenarios may be based on historical experience or management judgement. The translation of the specific stress scenario losses into capital is largely based on management's objectives and judgement. Commonly, firms make their own decision on the relative weights of the statistical and stress test results in estimating the amount of EC required to support a portfolio. For example, an institution might assign the EC for market risk as 50% times the 99% VaR plus 50% times the loss outcome from some stress scenarios (thus normally being higher than the actual 99% VaR). Thus, to develop EC measures is today more an art than a science, relying on the combination of stress testing and statistical measures.

III.0.2.5 Enterprise Capital Practices – Aggregation

A large firm, such as a bank, acquires different types of financial risk through various businesses and activities. Such an institution is likely to have separate methodologies to measure market risk, credit risk and operational risk. In addition, it is likely that the institution has different methodologies to measure the credit risk of its larger commercial loans or its retail credits. Thus, a firm may have any number of methodologies for various risks and segments. If each type of risk is modelled separately, then the amounts of EC estimated for each need to be combined to obtain an enterprise capital amount. In making the combination, the firm needs to incorporate, either implicitly or explicitly, various correlation assumptions.

The methods of aggregation are as follows (a short illustrative sketch of the two alternatives is given below):
- Sum of stand-alone capital for each business unit and type of risk. This methodology essentially assumes perfect correlation across business lines and risk types and does not allow for diversification from them.
- Ad hoc or top-down estimates of cross-business and cross-risk correlation. Alternatively, in order to allow for some cross-business and cross-risk diversification, a firm might aggregate the individual stand-alone capital estimates using analytical models and simple cross-business (asset) correlation estimates.

The enterprise aggregation of capital is still in its infancy and is a topic of much research today. Capital is indeed a powerful tool for understanding, comparing and aggregating different types of risk to determine the overall health of the firm and to support better business decisions. The overall objective should be to set up an enterprise risk management framework.

III.0.2.6 Economic Capital as Insurance for the Value of the Firm

Standard practice is to define EC as a buffer to cover unexpected losses, using measures such as VaR; thus it is defined in terms of the tail of the loss distribution. More generally, some economists have used the term 'risk capital' to define capital in economic terms (Merton and Perold, 1993; Perold, 2001): risk capital is the smallest amount that can be invested to insure the value of the firm's net assets against loss in value relative to a risk-free investment. (Here 'net assets' refers to gross assets minus customer liabilities (swaps, insurance contracts, etc.), valued as if these liabilities are default-free.) As pointed out by Perold (2001), under this definition the risk capital of a long US Treasury bond position is the value of a put option with strike equal to the forward price of the bond. When returns are normally distributed, the value of such a put option is approximately proportional to the standard deviation of the return on the bond, and thus is approximately proportional to VaR. However, the put option accounts for the full distribution of losses, whereas VaR ignores the magnitude of outcomes conditional on being in the extreme tail of the distribution.

III.0.3 Regulatory Capital

This section considers the key concepts and objectives behind regulatory capital, as well as the main principles used in regulatory capital calculations in the Basel I Accord and the latest proposals of the current Basel II Accord. As also mentioned in Section III.0.1, regulatory capital refers to the capital that an institution is required to hold by regulators in order to operate. It is largely an accounting measure defined by the regulatory authorities to act as a proxy for economic capital: in a sense, economic capital assesses capital adequacy from an internal bank perspective, while regulatory capital imposes it from the external perspective of the regulator.

Capital adequacy is generally the single most important financial measure used by banking supervisors when examining the financial soundness of an institution. Regulators have an interest in ensuring that banks remain capable of meeting their obligations and in minimising potential systemic effects on the economy. Accordingly, capital adequacy requirements fulfil two objectives:
- Reducing systemic risk: to safeguard the security of the banking system and ensure its ongoing viability.
- Creating a level playing field: to ensure a more even playing field for internationally active banks.
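Returning to the two aggregation alternatives of Section III.0.2.5, the following Python sketch contrasts the simple sum of stand-alone capital (the perfect-correlation assumption) with a top-down, correlation-based aggregation. The stand-alone EC figures and the inter-risk correlations are purely illustrative; treating each stand-alone EC as proportional to a standard deviation is only exact for normal (or other elliptical) loss distributions, so this is a sketch of the idea rather than a production methodology.

    import math

    # Hypothetical stand-alone EC figures ($m) for the three risk types.
    ec = {"market": 40.0, "credit": 120.0, "operational": 60.0}

    # Method 1 – simple sum: assumes the risks are perfectly positively
    # correlated, producing a conservative enterprise capital figure.
    ec_sum = sum(ec.values())

    # Method 2 – top-down aggregation with an assumed inter-risk
    # correlation matrix (illustrative values only).
    corr = {
        ("market", "credit"): 0.5,
        ("market", "operational"): 0.2,
        ("credit", "operational"): 0.3,
    }

    def rho(i: str, j: str) -> float:
        """Look up the assumed correlation between two risk types."""
        if i == j:
            return 1.0
        return corr.get((i, j), corr.get((j, i), 0.0))

    risks = list(ec)
    ec_diversified = math.sqrt(
        sum(ec[i] * ec[j] * rho(i, j) for i in risks for j in risks)
    )

    print(f"sum of stand-alone EC   : ${ec_sum:.1f}m")          # $220.0m
    print(f"correlation-adjusted EC : ${ec_diversified:.1f}m")  # ~$172.3m
    print(f"diversification benefit: ${ec_sum - ec_diversified:.1f}m")

With these invented inputs, allowing for imperfect correlation reduces enterprise EC from $220 million to roughly $172 million – the 'diversification benefit' that motivates the measurement effort described above, and the reason the simple sum is regarded as conservative.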
the regulators and the investors themselves. 222–223) defines economic capital as risk capital plus goodwill. as well as their more conservative view on the applicability of the models. Matten (2000. this is an accounting measure defined by the regulatory authorities to act as a proxy for economic capital. and there are various alternative definitions of capital and terminology used to describe them. given their size. This emphasis on growth precipitated a decline of capital levels throughout the 1980s that led to fears of increasing instability in the international banking system. Thus. While in its strictest definition this should be simply equity capital.1. the determination of EC has traditionally been highly institution-specific.pdffactory. and thus it is limited by the ability to mark to market a balance sheet in a manner that is indisputable for all key constituencies – the financial institution.6). 3 Some authors have used alternative definitions: for example. Book capital (BC) – the actual physical capital held. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. Sometimes this is also referred to as risk capital. many firms hold book capital in excess of the required economic and even regulatory capital.0.2 Types of Capital We can broadly classify capital into three types: 2 Economic capital (EC) – an estimate of the level of capital that a firm requires to operate its business with a desired target solvency level. These concerns motivated the push for the creation of international capital adequacy standards such as those ultimately established by the Basel Committee on Banking Supervision (BCBS). Perold (2001) defines risk capital in terms of insurance (explained in Section III.com 15 . however. As such.The PRM Handbook – Volume III III. The combined forces of deregulation and the increased market volatility in the late 1970s motivated many banks to aggressively grow market share and to acquire increasingly riskier assets on their balance sheets. to reduce the overall riskiness of the international banking system. is to define an unarguable standard for capital comparison that creates a level playing field across all financial institutions. In general. The imposition of the Basel I Accord in 1988 proved to be successful in its objective of increasing worldwide capital levels to desired levels by 1993 and. pp.0. ultimately.2. EC is meant to reflect the true ‘fair market’ value differential between assets and liabilities. 5).3 Capital as a Management Tool Capital can be used as a powerful business management tool. 2003b). performance measurement.com 16 . While some firms remain sceptical of the value of reducing all risks to a single number. (This is further discussed in Sections III. the application of risk aggregation and EC methods is still in the early stages of its evolution.0. whereby investment is determined not on the basis of risk–reward optimisation but on the basis of regulatory capital– reward optimisation. accounting practice allows balance sheet items to be reported on a market value basis). asset and business allocation. it is recognised that regulatory capital calculations tend to contain a number of inconsistencies. Risk aggregation generally refers to the development of quantitative risk measures that incorporate multiple sources of risk.0. across different businesses and activities. 
The most common approach is to estimate the EC that is necessary to absorb potential losses associated with each of the risks. RAPM thus becomes an ideal tool for capital allocation purposes. with returns adjusted appropriately in the context of the amount of risk taken on.1.0. In some cases. credit risk and operational risk.4 and III. As such. be assessed on a consistent basis. The objective of risk-adjusted performance measurement (RAPM) is to define a consistent metric that spans all asset and risk classes. According to a recent study (BCBS. which have led regulators to set prescribed levels on a conservative basis. Each asset can. net expected payoffs can then be expressed as returns on capital. thereby providing an ‘apples to apples’ benchmark for evaluating the performance of alternative business opportunities. since it provides a consistent metric to determine: risk aggregation. these inconsistencies have led to the notion of regulatory arbitrage. EC can be seen as a common measure that can be used to summarise and compare the different risks incurred by a firm. in some cases. To capture the discrepancy between fair values and market values. many now believe that there is a need for a common metric Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. regulatory capital measures incorporate ad hoc approaches to normalise asset book values to reflect differences in risk.The PRM Handbook – Volume III to market value measures (notwithstanding the fact that.pdffactory. therefore. By allocating the appropriate amount of EC to each asset. III. different types of risk – market risk. 9%). the probability of default over the next year for the firm cannot be greater than 3. (III. respectively.com 17 ..2 Economic Capital Economic capital acts as a buffer that provides protection against all the credit. Given the desire to achieve a BBB rating and to remain solvent 99. In so far as EC reflects the amount of capital required to maintain a firm’s target capital rating.g.5% of the time the future value of the non-defaulted assets must be at least equal to the future value of the liabilities. Example III. l (i. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.5% quantile of the loss distribution.0%.0.0.g.5% of the time.pdffactory..and at the end of one year. respectively. insurance versus trading). t = 0 . The available capital Ct for the current time. would result in the value of assets at t = 1 just being sufficient to cover the value of debt in t = 1. 99. can be expressed as C0 = A0 – D0.The PRM Handbook – Volume III that allows risk–return comparisons to be made systematically across business activities whose mix of risks may be quite different (e. III. for the same firm to achieve a target S&P credit rating of BBB.2. such as depositors.0.1 below gives a simple outline of how this could be achieved.1) C1 = A1 – D1. there remains a wide variation in the manner in which aggregated risk measures such as EC are used for risk management decision making in practice today. market. If the nominal returns on the assets and liabilities are equal to rA and rD.5%. when C1 = 0 for a given confidence interval). For example. Then C1 = 0 = A0 (1 + rA)(1 l) D0(1 + rD). then a worst-case loss from all sources.1 Understanding Economic Capital Denote by At and Dt the market values (at time t) of the assets and liabilities. 
the confidence interval can be defined at a very high quantile of the loss distribution. a firm must have enough capital to sustain a ‘0. EC is set at a confidence level that is less than 100% (e. so the quantile should be set at least at 97%. t = 1. since it would be too costly to operate at the 100% level. In contrast. that is. operational and business risks faced by an institution. it must lower its probability of default to be at most 0.e. III. The confidence interval is chosen as a trade-off between providing high returns on capital for shareholders and providing protection to the debt holders (and achieving a desired rating) as well as confidence to other claim holders. 99. corresponding to the 99..5% worst-case loss’ over a one-year time horizon. to achieve a target S&P credit rating of BB.0. However. consider the case where credit risk is the sole source of business risk to which the firm is exposed. it is given by: EC0 = A0 (1 – [(1 + rA)(1 l )/(1 + rD )]). An increase in expected returns (to compensate for increased risk) is reflected in the numerator. u.pdffactory. l.0. then. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.0.3) is often expressed in terms of value-at-risk notation in the following manner: EC0 = A0 – VaR/(1+rD).3). EC0 = A0 {1 – (1 + rD)(1 + u)(1 – l )/(1 + rD)} (III.2) Since EC0 is the minimum amount of capital required to sustain such a loss. (III. while the increase in risk is reflected in the denominator (the current EC).6) This relationship is illustrated with respect to a default loss distribution in Figure III.3) Hence the minimum amount of EC a financial institution must take on in order to avoid insolvency increases as the level of the worse-case loss l increases.The PRM Handbook – Volume III Thus the maximum amount of debt allowable to sustain solvency under the worst-case scenario cannot exceed D0 = A0 (1 + rA)(1 l )/(1 + rD).5) simplifies to the following more familiar expression for economic capital: EC 0 A0 ( l u) . Equation (III.0. (III. Returning to equation (III.5) = A0 {1 – (1 + u)(1 – l )}.0.0. corresponding to the appropriate (x%) confidence interval. The expected return on EC reflects the impact of leverage on risk and reward. For ease of presentation. under the simplifying assumption that the spread between the nominal return on the assets and the return on the liabilities is roughly equal to the expected default loss.0. (III. The expected return on EC over the period from t = 0 to t = 1 is given by [E(EC1 ) / EC0] 1. By then ignoring second-order effects. equation (III.0.4) where VaR represents the Ai value associated with the worst-case loss. (III.com 18 .1.0.0. then the current capital for this firm is calculated as C0 = $8 million (the difference between the market value of the assets and the market value of the liabilities).e. Example III.6) highlight the link between VaR measures and EC.0.0. with a cost of debt of rD = 5%. equation (III. the VaR measure should explicitly account for the interest payments on the funding debt. Suppose the firm has liabilities consisting of D0 = $92 million in deposits.0.0. More precisely. in its most common definition. Also. 
If the nominal values of the assets and liabilities are equal to the market values.1: Credit loss distribution: expected and unexpected losses Expressions (III.0.The PRM Handbook – Volume III Credit Loss Distribution Expected Loss Volatility Probability Unexpected Loss VaR( 0 %) Loss Figure III.com 19 . 2002).0. EC is defined to absorb only unexpected losses (UL) up to a certain confidence level (i. see Kupiec. Thus.81%. A0u). representing a compounded spread of 1..). While the UL approximation has very little effect on market risk.1 Consider a BBB-rated firm (or a firm that has targeted a BBB rating).4) shows that the VaR measure appropriate for EC should in fact measure losses relative to the assets’ initial mark-to-market (MtM) value and not relative to the EL in its end-ofperiod distribution. The weighted average nominal return across the $100 million in total assets is rA = 6. The simplifying assumption leading to equation (III.9%.6) and illustrated in Figure III.1 is the approach commonly taken by practitioners and generally leads to conservative estimates (for a detailed discussion. where the horizon is short (and EL is small) it may have a higher impact in credit risk.. which have been invested in A0 = $100 million of assets (40% at a nominal return of 6. Credit reserves are traditionally set aside to absorb expected losses (EL) over the period (i.0. A0(l – u)). Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.75% and 60% at a nominal return of 7%.e.4) and (III.pdffactory. 46 million. The shareholders should be 99. At the enterprise level. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.6 million.com 20 . Under this scenario. the firm will become insolvent as the value of the assets will be A1 = $100 million ? 1. Therefore.7 million. the minimum amount of capital the firm must hold to avoid insolvency in the worst-case scenario is EC0 = A0 (1 – [(1 + rA)(1 l )/(1 + rD)]) = 100(1 – [1.05]) = 13. III.5% sure that such an increase in capital will ensure solvency from t = 0 to t = 1. it must increase its capital from $8 million to $13.46.2.85 = $90.069 ? 0. giving a capital shortfall of $5.The PRM Handbook – Volume III Assume that. where EaR represents the difference between expected earnings and the earnings under the worst-case scenario.9 million. As the determination of EC is based on the ability to sustain a worst-case loss associated with a given confidence interval.2. while the value of the liabilities will be D1 = $92 million ? 1. for the firm to improve its capital adequacy to the desired level.0. 2003).0. by assuming that all expected future earnings of the firm are equal to the next period’s expected earnings. where k represents the required return associated with the riskiness of the earnings. credit and operational risk. the value of capital can be expressed as C0 = Expected earnings / k. From equation (III.1 Top-Down Earnings Volatility Approach A top-down approach based on a firm’s earnings makes the simplifying assumption that the market value of capital is equal to the value of a perpetual stream of expected earnings. EC can be estimated based on aggregate information of the firm’s performance. and across different types of risk: market. in a ‘0. EC0 = EaR / k.pdffactory. across different businesses and activities.0. 
In other words.3).2 The Top-Down Approach to Calculating Economic Capital EC can be seen as a common measure that can be used to summarise and compare the different risks incurred by a firm. III. Such ‘top-down approaches’ generally use one of two types of information: earnings or stock prices. for a given confidence interval (see Saita.5% worse-case scenario’ the firm has a potential for a loss of 15% in the value of total assets.069(1 0. Often this approach relies on the additional assumption that earnings are normally distributed. and thus the confidence interval can be determined as a multiple of the standard deviation.05 = $96.15)/1.2. the default threshold (the asset level at which the debt holders demand repayment and bankruptcy can occur).com 21 . the risk-free interest rate (maturity corresponding to the time horizon). The bottomup approach has now become best practice and. It does not link EC directly to the sources of risk. III. it does not naturally allow capital to be separated out into its market.2.B. III.. in contrast to the top-down approaches. If the value of assets at the end of the period (t = 1) is greater than the value of the debt. The EC can then be determined on the basis of reapplying the BSM model for a level of debt that ensures. few companies have enough data to yield reliable estimates. that the firm remains solvent.5) assumes that the market value of capital can be modelled as a call option on the value of the firm’s assets where the strike price is the notional value of the debt. market risk and operational risk Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. In general. credit and operational risk components. An advantage of this approach over the one based on EaR is the availability of stock market data. it is equal to zero.3 The Bottom-Up Approach to Calculating Economic Capital In this approach. the average duration of the firm’s assets). The BSM model allows us to estimate the implied probability of insolvency for the firm over the period from t = 0 to t = 1. Using this approach assumes we have the following information available: the current market value and volatility of the company’s net assets. nor across different business lines or activities.2. EC is estimated by modelling individual transactions and businesses and then aggregating the risks using advanced statistical portfolio models and stress testing. credit and operational risks. several simplifications regarding the capital structure and model assumptions must be made to apply this tool in practice.pdffactory. nor does it suggest how to allocate it across different business lines or activities. otherwise (in the case of insolvency).2.g. the time horizon (e.0.The PRM Handbook – Volume III Limitations of the earnings volatility approach include the following: It requires historical performance data for reliable estimates of the mean and standard deviation of earnings. provides greater transparency with regard to isolating credit risk. However. then the value of capital is equal to the difference between the value of the assets and the debt. Similar to the EaR approach.0. even under the worst-case scenario (at a given confidence interval).2 Top-Down Option-Theoretic Approach A top-down approach based on the Black–Scholes–Merton (BSM) framework (see Chapter III. a key limitation is that it does not allow the separation of capital into different risks such as market. 
Limitations of the earnings volatility approach include the following:
- It requires historical performance data for reliable estimates of the mean and standard deviation of earnings. In general, few companies have enough data to yield reliable estimates.
- It does not link EC directly to the sources of risk. It does not naturally allow capital to be separated out into its market, credit and operational risk components, nor across different business lines or activities.

III.0.2.2.2 Top-Down Option-Theoretic Approach

A top-down approach based on the Black–Scholes–Merton (BSM) framework (see Chapter III.B.5) assumes that the market value of capital can be modelled as a call option on the value of the firm's assets, where the strike price is the notional value of the debt. If the value of the assets at the end of the period (t = 1) is greater than the value of the debt, then the value of capital is equal to the difference between the value of the assets and the debt; otherwise (in the case of insolvency), it is equal to zero. Using this approach assumes we have the following information available: the current market value and volatility of the company's net assets; the default threshold (the asset level at which the debt holders demand repayment and bankruptcy can occur); the risk-free interest rate (maturity corresponding to the time horizon); and the time horizon (e.g., the average duration of the firm's assets). The BSM model allows us to estimate the implied probability of insolvency for the firm over the period from t = 0 to t = 1. The EC can then be determined by reapplying the BSM model for a level of debt that ensures, even under the worst-case scenario (at a given confidence interval), that the firm remains solvent.

An advantage of this approach over the one based on EaR is the availability of stock market data. However, several simplifications regarding the capital structure and model assumptions must be made to apply this tool in practice. Similar to the EaR approach, a key limitation is that it does not allow the separation of capital into different risks such as market, credit and operational risk, nor does it suggest how to allocate capital across different business lines or activities.

III.0.2.3 The Bottom-Up Approach to Calculating Economic Capital

In this approach, EC is estimated by modelling individual transactions and businesses and then aggregating the risks using advanced statistical portfolio models and stress testing. The bottom-up approach has now become best practice and, in contrast to the top-down approaches, provides greater transparency with regard to isolating credit risk, market risk and operational risk capital. It also naturally accommodates various methodologies to allocate capital to individual businesses, activities and transactions. For example, a firm might use an internal VaR model for market risk, a credit VaR methodology for credit risk and a loss-distribution approach for operational risk.

III.0.2.4 Stress Testing of Portfolio Losses and Economic Capital

In addition to the statistical approaches inherent in credit portfolio models, practitioners usually use stress testing as an important part of their EC methodology (see Chapter III.A.4). In essence, the stress-testing methodology involves the development of one or several specific adverse scenarios, which are judged to be extreme (falling beyond the desired confidence level). Stress scenarios may be based on historical experience or management judgement. Current portfolio losses are then assessed against these specific scenarios. The translation of the specific stress scenario losses into EC is largely based on management's objectives and judgement. Commonly, firms make their own decision on the relative weights of the statistical and stress-test results in estimating the amount of EC required to support a portfolio. For example, an institution might assign the EC for market risk as 50% times the 99% VaR plus 50% times the loss outcome from some stress scenarios (thus normally being higher than the actual 99% VaR). The combination of stress testing and statistical measures, and the various correlation assumptions used to develop EC measures, are today more an art than a science.

III.0.2.5 Enterprise Capital Practices – Aggregation

A large firm, such as a bank, acquires different types of financial risk through various businesses and activities. To estimate total capital, the firm must consolidate the capital across these risks. In a bottom-up approach, the estimation of enterprise EC requires consolidation of risks at two levels: first, within a given type of risk, across different businesses and activities; then, at the second level, across market risk, credit risk and operational risk at the enterprise level. To achieve this, it is currently common practice simply to add up the credit risk capital, market risk capital and operational risk capital. This produces a conservative capital measure (basically assuming that the risks are perfectly positively correlated). However, many firms are now devoting considerable effort to measuring the correlations between these risks (and hence the levels of diversification), as well as developing frameworks to measure these risks in a more integrated way. Capital is indeed a powerful tool for understanding, comparing and aggregating different types of risk to determine the overall health of the firm and to support better business decisions.
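The two aggregation practices just described – simple summation and correlation-based consolidation – can be contrasted with a short sketch. The stand-alone capital figures and the inter-risk correlation matrix below are purely hypothetical, and the variance–covariance formula used is itself a simplifying assumption (it is exact only for elliptically distributed losses):

```python
import numpy as np

# Hypothetical sketch: aggregating stand-alone EC for market, credit and
# operational risk, either by simple summation (perfect positive correlation)
# or with an assumed cross-risk correlation matrix.
ec = np.array([40.0, 70.0, 30.0])          # stand-alone EC ($m), hypothetical
corr = np.array([[1.0, 0.5, 0.2],
                 [0.5, 1.0, 0.3],
                 [0.2, 0.3, 1.0]])         # assumed cross-risk correlations

ec_sum = ec.sum()                          # conservative measure: 140.0
ec_div = np.sqrt(ec @ corr @ ec)           # variance-covariance aggregation
print(f"additive EC = {ec_sum:.1f}m, correlation-adjusted EC = {ec_div:.1f}m")
```

The gap between the two figures is the diversification benefit that the simple additive approach gives up.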
A firm may have any number of methodologies for various risks and segments. Such an institution is likely to have separate methodologies to measure market risk, credit risk and operational risk. In addition, it is likely that the institution has different methodologies to measure the credit risk of its larger commercial loans and of its retail credits. If each type of risk is modelled separately, then the amounts of EC estimated for each need to be combined to obtain an enterprise capital amount. In making the combination, the firm needs to incorporate, either implicitly or explicitly, cross-business and cross-risk correlation assumptions. The methods of aggregation are as follows:
- Sum of stand-alone capital for each business unit and type of risk. This methodology essentially assumes perfect correlation across business lines and risk types and does not allow for diversification from them.
- Ad hoc or top-down estimates of cross-business and cross-risk correlation. In order to allow for some cross-business and cross-risk diversification, a firm might aggregate the individual stand-alone capital estimates using analytical models and simple cross-business (asset) correlation estimates.

The enterprise aggregation of capital is still in its infancy and is a topic of much research today.

III.0.2.6 Economic Capital as Insurance for the Value of the Firm

Standard practice is to define EC as a buffer to cover unexpected losses, using measures such as VaR. Thus it is defined in terms of the tail of the loss distribution. Alternatively, some economists have used the term 'risk capital' to define capital in economic terms (Merton and Perold, 1993; Perold, 2001): risk capital is the smallest amount that can be invested to insure the value of the firm's net assets against a loss in value relative to a risk-free investment. ('Net assets' refers here to gross assets minus customer liabilities – swaps, insurance contracts, etc. – valued as if these liabilities are default-free.) Under this definition, the risk capital of a long US Treasury bond position is the value of a put option with strike equal to the forward price of the bond. As pointed out by Perold (2001), when returns are normally distributed, the value of such a put option is approximately proportional to the standard deviation of the return on the bond, and thus is approximately proportional to VaR. More generally, however, the put option accounts for the full distribution of losses, whereas VaR ignores the magnitude of outcomes conditional on being in the extreme tail of the distribution.
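Under the Merton–Perold definition, risk capital can be computed as the price of the insurance put. The sketch below prices a one-year put struck at the forward (risk-free accreted) value of the net assets using the Black–Scholes formula; the asset value, volatility and rate are illustrative assumptions:

```python
from math import exp, log, sqrt
from statistics import NormalDist

# Hypothetical sketch of Merton-Perold 'risk capital': the value of a put
# struck at the forward value of the net assets, i.e. insurance against
# underperforming a risk-free investment over the horizon.
N = NormalDist().cdf

def risk_capital(net_assets: float, vol: float, rf: float, t: float = 1.0) -> float:
    """Black-Scholes value of a put with strike = forward value of net assets."""
    strike = net_assets * exp(rf * t)          # risk-free benchmark value
    d1 = (log(net_assets / strike) + (rf + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return strike * exp(-rf * t) * N(-d2) - net_assets * N(-d1)

# Example: $100m of net assets with 10% annual volatility.
print(f"risk capital = {risk_capital(100.0, 0.10, 0.03):.2f}m")
```

For small volatilities this reproduces Perold's observation: the put value is roughly proportional to the standard deviation of returns (about 0.4 × vol × asset value for a one-year at-the-forward put, here ≈ $4m).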
III.0.3 Regulatory Capital

This section considers the key concepts and objectives behind regulatory capital, as well as the main principles used in regulatory capital calculations in the Basel I Accord and the latest proposals of the current Basel II Accord. As defined in Section III.0.1, regulatory capital refers to the capital that an institution is required to hold by regulators in order to operate. It is largely an accounting measure, defined by the regulatory authorities to act as a proxy for economic capital. As we review the key concepts in regulatory capital, it is important to highlight two points:
- Regulatory requirements are continuously changing, and it is vital for practitioners to be familiar with both the latest regulations and the specific requirements in each jurisdiction.
- While the general intention is to make regulatory capital more risk-sensitive and align it more closely to economic capital, it is important to understand the limitations of using it directly for managing risk, measuring performance and pricing credits. The overall objective should be to set up an enterprise risk management framework which measures economic capital and reconciles it with regulatory capital.

III.0.3.1 Regulatory Capital Principles

In this subsection, we focus mainly on capital regulation in the banking industry. Capital adequacy is generally the single most important financial measure used by banking supervisors when examining the financial soundness of an institution. As also mentioned in Section III.0.1, from an internal bank perspective, capital is designed as a buffer to absorb large unexpected losses, protect depositors and other claim holders, and provide enough confidence to external investors and rating agencies on the financial health and viability of the firm. In contrast, from the external perspective of the regulator, capital adequacy requirements fulfil two objectives:
- Reducing systemic risk: to safeguard the security of the banking system and ensure its ongoing viability. In a sense, national governments act as guarantors: they have an interest in ensuring that banks remain capable of meeting their obligations and in minimising potential systemic effects on the economy. Regulatory capital helps to ensure that banks bear their share of the burden, otherwise borne by national governments.
- Creating a level playing field: to ensure a more even playing field for internationally active banks, by submitting all banks to (roughly) the same rules.
III.0.3.2 The Basel Committee on Banking Supervision and the Basel Accord

A key cornerstone of international banking capital regulation is the Basel Committee on Banking Supervision (BCBS), which in 1988 first introduced the framework for international capital adequacy standards, commonly referred to as the Basel I Accord or the BIS I Accord. The 1988 accord focused mainly on credit risk, establishing minimum capital standards that linked capital requirements to the credit exposures of banks. It was also the first step in establishing a level playing field across member countries for internationally active banks. This framework has been adopted as the underlying structure of all bank capital adequacy regulations throughout the G10, as well as in many other countries around the world. Today, over 100 countries are expected to implement the latest guidelines set by the BCBS. All the papers from the BCBS can be downloaded from www.bis.org.

Summary Chronology of Banking Regulatory Capital

1988 – The BCBS introduces the framework for international capital adequacy standards, adopted throughout the G10 as well as in over 100 other countries (BCBS, 1988).

1995 – An amendment to the initial accord further allows banks to reduce 'credit-equivalent exposures' when netting agreements are in place (BCBS, 1995).

1996 – The 1996 amendment (sometimes referred to as BIS 98, after its date of implementation) extends the capital requirements to include risk-based capital for the market risk in the trading book (BCBS, 1996).

1999 – The BCBS issues a proposal for a new capital adequacy framework to replace the 1988 Basel I Accord, commonly referred to as the Basel II Accord or BIS II Accord. The new accord attempts to improve the capital adequacy framework by substantially increasing the risk sensitivity of the minimum capital requirements, and also encompasses supervisory review and market discipline principles. Under the proposal, banks are required for the first time to allocate capital specifically against operational risks.

2000–2003 – The BCBS releases various consultation documents and conducts major data collection exercises, called quantitative impact studies (QIS), intended to gather information to assess whether it has met its goals.

April 2003 – The BCBS releases the third consultative paper (CP3) on the new Basel Accord (BCBS, 2003a).

June 2004 – The final version of the Basel II Accord is published (BCBS, 2004).

2006–2007 – Currently scheduled implementation of Basel II.
III.0.3.3 Basel I Regulation

Prior to its implementation in 1992, bank capital was regulated through simple, ad hoc capital standards. The 1988 accord focused mainly on credit risk, establishing minimum capital standards that linked capital requirements to the credit exposures of banks. The 1996 amendment further extended the capital requirements to include risk-based capital for the market risk in the trading book. Basel I does not cover capital charges for operational risk. While generally prescriptive, Basel I left various choices to be made by local regulators, thus resulting in several variations of the implementation across jurisdictions.

III.0.3.3.1 Minimum Capital Requirements under Basel I

Capital requirements under Basel I are the sum of:
- the credit risk capital charge, which applies to all positions in the trading and banking books (including OTC derivatives and balance-sheet commitments); and
- the market risk capital charge for the trading book portfolio and off-balance-sheet items.

For market risk capital, the accord allows, in addition to a standardised method, the use of internal VaR models covering both general market risk (or systemic risk) and specific risk. The regulatory charge for banks using internal market risk models is given by

Market Risk Capital = [M_MR × VaR + M_SR × SpecificVaR] × Trigger/8,    (III.0.7)

where VaR and SpecificVaR denote, respectively, the 99% market VaR and specific VaR over a 10-day horizon, and M_MR and M_SR are multipliers designed to adjust the capital to cover for modelling errors and reward the quality of the models. The first multiplier ranges between 3 and 4, and the second between 4 and 5. Specific VaR applies to both equities and bonds; for bonds it covers the risk of defaults, migration and changes in spreads. Finally, the Trigger is related to the quality of controls in the bank: currently it is set to 8 in North America and between 8 and 25 in the UK. The reader is referred to Chapter III.A.2 for the basics of market risk VaR.

The methodology for credit capital is simple. The calculation of credit regulatory requirements has three steps: converting exposures to credit-equivalent assets, computing loan equivalents for off-balance-sheet and OTC portfolios, and applying the capital adequacy ratio. Minimum capital requirements are obtained by multiplying the sum of all the risk-weighted assets by the capital adequacy ratio of 8% (also referred to as the Cooke ratio):

Capital = (Σk RWAk) × 8%.    (III.0.8)
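The two Basel I charges can be expressed in a few lines of Python. The inputs below are hypothetical illustrations; in practice the multipliers and the trigger are set by the supervisor within the ranges just described:

```python
# A minimal sketch of the Basel I internal-models market risk charge
# (equation III.0.7) and the 8% Cooke ratio (equation III.0.8).
def market_risk_capital(var_99, specific_var, m_mr=3.0, m_sr=4.0, trigger=8.0):
    """[M_MR * VaR + M_SR * SpecificVaR] * Trigger / 8."""
    return (m_mr * var_99 + m_sr * specific_var) * trigger / 8.0

def credit_risk_capital(risk_weighted_assets):
    """8% capital adequacy (Cooke) ratio applied to the sum of RWA."""
    return 0.08 * sum(risk_weighted_assets)

# Example: 10-day 99% VaR of $5m, specific VaR of $1m; RWA of $500m and $200m.
total = market_risk_capital(5.0, 1.0) + credit_risk_capital([500.0, 200.0])
print(f"total Basel I capital charge = ${total:.1f}m")   # 19.0 + 56.0 = 75.0
```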
A great strength of Basel I is the simplicity of the framework, which has allowed it to be implemented in countries with different banking and accounting practices. Thus it has been quite successful in achieving its two general objectives: to safeguard the stability of the banking system and to ensure a level playing field internationally. Its simplicity, however, has also been its major weakness, as the accord does not effectively align regulatory capital requirements closely with an institution's risk. For example, criticisms of the credit risk capital include the lack of proper differentiation for credit quality and maturity, insufficient incentives for credit mitigation techniques, and the lack of recognition of portfolio effects (these are discussed briefly in Chapter III.B.6).

III.0.3.3.2 Regulatory Arbitrage under Basel I

The lack of differentiation in the accord, together with the financial engineering advances in credit risk over the last decade, has led to the development of a regulatory capital arbitrage industry. Regulatory arbitrage refers to the process by which regulatory capital is reduced, without an equivalent reduction of the actual risk being taken, through instruments such as credit derivatives or securitisation. Through regulatory arbitrage instruments, banks typically transfer low-risk exposures from their banking book to their trading book, or simply place them outside the regulated banking system. This is described in greater detail in Chapter III.B.6.

III.0.3.3.3 Meeting Capital Adequacy Requirements

Available regulatory credit capital is divided into two categories:
- Tier 1 capital: essentially shareholder funds (equity) and retained earnings;
- Tier 2 capital: long-term subordinated debt, other qualifying hybrid instruments and reserves (such as loan loss reserves).

(A Tier 3 capital was introduced with the market risk requirements: short-term subordinated debt can be used to meet market risk requirements as well, but not credit risk requirements.)

Capital adequacy is generally expressed as a ratio. For example, an 8% capital ratio means that the total Tier 1 and Tier 2 capital is 8% of the risk-weighted assets (RWA); a 6% Tier 1 capital ratio refers to Tier 1 capital being 6% of the RWA. From a regulatory perspective, Tier 1 capital must cover at least 50% of the total capital; that is, Tier 2 cannot exceed Tier 1 capital. In addition, the subordinated debt included in Tier 2 cannot exceed 50% of the Tier 1 capital.
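These tier restrictions can be encoded as simple caps. A minimal sketch, with hypothetical inputs:

```python
# Sketch of the Basel I tier limits described above. All inputs hypothetical.
def eligible_capital(tier1, tier2, tier2_sub_debt):
    """Total eligible capital after applying the stated tier limits."""
    sub_debt = min(tier2_sub_debt, 0.5 * tier1)    # sub-debt <= 50% of Tier 1
    tier2_eligible = min(tier2 - tier2_sub_debt + sub_debt, tier1)  # Tier 2 <= Tier 1
    return tier1 + tier2_eligible

# E.g. $60m Tier 1, $80m Tier 2 of which $40m is subordinated debt:
rwa = 1500.0
cap = eligible_capital(60.0, 80.0, 40.0)
print(f"eligible capital = {cap:.0f}m, ratio = {cap / rwa:.1%}")  # 120m, 8.0%
```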
III.0.3.4 Basel II Accord – Latest Proposals

The final version of the Basel II Accord was published in June 2004 (BCBS, 2004), although a small number of open issues are still to be resolved during 2004. Its implementation will take effect between the end of 2006 and the end of 2007. The new accord intends to foster a strong emphasis on risk management and to encourage ongoing improvements in banks' risk assessment capabilities. Basel II attempts to improve the capital adequacy framework along two important dimensions: first, a substantial increase in the risk sensitivity of the minimum capital requirements, to be accomplished by closely aligning banks' capital requirements with prevailing modern risk management practices; and second, the development of a capital regulation that encompasses not only minimum capital requirements, but also supervisory review and market discipline, by ensuring that this emphasis on risk makes its way into supervisory practices and into market discipline through enhanced risk- and capital-related disclosures. The Basel II Accord thus consists of three pillars: minimum capital requirements, supervisory review, and market discipline. We briefly summarise these below and then present the key principles behind the computation of minimum capital requirements; for greater detail, the reader is referred to the BCBS papers.

III.0.3.4.1 Pillar 1 – Minimum Capital Requirements

Minimum capital requirements consist of three components:
1. the definition of capital (no major changes from the 1988 accord);
2. the definition of RWA;
3. the minimum ratio of capital to RWA (which remains 8%).

Basel II proposes to modify the definition of risk-weighted assets in two areas: substantive changes to the treatment of credit risk relative to the Basel I Accord, including the treatment of collateral, guarantees and credit derivatives as well as specific securitisation exposures (these are discussed further in Chapter III.B.6); and the introduction of an explicit treatment of operational risk, which will result in a measure of operational risk being included in the denominator of a bank's capital ratio.

Basel II moves away from a one-size-fits-all approach to the measurement of risk, through the introduction of three distinct options for the calculation of credit risk and three others for operational risk. These approaches present increasing complexity and risk sensitivity, so that banks and supervisors can select the approaches that are most appropriate to the stage of development of banks' operations and of the financial market infrastructure. Chapter III.B.6 briefly reviews the three credit risk approaches and gives the key formulae for minimum capital requirements for credit risk. The operational risk approaches can be found in Chapter III.C.3.

III.0.3.4.2 Pillar 2 – Supervisory Review

The second pillar is based on a series of guiding principles, which point to the need for banks to assess their capital adequacy positions relative to their overall risks, and for supervisors to review and take appropriate actions in response to those assessments. The inclusion of supervisory review provides benefits through its emphasis on strong risk assessment capabilities by banks and supervisors alike. Banks under internal ratings-based credit models will be required to demonstrate that they use the outputs of those models not only for minimum capital requirements but also to manage their business. Important new components of Pillar 2 also include the treatment of stress testing, concentration risk and the residual risks arising from the use of collateral.

III.0.3.4.3 Pillar 3 – Market Discipline

Also referred to as public disclosure, the third pillar aims to encourage safe and sound banking practices through effective market disclosures of capital levels and risk exposures. This will help market participants better assess a bank's ability to remain solvent.

III.0.3.5 A Simple Derivation of Regulatory Capital

Recognising that the simple difference between the value of assets and the value of liabilities in accounting value terms is not a good indicator of the true difference in market value terms has led regulators to make appropriate adjustments in the calculation of regulatory capital. In this section, we follow a similar approach to Section III.0.2.1 to understand regulatory capital.
For a typical bank balance sheet, the difference between assets and liabilities includes general provisions GP0 and reserves R0, as well as the book value of equity E0{BV}:

A0{B/S} − L0{BV} = GP0 + E0{BV} + R0.    (III.0.9)

The amount of available capital for regulatory purposes, however, can be defined loosely as the difference between total assets (balance-sheet as well as non-balance-sheet) and only that component of total liabilities where non-payment of returns defines insolvency. Those liabilities whose non-payment constitutes insolvency represent the difference between total liabilities and the quasi-debt, QD0, that combines both debt and equity features (non-payment of quasi-debt does not imply insolvency, at least for a period of time):

D0{BV} = L0{BV} − QD0{BV}.    (III.0.10)

Balance-sheet assets are the net of the book value of assets and any special provisions SP0 to account for defaulted or nearly-defaulted positions, while non-balance-sheet assets consist of any revaluations RV0 to market value as well as any undisclosed profits UP0:

A0{B/S} = A0{BV} − SP0,    (III.0.11)
A0{non-B/S} = RV0 + UP0.    (III.0.12)

The amount of available capital can thus be represented as follows:

RC0 = A0{B/S} + A0{non-B/S} − D0{BV}.

Combining these relationships provides a very straightforward definition of regulatory capital in book value terms that is designed to be as good a proxy as possible for true market valuation:

RC0 = E0{BV} + R0 + QD0{BV} + GP0.    (III.0.13)

The first two terms of this definition of regulatory capital are referred to as Tier 1 capital, while the second two terms are referred to as Tier 2 capital. Capital adequacy standards are based on minimum requirements for each of the two tiers of capital.

Note that the above adjustments to book value measures still do not adequately capture the true market values and, hence, the true riskiness of the assets. Therefore, standards on regulatory capital are typically expressed as minimum percentages of risk-weighted assets rather than total assets. An RWA is expressed as a percentage of the nominal value of a balance-sheet asset, reflecting the perceived differences in risk between categories of borrowers. For example, under the first Basel Accord loans to private companies are assigned a risk weight of 100%, while loans to banks in OECD countries are assigned a risk weight of 20%.
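A minimal numerical sketch of identities (III.0.9)–(III.0.13) follows. All book values are hypothetical ($m) and are chosen to satisfy identity (III.0.9); note that the revaluations and undisclosed profits are grouped into Tier 2 here, which is an assumption consistent with the way such items are treated in practice:

```python
# Hypothetical book values ($m) illustrating equations (III.0.9)-(III.0.13).
bv_assets, special_prov = 1000.0, 15.0            # A0{BV}, SP0
revaluations, undisclosed = 12.0, 3.0             # RV0, UP0
gen_prov, reserves, equity_bv = 10.0, 20.0, 70.0  # GP0, R0, E0{BV}
quasi_debt = 40.0                                 # QD0{BV}

bs_assets = bv_assets - special_prov              # (III.0.11): 985
liabilities = bs_assets - (gen_prov + equity_bv + reserves)  # from (III.0.9): 885
non_bs_assets = revaluations + undisclosed        # (III.0.12): 15
debt = liabilities - quasi_debt                   # (III.0.10): 845

rc = bs_assets + non_bs_assets - debt             # available regulatory capital
tier1 = equity_bv + reserves                      # Tier 1 terms of (III.0.13)
tier2 = quasi_debt + gen_prov + non_bs_assets     # Tier 2, incl. revaluations
assert rc == tier1 + tier2
print(f"RC = {rc:.0f}m (Tier 1 = {tier1:.0f}m, Tier 2 = {tier2:.0f}m)")
```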
III.0.4 Capital Allocation and Risk Contributions

We now discuss the importance of allocating economic capital to different business units in a firm – or to the constituents of a 'portfolio' – and the key methodologies used to compute contributions to EC.

III.0.4.1 Capital Allocation

In addition to computing the total EC for a firm or portfolio, it is important to develop general methodologies to attribute this capital a posteriori to various 'sub-portfolios' – such as the firm's activities, business units and even individual transactions – and to allocate it a priori in an optimal fashion. From a strategic management perspective, the allocation of EC to business units, activities and transactions is an issue that is receiving significant attention in the industry today. Indeed, EC allocation down to the portfolio is required for:
- management decision support and business planning;
- performance measurement and risk-based compensation;
- pricing, profitability assessment and limits;
- building optimal risk–return portfolios and strategies, to maximise risk-adjusted returns.

There are two views prevalent among firms. Under the first, diversification benefits should not be passed down to the business units: each unit is expected to operate on a stand-alone basis. Since there are diversification benefits, the sum of the stand-alone EC for each asset or business does not equal the total portfolio EC; in the general case, it is higher. Under the second view, an 'optimal' level of group risk taking can be achieved only when diversification benefits are allocated to at least the major business units (and perhaps even to the transaction level); it is then preferable for each business unit to be assigned an EC allocation closer to its 'marginal contribution' to the total EC, which explicitly allocates the diversification benefits of the portfolio. Thus, it is important to devise a general methodology to assign capital to individual business units, and to understand how risk contribution tools can be applied to EC allocation decisions.

III.0.4.2 Risk Contribution Methodologies for EC Allocation

There is no unique method to allocate EC down a portfolio. Rather, we can classify the allocation methodologies currently used in practice into three categories: stand-alone EC contributions, incremental EC contributions, and marginal EC contributions. (The reader is cautioned that there is currently no universal terminology for these methodologies in the literature. As defined here, incremental capital (risk) contributions are sometimes also referred to as marginal capital (risk) contributions or discrete marginal capital (risk) contributions; marginal risk contributions, as termed here, are sometimes also referred to as diversified capital (risk) contributions or, more precisely, continuous marginal capital (risk) contributions – see Smithson, 2003.) Every methodology has its advantages and disadvantages, and each might be more appropriate for a particular managerial application.

III.0.4.2.1 Stand-alone EC Contributions

An individual business or sub-portfolio is assigned the amount of capital that it would consume on a stand-alone basis (e.g., if it were an independent firm). The resulting sum of stand-alone capital for the individual business units, activities or arbitrary sub-portfolios is generally greater than the total EC for the firm. While very intuitive, by construction this method does not reflect the beneficial effect of diversification.

III.0.4.2.2 Incremental EC Contributions

This method is also sometimes referred to as the discrete marginal EC allocation method. Incremental EC is calculated by taking the EC computed for the entire firm (including the business unit or sub-portfolio) and subtracting from it the EC for the firm without the business unit or sub-portfolio. This methodology thus captures exactly the amount of capital that would be released if the business unit were sold, or added if it were acquired (everything else remaining the same). Incremental EC is a natural measure for evaluating the risk of acquisitions or divestitures (see, for example, Perold, 2001 – note that the author refers to this as 'marginal' EC). While it does capture the benefits of diversification, a disadvantage of this methodology is that it is not additive: the sum of incremental EC for all the firm's business units (activities or sub-portfolios) is smaller than (or equal to) the total EC for the firm.

III.0.4.2.3 Marginal EC Contributions

It would be useful to obtain measures of risk contributions that are additive. Such measures are intended to capture the amount of the firm's total capital that should be allocated to a particular business or sub-portfolio when viewed as part of a multi-business firm. Under this method, the EC allocated to a business unit or sub-portfolio attempts to capture an appropriate amount of the risk capital that the unit contributes to the entire firm's capital requirements. Marginal risk contributions, sometimes referred to as diversified EC contributions, are specifically designed to allocate the diversification benefit among the business units and activities.
There are various methodologies that produce additive risk contributions. The most widespread and, perhaps, practical methodology is the one based on marginal risk contributions. An additive decomposition of EC is of the form

EC = Σi ECi,    (III.0.14)

where ECi denotes the EC contribution of business unit or sub-portfolio i. We can then define the percentage risk contribution of the ith business unit or sub-portfolio as

EC Contribi = (ECi / EC) × 100%.    (III.0.15)

Marginal EC contributions require the computation of the first derivative of the risk measure with respect to the size of each unit. Denoting by xi the size of the ith business unit or sub-portfolio, one can show that, for EC based on volatility, VaR or ES (more formally, whenever the risk measure is homogeneous of degree 1 and differentiable – this follows from Euler's theorem):

ECi = (∂EC(x)/∂xi) xi.    (III.0.16)

That is, an EC marginal contribution is the product of the size of business unit i and the rate of change of EC with respect to that position. This product essentially represents the rate of change of EC with respect to a small (marginal) percentage change in the size of the unit. It is important to stress that these contributions must be interpreted on a marginal basis. Marginal EC contributions are very general and are best suited to understanding the amount of capital consumed by an instrument or sub-portfolio (which really is small compared to the whole firm). However, marginal EC is likely to be suboptimal for analysing the addition or removal of an entire business, which is not marginal to the firm.

While most explanations of this risk decomposition methodology are based on the use of volatility (or standard deviation) as the risk measure, the methodology is quite general and applicable to other risk measures. When the risk measure used is volatility, marginal contributions can be computed analytically and are simply given by the covariance of the losses of that business unit with the overall portfolio, divided by the volatility of losses (see Praschnik et al., 2001; Smithson, 2003):

EC Contribi = Cov(Li, L)/σ(L).

Volatility-based contributions are common practice today (see Smithson, 2003). However, such allocations can be ineffective for credit and operational risk, given the non-normality of their loss distributions. Industry best practices are shifting towards allocations based on VaR or expected shortfall (ES) – see Chapters III.A.3 and III.B.5. (While VaR is defined as a loss which cannot be exceeded x% of the time – a quantile of the loss distribution – ES is commonly defined as the expected loss conditional on the loss reaching at least that quantile, i.e., the average of the largest losses in the tail; ES is sometimes also referred to as 'tail conditional expectation' or 'conditional VaR', and is a coherent risk measure in the sense of Artzner et al., 1999.) The general theory behind the definition and computation of these derivatives in terms of quantile measures (VaR, ES) has been developed in the last few years (see Gouriéroux et al., 2000; Tasche, 2000, 2002; see also Chapter III.B.5). For some discussions of the use of ES and VaR for capital allocation, see Kalkbrener et al. (2004) and Mausser and Rosen (2004).

Example III.0.2: Capital Allocation Methods

Table III.0.1 illustrates the different capital allocation methods for a simple firm consisting of three business lines. Assume, for simplicity, that the total losses over one year for each business are normally distributed and that they are uncorrelated. The stand-alone capital of each line is, respectively, $50 million, $30 million and $20 million, so the total stand-alone capital is $100 million. The total economic capital of the firm is simply given by

EC = √(EC1² + EC2² + EC3²) = $61.64 million.

Table III.0.1: Capital allocation methods for a simple firm

              Stand-alone    % Stand-alone    Incremental    % Incremental    Marginal       % Marginal
              capital ($m)   contributions    capital ($m)   contributions    capital ($m)   contributions
Business 1        50.00          50.0%            25.59          69.7%            40.56          65.8%
Business 2        30.00          30.0%             7.79          21.2%            14.60          23.7%
Business 3        20.00          20.0%             3.33           9.1%             6.49          10.5%
Total            100.00         100.0%            36.71         100.0%            61.64         100.0%
% of EC          162.2%                            59.6%                          100.0%

The last line of this table gives the total capital under each method as a percentage of EC. Notice that the total stand-alone capital represents a 62% increase over EC. Columns 4 and 5 of the table give the incremental capital for each business (in money terms and percentage contributions).
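The figures in Table III.0.1 can be reproduced with a few lines of Python; for uncorrelated, normally distributed losses the covariance-based marginal (Euler) contribution of unit i reduces to its stand-alone capital squared divided by the total EC:

```python
import numpy as np

# Reproduces Table III.0.1: stand-alone, incremental and marginal (Euler)
# contributions for three uncorrelated, normally distributed business lines.
sa = np.array([50.0, 30.0, 20.0])            # stand-alone EC ($m)

ec = np.sqrt((sa**2).sum())                  # total EC: 61.64
incremental = np.array([ec - np.sqrt((sa**2).sum() - s**2) for s in sa])
marginal = sa**2 / ec                        # Cov(L_i, L)/sigma(L) = sa_i^2/EC here

for name, contrib in [("stand-alone", sa), ("incremental", incremental),
                      ("marginal", marginal)]:
    print(f"{name:>12}: {np.round(contrib, 2)}  sum = {contrib.sum():.2f}")
# Only the marginal (Euler) contributions add up exactly to the total EC.
```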
The sum of incremental capital is only 59.6% of EC. Finally, the last two columns give the marginal contributions (in money and percentage terms); marginal contributions add up to the total EC. Note in particular that the stand-alone percentage contributions for each business differ meaningfully from the marginal contributions: while the largest business (business 1) contributes one half of the stand-alone capital, it represents almost two-thirds of the EC on a marginal basis. This can be understood from the fact that, as the biggest unit, increasing its share of the portfolio also marginally increases the overall risk; to diversify risk in an optimal way, one would rather increase the share of the smaller units.

Proponents of marginal EC approaches point out that incremental EC always under-allocates total firm EC and that, even if the incremental EC allocations were scaled up, the signals are potentially misleading. Marginal EC contributions also naturally explain how to move EC from one business to another (on a marginal basis). But, as noted above, marginal EC is likely to be suboptimal for analysing the addition or removal of an entire business, which is not marginal to the firm.

III.0.4.2.4 Alternative Methods for Additive Contributions

(This section is added for completeness and is not mandatory.) Recently there has been discussion of several alternative methods arising from game theory. Game-theoretic tools are commonly applied to problems involving the attribution of cost among a group and, hence, can potentially offer a useful framework for identifying 'fair' EC attributions: the EC assigned to a unit becomes a cost, and each unit attempts to minimise its cost. An example of these tools is the Shapley method, which describes how coalitions can be formed so that a group of units benefits more as a group than if each works separately. A player, of course, leaves the coalition if it is attributed a larger share of EC than its own stand-alone EC. This method is computationally intensive, and may be impractical for problems with even a small number of business units. A variant called the Aumann–Shapley method further allows for 'fractional' units and requires less computation; thus, it is potentially more practical. Under most (but not all) conditions, both these methods yield similar results to marginal contributions. These methods allocate the diversification benefits across the portfolio and yield additive risk contributions (see Denault, 2001; Koyluoglu and Stoker, 2002). While they are today receiving some academic attention, they are mostly not yet used in practice by financial institutions.

III.0.5 RAROC and Risk-Adjusted Performance

We now describe the objectives of risk-adjusted performance measurement, the role of capital allocation, and the basic principles of risk-adjusted return on capital.

III.0.5.1 Objectives of RAPM

Banks traditionally measured their performance relative to their balance sheet assets, either simply with respect to overall asset size, or by bringing in the notion of profitability with respect to returns on assets (ROA).
There are a number of issues that make these approaches far from ideal, of which two are quite fundamental. The first issue is that, by focusing solely on assets, the performance impact of financial leverage is ignored as it pertains to managing risk and return for shareholders. The second fundamental issue is that a simple ROA measure does not distinguish between different classes of assets with varying levels of risk. Recall that balance sheet assets are typically book-value based and not market-value based; in addition, today many banks have off-balance-sheet exposures that are ignored or, at least, not well captured by the assets as represented on a typical balance sheet.

Early attempts to address these issues focused on shifting to performance measures that are defined relative to capital rather than to assets. This approach addresses the first fundamental issue, as return on equity (ROE) captures the impact of financial leverage as well as, in theory, the impact of non-balance-sheet assets. Both EC models and regulatory capital models attempt to address the second issue by focusing on market valuation directly, as in the case of economic models, or by adjusting book-value measures, as in the case of regulatory models.

In most of the more sophisticated applications of RAPM, asset riskiness is modelled explicitly, either with respect to a distribution of default-adjusted returns directly, or with respect to a distribution of default losses that is then netted against nominal returns on assets. From these distributions, expected, worst-case and x% confidence interval default losses (or default-adjusted returns) can be defined and applied to either the return measures or the underlying capital measure.

The objective of a risk-adjusted performance measure is to define a consistent metric that spans all asset and risk classes, thereby providing an 'apples to apples' benchmark for evaluating the performance of alternative business opportunities. RAPMs thus become an ideal tool for capital allocation purposes. RAPMs come in many different forms, but they can all be loosely defined as a return on capital whereby the measurement of asset riskiness is a key component of the derivation of the formula. Two broad classes of RAPM measures are risk-adjusted return on capital (RAROC) and return on risk-adjusted capital (RORAC). The former applies the risk adjustment to the numerator, while the latter applies the risk adjustment to the denominator. Often the distinction between these approaches becomes blurred, with risk adjustments occurring in both the numerator and the denominator, prompting the increasing usage of the term risk-adjusted return on risk-adjusted capital (RARORAC). Common to all these approaches, nonetheless, is the principle of incorporating the joint default likelihood of a bank's obligors explicitly into a bank's RAPM.
III.0.5.2 Mechanics of RAROC

All RAROC models follow the simplified general formula

RAROC = (revenues − costs − expected losses) / capital.    (III.0.17)

Revenues include all the nominal returns on assets and all other sources of revenue, including service fees, while costs include all returns to the liability holders of the bank and all other sources of costs, including general overhead. Expected losses would be determined by a risk assessment of the asset base. Note that, in practice, capital capturing losses arising from all sources – including credit risk, market risk and operational risk – would normally be incorporated into the RAROC measure.

Example III.0.3: A Simple Model for RAROC

We return to Example III.0.1, which considered a bank whose only activities were the taking in of deposits and the extending of credit: a firm with liabilities of D0 = $92 million in deposits at a cost of debt of rD = 5%, which have been invested in A0 = $100 million of assets (40% at a nominal return of 6.75% and 60% at a nominal return of 7%). The weighted average nominal return across the $100 million in total assets is rA = 6.9% (a compounded spread of 1.81%). The current capital for this firm was calculated as E0 = $8 million. In that case the above equation can be rewritten as

RAROC = (A0 rA − D0 rD − expected losses) / EC0.    (III.0.18)

While a RAPM measure like RAROC is certainly a better indicator of a firm's overall performance relative to other firms than a more traditional ROA or ROE approach, the true benefit for an individual firm is that it provides a consistent metric to evaluate the performance of the firm's portfolio of assets. Taken one step further, it also provides a benchmark for making allocation decisions, while appropriating scarce capital amongst possible new investment opportunities.
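As a back-of-the-envelope check of equation (III.0.18), the sketch below uses the figures of Examples III.0.1 and III.0.3, together with a hypothetical expected-loss assumption of $0.5 million (the example does not pin this number down):

```python
# Equation (III.0.18) with the Example III.0.1/III.0.3 figures; the expected
# loss figure is an illustrative assumption.
A0, D0, EC0 = 100.0, 92.0, 13.46     # $m
rA, rD = 0.069, 0.05
expected_losses = 0.5                # assumed for illustration

raroc = (A0 * rA - D0 * rD - expected_losses) / EC0
print(f"RAROC = {raroc:.1%}")        # (6.9 - 4.6 - 0.5)/13.46 ~ 13.4%
```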
The firm's balance sheet can be conceptually decomposed into two balance sheets, one associated with each asset class. The amount of debt each asset class can support is determined by the amount of EC that must be allocated to that asset class, regardless of their unique levels of risk. Assuming the total worst-case loss in value associated with asset class 1 is 18%, the economic capital EC1,0 associated with asset class 1, A1, can be determined in a similar manner to the EC for the entire firm:

EC1,0 = A1,0{1 − (1 + r1,A)(1 − 0.18)/(1 + rD)} = 60{1 − 1.0675 × 0.82/1.05} = $9.86 million.

Likewise, assuming the total worst-case loss in value associated with asset class 2 is 10.5%, the EC associated with asset class 2, A2, can be determined from equation (III.0.3) as

EC2,0 = A2,0{1 − (1 + r2,A)(1 − 0.105)/(1 + rD)} = 40{1 − 1.07 × 0.895/1.05} = $3.60 million.

For illustrative purposes, the current balance sheet can therefore be decomposed on the basis of asset class as shown in Table III.0.2.

Table III.0.2: Current balance sheet

          Asset Class 1   Asset Class 2   Total
Assets        60              40           100
Debt          50.14           36.40         86.54
EC             9.86            3.60         13.46

Table III.0.3 illustrates the expected balance sheet for each asset class at year end and, thus, the RAROC – the return on EC for each asset – as measured by the change in the equity position over the year (RAROC here is calculated as EC1/EC0 − 1).

Table III.0.3: Expected balance sheet

          Asset Class 1   Asset Class 2   Total
Assets        63.20           42.10        105.30
Debt          52.65           38.22         90.87
EC            10.55            3.88         14.43
RAROC          7.01%           7.68%         7.2%

In this example, while asset 1 has the higher nominal return, its RAROC is in fact slightly lower than that of asset 2, as proportionately more EC must be allocated to it to compensate for its higher risk. In other words, the excess nominal return of asset 1 over asset 2 is not quite enough to compensate for its increased risk; therefore, on a risk-adjusted basis, asset 1 is a less desirable investment. Of course, the change in equity must reconcile with the more familiar relationship

EC1 − EC0 = (A1 − D1) − (A0 − D0) = A0 rA − D0 rD − expected losses.

Note that in this simple example the sum of the EC of each asset class equals the EC of the firm as a whole. This implies that no diversification exists between the two asset classes, because the risk of each asset class on a stand-alone basis is equal to each asset class's contribution to the overall risk of the firm.

Example III.0.4: In the simple example above, the capital contribution of each asset was measured as the capital each asset consumes in the scenario that produces the 'extreme' 1% loss. Both asset classes incur a 12% loss in this scenario. This is actually consistent with the marginal risk allocation methodology: in this simple example, the marginal contributions coincide with the stand-alone contributions, given the discrete nature of the problem and the high correlation of the asset classes implied by the scenarios. (One can show that, for quantile-based measures such as VaR, the derivative in equation (III.0.16), which leads to the marginal capital allocated to a given asset class or sub-portfolio, is given by the expected losses of that asset conditional on the total portfolio losses being equal to VaR – that is, the expected losses corresponding to all scenarios which lead to the given VaR; see Gouriéroux et al., 2000.)

III.0.5.3 RAROC and Capital Allocation Methodologies

When RAROC is used to measure the performance of an asset class or business, or to allocate capital, it is important to highlight that the denominator measures the capital contribution of the asset or business to the overall portfolio. Hence, it is directly linked to the capital allocation methodology chosen by the institution (see the previous section). If a firm uses stand-alone risk contributions, for example, the measure of performance will not account for the diversification opportunities that a given asset or business brings to the overall portfolio. Thus, it is beneficial to use allocation methods that account for diversification, such as the marginal risk contributions.
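The quantile-derivative result cited in Example III.0.4 – that marginal VaR contributions equal expected losses conditional on the total loss being at the VaR – can be illustrated by simulation. The two-risk portfolio below is hypothetical, and the narrow band around the VaR is a standard Monte Carlo device for approximating the conditional expectation:

```python
import numpy as np

# Monte Carlo sketch of the Gourieroux et al. (2000) result: marginal VaR
# contributions as conditional expectations. Portfolio is hypothetical.
rng = np.random.default_rng(7)
n = 500_000
cov = [[4.0, 1.5], [1.5, 2.25]]                          # two loss drivers ($m)
losses = rng.multivariate_normal([0.0, 0.0], cov, size=n)
total = losses.sum(axis=1)

var99 = np.quantile(total, 0.99)
band = (total > var99 - 0.05) & (total < var99 + 0.05)   # scenarios 'at' the VaR
contrib = losses[band].mean(axis=0)                      # conditional expectations
print(f"VaR(99%) = {var99:.2f}, contributions = {contrib.round(2)}, "
      f"sum = {contrib.sum():.2f}")                      # contributions sum to ~VaR
```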
III.0.6 Summary and Conclusions

The primary role of capital in a firm, apart from the transfer of ownership, is to act as a buffer to absorb large unexpected losses, protect depositors and other claim holders, and give external investors and rating agencies enough confidence in the financial health and viability of the firm. We distinguish between three different types of capital: the actual, physical capital that a firm holds (book capital); the economic capital associated with a targeted level of solvency and assessed through the use of internal models; and, for banks, the minimum capital imposed by regulatory authorities (regulatory capital). In practice, many firms hold book capital in excess of the required economic and regulatory capital. This reflects some historical and practical business considerations and a more conservative view on the applicability of the models.

Regulatory capital and economic capital have differed substantially in the past, particularly for credit risk (and regulatory capital under Basel I did not cover operational risk at all). The new Basel II Accord for banking regulation has introduced a closer alignment of regulatory capital with economic capital and current best-practice risk management, by introducing operational risk capital and allowing the use of internal models for both credit risk and operational risk. This results in minimum capital requirements that are more risk-sensitive. With its three-pillar foundation, Basel II focuses not only on the computation of regulatory capital, but also on a holistic approach to managing risk at the enterprise level.

Economic capital is a powerful business management tool, since it provides a consistent metric for risk aggregation, performance measurement, and asset and business allocation. Economic capital management tools generally require a bottom-up approach for estimation and, in order to support a practical, risk-sensitive capital allocation methodology, the allocation of EC to business units, activities and assets. Finally, the objective of a risk-adjusted performance measure is to define a consistent metric that spans all asset and risk classes, thereby providing an 'apples to apples' benchmark for evaluating the performance of alternative business opportunities.

References

Artzner, P, Delbaen, F, Eber, J-M, and Heath, D (1999) Coherent measures of risk. Mathematical Finance, 9(3), pp. 203–228.

Basel Committee on Banking Supervision (1988) International convergence of capital measurement and capital standards. Available at http://www.bis.org

Basel Committee on Banking Supervision (1995) Basel capital accord: treatment of potential exposure for off-balance-sheet items. Available at http://www.bis.org

Basel Committee on Banking Supervision (1996) Overview of the amendment to the capital accord to incorporate market risk. Available at http://www.bis.org

Basel Committee on Banking Supervision (2003a) The new Basel capital accord: Consultative document. Available at http://www.bis.org

Basel Committee on Banking Supervision (2003b) Trends in risk integration and aggregation. Available at http://www.bis.org

Basel Committee on Banking Supervision (2004) International convergence of capital measurement and capital standards: A revised framework. Available at http://www.bis.org
Denault, M (2001) Coherent allocation of risk capital. Journal of Risk, 4(1), pp. 1–34.

Gouriéroux, C, Laurent, J-P, and Scaillet, O (2000) Sensitivity analysis of values at risk. Journal of Empirical Finance, 7(3–4), pp. 225–245.

Kalkbrener, M, Lotter, H, and Overbeck, L (2004) Sensible and efficient capital allocation for credit portfolios. Risk, January, pp. S19–S24.

Koyluoglu, H, and Stoker, J (2002) Honour your contribution. Risk, April, pp. 90–94.

Kupiec, P (2002) Calibrating your intuition: Capital allocation for market and credit risk. Working paper, IMF. Available at http://www.gloriamundi.org/picsresources/pkcyi.pdf

Matten, C (2000) Managing Bank Capital. Chichester: Wiley.

Mausser, H, and Rosen, D (2004) Scenario-based risk management tools. In S W Wallace and W T Ziemba (eds), Applications of Stochastic Programming. Philadelphia: SIAM.

Merton, R C, and Perold, A F (1993) Theory of risk capital in financial firms. Journal of Applied Corporate Finance, 6(Fall), pp. 16–32.

Perold, A F (2001) Capital allocation in financial firms. Harvard Business School Working Paper 98-072. Available at http://papers.ssrn.com/paper.taf?abstract_id=267282

Praschnik, J, Hayt, G, and Principato, A (2001) Calculating the contribution. Risk, 14(10), pp. S25–S27.

Saita, F (2003) Measuring risk-adjusted performances for credit risk. Working Paper 89/03. Available at http://www.sdabocconi.it/it/ricerca/pubblicazioni/dir2003.html

Smithson, C (2003) Economic capital – how much do you really need? Risk, November, pp. 60–63.

Tasche, D (2000) Conditional expectation as quantile derivative. Working paper, Technische Universität München.

Tasche, D (2002) Expected shortfall and beyond. Working paper, Technische Universität München.
III.A.1 Market Risk Management

Jacques Pézier (Visiting Professor, ISMA Centre, University of Reading, UK)

III.A.1.1 Introduction

What is market risk and whom does it concern? What do we mean by market risk management, and what does a market risk manager do in a day at the office? These are not theoretical questions with only right or wrong answers; they are practical questions that every financial as well as non-financial firm must grapple with, and what appear to be reasonable answers depends very much on the activity, culture, environment, objectives and organisation of each firm. The difficulties faced in carrying out these tasks vary according to the business. We therefore examine three typical activities – fund management, banking and manufacturing – to illustrate a broad spectrum of problems and state-of-the-art approaches. In this chapter students are introduced to the four major tasks of risk management applied to market risks, namely the identification, assessment, monitoring and control/mitigation of market risks. We aim to develop a conceptual and largely qualitative understanding of the topic. More detailed quantitative analyses are given in subsequent chapters.

III.A.1.2 Market Risk

To facilitate the analysis and understanding of risks faced by financial firms it is common practice to classify them into major types according to their main causes. Broadly speaking, banking risks are typically classified as being either market, credit or operational in origin. Market risk refers to changes in the value of financial instruments or contracts held by a firm due to unpredictable fluctuations in prices of traded assets and commodities, as well as fluctuations in interest and exchange rates and other market indices. Credit risk refers to changes in the value of assets due to changes in the creditworthiness of an obligor and, at the limit, losses due to an obligor failing to meet its commitments. By contrast, operational risk is defined as the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events.

It is not clear to what extent market risks should be, or can be, considered for less liquid assets such as real estate or banking loans. Accountants usually shy away from attributing 'fair values' to such assets in the absence of reliable and objective market values, and consequently market risks are difficult to assess on illiquid assets. However, when the core activity of a business is to hold portfolios of illiquid assets, it would be dangerous to ignore their potential change in value.

Banking supervisors have taken a special interest in codifying risks and in setting standards for their assessment (see Basel Committee on Banking Supervision, BCBS, 1996, 2004a). Their purpose is essentially prudential: to strengthen the soundness and stability of the international banking system whilst preserving fair competition among banks. To this end they have designed a set of minimum regulatory capital requirements for all types of traded assets, including some illiquid ones, based on detailed definitions and assessments of risk. By and large, banking supervisors have erred on the side of objectivity rather than comprehensiveness when defining market risks.

Banks must allocate their assets to either a banking book or a trading book. A trading book consists of positions in financial instruments and commodities held either with a trading intent or to hedge other elements of the trading book (see BCBS, 2004a, paragraphs 4 and 720). To be able to receive trading book capital treatment for eligible positions, some further basic requirements must be met, such as clearly documented trading policies, daily mark-to-market (or mark-to-model) valuation of positions and daily monitoring of position limits. All other assets (e.g., loans) must, by default, be placed in the banking book, bar exceptional circumstances. To remain prudent, banking regulators have always inflated the capital treatment of credit risks in the banking book to cover for hidden market risks.

Insurance companies are subject to different solvency tests; harmonisation of insurance company solvency tests with minimum capital requirements for banks is a long-term aim for regulators. Pension funds and other funds designed to meet strict liabilities are also subject to solvency tests by the relevant regulatory authorities. Outside financial services there are no prudential regulations offering guidelines for the management of market risk, but market risk nonetheless remains a major determinant in the success or failure of most economic activities and the welfare of people in free market economies. Suffice it to observe how variations in the price of energy affect manufacturing and transportation,
III.A.1.2.2 Distinguishing Market Risk from Other Risks

Some examples will illustrate how market, credit and operational risks are interrelated. For an example on a micro-economic scale, consider a firm exposed to exchange-rate fluctuations – a market risk. It may seek cover by entering into a forward exchange-rate agreement with a bank, but it thereby takes a credit exposure on the bank if the bank has to pay the firm under the contract. Or consider a bank that makes a floating rate loan to a firm, thus taking primarily a credit risk on the firm. The bank may seek some degree of credit cover by asking for securities or property to be placed as collateral, but the value of the collateral will be subject to market risk.

On a macro-economic scale, consider the technology bubble that burst at the turn of the millennium. In 1996 Alan Greenspan, the Fed Chairman, described as 'irrational exuberance' the expectations placed on new technologies; indeed, they did not perform as well or as rapidly as predicted – a business or operational risk problem. A wave of corporate failures followed – a credit risk – and the Fed as well as many other monetary authorities across the world reacted by lowering interest rates – a market risk for bond portfolio holders.

What should become clear is that, satisfying as it may be to categorise risks according to causes, there is no classification system by which every risk would fall into one causal category and one category only, nor any management system that could control one type of risk without affecting others. Distinguishing market risks from other risks, and managing them separately from and independently of other risks and profit considerations, is therefore only valid up to a point. In any organisation, a balance must be struck between the degree of specialisation of risk management functions, so that market risks can be distinguished from other risks and managed separately, and the degree of interaction and coordination between functions, so that they can operate coherently.

III.A.1.3 Market Risk Management Tasks

The Basel Committee on Banking Supervision has given a great deal of thought to the role and organisation of a risk management function. It distinguishes four main tasks that it defines as the identification, assessment, monitoring and control/mitigation of risks. 19 These tasks are relevant for market risks, as they are for most other risk types, and for non-financial institutions as well as for banks.

19 See publications on the New Basel Accord, in particular BCBS (2004a, paragraphs 725–745 and 663(a)).
Identification is the necessary first step. Real-world problems do not come neatly defined as in textbooks. In the end, risks have to be recognised and delineated, but that may be less obvious than first thought. Exposures to market risks can easily be overlooked because of either over-familiarity (risks we have always lived with without doing anything about them) or, at the other extreme, ignorance of new risks, the latter being the most dangerous. The first case is all too common. For example, many corporates do not hedge currency exposures because they are not sure how to assess them or how to hedge them. Of course, it may be easier for a corporate treasurer to remain passive and blame the currency markets for a loss than to be active and have to explain why a loss was made on a hedge, the two circumstances having about equal probabilities. On the other hand, lack of familiarity may lead to over-cautious reactions: investing in foreign equity markets, for example, is generally considered more risky than investing in the domestic equity market, even when the foreign securities are denominated in the same currency as the domestic market. But not recognising a new risk or combination of risks may be the greatest danger. 20 The managers of the hedge fund LTCM, including two Nobel prize laureates in economics and finance, knew almost everything that could be known about market risks, but they were caught unawares by an unusual combination of events: the repercussions of the Russian bond crises of August 1998, diminished liquidity in major bond markets because of the withdrawal of a large market maker, and difficulty in raising new funds having just returned some capital to shareholders.

Assessment is the second step. The word initially chosen by the BCBS was 'measurement', but it has wisely been replaced by 'assessment' in more recent publications. Indeed, risks are not like objects that can be measured objectively and accurately with a simple measuring tape. Risks are about future unexpected gains or losses. The term 'assessment' reflects the need for a statistical model, a coherent and relevant set of assumptions and parameters, some being supported by past evidence (e.g. former loss events) and others being chosen for the purpose of the exercise (time horizon, trading strategy, etc.). Banking supervisors have set qualitative and quantitative standards for the assessment of market risks to suit their aim, that is, the determination of prudent minimum capital requirements for banks; but banks and other firms should take a wider view to choose standards that suit their own situation and objectives. In the next sections we shall illustrate how different assessment standards may be suitable for different business lines.

20 Readers may remember how the US Secretary of Defence, Donald Rumsfeld, was once derided for trying to explain at a press conference that there are knowns and unknowns, and that among the unknowns there are known unknowns and unknown unknowns. This is in fact old military lore: in the US Navy unknown unknowns are colloquially called 'unk-unks' and said to rhyme with sunk-sunk. Socrates had put this point across more elegantly when he said: 'Wisest is the man who knows what he does not know.'
Monitoring refers to the updating and reporting of relevant information. Risks themselves cannot be monitored; exposures and results can be monitored, and a key responsibility of the risk manager is to verify that limits are not exceeded or, if they are, to blow the whistle. Monitoring is particularly important when hedging strategies are in place, so that one can verify the efficiency of these strategies and update the corresponding risk models. Even in a steady-state situation more information can be collected over time to develop a better understanding of market risks. Firms, markets, competition, strategies, technology and regulation all evolve, and therefore market risks also change over time.

Finally, 'control' has been replaced by 'control/mitigation' in the latest publications of the BCBS. Mitigation has a wider meaning than control. Control gives too much the impression that market risks are intrinsically bad and therefore must be subject to limits. Rather, mitigation indicates that (i) there is a trade-off between risk and return and an optimal balance should be sought, and (ii) there are ways to manage market risks actively that need to be investigated.

How each of these tasks should be carried out by market risk managers, and what specific problems they may encounter, depend upon the business at hand. It would be ineffectual to give general answers to these questions. Rather, we shall explore three business types and show how the three tasks of market risk identification, assessment and control/mitigation vary between them. 21 But we shall not go into detailed quantitative techniques; these are covered in the following chapters. First, a few comments about how the risk management function should be organised.

III.A.1.4 The Organisation of Market Risk Management

Banking regulators have put forward a few general recommendations for the organisation of the risk management function. 22 They reflect a general consensus in the banking industry and are probably valid as well for many other businesses, but they should be frequently reassessed.

(i) The risk management function should be part of a risk management framework and policies agreed by the board of directors. The board and senior managers should be actively involved in its oversight.

(ii) The risk management function should operate independently of the risk/profit-generating units; in particular, it should have its own independent sources of information and means of analysis.
(iii) The risk management function should produce regular reports of exposures and risks to line management, senior management and the board of directors. Non-compliance with the risk management policies should be communicated immediately.

(iv) The risk management process should be well documented and audited at regular time intervals by both internal and external auditors.

It is clear from these recommendations that the risk management function should be separate from and independent of risk-taking line management functions in the front office and support functions in the back office, but should be in close communication with them. This is why most banks locate the risk management function in a separate 'middle office'. The middle office must receive information on exposures from the front office in a timely fashion, but it should use its own independent information sources for prices and derived parameters such as volatilities and correlations, and it should use its own models to assess and forecast market risks. The middle office should produce regular (at least daily for banks) market risk reports for the front office and for senior management, containing detailed and aggregate risk estimates and comparisons against limits. These reports must be immediately verified and approved by designated front office and senior managers. These market risk reports are usually combined with credit exposure reports and profit attribution analyses, where exceptional gains or losses as well as potential risks are explained. The middle office is also often responsible for producing statutory risk reports for banking supervisors. It may also be required to calculate provisions and deferred earnings, to design and implement stress test scenarios, to establish controls and procedures for new products, and to verify valuation methodologies and models used by traders.

The risk management function in financial firms is also normally in charge of preparing market risk management policies – to be submitted for the approval of the board – and of designing, assessing and recommending hedging strategies. In a few instances, market risk managers may be asked to implement global hedging policies that would not sit naturally within any existing department. More recently, market risk managers have been asked to contribute to the analysis of the optimal level and structure of their firm's capital (as compared to the evaluation of minimum regulatory capital) and to risk budgeting (also called 'economic capital allocation') with the aim of improving risk-adjusted return on capital.

The risk management function should be given sufficient resources to carry out its tasks. Proper resources, independence and lines of communication to senior management – and ultimately to an executive director on the main board – are crucial for the integrity, credibility and efficiency of the risk management function in financial institutions that aim to derive a profit from taking market risks.

21 We leave aside in this chapter the more routine and easily understood 'monitoring' task.
22 The following four bullet points are not direct quotes but a summary by the author of recommendations that have appeared in various BCBS publications.
Supporting and empowering the risk management function are lesser problems in firms that do not seek market risks but would rather avoid them. However, unless there is a clearly defined and adequately supported market risk management function in these firms, market risks may not be properly appreciated and managed.

III.A.1.5 Market Risk Management in Fund Management

III.A.1.5.1 Market Risk in Fund Management

The core activity of fund management is to take market risks 23 with the expectation of generating adequate returns. The market risks are borne by the investors. Fund managers themselves are only indirectly affected by market losses: their income is usually a set percentage of the value of assets under management (plus some participation in profits in the case of hedge funds), so it is not directly affected by losses. But the reputation of fund managers, and therefore their ability to retain existing investors and to attract new ones, depends on their ability to manage their risks (and their clients).

In most countries traditional funds are subject to strict regulations (about the type of securities in which they can invest, disclosure requirements, etc.) that are aimed at protecting investors, leaving to investors the choice to allocate their savings among funds and manage their own portfolio diversification. That is particularly so for pension funds, which should provide long-term security to their members and, in return, enjoy certain tax advantages. Fund managers also often choose to limit and specialise themselves further, in some cases according to market sectors, in others according to investment strategies; specialist funds may describe themselves as 'UK equity', 'capital guarantee' or 'index tracker' funds, for example. But specialisation and constraints can only limit potential returns, so, bar special cases, 24 the fund managers' objective will be to maximise some risk-adjusted performance measure.

In the USA, starting in the 1970s, private investment pools were created to offer to wealthy individuals, who might be more willing to take risks, the promise of greater returns by avoiding traditional fund constraints and regulations – for example, they can use short sales, derivative products and leverage. Interestingly, these new, unfettered funds became known as 'hedge funds' because many of them used trading strategies based on spreads between long and short positions rather than pure directional bets. Their popularity grew in the mid-1990s, and even more so when the technology bubble burst and traditional funds' returns inevitably tumbled. Total assets under hedge fund management may soon pass the trillion dollar mark. 25

23 Funds also take credit and other risks but, bar special cases (e.g. a fund investing in a few high-yield corporate bonds or a certain emerging market), most funds invest in a large number of liquid, good-quality securities and therefore credit risks are less important than market risks.
24 In some cases returns must be sufficient to meet certain liabilities (e.g. pensions).
25 By comparison, in mid-2004 total worldwide bond and equity markets were valued at about $70 trillion; more than half these assets were managed by institutions, and mutual funds alone managed about $7 trillion of assets.
III.A.1.5.2 Identification

It should be an easy task for fund managers to identify market risks because they normally have chosen deliberately to take those risks. Funds take market risks for the potential benefit of their investors. It is therefore crucial that (i) fund managers explain to their clients the risks they are taking, (ii) clients agree formally the terms and conditions of their investment, and (iii) fund managers keep to the terms that have been agreed.

Nonetheless some risks may be overlooked, especially in funds following sophisticated strategies. For example, funds following spread or arbitrage type strategies will have reduced primary directional risks but will have increased exposures to secondary risks – for example, the spread between two similar securities, say A and B shares of a company. Likewise, if a fund is allowed to short securities, the uncertainty in the repo cost incurred over the long term may be quite considerable, as well as difficult to estimate. Liquidity risk is another relevant concern. 26 Some assets may not be bought or sold at the anticipated price because the transaction is too large compared to the market appetite. Traditional funds are bound by regulations to hold only highly liquid positions. Hedge funds, on the other hand, do not have such constraints and may end up holding relatively large positions in specific securities. If in addition they are highly leveraged, they can easily be thrown into a momentary cash-flow squeeze, or even a terminal problem, by a liquidity crisis. We have already referred to LTCM as an example. Funds – particularly hedge funds – may also be drastically affected by legal, tax or regulatory changes. It would not be much consolation to decide that such events are operational rather than market risks if they have not been foreseen.

When funds have to meet specific liabilities – and many do 27 – managers seek to maintain a stable surplus of assets over liabilities and should therefore be concerned by possible market risks on the liability as well as on the asset side. Actuarial practices and accounting standards have generally overlooked or hidden these risks in the past, but new rules are now coming into effect that bring them to the fore. For example, in the United Kingdom, Financial Reporting Standard 17 (FRS 17) prescribes 28 that assets and liabilities in company pension schemes be immediately recognised on the company balance sheet at their market value for assets, or at their present value based on relevant gilt rates for liabilities.

26 Liquidity risks deserve to be analysed separately from market risks, but the two are closely related. The liquidity of a security can be characterised by its average daily trading volume. Usually, the bid–offer spread increases rapidly with the size of a transaction relative to the average daily trading volume when that fraction is significant. Exceptionally, there are securities that do not trade regularly and yet can be traded in large single blocks without putting undue pressure on their price. This characteristic is commonly referred to as market 'depth'.
27 For example, funds supporting defined benefit pension schemes, insurance policies (life or property and casualty) or backing the issuance of guaranteed investment contracts (GIC).
28 FRS 17 becomes mandatory in the UK for accounting periods ending on or after January 1, 2005, at the same time as the new International Accounting Standards (IAS) become mandatory for companies listed on European Union stock exchanges. FRS 17 stipulates immediate recognition of gains and losses on a company pension scheme, but in a secondary statement of gains and losses called 'Total Recognised Gains and Losses' rather than in the Profit and Loss account. IAS 19, the international standard relative to employee benefits, has moved in the direction of FRS 17. Financial Accounting Statement No. 87 (FAS 87), the equivalent statement under U.S. generally accepted accounting principles (U.S. GAAP) issued in 1985, lags behind IAS 19 and FRS 17 both in the application of fair valuation and in the rapid recognition of gains and losses.
III.A.1.5.3 Assessment

The assessment of market risk is now a very well-developed activity in the fund management industry: if not always done thoroughly ex ante by fund managers, it is certainly done ex post by a number of analysts in order to compare the so-called risk-adjusted performance of funds. Ex-post analyses are usually pure statistical analyses of time series of returns. They produce estimates of return distributions. These estimates are then fed into risk-adjusted performance measures (RAPMs), the most common RAPM being the Sharpe ratio, or ratio of expected excess return relative to the risk-free interest rate divided by the standard deviation of return, both on an annualised basis. There are some simple arguments why investors should prefer funds with the highest Sharpe ratios. However, Sharpe ratios may lead to unwarranted conclusions if applied to the comparison of funds with significantly different return distributions. For example, a fund selling out-of-the-money options, or implementing a dynamic strategy with similar consequences, should exhibit a significant downward skewness and excess kurtosis of long-term returns, whereas a well-diversified traditional fund holding long security positions only may be expected to exhibit approximately log-normally distributed returns. The Sharpe ratio would be inadequate to compare the performance of these two funds, but a generalised Sharpe ratio or some other RAPM accounting for skewness and kurtosis might do.

Because many traditional funds are limited in their choice of securities and/or investment strategies, they prefer to be judged on relative rather than absolute performance. 29 Relative performance has become part and parcel of performance assessment: comparison to peers or to a chosen benchmark, rather than absolute performance, is seen as a clear indicator of skills and a powerful source of motivation. Depending on their strategy, fund managers choose or create a benchmark and estimate their risk-adjusted performance relative to that benchmark. The standard deviation of returns relative to the benchmark is called the tracking error, and the RAPM of choice is then the ratio of the average excess return relative to the benchmark over the tracking error; it is called the 'information ratio' or 'appraisal ratio'.

29 Although, as Warren Buffett said, 'You cannot eat a relative performance sandwich'.
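To make these definitions concrete, here is a minimal sketch in Python of the measures above. The monthly return series, the 4% risk-free rate and the annualisation conventions are illustrative assumptions of mine, not figures from the text.

import numpy as np

# Hypothetical monthly returns for a fund and its benchmark (illustrative only)
fund = np.array([0.012, -0.004, 0.021, 0.008, -0.010, 0.015,
                 0.007, 0.003, -0.002, 0.011, 0.006, 0.009])
bench = np.array([0.010, -0.006, 0.018, 0.007, -0.012, 0.013,
                  0.006, 0.004, -0.004, 0.009, 0.005, 0.008])
rf_monthly = 0.04 / 12            # assumed risk-free rate, 4% per annum

# Sharpe ratio: annualised mean excess return over the risk-free rate,
# divided by the annualised standard deviation of returns
excess = fund - rf_monthly
sharpe = (excess.mean() * 12) / (fund.std(ddof=1) * np.sqrt(12))

# Tracking error: annualised standard deviation of returns relative to the benchmark
active = fund - bench
tracking_error = active.std(ddof=1) * np.sqrt(12)

# Information (appraisal) ratio: annualised average excess return over the
# benchmark divided by the tracking error
info_ratio = (active.mean() * 12) / tracking_error

print(f"Sharpe ratio:      {sharpe:.2f}")
print(f"Tracking error:    {tracking_error:.2%}")
print(f"Information ratio: {info_ratio:.2f}")

Note that, as discussed above, such statistics summarise only the first two moments of the return distribution; for funds with skewed or fat-tailed returns they should be interpreted with care.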
Often the analysis of performance is extended to a full performance attribution analysis, to explain which strategies and which changes in market factors have contributed to profits and losses. Ex-post assessments are certainly useful for comparison and analysis of returns, as well as to check ex-ante assessments, but they do not replace the need for ex-ante assessments. Ex-post assessments lack reliability and relevance because they are based on limited information – typically, a few years of monthly returns – and they are not forward-looking. A typical error, for example, is to assume that future departures from the performance of a benchmark will be small if the tracking error has been small in the past. The problem is that, ex post, the tracking error is usually calculated on de-trended return series; but if the composition of the portfolio being evaluated is significantly different from the composition of the benchmark, the two return series may well have different trends. Even if return data were available on a much more frequent basis, daily for instance, they could only lead to a more statistically accurate forecast of short-term returns. Methods such as exponentially weighted moving averages (EWMA) and GARCH (see Section III.A.3.4) have proved useful for estimating daily risks in financial markets exhibiting time-varying volatilities. But estimates of short-term volatilities have little relevance for long-term risks when these are governed by a specific investment strategy such as capital protection, and consequently short-term returns are not mutually independent.
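As a simple illustration of the short-term volatility estimates just mentioned, the following sketch implements an EWMA variance recursion. The decay factor of 0.94 is a common choice for daily data (the RiskMetrics convention) and the simulated returns are purely illustrative; neither comes from the text.

import numpy as np

def ewma_volatility(returns, lam=0.94):
    """EWMA estimate of daily volatility: var_t = lam*var_{t-1} + (1-lam)*r_t^2.
    lam is the decay factor; 0.94 is a common convention for daily data."""
    var = returns[0] ** 2          # seed the recursion with the first squared return
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return np.sqrt(var)

# Hypothetical daily returns with 1% daily volatility
rng = np.random.default_rng(0)
rets = rng.normal(0.0, 0.01, 500)
print(f"EWMA daily volatility estimate: {ewma_volatility(rets):.4%}")

The recursion weights recent squared returns more heavily, which is what makes it responsive to time-varying volatility; but, as argued above, such a short-term estimate says little about the long-term risk of a strategy-governed fund.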
Of course, ex-ante assessments are more difficult. One must rely on assumptions about future market behaviour and sometimes introduce a degree of subjectivity. One must also assume a trading strategy, complete with limits and contingency plans. Too often, ex-ante analyses are carried out using standard commercial models without sufficient questioning of the assumptions contained in these models. Yet it is only on the basis of ex-ante assessments of risks that fund managers can check and justify that they are adhering to their management mandate as described in a mutual fund prospectus or agreed with trustees or shareholders.

III.A.1.5.4 Control/Mitigation

If star ratings from one to three were to mark the degree of difficulty of a task, the identification of market risks in fund management should be attributed only one star, risk assessment two stars and control/mitigation three stars. Control of limits is not much of a problem, but the design of a risk mitigation strategy is as complex as the design of the investment strategy itself; in fact, the two cannot be separated except in a few special circumstances. Of course, some undesirable risks may have been acquired as part of a package and need to be reduced.

III.A.1.5.4.1 Selective Hedging

As a first special case, a typical example is that of a fund investing in a particular industry sector worldwide but wishing to maintain currency exposures to a minimum. In this case forward currency contracts can be used as hedges, but these hedges will have to be readjusted as a function of changes in foreign-denominated asset values and rolled over regularly.

A more complex case is that of positioning a bond portfolio to take advantage of some interest-rate changes whilst protecting the portfolio against other possible interest-rate changes. Active managers of bond portfolios seek to exploit specific views on interest rates, for example that an interest-rate term structure will flatten or that rates in two currencies will converge. At the same time they are likely to want to reduce exposures to interest-rate movements that should not affect their strategies. A traditional method to achieve this is to calculate the first- and second-order derivatives of bond values with respect to their yield to maturity (see Section I.B.2). These derivatives or 'sensitivities' are called 'value duration' and 'value convexity' respectively. 30 When minus the value duration is divided by the value of the bond, the result is called the 'modified duration'. Historically, 'duration' (or 'Macaulay duration') is defined as 'modified duration' multiplied by (1 + yield), when the bond yield is expressed on an annual basis. Macaulay was the first author to introduce the concept of duration; he chose the name because, for a zero-coupon bond, it is equal to the maturity of the bond and, for a coupon bond, it is equal to the average maturity of the cash flows weighted by their corresponding discount factors calculated at the bond yield. The second-order sensitivity of a bond value with respect to its yield is called 'value convexity' because it relates to the curvature of the value-versus-yield curve.

Assuming the same changes in yields across all bonds in a portfolio, the portfolio value duration and value convexity are simply obtained by adding up the individual bond value durations and value convexities. Adjusting the composition of a bond portfolio so that these two sensitivities become negligible is often interpreted as 'immunising' the value of the portfolio against parallel shifts in interest rates; an example of immunisation against a parallel shift of bond yields is given in Section I.B.2. But note that reducing the value duration and value convexity of a portfolio to zero does not eliminate all interest-rate risks: it is an efficient immunisation only against small parallel shifts in bond yields, 31 which is a relatively unlikely scenario, and many other variations of interest rates are possible. A sensible approach to selecting a portfolio of bonds that is immune to some movements in interest rates, whilst maximising the profit opportunity from a forecast movement, is first to calculate each bond price variation relative to each relevant interest-rate movement, and then to choose the portfolio weights so as to maximise the portfolio gain for the forecast interest-rate movement whilst leaving the portfolio value unchanged for the other movements. Finally, the corresponding costs and residual uncertainties will need to be estimated for the two strategies above.

30 In the same way as we now say 'value-at-risk' rather than 'dollar-at-risk', it is time to say 'value duration' rather than '$duration'.
31 A parallel shift in bond yields corresponds approximately to a parallel shift in the zero-coupon rate curve. Actual movements of the zero-coupon rate curve are best captured by a principal component analysis. The first principal component, which usually explains 75% to 80% of interest rates' total variance, is frequently described as a parallel shift when in fact it is often anything but parallel. Medium-term rates (18 months to 3 years) are often more volatile than both short-term and long-term interest rates (see, for example, Alexander, 2001, Table 6.2b, p. 149).
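The following sketch computes value duration, value convexity and modified duration for a single bond by finite differences on the price–yield relationship; portfolio sensitivities then follow by summation, as described above. The 5-year, 5% annual-coupon bond and the 4.5% yield are hypothetical inputs of my own choosing.

def bond_price(ytm, coupon, face, maturity):
    """Price of an annual-coupon bond from its yield to maturity."""
    cfs = [coupon * face] * maturity
    cfs[-1] += face                                   # redemption at maturity
    return sum(cf / (1 + ytm) ** t for t, cf in enumerate(cfs, start=1))

def value_duration_convexity(ytm, coupon, face, maturity, dy=1e-5):
    """First- and second-order derivatives of the bond value with respect
    to its yield, estimated by central finite differences."""
    p_up = bond_price(ytm + dy, coupon, face, maturity)
    p_dn = bond_price(ytm - dy, coupon, face, maturity)
    p0 = bond_price(ytm, coupon, face, maturity)
    value_duration = (p_up - p_dn) / (2 * dy)          # dP/dy (a negative number)
    value_convexity = (p_up - 2 * p0 + p_dn) / dy ** 2  # d2P/dy2
    return p0, value_duration, value_convexity

# Hypothetical 5-year, 5% annual-coupon bond priced at a 4.5% yield
p, vd, vc = value_duration_convexity(0.045, 0.05, 100.0, 5)
print(f"price = {p:.2f}, value duration = {vd:.1f}, value convexity = {vc:.1f}")
print(f"modified duration = {-vd / p:.2f} years")

For a portfolio, the value durations and value convexities of the individual bonds would simply be added, and the weights then adjusted until both portfolio sensitivities are negligible – the 'immunisation' described in the text.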
III.A.1.5.4.2 Momentary Hedging

As a second special case, we have the momentarily undesirable risks. At times, fund managers may fear a correction in the markets that would harm their performance, or may simply wish to reduce some exposures for peace of mind because they are momentarily absorbed by other tasks. Closing down some positions for a short while may prove difficult or expensive; adding offsetting derivative positions does it. Even if the derivatives are not a perfect offset for the undesirable exposures, an approximate global hedge can be achieved, and an overlay hedge may be less costly and still efficient. It may work particularly well during brief market crises, when correlations between related market factors tend to increase. There are many examples throughout the Handbook of such strategies, for example the use of call options on bond futures to hedge a bond portfolio, or the use of equity swaps.

III.A.1.5.4.3 Managing for a Risk-Adjusted Performance Target

Coming back to the general case, risk mitigation strategies are inseparable from investment strategies designed to achieve some risk-adjusted performance target. There is but one task for the active fund manager: to follow a policy – encompassing level of diversification, selection of securities, strategies, leverage, etc. – compatible with the objective of investors and his own forecasts. And fund managers must remain true to the description of their products. Like customers in a supermarket, who want choice and want to know what they buy by reading the labels on the cans, investors want a description of the funds offered to them in terms of composition of assets, strategies, objectives and target risk levels.

The type of assets in which a fund can invest is certainly a major determinant of the volatility of returns. For example, equities are generally regarded as more volatile than bonds, and within equities some sectors such as dotcoms and emerging markets are clearly more volatile than, say, utilities in G7 countries. But other factors also have a large influence on risk; chief among them are the level of diversification and the degree of gearing of the risky assets. By increasing the number of relatively independent securities in a portfolio, diversification reduces total risk by averaging out the effect of specific, independent risks. Diversification can be optimised to obtain the best possible value of the chosen risk-adjusted performance target – Sharpe ratio, information ratio or other. Gearing up, or leveraging, means increasing the allocation of funds to a risky asset class relative to a risk-free asset class, usually cash deposits, or even borrowing in order to invest more in the risky assets than the equity value of the fund. The expected return above the risk-free rate and the volatility of return vary proportionally to the amount invested in the risky assets relative to the equity value of the fund. Therefore gearing up or down does not affect the Sharpe ratio of a fund, but it can be used to adjust the risk level to suit a specific group of investors.
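The invariance of the Sharpe ratio under gearing can be seen in one line. The following is a sketch in my own notation, not the Handbook's: let $w$ be the amount invested in the risky asset class per unit of fund equity (with $w > 1$ meaning borrowing at the risk-free rate $r_f$), so that the fund return is $r_P = r_f + w\,(r_A - r_f)$. Then

$$\mathrm{E}[r_P] - r_f = w\left(\mathrm{E}[r_A] - r_f\right), \qquad \sigma_P = w\,\sigma_A,$$

and hence

$$\frac{\mathrm{E}[r_P] - r_f}{\sigma_P} = \frac{\mathrm{E}[r_A] - r_f}{\sigma_A},$$

independent of $w$: gearing rescales both the expected excess return and the volatility, leaving the Sharpe ratio unchanged while moving the fund along the risk spectrum.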
On the other hand, many investors are sophisticated enough to manage their own gearing and diversification themselves. All they want is to have a choice among a wide variety of well-defined funds. This leaves fund managers with the choice of dynamic investment strategies as a means of controlling risk and return. Positions can be actively adjusted either with the objective of achieving a specific return distribution – we discuss a couple of examples in the following subsection – or simply to take advantage of evolving return forecasts. Only a small proportion of active fund managers – essentially those using highly quantitative investment strategies (e.g. cointegration or convertible arbitrage) or those promising protected returns – rely on systematic rebalancing rules. Most active fund managers rely on heuristics to rebalance their portfolios. These are rules of thumb, based on trial and error, combining profit objectives with multiple limits: stop losses, delta limits, limits on turnover, etc. The latter is particularly difficult to optimise, as the cumulative costs of rebalancing more frequently, compared to the expected opportunity losses of rebalancing less frequently, are difficult to perceive intuitively and to analyse quantitatively. This is a subject of academic interest (see Davis and Norman, 1990), but implementation of systematic dynamic strategies is lagging behind theory.

III.A.1.5.4.4 Capital Protection

Since the early 1980s there has been a growing number of funds offering some kind of performance protection in order to attract risk-averse investors. They try to offer the best of two worlds: on the upside, a participation in the potentially high returns of a risky asset class – typically equities or commodities – or, as a minimum on the downside, a capital guarantee or the return on a low-risk investment – for example, a deposit or a bond. With few exceptions, capital guaranteed products have a stated maturity of a few years. Investors staying until maturity are guaranteed to receive a defined performance; early withdrawals are not guaranteed, or are guaranteed at only a fraction of the initial investment. The guarantee is from the sponsor of the product or a third party, usually a good-quality insurance company or bank. To add to the popularity of these products and attract small savers to long-term equity investments, tax advantages may also be available at maturity. I read the following offer received today: after 5 years you will be guaranteed 105% of the performance of the FTSE 100 index on your initial investment or your money back, whichever is the highest. 32 Although these products may appear as manna from heaven to the unsophisticated investor, they are actually simple to manufacture.

32 Always read the small print on the exact definition of the pay-off; it is often not as straightforward as it first appears.
In our example, the sponsor could use the initial investment to buy the equivalent face value of a five-year zero-coupon bond and a FTSE 100, at-the-money, five-year over-the-counter (OTC) call option on 105% of the initial investment from a specialist bank. The zero-coupon bond might cost 75% and the call option 18%, leaving 7% to the sponsor to cover his expenses and contribute to profit. We leave to Section III.A.1.6.4 the manufacturing of the call option and, more generally, the dynamic hedging of option portfolios.

The main drawback of capital guaranteed investments is their bullet form: one fixed size, fixed maturity issue. Investors like the flexibility of open-ended funds, whose shares can be issued or redeemed at any time at their net asset value plus or minus a small commission. Can any form of downside protection be offered on an open-ended fund? This question has exercised the minds of many financial engineers, and only approximate answers have been found.

In the early 1980s, many funds became interested in the concept of portfolio insurance and tried to implement it by themselves or with the help of consultants. Insurance of an equity portfolio consisted of overlaying short positions in the new equity index futures at critical times. A naive strategy, for example, would be to short futures whenever the market index fell below a predefined level and to buy them back whenever the market index recovered above that level. This, in effect, is a very inefficient attempt at replicating a put option: it creates significant residual risks and costs, not to mention implementation difficulties (e.g. index jumps, lack of liquidity). Not surprisingly, this type of portfolio insurance disappointed during the crash of October 1987, and various dynamic portfolio strategies came under criticism (Dybvig, 1988). Portfolio insurance strategies were improved (Black and Jones, 1987) and new concepts emerged, notably that of constant proportional portfolio insurance (CPPI) (Black and Perold, 1992).

Under CPPI a fund would maintain an exposure in a risky asset proportional to the net asset value of the fund above a certain minimum. For example, the risky asset could be an equity index future, and the exposure would be 200% of the excess value of the fund above the value of 90% of the initial investment placed in a short-term money market. Thus the fund manager would promise (sometimes with a bank guarantee) as a minimum the money market return on 90% of the initial investment, but would raise the expectation of an equity index performance with a leverage of up to 200%. In continuous markets and with frequent (weekly or daily) rebalancing, CPPI would be safe. In practice, negative jumps combined with a poor initial performance of the risky asset may bring the fund value rapidly to its minimum guaranteed level, at which point the fund becomes a pure money market fund. This is what has happened in the early 2000s to many CPPI funds that were launched at the end of the 1990s.
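The CPPI rule just described is easy to simulate. The sketch below implements the 200% multiplier and the 90% floor accruing at the money market rate; the index dynamics (7% drift, 18% volatility, weekly rebalancing over 5 years) and the 3% money market rate are assumptions of mine for illustration only.

import numpy as np

def cppi_path(index_rets, rf_per_step, multiplier=2.0, floor0=0.90):
    """Simulate the CPPI rule from the text: risky exposure = multiplier x
    (fund value - floor), with the floor equal to 90% of the initial
    investment accruing at the money market rate. Returns the value path."""
    v, floor = 1.0, floor0
    path = [v]
    for r in index_rets:
        cushion = max(v - floor, 0.0)                 # cushion may vanish
        exposure = multiplier * cushion
        v = v + exposure * r + (v - exposure) * rf_per_step
        floor *= 1 + rf_per_step
        path.append(v)
    return np.array(path)

# Hypothetical weekly index returns over 5 years
rng = np.random.default_rng(1)
weekly = rng.normal(0.07 / 52, 0.18 / np.sqrt(52), 52 * 5)
path = cppi_path(weekly, rf_per_step=0.03 / 52)
floors = 0.90 * (1 + 0.03 / 52) ** np.arange(len(path))
print(f"terminal value: {path[-1]:.3f}")
print(f"smallest cushion reached: {(path - floors).min():.3f}")

Once the cushion reaches zero the exposure stays at zero and the fund earns only the money market rate – the 'lock-in' fate of many CPPI funds launched at the end of the 1990s. Re-running with a large negative jump inserted into the return series shows how quickly that can happen.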
Special types of portfolio insurance strategies should be developed for funds that must meet specific liabilities. Future liabilities may be uncertain both in amounts and in timing (e.g. insurance claims), and fund managers must aim for a stable surplus of assets over liabilities. These issues are often obscured by ad hoc actuarial rules and regulations, and are in great need of re-examination.

III.A.1.5.5 Compliance and Accountability

Investors are becoming increasingly sophisticated and capable of scrutinising the market practices and performance of fund managers. When results have been disappointing, investors have sued fund managers for negligence or non-compliance with agreed policies. The investor does not necessarily have to lose money for this approach to succeed. In a case that has set new standards of accountability for fund managers, in June 2001 the Chief Investment Officer of the Unilever Superannuation Fund (a pension fund) accused the Mercury Asset Management unit of Merrill Lynch Investment Managers of negligence after the fund had underperformed its benchmark by more than 10% over little more than a year (January 1997 to March 1998). Mercury had agreed a target of 1% per year above the benchmark return with a tracking error of no more than 3%. Although the return on the £1 billion fund had still been positive over the period, Unilever sought damages of £130 million. The case revolved about the improper use of risk assessment models and, crucially, about the delegation of day-to-day operations to a 'junior' investment officer. Merrill admitted no liability but settled out of court for a substantial amount. Similar cases have followed since.

Beyond compliance with regulations and accounting standards, and beyond the avoidance of market malpractice, 33 fund managers must satisfy investors that they are following strategies compatible with stated performance objectives and risk limits.

III.A.1.6 Market Risk Management in Banking

III.A.1.6.1 Market Risk in Banking

Banks, like hedge funds, take geared positions on various asset classes. They differ in that many of their assets (e.g. loans) are illiquid and some of their sources of funds (e.g. deposits on call) are low-cost but with an indeterminate term outside the banks' control. In addition, banks are engaged in a number of fee-earning activities, but that is not of primary interest as far as market risks are concerned. Most market risks are taken by banks voluntarily, with a view to benefiting from the exposures. Moreover, deposit taking from clients is based on trust, and it is crucial for banks to maintain their reputation of financial stability and competence in managing risks. Otherwise, clients may slip away, causing funding to become rapidly more expensive, and the business may fall into a downward spiral.

33 The fund management industry has recently been the subject of a series of enquiries about conflicts of interest due to close relationships between investment bankers and fund managers, and about market malpractices such as 'market timing', which have resulted in hundreds of millions of dollars of fines, scores of firings, withdrawal of billions of dollars of funds, and the reorganisation of several financial conglomerates.
To facilitate market risk analyses, a key distinction is made between liquid assets eligible for capital treatment under trading book regulations and less liquid assets, or assets held with a long-term intent, that are relegated to the banking book. Only assets in the trading book are subject to detailed statutory assessments and corresponding capital charges. By contrast, market risks in the banking book are largely ignored by banking supervisors, and so are market risks affecting liabilities. Only if common sense indicates that unusually large market risks are present in the banking book will banking supervisors request some ad hoc estimates and monitoring/control procedures, and be free to impose additional capital charges. 34 But it is common sense that most banks are exposed to large market risks in their banking book assets as well as on their liabilities; accrual accounting standards tend to hide such risks. Banks are generally well equipped to manage market risks: it is part of their core competences, they have powerful systems to analyse and monitor risks and good access to the markets for hedging, and they operate under the close supervision of banking regulators.

III.A.1.6.2 Identification

As mentioned earlier (see Section III.A.1.2), market risks are traditionally categorised by main markets: interest-rate, equity, currency and commodity risks are often identified separately. Nonetheless, many positions entail risks in several markets – for example, a share denominated in a foreign currency, a convertible bond, a commodity linked loan – so such positions will have to be identified under each of the corresponding market risk types. Within the trading book, it is also traditional to distinguish primary from secondary risks and general market risks from specific risks. The primary risks are the directional risks resulting from taking a net long or short position in a given class of securities or commodities. The secondary risks are the other risks, deemed a priori to be less important, for example a dividend risk, a risk on repo costs, volatility risk, or a risk on the spread between a security and a futures contract on that security. Secondary risks may well appear less important than primary risks, except for the fact that many trading books are managed actively with the purpose of reducing primary risks to negligible proportions, at the expense of an increase in secondary risks. Likewise, general market risks, such as an exposure to movements of an equity index, may seem more important a priori than specific risks due to unequal variations of prices of shares in that index. But the composition of a portfolio of shares may differ markedly and systematically from their weightings in a reference index, for instance because of lack of diversification or because of the implementation of a particular investment strategy (say, value strategy rather than growth strategy). Consequently, specific risks may be large compared to the general (also called 'systematic') market risk and should not be overlooked.

A systematic identification of market risks in the trading book should proceed through the identification/selection of key market factors and the construction of models relating the value of instruments in the trading book to these factors, a step that will be essential for the assessment of market risks. In addition, one should take into account the increasing proportion of option and option-like instruments in trading books. A financial option is an instrument offering the right but not the obligation for the owner to make a claim by a certain date if some underlying market factor (or combination of market factors) is favourable or, otherwise, to forgo any claim and gain nothing. Thus the pay-off of an option is a non-linear function of some market factors. By extension, instruments that yield a non-linear pay-off in some market factor(s) can be called option-like – for example, a bond price is a non-linear function of changes in the discount-rate curve. Because of this non-linearity, the fair value of an option or an option-like instrument depends on the full probability distribution of future values of the underlying market factor(s), and not only on their current or expected future values. Finally, long-term volatilities and correlations affecting the value of options offer new trading opportunities and hence new market risks.

Within the banking book, market risks are harder to identify because positions are generally not valued at fair prices, and therefore fair price variations are of little concern. However, interest-rate maturity transformation – the short-term funding of longer-term loans – has been a traditional banking strategy since time immemorial and entails an exposure to interest-rate rises. Some option-like positions in the banking book are particularly susceptible to interest-rate changes. For example, a line of credit at a predetermined spread above Libor is economically equivalent to an option on the credit spread of the client: the line is much more likely to be drawn upon when the creditworthiness of the client has declined than when it has been maintained. The impact of interest-rate changes is also enhanced by the clients' behaviour: if there are any prepayment or extension possibilities on loans and deposits, clients will take maximum advantage of these options to make loans cheaper or deposits more attractive, and therefore less profitable for the bank.

34 Capital surcharges are left to the discretion of banking supervisors under Pillar II of the Basel Accord. See BCBS (2004b) for general principles on the management of interest-rate risks.
III.A.1.6.3 Assessment

We stressed the multiplicity of market risk factors in the preceding section on identification: there are systematic and specific market risks, primary directional risks and secondary risks. The assessment of market risks is based on the selection of a limited number of market risk factors and a choice of models to describe uncertainties in the future values of these factors, their impact on the value of individual instruments and, consequently, on the values of portfolios. We should acknowledge immediately that, no matter how sophisticated mathematical models have become, they are idealisations of reality: they are bound to be approximate at best; at worst they can be misleading. Nonetheless, they are indispensable.

Fortunately, a number of models have been put forward and tested over the last 30 years or so. They fall into three main categories: (i) probabilistic/statistical models describing uncertainties about the future values of market factors; (ii) pricing models relating the prices and sensitivities of instruments to underlying market factors; and (iii) risk aggregation models evaluating the corresponding uncertainties on the future values of portfolios of financial instruments. In the first category are the stochastic processes commonly used to describe the evolution of market factors: geometric Brownian motion, GARCH models, stochastic volatility models, etc. These are explored in Chapters III.A.2 and III.A.3. A prime example of the second type is the Black–Scholes option pricing model (see Section I.A.8.7) which, for a given choice of dynamics for the underlying asset price, adds some efficient market assumptions and a hedging argument to yield a risk-free option price. The third type is exemplified by value-at-risk (VaR) models which, with the help of a few simplifying assumptions, produce a probability distribution (or at least some statistics) on the future value of a static portfolio at a chosen future time.

There is a large degree of subjectivity in the choice of a model: a balance must be struck between realism and tractability, and, depending on the business at hand, different models may suit. The greatest degree of freedom in the choice of models lies at the very first stage, in the choice of market factors and the description of their dynamics. Conversely, a particular choice of model is always vulnerable to 'gaming' by traders seeking to construct portfolios that will apparently exhibit little risk. That is why banking supervisors want to exercise some control over the use of models for regulatory risk reporting – they want to examine the way a model is used, by whom and for what purpose, before 'recognising' its use for regulatory reporting and the determination of capital ratios.
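As an illustration of the second category of models, here is the standard Black–Scholes price of a European call under geometric Brownian motion – a textbook formula consistent with the description above; the numerical parameters are illustrative assumptions, not values from the text.

from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call on a non-dividend-paying asset."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Hypothetical five-year, slightly out-of-the-money call
print(f"call price: {black_scholes_call(S=100, K=105, T=5.0, r=0.04, sigma=0.20):.2f}")

Note how the price depends on the assumed volatility of the underlying over the whole life of the option – precisely the kind of parameter whose choice a risk manager (or a supervisor) will want to scrutinise.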
Pricing models should follow logically from the choice of dynamics in the underlying market factors. For example, to describe interest-rate risks across portfolios of bonds, bond derivatives, swaps and other interest-rate derivatives, one starts with a choice of interest-rate term structure model; it can be one of many single- or multiple-factor models from which all interest rates are derived. The temptation on the front desk is to choose the simplest adequate model for each task at hand – hedging one bond with another, for example – and therefore to use different models for different instruments. But a market risk manager should be concerned with comprehensiveness and coherence across models, so that the effects of a variety of possible interest-rate fluctuations are taken into account realistically and consistently.

Pricing models are indispensable for all securities that are not readily priced in the market, for example most OTC derivative and structured products. They are also necessary for most securities whose prices are readily available in the market because, unless these prices are selected as market factors, we need to know how they would be affected by changes in the value of the selected market factors. For example, we need to know how the prices of bonds would be affected by some fluctuations in the risk-free interest-rate curve, such fluctuations being expressed relative to the selected market factors. In practice, however, some simplifications/approximations are usually introduced to obtain realistic prices within a limited computation time.

Likewise, risk aggregation models should follow logically from the choice of market factor dynamics and pricing models. However, many further simplifications are introduced, for two reasons: (i) dependencies between market factors can be very complex – they may differ between normal and extreme market conditions, and between short- and long-term horizons; (ii) trading book portfolios are by definition very dynamic, but it would be complex to describe the effects of new business and dynamic hedging strategies. The conventional wisdom in banks is therefore to concentrate on the short term under normal market conditions, where dependencies may be approximated by linear correlations and portfolios may be assumed to remain relatively static. This is what regulators ask banks to do: to estimate the maximum level of market losses that would not be exceeded with a probability of more than 1% on a static portfolio over the following 10 trading days. 35 This number is then back-tested against actual conditions and scaled up to produce a minimum capital requirement for market risks. Stress tests (explained in Chapter III.A.4) are then applied to ensure that the minimum capital requirements are sufficiently safe. Market risk capital requirements for portfolios assessed separately are then simply added together, and added to capital requirements for other types of risks, to yield the total minimum regulatory capital (MRC). Note that this naive aggregation process is likely to produce a larger total MRC than necessary because it does not recognise the effects of diversification among risks. Unfortunately, it is not necessarily safe either: the addition of VaR figures is not sub-additive in general. 36

35 This is what is commonly known as the VaR figure in banking.
36 In some cases, super-additivity may occur when the tail risks are bigger than if they were normally distributed; adding the VaRs may then understate the gross risk.
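The following sketch illustrates the 10-day, 99% VaR calculation and the diversification effect lost by naive addition. It assumes normally distributed P&L for two portfolios with volatilities and a correlation that are entirely hypothetical; under normality the sum of VaRs is always conservative, so this example cannot show the super-additive case mentioned in footnote 36, which requires fatter tails.

import numpy as np

z = 2.326                          # 99% one-sided standard normal quantile
h = np.sqrt(10)                    # square-root-of-time scaling to 10 days

# Hypothetical one-day P&L volatilities (in $m) and correlation
sigma = np.array([2.0, 3.0])
rho = 0.3
cov = np.array([[sigma[0] ** 2,            rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1] ** 2]])

var_individual = z * h * sigma                          # stand-alone 10-day 99% VaRs
var_sum = var_individual.sum()                          # naive aggregation
var_diversified = z * h * np.sqrt(np.ones(2) @ cov @ np.ones(2))

print(f"stand-alone VaRs ($m): {var_individual.round(1)}")
print(f"naive sum: {var_sum:.1f}  vs  diversified: {var_diversified:.1f}")

With these assumptions the naive sum overstates the diversified figure by roughly a fifth, which is the sense in which regulatory aggregation tends to produce a larger total MRC than necessary.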
III.A.1.6.4 Control/Mitigation

Banks have the means and the competence to manage most market risks very effectively. First, they have some degree of control over the market risks they take. In the short term they can use derivative products to hedge most market risks, if they wish to do so. In the medium term they can shape the risks by modifying the design and pricing of the products they offer to their customers. They can also adjust their liabilities to a large extent to match the risk profiles of their assets.

Many banks have now taken a wider view of market risks than that requested by banking supervisors. They can choose their own time horizon for risk assessment and confidence level for extreme losses. They can incorporate less liquid instruments into the banking book, assume some initial pricing uncertainties and take into account the effects of trading and hedging strategies. In the banking book, they may develop models of customer behaviour in response to changes in interest rates and other market factors. Banks can choose parameters to suit their own internal purposes – whether to improve resource allocation, to set up an ideal level of capitalisation or to test strategic plans.

Second, the importance of financial derivatives as market risk hedging instruments needs to be stressed. Financial derivatives have become a huge market. In terms of notional size of the underlying assets, they are twice as large as the bond and equity markets combined, about $140 trillion against $70 trillion. 37 In terms of trading volumes they are even larger. Derivatives are also relatively cheap to trade: as a fraction of notional size, the sum of bid–offer spread and commissions on derivatives is typically at least ten times cheaper than for the relevant underlying assets. Derivatives also exist on assets that would not be easy to trade, such as equity indices and notional bonds. The credit risk created by derivative products is, by contrast, very small: the total market value of these instruments – counting the positive side only of each transaction – is less than $3 trillion and, after netting exposures to single counterparties, less than $2 trillion, that is, about 40 times smaller than the credit risk created by bonds and equities. These features make derivatives the instruments of choice for hedging market risks, 38 and indeed banks are usually found on at least one side of most OTC derivative products and as active participants in listed derivatives markets.

37 Of the $140 trillion total, about $110 trillion is OTC and $30 trillion listed; about 60% of financial derivatives are in the form of interest-rate swaps.
38 Note that there are also financial derivatives to cover credit risks. The market for credit default swaps and other credit derivatives has grown at about 50% per year over the last 10 years and now covers about $2.5 trillion of underlying assets.
But the bulk of market risks taken by banks is still taken willingly, with the objective of deriving a profit. The crucial element in setting up a market risk control/mitigation strategy in a bank is therefore the definition of the objective. The risk management objective must be the optimisation of some risk-adjusted performance measure within the constraints on minimum regulatory capital and various concentration limits imposed by banking supervisors or adopted internally. This general objective is translated in the short term, and at various hierarchical levels (division/desk/trader), into simpler objectives and limits. The simpler objectives usually also take the form of risk-adjusted performance measures, but with a cost of risk (or cost of risk capital) adapted to each management unit.39 As for fund managers, and even more so, there are also some undesirable market risks accumulated in the course of normal business, and from time to time there may be risks that should clearly be reduced because of changes in market or management circumstances.

Having assessed market risks, recognised the tools that can be used for their control and defined the objective of market risk management, the design and implementation of a control/mitigation strategy should follow naturally. In reality, there are still some complexities due to people and organisations. First, individual incentives should be aligned with the stated objectives: rewards cannot be based solely on results without considering and agreeing ex ante the risks being taken. When a market risk hedge is put in place, there is roughly one chance in two that the hedge will generate a loss; all too often, if the rationale for the hedge has not been clearly agreed at the start, a loss on the hedge will reflect badly on the hedger, especially if the risks being covered are risks that the bank used to accept in the past. Second, risk mitigation is achieved more economically at a macro than at a micro level. The following case illustrates these two points.

In the late 1970s the European markets became flooded with petro-dollars. International treasury divisions of banks grew rapidly to handle this 'hot' money that could flow in and out rapidly. Many international treasuries implemented a very cautious micro-hedging strategy: each dollar deposit had to be matched with a corresponding lending and vice versa, thus doubling the size of the balance sheet and losing a bid–offer spread to the market.

39 Risks generated by various management units may have a different impact on global risk: some may have a diversifying or even hedging effect, while others may be highly correlated with global risk. The cost of risk attributed to each unit should therefore reflect the marginal contribution of each unit to global risk, to ensure that local optimisations of risk-adjusted performance lead to a global optimisation of risk-adjusted performance. Decomposition of risk is explained more fully in Chapter III.0.
At the same time, in the same banks, domestic treasuries were continuing to run significant interest-rate gaps (longer interest-rate maturity schedules on the asset side than on the liability side) without worrying about it, because they had always done so. There was thus a lack of consistency between the risk management objectives and mitigation strategies on the domestic and international sides. It could be explained for a while by fear of the unknown on the international side, but eventually a more balanced approach had to be implemented, in which the interest-rate maturity gap could be tracked on a net basis at regular intervals (e.g. daily) and managed globally. This is achieved today in most banks and, indeed, international and domestic treasuries are now often merged in a single treasury division implementing a coherent market risk management policy across desks.

We do not have the space here to detail market risk hedging strategies, but we can highlight a couple of points. First, historically, managers are mostly concerned about reducing primary risks, that is, the risks associated with net long or short positions in various asset classes. Exposures to primary risks are characterised by first-order sensitivities, the 'deltas' to the corresponding market factors (for example, 'modified duration' in the bond markets, as we have seen earlier; other names have been used in other markets).40 Primary risks can be covered ('delta hedged') relatively cheaply with futures and forward contracts. However, hedges with futures and forwards must be rebalanced over time as prices fluctuate; and in many instances the exposures are not linear in the hedging instruments because of the presence of options or option-like instruments, that is, because of gamma, the second-order sensitivity of the portfolio to changes in the underlying asset price.

How often should delta hedges be rebalanced? As a rule of thumb, the transaction costs of rebalancing a hedge (bid–offer spreads and commissions) increase as the square root of the rebalancing frequency, whereas the variance of residual risks decreases as the inverse of the rebalancing frequency. An optimum frequency (or more efficient rules based on actual market movements) can be derived from a chosen trade-off between costs and residual risk, together with knowledge of some portfolio and market characteristics (volatility of the underlying asset price, unit transaction costs); see, for example, Hodges and Neuberger (1989). The results may surprise traders, because it is very difficult for anyone to gain an intuitive view of the right balance between expected costs and residual risks that accumulate slowly over time but can reach very large figures, and that can be very different from one portfolio to another.

40 Delta is the word commonly used to describe the first-order sensitivity of an option value to a very small change in the underlying asset price. When multiplied by the notional size of the underlying asset we obtain a so-called 'dollar-delta' or 'delta-equivalent value', that is, the value of a position in the underlying asset having the same sensitivity as the option.
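The flavour of this trade-off can be seen in a small numerical sketch. The cost and risk scale parameters below are entirely hypothetical; the sketch simply assumes, as in the rule of thumb above, costs proportional to the square root of the rebalancing frequency and residual variance inversely proportional to it:

    import numpy as np

    cost_scale = 0.8      # assumed: expected annual hedging cost ~ cost_scale * sqrt(n)
    var_scale = 40.0      # assumed: residual P/L variance for one rebalance per year
    risk_aversion = 0.5   # penalty attached to each unit of residual variance

    n = np.arange(1, 2000)                       # candidate rebalances per year
    expected_cost = cost_scale * np.sqrt(n)
    risk_penalty = risk_aversion * var_scale / n
    total = expected_cost + risk_penalty

    best = n[total.argmin()]
    print(f"Optimal rebalancing frequency: {best} times per year")

With these inputs the minimum lies at roughly a fortnightly rebalance; doubling the volatility or halving the transaction costs shifts it markedly, which is exactly why intuition is a poor guide here.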
Second, having more or less delta-hedged a portfolio, one is left with secondary risks now playing a primary role. Key among these risks are exposures to larger than expected underlying asset price movements, also known as gamma risks from the name generally given to second-order sensitivities to market factors,42 and exposures to volatility changes, or vega risks.43 Gamma risk is a concern inasmuch as, if large and left unchecked, it would require a very active and therefore expensive delta-hedging strategy, and would still leave the trader exposed to large residual risks in case of sudden market movements. As a rule of thumb, the optimal rebalancing frequency is proportional to (volatility)² × (gamma/unit transaction cost)^(2/3).41 Vega risk is a concern inasmuch as, for many market factors, volatilities may fluctuate rapidly (short-term volatilities may suddenly double or treble in a crisis) and are difficult to predict.

Both gamma and vega risks can be controlled with the use of options. For a single plain vanilla option, and using a constant volatility model, vega is proportional to gamma multiplied by the time to maturity. However, for portfolios of options and option-like instruments with different maturities, there is no longer a simple relationship between vega and gamma; vega hedging will be very crude and almost impossible to combine with gamma hedging, unless options for hedging can be found with maturities similar to those of the original exposures. Moreover, if the model for the dynamics of a market factor does not assume a constant volatility source of risk, then volatility may change over time and space (market prices) and one can no longer speak about a single vega, at least not without redefining what is meant by vega. Hedging market risks therefore remains something of an art.

To summarise the degrees of difficulty in the identification, assessment and control/mitigation of market risks in banking, I am minded to give the maximum three-star rating to all three tasks. At least this is my excuse for having given a longer description of these three tasks in the banking section compared to the fund management and non-financial firms sections.

41 As a test, consider two portfolios A and B, where B has twice the gamma of A (in dollar terms), twice the volatility and twice the transaction costs per unit volume. How frequently should B be rebalanced relative to A? The answer, which is not immediately obvious, is, on average, four times more frequently.
42 Because gamma is related to the curvature of the value of a portfolio as a function of a market factor, it is also referred to as 'convexity'. That is certainly the case in bond markets when describing the second-order sensitivity of a bond price with respect to its yield.
43 The sensitivity of a portfolio to changes in volatility (assuming volatility can be described by a single parameter) is usually referred to as 'vega'. American traders, having quickly run out of Greek letters, opted for a hot blue star and an easy alliteration (i.e. vega and volatility). The two risks (explained in Section I.A.8) are related but are not the same.
III.A.1.7 Market Risk Management in Non-financial Firms

III.A.1.7.1 Market Risk in Non-financial Firms
Non-financial firms, whether in service, trading or manufacturing industries, take on market risks in the natural course of their business without seeking such risks to derive a profit. Their core competences lie elsewhere, and they would rather unload these risks on to market professionals or hedge them directly in the markets. The real question is to what extent non-financial firms should design and implement hedging programmes to reduce the impact of market risks. Do such programmes add value to shareholders, or are they simply contributing to the profits of banks and other financial intermediaries? There are few guidelines on best practice for market risk management in non-financial firms, and no regulations comparable to those in banking or fund management.

In our modern, globalised and deregulated economies, market risks are very pervasive; they affect firms either directly or indirectly through competition. The three main sources of market risks are interest rates, foreign currency exchange rates and commodity prices. Equities tend to be the exception, except for holding companies and other companies relying heavily on investments in securities. Company reports are full of comments about business being affected by the weakness of one currency or the strength of another, by the cost of energy or raw materials, or by the crippling effects of an interest-rate increase and the difficulties in raising capital. For example, in our global markets most manufacturers are exposed to foreign currency fluctuations: they affect the cost of raw materials, the price at which finished products can be sold in foreign markets, as well as the price of competitive foreign imports. These are common market risks, but they are often regarded by entrepreneurs as externalities about which they can do little. In fact, more and more can be done to reduce these risks, or at least smooth out their effects, in the short to medium term.

III.A.1.7.2 Identification
The identification of market risks in non-financial firms is arguably the most difficult of the three risk management tasks, followed by assessment and then control/mitigation; thus, three stars for identification, two for assessment and only one for control/mitigation. Why? Because the management of financial risks is by definition not among the core competences of non-financial firms, and therefore it tends to be neglected. It is a natural tendency that we address the problems we know how to solve and ignore the others. So market risks may not be properly recognised; but if they were, they might not be so difficult to evaluate and control.

Finance directors sometimes ask: 'What is less risky, borrowing at fixed rates or at floating rates?' Here lies the paradox: on an accrual or cost-accounting basis, borrowing at fixed rates is safe (the financing costs are fixed), whereas floating rates are risky; but on a fair accounting basis it is the opposite. The present value of the floating rate debt is almost constant, whereas the present value of the fixed rate debt varies with interest rates like the price of the equivalent bond.44 The fair value of a fixed rate loan, in other words, will fluctuate like the fair value of the equivalent coupon bond, going down when interest rates go up and vice versa.

44 If a loan is evaluated at a fair price like a bond and future cash flows are discounted at, say, the going Libor rates, it is easy to verify that the fair value of a properly priced floating rate note at Libor should be close to its face value at each interest-rate payment date. There may be small fluctuations between interest-rate payment dates, and the present value of any credit spread and profit margin above Libor will also fluctuate.
But the answer to the previous question does not depend only on the choice of accounting standards; it also depends on the business, in particular on the composition of assets and liabilities. If the assets of the firm are perceived to generate returns independent of future interest rates, a fixed rate funding may be the safer option; but if future returns are perceived as being highly correlated with interest rates, then a floating rate funding is the safer option.

The choice of accounting standards or, more generally, the choice of a coherent frame of reference for risk evaluation is therefore critical; mixing the use of several frames of reference can only lead to confusion. It is the uncertainty about the future equity value of the firm that is of concern to shareholders, and I think that a fair valuation of assets and liabilities is the only acceptable basis for recognising and assessing risks, even though, for other good reasons, companies use accrual accounting extensively in their reports. We may need two sets of accounting principles: one to report results objectively and accurately, the other to serve as a rational basis for risk management.

In making these judgements, and considering the medium to long term, it may be helpful to consider inflation indices as intermediate factors. Future inflation rates are uncertain, but the operational profit margin of a business before financing costs can often be related to inflation indices, and so are the financing costs: real interest rates relative to inflation tend to be smaller and more stable than nominal interest rates. There is actually a growing market for inflation-linked bonds and loans that secure this relationship and thus are attractive to both investors and borrowers.
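The fair-value side of the fixed/floating paradox discussed above is easy to make concrete. The sketch below is purely illustrative: it assumes a flat yield curve, annual payments and a 5-year, 5% fixed-rate loan, while the floater is taken to reprice to par at each reset:

    # Fair value of fixed- versus floating-rate debt under a parallel rate shift.
    def pv_fixed(coupon, rate, years, face=100.0):
        # Discount the fixed coupons and principal at the prevailing market rate
        pv = sum(coupon * face / (1 + rate) ** t for t in range(1, years + 1))
        return pv + face / (1 + rate) ** years

    for r in (0.04, 0.05, 0.06):
        print(f"market rate {r:.0%}: fixed-rate loan {pv_fixed(0.05, r, 5):7.2f}, "
              f"floating-rate loan {100.0:7.2f}")

On an accrual basis the fixed-rate borrower sees no variation at all; on a fair-value basis the fixed-rate debt swings by several points per 1% move in rates while the floater barely moves, which is exactly the paradox.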
A similar approach should be used to recognise foreign exchange risk. When companies were mostly domestic, the choice of a reference currency was obvious. Now that many companies are truly international, there may be some doubt about the choice of a suitable reference currency, ideally the currency in which most assets and liabilities are denominated, most revenues and costs are incurred, and to which most shareholders are economically tied. The easiest reference currency is the one with which most shareholders are comfortable, because the goal is to reduce risks for the shareholders. Thus, if the majority of shareholders are based in the UK, the preferred reporting currency should be the pound sterling, even if most of the business is conducted outside the UK; foreign exchange risks should be assessed on a sterling value, and hedges against sterling should be put in place if the risks are deemed excessive.

For example, some companies assess foreign exchange risks purely on current payables and receivables, to a horizon of perhaps three months. They argue that only those 'transactional' foreign exchange risks can be assessed accurately and hedged accordingly. A more fundamental question is whether the company is already exposed to exchange-rate fluctuations beyond this horizon, because, for instance, it will not be able to adjust the price of its products and services within this time frame, or it has long-term assets and liabilities denominated in foreign currencies. The latter has been called 'translation' or 'conversion' risk. Note that, unlike transaction risk, translation risk has no impact on cash flows, so it is sometimes neglected. Similarly, longer-term transaction risks are sometimes ignored because they are less immediate, less certain and less precise; but they are important. An approximate evaluation of these further exposures is preferable to total ignorance: it will be a better basis for deciding not only what hedges to implement in the short term but also what offsets could be taken in the long term.

Commodity and indirect market risks due to competition are often called economic risks or input/output risks rather than market risks, although many can be directly traced to market factors such as exchange rates rather than to non-market factors such as innovation, technology or regulation. They are terribly difficult to recognise and appreciate, and any attempt to assess them is based on a large number of assumptions. Even purely domestically based companies may find themselves suddenly uncompetitive because of a flood of cheap imports brought about by the weakening of some foreign currency relative to their own domestic currency. It is difficult to appreciate these dangers in advance, but critical to develop some awareness of them rather than ignoring them. Risk managers can take their cues from equity analysts and rating agencies: an outsider's view may be informative, and they are experienced in detecting threats to individual companies and company sectors caused by possible changes in market conditions. For decision-making, it does not matter what labels are put on risks.

III.A.1.7.3 Assessment
To be approximately correct as a whole is more important than to seek accuracy in some areas and to ignore others. A key decision for assessing market risks in non-financial institutions is the choice of time horizon. Long-term risk assessments extending to several years are indispensable for deciding on major investments and developing long-term strategic plans. The assessment of long-term market risks and their potential impact on a firm therefore goes far beyond the calculation of VaR as carried out in banks; it calls for the application of decision analysis methods.

Companies face multiple choices that are risk-dependent, such as where to locate a production facility, whether to invest in new ventures with payback periods of many years, whether to outsource some services, or whether to invest more now to maintain flexibility of choice at a later stage. At this stage we should understand what are the critical decisions and the most significant risk factors that would influence the choice of strategy. It is the combination of multiple risks and decisions, including responses from competitors, that is significant, and one risk is often contingent on another. A company may acquire a foreign exchange exposure if it wins a contract, but may not be sure to win; and even if it wins, the foreign exchange exposure may vary as a function of fluctuating demand. These uncertainties and the corresponding decisions (pricing the bid, deciding on hedges, etc.) must be analysed simultaneously.
A decision analysis cycle proceeds as follows. We construct a simple model of the objective under scrutiny (e.g. maximising the value of the firm) as a function of a few main market and other risk factors and for a base case strategy. Note two essential points in this approach. First, the assessment of risks is carried out with the specific objective of improving decisions: risk assessment is intimately combined with risk management. Second, market risk factors are combined with other sources of risk in this type of analysis. The role of market risk specialists will be to alert management to the existence of certain risks and to contribute to the description of these risks; they cannot work in isolation.

Based on initial estimates of the risk factors, we calculate a base case value of the objective. Next we explore the sensitivity of the base case to changes in initial estimates (typically we consider variations in a subjectively realistic range, such as a 90% range) to identify the most significant sources of uncertainty as far as the objective is concerned. We also design alternative strategies that might do better depending on the evolution of the uncertain factors. The following stage consists of introducing probability distributions to describe our state of uncertainty about the significant risk factors, from which we deduce probability distributions for the objective value under alternative strategies. A choice of optimal strategy can then be attempted, taking into consideration the risk attitude of stakeholders in the firm. But among the possible choices there are often possibilities to acquire more information about some of the sources of uncertainty, or to refine the basic model, in order to determine the best strategy with greater accuracy and thereby to improve the objective. The decision analysis cycle should then be repeated with the updated information until no more economically valuable information or refinement can be found.
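The probabilistic stage of this cycle is easy to prototype. The sketch below is a toy model with entirely hypothetical figures: firm value depends on an uncertain exchange rate, and two strategies are compared, no hedge versus a forward hedge of half the foreign-currency revenue:

    import numpy as np

    rng = np.random.default_rng(7)
    fx = rng.lognormal(mean=0.0, sigma=0.10, size=100_000)  # FX rate, base 1.0

    revenue_fc = 100.0       # revenue in foreign currency (hypothetical)
    costs_dc = 80.0          # costs in domestic currency (hypothetical)
    forward_rate = 1.0       # assumed forward rate available today

    def firm_value(hedge_ratio):
        hedged = hedge_ratio * revenue_fc * forward_rate
        unhedged = (1 - hedge_ratio) * revenue_fc * fx
        return hedged + unhedged - costs_dc

    for h in (0.0, 0.5):
        v = firm_value(h)
        print(f"hedge {h:.0%}: mean {v.mean():6.2f}, "
              f"5th percentile {np.percentile(v, 5):6.2f}")

The output is precisely a 'probability distribution for the objective value under alternative strategies': the mean is almost unchanged, but the downside percentile improves markedly under the partial hedge, and the choice between the two depends on the stakeholders' risk attitude.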
Small firms may lack the expertise to carry out this type of analysis, but there is no shortage of consultancy firms and financial intermediaries ready to help.

III.A.1.7.4 Control/Mitigation
The decision analysis method outlined in the previous section is particularly useful for making major strategic choices over the medium to long term. 'Physical' long-term solutions limiting exposure to market risks are usually preferable to 'financial' hedges that could be considered as alternatives. For example, a Japanese car manufacturer producing cars for the US market may decide to locate production facilities in the USA to reduce foreign exchange risks, rather than in a country with currently lower labour costs. That is a form of long-term market risk management. Likewise, a chemical company producing high-density polypropylene should consider whether naphtha or gas oil would be the more economical feed. A decision analysis may reveal that, due to uncertainties in the future costs of these two feeds, it is worthwhile to make the extra investment in a plant that can accept both feeds.

Financial hedges could be considered as alternatives. The Japanese manufacturer could opt for the country offering the lowest production and delivery cost into the US market and hedge currency risks by entering into forward exchange contracts. Likewise, the chemical company could consider building a plant suitable for one feed and, in principle, purchase an OTC option on the excess cost of the second feed relative to the first. But one should be aware of two likely problems with long-term financial hedges: liquidity and cash flow. Many financial derivatives markets are very deep; thus the Japanese manufacturer may find forward contracts in sufficient sizes to cover exchange-rate risks for the entire economic life of its plant. But commodity derivatives markets are still relatively thin, and it is very unlikely that the chemical company could find an OTC option to cover its risk over more than a few months.

Many financial derivatives are liquid only over a relatively short term. When used to cover long-term exposures, positions in short-term derivatives are stacked up and rolled over. At every rollover, expiring contracts must be settled and margins must be posted. The cash-flow problem is linked to liquidity: if unlucky, the hedger may accumulate, between rollovers, large realised losses on the short-term contracts against unrealised gains on the initial exposure. The cash-flow problem thus created may prove fatal. The textbook case is MGRM, the US subsidiary of the German company Metallgesellschaft. In 1993 MGRM had accumulated positions on 154 million barrels of crude oil futures on the New York Mercantile Exchange (NYMEX) to hedge long-term supply contracts of crude oil at fixed prices it had agreed with its customers. Unfortunately for MGRM, crude oil prices started to decline and futures went into contango (higher prices than spot), so that at each monthly rollover MGRM had to pay for the decline in prices of contracts it had bought a month earlier. By the end of the year the board of Metallgesellschaft decided that they could no longer afford to support the losses of their subsidiary. MGRM had made a simplistic calculation resulting in an over-hedge, and had misjudged the rollover risks and the risk of holding a very large proportion of the futures contracts (which led NYMEX to call for additional margins). Most importantly, they had underestimated the potential cash-flow problem resulting from hedging 10-year exposures with short-term financial derivatives.
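The mechanics of the rollover cash-flow problem are easy to illustrate. In the sketch below (all prices hypothetical), a hedger stays long one short-dated futures contract, rolling monthly, while spot drifts down in a contango market; realised losses pile up long before any offsetting gain on the fixed-price supply contract is collected:

    # Stack-and-roll: realised monthly cash flows on a rolled long futures hedge.
    # Hypothetical numbers: spot falls $1/bbl per month; futures trade above spot.
    spot = [20.0 - m for m in range(13)]      # $/bbl over 12 months
    carry = 0.5                                # assumed contango premium at each roll

    cum_cash = 0.0
    for m in range(12):
        buy = spot[m] + carry                  # price paid when the contract is rolled in
        settle = spot[m + 1]                   # contract expires near the next spot price
        cum_cash += settle - buy               # realised cash flow this month
        print(f"month {m+1:2d}: cash {settle - buy:+.2f}, cumulative {cum_cash:+.2f}")
    # The long-term supply contract gains value as prices fall, but that gain is
    # unrealised; the futures losses above are cash out of the door every month.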
Hedging economic exposures and long-term transaction exposures with derivatives can also be problematic, not only because of the mismatch between short-term and long-term cash flows, but also because of the uncertainties attached to these future cash flows and the difficulty in predicting exactly how profits will be affected by a possible market movement. In such cases operational solutions such as those mentioned earlier can be preferable. Likewise, it is generally considered inappropriate to hedge translation risks or long-term transaction risks using derivatives. Translation risks relate to the revaluation of foreign assets (such as subsidiaries) and liabilities rather than to cash flows; traditional hedging strategies with derivatives, which do result in cash flows, would only be applicable if there were a plan to sell these assets or refinance the liabilities in a different currency. Most firms prefer instead to hedge translation risks by matching assets and liabilities in the same currency. For example, a foreign subsidiary could be funded in the currency of that subsidiary rather than in the home currency.

Financial derivatives remain the choice instruments for hedging short-term transaction risks. If the right instruments are not available on exchanges, firms will find many banks willing to offer tailor-made OTC products. But whether market risks in non-financial firms should be hedged at all remains an interesting question. Some argue against hedging as follows:
(i) Short-term uncertainties will tend to average out naturally over the long term.
(ii) Hedging is costly; there is no risk premium in the pricing of those market risks that are diversifiable (e.g. currency risks), so in the long term hedging only contributes to banks' profits.
(iii) Shareholders and investors know what the risks are; these risks are already priced in the market or washed out by diversification.
(iv) If competitors do not hedge a certain risk, it would be unsafe to be too much out of line: winning on the hedge might not be as favourable as losing would be damaging.

But others argue in favour of hedging that:
(i) Market risks in non-financial companies serve no useful purpose. They are not chosen with the expectation of deriving a profit; instead, market risks create uncertainties in the performance of business units and the firm in general. This makes the planning process more difficult, obscures the true profitability of various activities and confuses the reward scheme. They are unnecessary, avoidable gambles.
(ii) In small firms, some market risks might be crippling and should be seen by most stakeholders as unnecessary, if only a proper hedging strategy were put in place. This is known as reducing the costs of financial distress.
(iii) If unnecessary risks are eliminated, results become more stable. Greater stability of earnings may mean that lower interest rates apply and trade terms are more advantageous; firms that reduce their market risks can afford a higher leverage and a reduced cost of capital, so the debt-equity ratio can be increased and tax benefits can be reaped.45
(iv) Reduction in risk can also make the firm more attractive to other stakeholders, such as lenders, trade creditors, customers and employees (especially if they hold executive stock and have poorly diversified portfolios).

Fortunately, the wider adoption of the new International Accounting Standards favouring fair value and hedge accounting, the valuation of contingency claims (e.g. executive share option schemes) and the recognition of assets and liabilities heretofore not affecting reported company profits (e.g. company pension schemes) will help companies pay attention to market risks. A short-term effect of these changes may be to increase the volatility of company returns and make equity investments less attractive, but in the long term it will help better risk management and should result in a more efficient allocation of resources.

45 Research into the use of derivatives by non-financial corporations suggests that derivatives are more likely to be used by firms with greater leverage. Hedging market risks may be less important for large, diversified, internationally active firms than for smaller, more specialised firms: in large firms, market risks may already be well diversified, and treasury departments may have the expertise to decide which risks are likely to be beneficial.
III.A.1.8 Summary
There is a pervasive view today that market risk management consists essentially of calculating a value-at-risk. I hope this introduction will help dispel this false impression. To start with the risk identification phase, the reader will have realised that market risks, although relatively well understood, are still hidden in many places. In particular, one should not forget that there are market risks hidden in illiquid assets and liabilities that are not evaluated at fair value. It is not because a firm uses accrual accounting that these risks do not exist; to pretend otherwise would be an ostrich-like, head-in-the-sand attitude.

The risk assessment phase is mathematical, but relies on the choice of an objective and a coherent set of assumptions. The objective may be to assess the probability of insolvency within a year, or it could be any of a number of other objectives, such as developing contingency plans; but it may also be to assess some risk-adjusted performance measure and improve resource allocation accordingly. To each objective corresponds a reasonable set of assumptions: a choice of time horizon; whether portfolios should be assumed to be static or dynamic; whether normal or extreme market conditions should be considered; whether the business should be regarded as a going concern or whether some assets should be valued on a fire-sale basis; and so on. Depending on the business, different firms may reach different conclusions.

The risk control/mitigation phase follows logically from a choice of objective: there cannot be any logical control/mitigation strategy without a clear objective. In addition, there may also be a number of constraints, regulatory or otherwise, limiting the level of acceptable market risk, and some of these constraints may be biting. Fortunately, the implementation of hedging and risk control strategies is now a lesser problem because of the existence of a deep, liquid and efficient market in financial derivatives. There are few market risks that cannot be adequately covered when there is a wish to do so. New hedging requirements create new derivatives markets, as we see happening with telecommunications bandwidths and pollution credits, to name just a couple of new commodity derivatives.

Market risk management is still a relatively new and growing field of expertise. To operate efficiently, the market risk management function must be independent of risk-taking functions as well as of the accounting and internal auditing functions; must have reporting lines through a general risk management function up to the board of directors; must communicate regularly with risk-taking departments and senior management; and, like internal audit, must be able to rely on adequate resources. The quality of risk management directly affects risk-adjusted performance measures and, ultimately, shareholder value. Indeed, this is what banking supervisors are focusing on. As banking regulators remind us, 'capital should not be regarded as a substitute for addressing fundamentally inadequate control or risk management processes' (BCBS, 2004a, par. 723).

References
Alexander, C (2001) Market Models: A Guide to Financial Data Analysis. Chichester: Wiley.
BCBS (1996) 'Amendment to the capital accord to incorporate market risks' (January, modified September 1997). Available at http://www.bis.org/publ/bcbs.htm
BCBS (2004a) 'International convergence of capital measurements and capital standards' (June). Available at http://www.bis.org/publ/bcbs.htm
BCBS (2004b) 'Principles for the management and supervision of interest rate risk' (July). Available at http://www.bis.org/publ/bcbs.htm
Black, F, and Jones, R (1987) 'Simplifying portfolio insurance', Journal of Portfolio Management, Fall, pp. 48-51.
Black, F, and Perold, A F (1992) 'Theory of constant proportion portfolio insurance', Journal of Economic Dynamics and Control, 16, pp. 403-426.
Davis, M H A, and Norman, A R (1990) 'Portfolio selection with transactions costs', Mathematics of Operations Research, 15, pp. 676-713.
Dybvig, P H (1988) 'Inefficient dynamic portfolio strategies, or How to throw away a million dollars', Review of Financial Studies, 1, pp. 67-88.
Hodges, S D, and Neuberger, A (1989) 'Optimal replication of contingent claims under transactions costs', Review of Futures Markets, 8, pp. 222-239.
III.A.2 Introduction to Value at Risk Models
Kevin Dowd and David Rowe46

III.A.2.1 Introduction
Value at risk (VaR) has been the subject of much criticism in recent years. Many of these criticisms relate to important precautions as to how VaR results should be interpreted, as well as limitations on their use. Other criticisms, however, have been more sweeping, in some cases dismissing the entire concept as misdirected and wrong-headed. In that context, it is useful to consider how trading risk limits were determined before VaR became widely accepted.

Market risk arises from mismatched positions in a trading book that is marked to market daily based on uncertain movements in prices, rates, volatilities and other relevant market parameters. Market makers cannot operate successfully if they only broker exactly offsetting trades between customers. To be successful, they need to stand ready to execute trades on demand, and this inevitably results in open positions being created that are exposed to loss from adverse market movements. These open positions are hedged in the short run with less than perfect offsets. A common example is a dealer who executes an interest-rate swap with a customer, in which he/she receives fixed and pays floating, and then hedges by shorting government bonds and investing the proceeds in short-term instruments. There is still basis risk, since the spread between the swap and bond rates may change, but the major exposure to loss from a general rise in rates has been eliminated. In the longer term, the dealer will try to attract offsetting customer trades by shading future quotes to make such offsets attractive to the market. Failing that, the dealer may execute an offsetting swap with another dealer, although this is less desirable, since it requires paying away a bid or offer spread instead of earning the spread on a customer deal.

Given that running a market-making function inevitably gives rise to market risk, institutions have always imposed restrictions on traders designed to limit the extent of such risk-taking. Until the early 1990s these limits were in the form of restrictions on: the size of net open positions, including delta-equivalent exposures to movements in underlying rates and prices; the degree of maturity mismatch in the net position; the permissible amount of negative gamma in option positions; exposure to changes in volatility; and so forth. These limits imposed a complex array of constraints on trader positions, with all sorts of inconsistencies and other undesirable results: 'good' risks were often passed over because they ran into arbitrary risk limits; reducing risk in one area seldom allowed greater risk-taking elsewhere; and decisions were made with inadequate appreciation of the risks involved. The limits were difficult to enforce effectively, and even the technicians were hard pressed to translate them into a consistent measure of risk. Senior management committees charged with approving such limits were often at the mercy of technicians, not least because risk exposures, such as those based on option gammas, often move quickly in response to market developments. The information and management systems of the time also meant that these limits were enforced piecemeal. There was little coherence between the structure or management of the limits and the range of potential losses that they permitted to occur.

Perhaps the most important shortcoming of this old system was the absence of integrated risk management. Gradually a consensus arose that what was of fundamental interest to the institution was the probability distribution of potential losses from traders' positions, regardless of the exact structure of those positions. From this realisation was born the concept of value at risk. There was, and is, still a heavy dependence on market risk technicians to translate market dynamics and the traders' positions into estimates of the risks being taken, but VaR did allow a firm's management to define limits that reflected a well-considered risk appetite in a way that the pre-existing system did not. This gave management a much more consistent way of embedding acceptable levels of risk into the formal limits within which traders were required to operate. In that sense, one of the most important contributions of VaR has been an improvement in the quality of the management of risk at the firm-wide level.

III.A.2.2 Definition of VaR
VaR is an estimate of the loss from a fixed set of trading positions over a fixed time horizon that would be equalled or exceeded with a specified probability. Several details of this definition are worth emphasising. First, VaR is an estimate, not a uniquely defined value. The value of any VaR estimate will depend on the stochastic process that is assumed to drive the random realisations of market data. In essence, the goal is to arrive at the best possible estimate of the stochastic process driving market data over the specific calendar period to which the VaR estimate applies.

46 Kevin Dowd is Professor of Financial Risk Management at Nottingham University Business School, UK, and David Rowe is Group Executive Vice President for Risk Management at SunGard Trading and Risk Systems in London.
The structure of the random process has to be identified, and the specific parameters of that process must be calibrated. This requires us to resort to historical experience, and it raises a whole host of issues, such as the length of the historical sample to be used and whether more recent events should be weighted more heavily than those further in the past. Moreover, it is also clear that market data are not generated by stable random processes. Differing methods for dealing with the uncertainty surrounding changes in these random processes are at the heart of why VaR estimates are not unique.

Second, the trading positions under review are fixed for the period in question. This raises difficult questions when the evaluation period is long enough to make this assumption unrealistic, not least because risk exposures often move quickly in response to market developments. Otherwise, it is necessary to model trades that mature within the specified time horizon and to make behavioural assumptions relating to trading strategies during the period.

Third, VaR does not address the distribution of potential losses on those rare occasions when the VaR estimate is exceeded. It is never correct to refer to a VaR estimate as the 'worst-case loss'. Analysis of the magnitude of rare but extreme losses must invoke alternative tools, such as extreme value theory or simulations guided by historical worst-case market moves.

The use of VaR involves two arbitrarily chosen parameters: the holding period and the confidence level. The usual holding period is one day or one month, but institutions can also operate on other holding periods (e.g. one quarter or more), depending on their investment and/or reporting horizons. The holding period may also be specified by regulation.47 The choice of holding period can also depend on other factors: the assumption that the portfolio does not change over the holding period is more easily defended with a shorter holding period; a short holding period is preferable for model validation or backtesting purposes, since reliable validation requires a large data set and a large data set requires a short holding period; and, other things being equal, the ideal holding period appropriate in any given market is the length of time it takes to ensure orderly liquidation of positions in that market.

The choice of confidence level depends mainly on the purpose to which our risk measures are being put. A very high confidence level, often as great as 99.97%, is appropriate if we are using risk measures to set capital requirements and wish to achieve a low probability of insolvency or a high credit rating. For backtesting and model validation, relatively lower confidence levels are desirable, to get a reasonable proportion of excess-loss observations: most institutions prefer confidence levels low enough that actual losses exceed the corresponding VaR estimate somewhere between two and twelve times per year (implying a daily VaR confidence level of 95% to 99%). For limit-setting, lower confidence levels are likewise preferable, since losses over the limit can then occur with a reasonable likelihood; this forces policy committees to take the size of the limit seriously. Thus the 'best' choice for these parameters depends on the context. What is important is that the choices be clear in every context and be thoroughly understood throughout the institution, so that limit-setting and other risk-related decisions are made in light of this common understanding.

III.A.2.3 Internal Models for Market Risk Capital
The original Basel Capital Accord was put into effect at the beginning of 1988. It set down rules for calculating minimum regulatory capital for banks based on a simple set of multipliers applied to credit-risky assets. The minimum capital calculation did not reflect risks associated with a bank's mark-to-market trading activities, which were still quite small at the time. However, market risk factors were becoming more important constituents of the risk profile of most major money centre banks, and in the 1990s the Basel Accord was amended to reflect banks' exposure to market risk. In a significant departure from traditional conventions, the new Amendment also allowed banks to employ their own internal VaR models to calculate their minimum regulatory capital for market risk. This permission was conditional on several requirements: The models and their surrounding technical and organisational infrastructure had to be reviewed and approved by the bank's supervisor.

47 One common example of this is the requirement to estimate VaR over a 10-day time horizon for purposes of calculating regulatory capital for market risk under the Basel Capital Accord. In this instance it is most common to scale up a VaR estimate for a shorter period on the assumption that market data move independently from day to day.
The model used for calculating regulatory capital had to be the same one used for day-to-day internal risk management (the so-called 'use test'). The VaR confidence level used in the regulatory capital calculation had to be 99%. The time horizon for the regulatory capital calculation had to be two weeks (i.e. 10 business days).

The extended holding period presented some questions. How should positions that matured during this time horizon be handled? What about trades likely to be booked to correct for the erosion of initial hedges due to ageing of the portfolio? The amended Accord also offered banks a solution to many of these problems, which almost all of them adopted. This was to apply the 'square root of time' rule: banks obtained 10-day VaR estimates by multiplying the daily VaRs by √10 ≈ 3.16228. This rule is based on the formula for the distribution of the sum of random variables, assuming that daily returns are independent of each other (see Chapter III.A.3). This procedure effectively says that, if traders took the same level of risk as indicated by the one-day VaR estimate for 10 consecutive days, the 10-day VaR estimate would be 3.16228 times the daily VaR. Assuming a static portfolio for one day avoids most of the complications described above for longer time horizons, in so far as even a one-day static portfolio is unrealistic; the consequences of this assumption will (hopefully!) be evident from the backtests on the VaR model that are described later in this chapter.

Assuming the above conditions were met, the capital requirement was to be set at a level at least 3.0 times the 10-day VaR estimate averaged over the last 60 business days for any reporting period. Supervisors retained the prerogative of applying a larger multiplier if the results of backtesting exercises suggested that internal models were generating insufficiently high VaR estimates.
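A minimal sketch of this capital rule as just described (simplified: the multiplier is applied to the 60-day average only, ignoring the floors and add-ons of the full Basel rule; the daily VaR series is hypothetical):

    import numpy as np

    rng = np.random.default_rng(3)
    daily_var = 2.0 + 0.3 * rng.standard_normal(60)   # last 60 days' 1-day VaR, $m

    ten_day_var = np.sqrt(10) * daily_var             # square root of time rule
    mrc = 3.0 * ten_day_var.mean()                    # supervisory multiplier of 3.0
    print(f"Market risk capital: {mrc:.1f} $m")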
III.A.2.4 Analytical VaR Models
The assumption that holding period returns (i.e. h-day relative changes in value) are normally distributed provides us with a straightforward formula for value at risk. If our h-day returns R are normally distributed with mean μ and standard deviation σ, we write

    R ~ N(μ, σ²).    (III.A.2.1)

Now, if the portfolio is currently worth S, our h-day VaR at the confidence level 100(1 − α)% is given by

    VaR(h, α) = −x_α S,    (III.A.2.2)

where x_α is the lower α percentile of the distribution N(μ, σ²), that is, the number such that the probability that R < x_α equals α. Using the standard normal transformation (see Section II.E), we can write Z_α = (x_α − μ)/σ, where Z_α is the lower α percentile of the standard normal distribution, so that

    x_α = Z_α σ + μ.    (III.A.2.3)

Since we require a fairly high degree of confidence, α is small (normally 0 < α < 0.05) and x_α will typically be negative, making the VaR in (III.A.2.2) positive. Putting together (III.A.2.2) and (III.A.2.3), we have derived the following simple analytic formula for VaR that is valid under assumption (III.A.2.1):

    VaR(h, α) = −(Z_α σ + μ) S.    (III.A.2.4)

Estimating VaR at a given probability using the normal distribution is very easy once we have an estimate of the mean and standard deviation, as the following example shows. The percentile Z_α can be obtained from standard statistical tables or from spreadsheet functions, such as the NORMSINV function in Excel.

Example III.A.2.1: Analytic VaR calculation
Suppose we are interested in the normal VaR at the 95% confidence level over a holding period of 1 day, and we estimate μ and σ over this horizon to be 0.005 and 0.02, respectively. Now (III.A.2.4) tells us that, for a portfolio worth $1 million,

    VaR(1, 0.05) = (1.64485 × 0.02 − 0.005) × $1 million = $27,897,

since Z(0.05) = −1.64485.48 Note that the higher the confidence level, the greater the VaR: at the 99% confidence level our VaR would be

    VaR(1, 0.01) = (2.32634 × 0.02 − 0.005) × $1 million = $41,527.

Applying the square root of time rule (explained in Chapter III.A.3), the corresponding VaRs over a 10-day holding period are:

    VaR(10, 0.05) = √10 × VaR(1, 0.05) = 3.16228 × $27,897 = $88,218,
    VaR(10, 0.01) = √10 × VaR(1, 0.01) = 3.16228 × $41,527 = $131,320.

48 Typing '=NORMSINV(0.05)' into Excel gives the value −1.64485; '=NORMSINV(0.01)' gives −2.32634.
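A minimal sketch of this calculation, reproducing the example's figures (scipy's norm.ppf plays the role of Excel's NORMSINV):

    from math import sqrt
    from scipy.stats import norm

    def analytic_var(mu, sigma, value, alpha, horizon_days=1):
        # VaR = -(Z_alpha * sigma + mu) * S, scaled up by the square root of time
        z = norm.ppf(alpha)                    # lower percentile, e.g. -1.64485
        one_day = -(z * sigma + mu) * value
        return one_day * sqrt(horizon_days)

    print(analytic_var(0.005, 0.02, 1e6, 0.05))                   # ~27,897
    print(analytic_var(0.005, 0.02, 1e6, 0.01, horizon_days=10))  # ~131,320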
Analytical approaches provide the simplest and most easily implemented methods to estimate VaR. They rely on parameter estimates based on market data histories that can be obtained from commercial suppliers or gathered internally as part of the daily mark-to-market process. For active markets, vendors such as RiskMetrics™ supply updated estimates of the volatility and correlation parameters themselves. But while simple and practical as rough approximations, analytic VaR estimates also have shortcomings. Perhaps the most important of these is that many parametric VaR applications are based on the assumption that market data changes are normally distributed, and this assumption is seldom correct in practice. Assuming normality when our data are heavy-tailed can lead to major errors in our estimates of VaR: VaR will be underestimated at relatively high confidence levels and overestimated at relatively low confidence levels. Since VaR is often based on fairly rare, and hence fairly large, changes in market conditions, we have to be careful which analytical methods we apply. Further discussion on this will be given in Chapter III.A.3.

But analytical approaches can also be unreliable for other reasons. Market value sensitivities often are not stable as market conditions change, and even modest instability of the value sensitivities can result in major distortions in the VaR estimate. Such distortions are magnified when options are a significant component of the positions being evaluated, since market value sensitivities are especially unstable in that situation. Analytic VaR is particularly inappropriate when there are discontinuous payoffs in the portfolio, as is often the case with exotic options; this is typical of transactions like range floaters and certain types of barrier options. And as positions become more complex, and especially more nonlinear, the values of the instruments in our portfolio might be 'complicated' functions of otherwise straightforward risk factors, or our portfolio might be a collection of heterogeneous instruments whose payoffs interact in ways that cannot be handled using analytical methods.

A good case in point would be a portfolio of long straddles: these options are simple, but their maximum loss occurs when there is no market movement at all. Delta-gamma methods, which are often used for options VaR, can be very treacherous when applied to such positions, because they assume that the maximum loss occurs when underlying variables exhibit large moves. (These methods are discussed in Chapter III.A.3.) Although the VaR of such a position can, in principle, be obtained using analytical methods, it is better handled using simulation.

In summary, analytic approaches provide a reasonable starting point for deriving VaR estimates. Where positions are simple, we can use them with some comfort, because we know they are reliable there; and they may be acceptable on a long-term basis if the risks involved are small relative to a firm's total capital or aggregate risk appetite. But as the magnitude of risk increases, and as positions become more complex and more nonlinear, more sophisticated approaches are necessary to provide reliable VaR estimates: analytical methods should not be pushed too hard.
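To see why such positions defeat linear thinking, consider this small sketch of a hypothetical Black-Scholes straddle (all parameter values are illustrative); it revalues the position fully over a 10-day horizon rather than relying on a delta or delta-gamma approximation:

    import numpy as np
    from scipy.stats import norm

    def bs_call(S, K, T, r, vol):
        d1 = (np.log(S / K) + (r + 0.5 * vol**2) * T) / (vol * np.sqrt(T))
        d2 = d1 - vol * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    def straddle(S, T, K=100.0, r=0.02, vol=0.20):
        # call + put, with the put obtained from put-call parity
        call = bs_call(S, K, T, r, vol)
        put = call - S + K * np.exp(-r * T)
        return call + put

    h = 10 / 252                           # 10-day risk horizon
    v0 = straddle(100.0, T=0.25)
    for move in (-10, -5, 0, 5, 10):
        pnl = straddle(100.0 + move, T=0.25 - h) - v0
        print(f"spot move {move:+3d}: P/L = {pnl:+.2f}")
    # The worst P/L occurs at zero market movement (pure time decay); large
    # moves in either direction are profitable, the opposite of what a
    # delta-gamma view of 'risk from large moves' would suggest.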
III.A.2.5 Monte Carlo Simulation VaR
Fortunately, many problems that cannot be handled by analytical methods are quite amenable to simulation methods. In these and similar circumstances, the most natural approach is to use Monte Carlo simulation, which is a very powerful method that is tailor-made for 'complex' or 'difficult' problems. The essence of this approach is first to define the problem, that is, to specify the random processes for the risk factors of the portfolio, the ways in which they affect our portfolio, and so forth, and then to simulate a large number of possible outcomes based on these assumptions. Each simulation 'trial' leads to a possible profit/loss (P/L). If we simulate enough trials, we can then produce a simulated density for our P/L, and we can read off the VaR as a lower percentile of this density. In the following, when the context is clear, we drop the notation for the dependence of VaR on the holding period h and confidence level 100(1 − α)%, writing simply 'VaR' for VaR(h, α).

III.A.2.5.1 Methodology
To illustrate, suppose we wish to carry out a Monte Carlo analysis of a stock price S, and we assume that S follows a geometric Brownian motion process:

    dS/S = μ dt + σ dW,    (III.A.2.5)

where μ is its expected (per unit time) rate of return and σ is the spot volatility of the stock price. This is the standard stock-price model used in quantitative finance. The (instantaneous) rate of change in the stock price dS/S evolves according to its drift term μ dt and realisations from the random term dW, which is known as a Wiener process and can be written as dW = ε(dt)^(1/2), where ε is a drawing from a standard normal distribution. Substituting for dW, we get dS/S = μ dt + σε(dt)^(1/2).

In practice, we would often work with this model in its discrete-form equivalent. If Δt is some small time increment, we approximate (III.A.2.5) by

    ΔS/S = μΔt + σε√Δt,    (III.A.2.6)

where ΔS is the change in the stock price over the time interval Δt, and ΔS/S is its (discretised) rate of change. Note that (III.A.2.6) assumes that the rate of change of the stock price is normally distributed with mean μΔt and standard deviation σ√Δt. Hence our criticisms of analytic VaR with respect to the normality assumption will also apply to the Monte Carlo VaR methodology, unless we employ an assumption for the underlying dynamics that is more appropriate than the geometric Brownian motion with constant volatility (III.A.2.5).

Now suppose that we wish to simulate the stock price over some period of length T. We would usually divide T into a large number N of small time increments Δt (i.e. we set Δt = T/N). We take a starting value of S, say S(0), and draw a random value of ε to update S using (III.A.2.6); this gives the change in the stock price over the first time increment. We then repeat the process again and again until we have changes in the stock price over all N increments.
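A minimal sketch of this updating scheme, applying equation (III.A.2.6) N times per path (the drift and volatility values here are illustrative choices, not taken from the text):

    import numpy as np

    rng = np.random.default_rng(42)
    mu, sigma, T, N, S0 = 0.10, 0.50, 1.0, 30, 1.0   # hypothetical parameters
    dt = T / N

    def simulate_paths(n_trials):
        eps = rng.standard_normal((n_trials, N))       # draws of epsilon
        rets = mu * dt + sigma * eps * np.sqrt(dt)     # equation (III.A.2.6)
        return S0 * np.cumprod(1.0 + rets, axis=1)     # simulated price paths

    terminal = simulate_paths(10_000)[:, -1]
    print("5th percentile of S(T):", np.percentile(terminal, 5))

Reading a lower percentile off the simulated terminal prices, as in the last line, is exactly the Monte Carlo VaR calculation described next.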
We take a starting value of S, say S(0), and draw a random value of ε to update S using (III.A.2.6); this gives the change in the stock price over the first time increment. We then take the new stock price, draw again, and repeat the process again and again until we have changes in the stock price over all N increments. At this point we have simulated the path of the stock price over the whole period T. We can then repeat the exercise many times and produce as many simulated price paths as we wish, each path moving randomly in accordance with the 'laws of motion' given in the above equations.

Some illustrative simulated price paths are shown in Figure III.A.2.1. We assume here that the starting value of our stock price, S(0), equals 1, so each path starts from the 1 on the y-axis. Thereafter the paths typically diverge. The degree of dispersion of the simulated stock prices – the extent to which they move away from each other over time – is governed by the volatility σ: the bigger is σ, the more dispersed the stock prices will be at any point in the simulation. There is also a tendency for the stock prices to 'drift' upwards, since μ is assumed to be positive. Even in this figure, which has only a limited number of paths, we can see that most of the terminal values are clustered around a central value, with relatively few in the tails. Note, too, that the simulated terminal stock prices will tend to approach the 'true' distribution of terminal stock prices as the number of draws grows larger: the larger the number of trials, the closer is the simulated terminal distribution to the true terminal distribution.

[Figure III.A.2.1: Some simulated stock price paths. Note: based on 15 Monte Carlo trials with T = 1 and 30 step increments.]

If we want to obtain a simulated terminal distribution which is close to the true distribution, all we need to do is carry out a large number of simulation trials. We can then estimate the VaR of the stock price by simulating a large number of terminal stock prices S(T) and reading the VaR from the histogram of S(T) values so generated. To illustrate, Figure III.A.2.2 shows the histogram of simulated S(T) values from 10,000 simulation trials, using the same stock price parameters as in Figure III.A.2.1. The shape of the histogram is close to a lognormal – which it should be, as the stock price is assumed to be lognormally distributed. The figure also shows the 5th percentile of the simulated stock price histogram. This percentile is equal to 0.420, indicating that there is a 5% probability that the initial stock price (of 1) could fall to 0.420 or less over the period. A terminal stock price of 0.420 corresponds to a loss equal to 1 − 0.420 = 0.580, so we can say that the VaR at the 95% confidence level is 0.580, given the parameters assumed.

[Figure III.A.2.2: Histogram of simulated terminal stock prices]

This example illustrates how easy it is to estimate VaR using Monte Carlo simulation. In addition, Monte Carlo simulation can easily handle problems with more than one random risk factor (see Section II.D).
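To make the procedure concrete, the following is a minimal sketch in Python of the method just described: discretised GBM paths, with the VaR read off the simulated terminal distribution. The drift and volatility values are illustrative assumptions, not the (unstated) parameters behind the figures above.

import numpy as np

rng = np.random.default_rng(seed=42)

mu, sigma = 0.10, 0.25          # drift and spot volatility (assumed values)
T, n_steps, n_trials = 1.0, 30, 10_000
dt = T / n_steps
s0 = 1.0

# Simulate n_trials price paths, updating step by step via
# dS/S = mu*dt + sigma*eps*sqrt(dt), with eps ~ N(0, 1).
eps = rng.standard_normal((n_trials, n_steps))
rel_changes = mu * dt + sigma * np.sqrt(dt) * eps
s_terminal = s0 * np.prod(1.0 + rel_changes, axis=1)

# VaR at the 95% confidence level: the loss at the 5th percentile
# of the simulated terminal prices.
p5 = np.percentile(s_terminal, 5)
print(f"5th percentile of S(T): {p5:.3f}, 95% VaR: {s0 - p5:.3f}")

Increasing n_trials tightens the estimate, which is the point made above about the simulated terminal distribution approaching the true one.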
III.A.2.5.2 Applications of Monte Carlo Simulation

Monte Carlo methods have many applications in market risk measurement and would be the preferred method in almost any 'complex' risk problem. Examples of such problems include the following, among many other possibilities:

- We might be dealing with underlying risk factors that are 'badly behaved' in some way (e.g. because they jump or show heavy tails), or we might have a mixture of heterogeneous risk factors. For example, we might have credit-related risk factors as well as normal market risk factors, and the credit risk factors cannot be modelled as normal.

- We might have a portfolio of heterogeneous instruments, the heterogeneity of which prevents us from applying an analytical approach. For example, our portfolio might be a collection of equities, bonds, foreign exchange options, credit derivatives, and so forth.

- We might be dealing with instruments with complicated risk factors, such as mortgages. In such cases the value of the portfolio is a nonlinear (or otherwise difficult) function of the underlying risk factors, and might be impossible to handle using analytical methods even if the risk factors are themselves 'well behaved'.

- We might have a portfolio of options whose values are nonlinear functions of the underlying risk factors.

III.A.2.5.3 Advantages and Disadvantages of Monte Carlo VaR

Monte Carlo simulation has many advantages over analytical approaches to calculating VaR:

- It can capture a wider range of market behaviour.
- It can deal effectively with nonlinear and path-dependent payoffs, including the payoffs to very complicated financial instruments.
- It can provide detailed insight into the impact of extreme scenarios that lie well out in the tails of the distributions, beyond the usual VaR cut-off.
- It can capture risk that arises from scenarios that do not involve extreme market moves.
- It lends itself easily to evaluating specific scenarios that are deemed worrisome based on geopolitical or other hard-to-quantify considerations.

The biggest drawbacks to the Monte Carlo approach to VaR estimation are that it is computer-intensive and that it requires great care to be sure all the details of the calculation are executed correctly. Hence Monte Carlo VaR estimates are often viewed as coming from a black box whose credibility rests solely on the reputation of the technicians responsible for producing them. Non-technicians also find the process of imposing historically consistent characteristics on the scenarios to be quite impenetrable.

Nevertheless, for large sophisticated trading operations, Monte Carlo simulation is the most widely used approach to VaR estimation, and its popularity is likely to grow further as computers become more powerful and simulation software becomes more user-friendly. The only other widely used approach is historical simulation, to which we now turn.

III.A.2.6 Historical Simulation VaR

Historical simulation is a very different approach to VaR estimation. The idea here is that we estimate VaR without making strong assumptions about the distribution of returns. We try to let the data speak for themselves as much as possible, and use the recent empirical return distribution – not some assumed theoretical distribution – to estimate our VaR. This type of approach is based on the underlying assumption that the near future will be sufficiently like the recent past that we can use the data from the recent past to estimate risks over the near future – and this assumption may or may not be valid in any given context.

III.A.2.6.1 The Basic Method

In applying basic historical simulation, we first construct a hypothetical P/L series for our current portfolio over a specified historical period. This requires a set of historical P/L or return observations on the positions currently held. These P/Ls or returns will be measured over a standard time interval (e.g. a day), and we want a reasonably large set of historical observations over the recent past. Suppose we have a portfolio of n assets,
and for each asset i we have the observed return for each of T intervals in our historical sample period. (Our 'portfolio' could equally well include a collection of liabilities and/or instruments such as swaps, but we talk of assets for convenience.) If r(i,t) is the return on asset i in sub-period t, and if A(i) is the amount currently invested in asset i, then the simulated P/L of our current portfolio in sub-period t is:

(P/L)(t) = Σ (i = 1 to n) A(i) r(i,t)

Calculating this for all t gives us the hypothetical P/L for our current portfolio throughout our historical sample. This series will not be the same as the P/L actually earned on our portfolio in each of those periods, because the portfolio actually held in each historical period will virtually never match our current positions.

Having obtained our hypothetical P/L data set, we can estimate VaR by plotting the data on a simple histogram and then reading off the appropriate percentile. To illustrate, suppose we have 1000 hypothetical daily observations in our P/L series (approximately four years of data at about 250 business days per year), and we plot the histogram shown in Figure III.A.2.3. If we take our VaR confidence level to be 95%, our VaR is given implicitly by the x-value that cuts off the bottom 5% of worst P/L outcomes from the rest of the distribution. In this particular case, this x-value (the 5th percentile point of the P/L histogram) is –1.604. The VaR at the 95% probability is the negative of this percentile, and is therefore 1.604.

[Figure III.A.2.3: Historical simulation VaR]
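As an illustration of the basic method, here is a minimal sketch in Python; the position sizes and the 'historical' returns are hypothetical stand-ins for a real data set.

import numpy as np

def historical_var(returns: np.ndarray, amounts: np.ndarray, cl: float = 0.95) -> float:
    """VaR of the current portfolio from hypothetical historical P/Ls."""
    pnl = returns @ amounts                      # (P/L)_t = sum_i A_i * r_{i,t}
    return -np.percentile(pnl, 100 * (1 - cl))   # negative of the lower percentile

# Illustrative data: 1000 days, 3 assets (hypothetical numbers).
rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, size=(1000, 3))
amounts = np.array([50_000.0, 30_000.0, 20_000.0])
print(f"95% historical simulation VaR: {historical_var(returns, amounts):,.0f}")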
III.A.2.6.2 Weighted Historical Simulation

One of the most important features of basic historical simulation is the way it weights past observations. Our historical simulation P/L series is constructed in a way that gives any observation the same weight on P/L provided it is less than n periods old, and a zero weight if it is older than that. However, it is hard to justify giving each observation in our sample period the same weight, regardless of its age, the market volatility prevailing at the time, or anything else. For example, it is well known that natural gas prices are usually more volatile in the winter than in the summer, so a raw historical simulation approach that incorporates both summer and winter observations will tend to average the summer and winter P/L values together. As a result, treating all observations as having equal weight will tend to underestimate true risks in the winter, and overestimate them in the summer (see Shimko et al., 1998). In the natural gas case just considered, we might therefore give the winter observations a higher weight than summer observations if we are estimating a winter VaR, and vice versa for a summer VaR.

This weighting structure also creates the potential for ghost effects – we can have a VaR that is unduly high (low) because of a short period of high (low) volatility, and this VaR will continue to be high (low) until n days or so have passed and the observations have fallen out of the sample period. At that point the VaR will fall (rise) again, but the fall (rise) in VaR is only a ghost effect created by the weighting structure and the length of the sample period used.

We can ameliorate these problems by suitably weighting our observations. For example, we might believe that newer observations in our sample are more informative than older ones, and in this case we might age-weight our data so that older observations in our historical simulation sample have a smaller weight than more recent ones (see Boudoukh et al., 1998). This approach takes account of the loss of information associated with older data and is easy to implement; however, it can aggravate the problem of limited data in the tails of the distribution. More detailed discussion of this point is left to Chapter III.A.3.

To implement an 'age-weighted' historical simulation, we begin by ordering our returns, worst return first. We then note the age of each return observation and calculate a suitable age-related weight for each return. A good way to do so is to use an exponentially weighted moving average (EWMA): we choose a decay parameter λ, which indicates how much each observation's weight decays from one day to the next, and give more recent observations higher weights. A worked example is provided in Table III.A.2.1 and in the Workbook entitled Age-Weighted Historical Simulation.
For illustration only, we assume an unrealistically small sample of just 150 observations. The first column of Table III.A.2.1 gives the ordered returns, worst return first, and the second gives the age of the corresponding return observation in days. Basic historical simulation gives each return a weight of 0.00667 (= 1/150), regardless of age, implying cumulative weights of 0.00667, 0.01333, 0.02000, and so on. The historical simulation confidence level is 1 minus the cumulative weight plus 0.00667, and the VaR is the negative of the relevant observation. In this case the basic historical simulation VaR at the 95% confidence level is 2.530%, interpolated between the worst simulated losses either side of the 95% point.

Now, if we apply exponential weighting using λ = 0.97, we get the weights given in the column headed 'AW weight'. These are rather different from the equal weights of basic historical simulation. The cumulative weights given in the next column are 0.02684, 0.04057, and so on. In this case the EWMA-weighted historical simulation VaR at the 95% confidence level is 2.659%, again interpolated between the simulated losses either side of the 95% point. The effect of exponential weighting in this case is to raise the estimated VaR, since some of the largest simulated losses occur relatively recently in the sample period.

Table III.A.2.1: Age-weighted historical simulation

Analysis at initial date:
Ordered daily return  Age  HS weight  HS cum. weight  HS cl    AW weight  AW cum. weight  AW cl
-3.00%                 5   0.00667    0.00667         1.00000  0.02684    0.02684         1.00000
-2.80%                27   0.00667    0.01333         0.99333  0.01373    0.04057         0.97316
-2.70%                55   0.00667    0.02000         0.98667  0.00585    0.04642         0.95943
-2.65%                65   0.00667    0.02667         0.98000  0.00432    0.05074         0.95358
-2.60%                30   0.00667    0.03333         0.97333  0.01253    0.06327         0.94926
-2.57%                50   0.00667    0.04000         0.96667  0.00681    0.07008         0.93673
-2.55%                45   0.00667    0.04667         0.96000  0.00794    0.07802         0.92992
-2.51%                10   0.00667    0.05333         0.95333  0.02305    0.10107         0.92198
-2.50%                 6   0.00667    0.06000         0.94667  0.02603    0.12710         0.89893
-2.48%                17   0.00667    0.06667         0.94000  0.01862    0.14572         0.87290
-2.45%                24   0.00667    0.07333         0.93333  0.01505    0.16077         0.85428
95% VaR: 2.530% (basic historical simulation), 2.659% (age-weighted)

Analysis 25 days later:
Ordered daily return  Age  HS weight  HS cum. weight  HS cl    AW weight  AW cum. weight  AW cl
-3.00%                30   0.00667    0.00667         1.00000  0.01253    0.01253         1.00000
-2.80%                52   0.00667    0.01333         0.99333  0.00641    0.01894         0.98747
-2.70%                80   0.00667    0.02000         0.98667  0.00273    0.02168         0.98106
-2.65%                90   0.00667    0.02667         0.98000  0.00202    0.02369         0.97832
-2.60%                55   0.00667    0.03333         0.97333  0.00585    0.02954         0.97631
-2.57%                75   0.00667    0.04000         0.96667  0.00318    0.03273         0.97046
-2.55%                70   0.00667    0.04667         0.96000  0.00371    0.03643         0.96727
-2.51%                35   0.00667    0.05333         0.95333  0.01076    0.04719         0.96357
-2.50%                31   0.00667    0.06000         0.94667  0.01216    0.05935         0.95281
-2.48%                42   0.00667    0.06667         0.94000  0.00870    0.06805         0.94065
-2.45%                49   0.00667    0.07333         0.93333  0.00703    0.07508         0.93195
95% VaR: 2.530% (basic historical simulation), 2.503% (age-weighted)

Table III.A.2.1 also shows the same analysis conducted 25 days later. To illustrate the point, we have made the simplifying assumption that the positions are the same as on the first day, so that all the simulated historical P/L values for any given calendar day are unchanged; we also assume that there are no large losses in the intervening 25 days, so that the worst simulated losses are the same as those recorded in the analysis at the initial date. The historical simulation VaR has remained unchanged, because the historical simulation weights are unaltered. By contrast, under exponential weighting these observations have aged and their weights are lower, reflecting the observations' greater age. Consequently the cumulative weights are also lower, and the impact is to increase the 'effective' confidence level attached to any given return observation. This, in turn, leads to a lower VaR for any given confidence level: the age-weighted VaR at the 95% confidence level is now 2.503%.
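The mechanics of the table can be sketched in a few lines of Python. This is a simplified illustration of the EWMA weighting scheme above: it takes the first ordered loss whose cumulative weight reaches 1 − cl rather than interpolating between observations as the table does, so its answer differs slightly from the tabulated VaR.

import numpy as np

def age_weighted_var(pnl: np.ndarray, ages: np.ndarray, lam: float = 0.97,
                     n_obs: int = 150, cl: float = 0.95) -> float:
    # Weight of an observation that is `age` days old, normalised to sum
    # to 1 over an n_obs-day sample: lam^(age-1) * (1-lam) / (1 - lam^n_obs).
    w = lam ** (ages - 1) * (1 - lam) / (1 - lam ** n_obs)
    order = np.argsort(pnl)                       # worst loss first
    pnl_sorted, w_sorted = pnl[order], w[order]
    cum_w = np.cumsum(w_sorted)
    # VaR is (minus) the first ordered loss whose cumulative weight
    # reaches 1 - cl; assumes enough tail observations are supplied.
    idx = np.searchsorted(cum_w, 1 - cl)
    return -pnl_sorted[idx]

# The eleven worst losses and their ages from the initial-date panel above.
pnl = np.array([-0.0300, -0.0280, -0.0270, -0.0265, -0.0260, -0.0257,
                -0.0255, -0.0251, -0.0250, -0.0248, -0.0245])
ages = np.array([5, 27, 55, 65, 30, 50, 45, 10, 6, 17, 24])
print(f"age-weighted 95% VaR: {age_weighted_var(pnl, ages):.4%}")  # ~2.65%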
If we are concerned about changing volatilities, we can also weight our data by contemporaneous volatility estimates. The key idea – suggested by Hull and White (1998) – is to update return information to take account of changes in volatility. So, for example, if the current volatility in a market is 2% a day, and it was only 1% a day a month ago, then data a month old understate the changes we can expect to see tomorrow. On the other hand, if last month's volatility was 1% a day but current volatility is 0.5% a day, month-old data will overstate the changes we can expect tomorrow.

Now suppose we are interested in forecasting VaR for day T. Again let r(i,t) be the historical return on asset i on day t in our historical sample, let σ(i,t) be a forecast of the volatility of the return on asset i for day t, made at the end of day t − 1, and let σ(i,T) be our most recent forecast of the volatility of asset i. We then replace the returns in our data set with volatility-adjusted returns, given by:

r*(i,t) = (σ(i,T)/σ(i,t)) r(i,t)

Actual returns in any period t are therefore increased (decreased), depending on whether the current forecast of volatility is greater (smaller) than the estimated volatility for period t. We then calculate the historical simulation P/L as explained in Section III.A.2.6.1, but with r*(i,t) substituted in place of the original data set r(i,t). Note that the volatility weighting will generally alter the specific historical dates corresponding to the critical confidence level, depending on the pattern of contemporaneous volatility; under basic or age-weighted historical simulation those dates are invariant. The calculations involved are illustrated in Table III.A.2.2 and in the Workbook entitled Vol-weighted Historical Simulation; a small sketch of the adjustment follows.
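As a minimal sketch in Python, with illustrative volatility figures, the Hull–White adjustment is just an element-wise rescaling of the return series:

import numpy as np

def vol_weighted_returns(returns: np.ndarray, contemp_vol: np.ndarray,
                         current_vol: float) -> np.ndarray:
    """Rescale each historical return by current vol / contemporaneous vol."""
    return returns * (current_vol / contemp_vol)

returns = np.array([-0.0300, -0.0280, -0.0270])    # historical returns
contemp_vol = np.array([0.0090, 0.0080, 0.0085])   # sigma_{i,t} estimates
adjusted = vol_weighted_returns(returns, contemp_vol, current_vol=0.0140)
print(adjusted)  # the adjusted returns then feed the basic HS calculation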
Using the same returns as in Table III.A.2.1, Table III.A.2.2 shows two cases. In the first case, current daily volatility, at 1.40%, is generally above the range of contemporaneous daily volatilities over the historical dates with the worst losses. Most of the volatility weights are therefore greater than 1.0, the volatility-adjusted changes in the portfolio are correspondingly greater (in absolute value) than the simulated historical changes, and the effect of the volatility weighting is to increase the 'effective' changes and thus to increase the estimated VaR. In the second case, we have the same contemporaneous volatilities but a current volatility of only 0.80%. The volatility weights are now generally less than 1.0, and the 'effective' or volatility-adjusted changes are correspondingly reduced, as is the estimated VaR.

Table III.A.2.2: Volatility-weighted vs. equal-weighted historical simulation (summary of the workbook calculations)

Contemporaneous daily volatilities over the eleven worst-loss dates:
0.90%, 0.80%, 0.85%, 0.82%, 0.95%, 1.25%, 1.30%, 1.10%, 1.05%, 1.00%, 1.50%

Case 1 – current volatility (1.40%) generally above contemporaneous historical volatility.
Vol weights (current/contemporaneous): 1.5556, 1.7500, 1.6471, 1.7073, 1.4737, 1.1200, 1.0769, 1.2727, 1.3333, 1.4000, 0.9333. The estimated 95% VaR rises above the equal-weighted figure.

Case 2 – current volatility (0.80%) generally below contemporaneous historical volatility.
Vol weights: 0.8889, 1.0000, 0.9412, 0.9756, 0.8421, 0.6400, 0.6154, 0.7273, 0.7619, 0.8000, 0.5333. The estimated VaR falls below the equal-weighted figure – in the workbook example, a 95% VaR of 1.86% versus 2.53% with equal weights.

III.A.2.6.3 Advantages and Disadvantages of Historical Approaches

Historical simulation methods have both advantages and disadvantages. The advantages are:

- They are intuitive and conceptually simple, providing results that are easy to communicate to senior managers and interested outsiders (e.g. bank supervisors or rating agencies).
- They use data that are (often) readily available, either from public sources (e.g. Bloomberg) or from in-house data sets (e.g. collected as a by-product of marking positions to market).
- Since they do not depend on parametric assumptions about the behaviour of market variables, they can accommodate heavy tails, skewness, and any other non-normal features that can cause problems for parametric approaches.
- Historical simulation approaches are, in varying degrees, fairly easy to implement on a spreadsheet and can accommodate any type of position, including derivatives positions.
- They can be modified to allow the influence of observations to be weighted (e.g. by season, age, or volatility).
- Dramatic historical events (sometimes irreverently referred to as 'the market's greatest hits') can be simulated and the results presented individually, even when they pre-date the current historical sample. Thus the hypothetical impact of extreme market moves that are strongly remembered by senior management can remain permanently in the information presented, although not directly included in the VaR number.
- There is a widespread perception among risk practitioners that historical simulation works quite well empirically, although formal evidence on this issue is inevitably mixed.

The weaknesses of historical simulation stem from the fact that the results are completely dependent on the data set. This can lead to a number of problems:
- If our data period was unusually quiet (or unusually volatile) and conditions have recently changed, historical simulation will tend to produce VaR estimates that are too low (high) for the risks we are actually facing.
- Historical simulation approaches have difficulty properly handling shifts that took place during our sample period. For example, if there is a permanent change in exchange-rate risk, it will take time for standard historical simulation VaR estimates to reflect the new conditions. Similarly, historical simulation approaches are sometimes slow to reflect major events, such as the increases in risk associated with sudden market turbulence.
- Historical simulation estimates of VaR make no allowance for plausible events that might occur, but did not actually occur, in our sample period.
- Most forms of historical simulation are subject to distortions from ghost effects stemming from updates of the historical sample.

There can also be problems associated with the length of our data period. We need a long data period to have a sample size large enough to get risk estimates of acceptable precision; without this, VaR estimates will fluctuate over time so much that limit-setting and risk-budgeting become very difficult. (This problem is not unique to the historical simulation approach: parametric approaches also need a reasonable history if they are to use estimates – rather than just guesstimates or 'expert judgements' – of the relevant parameters assumed to be driving the distributions.) On the other hand, a very long data period can also create its own problems:

- The longer the data set, the longer the period over which results will be distorted by past events that are unlikely to recur, and the longer we will have to wait for ghost effects to disappear.
- The larger the sample, the more the news in current market observations is likely to be drowned out by older observations – and the less responsive will be our risk estimates to current market conditions.
- The longer the sample period, the bigger the problem with aged data.
- A long sample period can lead to data-collection problems, where long runs of historical data do not exist and are not necessarily easy to proxy.

In general, our main concern is usually to obtain a long-enough run of historical data. As a broad rule of thumb, many practitioners point to the Basel Committee's recommendations for a minimum number of observations, requiring at least a year's worth of daily observations (i.e. 250 observations, at 250 trading days to the year). However, such a small sample size is far too small to ensure that an historical simulation approach will give accurate and robust results: at the Basel-mandated confidence level of 99%, the historical simulation VaR estimator is determined by the most extreme two or three observations in a one-year sample,
and this is hardly sufficient to give us a precise VaR estimate. Indeed, as the confidence level rises, the historical simulation VaR estimator is effectively determined by fewer and fewer observations, and it therefore becomes increasingly sensitive over time to small numbers of observations. For historical simulation VaR it is sometimes possible to synthesize proxy data for markets prior to their existence, based on their behaviour over a more recent sample period; but when doing so it is extremely important to avoid overestimating the accuracy of the risk estimates by treating pseudo-data as equivalent to real data.

III.A.2.7 Mapping Positions to Risk Factors

Portfolio P/L is derived from the P/L of individual positions, and we have assumed up to now that we are able to model the latter directly. In practice, however, it is not always possible, or even desirable, to model each and every position in this manner. Instead, we project our positions onto a relatively small set of risk factors. This process of describing positions in terms of these standard risk factors is known as 'risk factor mapping'. We engage in mapping for three reasons:

First, we might not have enough historical data for some positions. This is a particular concern with new or emerging market instruments. For example, we might have an emerging market security that has a very short track record, or we might have a new type of over-the-counter instrument that has no track record at all. In such circumstances it may be necessary to map our security to some index, and the over-the-counter instrument to a comparable instrument for which we do have sufficient data.

Second, the dimensionality of our covariance matrix of risk factors may become unworkably large. If we have n different instruments in our portfolio, we would need data on n separate volatilities, one for each instrument, plus data on n(n − 1)/2 correlations – a total of n(n + 1)/2 pieces of information. (There are, of course, n(n + 1)/2 relevant values in an n × n symmetric correlation matrix, but we do not have to estimate the values on the main diagonal, which are all identically equal to 1.) As n increases, the number of parameters that need to be estimated grows with the square of n, and it becomes increasingly difficult to collect and process the data involved. The problem becomes particularly acute if we treat every individual asset as a separate risk factor. Perhaps the best response to this problem is to map each asset against a market risk factor along capital asset pricing model (CAPM) lines: for example, instead of dealing with each of n stocks in a stock portfolio as separate risk factors, we deal with a single stock market factor as represented by a stock market index.

Third, mapping can greatly reduce the computer time needed to perform risk simulations: reducing a highly complex portfolio to a consolidated set of risk-equivalent positions in basic risk factors simplifies the problem, allowing simulations to be done faster and with only minimal loss of precision.
Typically, we assume the risk factors to which positions are mapped have returns that are normally distributed, and compute the normal analytic VaR only. The calculation of VaR under more realistic assumptions for risk factor return distributions is discussed in the next chapter of the Handbook.

There are four main types of basic building blocks:

- spot foreign exchange positions;
- equity positions;
- zero-coupon bonds;
- futures/forward positions.

In this section we will examine the mapping challenges presented by each of these in turn. Instead of trying to map and estimate VaR for each specific type of instrument, all we need to do is break down each instrument into its constituent building blocks – a process known as reverse engineering – to give us an equivalent portfolio of primitive instruments.

III.A.2.7.1 Mapping Spot Positions

The easiest of the building blocks corresponds to basic spot positions, that is to say, holdings of foreign currency instruments whose value is fixed in terms of the foreign currency. These positions are particularly simple to handle where the currencies involved (i.e. our own and the foreign currency) are included as core currencies in our mapping system, since we would then already have the exchange-rate volatilities and correlations that we require for the covariance matrix. (Where currencies are not included as core currencies, we need to proxy them by equivalents in terms of core currencies. Typically, non-core currencies would be either minor currencies or currencies that are closely tied to some other major currency – e.g. the Dutch guilder was tied very closely to the German mark before the introduction of the euro in both countries. Including closely related currencies as separate core instruments would lead to major collinearity problems: the variance–covariance matrix could fail to be positive definite.)

If the value of our position is A in foreign currency units and the exchange rate (in units of domestic currency per unit of foreign currency) is X, then the value of the position in domestic currency units – or the mapped position – is AX. If we assume A to be a credit-riskless instrument bearing an overnight foreign interest rate, its value in units of foreign currency is constant, and the only risk to a holder with a different base currency arises from fluctuations in X. In this situation we can calculate VaR analytically in the usual way. For example, if the exchange rate is assumed to be normally distributed with zero mean and standard deviation σ(X) over the period concerned, then

VaR = Z(α) σ(X) AX    (III.A.2.7)

The same approach also applies to other spot positions (e.g. in commodities), provided we have an estimate or proxy for the spot volatilities involved.
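As a minimal sketch, equation (III.A.2.7) translates directly into code; the function below can be checked against the worked example that follows. (scipy's norm.ppf supplies the critical value Z.)

import numpy as np
from scipy.stats import norm

def fx_spot_var(amount_fc: float, fx_rate: float, daily_vol: float,
                cl: float = 0.95) -> float:
    z = norm.ppf(cl)                 # 1.64485 at 95%, 2.32635 at 99%
    return z * daily_vol * amount_fc * fx_rate

# $1m position, X = 0.65 GBP per USD, 15% annual vol => 0.009487 daily vol.
print(f"{fx_spot_var(1_000_000, 0.65, 0.15 / np.sqrt(250)):,.0f} GBP")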
Example III.A.2.2: Foreign exchange VaR
Suppose we have a portfolio worth $1 million and our 'base' currency is the pound, so A = 1 million (measured in dollars). The exchange rate is X = 0.65 (pounds per dollar), so the mapped position is worth £0.65 million. We estimate the annual volatility of the sterling–dollar exchange rate to be 15%; the daily standard deviation is therefore 0.15/√250 = 0.009487, and the pound values of the daily VaRs at the 95% and 99% confidence levels are then given by:

VaR(0.95) = 1.64485 × 0.009487 × £0.65 million = £10,143
VaR(0.99) = 2.32635 × 0.009487 × £0.65 million = £14,346

III.A.2.7.2 Mapping Equity Positions

The second type of primitive position is equity, and handling equity positions is slightly more involved. Imagine we hold an amount S(k) invested in the common equity shares of firm k. If we treat every individual issue of common stock as a distinct risk factor, we can easily run into the problem of estimating a correlation matrix whose dimensions number in the tens of thousands. For example, if we wanted to evaluate the risk for an arbitrary equity portfolio drawn from a pool of 10,000 companies, the number of independent elements in the correlation matrix to be estimated would approach 50 million! It is no wonder that an alternative approach is desirable. In fact, a workable solution to this dilemma is the central concept in the CAPM, which was covered in detail in Volume I of the Handbook.

The basic assumptions of the CAPM approach imply that the return R(k) to the equity of a specific firm k is related to the equity market return R(m) by the following condition:

R(k) = α(k) + β(k) R(m) + ε(k)

where α(k) is a firm-specific constant, β(k) is the market sensitivity of firm k's equity return, and ε(k) is a firm-specific random element assumed to be uncorrelated with the market. The variance of the firm's return is then:

σ(k)² = β(k)² σ(m)² + σ(k,s)²

where σ(k)² is the total variance of R(k), σ(m)² is the variance of the market return R(m), and σ(k,s)² is the variance of the firm-specific random element ε(k). The variance of the firm's return therefore consists of a market-based component β(k)² σ(m)² and a firm-specific component σ(k,s)².

It is important to recognise that when we aggregate risk across many holdings in a well-diversified portfolio, the main contributor to the total will be the market-based component. Since the specific risk of each holding is assumed to be uncorrelated both with the market return and with all other specific risk elements, the share of total risk contributed by the specific risk terms falls continuously as the portfolio becomes more diversified, and approaches zero when the portfolio approximates the composition of the total market.

Estimating just the systematic risk of multi-asset equity portfolios using the CAPM approach reduces to a simple mapping exercise. Assuming the firm's equity returns are normally distributed with zero mean, the systematic VaR of an equity position currently valued at x(k) in the shares of firm k is then:

VaR = Z(α) β(k) σ(m) x(k)

Assume we have N separate equity holdings with market values x(k), for k = 1, …, N. Assume the betas of these holdings are β(k), for k = 1, …, N, and the market return volatility is σ(m). Then the aggregate systematic VaR of the portfolio is:

VaR = Z(α) σ(m) Σ (k = 1 to N) β(k) x(k)

Look again at the data requirements needed to estimate VaR for an arbitrary portfolio drawn from a universe of 10,000 individual stocks. Instead of almost 50 million pairwise correlations, we only need the market return volatility, 10,000 market betas (one for each stock), and each stock's specific risk volatility (or, more commonly, every stock's total return volatility, from which, with the market volatility and the individual stock betas, we can derive the specific volatilities). This is 20,001 parameters in all. Compared to a full correlation approach, the CAPM method requires a little more than 0.04% as many parameter values.

Example III.A.2.3: VaR of equity portfolio
Suppose we have a stock market portfolio with five stocks, labelled A, B, C, D and E. We are interested in a daily holding period.
The value of the portfolio is $1 million (so X = $1 million) and we have equal investments in each stock (so xA/X = xB/X = … = xE/X = 0.20). The stock market betas are βA = 0.9, βB = 0.7, βC = 0.5, βD = 0.3 and βE = 0.1, and the daily stock market standard deviation (σm) is estimated to be 0.025 (hence the annual volatility of the market is approximately 39.5%). If we define X as the total market value of the portfolio, we can modify the above equation slightly to read:

VaR = Z(α) X σ(m) [Σ β(k) x(k) / X]    (III.A.2.8)

In this form, the term in square brackets is the portfolio's net beta, or 'effective' market beta. Thus, the systematic VaR is the appropriate critical value, times the market volatility, times a weighted sum of the market positions, where the weights are the respective betas for each holding. Substituting the relevant values into equation (III.A.2.8), the VaR at the 95% confidence level is equal to:

VaR = 1.64485 × $1,000,000 × 0.025 × 0.2 × [0.9 + 0.7 + 0.5 + 0.3 + 0.1] = $20,561

There is no covariance matrix in the example above because all equities are mapped to the same single risk factor, the market index, and this is highly convenient when dealing with equity portfolios. However, this single-factor mapping will underestimate VaR if we hold a relatively undiversified portfolio, because it ignores the firm-specific risk – though the underestimation will often be fairly small unless the portfolio is very undiversified. We should also keep in mind that, because it assumes a single dominating risk factor, it can be unreliable when dealing with portfolios with multiple underlying risk factors (e.g. when there are significant industry concentrations within an equity portfolio).
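A minimal sketch of equation (III.A.2.8), using the numbers from Example III.A.2.3:

import numpy as np

def systematic_var(betas, positions, market_vol, z=1.64485):
    # z * sigma_m * sum_k beta_k * x_k, the aggregate systematic VaR.
    return z * market_vol * float(np.dot(betas, positions))

betas = np.array([0.9, 0.7, 0.5, 0.3, 0.1])
positions = np.full(5, 200_000.0)        # $1m split equally across five stocks
print(f"95% systematic VaR: ${systematic_var(betas, positions, 0.025):,.0f}")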
III.A.2.7.3 Mapping Zero-Coupon Bonds

The third type of primitive instrument is a zero-coupon bond (often referred to simply as a 'zero' for short). We will assume for convenience that we are dealing with instruments that have no default risk. In this context it is standard practice to represent prevailing market conditions in terms of a continuously compounded zero-coupon interest-rate curve (sometimes also known as a spot rate curve) across a selected set of future maturity dates. A simplified example of such a curve appears in Table III.A.2.3. (In practice we would have many more grid points at more frequent intervals than is shown in this simplified illustration. Typically, in developed industrial countries, interest-rate futures are used to calibrate this curve to the market; we would use an overnight one-day rate as the basis to interpolate out to the next futures roll date, which would be no more than three months in the future.)

Table III.A.2.3: Continuously compounded zero-coupon discount rates
Tenor:   3-Mos     1 Year    2 Year    3 Year    5 Year    7 Year
Today:   4.5000%   5.0000%   6.0000%   6.5000%   7.5000%   8.0000%

In the context of Table III.A.2.3, we can have fixed cash flows maturing on any day out to seven years, and we will interpolate linearly to obtain the effective zero rate on any arbitrary date within this grid. (The practice of linear interpolation is not universally preferred, because it implies discontinuous changes in the slope of the zero curve at some grid points if the term structure is not uniformly linear over all horizons. To avoid this, some more complex form of interpolation, such as cubic splines, may be substituted. However, we will assume linear interpolation here to keep the example fairly simple.)

Now consider the implications of this for mapping zero-coupon cash flows on arbitrary future dates to a set of fixed grid dates. We want to allocate (or map) a cash flow maturing on an intermediate date to the fixed grid dates in such a way that, to the maximum extent possible, the latter have the same risk characteristics as the original cash flow. What criterion will impose this effective risk equivalence? A common approach is to require the present value impact of a one basis point (0.01%) change in the zero rate at the two surrounding grid points to be the same for the allocated cash flows as it was for the original cash flow. An example will help to illustrate how we can achieve this.

Example III.A.2.4: Mapping cash flows for a zero-coupon bond
Again using the simplified zero curve shown above, assume we have a risk-free cash flow of 1,000,000 maturing at 2.75 years. We want to create two cash flows, one at two years and one at three years, that have risk-equivalent characteristics to the one cash flow maturing at 2.75 years.

We begin by recognising that the current interpolated zero rate for 2.75 years is:

r(2.75) = r2 × (3 − 2.75) + r3 × (2.75 − 2) = 0.06 × 0.25 + 0.065 × 0.75 = 0.06375

An increase of one basis point in the two-year rate would result in the 2.75-year rate rising to 0.063775, and an increase of one basis point in the three-year rate would result in the 2.75-year rate rising to 0.063825. The 'PV01' (also called the present value of a basis point, or PVBP) with respect to the two-year and three-year rates respectively will be the difference in the present value of the 1,000,000 cash flow after versus before these changes. Thus:

PV01(2) = 1,000,000 × (e^(−2.75 × 0.063775) − e^(−2.75 × 0.063750)) = −57.69
PV01(3) = 1,000,000 × (e^(−2.75 × 0.063825) − e^(−2.75 × 0.063750)) = −173.07

The next step is to find cash flows at two years and three years that have equal PV01 values. To do so, we want to solve the following two equations:

C2 × (e^(−2 × 0.0601) − e^(−2 × 0.0600)) = −57.69
C3 × (e^(−3 × 0.0651) − e^(−3 × 0.0650)) = −173.07

We then rearrange slightly to get:

C2 = −57.69 / −0.000177366 = 325,258.99
C3 = −173.07 / −0.000246813 = 701,177.56

Thus the risk sensitivity to changes in the two-year and three-year zero rates of 1,000,000 at 2.75 years is the same as that of two cash flows of 325,258.99 at 2 years and 701,177.56 at 3 years. In other words, the 'original' cash flow of 1,000,000 at 2.75 years 'maps' to these latter two cash flows, which can therefore be considered its mapped equivalent. (This approach to mapping cash flows can also be applied in the context of more complex methods for interpolating the zero curve. The mechanics become more complicated, but the core principle of equivalent value sensitivity to small movements in the points that define the zero curve remains the same.)
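The PV01-matching calculation generalises readily. The sketch below reproduces Example III.A.2.4 up to small rounding differences; the function is our own construction for illustration, not a standard library routine.

import numpy as np

def map_cash_flow(c, t, t1, r1, t2, r2):
    # Replace cash flow c at time t (with grid neighbours t1 < t < t2)
    # by c1 at t1 and c2 at t2, matching the PV01 at each grid point.
    w1, w2 = (t2 - t) / (t2 - t1), (t - t1) / (t2 - t1)  # linear interpolation
    r = w1 * r1 + w2 * r2                                # interpolated zero rate
    pv = c * np.exp(-r * t)
    bp = 0.0001
    pv01_1 = c * np.exp(-(r + w1 * bp) * t) - pv         # 1bp bump of r1
    pv01_2 = c * np.exp(-(r + w2 * bp) * t) - pv         # 1bp bump of r2
    c1 = pv01_1 / (np.exp(-(r1 + bp) * t1) - np.exp(-r1 * t1))
    c2 = pv01_2 / (np.exp(-(r2 + bp) * t2) - np.exp(-r2 * t2))
    return c1, c2

c1, c2 = map_cash_flow(1_000_000, 2.75, 2, 0.06, 3, 0.065)
print(f"{c1:,.0f} at 2 years, {c2:,.0f} at 3 years")  # ~325,259 and ~701,178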
Example III.A.2.5: VaR of zero-coupon bond
Now suppose we wish to estimate the VaR of a mapped zero-coupon bond. To be more specific, suppose we have a zero-coupon bond with 10 months to maturity and a face value of $1m, and the nearest reference horizons in our mapping system are three months and 1 year. This means that we need to map our single 10-month cash flow into (nearly) 'equivalent' cash flows at horizons of 3 and 12 months.

Now refer to the workbook VaR of a Mapped Zero Coupon Bond. Given 3-month and 12-month zero rates of 4.5% and 5%, we see that the interpolated 10-month zero rate is just under 4.9%. We then carry out the 'PV01' cash flow mapping and find that a 10-month zero with face value $1m maps to (nearly) equivalent cash flows of $719,217 at 3 months and $654,189 at 12 months. The cash flow mapped to the three-month maturity has a value sensitivity of $17.78 to a one basis point change in the three-month rate, and the cash flow mapped to the 12-month maturity has a value sensitivity of $62.23 to a one basis point change in the 12-month rate.

The second sheet of the workbook then performs the VaR calculation. Assume analysis of the historical data indicates estimated daily volatilities of 1.25% for the three-month rate and 1.00% for the 12-month rate. For the three-month rate this translates into an absolute volatility of 0.0125 × 4.5% = 0.0563%, or 5.63 basis points, as a one standard deviation daily change; for the 12-month rate it translates into an absolute volatility of 0.01 × 5.0% = 0.05%, or 5.0 basis points. Further assume an estimated correlation between the three-month and 12-month rate changes of 0.85.

Based on this information about the value sensitivity to rate changes, the volatility of the rates and their correlation, the portfolio of mapped cash flows has a standard deviation of $399.62 (using the standard formula for the variance of a portfolio). Hence the VaR at the 95% confidence level (taking Z = 1.645) is 1.645 × $399.62 ≈ $657.

We can confirm this result by deriving the VaR directly from the standard deviation of the 10-month zero rate, based on the standard deviations of the reference rates, their correlation, and their relationship to the 10-month rate. Since the impact of a one basis point change on the value of the bond is $80.01 (= $17.78 + $62.23), the derived one-day standard deviation of the 10-month zero rate equals $399.62/$80.01 = 4.995 basis points as a one standard deviation daily change, and $80.01 × 4.995 × 1.645 ≈ $657 once more.
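The aggregation step of Example III.A.2.5 is a standard correlated-sum calculation, sketched below with the sensitivities, volatilities and correlation quoted above.

import numpy as np

pv01 = np.array([17.78, 62.23])      # $ per basis point at 3 and 12 months
vol_bp = np.array([5.63, 5.00])      # daily rate volatilities in basis points
corr = np.array([[1.00, 0.85],
                 [0.85, 1.00]])

exposures = pv01 * vol_bp            # $ one-sigma impacts per grid point
sigma_p = float(np.sqrt(exposures @ corr @ exposures))
print(f"portfolio sigma: ${sigma_p:.2f}, 95% VaR: ${1.645 * sigma_p:.2f}")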
III.A.2.7.4 Mapping Forward/Futures Positions

The fourth building block is a forward/futures position. A forward contract is an agreement to buy a particular commodity or asset at a specified future date at a price agreed now, with the price being paid when the commodity/asset is delivered; a futures contract is a standardised forward contract traded on an organised exchange. There are a number of differences between futures and forward contracts, but for our purposes here these differences are seldom important, and we can treat the two contracts together and speak of a generic forward/futures contract.

To illustrate what is involved for VaR computation, suppose we have a forward/futures position that gives us a daily return that is dependent on the movement of the end-of-day forward/futures price. If we have x contracts each worth F, the value of our position is xF. If the return on each futures contract F is normal with standard deviation σ(F) and zero mean return, the VaR of our position is

VaR = Z(α) σ(F) xF    (III.A.2.9)

The main problem in practice is to obtain an estimate of the standard deviation σ(F) for the horizon involved. Typically, we would have estimates for various horizons, and we would use some interpolation method to obtain estimates of interim standard deviations. Examples of VaR calculations for portfolios of commodity futures are presented elsewhere in the Handbook.

III.A.2.7.5 Mapping Complex Positions

Having set out our building blocks, we can now map more complex positions by producing 'synthetic equivalents' for them in terms of positions in our primitive building blocks. Of course, there is a huge variety of different financial instruments, but the task of mapping them and estimating their VaRs can be simplified tremendously by recognising that most instruments can be decomposed into a small number of more basic, primitive instruments.

(i) Coupon-paying bonds. We can map coupon bonds by regarding them as portfolios of zero-coupon cash flows and mapping each individual cash flow separately to its surrounding grid points. Naturally, any one grid point can receive mapped cash flows from one or more actual cash-flow dates on either side of it. These multiple mapped cash flows, and more importantly their associated sensitivities, are aggregated into a single mapped cash flow and associated sensitivity for each grid point. The VaR of our coupon bond is then based on the VaR of its mapped equivalent in zero-coupon cash flows at the fixed grid points.

Example III.A.2.6: VaR of coupon bond
Suppose we have a coupon bond with an original maturity of 4 years and a remaining maturity of 3 years and 10 months. In this example we assume reference points for defining the yield curve at 3 months and 1, 2, 3, and 4 years. The workbook entitled VaR of a Mapped Coupon Bond illustrates the mapping process and associated VaR calculation. As with the zero-coupon bond, the observed daily volatilities for each rate are scaled by the rate levels to derive an absolute daily
standard deviation of the rate in basis points. We also assume a set of pairwise correlations of the daily changes in the rates. The correlations used in the example are arbitrary, but do reflect the tendency for changes in rates of similar maturities to be more highly correlated than those where maturities differ by a greater amount. In addition, pairs of rates with similar differences in maturities tend to be more highly correlated for rates at longer than at shorter maturities. The worksheet then applies the standard matrix formula for deriving aggregate volatility to estimate the VaR. Alternatively, the mapped cash flows could be treated as the effective positions and analysed by either the Monte Carlo or historical simulation approach.

(ii) Forward-rate agreements. A forward-rate agreement (FRA) is an agreement to pay an agreed fixed rate of interest for a specific period starting at a known future date. It is equivalent to a portfolio that is long in a zero-coupon bond maturing at the end of the forward period and short in a zero-coupon bond maturing at the beginning of the forward period. We can therefore map an FRA and estimate its VaR by treating it as a long position in one zero-coupon bond combined with a short position in another zero-coupon bond with a shorter maturity. Here there is one important distinction between FRAs and interest-rate futures: FRAs are settled at the end of the forward period at the undiscounted amount of the floating payment that was fixed at the beginning of the period, whereas an interest-rate future is settled at the beginning of the forward period (adjusted for a short settlement delay), based on the discounted present value of the floating payment determined at that point, and thereafter has no impact on market risk. This means that there is some continuing market risk for an FRA up to the end of its forward period.

(iii) Floating-rate instruments. Since a floating-rate note (FRN) reprices to par with every coupon payment, we can think of it as equivalent to a zero-coupon bond whose maturity is equal to the period until the next coupon payment. The key point here is to appreciate that fixed-income theory tells us that we can price an FRN by treating it as a zero that pays the FRN's current coupon and principal at the FRN's next coupon date. We can therefore map a floating-rate instrument by treating it as equivalent to a zero-coupon bond that pays its principal plus the current period interest amount on the next coupon date.

Example III.A.2.7: VaR of floating-rate note
The analysis of an FRN is very similar to that of a zero-coupon bond. Referring to the workbook VaR of Mapped Floating Rate Note, we see that our FRN has a current coupon rate of 5.5% and a face value of $1,000,000. Given that the next coupon date is four months hence, this means that we can treat the FRN as a zero that pays $1,027,500 at that date. Carrying out the 'PV01' cash flow mapping as we have just seen, this implies that our FRN maps to cash flows of $1,212,992 in three months and $39,406 in 12 months. Given earlier assumptions about rate volatilities and correlations, the portfolio of mapped cash flows has a standard deviation of $185 and therefore a VaR of 1.645 × $185 = $304.
(iv) Vanilla interest-rate swaps. A vanilla interest-rate swap (IRS) is equivalent to a portfolio that is long a fixed-coupon bond and short a floating-rate bond, or vice versa. To map an IRS, we first note that for our purposes the swap can be regarded as equivalent to the exchange of a coupon bond for an FRN: one leg of the swap has the cash flows of a coupon bond, and the other the cash flows of an FRN. The fixed-rate leg of the swap has the same features as the coupon bond we have just considered (same notional principal, same term to maturity, etc.), and the floating-rate leg has the same features as the FRN we have just considered – and we already know how to map these instruments.

Example III.A.2.8: VaR of interest-rate swap
The workbook entitled VaR of Mapped Vanilla Interest Rate Swap illustrates how we deal with this type of instrument. Suppose, therefore, that we are the fixed-rate receiver on a vanilla IRS. Then it is 'as if' the cash flows from our swap are as given in Table III.A.2.4. These are not the 'actual' cash flows from the swap, but for mapping purposes we can consider them as if they were.

Table III.A.2.4: 'As if' cash flows from vanilla interest-rate swap
t (months):         4          10    16    22    28    34    40    46
Fixed-rate leg:     $30K       $30K  $30K  $30K  $30K  $30K  $30K  $1030K
Floating-rate leg:  –$1027.5K
Net:                –$997.5K   $30K  $30K  $30K  $30K  $30K  $30K  $1030K

In Table III.A.2.5 we map these 'as if' cash flows to obtain their (near) equivalents at the same reference points we used for the coupon bond, namely 3 months and 1, 2, 3 and 4 years. (See the above-referenced spreadsheet for the details of the mapping.)

Table III.A.2.5: Mapped cash flows from vanilla interest-rate swap
t (years):          0.25         1        2        3         4
Mapped cash flows:  –$1,000,000  $16,662  $59,256  $258,140  $843,749

We then derive an analytic estimate of the standard deviation of the portfolio's daily change in value. We do this by scaling the value of a one basis point change in each reference rate times
Suppose we have a simple European equity call option of value c. The value of this option depends on a variety of factors (the price of the underlying stock, the exercise price of the option, the volatility of the underlying stock price, etc.), but suppose we ignore all factors other than the underlying stock price S, and use only the first-order Taylor series approximation of the change in the option value:

Δc ≈ δ ΔS,

where Δc and ΔS are the changes in the option price and the stock price respectively, and δ is the option's delta. If we are dealing with a very short holding period (so that we can take δ to be approximately constant over that period), the option VaR is approximated by multiplying the underlying VaR by δ:

VaR ≈ δ Z_α σ S,    (III.A.2.10)

where S is the current price, Z_α is the standard normal critical value and σ in (III.A.2.10) is the standard deviation of returns over the relevant holding period. The delta approach requires minimal additional data: the option δ is readily available for any traded option, and equation (III.A.2.10) can easily be extended to portfolios of options using the portfolio delta (see Section II.D.4.1). However, first-order approaches are only reliable when our portfolio is close to linear in the first place, and such methods can be very unreliable when positions have considerable optionality or other nonlinear features.

If a first-order approximation is insufficiently accurate, we can try to accommodate the nonlinearity by taking a second-order Taylor series (or delta-gamma) approximation:

Δc ≈ δ ΔS + (γ/2)(ΔS)²,

where γ is the option's gamma. For a long call position, both delta and gamma are positive. If the stock price rises (ΔS > 0), the gamma term implies that the call option value rises by more than the delta-equivalent amount; if the stock price falls (ΔS < 0), the call option value falls by less than the delta-equivalent amount. Hence the gamma term contributes an increase in the value of the call option both when the stock price rises and when it falls. However, in the case of a short position in the call option, both delta and gamma are negative. Now if ΔS > 0 the option value falls by more than the delta-equivalent amount, and if ΔS < 0 the option value rises (becomes less negative) by a smaller amount than is implied by the delta term. Readers should refer to Section II.D.4.3 for the mathematical details of delta-gamma approximation and for further examples of its application.
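The quality of these two approximations can be checked directly against full repricing. The sketch below does this for a Black-Scholes call (the bs_call helper is defined here for illustration; the parameter values anticipate those of Example III.A.2.9 and the ±5% moves are arbitrary choices):

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price, delta and gamma of a European call (no dividends)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    price = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    return price, norm.cdf(d1), norm.pdf(d1) / (S * sigma * np.sqrt(T))

S, K, r, sigma, T = 1.0, 1.0, 0.05, 0.25, 0.5
c0, delta, gamma = bs_call(S, K, r, sigma, T)
for dS in (-0.05, 0.05):
    exact = bs_call(S + dS, K, r, sigma, T)[0] - c0   # full repricing (no time decay)
    first = delta * dS                                 # delta approximation
    second = delta * dS + 0.5 * gamma * dS**2          # delta-gamma approximation
    print(f"dS={dS:+.2f}: exact={exact:+.5f}, delta={first:+.5f}, "
          f"delta-gamma={second:+.5f}")
```

For both an up and a down move, the delta-gamma P&L sits above the delta-only P&L, which is the favourable gamma effect described above.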
The bottom line is that positive gamma contributes a favourable impact to the value of a position, whether the price of the underlying rises or falls; conversely, negative gamma contributes an adverse impact, whether the price of the underlying rises or falls. This observation motivates the following as one possible modification of the VaR for an option position, to account for the second-order impact of the gamma term:

VaR ≈ δ Z_α σ S − (γ/2)(Z_α σ S)².    (III.A.2.11)

Note that the VaR is reduced if gamma is positive and increased if gamma is negative.

Example III.A.2.9: Option VaR (delta-gamma approximation)

Suppose we wish to estimate the VaR of a European call option. The parameters of the option are: S (underlying price) = X (strike price) = 1, r (risk-free rate) = 5%, option maturity = half a year, and σ (annual volatility of underlying) = 25%. We wish to estimate the VaR using a confidence level of 95% and a holding period of 10 days.

One approach is to use the delta approximation (III.A.2.10), in which the sigma term is multiplied by √(10/250) = 1/5 to convert the holding period from one year (250 business days) to two weeks (10 business days, actually). To make use of this approximation, we calculate the option delta using the standard formula for the Black–Scholes delta (see Chapter I.A.8). This gives us δ = 0.591, and hence the delta approximation for the VaR of the call option is

0.591 × 1.645 × 0.25/5 = 0.0486.

If we wish to use a delta-gamma approximation instead, we obtain the option gamma using the standard formula for the Black–Scholes gamma; this turns out to be 2.198. We then input this value and the values of the other parameters into equation (III.A.2.11) to get

0.591 × 1.645 × 0.25/5 − (2.198/2) × [1.645 × 0.25/5]² = 0.0412.

Taking account of the positive gamma term therefore reduces our VaR, as we would expect.

Now suppose we wish to do the same exercises for a long position in a European put. In this case the option delta is –0.409, and the gamma remains the same at 2.198. Applying equation (III.A.2.10) then gives us the delta approximation for the VaR as

–(–0.409) × 1.645 × 0.25/5 = 0.0336,

and the corresponding delta-gamma approximation is found by applying equation (III.A.2.11):

–(–0.409) × 1.645 × 0.25/5 − (2.198/2) × [1.645 × 0.25/5]² = 0.0262.

Again, taking account of the gamma term serves to reduce our VaR.

Much the same approach can be used to give us a second-order approximation to the VaR of a bond portfolio, with the first-order term reflecting the bond's duration and the second-order term reflecting its convexity.
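The four numbers in Example III.A.2.9 can be reproduced with a short, self-contained script; the put delta is obtained from put-call parity (δ_put = δ_call − 1), which is consistent with the example's value of –0.409:

```python
import numpy as np
from scipy.stats import norm

S = K = 1.0; r = 0.05; sigma = 0.25; T = 0.5       # parameters of Example III.A.2.9
Z = 1.645                                           # 95% one-tailed critical value
sig_h = sigma * np.sqrt(10 / 250)                   # sigma over 10 days = 0.25/5

d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
delta_call = norm.cdf(d1)                           # 0.591
delta_put = delta_call - 1.0                        # -0.409
gamma = norm.pdf(d1) / (S * sigma * np.sqrt(T))     # 2.198, same for call and put

for name, d in (("call", delta_call), ("put", delta_put)):
    var_delta = abs(d) * Z * sig_h * S                        # (III.A.2.10)
    var_dg = var_delta - 0.5 * gamma * (Z * sig_h * S) ** 2   # (III.A.2.11)
    print(f"{name}: delta VaR = {var_delta:.4f}, delta-gamma VaR = {var_dg:.4f}")
# call: 0.0486 and 0.0412; put: 0.0336 and 0.0262, as in the example
```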
It also may be caused by gains or losses from intra-day trading that are, by original design, not reflected in any of the three main approaches to VaR estimation.

When conducting the first and more traditional approach to backtesting, it is important to review the official P/L series to establish their relevance to the exercise. Accounting systems are not designed to maintain consistent time series except over fixed reporting periods, such as a fiscal quarter. For shorter sub-periods such as a day, which are often the periods of interest for market risk estimation and backtesting, accounting systems attempt to maintain accurate period-to-date information only. Thus, if there is a mistaken entry that results in a large but erroneous daily gain or loss, in most cases this will be adjusted by a correcting entry the following day (or later) to bring the fiscal-period-to-date figures into line. Obviously this will result in misleading daily P/L figures for both the day the mistake was made and the day the correcting entry was booked. It is therefore important, although not always practiced, to compile the actual P/L to be used for backtesting on a current basis. This allows investigation and documentation of accounting errors, and their appropriate treatment in the risk system, while the details are still fresh in people's minds. Failing this, the actual P/L series may be sufficiently flawed to make meaningful comparison impossible.

When backtesting reveals weaknesses in the VaR estimation system, it is important to be able to isolate the source of the problem. This is where the second form of backtesting is useful. The second approach involves comparison of VaR estimates with the hypothetical P/L that would have resulted in the applicable periods if all the trades at the beginning of each period were simply revalued based on end-of-period market prices. This eliminates the impact of day trading that affects the actual P/L, but one must be careful in constructing the hypothetical ex-post P/L results. Assume, for example, that these are based on the same valuation methods used in the VaR estimation process, and that these methods are flawed because of one or more of the first three causes noted above. Then the comparison of VaR to these ex-post hypothetical P/L estimates may look acceptable when, in fact, they are both subject to the same flawed calculation methods. This can result in attributing inaccurate VaR estimates to the impact of day trading when the actual problem still lies in the estimation process itself.

Obviously the key comparison in backtesting is whether the actual or hypothetical P/L series exceeds the corresponding ex-ante VaR estimate (or, more generally, VaR estimates predicated on different confidence levels) with the predicted frequency (or frequencies). More often than not, actual P/L (adjusted for accounting errors and their subsequent correction) tends to produce a lower than expected frequency of observations outside the VaR estimate than is consistent with the probability used in those estimates. This appears to be because day trading allows positions to be closed quickly when markets become volatile, thereby reducing actual losses compared to holding a static portfolio for a full 24 hours.
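The frequency comparison at the heart of backtesting is easy to mechanise. The sketch below counts exceptions and adds a two-sided binomial test of the observed count against the predicted frequency; the binomial test is a standard formalisation and an addition here, not something prescribed by the text (which also stresses that the timing and clustering of exceptions matter, something this simple count ignores):

```python
import numpy as np
from scipy.stats import binom

def backtest(pnl, var, coverage=0.99):
    """Compare ex-post P/L with ex-ante VaR (VaR quoted as a positive number)."""
    pnl, var = np.asarray(pnl), np.asarray(var)
    exceptions = pnl < -var                     # days on which the loss exceeded VaR
    n, x = len(pnl), int(exceptions.sum())
    expected = (1 - coverage) * n
    # two-sided binomial p-value for a count as extreme as the one observed
    p = 2 * min(binom.cdf(x, n, 1 - coverage), 1 - binom.cdf(x - 1, n, 1 - coverage))
    return x, expected, min(p, 1.0)
```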
III.A.2.9 Why Financial Markets Are Not 'Normal'

The central limit theorem, sometimes referred to as the law of large numbers, is a fundamental statistical insight with far-reaching consequences. It states that the distributions of sums and averages of random variables exhibit a traditional bell curve, or normal distribution, even when the individual variables are not normal. While exceptions exist, this holds true for almost any stable random variable found in nature. This gives the normal distribution certain plausibility, and normal distributions are in fact commonly observed in the natural sciences. It is therefore not surprising that in the early days of modern finance there was some serious debate over whether the distribution of changes in market data departed from a normal distribution in a systematic way. Today the presence of high kurtosis or 'heavy tails' in such distributions is a well-accepted fact. In trying to incorporate such behaviour into market risk analysis, it is important to consider why such persistent departures from the pervasive normal distribution should occur.

A key assumption behind the central limit theorem is that the individual observations of random variables going into an average or sum are statistically independent. More often than not this is a reasonably good description of the thousands, or even millions, of individual buy and sell decisions that drive changes in demand and supply on any given day. Since the market clearing price reflects the net balance of these largely independent decisions, it is not surprising that changes in such prices often exhibit a roughly normal distribution. This is, however, not always the case.

Consider an example totally unrelated to finance. Suppose you equip the passengers of a single-deck cruise ship with a device that allows you to locate them exactly at any given moment. Then proceed to calculate once every minute the centre of gravity of all these locations with reference to the two-dimensional framework of the ship, and plot the resulting distribution. At most times passengers will be in a variety of locations based on their personal preferences, their energy levels, their mood of the moment and the available alternatives. The resulting distribution of their centre of gravity over time will be a cloud of points bunched around the centre of the available passenger areas. We would expect it to exhibit something very close to a bivariate normal distribution.

Now assume there is an announcement over the ship's loudspeaker that a pod of whales is breaching off the port bow. We would see a sudden outlier in the distribution as passengers rush to find a good viewing spot among the limited spaces available. In the immediate aftermath of the announcement, all passengers know several things:

- There is an opportunity to see something quite unique.
- The time to see it is limited.
- There is an ideal location for viewing the phenomenon.
- Everyone else knows what they know.
- In addition, everyone knows that everyone else knows.

Each passenger reacts to the knowledge that speed is of the essence if a good viewing place is to be secured. It is this final point, this 'mutual self-awareness', that makes for the sudden mad rush to the port bow. If the ship was nearly empty, or if only a few people were aware of the opportunity or were likely to take advantage of it (if, say, most passengers were confined to their cabins with sea sickness), the sense of urgency would be greatly reduced. The core structural assumptions that underpin a normal distribution have temporarily broken down, and we see a sudden extreme observation.

There is a relevant scene in the movie Rogue Trader, about Nick Leeson and the Barings debacle.
He is awakened by a call at home in the early hours of the morning from another member of the firm. The voice at the other end of the phone says urgently, 'Turn on CNN!!' The TV in the bedroom flickers to life, showing scenes of the Kobe earthquake. The voice at the other end of the phone says, 'This is just going to kill the market!!!' In effect this is much like the announcement on the ship, but on a global basis. Observers around the world are suddenly focused on a common crystallising event with obviously directional implications for the market. Suddenly the millions of decisions that drive the market are no longer randomly independent; rather, they are subject to a common shared perception. The consequences are fairly obvious.

Various statistical methods are used to try to build such behaviour into distributions of risk factor returns. But what these approaches cannot do is predict in advance when such events will occur. Thoughtful consideration of such potential scenarios, especially those that present special threats given existing open positions in the book, is therefore an essential component of effective market risk management. Such analysis remains in the realm of experience and seasoned judgement that no amount of advanced analytical technique can replace. This topic will be discussed in detail in Chapter III.A.4.

III.A.2.10 Summary

In this chapter we have explained the underlying methodology for the three basic VaR model approaches – analytic, historical simulation and Monte Carlo simulation. We have also considered simple analytic formulae for VaR based on delta-gamma approximations to simple portfolios of standard European options. However, in reality things are not that straightforward. A main focus of this chapter has been the analytic VaR models that are only valid when portfolio values are linearly related to the underlying risk factors and portfolio returns are normally distributed. But portfolio returns are not normally distributed and, as is evident from Chapter I.B.9, options portfolios typically contain products with many underlying risk factors and various exotic features. The next chapter of the Handbook will consider how the basic VaR methodology that we have introduced here may be extended to more realistic assumptions about the products traded and the behaviour of asset returns.
III.A.3 Advanced Value at Risk Models

Carol Alexander and Elizabeth Sheedy (57, 58)

57 Carol Alexander is Chair of Risk Management and Director of Research, ISMA Centre, Business School, University of Reading, UK. Elizabeth Sheedy is Associate Professor, Applied Finance Centre, Macquarie University, Australia.

58 Many thanks to Kevin Dowd and David Rowe for their careful editing of this chapter.

III.A.3.1 Introduction

The previous chapter introduced the three basic VaR models: the analytic model and two simulation models, one based on historical observations and another that uses a covariance matrix to generate correlated scenarios by Monte Carlo (MC) simulation. The fundamental assumptions applied in the basic forms of each model are summarised in Table III.A.3.1. A number of variations on these assumptions are in common use, and some of these will be reviewed later in this chapter.

Table III.A.3.1: Basic VaR model assumptions

                                       Analytic VaR        Historical VaR           Monte Carlo VaR
Risk factor/asset distributions        Normal              No assumption            Normal
P&L distribution                       Analytic (normal)   Empirical (historical)   Empirical (simulated)
Requires covariance matrix?            Yes                 No                       Yes
Risk factor/asset returns i.i.d.? (59) Yes                 Yes                      Yes

59 'i.i.d.' stands for 'independent and identically distributed'.

Clearly the analytic and the MC VaR models are very similar. Indeed, if one were to apply the basic MC VaR model to a linear portfolio – i.e. one without options, for which the portfolio value is a linear function of the underlying risk factors – the VaR estimate should be similar to the analytic VaR model estimate. If it were different, that would be because not enough simulations were used and a 'small sample' error had been introduced. But of course, one would not apply MC VaR to a linear portfolio: the method is quite computationally intensive, and would only be applied to portfolios containing options.

The Excel spreadsheet SimpleVaR.xls examines the VaR of a simple linear portfolio ($1000 a point on the Johannesburg Top40 index, actually). From the single historical price series in the spreadsheet we compute the 1% 10-day VaR as:

$878,233 according to the analytic VaR model,
$839,829 according to the historical VaR model, and
$878,233 (hopefully) according to the MC VaR model.

The MC VaR model result changes every time we perform the simulation. In this spreadsheet only 1000 simulations are used, so the result can vary a lot each time we press F9 to generate new simulations. If we used 100,000+ simulations, the MC VaR result would be $878,233 (or very close to that), as in the analytic VaR model.

The main problem with the basic analytic and MC VaR models is that they are both based on the very restrictive assumption that risk factor (or asset) returns are i.i.d. normally distributed. Most practitioners are aware that the assumption of normality is violated in reality, with returns often exhibiting skewness and leptokurtosis. This has led to the popularity of the historical simulation method, which can be applied to any type of portfolio and which can give results quite different from the other two methods. If the portfolio P&Ls are non-normal, then the historical VaR should be more accurate, assuming the sample contains enough data points to estimate the 1% lower percentile of the historical distribution with sufficient accuracy. However, this requires a very large amount of historical data: at least 1000 days. (60) Moreover, the portfolio price series is computed over these 1000 or so days using the current portfolio weights, and this may not be plausible (would you have traded the same portfolio 5 years ago, when the market was in all probability quite different?). And even if the data were available, we still have to assume that returns are i.i.d. and use the 'square root of time' rule to compute the 10-day VaR (see Section III.A.3.2). (61)

60 Regulators recommend 3–5 years of daily data, i.e. 750–1250 observations, but for internal purposes less than a year of data may be used.

61 You cannot use 10-day returns instead, since these would have to be non-overlapping, and one would need a history spanning decades to obtain enough data!

The use of a covariance matrix in the analytic and the MC VaR models has both advantages and limitations. One advantage is that the covariance matrix will not normally be based on a very large amount of historical data. Regulators, for instance, require at least one year of historical data to be employed when computing regulatory capital using a VaR model (see Section III.A.3.4.1). Hence the assumption of current portfolio weights is not as problematic as it is with the historical VaR model. However, an important limitation is that the use of a single covariance matrix assumes that all co-variations between risk factors are linear, but they are not: it is well known that the extreme variations in major risk factors can be more highly correlated than the 'normal' variations. We shall address this issue later in the chapter; it is fundamentally a question of model risk.
The nub of the problem is this: how can VaR be estimated using assumptions that are consistent with the stylised facts we observe in financial markets? Most of this chapter is directed at this problem. Section III.A.3.2 examines the standard distributional assumptions more closely, and the ways in which they are violated. Section III.A.3.3 examines two different approaches to modelling volatility clustering: EWMA and GARCH. These models are then applied to the problem of VaR estimation in Section III.A.3.4. Section III.A.3.5 examines some other solutions to the problem of heavy tails in VaR estimation: Student's t, EVT and normal mixtures. It concludes, somewhat counter-intuitively, that the most pressing problem for those modelling market risks is not heavy tails, but volatility clustering.

The remaining sections of the chapter address some other advanced topics in VaR modelling. It is often desirable to decompose VaR into components for purposes of limit setting and performance measurement; these decomposition techniques are considered in Section III.A.3.6. Section III.A.3.7 examines principal component analysis, an important tool for analysing bond and futures portfolios. Section III.A.3.8 concludes the chapter.

III.A.3.2 Standard Distributional Assumptions

When measuring risk, for example in a VaR calculation, it is often necessary to make some assumption about the distribution of portfolio returns. The most widespread choice is the assumption that returns are i.i.d. normal. That is, each return is an independent realisation from the same normal distribution (see Chapter II.E). Normality implies that the distribution can be completely described with only two parameters, the mean and the variance; the skewness and (excess) kurtosis should equal zero. Yet many financial analysts are sceptical about the assumption of normality, and all three of the basic approaches are defective in different ways when these assumptions are used.

To illustrate this point, let us consider daily returns for USD/JPY for the period from January 1996 to July 2004, as illustrated in Figure III.A.3.1. Table III.A.3.2 highlights the stylised facts, or characteristics, commonly observed in financial returns:

- The mean of the daily returns is very close to zero.
- Risk is expressed in two ways: standard deviation per day, and volatility, which is the annualised standard deviation. If we assume returns are i.i.d., then the square root of time rule applies (this rule is explained in Chapter II.B). It states that the standard deviation
of h-day returns is √h times the standard deviation of one-day returns. (62) So in Table III.A.3.2, volatility = 0.00718 × √250 = 0.1135. (63)

- The skewness (see Chapter II.B) is estimated using the Excel function 'SKEW' and is found to be negative.
- The excess kurtosis (see Chapter II.B) is estimated using the Excel function 'KURT' and is found to be positive, indicating that the distribution has heavy tails relative to the normal.

62 To be precise, the rule is based on log returns, defined as ln(P_{t+h}/P_t), where P_t is the price at time t. Log returns have the nice property that the sum of h consecutive one-day log returns is the h-day log return. Also, if h is small, it is easy to show that log returns are very close indeed to the 'absolute' return (P_{t+h} − P_t)/P_t.

63 Note that throughout this Handbook we adopt the standard convention of quoting volatility on an annual basis. To see that this is consistent, let σ denote the standard deviation of one-day returns, so that the volatility based on one-day returns is √250 σ. Under the square root of time rule, the standard deviation of h-day returns is √h σ, and since there are 250/h time periods of length h days per annum, the volatility based on h-day returns is √h σ × √(250/h), which also equals √250 σ.

Figure III.A.3.1: USD/JPY returns, Jan. 1996 to July 2004

The implication of negative skewness and positive excess kurtosis is that the true probability of a large negative return is greater than that predicted under the normal distribution. This finding has potentially grave consequences for the measurement of VaR at high confidence levels. If VaR is calculated under the assumption of normality, then VaR will understate the true risk of a disastrous outcome. Consequently, capital reserves may be an insufficient buffer to withstand disaster at the desired confidence level.
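These summary statistics are straightforward to compute from a series of daily log returns. A minimal sketch (the bias-corrected options are chosen to approximate Excel's SKEW and KURT, which the text uses; results may differ very slightly):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def stylised_facts(returns):
    """Summary statistics of daily log returns, as in Table III.A.3.2."""
    r = np.asarray(returns)            # e.g. np.diff(np.log(prices))
    sd = r.std(ddof=1)                 # standard deviation per day
    return {
        "mean": r.mean(),
        "std per day": sd,
        "volatility": sd * np.sqrt(250),          # square root of time rule
        "skewness": skew(r, bias=False),          # close to Excel SKEW
        "excess kurtosis": kurtosis(r, bias=False),  # close to Excel KURT
    }
```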
Table III.A.3.2: Analysis of USD/JPY returns, Jan. 1996 to July 2004

Number of observations                             2161
Mean return                                        2.60047 × 10⁻⁵
Standard deviation per day                         0.718%
Volatility                                         11.35%
Skewness                                           –0.8255
Excess kurtosis                                    6.241
No. of observations below the lower bound
of the 99% confidence interval                     48 vs. 22 expected

Under the normal distribution, the lower bound of the one-tailed 99% confidence interval is defined by the mean less 2.33 standard deviations. In this case the lower bound equals –2.33 × 0.00718 = –0.0167, so we should expect that only 1% of the return observations will be lower than this figure. With 2161 observations, only 22 returns (1% of 2161) should lie in this lower tail. In fact, examination of these data reveals that 48 returns are below the lower bound. This kind of analysis is often performed to demonstrate that finance data violate the assumption of normality.

The problem with this analysis, however, is that it incorrectly assumes that volatility has remained constant for the entire sample period of 8.5 years! Reviewing Figure III.A.3.1, we can see that the volatility of USD/JPY returns has varied considerably, with the period of greatest volatility being 1998. This was a time when most financial markets exhibited high risk following the Russian crisis; in 1998 we see that oscillations about the mean were much greater than at other times. Of the 48 observations in the lower tail of the confidence interval, 18 fall in 1998. On one day at the height of the Russian crisis, the US dollar depreciated by more than 6% against the Japanese yen. The largest positive return (in excess of 3%) occurred during the same period of high volatility. Note that it is the absolute size of the return that is important, rather than its direction: high-volatility periods are characterised by large returns, both positive and negative.

This leads us to an alternative way of understanding the data: the large number of observations that appear in the lower tail is a result of changes in volatility throughout the sample period. In periods of high volatility, the confidence interval is correspondingly wider. Instead of a lower bound of –0.0167, the lower bound would be very much lower (for example, –0.0335 if volatility were to double). If we take account of the increase in volatility, it is likely we will find that the number of observations in the tail is close to 1%, as expected. In short, the main problem with financial data is not skewness/kurtosis (often referred to as heavy tails), but the fact that volatility is changing over time.
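The tail count in Table III.A.3.2 corresponds to the following check, which can be applied to any return series (a sketch; the 2.33 multiplier is the one-tailed 99% normal critical value used in the text):

```python
import numpy as np

def lower_tail_count(returns, z=2.33):
    """Observed vs. expected number of returns below mean - z * sd."""
    r = np.asarray(returns)
    bound = r.mean() - z * r.std(ddof=1)
    return int((r < bound).sum()), 0.01 * len(r)   # e.g. 48 vs. 22 for USD/JPY
```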
In fact, if we take account of these changes in volatility, then the assumption of normality may actually be quite a reasonable one. Having taken account of volatility clustering, the issue of heavy tails becomes much less significant.

This brings us to a further discussion of 'i.i.d.' returns. If returns are independent, then yesterday's return will have no bearing on today's return. Tossing coins is a good example of independent outcomes: even if I toss 10 heads in a row, the probability of heads on the next toss is still 50% (assuming a fair coin!), because the next toss is unaffected by what has gone before. However, the empirical analysis of financial returns shows that the assumption of independence is unsupported: the size (but not the direction) of yesterday's return does have implications for today's return. A large return yesterday (in either direction) is likely to be followed by another large return, in either direction. This concept is commonly referred to as 'the heat wave effect' or 'volatility clustering'. We can test for volatility clustering by examining the autocorrelation in squared returns; (64) squared returns are used because squaring removes the sign of the return, so we can focus purely on its magnitude rather than its direction. Financial data generally exhibit significant positive autocorrelation in squared returns, in which case the data are not i.i.d.

64 See Chapter II.B for further discussion of this concept.

If returns are 'identically distributed', then the parameters of the distribution (mean and variance) should be constant over time. This assumption is clearly violated when the variance parameter is changing. In reality we observe changes in volatility and volatility clustering. Indeed, the world of finance research was for ever changed in the 1980s when volatility clustering was first identified by Robert Engle and his associates. In 2003 Robert Engle won (jointly) the Nobel Prize for Economics, largely because of his groundbreaking contribution in this area. Volatility clustering is arguably the most important empirical characteristic of financial data, and the most useful financial models take account of this fact.

To summarise this section: traditional financial models have often assumed that returns are i.i.d. normal, yet in reality we observe changes in volatility and volatility clustering. We will show in the following sections that these characteristics can be modelled successfully.

III.A.3.3 Models of Volatility Clustering

If we believe that today's volatility is positively correlated with yesterday's volatility, then it is appropriate to estimate conditional volatility, that is, volatility that is conditional on the recent past. We will discuss two methods for achieving this: the exponentially weighted moving average (EWMA) and generalised autoregressive conditional heteroscedasticity (GARCH). The latter is more difficult to implement but offers some potential advantages. The application of these methods to VaR models will be discussed in Section III.A.3.4.

III.A.3.3.1 Exponentially Weighted Moving Average

The EWMA method for estimating volatility was popularised by the RiskMetrics Group. (66) Today's estimate of variance is

σ̂²_t = (1 − λ) r²_{t−1} + λ σ̂²_{t−1},    (III.A.3.1)

where λ is a 'smoothing constant' and r_{t−1} is the most recent day's return. (65) The EWMA variance reacts immediately to market shocks: after a large return, in either direction, the variance estimate increases through the first term on the right-hand side of (III.A.3.1), and the greater is 1 − λ, the greater will be the size of the reaction to a return shock. Notice also that, since λ is positive, today's variance is positively related to yesterday's variance. The higher the value of λ, the more will high variance tend to persist after a market shock, so λ may also be referred to as the 'persistence' parameter. EWMA therefore captures the idea of volatility clustering. In fact, the EWMA can be thought of as a very simple GARCH model, and in Section III.A.3.3.2 we shall see that this type of reaction to, and persistence following, a market shock is the characteristic of a GARCH model. According to RiskMetrics, a value for λ of around 0.94 is generally appropriate when analysing daily data.

However, whilst GARCH volatility models are based on firm statistical foundations, EWMA volatility models are not, for the following reasons. There is no proper statistical estimation procedure for the smoothing constant: the user simply assumes some value for λ. Note also that, technically, in (III.A.3.1) the volatility estimate depends on the entire historical data set rather than a limited past history. A pragmatic alternative calculation that is much easier to replicate for the auditors is to truncate the historical sample at n past observations, where n is defined as the point at which λⁿ drops below some critical value C. For instance, if λ = 0.94 and C = 0.002, then n = 100 observations. The EWMA on n observations can be written as a finite sum:

σ̂²_t = Σ_{i=1..n} λ^{i−1} r²_{t−i} / Σ_{i=1..n} λ^{i−1}.

65 This can be calculated in Excel by typing (III.A.3.1) into the formula bar, or by using the exponential smoothing analysis tool.

66 See Section 5.2 of the RiskMetrics Technical Document, 1996, available at www.riskmetrics.com/techdoc.html#rmtd
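The recursion (III.A.3.1) takes only a few lines to implement. A minimal sketch follows; seeding the recursion with the first squared return is one common convention and an assumption here, not something the text prescribes:

```python
import numpy as np

def ewma_variance(returns, lam=0.94):
    """EWMA conditional variance via the recursion (III.A.3.1)."""
    r = np.asarray(returns)
    var = np.empty(len(r))
    var[0] = r[0] ** 2                 # seed (an arbitrary but common choice)
    for t in range(1, len(r)):
        var[t] = (1 - lam) * r[t - 1] ** 2 + lam * var[t - 1]
    return var

# Annualised conditional volatility, as plotted in Figure III.A.3.2:
# vol = np.sqrt(250 * ewma_variance(returns))
```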
In portfolio models, where the variance is calculated using a covariance matrix, some portfolios can have negative variance unless this matrix is positive semi-definite (see Chapter II.D). For this reason one cannot form an EWMA covariance matrix using different values of λ for different assets: in fact the same value of λ must be used for all variances and covariances in the matrix. More details are given in Section III.A.3.4.1.

Figure III.A.3.2: EWMA volatility – USD/JPY returns, Jan. 1996 to July 2004

In Figure III.A.3.2 the EWMA model has been applied with λ = 0.94. The figure highlights the variability of conditional volatility. Recall from the previous section that unconditional volatility for this same time period was calculated as 11.35% pa; this is equivalent to a constant volatility assumption. Following the market shocks of 1998, conditional volatility reaches a maximum of 35%, while in the quieter periods of 1996 and early 2004 conditional volatility briefly dips below 5% pa.

The EWMA variance estimate is converted to a volatility by taking the square root, to obtain the standard deviation, and then applying the square root of time rule: the EWMA estimate at time t of the standard deviation of h-day returns is σ̂_t √h, quoted as an annual volatility by multiplying by √(250/h). Hence the EWMA model is not really appropriate for estimating the market evolution over time horizons longer than a few days.

Having estimated conditional volatility in this way, we can then standardise the daily returns: each daily return is divided by the relevant standard deviation. We do this to try to make the returns comparable. Prior to standardisation, each return comes from a distribution having a different volatility and is therefore not strictly comparable with the others. We then repeat the analysis of Table III.A.3.2, but this time using the standardised returns (see Table III.A.3.3).

Table III.A.3.3: Analysis of standardised USD/JPY returns, Jan. 1996 to July 2004

Number of observations                             2140
Skewness                                           –0.1974
Excess kurtosis                                    0.2727
No. of observations below the lower bound
of the 99% confidence interval                     29 vs. 21 expected

Having standardised returns on conditional volatility, we find that the problems noted earlier are much diminished. Skewness and excess kurtosis are now much closer to zero, and the number of observations in the lower tail is much closer to that expected under normality. This suggests that the issue of heavy tails, often noted by finance analysts, can be partly explained by volatility clustering.

III.A.3.3.2 Generalised Autoregressive Conditional Heteroscedasticity Models

GARCH models are similar to EWMA in that both focus on the issue of volatility clustering. The word heteroscedasticity is Greek for 'different scale', so GARCH models are concerned with the process by which the scale of returns, or volatility, is changing. What all GARCH models share, however, is a positive correlation between risk yesterday and risk today, that is, an 'autoregressive' structure in risk. GARCH models are 'generalised' in the sense that they can be varied, almost infinitely, to take account of the factors specific to a particular market. The simplest GARCH model consists of two equations that are estimated together. The first is the conditional mean equation and the second is for the conditional variance:

r_t = c + ε_t,
σ²_t = ω + α ε²_{t−1} + β σ²_{t−1},   ω > 0, α ≥ 0, β ≥ 0.    (III.A.3.2)

The parameters ω, α and β of the GARCH model are usually estimated using maximum likelihood estimation (MLE). This involves using an iterative method to find the parameter values that maximise the likelihood function. In normal GARCH models, that is, when the distribution of ε_t conditional on all information up to time t is normal with variance σ²_t, the likelihood function is built from the normal density (see Chapter II.E).
In its simplest form (shown here) the conditional mean equation merely adjusts for the mean (c), leaving an 'unexpected' return ε_t. The conditional variance equation appears very similar to the EWMA equation, where α stands in place of 1 − λ and β stands in place of λ, but an extra constant term (ω) is also included. Perhaps the most important difference between EWMA and GARCH is the fact that in GARCH there is no constraint that the sum of the coefficients (α + β) should equal one.

If the sum α + β is less than one (the more usual case) then volatility is said to be mean-reverting, and the rate of mean reversion is inversely related to this sum. This means that the variance will, in the absence of a market shock, tend towards its steady-state variance, defined by

σ̄² = ω / (1 − α − β).    (III.A.3.3)

From (III.A.3.3) we can see that if α + β = 1 (as in EWMA) then the denominator equals zero, so the steady-state variance is undefined. It is possible that data are best explained when the estimated coefficients do add to one, (67) but in this case the GARCH forecasts behave just like those from a constant volatility model: variance is not mean-reverting and is assumed constant as we project forwards in time. Otherwise, GARCH volatility forecasts behave like the volatility term structures we observe in implied volatilities, where implied volatilities of long-term options do not vary as much as the implied volatilities of short-term options.

67 When this happens, the model is called an integrated GARCH (IGARCH) model.

Example III.A.3.1: GARCH model for spot USD/JPY

We estimate a GARCH(1,1) model for daily log returns from January 1996 to July 2004. The conditional variance equation is estimated using a maximum likelihood technique:

σ̂²_t = 3.78 × 10⁻⁷ + 0.03684 ε²_{t−1} + 0.95571 σ̂²_{t−1}.

The constant term (3.78 × 10⁻⁷) is statistically significant but very small. Conditional variance for the USD/JPY is quite persistent (with a persistence coefficient of 0.95571) and not particularly reactive (reaction coefficient of 0.03684) compared with some other markets. This means that the initial reaction to new information will be relatively muted, but its effect relatively long-lasting. As the sum of these two coefficients is less than unity, we can say that conditional variance is mean-reverting to a steady-state variance defined by (III.A.3.3). Substituting the estimated parameters into this equation, the steady-state variance is 0.0000505794, equivalent to an annualised volatility of 11.24%. This is close to the sample volatility of 11.35% pa reported in Table III.A.3.2.
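Given estimated parameters, the conditional variance recursion and steady state are easy to reproduce. A sketch using the published estimates (starting the recursion at the steady state is an assumption of this sketch; in practice the coefficients would come from a maximum likelihood fit, for example via a dedicated package or a hand-rolled likelihood):

```python
import numpy as np

omega, alpha, beta = 3.78e-7, 0.03684, 0.95571   # estimates from Example III.A.3.1

def garch_variance(eps, omega, alpha, beta):
    """GARCH(1,1) conditional variance filter over mean-corrected returns eps."""
    var = np.empty(len(eps))
    var[0] = omega / (1 - alpha - beta)          # seed at the steady state (III.A.3.3)
    for t in range(1, len(eps)):
        var[t] = omega + alpha * eps[t - 1] ** 2 + beta * var[t - 1]
    return var

steady = omega / (1 - alpha - beta)
print(f"steady-state variance = {steady:.4e}, annual vol = {np.sqrt(250 * steady):.2%}")
# With these rounded coefficients this gives about 5.07e-05 and 11.3%;
# the text's more precise estimates give 0.0000505794 and 11.24%.
```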
Figure III.A.3.3: GARCH vs. EWMA volatility – USD/JPY returns

Figure III.A.3.3 shows the conditional volatility resulting from the GARCH model. The two series, EWMA and GARCH, track each other closely, so the figure emphasises the similarity between the two approaches.

We have argued that volatility clustering is the main cause of the apparent heavy tails observed in financial data. If GARCH modelling is successful, then it should account for these heavy tails, leaving residuals that are i.i.d. normal. Is this in fact the case? It can be tested by examining the series of standardised returns. To obtain the standardised returns we estimate a GARCH model and from it create a series of daily conditional standard deviation estimates. We then divide each daily return by the relevant conditional standard deviation to obtain a standardised return, and test the standardised series for normality. If this series is normally distributed then we can conclude that volatility clustering fully explains the extreme moves. If not, then further action may be necessary to adjust for the extreme moves (or for the heavy tails of the distribution). For instance, it may be necessary to use a more complex GARCH specification: we might use an asymmetric specification for the conditional variance that can account for a 'leverage effect' in equities, and/or a conditional distribution for ε_t that is non-normal. There is a huge body of research literature on GARCH models, but this is beyond the scope of the PRM exam.

III.A.3.4 Volatility Clustering and VaR

Volatility clustering has important implications for VaR. Consider the situation in which new and unexpected information comes to the market, causing a large reaction in market prices, as it did when Russia defaulted on its debt in 1998. Our knowledge of volatility clustering tells us that this market shock is likely to be followed by large returns (in either direction) for some time. Ideally, our VaR measure will increase significantly, sending the appropriate signal to risk managers either to reduce risk through hedging or to ensure that capital is adequate to withstand the higher-risk environment. The choice of volatility measure will therefore have major implications for the VaR measure, and consequently for capital adequacy.

Figure III.A.3.4: USD/JPY volatility in 1998 (EWMA vs. rolling one-year volatility)

Figure III.A.3.4 compares two volatility measures for USD/JPY returns for calendar year 1998. The EWMA measure of conditional volatility reacts swiftly as the crisis unfolds in mid-1998. The other series shows unconditional volatility calculated using rolling one-year samples. Unfortunately, this measure is very slow to react to market shocks: since each day in the sample is equally weighted, the new information has a small weight of only 1/250.
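The contrast in Figure III.A.3.4 can be recreated for any return series by plotting the two measures side by side. A sketch of the slow-moving measure, to be compared with the ewma_variance function given earlier:

```python
import numpy as np

def rolling_vol(returns, window=250):
    """Equally weighted rolling-window volatility (annualised). Each new
    observation carries a weight of only 1/window, hence the slow reaction."""
    r = np.asarray(returns)
    out = np.full(len(r), np.nan)
    for t in range(window, len(r)):
        out[t] = r[t - window:t].std(ddof=1) * np.sqrt(250)
    return out

# Conditional counterpart for the comparison:
# ewma_vol = np.sqrt(250 * ewma_variance(returns))
```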
One way to do this is to calculate the covariance matrix using the EWMA measures of variance and covariance. This regulation encourages the use of volatility measures that react very slowly to new information such as the one illustrated above.3 we explained that normality is a much more reasonable assumption to make if we properly account for changes in volatility. even if the total combined losses are equal. which are slow to respond to new information when estimated using long samples.A.6.6 for descriptions of these standard measures. this is exactly the wrong way to proceed.1 VaR using EWMA Volatility clustering can be relatively easily incorporated into VaR measures using the EWMA approach.The PRM Handbook – Volume III managers are concerned about the timing and size of exceptions.3. In our view. Thus it is attractive if it is feared that the standardisation process has not entirely eliminated the heavy tails evident in raw returns. 69 See Sections II. t 1 ˆ 12.A. otherwise portfolio volatility may not be defined (see Section II. Unlike GARCH. The standard deviation in (III.t 1 . under EWMA.3.4) critical value. as explained in Section III.t 1r2. VaR in each asset. the average volatility over any forecast horizon is a constant. Equivalently. where Z is the standard normal (III.1): (i) Representation at the asset level.com 128 . Note that when constructing a large covariance matrix it is always important to ensure that it is positive semidefinite.4) the h-day VaR estimate at the significance level is given by: VaR .D.1.t–1 are yesterday’s returns for assets 1 and 2.3. variance is not mean-reverting. where p = P w is the vector of nominal amounts invested Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. If we use a different value of for each variance and covariance term.A.A.2. .3. 10 business days. where r1. 70 70 Equivalently.pdffactory. When calculating VaR we need to forecast portfolio variance for the horizon of interest. wn) is the portfolio weights vector and V is the h-day forecast of the covariance matrix of asset returns. equal to the volatility estimated today. EWMA is a non-stationary model of variance.2.h = Z P .2). say.4) is computed using a forecast covariance matrix of h-day returns as follows (see Section II.t 1 r1. In the analytical method (See Section II. the matrix will not necessarily be positive semi-definite. it can then be used for VaR calculations using either: the analytical method (appropriate for simple linear portfolios) or MC simulation (best for option-affected portfolios).t–1 and r2.3.A. w ' Vw where w = (w1.h =Z P&L Z p ' Vp . This means that the EWMA forecast of any future variance (or covariance) is the same as the estimate made today. RiskMetrics gets around this problem by using the same value for (being 0.94 in the case of daily data) throughout the covariance matrix.The PRM Handbook – Volume III ˆ 12. P is the current value of the portfolio and is the forecast of the standard deviation of the h-day portfolio return. That is. …. respectively. Once the covariance matrix has been defined.D.3. 2: Analytical method with EWMA for two-asset portfolio Consider a simple portfolio with $1m invested in asset 1 and $2m invested in asset 2.The PRM Handbook – Volume III (ii) Representation at the risk factor level. P&L Z p ' Vp . 'V where = ( 1. the results will be the same.3207m. 
But note that these two conversion methods assume both constant volatility and no serial correlation in the forward projection of volatility. What is the 5% 10-day VaR? We have p = (1.038 = $0. 71 To calculate analytic VaR using EWMA we simply calculate today’s portfolio variance as above. You will get an incorrect result if you simulate the one-day VaR and multiply the result by 10 Example III. the 1% 10-day VaR for this portfolio is 2. the same result would be obtained if V is the given matrix but with each element multiplied by 10 and we calculated Z p ' Vp .005 0. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. or we use a 10-day covariance matrix for V.3207 p ' Vp = 0. By contrast. 72 This is high.645 VaR = 0.43m.pdffactory. so almost half the amount invested would be required for risk capital to cover this position! However. Suppose the EWMA model estimated on daily returns for asset 1 gives a variance estimate of 0. So the P&L volatility = 0. because the assets have a very high variance (and covariance).002 .01.014m. where p = P is the vector of sensitivities to each risk 72 Of course. To convert to a 10-day horizon we simply multiply the one-day VaR by 10.005 and the EWMA covariance is 0. the example illustrates an important 71 Equivalently VaR .195 10 = $1.33 0. for asset 2 returns the EWMA variance is 0. which is one reason why the EWMA VaR estimate does not fully reflect volatility clustering. 2) and V is the matrix 0.com 129 .002. = Z h factor in nominal terms.A. for non-linear portfolios the MC VaR method uses a covariance matrix. and you should be sure to use the 10-day covariance matrix directly in the simulations.3. using the square root of time rule.002 0. n) is the portfolio sensitivity vector and V is the h-day forecast of the covariance matrix of risk factor returns.195 = $0. Thus the 5% 10-day 10 = $1. ….01 corresponds to an annual volatility of 2. a daily variance of 0. In either case. The 10-day matrix can be obtained.5 = 158%! In fact. For instance.01 0. by multiplying every element of the one-day covariance matrix by 10.19m and the 5% one-day VaR is 1. 32634 10 5 342 428 3 2 10 5 = 0. it also has some distinct advantages. and the portfolio beta is 1. With a correlation of 0. In this regard EWMA is far superior to the approach.00036 0.00016 for GBP/USD.com 130 . Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. this clearly is a nonsensical result.3.31845m.2)/250 = 0. that the estimated VaR will be more than the total investment.3: Analytical method for portfolio mapped to two risk factors Suppose a US investor buys $2m of shares in a portfolio of UK (FTSE100) shares. The one-day variances are: 0. Note that the $2m exposure to the equity portfolio described above is equivalent in risk terms to a $3m exposure to the FTSE index since beta is 1.3.0225/250 = 0.3 0. Being based on the assumption that portfolio P&L is normally distributed.4. there is a chance. Suppose the FTSE100 and GBP/USD volatilities are 15% and 20% respectively (with corresponding variances of 0.2 VaR and GARCH Alternatively.00009 0.04/250 = 0. we can say that VaR calculations. the issue of volatility clustering may be incorporated into VaR estimates using the more sophisticated GARCH approach.00016 so the 10-day covariance matrix is 10 5 . however small. 
Example III.A.3.3: Analytical method for a portfolio mapped to two risk factors

Suppose a US investor buys $2m of shares in a portfolio of UK (FTSE 100) shares, and the portfolio beta is 1.5. The position is sensitive to two risk factors, the FTSE 100 index and the GBP/USD exchange rate, so p = (3, 2)′: the $2m exposure to the equity portfolio is equivalent in risk terms to a $3m exposure to the FTSE index, since beta is 1.5. Suppose the FTSE 100 and GBP/USD volatilities are 15% and 20% respectively (with corresponding variances of 0.15² = 0.0225 and 0.2² = 0.04) and their correlation is 0.3. What is the 1% 10-day risk factor VaR in US dollars?

The one-day variances are 0.0225/250 = 0.00009 for the FTSE and 0.04/250 = 0.00016 for GBP/USD. With a correlation of 0.3, the one-day covariance is (0.3 × 0.15 × 0.2)/250 = 0.000036. (73) Hence

p′Vp = (3  2) 10⁻⁵ ( 90  36 ) (3)  =  10⁻⁵ (3 × 342 + 2 × 428) = 0.01882,
                    ( 36 160 ) (2)

and the 1% 10-day VaR is therefore 2.32634 × √0.01882 ≈ $0.319m.

73 Note that the one-day covariance matrix is

( 0.00009   0.000036
  0.000036  0.00016 ),

so the 10-day covariance matrix is

( 0.0009   0.00036
  0.00036  0.0016 )  =  10⁻⁵ × ( 90  36
                                 36 160 ).

In concluding this section, we can say that VaR calculations, whether analytical or simulation-based, can be greatly improved by taking account of volatility clustering. In this regard EWMA is far superior to the approach, unfortunately encouraged by regulators, which employs unconditional volatility based on large samples of data.

III.A.3.4.2 VaR and GARCH

Alternatively, the issue of volatility clustering may be incorporated into VaR estimates using the more sophisticated GARCH approach. While GARCH is undoubtedly more challenging to implement than some other methods, it also has some distinct advantages. For example, because GARCH is a more general model it can explain the characteristics of the data more precisely, to
While tomorrow's forecast volatility is significantly higher than today's, thereafter the forecast volatility falls slightly: the GARCH model tells us that, in the absence of any further shock, volatility will gradually revert towards the steady-state volatility. Table III.A.3.4 shows the daily variance forecasts for each day in the 10-day horizon.

Table III.A.3.4: Forecasting volatility with GARCH

Day    Variance     Equivalent volatility % pa
T+1    0.0001698    20.6%
T+2    0.0001663    20.4%
T+3    0.0001630    20.2%
T+4    0.0001598    20.0%
T+5    0.0001569    19.8%
T+6    0.0001540    19.6%
T+7    0.0001513    19.4%
T+8    0.0001487    19.3%
T+9    0.0001463    19.1%
T+10   0.0001440    19.0%
10-day horizon (T+1 to T+10): Sum = 0.0015601, i.e. 19.7% pa

We obtain a forecast of volatility over the next 10 days by summing the 10 daily variances, multiplying by 250/10 and taking the square root. This gives 19.7% pa. The significance of this for the VaR estimate is obvious: using the GARCH conditional volatility forecast will significantly boost the VaR, signalling to risk managers that risk should either be substantially reduced or capital increased to withstand the new high-risk environment. In contrast, the unconditional volatility forecast for the next 10 days is approximately 13%, today's large return having only a marginal impact due to its small weighting of 1/250.

This example also illustrates the point that the square root of time rule is inappropriate in a world of volatility clustering. To assume constant volatility for 10 days is not suitable when the GARCH model tells us that, in this situation, volatility is likely to increase initially and then gradually decline over the 10-day horizon.

For the professional risk manager calculating VaR, the task of implementing GARCH techniques might appear daunting. How might she actually go about it? There are a number of possibilities:
- Analytical VaR, using a covariance matrix based on GARCH variance–covariance forecasts over the risk horizon. This is a convenient solution for linear portfolios, as it allows for changes in volatility over the risk horizon, as explained in Section III.A.3.4.1.
- Monte Carlo simulation, using a GARCH covariance matrix forecast to simulate returns going forward. This allows for changes in volatility, as well as in the underlying risk factors, over the risk horizon – an important advantage for portfolios containing options.
- Historical simulation using volatility-weighted data, but this time standardising returns using GARCH volatility estimates. This could be done quite simply, using a univariate GARCH model for each asset/factor, thus avoiding the need for a large covariance matrix. This method makes no assumption about the distribution of the standardised returns apart from independence.
- The reduced-form GARCH approach, which applies a univariate GARCH model directly to portfolio P&L data. As in the standard historical simulation approach described in Chapter III.A.2, an artificial history of daily P&Ls is created using the current portfolio weights; but instead of taking a lower percentile of the empirical distribution of these P&Ls for the VaR estimate, a GARCH model is applied to the portfolio returns series and formula (III.A.3.2) is used.

The benefits of the GARCH approach to VaR estimation have recently been illustrated by Berkowitz and O'Brien (2002). They have shown that a simple reduced-form GARCH implementation produces regulatory capital estimates that are an improvement on the methods used by some large commercial banks. They find that the VaR estimates based on this type of GARCH model are more sensitive to changes in volatility: they allow for more risk-taking (or less capital) when volatility is low, and less risk-taking (or more capital) when volatility is high. These excellent results are achieved without adversely affecting the number of exceptions in backtests (see Section III.A.3.8 for a description of VaR model backtests); in fact the size of the maximum exception is reduced.

To use some of the GARCH methods listed above it is necessary to evaluate an entire covariance matrix. This can present substantial implementation problems, particularly as we must ensure that the covariance matrix is positive semi-definite, and the difficulty compounds as the number of assets or risk factors grows – professional risk managers are generally analysing investment or trading portfolios containing multiple assets. Various approaches have been suggested in the academic literature, and these are surveyed by Bauwens et al. (2003). One very simple alternative is to use GARCH volatilities but assume a constant correlation between assets or risk factors, as proposed by Bollerslev (1990). Variance terms in the matrix are estimated using the simple GARCH(1,1) specification shown in (III.A.3.2), and the covariance terms are formulated as

σᵢⱼ,t+1 = ρᵢⱼ σᵢ,t+1 σⱼ,t+1,

where ρᵢⱼ is the sample correlation of mean-corrected returns. Ease of estimation is achieved by constraining the correlation to be constant over the estimation period and by ignoring cross-market effects in volatility.

In summary, GARCH techniques have the potential to greatly enhance our modelling of market risk and to ensure that appropriate capital buffers are in place.
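To illustrate the constant correlation idea, the sketch below assembles a one-step-ahead covariance matrix from univariate volatility forecasts and a fixed correlation matrix, and uses it in the analytic VaR formula. The volatility forecasts and correlations here are placeholder inputs, not estimates from any particular data set:

```python
import numpy as np

# Placeholder one-step-ahead GARCH volatility forecasts for three risk factors
sigma = np.array([0.010, 0.008, 0.012])

# Constant (sample) correlation matrix of mean-corrected returns
R = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])

# Covariance forecast: sigma_ij = rho_ij * sigma_i * sigma_j
V = np.outer(sigma, sigma) * R      # positive semi-definite whenever R is

p = np.array([1.0, 2.0, -0.5])      # positions ($m)
var_99 = 2.33 * np.sqrt(p @ V @ p)  # 1% one-day VaR, analytic method
print(f"1% one-day VaR = ${var_99:.4f}m")
```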
III.A.3.5 Alternative Solutions to Non-normality

Here we consider some approaches to estimating VaR in the face of non-normality that differ from those considered in Section III.A.3.4 in that they do not take account of volatility clustering. We limit ourselves to discussing only three possibilities, although many others exist.

III.A.3.5.1 VaR with the Student's t Distribution

The Student's t distribution is often proposed as a possible candidate for describing financial returns because of its heavy tails (statistical background is provided in Section II.E). It is actually a poor candidate because it assumes returns are i.i.d.: under the Student's t distribution each day's return is assumed independent of the previous day's return, and we know this is unlikely to be the case because of the heat wave effect. The implication is that the t distribution will tend to underestimate VaR in periods of market crisis – the time when risk measurement is most crucial – and overestimate VaR when market conditions are quiet. Backtests will reveal clusters of exceptions where actual losses exceed VaR. Nevertheless, the Student's t distribution remains quite popular with some professional risk managers.

The standard Student's t has only one parameter, ν, the 'degrees of freedom'. The distribution was originally designed for working with small samples, where the degrees of freedom are one less than the sample size. In VaR applications, by contrast, we will be working with large data sets and attempting artificially to select the parameter ν to fit the shape of the tails of the distribution (that is, to match the sample kurtosis), although it is also possible to estimate the degrees of freedom parameter using maximum likelihood techniques. Under the standard Student's t distribution:

- the mean is equal to zero;
- the variance is equal to ν/(ν − 2);
- the skewness is equal to zero; and
- the (raw) kurtosis is equal to 3(ν − 2)/(ν − 4).

As ν approaches infinity, the distribution converges to the normal. Since the observed variance of the data will not be equal to ν/(ν − 2), it will be necessary to scale the variance. This is analogous to the way in which we adapt the standard normal distribution, which has variance and standard deviation of one, when working with data whose observed standard deviation differs from one. It will generally not be necessary to scale the mean, as mean returns (at daily or higher frequencies) are close to zero.
Dowd (2002) explains how to adapt the standard Student's t distribution for VaR calculations (for a single asset) as follows:

(a) Select the degrees of freedom parameter ν by matching it to the sample kurtosis, such that

(Raw) Kurtosis = 3(ν − 2)/(ν − 4).   (III.A.3.5)

(b) Scale the empirical variance by

(ν − 2)/ν.   (III.A.3.6)

(c) Select the appropriate critical point from the t distribution, based on the desired level of probability (e.g. 0.01) and the degrees of freedom selected in (a).

(d) Proceed with the VaR calculation using, for instance, the analytical method – but the MC VaR model could also be applied.

The critical value of the t distribution can be found in Excel using the TINV function. Note that the Excel TINV function assumes a two-tailed probability distribution, whereas VaR calculations generally apply a one-tailed probability; when using TINV, take care to double the probability parameter to get the appropriate value for a one-tailed distribution. If you are interested in a probability of 0.01 (i.e. one-tailed 99% confidence), then you should instead use a probability of 0.02. Note also that the Excel TINV function assumes that ν is an integer.

Example III.A.3.5: VaR with Student's t

For purposes of comparison we use the USD/JPY data described in Section III.A.3.4. We choose a 10-day holding period and ignore expected returns. The returns have empirical excess kurtosis of 6.02, equivalent to raw kurtosis of 9.02; using equation (III.A.3.5) and solving for ν gives a value of 4.961. The empirical daily standard deviation is 0.00718, and the empirical variance is scaled using equation (III.A.3.6) to give an adjusted daily variance of

0.00718² × (4.961 − 2)/4.961 = 0.005547²,

i.e. an adjusted daily standard deviation of 0.005547. At the 99% level of confidence and with 5 degrees of freedom, the critical value of the t distribution is 3.36 (the comparable critical value of the normal distribution is 2.33). In this example we use '=TINV(0.02, 5)'; if we instead used '=TINV(0.02, 4.961)', Excel would round 4.961 down to 4, returning a value of 3.75.

We use this information to calculate VaR using the analytical method for a position held long $1m, and compare the Student's t VaR with the VaR from the more familiar normal distribution. Note that the square root of time rule is appropriate here, as returns are assumed i.i.d.:

Student's t VaR = 3.36 × √10 × 0.005547 × $1,000,000 = $58,938,
Normal VaR = 2.33 × √10 × 0.00718 × $1,000,000 = $52,903.
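The same steps are easy to reproduce outside Excel. A minimal Python sketch follows; note that scipy's t.ppf takes the one-tailed probability directly (no doubling, unlike TINV) and handles non-integer degrees of freedom, so its critical value lies between the two Excel figures quoted above and the resulting VaR differs slightly from the example:

```python
import numpy as np
from scipy.stats import t, norm

position = 1_000_000          # $ held long
sigma = 0.00718               # empirical daily standard deviation
nu = 4.961                    # degrees of freedom from step (a)
h, alpha = 10, 0.01           # 10-day horizon, 99% confidence

sigma_adj = sigma * np.sqrt((nu - 2) / nu)   # step (b): 0.005547

t_crit = t.ppf(1 - alpha, df=nu)             # step (c): about 3.4 (3.36 for nu = 5)
z_crit = norm.ppf(1 - alpha)                 # 2.33

var_t = t_crit * np.sqrt(h) * sigma_adj * position   # step (d)
var_normal = z_crit * np.sqrt(h) * sigma * position
print(f"Student's t VaR = ${var_t:,.0f}, normal VaR = ${var_normal:,.0f}")
```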
The Student's t VaR estimate is 11% higher than that calculated using the normal distribution. This example illustrates how the heavier-tailed Student's t distribution will tend to give larger VaR estimates than the normal. Potentially these larger VaR estimates afford financial institutions greater protection from extreme variations, as they will respond either by reducing risk or by increasing capital: applying the Student's t approach will tend to result in greater capital requirements over time (or reductions in the amount of risk taken).

Given the existence of volatility clustering, however, even an 11% increase in VaR will not be sufficient protection in times of market crisis. Returning to the conditional volatility estimates shown earlier, we see that in 1998 conditional volatility more than tripled, reaching a peak of 35% p.a.!

III.A.3.5.2 VaR with Extreme Value Theory

Some VaR applications focus on the 'tail behaviour' of financial returns distributions. Several large banks have been developing risk capital models based on extreme value theory (EVT). In these models it is usually not VaR but an associated risk metric called conditional VaR – also called 'expected shortfall', 'expected tail loss' or 'tail VaR' by various authors – that is estimated. Whereas VaR is the cut-off point above which the largest losses occur, the associated conditional VaR is the average of these largest losses. For instance, the 1% VaR based on 1000 P&Ls is the 10th largest loss, while the 1% conditional VaR is the average of these 10 largest losses. For economic capital purposes, losses are often estimated at an extremely high percentile, such as 99.97% for a AA company (see Section III.0.2). Though not admissible under the Basel rules for the computation of regulatory capital, conditional VaR measures are commonly favoured for the computation of economic capital.

One reason is that conditional VaR is sub-additive: the sum of the component conditional VaRs can never be less than the total conditional VaR (see Section III.A.3.6). When VaR is modelled using normal distributions for the risk factor returns, VaR is always sub-additive too; but when the VaR model is extended to allow for heavy-tailed risk factor distributions, then sub-additivity, in theory, need not hold, and there can be real problems when VaR is applied to credit risk. Any risk metric that is not sub-additive is not a good risk metric: the incentive for holding portfolios just dissolves if it is better to assess risk on individual positions and simply add up the total! In practice, however, market VaR is almost always found to be sub-additive.

Extreme value distributions, as their name suggests, examine the distribution of the extreme values of a random variable. These extreme returns (or exceptional losses) are extracted from the data, and an extreme value distribution may be fitted to them. There are two approaches: either one models the maximal and minimal values in a sample using the generalised extreme value (GEV) distribution, or one models the excesses over a predefined threshold using the generalised Pareto distribution (GPD). For example, suppose the underlying time series consists of hourly observations on a portfolio P&L. On each day we record the maximal loss, and we subsequently fit these daily data using a GEV distribution. Alternatively, we forget about the time spacing of the data and record only those P&Ls that exceed a certain loss threshold, regardless of when this happens; then we would fit the data using the GPD. In either case the observations are typically assumed to be i.i.d. That assumption is not very realistic over long sample periods in financial markets except, perhaps, when the data have already been standardised using a volatility clustering model. Nevertheless many banks do apply GEV distributions, for instance for intra-day VaR estimates.

Very large data sets are crucial for EVT models, as we are concerned only with the tail of the distribution and we require many extreme data points for robust estimation of parameters. EVT models are often fitted using some form of maximum likelihood technique, so the data requirements can be substantial. The GPD may be used to estimate conditional VaR; in fact there is a simple formula for this once the density has been fitted. For instance, the threshold could be set at the VaR that is estimated using a standard VaR model, and the GPD density then fitted to obtain a more precise estimate of the conditional VaR. However, the conditional VaR is quite sensitive to the choice of the threshold loss, which must first be defined.
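As a concrete illustration of the threshold approach, the sketch below fits a GPD to the losses beyond a threshold and converts the fitted parameters into VaR and conditional VaR via the standard peaks-over-threshold formulas. The loss series here is a simulated placeholder, and, as noted above, in practice the results are sensitive to where the threshold is set:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=10_000)       # placeholder losses (positive = loss)

u = np.quantile(losses, 0.95)                    # threshold, e.g. a standard 5% VaR estimate
excess = losses[losses > u] - u
xi, _, beta = genpareto.fit(excess, floc=0)      # fit the GPD to the exceedances

n, n_u, q = len(losses), len(excess), 0.99
var_q = u + (beta / xi) * (((n / n_u) * (1 - q)) ** (-xi) - 1)
cvar_q = (var_q + beta - xi * u) / (1 - xi)      # conditional VaR; requires xi < 1
print(f"99% VaR = {var_q:.3f}, 99% conditional VaR = {cvar_q:.3f}")
```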
III.A.3.5.3 VaR with Normal Mixtures

A normal mixture density function is a probability-weighted sum of normal density functions. Consider a mixture of only two normal densities: one, f₁(x), where x has mean μ₁ and variance σ₁², and another, f₂(x), where x has mean μ₂ and variance σ₂². The mixture density is g(x) = p f₁(x) + (1 − p) f₂(x), that is,

g(x) = p (2πσ₁²)^(−1/2) exp(−(x − μ₁)²/2σ₁²) + (1 − p)(2πσ₂²)^(−1/2) exp(−(x − μ₂)²/2σ₂²).

Note that there is only one random variable, so it would be misleading to call the densities f₁ and f₂ 'independent'. The parameter p can be thought of as the probability that observation x is governed by density f₁(x): in effect, there are two regimes for x. In the case where x denotes the return on a portfolio – an equity portfolio, say – one can naturally identify these two regimes as a 'high volatility' (or even 'crash market') regime that occurs with a low probability and, for the rest of the time, a regime that governs ordinary, everyday market circumstances.

Consider a mixture of two zero-mean normal components, μ₁ = μ₂ = 0. In this case the variance is just

NM(2) variance = p σ₁² + (1 − p) σ₂².   (III.A.3.7)

The skewness is zero and the kurtosis is given by

NM(2) kurtosis = 3[p σ₁⁴ + (1 − p) σ₂⁴] / [p σ₁² + (1 − p) σ₂²]².   (III.A.3.8)

The kurtosis is always greater than 3, so the mixture of two zero-mean normal densities always has a higher peak and heavier tails than the normal density of the same variance. More generally, taking several components of different means and variances in the mixture can lead to almost any shape for the density; Maclachlan and Peel (2000) provide pictures of many interesting examples.

Figure III.A.3.5 shows four densities: three zero-mean normal densities, with volatilities of 5% and 10% (shown in grey) and 7.906% (shown in red), and, shown in black, a normal mixture density which is a mixture of the first two normal densities with probability weight of 0.5 on each of the grey normal densities. The variance of this mixture is 0.5 × 5² + 0.5 × 10² = 62.5; since 7.906 = √62.5, the mixture has the same volatility as the red normal curve. However, it has kurtosis of 4.87 (substitute p = 0.5, σ₁ = 5 and σ₂ = 10 into formula (III.A.3.8)), i.e. an excess kurtosis of 1.87, which is significantly greater than zero, the excess kurtosis of the 'equivalent' normal (red) curve.

[Figure III.A.3.5: A normal mixture density. The figure plots f₁(x), f₂(x), the mixture density and the equivalent normal density.]

The parameters of a normal mixture density function can be estimated using historical data. The idea, as always, is to choose the parameters to maximise the likelihood of the data. In this case the best approach is to employ the expectation–maximisation (EM) algorithm, which differs from standard MLE in that it allows for some 'hidden' variables in the data that we cannot observe, so that we can only maximise the expected value of the likelihood function and not the likelihood function itself. (The EM algorithm is far beyond the scope of the PRM syllabus, but interested readers should consult the book by Maclachlan and Peel (2000), which deals almost exclusively with this approach.) Alternatively, the parameters of a simple (e.g. two-component) normal mixture can be chosen in a scenario analysis of portfolio risk, as, for instance, in Example III.A.3.6 below.

There is no explicit formula for estimating VaR under the assumption that portfolio returns (or, equivalently, P&L) follow a normal mixture density. However, there is an implicit formula, so the problem is akin to that of implying volatility from the market price of an option: we can apply the Excel 'Goal Seek' or 'Solver' tools to back out the normal mixture VaR.

To see how, suppose for the moment that we have a normal distribution for the P&L of a linear portfolio. Then the analytic formula for VaR follows directly from the definition of VaR: by definition, Prob(P&L < −VaRα) = α. So if the P&L has a normal distribution with mean μ and standard deviation σ, we have Prob(Z < [−VaRα − μ]/σ) = α, where Z is a standard normal variable. Hence [−VaRα − μ]/σ = −Zα, the critical value of Z, and rearranging this gives our analytic formula for normal VaR:

VaRα = Zα σ − μ.   (III.A.3.9)
Using exactly the same type of argument we can derive the normal mixture VaR, but this time it is given by an implicit formula. With only two zero-mean components in the mixture (so there are only three parameters, p, σ₁ and σ₂, the standard deviations now being those of h-day P&L in each regime), we know everything except the VaR in the identity

p Prob(Z < −VaRα/σ₁) + (1 − p) Prob(Z < −VaRα/σ₂) = α.   (III.A.3.10)

Hence the h-day VaR can be 'backed out' from (III.A.3.10) using an iterative approximation method such as Goal Seek or Solver. For the general case we extend (III.A.3.10) to non-zero means and rephrase it in terms of the means and standard deviations of returns in the two regimes:

p Prob(Z < [−VaRα − μ₁P]/σ₁P) + (1 − p) Prob(Z < [−VaRα − μ₂P]/σ₂P) = α,   (III.A.3.11)

where p is the probability of regime 1, μᵢ and σᵢ are the h-day mean return and standard deviation of returns in regime i, and P is the current portfolio value.

Example III.A.3.6: Scenario VaR using normal mixtures

A risk manager assumes there is a small chance, say 1 in 100, that the market will crash, in which case the expected portfolio return over a 10-day period is −50%, with an (annualised) volatility around this mean return of 100%. At the moment, in ordinary market circumstances, there are steady positive returns of 10% per annum with a volatility of 20%. His portfolio is currently valued at $2m. What is his 10-day VaR?

In (III.A.3.11) we have: p = 1/100 = 0.01; μ₁ = −0.5, the 10-day return in regime 1; σ₁ = 1/√25 = 0.2, the 10-day standard deviation in regime 1; μ₂ = 0.1/25 = 0.004, the 10-day return in regime 2; σ₂ = 0.2/√25 = 0.04, the 10-day standard deviation in regime 2; and P is the current portfolio value ($2m). Everything is then known except the VaR, which is given by the implicit formula. Using the Excel spreadsheet NMVaR.xls with Solver (or Goal Seek) applied each time we change the significance level – for instance, click Tools – Goal Seek – 'Set cell B21 to value 0 by changing cell B18' – we obtain the 'NM VaR' figures in the first row of Table III.A.3.5.

Table III.A.3.5: NM VaR vs. normal VaR

Significance level           10%       5%        1%
NM VaR ($)                 157,588   178,524   435,965
Equivalent normal VaR ($)  281,437   335,480   500,107
Ordinary normal VaR ($)     94,845   123,632   178,400
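Any one-dimensional root finder can play the role of Goal Seek here. The sketch below backs the VaR out of (III.A.3.11) with scipy's brentq, using the regime parameters of Example III.A.3.6 as inputs. Since the exact conventions behind the spreadsheet figures in Table III.A.3.5 are not reproduced here, the outputs should be taken as illustrative of the method rather than of the table:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

P = 2_000_000                  # portfolio value ($)
p = 0.01                       # probability of the crash regime
mu1, s1 = -0.50, 0.20          # 10-day mean and standard deviation, crash regime
mu2, s2 = 0.004, 0.04          # 10-day mean and standard deviation, ordinary regime

def tail_prob(var, alpha):
    """Prob(P&L < -VaR) - alpha under the two-component mixture, as in (III.A.3.11)."""
    z1 = (-var - mu1 * P) / (s1 * P)
    z2 = (-var - mu2 * P) / (s2 * P)
    return p * norm.cdf(z1) + (1 - p) * norm.cdf(z2) - alpha

for alpha in (0.10, 0.05, 0.01):
    var = brentq(tail_prob, 0, P, args=(alpha,))   # root between $0 and full portfolio value
    print(f"{alpha:.0%} 10-day NM VaR = ${var:,.0f}")
```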
The two 'normal VaR' rows in Table III.A.3.5 are calculated using equation (III.A.3.9), where σ and μ are the standard deviation and mean of returns over the holding period. The 'ordinary' normal VaR figures are computed using the second (more likely) distribution, with a 10-day mean and standard deviation of μ = 0.004 (i.e. a 10% annual return) and σ = 0.04 (i.e. a 20% annual volatility); that is, we simply ignore the possibility of a market crash. For the 'equivalent' normal VaR we use (III.A.3.7) to obtain an 'equivalent' standard deviation, and similarly the equivalent mean is pμ₁ + (1 − p)μ₂. These adjust the 'ordinary' market circumstances mean and standard deviation to take account of the possibility of a crash – but after that the VaR is computed using the normal assumption for portfolio returns. From the results in Table III.A.3.5 it is clear that ignoring the possibility of a crash can seriously underestimate the VaR: even if one were always to assume a normal distribution, the VaR is almost three times larger when the standard deviation and mean are adjusted to account for the possibility of a crash.

It is the relationship between the NM VaR and the 'equivalent' normal VaR that is really interesting. Our example exhibits some typical features of NM VaR:

- For low significance levels (e.g. 5% or 1%), the normal assumption can seriously underestimate VaR (in this example, it was about half the size of the NM VaR).
- For higher significance levels (e.g. 10% or 20%), the normal assumption can seriously overestimate VaR (in this example, it was about twice the size of the NM VaR).

The significance level at which the NM VaR becomes greater than the normal VaR based on an equivalent volatility/mean depends on the degree of excess kurtosis in the data. When the excess kurtosis is relatively small, it may be that the 5% (or even the 1%) VaR is actually smaller under the NM assumption. Indeed, when the parameters are estimated using actual historical observations on the portfolio returns, it is common to find that the normal mixture VaR is less than the normal VaR at the 5%, and even at the 1%, level.
In summary, the normality assumption is not necessary for the analytic and Monte Carlo VaR methodologies. This section has shown how the analytic and MC VaR methods can be used with a normal mixture distribution for the portfolio returns. Such a distribution is better able to capture the skewness and excess kurtosis that we commonly observe in portfolios of most types of financial assets, and if the parameters of the normal mixture distribution are estimated from historical data, the resulting VaR estimate will reflect the actual properties of the data more accurately than the normal VaR. Another very useful application of normal mixture VaR is to probabilistic scenario analysis, where the portfolio returns are generated by a high-volatility (or 'crash') component with a low probability and, the rest of the time, by another component that applies to ordinary market circumstances.

Finally, although we have not described the generalisation of the MC VaR model to the normal mixture case in detail, it is a simple (one- or two-line) extension to the simulation code. We outline the method for the two-component zero-mean case:

- Define two risk factor covariance matrices V₁ and V₂ and associated probability weights (p, 1 − p) in the mixture (either estimated from historical data using the EM algorithm, or assumed in a scenario analysis).
- Break each of the 10,000 or so simulations into two steps: (a) draw from a Bernoulli variable with probability p; (b) if the result is 'success', use V₁ to generate the correlated risk factors in the simulation, else use V₂.
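A minimal sketch of that two-step simulation follows; the covariance matrices and positions are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(7)
n_sims, p_crash = 10_000, 0.01

# Placeholder 10-day risk factor covariance matrices for the two regimes
V1 = np.array([[0.0400, 0.0100], [0.0100, 0.0225]])   # crash regime
V2 = np.array([[0.0016, 0.0004], [0.0004, 0.0009]])   # ordinary regime
positions = np.array([3.0, 2.0])                      # sensitivities ($m)

# Step (a): a Bernoulli draw selects the regime for each simulation
crash = rng.random(n_sims) < p_crash

# Step (b): draw correlated risk factor returns from the selected regime's matrix
# (for simplicity both sets are drawn and then selected row by row)
returns = np.where(crash[:, None],
                   rng.multivariate_normal(np.zeros(2), V1, n_sims),
                   rng.multivariate_normal(np.zeros(2), V2, n_sims))

pnl = returns @ positions
var_99 = -np.quantile(pnl, 0.01)
print(f"1% 10-day MC NM VaR = ${var_99:.4f}m")
```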
III.A.3.6 Decomposition of VaR

Firms need to aggregate VaR over different risk types and over different business activities and, likewise, it is common practice to disaggregate VaR into different components, depending on the intended application. Disaggregation of risk is used in risk management for setting limits, assessing new investments, hedging and performance measurement: it allows risk managers to understand the drivers of risk in their portfolio, and VaR models form the basis of internal economic capital allocation and limit setting.

A number of different rules for disaggregating risk are considered in this section. Their common theme is that each rule is based on the analytic VaR formula: in the normal analytic VaR model, VaR follows the same 'rules' as the standard deviation operator, and each rule is derived using the rules for the variance operator (see Section II.E). It is common practice to apply these rules to all VaR estimates, irrespective of the model used to compute them, but not all these 'rules' are appropriate for historical or MC VaR estimates: there the VaR corresponds to a percentile, and percentiles, unlike the variance operator, do not satisfy simple rules. Hence the usefulness of each rule varies.

III.A.3.6.1 Stand-alone Capital

Suppose a line manager operating on a VaR-based risk limit wants to assign separate VaR limits to the equity and foreign exchange desks, so that aggregate losses only exceed the aggregate VaR limit an appropriately small proportion of the time. Since risk limits do not correspond to real capital, it is not necessary for the two limits to add up to his overall VaR limit. In Example III.A.3.3 we considered a simple portfolio that had been mapped to two risk factors, an equity index and an exchange rate. The total 1% 10-day VaR due to both risk factors was estimated as $319,643, whereas the VaR due to equity risk alone was

Equity VaR = 2.33 × 0.15 × √(10/250) × $3m = $209,700.

Similarly,

FX VaR = 2.33 × 0.20 × √(10/250) × $2m = $186,400.

Hence

Equity VaR + FX VaR = $396,100 > $319,643 = Total VaR,

so the total VaR was much less than the sum of the two component VaRs. When the risk factor returns are assumed to be normal, it is easy to show that VaR is 'sub-additive' in the sense that

Total VaR ≤ sum of component VaRs,

with equality if, and only if, all the correlations in the covariance matrix V are one. (As mentioned already in Section III.A.3.5, if risk factor returns are heavy-tailed then VaR need not be sub-additive.) The total VaR equals the sum of the component VaRs only if there is perfect correlation between the components; in the above example the risk factors had a correlation of 0.3, which is much less than one. On the other hand, had the correlation been large and negative, the total VaR might actually have been less than either of the component VaRs: in theory it could even be that the VaR limit for, say, the equity desk exceeds the overall risk limit! We shall see why, and how, below; the complete rule for the decomposition of total VaR into two components is stated in (III.A.3.12).

The fact that VaR aggregates take account of correlations, and that these are typically far less than one, is one of the reasons why banks favour VaR over 'traditional' risk measures, such as duration for a bond portfolio, the Greeks for an options portfolio, or beta for an equity portfolio. These traditional risk measures ignore the benefits of diversification that apply in a portfolio exposed to multiple risk factors. VaR accounts not only for the risks due to the risk factors themselves, but also for the less than perfect correlation of the risk factors when aggregating the risk. VaR is also a risk measure with consistent dimensions across markets, and therefore allows greater consistency in setting policy across products and in evaluating the relationship of risk and return. For limit-setting purposes, however, stand-alone capital is not appropriate: each separate portfolio could be within its limit yet the business overall could be in breach of limits when considered on a diversified basis.
III.A.3.6.1.1 Decomposing non-linear portfolios

Decomposition of risk becomes more complex for portfolios with non-linearities. Typically, non-linear portfolios will be analysed using the simulation approaches, and since percentiles do not obey simple 'rules' (except that a percentile is invariant under a monotonic transformation of variables), there is in this case no simple rule that relates the sum of equity VaR and FX VaR to the total VaR. Disaggregation of VaR (or conditional VaR) can nevertheless be undertaken in the context of simulation by restricting the risk factor scenarios in different ways. That is, total VaR can be disaggregated into an equity VaR component, corresponding to a lower percentile of a simulated portfolio returns distribution with only the equity risk factors changing, and an FX VaR component, corresponding to a lower percentile of a simulated portfolio returns distribution with only the foreign exchange rates changing.

VaR measures for such portfolios are much more likely to violate the criterion of sub-additivity. It is possible to construct extreme cases where individual portfolios containing, say, short, well out-of-the-money digital options have a very low or even zero VaR at, say, the 95th percentile, and yet, if two similar portfolios are combined, the diversified portfolio has higher VaR than each of the components. For this reason VaR is criticised as being a poor 'risk metric'. In this anomalous situation a strong argument could be made to avoid VaR and to use conditional VaR instead, because conditional VaR is sub-additive.

III.A.3.6.1.2 Specific vs. Systematic Risk

Another way of disaggregating risk is to decompose VaR into its systematic and specific components – that is, those risks that apply to the market/factor generally, and those that arise from lack of diversification or deviations from the market portfolio. A VaR model can be used to assess the 'specific risks' of a portfolio, i.e. the risks that are not captured by the risk factor mapping.

Example III.A.3.7: Specific VaR

To obtain the beta of 1.5 for our portfolio of UK equities in Example III.A.3.3, we re-created an artificial price history of the portfolio using the current portfolio weights, and regressed the time series of returns to this portfolio on the FTSE index returns. This gave the Excel output shown in Table III.A.3.6.

Table III.A.3.6: A CAPM regression

Regression Statistics: R Square 0.7284; Standard Error 0.00802

              Coefficients   Standard Error   t Stat
Intercept     -0.00045       0.0017           -0.26426
FTSE          1.50312        0.02876          52.34991

The 'standard error' is the standard deviation of the model residuals (see Section II.F). Since the returns are observed daily, this is a daily standard deviation; hence the 10-day standard deviation of the residuals in this model is 0.00802 × √10 = 0.02536, and the 1% 10-day specific VaR is

2.33 × 0.02536 × $2m = $118,184.

From Example III.A.3.3, the 1% 10-day systematic VaR (in this case, the equity risk factor VaR) is $209,700. Adding this to the specific VaR of $118,184 we thus obtain:

Systematic VaR + Specific VaR = $327,884.

Of course, we did not need to estimate a factor model in order to calculate the total VaR. Having re-created an artificial price history of the portfolio using the current portfolio weights, we could simply have obtained the daily standard deviation of the 're-created' portfolio returns, 0.01636, and used formula (III.A.3.4), giving a direct estimate of the 1% 10-day total VaR as

Total VaR = 2.33 × 0.01636 × √10 × $2m = $241,117.

Clearly, simply adding systematic VaR and specific VaR is a very conservative way to estimate the total VaR: direct estimation of total VaR will normally give a result that is considerably lower than the sum of systematic and specific VaR.

III.A.3.6.1.3 Sub-additivity

To understand why this is so, we do some simple algebra showing that the analytic VaR (for linear positions) obeys the following sub-additive rule for any decomposition of total VaR into two components, VaR₁ and VaR₂, where the component risks have correlation ρ:

Total VaR² = VaR₁² + VaR₂² + 2ρ VaR₁ VaR₂.   (III.A.3.12)
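The rule (III.A.3.12) is easy to check against Example III.A.3.3, where the equity and FX components had correlation 0.3. The snippet below recovers the total risk factor VaR of about $319,600 from the two stand-alone VaRs:

```python
import math

var_equity, var_fx, rho = 209_700.0, 186_400.0, 0.3

total = math.sqrt(var_equity**2 + var_fx**2 + 2 * rho * var_equity * var_fx)
print(f"Total VaR = ${total:,.0f}")                      # about $319,600
print(f"Sum of components = ${var_equity + var_fx:,.0f}")  # $396,100, well above the total
```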
Note that expression (III.A.3.12) simplifies when the correlation is one or zero:

If ρ = 1: Total VaR² = VaR₁² + VaR₂² + 2VaR₁VaR₂ = (VaR₁ + VaR₂)², so Total VaR = VaR₁ + VaR₂.
If ρ = 0: Total VaR² = VaR₁² + VaR₂², so Total VaR = √(VaR₁² + VaR₂²).   (III.A.3.13)

Now recall that, if the factor model is capturing most of the variation in the portfolio, then specific risks and systematic risks should be uncorrelated (see Section II.F). In that case the Total VaR will be closer to the square root of the sum of the squared component VaRs than to the simple sum of the two VaRs – which is why simply adding specific VaR and systematic VaR is not a good way to estimate Total VaR when the systematic and specific components are (more or less) uncorrelated. When disaggregating VaR into different components, it is sometimes assumed that all component correlations lie between zero and one. In this case the two ways of calculating total VaR (as a straightforward sum of VaRs, or as the square root of the sum of the squared VaRs) provide approximate upper and lower bounds for the total VaR, respectively.

III.A.3.6.2 Incremental VaR

Incremental VaR (IVaR) is a measure of how portfolio risk changes if the portfolio itself is changed in some way. It is ideal for assessing the effect of a hedge or a new investment decision on a trader's VaR limit, or for assessing how total business risk would be affected by the sale/purchase of a business unit. There are two ways of proceeding:

(a) The before and after approach. Here we measure the VaR under the proposed change, compare it to the current VaR and take the difference. This is the best approach if the proposed change to the portfolio is significant.

(b) The approximation approach. We can find an approximate IVaR by first calculating the DelVaR vector (see below), which is then multiplied by another vector containing the proposed changes in positions for each asset/risk factor. Like all approximations based on partial derivatives, it is suitable only for examining small changes to the portfolio composition.

The DelVaR vector contains an element for each asset/risk factor in the covariance matrix (the first element contains information relating to the risk of the first asset/risk factor, and so forth). Each element is determined by assessing the change in total VaR for a one-unit change in the portfolio holdings of the relevant asset. Under the analytical method (with simulation methods a DelVaR vector could also be constructed, but with considerably greater difficulty) the DelVaR vector is calculated as follows:

DelVaR = Vp Zα / √(p′Vp),   (III.A.3.14)

where V is the covariance matrix, p is the position vector and Zα is the relevant critical value of the normal distribution. Note that the denominator of this expression is simply the standard deviation of the portfolio's P&L.

Example III.A.3.8: Approximate IVaR

We continue with the example first presented in Example III.A.3.3. The portfolio consists of an exposure to the FTSE index and an exposure to the exchange rate, since the investor is US$ based. The standard deviation of the portfolio's P&L over a one-day horizon is $0.043382m (being √0.001882), and Vp = (0.000342, 0.000428)′, so for the base position equation (III.A.3.14) becomes

DelVaR = 2.33/0.043382 × (0.000342, 0.000428)′ = (0.018368, 0.022987)′.

Now suppose that we consider hedging half of the currency exposure, so that the exposure to the exchange rate is reduced to only $1m, rather than the current $2m. We can construct a vector of portfolio changes where the first element (the change in the exposure to the FTSE) is equal to zero and the second element (the change in the exposure to currency) is −1. The incremental VaR can be approximated as

IVaR ≈ (0, −1)·(0.018368, 0.022987)′ = −0.022987 $m,

so the hedging decision will reduce the 1% one-day VaR by approximately $22,987.
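A short sketch of the approximation, using the figures of Example III.A.3.8 and comparing it with the exact 'before and after' answer:

```python
import numpy as np

# One-day covariance matrix of the two risk factors and current positions ($m)
V = 1e-6 * np.array([[90.0, 36.0],
                     [36.0, 160.0]])
p = np.array([3.0, 2.0])
z = 2.33

sigma = np.sqrt(p @ V @ p)            # $0.043382m one-day P&L volatility
delvar = z * (V @ p) / sigma          # [0.018368, 0.022987]

dp = np.array([0.0, -1.0])            # hedge half of the $2m currency exposure
print(f"Approximate IVaR = ${delvar @ dp:.6f}m")   # about -$0.022987m

# 'Before and after' for comparison: exact change in VaR
var_before = z * sigma
var_after = z * np.sqrt((p + dp) @ V @ (p + dp))
print(f"Exact IVaR = ${var_after - var_before:.6f}m")
```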
With the before and after method, the 1% one-day VaR of the hedged portfolio is $80,241, because

p′Vp = 10⁻⁶ × (3, 1)·(306, 268)′ = 0.001186,

so that VaR = 2.33 × √0.001186 = $0.080241m. We know from Example III.A.3.3 that the one-day VaR of the original position is 2.33 × 0.043382 = $101,080. Hence, according to the exact method, the IVaR is 80,241 − 101,080 = −$20,839, which is less than the IVaR given by the approximation. As the proposed change to the portfolio is quite large in this case, the approximation method is unlikely to be accurate and the before and after method would be preferable. In other cases the actual IVaR will exceed the approximation, depending on the level of correlation between assets.

III.A.3.6.3 Marginal Capital

Sometimes referred to as component VaR (CVaR), marginal capital is used to assign a proportion of the total risk that can be attributed to each component. Unlike the previous methods, marginal capital is additive: the sum of the component VaRs will be equal to the total VaR. (This relationship is assured for the analytical method when the portfolio is comprised of simple linear positions; it may not hold in other cases.) This method of decomposition is useful for gaining a better understanding of the drivers of risk within a portfolio; sometimes it is also used for performance measurement.

To compute CVaRs we take the DelVaR vector discussed above and multiply each element by the corresponding position for each asset/risk factor. The result is a set of CVaRs for each asset.

Example III.A.3.9: Marginal capital

We continue to analyse the portfolio first introduced in Example III.A.3.3. The DelVaR vector, calculated in Example III.A.3.8, is used here:

DelVaR = (0.018368, 0.022987)′.

The CVaR for equities is equal to the position ($3m) multiplied by 0.018368, or $55,104. The CVaR for foreign exchange is equal to the position ($2m) multiplied by 0.022987, or $45,974. Note that they sum to $101,078, which is approximately equal to $101,080, the 1% one-day VaR (the error is due to rounding in the DelVaR vector). Hence, in this example the equity exposure is contributing around 55% of the risk in the portfolio, while currency contributes around 45%.

Since the analysis is performed using the DelVaR vector of partial derivatives with respect to asset weights, it is only relevant for the current portfolio weightings. Significant changes to the portfolio will change the risk contributions of the various assets, necessitating new analysis based on the revised DelVaR vector.
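In code, the component VaRs are one extra line on top of the previous sketch, using the same inputs as Example III.A.3.9:

```python
import numpy as np

V = 1e-6 * np.array([[90.0, 36.0], [36.0, 160.0]])   # one-day covariance matrix
p = np.array([3.0, 2.0])                              # positions ($m): FTSE, GBP/USD

delvar = 2.33 * (V @ p) / np.sqrt(p @ V @ p)
cvar = p * delvar                     # component VaRs: [0.055104, 0.045974] $m

total = cvar.sum()                    # additive: sums to the total VaR (~$0.10108m)
for name, c in zip(["Equity", "FX"], cvar):
    print(f"{name} CVaR = ${c:.6f}m ({c / total:.0%} of total)")
```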
Either can be used for performance measurement purposes, although stand-alone capital is most commonly used as the risk measure in a risk-adjusted performance measurement context. The marginal capital technique can be used to evaluate the risk of an asset (or asset class) in the context of a diversified portfolio, whereas the stand-alone capital (in Section III.A.3.6.1) measures the risk of an asset class in isolation. For instance, we would use stand-alone capital to compare the performance of the equity and foreign exchange trading desks, because it is generally argued that the diversification benefit should not enter into the analysis. The team managing foreign exchange risks presumably has no say in the way the portfolio of businesses is constructed (whether or not there is an equity desk, and the relative sizes of those businesses). They should therefore be neither rewarded nor penalised for the diversification of the overall portfolio of businesses: performance measures should be centred only on the issues over which managers have direct control, being foreign exchange risks in this example.

III.A.3.7 Principal Component Analysis

Principal component analysis (PCA) is a statistical tool that decomposes a positive semi-definite matrix into its principal components (positive definiteness is discussed in detail in Section II.D). PCA applied to a covariance matrix or a correlation matrix has many applications to financial risk management. It is particularly effective in highly correlated systems such as term structures – yield curves, futures prices, or even implied volatilities – where only a few components are needed to explain almost all of the variation. The first principal component explains the largest part of the variation in the system represented by the covariance matrix, and so on; the nth principal component explains the least variation. Indeed, the variation captured by the lower-order principal components is commonly ignored, because it just picks up the 'noisy' variation that we would prefer to ignore. By retaining only the first few principal components – enough, say, to explain 95% of the variation observed historically – PCA cuts out the 'noise' for the subsequent analysis. In this respect PCA is a useful technique for reducing dimensions; Section II.D.5 shows how PCA on an n × n covariance matrix is used to write the portfolio variance as a sum of n positive terms that become progressively smaller.

The eigenvalues λ₁, …, λₙ of an n × n correlation matrix sum to n, and the amount of variation captured by the ith principal component is λᵢ/n. An analysis of eigenvalues therefore tells us how many components we need. For instance, with n = 11, if the largest eigenvalue is, say, 10, then the first principal component explains 10/11 = 90.9% of the variation.

The other great advantage of PCA is that the principal components are uncorrelated with each other. This means, for instance, that one can perform a simple scenario analysis on each of the main principal components separately, leaving the other components fixed. When applying PCA to a covariance matrix of the daily (or weekly or monthly) changes in zero-coupon yields of different maturities, a scenario for the change in the first principal component generally will mimic an almost parallel shift in the entire yield curve – more precisely, one that could be observed historically – so the change translates into a meaningful scenario. Without PCA one would need to take care that only 'correlated' scenarios are used: for instance, if the scenario specifies that the one-month interest rate increases by 100 bps, one could not have the three-month interest rate decreasing by 200 bps in the same yield curve scenario, and it is difficult to prevent generating implausible shapes in the resulting yield curves. (Correlated scenarios are generated using the Cholesky decomposition of a covariance matrix; see Section II.D.4.2.) This problem is compounded if simulations are extended more than a short time into the future.

III.A.3.7.1 PCA in Action

From 4 January 1993 until 20 November 2003 we have daily closing New York Mercantile Exchange (NYMEX) futures prices, from 2 to 12 months out, on West Texas Intermediate (WTI) light sweet crude oil and on natural gas. Some of the crude oil futures prices are shown in Figure III.A.3.6.

[Figure III.A.3.6: A highly collinear system. The figure plots the 2-, 4-, 6-, 8-, 10- and 12-month crude oil futures prices over the sample period.]

The system of futures returns is clearly very highly correlated – indeed, there are perhaps just one or two independent sources of information driving the whole system of futures prices. The correlation matrix of the returns to futures from 2 to 12 months out (denoted m2 to m12), based on the entire sample from 4 January 1993 until 20 November 2003, is shown in Table III.A.3.7. It exhibits the pattern that is typical of term structures, with correlation decreasing as the maturity difference increases: adjacent long-dated contracts have correlations as high as 0.999, while the correlation between the 2-month and 12-month futures returns is 0.888.

[Table III.A.3.7: Correlation matrix of returns, m2 to m12. All correlations lie between 0.888, for the m2–m12 pair, and 0.999, for adjacent long-dated contracts.]

The correlations are so high that almost all the variation can be attributed to two or perhaps three components. Table III.A.3.8 shows the eigenvalues and the first three eigenvectors of this matrix (see Section II.D for an explanation of these). The first three eigenvalues are 10.732, 0.245 and 0.022: the first principal component explains 10.732/11 = 97.6% of the movements in the futures prices in our sample, the second component explains a further 2.2% (since 0.245/11 = 0.022), and the third component explains only a tiny amount.

[Table III.A.3.8: Eigenvalues and eigenvectors. The entries of the first eigenvector are all approximately equal, at around 0.3; the second eigenvector declines monotonically from 0.537 at m2 to −0.364 at m12; the third is 0.623 at m2, turns negative for the medium maturities and rises again to 0.411 at m12.]

The first three eigenvectors in Table III.A.3.8 are used to compute the first three principal components as:

PC1 = 0.293 m2 + 0.299 m3 + … + 0.298 m12,
PC2 = 0.537 m2 + 0.412 m3 + … − 0.364 m12,
PC3 = 0.623 m2 + 0.078 m3 + … + 0.411 m12,

where m2, …, m12 denote the (normalised) returns to the futures of different maturities from 2 to 12 months out. Since the eigenvectors are always orthogonal (see Section II.D), these equations can be rewritten as:

m2 = 0.293 PC1 + 0.537 PC2 + 0.623 PC3,
m3 = 0.299 PC1 + 0.412 PC2 + 0.078 PC3,
…,
m12 = 0.298 PC1 − 0.364 PC2 + 0.411 PC3.

This is known as the principal component representation of the system. Since the coefficients on PC1 are all approximately the same, when PC1 moves (holding the other components fixed) the term structure of futures prices will shift in (almost) parallel fashion. For this reason PC1 is often called the 'trend component' when PCA is applied to term structures. When PC2 increases, the term structure of futures prices will shift up at the short end but down at the long end, so PC2 is called the 'tilt component'. And when PC3 increases, the term structure shifts up at both ends but – looking again at Table III.A.3.8 – the medium-term futures prices will go down. Hence PC3 is called the 'curvature' component.
In this example the curvature component is not important: it corresponds to much less than 1% of the movements normally found in futures prices. But in other examples – interest rates, for instance – the curvature component can be more important. And when PCA is applied to other types of systems, such as equities or currencies, there will be no intuitive interpretation of the components as there is for term structures, because we cannot 'order' a system of equities or currencies in any sensible way – although the first principal component will normally capture the common trend (if the system is sufficiently correlated that there is a common trend!).

III.A.3.7.2 VaR with PCA

PCA is commonly used in the analytic VaR method for cash flows, and in the MC simulation VaR method for interest-rate options portfolios. Perhaps the most important input in both these approaches is the covariance matrix of risk factor returns – in this case, the covariance matrix of changes in a zero-coupon yield curve. This may seem fine if the risk factors are just one yield curve, which will typically have more than 10 different maturities. But for international fixed income portfolios the risk factors consist of many yield curves – all yields of all maturities in all countries – so the dimension of the risk factor space is large. In order to take proper account of (less than) perfect correlation between these risk factors when estimating the total VaR, we need to use a covariance matrix of the whole system.

But, as we discussed earlier, large-dimensional covariance matrices are very difficult to estimate using GARCH models: direct estimation of the matrix using a multivariate GARCH model will be out of the question, because the likelihood surface will not be well defined. And EWMA matrices normally assume the same value of the smoothing constant for all the risk factors, as happens in the RiskMetrics methodology; in a large system it is very unrealistic to apply the same smoothing constant to every risk factor. Hence the application of a time-varying (i.e. EWMA or GARCH) model to variances, whilst assuming correlations are constant, entails a strong assumption.

The solution is to apply PCA to the entire system of risk factors, retaining enough components in the principal component representation to explain most (say, 95%) of the variation. Then one can treat each component separately, estimating and forecasting only its variance using either GARCH or EWMA: because the components are uncorrelated, it is only the variance of each component that matters – their covariances are zero (note that it is only the unconditional covariances that are zero, because PCA is based on an unconditional covariance matrix). With this method we do not have to use the same smoothing constant in each EWMA – and even if we do use the same smoothing constant, the final risk factor covariance matrix will not have the same effective smoothing constant for all risk factors. Nor do we need to constrain the GARCH models in any particular way, as in most multivariate GARCH models. The large risk factor covariance matrix that we obtain in the end will always be positive semi-definite. Full details are given in Alexander (2001).
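A compact sketch of that recipe follows. The returns matrix here is a simulated placeholder, and EWMA with a single illustrative smoothing constant stands in for whichever univariate variance model is preferred for each component:

```python
import numpy as np

def pca_covariance_forecast(returns, n_components=3, lam=0.94):
    """Forecast a covariance matrix by applying EWMA variances to principal components."""
    # 1. PCA on the (unconditional) covariance matrix of the risk factor returns
    V = np.cov(returns, rowvar=False)
    eigvals, W = np.linalg.eigh(V)
    idx = np.argsort(eigvals)[::-1][:n_components]
    W = W[:, idx]                          # loadings of the retained components

    # 2. Construct the component series; they are (unconditionally) uncorrelated
    pcs = returns @ W

    # 3. Univariate EWMA variance forecast for each component separately
    var = np.var(pcs, axis=0)
    for x in pcs:
        var = lam * var + (1 - lam) * x**2

    # 4. Rebuild the full covariance matrix, positive semi-definite by construction
    return W @ np.diag(var) @ W.T

rng = np.random.default_rng(0)
returns = rng.multivariate_normal(np.zeros(5), 1e-4 * (0.9 + 0.1 * np.eye(5)), 1000)
print(np.round(pca_covariance_forecast(returns), 7))
```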
III.A.3.8 Summary

The crucial question for any VaR model is whether the VaR estimate provides a good indication of the 'true' risk in financial markets. Every model is a simplification of the reality it represents, and any deviation between the VaR estimate and this truth represents a model risk. Such a risk is potentially disastrous, since it could cause a financial institution to have insufficient capital given the risks taken, making it more vulnerable than it should be to insolvency. Concern about this kind of model risk is well justified, because financial markets are prone to periods of high volatility in which extreme market movements can occur.

Simple VaR models, as explained in the previous chapter, are all flawed to some extent: either their assumptions are too simple and/or they suffer from lack of data. This chapter has explored various methods for improving on VaR models so that they are more consistent with the behaviour observed empirically in financial markets. We have argued that the most crucial issue for analysts to address in this regard is volatility clustering. It is now well established that volatility in financial markets exhibits elevated values over certain periods before reverting to lower levels. Many analysts have noted the heavy tails of empirical distributions relative to the standard i.i.d. normal assumptions; these are a symptom of the clusters of volatility that occur periodically, and the variation in volatility explains most of the extreme moves observed in financial markets. We focused on models that incorporate the volatility clustering idea (EWMA and GARCH) and considered their practical application to VaR estimation. We also examined some alternative approaches to the issue of extreme returns that do not incorporate volatility clustering (Student's t, EVT and normal mixtures).

The advanced models presented here have their own problems, and these should be well understood by the user. Whatever the method employed, it should always be remembered that model risk can never entirely be eliminated. The pervasiveness of model risk is a key reason why stress tests (covered in the next chapter) are useful.

This chapter also completed the discussion of VaR for market risk by examining two other, more advanced topics: risk decomposition and principal component analysis. We have described the ways in which total VaR can be 'disaggregated' into different components, for capital allocation on a 'stand-alone' and on a 'marginal' basis, and how traders can use an incremental VaR analysis to evaluate the effect of a proposed trade on their VaR limit.
References

Alexander, C (2001) Market Models: A Guide to Financial Data Analysis. Chichester: Wiley.

Bauwens, L, Laurent, S, and Rombouts, J (2003) ‘Multivariate GARCH models: a survey’. To appear in Journal of Applied Econometrics. Available at http://www.core.ucl.ac.be/econometrics/Bauwens/Papers/papers.htm

Berkowitz, J, and O’Brien, J (2002) ‘How accurate are value-at-risk models at commercial banks?’ The Journal of Finance, LVII, pp. 1093–1111.

Bollerslev, T (1990) ‘Modelling the coherence in short-run nominal exchange rates: A multivariate Generalised ARCH model’, Review of Economics and Statistics, 72, pp. 498–505.

Dowd, K (2002) Measuring Market Risk. Chichester: Wiley.

McLachlan, G, and Peel, D (2000) Finite Mixture Models. New York: Wiley.
III.A.4 Stress Testing

Barry Schachter 87

III.A.4.1 Introduction

The previous chapters have introduced value-at-risk (VaR) as a risk measure and discussed methods to deal with some of its shortcomings. It is common belief that VaR does not provide a complete picture of portfolio risk and that stress testing is a means of addressing that (at least in part). By now the need for stress testing of portfolios of financial instruments is taken for granted. We find stress testing on every list of risk management best practices. Stress testing has long since been solemnised by regulators.

If this chapter were to be simply a review of stress-testing techniques, then I could immediately proceed by providing a typology and discussion of the various techniques that have been presented in the literature on the topic. I think, however, that an appreciation of the usefulness of stress testing requires that I first step back a bit and ask a couple of basic questions – that is, it is important to step back and formulate some ideas about why we need to have a chapter on stress testing. The first question is by what historical process we have arrived at this general acceptance of the need for stress testing. The second question is by what conceptual framework we have concluded that stress testing is needed.

My definition of stress testing is as follows:

Stress testing (strĕs′ tĕst′ĭng) n. 1. A method for the quantification of potential future extreme, adverse outcomes in a portfolio of financial instruments. 2. A palliative for the anxiety that is experienced by managers with significant risk exposures.

The key words in this definition are quantification, potential future, extreme and palliative. It is important to note that quantitative estimates are not necessarily statistical estimates. That is, most stress-testing methods, unlike VaR, are not statistical measures of risk: whatever the method employed, no probabilities are attached to and no confidence intervals estimated for the adverse outcomes. In fact, I am taking a very broad view of stress testing here to include just about any method for attaching a monetary figure (i.e., a quantity) to potential losses from extreme events that is not specifically VaR, from historical scenario analysis (reliving a past market crisis) to factor push (shifting a market rate to see the impact on a portfolio). A palliative is something that reduces pain, but does not cure an underlying problem. To a large extent, this is the role that stress tests have served to date: while VaR models provide a notion of a ‘bad’ portfolio return, they do not convey just how bad ‘bad’ can get. It is a great challenge in stress testing to effectively gauge potential future losses.

It is common belief that stress testing has incremental value because VaR is a sufficient statistic for risk under only a restricted set of conditions. For instance, in a world where all financial asset returns are jointly normally distributed and portfolios are passive, the magnitude of a loss of any probability is a scalar multiple of the VaR: the loss on a portfolio corresponding to a one standard deviation move in the portfolio value is equal to the 99% VaR divided by 2.33 (the 99% quantile of the standard normal distribution). To know the VaR is to know everything about risk, and stress tests are not needed. However, if asset returns are nonlinear functions of underlying market risks, or market risks are not distributed according to some distribution that is stable under addition, then to know VaR is to have only an incomplete picture of portfolio risk.

The last point above deserves emphasis. Even if we agree that stress testing has incremental value, then the context for extracting that value in decision making still needs to be shown. Even if we agree what stress testing is, we still need to agree (or at least understand) its purpose. Stress testing must fit into some wider context, or have some point. We should be able to establish how stress testing provides some incremental value as either a means to achieving some goal (e.g., the optimal allocation of assets in a portfolio) – in other words, it is a direct input in some decision function – or a way of measuring the movement towards some goal – in other words, it is a benchmark or feedback mechanism for modifying or improving a decision. If we cannot show either of these things, then we cannot know how to make rational economic decisions which use stress-test results, and we cannot articulate why we should be performing stress tests. Uses vary widely from a simple informational tool to a formal element of limit setting and capital allocation. The risk management profession is still seeking consensus on theoretically and practically consistent ways to integrate stress testing into decision making. This chapter, focusing mainly on methods for stress testing, does not settle this issue, which is somewhat unsatisfying.

87 Chief Risk Officer, Balyasny Asset Management, LLC. Thanks go to Steve Allen and Carl Batlin for reviewing and commenting on a draft of this chapter. Any remaining errors or omissions are my own. Parts of this chapter draw on Schachter (1998a, 1998b, 2000a, 2000b).

III.A.4.2 Historical Context

In the classic 1967 film, The Graduate, Benjamin Braddock (Dustin Hoffman) returns home after graduation from university, and receives the following sage (and frightening) career advice from a family friend:
MR MCGUIRE: I just want to say one word to you. Just one word.
BEN: Yes sir.
MR MCGUIRE: Are you listening?
BEN: Yes I am.
MR MCGUIRE: Plastics.
BEN: Exactly how do you mean?
MR MCGUIRE: There’s a great future in plastics. Think about it. Will you think about it?

Perhaps Mr McGuire should have allowed for a second word. Derivatives. Or perhaps he should have substituted ‘derivatives’ and omitted ‘plastics’ entirely. By the end of 1967, work was already well advanced in the development of a preference-free option pricing formula, based on the work of Sprenkle, Boness, Samuelson, Merton and Thorp; see MacKenzie’s (2003) history. Black and Scholes were eventually to become household names (well, almost) after their revolutionary results were published in 1973. As recognition of the importance of arbitrage-free option pricing spread, it is widely acknowledged that financial markets were forever changed by wide adoption of derivatives as a fundamental tool of risk allocation.

Alas, every boon is also a bane. In the crash of 1987, portfolio insurance strategies, based on option replication arguments, were given much of the blame. Following the various studies of the crash, regulatory authorities prescribed additional regulation over the function of equity markets, in the form of ‘circuit breakers’. Also in 1988, the Chicago Mercantile Exchange adopted a system for setting daily margin requirements, the Standard Portfolio Analysis System (SPAN®), based on a type of stress test discussed below. SPAN has since been adopted by many derivatives exchanges.

In the 1990s a series of corporate financial collapses occurred which were associated (in one way or another) with derivatives usage. Some of the names are familiar, such as Orange County (see Jorion, 1995), Metallgesellschaft (see Culp and Miller, 1995) and Barings. Others are less familiar, such as Procter and Gamble, Gibson Greetings (see Overdahl and Schachter, 1995), Daiwa Bank, Sumitomo (Gilbert, 1996), and SK Securities (see Gay et al., 1999). As these events unfolded, regulatory authorities sought ways to enhance internal risk controls, to mitigate future risk. Against this backdrop the Basel Committee on Banking Supervision, motivated by a concern for the soundness and stability of the international financial system, ushered in a new era of international cooperation in international financial regulation in 1988, and the Committee’s attention immediately turned to derivatives. In the series of reports and rule makings that resulted, recommendations for best practice frequently
included a reference to the importance of the role of stress testing for identifying otherwise hidden risks. It is these recommendations that have pushed stress testing to a relatively prominent position among risk management tools.

Recommendation 6 of the G-30 report (Global Derivatives Study Group, 1993) states: ‘Dealers should regularly perform simulations to determine how their portfolios would perform under stress conditions. These simulations should reflect both historical events and future possibilities. Stress scenarios should include not only abnormally large market swings but also periods of prolonged inactivity. The tests should consider the effect of price changes on the mid-market value of the portfolio, as well as changes in the assumptions about the adjustments to mid-market (such as the impact that decreased liquidity would have on close-out costs). Dealers should evaluate the results of stress tests and develop contingency plans accordingly.’

The Basel Committee on Banking Supervision (1994) states: ‘Analysing stress situations is also an important aspect of risk measurement. Simulations of improbable market environments are important in risk analysis because many assumptions that are valid for normal markets may no longer hold true in abnormal markets. These analyses should consider not only the likelihood of adverse events, reflecting their probability, but also ‘worst-case’ scenarios. Ideally, such worst-case analysis should be conducted on an institution-wide basis by taking into account the effect of unusual changes in prices or volatilities, market illiquidity or the default of a large counterparty across both the derivatives and cash trading portfolios and the loan and funding portfolios.’

The Derivatives Policy Group (1995) included stress testing among the necessary risk measurement tools of derivatives dealers. The US Comptroller of the Currency in Banking Circular 277 (Comptroller of the Currency, 1993) states: ‘National banks’ … systems also should facilitate stress testing and enable management to assess the potential impact of various changes in market factors on earnings and capital. The bank should evaluate risk exposures under various scenarios that represent a broad range of potential market movements and corresponding price behaviors and that consider historical and recent market trends, including combinations of market events that could affect the banking organisation. These factors include low-probability events.’ Specifically, they state, ‘Mechanisms should be in place
to measure market risk consistent with established risk measurement guidelines. These procedures should … provide the information necessary to conduct “stress testing”.’

The Basel Committee on Banking Supervision (1996) eventually made stress testing a prerequisite for banks to be eligible for the internal models approach to market risk capital. The requirement had previously been laid out by the Committee in an earlier release (1995). Concurrent with the implementation of the use of VaR models for the calculation of regulatory market risk capital as permitted by the EU Capital Adequacy Directive, banks were required to ‘conduct a routine and rigorous programme of stress testing’ (Securities and Futures Authority, 1995).

Specifically, the Basel Committee on Banking Supervision (1995) states: ‘Banks that use the internal models approach for meeting market risk capital requirements must have in place a rigorous and comprehensive stress testing program. Stress testing to identify events or influences that could greatly impact banks is a key component of a bank’s assessment of its capital position. Banks’ stress scenarios need to cover a range of factors that can create extraordinary losses or gains in trading portfolios, or make the control of risk in those portfolios very difficult. These factors include low-probability events in all major types of risks, including the various components of market, credit, and operational risks. Stress scenarios need to shed light on the impact of such events on positions that display both linear and non-linear price characteristics (i.e. options and instruments that have options-like characteristics).

‘Banks’ stress tests should be both of a quantitative and qualitative nature. Quantitative criteria should identify plausible stress scenarios to which banks could be exposed. Qualitative criteria should emphasize that two major goals of stress testing are to evaluate the capacity of the bank’s capital to absorb potential large losses and to identify steps the bank can take to reduce its risk and conserve capital. This assessment is integral to setting and evaluating the bank’s management strategy and the results of stress testing should be routinely communicated to senior management and, periodically, to the bank’s board of directors. Banks should combine the use of supervisory stress scenarios with stress tests developed by banks themselves to reflect their specific risk characteristics. Specifically, supervisory authorities may ask banks to provide information on stress testing in three broad areas, which are discussed in turn below.

‘(a) Supervisory scenarios requiring no simulations by the bank

‘Banks should have information on the largest losses experienced during the reporting period available for supervisory review. This loss information could be compared to the level of capital that results from a bank’s internal measurement system. For example, it could provide
supervisory authorities with a picture of how many days of peak day losses would have been covered by a given value-at-risk estimate.

‘(b) Scenarios requiring a simulation by the bank

‘Banks should subject their portfolios to a series of simulated stress scenarios and provide supervisory authorities with the results. These scenarios could include testing the current portfolio against past periods of significant disturbance, for example, the 1987 equity crash, the ERM crises of 1992 and 1993 or the fall in bond markets in the first quarter of 1994, incorporating both the large price movements and the sharp reduction in liquidity associated with these events. A second type of scenario would evaluate the sensitivity of the bank’s market risk exposure to changes in the assumptions about volatilities and correlations. Applying this test would require an evaluation of the historical range of variation for volatilities and correlations and evaluation of the bank’s current positions against the extreme values of the historical range. Due consideration should be given to the sharp variation that at times has occurred in a matter of days in periods of significant market disturbance. The 1987 equity crash, the suspension of the ERM, or the fall in bond markets in the first quarter of 1994, for example, all involved correlations within risk factors approaching the extreme values of 1 or –1 for several days at the height of the disturbance.

‘(c) Scenarios developed by the bank itself to capture the specific characteristics of its portfolio

‘In addition to the scenarios prescribed by supervisory authorities under (a) and (b) above, a bank should also develop its own stress tests which it identifies as most adverse based on the characteristics of its portfolio (e.g. problems in a key region of the world combined with a sharp move in oil prices). Banks should provide supervisory authorities with a description of the methodology used to identify and carry out the scenarios, as well as with a description of the results derived from these scenarios.

‘The results should be reviewed periodically by senior management and should be reflected in the policies and limits set by management and the board of directors. Moreover, if the testing reveals particular vulnerability to a given set of circumstances, the national authorities would expect the bank to take prompt steps to manage those risks appropriately (e.g. by hedging against that outcome or reducing the size of its exposures).’

Another round of calls for stress testing, including recommendations for stress tests of counterparty credit exposures, followed the LTCM and liquidity crises of 1998. See, for example, the report of the President’s Working Group on Financial Markets (1999). The Basel Committee provided further impetus to the application of stress testing to credit in 2002 as part of the new Basel Capital Accord (‘Basel II’), stating (Basel Committee on Banking Supervision, 2002): ‘Banks adopting an [internal ratings based] approach to credit risk will be required to perform a meaningfully conservative credit risk stress test of their own design with the aim of estimating the
extent to which their IRB capital requirements could increase during such a stress scenario. Banks and supervisors will use the results of such stress tests as a means of ensuring that banks hold a sufficient capital buffer under Pillar Two of the new Accord.’ Credit stress testing is discussed in the Credit Risk section of the PRM Handbook.

III.A.4.3 Conceptual Context

As noted in the Introduction, the apparent presumption in all this emphasis on stress testing is that using stress tests will lead to better decisions with respect to risk taking. 88 What is not at all clear, however, is what are the mechanisms at work here. To put this in the form of a question, if we develop a system of stress testing, what are we then actually supposed to do with the stress-test results anyway?

Berkowitz (1999) takes up this question. He argues that there is no reason to support the modelling of stress tests as a thing separate and apart from the normal process used to evaluate risk, for then there is no internally consistent way to put stress results to use. By ‘risk model’ Berkowitz means the computational process by which VaR is ultimately derived. It is presumed that stress tests are needed because there is something lacking in the VaR derived from the firm’s risk model. Stress tests, if they address this lack, could be thought of as a way of enhancing the assessment of portfolio risk, specifically by providing a way of correcting VaR. That is, stress tests should be incorporated into the risk model, either as an integral component of the decision maker’s objective function, or as a tool measuring the distance from some goal. (Of course, one could still view and use stress-test results in any arbitrary way.)

It is easiest to see how this approach to employing stress tests would work by thinking about the historical simulation approach to VaR estimation (see Chapter III.A.2). In (the basic application of) historical simulation, each day of history is assigned an equal probability in the forecast portfolio return distribution. To achieve this correction, a set of stress scenarios may then be added to the history, each scenario weighing equally with each day of history. The resulting ‘corrected’ simulation is then used in evaluating the usual desired percentile for the calculation of VaR, now the ‘corrected’ VaR. The key prerequisite for accepting this approach is a willingness to assign to each stress-test scenario a subjective probability (it is not necessary to assume equal probabilities). This is a powerful and attractive argument for placing stress tests on a sound footing for use in risk management. 89 The risk model employed by Algorithmics, the enterprise-wide risk management software company, follows a similar approach.

88 See Shaw (1997) for a critique of stress tests.

89 See also Zangari (1997a, 1997b) and Cherubini and Della Longa (1999), who apply the Black–Litterman/Bayesian approach to incorporate stress scenarios into the VaR framework.
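A minimal sketch in Python may help to fix ideas about the ‘corrected’ historical-simulation VaR described above. It is an illustration of the concept rather than Berkowitz’s, Algorithmics’ or any vendor’s actual implementation, and the probability-weighting scheme and function names are assumptions.

import numpy as np

def corrected_var(hist_pnl, stress_pnl, stress_prob, alpha=0.99):
    """Historical-simulation VaR with stress scenarios appended.
    hist_pnl    : simulated P&L of the current portfolio on each historical day
    stress_pnl  : P&L of the current portfolio under each stress scenario
    stress_prob : subjective probability assigned to each stress scenario"""
    n = len(hist_pnl)
    p_hist = (1.0 - np.sum(stress_prob)) / n      # remaining mass spread over history
    pnl = np.concatenate([hist_pnl, stress_pnl])
    prob = np.concatenate([np.full(n, p_hist), stress_prob])

    # Weighted percentile: accumulate probability from the worst loss upwards
    order = np.argsort(pnl)
    cum = np.cumsum(prob[order])
    quantile = pnl[order][np.searchsorted(cum, 1.0 - alpha)]
    return -quantile                              # report VaR as a positive loss

Setting every element of stress_prob to 1/(n + k), where k is the number of scenarios, recovers the simple case in which each scenario is weighted equally with each day of history.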
While putting stress testing on a solid foundation, this approach is not a panacea for the risk manager. Even were this approach to be adopted, the risk manager would still be left to deal with the anxiety and doubt over including inappropriate scenarios and excluding overlooked scenarios. Moreover, the bank will incur some real incremental economic costs using this approach: integrating stress tests with the basic risk model will result in an increase in measured VaR, which will result in an increase in regulatory capital. Interestingly, then, to the extent that capital requirements are a binding constraint on the bank (or nearly so), the Basel Committee’s risk-based capital rules may create a disincentive for banks to adopt this integrated approach in favour of a fuzzier application of stress testing, even if it is the most theoretically comforting (i.e., internally consistent) application.

Is that it, then? Is that the only application of stress testing that is sensible? Well, no. Less formally, many institutions rely on stress-test results as a means for the decision maker to perform an intuitive check on his or her comfort with the set of risks in the portfolio. As a risk yardstick, stress tests also provide information to decision makers about risk taking in relation to risk appetite. For this reason some institutions set stress-test limits at the enterprise level. Others use stress-test results to measure capital usage or assess capital charges; this use of stress testing is long-established practice in the credit world. Stress tests can also provide a ‘reality check’ or a way of assessing model risk (e.g., the sensitivity of valuation to parameter estimates or inputs) where complex models are used in risk assessment. Regulators also view stress tests as a way to assess their own comfort level with the risks being run individually by institutions for which they are responsible, and more recently also to check their comfort level with the systemic risks implied by the collective positions of the same institutions.

III.A.4.4 Stress Testing in Practice

Most of the information available on current practice was obtained from a survey of 43 large financial institutions conducted by a task force of G-10 central banks established by the Committee on the Global Financial System (2001). The results were also summarised by Fender et al. (2001) and Fender and Gibson (2001a, 2001b). The survey asked risk managers to list the most important stress scenarios used firm-wide. The task force identified nine stress-scenario themes, among them four themes related to asset class (e.g., stress tests on commodity indices) and four themes related to geographic region (e.g., stress tests on exposures to emerging markets).
The most common scenarios were the 1987 stock market crash and various versions of a hypothetical stock market crash. Among interest-rate themed scenarios, the bond market crash of 1994 was the most common. In the emerging markets theme, an Asian crisis was the most common; US dollar weakness/strength scenarios were the most common among the remaining geographic themed scenarios. Commodity themed scenarios focused most often on a potential Middle East crisis. Stress tests, other than those centred on foreign exchange, tended to reflect the predominance of long exposures in the respondents’ portfolios (e.g., institutions conducted more spread widening than narrowing scenarios, such as widening of credit spreads).

Banks are not terribly keen to disclose much about their stress-testing approach. Below are produced extracts from the 2003 annual reports of the three largest US banks (by assets). Citigroup’s report states: ‘Stress testing is performed on trading portfolios on a regular basis … on individual trading portfolios, as well as on aggregations of portfolios and businesses, as appropriate. It is the responsibility of independent market risk management, in conjunction with the businesses, to develop stress scenarios, review the output of periodic stress testing exercises, and utilize the information to make judgments as to the ongoing appropriateness of exposure levels and limits.’

Bank of America states: ‘[S]tress scenarios are run regularly against the trading portfolio to verify that, even under extreme market moves, we will preserve our capital; to determine the effects of significant historical events; and to determine the effects of specific, extreme hypothetical, but plausible events. The results of the stress scenarios are calculated daily and reported to senior management … .’

JP Morgan Chase, which typically provides the most comprehensive disclosures, states: ‘Stress testing … is used for monitoring limits, cross business risk measurement and economic capital allocation. The Firm stress tests its portfolios at least once a month using multiple scenarios, with shocks to roughly 10,000 market prices specified for each scenario. The Firm conducts economic value stress tests for both its trading and its non-trading activities, using the same scenarios for both. Several macroeconomic event-related scenarios are evaluated across the Firm. Additional scenarios focus on the risks predominant in individual business segments and include scenarios that focus on the potential for adverse moves in complex portfolios. Scenarios are continually reviewed and updated to reflect changes in the Firm’s risk profile and economic events. Stress-test results, trends and explanations are provided each month to the
Firm’s senior management and to the lines of business, to help them better measure and manage risks and to understand event risk-sensitive positions. … The following table represents the worst-case potential economic value stress-test loss (pre-tax) in the Firm’s trading portfolio as predicted by stress-test scenarios:

Trading Economic-Value Stress-Test Loss Results – Pre-Tax
(as of or for the year ended December 31)

Loss in USD $m    Ave.    Min.    Max.    At stress-test date
2003              508     255     888     436  (Dec. 4)
2002(A)           405     103     715     219  (Dec. 5)

(A) Amounts have been revised to reflect the reclassification of certain mortgage banking positions from the trading portfolio to the non-trading portfolio.

‘The potential stress-test loss as of December 4, 2003, is the result of the “Equity Market Collapse” stress scenario, which is broadly modeled on the events of October 1987. Under this scenario, global equity markets suffer a sharp reversal after a long sustained rally; equity prices decline globally; volatilities for equities, interest rates and credit products increase dramatically for short maturities and less so for longer maturities; sovereign bond yields decline moderately; and swap spreads and credit spreads widen moderately. … The Firm’s stress-test methodology assumes that, during an actual stress event, no management action would be taken to change the risk profile of portfolios. This assumption captures the decreased liquidity that often occurs with abnormal markets and results, in the Firm’s view, in a conservative stress-test result. … [S]tress-test losses are calculated at varying dates each month.’

III.A.4.5 Approaches to Stress Testing: An Overview

The definition and practice of stress testing encompass several different techniques (see Table III.A.4.1 for a high-level overview). 90 The choice of which test to employ in a particular institution and situation is driven by several factors, of course, and reflects the specific needs of the users. These needs depend on the complexity of the portfolio, the liquidity of the instruments in the portfolio, the frequency with which it is traded, the volatility of the markets in which the instruments are traded, and the strategies employed. Also important is the cost, in time and resources, needed to generate stress-test results. Regulatory requirements are important, too.

90 The discussion that follows pertains most directly to stress testing for traded instruments. Stress testing either the banking book or a non-financial firm’s exposures presents additional challenges and needs. Approaches specific to those needs have been developed (e.g., net interest income stress tests and economic value added stress tests), but they are outside the scope of this discussion.
Table III.A.4.1: Typology of stress tests

Approach: Historical scenarios
Description: Replay crisis event
Pros: It actually happened that way
Cons: 1. Proxy shocks may be numerous  2. No probabilistic interpretation  3. No guarantee of ‘worst case’

Approach: Hypothetical scenarios
Description: 1. Create event  2. Sensitivity analysis  3. Covariance matrix
Pros: 1. Very flexible  2. Can be detailed
Cons: 1. No guarantee of ‘worst case’  2. Empirical support mixed  3. Assumes data from normal periods are relevant

Approach: Algorithmic
Description: 1. Factor push  2. Maximum loss
Pros: 1. Relatively easy  2. Identifies ‘worst case’ in feasible set (maybe)
Cons: 1. Ignores correlations  2. Limited risk information  3. Minimal qualitative elements  4. No guarantee of ‘worst case’  5. Computationally intensive

Note that none of these approaches can guarantee that the ‘worst case’ is captured, except perhaps, in principle, ‘maximum loss’.

Most of the regulatory attention has focused on stress testing at the portfolio level. For the regulators it is the aggregated impact of stressed market environments that poses risks that interest them. For some time international organisations have pursued the idea of aggregating the results of standardised stress scenarios for individual financial firms into financial sector stress tests to attempt to estimate the economy-wide impact of crisis events (e.g., the Committee on the Global Financial System of the Bank for International Settlements, and the Financial Sector Assessment Program of the IMF and World Bank).

Stress testing of a sort is widely practised at the individual trading desk, too. To the extent that trading desks already perform their own position-by-position stress testing, we have to ask why we cannot merely aggregate those results for the purpose of examining exposure at the portfolio level. Desk-level stress tests are (as they should be) highly focused on the specific risks being run at the desk. Perhaps much of the information generated is not relevant at the aggregate portfolio level. Also, risks that are deemed to be of ‘second order’ at the trading desk, and hence are subject to little or no stress testing, may be important (at the margin at least) when viewed in the context of risks being taken at other desks. Equally important, desk-level tests are used for evaluating and actively managing risk position by position (or at least strategy by strategy), possibly in near real time, whereas portfolio stress testing serves a more strategic purpose in the identification and control of event-related risk; the most useful metrics for the one purpose need not also serve best for the other. Desk-level stress tests tend to be more of the factor-push variety, whereas portfolio stress tests tend to employ the scenario approach.

Irrespective of whether the stress-testing programme is intended for desk-level or portfolio-level risk management, useful stress tests require full revaluation for all positions with nonlinear or discontinuous payouts, as the raison d’être of stress testing is to explore portfolio losses in crises where nonlinearities and discontinuities may expose hidden risks. 91 However, full revaluation can entail significant computational and time costs, and it is tempting to trade off accuracy for speed. Approximate revaluation approaches should be strictly limited, and it may be necessary to develop alternative revaluation strategies to overcome technology constraints that limit the employment of full revaluation for other risk management purposes. The only exception to this requirement is for some desk-level stress testing. Here stress testing of sensitivity-based risk exposure information (deltas, vegas, etc.) is useful for tactical portfolio management decisions as well as being perhaps the only practical way of delivering stress results on demand or in real time.

III.A.4.6 Historical Scenarios

Historical scenarios, as a variety of stress testing, seek to quantify potential losses based on re-enacting a particular historical market event of significance. Scenario shocks that determine the impact on portfolio valuation are taken from observed historical events in the financial markets. This is in contrast to stress tests where shocks are based, for example, on specified changes in the covariance matrix of asset returns, or on shifting prices or rates by an arbitrary number of standard deviations. We are all empiricists when we say, ‘it is reasonable, because it actually happened’. Historical scenario stress testing is required by the Basel Committee, 92 prescribing that ‘scenarios could include testing the current portfolio against past periods of significant disturbance, for example, the 1987 equity crash, the ERM crises of 1992 and 1993 or the fall in bond markets in the first quarter of 1994, incorporating both the large price movements and the sharp reduction in liquidity associated with these events’ (Basel Committee on Banking Supervision, 1996).

It sounds so easy: designate a period in history with a suitable crisis environment and find out what would happen to the current portfolio if that crisis were replicated today. However, as is the case with so many things, it really is necessary to ‘sweat the details’. I will consider the elements of historical stress testing in turn, namely, the choice of a historical period and the specification of shocks.

91 Discontinuities can present a challenge for stress tests of all types. It may be prudent to construct stress tests with the points of discontinuity specifically in mind.

92 With regulators, to say ‘could’ is to say ‘do this unless you have some compelling reason not to’.
III.A.4.6.1 Choosing Event Periods

The first question in choosing a historical period for stress testing is which periods to choose. Common candidates for historical stress tests include the following: the US stock market crash of 1987, the European exchange-rate mechanism crisis of 1992, the US bond market sell-off of 1994, the Mexican peso crisis of 1994, the so-called Asian crisis of 1997, the Russian default of 1998, and the LTCM and liquidity crises of 1998. The attacks of 11 September 2001 also constitute a candidate. Some potential scenarios were mentioned in previous sections.

The next question is how many events should be part of the stress-testing programme. No matter how many historical periods are selected, it is not possible to guarantee that the prospective worst-case scenario is covered. It can be hoped, however, that a judicious selection of scenarios will provide indicative information about areas of vulnerability, especially in identifying risks that may not be obvious from other risk measurements, such as VaR. Ideally, the risk manager will vary the number and variety of historical scenarios evaluated through time depending on both the changing composition of the portfolio and the changing economic environment. Systematic characterisation of market crises 93 may be useful both for ensuring that the set of events employed in a stress-testing programme contains a reasonable portion of the spectrum of possible crises, and for guiding the construction of hypothetical scenarios (discussed later).

A historical event may be defined in one of two ways. In the first, the event is defined relative to a well-known crisis period, say the liquidity crisis of 1998. In the second, the event is defined by examining the historical record of moves in market risk factors relative to some user-defined threshold level of shocks. The former approach is more prevalent. The second approach will no doubt also turn up events that correspond to most well-known crises, but may identify other event periods as well, depending on the particular risk factors whose histories are being scanned for large movements.

The first thing one realises when looking at a historical event candidate for a stress scenario, however, is that the start and end dates of the event are not always obvious. For any particular market rate, the period of interest is likely to be unambiguous; but the more complex and varied the instruments in the portfolio to be stressed, the more difficult is the problem of identifying the start and end date for the stress event.

93 Davis (2003) creates a typology of financial crises. See also, for example, Hickman and Jameson (2001), Jorion (2002), and Malz and Mina (2001).
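The second way of defining an event lends itself to automation. The sketch below scans a daily return history for moves beyond a user-defined threshold and groups neighbouring exceedances into candidate event windows; the threshold of five standard deviations and the grouping rule are illustrative assumptions, not parameters from the text.

import numpy as np

def find_event_windows(returns, dates, k=5.0, gap=5):
    """Return (start, end) date pairs where |daily move| exceeded k
    standard deviations, grouping exceedances within `gap` days."""
    returns = np.asarray(returns, dtype=float)
    hits = np.where(np.abs(returns) > k * returns.std())[0]
    windows, start = [], None
    for i, idx in enumerate(hits):
        if start is None:
            start = idx
        # close the window when the next exceedance is too far away
        if i == len(hits) - 1 or hits[i + 1] - idx > gap:
            windows.append((dates[start], dates[idx]))
            start = None
    return windows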
Historical scenarios rarely play out within a single trading day. More commonly, an event will develop over a period of a few days or even weeks. Given international market linkages observed through contagion and feedback effects, even an event sharply focused in time is likely to engender after-shocks that continue for a few days. The ideal event window cannot be achieved in practice: in the ideal window, for every risk factor, the peaks (or troughs) occur at the start date and the troughs (or peaks) occur at the end date. Two approaches are possible.

In the first, define the event interval, making sure that the interval selected encompasses all (or essentially all) of the significant moves in individual market rates. Then use as the shock magnitude for each risk factor the greatest change in that factor (e.g., peak-to-trough) found within the interval, regardless of the start or end date. The advantage of this is that the scenario will entail the largest possible moves in each risk factor. The disadvantage is that the shocks, while extreme, may make no economic sense when taken together.

In the second, define the event interval such that it comes as close as is possible to capturing exactly the greatest moves in the factors of most interest. Then use as the shock magnitude for each risk factor the change in that factor from the start date to the end date. The advantage is that the scenario has the potential to be economically meaningful, as a key element in establishing the plausibility of a scenario is that the shocks, taken together, must be sensible. The disadvantage of this is that not every factor will experience its greatest move between the chosen dates. The second approach is preferred.
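For a single risk factor, the two candidate shock definitions can be sketched as follows; the price series is assumed to cover the chosen event interval, and the function name is illustrative.

import numpy as np

def interval_shocks(prices):
    """Shock magnitudes for one factor over a chosen event interval:
    (1) greatest move found within the interval (peak-to-trough), and
    (2) the change from the interval's start date to its end date."""
    prices = np.asarray(prices, dtype=float)
    # Largest fall from a running peak and largest rise from a running trough
    worst_fall = np.min(prices - np.maximum.accumulate(prices))
    best_rise = np.max(prices - np.minimum.accumulate(prices))
    peak_to_trough = worst_fall if -worst_fall > best_rise else best_rise
    start_to_end = prices[-1] - prices[0]
    return peak_to_trough, start_to_end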
Because events unfold over days or weeks, the specification of historical events raises questions about how the passage of time is affecting the test results. Two areas of concern come to mind: trading or hedging out risks, and modelling the effect of time passing on expiration, maturity and ‘carry’ (e.g., for fixed income or options positions).

No matter how illiquid a market, there is a price at which a portfolio manager can trade out of a position. Since those costs may be prohibitive, however, it is common to assume in historical stress scenarios that positions cannot be traded or hedged – no matter how long the interval of calendar time spanned by the scenario. It is common to argue (at least, I have argued) that this assumption approximates a worst-case scenario in which illiquidity is extreme. Still, it stretches the plausibility requirement to apply this assumption to a very extended stress period (as traders will readily argue).

Another question that arises with historical scenarios is how to model the impact of the passage of time on instruments in the portfolio whose values depend on time. Instruments that are affected include futures and forwards, bonds and options. It is possible to argue that the scenario telescopes the historical record into a single trading day, thereby making it unnecessary to deal with the passage of time. In this case, however, the effects of illiquidity on the portfolio are muddled somewhat by assuming that time is telescoped, and the plausibility of such assumed one-day moves in rates can be questioned. If explicitly allowing for the passage of time, allowance must be made for the rolling of forwards and futures; the expiration of options and the cash flows from payouts, bond coupons and repo payments should be considered, too.

III.A.4.6.2 Specifying Shock Factors

A fundamental element in specifying shocks for a historical scenario is the choice of relative (or proportional) versus absolute (or additive) shocks. This issue arises in VaR modelling as well, but because the changes in market risk factors are generally larger in stress testing, the effects can be more dramatic. Consider the following example. Suppose that the GBP/USD exchange rate is currently 1.8. Also assume that during the period of historical interest the rate moved from 1.5 at the beginning of the event to 1.75 at the conclusion of the event. The absolute change in the exchange rate during the event was 0.25, and the relative change was 16.7% (appreciation of the GBP). A decision must be made whether to calculate the shocked exchange rate as 1.8 + 0.25 = 2.05 or 1.8 × 1.167 = 2.1. If the levels of market rates are similar between the beginning of the historical event and the current stress-test ‘as of’ date, then the choice is less important. However, this will not generally be the case.

Relative shocks are generally preferred for a couple of reasons. Firstly, a relative shock (generally) corresponds directly to the rate of return on a portfolio, which is (generally) thought to be the parameter of interest in an individual’s utility function. Secondly, when applying a relative shock one will not inadvertently cause a rate to change sign. Nevertheless, relative shocks are not always appropriate. Applying relative shocks to interest rates can be a problem: a relative interest-rate shock that is derived from a historical period when interest rates are very low may imply unrealistic moves in rates when they are at higher levels. As a general rule, then, most shocks should be relative, but shocks to interest rates generally should be absolute. Some market spreads can be either positive or negative, and for these, too, absolute shocks need to be applied. Shocks to volatility should generally be relative, in part because volatilities cannot be negative. Exceptions to these rules should be made on a rate-by-rate basis where it is appropriate to do so.
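The two conventions can be captured in a few lines; the sketch below reproduces the GBP/USD arithmetic of the example above (the mode labels are illustrative):

def apply_shock(current, hist_start, hist_end, mode):
    """Apply a historical move to today's level of a market rate."""
    if mode == "absolute":      # e.g. interest rates, spreads that change sign
        return current + (hist_end - hist_start)
    if mode == "relative":      # e.g. FX rates, equity prices, volatilities
        return current * (hist_end / hist_start)
    raise ValueError(mode)

print(apply_shock(1.8, 1.5, 1.75, "absolute"))  # 2.05
print(apply_shock(1.8, 1.5, 1.75, "relative"))  # 2.10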
An issue arises when the portfolio’s fixed-income instruments are priced from both zero curves and par curves, perhaps as a result of the way different trading desks prefer to view their risks, or when different portfolios are marked using different back-office systems. In this case, consistency between the historical shocks for par curves and zero curves must be imposed.

Applying historical interest-rate shocks to current yield curves requires special attention as well. If historical (absolute) changes at various points on the yield curve are applied point-by-point to the current yield curve, the problem that can arise is that the ‘shocked’ yield curve can take on some very implausible shapes. The more distant in the past is the historical scenario, the more likely this is to be a problem.

Years of research (and probably quite a few doctoral theses) have shown that typically three factors can explain about 95% of the movements in yield curves, commonly interpreted as level, slope and curvature. Principal components analysis (PCA), as described in Frye (1997), is useful for generating realistic ‘shocked’ yield curves in a tractable manner, for two reasons. Firstly, dimensions are reduced so that only the three key risk factors need to be shocked. Secondly, these factors are orthogonal, so they can be shocked independently and the shocked result will be a realistic curve. Ideally, it would be the shocks arising from those three key yield curve factors that are used in scenario construction. If PCA is not applied, it may be necessary to modify historical shocks that give rise to implausible curve shapes, perhaps by applying the Cholesky matrix (derived from the historical covariance matrix) to the independent shocks, thus making them correlated (see Section II.D.4). It is a good idea, at the very least, to monitor the ‘look’ of shocked curves as part of the stress-test process.

This technique is equally important when the scenario specifies shocks to any term structure, such as the term structure of volatility, a term structure of commodity futures of different maturities or a term structure of foreign exchange rates.
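A minimal sketch of PCA-based curve shocking, assuming hist_changes holds daily absolute changes at each maturity and that shocks are expressed in standard deviations of each retained component (all names are illustrative):

import numpy as np

def pca_curve_shock(curve, hist_changes, scores):
    """Shock a yield curve through its principal components.
    curve        : current rates at each maturity (length n)
    hist_changes : T x n history of daily absolute changes in the curve
    scores       : shock sizes, in standard deviations, for the retained
                   components (typically level, slope and curvature)"""
    cov = np.cov(hist_changes, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:len(scores)]
    W = eigvecs[:, order]               # orthogonal factor shapes
    sd = np.sqrt(eigvals[order])        # component volatilities
    # The shocked curve is the current curve plus a combination of the
    # factor shapes, so its shape remains plausible
    return curve + W @ (np.asarray(scores) * sd)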
III.A.4.6.3 Missing Shock Factors

In many instances it is not possible to refer to the historical record for a particular instrument when specifying stress shocks. In some cases instruments in the current portfolio simply were not traded during the historical period. For example, default swaps were not traded at the time of the Mexican peso crisis. In other cases, even if instruments were traded, there is no reliable source for historical data: either the data are bad or there are no data.

It is necessary to have a policy for dealing with ‘missing’ shock factors. A simple rule to follow is not to leave any current positions without an assigned historical shock unless a zero shock is the best guess at what that shock would have been. If positions in particular instruments are deemed to be immaterial at the time of the specification of the scenario, the situation may subsequently change, and the specification of the scenario would then need to be revisited. If stress scenarios become part of the risk limit structure or capital allocation or performance evaluation processes, leaving positions without shocks not only results in an inferior decision-making process but also creates incentives for strategic behaviour that may not be desirable from the perspective of risk appetite.

There are two basic approaches to ensuring that no position is left unshocked: employing proxies or using interpolation. In the case of proxies, a shock is assigned from another instrument. Assumptions about correlations play a big role in following the leave-no-position-unshocked rule, as the best guess is usually obtained from examining historical shocks of instruments thought to be highly correlated with the position in need of a shock assignment. When assigning proxies, it is useful to employ more rather than fewer proxies (equity sector proxies, rather than just a single market proxy), in order to obtain a better-articulated result. Equities are a good example. With equities, mergers, spin-offs, and changes in business focus may make the existing historical record irrelevant. In this case it may be prudent to at least ensure that changes in a company’s industry classification are noted. Then a policy decision can be made whether to ignore the history and instead proxy shocks for such equities to a historical industry-specific shock factor.

Interpolation (or extrapolation) may be appropriate in the case of fixed-income instruments. In part because of the correlation between instruments of different tenors and in part because of the structure of implied forward rates, it is reasonable to interpolate rate shocks (usually linearly) from available historical shocks at adjacent points in the term structure. This filling in and out of the term structure is especially noticeable in emerging markets, where swap and forward foreign exchange markets tend to both become more liquid and extend over time as comfort increases in assessing the longer-term risks. More sophisticated data-filling approaches than discussed here are possible as well.

It should be clear that even when using historical scenarios for stress testing, scenario creation is not a once-only event. Not only should new scenarios be constantly under consideration for development, but even existing scenarios need to be constantly re-evaluated and sometimes tweaked to maintain their usefulness. This can be tedious and unexciting, so it is a good idea to establish a policy to formally review stress scenarios periodically to assist in establishing a good discipline.

III.A.4.7 Hypothetical Scenarios

History does not conveniently present the risk manager with a template for every plausible future market crisis (though the sample size of crises does keep increasing with time). For this reason, it may be desirable to create a hypothetical economic scenario as a stress test. Ideally, a hypothetical scenario is based on a structural model of the global financial markets (perhaps with a ‘real’ or physical goods and services component, too), in which the specification of a parsimonious set of market shocks provided as inputs to the model will result in a complete specification of responses in all markets. Well, that is exactly the approach taken by UK regulatory authorities, who employed a macroeconomic model in their experimentation with macroeconomic stress tests; Hoggarth and Whitley (2003) present a very interesting discussion of the issues involved. However, in most cases that is not going to happen. Still, it is good to keep that ideal in mind when constructing an economic scenario, because it is very easy to make a bad scenario by ignoring cause, effect and co-determination in economic relationships.

III.A.4.7.1 Modifying the Covariance Matrix

Some argue that the key feature of a stress event is embedded in the behaviour of asset correlations. The intuition is strong. In a crisis, investors may make fewer distinctions among assets and issue blanket buy or sell orders in a flight to safety that tends to drive whole classes of assets in the same direction. Or interconnections between markets may manifest when an agent, faced with a liquidity crisis in one market, attempts to liquidate positions in other markets, precipitating a liquidity crisis throughout the system. Or interconnections between market participants may manifest in a crisis where the actions of one agent create the need for other agents to take similar actions.

As with any intuition, it is prudent to test it against the available data. Boyer et al. (1999) show that careless data mining can lead one to conclude incorrectly that correlations are different in stressful markets. 94 Taking Boyer et al.’s points into account, Kim and Finger (2000) conduct an empirical study from which they conclude that the data provide considerable support for the existence of a separate stressful market environment with distinct asset correlations: they propose that observed asset returns are generated by a mixture of normal return-generating processes, one for an ordinary market environment and one for a stressful market environment.

94 Still, they acknowledge that there is evidence that correlations do change.

A stress test can be constructed from a modified covariance matrix in several ways. For example, if it is assumed that asset returns are jointly normally distributed, then the stress portfolio valuation can be obtained by computing the monetary value of a one standard deviation change in the portfolio value (using the modified covariance matrix) and scaling up the result by the desired number of standard deviations (a common multiplier is 4).
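The calculation described in the last paragraph is straightforward once the modified matrix is in hand. A minimal sketch, where the exposure vector holds whatever monetary sensitivities the portfolio mapping produces and the multiplier of 4 follows the text:

import numpy as np

def covariance_stress_loss(exposures, stressed_cov, n_sd=4.0):
    """Stress loss under joint normality: the monetary value of a one
    standard deviation move in portfolio value, computed from the
    modified covariance matrix, scaled up by the desired multiple."""
    one_sd = np.sqrt(exposures @ stressed_cov @ exposures)
    return n_sd * one_sd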
Nevertheless, there is contrary evidence. Loretan and English (2001) argue that the empirical evidence might not support correlation breakdown, but rather be the residual of time-varying volatility. Forbes and Rigobon (2002) argue that, taking into account the apparent relation between correlation and volatility, they are unable to find evidence of changes in correlations during the 1997 Asian crisis, the 1994 Mexican peso crisis, or the 1987 US stock market crash. Then again, Dungey and Zhumabekova (2001) and Corsetti et al. (2002) say that the results in Forbes and Rigobon (2002) may be overstated, first because the number of crisis periods in the sample is small, and second because their econometric specification is too restrictive. In sum, the empirical evidence is not uniformly supportive of the notion that correlations increase in crisis situations. Still, the force of intuition is strong, and it is common to construct stress scenarios with increases in correlation. Note, too, that increased correlation does not in itself guarantee to stress a portfolio: it is simple to construct thought experiments in which the VaR of a portfolio will decline with increases in correlation.

Generally, any stress scenario that involves causing correlations to differ from the relationships embedded in the historical data that were used to estimate them must follow certain rules, at least if plausibility is to be maintained, because implausible stressed portfolio returns can otherwise result (specifically, covariance matrices that are not positive semi-definite, meaning that some portfolios could have negative variance!). It is not advisable to change correlations by arbitrarily setting selected correlations to 0, 1 or –1.

When changing the correlation between two risk factors, which means changing the value in two cells of the matrix (from the symmetry property), it is important to understand that the correlations can only change in reality if the underlying returns on the two factors change relative to each other (possibly in such a way that the average return and the variance of the two do not change). If those underlying returns change, then by implication every correlation between each of those two risk factors and the remaining risk factors may change. Thinking in terms of the correlation matrix, if we want to change the value of one correlation, then all the correlations in the rows and columns intersecting those two cells can change as well. As a result, many possible pairs of altered return vectors will yield any desired correlation (given average returns and variances). By specifying the method to (explicitly or implicitly) adjust the return vectors, it will be possible to determine the corresponding induced changes to the other affected correlations that are necessary to maintain consistency.
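The positive semi-definiteness requirement is easy to check by inspecting eigenvalues. A minimal sketch, with an invented three-factor correlation matrix whose arbitrarily set entries are mutually inconsistent:

import numpy as np

def is_psd(corr, tol=1e-10):
    """A stressed correlation matrix is usable only if no eigenvalue is
    negative; otherwise some portfolio would have negative variance."""
    return np.linalg.eigvalsh(corr).min() >= -tol

stressed = np.array([[ 1.0, 0.9, -0.9],
                     [ 0.9, 1.0,  0.9],
                     [-0.9, 0.9,  1.0]])
print(is_psd(stressed))   # False: these three pairwise values cannot coexist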
Table III.A.4.2 contains statistics derived from the logarithms of the daily changes in closing prices (in New York) for IBM, GE and MSFT for one year ending on 20 April 2004. The upper triangular entries are correlations;95 the remaining entries are variances and covariances (all annualised from the daily data).

Table III.A.4.2: Estimated correlations and annualised variances and covariances

            IBM      GE       MSFT
    IBM     0.038    0.411    0.558
    GE      0.016    0.041    0.474
    MSFT    0.025    0.022    0.054

In this example, the correlation between IBM and MSFT will be increased to 0.850 (from 0.558) using Finger's methodology. Table III.A.4.3 illustrates the operations that are performed on the IBM and MSFT return vectors for one representative date; returns for every date are modified in the same way.

Table III.A.4.3: Modification of returns on a representative date (4 June 2003)

                  Return       Modified Return                  Normalised Return
    R(IBM)        0.005117     λ A(t) + (1 – λ)(0.005117)       rescaled to restore Var(IBM)
    R(MSFT)       –0.0004      λ A(t) + (1 – λ)(–0.0004)        rescaled to restore Var(MSFT)

Here A(t) = ½(0.005117 – 0.0004) is the average of the two returns on that date. For any choice of λ, a simple numerical search (e.g., with Solver in MS Excel) can be used to identify the value of λ that results in a correlation equal to the target level of 0.850. For the data used here, the desired correlation is obtained with λ = 0.4625. Note that shocking individual asset variances can be incorporated into this method as well; see Kupiec (1998).

As should be expected, given the methodology, the correlations between GE and both IBM and MSFT have also been affected by changing the correlation between IBM and MSFT. Table III.A.4.4 shows the final modified correlations and covariances for these three equities. As before, the entries above the diagonal are correlations, and the remaining entries are variances and covariances.

Table III.A.4.4: Modified correlations and annualised variances and covariances

            IBM      GE       MSFT
    IBM     0.038    0.470    0.850
    GE      0.018    0.041    0.498
    MSFT    0.038    0.023    0.054

95 For example, Corr(IBM, GE) = Cov(IBM, GE)/{Var(IBM) × Var(GE)}1/2 = 0.016/{0.038 × 0.041}1/2 = 0.411.
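As a minimal computational sketch of this blending method, the Python fragment below simulates two return series with correlation 0.558 (standing in for IBM and MSFT, whose actual price histories are not reproduced here), then finds the blend parameter λ by bisection rather than with Excel's Solver. All data and names are illustrative, not the handbook's worked numbers.

    import numpy as np

    def blend(r_a, r_b, lam):
        """Blend each series with the cross-series average, as in the text."""
        avg = 0.5 * (r_a + r_b)
        a_mod = lam * avg + (1.0 - lam) * r_a
        b_mod = lam * avg + (1.0 - lam) * r_b
        # Rescale so the original individual variances are unchanged.
        a_mod *= r_a.std() / a_mod.std()
        b_mod *= r_b.std() / b_mod.std()
        return a_mod, b_mod

    def solve_lambda(r_a, r_b, target, tol=1e-6):
        """Bisection on lambda in [0, 1]; correlation rises to 1 at lambda = 1."""
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            a_mod, b_mod = blend(r_a, r_b, mid)
            rho = np.corrcoef(a_mod, b_mod)[0, 1]
            lo, hi = (mid, hi) if rho < target else (lo, mid)
        return 0.5 * (lo + hi)

    rng = np.random.default_rng(0)
    z = rng.standard_normal((252, 2))
    r_a = 0.012 * z[:, 0]                                        # simulated daily returns
    r_b = 0.014 * (0.558 * z[:, 0] + np.sqrt(1 - 0.558**2) * z[:, 1])

    lam = solve_lambda(r_a, r_b, target=0.850)
    a_mod, b_mod = blend(r_a, r_b, lam)
    print(lam, np.corrcoef(a_mod, b_mod)[0, 1])                  # modified correlation ~ 0.850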
When several instrument pairs are chosen to have their correlations fixed at levels different from the historical correlations, this approach in general requires computing a separate λ for each of those pairs of instruments. Further, because many possible pairs of altered return vectors will yield any given correlation, it will be necessary, in general, to solve simultaneously for the set of λs that together yield the desired set of correlations.

More general techniques for modifying correlations in a consistent manner have been suggested; see, for example, Rebonato and Jäckel (1999), Higham (2002) or Turkay et al. (2003). These approaches focus explicitly on eliminating the negative eigenvalues of the stressed correlation matrix, and seek to identify the consistent matrix that is 'closest' to the stressed matrix (according to some metric).
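The sketch below is a crude stand-in for those cited approaches: rather than solving a proper nearest-matrix problem as Higham (2002) does, it simply clips negative eigenvalues to zero and rescales the diagonal back to unity. The function name and the example matrix are the author's own illustration.

    import numpy as np

    def make_psd(corr, eps=0.0):
        """Clip negative eigenvalues and rescale so the diagonal is exactly 1."""
        vals, vecs = np.linalg.eigh(corr)
        vals = np.clip(vals, eps, None)          # eliminate negative eigenvalues
        fixed = vecs @ np.diag(vals) @ vecs.T
        d = np.sqrt(np.diag(fixed))
        fixed = fixed / np.outer(d, d)           # restore unit diagonal
        return 0.5 * (fixed + fixed.T)           # enforce exact symmetry

    # A stressed matrix that is NOT positive semi-definite: the pairwise
    # correlations were set by hand without regard to joint consistency.
    stressed = np.array([[ 1.0,  0.9, -0.9],
                         [ 0.9,  1.0,  0.9],
                         [-0.9,  0.9,  1.0]])
    print(np.linalg.eigvalsh(stressed))   # smallest eigenvalue is negative
    repaired = make_psd(stressed)
    print(np.linalg.eigvalsh(repaired))   # all eigenvalues now >= 0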
III.A.4.7.2 Specifying Factor Shocks (to 'create' an event)

Rather than creating a stress test through modification of the covariance matrix, it is possible to create a hypothetical scenario simply by specifying hypothetical shocks to the market factors. Because of the interdependencies among the risk factors, it is a daunting task to attempt to describe a coherent set of hypothetical shocks encompassing every market risk factor, even for a hypothetical economic scenario. Without an economic model, actual historical behaviour of market prices can provide useful guidance for the specification of plausible shocks.

Another element in specifying shocks is to specify which no-arbitrage relationships are to hold in the scenario. 'Arbitrage' is a term that is used very loosely, so much so that a variation has come into use: pure arbitrage. A pure arbitrage is an opportunity for a riskless and certain profit. Pure arbitrage is achieved through the implementation of a self-financing 'replicating portfolio' (or exact hedging strategy). Instruments that are (in principle) tied by this type of arbitrage relationship include simple European options and their underlying assets, and forwards/futures and the corresponding cash market instrument. The scenario designer must decide which relationships are to be fixed in the scenario. In the case of the relationship between futures and cash equities, for example, it has been observed that the parity relationship did not hold continuously through the 1987 market crash. If desired, the futures–cash relationship can be enforced in scenario construction simply by defining the futures price to be fairly valued relative to the shocked index level, forgoing the implementation of a separate historical shock for the futures (a small numerical sketch of this repricing convention is given at the end of this subsection).

Similarly, it is important when constructing a hypothetical economic scenario to make a conscious decision about what to do with pegged currencies. Depending on the scenario, it may be appropriate to assume that certain pegs are broken. In part, this decision is helped by observing what happened in the historical record when other currency pegs were broken. Currency traders sometimes will make large bets on the actions of governments that either have pegged or are managing the float of their exchange rate, or on expectations of the future exchange rate implied in non-deliverable currency forwards.

An example of a hypothetical economic scenario is the 'commodity themed' Middle East crisis scenario common among banks. It may be assumed in such a scenario that the outbreak of war results in a disruption of oil production. Some spike in crude prices and volatility must then be assumed, and the impact on related energy product prices estimated. However, the impacts do not end there, as such an event is likely to modify investor expectations of future inflationary impacts, lead to possible shortages in certain markets, and bring about pre-emptive central bank responses, all of which will affect asset prices, volatilities and relative exchange rates. For all of these effects factor shocks must be assumed in a way that creates a plausible and coherent picture of the impacts on various markets. These shocks to market prices and rates are then applied to portfolio positions to evaluate potential exposure to the hypothetical event.
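The futures repricing choice mentioned above can be made concrete with the standard cost-of-carry relationship, F = S·exp((r – q)·τ). The inputs below (shock size, rates, maturity) are invented for illustration only.

    import math

    def fair_futures_price(spot, r, q, tau):
        """Standard cost-of-carry fair value: F = S * exp((r - q) * tau)."""
        return spot * math.exp((r - q) * tau)

    # Hypothetical inputs: a 10% equity index shock, 3% funding rate,
    # 2% dividend yield, three months to expiry.
    index_level = 1000.0
    shocked_index = index_level * (1.0 - 0.10)      # the specified factor shock
    shocked_future = fair_futures_price(shocked_index, r=0.03, q=0.02, tau=0.25)
    print(shocked_future)  # the future is *defined* to be fairly valued in the scenario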
III.A.4.7.3 Systemic Events and Stress-Testing Liquidity

Another hypothetical economic scenario that is of great interest is a systemic liquidity event. Since the LTCM and liquidity crises of 1998, regulators and some money centre banks have shown increasing interest in both the immediate and the follow-on effects of such an event. The stress tests discussed above do not probe vulnerabilities arising from the interrelationships among institutions. The voluntary withdrawal of a derivatives dealer from the market is a commonly cited event of this type. The focus of this interest is on the mechanisms that tie together institutions and provide channels for contagion and feedback. Borio (2000) writes that, 'for a proper understanding of liquidity under severe stress, the interaction of basic order imbalances with cash liquidity constraints and counterparty risk needs to be explained'. His analysis also suggests that some factors that may contribute to liquidity in normal times can actually make it more vulnerable under stress. Consider the following features of the financial system, all of which are generally regarded as improving stability and efficiency in normal markets.

Firstly, risk management policies, such as risk limits, position transparency and risk disclosures, as well as advances in information technology, generally promote efficient price discovery and fair market pricing, and moderate risk taking to tolerable levels in normal times. However, they may also contribute to 'herd' behaviour, in which large market players have similar trades, and in which market participants react similarly and simultaneously in response to an event. This likelihood is greater if large market participants measure and limit risk taking in similar ways, or if risk managers at large firms just respond to market events in similar ways. Where a market event causes a sharp increase in measured risk, traders may choose (or be directed) to exit positions to avoid triggering risk limits. The resulting trades may contribute further volatility, and these individual responses will then be exacerbated at the level of the financial system as a whole, with further knock-on effects. This behaviour can exacerbate the initial effects of an event.

Secondly, collateral requirements for over-the-counter derivatives and performance bonds on exchange-traded derivative transactions reduce the likelihood of default in normal times. However, an organisation that trades these may plan to have liquidity sufficient for 'normal' daily mark-to-market contingencies and long-term average liquidity needs, but may still find itself unable to post the collateral as required in a significant market event. Failure to do so requires the counterparty to absorb a portion of the loss. This loss may, in turn, force the counterparty to fail to make required payments on its own obligations, creating a contagion of defaults.

Thirdly, a linchpin of the efficient functioning of the over-the-counter derivatives market is the ability of dealers to control portfolio risk exposures through the construction of synthetic hedges (replicating the desired set of risk characteristics through another portfolio of instruments). Thus pricing is interrelated across the range of derivatives products. Illiquid sectors of the market rely on the instruments traded in the more liquid sectors to create hedges cheaply, so a reduction in liquidity in the liquid sectors can significantly affect the prices in the illiquid sectors. If a large dealer were to pull back from trading, the resulting reduction in market liquidity would create losses at other institutions as markets repriced to reflect the reduced liquidity. In some instances, less liquid instruments might become uneconomical to trade, forcing institutions to exit those positions, causing further price impacts in a cascading effect.

Fourthly, leverage and risk management play a key role in determining how forcefully these mechanisms operate: the more levered the participants, the larger the forced responses to a given shock.
The data needs for an institution to construct such a stress test are great. Firstly, an institution must incorporate counterparty information in its risk database. This may be especially difficult for institutions with many types of instruments held with a prime broker: prime brokers will take a portfolio view of their counterparty risk, and the algorithm they apply for determining collateral may not be transparent. Ultimately, an institution should be able to identify the impact on required collateral of a proposed set of systemic shocks. Secondly, for the instruments that the institution uses in its own hedging, it should identify the other major institutions that either make markets in those instruments or heavily employ those instruments, and estimate market share. For example, the institution should ask whether dealer A is the only dealer making a market in certain instruments in the institution's portfolio or, if there are others, who they are and what is the market share of each, and whether the market is one-sided (e.g., the dealer is long in comparison to most counterparties and is primarily relying on hedges to control overall risk) or two-way (the dealer's book has a balance of long and short positions). Thirdly, an institution should estimate the distribution of positions across the financial system. Since much of this information is not internal to the institution, some estimation will be necessary. Some potentially useful sources of data for this type of stress test are BIS statistical summaries, the reports of derivatives activities of banks published by the US Comptroller of the Currency, and the Commodity Futures Trading Commission commitments of traders reports.

An impact function measures the cost of trading as a function of position size, given other parameters that describe the trading environment (e.g., volume and volatility). It may be possible to use the institution's own trading data to estimate market impact functions in certain instruments. The institution's own pricing models may be used to estimate the price impact on positions of a wider bid–ask spread. Some historical guidance is available by reviewing the behaviour of markets and spreads in periods of illiquidity, such as October 1998. As is the case with data, specifying shocks can be a significant challenge as well, and it will sometimes be necessary to rely heavily on intelligent guesstimates of possible shocks. But, as with any stress-testing method, value-added information is still possible even though the reality falls short of the ideal.

Supranational organisations that have an interest in the financial stability of the global economy are embracing a similar approach to stress testing, the most prominent effort being the Financial Sector Assessment Program. The umbrella organisation for these efforts is the Financial Stability Forum (www.fsforum.org), in conjunction with local country supervisory authorities, with the actual work being undertaken by the International Monetary Fund (www.imf.org) and the World Bank (www.worldbank.org). See Blaschke et al. (2001) and International Monetary Fund (2003) for an overview of this effort.

III.A.4.7.4 Sensitivity Analysis

Sometimes it may be desirable to create simple, somewhat artificial portfolio shocks, in a method referred to as sensitivity analysis. In this type of stress test at most a few risk factors are shocked and correlation is typically ignored. Examples of sensitivity analysis are a parallel shift of the yield curve or a 10% drop in equity prices. The Derivatives Policy Group (1995) report contains recommendations for a parallel yield curve shift of 100 basis points, and for curve steepening and flattening of 25 basis points. These are easy to implement, but they only provide a partial picture, and must be accompanied by a lot of judgement on the part of the risk manager.96

Implementing changes in curve slope, for example, requires that a point on the curve be chosen around which to rotate the curve, and a second point on the curve must be chosen from which to measure the amount of steepening or flattening. These points should reflect the market environment and the manner in which the curve is traded by market participants. A fair place to start might be to take the two-year point as the point of rotation and the 10-year point as the point at which to measure the change in basis points.

Since the biggest losses do not always correspond to the largest moves in factors, it is common in scenario analysis to create a 'ladder' of shocks, in which price impacts are calculated for intermediate values of the risk factors. Design of these 'sensitivity ladders' requires attention to two issues: granularity and range. The granularity of the analysis refers to the distance between the rungs of the ladder, that is, the increment chosen for the change in the risk factor. Granularity should reflect the nature of the portfolio: a portfolio whose valuation function is linear in the risk factor can have a larger increment than a highly nonlinear portfolio with some instruments whose payouts might be discontinuous in the risk factor. The range of shocks should be wide enough to encompass both likely and unlikely (but plausible) moves in the market factors. The recommendations of the Derivatives Policy Group noted above might have represented an adequate range in 1995, but would probably be considered inadequate in 2003–2004, when interest-rate volatility was very high. For these reasons, it makes sense to use the volatility (or perhaps empirical percentiles of the percentage change) of the factor to determine the range. Similarly, the range selected should take into account the time horizon for the analysis, with the range increasing with the horizon (except, for example, for very strongly mean-reverting risk factors).

96 Implementing even these simplistic hypothetical market moves contains some hidden issues that can have a big impact on the results.
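A sensitivity ladder is straightforward to script. The sketch below revalues a toy two-bond book across a ladder of parallel yield shifts; the bond positions, the 25bp granularity and the ±200bp range are all invented for illustration, and a real implementation would use the institution's own pricing library.

    import numpy as np

    def bond_price(face, coupon, ytm, years):
        """Price a bullet bond with annual coupons by discounting its cash flows."""
        t = np.arange(1, years + 1)
        cfs = np.full(years, face * coupon)
        cfs[-1] += face
        return float(np.sum(cfs / (1.0 + ytm) ** t))

    # Hypothetical book: (face, coupon, ytm, maturity in years)
    book = [(1_000_000, 0.05, 0.045, 2), (2_000_000, 0.06, 0.050, 10)]

    base = sum(bond_price(*b) for b in book)
    # Ladder: parallel shifts from -200bp to +200bp in 25bp rungs (range/granularity).
    for shift_bp in range(-200, 201, 25):
        shocked = sum(bond_price(f, c, y + shift_bp / 10_000.0, m)
                      for f, c, y, m in book)
        print(f"{shift_bp:+5d}bp  P&L = {shocked - base:,.0f}")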
III.A.4.7.5 Hybrid Methods

Kupiec (1998) proposes a methodology that is a particular hybrid of covariance matrix manipulation and economic scenarios, which he calls 'stress VaR'. In his approach, the risk manager 'can specify partial "what if" scenarios and use the VaR structure to specify the most likely values for the remaining factors in the system'. Assuming that market risk factors are jointly normally distributed makes the approach very tractable. Suppose the risk manager wishes to specify the shocks to a subset of the market risk factors. Using the covariance matrix of factor returns and this subset of fixed shocks, it is then possible to compute a conditional mean vector and a conditional covariance matrix for the remaining risk factors. Using the resulting conditional distribution of factor returns (the factors with fixed shocks have means given by the shocks, zero variance, and hence zero correlations), a conditional, 'stress', VaR can then be calculated. Kupiec goes on to demonstrate how this approach can be generalised to the case in which selected variances and covariances are given prespecified shocks as well. His approach can also be applied to the problem of missing historical data when specifying shocks to be used in a historical scenario.

Using the unconditional covariance matrix as the starting point for this approach is potentially very limiting if it is true that in crisis periods there is a structural change in the relationships among market risk factors. However, in applying this approach it is not necessary to create the conditional return distribution from the same covariances as are used in an unstressed VaR. Any covariance matrix may be taken as a starting point, such as the historical covariance matrix modified in the manner of Finger (1997) discussed above. Alternatively, the stress VaR could be calculated using the covariance matrix estimated from a particular crisis period. Perhaps more simply, Kim and Finger (2000) employ Kupiec's method after first estimating a stress-environment covariance matrix. They assume that returns are drawn from a mixture of two normals, one for the stress environment and the other for a normal environment, calculating the 'stress VaR' using the estimated parameters of the stress-environment return distribution.
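The conditioning step is just the standard Gaussian formula: with the factors partitioned into shocked (1) and free (2) blocks, the free factors have conditional mean Σ21 Σ11^(-1) s and conditional covariance Σ22 – Σ21 Σ11^(-1) Σ12. The sketch below applies this with invented exposures and covariances; it is a stylised illustration of the idea, not Kupiec's own implementation.

    import numpy as np
    from scipy.stats import norm

    # Unconditional daily covariance of three risk-factor returns (hypothetical).
    cov = np.array([[4.0, 2.0, 1.0],
                    [2.0, 9.0, 3.0],
                    [1.0, 3.0, 16.0]]) * 1e-4
    shock_idx, free_idx = [0], [1, 2]
    s = np.array([-0.05])            # a -5% shock fixed for factor 0

    S11 = cov[np.ix_(shock_idx, shock_idx)]
    S21 = cov[np.ix_(free_idx, shock_idx)]
    S22 = cov[np.ix_(free_idx, free_idx)]

    # Gaussian conditioning (unconditional means taken as zero):
    mu_cond = (S21 @ np.linalg.solve(S11, s)).ravel()       # most likely co-moves
    cov_cond = S22 - S21 @ np.linalg.solve(S11, S21.T)

    w = np.array([5.0, 3.0]) * 1e6   # currency exposures to the free factors
    w_shocked = np.array([2.0e6])    # exposure to the shocked factor
    mean_pnl = float(w_shocked @ s + w @ mu_cond)
    sd_pnl = float(np.sqrt(w @ cov_cond @ w))
    stress_var_99 = -(mean_pnl + norm.ppf(0.01) * sd_pnl)   # 99% 'stress VaR'
    print(mean_pnl, sd_pnl, stress_var_99)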
III.A.4.8 Algorithmic Approaches to Stress Testing

One of the deficiencies of historical and hypothetical stress scenarios is that the user has only a fuzzy level of confidence that the full extent of the potential badness for the portfolio has been exposed. A more systematic approach might yield a greater level of comfort. The goal is to create a search algorithm to identify the worst outcome for the portfolio within some defined feasible set. The key issues with such approaches are the following:

- Are the relationships (e.g., correlations or other measures of interdependence) used in the optimisation relevant for identifying worst-case scenarios?
- Is the algorithm capable of identifying the globally worst-case outcome in the feasible set?
- Does the feasible set include implausible outcomes, and are those outcomes represented disproportionately in the optimal results?

Two approaches are discussed below, namely factor-push and maximum loss.

III.A.4.8.1 Factor-Push Stress Tests

This type of stress test is so named because it involves 'pushing' each individual market risk factor in the direction that results in a loss for the portfolio. Construction is straightforward:

1. A push magnitude, m, is selected. The magnitude chosen is, ultimately, subjective. It may be chosen with reference to (an average of) observed movements in market prices during some significant historical event. Alternatively, it can be stated as a number of standard deviations: each market risk factor may be pushed, say, four standard deviations. Or it may be chosen to correspond to some quantile of returns based on an assumed distribution (or perhaps an empirical distribution quantile). If you are setting the push magnitude using standard deviations or quantiles, the period over which these are estimated must be chosen as well. When using a methodology for estimating volatility conditionally, such as GARCH, then one year of daily return history is good. If you are using unconditional estimates, then it is good to use as much return history as there is reliable data.
2. The portfolio, P, is revalued twice by applying shocks to a single market risk factor: once applying a shock s+ = (+1 × m), and once applying a shock s– = (–1 × m).
3. The two portfolio revaluations are compared, and the shock resulting in the lower portfolio value is adopted for the stress test.
4. Steps 2 and 3 are repeated for each of the N market risk factors affecting the portfolio.
5. The portfolio is revalued once more, this time simultaneously applying the shocks selected in the prior steps for each risk factor.

Consider the following example. A portfolio consists of a long position of 1000 shares in IBM and a short position of 1700 shares in GE. On 23 February 2004 the closing prices of the two equities were $95.96 and $47.09, respectively. Their respective daily standard deviations of return (based on one year of closing prices) were 0.005955 and 0.006972. Set the push magnitude to 6 (more on this below). The shock for IBM will be –6 standard deviations, because the position is long and the return is linear in the price change. The shock for GE will be +6 standard deviations, by analogous reasoning, because the position is short. The factor-push portfolio stress loss is then $6807.
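Steps 2–5 are easy to mechanise for a linear portfolio. The sketch below, using the positions and statistics from the example, prints a loss of roughly $6,800, close to the figure quoted above (small differences come down to rounding conventions).

    import numpy as np

    def factor_push(positions, prices, sigmas, m=6.0):
        """Push each factor +/- m standard deviations, keep the direction that
        loses money (steps 2-4), then apply all chosen shocks jointly (step 5)."""
        base = float(positions @ prices)
        chosen = np.zeros_like(prices)
        for i, sig in enumerate(sigmas):
            worst_pnl, worst_shock = 0.0, 0.0
            for sign in (+1.0, -1.0):
                shocked = prices.copy()
                shocked[i] *= 1.0 + sign * m * sig
                pnl = float(positions @ shocked) - base
                if pnl < worst_pnl:
                    worst_pnl, worst_shock = pnl, sign * m * sig
            chosen[i] = worst_shock
        stressed = prices * (1.0 + chosen)
        return base - float(positions @ stressed)   # the factor-push stress loss

    positions = np.array([1000.0, -1700.0])         # long IBM, short GE
    prices = np.array([95.96, 47.09])
    sigmas = np.array([0.005955, 0.006972])
    print(factor_push(positions, prices, sigmas, m=6.0))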
The selection of the push magnitude is subjective and, most importantly, these stress tests ignore correlations among risk factors. As a result, the specification of the factor changes employed in the stress test may imply highly implausible market dynamics. Consider a situation where the push factor is 4 standard deviations, and the stress shock at the three-month point on a given yield curve is –1 × m while at the adjacent six-month point the shock is +1 × m. Since neighbouring tenor points of a given yield curve are generally very highly positively correlated, such a scenario fails the 'laugh test'.

Also, the factor-push loss can be made arbitrarily large by increasing the push magnitude. Of course, the goal is not to create an arbitrarily large loss, but rather to estimate a plausible, if unlikely, loss. Small perturbations of the push magnitudes can also result in greater estimated losses: for example, if the magnitude applied to IBM is reduced to 5.99 and the magnitude applied to GE is increased to 6.01, the factor-push loss will increase (slightly), because GE has the greater estimated standard deviation. It may be better to select push magnitudes position by position, based on the volatility of each factor.

Choosing the push factor based on the standard deviation carries an implicit assumption that the marginal return distributions have the same shape, and thus that the marginal probabilities of the moves are equal. Note that the largest one-day move downward in IBM in the five years ending 23 February 2004 is –9.31%, or 15.63 times the estimated standard deviation of IBM; the third largest move downward is still 7.78 times the standard deviation. The largest one-day move downward in GE in the same period is –6.31%, or 9.06 times the estimated standard deviation of GE; the third largest move down in GE is 5.35 times the standard deviation. If we assume, as in a historical simulation, that any of the one-day IBM–GE return pairs observed over that five-year period were equally possible when looking forward one day, then the potential worst-case portfolio loss would be $22,175, much bigger than the factor-push loss.

This type of stress test has a variety of other drawbacks as well.
If some of the portfolio positions have returns that are nonlinear functions of the market risk factors, the factor-push loss need not be greater than the losses from all unambiguously smaller moves. A simple thought experiment illustrates this: a long option straddle has its least payout associated with small moves in the underlying market factor. Since the factor-push method only 'searches' among large moves for the worst outcomes, the method will be less useful in such cases. Cross effects in instruments whose values are affected by multiple risk factors are also ignored. For example, for an option on an equity, the option delta is itself related to the volatility of the equity; thus the correct choice of shock for the equity price is dependent on the choice of shock for the volatility. Finally, if it is the case that extreme events are, in effect, observations from a distribution that is distinct from that which is experienced most of the time, there may be no obvious choice of push factor for the portfolio that 'maps' from the estimated standard deviations to a stress environment.

III.A.4.8.2 Maximum Loss

A maximum loss scenario is defined by the set of changes in market risk factors that results in the greatest portfolio loss, subject to some feasibility constraint on the allowable changes in market risk factors (see Studer, 1995, 1997).97 The constraint is necessary because potential portfolio losses need not be bounded and, absent a constraint, the result may lack plausibility. One natural method of constraining the set of feasible risk factors is to consider only those scenarios that have a likelihood in excess of some small probability. Thus the method is well adapted to answer a question such as: just how bad can things plausibly get? Being able to associate a probability with a stress-test loss is potentially very useful. Note that, with positions whose values are nonlinear functions of risk factors in the portfolio, the maximum loss need not occur at the limits of the allowable risk factor changes. Note, too, that the maximum loss scenario is dependent on the structure of the portfolio, that the particular set of shocks that is chosen can change with every run of the stress test, and that the shocks are not related to any particular economic environment, except as an environment is reflected in the statistical description of market factor returns employed in the analysis. This makes it more difficult to communicate intuition about the nature of the shocks that generated the observed stress-test result.

To proceed down this path it is necessary to be able to state joint probabilities of specific sets of risk factor changes, which in turn requires that some statements be made about the co-movements of risk factors. Those statements will be based on either historical or simulated data and can be conditional (e.g., stressed) or unconditional ('normal'). To apply this approach, it is necessary to search through the entire space of risk factor changes with joint probability less than or equal to the plausibility constraint. Breuer and Krenn (2000) suggest Monte Carlo simulation (or some quasi-random method) as a natural method to use in this regard. A naïve approach would be to run the Monte Carlo for a predetermined sample size, throw out any draw of (an n-tuple of) factor returns with a corresponding theoretical probability less than the plausibility constraint, and pick from the remaining sampled (n-tuples of) factor returns the one with the largest associated portfolio loss.

97 More akin to conditional VaR than stress testing, Boudoukh et al. (1995) propose a risk measure they call the worst-case scenario (WCS). WCS is the expected value of the distribution of maximum loss over some time horizon, for example, the expected worst-day portfolio P&L over a month. It is also related to the extreme-value theory method discussed below.
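The following sketch implements that naïve search for a two-factor linear portfolio. One simple way to impose the joint-probability constraint under normality (the author's choice here, not necessarily Breuer and Krenn's) is to keep only draws whose squared Mahalanobis distance lies within a chi-squared bound; the covariance matrix and positions are invented.

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(42)
    cov = np.array([[4.0, 1.5], [1.5, 2.25]]) * 1e-4
    positions = np.array([5.0e6, -2.0e6])

    draws = rng.multivariate_normal(np.zeros(2), cov, size=100_000)
    inv = np.linalg.inv(cov)
    maha = np.einsum('ij,jk,ik->i', draws, inv, draws)   # squared Mahalanobis distance
    plausible = draws[maha <= chi2.ppf(0.999, df=2)]     # drop 'implausible' draws

    pnl = plausible @ positions
    print(f"max loss over {len(plausible)} plausible draws: {-pnl.min():,.0f}")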
Practically, however, too many simulated risk factor vectors would be required to attain a high level of confidence that the maximum loss had been identified using this approach. Quasi-random methods may ensure a more efficient sampling of the feasible return space; but since a complex portfolio may have many local return minima and possibly many discontinuities in the portfolio return function, even quasi-random methods may not make the search for the maximum loss tractable. To employ the approach in a practical manner, it may be necessary to develop a 'smart' search algorithm that, based on the qualities of the positions in the portfolio, makes guesses as to which parts of the return space can be ignored.98

III.A.4.9 Extreme-Value Theory as a Stress-Testing Method

Extreme-value theory (EVT) is based on limit laws which apply to the extreme observations in a sample. These laws allow parametric estimation of high quantiles of loss (negative return) distributions without making any (substantial) assumptions about the shape of the return distribution as a whole. Two flavours of EVT have been employed in looking at risk measurement in finance. The first is called the 'block maxima' approach: an example of the data used in this approach is the set of yearly maxima of negative daily returns on the S&P 500 over some period. The second method is the 'peak-over-threshold' approach: here the data used would be the greatest z (i.e., some natural number of) negative daily returns on the S&P 500 over the same period. It is apparent that the two methods have somewhat different definitions of extreme events.

The application to stress testing is immediate (sort of). If we take the block to be a year, then we can use the block maxima approach to find the loss that is expected to be exceeded once in every k years. Speaking somewhat loosely, the block maxima approach may be better suited to estimation of stress losses, and the peak-over-threshold approach may be better suited to VaR estimation. To employ this approach in a practical manner, two conditions must hold. Firstly, it must be assumed that the historical data employed in the estimation of the tail are representative (i.e., are drawn from the distribution relevant for examining the stress event). Secondly, the size of the historical data sample must be large enough to yield enough 'tail' observations to get good estimates of the tail distribution's parameters. The details of this approach are beyond the scope of this chapter; for an application of the approach to stress testing, see Cotter (2000).

98 Gonzalez-Rivera (2003) develops an interesting related approach.
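A minimal block-maxima sketch: fit a generalised extreme value (GEV) distribution to yearly maxima of daily losses and read off the once-in-k-years loss level. The return series is simulated here (Student-t, for fat tails); with real data one would use, for example, S&P 500 returns, and note that 25 block maxima is a small sample in the sense just discussed.

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(7)
    daily_losses = -0.01 * rng.standard_t(df=4, size=252 * 25)   # 25 'years' of losses
    yearly_maxima = daily_losses.reshape(25, 252).max(axis=1)    # block = one year

    shape, loc, scale = genextreme.fit(yearly_maxima)            # fit the GEV
    k = 10
    loss_k = genextreme.isf(1.0 / k, shape, loc, scale)          # exceeded once in k years
    print(f"once-in-{k}-years daily loss: {loss_k:.1%}")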
III.A.4.10 Summary and Conclusions

The main points of this chapter are as follows:

- Stress testing is perceived as being a useful supplement to value-at-risk because value-at-risk does not convey complete information about the risk in a portfolio.
- Stress-test methods generally fall into three categories: historical scenarios, hypothetical scenarios and algorithms.
- Historical scenarios seek to re-create a particular economic environment from the past. Hypothetical scenarios can represent a complete, if unlikely, economic story or simply a set of ad hoc factor movements. Algorithms attempt to systematically identify the set of factor changes (within some bounds) that give the worst-case portfolio loss; in the case of the factor-push method, the result may not represent a plausible economic story.
- There is no best or right type of stress test. The context in which the results will be used should determine the approach to be taken in stress testing.
- Useful stress tests must represent plausible factor changes, individually and jointly. Creating useful stress tests requires detailed consideration of the potential behaviour of the market risk factors, and an understanding of structural relationships among risk factors (e.g., no-arbitrage requirements and spread relationships).
- Stress tests should be part of a rational economic approach to decision making. However, in practice, most uses of stress results are somewhat ad hoc.

Further Reading

Most of the papers listed here and in the References following may be found on www.GloriaMundi.org.

Basel Committee on Banking Supervision (1999) Recommendations for public disclosure of trading and derivatives activities of banks and securities firms. Mimeo (October).
Bouyé, E (2002) Multivariate extremes at work for portfolio risk management. Working paper (February).
Bouyé, E, Durrleman, V, Nikeghbali, A, Riboulet, G, and Roncalli, T (2000) Copulas for finance: A reading guide and some applications. Working paper (March).
Bouyé, E, Durrleman, V, Nikeghbali, A, Riboulet, G, and Roncalli, T (2001) Copulas: An open field for risk management. Working paper (January).
Čihák, M (2004a) Designing stress tests for the Czech banking system. Czech National Bank, working paper (March).
Čihák, M (2004b) Stress testing: A review of key concepts. Czech National Bank, working paper (July).

References

Basel Committee on Banking Supervision (1994) Risk management guidelines for derivatives. Mimeo (July).
Basel Committee on Banking Supervision (1995) An internal model-based approach to market risk capital requirements. Mimeo (April).
Basel Committee on Banking Supervision (1996) Amendment to the capital accord to incorporate market risks. Mimeo (January).
Basel Committee on Banking Supervision (2002) Basel Committee reaches agreement on new capital accord issues. Press release (10 July).
Berkowitz, J (1999) A coherent framework for stress testing. Journal of Risk, 2(2, Winter).
Blaschke, W, Jones, M T, Majnoni, G, and Martinez Peria, S (2001) Stress testing of financial systems: An overview of issues, methodologies and FSAP experiences. Working paper, International Monetary Fund.
Borio, C (2000) Market liquidity and stress: Selected issues and policy implications. BIS Quarterly Review (November), pp. 38–51.
Boudoukh, J, Richardson, M, and Whitelaw, R (1995) Expect the worst. Risk, 8(9), pp. 100–101.
Boyer, B, Gibson, M, and Loretan, M (1999) Pitfalls in tests for changes in correlations. Working paper, Federal Reserve Board.
Breuer, T, and Krenn, G (2000) Identifying stress test scenarios. Mimeo.
Cherubini, U, and Della Longa, G (1999) Stress testing techniques and value-at-risk measures: A unified approach. Rivista di Matematica per le Scienze Economiche e Sociali, 22(1/2), pp. 77–99.
Committee on the Global Financial System (2001) A survey of stress tests and current practice at major financial institutions. Mimeo (April).
Comptroller of the Currency (1993) Banking Circular 277: Risk management of financial derivatives. Mimeo (October).
Corsetti, G, Pericoli, M, and Sbracia, M (2002) Some contagion, some interdependence. Working paper, Yale University.
Costinot, A, Riboulet, G, and Roncalli, T (2000) Stress testing et théorie des valeurs extrêmes: Une vision quantifiée du risque extrême. Working paper, Credit Lyonnais (September).
Cotter, J (2000) Crash and boom statistics for global equity markets. Working paper, University College Dublin.
Culp, C, and Miller, M (eds) (1999) Corporate Hedging in Theory and Practice: Lessons from Metallgesellschaft. London: Risk Books.
Davis, E P (2003) Towards a typology for systematic financial instability. Working paper, Brunel University (November).
Derivatives Policy Group (1995) Framework for voluntary oversight. Mimeo (March).
Dungey, M, and Zhumabekova, D (2001) Testing for contagion using correlations. Working paper, Australian National University.
Fender, I, and Gibson, M (2001a) The BIS census on stress tests. Risk, 14(5), pp. 50–52.
Fender, I, and Gibson, M (2001b) Stress testing in practice: A survey of 43 major financial institutions. BIS Quarterly Review (June), pp. 58–62.
Fender, I, Gibson, M, and Mosser, P (2001) An international survey of stress tests. Current Issues in Economics and Finance, 7(10), pp. 1–6.
Finger, C (1997) A methodology to stress correlations. RiskMetrics Monitor (Fourth Quarter), pp. 3–12.
Forbes, K, and Rigobon, R (2002) No contagion, only interdependence: Measuring stock market comovements. Journal of Finance, 57(5), pp. 2223–2261.
Frye, J (1997) Principals of risk: Finding value-at-risk through factor-based interest rate scenarios. In S Grayling (ed.), VaR: Understanding and Applying Value at Risk. London: Risk Publications.
Gilbert, C L (1996) Manipulation of metals futures: Lessons from Sumitomo. Working paper, Queen Mary and Westfield College, University of London (November).
Global Derivatives Study Group (1993) Derivatives: Practices and Principles. Washington, DC: Group of Thirty.
Gonzalez-Rivera, G (2003) Value in stress: A coherent approach to stress testing. Working paper, University of California (April).
Greenspan, A (2003) Corporate governance. Speech at the Conference on Bank Structure and Competition (May).
Hickman, A, and Jameson, R (2001) Benchmarking the US attack crisis. ERisk.com (September).
Higham, N J (2002) Computing the nearest correlation matrix: a problem from finance. IMA Journal of Numerical Analysis, 22, pp. 329–343.
Hoggarth, G, and Whitley, J (2003) Assessing the strength of UK banks through macroeconomic stress tests. Financial Stability Review (June), pp. 91–103.
Hong Kong Monetary Authority (2003) Stress testing. Supervisory Policy Manual IC-5.
International Association of Insurance Supervisors (2003) Stress testing by insurers. Guidance paper (October).
International Monetary Fund (2003) Analytical tools of the FSAP. Mimeo (February).
Jeffery, C (2003) The ultimate stress test: Modeling the next liquidity crisis. Risk (November).
Jorion, P (1995) Big Bets Gone Bad. Amsterdam: Elsevier Science.
Jorion, P (2002) Risk management in the aftermath of September 11. Working paper, University of California-Irvine.
Jouanin, J-F, Riboulet, G, and Roncalli, T (2004) Financial applications of copula functions. In G Szegö (ed.), Risk Measures for the 21st Century. New York: Wiley.
Kim, J, and Finger, C (2000) A stress test to incorporate correlation breakdown. Working paper, RiskMetrics Group (September).
Kupiec, P (1998) Stress testing in a value at risk framework. Journal of Derivatives, 6, pp. 7–24.
Loretan, M, and English, W (2001) Evaluating correlation breakdown during periods of market volatility. Working paper, Federal Reserve Board.
MacKenzie, D (2003) An equation and its words: Bricolage, exemplars, disunity and performativity in financial economics. Mimeo.
Malz, A (2001) Risk management in the aftermath of the terrorist attack. Working paper, RiskMetrics Group.
Oesterreichische Nationalbank (1999) Guidelines on market risk. Volume 5: Stress testing.
Overdahl, J, and Schachter, B (1995) Derivatives regulation and financial management: Lessons from Gibson Greetings. Financial Management, 24, pp. 68–78.
President's Working Group on Financial Markets (1999) Hedge funds, leverage, and the lessons of Long-Term Capital Management. Mimeo (April).
Rebonato, R, and Jäckel, P (1999) The most general methodology to create a valid correlation matrix for risk management and option pricing purposes. Journal of Risk, 2(2).
Schachter, B (1998a) The value of stress testing in market risk management. Derivatives Risk Management (March). Arlington, Virginia: A S Pratt & Sons.
Schachter, B (1998b) Move over VaR. Derivatives Risk Management. Arlington, Virginia: A S Pratt & Sons.
Schachter, B (2000a) How well can stress testing complement VaR? In G T Haight (ed.), Derivatives Risk Management (December). Arlington, Virginia: A S Pratt & Sons.
Schachter, B (2000b) Stringent stress tests. Risk, 13 (December), pp. S22–S24.
Securities and Futures Authority Limited (1995) Value-at-risk models. Board Notice 254 (December).
Shaw, J (1997) Beyond VaR and stress testing. In VaR: Understanding and Applying Value-at-Risk, pp. 211–224. New York: Risk Publications.
Studer, G (1995) Value at risk and maximum loss optimization. Working paper, ETHZ (December).
Studer, G (1997) Maximum loss for measurement of market risk. Doctoral thesis, Swiss Federal Institute of Technology (ETH), Zurich.
Turkay, S, Epperlein, E, and Christofides, N (2003) Correlation stress testing for value-at-risk. Journal of Risk, 5(4), pp. 75–89.
Wee, L-S, and Lee, J (1999) Integrating stress testing with risk management. Bank Accounting & Finance (Spring), pp. 7–19.
Zangari, P (1997a) Exploratory stress-scenario analysis with applications to EMU. RiskMetrics Monitor, Special Edition, pp. 31–54.
Zangari, P (1997b) Catering for an event. Risk (July), pp. 34–36.

III.B.1 Credit Risk Management

Dr Jörg Behrens99

III.B.1.1 Introduction

After a decade of success in the field of market risk modelling, mainly driven by advances in derivative pricing and computer technology, the past couple of years have seen a clear shift in interest towards credit risk modelling. Although a significant part of this shift is attributable to regulatory initiatives such as Basel II, an important contribution comes from the demand for more sophisticated credit products and from better management of credit risk. Interestingly enough, the two worlds of pricing on the one hand and risk management on the other are now converging, as credit risk at some institutions is actively managed using credit derivatives. As a result a wealth of new models for pricing and risk management has been developed; many of these state-of-the-art credit risk concepts are explained in Chapters III.B.2 to III.B.6. In fact so many methods have been developed and so many papers are being published that one is tempted to ask to what degree the wealth of new information has reached the 'central figure' in this context: the credit risk manager!

99 Partner, Global Financial Services Risk Management, Ernst & Young.
While the business of publishing the latest developments in glamorous magazines and discussing them at conferences has reached a high point, discussion about the extent to which these developments actually find their way into daily credit risk management seems to be absent. The aim of this chapter therefore is to shed some light on this topic. Are credit risk officers taking advantage of recent advances in credit risk methods? Do the next generation of credit officers need to be rocket scientists? Or are we just observing the exploitation of rather academic theories that do not really add a lot of value to practical credit risk management? In order to answer these questions, let us examine the daily work of a credit risk manager and find out which areas have benefited from recent credit developments, and where there is further need for improvement.

This chapter will focus on some general topics of relevance for a particular credit officer (Mr Noloss) who has just been appointed chief credit risk officer of Universal Bank. From his individual situation as a practitioner we hope to derive conclusions of general relevance. Mr Noloss is so named to highlight the traditional defensive role of the credit officer: to avoid credit losses as much as possible and take corrective action when losses are incurred. While this role is changing, Mr Noloss, like many of his peers, is primarily concerned with protecting his firm from significant credit losses.

Before looking at a typical day in the working life of Mr Noloss, let us browse through his job description in order to better understand his responsibilities:

1. Ownership of the credit risk function and framework. This includes responsibility for the management of credit-related work groups such as the credit portfolio group, the credit modelling team, and the credit risk policy and reporting teams. The portfolio group supports the credit officer by complementing a typically rather transaction-focused view with monitoring of the expected and unexpected losses of the credit book. The credit modelling team reviews the bank's credit risk methodologies and needs to be independent of the business.
2. Ownership of credit processes, including limit setting, reviewing provisions, etc.
3. Compilation of and responsibility for credit risk reporting, including information on limit excesses, exposures, concentrations, counterparty ratings, provisioning, etc.100 Supervision of the credit data quality process and delivery of all critical credit risk information to the various stakeholders and board level.
4. Representative for credit risk, dealing with external credit bodies such as rating agencies and regulators; ownership of all credit risk-related aspects of Basel II.
5. Benchmarking of the performance of credit risk functions within business units according to group credit policy.

III.B.1.2 A Credit To-Do List

Such a list looks good on paper, but how does it translate into practice? This obviously depends on the individual situation. Clearly there are critical tasks and 'other activities' that cover the day. In order not to get sidetracked by too many 'other' issues such as administration, Mr Noloss has produced a 'to-do' list to remind him of the key tasks he wants to focus on (Table III.B.1.1). This also allows a deputy to take over the work smoothly whenever Mr Noloss is not around. These topics will be discussed in more detail below.

100 We will introduce terminology and show components of a detailed credit risk report below.

Table III.B.1.1: To-do list

Review strategic credit positions:
  o Any changes to largest exposures (net of collateral)?
  o How about changes to counterparty ratings?
  o Any significant credits to be approved by chief credit officer or board?
Credit limits and provisions:
  o Any limit excesses?
  o Limits to be reviewed?
  o Provisions still up to date?
  o All concentrations within limits?
Credit exposure:
  o All exposures covered and correctly mapped?
  o Any 'wrong-way' positions?
Credit reporting:
  o All significant risks covered in credit report?
  o Report distributed to all relevant parties?
  o Any significant credits that must be discussed at top management/board level?
Stress and scenario analysis:
  o Any surprises from stress and scenario analysis at portfolio or global level?
  o Anything not covered by current set of scenarios?
Provisions:
  o Any past or anticipated changes in general loss provisions?
  o Any changes to specific provisions?
Documentation:
  o Full documentation in place for all transactions?
  o Break clauses and rating triggers fully recognised?
Credit protection:
  o Credit protection utilised and understood?
  o Any further possibility to exploit credit protection?

Clearly this list is not exhaustive; in particular, it does not address topics with an inherently long time horizon. But in the spirit of this chapter, let us stick with the daily critical tasks for the moment and look at a typical day in the life of our credit officer, Mr Noloss.

III.B.1.2.1 Review of Strategic Credit Positions

Probably the most critical question Mr Noloss asks himself when arriving at his office is 'Are there any recent or overnight changes in critical exposures?' This would generally be most important in the corporate book or in some of the structured transactions, where credit lines could fill quickly in turbulent market conditions. Mr Noloss has a couple of these 'XXL size' positions with individual corporates and funds. Some of these are collateralised,101 and a significant move in market value would probably trigger a margin call. In such scenarios collateral could significantly lose value within hours and diversification could be low.

Not too long ago, collateralised lending at Universal Bank was based on a simple table of 'haircuts' reflecting the volatility of each collateral type and the mismatch between collateral types. Table III.B.1.2 shows collateral types and corresponding haircuts. As can be seen in the table, a haircut of 5% would be applied to the lending value of US government bonds: if a client pledges US government bonds, the client would receive 95 units of cash for collateral worth 100 units. The haircut is low because the value of government bonds is considered to be quite stable; the haircut increases (i.e., the lending value decreases) for other, more volatile collateral types.

Table III.B.1.2: Haircuts for collateralised lending at Universal Bank

    Collateral type     Restrictions                Haircut
    Government bond     EUR, USD, CHF or GBP        5%
    Blue chip equity    SPI or DAX                  20%
    Corporate bond      Investment grade            10%
    Other equity        Listed                      50%

Clearly such tables cannot easily capture collateral of a complex nature such as derivatives, and would therefore either need to be very conservative (i.e., non-competitive in the market) or updated very often. Also, sometimes a client would want to pledge collateral of a 'similar' type, for example a securitised paper with a AAA rating that is not covered by the table but is in spirit an 'investment grade bond'. Universal Bank has therefore accepted some of these, thereby generating a mismatch of collateral that was difficult to manage, which is not desirable.

101 Collateral refers to the assets used to secure a loan or other credit-sensitive transaction. The lender has recourse to these assets if the borrower defaults on its contracted obligations.
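The table-driven approach is easily mechanised. The sketch below encodes Table III.B.1.2 as a lookup of haircuts; the type keys are the author's own labels for the rows above.

    HAIRCUTS = {
        "government_bond": 0.05,    # EUR, USD, CHF or GBP
        "blue_chip_equity": 0.20,   # SPI or DAX
        "corporate_bond_ig": 0.10,  # investment grade
        "other_listed_equity": 0.50,
    }

    def lending_value(collateral):
        """collateral: list of (type, market_value) pairs pledged by the client."""
        return sum(mv * (1.0 - HAIRCUTS[ctype]) for ctype, mv in collateral)

    # A client pledging 100 of government bonds receives a lending value of 95.
    print(lending_value([("government_bond", 100.0)]))            # 95.0
    print(lending_value([("blue_chip_equity", 50.0),
                         ("corporate_bond_ig", 50.0)]))           # 40 + 45 = 85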
A natural evolution of collateralised lending is to implement a dynamic collateral system, which is basically a value-at-risk system for the collateral portfolio. The approach is not only dynamic, in that it reflects changes in collateral value daily, but it also accounts for correlations between the underlying risk factors. Implementing it was a big achievement for Mr Noloss.

Non-collateralised positions require even closer attention. This is why Mr Noloss carefully monitors major credit lines,102 in particular credit line utilisation. High credit line utilisation might eventually indicate an increased default probability: when credit lines are cut, the line tends to be fully drawn in bad times. For obvious reasons, in credit card portfolios one even observes line utilisation of more than 100% at the time of default. This is a classic case of 'wrong-way exposure', which we will discuss a little later: in the event of default the credit exposure is always 'a bit bigger than usual'.

III.B.1.2.2 Credit Limits and Provisions

While the management of credit lines can be regarded as straightforward, the mechanics of defining appropriate limits is not! The first question is 'What shall we limit against?' A traditional variable is the notional amount, which could limit gross exposure against an individual counterparty; that is, the total outstanding exposure must not exceed a certain notional amount. However, this measure does not really capture risk, as we have not differentiated between different products within the limit.

There are other dimensions to limits as well, such as the term structure of credit: should limits vary with the term of the borrowing (or other credit-sensitive transaction)? Some people argue that long-term credit exposure is 'riskier' than short-term exposure, as the uncertainty in exposure and default probability increase with time; therefore credit lines should be a decreasing function of term-to-maturity. However, different products have different potential exposure profiles.103 For example, a currency forward that allows a counterparty to exchange one currency against another at a predefined rate has a potential credit exposure profile that increases with time: the longer the time to maturity, the bigger the potential credit exposure, as a default would translate into a bigger loss to the lending bank (a simple simulation sketch of this profile is given below). A limit that is decreasing with term-to-maturity therefore hardly provides incentives to do long-term currency forwards; this has implications for business incentives, and in some cases, when credit lines fill up, there might not be any room for any new business at all. An alternative limiting variable could be expected loss: the credit line remains open provided expected loss does not exceed a certain level.

102 A credit line defines the total amount that a borrower (or credit card user) may borrow. The term is used synonymously with 'credit limit'.
103 See the discussion of exposure profiles in Section III.B.3.
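The sketch below illustrates the increasing exposure profile of a currency forward by Monte Carlo: the FX rate is simulated as zero-drift geometric Brownian motion (an assumption made for simplicity), discounting is ignored, and potential future exposure is taken as the 95th percentile of positive mark-to-market at each horizon. All parameters are invented.

    import numpy as np

    rng = np.random.default_rng(1)
    notional, strike, spot, vol = 10e6, 1.25, 1.25, 0.10
    horizons = np.linspace(0.25, 5.0, 20)            # quarterly out to 5 years

    # Independent terminal draws per horizon: S_t = S0 * exp(vol*sqrt(t)*Z - vol^2*t/2)
    z = rng.standard_normal((len(horizons), 20_000))
    paths = spot * np.exp(vol * np.sqrt(horizons)[:, None] * z
                          - 0.5 * vol**2 * horizons[:, None])
    mtm = notional * (paths - strike)                # value of buying at the strike
    pfe = np.percentile(np.maximum(mtm, 0.0), 95, axis=1)
    for t, e in zip(horizons, pfe):
        print(f"{t:4.2f}y  PFE(95%) = {e:,.0f}")     # grows with time to maturity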
In addition to daily position monitoring, a critical review of loss provisions104 will be at the heart of a credit officer's job. These days the task of provisioning is made more difficult by the complexity of many credit risks. Every now and then yesterday's 'strategic position' turns into tomorrow's legacy position, leading to significant write-offs (and even the departure of key personnel). So our Mr Noloss will study closely Universal Bank's so-called 'leveraged finance book'105 and check what happens to those wonderful spreads earned some time ago, and he will carefully monitor provisions that might have to be built up in order to cover increases in expected loss.

In some cases, Mr Noloss might find transactions that do not appear to be exposed to credit risk at all, but in fact hide a ticking credit bomb. Didn't Universal Bank invest in that famous hedge fund on which it is now writing (and hedging) put options? In order to dynamically delta-hedge the short option position, they have to sell or buy further into the fund. That sounds like market risk. But what happens if the fund value goes down and Universal Bank needs to buy even more shares in the fund to hedge its short put position, exactly at a time when the credit quality of the fund becomes doubtful? This is an example of 'wrong-way exposure' (explained in the next section), and in this case even the pure credit risk component (which perhaps could be interpreted as a 'hidden' straight loan) might not have been fully recognised.

III.B.1.2.3 Credit Exposure

This brings us to the problem of exposure measurement: that is, how large are potential losses due to credit risk? While the theory of exposure measurement seems to be fully explored, several problems often remain in practice. The biggest ones simply have to do with complexity. Not only does Universal Bank comprise numerous businesses; as its name suggests, it also operates in various countries and runs different IT systems. Problems therefore arise when aggregating exposures of different businesses: exposures might grow and at some stage exceed global counterparty limits. Some of the more challenging issues around exposure measurement in practice are the following.

104 Provisions for bad debts are made when it becomes apparent that debtors will not repay their debts, or will repay them only in part. An amount is set aside out of profits in the accounts to reflect this diminution in the value of assets. A corresponding amount is normally deducted from the total assets in the balance sheet.
105 The term 'leveraged finance' is often used for direct investments in sub-investment-grade exposures where the credit quality outlook is believed to be significantly better than the implied rating. See the discussion in Section III.B.2.
as one could offset this negative exposure with another loan, resulting in a net zero credit exposure. Rather than offsetting such negative credit exposure with the original counterparty, the trading arm of Universal Bank investigates opportunities to 'package up' negative credit exposure for other clients to exploit new revenue streams. This problem highlights a key function of the credit risk manager: he must respond to new developments and judge whether these are in line with the bank's overall policy – or, if required, initiate a change in that policy!

Globally consistent definition and usage of risk factors and risk measures: As Universal Bank has undergone several mergers and acquisitions, several of its exposure systems are built around inconsistent risk definitions and measures. For example, some systems use a 95% confidence level to define potential credit exposure while others use a 99% confidence level. Even worse, other systems might not 'know' the concept of potential exposure at all but deliver expected exposure only.

Correct capture of credit derivative exposure: Classical exposure systems sometimes have difficulty differentiating between 'normal' counterparty exposure and exposure from credit derivatives, which could be offsetting rather than additive, especially in the case of bought credit protection. For example, if a firm buys protection to lower credit exposure, many systems still compute the credit exposure of the credit derivative and add both exposures, rather than recognising the offsetting effect, as they cannot correctly reflect counterparty (initial exposure) and issuer (bought protection).

Correct treatment of 'wrong-way' exposure: Despite much discussion in journals and at conferences, 'wrong-way exposure' is still often not adequately accounted for. Wrong-way exposure basically is exposure that 'goes against you when it hurts most', where the realised loss seems to be significantly larger than a fully utilised credit line, mainly caused by ignoring positive correlation between default probability and credit exposure (see Winters, 1999). A drastic example is a long put position on counterparty X issued by X: if the share price of X drops dramatically, the put option would gain significantly in value and therefore credit exposure would go up. But will X be able to pay – in particular if the share price falls even further? While this problem is fully addressed in various academic papers and books (see, for example, Jarrow and Turnbull, 1996), it often remains a problem in practice. The best solutions truly integrate market and credit risk, thereby recognising exposure–default correlations. As a first step in this direction one often finds banks addressing such problems with specific scenarios in their stress testing exercises (see below).

Other problems deal with the complexity of netting agreements 106 (in particular when ISDA agreements are not in place), the mapping of complex exposures onto market risk factors in the simulation of potential credit exposure, or difficulties with forecasting variables such as prepayment in mortgages that can lead to large uncertainties for credit exposure modelling.

106 A netting agreement allows counterparties to offset exposures between them, for example exposure to firm A through a currency forward bought from firm B, with exposure to firm B through an interest-rate swap bought from firm A. See Section III.B.3.4 for a full discussion.
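As a minimal illustration of the aggregation problem, the sketch below collects trade-level replacement values into counterparty-level gross and net exposures, offsetting only within a netting set. The data model (counterparty, netting set, mark-to-market value) is hypothetical and deliberately simple; real systems must reconcile many more dimensions (products, source systems, confidence levels).

from collections import defaultdict

# (counterparty, netting set or None, current replacement value)
trades = [
    ("CP1", "isda_1",  5.0),   # positive value: counterparty owes us
    ("CP1", "isda_1", -3.0),   # negative value: nets within the same set
    ("CP1", None,      2.0),   # no netting agreement: no offsetting
    ("CP2", "isda_2",  4.0),
]

gross = defaultdict(float)
net_per_set = defaultdict(float)
net = defaultdict(float)

for cpty, netting_set, mtm in trades:
    gross[cpty] += max(mtm, 0.0)                 # gross: positive values only
    if netting_set is None:
        net[cpty] += max(mtm, 0.0)               # floored trade by trade
    else:
        net_per_set[(cpty, netting_set)] += mtm  # offset inside the set

for (cpty, _), value in net_per_set.items():
    net[cpty] += max(value, 0.0)                 # floor per netting set

print(dict(gross))   # {'CP1': 7.0, 'CP2': 4.0}
print(dict(net))     # {'CP1': 4.0, 'CP2': 4.0}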
III.B.1.2.4 Credit Risk Reporting

One of Mr Noloss's key tasks is the collection and reporting of credit information. Not only must he report the most relevant credit risk information to top management (usually the chief risk officer or chief financial officer) and the board as a basis for strategic decisions; this information should also be the basis for his own action plan!

So what are the most relevant pieces of credit information one would want to look at? A good starting point are the largest individual counterparty exposures. How did overnight market moves affect these positions? Is there any news in the press about any of those firms? Are there rumours regarding mergers that could affect any of these positions or eventually merge two positions into a single, even bigger one? Large wrong-way exposures could be highlighted, as well as large exposures with significant cash flows within the next week (e.g. amortisations or pay-offs). A good credit report would list these largest individual exposures by business type and country. Most likely these reports would include information on limits and utilisation as well.

Other credit exposure would be reported against various variables such as sector/industry, risk factor and country. Several firms also combine information on credit exposure and counterparty ratings: how much exposure is AAA, AA, … or how much exposure is investment grade or below? Here the focus is on the overall portfolio rather than on individual credits. Credit risk concentrations are of major concern: does the firm fully recognise credit risk concentrations in certain sectors, countries, risk factors, etc.? Will concentration limits still be appropriate after adverse market movements? Can credit risk be further diversified and business be extended without substantially increasing concentration risks?

Changes of credit risk variables are of major importance,
and good credit reports do highlight relevant changes such that they immediately catch the eye. Clearly there is no single answer to the question of what information should be in such a report. Mr Noloss believes that 'less is more' and produces an executive summary report fitting the most critical information onto two or three pages, and complements it with a second report with everything else for those interested in the detail.

Example III.B.1.1: Largest single credit exposures

A key component of a good credit report is a list of the largest individual credit exposures. This information often covers exposures before and after netting (gross and net exposure) and can be combined with information about counterparty credit quality. In the table below CP1 refers to 'counterparty 1' and so on, and numbers are in units of currency. 'Stress net' is the anticipated exposure measured under stressful conditions, as explained in Section III.B.1.2.5.

Ten largest individual exposures

 Investment grade                      Sub-investment grade
 CP    Rating  Gross  Net   Stress     CP    Rating  Gross  Net   Stress
 CP1   AA      3400   3354  3390       CP11  B        950   894   932
 CP2   AA      1500   1416  1470       CP12  BB       800   676   725
 CP3   AAA     1200   1030  1140       CP13  B        750   609   655
 CP4   BBB      980    909   970       CP14  B        670   607   652
 CP5   BBB      950    843   933       CP15  B        650   582   593
 CP6   BBB      780    737   770       CP16  C        500   451   496
 CP7   AAA      645    617   639       CP17  BB       401   378   390
 CP8   AA       632    512   601       CP18  B        303   279   298
 CP9   AA       500    462   480       CP19  C        230   210   211
 CP10  AAA      450    366   430       CP20  C        150   144   149
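A report like Example III.B.1.1 is easy to generate automatically once exposures are aggregated. The sketch below – using a few rows of the table above and plain Python – sorts counterparties by net exposure within each rating bucket; in practice this would of course be fed directly from the exposure systems.

# Rows as (counterparty, rating, gross, net, stress net); a subset of
# Example III.B.1.1 for brevity.
exposures = [
    ("CP1", "AA", 3400, 3354, 3390), ("CP4", "BBB", 980, 909, 970),
    ("CP11", "B", 950, 894, 932),    ("CP12", "BB", 800, 676, 725),
]

INVESTMENT_GRADE = {"AAA", "AA", "A", "BBB"}

def largest_exposures(rows, investment_grade=True, n=10):
    bucket = [r for r in rows
              if (r[1] in INVESTMENT_GRADE) == investment_grade]
    return sorted(bucket, key=lambda r: r[3], reverse=True)[:n]

for name, rating, gross, net, stress in largest_exposures(exposures):
    print(f"{name:5} {rating:4} gross {gross:5} net {net:5} stress {stress:5}")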
Example III.B.1.2: Exposures by product category

Information about individual exposures is complemented by the aggregated view, usually listed by product type (here Trading, Banking and Cash Management). Again the information can be combined with credit quality information, in this case by splitting aggregated exposures into investment grade (rating BBB or better) and lower.

[Table: total exposure by risk type – Trading, Banking, Cash Management – showing gross and net exposure, split into investment grade and sub-investment grade.]

Example III.B.1.3: Time evolution of credit exposure

Many firms report the time evolution of credit exposure. This allows monitoring of trends, which can be useful to trigger timely corrective action. Again it is good practice to combine this information for various periods with credit quality information, as shown below.

[Figure: credit exposure per reporting period (P1–P12), split by rating grade: Grade BB, Grade B, Grade C, Grade D.]

Example III.B.1.4: Arrow plots

Another good way of highlighting recent changes is an 'arrow plot' that indicates a shift in risk parameters by arrows. Universal Bank uses two-dimensional plots to visualise monthly risk parameter changes. The graph below shows the change in expected loss (x-axis) and extreme tail risk (y-axis) for the biggest five counterparty exposures this month (solid squares) compared to last month (empty squares), without actually using an arrow.

[Figure: 'Moves of 5 largest exposures' – expected loss versus extreme tail risk; EL of CP1: 200m, CP2: 193m, CP3: 184m, CP4: 175m, CP5: 159m.]

Example III.B.1.5: Watch list

Some transactions or counterparties might require special monitoring. The table below shows the largest sub-investment grade exposures bundled with information about rating outlook and the cost of rating triggers in the event of a downgrade. The cost of rating triggers is explained in Section III.B.1.2.7.

Rating watch list (sub-investment grade)

 CP    Rating  Outlook  Exposure  Cost of triggers
 CP11  B       stable     894          90
 CP12  BB      pos        676          13
 CP13  B       neg        609          54
 CP14  B       neg        607          44
 CP15  B       pos        582          70
 CP16  C       neg        451          14
 CP17  BB      pos        378          24
 CP18  B       pos        279           9
 CP19  C       stable     210          79
 CP20  C       stable     144          15

Clearly there are other important components of credit risk reporting that cannot be discussed here. In practice the difficulty is often not so much to produce all the pieces of a credit report (though this might be a challenge for some very complex firms) as to select the most relevant reports and implement contingency plans in case anything unexpected happens.

III.B.1.2.5 Stress and Scenario Analysis

This topic received a big push with the recent Basel II pillar 2 requirements, but it has been at the heart of credit risk management for a long time. Why is this? Firstly, because classical 'what-if' analysis was the only way to estimate large potential losses before statistical credit (portfolio) models 107 were developed. But stress and scenario analysis also allows one to address shortcomings in the firm's risk models, for example through specific scenarios when tail event probabilities are underestimated by the firm's standard models, or when dependencies between probability of default and exposure measurement are not correctly captured by standard risk management models. Stress and scenario analysis can also significantly expand the coverage of risk models, for example by inclusion of additional risk types such as business or strategic risk.

Clearly there is no general guideline covering all aspects of stress and scenario testing, but in order to provide some practical hints let us once more look to Mr Noloss, who has produced a simple but pragmatic stress testing schedule:

Historical stress scenarios: Once every quarter Universal Bank runs a selection of historical crisis scenarios, such as 'the Asian crisis', 'the Russia crisis' and the 'oil crisis'. For these events historical market data are used for the re-evaluation of all traded positions and books, and the impact of credit risk is quantified and reported against 'stress limits'. In the event that any of these stress limits are exceeded, the position will be reduced immediately. Most recently Universal Bank has also started to perform a 'unified risk stress test' where joint moves in different risk factors are recognised. In particular, a combined credit and market risk stress test highlights problems such as 'wrong-way' exposures, as a shock to market risk factors would directly impact counterparty default probabilities.

Potential stress scenarios: In addition to historical scenarios, Mr Noloss has defined a number of hypothetical scenarios that have not (yet?) been realised but are considered to have a decent chance of materialising. These scenarios are defined and reviewed, once a month, by the credit committee, which Mr Noloss chairs.

Testing the models: Another method of stress testing concerns the risk models themselves. Leveraging experience with derivative pricing models, Universal Bank carries out annual testing of the performance of its credit risk models by stressing input parameters and verifying their performance in such events. The benefit of this analysis, which Mr Noloss performs jointly with Universal Bank's independent modelling team, is an increased understanding of model behaviour. Does the model behave intuitively? If not, is there a problem with the model, or did Mr Noloss fail to recognise a specific tail risk? Needless to say, the results of all these exercises are reflected in the standard reports.

107 These models are explained in Chapter III.B.5.
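In its simplest form, a stress test of this kind revalues exposures under scenario shocks and compares the result with the stress limits. The sketch below is a stylised illustration: scenario shocks, sensitivities and limits are all hypothetical, and a real implementation would re-price whole books from stored crisis market data rather than use linear sensitivities.

# Hypothetical scenario shocks to two risk factors, and linear exposure
# sensitivities per counterparty (a crude stand-in for full revaluation).
scenarios = {
    "asian_crisis":  {"fx": -0.35, "spreads": 2.0},
    "russia_crisis": {"fx": -0.10, "spreads": 3.5},
}
positions = [
    # (counterparty, base exposure, sensitivity to each factor);
    # positive spread sensitivity mimics a wrong-way exposure that
    # grows as the counterparty's credit deteriorates.
    ("CP1", 3354, {"fx": 1200, "spreads": 150}),
    ("CP2", 1416, {"fx":  300, "spreads":  80}),
]
stress_limits = {"CP1": 3500, "CP2": 1500}

for name, shocks in scenarios.items():
    print(f"Scenario {name}:")
    for cpty, base, sens in positions:
        stressed = base + sum(sens[f] * shocks[f] for f in shocks)
        flag = "LIMIT EXCEEDED" if stressed > stress_limits[cpty] else "ok"
        print(f"  {cpty}: stressed exposure {stressed:7.0f}  {flag}")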
III.B.1.2.6 Provisioning

Textbooks usually introduce economic capital as the difference between unexpected loss (typically the 99th percentile of a loss distribution) and expected loss (typically the average loss). 108 As expected losses – as the word suggests – are anticipated, they would have to be priced into the product, while unexpected losses would be covered by risk capital. But sometimes – so many firms argue – expected losses cannot be 'priced in', either because of competitive pressure or for other reasons; that is, they cannot be directly charged to customers. So the next best thing is to set up provisions to recognise expected losses internally and avoid the possibility that such losses would harm future profit and loss statements.

In practice, provisioning takes place at two levels. General loss provisions are usually made at portfolio level in line with portfolio expected loss (or the difference between loss charges priced into the product and portfolio expected loss). In addition, for some large transactions, individual loss provisions might include other factors. Individual provisions could also cover model risk, that is, cases where market prices are observable only for similar instruments – typically for non-standard derivative instruments. Similarly, liquidity reserves are booked for transactions where market prices can be observed for low volumes only and prices are expected to drop significantly when large volumes are sold.

108 See Chapter III.0 for a discussion of economic capital generally and Chapter III.B.6 for a detailed discussion of capital for credit risk.
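As a small numerical illustration (hypothetical book and parameters throughout), general loss provisions at portfolio level can be computed as the sum of expected losses, PD x EAD x LGD per obligor:

# General provisions in line with portfolio expected loss. All figures
# are hypothetical; in practice PDs and LGDs come from internal models
# or, increasingly, from market prices such as CDS spreads.
book = [
    # (obligor, EAD, one-year PD, LGD)
    ("CP4",  909, 0.002, 0.55),
    ("CP12", 676, 0.010, 0.60),
    ("CP16", 451, 0.050, 0.70),
]

general_provision = sum(ead * pd * lgd for _, ead, pd, lgd in book)
print(f"general loss provision: {general_provision:.1f}")   # about 20.8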
As expected losses vary over time, reserves need to be monitored and adjusted on a regular basis. Typically, general loss provisions are reviewed monthly or after specific events, such as big market movements. Triggered by market moves or individual loss events – for example, due to changes to exposure and default probabilities – provisions are then adjusted and losses or gains booked accordingly. Individual loss provisions are also reviewed after rating changes of the relevant counterparties. Do any individual rating changes justify an adjustment of these provisions? Are correlation assumptions for joint obligor default still realistic? Is individual credit protection really offsetting the amount of risk that it was supposed to cover?

For a long time Universal Bank has used long-term average default probabilities for the computation of provisions. Recently, it has begun to compute provisions using observable default information taken from market prices such as CDS spreads. Once sufficient data are available, it will compute provisions entirely based on market information.

III.B.1.2.7 Documentation

For many reasons, credit documentation plays a crucial role in credit risk management. Let us start with an obvious example, which is particularly relevant for credit derivatives: when exactly has a counterparty defaulted? It looks as though Basel II has finally arrived at a useful default definition, but it must be crystal clear whether a party selling protection has to pay or not. As for traditional derivatives, standardisation is a good way to resolve problems with the formulation and interpretation of contract details, and ISDA agreements are certainly a very good example of a solution.

Break clauses on one or both sides of a contract allow a firm to terminate a transaction contingent on a predefined event, typically at a predefined point in time. This is a powerful tool for a credit officer, as it allows him to limit maximum credit exposure to an individual counterparty. For example, a break clause could give a corporate the right to prepay a loan after 2 years if interest rates have decreased by more than 2 per cent – which in this case is an embedded interest-rate option. On the other hand, a break clause could give the lending bank the right to force a corporate to prepay a loan in case of a merger with another counterparty. In practice, a break clause is often used to renegotiate the terms and conditions of a transaction and then continue with a modified contract.
Another feature requiring careful documentation is a rating trigger. This contractual feature provides protection against additional credit risk due to a downgrading of the counterparty. For example, pension funds might be restricted to dealing with counterparties holding a very good credit rating. A break clause linked to a downgrade below AAA or AA would allow the pension fund to withdraw investments without incurring additional costs at the time of the trigger event.

Rating triggers are often implemented as a break clause. By accepting rating triggers within their contracts, some firms have generously granted their counterparties implicit credit options without adequately measuring or pricing them. The discovery of rating triggers written 'for free' can come as a bad surprise, in particular as a rating trigger hits at times when the firm most likely already has problems (i.e. a downgrade)! Should the rating of the counterparty deteriorate, the trigger can be enforced – but does this really provide full credit protection in the future? Most certainly not. A better way would be to map a rating trigger into the corresponding credit protection product, for example a put option triggered by counterparty rating. Also, the exact amount of protection for future points in time is difficult to estimate, in particular when credit derivatives for credit protection (as in the first example) cannot be found in the market.

III.B.1.2.8 Credit Protection

Sometimes Mr Noloss is faced with the temptation of allowing his traders to continue trading with counterparties that already have their credit lines filled. These are the days when arbitrage opportunities seem to be obvious and it would be a real shame to pass them up. Of course, all standard options to increase credit lines or to reduce credit exposure have already been explored: in light of the good credit outlook, credit lines have already been increased twice by the credit committee, and short-term exposure has been repackaged to long-term exposure using specially designed swaps in order to fully utilise the entire term structure of credit lines.

To help with such cases Universal Bank has hired Dr Quant, a former rocket scientist with brilliant ideas. 'Why not transfer some of the credit risk to another counterparty for which there is still plenty of credit line available?' So Dr Quant designs credit instruments, or sometimes simply identifies credit derivatives readily available in the market, that Universal Bank can buy to protect itself: if the original counterparty defaults, it will be the new counterparty that pays the outstanding amount. Dr Quant has decided to consider expected exposure as the relevant figure, as the expected exposure is simply a correct
probabilistic measure today – good enough for pricing and reserving today – but one that will need adjustment if any of its constituents (default probability, exposure, loss given default) changes.

Of course Dr Quant has realised that sometimes it is even more profitable to assume credit exposure synthetically, that is, by directly taking on credit exposure through credit derivatives. In other cases, however, Dr Quant's ideas are just too expensive to implement. Finally, Dr Quant comes up with an even better idea: why not consider all counterparty exposures within one portfolio jointly, securitise them 109 and then sell them to other counterparties? In this way there should be a large diversification effect and credit lines could be significantly reduced. So now Dr Quant is working jointly with Mr Noloss and the portfolio team of Universal Bank to identify opportunities.

Mr Noloss is ensuring that this process is in line with Universal Bank's overall credit risk framework, that it fits the global credit risk strategy, and that any new risks – in particular, with regard to limit management and reporting – are fully taken into account. However, a more fundamental question often remains unanswered, perhaps even unasked: 'Do we really want to be in that business? Are we fooling ourselves when we believe we can exploit what is called "arbitrage" today?' In some firms the duration of employment of those people who propose and eventually benefit from delicate transactions is typically far shorter than that of a credit officer. Mr Noloss therefore has a reputation for asking many nasty questions, but he certainly also gets the respect he deserves.

109 See Section III.B.6 for a discussion of securitisation.

III.B.1.3 Other Tasks

We have mainly focused on the credit officer's more frequent tasks. Clearly there are other tasks with an inherently long time horizon. Some of these were listed in the job description at the beginning of this chapter. In the following we highlight a few tasks that are usually addressed only annually but then sometimes lead to very controversial discussions.

Review of models: Backtesting of ratings and models for 'loss given default' is not only required by Basel II. Mr Noloss is keen to ensure that model calibration is done seriously, so that credit granting is based on the best information available and provisions are computed with realistic default probabilities and loss given default values. At Universal Bank backtesting is performed by an independent modelling team that also approves any new models before they go into 'production'.

Review of credit strategy: We previously discussed the review of strategic positions, which clearly is a frequent key task for any credit officer. The credit officer monitors developments that could lead to credit losses and ensures that corrective action is taken.

Review of stress testing: Credit officers will also from time to time meet with economists to hear about changes in the 'global economic outlook' and adapt their stress scenarios accordingly. Clearly the frequency of such meetings varies with the volatility of the business.

Review of credit process: Another topic that is usually triggered by 'events' – for example, a change in regulatory requirements, a large unforeseen loss, or a new chief credit officer assuming his role – is the review of the credit process itself. This is a good opportunity to 'clean up' legacy positions, for example to better integrate new businesses into the existing credit framework. These sometimes start with a complex 'spreadsheet trade' on a remote desk rather than with a formal approval of the 'new business committee', and might require a dedicated infrastructure including appropriate management information systems.

III.B.1.4 Conclusions

Let us return to our initial questions: Are credit risk officers taking advantage of recent advances in credit risk methods? And does the next generation of credit officers need to be rocket scientists? Looking at a day in the life of a credit officer, we observe many of the credit officer's classical tasks and rather few forays into the 'hot areas' of credit risk such as credit modelling or portfolio theory. For those who like to be critical, we could argue that a credit officer usually plays a rather 'defensive' role. In the simplest terms, the head of the business unit has the job of optimising profits, while the traditional role of the credit officer is to control the business units and avoid losses; that is, he helps to avoid the execution of bad (i.e. too credit-risky) transactions. 'Good business' links returns to risks, while good risk management links risks to returns.

But is a credit officer necessarily restricted to a defensive role – 'to protect' – or can he also be someone who actively makes suggestions, identifies opportunities and provides insights? Can he proactively support the business to better generate value for the firm? The answer is a clear yes, and some firms have started to significantly expand the credit officer's role and in fact integrate the two functions. Whereas in bygone days traders would have done 'their business' and an independent risk control unit – in particular, the credit officer – would have either 'approved' or 'blocked' the business, we nowadays see joint teams exploiting opportunities to integrate both aspects.
This is best highlighted by the use of credit derivatives. Credit officers have begun actively using these instruments to manage credit risk. So credit derivatives and securitisation are instruments and techniques that actively link trading and risk management. What used to be a two-stage process has been integrated into one. Another development is the formation of portfolio teams comprising business people and credit risk managers who jointly explore opportunities to do 'the right trades' – trades that optimise return versus risk.

In addition to joining the business team and providing it with a better perspective on risk, the new credit officer has the role of independent oversight of credit risk which, under the Basel regulations, is an essential qualitative requirement, at least for banks. So it seems that the new credit officer, unlike his traditional predecessor, can directly benefit from many advances in credit risk management. The benefits of a portfolio view including up-to-date credit methodologies are so obvious that it is only a matter of time until the typical credit officer not only reads about recent advances in credit risk but perhaps even writes about them. So we will not be surprised to learn of Mr Noloss's replacement, on his retirement, by Mr Raroc 110, who has something of a reputation for his outspoken views on risk and return…

110 See Section III.0.5 for a discussion of RAROC (risk-adjusted return on capital).

References

Jarrow, R, and Turnbull, S (1996) Derivative Securities. Cincinnati, OH: South-Western College Publishing.

Winters, W (1999) 'Wrong way exposure', Risk, July, pp. 574–580.

III.B.2 Foundations of Credit Risk Modelling

Philipp Schönbucher 111

III.B.2.1 Introduction

This chapter introduces the three basic components of a credit loss: the exposure, the default probability and the recovery rate. We define each of these as random processes, that is to say, their future values are not known. Instead, their values are represented as a probability distribution of a random variable. The credit loss distribution is then defined as a product of these three distributions.

The credit loss distribution for a large portfolio of credits can become quite complex, and we shall see in Chapter III.B.5 that many advanced techniques are available for portfolio modelling. In this chapter we merely aim to provide an introduction to this distribution. Section III.B.2.2 makes the definition of default risk precise. Sections III.B.2.3 and III.B.2.4 introduce the three basic processes and the credit loss distribution, and Section III.B.2.5 makes a careful distinction between expected and unexpected loss. Section III.B.2.6 gives a detailed discussion of recovery rates for a portfolio of credits. Section III.B.2.7 summarises and concludes.

III.B.2.2 What is Default Risk?

Default risk is the risk that a counterparty does not honour his obligations. Such an obligation may be a payment obligation, but it is also a default if a supplier does not deliver the parts he promised to deliver, or if a contractor does not render the services that he promised. In this wide context, default risk is everywhere, and there is no transaction that does not involve the risk that one of the two parties involved does not deliver. For a financial institution the largest and most important component of default risk refers to payment obligations such as loans, bonds, and payments arising from over-the-counter derivatives transactions. This risk of a payment default, in particular when it refers to loans and bonds, is called credit risk.

111 D-MATH, ETH Zürich, Rämistrasse 101, 8092 Zürich.
We distinguish:

Default: An obligation is not honoured.
Payment default: An obligor does not make a payment when it is due. This can be:
o Repudiation: Refusal to accept a claim as valid.
o Moratorium: Declaration to stop all payments for some period of time. Usually only sovereigns can afford to do this.
o Credit default: Payment default on borrowed money (loans and bonds).
Insolvency: Inability to pay (even if only temporary).
Bankruptcy: The start of a formal legal procedure to ensure fair treatment of all creditors of a defaulted obligor.

Strictly speaking, default risk is tied to the obligation it refers to, and a priori there is no reason why an obligor should not default on one obligation and honour another. For instance, an obligor may be: in default but not in payment default (e.g. if he defaults on a non-financial obligation); in payment default but not insolvent (e.g. if he could pay, but chooses not to); and in default but not in bankruptcy (e.g. if the bankruptcy procedures have not been started (yet), or if there are no bankruptcy procedures, e.g. if the obligor is a sovereign).

Fortunately, in most cases we can rely on the existence of a functioning legal system which ensures that such selective defaults are not possible. The creditor of the defaulted obligation has the right to go to a court which (eventually) will force the obligor to honour his obligation (if this is possible) or – if the obligor is generally unable to do so – instigate a formal bankruptcy procedure against him. The aim of this procedure is an orderly and fair settlement of the creditors' claims and possibly also other social priorities such as the preservation of jobs. The details of the bankruptcy procedure depend on the applicable local bankruptcy law and will vary across countries. Thus, the existence of a bankruptcy code allows us to speak of the credit risk of an obligor and not just of an individual obligation, which we will do in the following.

III.B.2.3 Exposure, Default and Recovery Processes

To analyse the components of default risk in more detail we now need to introduce some notation. We consider a set of I obligors indexed with i = 1, … , I. The following three processes describe the credit risk of obligor i:

Ni(t): The default indicator process. At time t, Ni(t) takes the value one if the default of obligor i has occurred by that time, and zero if the obligor is still alive at time t. We call τi the time of default of the ith obligor. Obviously, knowledge of the full path of the default indicator process is equivalent to knowledge of the exact time of default of the obligor, but frequently we only have partial knowledge (i.e. that the obligor has not defaulted so far), or we are only interested in a partial event (i.e. default before the maturity of a loan).
Ei(t): The exposure process. The exposure at default (EAD) to obligor i at time t is the total amount of the payment obligations of obligor i at time t which would enter the bankruptcy proceedings if a default occurred at time t. The exposure process will be covered in detail in Chapter III.B.3.

Li(t): The loss given default (LGD) of obligor i, given default at time t. The LGD usually takes values between zero and one. It is often less than one, reflecting the fact that some proportion of the exposure at default may be recovered in bankruptcy proceedings, and Ri(t) = 1 – Li(t) is known as the recovery rate of the obligor. We will discuss recovery rates in more detail in Section III.B.2.6.

Using these processes, the losses due to defaults of obligor i before time T can be represented as follows:

    Default loss = Default arrival x EAD x LGD

or, in our mathematical notation,

    Di(T) = Ni(T) Ei(τ) Li(τ),    (III.B.2.1)

where τ denotes the time of default of the obligor.

For a fixed time horizon T, the default indicator Ni(T) is a binary (0/1) variable which allows one to capture the default arrival risk, that is, the risk whether a default occurs at all. Let pi(T) be the individual probability of default (PD) of obligor i until some time horizon T. For instance, if the one-year probability of default is pi(1) = 0.01%, this means that there is a 1/10,000 chance that the obligor will default at some point in time over the next year. A fixed time horizon (typically one year) is a common point of view in credit risk management, but in many cases this is not sufficient and the timing risk of defaults must also be considered.

For a bank, individual credit defaults of obligors are not unusual events and – although painful and inconvenient – these events are part of the normal course of business. If the exposure is not too large it can be buffered using normal operating cash flows. But when multiple defaults occur simultaneously (or within a short time span) this can threaten the existence of a financial institution. For instance, suppose the credit losses are correlated, so that a default from one obligor makes the default of other obligors more likely. Then there is a concentration risk in the portfolio that needs to be managed. Thus, a major task of the credit risk manager is to measure and control the risk of losses from a whole portfolio of credits.
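Equation (III.B.2.1) is simple enough to state directly in code. The short sketch below (with illustrative values only) evaluates the default loss of a single obligor given realisations of the three components:

# Realised default loss of one obligor, equation (III.B.2.1):
# D_i(T) = N_i(T) * E_i(tau) * L_i(tau). Values are hypothetical.

def default_loss(defaulted_by_T, ead_at_default, lgd_at_default):
    n = 1 if defaulted_by_T else 0        # default indicator N_i(T)
    return n * ead_at_default * lgd_at_default

print(default_loss(False, 10e6, 0.5))    # obligor survives: loss = 0
print(default_loss(True,  10e6, 0.5))    # obligor defaults: loss = 5m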
III.B.2.4 The Credit Loss Distribution

Using the definition of individual default losses (III.B.2.1), the portfolio loss D(T) at time T is defined as the sum of the individual credit losses Di(T), summed over all obligors i = 1, … , I:

    D(T) = Σ_{i=1}^{I} Di(T) = Σ_{i=1}^{I} Ni(T) Ei(τi) Li(τi).    (III.B.2.2)

The individual default losses Di(T) are unknown in advance. Thus the full portfolio loss D(T) is a random variable. The portfolio's credit loss distribution is the probability distribution of this random variable (see Chapter II.E):

    F(x) := P[D(T) ≤ x].    (III.B.2.3)

An important ingredient of the distribution of D(T) is the dependency between the individual losses. In Chapter III.B.4 some popular models are presented which show how default correlation can be modelled. Here we only mention two important points:

(a) Assuming independence between individual default losses almost always leads to a gross underestimation of the portfolio's credit risk.
(b) The default correlation parameters typically have a strong influence on the tail of the loss distribution – and thus on the value at risk (VaR).

Figure III.B.2.1 shows the density function of the credit loss distribution of a typical credit portfolio, using a hypothetical portfolio of 100 obligors with 10m exposure each (i.e. 1bn total portfolio volume). We have used a CreditMetrics type of model (see Chapter III.B.5) with individually varying unconditional default probabilities, a 50% assumed recovery rate and a constant asset correlation of 20%.

[Figure III.B.2.1: Density function of the portfolio loss of a typical loan portfolio. Mean loss (expected loss), 99% VaR and 99.9% VaR are shown as vertical lines; losses from 0 to 400 on the horizontal axis, probability (0% to 14%) on the vertical axis.]

In the example of Figure III.B.2.1, losses will be less than 60m in 80% of the cases. Most of the probability mass is concentrated around the low loss events (e.g. losses between 0 and 50 in Figure III.B.2.1). These are also the events that we are most likely to observe in historical data sets.

Credit portfolio loss distributions have several features that distinguish them from the 'profit and loss' distributions (or returns distributions) of market variables such as equities, interest rates or FX markets:

The distribution is not symmetrical. The 'upside' is limited (the best possible case is a credit loss of zero), while the downside can become extremely large.
The distribution is highly skewed.
The distribution has heavy tails. This means that the probability of large losses decreases very slowly, and VaR quantiles are quite far out in the tail of the distribution.
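A loss distribution with these features is easy to generate by simulation. The sketch below uses a one-factor Gaussian copula model in the spirit of CreditMetrics, with the same stylised portfolio as in Figure III.B.2.1 (100 obligors of 10m each, 50% recovery, 20% asset correlation) – except that, for simplicity, a flat 2% default probability replaces the individually varying ones, so the resulting numbers will differ from those in the figure.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_obligors, n_sims = 100, 100_000
exposure, lgd, rho, pd = 10.0, 0.5, 0.20, 0.02   # exposure in millions
barrier = norm.ppf(pd)                           # default threshold

z = rng.standard_normal((n_sims, 1))             # common (systematic) factor
eps = rng.standard_normal((n_sims, n_obligors))  # idiosyncratic factors
asset = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps
defaults = asset < barrier                       # correlated default events

losses = defaults.sum(axis=1) * exposure * lgd   # portfolio loss D(T)

print(f"expected loss: {losses.mean():6.1f}m")   # about 10m here
print(f"99% VaR      : {np.percentile(losses, 99):6.1f}m")
print(f"99.9% VaR    : {np.percentile(losses, 99.9):6.1f}m")

Even this toy version reproduces the qualitative picture: a mean far below the high quantiles, and a long, slowly decaying right tail.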
III.B.2.5 Expected and Unexpected Loss

The 'standard scenario' which people intuitively expect to happen when they consider the default risk of an obligor is 'no default', that is, no loss. This is indeed the most likely scenario if we consider each obligor individually (unless we consider an obligor of extremely low credit quality). This scenario is also still used quite frequently for accounting purposes: a loan or bond is booked at its notional value (essentially assuming zero loss), and only if it gets into distress is it depreciated. In some institutions return on capital is still (incorrectly) calculated this way.

Unfortunately, this is one of the cases where naive intuition can lead us astray: the 'standard scenario' is not the mathematical expectation of the loss on the individual obligor. If we assume that exposure Ei and loss given default Li are known and constant, the expected loss is

    E[Di(T)] = pi(T) Ei Li > 0,    (III.B.2.4)

where pi(T) is the default probability of the obligor. At the level of individual obligors in isolation, the concept of expected loss may be counterintuitive at first, because we will never observe a realisation of the expected loss: either the obligor survives (then the realised loss will be zero) or the obligor defaults (then we will have a realised loss which is much larger than the expected loss). Typically, the expected loss is small (because pi will be small), but it is positive.

There is a related trick question. Next time you go out, offer to buy your friend a drink if the next person entering the bar does not have an above average number of legs. Of course your friend will have to buy you a drink if the person does indeed have an above average number of legs. You will win the bet if the next person has two legs: the average number of legs per person in the population must be slightly less than two, because there are some unfortunate people who have lost one or both legs (but there are no people with more than two legs).

The same idea applies to credit obligors: most obligors will perform better than expected (they will not default), but there are some who perform significantly worse than expected. But nobody will perform exactly as expected. These small errors will accumulate when we consider a portfolio of many obligors. In a portfolio of 1000 obligors we may no longer assume that none of these obligors will default. Even if each of the obligors has a default probability of only 1%, we will have to expect 10 defaults. Suffering the expected loss (in particular, on a portfolio) is not bad luck: it is what you should expect to happen.

The expected loss on a portfolio is the sum of the expected losses of the individual obligors: 112

    E[D(T)] = Σ_{i=1}^{I} E[Di(T)].    (III.B.2.5)

Expected loss is an important concept when it comes to performance measurement, in particular in connection with risk-adjusted return on capital (RAROC) calculations (see Chapter III.0). Consequently, the expected loss should be covered from the portfolio's earnings. It should not require capital reserves or the intervention of risk management. When a loan's expected gain (in terms of excess earnings over funding and administration costs) is not sufficient to cover the expected loss on this loan, then the transaction should not be undertaken.

112 This follows from the property of the expectation operator: E(X + Y) = E(X) + E(Y) – see Chapter II.E.
Unexpected loss is usually defined with respect to a VaR quantile and the probability distribution of the portfolio's loss. Let us assume that D99% is the portfolio's 99% VaR quantile, that is,

    P[D(T) ≤ D99%] = 99%.

Then the unexpected loss of the portfolio at a VaR quantile of 99% is defined as the difference between the 99% quantile level and the expected loss of the portfolio:

    UEL = D99% − E[D(T)].    (III.B.2.6)

If another risk measure such as conditional VaR is used in place of VaR, then (III.B.2.6) is easily extended: we define unexpected loss in these situations by replacing D99% with the general risk measure.

The term 'unexpected loss' may be confusing at first, because it does not concern losses that were unexpected. Intuitively, one might define unexpected loss as the amount by which the portfolio's credit loss turns out, in the end, to exceed the originally expected loss:

    max{D(T) − E[D(T)], 0}.    (III.B.2.7)

Here, we will use unexpected loss in the sense of equation (III.B.2.6), and not in the sense of (III.B.2.7). The unexpected loss in the sense of (III.B.2.6) is not something unexpected, but only something like a worst-case scenario. More details on some alternative risk measures are given in Chapter III.A.3.

As opposed to expected loss, the unexpected loss is not additive in the exposures. If, for example, we assume that, with zero recovery and unit exposure, each obligor defaults with a probability of 3%, then each obligor's individual 99% VaR will be 1, that is, its total exposure. But the 99% VaR of a large portfolio of such obligors will not be the total exposure of the portfolio (unless we have the extreme case of perfect dependency between all obligors). This simple summation property will not hold for the unexpected loss!

This can also be seen from Figure III.B.2.1, where the 99% and 99.9% VaR loss quantiles were shown as two vertical lines intersecting the tail of the loss distribution. The level of the expected loss of the portfolio is shown by the first (leftmost) vertical line; it is at a level of about 35m, that is, 3.5% of the portfolio's notional amount. The 99% VaR level is at approximately 160m, which yields an unexpected loss of 160m − 35m = 125m. The 99.9% VaR level is at approximately 220m, with an unexpected loss of 185m. It is no coincidence that, even at a very high VaR quantile of 99.9%, the unexpected loss is still much less than the maximum possible loss of 1bn that is suffered when the total portfolio defaults with zero recovery. This effect stems from the partial diversification which is still present in the portfolio despite an asset correlation of 20%.
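Given an array of simulated portfolio losses – such as the one produced by the earlier simulation sketch – unexpected loss in the sense of equation (III.B.2.6) is a one-line computation:

import numpy as np

def unexpected_loss(losses, quantile=0.99):
    # UEL = D_q - E[D(T)], equation (III.B.2.6)
    return np.quantile(losses, quantile) - np.mean(losses)

# Illustration on a synthetic, skewed loss sample (hypothetical numbers):
sample = np.random.default_rng(1).exponential(scale=35.0, size=100_000)
print(unexpected_loss(sample))   # 99% quantile minus the mean loss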
The unexpected loss is frequently used to determine the capital reserves that have to be held against the credit risk of the portfolio. Losses in the portfolio up to the amount of the expected loss will have to be borne by the individual business lines (because these losses are economic losses), but any losses that exceed the expected loss will hit the risk capital reserves. Should these reserves not suffice to cover all losses, the bank itself will have to default. But by setting the original VaR level, the probability of this event can be controlled.

A stylised capital allocation procedure is as follows:
1. Fix a VaR quantile for credit losses (usually 99% or 99.9%). This quantile should reflect the institution's desired survival probability due to credit losses (this probability can be derived from its targeted credit rating). This is a management decision that has to be made at the top level.
2. Determine the portfolio's expected loss.
3. Determine the unexpected loss of the portfolio according to (III.B.2.6).
4. Allocate risk capital to the portfolio to the amount of the unexpected loss.
5. Split up the portfolio's risk capital over the individual components of the portfolio according to their risk capital contributions. The risk management department then makes appropriate risk charges to the business lines.

It is not economically viable to hold full reserves against total loss of the portfolio, but reserving against unexpected loss at a sufficiently high quantile is viable and effective if it is done centrally under exploitation of all diversification effects. Thus, coverage of the (linear) expected loss is in the domain of responsibility of the business lines, but the management of the (highly non-linear) unexpected loss is usually a task for a centralised risk management department.

III.B.2.6 Recovery Rates

As we can see from equations (III.B.2.1) and (III.B.2.4), the recovery rate (or the LGD) of an obligor is as important in determining default losses or expected default losses as the default probability. Nevertheless, research into recovery rates has been neglected for a long time, and most focus has been put on default events and default probabilities. This is partly due to the fact that data on recovery rates are much more fragmented and unreliable than data on default events.

By nature, recovery rates are closely tied to the legal bankruptcy procedure which is entered upon the default of the obligor. While we may expect that obligors will do their best to avoid bankruptcy in almost any legal environment (which will make default arrivals largely independent of the legal framework), the bankruptcy procedures start to diverge significantly: some procedures aim to find a way to restructure the bankrupt obligor and to enable him to become profitable again (e.g. Chapter 11 in the USA), while others aim to liquidate the obligor's business and use the proceeds to pay off the debts (e.g. Chapter 7 bankruptcy in the USA). The final outcome for the creditors usually depends first on the choice of bankruptcy procedure, and then on complicated negotiations between many parties. Generally, for large obligors, bankruptcy procedures can easily take several years.
In a typical bankruptcy procedure all creditors register their legal claim amounts with the bankruptcy court. There is a well-defined procedure to determine these legal claim amounts which does not necessarily reflect the market value of the claim: for example, for loans or bonds only the notional amount (and any interest payments which are currently due) is considered, but not the actual market value of the bond or the value of the future coupon payments. These values may be significant if interest rates have moved since the issuance of the bond. According to the ISDA standard definitions, the legal claim amount for over-the-counter derivatives is usually the current replacement value of the contract, where it is assumed that the counterparty has the same rating as the defaulted counterparty's pre-default rating. We will have to ignore these effects here, and just warn the reader that, because of the strong dependency on procedural details, recovery rates are not easily comparable across countries.

Claims are then grouped by priority class (collateralised, senior, junior, etc.), with secured creditors having the first claim on their collateral and the rest of the firm's assets; unsecured creditors come next; and then stockholders have the last claim and may not receive anything. Generally, the outcome of any bankruptcy procedure will be a settlement in which creditors of the same legal claim amount and the same priority class will be treated identically.

If the creditor's claims are settled in cash, the definition of the recovery rate is relatively straightforward: if each dollar of legal claim amount receives a 40 cent cash settlement, then the recovery rate is 40% and the LGD is 1 − 40% = 60%. The only problem is to decide whether the payment should be discounted back to the actual date of default (in particular if the obligor is restructured and not liquidated, and the usual answer is 'yes'), and whether legal costs should be subtracted from it (again 'yes'). Unfortunately, cash settlements tend to be rare. Much more frequently (in particular, if the obligor is restructured and not liquidated) the settlement is partly cash, and partly some other type of security such as equity, preferred equity or restructured debt of the reorganised obligor. In this case, determining the value of the settlement is often close to impossible, in particular if the obligor was de-listed from the stock exchange in the course of the bankruptcy.

For these reasons, recovery rates are generally defined without reference to the final settlement:

Definition 1 (Market Value Recovery). The recovery rate is the market value per unit of legal claim amount of defaulted debt, some short time (e.g. 1 or 3 months) after default.

This definition also coincides with the way recovery rates are determined in credit default swaps with cash settlement (see Chapter I.B.6). It generally applies to larger obligors (e.g. obligors rated by public rating agencies). When smaller obligors are concerned (e.g. retail obligors and/or small and medium-sized enterprises) there will be no market price for distressed debt, and we will have to attach a direct valuation to the final default settlement and define the recovery rate as follows:

Definition 2 (Settlement Value Recovery). The recovery rate is the value of the default settlement per unit of legal claim, discounted back to the date of default and after subtracting legal and administrative costs.

According to the discussion above, we expect the following factors to directly influence the recovery rates of defaulted debt: collateral; the legal priority class of the claim; and the legislature in which the bankruptcy takes place (the UK tends to be a rather creditor-friendly legislature, while France and the USA are more obligor-friendly and thus have lower recovery rates).

Some other, less easily observed factors turned out to be significant in empirical investigations (see, for example, Altman et al., 2001; Gupton et al., 2001; Renault and Scaillet, 2004; Van de Castle et al., 2001):

The industry group of the obligor. The less capital intensive the obligor's business, the less substance there is to liquidate in the event of a bankruptcy. For instance, 'dotcom' companies tended to have recoveries close to zero. Financial institutions tend to have significantly different (higher) recovery rates than industrial obligors.

The obligor's rating prior to default. An obligor who has spent much time close to default usually has fewer assets to liquidate to pay off the creditors than an obligor who defaulted quite suddenly from a high rating class.

The average rating of the other obligors in the industry group, and the business cycle. Recoveries tend to be lower in recessions and in industry groups that are in cyclical downswings or which have large overcapacities. This affects the liquidation value of the obligor's business and/or the value and viability of a restructured firm.
Despite these empirical findings, it turns out that it is virtually impossible to predict the recovery rate of an obligor with much certainty. The margins of error are very large indeed. Table III.B.2.1 shows estimated recovery rates and their standard errors for US corporate debt of different seniority classes. It can be seen that the average standard error is often of the same order of magnitude as the mean of the recovery rate.

Table III.B.2.1: Recovery rates by seniority of claim

 Seniority             Observations   Mean (%)   Standard deviation (%)
 Senior secured              82        56.42          25.03
 Senior unsecured           225        46.74          25.15
 Subordinated               174        35.61          23.09
 Junior subordinated        142        35.57          22.31
 Total                      623        42.35          24.64

Source: Renault and Scaillet (2004). Data set: S&P, 1981–1999, US Corporates.

From a risk management point of view the large prediction errors in the recovery rates would not be too serious if we could at least hope that our estimation errors will cancel out on average over several defaults. Unfortunately, the systematic dependence of recoveries on the business cycle destroys this hope. Recoveries depend on a common factor and thus they will not diversify away. In particular, we will get hit twice in a recession: first, because there are more defaults than usual, and second, because we will have lower than average recovery rates.

This is confirmed in Table III.B.2.2, which shows average recovery rates for different phases of the business cycle. In the years 1982–2000 (a proxy for a 'long-term average') recoveries tended to be much higher than in the recent (2001 and 2002) downswing, where also the default incidence has increased significantly. For example, recoveries on senior unsecured debt dropped from a long-term average of 43.8% to 35.5% and 34.0% in the 2001/02 recession. Thus, we should stress the recovery-rate assumptions of our credit risk models when we consider recession scenarios.

Table III.B.2.2: Average recovery rates (%) of defaulted debt at different periods in time

 Asset class           1982–2002   1982–2000   2001   2002
 Secured bank loans       61.4        64.0     57.2   51.6
 Equipment trust          40.0        43.5      NA     NA
 Senior secured           53.8        57.4     48.0   52.1
 Senior unsecured         37.4        43.8     35.5   34.0
 Senior subordinated      32.0        34.1     26.5   24.5
 Subordinated             30.9        32.2     15.8   23.4
 Junior subordinated      23.9        26.0      NA     NA
 All bonds                37.2        39.0     34.3   34.7

Source: Moody's KMV (2003)
Figure III.B.2.2: Beta distributions fitted to the recovery data of Table III.B.2.1 (densities for the senior secured, senior unsecured, subordinated, junior subordinated and all-issues classes, plotted over recovery rates from 0% to 100%)

III.B.2.7 Conclusion

Modelling credit risk has become an essential tool for modern risk management within a financial institution. Such models are used to determine both expected loss (important for pricing loans and other assets or contracts) and unexpected loss (necessary for assessing the appropriate size of the capital buffer). The fundamental idea presented in this chapter is that probability of default, exposure amount and loss given default (or, conversely, recovery rates) combine to give the credit loss distribution for a portfolio of assets. This distribution is typically skewed, with a small probability of large losses.

Recovery rates are one of the three crucial elements in determining the credit loss distribution. They will vary according to seniority, level of security, the economic cycle and local bankruptcy laws. Recovery rates are particularly difficult to estimate in advance, having large standard deviation and dependence on the business cycle.

References

Altman, E I, Resti, A, and Sironi, A (2001) Analyzing and explaining default recovery rates. Report submitted to the ISDA, Stern School of Business, New York University, December.

Gupton, G M, Gates, D, and Carty, L V (2001) Bank-loan loss given default, in Enterprise Credit Risk Using Mark-to-Future. Algorithmics Publications, pp. 69-92. Available from www.algorithmics.com.
Renault, O, and Scaillet, O (2004) On the way to recovery: A nonparametric bias-free estimation of recovery rate densities. Journal of Banking and Finance, 28 (to appear).

Van de Castle, K, Keisman, D, and Yang, R (2001) Suddenly structure mattered: insights into recoveries from defaulted debt, in Enterprise Credit Risk Using Mark-to-Future. Algorithmics Publications, pp. 61-68.

III.B.3 Credit Exposure

Philipp Schönbucher (113)

113 D-MATH, ETH Zürich, Rämistrasse 101, 8092 Zürich, Switzerland.

III.B.3.1 Introduction

Chapter III.B.2 established the three components of credit loss upon default of an obligor: exposure amount, loss given default (or, conversely, recovery rates) and probability of default. Successful credit risk management and modelling will require an understanding of all three components. Accordingly, this chapter explores in more detail the exposure amount, which may also be referred to as credit exposure or exposure at default. The exposure at time t is the amount that we would lose if the obligor defaulted at time t with zero recovery.

The exposure is closely related to the recovery rate in the sense that it is usually identified with the legal claim amount or the book value of the asset. But this identification is not quite correct: the correct definition of exposure is the market price or the replacement value of the claim. We should also recognise that we might stand to lose value which is not recognised as a legal claim amount, for example large future coupon payments (see the discussion in Section III.B.2).

In practice many simplifications are used when exposure amounts are estimated. This is partly justified by the fact that any accuracy that is gained by a more accurate measurement of exposure will probably be swamped by the large uncertainty surrounding the recovery rates of the obligors. Exposure and recovery enter the loss at default multiplicatively. Thus, if we decrease the relative error in the exposure measurement by 5%, it will not help much in improving the loss estimation if on the other hand we face a relative error of 20% or more in the recovery rate.

Section III.B.3.2 will distinguish between pre-settlement and settlement risks, as the management techniques vary depending on whether default occurs prior to or at the time of settlement. Section III.B.3.3 explains exposure profiles, that is, how exposures vary over time for the various asset and transaction types. Section III.B.3.4 discusses some techniques for reducing credit exposures (risk mitigation techniques) and Section III.B.3.5 concludes.

Credit exposure may arise from several sources:
- Direct, fixed exposures. These arise from lending to the obligor or from investment in bonds issued by the obligor (which is another kind of lending). This is the most straightforward type of exposure. As they are exposed to (uncertain) moves in interest rates, fixed-coupon loans and bonds could also be considered variable exposures, but they are usually considered fixed exposures.

- Commitments. Committed lines of credit constitute large potential exposure because they will usually be drawn should the obligor get into financial difficulties, even if they are not drawn at the moment. This raises the question how the potential exposure embedded in a line of credit should be measured. The bank may have covenants that allow a termination of the lending facility in the event of an adverse change of the obligor's credit quality, but the borrower usually has an information advantage and can draw at least parts of the line of credit before his financial problems become known to the bank. A common pragmatic solution is to assume that the obligor will have drawn a certain fixed fraction of the committed line of credit if he should default; this fraction is then considered the exposure at default (a sketch of this calculation follows this list).

- Variable exposures. These arise mainly from over-the-counter (OTC) transactions in derivatives. For OTC transactions, current exposure is defined as the current replacement value of the relevant derivative contract (after taking netting into account). Difficulties arise when it comes to future exposures, because this involves the projection of the future value of the derivative conditional on the occurrence of a default, while recognising the effects of any netting agreements which may be in place. The future value of the derivative depends, in turn, on market movements which cannot be accurately predicted in advance.

- Futures contracts are generally ignored for the purposes of credit assessment as their institutional features are designed to effectively eliminate credit exposure. The futures clearing mechanism interposes the clearing house as the ultimate counterparty to all trades. The clearing house in turn manages its credit exposure by trading on a fully collateralised basis through the system of initial and daily margin calls (see Section I.C.6). While it is theoretically possible that a clearing house might default on its obligations as counterparty, it has never actually happened and is generally regarded as a remote possibility.

For pragmatic reasons, most credit portfolio risk management models currently map future exposures at default into loan-equivalent exposures. In this method, the random future exposure of an OTC derivatives transaction is mapped into a non-random exposure (possibly time-dependent), and the derivatives transaction is then treated as a loan with this exposure.
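The pragmatic treatment of commitments described above can be illustrated with a short sketch; the 75% draw-down fraction used here is a purely hypothetical value for the example, not a recommended parameter.

```python
# Loan-equivalent exposure at default for a committed credit line:
# assume a fixed fraction of the undrawn commitment is drawn by default.
def commitment_ead(limit: float, drawn: float, drawdown_fraction: float) -> float:
    undrawn = max(limit - drawn, 0.0)
    return drawn + drawdown_fraction * undrawn

# Hypothetical example: 100m committed line, 20m currently drawn,
# assumed 75% of the remaining line drawn before default.
print(commitment_ead(limit=100.0, drawn=20.0, drawdown_fraction=0.75))  # 80.0
```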
III.B.3.2 Pre-settlement versus Settlement Risk

Many financial transactions involve a final settlement: counterparty A delivers a bond and counterparty B delivers the purchase price for this bond (in a straightforward cash-trading transaction), or one counterparty delivers USD and the other counterparty delivers EUR (in a spot FX transaction). In Section III.B.3.3 on exposure profiles we shall see that it is not unusual in OTC transactions for very large cash flows to change hands at the settlement of the transaction, even if the net value of the transaction is much smaller than these cash flows. Thus it makes sense to distinguish between the risks that arise specifically at the settlement of these transactions, the settlement risk, and the pre-settlement risk of the transaction before its maturity. The techniques used for managing each of these risks vary. The exposure for pre-settlement risk is only the replacement value of the transaction, while the exposure of the settlement risk is roughly equivalent to the gross exposure of the transaction; the difference in exposure size can be very large.

III.B.3.2.1 Pre-settlement Risk

This is the risk that the counterparty to a transaction (e.g. an OTC derivative transaction) defaults at a date before the maturity (settlement) of the transaction. For credit risk assessment, the existence of early termination clauses is crucial in the reduction of the exposure due to pre-settlement risk. Usually, these clauses are incorporated in the master agreement between the counterparties. They involve triggers due to failure to perform on this or a related contract, bankruptcy, or rating downgrade (usually to a class below investment grade). Should one of these events occur, the contract is terminated and settled immediately, with the final payment being the replacement value of the contract (i.e. the value of an otherwise identical contract with a non-defaulted counterparty). If the replacement value of the contract is positive to the defaulted counterparty, it will receive the final payment. Otherwise, the defaulted counterparty will have to make the final payment to the non-defaulted counterparty. In the latter case the defaulted counterparty may default on this final payment, too, which means that the non-defaulted counterparty will have to enter the bankruptcy proceedings with a legal claim amount of the final payment. Thus, the pre-settlement exposure is

E(t) = max{0, Replacement value at time t}.    (III.B.3.1)

The right of early termination is the credit risk equivalent of 'cutting your losses' by closing out a loss-making market exposure.
III.B.3.2.2 Settlement Risk

Many financial transactions between two counterparties involve two simultaneous payments or deliveries. Settlement risk arises only at the final settlement of a transaction, if there are timing differences between the two payments of the transaction. For example, an FX transaction may involve a payment in EUR by bank A, which is made at 9am in London, and a payment in USD by bank B on the same day, which must be made in New York. Because New York is five hours behind London, this payment will be made later than the EUR payment. Should bank B default after receiving bank A's payment, but before making its own payment, bank A will have to try to recover its claim in the bankruptcy court. The effect of settlement risk is that bank A has a very large exposure (the total notional amount of the transaction) but only for a very short period of time (a few hours).

A famous example of settlement risk is the case of the German bank Herstatt, which on 26 June 1974 had taken sizeable foreign currency receipts in Europe but went bankrupt (it was shut down for insufficient capital by the German office for banking supervision) at the end of the German business day, before it settled its USD payments in New York. Similar final payments are made at the maturity of FX swaps (exchange of principal) or at the maturity of forward contracts. This risk can be mitigated by improving the clearing and settlement mechanisms, by netting agreements that minimise cash flows (in particular, the cash settlement of price differences instead of physical delivery), or by the introduction of a central clearing house which takes both sides' payments in escrow. Other risk management tools for both settlement and pre-settlement risks are explained in Section III.B.3.4.

III.B.3.3 Exposure Profiles

III.B.3.3.1 Exposure Profiles of Standard Debt Obligations

Standard debt contracts usually have fairly straightforward exposure profiles. The simplest case is a bullet bond or a bullet loan with a fixed notional amount paid off at maturity of the loan (see Chapter I.B.2). The top panel of Figure III.B.3.1 shows the exposure profile over time of a ten-year, 5% coupon loan, where we have assumed constant interest rates of 5%.

Figure III.B.3.1: Bond exposure profiles. Top panel: exposure profile for a ten-year fixed-coupon bond/loan with 100 notional amount, bullet principal repayment at maturity and an annual, fixed coupon of 5%; interest rates are constant at 5%. Bottom panel: exposure profile for a ten-year amortising loan with 100 notional amount and annual payments of 12.95.

Whenever a payment is received, the exposure drops by the payment amount. Between payment dates the exposure increases smoothly, reflecting the increase in the time-value of the outstanding payments. This causes the characteristic 'sawtooth' pattern in Figure III.B.3.1, and also in all other exposure profiles for assets with intermediate payoffs. For the coupon bond in the top part of Figure III.B.3.1, the largest payment is the final repayment of principal (with the final coupon); thus the exposure profile remains largely constant, with a large drop in the exposure at maturity.

Not all loans have the full principal repayment at maturity of the transaction. Amortising loans, for example, spread the principal repayments over the life of the transaction in such a way that the total payment amount is the same at all payment dates. This yields the downward-sloping exposure profile in the bottom part of Figure III.B.3.1. The exposure profile of amortising loans is slightly concave because, initially, less of the annual payments goes towards principal repayments than at later dates.

A common approximation is to set the exposure constant until maturity, that is, to ignore the sawtooth pattern caused by the coupon payments and to use some average of the exposure level. Essentially this is equivalent to assuming that defaults only occur in the middle between two coupon payment dates.
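The profiles of Figure III.B.3.1 can be reproduced directly from the cash-flow schedules. The following sketch values the remaining cash flows at the assumed flat 5% rate; the annual payment of 12.95 for the amortising loan is simply the level annuity payment for a ten-year, 5% loan of 100.

```python
# Exposure profiles of a ten-year bullet bond and a ten-year amortising loan,
# both with notional 100, valued at a flat 5% interest rate.
r = 0.05

def pv_remaining(cashflows, times, t):
    """Present value at time t of the cash flows still outstanding."""
    return sum(cf / (1 + r) ** (s - t) for cf, s in zip(cashflows, times) if s > t)

times = list(range(1, 11))
bullet = [5.0] * 9 + [105.0]                 # 5% coupon, principal at maturity
payment = 100 * r / (1 - (1 + r) ** -10)     # = 12.95, level annuity payment
amortising = [payment] * 10

# Evaluating just before and just after payment dates shows the sawtooth:
for t in [0.0, 0.999, 1.0, 4.999, 5.0, 9.999]:
    print(f"t={t:6.3f}  bullet={pv_remaining(bullet, times, t):7.2f}  "
          f"amortising={pv_remaining(amortising, times, t):7.2f}")
```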
III.B.3.3.2 Exposure Profiles of Derivatives

OTC derivative contracts such as swaps, forward contracts or FX transactions have several special features that complicate the calculation of the corresponding exposure amounts.

- Derivatives usually involve payments by both contract parties. Here it makes a difference whether the two payments are netted (in which case the exposure is only to the net value of the payments), or whether both payment streams are considered in isolation (in which case we are exposed to the full payment stream of our counterparty).

- An OTC derivative can have a positive or a negative value: at each payment date either we owe the other party money or the other party owes us money. The exposure is the amount that we would lose if the obligor defaulted. In the first case our exposure to a default of the other party is zero; in the latter case it is positive. By definition, exposure cannot become negative, so the exposure is floored at zero and we have to cope with an inherent nonlinearity. Thus, if we assume that the replacement value D(t) of the derivative can be determined at any time t, the starting point for the calculation of derivatives exposures is

  E(t) = max{0, D(t)}.    (III.B.3.2)

- Derivatives often have an initial value of zero (or close to zero), while their future value can vary dramatically with market movements. This means that current exposure is a very bad measure of future exposure. We must consider exposure profiles over time and cannot set exposure to a constant value as we did for fixed-coupon bonds.

- These exposure profiles are not only time-dependent but also stochastic: for a future point in time T > t we cannot predict D(T) with certainty given information at time t, so the exposure is a random variable. Since we usually measure credit risk over a long time horizon (such as one or five years), derivatives exposures may change significantly over the time horizon. The volatility of the underlying asset also enters the calculation, as it defines the range of likely future outcomes for the asset and its derivative.

There is no inherent difficulty in equation (III.B.3.2) as far as current exposure is concerned. The difficulties arise when we consider future exposures. Figure III.B.3.2 illustrates this using the example of an interest-rate swap with a 5% swap rate, from the point of view of the receiver of the floating rate.

Figure III.B.3.2: Interest-rate swap exposure profile. The surface plot shows the exposure (the positive part of the replacement value of the swap) at different points in time and for different possible levels of the interest rates; equal colours of the mesh correspond to equal quantiles of the interest-rate distribution. The figure also shows the density of the interest rates at t = 2 and t = 10: uncertainty increases over time, so the density for t = 10 is wider than the density for t = 2. The bold line represents the 90% quantile exposure: at any time in the future, the exposure will be below this line with 90% probability.

Because the underlying interest rate is random, we can only make statements about the probability distribution of the interest rates at different time horizons, and the exposure is random, too: seen from t = 0 it could take any positive value. The exposure measurement problem is now to find a curve E(t, T) which for all relevant T > t maps the distribution of the spot exposure E(T) to a single number: the forward-looking exposure E(t, T) given information at time t. Viewed as a function of the time horizon T, E(t, T) is called the exposure profile of the transaction. One simplification, for instance, is to assume that the 90% quantile will be the realisation of the exposure, in the hope that this assumption - while clearly wrong - will at least be conservative.

Assigning a fixed number to the (stochastic) exposure at time T is a simplification, similar to assigning a single risk measure such as value at risk (VaR) to the full distribution of a random variable. But this simplification allows us to map the complex derivative contract to a loan-equivalent exposure amount, so we can view the derivative as a loan with an admittedly rather strange amortisation schedule. This clever mapping allows us to integrate derivatives contracts into a risk management system which otherwise could only accept loans.

An obvious candidate for an exposure measure is the expected exposure

E(t, T) = E_t[(D(T))⁺],    (III.B.3.3)

where (x)⁺ = max{x, 0} is the common shorthand notation for the positive part of x and E_t[·] denotes the conditional expectation given information up to time t. Expected exposure has the advantage of being comparatively easy to evaluate: essentially the problem is reduced to the calculation of the value of a European call option on D(T) - see Section I.B.5.1. In many cases this calculation can be done quite easily. Expected exposure has the additional advantage of being coherent in the sense of Artzner et al. (1999). That is, expected exposure is:

- non-negative: any derivative with a non-negative replacement value will generate a non-negative exposure number;
- homogeneous: a positive scaling of the derivative position will result in the same positive scaling of the exposure measure;
- subadditive: if you add two (or more) derivatives, the joint expected exposure is less than the sum of the individual exposures.
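When no closed form is available, the expected exposure of equation (III.B.3.3) can be estimated by simulation. The sketch below does this for the ten-year receive-floating swap used in this section; it values the swap with the annuity approximation described in the numerical example further below, simulates the flat rate as a driftless lognormal variable (the martingale correction in the exponent is an assumption of this sketch), and uses the 5% rate volatility stated in that example.

```python
# Monte Carlo estimate of the expected exposure profile E(0,T) = E[(D(T))+]
# for a ten-year receive-floating swap (fixed rate 5%, notional 100).
import numpy as np

rng = np.random.default_rng(42)
K, notional, sigma, n_paths = 0.05, 100.0, 0.05, 100_000

def annuity(r, years_remaining):
    """Value of an annuity paying 1 per year over the remaining life, flat rate r."""
    return (1 - (1 + r) ** -years_remaining) / r

for T in range(1, 10):
    # lognormal rate at horizon T with zero drift (mean-preserving correction)
    r_T = K * np.exp(sigma * np.sqrt(T) * rng.standard_normal(n_paths)
                     - 0.5 * sigma**2 * T)
    D_T = notional * annuity(r_T, 10 - T) * (r_T - K)  # value to floating receiver
    ee = np.maximum(D_T, 0.0).mean()                   # expected exposure
    print(f"T={T}: expected exposure = {ee:.2f}")
```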
Despite these nice properties, it is frequently felt that the expected exposure is not conservative enough, because in many cases the actual exposure at default will be larger than the expected exposure. An alternative is to use a 'quantile-based' exposure measure. These measures are defined very much like VaR levels: the p-quantile exposure at time T is the level q*_p below which the exposure will remain with probability p, so that

P_t(D(T) ≤ q*_p) = p   and   E(t, T) = q*_p.    (III.B.3.4)

In many cases, the price of the derivative is a monotonic function of the underlying asset and the distribution of the underlying asset is assumed known (e.g. it is lognormal). In this case the p-quantile exposure of the derivative can be calculated by determining the corresponding quantile in the 'bad' direction of the underlying asset, and calculating the value of the derivative at this level of the underlying asset. This is usually simpler than calculating the expected exposure at the same level.

These calculations have much in common with the calculation of risk measures in market risk management: expected exposure is roughly equivalent to expected shortfall, and quantile-based exposure measurement is very similar to VaR-based measures of market risk (see Chapter III.A.2). There are some practical differences, in that for credit exposure measurement these numbers must be calculated at several time horizons, and for longer time horizons than the ones commonly used in market risk.

Figure III.B.3.3 shows the exposure profile of an interest-rate swap contract (see Chapter I.B.4). The typical feature of most swap contracts is that they are initially entered at zero exposure: both sides of the swap have the same value. The potential exposure increases as we look further forward in time. This so-called 'diffusion effect' is caused by the randomness of the underlying interest rate: the range of possible outcomes increases with time, yielding higher exposure values. But as time proceeds, the number and total notional amount of the remaining payments decrease, so that eventually the future exposure has to decrease back to zero. From a statistical point of view we can say that the worst time for the counterparty to default is around seven years into this ten-year swap. The potential for large credit losses at this point exceeds the potential for large credit losses at any earlier time because the range of likely market movements is greater: a large and favourable market movement could mean the loss of a very valuable asset should the counterparty default at this time. Beyond the seven-year point, the amortisation effect dominates over the diffusion effect, so potential losses are reduced.
Figure III.B.3.3: 95% exposure profile of an interest-rate swap (hump-shaped profile over the ten-year life, peaking around year 7)

Figure III.B.3.4: 95% exposure profile of an FX swap (monotonically increasing profile over the ten-year life)

Figure III.B.3.4 shows the typical exposure profile of an FX swap. An FX swap differs from an interest-rate swap in that there are no interim cash flows and therefore no amortisation of risk. Note also that an FX swap, unlike an interest-rate swap, has a final exchange of principal; consequently the settlement risk at maturity is significantly greater. The exposure profile reflects only the diffusion effect: the greater the passage of time, the greater the potential for a large favourable exchange rate movement. If the counterparty defaults when the swap is in profit, then a significant asset may be lost. The exposure profile tells us that statistically, the worst possible time for default is towards maturity, when the potential mark-to-market value is greatest.

In Table III.B.3.1 we present a numerical example to illustrate the calculation of the exposure levels of a fixed-for-floating interest-rate swap from the point of view of the floating-rate receiver, with a fixed swap rate of 5% and a notional of 100. We assume that the interest rates follow a lognormal random walk with drift zero and volatility 5%, and that the term structure of interest rates remains flat at all times. The value of the swap is calculated by first calculating the value of an annuity that pays a fixed payment of $1 for the remaining life of the swap, and then multiplying it by the difference between the current interest rate and 5%. (This calculation yields the value of the net payments if we were to enter an offsetting swap.)

Table III.B.3.1(a) shows the calculation of current exposure for one simulated path of the interest rates (shown in the column 'Floating'). In years 1 and 2 interest rates rose above the fixed rate of 5%, so that the swap has a positive value to us; as the value of the swap is positive to us, this is also the exposure in those years. The value of the swap turns negative from year 3 onwards; hence the current exposure is zero in those years.

Unfortunately, the calculation of current exposure requires knowledge of the future development of the interest rates, so it cannot be done at time t = 0 to generate an exposure profile. Thus, in order to calculate a 95% quantile exposure, we need to calculate the upper 95% quantiles of the interest rates; that is, for each year we calculate those levels of interest rates which are not exceeded with 95% probability. These levels we can calculate already at time t = 0; they are shown in the second column of Table III.B.3.1(b). The rest of the exposure calculation now proceeds as for the calculation of current exposure: we calculate the values of the annuities for these interest rates, and then the value of the swap. For the floating-rate receiver, the exposure is worst (highest) whenever interest rates are high. Again, we see the characteristic hump-shaped exposure profile similar to the one in Figure III.B.3.3.
Table III.B.3.1: Exposure calculations for an interest-rate swap

(a) Current exposure (simulated interest-rate path): for each year t = 0, 1, ..., 10 the table reports the simulated floating rate, the fixed rate (5% throughout), the value of the annuity over the remaining life, the value of the swap, and the current exposure max{0, value of swap}.

(b) 95% quantile exposure profile: for each year t = 0, 1, ..., 10 the table reports the upper 95% quantile of the floating rate, the fixed rate (5%), the value of the annuity at the quantile rate, and the exposure (the value of the swap at the 95% quantile rate).

Note that for the calculation of the total exposure profile with respect to a counterparty we have to consider a whole portfolio of derivatives transactions, where netting and other mitigation agreements may be in place. This usually means that we will no longer be able to calculate expected exposures with closed-form solutions for European options on the underlying transaction, nor will we be able to identify the 'critical values' for quantile-based exposures, as these values will now form a surface in a multidimensional risk space. Thus, the calculation of exposure profiles is reduced to Monte Carlo simulations or other numerical methods (see Chapter II.G).
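The construction of panel (b) can be condensed into a few lines: compute the upper 95% quantile of the lognormally distributed rate at each horizon and revalue the swap there. This is a sketch under the stated assumptions (flat curve, 5% rate volatility, annuity valuation); 1.645 is the 95% quantile of the standard normal distribution.

```python
# 95% quantile exposure profile for the ten-year receive-floating swap
# (fixed rate 5%, notional 100), following the method of Table III.B.3.1(b).
from math import exp, sqrt

K, notional, sigma, z95 = 0.05, 100.0, 0.05, 1.645

def annuity(r, years_remaining):
    return (1 - (1 + r) ** -years_remaining) / r if years_remaining > 0 else 0.0

for t in range(11):
    q = K * exp(z95 * sigma * sqrt(t))               # upper 95% rate quantile
    value = notional * annuity(q, 10 - t) * (q - K)  # swap value at that rate
    exposure = max(value, 0.0)                       # equation (III.B.3.2)
    print(f"t={t:2d}  rate quantile={q:8.4%}  exposure={exposure:6.2f}")
```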
III.B.3.4 Mitigation of Exposures

Whenever possible, both counterparties to an OTC derivatives transaction should engage in exposure-minimising agreements. This is a win-win situation for both sides because, with well-designed agreements, counterparty exposure (and thus credit risk) can be significantly reduced without needing economic capital and without large additional costs to the counterparties. In this section we consider the problem of aggregating all exposures to a particular counterparty.

III.B.3.4.1 Netting Agreements

To illustrate the magnitude of the benefits of netting agreements, one need only look at the latest OTC derivatives market statistics released by the BIS, shown in Table III.B.3.2. The presence of netting agreements reduces the gross credit exposure of all outstanding OTC derivatives contracts to 28% of the total market value of these contracts.

Table III.B.3.2: The effect of netting on global OTC derivatives exposure (USD bn)

Total outstanding notional amounts             197,177
Total market value of outstanding contracts      6,987
Gross credit exposure after netting              1,986

Source: BIS market statistics, end of December 2003.

A necessary requirement before netting can be applied is the existence of a legally watertight netting agreement, which is usually embedded in a master agreement between the two counterparties. (114) The International Swaps and Derivatives Association (ISDA) was and is engaged in proposing and promoting legislation to enable netting agreements, and such legislation has been adopted in most OECD countries. More legal background information can be found on the ISDA's web site (www.isda.org); see also Werlen and Flanagan (2002).

114 A master agreement is a contract that sets the framework under which the counterparties can undertake derivatives transactions. Each individual transaction is economically independent, but it is governed by the same master agreement. Legally, all transactions are part of this one master contract. Thus, the master agreement can specify rules that apply across several transactions.

The form of netting advocated by ISDA is called bilateral close-out netting: once a credit event occurs on one of the counterparties, all transactions under the netting agreement are closed out. Closing out means that the current market value (replacement value) of all transactions is determined; then these values are netted, and the net amount becomes immediately due. This risk-reduction effect can be even stronger when the two counterparties are involved in a large number of transactions - for example, if they are both market makers in certain OTC markets, or for a very active trader/hedge fund and his prime broker. If the transactions are 'hedged' transactions (i.e. delta-hedged derivatives positions) then netting can eliminate almost all exposure. From a risk manager's point of view, the existence of a netting agreement with a certain counterparty allows him to aggregate all exposures with that counterparty and to consider only the net exposure that arises from these transactions: he is able to calculate an exposure profile with respect to the counterparty, and not just with respect to an individual transaction.
The reason for the efficacy of netting is that it prevents cherry-picking by the administrator of the defaulted counterparty. In many legislations, the administrator (receiver/bankruptcy court) can decide only to enforce contracts which are beneficial to the defaulted entity, while sending all other contracts and obligations to the bankruptcy courts. The introduction of a master agreement ensures that all swap transactions are viewed as one contract, which can only be accepted or rejected as a whole.

Suppose counterparty A has entered swaps with counterparty B with current market values (to A) of +34, -97, +45 and +5. Then the net value of these four transactions is -13. Thanks to the existence of the netting agreement, A does not have any immediate credit exposure to B. If counterparty B defaulted, then A would pay the net value 13 to B and would not have any further obligations. If A defaulted, B would have a claim of 13 against A with which it would go to the bankruptcy court; at an assumed recovery rate of 40%, B would suffer a loss of 7.8.

If, on the other hand, there is no bilateral close-out netting agreement, the situation is much worse. If B defaulted, A would have to perform on the -97 swap but would have to try and recover some of the total value of 84 of the other three contracts; at 40% recovery, the credit losses to A would be 50.4. So while A had no net exposure with netting, without netting A does have significant losses. If A defaulted, B's loss would be significantly larger, too: B would have claims of 97 against A which B would have to try and recover in bankruptcy court, while B would still have to perform on the other three contracts with a total market value of 84. At 40% recovery B would lose 58.2.
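The arithmetic of this example can be verified mechanically; the sketch below reproduces the loss figures above for both default directions, with and without close-out netting, at the assumed 40% recovery rate.

```python
# Loss to the surviving party with and without bilateral close-out netting.
# Contract values are expressed from A's point of view; recovery rate 40%.
values = [34.0, -97.0, 45.0, 5.0]
recovery = 0.40

def loss_to_survivor(values_to_survivor, netting):
    if netting:
        claim = max(sum(values_to_survivor), 0.0)   # one single net claim
    else:
        # cherry-picking: the survivor performs on its obligations in full,
        # but recovers only a fraction of each claim against the defaulter
        claim = sum(v for v in values_to_survivor if v > 0)
    return (1 - recovery) * claim

to_A, to_B = values, [-v for v in values]
print(f"B defaults: A loses {loss_to_survivor(to_A, True):.1f} with netting, "
      f"{loss_to_survivor(to_A, False):.1f} without")   # 0.0 vs 50.4
print(f"A defaults: B loses {loss_to_survivor(to_B, True):.1f} with netting, "
      f"{loss_to_survivor(to_B, False):.1f} without")   # 7.8 vs 58.2
```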
III.B.3.4.2 Collateral

Apart from netting agreements, collateral is another popular instrument to mitigate counterparty risk, in particular if one counterparty (e.g. a hedge fund) is significantly more likely to default than the other (e.g. an investment bank). The International Swaps and Derivatives Association carries out regular surveys on the use of collateral, and estimates that at the beginning of 2004 around USD 1017bn of collateral were used in OTC derivatives transactions. Comparing this to the USD 1986bn of credit exposure after netting found by the BIS, we see that about half of the non-netted credit exposure is managed by collateral.

Another, less obvious form of collateralisation arises when derivatives are embedded into bonds or notes, as for example in credit-linked notes or equity-linked notes. By buying the note, the investor implicitly posts full collateral for all possible exposures that may arise from the derivative which is embedded in the note. This explains why this is a very popular structure for derivatives transactions with retail customers.

Let us assume that hedge fund A and investment bank B want to enter an OTC derivatives transaction such as an interest-rate swap. To mitigate A's credit risk, the parties enter a collateral agreement which specifies which assets may be delivered by A as collateral, and under which conditions B may ask for collateral, including the amount of the collateral. As in netting agreements, collateral requires a corresponding legal document (again the ISDA publishes template contracts on this topic), but here the legal difficulties are less severe as collateral is a very old means of mitigating credit risk.

Over the life of the swap, A will have to deliver the required amount of collateral assets to B. The assets are legally still the property of A, but they are under administration by B. If at the end of the transaction A has not defaulted, all unused collateral is returned to A. If there is excess collateral, A may remove the collateral from the collateral account. If the collateral becomes insufficient (e.g. because of market movements or because of a deterioration in A's credit quality), B issues a margin call asking A to post additional collateral. If an early termination event occurs because of a default of A, then B is allowed to sell the full collateral in the market and use the proceeds to cover the replacement value of the contract. If there are any remaining proceeds from the sale of the collateral, these are returned to A; if the collateral was not sufficient to cover the replacement value of the derivative, the remaining value will have to be recovered in the usual way. If, on the other hand, counterparty A misses a payment of the swap at some point, then B is entitled to sell some of the collateral assets to make the payment to himself.

Typical collateral assets are cash (used in about 70% of cases) and government securities (about 15% of cases), but other assets are possible (e.g. equities). As the collateral may decrease in market value just when the counterparty defaults, the credit risk of the counterparty is enhanced by the credit risk of the collateral; the lender only suffers a loss if both counterparty and collateral default. Generally, non-cash collateral is not counted at its full face value but at less: it is reduced by a 'haircut' factor depending on the volatility of the collateral and its correlation with the underlying exposure.

Collateral management is the non-trivial task of keeping track of both the collateral a business has to post and the collateral it should receive. The institution must ensure that it can always provide sufficient collateral to support its transactions, even if adverse market movements or a credit downgrade suddenly increase the collateral requirements. Thus, a collateral manager also has to monitor the credit quality of his own institution very closely. There are many famous examples of the failure of collateral management policy, including the downfall of the Long-Term Capital Management hedge fund.

For exposure measurement, the value of the collateral must be deducted from the replacement value at risk at every time horizon. If the value of the collateral itself is random, this introduces an additional level of complexity, but for the most common case of cash collateral we can simply subtract the collateral amount from the exposure value. Like netting, the idea is to reduce credit risk by reducing exposure.
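For cash and haircut-adjusted non-cash collateral this reduction amounts to a simple subtraction, as the sketch below shows; the 20% haircut is a hypothetical value chosen purely for illustration.

```python
# Exposure after collateral, with a haircut applied to non-cash collateral.
def collateralised_exposure(exposure: float, collateral_value: float,
                            haircut: float = 0.0) -> float:
    effective_collateral = (1.0 - haircut) * collateral_value
    return max(exposure - effective_collateral, 0.0)

print(collateralised_exposure(100.0, 60.0))                 # cash: 40.0
print(collateralised_exposure(100.0, 60.0, haircut=0.20))   # 20% haircut: 52.0
```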
III.B.3.4.3 Other Counterparty Risk Mitigation Instruments

Limits. Exposure limits are counterparty risk management devices that are used to avoid undue counterparty risk concentrations with respect to any particular counterparty. They are also used to avoid exposure to potential counterparties that are deemed to be insufficiently creditworthy; in such cases a limit is not granted, preventing any trades or lending activity which might result in credit exposure. Strictly speaking, counterparty exposure limits are not a mitigation instrument; rather, they prevent undue exposures from arising in the first place. Separate limits should apply for settlement and pre-settlement risk. In most financial institutions the credit committee is responsible for determining whether, and to what extent, the institution is willing to expose itself to performance risk for any particular counterparty/borrower. The existence/size of a limit will either be determined on the basis of the institution's own analysis, or can effectively be outsourced by relying on ratings from rating agencies. Non-financial corporations also establish counterparty limits, but are more likely to rely on rating agencies for credit assessments.

Establishing third-party guarantees. A traditional and popular way to mitigate credit risk is to establish a guarantee for the exposure from a third party. This is very similar to collateralisation, but its effect is to replace the default probability (as opposed to the exposure) of the counterparty with the combined default probability of the counterparty and the guarantor.

Termination rights/credit puts. One or both counterparties may reserve the right to terminate the transaction should the credit risk of the other party worsen significantly. This allows the recovery of the exposure before the actual default event, at a higher (often full) recovery value.

Credit derivatives. Counterparty exposures can also be managed with credit derivatives (see Chapter I.B.6). Although it is technically possible to specify a credit derivative which pays off exactly the exposure at default of a specific counterparty, this is usually not done in practice for two reasons: first, this would entail a disclosure of the transactions that have been made with the counterparty; and second, because of the unusual nature of the payoff, the credit protection would be rather expensive. Nevertheless, if one is willing to accept some residual exposure to the counterparty, a significant part of the counterparty exposure can be laid off using single-name credit default swap contracts.

All counterparty risk mitigation instruments necessarily introduce legal risk and documentation risk: collateral calls may not be made or fulfilled in time, and netting agreements and guarantees may be legally challenged. These risks are hard to quantify and must be carefully monitored. Furthermore, these risk mitigation instruments also have a cost in terms of administration costs, which must be justified by the reduction in risk. More details on credit risk management instruments can be found in Chapter III.B.6.

References

Artzner, P, Delbaen, F, Eber, J-M, and Heath, D (1999) 'Coherent measures of risk'. Mathematical Finance, 9(3), pp. 203-228.

Werlen, T J, and Flanagan, S M (2002) 'The 2002 Model Netting Act: A solution for insolvency uncertainty'. Butterworths Journal of International Banking and Finance Law, April, pp. 154-164.
III.B.4 Default and Credit Migration

Philipp J. Schönbucher (115)

115 D-Math, ETH Zurich, Switzerland.

Having covered recovery rates and exposures in the previous chapters, this chapter discusses the modelling and measurement of the third determinant of default loss: the probabilities of default of individual companies (and, by extension, sovereign nations). After setting up a framework to describe and analyse default probabilities, we will consider three different credit risk assessment methods: agency ratings, internal ratings and market-implied default probabilities. Finally, we compare the three approaches and discuss their differences.

III.B.4.1 Default Probabilities and Term Structures of Default Rates

In this section we introduce the terminology for describing default probabilities. The starting point of any representation of default probabilities is the one-period default probability depicted in Figure III.B.4.1.

Figure III.B.4.1: A one-step default tree

Starting from node S0 at time T0, there are two possible outcomes at time T1: default (node D1), which is reached with the default probability p1, and survival (node S1), which is reached with the survival probability 1 - p1.

Apart from the direct specification of the default probability p1 (or the survival probability 1 - p1), we could also specify the odds of default. The 'odds' of an event are defined as the ratio of the probability of the event (i.e. the default probability) to the probability that the event does not occur (i.e. the survival probability):

H1 = p1 / (1 - p1).    (III.B.4.1)

Hence, given the odds of an event, one can recover the probability of the event as

p1 = H1 / (1 + H1).

This definition differs only slightly from the 'odds' that are routinely quoted by bookmakers on sports (and other) events, in that bookmakers usually quote 1/H1 and not H1. We can interpret H1 as the odds of a fair bet on the event: if you bet $1 on the event that the obligor defaults, then

- you lose your $1 if the obligor survives, or
- you get 1/H1 if the obligor does indeed default.

The expected payoff of this bet is -(1 - p1) + p1/H1 = 0, so the bet is fair. For example, if the default probability is p1 = 2%, then the 'odds' of default are H1 = 2.0408%. For small values (such as for short-horizon default probabilities), odds and probabilities are almost equal, with odds being slightly larger. From a modelling point of view the advantage of using odds is mostly of a technical nature: probabilities are restricted to lie between 0 and 1, while odds can take any value between 0 and infinity.

A complication in the representation of default probabilities arises when several points in time are considered. Figure III.B.4.2 shows a simple representation of default and survival over time in a model with three periods ([T0, T1], [T1, T2] and [T2, T3]).

Figure III.B.4.2: A schematic representation of default and survival over time

Over each period [Ti-1, Ti] the obligor can default (with probability pi) or survive (with probability 1 - pi). By considering the individual periods in isolation, we can define 'local' default probabilities (and local odds of default) as before: for example, we can analyse the situation in period [T1, T2] with default probability p2 and odds of default H2 = p2/(1 - p2). But these quantities are now conditional default probabilities: they are only valid if we have reached the node S1, that is, they are conditional upon survival until T1. If a default has already occurred in the first period, then p2 and H2 have no meaning. These conditional default probabilities are also often called marginal default probabilities.
If we now want to know the cumulative default probability until T2, that is, the probability of default at any point in time over [T0, T2], then we must take several possible paths across the tree into account. There are two scenarios which can lead to a default during the period [T0, T2]:

- the obligor survives the first period (S0 to S1) and defaults in the second period (S1 to D2);
- the obligor defaults in the first period (S0 to D1).

The first scenario has probability (1 - p1)p2 and the second scenario has probability p1, so that the cumulative default probability over [T0, T2] equals

p1 + (1 - p1)p2 = p1 + p2 - p1p2.

If we project further into the future, it quickly becomes more convenient to consider cumulative survival probabilities, because for survival we only have to consider one path across the tree:

P[Survival over [T0, T3]] = (1 - p1)(1 - p2)(1 - p3) = P(T0, T3).

The cumulative default probability over [T0, T3] equals one minus the cumulative survival probability over [T0, T3]. The general formula for the probability of going from survival in Ti to survival in Tj with i < j, that is, the probability of Si to Sj, is given by the product of the marginal survival probabilities over the individual periods which are spanned by the interval:

P(Ti, Tj) = (1 - p_{i+1})(1 - p_{i+2}) ··· (1 - p_j).    (III.B.4.2)

Thus, we can specify default probabilities (or odds of default) for future time periods; that is, we can specify a term structure of default probabilities. In many cases it may be desirable to increase the resolution of the term structure by inserting additional points in time, for example going from yearly to quarterly, monthly or even daily intervals, as this will allow us to place the nodes of our tree exactly on the payment dates of the bonds or loans of the obligor under consideration.
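A short sketch of equation (III.B.4.2): cumulative survival is the running product of the marginal survival probabilities, and cumulative default is its complement. The marginal probabilities used below are arbitrary illustrative values.

```python
# Cumulative survival and default probabilities from marginal (conditional)
# default probabilities, as in equation (III.B.4.2).
marginal_pd = [0.02, 0.03, 0.04]   # p1, p2, p3 (illustrative values)

survival = 1.0
for i, p in enumerate(marginal_pd, start=1):
    survival *= (1.0 - p)
    print(f"P(T0,T{i}) = {survival:.6f}   cumulative PD = {1 - survival:.6f}")

# Two-path check over [T0, T2]: default in period 1, or survive then default.
p1, p2 = marginal_pd[0], marginal_pd[1]
assert abs((p1 + (1 - p1) * p2) - (1 - (1 - p1) * (1 - p2))) < 1e-12
```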
Unfortunately, when refining the time grid we cannot leave the local default probabilities unchanged: to avoid strange limiting behaviour as the time period decreases, we must reduce the term default probabilities in a coordinated way, and the constraint becomes stronger as we take smaller steps. For example, suppose we have an obligor with a 10% default probability over one year. When calculating the daily default probability that is equivalent to this 10% one-year default probability, even a small overestimation of the daily probability can lead to a one-year default probability that is much too large.

As the length of the time periods gets smaller, local default probabilities and odds of default should also decrease. A common assumption that always achieves a stable balance when time-steps are reduced is that the odds of default over a small time interval [Ti, Ti+1] are approximately proportional to the length of the time interval. Let us denote the length of the time interval by Δ = Ti+1 - Ti. Then the hypothesis may be written

Hi ≈ λ(Ti) Δ,    (III.B.4.3)

where the proportionality factor is denoted by λ(Ti). If we then let the interval length go to zero (i.e. take the limit as Δ → 0 while leaving Ti unchanged), λ(Ti) becomes the default probability per unit of time: it is the 'instantaneous' probability of default in a continuous-time setting. This quantity is known as the default rate, the default intensity or the default hazard rate at time Ti.

Suppose that, somehow, we are able to specify a default hazard rate function λ(t) for all t ≥ 0. Armed with this function we can calculate survival and default probabilities from 0 until a given time horizon T. It can be shown that eventually the survival probability will be

P(0, T) = exp( -∫₀ᵀ λ(t) dt ),    (III.B.4.4)

and the probability of a default between time 0 and time T will thus be 1 - P(0, T).

An important special case arises when the default hazard rate is constant; let us write the hazard rate as λ0. Then by equation (III.B.4.4) the survival probabilities are just

P(T) = exp(-λ0 T),   so that   λ0 = -ln(P(1)).    (III.B.4.5)

For example, suppose we choose λ0 = 10.53%. Then the one-year survival probability is P(1) = exp{-λ0} ≈ 90%, so the one-year default probability is 10%. Equation (III.B.4.5) gives the six-month default probability as 5.13%; over one month the default probability will be 0.874%, and over one day it will be 0.029%. To be consistent when moving to monthly time periods, we should therefore assume that the monthly default probability is 0.874%. This gives a simple and effective way to interpolate default probabilities between different dates. The assumption of a constant default hazard rate is also at the foundations of the CreditRisk+ model (see Chapter III.B.5).

Conversely, suppose we are given a term structure of survival probabilities, that is, a set of probabilities P(0, T) for different time horizons T. We may, for instance, try to estimate a term structure of default probabilities using one of the methods described later in the chapter. Then we could compute the corresponding default intensity function by differentiation:

λ(T) = -∂/∂T ln P(0, T),    (III.B.4.6)

evaluated at the gridpoint which corresponds to time T. Whatever the shape of this term structure - and market-implied term structures can have quite strange shapes - it is usually possible to find a default hazard rate function that reproduces it. (116) There is thus a correspondence between the term structure of survival probabilities (or default probabilities) and the default hazard rate function.

116 The only exceptions are term structures of survival probabilities which have jumps or which drop to zero.
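The constant-hazard-rate figures quoted above can be reproduced in a few lines; the sketch below calibrates λ0 to a 90% one-year survival probability and evaluates default probabilities over shorter horizons via equation (III.B.4.5).

```python
# Constant default hazard rate: calibrate to a one-year survival probability
# and compute default probabilities over arbitrary horizons.
from math import exp, log

lam0 = -log(0.90)            # = 10.536%, from P(1) = exp(-lam0) = 90%
print(f"hazard rate lambda0 = {lam0:.4%}")

for label, T in [("six months", 0.5), ("one month", 1/12), ("one day", 1/365)]:
    pd = 1.0 - exp(-lam0 * T)
    print(f"default probability over {label}: {pd:.3%}")
# prints approximately 5.13%, 0.874% and 0.029%, matching the text
```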
III.B.4.2 Credit Ratings

The goal of any credit rating system is the accurate assessment of the credit risk of the obligor. The aim of a credit rating procedure is the accurate classification of obligors according to their credit quality, usually by specifying an estimate of the obligor's default probability or by giving them a 'letter rating'. Credit ratings are one of the most important tools to assess the likelihood that an obligor defaults, and their use is actively encouraged by the new capital adequacy rules for credit risk proposed by the Basel Committee on Banking Supervision.

Large public rating agencies such as Standard and Poor's, Moody's and Fitch classify obligors into a number of rating classes. For example, Standard and Poor's use the classes AAA, AA, A, BBB, BB, B, CCC, D, with the interpretation that 'lower' classes (such as CCC) carry a higher risk of default over a given time horizon than higher classes (AAA or AA). (117)

117 Moody's bought KMV and formed Moody's KMV in 2003. Today, Moody's KMV provides both classical Moody's 'letter' ratings and the 'expected default frequency' (EDF) ratings of KMV. As these two rating systems are fundamentally different, we will refer to Moody's ratings when the letter ratings of Moody's KMV are meant, and we will refer to KMV ratings for the EDF equity-based default risk measures calculated according to the KMV methodology.

A classification into a finite set of classes is not necessarily the only output of a credit rating system. Many other systems directly produce estimates of the default probability of an obligor, which can vary on a continuous scale, essentially from 0% to 100%. Examples are KMV's 'expected default frequencies' (see Chapter III.B.5) or Kamakura's default probabilities. Such systems are typically based upon a quantitative and statistical model of the default risk of the obligor; most proprietary 'internal' ratings models also fall into this class.

III.B.4.2.1 Measuring Rating Accuracy

There are many different approaches to the problem of determining a credit rating, and it is important to have a methodology to judge the accuracy of the output of the rating procedure. When classifying rating systems one might be tempted to say that the best rating system is the one that produces the 'true' default probability. Unfortunately, the concept of the 'true' default probability is very elusive: it depends on the information that is available, and it has no direct relation to observable quantities. There are only two trivial values for default probabilities: 0% and 100%.

Imagine we had access to an ideal rating system; let us call it 'Crystal Ball'. Crystal Ball only needs two classes to accurately classify all obligors: D and S. An obligor is classified as D if it will default, and as S if it will survive, and our Crystal Ball can do this without error. Essentially, these are the only 'true' default probabilities: 100% (for the D class) and 0% (for the S class). Having Crystal Ball, we no longer need default probabilities. In reality we do not have a Crystal Ball, so we specify default probabilities and/or intermediate rating classes to measure the degree of our confidence about the fact that the obligor truly belongs to class S or class D.

Because the 'true' default probability is unobservable, distinguishing a good rating system from a bad one can be difficult: a credit rating can be correct but still unlucky (it correctly classifies obligors, but some 'good ones' default and the 'bad ones' do not), and conversely it can be incorrect but lucky.
For example, the marked point of the blue CAP in Figure III.B.4.3 is found as follows. The basic data set contained 100 obligors, of which 10 defaulted during the next year. We now use our credit model to classify these 100 obligors based upon the data available at the beginning of the year, and rank the obligors in order of decreasing credit risk (according to the model's prediction). With such an approach, we also keep track of the actual default/survival behaviour of the obligors. The marked value in the graph is at an x-value of 19, which means that we must consider the 19 'worst' obligors according to the model. Out of these 19 obligors, the 5th, 6th, 7th, 13th and 16th actually did default, so that we have captured 50% (5 out of 10) of the actual defaulters in these 19 worst obligors. Thus, the y-value at x = 19 is y = 50%. The same calculation is done for every x between 0 and 100. Clearly, any CAP must start at 0% (an empty set of zero obligors cannot contain any defaulters) and it must end at 100% (if you have all obligors, you also have all defaulters).

The CAP of the Crystal Ball model is the yellow line, and it shows the best possible default prediction accuracy. The ten worst obligors of the Crystal Ball model are also the ten obligors that default in reality. Thus, the accuracy profile shows a very steep increase, reaching 100% explained (correctly classified) defaulters already at the 10th ranked obligor. Beyond that, the CAP for the Crystal Ball model remains flat because there is nothing more to explain.

The other extreme is a 'random' model in which the obligors are ranked completely at random, without any reference to their actual credit quality. Here, the proportion of actual defaulters captured should be proportional to the proportion of the number of obligors selected, because any randomly chosen subset of x obligors will on average contain 10x/100 defaulters. Thus, we can expect to find 10% of the actual defaulters in the 10% (randomly chosen) 'worst' obligors, 50% of the actual defaulters in the 50% 'worst' obligors, and so on. This purely random credit risk ranking produces the diagonal line shown in pink in Figure III.B.4.3. Next to that, the dark blue line in Figure III.B.4.3 shows the cumulative accuracy profile of an average default prediction model: it concentrates the defaulters at the high risk scores at the beginning of the ranking, so that the slope of its CAP is steep initially.

A CAP gives a very good visual impression of the predictive performance of a credit risk model. If we compare two different rating models and the CAP of one model is consistently above the CAP of the other, then this is a sign of superior classification ability for the first model and it will be considered the better model. A good credit risk model will exhibit a CAP profile that is as close to the yellow line of the perfect model, and as far away from the pink line of the purely random model, as possible: the closer the CAP is to the yellow line, the better the model.

Nevertheless, it is often useful to reduce the CAP to a single number, the accuracy ratio (AR). The AR is the ratio of the area between the CAP of the credit risk model under consideration and the random model's CAP (the diagonal), to the area between the perfect prediction model's CAP (Crystal Ball) and the diagonal. (Here the x-axis is now also a percentage scale.) The area between the ideal model and the random model is easily found to be (1 − D)/2, where D is the fraction of defaulted obligors. Thus the accuracy ratio of any given model with CAP function CAP(x) is:

    AR = ( ∫₀¹ CAP(x) dx − ½ ) / ( ½ (1 − D) ).                       (III.B.4.7)
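As an illustration, the following Python sketch (our own toy example; the data and names are invented, not taken from the Handbook) builds a CAP from model scores and computes the accuracy ratio of equation (III.B.4.7) by numerical integration:

```python
import numpy as np

def cap_curve(scores, defaulted):
    """CAP: fraction of all defaulters captured among the x riskiest obligors."""
    order = np.argsort(-scores)               # riskiest obligors first
    hits = defaulted[order].cumsum()          # defaulters captured so far
    return np.concatenate(([0.0], hits / defaulted.sum()))

def accuracy_ratio(cap, n_obligors, n_defaults):
    """AR = (area under CAP - 1/2) / ((1 - D)/2), with x rescaled to [0, 1]."""
    x = np.linspace(0.0, 1.0, n_obligors + 1)
    area = np.trapz(cap, x)
    d = n_defaults / n_obligors
    return (area - 0.5) / (0.5 * (1.0 - d))

# Toy portfolio: 100 obligors, the first 10 default; noisy risk scores.
rng = np.random.default_rng(1)
defaulted = np.zeros(100)
defaulted[:10] = 1
scores = defaulted + rng.normal(0.0, 0.8, 100)

cap = cap_curve(scores, defaulted)
print(f"captured in 19 worst: {cap[19]:.0%}, AR = {accuracy_ratio(cap, 100, 10):.2f}")
```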
An accuracy ratio close to 1 means that the model is almost as good as the 'ideal' model, while an accuracy ratio of 0 means that it is as bad as a purely random classification. A negative accuracy ratio is possible: it means that the model does an even worse job than a purely random credit ranking, and that the model's ranking should be inverted.

The CAP and the AR only measure the model's ability to rank obligors; they say nothing about the model's ability to give correct values for the probability of default. As an extreme example, consider a model that assigns a default probability of 50.1% to the first ten obligors (the ones who do default in the end) and a default probability of 49.9% to the other 90 obligors. This model correctly ranks the obligors, thus it has a perfect CAP. Yet the assignment of almost 50% default probability to all obligors is strongly invalidated by the actual experience of only 10 defaults: 10 or fewer defaults out of 100 obligors would have a probability of essentially zero (about 10⁻¹⁷) if the individual default probabilities were indeed at 50%.¹¹⁸ Despite this extreme example, in practical applications we may expect a model that does a good job in ranking the obligors also to provide good estimates for the default probabilities. We simply have to check that aspect of the model separately.

118 This is the value if defaults are independent. But even with reasonable default correlation, we will be able to strongly reject the hypothesis that the default probabilities equal 50%.
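The 'essentially zero' probability quoted above is easy to verify numerically. The following check is our own addition (scipy is assumed to be available); it computes the binomial probability of observing at most 10 defaults among 100 independent obligors whose true default probability is 50%:

```python
from scipy.stats import binom

# P(at most 10 defaults out of 100 trials with p = 0.5 each)
p = binom.cdf(10, n=100, p=0.5)
print(f"{p:.2e}")   # ~1.5e-17, i.e. essentially zero
```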
III.B.4.3 Agency Ratings

III.B.4.3.1 Methodology

The core business of public rating agencies such as Standard and Poor's, Moody's and Fitch is the analysis of issuers of debt instruments in terms of their credit quality and their ability to pay their investors. Issuing a bond (instead of taking a loan) has the advantage that the issuer is able to borrow many small amounts from a large number of investors simultaneously, and these investors will also be able to trade the bond on a secondary market. Unfortunately, a thorough analysis of an issuer's credit quality is usually too expensive compared to the small investment amount of most individual investors. This might prevent them from investing in the bond in the first place, and the issuance of the bond may fail; the secondary market for the bond will also suffer from the same problem. But if the credit research is carried out by an independent and trustworthy agency which acts for all bond investors and which makes the findings of the research public, a duplication of research effort is avoided: investors can invest many small amounts in many different bond issues and thus diversify their risk without spending unfeasible amounts on research. Hence a recognised public credit rating is almost indispensable for issuance in modern bond markets. Agency rating classifications are publicly available and are an important factor driving the decisions of many potential investors in these bonds; the cost of the rating is usually paid for by the issuer of the bond.

Because of their crucial position as one of the most important sources of information to investors, rating agencies often gain access to information that may be unavailable to ordinary investors. This information includes such things as direct interviews with the issuing firm's management; internal planning, research and budgeting numbers; and the accounting data of the rated firm. These data are summarised by a credit analyst who then – possibly with the aid of proprietary statistical models – assigns a rating to the issuer and the bond issue. This analysis is summarised in a rating classification into one of several rating classes: Aaa to C for Moody's, and AAA to C for Standard and Poor's and Fitch. Unfortunately, rating agencies are rather secretive about the precise methodology used and the weightings of the factors that influence a rating. There is a portion of human judgement involved, but a surprisingly large fraction of the credit ratings can be reproduced using purely statistical models.

Public agency ratings are revised and updated at regular intervals to reflect changes in the credit quality of the rated obligor. As the rating is meant to represent a long-term view, the effects of general business cycle variations should average out: rating agencies try to strike a balance between rating stability and rating accuracy. This 'rating through the cycle' policy is not uncontroversial, because it can lead to delays in rating adjustments. It will also cause empirical difficulties when we try to back out estimates for default probabilities from agency ratings.
Besides the classical, 'fundamental' ratings, quantitative credit rating providers have more recently entered the market, using proprietary quantitative statistical models to produce an output which can be more directly interpreted as a 'probability of default'. Examples are KMV's expected default frequencies and Kamakura's default probability estimates. These ratings typically are 'point-in-time' ratings that do not attempt to smooth out business cycle effects.

Table III.B.4.1: Standard and Poor's long-term issuer credit ratings definitions

• An obligor rated AAA has extremely strong capacity to meet its financial commitments. AAA is the highest issuer credit rating assigned by Standard and Poor's.
• An obligor rated AA has very strong capacity to meet its financial commitments. It differs from the highest-rated obligors only in small degree.
• An obligor rated A has strong capacity to meet its financial commitments but is somewhat more susceptible to the adverse effects of changes in circumstances and economic conditions than obligors in higher-rated categories.
• An obligor rated BBB has adequate capacity to meet its financial commitments. However, adverse economic conditions or changing circumstances are more likely to lead to a weakened capacity of the obligor to meet its financial commitments.
• An obligor rated BB is less vulnerable in the near term than other lower-rated obligors. However, it faces major ongoing uncertainties and exposure to adverse business, financial, or economic conditions which could lead to the obligor's inadequate capacity to meet its financial commitments.
• An obligor rated B is more vulnerable than the obligors rated BB, but the obligor currently has the capacity to meet its financial commitments. Adverse business, financial, or economic conditions will likely impair the obligor's capacity or willingness to meet its financial commitments.
• An obligor rated CCC is currently vulnerable, and is dependent upon favourable business, financial, and economic conditions to meet its financial commitments.
• An obligor rated CC is currently highly vulnerable.

Source: Standard & Poor's.

The Standard and Poor's ratings from AA to CCC shown in Table III.B.4.1 may be modified by the addition of a plus or minus sign to show relative standing within the major rating categories. Obligors rated BBB and better are regarded as investment-grade; obligors rated BB, B, CCC and CC are regarded as speculative-grade investments.

III.B.4.3.2 Transition Matrices, Default Probabilities and Credit Migration

We have seen that public rating agencies only use 'letter' ratings in order to classify obligors according to their credit quality. This may be adequate to make relative comparisons and maybe intuitive buy/hold/sell investment decisions, but in order to be able to use these ratings in a quantitative risk management system we need to map the letter ratings to numbers, that is, to default probabilities. Essentially, we face the problem of backing out what the rating actually means in terms of default probability, and the verbal definitions as they are given in Table III.B.4.1 are not sufficient for this.

Rating agencies may be secretive about their methodology, but fortunately they publish a lot of historical data about both rating transitions and defaults of rated obligors. A typical summary of such data published by Standard & Poor's is the transition matrix shown in Table III.B.4.2; similar transition matrices are also regularly published by other agencies (see, for example, Hamilton et al., 2002). In Table III.B.4.2 we have eliminated the 'not rated' class¹¹⁹ and added the D rating class for defaulted obligors in the last row, using the assumption that a defaulted obligor remains in the default class with 100% probability.

Table III.B.4.2: Standard and Poor's one-year average rating transition frequencies, 1981–1991 (percentages)

          AAA      AA       A      BBB      BB       B     CCC       D
  AAA   89.10    9.63    0.78    0.19    0.30    0.00    0.00    0.00
  AA     0.86   90.10    7.47    0.99    0.29    0.29    0.00    0.00
  A      0.09    2.91   88.94    6.49    1.01    0.45    0.00    0.09
  BBB    0.06    0.43    6.56   84.27    6.44    1.60    0.18    0.45
  BB     0.04    0.22    0.79    7.19   77.64   10.43    1.27    2.41
  B      0.00    0.19    0.31    0.66    5.17   82.46    4.35    6.85
  CCC    0.00    0.00    1.16    1.16    2.03    7.54   64.93   23.19
  D      0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00

Each row of the transition matrix gives the historical rating transition frequencies for the obligors of the corresponding rating class. For example, according to the BBB row of Table III.B.4.2, on average 0.06% of all BBB-rated obligors were upgraded to AAA in the course of one year, 0.43% were upgraded to AA, 6.56% were upgraded to A, 84.27% did not change their rating, 6.44% were downgraded to BB, 1.60% were downgraded to B, 0.18% were downgraded to CCC, and 0.45% defaulted.

A first glance at Table III.B.4.2 already confirms some stylised facts about rating transitions. First, the frequencies of unchanged ratings (i.e., the values on the diagonal) are very high; this is an indicator of the 'rating stability' aimed for by the agencies. Second, default frequencies decrease with increasing rating classes, thus the ratings do indeed reflect an ordering according to default probability.

119 The 'no rating' category has been eliminated.
As they stand, the numbers given in Table III.B.4.2 are not probabilities; they are the realised frequencies of the transitions over the observation period. Generally, transition frequencies can only be considered as noisy estimates of the true transition probabilities, so that we have to expect inconsistencies like zero entries: small transition frequencies are usually based upon a very small number of observations. A zero entry (as in the AA→D cell) does not mean that AA-rated obligors have a zero probability of default; it only means that no AA-rated obligor defaulted in the years 1981–1991. Also, realised frequencies can be non-monotonic. For instance, in Table III.B.4.2 the CCC-rated obligors had a higher upgrade frequency to A than BB-rated obligors: 1.16% for CCC-rated obligors as opposed to only 0.79% for BB-rated obligors!

Now, to calculate transition probabilities from these data, we need the following two assumptions:

• Time-invariance. The probability of a transition from rating class i at time t to rating class j at time T > t does not depend on the calendar dates t and T but only depends on time via the length T − t of the time interval.
• Markov property. The probability of a transition from rating class i to rating class j only depends on the rating class that we come from (class i) and the rating class that we go to (class j), and on no other external variables.

The assumption of time-invariance is not an uncommon assumption in statistical analysis. Essentially it says that the future will be like the past. Although common, it is very restrictive as it rules out phenomena such as business cycle effects: empirically, downgrades are very much more likely in recessions than in boom phases.

The second assumption, the Markov property, is also restrictive. It essentially rules out the use of any information beyond the current rating class – all we need to know in order to determine future transition probabilities is the current rating class. Besides disallowing other explanatory variables beyond the rating itself, the Markov property also implies that the history of the obligor's rating is irrelevant as long as the current rating is known. In particular, the Markov property implies an absence of rating momentum. This is in contradiction to empirically observed rating momentum: recently downgraded obligors are much more likely to be downgraded again than other obligors of the same rating which have already been in that rating class for a long time.

In order to use the information given in the transition matrix to calculate transition probabilities, we represent the transition table as a matrix P = (P_ij), where the entry P_ij in the ith row and the jth column represents the transition probability from class i to class j over the time interval, 1 ≤ i, j ≤ k, and k is the total number of rating classes (including default). Having added the default class D as the last row ensures that the transition probability matrix is actually a square matrix.
Both assumptions thus contradict empirical observation. In practice, however, the inaccuracies of these assumptions have to be weighed against the significant simplifications that they allow: under time-invariance and the Markov property we can speak of 'the' one-year transition probability matrix P, from which we can calculate the two-year and longer-horizon transition probability matrices by simple matrix multiplication.

For example, the two-period transition probability matrix is reached as follows. To obtain the probability of going from rating class A at t = 0 to rating class BB at t = 2, we need to sum over all possible paths that the rating can take from A to BB. Thus this probability equals the sum of the following probabilities:

• p(A→AAA) p(AAA→BB), the probability of going from A to AAA in the first year times the probability of going from AAA to BB in the second year;
• + p(A→AA) p(AA→BB), the probability of going from A to AA in the first year times the probability of going from AA to BB in the second year;
• + …
• + p(A→D) p(D→BB), the probability of going from A to D in the first year times the probability of going from D to BB in the second year.

Note that we need time-invariance so that the transition probabilities in the second year are the same as the ones in the first year, and the Markov property is used when we calculate the transition probabilities for the second year: irrespective of whether we multiply two downgrade probabilities, or a downgrade and an upgrade probability, we just multiply the transition probabilities. Mathematically,

    p_ij(2) = Σ_{n=1..K} p_in p_nj,

and the two-year transition matrix is therefore given by

    P(2) = P · P,                                                     (III.B.4.8)

that is, the matrix product of the one-year transition matrix with itself. Similarly, the transition matrix over n years is obtained from the one-year transition matrix by multiplication n times with itself: P(n) = Pⁿ. In summary, assuming time-invariance and the Markov property means that the one-year transition data given by the rating agencies can be used to calculate the transition probabilities (and default probabilities) for all time horizons, as illustrated in the sketch below.
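The following Python sketch is our own illustration of this matrix-power calculation, using the one-year matrix of Table III.B.4.2; the cumulative default probability for each class can be read off the last column of each power:

```python
import numpy as np

# One-year transition matrix from Table III.B.4.2 (rows: AAA..D, in %).
P = np.array([
    [89.10,  9.63,  0.78,  0.19,  0.30,  0.00,  0.00,   0.00],
    [ 0.86, 90.10,  7.47,  0.99,  0.29,  0.29,  0.00,   0.00],
    [ 0.09,  2.91, 88.94,  6.49,  1.01,  0.45,  0.00,   0.09],
    [ 0.06,  0.43,  6.56, 84.27,  6.44,  1.60,  0.18,   0.45],
    [ 0.04,  0.22,  0.79,  7.19, 77.64, 10.43,  1.27,   2.41],
    [ 0.00,  0.19,  0.31,  0.66,  5.17, 82.46,  4.35,   6.85],
    [ 0.00,  0.00,  1.16,  1.16,  2.03,  7.54, 64.93,  23.19],
    [ 0.00,  0.00,  0.00,  0.00,  0.00,  0.00,  0.00, 100.00],
]) / 100.0

assert np.allclose(P.sum(axis=1), 1.0, atol=1e-3)   # each row is a distribution

# n-year transition matrix P(n) = P^n; last column = cumulative default prob.
for n in (1, 2, 5):
    Pn = np.linalg.matrix_power(P, n)
    print(f"{n}-year BBB default probability: {Pn[3, 7]:.2%}")
```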
If transition probabilities are sought for shorter time horizons than the one-year horizon given by the original transition matrix P, one may try to find a short-term (say, monthly) transition matrix A that is consistent with P, or in other words a 1/n-period transition matrix such that Aⁿ = P. In many cases the problem of finding such a short-term transition matrix does not possess an exact solution, but for most transition matrices found in practice highly accurate approximate solutions can be found numerically.

III.B.4.4 Credit Scoring and Internal Rating Models

Because the large public credit rating agencies have traditionally concentrated on larger, US-based corporate bond issuers (although today many larger European, Japanese and Asian obligors are also rated), the set of obligors covered by public credit rating agencies is only a fraction (albeit an important one) of all obligors that make up a bank's credit portfolio. Important missing categories of obligors are small and medium-sized businesses, and of course the whole retail portfolio. In order to assess the credit risk of such obligors, various statistical methods have been developed. Generally, when it comes to statistical models of default prediction, the choice of the inputs seems to be more important than the choice of the particular methodology. Important variables that have been found to drive the default behaviour of obligors include:

• balance-sheet data capturing the indebtedness of the obligor;
• profits and free cash flows capturing the ability to pay;
• the riskiness (volatility) of the business;
• (if available) market data, such as the firm's market capitalisation;
• macro-economic data capturing the effects of the business environment on the obligor.

III.B.4.4.1 Credit Scoring

One of the earliest published credit scoring models goes back to Altman (1968), and it was further developed in Altman et al. (1977). Credit scoring models usually rely on accounting ratios like the ones listed in Table III.B.4.3. Accounting ratios are typically included to measure such things as indebtedness (for instance, using the ratio of market capitalisation to debt), profitability (for instance, using the retained earnings to total assets ratio), and cash flow available for debt service (for instance, via earnings before interest and taxes), as well as numbers to measure earnings stability and the debt service load. Table III.B.4.3 shows the definitions of these ratios and their averages over the firms in the data set used by Altman (1968); 'bankrupt' firms defaulted within one year.

Table III.B.4.3: Key accounting ratios

        Accounting Ratio                                    Bankrupt    Non-bankrupt
  X1    Working Capital / Total Assets                       –6.10%        41.40%
  X2    Retained Earnings / Total Assets                    –62.60%        35.50%
  X3    Earnings before Interest and Taxes / Total Assets   –31.80%        15.40%
  X4    Market Capitalisation / Debt                         40.10%       247.70%
  X5    Sales / Total Assets                                    150%          190%

Altman proposes to calculate for each obligor the score function

    Z(i) = 1.2 X1(i) + 1.4 X2(i) + 3.3 X3(i) + 0.6 X4(i) + 1.0 X5(i),      (III.B.4.9)

the value of which is the so-called Z-score of the ith obligor. The variables X1(i), …, X5(i) are the values of the accounting ratios for the ith obligor. If the score function has a value less than the cutoff score of 1.8, then the obligor is likely to default and a loan is to be denied. The score can also be used to rank obligors: a higher score is a sign of better credit quality.
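A direct implementation of the Z-score (III.B.4.9) is straightforward. The sketch below is our own illustration; the example ratios are invented, not taken from Altman's data:

```python
ALTMAN_WEIGHTS = (1.2, 1.4, 3.3, 0.6, 1.0)
CUTOFF = 1.8

def z_score(x1, x2, x3, x4, x5):
    """Altman (1968) Z-score from the five accounting ratios of Table III.B.4.3."""
    ratios = (x1, x2, x3, x4, x5)
    return sum(w * x for w, x in zip(ALTMAN_WEIGHTS, ratios))

# Hypothetical obligor: weak working capital and earnings, moderate leverage.
z = z_score(x1=0.05, x2=-0.10, x3=0.02, x4=0.80, x5=1.30)
print(f"Z = {z:.2f} ->", "deny loan" if z < CUTOFF else "grant loan")
```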
The scoring weights (1.2, 1.4, 3.3, 0.6, 1.0) and the cutoff level of 1.8 in (III.B.4.9) were estimated using a statistical method: they are chosen to maximise the number of correctly classified obligors in a 'training set'. The training set lists the values of each of the variables Xi for a set of obligors and, of course, the information whether the obligors eventually defaulted or whether they survived. When choosing the training set it is important to take care that it is diverse, that it is representative of the real world, and that it contains a sufficient number of defaulted obligors.

Figure III.B.4.4 shows the principle for the case with only two accounting ratios. Defaulted obligors are shown as red points and obligors which survived as blue points: the red cloud of points are the defaulted obligors in the training set, the blue cloud are the surviving obligors. It is not surprising that we have more and more survivors the further we go in the 'north-east' direction of higher earnings and lower indebtedness. For a given cutoff level, the score weights define a line in this graph which consists of the points that have a score of exactly the cutoff score. Any points above that line will have a higher score than the cutoff (and thus will be accepted); any points below it will have a lower score (and will be rejected). The green line shows this line for the optimal cutoff. Thus high scores are 'good' in the scoring model.

Figure III.B.4.4: An example of credit scoring

In particular, one has to be very careful if only internal data of a bank is used to estimate a scoring model, because this data set will only contain obligors who have already been pre-selected according to the existing lending criteria. If, for example, the existing criteria are very strict regarding earnings before interest and taxes (EBIT), then the internal data will contain very few obligors with bad EBIT numbers, and the scoring model may mistakenly conclude that EBIT was not relevant to default prediction.
III.B.4.4.2 Estimation of the Probability of Default

The two most common models for the estimation of default probabilities are logit and probit models. In the probit model, it is assumed that the default probability of obligor i can be written as:

    p_i = Φ( α + β₁ X_1i + … + β_N X_Ni ),                            (III.B.4.10)

where Φ(·) is the cumulative normal distribution function, X_1i, …, X_Ni are given accounting ratios and other explanatory variables for obligor i, and α, β₁, …, β_N are parameters that are estimated statistically. Essentially, we are mapping from scores to default probabilities by using the cumulative normal distribution function.

The structure of equation (III.B.4.10) is very similar to that of a scoring model like (III.B.4.9). However, note that the scores have a different interpretation in the two models: in a scoring model such as (III.B.4.9) high scores are mapped to low default risk, while in the probit model (III.B.4.10) high scores are mapped to high default probabilities. Thus high scores are 'good' in the scoring model, but 'bad' in the probit model.

One advantage of probit models is that we can tell a story about how defaults occur in this setup. Let us define the credit index for the ith obligor as

    X(i) = −( α + β₁ X_1i + … + β_N X_Ni ) + ε_i,                     (III.B.4.11)

where ε_i is a standard normally distributed noise component (note the minus sign: a high score corresponds to a low credit index). If we assume that obligor i defaults if his credit index drops below zero, then the default probability of obligor i is exactly equal to the p_i given by the probit model. Because of this, there is a natural link from probit models to credit portfolio models like CreditMetrics (see Chapter III.B.5).

The logit model differs from the probit model only in that it does not use the cumulative normal distribution function to map from scores to default probabilities, but uses a different function, the logistic transformation. In the logit model, the default probability of obligor i is set equal to

    p_i = 1 / ( 1 + exp{ −(α + β₁ X_1i + … + β_N X_Ni) } ).           (III.B.4.12)

The logit model is slightly easier to estimate than the probit. The estimated parameters of the two models are quite different, even when the same explanatory variables are used in the two models, but generally results from logit and probit models do not differ much.
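The two link functions are easy to compare numerically. The following sketch is our own (scipy is assumed to be available; the coefficient values are invented for illustration):

```python
import numpy as np
from scipy.stats import norm

def probit_pd(score):
    """Probit: PD = Phi(score); here high scores mean high default risk."""
    return norm.cdf(score)

def logit_pd(score):
    """Logit: PD = 1 / (1 + exp(-score)), the logistic transformation."""
    return 1.0 / (1.0 + np.exp(-score))

# Hypothetical fitted coefficients alpha, beta and one obligor's ratios.
alpha, beta = -2.0, np.array([1.5, -0.8])
x = np.array([0.4, 0.6])            # e.g. a leverage and a profitability ratio
s = alpha + beta @ x

print(f"score = {s:.2f}: probit PD = {probit_pd(s):.2%}, logit PD = {logit_pd(s):.2%}")
```

Note that for the same score the two functions return different probabilities, which is why the estimated parameters of the two models differ even on the same data.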
III.B.4.4.3 Other Methods to Determine the Probability of Default

Besides the rating models presented above, a number of other models are used to assess the credit quality of individual obligors. The most important of these is the KMV model, which is explained in detail in Section III.B.5.6. In a gross simplification, the KMV model can also be viewed as a scoring model where the 'accounting ratio' is the distance-to-default. However, this approach is special because the distance-to-default contains both market information (the market capitalisation of the firm) and volatility information; both market and volatility information are usually not found in classical accounting ratios. Several aspects of the model, in particular the empirical transformation that KMV uses to reach a default probability estimate for a given distance-to-default, are essentially a 'black box' to the credit officer. From a purely practical point of view, the large historical database underlying the model is valuable. Generally, such models have been found to be equally powerful compared to good statistical models (like logit or probit models). Another class of potentially powerful models for credit classification are neural networks and other expert systems from artificial intelligence research, but their acceptance in practice has been hindered by their complex nature.

III.B.4.5 Market-Implied Default Probabilities

The credit default tree presented in Figure III.B.4.2 is a perfectly adequate model to price simple credit-sensitive instruments such as corporate bonds or credit default swaps (CDSs), provided that the necessary conditional default probabilities are already specified. In this section we will turn the model around. Instead of determining prices for a given set of parameters, we now determine parameters from the prices of a set of benchmark securities, the calibration securities. The default probabilities that are reached in this calibration exercise are called market-implied default probabilities. Essentially, we assume that there is information to be found in the prices.

This approach rests on several assumptions. First, the prices of the defaultable bonds/CDSs must be meaningful, that is to say, liquid and not unduly affected by external factors beyond default risk (such as taxes). Also, the peculiarities of different markets enable the participants to incorporate their information into the prices to different degrees; calibration securities should therefore be taken from markets with similar conditions. Second, it is necessary that all calibration instruments are subject to the same type of credit risk: they must reference the same obligor (and have the same default definition in the case of CDSs), and they must have the same recovery rate and seniority class in default. We also need to know the value of the recovery rate (or at least its average). Third, we assume that movements of risk-free interest rates and defaults are independent. This is mostly a technical assumption which significantly simplifies the analysis; in normal situations this correlation has only a second-order effect on the resulting default probabilities. Finally, we use the risk-neutral pricing paradigm, that is, we price a security by taking the discounted expected value of its payoffs. It is important to realise that the probabilities that we will reach are pricing (or martingale measure) probabilities: they are different from historical probabilities because they are loaded with risk premia. This issue will be discussed in Section III.B.4.6.

III.B.4.5.1 Pricing the Calibration Securities

As a running example in this section we consider the problem of calibrating a term structure of default probabilities to the bond prices shown in Table III.B.4.4.

Table III.B.4.4: Calibration securities

Prices of five corporate bonds issued by Daimler Chrysler NA Holding. Trade date: 17 November 2003; effective date: 19 November 2003. The notional is 100, the currency is USD, and coupons are paid annually.

    Bond    Dirty Price    Coupon    Maturity
    1       105.84         4.50      03-01-2005
    2       106.00         5.75      23-06-2005
    3       105.46         4.62      10-03-2006
    4       100.27         4.75      02-10-2006
    5       109.46         5.62      16-01-2007

We can represent the prices of the calibration securities with two elementary types of (hypothetical) building-block securities: defaultable zero-coupon bonds, which only pay in survival, and recovery securities, which pay at the time of default. Specifically, we denote by B̄(0, T) the value at time t = 0 of receiving $1 at time T in survival (and nothing if default occurs before T), and by E(0, T) the value at time t = 0 of receiving $1 at the time of default, if the default occurs before time T.

For example, the first bond in Table III.B.4.4 has annual coupon payments of c at T₁ = 3 January 2004 and T₂ = 3 January 2005, and the principal repayment of $100 at T₂. Furthermore, if a default occurs before T₂, we assume that the recovery rate is 40%, so that we will have an additional payoff of R = 40 at default. Similarly, we can represent the prices of the other four bonds as discounted values of their principal, coupon and recovery cash flows. For all bonds together, we have promised cash flows on the payment dates of the five bonds (the coupon anniversaries on 3 January, 10 March, 23 June, 2 October and 16 January of the relevant years, up to each bond's maturity). These are the maturities of the defaultable zero-coupon bonds that we need to represent the cash flows of our calibration securities.

With notional amounts normalised to 1, the value of a defaultable coupon bond with coupon payment dates T_k, k = 1, …, K, coupon amount c and recovery rate R can now be written as¹²⁰

    V_Bond = c B̄(0, T₁) + c B̄(0, T₂) + … + (1 + c) B̄(0, T_K) + R E(0, T_K).      (III.B.4.13)

Here, the first terms represent the value of the promised coupon payments and the final repayment of principal, and the final term represents the value of the recovery received at default, if the default occurs before time T_K.

120 We ignore day count conventions and other technical adjustments.
Figure III.B.4.5: The term structure of risk-free interest rates (USD forward Libor rates on 17 November 2003, plotted for maturities from 0 to 10 years)

The value of a protection-buyer position in a CDS on the reference credit with CDS rate s and the same payment dates T_k, k = 1, …, K, is:

    V_CDS = −s [ B̄(0, T₁) + … + B̄(0, T_K) ] + (1 − R) E(0, T_K).      (III.B.4.14)

Here, the first sum represents the value of the fee stream (a liability to the protection buyer), and the last term represents the value of receiving 1 − R at default of the obligor. The market CDS rate is chosen such that the value of the CDS position is zero:

    s = (1 − R) E(0, T_K) / [ B̄(0, T₁) + … + B̄(0, T_K) ].             (III.B.4.15)

Having reduced the pricing problems to the problem of finding prices for B̄(0, T) and E(0, T), we now have to represent these prices in terms of the survival probabilities defined in Section III.B.4.1. The price of a defaultable zero-coupon bond is easily seen to be

    B̄(0, T) = P(0, T) B(0, T),                                        (III.B.4.16)

where B(0, T) is the price of a default-free zero-coupon bond with maturity T, and P(0, T) is the survival probability until T. It can also be shown that:

    E(0, T_K) ≈ p₁ B(0, T₁) + p₂ B(0, T₂) + … + p_K B(0, T_K),        (III.B.4.17)

where p_k = P(0, T_{k−1}) − P(0, T_k). Here, p_k represents the probability of surviving until time T_{k−1} and then defaulting in the period (T_{k−1}, T_k] (which occurs with probability p_k); this takes us to the default branch at time T_k, and after discounting the payoff with B(0, T_k) and summing over k, we reach the formula above.
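Under a constant-intensity assumption the building blocks, and hence a bond's model price, can be computed directly. The sketch below is our own illustration: a flat 3% curve stands in for the Libor curve of Figure III.B.4.5, and the cash-flow dates are simplified to annual points, so the numbers are indicative only:

```python
import math

def model_bond_price(coupon_times, c, R, lam, r=0.03):
    """Model price per $1 notional, eq. (III.B.4.13):
    V = c*sum(Bbar(0,Tk)) + Bbar(0,TK) + R*E(0,TK),
    with Bbar(0,T) = P(0,T)*B(0,T), P(0,T) = exp(-lam*T), B(0,T) = exp(-r*T)."""
    B = lambda t: math.exp(-r * t)          # default-free zero-coupon bond
    P = lambda t: math.exp(-lam * t)        # survival probability
    Bbar = lambda t: P(t) * B(t)            # defaultable zero-coupon bond

    survival_leg = sum(c * Bbar(t) for t in coupon_times) + Bbar(coupon_times[-1])

    # E(0,T) ~ sum_k (P(0,T_{k-1}) - P(0,T_k)) * B(0,T_k), eq. (III.B.4.17)
    ts = [0.0] + list(coupon_times)
    E = sum((P(a) - P(b)) * B(b) for a, b in zip(ts, ts[1:]))
    return survival_leg + R * E

# Bond with annual 4.5% coupons for 2 years, 40% recovery, intensity ~1%.
print(f"{100 * model_bond_price([1.0, 2.0], 0.045, 0.40, 0.010029):.2f}")
```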
In equations (III.B.4.13)–(III.B.4.17) we also need default and survival probabilities for several different time horizons.

Example III.B.4.2: In the example given above, we assume that these probabilities are given by a constant default-intensity model as in Section III.B.4.1, that is, P(T) = exp{−λ₀T}, where λ₀ is a parameter that we need to find. Given all the other assumptions, the only remaining degree of freedom that we can use to match model prices to market prices is the default intensity λ₀. We assume a common recovery rate of 40%, and we use the USD forward Libor curve of Figure III.B.4.5 as the risk-free interest rates; this yields the prices of the default-free zero-coupon bonds B(0, T) at the payment dates of the calibration bonds. By using Excel's Solver routine we find that a value of λ₀ = 1.0029% minimises the squared pricing errors of the bonds' model prices relative to the market prices. The results are presented in Table III.B.4.5.

Table III.B.4.5: Model prices of the calibration securities

The model prices of the bonds of Table III.B.4.4 under the assumption of a constant default intensity λ₀ = 1.0029%, a recovery rate of 40%, and the default-free interest rates shown in Figure III.B.4.5. For each bond (coupon and maturity as in Table III.B.4.4) the table reports the market price, the value of the default payoffs (the potential recovery payments), the value of the survival payoffs (the promised coupon and principal payments), and the model price. The sum of the default and survival payoff values yields the model price of each bond, which lies close to the corresponding market price.

We have introduced a variety of ways to represent default probabilities in Section III.B.4.1. A particularly simple representation can be reached for the fair CDS rate (III.B.4.15) if the 'odds' of default H_k are used:

    s = (1 − R) ( w₁ H₁ + … + w_K H_K ),                               (III.B.4.18)

where the weights of the average are

    w_k = B̄(0, T_k) / [ B̄(0, T₁) + … + B̄(0, T_K) ].

Clearly, these weights are non-negative and sum to 1. That is, the CDS rate s equals the loss on default (1 − R) times a weighted average of the odds of default H_k. In the particularly simple case of constant odds of default (i.e., if all H_k take the same value H) we have

    s = (1 − R) H.                                                     (III.B.4.19)

Generally, if we equate the odds of default with the default hazard rate (a very accurate approximation) we can say that the CDS rate equals the loss given default times the default hazard rate.

III.B.4.5.2 Calculating Implied Default Probabilities

Backing out an implied default hazard rate from a single CDS quote is very straightforward. If we observe a CDS spread ŝ in the market, then the corresponding implied default hazard rate is reached by solving equation (III.B.4.19) for the hazard rate:

    Ĥ = ŝ / (1 − R).

Example III.B.4.3: In the case of Daimler Chrysler, the quote for a CDS with five years' maturity on 17 November 2003 was 108.13bp. According to the formula given above, the implied default hazard rate is 1.8% (at an assumed recovery rate of 40%).

In a general situation, we may have CDS quotes for several maturities, or we may have market prices for different bonds with different maturities and different coupons. In order to find a set of implied default probabilities that is consistent with these prices and that simultaneously looks 'sensible', a numerical optimisation routine must be used. This procedure is quite similar to the bootstrapping procedures used to back out term structures of interest rates from default-free bond prices. Generally, apart from introducing a time-dependence in the default probabilities, the results are qualitatively similar to the simple result for the single CDS. More details can be found in Schönbucher (2003).

It is not unusual for implied default intensities from CDSs to be higher than the corresponding implied default intensities from bond prices. This effect is partly due to the fact that the embedded delivery option of a CDS makes the recovery rate of a CDS smaller than the recovery rate of a bond, and it is partly caused by market imperfections in the bond markets: while it is easy to short credit risk in the CDS market, it can be difficult to short defaultable bonds.
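A calibration in the spirit of Example III.B.4.2 can be reproduced with a one-dimensional optimisation. This is our own sketch (scipy is assumed to be available): the bond data are the five bonds of Table III.B.4.4 with simplified year-fraction maturities, and a flat 3% curve replaces the actual Libor discount factors, so the resulting intensity will differ somewhat from the 1.0029% found in the text:

```python
import math
from scipy.optimize import minimize_scalar

# (maturity in years from trade date, annual coupon, dirty market price)
bonds = [(1.13, 4.50, 105.84), (1.60, 5.75, 106.00), (2.31, 4.62, 105.46),
         (2.87, 4.75, 100.27), (3.16, 5.62, 109.46)]
R, r = 0.40, 0.03

def price(T, c, lam):
    """Constant-intensity model price per 100 notional, annual coupons."""
    times = [T - k for k in range(int(T), -1, -1) if T - k > 0]  # coupon dates
    B = lambda t: math.exp(-r * t)
    P = lambda t: math.exp(-lam * t)
    surv = sum(c * B(t) * P(t) for t in times) + 100 * B(T) * P(T)
    ts = [0.0] + times
    rec = sum((P(a) - P(b)) * B(b) for a, b in zip(ts, ts[1:]))   # (III.B.4.17)
    return surv + R * 100 * rec

def sq_err(lam):
    """Sum of squared pricing errors over the five calibration bonds."""
    return sum((price(T, c, lam) - p) ** 2 for T, c, p in bonds)

res = minimize_scalar(sq_err, bounds=(0.0, 0.10), method="bounded")
print(f"calibrated default intensity: {res.x:.4%}")
```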
III.B.4.6 Credit Rating and Credit Spreads

In order to compare implied default probabilities with historical default probabilities, it is instructive to study Figure III.B.4.6. For example, let us take the year 1997. For that year, the dark blue line shows an average credit spread for US Baa-rated corporate bonds of 0.55% p.a. This spread is measured as a spread over Aaa-rated corporate bonds and not as a spread over treasuries, in order to avoid the tax and liquidity premia which would be present in the treasury market.¹²¹ Since credit spreads are approximately equal to the default hazard rate times the loss given default, this spread will yield an implied default hazard rate of 0.92% (at an assumed recovery rate of 40%).

The spread is a compensation for the default risk that we bear if we invest in Baa-rated bonds, so it is interesting to see whether this compensation was adequate for the credit risk incurred. If the realised default hazard rate equals the implied default hazard rate, then an investment in Baa-rated bonds will just break even. This is what we would expect to happen with risk-neutral investors and in the absence of market imperfections. We now investigate how the implied default intensities compare, historically, to actual default rates. In fact, it turns out that the compensation is much more than adequate.

Imagine we had bought the total Baa-rated corporate bonds market (or an equally weighted fraction of it) in the year 1997 and held it to maturity – let us call this portfolio 'Baa97'.¹²² The value of the yellow line in 1997 tells us the rate of defaults that we would have suffered over one year, and for that year this rate of defaults is almost zero! So, what if we had to hold the Baa97 portfolio for three years starting in 1997? The blue line tells us that over 1997–2000 the annual default rate of the Baa97 bond portfolio was only about 0.3%. This is still much less than the implied default rate, and even less than the spread. Even over a five-year horizon starting in 1997, the historical default rate is only about 0.65% per annum, much below the implied default rate.

121 Choosing Aaa instead of risk-free only makes our estimate more conservative.
122 We want to hold until maturity in order to eliminate effects due to market price movements.
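The break-even comparison in this section is simple arithmetic. The following lines are our own illustration, contrasting the spread-implied hazard rate with the historical default rates quoted above:

```python
spread, R = 0.0055, 0.40           # 55bp Baa-over-Aaa spread, 40% recovery
implied_hazard = spread / (1 - R)  # s = (1 - R) * h  =>  h = s / (1 - R)
print(f"implied annual default rate: {implied_hazard:.2%}")   # ~0.92%

for horizon, realised in [("1y", 0.0000), ("3y", 0.0030), ("5y", 0.0065)]:
    print(f"{horizon}: realised {realised:.2%} vs implied {implied_hazard:.2%}")
```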
This situation repeats consistently, without exception, over the whole period from 1976 to 1997 (1997 is the last year in which we could calculate a five-year forward-looking default rate): investing in Baa portfolios would always and without exception have outperformed an investment in the risk-free reference portfolio (which here is Aaa-rated bonds). Other studies also indicate that there seems to be a significant discrepancy between implied default rates and historical default rates; typically, implied default rates tend to be larger by a factor of 2–3. This situation has been termed the spread premium puzzle: why are spreads so much higher than seems to be justified by their actual credit risk component? Or is this an arbitrage opportunity? Several explanations have been put forward.

Figure III.B.4.6: Implied default probabilities vs. historical default frequencies

This figure shows, for the pool of Baa-rated US corporate bonds: implied default rates (black, for 40% recovery); credit spreads (dark blue, measured as spreads over Aaa, not as spreads over treasury); and the default rate of the pool over the next year (yellow), over the next three years (blue), and over the next five years (green). The series are plotted in per cent against the years 1975 to 2000.

First, maybe the actual default risk was much higher than its historical incidence: we just did not experience the truly bad scenario, but the possibility of this bad scenario was nevertheless priced into the spreads. If this mysterious, extremely bad scenario were to explain a sizeable proportion of the spreads, then it must also have a probability that is not too small compared to the individual default probability of an obligor. But if this is the case, why has this scenario never occurred so far? This explanation concerning the unrealised 'Armageddon' scenario therefore cannot explain the size of the effect.

Secondly, there is risk aversion. An investment in Baa97 will lose money if a recession comes, but a recession is exactly the situation in which the average investor needs money. Therefore, the investor will demand a higher return in order to be compensated for this wrong-sided risk. Risk aversion is certainly present, but risk premia can never lead to a shift in spreads that amounts to a virtual arbitrage opportunity: that would be inconsistent with rational investor behaviour, because even the most risk-averse investor could have picked up a seemingly risk-free excess return here.

Third, maybe there were tax effects at work that made the Baa-rated bonds unattractive, or liquidity premia were demanded for investments in Baa97. The liquidity argument is certainly valid if spreads over treasuries are considered, but here we are considering spreads over Aaa corporates, which should have similar liquidity problems to Baa-rated bonds. The tax argument likewise does not hold for spreads over Aaa: because treasuries are exempt from state taxes in the USA, it only has the possibility of being relevant if spreads over treasuries are considered. Besides, the spread premium puzzle is also observed in markets which are not affected by US tax rules, for example in the markets for Eurobonds or for bonds of non-US issuers.

Unfortunately, none of these explanations can fully explain the effect. Maybe it is indeed a market imperfection. But maybe it has already disappeared: since mid-2003 spreads have decreased significantly compared to the spreads used in Figure III.B.4.6. It is quite possible that the market has finally reached a level where implied and actual default risk are approximately equal, or at least in a more realistic relationship to each other. Of course, we will only know this for sure after it is too late to make an investment decision. So the spread-premium puzzle remains a puzzle.

III.B.4.7 Summary

Credit rating and the estimation and measurement of default probabilities are a classical problem of credit analysis, and one that never seems to be perfectly solvable. The classical solution to this problem is to rely on the rating assessment of an external rating agency; essentially, this is reliance on expert advice. We have seen how these rating classifications can be translated into concrete numbers for default and survival probabilities over different time horizons.
Recent advances in computing power and (more importantly) the increasing availability of the necessary data in electronic form have made it possible to estimate default probabilities on a purely statistical basis. In many cases, such quantitative approaches are able to compete successfully with agency ratings, and frequently they are the only option when no agency rating is available.

Finally, we discussed methods to imply default probabilities from observed market prices of traded credit-sensitive instruments such as bonds and credit default swaps. While these methods are indispensable to assess the risk compensation that one should get for the type of credit risk under consideration, spread-implied probabilities usually differ significantly and systematically from statistical and historical default rates. This credit-spread puzzle remains an open question to date. Nevertheless, because they are systematically above historical default rates, spread-implied probabilities may be useful as very conservative estimates of default probabilities. In practice, implied probabilities are the correct probabilities to use for pricing applications, because these probabilities already contain the risk premia that are paid for the credit risk contained in the calibration securities. Risk management, capital allocation and value-at-risk calculations, on the other hand, require historical probabilities, because here the preferences and risk aversion can be added later on.

References

Altman, E (1968) Financial ratios, discriminant analysis and the prediction of corporate bankruptcy. Journal of Finance, 23(4), pp. 589–609.

Altman, E, Haldeman, R, and Narayanan, P (1977) Zeta analysis: a new model to identify bankruptcy risk of corporations. Journal of Banking and Finance, 1, pp. 31–54.

Hamilton, D T, Cantor, R, and Ou, S (2002) Default and recovery rates of corporate bond issuers. Special comment, Moody's Investor Service Global Credit Research, February.

Schönbucher, P J (2003) Credit Derivatives Pricing Models. Chichester: Wiley.

III.B.5 Portfolio Models of Credit Loss

Michel Crouhy, Dan Galai and Robert Mark¹²³

This chapter describes the main approaches to the modelling of credit risk in a portfolio context (credit value-at-risk), i.e. the credit migration approach, the contingent claim or structural approach, and the actuarial approach. It reviews the assumptions of the credit portfolio models and the pros and cons of each approach. Finally, it discusses the relationship between credit value-at-risk, economic capital and regulatory capital.

III.B.5.1 Introduction

In this chapter we review the main approaches to modelling credit risk. For each approach we explain the basic logic behind it, describe the data required and evaluate its strengths and weaknesses. The interested reader can find a more detailed description of the approaches in Crouhy et al. (2001).

A bank should be concerned with the estimation of the risk of default of a specific creditor, since this is the basis for pricing a loan and charging the borrower with the appropriate interest rate.

123 Michel Crouhy is a Partner at Black Diamond, Dan Galai is a Professor at the Hebrew University and Principal at Sigma P.C.M., and Robert Mark is CEO of Black Diamond.
But, at the same time, the bank should be looking at the quality of its loan portfolio as a whole, since the stability of the bank depends to a large extent on the performance of its portfolio, and on the size of credit-related losses in the portfolio in a given period. Portfolio analysis may in turn affect the pricing of individual loans and the lending decision, as each asset's contribution to portfolio risk must be considered.

Modelling credit risk and pricing risky loans or bonds is a complicated task. The factors that affect credit risk are many and varied. Some factors are exogenous or economy-wide, such as the level of interest rates and the growth rate of the economy. Other factors are endogenous, such as the business risk of the firm, its capital structure, and the flexibility of its production technology. A major consideration is whether to evaluate credit risk as a discrete event, and concentrate only on the potential default event, or whether to analyse the dynamics of the debt value and the associated credit spread, and to estimate its risk over the whole time interval to its maturity. Another important issue is the data sources that are available in order to assess credit risk. Can the analyst rely on accounting data, or are these too stale and subject to manipulation? To what extent are market data available, and then, to what extent are the markets efficient enough to convey reliable information?

Before proceeding further, let us define some fundamental concepts. Default, in theory, occurs when the asset value falls below the value of the firm's liabilities (Merton, 1974). Default, however, is distinct from bankruptcy. Bankruptcy describes the situation in which the firm is liquidated, and the proceeds from the asset sale are distributed to the various claim holders according to pre-specified priority rules. Default, on the other hand, is usually defined as the event that a firm misses a payment on a coupon and/or the reimbursement of principal at debt maturity. Cross-default clauses on debt contracts are such that when the firm misses a single payment on a debt, it is declared in default on all its obligations.

Since the early 1980s, Chapter 11 regulation in the United States has protected firms in default and helped to maintain them as going concerns during a period in which they attempt to restructure their activities and their financial structure. Figure III.B.5.1 compares the number of bankruptcies to the number of defaults during the period from 1973 to 2004. The data are for North American public companies, but note that legal procedures in enforcing the bankruptcy procedures in the case of a default event vary quite substantially across jurisdictions (see J.P. Morgan, 1997, Appendix G).

Figure III.B.5.1: Bankruptcies and defaults in North American public companies, 1973Q1 to 2004Q1 (quarterly counts, ranging from 0 to about 140)
Over the last few years, a number of new approaches to credit risk modelling have been made public. The CreditMetrics approach (which was initiated by J.P. Morgan and was later spun off to RiskMetrics Inc.) is based on the analysis of credit migration, i.e. the probability of moving from one credit grade to another, including default, within a given time horizon, which is usually one year. CreditMetrics estimates the full one-year forward distribution of the values of any bond or loan portfolio, where the changes in values are related to credit migration only. The past migration history of thousands of rated bonds is assumed to accurately describe the probability of migration in the next period. The credit migration framework is reviewed in Section III.B.5.3.

Tom Wilson (1997a, 1997b) proposes an improvement to the credit migration approach, CreditPortfolioView, by allowing default probabilities to vary with the credit cycle. In this approach, default probabilities are a function of macro-variables such as unemployment, the level of interest rates, the growth rate in the economy, government expenses and foreign exchange rates. These macro-variables are the factors which, to a large extent, drive credit cycles. This methodology is reviewed in Section III.B.5.4.

The structural approach to modelling portfolio credit risk offers an alternative to the credit migration approach. Here, the economic value of default is presented as a put option on the value of the firm's assets. The contingent claim approach is introduced in Section III.B.5.5.

KMV Corporation, a firm that specialises in credit risk analysis, has developed a credit risk methodology and an extensive database to assess default probabilities and the loss distribution related to both default and migration risks. KMV's methodology differs from CreditMetrics in that it relies upon the 'expected default frequency' for each issuer, rather than upon the average historical transition frequencies produced by the rating agencies for each credit class. The KMV approach is based on the asset value model originally proposed by Merton (1974). KMV's methodology, together with the contingent claim approach to measuring credit risk, is reviewed in Section III.B.5.6.

At the end of 1997, Credit Suisse Financial Products released CreditRisk+, an approach that is based on actuarial science. CreditRisk+, which focuses on default alone rather than credit migration, is examined briefly in Section III.B.5.7. CreditRisk+ makes assumptions concerning the dynamics of default for individual bonds or loans but, unlike KMV and CreditMetrics, it ignores the causes of default.

III.B.5.2 What Actually Drives Credit Risk at the Portfolio Level?

Banks, as regulated institutions, are very focused on the quality of their credit portfolio, and must assign regulatory capital against credit risk. The current regulation requires banks to assign regulatory capital against each loan obligation, usually 8% of the principal amount. Future regulation, as described in Chapter III.B.6, will allow for better differentiation among obligors based on their ratings. The regulators will also look at the quality of the loan portfolio, and the level of concentration by industry and region ('Pillar II' in the New Basel Accord). But beyond the formal regulatory requirements, banks are judged and evaluated by their shareholders, as well as by their customers, especially the depositors.
Therefore, banks have strong incentives to monitor the risk of their assets, and in particular the risk of their loan portfolio. The profitability of most banks depends largely on the performance of the loans they granted in the past. So what are the major factors that affect the performance of the loan portfolio? It should be emphasised that performance has (at least) two dimensions: return and risk. The risk of a loan portfolio can be tricky to assess, since a bank can show healthy profits over a few years, due to high interest charges and low default rates, and then, once a default event (or events) occurs on a major exposure, incur a substantial loss that instantaneously wipes out those profits.

The first factor affecting the portfolio is the credit standing of individual obligors. One bank may concentrate on prime, investment-grade obligors, granting loans only to the best credits, with a very low probability of default for any obligor. Another may choose to concentrate on riskier, speculative-grade obligors who pay a much higher coupon rate on their debt. The critical issue for both types of institution is to charge each borrower an interest rate that compensates the lender for the risk it undertakes.

The second factor is 'concentration risk', or the extent to which the obligors are diversified across geography and industries. A bank whose corporate clients are mostly in commercial real estate is considered riskier than a bank whose corporate loans are distributed over many industries. Also, a bank serving only a narrow geographical area can be devastated by a slowdown in the economic activity of that particular region.

This leads to a third important factor that affects the risk of the portfolio: the state of the economy. During good times of economic growth the frequency of defaults falls sharply compared to periods of recession. There is a propensity for things to go wrong at the same time, usually at the trough of the economic cycle. In addition, periods of high default rates, such as 2001–2002, are characterised by low recoveries that lead to high loss rates.

The quality of the portfolio can also be affected by the maturity of the loans. Usually, longer loans are considered riskier than short-term loans. Time diversification can reduce the risk of the portfolio by spreading maturities over the economic cycle, as well as reducing 'liquidity risk'. Liquidity risk is defined as the risk that the bank will run into difficulties when refinancing its assets, for instance by renewing deposits or by raising money through issuing debt instruments, because the market has 'dried up' or prices have increased sharply.

Risk assessment of the portfolio is needed to determine how much economic capital should be allocated against unexpected credit losses. Therefore, the future distribution of the values of the loan portfolio must be estimated. This task is not at all straightforward, and it is much more complicated than estimating the value distribution of a portfolio of market-traded instruments such as stocks and bonds. The major obstacle lies in the estimation of the correlations among potential default events. While we have a lot of data on market-traded instruments, we do not have comparable data on non-traded debt instruments.
The data problem is also aggravated by statistical issues, for instance that default correlations are not directly observable. To overcome some of the estimation problems, most approaches imply default correlations from equity correlations, as in Section III.B.5.3.2. Still, the estimation problem is huge, since so many pairs of cross-correlations must be estimated for a portfolio of obligors. For example, even a relatively small portfolio of 1000 obligors requires the estimation of 1000 × 999/2 = 499,500 correlations.

This last problem is circumvented by using a multi-factor, or multi-index, statistical model. The rate of return for each firm or stock is assumed to be generated by a linear combination of a few indices. For example, the indices can be related to a country or an industry. This approach reduces the calculation requirements to merely estimating the correlations among pairs of indices.

All the simplifying assumptions are used in order to estimate the portfolio's credit 'value-at-risk' (credit VaR). The distribution of the rate of return of the portfolio of obligors is estimated, and the credit VaR is derived from a percentile of that distribution. The credit VaR of a loan portfolio is thus derived in a similar fashion to market risk: it is simply the distance from the mean to the percentile of the forward distribution, at the desired confidence level. However, the risk horizon is usually much longer. The future point in time is typically one year for both regulatory and economic credit risk capital, whereas for market VaR the risk horizon is 10 days for regulatory capital (but again, usually one year for economic capital). [124]

[124] The choice of a risk horizon is somewhat arbitrary. It is usually one year, as this corresponds to the planning cycle and the average time it would require to recapitalise the bank if it were to suffer a major unexpected loss.

Economic capital is the financial cushion that a bank uses to absorb unexpected losses, including those related to credit events such as credit migration and/or default (see Chapter III.B.6). Figure III.B.5.2 illustrates how the capital charge related to credit risk can be derived from the portfolio value distribution, using the following notation:

P(p) = value of the portfolio in the worst-case scenario at the p% confidence level
FV = forward value of the portfolio = V_0(1 + PR)
V_0 = current mark-to-market value of the portfolio
PR = promised return on the portfolio
EV = expected value of the portfolio = V_0(1 + ER)
ER = expected return on the portfolio
EL = expected loss = FV − EV

Figure III.B.5.2: Credit VaR and economic capital attribution [distribution of the one-year forward portfolio value, showing the worst-case value P(p), the expected value EV, the forward value FV, the expected loss EL = FV − EV, and economic capital as the distance EV − P(p); original plot not reproduced]

Because the expected loss is priced into the interest charged on loans, it is not part of required economic capital. The capital charge is instead a function of the unexpected losses:

Economic Capital = EV − P(p).

When the risk horizon is one year, credit VaR and economic capital are equivalent. The bank should hold reserves against these unexpected losses at a given confidence level. If, say, the tail probability is set at 0.01%, there is only a 1 in 10,000 chance that the bank will incur losses above the capital level over the period corresponding to the credit risk horizon, say, one year.
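To make the capital attribution concrete, the following Python sketch computes credit VaR and economic capital from a simulated distribution of one-year portfolio values. It is purely illustrative: the portfolio value distribution is fabricated (a skewed gamma loss) only so that the example is self-contained; in practice the vector would come from a credit portfolio model such as those described below.

import numpy as np

# Illustrative only: fabricate a skewed distribution of one-year portfolio
# values; a real application would obtain these from a credit portfolio model.
rng = np.random.default_rng(seed=42)
portfolio_values = 100.0 - rng.gamma(shape=2.0, scale=1.5, size=100_000)

confidence = 0.9997                       # roughly an AA target (~3 bp default probability)
EV = portfolio_values.mean()              # expected value of the portfolio
P_p = np.quantile(portfolio_values, 1.0 - confidence)  # worst-case value P(p)

credit_var = EV - P_p                     # distance from the mean to the percentile
economic_capital = credit_var             # equivalent at a one-year horizon
print(f"EV = {EV:.2f}, P(p) = {P_p:.2f}, economic capital = {economic_capital:.2f}")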
The choice of a confidence level is generally associated with some target credit rating from a rating agency such as Moody's or Standard & Poor's. Most banks today are targeting an AA debt rating, which implies a probability of default of 3–5 basis points and thus corresponds to a confidence level in the range of 99.95–99.97%. This confidence level is also an expression of the 'risk appetite' of the bank.

While it may be reasonable to assume that changes in portfolio values are normally distributed when they are due to market risk, credit returns are by their nature highly skewed and fat-tailed. An improvement in credit quality brings limited 'upside' to an investor, while downgrades or defaults bring with them substantial 'downsides'. A typical credit portfolio distribution is shown in Figure III.B.5.3; it is far from normal. Therefore, contrary to market VaR, the percentile levels of the distribution cannot be estimated from the mean and variance only. The calculation of VaR for credit risk demands a simulation of the full distribution of the changes in the value of the portfolio.

Figure III.B.5.3: Comparison of the probability distributions of credit returns and market returns [typical credit returns are skewed, with a long left tail; typical market returns are approximately symmetric around the mean portfolio value; original plot not reproduced]

III.B.5.3 Credit Migration Framework

Credit migration is a methodology based on the estimation of the forward distribution of the changes in value of a portfolio of loan and bond-type products at a given time horizon, usually one year. The changes in value are related to the migration of the credit quality of the obligor, i.e. the process of moving up or down the credit spectrum, as well as to default. The only uncertainty in CreditMetrics relates to credit migration; market risk is ignored in this framework, as forward values and exposures are derived from deterministic forward curves of interest rates. [125]

[125] CreditMetrics' approach applies primarily to bonds and loans, which are both treated in the same manner. It can be easily extended to other types of financial claims, such as receivables and financial letters of credit, for which we can easily derive the forward value at the risk horizon for all credit ratings. For derivatives such as swaps or forwards the model needs to be somewhat adjusted or 'twisted', since there is no satisfactory way to derive the exposure and the loss distribution within the proposed framework (which assumes deterministic interest rates).

The CreditMetrics risk measurement framework consists of two main building blocks: VaR due to credit for a single financial instrument; and VaR at the portfolio level, which accounts for portfolio diversification effects.

The first step is to specify a rating system, with rating categories, together with the probabilities of migrating from one credit quality to another over the credit risk horizon. This transition matrix is the key component of the credit migration approach. The matrix may take the form of the historical migration frequencies published by an external rating agency such as Moody's or Standard & Poor's, or it may be based on the proprietary rating system internal to the bank, i.e. on the internal database of a bank. A strong assumption made by CreditMetrics is that all issuers within the same rating class are homogeneous credit risks: they have the same transition probabilities and the same default probability.

III.B.5.3.1 Credit VaR for a Single Bond/Loan

For a given bond in the portfolio, we estimate the distribution of changes in the bond value over a one-year period. First, the risk horizon should be specified; this is usually taken to be one year. Second, the rating categories, as well as the transition matrix, are chosen from an external rating system (such as the S&P transition matrix in Table III.B.5.1) or an internal rating system. The most probable event is that the bond will maintain its rating by the end of the year; for example, a BBB bond has a probability of 86.93% of retaining its BBB rating after a year (see Table III.B.5.1). Another migration event is that the bond will be downgraded by one notch; for example, a BBB bond has a probability of 5.30% of being downgraded to BB within the year. Therefore, we have to estimate the values the bond may take a year from now under all possible migration events. The third step consists of specifying the forward discount curve at the risk horizon for each credit category. For each credit migration event we use the relevant forward zero-coupon curve, estimated for the 'new' possible rating of the bond. These rates serve as discount factors by which the future cash flows from the bond are discounted, in order to find the value of the bond at the end of the year for the 'new' possible rating. In the case of default, the value of the instrument should be estimated in terms of the 'recovery rate', which is given as a percentage of face value or 'par'. In the final step, this information is translated into the forward distribution of the changes in the portfolio value following credit migration.
In the case of Standard & Poor's there are seven rating categories; the highest is AAA and the lowest is CCC. (It should be noted that the rating agencies also supply more granular statistics, in which each rating category is split into three subcategories, e.g. A+, A and A− for Standard & Poor's rating category A.) Default is defined as a situation in which the obligor cannot make a payment related to a bond or a loan obligation, whether the payment is a coupon payment or the redemption of the principal.

The bond issuer in our example currently has a BBB rating. The BBB row in Table III.B.5.1 shows the probability, as estimated by Standard & Poor's, that this BBB issuer will migrate over a period of one year to any one of the eight possible states, including default. Obviously, the most probable situation is that the obligor will remain in the same rating category, BBB; this has a probability of 86.93%. The probability of the issuer defaulting within one year is only 0.18%, while the probability of it being upgraded to AAA is also very small, 0.02%. Such a transition matrix is produced by the rating agencies for all initial ratings, based on the history of credit events that have occurred to the firms rated by those agencies. Moody's publishes similar information.

Table III.B.5.1: Transition matrix: probabilities of a credit rating migrating from one rating quality to another within one year (%)

Initial              Rating at year-end (%)
rating    AAA      AA       A      BBB      BB       B      CCC   Default
AAA      90.81    8.33    0.68    0.06    0.12    0.00    0.00    0.00
AA        0.70   90.65    7.79    0.64    0.06    0.14    0.02    0.00
A         0.09    2.27   91.05    5.52    0.74    0.26    0.01    0.06
BBB       0.02    0.33    5.95   86.93    5.30    1.17    0.12    0.18
BB        0.03    0.14    0.67    7.73   80.53    8.84    1.00    1.06
B         0.00    0.11    0.24    0.43    6.48   83.46    4.07    5.20
CCC       0.22    0.00    0.22    1.30    2.38   11.24   64.86   19.79

Source: Standard & Poor's CreditWeek (15 April 1996)

In the US, the transition probabilities published by the agencies are based on more than 20 years of data across all industries. Although ten or twenty years ago these two rating agencies concentrated on US companies, they now cover tens of thousands of companies around the world. Transition matrices for Japan, Europe and other regions are now becoming available. However, there are still some regions where historical default data are insufficient to estimate transition matrices. In such regions the KMV methodology, which does not rely on transition matrices, is often the method of choice. But even the US data should be interpreted with care, since they only represent average statistics across a heterogeneous sample of firms, and over several business cycles.
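The transition matrix is used in simulation by sampling a year-end rating for each obligor according to the probabilities in the row for its current rating. The short Python sketch below, which is illustrative rather than part of the CreditMetrics software, draws year-end ratings for a BBB issuer from the BBB row of Table III.B.5.1 and checks that the sampled frequencies match the row.

import numpy as np

# Sampling year-end ratings for a BBB issuer from Table III.B.5.1 (row in %).
states = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC", "Default"]
bbb_row = np.array([0.02, 0.33, 5.95, 86.93, 5.30, 1.17, 0.12, 0.18]) / 100

rng = np.random.default_rng(seed=1)
ratings = rng.choice(states, size=10_000, p=bbb_row)
for s in states:
    print(f"{s:>7}: {np.mean(ratings == s):7.2%}")  # close to the row probabilities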
Because these published statistics are only averages, many banks prefer to rely on their own statistics, which relate more closely to the composition of their loan and bond portfolios. The realised transition and default probabilities also vary quite substantially over the years, depending upon whether the economy is in recession or is expanding (see Section III.B.5.4). When implementing a model that relies on transition probabilities, one may therefore have to adjust the average historical values shown in Table III.B.5.1, to be consistent with one's assessment of the current economic environment. A study provided by Moody's (Carty and Lieberman, 1996) gives some idea of the variability of default rates over time. Historical default statistics (mean and standard deviation) by rating category, for the population of obligors that Moody's rated during the period 1970–1995, are shown in Table III.B.5.2. Clearly the default rates become more volatile as credit quality deteriorates. Thus one should expect the elements of the transition matrix corresponding to low-grade issuers to change considerably over time, whilst transition probabilities for high-grade issuers are unlikely to change much.

Table III.B.5.2: One-year default rates by rating, 1970–1995

                     One-year default rate
Credit rating   Average (%)   Standard deviation (%)
Aaa                0.00            0.0
Aa                 0.03            0.1
A                  0.01            0.0
Baa                0.13            0.3
Ba                 1.42            1.3
B                  7.62            5.1

Source: Carty and Lieberman (1996)

Now consider the valuation of a bond. This is derived from the zero curve corresponding to the rating of the issuer. Since there are seven possible credit qualities, seven 'spread' curves are required to price the bond in all possible states (Table III.B.5.3). All obligors within the same rating class are then marked to market using the same curve. The spot zero curve is used to determine the current spot value of the bond. The forward price of the bond one year from the present is derived from the forward zero curve, one year ahead, which is then applied to the residual cash flows from year 1 to the maturity of the bond.

Table III.B.5.3: One-year forward zero curves for each credit rating (%)

Category   Year 1   Year 2   Year 3   Year 4
AAA         3.60     4.17     4.73     5.12
AA          3.65     4.22     4.78     5.17
A           3.72     4.32     4.93     5.32
BBB         4.10     4.67     5.25     5.63
BB          5.55     6.02     6.78     7.27
B           6.05     7.02     8.03     8.52
CCC        15.05    15.02    14.03    13.52

Source: CreditMetrics, J.P. Morgan

From Chapter I.B.2 we know that the one-year forward price, V_BBB, of the five-year 6% coupon bond, if the obligor remains rated BBB, is [126]

V_BBB = 6 + 6/1.0410 + 6/(1.0467)^2 + 6/(1.0525)^3 + 106/(1.0563)^4 = 107.53,

where the discount rates are taken from Table III.B.5.3. The cash flows are shown in Figure III.B.5.4.

[126] CreditMetrics calculates the forward value of the bonds, or loans, including compounded coupons paid out during the year.

Figure III.B.5.4: Cash flows for the five-year 6% coupon bond [coupons of 6 at the end of each of years 1 to 4 and 106 at the end of year 5; forward price at the one-year horizon V_BBB = 107.53]

If we replicate the same calculations for each rating category we obtain the values shown in Table III.B.5.4.

Table III.B.5.4: One-year forward values for a BBB bond

Year-end rating   Value ($)
AAA                109.35
AA                 109.17
A                  108.64
BBB                107.53
BB                 102.00
B                   98.08
CCC                 83.62
Default             51.11

Source: CreditMetrics, J.P. Morgan

We do not assume that everything is lost if the issuer defaults at the end of the year. Depending on the seniority of the instrument, a recovery rate of par value is recuperated by the investor. These recovery rates are estimated from historical data by the rating agencies. Table III.B.5.5 shows the expected recovery rates for bonds by different seniority classes as estimated by Moody's; see also Altman and Kishore (1996, 1998) for similar statistics. In the simulations performed to assess the portfolio distribution, the recovery rates are not taken as fixed, but rather are drawn from a distribution of possible recovery rates. [127]

[127] Cf. Carty and Lieberman (1996).

Table III.B.5.5: Recovery rates by seniority class (% of face value, i.e. 'par')

Seniority class        Mean (%)   Standard deviation (%)
Senior Secured          53.80          26.86
Senior Unsecured        51.13          25.45
Senior Subordinated     38.52          23.81
Subordinated            32.74          20.18
Junior Subordinated     17.09          10.90

Source: Carty and Lieberman (1996)

The distribution of the changes in the bond value at the one-year horizon, due to an eventual change in credit quality, is shown in Table III.B.5.6 and Figure III.B.5.5.

Table III.B.5.6: Distribution of the bond values, and changes in value of a BBB bond, in one year

Year-end rating   Probability of state p (%)   Forward price V ($)   Change in value ΔV ($)
AAA                       0.02                     109.35                  +1.82
AA                        0.33                     109.17                  +1.64
A                         5.95                     108.64                  +1.11
BBB                      86.93                     107.53                   0.00
BB                        5.30                     102.00                  −5.53
B                         1.17                      98.08                  −9.45
CCC                       0.12                      83.62                 −23.91
Default                   0.18                      51.11                 −56.42

Source: CreditMetrics, J.P. Morgan

Figure III.B.5.5: Histogram of the one-year forward prices and changes in value of a BBB bond [probability mass concentrated at the unchanged BBB value, with a long left tail extending to the default value; original plot not reproduced]

This distribution exhibits a long 'downside tail'. The first percentile of the distribution of ΔV, which corresponds to credit VaR at the 99% confidence level, is −23.91. This is a much larger loss than if we had computed the first percentile assuming a normal distribution for ΔV: in that case credit VaR at the 99% confidence level would be only −7.43. [128]

[128] The mean, m, and the variance, s^2, of the distribution of ΔV can be calculated from the data in Table III.B.5.6 as follows:

m = Σ_i p_i ΔV_i = 0.02% × 1.82 + 0.33% × 1.64 + ... + 0.18% × (−56.42) = −0.46,
s^2 = Σ_i p_i (ΔV_i − m)^2 = 0.02% × (1.82 + 0.46)^2 + ... + 0.18% × (−56.42 + 0.46)^2 = 8.95, i.e. s = 2.99.

The 0.01 percentile of a normal distribution N(m, s^2) is m − 2.33s = −0.46 − 2.33 × 2.99 = −7.43.

The above analysis is the basis for the evaluation of the portfolio of loans.
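The single-bond calculation can be reproduced in a few lines. The Python sketch below recomputes the forward value of the BBB bond from the forward zero curve in Table III.B.5.3, and then the mean and standard deviation of ΔV from Table III.B.5.6; it is a minimal illustration of the arithmetic, not the CreditMetrics implementation.

import numpy as np

# Forward value at the one-year horizon if the bond remains BBB.
coupon, face = 6.0, 100.0
fwd_bbb = [0.0410, 0.0467, 0.0525, 0.0563]   # BBB forward zero curve, years 1-4

v_bbb = coupon + sum(coupon / (1 + r) ** t for t, r in zip(range(1, 4), fwd_bbb[:3]))
v_bbb += (face + coupon) / (1 + fwd_bbb[3]) ** 4
print(f"forward value if still BBB: {v_bbb:.2f}")        # ~107.5

# Probabilities (%) and changes in value from Table III.B.5.6.
p = np.array([0.02, 0.33, 5.95, 86.93, 5.30, 1.17, 0.12, 0.18]) / 100
dV = np.array([1.82, 1.64, 1.11, 0.00, -5.53, -9.45, -23.91, -56.42])

mean = (p * dV).sum()
sigma = np.sqrt((p * (dV - mean) ** 2).sum())
print(f"mean = {mean:.2f}, sigma = {sigma:.2f}")          # ~ -0.46 and ~ 2.99
# Normal approximation: mean - 2.33*sigma ~ -7.43, far smaller in magnitude
# than the -23.91 first percentile of the true, skewed distribution.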
III.B.5.3.2 Estimation of Default and Rating Changes Correlations

So far we have shown how the future distribution of values for a given bond (or loan) is derived, when the changes are due to credit risk only. In what follows we focus on how to estimate the potential changes in the value of a portfolio of credits, along the lines described above. One important factor in the portfolio assessment is the correlation between changes in the credit ratings, and the default correlation, for any two obligors. In reality, the correlations between changes in credit quality are not zero, and the overall credit VaR is quite sensitive to these correlations. Their accurate estimation is therefore one of the key determinants of portfolio optimisation. Default correlations might be expected to be higher for firms within the same industry, or in the same region, than for firms in unrelated sectors. In addition, correlations vary with the relative state of the economy in the business cycle. If there is a slowdown in the economy, or a recession, most of the assets of the obligors will decline in value and quality, and the likelihood of multiple defaults increases substantially. The opposite happens when the economy is performing well: default correlations go down. Thus, we cannot expect default and migration probabilities to remain stationary over time, and there is clearly a need for a structural model that relates changes in default probabilities to fundamental variables.

CreditMetrics derives the default and migration probabilities from a correlation model of the firm's assets. As the true asset value is not directly observable, CreditMetrics makes use of the stock price of a firm as a proxy for its asset value. (This is another simplifying assumption in CreditMetrics that may affect the accuracy of the approach.) CreditMetrics estimates the correlations between the equity returns of various obligors, and then infers the correlations between changes in credit quality directly from the joint distribution of these equity returns.

The theoretical framework underlying all this is the option pricing approach to the valuation of corporate securities first developed by Merton (1974). The model is described in detail in Section III.B.5.5, as it also forms the basis for the KMV approach. In Merton's model, the firm is assumed to have a very simple capital structure: it is financed by equity, S_t, and a single zero-coupon debt instrument maturing at time T, with face value F and current market value B_t. The firm's balance sheet is represented in Table III.B.5.7, where V_t is the value of all the assets and V_t = B_t(F) + S_t.

Table III.B.5.7: Balance sheet of Merton's firm

Assets              Liabilities / Equity
Risky assets: V_t   Debt: B_t(F)
                    Equity: S_t
Total: V_t          V_t

Figure III.B.5.6 shows the distribution of the assets' value at time T, the maturity of the zero-coupon debt, and the probability of default, i.e. the shaded area on the left-hand side of the default point, F. In this framework, default occurs at the maturity of the debt obligation only when the value of the assets is less than the payment, F, promised to the bondholders.

Figure III.B.5.6: Distribution of the firm's assets value at maturity of the debt obligation [density of V_T with the default point F marked; the probability of default is the area to the left of F; original plot not reproduced]

Merton's model is extended by CreditMetrics to include changes in credit quality, as illustrated in Figure III.B.5.7. This generalisation consists of slicing the distribution of asset returns into bands in such a way that, if we draw randomly from this distribution, we reproduce exactly the migration frequencies shown in the transition matrices that we discussed earlier. Because of the large number of factors that must be taken into consideration, CreditMetrics proposes to use Monte Carlo simulations to assess the credit risk of a bond/loan portfolio (see Section III.B.5.3.3).
Figure III.B.5.7 shows the distribution of the normalised asset rates of return, one year ahead, for a BB-rated obligor. The generalised model assumes that the normalised log-returns over any period of time are normally distributed, with a mean of 0 and a variance of 1, and that the distribution is the same for all obligors within the same rating category. The credit rating 'thresholds' are calculated using the transition probabilities in Table III.B.5.1 for a BB-rated obligor. The area in the left-hand tail of the distribution, to the left of Z_CCC, corresponds to the probability of default, i.e. 1.06%; the area between Z_CCC and Z_B corresponds to the probability of being downgraded to CCC, and so on up the rating scale; and the area in the right-hand tail of the distribution, above Z_AAA, corresponds to the probability that the obligor will be upgraded from BB to AAA, i.e. 0.03%.

Figure III.B.5.7: Generalisation of the Merton model to include rating changes [standard normal distribution of asset returns for a BB-rated firm, sliced at the thresholds Z_CCC = −2.30, Z_B = −2.04, Z_BB = −1.23, Z_BBB = 1.37, Z_A = 2.39, Z_AA = 2.93 and Z_AAA = 3.43, giving band probabilities Default 1.06%, CCC 1.00%, B 8.84%, BB 80.53% (firm remains BB), BBB 7.73%, A 0.67%, AA 0.14%, AAA 0.03%]

If p_Def denotes the probability of the BB-rated obligor defaulting, then the critical asset value V_Def is such that

p_Def = Pr(V_t ≤ V_Def),

which can be translated into a normalised threshold Z_CCC such that the area in the left-hand tail below Z_CCC is p_Def. [129] Z_CCC is simply the threshold point in the standard normal distribution, N(0, 1), corresponding to a cumulative probability of p_Def. Accordingly, Z_B is the threshold point corresponding to the cumulative probability of being either in default or in rating CCC, i.e. p_Def + p_CCC, and so on. The thresholds are given in terms of normalised standard deviations; for example, for a BB-rated obligor the default threshold is −2.30 standard deviations from the mean rate of return. Then, based on the option pricing model, the critical asset value V_Def which triggers default is such that Z_CCC = −d_2. [130] In the next section we define the 'distance to default' as the distance between the expected asset value and the default point. This critical asset value V_Def is also called the default point. This generalisation of Merton's model is quite easy to implement.

[129] See the Appendix for the derivation of the proof.
[130] Note that d_2 is different from its equivalent in the Black–Scholes formula since, here, we work with the 'actual' instead of the 'risk-neutral' return distributions, so that the drift term in d_2 is the expected return on the firm's assets, instead of the risk-free interest rate as in Black–Scholes. See Chapter I.A.8 for the definition of d_2 in Black–Scholes, and Appendix 1 for d_2 in the above derivation.
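The thresholds in Figure III.B.5.7 are obtained by inverting the standard normal distribution at the cumulative migration probabilities. The following Python sketch (illustrative, using the BB row of Table III.B.5.1) recovers them with scipy's inverse normal CDF.

from scipy.stats import norm

# Recover the Z thresholds of Figure III.B.5.7 from the BB migration
# probabilities by inverting the standard normal CDF at cumulative levels.
probs = {  # one-year migration probabilities (%) for a BB obligor
    "Default": 1.06, "CCC": 1.00, "B": 8.84, "BB": 80.53,
    "BBB": 7.73, "A": 0.67, "AA": 0.14, "AAA": 0.03,
}
cum = 0.0
for rating in ["Default", "CCC", "B", "BB", "BBB", "A", "AA"]:
    cum += probs[rating] / 100
    print(f"Z threshold above {rating:>7}: {norm.ppf(cum):+.2f}")
# Prints Z_CCC ~ -2.30 (above Default), Z_B ~ -2.04, Z_BB ~ -1.23,
# Z_BBB ~ +1.37, Z_A ~ +2.39, Z_AA ~ +2.93 and Z_AAA ~ +3.43.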
Table III.B.5.8 shows the transition probabilities for two obligors, rated BB and A respectively, and the corresponding credit quality thresholds. Note that only the threshold levels are necessary to derive the joint migration probabilities, and these can be calculated without it being necessary to observe the asset value: to derive the critical asset value V_Def we only need to estimate the expected asset return and the asset volatility. We mentioned above that, as asset returns are not directly observable, CreditMetrics makes use of equity returns as their proxy. Yet using equity returns in this way is equivalent to assuming that all the firm's activities are financed by means of equity. This is a major drawback of the approach, especially when it is applied to highly leveraged companies: for those companies, equity returns are substantially more volatile, and possibly less stationary, than the returns on the firm's assets.

Table III.B.5.8: Transition probabilities and credit quality thresholds for BB- and A-rated obligors

Rating in     A-rated obligor               BB-rated obligor
one year      Prob (%)   Threshold (σ)      Prob (%)   Threshold (σ)
AAA             0.09        3.12              0.03        3.43
AA              2.27        1.98              0.14        2.93
A              91.05       −1.51              0.67        2.39
BBB             5.52       −2.30              7.73        1.37
BB              0.74       −2.72             80.53       −1.23
B               0.26       −3.19              8.84       −2.04
CCC             0.01       −3.24              1.00       −2.30
Default         0.06                          1.06

(The threshold shown in each row is the lower boundary of the asset-return band corresponding to that rating.)

Now, assume that the correlation between the assets' rates of return is known, and is denoted by ρ, which in our example is assumed to be equal to 0.2. The normalised log-returns on both assets follow a joint normal distribution:

f(r_BB, r_A; ρ) = [1 / (2π √(1 − ρ^2))] exp{ −[r_BB^2 − 2ρ r_BB r_A + r_A^2] / [2(1 − ρ^2)] },

where r_BB and r_A are the rates of return on the assets of obligors BB and A, respectively. We can therefore compute the probability of both obligors being in any particular combination of ratings. For example, we can compute the probability that they will both remain in the same rating classes, i.e.

Pr[−1.23 < r_BB < 1.37, −1.51 < r_A < 1.98] = 0.7365,

where the thresholds are taken from Table III.B.5.8. For any two obligors, the joint probability of both obligors defaulting is

p_{1,2} = Pr[V_1 ≤ V_Def1, V_2 ≤ V_Def2],

where V_1 and V_2 denote the asset values of the two obligors at time t, and V_Def1 and V_Def2 are the corresponding default points. This joint probability may be calculated in exactly the same way as the migration probabilities, using the bivariate normal distribution. [131]

[131] See Chapter II.E for details on how to compute joint probabilities when the two random variables have a bivariate normal distribution.
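These bivariate normal probabilities can be evaluated numerically. The Python sketch below uses scipy to integrate the bivariate normal density over the relevant rectangles, reproducing the 0.7365 joint 'no migration' probability and the joint default probability used in the numerical example that follows.

from scipy.stats import multivariate_normal

# Bivariate normal with asset return correlation 0.2 for the (A, BB) pair.
rho = 0.2
mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def rect_prob(lo, hi):
    """P(lo1 < X1 < hi1, lo2 < X2 < hi2) via inclusion-exclusion on the CDF."""
    (l1, l2), (h1, h2) = lo, hi
    return (mvn.cdf([h1, h2]) - mvn.cdf([l1, h2])
            - mvn.cdf([h1, l2]) + mvn.cdf([l1, l2]))

# A stays A: -1.51 < r_A < 1.98; BB stays BB: -1.23 < r_BB < 1.37.
print(f"both keep their ratings: {rect_prob((-1.51, -1.23), (1.98, 1.37)):.4f}")

# Both default: r_A < -3.24 and r_BB < -2.30 (thresholds from Table III.B.5.8).
print(f"joint default probability: {mvn.cdf([-3.24, -2.30]):.6f}")  # ~0.000054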
We can illustrate the results with a numerical example. If the probabilities of default for obligors rated A and BB are P_Def(A) = 0.0006 and P_Def(BB) = 0.0106, respectively, and the correlation coefficient between the rates of return on the two assets is ρ = 0.2, then the joint probability of default is p_{1,2} = 0.000054. [133] Given this joint probability of default, the default correlation can be calculated, using equation (III.B.5.1), as

corr(Def1, Def2) = (p_{1,2} − p_1 p_2) / √[p_1(1 − p_1) p_2(1 − p_2)],     (III.B.5.1)

and we find that the correlation coefficient between the two default events is only 0.019. [132] This example, with an asset return correlation of 0.2 but a default correlation of only 0.019, is not unusual: asset return correlations are approximately 10 times larger than default correlations for asset correlations in the range of 0.2–0.6. This shows that the joint probability of default is in fact quite sensitive to pairwise asset return correlations, and it illustrates how important it is to estimate these data correctly if one is to assess the diversification effect within a portfolio. It can be shown that the impact of correlations on credit VaR is quite large, and that it is larger for portfolios of relatively low credit quality than for high-grade portfolios. Indeed, as the credit quality of the portfolio deteriorates and the expected number of defaults increases, the effect is magnified by an increase in default correlations.

[132] See Lucas (1995).
[133] If the default events were independent, the joint probability of default would simply be the product of the two default probabilities, 0.0006 × 0.0106 = 0.0000064.

III.B.5.3.3 Credit VaR of a Bond/Loan Portfolio

The analytic approach that we sketched out above for a portfolio with bonds issued by two obligors is not practicable for large portfolios. Instead, CreditMetrics implements a Monte Carlo simulation to generate the full distribution of the portfolio values at the credit horizon of one year. The following steps are necessary:

1. Derive the asset return thresholds for each rating category.
2. Estimate the correlation between each pair of obligors' asset returns.
3. Generate return scenarios according to their joint normal distribution. A standard technique that is often used to generate correlated normal variables is the Cholesky decomposition (see the simulation sketch at the end of this subsection). [134] Each scenario is characterised by n standardised asset returns, one for each of the n obligors in the portfolio.
4. For each scenario, and for each obligor, map the standardised asset return into the corresponding rating, according to the threshold levels derived in step 1.
5. Given the spread curves, which apply for each rating, revalue the portfolio.
6. Repeat the procedure a large number of times, say 100,000 times, and plot the distribution of the portfolio values to obtain a graph such as Figure III.B.5.2.
7. Finally, derive the percentiles of the distribution of the future values of the portfolio, to obtain the credit VaR and/or the credit economic capital as in Figure III.B.5.2.

[134] A good reference on Monte Carlo simulations and the Cholesky decomposition is Fishman (1997, p. 223).

Estimating VaR for credit requires a very large number of simulations, as the loss distribution is very skewed, with very few observations in the tail. In order to reduce substantially the number of simulations, say by a factor of 5–10, while maintaining the same level of accuracy, it is recommended for practical applications to implement credit portfolio models with the use of variance reduction techniques. 'Importance sampling' is a technique which produces remarkable results for credit risk (Glasserman et al., 2000).
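The following Python sketch illustrates steps 1–3 for the two-obligor example: correlated standard normal asset returns are generated with a Cholesky factor and mapped to default when they fall below the obligor's default threshold. As a by-product it re-estimates, by simulation, the joint default probability and the default correlation of equation (III.B.5.1). It is a minimal illustration, not the CreditMetrics implementation.

import numpy as np

# Correlated asset returns via a Cholesky factor of the correlation matrix.
rho = 0.2
n_scenarios = 2_000_000
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

rng = np.random.default_rng(seed=7)
z = rng.standard_normal((n_scenarios, 2)) @ L.T   # each row: (r_A, r_BB)

z_def_A, z_def_BB = -3.24, -2.30                  # default thresholds, Table III.B.5.8
def_A = z[:, 0] < z_def_A
def_BB = z[:, 1] < z_def_BB

p1, p2 = def_A.mean(), def_BB.mean()
p12 = (def_A & def_BB).mean()
corr = (p12 - p1 * p2) / np.sqrt(p1 * (1 - p1) * p2 * (1 - p2))
print(f"p1 ~ {p1:.4%}, p2 ~ {p2:.4%}, joint ~ {p12:.6f}, default corr ~ {corr:.3f}")
# Expect roughly p1 ~ 0.06%, p2 ~ 1.06%, joint ~ 0.000054 and corr ~ 0.02.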
III.B.5.4 Conditional Transition Probabilities: CreditPortfolioView

CreditPortfolioView is a multi-factor model that is used to simulate the joint conditional distribution of default and migration probabilities for various rating groups in different industries, and for each country, conditional on the value of macro-economic factors. It is based on the observation that default probabilities and credit migration probabilities are linked to the economy: credit cycles follow business cycles closely. When the economy worsens, both downgrades and defaults increase; when the economy becomes stronger, the contrary holds true. In other words, since the shape of the economy is, to a large extent, driven by macro-economic factors, CreditPortfolioView proposes a methodology to link those macro-economic factors to default and migration probabilities. It employs the values of macro-economic factors such as the unemployment rate, the rate of growth in GDP, the level of long-term interest rates, foreign exchange rates, government expenditures and the aggregate savings rate. Provided that data are available, this methodology can be applied in each country, to various sectors and various classes of obligors that react differently during the business cycle, in sectors such as construction, financial institutions, agriculture and services. It applies better to speculative-grade obligors, whose default probabilities vary substantially with the credit cycle, than to investment-grade obligors, whose default probabilities are more stable.

Conditional default probabilities are modelled as a logit function, whereby the independent variable is a country-specific index that depends upon current and lagged macro-economic variables. That is,

P_{j,t} = 1 / (1 + exp(Y_{j,t})),     (III.B.5.2)

where P_{j,t} is the conditional probability of default in period t for speculative-grade obligors in country/industry j, and Y_{j,t} is the country index value derived from a multi-factor model. [135] [136]

[135] Note that the logit function ensures that the probability takes a value between 0 and 1.
[136] J.P. Morgan (1997) provides an example of a multi-factor model in the context of credit risk modelling. For a review of multi-factor models, see Elton and Gruber (1995) and Rudd and Clasing (1988).

In order to derive the conditional transition matrix, the (unconditional) transition matrix based on Moody's or Standard & Poor's historical data is used. These transition probabilities are unconditional in the sense that they are historical averages based on more than 20 years of data covering several business cycles, across many different countries and industries. As we discussed earlier, default probabilities for non-investment-grade obligors are higher than average during a period of recession, and credit downgrades also increase while upward migrations decrease; the opposite holds during a period of economic expansion. CreditPortfolioView proposes to use (III.B.5.2) to adjust the unconditional transition probabilities in order to produce a transition matrix M_t that is conditional on the state of the economy:

M_t = M(P_{j,t} / P̄_{j,t}),     (III.B.5.3)

where P̄_{j,t} is the unconditional (historical average) probability of default in period t for speculative-grade obligors in country/industry j. The adjustment consists of shifting probability mass toward the downgraded and defaulted states when the ratio P_{j,t}/P̄_{j,t} is greater than one, i.e. in an economic recession, and in the opposite direction when the ratio is less than one, i.e. in an economic expansion. Since one can simulate P_{j,t} over any time horizon t = 1, ..., T, this approach can also generate multi-period conditional transition matrices:

M(t = 1, ..., T) = ∏_{t=1}^{T} M(P_{j,t} / P̄_{j,t}).
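The logit link of equation (III.B.5.2) is easy to illustrate. In the Python sketch below, the index weights and macro-variable values are invented for illustration only; they are not CreditPortfolioView's calibrated parameters.

import numpy as np

# Equation (III.B.5.2): a logit link from a macroeconomic index Y to a
# conditional default probability. Weights and inputs are invented.
def conditional_pd(y_index: float) -> float:
    """The logit transform keeps the probability between 0 and 1."""
    return 1.0 / (1.0 + np.exp(y_index))

beta = np.array([4.0, -20.0, 40.0])           # intercept, unemployment, GDP growth
x_recession = np.array([1.0, 0.085, -0.01])   # 8.5% unemployment, -1% growth
x_expansion = np.array([1.0, 0.050, 0.03])    # 5.0% unemployment, +3% growth

for name, x in [("recession", x_recession), ("expansion", x_expansion)]:
    print(f"{name}: conditional speculative-grade PD = {conditional_pd(beta @ x):.2%}")
# Roughly 13% in the recession scenario and 1.5% in the expansion scenario,
# so the ratio P/P-bar shifts the transition matrix as described above.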
One can simulate the conditional transition matrix (III.B.5.3) many times to generate a distribution of the cumulative conditional default probability, such as that shown in Figure III.B.5.8, for any rating over any time horizon. The same Monte Carlo methodology can be used to produce the conditional cumulative distributions of migration probabilities over any time horizon.

Figure III.B.5.8: Distribution of the cumulative conditional default probability, for a given rating, over a given time horizon T [frequency distribution (%) of the cumulative default probability, with the expected (average) default probability and the 99th percentile marked; original plot not reproduced]

CreditPortfolioView and KMV (described in Section III.B.5.6) base their approaches on the empirical observation that default and migration probabilities vary over time. CreditPortfolioView proposes a methodology that links macro-economic factors to default and migration probabilities, whereas KMV adopts a micro-economic approach that relates the probability of default of any obligor to the market value of its assets. These two approaches are somewhat related, since the market value of a firm's assets depends on the shape of the economy, and it would be interesting to compare the transition matrices produced by the two models.

The calibration of CreditPortfolioView requires reliable default data for each country, and possibly for each industry sector within each country. Another limitation of the model is the ad hoc adjustment of the transition matrix. It is not clear that the proposed methodology performs better than a simple Bayesian model, where the revision of the transition probabilities would be based on the internal expertise accumulated by the credit department of the bank and on an internal appreciation of the current stage of the credit cycle (given the quality of the bank's credit portfolio).

III.B.5.5 The Contingent Claim Approach to Measuring Credit Risk

The CreditMetrics approach to measuring credit risk, as described previously, is rather appealing as a methodology. Unfortunately it has a major weakness: reliance on ratings transition probabilities that are based on average historical frequencies of defaults and credit migration. As a result, the accuracy of CreditMetrics calculations depends upon two critical assumptions: first, that all firms within the same rating class have the same default rate and the same spread curve, even when recovery rates differ among obligors; and second, that the actual default rate is equal to the historical average default rate. Credit rating changes and credit quality changes are taken to be identical, and credit rating and default rates are also treated as synonymous, i.e. the rating changes when the default rate is adjusted, and vice versa. In reality, this assumption cannot be true, since we know that default rates evolve continuously, while ratings are adjusted in a discrete fashion. (This lag arises because rating agencies necessarily take time to upgrade or downgrade companies whose default risk has changed.) This view has been strongly challenged by researchers working for the consulting and software corporation KMV. [137]

[137] KMV is a trademark of KMV Corporation. The initials KMV stand for the surnames of Stephen Kealhofer, John McQuown and Oldrich Vasicek, who founded KMV Corporation in 1989. Kealhofer and Vasicek are former academics from the University of California at Berkeley.
What we call the 'structural' approach offers an alternative to the credit migration approach. Here, the economic value of default is presented as a put option on the value of the firm's assets. The merit of this approach is that each firm can be analysed individually, based on its unique features, and that the loss rate is endogenously determined, depending on the firm's asset value. But this is also the principal drawback, since the information required for such an analysis is rarely available to the bank or the investor.

III.B.5.5.1 Structural Model of Default Risk: Merton's (1974) Model

The option pricing approach, introduced by Merton (1974) in a seminal paper, builds on the limited liability rule, which allows shareholders to default on their obligations while surrendering the firm's assets to the various stakeholders, according to pre-specified priority rules. The firm's liabilities are thus viewed as contingent claims issued against the firm's assets, with the payoffs to the various debt-holders completely specified by seniority and safety covenants. Default occurs at debt maturity whenever the firm's asset value falls short of the debt value at that time.

To determine the value of the credit risk arising from a bank loan, we first make two assumptions: that the loan is the only debt instrument of the firm, and that the only other source of financing is equity. In this case, the credit risk value is equal to the value of a put option on the value of the firm's assets, at a strike price equal to the face value of the debt (including accrued interest), maturing at the maturity of the debt. By purchasing such a put on the assets of the firm for the term of the debt, with a strike price equal to the face value of the loan, the bank can completely eliminate the credit risk and convert the risky corporate loan into a riskless loan.

Thus, the cost of eliminating the credit risk associated with providing a loan to the firm is the value of this put option. From the viewpoint of the bank, this gives rise to a series of questions. Can the bank eliminate or reduce credit risk, and at what price? What is the economic cost of reducing credit risk? And what are the factors affecting this cost? In this simple framework, credit risk is a function of the financial structure of the firm: its leverage ratio L = Fe^{−rT}/V_0, where V_0 is the present value of the firm's assets, F is the face value of the debt and Fe^{−rT} is the present value of the debt obligation at maturity; the volatility σ of the firm's assets; and the time T to maturity of the debt. The model was initially suggested by Merton (1974) and further analysed by Galai and Masulis (1976).

Consider a firm with risky assets V, financed by equity, S, and by one debt obligation maturing at time T, with face value (including accrued interest) F and market value B. If we assume that markets are frictionless, with no taxes and no bankruptcy cost, then the value of the firm's assets is simply the sum of the firm's equity and debt. At time t = 0, then,

V_0 = S_0 + B_0.     (III.B.5.4)

The loan to the firm is subject to credit risk, namely the risk that at time T the value of the firm's assets, V_T, will be below the obligation to the debt holders, F. Credit risk exists as long as the probability of default, Pr(V_T < F), is greater than zero. Note that, when the probability of default is greater than zero, the yield to maturity on the debt, y_T, must be greater than the risk-free rate r, so that the default spread π_T = y_T − r that compensates the bond holders for the default risk they bear is positive.

To understand why the credit risk value is equal to the value of a put option on the value of the assets of the firm, with a strike price equal to the face value of the debt and maturity equal to the maturity of the debt, consider Table III.B.5.9, which shows the bank's payoff at times 0 and T when it makes the loan and buys such a put.
Table III.B.5.9: Bank's payoff at times 0 and T for making a loan and buying a put option

                                      Time T
Bank's position        Time 0       V_T ≤ F       V_T > F
(a) make a loan        −B_0          V_T            F
(b) buy a put          −P_0          F − V_T        0
Total                  −B_0 − P_0    F              F

Thus, whether V_T ≤ F or V_T > F, the value of the bank's position at time T is F, so credit risk is eliminated, and the value of the put option is the cost of eliminating the credit risk associated with providing a loan to the firm.

If we make the assumptions that are needed to apply the Black–Scholes (BS) model (Black and Scholes, 1973) to equity and debt instruments (see Galai and Masulis, 1976, for a detailed discussion of the assumptions), we can write the value of the put as

P_0 = −N(−d_1)V_0 + Fe^{−rT}N(−d_2),     (III.B.5.5)

where P_0 is the current value of the put, N(.) is the cumulative standard normal distribution,

d_1 = [ln(V_0/F) + (r + σ^2/2)T] / (σ√T) = ln(V_0/Fe^{−rT})/(σ√T) + σ√T/2,   d_2 = d_1 − σ√T,     (III.B.5.6)

and σ is the standard deviation of the rate of return of the firm's assets.

The model illustrates that the credit risk, and its cost, is an increasing function of the volatility of the assets of the firm and of the time interval T until the debt is paid back, and a decreasing function of the risk-free interest rate r (the higher is r, the less costly it is to reduce credit risk). The cost of credit risk is also a homogeneous function of the leverage ratio L = Fe^{−rT}/V_0, which means that it stays constant for a scale expansion of Fe^{−rT}/V_0.

The default spread can be regarded as a risk premium associated with holding risky bonds. In the Merton (1974) framework, the default spread can be computed exactly as a function of the leverage ratio, the volatility of the underlying assets and the debt maturity. In fact,

π_T = y_T − r = −(1/T) ln[ N(d_2) + N(−d_1) V_0/(Fe^{−rT}) ].

Note that the default spread decreases when the risk-free rate increases: the greater the risk-free rate, the less risky is the bond and the lower is the value of the put protection, and therefore the lower is the risk premium. The numerical examples in Table III.B.5.10 show the default spread for various levels of volatility and different leverage ratios.
Table III.B.5.10: Default spread for corporate debt (for V_0 = 100, T = 1, and r = 10% [138])

                       Volatility of underlying asset: σ
Leverage ratio: L     0.05     0.10     0.20     0.40
0.5                   0        0        0         1.0%
0.6                   0        0        0.1%      2.5%
0.7                   0        0        0.4%      5.6%
0.8                   0        0.1%     1.5%      8.4%
0.9                   0.1%     0.8%     4.1%     12.5%
1.0                   2.1%     3.1%     8.3%     17.3%

[138] 10% is the annualised interest rate discretely compounded, which is equivalent to 9.5% continuously compounded.

Example III.B.5.1

We show how the 5.6% default spread in Table III.B.5.10 (for L = 0.7 and σ = 0.4) was obtained. Let the face value (i.e. the principal amount plus the promised interest rate) of the one-year debt be F = 77, with V_0 = 100 and T = 1, so that the leverage ratio is L = Fe^{−rT}/V_0 = 70%. Using equations (III.B.5.5) and (III.B.5.6) with σ = 0.4 and r = 0.0953 (10% discretely compounded), we obtain S_0 = 33.37 for the value of equity and B_0 = 66.63 for the value of the corporate risky debt. The model also shows that the put value is P_0 = 3.37 for $100 worth of the firm's assets. Hence the cost of eliminating the credit risk is $3.37. This cost drops to 25 cents when the volatility decreases to 20%, and to zero for 10% volatility.

The yield on the loan is 77/66.63 − 1 = 0.156, i.e. there is a 5.6% risk premium over the riskless yield of 10% per annum to reflect the credit risk. To demonstrate that the bank eliminates all its credit risk by buying the put, we can compute the yield on the bank's hedged position as F/(B_0 + P_0) = 77/(66.63 + 3.37) = 1.10, which is exactly the riskless yield of 10%.

The assets' volatility is clearly a critical factor in determining credit risk.
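The numbers in Example III.B.5.1 can be checked with a few lines of Python. The sketch below implements equations (III.B.5.5) and (III.B.5.6); note that the spread formula returns the continuously compounded spread, whereas the 5.6% in Table III.B.5.10 is the discretely compounded spread, so both are printed.

from math import exp, log, sqrt
from scipy.stats import norm

# Equations (III.B.5.5)-(III.B.5.6): cost of credit risk and default spread.
def merton_credit_risk(V0, F, sigma, r, T):
    d1 = log(V0 / (F * exp(-r * T))) / (sigma * sqrt(T)) + sigma * sqrt(T) / 2
    d2 = d1 - sigma * sqrt(T)
    put = F * exp(-r * T) * norm.cdf(-d2) - V0 * norm.cdf(-d1)
    spread = -log(norm.cdf(d2) + norm.cdf(-d1) * V0 / (F * exp(-r * T))) / T
    return put, spread

r = log(1.10)   # 10% discretely compounded ~ 9.53% continuously compounded
put, spread = merton_credit_risk(V0=100.0, F=77.0, sigma=0.4, r=r, T=1.0)
B0 = 77.0 * exp(-r) - put                       # value of the risky loan, ~66.63
print(f"P0 = {put:.2f}")                        # ~3.37
print(f"continuous spread = {spread:.2%}")      # ~4.9%
print(f"discrete spread   = {77.0 / B0 - 1 - 0.10:.1%}")   # ~5.6%, as in the table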
III.B.5.5.2 Estimating Credit Risk as a Function of Equity Value

We have already shown that the cost of eliminating credit risk can be derived from the value of the firm's assets. A practical problem arises over how easy it is to observe V. In some cases, if both equity and debt are traded, V can be reconstructed by adding the market values of both equity and debt. However, corporate loans are not often traded and so, to all intents and purposes, we can only observe equity. The question, then, is whether the risk of default can be hedged by trading shares and derivatives on the firm's stock; that is, whether we can use a put option on equity in order to hedge the default risk.

In the Merton framework, equity itself is a contingent claim on the firm's assets. Its value can be expressed as a function of the same parameters as the put option:

S = V N(d_1) − Fe^{−rT} N(d_2).     (III.B.5.7)

A put can be created synthetically by selling short N(−d_1) units of the firm's assets and buying F N(−d_2) units of government bonds maturing at T. If one sells short N(−d_1)/N(d_1) units of the stock S, one effectively creates a short position in the firm's assets of N(−d_1) units, since

[N(−d_1)/N(d_1)] S = N(−d_1) V − Fe^{−rT} N(d_2) N(−d_1)/N(d_1).

Therefore, if V is not directly traded or observed, one can create a put option dynamically by selling short the appropriate number of shares. The equivalence between the put and the synthetic put is valid over short time intervals only, and the position must be readjusted frequently with changes in S and in the time left to debt maturity.

Example III.B.5.2

Using the data from Example III.B.5.1, N(−d_1)/N(d_1) = 0.137/0.863 = 0.159. This means that in order to insure against the default of a one-year loan with a maturity value of 77, for a firm with a current market value of assets of 100, the bank should sell short 0.159 of the outstanding equity. Shorting 0.159 of the equity is equivalent to shorting 0.137 of the firm's assets. Note that the outstanding equity is equivalent to a short-term holding of N(d_1) = 0.863 of the firm's assets.

It should be remembered that equity itself reflects the default risk, and as a contingent claim its instantaneous volatility σ_S can be expressed as

σ_S = η_{S,V} σ,     (III.B.5.8)

where η_{S,V} = N(d_1)V/S is the instantaneous elasticity of equity with respect to the firm's value, and η_{S,V} ≥ 1.

Since σ_S is stochastic and changes with V, the conventional BS model cannot be applied to the valuation of puts and calls on S: the BS model requires σ to be constant, or to follow a deterministic path over the life of the option. However, in practice, for long-term options, the σ_S estimated from (III.B.5.8) is not expected to change widely from day to day. Therefore, equation (III.B.5.8) can be used in the context of BS estimation of long-term options, even when the underlying instrument does not follow a stationary lognormal distribution.
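The hedge ratio of Example III.B.5.2 follows directly from d_1. The Python sketch below reproduces it with the same inputs as in Example III.B.5.1.

from math import exp, log, sqrt
from scipy.stats import norm

# Synthetic-put hedge of Example III.B.5.2: short N(-d1)/N(d1) units of equity.
V0, F, sigma, T = 100.0, 77.0, 0.4, 1.0
r = log(1.10)
d1 = log(V0 / (F * exp(-r * T))) / (sigma * sqrt(T)) + sigma * sqrt(T) / 2

short_assets = norm.cdf(-d1)                  # ~0.137 of the firm's assets
hedge_ratio = norm.cdf(-d1) / norm.cdf(d1)    # ~0.159 of the outstanding equity
print(f"short {short_assets:.3f} of assets, i.e. {hedge_ratio:.3f} of the equity")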
according to KMV’s own empirical studies.9: Distribution of the firm’s assets value at maturity of the debt obligation Assets Value VT = V0 exp {( T+ TZT} E(VT) = V0 exp ( T) VT V0 F Probability of default T Time The KMV approach is best applied to publicly traded companies. the log-asset return follows a normal distribution. i.2.5. 141 In addition. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. calculation of the distance to default. and not accounting values. or book values.pdffactory. 141 The exception is when the firm’s portfolio of businesses has changed substantially through mergers and acquisitions.B. actual data conform quite well to this hypothesis. which only represent the historical cost of the physical assets. the market value of the firm’s assets is assumed to be lognormally distributed. is higher than the market value of $ 1.6. In fact. there might be huge differences between both the market and the book values of total assets. i. if all the liabilities of the firm were traded. 140 This assumption is quite robust and. as shown in the next section. the distribution of asset returns is stable over time. Only the market value is a good measure of the value of the firm’s ongoing business and it changes as market participants revise the firm’s future prospects. For example.e.8 billion for their book value.The PRM Handbook – Volume III Figure III. III. and marked to market every day.6 billion versus $16. the volatility of asset returns remains relatively constant. and scaling of the distance to default to actual probabilities of default using a default database. there is no simple way to measure S precisely from market data. The firm’s asset value would be simply the sum of the market values of the firm’s liabilities. (III. and the volatility of asset returns: V = h (S. c is the average coupon paid on the long-term debt and r is the risk-free interest rate. If S were directly observable. . In order to make their model tractable KMV assume that the capital structure of a corporation is composed solely of equity. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.B.9) so that it becomes a function of the observed equity value.B.5. we can back out V from (III. Loss of accuracy may also result from factors such as the non-normality of the asset return distribution. L. KMV observed that firms default when the asset value reaches a level that is somewhere between the value of total liabilities and the value of short-term debt.B. r). however. r ) (III.B. the tail of the distribution of asset values below total debt value may not be an accurate measure of the actual probability of default. S. and in some cases part of the debt is actively traded.B. like the stock price. and its volatility. short-term debt (considered equivalent to cash).2 Calculation of the ‘Distance to Default’ Using a sample of several hundred companies.6.5. is relatively unstable. we could simultaneously solve (III. L. So to calibrate the model for . (III. long-term debt (in perpetuity). S. Therefore. Vasicek (1997). In practice. S. L.pdffactory. 142 Given these simplifying assumptions.9) and (III. But the instantaneous equity volatility.5. it is possible to derive analytical solutions for the value of equity. due to the complexity of the boundary conditions attached to the various liabilities. 
and the simplifying assumptions made about the capital structure of the firm.5. c.10) S where L denotes the leverage ratio in the capital structure.5. only the price of equity for most public firms is directly observable. c and r.B. This may be further aggravated if a 142 In the general case the resolution of this model may require the implementation of complex numerical techniques. with no analytical solution.10) for V and . and convertible preferred shares. for example. L.com 302 . . or stock price.5. c. III. c. The alternative approach to assets valuation consists of applying the option-pricing model to the valuation of corporate liabilities as suggested in Merton (1974). S: S = f (V. Since only the value of equity S is directly observable.5. r).11) Here volatility is an implicit function of V. . See.B. and is in fact quite sensitive to the change in asset value.The PRM Handbook – Volume III straightforward. KMV uses an iterative technique.9) = g (V. and the volatility of the asset return could be simply derived from the historical time series of the reconstituted asset value. This is the number of standard deviations between the mean of the distribution of the asset value. which is similar to Figure III.10. Figure III.B.12) where V0 = current market value of assets DPTT = default point at time horizon T = expected return on assets. and a critical threshold called the ‘default point’ (DPT) which is set at the par value of current liabilities including short-term debt to be serviced over the time horizon (STD).9. STD + LTD/2. If the company is in distress. KMV computes an index called distance to default (DD).B. is DD ln V0 / DPTT ( 1 2 2 )T T (III.5. the distance to default expressed in unit of asset return standard deviation at time horizon T. As shown in Figure III.The PRM Handbook – Volume III company is able to draw on (otherwise unobservable) lines of credit. For all these reasons. KMV implements an intermediate phase before computing the probabilities of default. If the expected asset value in one year is E(V1) and is the standard deviation of future asset returns then DD E(V1 ) DPT .5. i. net of cash outflows Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.B.5. using these lines may (unexpectedly) increase its liabilities while providing the necessary cash to honour promised payments.e.pdffactory.com 303 .B.10: Distance to default Asset Value Asset Value Distribution E (VT ) VO e T VT V0 F Probability of default Time T Given the lognormality assumption of asset values. plus half the long-term debt (LTD).5. Example III.B. say 40 bp. This proportion. Then EDF1 year = 20/5000 = 0.5.5. or 0.5. Using historical information about a large sample of firms.B. say DD = 4.6.B. that actually defaulted after one year. Assume that among the population of 5000 firms with a DD of 4 at one point in time. Figure III.11.3: Current market value of assets: V0 = 1000 Net expected growth of assets per annum: 20% Expected asset value in one year: V0 ? 1. for a given time horizon EDF 40 bp 1 2 3 4 5 6 DD Example III. one can estimate. The implied rating for this probability of default is BB+.B.com 304 .5.4% or 40 bp. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.4%.11: Mapping of the ‘distance to default’ into the EDFs. 
KMV calls these probabilities expected default frequencies.04 = 0.The PRM Handbook – Volume III = annualised asset volatility.10 is equal to N(– DD). : 100 Default point: 800 Then DD = (1200 – 800)/100 = 4.B.B. It follows that the shaded area shown below the default point in Figure III. including firms that have defaulted. 20 defaulted one year later. the proportion of firms of a given ranking.5.4: Federal Express ($ figures are in billions of US$) This example is provided by KMV and relates to Federal Express on two different dates: November 1997 and February 1998. III.3 Derivation of the Probabilities of Default from the Distance to Default This last phase consists of mapping the distance to default to the actual probabilities of default. for a given time horizon.20 = 1200 Annualised asset volatility.5. is the EDF as shown in Figure III. for each time horizon.pdffactory. 6 3.e. EDFs tend to shoot up quickly until default occurs.7 $ 4.15 12.13 shows the evolution of equity value and asset value. variations in the stock price.The PRM Handbook – Volume III November 1997 Market capitalisation February 1998 $ 7.B. as well as the default point during the same period.12.com 305 . as shown in Figure III.B.2 A– This example illustrates the main causes of changes for an EDF. or at least of the degradation of the creditworthiness of issuers. the debt level (leverage ratio).5.2 Asset volatility 15% 17% Default point $ 3.5. III.9 AA– 12.2 0.17 12.5.4 $ 3.11% (11 bp) 4.06% (6 bp) 4.6 $ 12. EDFs have proved to be a useful leading indicator of default. Figure III. together with the corresponding Standard & Poor’s rating.pdffactory. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.2 3. On the vertical axis of both graphs the EDF is shown as a percentage.3 Book liabilities $ 4.B.7 $ 7.4 0.6. the perceived degree of uncertainty concerning the value of the business). When the financial situation of a company starts to deteriorate.4 EDF as a Predictor of Default KMV has provided a ‘Credit Monitor’ service for estimated EDFs since 1993. and asset volatility (i. i.9 Market value of assets $ 12.5 0.e.5 (price ? shares outstanding) Distance to default (DD) EDF 12.6 0. B.B.and long-term debt of a firm that defaulted Source: KMV Corporation Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.com 306 .5. short.5.12: EDF of a firm that defaulted versus EDFs of firms in various quartiles and the lower decile Source: KMV Corporation Note: The quartiles and decile represent a range of EDFs for a specific credit class.13: Asset value. Figure III.The PRM Handbook – Volume III Figure III. equity value.pdffactory. is a purely actuarial model. versus Standard & Poor’s rating EDF Default S&P Credit Rating Source: KMV Corporation KMV has analysed more than 2000 US companies that have defaulted or entered into bankruptcy over the last 20 years.000 companyyears with data provided by Compustat. By contrast. Changes in EDFs tend to anticipate – by at least one year . framework). default occurs when the asset value falls below a certain boundary such as a promised payment (e.14). when default rates are high. CreditRisk+. the actuarial model discussed in this section treat the firm’s bankruptcy process. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. 
This means that the probabilities of default that the model employs are based on historical statistical data of default experience by credit class. including recovery. Contrary to Moody’s and Standard & Poor’s historical default statistics.14: EDF of a firm that defaulted. III. released in late 1997 by investment bank Credit Suisse Financial Products. based on mortality models of the insurance companies.pdffactory. The distance to default can be observed to shorten during periods of recession.B.The PRM Handbook – Volume III Figure III. say one year. These firms belonged to a large sample of more than 100. 1974. the Merton.g..7 The Actuarial Approach In the structural models of default.5. In all cases KMV has shown a sharp increase in the slope of the EDF a year or two before default. and to increase during periods of prosperity characterised by low default rates. as exogenous. The loss distribution is generated in a way similar to CreditMetrics by simulating correlated defaults at the risk horizon.the downgrading of the issuer by rating agencies such as Moody’s and Standard & Poor’s (Figure III.com 307 .5. EDFs are not biased by periods of high or low numbers of defaults.B.5.B. which specifies the probability of n defaults. 2.B. there is no attempt to relate default risk to the capital structure of the firm. if n is large enough. downgrade risk is ignored. and the probability of such a surprise is known and follow a Poisson type of distribution. the Poisson distribution. Under these assumptions. Default is treated as an ‘end of game’ (stopping time) which comes as a surprise. … defaults become negligible.…. the probability of default in a given period.1. is the same as in any other month.5. . the probability of default by any particular obligor is small.4% . Also. a finite number of obligors. say n. for n = 1. n + 2. ….05 5% while the probability of exactly three defaults is: Pr (3 defaults) = 33 e 3 3! 0. the timing of default is assumed to take the bond-holders ‘by surprise’. CreditRisk+ applies an actuarial science framework to the derivation of the loss distribution of a bond/loan portfolio. naturally. However.com 308 . and the number of defaults that occur in any given period is independent of the number of defaults that occur in any other period. The Poisson distribution has a useful property: it can be fully specified by means of a single parameter. say one month. is only an approximation. For example. no assumptions are made about the causes of default. therefore. Unlike the KMV approach to modelling default. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. for a large number of obligors. It is assumed that: 1. if we assume = 3 then the probability of ‘no default’ in the next year is: Pr(0 default) = 30 e 3 0! 0. . the probability distribution for the number of defaults during a given period of time (say. Only default risk is modelled. for a loan. then the sum of the probabilities of n + 1.The PRM Handbook – Volume III Contrary to the structural approach to modelling default.224 22. one year) is represented well by a Poisson distribution: 143 n Pr(n defaults) = where e n! for n = 0. The annual number of defaults n is a stochastic variable with mean and standard deviation .pdffactory. 143 In any portfolio there is.2. (III.13) is the average number of defaults per year. 000 190. we expect the mean default rate to change over time. i. In the event of default by an obligor. 
with exposures between $50.000) Band Obligor (loss given default) (in $100.000 435. net of the recovery adjustments) are divided into bands.11: Exposure per obligor Exposure ($) Exposure Round-off exposure (in $100.B.8 2 5 4 4 2 5 2 5 4 4 2 5 Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.e. the losses (exposures. The level of exposure in each band is approximated by means of a single number. the marked-to-market value.11 we show the exposures for the first six obligors.000) j A LA j j 1 2 3 4 5 6 150.35 3.B. the exposure for each obligor is adjusted by the anticipated recovery rate in order to calculate the ‘loss given default’.B. as suggested by CreditRisk+. rather than determined by it.000 and $1 million.000 370. These adjusted exposures are calculated outside the model (exogenous). This suggests that the Poisson distribution can only be used to represent the default process if. if positive – and zero. we make the additional assumption that the mean default rate is itself stochastic.5 4.000 460.5: Suppose the bank holds a portfolio of loans and bonds from 500 different obligors. if negative – at the time of default) less a recovery amount. Table III.7 1.000 480. Notation Obligor A Exposure LA Probability of default PA Expected loss A = L A PA In Table III. depending on the business cycle.000 1. In CreditRisk+. and are independent of market risk and downgrade risk.com 309 . Example III.5.5.The PRM Handbook – Volume III However.6 4.9 4. the counterparty incurs a loss that is equal to the amount owed by the obligor (its exposure.5.pdffactory. In order to derive the loss distribution for a well-diversified portfolio. 4 9 40 25.com 310 .The PRM Handbook – Volume III The unit of exposure is assumed to be $100. with m = 10. j = 1.2 6. is by definition: Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. we follow the three steps outlined below. say band j. expressed in units of exposure. i. In CreditRisk+.5. each band is viewed as an j independent portfolio of loans/bonds.2 2. The probability generating function for any band.5.B.12: Expected number of defaults per annum in each band Band: j Number of obligors j j 1 30 1. j/ j A of all the obligors that belong to band j. Then j. is simply the sum of the expected losses But since by definition be calculated. Step 1: Probability generating function for each band Each band is viewed as a separate portfolio of exposures.5 5. Each band j.5 2 40 8 4 3 50 6 2 4 70 25. for which we introduce the following notation: Notation Common exposure in band j in units of exposure j Expected loss in band j in units of exposure j Expected number of defaults in band j Denote by A j the expected loss for obligor A in units of exposure.4 To derive the distribution of losses for the entire portfolio. Table III.8 10 20 4 0. Table III.e. m.pdffactory.4 7 50 38. A = A/L where in this case L = $100.000 ? j.2 2. the expected number of defaults per annum in band j may now . has an average common exposure: = $100.5 8 40 19. using j j = j j.12 provides an illustration of the results of these calculations.000.000.3 5 100 35 7 6 60 14. the expected loss over a one-year period in band j.5 1.B. ….4 2. and which is assumed to be gamma distributed. p.. 
it is straightforward to derive the loss distribution.14) denotes the expected number of defaults for the entire portfolio.26.5.The PRM Handbook – Volume III nL ) z n Pr( loss Gj(z) Pr( n defaults ) z n 0 n j . The mean default rate for each obligor is then supposed to be a linear function of the background factors.2.pdffactory. Each factor k is represented by a random variable Xk which is the number of defaults in sector k. n 0 where the losses are expressed in the unit of exposure. the model can be easily extended to a multi-period framework. CreditRisk+ proposes several extensions of the basic one-period. These probabilities can be expressed in closed form and depend only on j and j. Step 2: Probability generating function for the entire portfolio Since we have assumed that each band is a portfolio of exposures that is independent of the other bands. the probability generating function for the entire portfolio is simply the product of the probability generating function for each band: m G( z ) where m j 1 j j 1 e j jz vj exp m j 1 m j j 1 jz j (III. we have: e G j (z ) j n! n 0 n j z nv j e jz j vj .B. Step 3: Loss distribution for the entire portfolio Given the probability generating function (III. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. First. each representing a sector of activity.com 311 .14).B. Xk. since Pr( loss of nL ) 1 d n G(z ) |z n ! dz n 0 for n 1. .. one-factor model.. the variability of default rates can be assumed to result from a number of ‘background’ factors. 144 Credit VaR is then easily derived from the above loss distribution by first computing the percentile corresponding to the confidence level. Since we have assumed that the number of defaults follows a Poisson distribution. These factors are 144 See Credit Suisse (1997). and then subtracting the expected loss from this number.5. Second. E I. loss given default and default correlations. Second. probabilities of default.com 312 . Its principal limitation is the same as for the CreditMetrics and KMV approaches: the methodology assumes that credit risk has no relationship with the level of market risk. i.pdffactory. III. CreditRisk+ is not able to cope satisfactorily with non-linear products such as options and foreign currency swaps. these models are not as different as they may first appear. For each instrument. 57–64. these approaches appear to be very different and likely to produce considerably different loan loss exposures and VaR figures.e. CreditRisk+ has the advantage that it is relatively easy to implement. as we mentioned above. CreditRisk+ derives a closed-form solution for the loss distribution of a bond/loan portfolio. and Kishore.B. similar arguments stressing the structural similarities have been made by several authors such as Gordy (2000) and Koyluoglu and Hickman (1999). Finally. However. pp. Comparative studies such as IIF/ISDA (2000) have stressed the sensitivity of the risk measures (expected and unexpected losses) to the key risk drivers. or to the variability of future interest rates.5. In addition. CreditRisk+ focuses on default. closed-form expressions can be derived for the probability of portfolio bond/loan losses. where the probability of default depends upon several stochastic background factors.The PRM Handbook – Volume III further assumed to be independent. marginal risk contributions by obligor can be easily computed. like the CreditMetrics and KMV approaches. 
and therefore it requires relatively few estimates and ‘inputs’. At first sight. Financial Analysts Journal.8 Summary and Conclusion In this chapter we have presented the key features of some of the more prominent new models of credit risk measurement in a portfolio context. they produce results which fall in quite a narrow range. In addition. analytically and empirically. This study also showed that when these models are run using consistent parameters. and this makes CreditRisk+ very attractive from a computational point of view. First. the exposure for each obligor is fixed and is not sensitive to possible future changes in the credit quality of the issuer. CreditRisk+ ignores what might be called ‘migration risk’. the credit exposures are taken to be constant and are not related to changes in these factors. only the probability of default and the exposure are required. Indeed. Even in its most general form. In all cases. References Altman. 52(6). Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. V (1996) Almost everything you wanted to know about recoveries on defaulted bonds. CA: Andrew Rudd. Credit Suisse (1997) CreditRisk+: A Credit Risk Management Framework. New York: McGraw-Hill. New York: Oliver. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. pp. Elton. pp. Algorithms. pp. and Applications. Koyluoglu. 119–149. Gordy. 1(1). pp. Special Report. and Shahabuddin. pp. A. 10(10). Heidelberger.The PRM Handbook – Volume III Altman. 449–470. Wilson. IIF/ISDA (2000) Modeling credit risk: Joint IIF/ISDA testing program. Crouhy. Journal of Fixed Income. Journal of Political Economy. Risk. Working Paper S-98-1. Net Exposure. D. L. H U. Springer.. and Hickman. Moody’s Investors Service. P. Rudd. Vasicek. New York: Wiley. E I. pp. D (1996) Defaulted Bank Loan Recoveries. 111-117. 46. 81. Journal of Banking and Finance. A (1998) A generalized framework for credit risk portfolio models. E J. D (1995) Default correlation and credit analysis. Management Science.com 313 . and Scholes. Morgan (1997) CreditMetrics. P. M. pp. R (2001) Risk Management. 56-61. Carty. New York: Springer Seris in Operations Research. July. T (1997b) Portfolio credit risk II. D. 1. and Gruber.pdffactory. M (2000) A comparative anatomy of credit risk models. F. Wilson. R C (1974) On the pricing of corporate debt: the risk structure of interest rates. Galai. Technical Document. and Masulis. V (1998) Defaults and returns on high yield bonds: analysis through 1997. Black. Wymann and Co. 1349–1364. T (1997a) Portfolio credit risk I. Global Credit Research. Galai. February. and Lieberman. 28. P (2000) Variance reduction technique for estimating value-at-risk. 3 (January/March). H K (1988) Modern Portfolio Theory: The Principles of Investment Management. Risk. M J (1995) Modern Portfolio Theory and Investment Analysis. 637–654. R W (1976) The option pricing model and the risk factor of stocks. O (1997) Credit valuation. 10(9). New York: Credit Suisse Financial Products. Journal of Finance. Merton. Lucas. and Mark. and Clasing Jr. Working Paper. 76-87. M (1973) The pricing of options and corporate liabilities. and Kishore. Fishman G (1997) Monte Carlo: Concepts. Journal of Financial Economics. New York University Salomon Center.P. 4(4). 53–82 Glasserman. J. Orienda. pp. com 314 .The PRM Handbook – Volume III Copyright ? 
2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.pdffactory. We focus in this chapter on the main principles for computing minimum regulatory credit capital requirements under the current Basel I Accord as well as under the current proposal for the new Basel accord. Chapter III. credit downgrades and credit spread changes.B. the rules for computing minimum capital requirement for credit risk in the Basel II Accord (Pillar I) – we cover various types of exposure and comment on the special credit capital considerations under Pillar II. we review the main concepts for estimating and allocating ECC as well as the current regulatory framework for credit capital. is to act as a buffer to absorb large unexpected losses.6 Credit Risk Capital Calculation Dan Rosen145 III. which is now the basis for banking regulation around the globe. is to act as a buffer against all the risks that may force a bank into insolvency.The PRM Handbook – Volume III III. In this chapter. economic credit capital (ECC) can be viewed as a buffer against those risks specifically associated with obligor credit events such as default. Basel II. the framework created by the Basel Committee on Banking Supervision (BCBS). From the perspective of the regulator.6.1 Introduction As discussed in Chapter III. While the role of economic capital (EC). in general. the basic rules for computing minimum credit capital under the Basel I Accord.pdffactory. apart from the transfer of ownership.B. based on regulations established by the banking supervisory authorities. In this chapter you will learn: how credit portfolio models must be defined and parameterised consistently to measure ECC from a bottom-up approach. protect depositors and other claim holders and provide confidence to external investors and rating agencies on the financial health of the firm. regulatory capital refers to the minimum capital requirements which banks are required to hold.com 315 . Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. the objectives of capital adequacy requirements are to safeguard the security of the banking system and to ensure its ongoing viability. and to create a level playing field for internationally active banks.0 presents the chronology and describes the basic principles behind regulatory capital and gives an overview of the Basel Accord. the primary role of capital in a bank.0. In contrast. 145 Algorithmics Inc. In general.1 Time Horizon While trading activities tend to involve short time horizons (a few hours to a few days). III. we discuss how credit portfolio models are applied to measure ECC from a bottom-up approach. we must explicitly consider the time horizon or holding period.5 covers the fundamentals of credit portfolio models.1 Economic Capital and the Credit Portfolio Model A credit portfolio model must be defined and parameterised consistently with the definition of economic capital used by the firm. capital is designed to absorb unexpected losses up to a certain confidence level.The PRM Handbook – Volume III the Basel II treatment for internal ratings systems and probability of default estimation. as well as the shortcomings of value-at-risk (VaR) for ECC and coherent risk measures.0. as well as the main models used in industry.6.1.5 provide the statistical distributions of potential credit losses of a portfolio over a given horizon.B.B. 
as well as the minimum standards for credit monitoring processes and validation methods.2.B. such as market risk.B. Bottom-up approaches provide risk-sensitive measures of capital. Section III. while credit reserves are set aside to absorb expected losses.2 Economic Credit Capital Calculation ECC acts as a buffer that provides protection against the credit risk (potential credit losses) faced by an institution. spanning several years. III.6. the definition of credit losses.pdffactory. top-down approaches do not readily allow the decomposition of capital into its various constituents. the methodologies to estimate EC at the firm level can be generally classified into top-down or bottom-up approaches. It is common Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. III.B.7 covers some advanced topics: the applications of risk contribution methodologies ECC allocation. which allow us to understand and manage these risks better. The credit portfolio models that were described in Chapter III. credit risk and operational risk. In this section. when devising an economic capital framework. credit activities generally involve longer horizons (a few months to a few years). The definitions of these three parameters are tightly linked to the actual definition of capital used by the firm. Traditionally. Hence. Operational risk and insurance activities generally involve even longer horizons. and the quantile (and definition of ‘unexpected’ losses) covered by capital. Finally.6. In particular. Chapter III.com 316 .2. As explained in Chapter III. they offer the key tools to estimate the required ECC from a bottom-up approach.6.B. it is important to set a common methodology to estimate the desired capital.The PRM Handbook – Volume III practice for credit VaR measurement to assume a one-year horizon.2 Credit Loss Definition Accounting rules. there is usually little question as to the function of economic capital. while it is most common to apply credit portfolio models in a single-step setting. the multi-step applications are becoming increasingly popular. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. and exposures at default (EAD) for individual corporate. among them the following: It accords with the firm’s main accounting cycle. Also. typically used for traditional (accrual accounted. similarly. regulatory guidance and management policy combine to determine when a loss will be recognised. Credit portfolio models require several building blocks.pdffactory. The borrower might be updating its financial information only on an annual basis. when applied on an enterprise basis. These include the following: Estimates of the probabilities of default (PD). or components.B. III.2. In this case we are only concerned whether a loan (or other instrument) would be repaid or not. Loss recognition is straightforward in some instances. the choices of time horizon and credit loss definition can be tightly linked. This type of measurement is currently the most prevalent. It is a reasonable period over which the firm will typically be able to renew any capital depleted through losses. It coincides with a reasonable period over which actions can be taken to mitigate losses for various credit assets. although longer periods are sometimes employed. 
for trading securities that are marked-to-market regularly or operational losses that are recognised when they occur. such estimates are required for homogeneous buckets of retail or small. 146 Note that the optimal time horizons for given portfolios might vary. Finally. banking and sovereign exposures. 146 Several reasons are cited for a one-year horizon. Credit reviews are usually performed annually. LGDs and credit events. loss given default (LGD). In these instances. for example. Management has more leeway in other instances.and mediumsized enterprise (SME) exposures. asset values or PD correlations) o exposures. A portfolio model with specific assumptions about the co-dependence of o credit events (e.6.1. However. at the enterprise level. hold-to-maturity) banking book activities. With regards to credit exposures.g.com 317 . management may choose to measure economic capital against default events only (default mode) or against both default and credit migration events (mark-to-market mode): (i) Default-only credit losses. it makes sense that the horizon should be the entire life of the loan.97% confidence level over a oneyear horizon. then some time period would be selected.2. If capital is held against defaults only. in order to limit the probability of default of the firm over a given horizon.g. independent of the life of the underlying loans (e. III.The PRM Handbook – Volume III (ii) Mark-to-market credit losses.g. which defines the ‘confidence interval’. LGDs and correlations). Therefore. or for which reliable pricing information might be available.3 Quantile of the Loss Distribution Credit VaR should capture the ECC that shareholders should invest. III.6.6. one year). consider credit risk in lending activities. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. they require market pricing information (e. For example. a bank that wishes to be consistent with an AA rating from an external credit agency might chose a 99. The choice of loss definition and time horizon is tightly linked. For example.com 318 .1. such as depositors. In this case. Both choices allow for longer-maturity instruments to carry higher capital charges. as well as providing the necessary confidence to other claim holders.2 Expected and Unexpected Losses In its most common definition.2. since the one-year default probability of an AA-rated firm is 0.B. credit spreads) and more general information on the instruments structures for marking to market. Thus.pdffactory. predetermined quantile of the credit loss distribution so that shareholders will be ‘highly confident’ that credit losses will not exceed this amount. 147 The horizon and quantile are thus key policy parameters set by senior management and the board of directors. the models used are closer to market risk models. Mark-to-market models are more commonly used for portfolios which are not held to maturity. The key rationale for subtracting EL is that credit products are already priced such that net interest margins less non- 147 But note there can be a trade-off between achieving this objective and providing high returns on capital for shareholders. The quantile.03%. it is common practice to estimate economic capital as the chosen confidence interval minus the estimated expected EL. economic capital is designed to absorb only unexpected losses (UL) up to a certain confidence level. 
capturing the losses due to defaults and the changes in mark-to-market due to deterioration in credit quality (or credit migration).B. is chosen to provide the right level of protection to debt holders and hence achieve a desired rating. such as bond and credit derivatives portfolios. This involves defining a high. in addition to information needed for default-only models (PDs. If capital is held against deterioration in value. Credit reserves are traditionally set aside to absorb expected losses (EL) during the life of a transaction. EADs. In Chapter III. (III. It is consistent.com 319 . see Kupiec (2002). The choices of aggregation can be divided into three main categories: Sum of stand-alone capital for each business unit or portfolio. either implicitly or explicitly. In making the combination. the credit VaR measure appropriate for EC should in fact measure loss relative to the portfolio’s initial mark-to-market (MtM) value and not relative to the EL in its end-of-period distribution. Also. If each area is modelled separately.6. the credit VaR measure that is relevant to estimating EC covers only unexpected losses: Credit VaR ( L ) Q ( L ) EL . commercial lending. the firm needs to incorporate. 149 Capital allocation credit VaR measures in this context can be negative. Therefore. In general. 148 For a detailed discussion. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. 149 This highlights the importance for banks of moving towards full MtM portfolio credit risk models and of establishing accurate estimates of MtM values of credit portfolios.The PRM Handbook – Volume III interest expenses are at least enough to cover estimated EL (and must also cover a desired return to capital).0 we show that subtracting ‘EL’ as given by the expected end-of-period value from the worse-case losses represents a simplifying approximation to estimate EC. This is the approach commonly taken by practitioners and generally leads to conservative estimates.2.B. III. a bank may have any number of methodologies for various credit segments.6. 148 More precisely. such as a bank. with one-factor portfolio models (such as the Basel II underlying portfolio model). The estimation of the interest compensation calculation and the credit MtM value of the portfolio generally require the use of an asset pricing model. This methodology essentially assumes perfect correlation across business lines and does not allow for diversification from them. including retail banking. 0. various correlation assumptions. in this case.B.g. the credit VaR measure normally ignores the interest payments that must be made on the funding debt. then the amounts of ECC estimated for each area need to be combined. Such an institution is likely to have one methodology for its larger commercial loans and another for its retail credits. acquires credit risk through various businesses and activities.pdffactory. These payments must be added explicitly to the EC measure.3 Enterprise Credit Capital and Risk Aggregation A large firm.03%) of the portfolio loss distribution at the horizon. in general. derivatives and credit derivatives trading. bonds.1) where Q ( L ) denotes the % quantile (e. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. For example.com 320 . which broadly reflects the likelihood of its conversion into an actual exposure.B. 
thus resulting in several variations of the implementation across jurisdictions.B.3. Full enterprise credit portfolio modelling. and applying the capital adequacy ratio. Current multi-factor credit models and technology allow the computation of credit loss distributions of large enterprise portfolios.The PRM Handbook – Volume III Ad hoc cross-business correlation. Step 1: Credit-Equivalent Assets The objective in this step is to express all on-balance sheet and off-balance sheet credit exposures in comparable numbers. III. such as letters of credit. establishing minimum capital standards that linked capital requirements to the credit exposures of banks (Basel Committee of Banking Supervision 1988). Various financial institutions today are starting to apply either in-house or commercial credit portfolio models to compute ECC at the enterprise level. While generally prescriptive. III. This can be accomplished through semi-analytical methods (e. We further discuss some of its shortcomings and the motivation for regulatory arbitrage. or full-blown Monte Carlo simulation.1 Minimum Credit Capital Requirements under Basel I The 1988 Basel I Accord focused mainly on credit risk. bank capital was regulated through simple. In order to allow for some cross-business diversification. the conversion factor of an undrawn standby letter of credit is 50%. computing loan equivalents. asset equivalents are obtained by multiplying the exposure by a percentage. fast Fourier transforms. ad hoc capital standards. which better reflect the amounts that could be lost if a transaction were to default.. The general rules for this are as follows: For contingent banking book assets. saddlepoint methods).6. a firm might aggregate the individual stand-alone capital estimates using analytical models and simple cross-business (asset) correlation estimates. Prior to its implementation in 1992.3 Regulatory Credit Capital: Basel I In this section you will learn the key principles of the Basel I Accord for computing minimum capital requirements for credit risk. Basel I left various choices to be made by local regulators. etc.pdffactory.g. This is achieved by converting off-balance sheet exposures into equivalent credit assets.6. The calculation of regulatory credit capital requirements has three steps: converting exposures to credit-equivalent assets. 150 asset equivalents are given by the instrument’s total exposure.00% 7.00% 7. (III. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. and hence its total exposure is given by its current mark-to-market.00% Residual maturity Interest rate One year or less 0.6.6. In this case. a three-year foreign exchange forward contract carries a total exposure given by its current mark-to-market plus 5% of its notional.B. the netted mark-to-market of the transactions with a counterparty divided by the gross mark-to-market (the sum of all the positive mark-tomarket transaction values). and the total add-on is adjusted as follows: Netted Add-on = Gross Add-on ? (0.B. Table III.3) where NGR denotes the net-to-gross ratio.00% 1.50% 10.00% 15.B. for example as in Table III.6.00% 7.00% 10.1. obtained through the so-called method of add-ons: Total Exposure = Actual Exposure + Potential Exposure (III.00% 5.The PRM Handbook – Volume III For derivatives in the trading book. which lead to higher credit losses should the counterparty default.00% 0.4 + 0. 150 This was originally defined in BCBS (1995).00% 8. 
Mark-to-Market) Potential Exposure = Notional Counterparty Add-on The potential exposure attempts to capture the change in value of derivatives. Basel I allows also for partial recognition of mitigation techniques such as netting when the proper agreements are in place. an interest-rate swap with remaining nine-month maturity is deemed to carry a 0% addon.00% 12.1: Add-ons for derivatives exposures in Basel I Exchange rate Precious metals Other except gold commodities 6.pdffactory.50% Over one year to five years Over five years and gold Equity Thus.com 321 .B. actual exposure is computed as the netted mark-tomarket values of all transactions. for example.6. that is.00% 8.2) Actual Exposure = max(0.50% 1. resulting from market fluctuations. Counterparty add-ons to measure potential exposures are given prescriptively.6 ? NGR Ratio). 6.2.The PRM Handbook – Volume III Example III.4 222.4)/535) = 58%. 10.4 Table III.B. 20.5 years 500 -100 5 25 25 3 FX option 5 months 500 -200 1 5 5 Gross 500 35 535 Netted 200 22.4. Netting in this case reduces the credit equivalent for this portfolio by (535 – 222.4 NGR ratio 0.6. excluding central government. or 50% 20% Claims on domestic public-sector entities.6.50 5 505 2 FX forward 1. With a gross MtM of $500m and a netted MtM of $200m. the NGR is 0.B.com 322 . and loans guaranteed by securities issued by such entities Claims on multilateral development banks Claims on banks incorporated in the OECD and loans guaranteed by OECD 50% Loans fully secured by mortgage on residential property 100% Claims on private sector Claims on banks incorporated outside the OECD with a residual maturity of over one year Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.pdffactory.B. Table III.2: Example – credit equivalents in the trading book Transaction Instrument Residual maturity Notional MtM Add-on (%) Add-on Credit equivalent 1 IR swap 3 years 1000 500 0.B.3: Basel I risk weights (examples) 0% Cash Claims on central governments.6.1 Consider a counterparty with three derivatives transactions as shown in Table III. central banks denominated in national currency 0. high-rated corporate exposures may carry much higher capital than low-rated sovereign exposures. which leads to minimum capital requirements Min capital requirements = $222. Example risk weights are given in Table III.B.6. All corporate credits elicit a risk weight of 100% regardless of their credit quality. For example.com 323 .1. Some criticisms with regard to credit risk include the following: Lack of credit quality differentiation.0.4) k Example III.B.56m. III. Risk weights broadly attempt to reflect the credit riskiness of the asset.4m ? 0.B. a facility of 366 days bears the same capital charge as a long-term facility. revolving credit exposures with a term of less than one year do not get a regulatory capital charge. Lack of proper maturity differentiation. This has led to very simple regulatory arbitrage through the systematic creation of rolling 364-day facilities. This indirectly provides incentives for banks to take bad credits and avoid high-rated credits (with lower returns). a loan to a corporate. Thus. Furthermore.3. carries a 100% risk weight. regardless of its credit rating.The PRM Handbook – Volume III Step 2: Risk-Weighted Assets Risk-weighted assets (RWAs) are obtained by multiplying the exposures (or credit equivalents) by a risk weight.3.6. Its simplicity also has been its major weakness.B. 
assume that the counterparty is a UK bank.08 = $3. as the accord does not effectively align regulatory capital requirements closely with an institution’s risk. which allowed it to be implemented in countries with different banking and accounting practices.2 ? 0.6. while a credit exposure to an OECD government has a 0% weight.6. (III.B. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.2 Weaknesses of the Basel I Accord for Credit Risk As mentioned in Chapter III. Then the risk weight is 20%.pdffactory. a loan to an AAArated corporate such as GE carries five times more capital than a loan to a bank in Korea or Turkey (20% risk weight).2 Following on from Example III.6. Step 3: Capital Adequacy Ratio – Minimum Capital Requirement The minimum capital requirements are obtained by multiplying the sum of all the RWAs by the capital adequacy ratio of 8% (also referred to as the Cook ratio): Capital RWAk 8%. a great strength of Basel I is the simplicity of the framework. For example. Similarly. The reduction of regulatory capital from the application of netting for this portfolio is also 58% (as with the credit equivalent). banks typically transfer lowrisk exposures from their banking book to their trading book. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. The Final version of the accord was published in June 2004 (BCBS.3 Regulatory Arbitrage The lack of differentiation in the accord. Lack of recognition of portfolio effects in credit risk.pdffactory. 2003).The PRM Handbook – Volume III Insufficient incentives for credit mitigations techniques. 2004). III.B. We cover the standardised and internal ratings based approaches for different types of exposures and conclude with a brief comment on the special credit capital considerations under Pillar II of the new Accord. have led to the widespread development of regulatory capital arbitrage. collateral.4. The accord does not fully recognise the risk reduction achieved through credit mitigation techniques such as netting.B.1 Latest Proposal for Minimum Credit Capital requirements In 1999. The accord does not provide any capital benefits for diversification across assets and businesses. 151 The implementation of the accord will take effect between the end of 2006 and the end of 2007.4 Regulatory Credit Capital: Basel II We introduce the latest proposals of the current Basel II Accord for credit risk capital. for example. In this section you will learn about the Pillar I rules for computing minimum capital requirement for credit capital in the Basel II Accord. the BCBS issued a proposal for a new capital adequacy framework (Basel II or BIS II Accord).B.6. or simply place them outside the regulated banking system.com 324 . III. The third consultative paper (CP3) on the new accord was released in April 2003 (BCBS. III. the development of a capital regulation that encompasses not only minimum capital requirements but also supervisory review and market discipline. Second. together with the financial engineering advances in credit risk over the last decade. guarantees and credit derivatives. Through regulatory arbitrage instruments.6.3. This refers to the process by which regulatory capital is reduced through instruments such as credit derivatives or securitisation. Basel II attempts to improve capital adequacy framework along two important dimensions: First. 
without an equivalent reduction of the actual risk being taken.6. 151 A small number of open issues are still to be resolved during 2004. a substantial increase the risk sensitivity of the minimum capital requirements. The PRM Handbook – Volume III The reader is referred to Chapter III.0 for a general discussion on the Basel II Accord. In this section we summarise the key principles and formulae for the computation of minimum capital requirements for credit risk under Pillar I of the new accord. For greater detail, the reader is referred to the BCBS papers which can be found at www.bis.org . As with Basel I, minimum capital requirements consist of three components: 4. definition of capital (no major changes from Basel I); 5. definition of RWA; 6. minimum ratio of capital/RWA (remains 8%). Basel II proposes substantive changes to the treatment of RWAs for credit risk relative to Basel I. It moves away from a one-size-fits-all approach through the introduction of three distinct options for the calculation of credit risk. These approaches present increasing complexity and risk-sensitivity. Banks and supervisors can thus select the approaches that are most appropriate to the stage of development of banks’ operations and of the financial market infrastructure. Similar to Basel I, total minimum capital requirements are obtained by multiplying the riskweighted assets by the capital adequacy ration of 8%, as in equation (III.B.6.4) 8: Capital RWAk 8% . k The calculation of RWAs can be done through two types of approach: the standardised approach and the internal ratings based (IRB) approach. The IRB approach has two variants, called foundation and advanced IRB. III.B.6.4.2 The Standardised Approach in Basel II This approach is similar to Basel I in that Basel II requires banks to slot their credit exposures into supervisory categories based on observable characteristics of the exposures (e.g. whether it is a corporate loan or a residential mortgage loan), and then establishes fixed risk weights corresponding to each supervisory category. Important differences from Basel I include the following: Use of external ratings. The standardised approach allows the use of external credit assessments to enhance risk sensitivity. The risk weights for sovereign, interbank, and 8 In addition, in (BCBS, 2004) the committee introduced a scaling factor. Where aggregate capital is lower than under Basel I, the new requirements must be scaled by a factor, currently estimated at 1.06, such that overall capital levels do not fall. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.pdffactory.com 325 The PRM Handbook – Volume III corporate exposures are differentiated based on external credit assessments. For sovereign exposures, these credit assessments may include those developed by OECD export credit agencies or private rating agencies. The use of external ratings for corporate exposures is optional. Where no external rating is applied to an exposure, the approach mandates that in most cases a risk weighting of 100% be used (as in Basel I). In such instances, supervisors are to ensure that the capital requirement is adequate given the default experience of the exposure type. For example, for claims on corporates the risk weights are given in Table III.B.6.4. 
Table III.B.6.4: Standardised risk weights for corporate exposures Credit assessment Risk weight AAA to AA– 20% A+ to A– BBB+ to BB– 50% 100% Below B– Unrated 150% 100% Loans past-due. A loan considered past-due requires a risk weight of 150%, unless a threshold amount of specific provisions has already been set aside against the loan. Credit mitigants. The approach recognises an expanded range of credit risk mitigants: collateral, guarantees, and credit derivatives. The approach expands the range of eligible collateral beyond OECD sovereign issues to include most types of financial instruments. It also sets several approaches for assessing the degree of capital reduction based on the market risk of the collateral instrument. Finally, it expands the range of recognised guarantors to include all firms that meet a threshold external credit rating. Retail exposures. The risk weights for residential mortgage exposures are reduced relative to Basel I, as are those for other retail exposures, which receive a lower risk weight than that for unrated corporate exposures. In addition, some loans to SMEs may be included within the retail treatment, subject to meeting various criteria. Through several options, the standardised approach attempts to improve the risk sensitivity of the RWAs. Basel II also provides a ‘simplified standardised approach’, where circumstances may not warrant a broad range of options. Banks under the simplified methods must also comply with the supervisory review and market discipline requirements of Basel II. Example III.B.6.3 Based on Table III.B.6.4, a loan to a corporate obligor, with a AA rating from an external agency, would have a capital requirement of one-fifth under the Basel II standardised approach Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.pdffactory.com 326 The PRM Handbook – Volume III compared to Basel I (a risk weight of 20% versus 100%). Similarly, a loan to an A-rated obligor would see its capital requirement going to one-half (50% weight). III.B.6.4.3 Internal Ratings Based Approaches: Introduction In the IRB approaches, banks’ internal assessments of key risk drivers serve as primary inputs to the capital calculation, leading to more risk-sensitive capital requirements. This is a substantial difference from both Basel I and the standardised approach. However, the IRB approach does not fully allow for internal portfolio models to calculate capital requirements. Instead, the risk weights (and thus the capital charges) are determined through the combination of quantitative inputs provided by banks and formulae specified by the accord. These formulae, or risk-weight functions, are based on a simple credit portfolio model, and thus align more closely with modern risk management techniques. The IRB approach includes two variants: a foundation approach and an advanced approach. The IRB approaches cover a wide range of portfolios, with the mechanics of the calculation varying somewhat across exposure types. In the remainder of this section we present the key inputs and principles behind the regulatory risk-weight formulae, and highlight the differences between the foundation and advanced IRB approaches by portfolio, where applicable. We present these concepts first for wholesale exposures (corporate, bank and sovereigns); then we briefly cover retail exposures and SMEs, as well as specialised lending and equity exposures. 
9 III.B.6.4.4 IRB for Corporate, Bank and Sovereign Exposures The IRB calculation of RWAs relies on four quantitative inputs, referred to as the risk components: Probability of default (PD): the likelihood that the borrower will default over one year. Exposure at default (EAD): the loan amount that could be lost upon default; for loan commitments this is the amount of the facility likely to be drawn if a default occurs. Loss given default (LGD): the proportion of the exposure that will be lost if a default occurs. Maturity (M): the remaining economic maturity of the exposure. Given these four inputs, the corporate IRB risk-weight function produces a capital requirement for each exposure. The RWA for a given exposure is given by RWA 12.5 EAD K . (III.B.6.5) 9 Risk weight functions represent the latest version of the Accord (BCBS, 2004) Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.pdffactory.com 327 The PRM Handbook – Volume III The capital requirement, K, is the minimum capital per unit exposure, and is given by 10 K LGD N N 1 ( PD ) RN 1 0.999 1 R PD MF ( M , PD ) , (III.B.6.6) where N(.) denotes the cumulative normal distribution and N–1(.) is its inverse. The formula for K is based on a simple credit portfolio model, with some adjustments as follows: The term N(.) represents the 99.9% default losses of an infinitely granular homogeneous portfolio of unit exposure and 100% LGD, under a one-factor Merton-type credit model. 11 The term LGD N . PD denotes the unexpected default losses 12 of the infinitely granular portfolio (already adjusted for loss given default); that is, the 99.9% losses minus the expected losses (EL = PD ? LGD). The parameter R denotes the one-factor asset correlation for the homogeneous portfolio in the credit portfolio model. It is obtained from a calibration exercise by the BCBS: R 0.12 1 e 50 PD 1 e 50 0.24 1 1 e 50 PD 1 e 50 . (III.B.6.7) R is a decreasing function of the default probability ranging from 24% to 12%, with higher-quality obligors showing higher systemic risk than lower-quality obligors. Figure III.B.6.1 gives the correlation parameter R for corporate exposures (as well as for retail). The final component of the capital requirement in (III.B.6.6) is the maturity function, MF, which is given by MF ( M , PD ) 1 ( M 2.5) b ( PD ) 1 1.5b( PD ) (III.B.6.8) with b ( PD ) [0.11852 0.05478 log( PD )]2 (the function b is referred to as the maturity adjustment). The maturity function MF is equal to one for loans of one-year maturity. Obtained from a calibration exercise by the BCBS, MF empirically adjusts further the default losses (given by N(.) ) to the MtM losses of loans of higher maturity than one year. 10 Note that the 12.5 multiplier cancels the 8% term in the capital. This simply allows the RWA to be expressed consistently with the Basel I formulae. 11 See Chapter III.B.5 for credit portfolio models and Gordy (2003) for a detailed treatment of this formula. 12 The original formulae in CP3 (BCBS, 2003) did not subtract expected losses. After a period of consultation on the role of EL, the final version bases the risk weights exclusively on unexpected losses. Copyright ? 
For corporate, bank and sovereign exposures, the foundation IRB and advanced IRB approaches differ primarily in terms of which inputs are provided by a bank based on its own estimates and which are specified by the supervisor. These differences are summarised in Table III.B.6.5.

Table III.B.6.5: Foundation and advanced IRB for corporate, bank and sovereign exposures

Probability of default (PD)
  Foundation IRB: provided by bank based on own estimates
  Advanced IRB: provided by bank based on own estimates
Loss given default (LGD)
  Foundation IRB: supervisory values set by the Committee
  Advanced IRB: provided by bank based on own estimates
Exposure at default (EAD)
  Foundation IRB: supervisory values set by the Committee
  Advanced IRB: provided by bank based on own estimates
Maturity (M)
  Foundation IRB: supervisory values set by the Committee or, at national discretion, provided by bank based on own estimates (with an allowance to exclude certain exposures)
  Advanced IRB: provided by bank based on own estimates (with an allowance to exclude certain exposures)

Thus, all IRB banks must provide internal estimates of PD. In addition, advanced IRB banks must provide internal estimates of LGD and EAD, while foundation IRB banks make use of supervisory values that depend on the nature of the exposure. For example, under the foundation approach, LGD = 45% for senior claims and LGD = 75% for subordinated claims. These initial LGDs are then adjusted to reflect eligible collateral and guarantees provided for each transaction. In BCBS (2004), the committee decided to adopt a more stringent definition of LGD, whereby LGDs must be determined based on economic downturn values, and not on averages over the cycle or current values.

A major element of the IRB framework pertains to the treatment of credit risk mitigants: collateral, guarantees and credit derivatives. The LGD parameter provides a great deal of flexibility to assess the potential value of credit risk mitigation techniques. For foundation IRB banks, therefore, the different supervisory LGD values reflect the presence of different types of collateral. Advanced IRB banks have even greater flexibility to assess the value of different types of collateral. With respect to transactions involving financial collateral, the IRB approach seeks to ensure that banks are using a recognised approach to assess the risk that such collateral could change in value, and thus a specific set of methods is provided, as in the standardised approach.

In the case of trading book exposures, Basel II outlines the same treatment of EADs as in Basel I, calculating loan equivalents through the use of add-ons and allowing partial recognition of netting and mitigation (e.g. as in equations (III.B.6.2) and (III.B.6.3)). However, practitioners and industry associations like ISDA have pointed out the limitations of such a technique in terms of its accuracy and risk sensitivity, its recognition of mitigation and natural offsets, and the overconservative capital it demands for trading exposures compared to loans. 13 As industry pressure builds to allow internal models for trading book EADs, the BCBS is currently revisiting this topic. 14
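Since K in (III.B.6.6) is linear in LGD, the foundation supervisory values feed straight through to capital. The following lines, reusing the hypothetical corporate_K sketch above, illustrate the senior versus subordinated difference.

```python
# K is linear in LGD, so a subordinated claim (75% supervisory LGD) attracts
# 75/45 ~ 1.67 times the capital of a senior claim (45%), other things equal.
# Reuses corporate_K from the sketch above.
for lgd in (0.45, 0.75):
    print(lgd, round(corporate_K(pd=0.01, lgd=lgd, m=2.5), 4))
```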
13 For example, short of allowing for full portfolio models, Canabarro et al. (2003) propose to use as the loan-equivalent exposure for a given counterparty the expected positive exposure (EPE) derived from an internal model (e.g. from a Monte Carlo simulation), perhaps increased by a small percentage.
14 The SEC in the USA has issued a proposed capital rule for broker-dealers, aligned with Basel II requirements, which allows for internal models; see Securities and Exchange Commission (2004).

Advanced IRB banks will generally provide their own estimates of effective maturity for these exposures, although there are some exceptions where supervisors can allow fixed maturity assumptions. For foundation IRB banks, supervisors can choose on a national basis whether to apply fixed maturity assumptions or to require banks to provide their own estimates of remaining maturity.

III.B.6.4.5 IRB for Retail Exposures
For retail exposures there is only a single, advanced IRB approach; there is no foundation IRB alternative. Retail exposures are classified into three primary product categories, with a separate risk-weight formula for each:
residential mortgage exposures (RMEs);
qualifying revolving retail exposures (QRREs);
other retail exposures (OREs).
The QRRE category refers to unsecured revolving credits, which include many credit card relationships. The other retail category refers to all other non-mortgage consumer lending, including exposures to small businesses.

The key inputs to the IRB retail formulae are PD, LGD and EAD, all of which are to be provided by the bank based on its internal estimates (there is no maturity component). In contrast to corporate exposures, these values are not estimated for individual exposures, but instead for pools of similar exposures. Given these three inputs, the retail IRB risk-weight function produces a specific capital requirement for each homogeneous pool of exposures. The risk-weighted assets for a given exposure are also given by expression (III.B.6.5). The formula for the capital requirement K for all three retail product categories is

  K = LGD × [ N( (N⁻¹(PD) + √R × N⁻¹(0.999)) / √(1 − R) ) − PD ],    (III.B.6.9)

where again N denotes the cumulative normal distribution and N⁻¹ is its inverse. The term N(.) − PD is the same as for corporate exposures: the 99.9% unexpected default losses of an infinitely granular homogeneous portfolio of unit exposure and 100% LGD. As there is no maturity adjustment for retail exposures, capital only covers default risk.

The parameter R denotes the one-factor asset correlation for the homogeneous portfolio and is different for each product category, according to a calibration exercise by the BCBS:
residential mortgages: R = 15%;
QRREs: R = 4%; 15
other retail: R = 0.03 × (1 − e^(−35×PD))/(1 − e^(−35)) + 0.16 × [1 − (1 − e^(−35×PD))/(1 − e^(−35))].
Figure III.B.6.1 gives the correlation parameter R for retail as well as corporate exposures. Correlations are in practice smaller for retail than for wholesale exposures. Corporate exposures have declining correlations ranging from 24% to 12%. Retail correlations are generally smaller: flat at 4% and 15% for QRREs and mortgages, respectively, and declining from 16% to 3% for other retail.

15 In CP3, the correlation for QRREs was originally given by
  R = 0.02 × (1 − e^(−50×PD))/(1 − e^(−50)) + 0.11 × [1 − (1 − e^(−50×PD))/(1 − e^(−50))].
Industry groups recommended revisions for the retail correlation curves based on best practices (e.g.
RMA, 2003), suggesting that correlations for retail exposures were lower and not dependent on PD, unlike wholesale exposures. In response, the final version of the accord uses a flat 4% correlation.

Figure III.B.6.1: Correlation parameter R as a function of PD, for corporate, mortgage, QRRE (CP3 and final) and other retail exposures [chart not reproduced in this extraction; the correlations run from 0 to 0.24 for PDs between 0 and 20%]

In (III.B.6.9), as with wholesale exposures, EL = LGD × PD is excluded from minimum capital, but it is to be monitored through rules that determine the value of a bank’s total regulatory capital. 16

III.B.6.4.6 IRB for SME Exposures
Exposures not classified as purely ‘retail’, but with turnover of less than €50m, are classified as SMEs. They are further divided into:
retail SMEs, where exposures are less than €1m (they may be treated as retail exposures if they are managed by the bank in the same way as retail exposures);
corporate SMEs, where the exposure is more than €1m.
SMEs that fall into the corporate approach apply the corporate IRB risk-weight formula, with an optional firm-size adjustment which leads to a discount in minimum capital. The value of the adjustment is a function of the turnover of the borrower (up to €50 million). The average discount will be about 10%, and it can range between 1% and 20%. Within this range, a mid-sized firm with a 2% PD would be weighted at 100%.
Retail SMEs can be treated using the retail IRB formulae. This requires showing that their exposures are below the €1m threshold and that the bank indeed manages them as aggregated retail (lower-risk, small, diversified loans managed on a pooled basis).

16 In this sense, banks will be required to compare loan loss provisions (general and specific) to EL, and to deduct any shortfall from capital; excess provisions are credited through Tier 2 capital.
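The retail risk weights can be sketched in the same style, assuming the same SciPy-based conventions as the corporate function above; the product labels are ours.

```python
# Sketch of the retail IRB capital requirement, equation (III.B.6.9),
# with the BCBS product-category correlations. No maturity adjustment:
# retail capital covers default risk only.
from math import exp, sqrt
from scipy.stats import norm

def retail_R(pd: float, product: str) -> float:
    if product == "mortgage":
        return 0.15
    if product == "qrre":
        return 0.04
    # other retail: declines from 0.16 to 0.03 as PD increases
    w = (1 - exp(-35 * pd)) / (1 - exp(-35))
    return 0.03 * w + 0.16 * (1 - w)

def retail_K(pd: float, lgd: float, product: str) -> float:
    r = retail_R(pd, product)
    q = norm.cdf((norm.ppf(pd) + sqrt(r) * norm.ppf(0.999)) / sqrt(1 - r))
    return lgd * (q - pd)

# Identical PD/LGD, three pools: the higher mortgage correlation yields
# the largest capital requirement.
for product in ("mortgage", "qrre", "other"):
    print(product, round(retail_K(0.02, 0.45, product), 4))
```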
III.B.6.4.7 IRB for Specialised Lending and Equity Exposures
Specialised lending is associated with the financing of individual projects where the repayment is highly dependent on the performance of the underlying pool or collateral. Two options are given to treat these exposures:
For all but one of the specialised lending subcategories, if banks can meet the minimum criteria for the estimation of the relevant inputs, they can use the corporate IRB framework to calculate risk weights.
Since the hurdles for meeting these criteria may be more difficult in practice for this set of exposures, CP3 also includes an additional option that only requires that a bank classify such exposures into five distinct quality grades, and it provides a specific risk weight for each of these grades.
For ‘high-volatility commercial real estate’ (HVCRE), IRB banks that can estimate the required inputs will use a separate risk weight that is more conservative than the general corporate risk weight.

IRB banks must treat their equity exposures separately. Two distinct approaches are given:
The first approach builds on the PD/LGD approach for corporate exposures and requires banks to provide their own PD estimates for the associated equity exposures; for minimum capital calculations, each exposure then receives the risk weight corresponding to its internal rating grade. This approach, however, mandates the use of a 90% LGD value and also imposes various other limitations, including a minimum risk weight of 100% in many circumstances.
The second approach provides the opportunity to model the potential decrease in the market value of equity holdings over a quarterly holding period. A simplified version of this approach, with fixed risk weights for public and private equities, is also included.

III.B.6.4.8 Comments on Pillar II
Pillar II of the Basel Accord (supervisory review) is based on a series of guiding principles which point to the need for banks to assess their capital adequacy positions relative to their overall risks, and for supervisors to review and take appropriate actions in response to those assessments. Important new components of Pillar II for credit risk include the treatment of the following:
Stress testing. Banks adopting the IRB approach to credit risk will be required to perform meaningfully conservative stress tests of their own design to estimate the potential increase in capital requirements during a stress scenario. Stress-test results are to be used as a means of ensuring that banks hold a sufficient capital buffer to protect against adverse or uncertain economic conditions. To the extent that there is a capital shortfall, supervisors may, for example, require a bank to reduce its risks or increase its existing capital resources to cover its minimum capital requirements plus the results of a recalculated stress test.
Concentration risk. Minimum capital risk weights assume that the portfolio is large and well diversified. Within Pillar II, banks will need to assess the degree of concentration risk they incur.
Residual risks arising from the use of collateral, guarantees and credit derivatives, as well as specific securitisation exposures (such as significant risk transfer and considerations related to the use of call provisions and early amortisation features).
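Pillar II stress testing can be illustrated with the risk-weight function already sketched above: recomputing K under stressed PDs gives a first-pass estimate of the capital increase a downturn scenario implies. The portfolio and the doubling of PDs below are purely hypothetical.

```python
# Illustrative Pillar II-style stress test: recompute IRB capital under a
# hypothetical scenario in which all PDs double. Reuses corporate_K from
# the sketch above; the positions and the x2 PD shock are invented.
portfolio = [  # (EAD, PD, LGD)
    (100.0, 0.005, 0.45),
    (250.0, 0.020, 0.45),
    (50.0, 0.050, 0.75),
]

def total_capital(positions, pd_stress=1.0):
    return sum(ead * corporate_K(min(pd * pd_stress, 0.999), lgd)
               for ead, pd, lgd in positions)

base = total_capital(portfolio)
stressed = total_capital(portfolio, pd_stress=2.0)
print(round(base, 2), round(stressed, 2), round(stressed - base, 2))
```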
III.B.6.5 Basel II: Credit Model Estimation and Validation
In this section we broadly introduce the Basel II methodology for probability of default estimation, the distinction between point-in-time and through-the-cycle ratings, the minimum standards for credit monitoring processes, and the validation methods.

III.B.6.5.1 Methodology for PD Estimation
Basel II requires that PDs be derived from a two-stage process:
1. Each obligor must be classified into a risk bucket. Obligors within a bucket share the same credit quality as assessed by the bank’s internal credit rating system, which must be based on clear rating criteria.
2. A pooled PD is calculated for each bucket, and this PD is assigned to each obligor in the given bucket.
Pooled PDs must be long-run averages of one-year realised default rates for borrowers in the bucket. Since pooled PDs are assigned to rating buckets (rather than to individual obligors directly), they can differ meaningfully from individual obligors’ PDs that result from a forecasting model.

III.B.6.5.2 Point-in-Time and Through-the-Cycle Ratings
In general, credit rating approaches can be classified into two categories (see, for example, BCBS, 2000):
In a point-in-time (PIT) rating approach, obligors are classified into rating grades based on the best available current credit quality information. The internal rating reflects an assessment of the borrower’s current condition and/or most likely future condition over the time horizon. Thus, a rating changes as the borrower’s conditions change over the credit/business cycle: PIT ratings tend to rise during business expansions as obligors’ creditworthiness improves, and tend to fall during recessions.
In a through-the-cycle (TTC) rating approach, obligors are assigned to rating grades based on their ability to remain solvent over a full business cycle or during stress events. A borrower’s riskiness assessment is thus based on a worst-case, ‘bottom of the cycle’ scenario. Since TTC ratings emphasise stress conditions, borrowers’ ratings would tend to be more stable over the credit/business cycle.
Thus, the credit ratings approach can have a substantial effect on the minimum capital requirements.
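The two-stage quantification lends itself to a simple illustration. The sketch below pools yearly realised default rates by rating bucket and takes their long-run average, as the text requires; the bucket names and counts are hypothetical.

```python
# Sketch of pooled-PD estimation: the pooled PD of a bucket is the
# long-run average of its one-year realised default rates.
# Yearly history per bucket: (number of obligors, number of defaults).
history = {
    "BBB": [(1000, 3), (1100, 2), (950, 8), (1050, 4)],   # hypothetical
    "BB":  [(400, 5), (420, 4), (380, 11), (410, 6)],
}

def pooled_pd(observations) -> float:
    rates = [d / n for n, d in observations]   # one-year default rates
    return sum(rates) / len(rates)             # long-run average

for bucket, obs in history.items():
    print(bucket, round(pooled_pd(obs), 5))
```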
III.B.6.5.3 Minimum Standards for Quantification and Credit Monitoring Processes
Basel II gives banks great flexibility in determining how obligors are assigned to buckets, but it establishes minimum standards for their credit monitoring processes. In practical terms, banks that do not estimate PDs effectively, or that systematically misrepresent them, will report substantial differences with respect to their peers.

While not explicitly requiring that rating systems be PIT or TTC, Basel II does hint at a preference for TTC approaches. The range of economic conditions that is considered when making assessments must be consistent with current conditions and those that are likely to occur over a business cycle within the respective industry/geographic region. For example, a bank may base rating assignments on specific, appropriate stress scenarios. Alternatively, a bank may take into account borrower characteristics that are reflective of the borrower’s vulnerability to adverse economic conditions or unexpected events, without explicitly specifying a stress scenario. 17 In practice, there is large variation between banks’ rating approaches, and the terms PIT and TTC are often defined poorly and used differently across institutions.

Other minimum standards include the following:
Internal rating systems should accurately and consistently differentiate degrees of risk.
Banks must define clearly and objectively the criteria for their rating categories.
Banks must have independent and transparent ratings processes and internal reviews.
A strong control environment must be in place to ensure that banks’ rating systems perform as intended and that the resulting ratings are accurate.
Furthermore, banks will be expected to have in place a process that enables them to collect, store and utilise loss statistics over time in a reliable manner. Banks can rely on their own internal data, or on data derived from external sources, as long as they can demonstrate the relevance of such data to their own exposures.

17 From BCBS (2004, paragraph 415): ‘A borrower rating must represent the bank’s assessment of the borrower’s ability and willingness to contractually perform despite adverse economic conditions or the occurrence of unexpected events.’

III.B.6.5.4 Validation of Estimates
Basel II requires that banks have a robust system in place to validate the accuracy and consistency of rating systems, processes, and the estimation of all relevant risk components. While the accord does not go into methodological details, regulators broadly describe two empirical approaches for validating PDs:
Benchmarking involves comparing reported PDs for similar obligors across banks and other external systems.
Backtesting compares the pooled PD for a grade with the actual observed (out-of-sample) default frequencies for that grade. If a grade’s pooled PD is truly an estimate of the long-run average of the grade’s observed yearly default frequencies, over time the two should converge. In practice, however, this convergence could take many years, thus presenting important technical challenges.

III.B.6.6 Basel II: Securitisation
A securitisation is a financial structure where cash flows from an underlying pool of exposures are used to service one or more stratified positions, or tranches, reflecting different degrees of credit risk. Payments of the structure depend on the performance of the underlying exposures. In a traditional securitisation, the underlying pool commonly contains standard credit instruments such as bonds or loans. In contrast, a synthetic securitisation uses credit derivatives or guarantees in a funded (e.g. credit-linked notes) or an unfunded (e.g. credit default swaps) way. Common examples of securitisation structures are collateralised bond obligations, collateralised loan obligations and asset-backed securities (see the securitisation material in Volume I for more details on these structures).

Securitisation by its very nature relates to the transfer of risks associated with the credit exposures of a bank to other parties, and in this respect it provides better risk diversification and contributes to enhancing financial stability. However, some securitisations have enabled banks under the Basel I Accord to avoid maintaining capital commensurate with the risks to which they are exposed. This is commonly referred to as regulatory arbitrage: the avoidance of minimum regulatory capital charges through the sale or securitisation of a bank’s assets for which the true risk (and hence economic capital) is much lower than the regulatory capital. Basel II provides a specific treatment for securitisation, which requires banks to look to the economic substance of a securitisation transaction when determining the appropriate capital requirement under both the standardised and IRB treatments.

Under the standardised approach, banks must assign risk weights prescribed by the accord according to various criteria, such as the facility type and its external rating (if available). Banks that apply the standardised approach to the type of exposures securitised must also use it under the securitisation framework. Some examples of capital charges under the standardised approach are as follows:
A structure with a long-term rating of BBB is assigned a capital charge of 8% times its notional, while an A-rated structure is assigned 4% (100% and 50% risk weights, respectively).
Most unrated structures have a capital factor of 100% (exceptions might be some most senior tranches).
Eligible liquidity facilities satisfying some basic criteria might be assigned a lower capital charge by means of a credit conversion factor (CCF) of less than 100% (total capital is the product of the notional, the capital factor and the CCF).
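These standardised securitisation charges can be expressed compactly; the sketch below reproduces the examples just given (the 50% CCF shown for the liquidity facility is a hypothetical value for illustration).

```python
# Standardised securitisation capital: notional x CCF x risk weight x 8%.
def securitisation_capital(notional: float, risk_weight: float,
                           ccf: float = 1.0) -> float:
    return notional * ccf * risk_weight * 0.08

print(securitisation_capital(100.0, 1.00))       # BBB tranche: 8.0
print(securitisation_capital(100.0, 0.50))       # A tranche: 4.0
print(securitisation_capital(100.0, 1.00, 0.5))  # liquidity facility with
                                                 # a hypothetical 50% CCF
```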
Banks applying the IRB approach for the type of exposures securitised must also apply it to securitisations. The IRB approach proposes three methods for the calculation:
Ratings-based approach (RBA). In this case capital factors are tabulated based on a tranche’s credit rating and thickness, as well as the granularity of the underlying pool. For example, a thick AAA tranche with a granular pool is assigned 0.56% capital (a 7% risk weight), while a BB+ tranche gets 20% capital (a 250% risk weight). The RBA must be applied to securitisation exposures that are rated, or for which a rating can be inferred.
Supervisory formula approach (SFA). This approach is an attempt to provide a closed-form capital charge, based on a bottom-up risk assessment of the structure. In essence, the SFA is based on five bank-supplied parameters:
Kirb – the capital charge of the underlying pool of exposures, had the assets not been securitised;
L – the credit enhancement supporting a given tranche (i.e., how big the buffer absorbing credit losses is before they hit the tranche);
T – the thickness of the tranche;
N – the effective number of exposures in the pool;
LGD – the exposure-weighted average LGD of the pool.
The methodology used to determine the formula is based on mathematical modelling similar to that used to derive the IRB capital charge for individual exposures (see equation (III.B.6.6)); those interested in the mathematics of the SFA are referred to Gordy and Jones (2003) and Pykhtin and Dev (2002, 2003).
Internal assessment approach (IAA). Subject to meeting various operational requirements, a bank may use its internal assessments of the credit quality of the securitisation exposures that it extends to asset-backed commercial paper programmes (e.g. liquidity facilities and credit enhancements). The internal assessment is then mapped to an equivalent external rating, which is used to determine the appropriate risk weights under the RBA.
Basel II prescribes the use of these three approaches in a hierarchical manner. Where an external or an inferred rating is not available, either the SFA or the IAA must be applied. Securitisation exposures to which none of these approaches can be applied must be deducted. The reader is further referred to the various BCBS documents on securitisation.

III.B.6.7 Advanced Topics on Economic Credit Capital 18
In this section we review the application of credit risk contribution methodologies for ECC allocation, the shortcomings of VaR for ECC, and coherent risk measures.

III.B.6.7.1 Credit Capital Allocation and Marginal Credit Risk Contributions
In this section we briefly highlight key issues in the application of the methodologies introduced in Section III.0.4 for computing marginal credit risk contributions for ECC allocation. In addition to computing the total ECC for a portfolio, it is important to develop methodologies to attribute this capital a posteriori to various sub-portfolios – such as the firm’s activities, business units and even individual transactions – and to allocate it a priori in an optimal fashion, to maximise risk-adjusted returns. EC allocation down the portfolio is required for management decision support and business planning, performance measurement and risk-based compensation, pricing, profitability assessment and limits, and building optimal risk–return portfolios and strategies.

Section III.0.4 classifies EC contributions into stand-alone contributions, incremental contributions and marginal contributions. There is no unique method to allocate ECC; 19 every methodology has its advantages and disadvantages, and might be more appropriate for a particular managerial application. The most common approach used today to attribute ECC on a diversified basis is based on the marginal contribution to the volatility (or standard deviation) of the portfolio losses. Such allocations are generally ineffective for credit risk, since loss distributions are far from normally distributed, and in some cases a loan’s capital charge can even exceed its exposure (see Praschnik et al., 2001; Kalkbrener et al., 2004).

Given the definition of ECC, the natural choice for allocating capital is the risk contributions to a VaR-based measure. However, VaR has several shortcomings for this purpose, since it is not a coherent risk measure (see Section III.B.6.7.2). Specifically, while VaR is sub-additive for normal distributions, this is not true in general, producing inconsistent capital charges. Furthermore, the pointwise nature of VaR has generally led to difficulties in computing accurate and stable risk contributions with simulation. 20

18 This section is not mandatory for the exam, but is added for completeness.
19 The reader is reminded that there is currently no universal terminology for these methodologies in the literature.
20 The discreteness of individual credit losses leads to non-smooth loss profiles and marginal contributions.
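To see what the volatility-based allocation involves (and why it can misbehave for skewed credit losses), the following illustrative simulation computes each loan's covariance-based contribution, which by Euler's theorem sums to the portfolio loss volatility. All portfolio parameters are invented for the illustration.

```python
# Sketch of volatility-based capital allocation for a simulated credit
# portfolio: each loan's contribution is cov(L_i, L) / sd(L), and the
# contributions sum to sd(L). Illustrative parameters only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n_scen, n_loans, rho = 100_000, 5, 0.15
pd_, lgd, ead = 0.02, 0.5, 100.0

# One-factor Gaussian model: default when the asset return falls below
# the default threshold N^{-1}(PD).
z = rng.standard_normal((n_scen, 1))              # systematic factor
eps = rng.standard_normal((n_scen, n_loans))      # idiosyncratic noise
assets = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps
losses = (assets < norm.ppf(pd_)) * lgd * ead     # scenario loss per loan

total = losses.sum(axis=1)
sd = total.std(ddof=1)
contrib = [np.cov(losses[:, i], total, ddof=1)[0, 1] / sd
           for i in range(n_loans)]
print(np.round(contrib, 2), round(sd, 2))         # contributions sum to sd
```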
Tasche.The PRM Handbook – Volume III several authors have proposed the use of expected shortfall (ES) for attributing ECC (see.pdffactory. which include diversification through multiple factors..7.B.g. 2004). various methodologies have been devised in recent years for VaR and ES contributions in simulation models (see Kalkbrener et al. As explained in Section III. The confidence level is consistent with the desired credit rating of the firm.. Several semi-analytical approaches have also recently been proposed for VaR or ES contributions (e.16)).com 339 . Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. III.6. VaR. marginal contributions require the computation of a derivative of the risk measure (see equation (III. 2001. ES represents a good alternative both for measuring and allocating capital. As a coherent risk measure. 2002).4. Thus. VaR is not sub-additive in general. VaR is an intuitive measure for ECC.2 Shortcomings of VaR for ECC and Coherent Risk Measures In its common interpretation.0. 2003. Hallerbach. This requires.6. ES yields additive and diversifying capital allocations. ECC is a buffer that provides protection against potential credit losses... Given this definition. Simulation provides the flexibility to support more realistic credit models.g. for example. The general theory behind the definition and computation of these derivatives in terms of quantile measures (e. 1999). In particular. 2003).. Kalkbrener et al. In particular. 1999) is well developed and has become popular among academics and practitioners. to one where the buffer would cover the ‘expected losses conditional on reaching an x% loss’.5% and a 50% LGD.The PRM Handbook – Volume III While VaR is always sub-additive for normal loss distributions. Expected losses are $100 ? 0.B. However the stand-alone VaR of each loan is zero (thus the stand-alone capital of each loan remains.com 340 . the stand-alone capital of this position is negative. of course. Based on VaR. This leads to a 'negative' unexpected loss and. the type of credit loss (default only or mark-to-market) and the confidence level (or quantile) of the loss distribution. III. Furthermore. Now consider a portfolio that invests $100 equally in 10 one-year loans to different BBB obligors. Today. in more general cases the total portfolio VaR might be higher than the sum of stand-alone VaRs.005 = $0. Coherent risk measures such as expected shortfall present a good alternative both for measuring and allocating capital. However. This definition includes the time horizon.5 ? 0. the modification of the standard definition and interpretation of capital as a buffer to cover x% losses. Assume obligor defaults are independent and a 50% LGD for all of them.89% chance that there is at least one defaulted loan. Credit portfolio models must be defined and parameterised consistently with the ECC definition of the firm.25.25).. Since ECC is Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.25.6.4 Consider a simple one-year BBB loan with a notional of $100. they are used broadly by practitioners for estimating ECC and managing credit risk at the portfolio level. –$0.75. In this case. There is now also a 95. Thus. the 99% VaR is equal to $5 ($10 notional ? 50% LGD). EL is still $0. The theory of coherent risk measures (Artzner et al. thus. 
The theory of coherent risk measures (Artzner et al., 1999) addresses these shortcomings. Coherent risk measures such as expected shortfall present a good alternative both for measuring and for allocating capital: as a coherent risk measure, ES yields additive and diversifying capital allocations. This requires, in principle, a modification of the standard definition and interpretation of capital as a buffer to cover x% losses, to one where the buffer covers the ‘expected losses conditional on reaching an x% loss’.

III.B.6.8 Summary and Conclusions
This chapter has reviewed the main concepts for estimating and allocating ECC, as well as the current regulatory framework for credit capital.

Credit portfolio methodologies are the key tools for computing ECC from a bottom-up approach. Today, various institutions are starting to apply credit portfolio frameworks to compute and manage ECC at the enterprise level, and such frameworks are used broadly by practitioners for estimating ECC and managing credit risk at the portfolio level. Credit portfolio models must be defined and parameterised consistently with the ECC definition of the firm. This definition includes the time horizon, the type of credit loss (default only or mark-to-market) and the confidence level (or quantile) of the loss distribution; the confidence level is chosen consistently with the desired credit rating of the firm.
Since ECC is designed to absorb unexpected losses up to a certain confidence level, it is commonly estimated by a VaR-type measure (at the defined confidence level) which subtracts expected losses. We have further addressed some potential shortcomings of VaR for measuring risk as well as for allocating capital, and discussed other measures, such as expected shortfall, under which ECC acts instead as ‘a buffer for an expected loss conditional on exceeding a certain quantile’.

In the past, regulatory credit capital has differed significantly from ECC. The new Basel II Accord for banking regulation has introduced a closer alignment of regulatory credit capital with current best-practice credit risk management and ECC measurement. Basel II has introduced various approaches for minimum capital requirements of increased complexity and alignment with the credit riskiness of an institution. While today falling short of allowing the use of credit portfolio models to estimate regulatory credit capital, it allows banks, for the first time, to use internal models for estimating key credit risk components (PDs, exposures and LGDs). Furthermore, the IRB risk-weight formulae are based on solid credit portfolio modelling principles. Finally, with its three-pillar foundation, Basel II focuses not only on the computation of regulatory capital, but also on a holistic approach to managing risk at the enterprise level.

References
Artzner, P, Delbaen, F, Eber, J-M, and Heath, D (1999) Coherent measures of risk. Mathematical Finance, 9(3), pp. 203–228.
Basel Committee on Banking Supervision (1988) International convergence of capital measurement and capital standards. Available at http://www.bis.org
Basel Committee on Banking Supervision (1995) Basel capital accord: treatment of potential exposure for off-balance-sheet items. Available at http://www.bis.org
Basel Committee on Banking Supervision (2000) Range of practice in banks’ internal ratings systems. Discussion paper, available at http://www.bis.org
Basel Committee on Banking Supervision (2003) The new Basel capital accord: Consultative document. Available at http://www.bis.org
Basel Committee on Banking Supervision (2004) International convergence of capital measurement and capital standards: A revised framework. Available at http://www.bis.org
Canabarro, E, Picoult, E, and Wilde, T (2003) Analysing counterparty risk. Risk, September, pp. 117–122.
Gordy, M (2003) A risk-factor model foundation for ratings-based bank capital rules. Journal of Financial Intermediation, 12(3), pp. 199–232.
Gordy, M, and Jones, D (2003) Random tranches. Risk, March, pp. 78–83.
Gouriéroux, C, Laurent, J-P, and Scaillet, O (2000) Sensitivity analysis of values at risk. Journal of Empirical Finance, 7(3–4), pp. 225–245.
Hallerbach, W G (2003) Decomposing portfolio value-at-risk: a general analysis. Journal of Risk, 5(2), pp. 1–18.
Kalkbrener, M, Lotter, H, and Overbeck, L (2004) Sensible and efficient capital allocation for credit portfolios. Risk, January, pp. S19–S24.
Kupiec, P (2002) Calibrating your intuition: Capital allocation for market and credit risk. IMF Working Paper WP/02/99, available at http://www.imf.org
Kurth, A, and Tasche, D (2003) Contributions to credit risk. Risk, March, pp. 84–88.
Martin, R, Thompson, K, and Browne, C (2001) VAR: who contributes and how much? Risk, August, pp. 99–102.
Mausser, H, and Rosen, D (2004) Scenario-based risk management tools. In S W Wallace and W T Ziemba (eds), Applications of Stochastic Programming. Philadelphia: SIAM.
Praschnik, J, Hayt, G, and Principato, A (2001) Calculating the contribution. Risk, 14(10), pp. S25–S27.
Pykhtin, M, and Dev, A (2002) Credit risk in asset securitisations: an analytical model. Risk, May, pp. S16–S20.
Pykhtin, M, and Dev, A (2003) Coarse-grained CDOs. Risk, January, pp. 113–116.
Risk Management Association (2003) Retail credit economic capital estimation – best practices. Working paper, available at http://www.rmahq.org
Securities and Exchange Commission (2004) Proposed rule: Alternative net capital requirements for broker-dealers that are part of consolidated supervised entities. 17 CFR Part 240, available at http://www.sec.gov/rules/proposed/34-48690.htm
Tasche, D (2000) Conditional expectation as quantile derivative. Working paper, Technische Universität München.
Tasche, D (2002) Expected shortfall and beyond. Working paper, Technische Universität München.

III.C.1 The Operational Risk Management Framework
Michael K. Ong 152

In this chapter I provide a brief outline of how to establish an operational risk management framework within an institution. The chapter begins by highlighting some of the better-known losses in the recent past and argues why it is important for individual institutions to define what operational risk means to them. The chapter then discusses the goals and scope of an operational risk management framework. It outlines the key components of operational risk and presents some useful tools, e.g. the risk catalogue and risk scorecard, for identifying specific operational risk failures. Finally, it explains how to make the risk assessment process work through the involvement of senior management and every business unit within the institution.

152 Professor of Finance and Executive Director of the Center for Financial Markets, Stuart Graduate School of Business, Illinois Institute of Technology. The author wishes to extend his sincerest thanks to the editors, Carol Alexander and Elizabeth Sheedy, for their careful editing of this chapter.

III.C.1.1 Introduction
Operational risk management has become increasingly important for financial institutions over the past several years. The need for a better understanding of operational risk is driven primarily by two factors, namely, the growing sophistication of financial technology and the rapid deregulation and globalisation of the financial industry. These factors contribute to the increasing complexity of banking activities and, therefore, heighten the operational risk profile of the financial services industry. Over the past few years, a significant number of high-impact and high-profile losses, some leading to the demise of once revered, well-respected institutions, have pointed consistently, in one way or another, to failure in operational risk management.

Operational risk has received a lot of attention recently, although it is not an entirely new field of risk management. ‘Operational risk is as old as the banking industry itself,’ the rating agency Fitch reports, ‘and yet the industry has only recently arrived at a definition of what it is.’
This seemingly sudden awareness of operational risk management is quite ironic, considering that operational risk has always been an integral risk associated with doing business. Many of the biggest losses in the financial industry and the corporate arena can be attributed, either directly or indirectly, to operational risk failures. The Basel II proposals, scheduled for promulgation in 2006, have provided some guidance (primarily to banking institutions) on the types of operational risk failure and their associated loss event types.

Since operational risk will affect credit ratings, analysts will increasingly include it in their assessment of the management, the strategy and the expected long-term performance of the business. The Fitch report goes on to say that ‘in its rating analysis of banks, Fitch will be looking for evidence of a clearly articulated definition of operational risk, the development of its approach to the identification and assessment of key risks, the quality of an organization’s structure and operational risk culture, data collection efforts, and overall approach to operational risk quantification and management’ (Ramadurai et al., 2004).

Thus rating agencies are now clearly interested in how financial institutions manage their operational risk, and how institutions manage their operational risks is likely to influence how they will be rated. Moody’s believes that ‘operational risk management improves the quality and stability of earnings, thereby enhancing the competitive position of the bank and facilitating its long-term survival’ (Moody’s Investors Service, 2003).
Moody’s goes on to comment that: ‘The control of operational risk is fundamentally concerned with good management, which involves a tenacious process of vigilance and continuous improvement.’ The control of operational risk is thus a value-adding activity that impacts, at least in part, on bottom-line performance. It must, therefore, be a key consideration for any business.

Against the background of greater complexity and opaqueness in the banking industry due to technological advancement, the Basel Committee on Banking Supervision (2003) cites the emergence of new forms of risk that require attention immediately: ‘Developing banking practices suggest that risks other than credit, interest rate and market risk can be substantial.’ Examples of these new and growing risks faced by banks include:
If not properly controlled, the greater use of more highly automated technology has the potential to transform risks from manual processing errors to system failure risks, as greater reliance is placed on globally integrated systems.
Growth of e-commerce brings with it potential risks (e.g. internal and external fraud and system security issues) that are not yet fully understood.
Large-scale acquisitions, mergers, de-mergers and consolidations test the viability of new or newly integrated systems.
The emergence of banks acting as large-volume service providers creates the need for continual maintenance of high-grade internal controls and back-up systems.
Banks may engage in risk mitigation techniques (e.g. collateral, credit derivatives, netting arrangements and asset securitisations) to optimise their exposure to market risk and credit risk, but these techniques may in turn produce other forms of risk (e.g. legal risk).
Growing use of outsourcing arrangements and participation in clearing and settlement systems can mitigate some risks, but can also present significant other risks to banks.
The diverse set of risks listed above can be grouped under the heading of ‘operations risk’, and their emergence forms the basis for the regulatory pressure currently felt by many major financial institutions.

Should the impetus for developing a sound operational risk management framework be driven primarily by emerging concerns raised by rating agencies or by the threat of greater scrutiny from regulatory authorities? Or should operational risk attain the limelight it is currently basking under because of the impending capital charge being deliberated in Basel II? I think not. These motivations in isolation are unlikely to lead to successful implementation of an operational risk management function. A coherent, sound and successful operational risk management framework can only come from an internal realisation and desire amongst senior management that this is a value-adding activity that ultimately impacts the bottom line of the institution. Indeed, the Financial Services Authority (2003) reported that, even as late as mid-2003, the financial industry was still in the early stages of developing operational risk frameworks. In its survey, the FSA reported that ‘a majority of firms stated that their primary motivation for developing the operational framework was increased regulatory focus, with regulation a more significant driver in smaller firms than in major financial groups’.

III.C.1.2 Evidence of Operational Failures
Table III.C.1.1 lists some of the largest derivatives losses on the Street during the 1990s. In all cases the losses are attributable, in one way or another, to operational risk: the losses resulted from flaws in the risk management frameworks of the institutions concerned.

Table III.C.1.1: Publicly disclosed derivatives losses in the 1990s

  Company/Entity                 Loss ($m)   Area of loss
  Air Products                   113         Leverage & currency swaps
  Askin Securities               600         Mortgage-backed securities
  Baring Brothers                1240.5      Options
  Cargill (Minnetonka Fund)      100         Mortgage derivatives
  Codelco Chile                  200         Copper & precious metals futures and forwards
  Glaxo Holdings PLC             150         Mortgage derivatives
  Long Term Capital Management   4000        Currency & interest rate derivatives
  Metallgesellschaft             1340        Energy derivatives
  Orange County                  2000        Reverse repurchase agreements & leveraged structured notes
  Proctor & Gamble               157         Leveraged German mark–US dollar spread

  Source: Exhibit 1 in McCarthy (2000), taken from Brian Kettel, Derivatives: Valuable Tool or Wild Beast? Copyright © 1999 by Global Treasury News (www.gtnews.com).

One of the most dramatic and well-documented derivatives losses was the collapse in 1995 of Barings, Britain’s oldest merchant bank (200 years!). It ultimately resulted in over $1.2bn of losses. The Barings debacle is not the story of just one single solitary rogue trader, but rather of the breakdown of an entire organisation that failed to exercise sufficient oversight and control of its people at all levels. One person (Nick Leeson), based in Singapore (several thousand miles away from corporate headquarters), managed to circumvent internal systems over an extended period of time to hatch and hide his trading schemes. Since the bank’s risk management and control functions were very weak, the system of checks and balances failed at several operational and managerial junctures, and in more than one location where the bank operated. This all pointed to senior management’s abject failure to institute proper managerial, financial and operational control over the institution, its lack of clear directions and accountability for the processes within the bank, and the failure of technology to detect trading and booking anomalies for an extended period of time.

Yet all of these derivatives losses combined pale in comparison to the S&L crisis ($150bn) of the 1980s, the ‘non-performing’ real estate loans of Japanese banks ($500bn) in the early 1990s, the Credit Lyonnais bankruptcy ($24bn) due to bad debt in 1996, and the more recent asset management frauds at Deutsche Morgan Grenfell, Jardine Fleming, etc. 153 The early 2000s witnessed the multi-billion dollar collapses of Enron, WorldCom, Tyco, Parmalat, and many other fallen angels which, in more ways than one, brought about a sense of urgency for better corporate governance. Corporate governance in essence calls for greater accountability of senior management in an effort to combat corporate fraud. And corporate fraud is one common instance of operational risk failure.

Much more recently, in mid-March of 2004, Bank of America Corp. and its merger partner, FleetBoston Financial Corporation, agreed to pay a collective sum of $675m to settle charges with securities regulators that they had defrauded shareholders by allowing select investors to trade improperly in their mutual funds, New York Attorney General Eliot Spitzer said. Improper trading in mutual funds has cost banks and some funds management companies millions of dollars in fines. The Boston Globe reported on 16 March 2004: ‘With this agreement, mutual fund firms have now reached settlements totalling $1.65 billion, eclipsing the $1.4 billion Wall Street firms agreed to pay [in 2003] to settle charges their analysts issued biased research to win investment banking business.’

153 Details of other financial scandals can be found at http://www.ex.ac.uk/~rdavies/arian/scandals/.

III.C.1.3 Defining Operational Risk
What is operational risk? This depends on what an institution wishes to gain from its operational risk management function. There are many highfalutin and facetious ways to define operational risk.
No two institutions will have exactly the same definition of what operational risk means, since there are unique facets – such as composition of the business portfolio, risk appetite, internal culture, tangible resources such as information technology and systems, and intangibles such as people and process – that differentiate the types of operational risk the institutions are exposed to. Nevertheless, there are some very clear commonalities shared by different financial institutions. The most important element to take into account, however, is to choose a definition that is in line with the institution’s philosophy and a sound management culture of taking proactive stances in managing the risks of the enterprise.

One of the earliest definitions in the financial industry broadly defined operational risk in financial institutions as the ‘risk that external events, or deficiencies in internal controls or information systems, will result in an economic loss – whether the loss is anticipated to some extent or entirely unexpected’. There are two obvious observations here. First, this early industry definition identifies both the expected and unexpected losses attributable to operational mishaps: in simple terms, expected losses are those losses incurred during the natural course of doing business, whereas unexpected losses are usually associated with big surprises resulting from lapses in management and breakdowns in controls. Second, the scope of operational risk in this early definition is quite broad, extending to all facets and aspects of risk associated with both internal and external events.

This early industry definition eventually became the cornerstone of the official definition from Basel II. The first Basel definition of operational risk was simply ‘the risk of direct or indirect loss
resulting from inadequate or failed internal processes, people and systems or external events’. This definition includes legal risk, but Basel II explicitly excluded strategic and reputational risk. These exclusions are very important aspects of the daily operation of any financial institution, but are admittedly much more difficult to assess and manage. Concerns were expressed about the exact meaning of direct and indirect loss. Consequently, the current Basel II definition drops this distinction but provides clear guidance on which losses are relevant for regulatory capital purposes. This is achieved by defining the types of loss events that should be recorded in internal loss data.

In its September 2001 press release for the ‘Working Paper on the Regulatory Treatment of Operational Risk’, the Risk Management Group (RMG) of the Basel Committee on Banking Supervision defined operational risk as ‘the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events’. According to the RMG press release this is a ‘causal-based’ definition: ‘It is important to note that this definition is based on the underlying causes of operational risk. It seeks to identify why a loss happened and at the broadest level includes the breakdown by four causes: people, processes, systems and external factors. This “causal-based” definition, and more detailed specifications of it, is particularly useful for the discipline of managing operational risk within institutions. However, for the purpose of operational risk loss quantification and the pooling of loss data across banks, it is necessary to rely on definitions that are readily measurable and comparable. Given the current state of industry practice, this has led banks and supervisors to move towards the distinction between operational risk causes, actual measurable events (which may be due to a number of causes, many of which may not be fully understood), and the P&L effects (costs) of those events. Operational risk can be analysed at each of these levels.’

The Basel II definition is primarily for capital adequacy purposes. That is, a key output of the regulatory framework is a measure of the amount of capital required by a financial institution as a buffer against unexpected operational risks.

III.C.1.4 Types of Operational Risk
The types of operational risk encountered daily within an institution are quite diverse and plentiful. What are the main types of operational risk that financial institutions need to be wary of? The RMG struggled with this very same issue when it embarked on its event-by-event loss data collection exercise in June 2002. 154 In its press release, the Basel Committee on Banking Supervision (2002) described the goals of this exercise: ‘The primary purpose of this survey is to collect granular (event-by-event) operational risk loss data to help the Committee determine the appropriate form and structure of the AMA (Advanced Measurement Approach). To facilitate the collection of comparable loss data at both the granular and aggregate levels across banks, the Committee is again using its detailed framework for classifying losses. In the framework, losses are classified in terms of a matrix comprising eight standard business lines and seven loss event categories. These seven event categories are then further divided into 20 sub-categories and the Committee would like to receive data on individual loss events classified at this second level of detail if available.’

The eight standard business lines are: corporate finance; trading and sales; retail banking; commercial banking; payment and settlement; agency services; asset management; and retail brokerage.

154 Having had two previous quantitative impact studies (QIS) in the previous years, the RMG decided that this more recent loss data collection exercise would specifically concentrate on very granular loss data.

Table III.C.1.2: Basel II definition of business lines
[table not reproduced in this extraction]

In Table III.C.1.2 we see that investment banking as a primary business unit is further split into two level 1 sub-units: corporate finance, and trading and sales.
Furthermore, within the trading and sales sub-unit, there are further level 2 sub-delineations: sales; market making; proprietary positions; and treasury activities. Each of these sub-units is classified based on its respective business functions, such as fixed income, foreign exchange, equity, commodities, credit, funding, brokerage, and so on.

The Committee proposes looking at seven loss event categories associated with each business unit. These are depicted in Table III.C.1.3. The level 1 event types are the practical and obvious events: internal fraud; external fraud; employment practices and workplace safety; clients, products and business practices; damage to physical assets; business disruption and system failures; and execution, delivery and process management. Furthermore, within the level 1 event type category, there are at least two level 2 sub-categories – 20 sub-categories in total. Each of these sub-categories is again classified based on the associated business activities.

Table III.C.1.3: Basel II definition of loss event types
[table not reproduced in this extraction]

For internal purposes it is very important to establish suitable definitions of the different types of operational risk that are relevant for each individual institution. Once suitable definitions of event types are decided upon, the institution can proceed to establish a structure for the operational risk management framework. It is interesting to note that almost all of the so-called internationally active banks that are the primary focus of Basel II have their own unique definitions of operational risk event types, tailored to their particular businesses, corporate culture and risk appetite. For example, one internationally active bank defines operational risk as: ‘The risk of inadequate identification of and/or response to shortcomings in organizational structure, systems, transaction processing, external threats, internal controls, security measures and/or human error, negatively affecting the bank’s ability to realize its objectives.’ A smaller domestic bank defines operational risk simply as: ‘The risk associated with the potential for systems failure in a given market.’
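For loss-data collection, the Basel matrix is essentially a two-dimensional key: business line by event type. A minimal sketch of a loss record along these lines (the field names are illustrative, not prescribed by the accord):

```python
# Sketch of a loss-event record classified on the Basel II matrix of
# eight business lines x seven level 1 event types.
BUSINESS_LINES = [
    "corporate finance", "trading and sales", "retail banking",
    "commercial banking", "payment and settlement", "agency services",
    "asset management", "retail brokerage",
]
EVENT_TYPES = [
    "internal fraud", "external fraud",
    "employment practices and workplace safety",
    "clients, products and business practices",
    "damage to physical assets",
    "business disruption and system failures",
    "execution, delivery and process management",
]

def record_loss(business_line: str, event_type: str, amount: float) -> dict:
    assert business_line in BUSINESS_LINES and event_type in EVENT_TYPES
    return {"business_line": business_line, "event_type": event_type,
            "amount": amount}

print(record_loss("trading and sales", "internal fraud", 1_250_000.0))
```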
Regardless of how an institution chooses to define operational risk, it is vitally important at the outset to articulate explicitly its target objectives and key concerns. I suggest the following objectives when establishing an operational risk management function:

1. To define and explain, formally and explicitly, what the words 'operational risk' mean to the institution.
2. To avoid potentially catastrophic losses.
3. To enable the institution to anticipate all kinds of risks more effectively, thereby preventing failures from happening.
4. To generate a broader understanding of enterprise-wide operational risk issues at all levels and business units of the institution – in addition to the more commonly monitored credit risk and market risk.
5. To make the institution less vulnerable to breakdowns in internal controls and corporate governance – such as fraud, error, or failure to perform in a timely manner – which could unduly compromise the interests of the institution.
6. To identify problem areas in the institution before they become critical.
7. To prevent operational mishaps from occurring.
8. To establish clarity of people's roles, responsibilities and accountability.
9. To strengthen management oversight at all levels.
10. To identify business units in the institution with high volumes, high turnover (i.e., transactions per unit time), a high degree of structural change, and highly complex support systems. Such business units are especially susceptible to operational risk.
11. To empower business units with the responsibility and accountability for the business risks they assume on a daily basis.
12. To provide objective measurements of performance for operational risk management.
13. To monitor the danger signs of both income and expense volatility.
14. To effect a change of behaviour within the institution and to enhance the culture of control and compliance within the enterprise.
15. To ensure compliance with all risk policies of the institution and to modify risk policies where appropriate.
16. To provide objective information so that all services offered by the institution take account of operational risks.
17. To ensure that there is a clear, orderly and concise measure of due diligence on all risk-taking and non-risk-taking activities of the institution.
18. To provide the executive committee regularly with a concise 'state of the enterprise' report for strategic and planning purposes. (In this context, the executive committee, composed only of very senior members of management, is assumed to be the highest governing body of the institution.)

These objectives tacitly assume that the institution already has in place robust credit risk and market risk management functions, supported by audit, compliance and risk control oversight. Note that the operational risk management objectives delineated above should apply to all business units, including those responsible for market risk and credit risk management.

In practice, the scope of an operational risk management function should encompass virtually every aspect of the business process undertaken by the enterprise. The scope must transcend those business activities that are traditionally most susceptible to 'operations risk' – that is, those with high volume, high turnover and highly complex support systems, e.g. trading units, the back office, and payment systems.
It is true that the business activities sharing these characteristics have the greatest exposure to operational risk failures. Nevertheless, other business activities could potentially sustain economic losses of similar magnitude.

Should operational risk management encompass external events? The goal of operational risk management must be to focus on internal processes (as opposed to external events), since only internal processes are within the control of the firm. The firm's response to external events is, however, a valid concern for operational risk management. Hence we examine operational risk from two interrelated perspectives. Table III.C.1.4 distinguishes between operational 'strategic' risk (i.e., a flawed internal response to external stimuli) and operational 'failure' risk.

Table III.C.1.4: Two broad categories of operational risk

Operational strategic risk ('external'): the risk of choosing an inappropriate strategy in response to external factors, such as political, taxation, regulatory, societal and competitive factors, among others.

Operational failure risk ('internal'): the risk encountered in the pursuit of a particular chosen strategy, due to people, process and technology.

Source: adapted from Crouhy et al. (1998)

Failure to comply with externally dictated strategic risk factors – such as changes in tax laws or new derivatives accounting treatment (e.g., FASB 133) – ultimately translates into an internal operational risk failure. (For example, an institution operating in another country might incur an unexpected tax liability due to an unforeseen change in local taxation rules that was not anticipated by the accounting department – definitely an operational risk item that could lead to large unanticipated fines and tax liabilities.) Once senior management issues the call to action in response to an external stimulus, there must be no internal breakdown in the people, processes and technologies supporting that call to action.

III.C.1.6 Key Components of Operational Risk

In view of the very wide scope of an institution's operational risk management function, we might want to concentrate on some key components of operational risk, such as the following.

(i) Core operational capability. Risks to the institution's core operational capability include the risk of premises, people or systems becoming unavailable due to: natural disasters, fire, bombs or technical glitches; loss of utilities such as power, water or transportation; employee disputes such as strikes; loss of key operational personnel; and the loss or inadequacy of systems capabilities due to computer viruses or Y2K issues. All of these events seriously disrupt the institution's core competency in supporting long-term and stable operations, and so represent a considerable exposure in terms of their possible impact on the institution's future earnings and credibility. The good news is that, for the most part, many of these risks are largely insurable at some cost to the institution. Insurance is, therefore, a useful mitigant for these kinds of failure risk. (Basel II currently does not recognise insurance as a risk mitigant, except in some restricted cases within the Advanced Measurement Approach – somewhat odd, considering that banks have routinely used insurance as a risk management tool.)

(ii) People. An institution's most important assets are its good people. Unfortunately, the category of people is also by far the largest cause of operational risk failures.
People contribute a myriad of problems through: human error; fraud; lack of honesty and integrity; lack of cooperation and teamwork; in-fighting, jealousies and rumour-mongering; personal sabotage; office politics; lack of segregation and the risk of collusion; lack of professionalism and customer focus; over-reliance on key individuals; insufficient skills, training and education; insubordination; employment disputes; poor management and supervision; and a lack of culture of control, discipline and compliance.

On fraud, an FDIC study found that it was the main contributing factor in 25% of 92 bank failures in the period 1960–77; the proportion rises to 83.9% if one includes 'insider fraud', i.e., improper lending to individuals or groups connected with the bank. A review by the Bank of England suggested that, in the UK in the period 1984–96, fraudulent concealment was a major contributory factor in 7 out of 22 cases of bank problems; in many of these cases, the frauds were perpetrated by senior managers of the banks themselves. On the subject of integrity and honesty, 'rogue' trading generally surfaces as the most obvious case of dishonesty and fraud; however, the problem of fraud also extends to teller theft, mailroom theft, illegal funds transfers and the 'insider' fraud noted above. Some well-known cases of securities fraud and rogue trading are Kidder Peabody (Jett, $340m, 1994), Orange County (Citron, $1.6bn, 1994), Barings (Leeson, $1.2bn, 1995), Daiwa (Iguchi, $1.1bn, 1995) and Sumitomo (Hamanaka, $1.8bn, 1995), among many other cases involving varying amounts of losses. Nor is rogue trading the only contributor to well-publicised derivatives losses: Table III.C.1.1 lists the largest derivatives losses attributable largely to failures in risk management in which operational risk played a major role. (Ask yourself this question: how many institution-wide problems and inefficiencies have been caused by internal fights and office politics?)

(iii) Client relationships. An institution derives much of its value from its reputation and the services it provides to its client base. Any damage to an institution's reputation has the potential to disrupt revenue flow. From an operational risk perspective, the institution needs to assess how disreputable activities might harm its client relationships. Examples include: money laundering; Nazi gold (an important reputation risk issue for a few big European banks in the recent past); improper client suitability and lack of disclosure (among the most highly publicised cases are the Gibson Greetings and Procter & Gamble lawsuits against Bankers Trust in 1994, and Orange County's collective legal debacle with Merrill Lynch, Morgan Stanley Dean Witter and Nomura Securities, also in 1994); false valuations of client assets to mislead or conceal losses; collusive relationships with broker-dealers; cosy associations with highly leveraged institutions; and dishonest practices amidst competition that can harm the institution's reputation.

(iv) Transactional and booking systems. Operational risk failures are no longer limited to settlement risk in the trading accounts or back office.
More recently, with advances in automation, transactional issues also include: data capture and processing; deal confirmation and contractual documentation, e.g. ISDA master agreements (the best-known documented case of a booking error occurred at Salomon Brothers in the mid-1990s, where a back-office confirmation for a trade was erroneously booked several orders of magnitude larger than the intended trade); collateral management; and general processing and payment/settlement errors, which not only disrupt the flow of business but also put the institution at risk of litigation. In addition, corporate banking activities contributing to operational failures may include: correspondent banking; payment services; treasury services; private trust and executor services; structured finance; custody; and leasing. From a retail banking perspective, there is even more room for potential operational failures, associated with such retail banking activities as: mortgage servicing; funds management; deposit taking; lending; foreign exchange; custody; credit cards; ATMs; private banking; and insurance. The transactional processes associated with handling retail customers are many: payments; cheque clearing; cash handling and teller errors; credit analysis; account opening; documentation; mortgage applications; processing credit/debit card transactions; accepting new clients; processing insurance claims; interest charges; and payroll processes. Finally, associated with each of these business processes is a heavy reliance on a sound and stable systems infrastructure within the institution.

(v) Reconciliation and accounting. Of course, the reconciliation of transactions at the different levels of institutional activity is important from a bookkeeping perspective. But, more broadly, a finance or accounting department mired in a quagmire of bureaucracy and mere paper-pushing can do the institution a lot of harm by failing to provide senior management with a precise picture of the state of the institution's finances. (Ask yourself this question: through our current finance and accounting systems, do we know for sure where we are generating the greatest revenue for the least amount of risk that we take?) The inability to reconcile properly the revenue-generating activities with the general ledger inhibits the institution from strategically assessing the performance of its business units and their growth potential; it also undermines the institution's ability properly to allocate its scarce resources, such as capital. Another important aspect of reconciliation and accounting is that it enables the institution to identify areas of inefficient capital allocation. In addition, the inability of the finance, legal or accounting departments to assess fully the implications of regulatory changes puts the institution at a serious disadvantage with regard to tax shelters and favourable legal treatment of the institution's assets and liabilities. More recently, we have new regulatory directives on anti-money laundering and terrorist funding, the Sarbanes-Oxley Act of 2002 (which requires all listed institutions to enforce more effective corporate governance and more effective financial statement reporting), and many other new directives concerning securities fraud promoted by the Securities and Exchange Commission.

(vi) Change and new activities. To stagnate is to fall behind. Inability to adapt to change may damage an institution's reputation or disrupt the continuity of its old businesses. For example, the introduction of the euro, or a change in regulatory accounting rules such as FAS 133 (ask yourself: how quickly can the institution conform to the FAS 133 directives without unduly taxing its resources and disrupting its day-to-day business operations?), requires the institution to be more vigilant in implementing new technology, expanding its staff and re-engineering its processes. Moreover, in responding to rapid industry developments, the institution should not stumble and fall when initiating new business activities, e.g. launching new products or entering into new markets (ask yourself: can the institution leverage its current resources and technology to enter a new market without unduly incurring additional expenses? If the answer is negative, there is operational risk in the headline risk categories involving people, processes and technology).

(vii) Expense and revenue volatility. A sure sign of operational failure associated with management control is a rapid increase in expenses and significant deviations from budget. (Ask yourself this question: in the past three years, did the increase in expenditure result in a greater market share or revenue for the institution? If the answer is negative, there is operational risk involving inefficiency and the misallocation of precious resources.) Expense increases (including bonuses, salaries, and systems infrastructure spending) without adequate return signal a potential breakdown and inefficiencies in people, process and technology. A rapid increase in expenses is not necessarily a desirable symbol of growth; in my years of observation, it is also a sure sign of lax accounting and a complacent management on the verge of going out of control, and it is normally associated with a bank's undesirable corporate culture of wanton waste and lack of accountability. On a related matter, excessive revenue volatility is the result of at least two related factors: the inability to respond properly to external market conditions, and the failure to control the operational cost base. Each of these key factors has its obvious attendant operational failures in people, process and technology. I need not elaborate further on this.
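The warning signs described in (vii) lend themselves to a very simple screen. The sketch below is a minimal, hypothetical illustration of the idea – flagging units whose expense overrun against budget is not matched by revenue growth; the field names, thresholds and figures are all assumptions for illustration, not part of any prescribed framework.

    # Hypothetical budget-variance screen for the expense/revenue warning
    # signs described above. Thresholds and field names are illustrative
    # assumptions only.

    def flag_expense_warnings(units, max_overrun=0.10):
        """Return units whose actual expenses exceed budget by more than
        max_overrun (10% by default) without a matching rise in revenue."""
        warnings = []
        for u in units:
            overrun = (u["actual_expense"] - u["budget_expense"]) / u["budget_expense"]
            revenue_growth = (u["revenue"] - u["prior_revenue"]) / u["prior_revenue"]
            if overrun > max_overrun and revenue_growth < overrun:
                warnings.append((u["name"], round(overrun, 3), round(revenue_growth, 3)))
        return warnings

    units = [
        {"name": "Trading", "budget_expense": 100.0, "actual_expense": 125.0,
         "revenue": 150.0, "prior_revenue": 148.0},
        {"name": "Retail", "budget_expense": 80.0, "actual_expense": 82.0,
         "revenue": 95.0, "prior_revenue": 90.0},
    ]
    print(flag_expense_warnings(units))  # [('Trading', 0.25, 0.014)]

A screen of this kind only points at candidates for investigation; as the text stresses, the root cause must still be established by management.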
III.C.1.7 Supervisory Guidance on Operational Risk

The Basel Committee on Banking Supervision has provided preliminary supervisory guidance for the management and supervision of operational risk. The guidelines are intended to serve as best or sound practices within the financial industry. To this end, the Committee defines the management of operational risk to mean the 'identification, assessment, monitoring and control/mitigation' of risk.

The Committee 'recognises that the exact approach for operational risk management chosen by an individual bank will depend on a range of factors, including its size and sophistication and the nature and complexity of its activities. However, despite these differences, clear strategies and oversight by the board of directors and senior management, a strong operational risk culture and internal control culture (including, among other things, clear lines of responsibility and segregation of duties), effective internal reporting, and contingency planning are all crucial elements of an effective operational risk management framework for banks of any size and scope. The Committee therefore believes that the principles outlined in this paper establish sound practices relevant to all banks' (Basel Committee on Banking Supervision, 2003). The Committee also notes that 'internal operational risk culture is taken to mean the combined set of individual and corporate values, attitudes, competencies and behaviours that determine a firm's commitment to and style of operational risk management'.

Recognising the different nature of operational risk in different institutions, the sound practice paper of February 2003 is structured around ten basic principles grouped into four main themes:

developing an appropriate risk management environment;
risk management: identification, assessment, monitoring, and mitigation/control;
the role of supervisors;
the role of disclosure.

The first theme emphasizes the importance of cultivating a risk-awareness culture within the institution, dictated directly from the highest level of the organization – the board of directors. By imposing ultimate responsibility on senior management via the board of directors, it recognises that risk management can only be facilitated properly if the process is clear and transparent at the outset. The second theme maintains that the management of risk comprises four important complementary activities: the first concerns surveillance and the identification of risk within the institution, followed by a thorough assessment and monitoring of events as they unfold, and then the devising of mechanisms to control and mitigate these risks – even before they occur. As stated earlier, prevention is key. The third theme emphasises the important role regulatory supervisors play in the risk management process of financial institutions, recognising that the safety and soundness of the financial system is a collaborative effort. Finally, the fourth theme concerns public disclosure to the market, which serves as an important binding constraint on the behaviour of financial institutions as they provide the public with their intermediary functions.

III.C.1.8 Identifying Operational Risk – the Risk Catalogue

How can an institution identify the potential operational risks lurking within the organization? In the recent past, many institutions have been surprised to discover that even the most obvious types of operational risk are widely prevalent within the organisation. The first step in the identification process is to require that each business unit (or operational unit) be assessed using a so-called risk catalogue, which adequately identifies all the risk categories relevant to the specific unit being assessed. The risk catalogue clearly identifies possible operational failures under three categories: people, process and technology. It also helps an institution to identify the key problem areas of operational risk, and this helps target the resources that will be allocated. We shall also see, in Section III.C.1.10, that the control process requires the identification of pertinent operational risk failures at two different levels: independent management oversight and self-assessment by the individual business units.

Consider a simple example: suppose we have determined that Business Unit A, due to its intrinsic business activities, is subject to some sources of operational risk. We can then check them off against our generic risk catalogue, as shown in Table III.C.1.5.

Table III.C.1.5: Risk catalogue for Business Unit A (risk types shown are for illustration only)

People Risk: incompetency; inadequate head counts; key personnel; management; communication; internal politics; conflict of interest; lack of cooperation; collusion and connivance; fraud.

Process Risk –
A. Model Risk: model or methodology error; pricing or mark-to-model error; availability of loss reserves; model complexity.
B. Transaction Risk: execution error; booking error; collateral, confirmation, matching and netting error; product complexity; capacity risk; valuation risk; erroneous disclosure risk; fraud.
C. Operations Control Risk: limit exceedances; volume risk; security risk; position reporting risk; profit and loss reporting risk.

Technology Risk: systems failure; network failure; systems inadequacy; compatibility risk; supplier/vendor risk; programming error; data corruption; disaster recovery risk; systems age; systems support.

In practice, different business units may have different types of risks with which they are most concerned; for instance, there is presumably no model risk in retail banking or in leasing. This checklist can then form the basis for assessing the loss frequency and loss severity of the different event types in the business unit using the risk scorecard (see next section).
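To make the check-off idea concrete, the sketch below represents a generic catalogue like Table III.C.1.5 as a simple data structure and maps one unit's identified risks onto it. The catalogue entries and the risks attributed to Business Unit A are illustrative assumptions, not a standard taxonomy.

    # A minimal sketch of a generic risk catalogue (mirroring the people/
    # process/technology split of Table III.C.1.5) and a check-off for one
    # business unit. All entries are illustrative.

    GENERIC_CATALOGUE = {
        "people": {"incompetency", "key personnel", "fraud", "communication"},
        "process": {"execution error", "booking error", "limit exceedances",
                    "model error", "valuation risk"},
        "technology": {"systems failure", "network failure", "data corruption",
                       "disaster recovery risk"},
    }

    def check_off(identified_risks):
        """Group a unit's identified risks by catalogue category and report
        anything that does not fit (a candidate new catalogue entry)."""
        matched = {cat: sorted(GENERIC_CATALOGUE[cat] & identified_risks)
                   for cat in GENERIC_CATALOGUE}
        known = set().union(*GENERIC_CATALOGUE.values())
        return matched, sorted(identified_risks - known)

    # Business Unit A, per the example in the text, is subject to several
    # intrinsic sources of operational risk:
    unit_a = {"execution error", "booking error", "systems failure", "fraud"}
    matched, unclassified = check_off(unit_a)
    print(matched)       # risks grouped under people/process/technology
    print(unclassified)  # [] -- nothing falls outside the generic catalogue

The same structure supports the next step: each checked-off risk type becomes a row in the unit's risk scorecard, to be rated for frequency and severity.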
III.C.1.9 The Operational Risk Assessment Process

It is important to keep in mind that an operational risk assessment process without the aim of proactive management is an exercise in futility. What drives our desire to 'measure' must be our belief that a sound and active risk management structure is in place – or will be in place in the future. As a precursor to building a sound risk management structure, measurement can be the tool by which senior managers are convinced that such a structure is needed.

For each business unit, the risk assessment process follows four fundamental steps: (1) inputs to the risk catalogue; (2) the risk assessment scorecard; (3) review and validation; and (4) outputs of the risk assessment process.

Step 1: Inputs to Risk Catalogue

Operational risk should be evaluated net of risk mitigants. Whenever possible, the required inputs to the risk catalogue must be able to assess adequately both the frequency of failure occurrences and the severity of loss given that a failure occurs:

The assessment of frequency of occurrences may come from both internal and external reports, such as: audit reports; regulatory reports; management reports; expense reports; variances on budgets; deviations from business plans; operational plans; and budgets.

An assessment of severity of loss may come from: management interviews; insurance claims; external audit reports; loss history (both pre and post mortem); and expert opinion and industry 'best practices'. Loss history should cover loss due to theft and fraud and losses strictly due to errors; it should also cover credit losses incurred as a result of operational mishaps. Admittedly, this is an extremely difficult task, and the financial industry is still struggling with how to collect these loss data. (A few major banks are beginning to gather their own internal loss experience – the outcome will not be known until many years from now.) In addition, the RMG has analysed data collected from the numerous participating banks through the Operational Risk Loss Data Collection Exercise (LDCE) of June 2002, which primarily 'focused on banks' internal capital allocations for operational risk and their overall operation risk loss experience during the period from 1998 to 2000'. The 2002 LDCE was an extension and refinement of two previous data collection exercises sponsored by the RMG.

Step 2: Risk Assessment Scorecard

Using the risk catalogue and the inputs from step 1, each business or operational unit is assessed using a risk scorecard. The risk scorecard identifies and assesses the nature of operational risk based on the following broad points:

Risk categories – people, process, technology, and external dependencies.

Connectivity and interdependencies. Because the headline risk categories of people, process and technology cannot be looked at in isolation, their cumulative effects and interdependencies must be carefully identified and accounted for.

Change, complexity, and complacency. The sources that drive the headline risk categories may include: a change in the work environment or the introduction of new technology to the business unit; the complexity of products; and the complacency factor due to ineffective management of the unit.
Within the scorecard itself, we concentrate on only three broad items:

Frequency and severity assessments. Quantifying the likelihood of a breakdown in operational processes is very difficult. Frequency may simply be 'rated' – as very likely, likely, not likely, very unlikely, and so forth – or a question relating to the expected number of loss events may be posed. Severity of loss describes the potential monetary loss to the institution, given the occurrence of an operational failure. Since actual loss history may be difficult to come by, some institutions subjectively attach a range of loss to certain failures (e.g., between $5 million and $10 million). More details on the recommendations for frequency and severity self-assessments are given in Chapter III.C.3.

Net operational risk. Operational risks should be evaluated net of risk mitigants, such as insurance and underwriting. The potential monetary amount lost due to certain insurable operational failures can be reduced through the use of risk mitigants: if the institution has insurance to cover a potential breakdown, then the degree of risk must be properly adjusted by the insurance premium paid. In conjunction with the audit and compliance departments, a catalogue of insurable bank activities needs to be prepared; we need to find out which bank activities are currently covered by insurance policies, and by how much.

Net risk assessment. The combination of all the ingredients in the risk scorecard enumerated above gives the overall net risk assessment.
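The arithmetic implied by these three items can be sketched very simply. In the fragment below, a qualitative frequency rating is mapped to an assumed number of events per year, severity is taken as the midpoint of the attached loss range, and an assumed insured share is netted out. All mappings and numbers are illustrative assumptions, not a prescribed calibration.

    # Hypothetical risk-scorecard arithmetic: frequency ratings are mapped to
    # expected events per year, severity is the midpoint of a loss range, and
    # insurance is netted out. All numbers are illustrative.

    FREQ_PER_YEAR = {"very likely": 10.0, "likely": 3.0,
                     "not likely": 0.5, "very unlikely": 0.1}

    def net_annual_risk(freq_rating, loss_range_mm, insured_share=0.0):
        """Expected annual loss ($MM) net of an assumed insurance recovery."""
        frequency = FREQ_PER_YEAR[freq_rating]
        severity = sum(loss_range_mm) / 2.0  # midpoint of the stated range
        gross = frequency * severity
        return gross * (1.0 - insured_share)

    # A failure rated 'not likely', with a $5MM-$10MM severity range,
    # 40% of which is covered by insurance:
    print(net_annual_risk("not likely", (5.0, 10.0), insured_share=0.4))  # 2.25

The point of the sketch is only that gross risk (frequency times severity) and net risk (after mitigants) must be kept distinct, exactly as the scorecard items above require.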
Step 3: Review and Validation

After the risk assessment process is completed (via the risk catalogues) and risk scorecards for each business unit have been produced, it is the responsibility of the operational risk management committee to review the assessment results with the management of the respective business unit and other key officers of the institution. (In many financial institutions, the operational risk management committee sits within either the risk management function or the audit function.) The responsibilities of the committee may include:

Formulating a set of operational risk policies and guidelines clearly delineating the actions needed to correct and prevent the operational problems and issues identified.
Determining the important differences between a unit's own self-assessment and the independent assessment.
Opining on the ratings in the risk scorecards before publication.
Issuing a mandatory report and list of recommendations to the affected business units.
Issuing summary risk reporting about the enterprise to the executive committee.

Step 4: Outputs of Risk Assessment Process

There are several possible outputs from the operational risk assessment process.

(i) Improved risk reporting and analysis. As an ongoing goal of the operational risk assessment framework, the institution should endeavour to streamline its risk reporting processes among its different risk-monitoring units (i.e., audit, compliance, and risk control). These reports should be viewed as a concise summary of the specific audit and compliance reports already instituted within the financial institution. The most useful of these reporting tools are the risk catalogue, the risk scorecards and 'heat maps', which are used to highlight relative information on operational risk exposures across the institution. An example of a heat map is given in Figure III.C.1.6. It shows that Business Unit D, relative to all the other business units in the institution, has a moderately high likelihood of incurring a large amount of loss due to operational risk failures. This means that it requires a relatively large amount of economic capital to sustain its business activities.

Figure III.C.1.6: Example of a heat map
[Figure not reproduced. Business Units A to E are plotted by likelihood (VL to VH) against severity of loss ($MM, 5 to 40), with the high-likelihood, high-severity quadrant marked as requiring urgent attention.]

(ii) OCC exam chart. The Office of the Comptroller of the Currency (OCC), in its September 1995 press release, highlighted its examination procedure for banks, covering nine principal categories of risk: credit risk, interest-rate risk, liquidity risk, price risk, foreign exchange risk, transaction risk, compliance risk, strategic risk, and reputation risk. (The OCC mandate is directed at banks with financial derivatives activities; the nine categories of risk therefore need not apply to every business unit of a financial institution.) Using the risk scorecard and other reports, we can graphically represent the evaluation of a business unit in a concise manner, in line with the OCC examination procedure. This is illustrated by the OCC exam chart in Figure III.C.1.7.

Figure III.C.1.7: OCC exam chart
[Figure not reproduced. It plots the inherent business risk for Business Unit C (low, average or high) against the adequacy of management controls (minimal, average or extensive).]

(iii) Capital attribution. By attributing economic capital to operational risks we can ensure that business units which are more prone to operational failures are assigned a greater allocation of capital, commensurate with the risks that they take. Admittedly, this is a difficult and subjective task, and a whole chapter of this handbook is devoted to it (see Chapter III.C.3). It is also important to note that the current Basel II proposals call for a regulatory capital charge for operational risk, in addition to the capital charges already required for both market risk and credit risk (see Chapter III.0). Whether there is wisdom behind a capital charge for operational risk remains to be seen (Ong, 1998). After all, regardless of the amount of capital charge levied on operational risk, operational risk events cannot be eliminated altogether, and major financial catastrophes resulting from breakdowns in people, process and technology cannot be prevented entirely.
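One very simple way to operationalise the attribution in (iii) is to divide a firm-wide pool of operational risk capital among business units in proportion to their net risk assessment scores from the scorecard. The sketch below is exactly that and nothing more; the pool size, unit names and scores are hypothetical assumptions, not a prescribed Basel II methodology.

    # Illustrative capital attribution: a hypothetical firm-wide operational
    # risk capital pool is allocated pro rata to scorecard-based net risk
    # scores. A sketch, not a prescribed methodology.

    def attribute_capital(pool_mm, net_risk_scores):
        """Allocate pool_mm ($MM) in proportion to net risk scores."""
        total = sum(net_risk_scores.values())
        return {unit: pool_mm * score / total
                for unit, score in net_risk_scores.items()}

    scores = {"Unit A": 1.0, "Unit B": 2.0, "Unit C": 3.0,
              "Unit D": 8.0, "Unit E": 6.0}  # e.g. Unit D is most exposed
    allocation = attribute_capital(200.0, scores)
    for unit, capital in sorted(allocation.items()):
        print(unit, round(capital, 1))
    # Unit D, the most failure-prone unit, receives the largest share (80.0).

In practice the scores themselves are the hard part, which is why a whole chapter of the handbook is devoted to the assessment task.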
III.C.1.10 The Operational Risk Control Process

With hindsight, the well-publicised derivatives losses listed in Section III.C.1.2 were all preventable. Outside of derivatives activities, losses attributable to fraud in other lines of business are likewise preventable. Subsequently, countless studies have continued to point to the following failures in risk control as the ultimate culprits:

(i) Lax management structure: lack of adequate management oversight and accountability; no segregation of duties; inappropriate setting of limits; lack of approvals, verifications and reconciliations; lack of review of operating performance; disrespect for audit reports; paying too much 'lip service' in the various risk committees; and poor communication between different risk-monitoring groups.
(ii) Inadequate assessment of risk, on and off the balance sheet: lack of stress-testing for unexpected market moves; too much risk relative to capital; inadequate accounting policies; and lack of risk-adjusted return measurement.
(iii) Lack of transparency: inaccurate information on capital, solvency and liquidity; income and expense volatility; and lack of benchmarks and comparability.
(iv) Inadequate communication of information between levels of management: lack of an escalation process in times of crisis; too many delays in systems development; and lack of procedures for monitoring and correcting deficiencies.
(v) Inadequate or ineffective audit and compliance programmes.

Lessons that we have learned from many highly publicised financial fiascos all point to the need for the following:

(i) Independent management oversight. This includes audit oversight, risk control and compliance functions, and other higher-level controls – and most definitely senior management involvement. Each overseer plays the role of independent 'risk monitor', considering such operational performance measures as volume, turnover (i.e., transactions per unit time), errors, delays, settlement failures, accounting anomalies, income and expense volatility, compliance with market and credit limits, and the effectiveness of line management.

(ii) Self-assessments by individual business units. Line management has the best knowledge of its own people, the day-to-day processes it has to go through, the integrity of the systems supporting the business unit, and the external circumstances that could cause its people, process and technology to fail. A self-assessment by the individual business units is, therefore, a key first step in the operational risk assessment process.

To help facilitate the identification of operational risk failures within the institution by business unit, it is important that the operational risk management function is streamlined alongside the risk control, compliance and audit functions of the institution. For many business activities of the holding company, this information is already available within the institution.

III.C.1.11 Some Final Thoughts

Operational risk failures can wreak havoc within an organization if not properly identified, assessed, monitored, controlled and mitigated. If not sufficiently contained, operational risk also has a tendency to spill over and cause systemic risk to the broader markets. Yet operational risk management continues to be one of the least developed areas of enterprise risk management, in spite of the heightened attention it has received, at least within the financial industry. Perhaps because operational risk is fundamentally qualitative in nature, it might never be as developed as other risk areas. Furthermore, the fundamental operational risk management framework continues to be confused by many people. Many people with a quantitative background have used the Basel II proposals as their impetus for furthering the argument for more operational risk modelling. People with compliance, audit and risk control backgrounds tend to interpret the Basel II guidance as an opportunity to codify additional policies, thereby reducing operational risk management to a mere set of rules and regulations. In 2006 there will be a new regulatory capital charge for operational risk,
but no regulators in their right mind would think that operational risk management is merely about levying capital charges. While capital charges are important, they are only one small component of prudent risk management, and they are not a good substitute for sound judgement. As a Moody's Special Comment put it (Moody's Investors Service, 2003), 'An increase in capital will not itself reduce risk; only management action can achieve that.'

My personal experience as head of enterprise risk management and chief risk officer for two of the ten largest banks in the world has taught me that most operational risk failures are preventable, and the processes outlined in this chapter are based on my experience of how to prevent operational risk mishaps. Experience tells me that the most important aspect of the operational risk management framework is still sound corporate governance and proactive senior management involvement. The control of operational risk is concerned fundamentally with good management, and good management means vigilance, patience, and persistence in improving the risk management process. While much progress has been made over the past several years, it is still more art than science. And this is what operational risk management is all about.

References

Basel Committee on Banking Supervision (2002) Operational Risk Loss Data Collection Exercise – 2002, 4 June.

Basel Committee on Banking Supervision (2003) Sound Practices for the Management and Supervision of Operational Risk, February.

Crouhy, M, Galai, D, and Mark, R (1998) Key steps in building consistent operational risk measurement and management. In Operational Risk and Financial Institutions. London: Risk Books.

Financial Services Authority (2003) Building a Framework for Operational Risk Management: The FSA's Observations, July.

McCarthy, E (2000) Derivatives revisited. Journal of Accountancy, 189(5), May. See http://www.aicpa.org/pubs/jofa/may2000/mccarthy.htm

Moody's Investors Service (2003) Moody's Analytical Framework for Operational Risk Management of Banks. Special Comment, January.

Ong, M (1998) On the quantification of operational risk – a short polemic. In Operational Risk and Financial Institutions. London: Risk Books.

Ramadurai, K, Olseon, K, Andrews, D, Scott, G, and Beck, T (2004) The Oldest Tale but the Newest Story: Operational Risk. FitchRatings Special Report, January.

III.C.2 Operational Risk Process Models

James Lam (President, James Lam & Associates; founding member, PRMIA; and Senior Research Fellow, Beijing University)

III.C.2.1 Introduction

Management and board attention to operational risk management (ORM) has never been greater. While businesses have always faced operational risks, the discipline of ORM is still in the early stages of development.
The need for ORM first gained the attention of risk management professionals in the 1990s, when they realized that the root causes underlying the major financial disasters – Barings, Kidder, Daiwa, etc. – were operational risks and not financial risks. The focus on operational risk has since been driven by a number of important factors:

Corporate disasters. More recent corporate failures such as Enron and WorldCom, as well as the market-timing and late-trading problems plaguing the mutual fund industry, have reinforced the importance of ORM. In the aftermath of these disasters, the standards for corporate governance and risk management have increased. These new standards impact not only corporate executives and boards, but also key stakeholders such as stock analysts, rating agencies, and regulators.

Regulatory actions. In response to the corporate disasters, regulators have dramatically increased their examination and enforcement standards. New regulations with significant operational risk requirements include Sarbanes-Oxley (in particular, Section 302 on certification by chief executives and chief financial officers, and Section 404 on internal controls), anti-money laundering and bank secrecy acts, the Patriot Act, and other corporate governance rules adopted by the stock exchanges. Additionally, the new Basel initiative (Basel II) has established a direct linkage between minimum regulatory capital and a bank's underlying risks, including explicit treatment of operational risk.

Industry initiatives. A number of industry initiatives have been organized around the world to establish frameworks and standards for corporate governance and risk management. These include the Treadway Report (United States, 1993), which produced the Committee of Sponsoring Organizations (COSO) framework of internal controls, while the Turnbull Report (United Kingdom, 1999) and the Dey Report (Canada, 1994) developed similar guidelines. It is noteworthy that the Turnbull and Dey reports were supported by the stock exchanges in London and Toronto, respectively. In 2004, COSO is scheduled to release a major study on enterprise-wide risk management (ERM), which will include key ERM principles and advocate its application within a sound corporate governance framework.
Corporate programs. Corporations have achieved significant benefits from their risk management programs; among these are stock price improvement, debt rating upgrades, loss reduction, early warning of risks, and regulatory capital relief. While ORM programs are relatively new, early adopters have reported sustained reductions in operational losses and error rates (one company reported a sustained 80% reduction in operational risk losses). Other reported benefits include improved customer service and operational efficiency, as well as indirect benefits such as the prevention of crises that divert management attention and cause reputational damage. These results demonstrate that investments in operational risk controls can produce direct benefits that are multiples of the costs.

Technology developments. Over the past decade, technology developments have transformed how businesses operate. Examples include using the Internet to communicate with customers and facilitate commerce, developing customer relationship management applications to better serve customer segments, and outsourcing IT operations and business processes to improve efficiency. In risk-intensive industries, such as financial and energy services, corporations have also developed sophisticated models and databases to measure all types of risk, especially in the area of operational risk. While these technology developments provide business benefits, they also present new and complex risks, such as information security, systems availability, data integrity, cyber-crime, cyber-terrorism, and model risk. These risks require operational risk controls for day-to-day operations, as well as disaster recovery planning for unlikely but potentially disastrous events.

Going forward, the key trends and developments highlighted above should continue to exert significant pressure on corporate boards and executives to improve their risk management capabilities. The focus of this chapter is on the development and application of operational risk process models. We will discuss the following questions: How does one develop and apply operational risk process models? What specific quantitative and qualitative tools do companies use today? How can these tools be linked to economic capital allocation? What actions can management take to mitigate operational risk? At the end of this chapter, we will use IT outsourcing as an example to illustrate how an operational risk process can be established.

III.C.2.2 The Overall Process

In developing and applying operational risk process models, risk managers should first take advantage of other related programmes that can provide valuable information or tools. While the discipline of ORM is relatively new, businesses have always had to ensure that their operations are effective and efficient. As such, many companies have implemented programmes to identify, monitor, and improve their business processes; these often fall under the monikers of re-engineering or total quality management, and such corporate-wide efforts produce detailed process maps and performance metrics. In addition to process improvement, companies have also implemented risk assessment processes to identify key operational risks. These risk assessments are either performed by the business and operating units themselves (known as control self-assessments) or by independent internal or external audit groups. The methodologies and results from these initiatives – process maps, performance metrics, audit ratings – can be used to gain a deeper understanding of the general scope and specific issues that the ORM program must address. With this knowledge,
the development of operational risk process models should include the following four steps:

Step 1: Establish the objectives and requirements of key stakeholders.

The design of operational risk process models should always start with the end goal(s) in mind: what are the key business and operational objectives for the company? These objectives can generally be grouped into three categories. Financial objectives include earnings growth, risk-adjusted profitability, and shareholder value. Business performance objectives include product innovation, customer acquisition and retention, and market share. Compliance objectives should encompass internal risk policies and limits, as well as external regulatory and legal requirements. For example, one of the key objectives of a capital markets trading business is to maintain its market risk exposures within board-approved risk policy limits.

Step 2: Identify the core processes that support these objectives.

Most companies view their businesses vertically, in terms of operating units, support functions, products, or customer segments. However, companies must manage their business processes horizontally to fully address the operational risks that may prevent them from achieving their key objectives. This is because the core processes of any company – customer acquisition, product delivery, cash management, etc. – involve the participation of various entities within and outside of the organization. To better understand these linkages, process maps should be developed for the core processes of the company. These process maps should be driven by the objectives of the company, and should highlight the specific interdependencies, such as work flows, data flows, and/or cash flows. It is also important to note that risk management is itself a process. As an example, Figure III.C.2.1 shows a process map for daily market risk measurement. This process map shows the work flows between the front office (traders), the middle office (risk management), and the back office (accounting and IT). It also shows two time-critical objectives: an account reconciliation between the open positions report from the front office and the accounting report by 10 a.m., and a daily market risk report showing risk exposures against limits by 4.30 p.m.

Figure III.C.2.1: Daily market risk measurement process map
[Figure not reproduced. It traces deal capture and the 10:00 a.m. account reconciliation through the market risk and financial market databases and daily accounting accruals to the market risk system, which produces the 4.30 p.m. daily risk report against policy limits; limit exceedances trigger exception management and reporting, otherwise daily reporting and review, supported by back-testing and the formulation and execution of hedging strategies.]
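The two time-critical checkpoints in this process map can be encoded directly, which is useful when the process is monitored automatically. The sketch below is a minimal illustration under assumed task names and timestamps; it is not part of the process map itself.

    # A minimal sketch of the time-critical checkpoints in the daily market
    # risk process map (Figure III.C.2.1): the 10:00 account reconciliation
    # and the 16:30 risk report. Task names and times are illustrative.
    from datetime import time

    CHECKPOINTS = {
        "account_reconciliation": time(10, 0),
        "daily_risk_report": time(16, 30),
    }

    def check_deadlines(completed):
        """Compare actual completion times against the deadlines."""
        results = {}
        for task, deadline in CHECKPOINTS.items():
            actual = completed.get(task)
            if actual is None:
                results[task] = "MISSING"
            else:
                results[task] = "on time" if actual <= deadline else "LATE"
        return results

    # One day's run: reconciliation done at 09:55, report out at 16:42.
    print(check_deadlines({
        "account_reconciliation": time(9, 55),
        "daily_risk_report": time(16, 42),
    }))  # {'account_reconciliation': 'on time', 'daily_risk_report': 'LATE'}

Recording each day's result in this form also feeds directly into the performance and risk metrics defined in the next step.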
Step 3: Define performance and risk metrics.

For each core process of the company, performance metrics and risk metrics should be clearly defined. To follow on with our example, the goal may be to produce the daily market risk report by 4.30 p.m. on 250 out of the 253 trading days in a year, with a minimum acceptable performance (MAP) of 240 days; in other words, an operational risk metric for the daily market risk measurement process is the percentage of time that the daily market risk report is produced by 4.30 p.m., with management establishing 99% as the goal and 95% as the MAP. Similarly, the systems availability of a core application is essential for day-to-day operations; a company might set 100% systems availability as a goal and 99.99% as the MAP. Over time, goals and MAPs should be established for all key performance and risk metrics. As such, all of the company's operations can be monitored against specific benchmarks, and management can respond proactively to specific processes that perform below MAP. For processes that perform consistently above goal, the goal and MAP can be raised to encourage continuous improvement.

Step 4: Implement organizational and risk mitigation strategies.

With a clear understanding of stakeholder objectives and supporting core processes, including goals and MAPs, and the performance of those processes against those standards, the company is well positioned to execute the appropriate ORM strategies. These strategies may include: new training programs; new IT applications; process redesigns; management restructuring; integration of audit, compliance, security and ORM activities; and risk transfer through insurance programmes. To follow on with our example, suppose management noticed that the daily market risk report was late 4 times in a month – a below-MAP performance, given that the implied frequency is greater than 13 times per year. An investigation revealed that the main reason, or root cause, is that the traders are late in updating their daily trades. Risk mitigation strategies may then include discussion forums to resolve any misunderstandings or conflicts, specific investigations and corrective actions, or hiring new trading assistants to support the traders. To highlight the importance of this process, General Electric is well known for its 'workouts', in which cross-functional teams are organized to discuss and resolve any operational issues in an open forum; senior executives usually attend the last session of these workouts to obtain an in-person report from the team leaders on how they plan to address any outstanding issues.
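The goal/MAP logic of Step 3 reduces to a simple three-way test, sketched below with the numbers used in the example (goal 250 on-time days out of 253, MAP 240). The status wording is an illustrative reading of the text, not a standard classification.

    # Sketch of goal/MAP monitoring for the daily risk-report metric:
    # goal 250 on-time days out of 253 trading days, MAP 240.
    # The three-way status logic is an illustrative assumption.

    def map_status(on_time_days, goal=250, map_floor=240):
        if on_time_days >= goal:
            return "above goal -- consider raising the bar"
        if on_time_days >= map_floor:
            return "between MAP and goal -- acceptable, watch the trend"
        return "below MAP -- investigate root cause"

    # Being late 4 times a month implies roughly 48 late days a year:
    print(map_status(253 - 48))  # below MAP -- investigate root cause
    print(map_status(251))       # above goal -- consider raising the bar

The first case reproduces the example in the text: a frequency of more than 13 late reports per year puts the process below MAP and triggers the root-cause investigation described under Step 4.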
More sophisticated companies go beyond these four steps in their ORM programmes and allocate economic capital to each business unit based on its operational risks. The direct linkage between capital requirements and operational risks is one of the key developments in Basel II. The allocation of capital to operational risks provides a number of benefits:

Management can measure risk-adjusted profitability consistently across different business units and products. Performance models that do not fully adjust for risks (such as economic value added models) would overstate the profitability of high-risk businesses and understate the profitability of low-risk businesses.

In the evaluation of risk transfer strategies, such as insurance, management can compare the cost of risk retention (i.e., economic capital times the cost of capital) with the cost of risk transfer (i.e., the net cost of the insurance strategy). As we will discuss later, the allocation of economic capital to operational risk will enhance the evaluation of the costs and benefits of these strategic alternatives.

Organizational incentives, in the form of lower capital charges, are provided to business units that effectively manage their operational risks. One of the key objectives of any risk model is to motivate appropriate behaviour. A related application is the development of risk-based audits, in which high-risk business units are audited more frequently.

III.C.2.3 Specific Tools

Given the wide scope of operational risk, a company should employ a range of qualitative and quantitative tools to assess, measure, and manage operational risks. Below is a summary of the basic ORM tools that companies use today.

(i) Loss-incident database. A company should record operational losses and also keep a record of operational incidents, for two main reasons. First, every loss and incident within a company represents a learning opportunity, without which past mistakes are more likely to be repeated; incidents record other events that should be noted, even if they did not result in an operational loss. Second, losses are measurable and can be used to indicate trends (e.g., the trend in the loss/revenue ratio). The loss-incident database should be used to support the identification of operational risk exposures, as well as to facilitate the sharing of lessons learned within the company. There are several industry initiatives to develop more robust loss-event databases, but it is too early to tell which one(s) will become the industry standard. It is unlikely, however, that the management of operational risk will ever become a wholly data-driven process; given the nature of operational risk, it will always be more of a management issue than a measurement issue.

(ii) Control self-assessment. A control self-assessment (as distinct from a risk self-assessment – see Section II.2) is an internal, subjective analysis of the key risks, the controls available to mitigate these risks, and the management implications. It is important for all of the business units to assess their current situation in terms of a control self-assessment, to develop a clear picture of how to proceed in the ORM process.
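Before turning to the tools that support self-assessments, here is a minimal sketch of the loss-incident record described under (i) above, with near-misses recorded as incidents with zero loss so that the learning opportunity is captured. The field names and figures are illustrative assumptions, not an industry-standard schema.

    # Illustrative schema for a loss-incident database. Near-misses are
    # recorded as incidents with zero loss. Field names are assumptions.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class LossEvent:
        event_date: date
        business_line: str      # e.g. one of the eight Basel II business lines
        event_type: str         # e.g. one of the seven Basel II event types
        gross_loss: float       # 0.0 for a near-miss incident
        recovered: float = 0.0  # insurance or other recoveries

    def loss_revenue_ratio(events, revenue):
        """Trend indicator: net losses as a share of revenue."""
        net = sum(e.gross_loss - e.recovered for e in events)
        return net / revenue

    events = [
        LossEvent(date(2004, 3, 1), "trading and sales", "execution error", 1.2),
        LossEvent(date(2004, 5, 9), "retail banking", "external fraud", 0.4, 0.1),
        LossEvent(date(2004, 7, 2), "trading and sales", "systems failure", 0.0),
    ]
    print(round(loss_revenue_ratio(events, revenue=500.0), 4))  # 0.003

Tracking this ratio period by period gives exactly the kind of trend signal the text describes, even while the absolute level of losses remains hard to interpret.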
Tools that support self-assessments include questionnaires, team meetings, issue-specific interviews, and facilitated workshops. The following are questions that might be included in a control self-assessment:

1. What are the key business and financial objectives for the business unit in the next 12 months?
2. What are the key risks that may prevent you from attaining these objectives?
3. What policies, procedures and controls do you have in place to ensure that risks are within acceptable levels?
4. What new risk management initiatives do you have planned for the next 12 months?
5. What metrics, tests and reviews provide you with assurance that the policies, procedures and controls specified in 3 and 4 above are indeed effective?

Given that they fully participated in the assessment process, business units would also have a greater sense of 'ownership' to address outstanding opportunities or issues.

(iii) Risk mapping. Building on the work from control self-assessments, the company's key risk exposures can be ranked with respect to their 'probability' and 'severity' so that management can have a comparative view in the form of a two-dimensional risk map. The output is an inventory of key risk exposures, key control initiatives, and sometimes even a Letterman-style 'top 10 risks'. Figure III.C.2.2 shows an example of a risk map. Some ORM professionals argue that companies should be most concerned about events of low probability and high severity, because management does not have sufficient experience in dealing with these events, whereas they have more experience in dealing with events of high probability and high severity. For operations that are more complex (e.g., outsourcing arrangements, special-purpose vehicles), risk-based process maps can be produced to show how various risk exposures can arise. These maps will aid in the identification of the risks encountered in each business unit, indicating 'problem spots' such as single points of failure or where errors often occur. These maps will also enable each business unit to develop and prioritize its risk management initiatives to address the most important risks.

Figure III.C.2.2: Risk Map. [The figure lists ten key risk exposures – investment/credit, human capital, valuation, liquidity/funding, operational, reputational, economic, leverage, compliance and interest-rate risk – rates each as low, medium or high for probability and for severity, and plots the ten risks on a two-dimensional probability–severity grid.]

(iv) Key risk indicators. Risk indicators are quantitative measures that are linked to operational risks for a specific process. Examples include customer complaints for a sales or service unit, trading errors for a trading function, unreconciled items for an accounting function, or system downtime for an IT function. These risk indicators are usually developed by the individual business units and closely tied to their business objectives. As discussed earlier, the establishment of goals and MAPs will provide useful performance benchmarks against which the key risk indicators can be measured. Early-warning indicators should also be developed to provide management with leading signals (e.g., employee absenteeism and turnover as an early warning indicator of future operational errors).

Other sources of valuable information for risk identification and assessment include internal audit reports, external assessments (external auditors, regulators), employee exit interviews, and customer and employee surveys.

Operational risk professionals also find it useful to distinguish between key risk indicators (KRIs) and key risk drivers (KRDs). KRIs are ex-post indicators of operational risk performance, in that management has no direct control over their outcomes. KRDs, on the other hand, are levers that management has direct control over; they can be best thought of as controllable factors that will influence future KRIs. Examples of KRDs include the number of training hours, the time to fill open positions, the number of automated versus manual processes, and the time to resolve outstanding audit findings.

III.C.2.4 Advanced Models

When ORM first came on the scene a few years ago, there were basically two distinct schools of thought. One school subscribed to the notion that you cannot manage what you cannot measure, and they focused on quantitative tools such as loss distributions, risk indicators, and economic capital models. The other school believed that operational risk cannot be quantified effectively, and they focused on more humanistic, qualitative approaches such as self-assessments, risk maps, and audit findings. Today, operational risk professionals realize that best practices must integrate both quantitative and qualitative tools. 174

174 See Lam (2003a).
In addition to the basic risk identification and assessment tools discussed above, leading companies employ advanced operational risk models. Unlike market risk and credit risk, where risk measurement methodologies have been developed and tested for many years, there are no widely accepted models for operational risk measurement. Some of the most common methodologies, including their strengths and weaknesses, are discussed in this section (see Hernandez et al., 2000, for a detailed discussion of the key strengths and weaknesses of various ORM models). Different methodologies imply different interpretations of operational risk, and require various inputs to be useful. In selecting a methodology (or combination of methodologies), each company should first establish its objectives and resources and choose accordingly. Given that there is likely to be no single solution, a combination of methodologies will allow the disadvantages of one model to be balanced by the strengths of another, allowing a more robust overall measurement to be developed.

III.C.2.4.1 Top-down models

The top-down approach to operational risk assessment calculates the 'implied operational risk' of a business by using data that are usually readily available, such as the overall financial performance of the company or that of the industry in which it operates. Top-down models use relatively simple calculations and analyses to arrive at a general picture of the operational risks encountered by a company, and they benefit from the sophisticated methodologies already developed for credit and market risk. Examples of top-down models of operational risk include the implied capital model, the income volatility model, the economic pricing model, and the analogue model:

(i) Implied capital model. This methodology assumes that the domain of operational risk is 'that which lies outside of credit and market risk'. Thus, the capital allocated to operational risk must be the result of subtracting the capital attributable to credit and market risk from the total allocation of capital. Although this model provides an easily calculated 'number' for operational risk, its simplicity presents several disadvantages. First, total risk capital must itself be estimated, given the company's actual capital and the relationship between its actual debt rating and target debt rating. Second, it ignores the interrelationships between operational risk capital and market risk and credit risk capital. Finally, this model does not explicitly capture the causes and effects of operational risk.

(ii) Income volatility model. This model is similar to the capital allocation model, but it goes one step further, by looking at the primary determinant of capital allocation – income volatility. The volatility attributable to operational risk is calculated in the same way as in the capital allocation model – by subtracting the credit and market risk components from the total income volatility.
One of the advantages of this model is that of data availability: historical credit and market risk data are usually easily obtained, and total income volatility can be observed. However, this model also has several shortcomings, the most dramatic of which is that it ignores the rapid evolution of firms and industries: structural changes, such as new technologies or new regulations, are not captured in this model. This is a significant omission. In addition, it fails to capture the low-frequency, high-severity risks; while tail-end risks are incorporated in the model, they are not thoroughly accounted for. Such incidents can do more than just diminish the value of a business: they can lead to the end of the business completely. The income volatility model also fails to capture softer measures such as opportunity costs or reputation damage.

(iii) Economic pricing model. The capital asset pricing model (CAPM) is probably the most widely used of economic models. The CAPM assumes that all market information is captured in the share price; thus the effect of publicized operational losses can be determined by evaluating the market capitalization of a company. With this approach, a company's stock price volatility due to operational risk is derived by taking the company's total stock price volatility and subtracting from it the stock price volatility due to credit risk and market risk. The advantage of this approach is that it incorporates both discrete risks and softer issues such as reputational damage and the effects of forgone opportunities, and it can be used to determine a distribution of the pricing of operational risk relative to the other determinants of capital (see Chapter I.A.4). However, the CAPM approach presents an incomplete and simplistic view of operational risk. It provides only an aggregate view of capital adequacy, not information about specific operational risks. In addition, as is true in all of the top-down approaches, the level of operational risk exposure is not affected by particular controls and business risk characteristics, so there is no motivation to improve operations. Furthermore, this model does not help in anticipating, and therefore avoiding, incidents of operational risk.

(iv) Analogue model. The analogue model is based on the assumption that one can look at external institutions with similar business structures and operations to derive operational risk measures for one's own organization. This method offers one way to proceed when a company does not have a robust database of operational risk losses, and it can be extended to look for the causes and effects of operational losses at such institutions. However, it takes some credulity to assume that the high-level numbers of another institution can accurately measure one's own operational risk, and many are suspicious of this approach. In the words of one analyst: '[The] intangibles within an institution – its risk-taking appetite, the character of its senior executives, the bonus structure of its traders – put so many wild cards into the operational risk equation that similarities in business volume, documented risk policies and other qualities that can be scored are swamped.'
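The implied capital, income volatility and CAPM-based models all share the same subtraction logic. The sketch below illustrates that step under one additional assumption that the text does not make explicit – that the three components are independent, so that variances (not volatilities) subtract. That simplification is precisely the kind of interrelationship for which these top-down models are criticised.

```python
import math

def implied_op_volatility(total_vol, credit_vol, market_vol):
    """Back out the operational component by subtracting credit and market
    components from total (income or stock price) volatility.
    Assumes the three components are independent, so variances add."""
    op_var = total_vol**2 - credit_vol**2 - market_vol**2
    if op_var < 0:
        raise ValueError("credit and market volatility exceed total volatility")
    return math.sqrt(op_var)

# hypothetical figures: total income volatility 50, credit 30, market 25 ($m)
print(implied_op_volatility(50.0, 30.0, 25.0))  # ~31.2
```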
III.C.2.4.2 Bottom-up models

A number of surveys have indicated an increasing preference for risk-based bottom-up methodologies over the top-down approaches, and the Basel II requirements should further encourage banks to develop bottom-up models. The bottom-up methodology applies loss amounts and/or causal factors to predict operational losses in the future. It requires a company to clearly define the different categories of operational risk that it faces, gather detailed data on each of these risk categories, and then quantify the risk. Bottom-up models are usually based on statistical analysis and scenario analysis. The final output of this bottom-up approach is a loss distribution that enables operational risk capital to be estimated for a given confidence level (see Chapter III.C.3). The data needed for this methodology can also be used to derive a business risk profile: indicators such as turnover or error rates can be tracked over time and combined with changes in business activities to construct a more robust picture of the business's operational risk profile. By tracking these KRIs over time, the company can assess its operational risk exposure on an ongoing basis and can upgrade specific controls as needed. In addition, continuous tracking provides a company's management with better information about its operations and increases awareness of the causes of operational risk.

However, bottom-up models present several difficulties. Classical statistical models require an ample supply of operational loss data that are relevant to the business unit, particularly for low-frequency, high-severity events. As mentioned earlier, robust internal historical loss data may not be available; the lack of appropriate internal data is therefore the greatest obstacle to the widespread application of this methodology. A company often needs to augment its internal data with an external loss-event database, but the use of external data as a proxy poses several problems. Mapping loss data from the company with loss data from other companies is complex, given the differences in business mix, size, scope and operating environment. Even mapping internal losses to specific risk types is difficult, because losses are frequently reported as aggregates from multiple risk sources that are difficult to isolate. For example, an operational loss on a trading floor might result from personnel risk, volatile markets, expanding overseas business, lack of trading controls, lack of back- and front-office segregation, senior management confusion, and incompetence. Nevertheless, the analytical power of this tool will hopefully become more widely applicable in the near future, as increased awareness of operational risk leads to improvements in data collection and extensions of the classical statistical methodology.
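As an illustration of the statistical leg of a bottom-up model, the sketch below fits the two standard building blocks – a Poisson frequency and a lognormal severity – to an internal loss record by maximum likelihood. The loss data and the distributional choices are assumptions for illustration only; the loss-distribution approach is treated in detail in Chapter III.C.3.

```python
import math
import statistics as st

# Illustrative internal loss record (hypothetical numbers):
# loss events recorded per month, and loss severities in $'000.
monthly_counts = [3, 1, 0, 2, 4, 1, 2, 3, 0, 1, 2, 5]
severities = [4.5, 13.1, 29.4, 25.9, 39.1, 12.9, 8.0, 12.2, 13.9, 53.4]

# Poisson frequency: the maximum-likelihood estimate of the intensity
# is simply the sample mean of the monthly counts.
lam = st.mean(monthly_counts)

# Lognormal severity: the MLEs of (mu, sigma) are the mean and the
# (population) standard deviation of the log-losses.
logs = [math.log(x) for x in severities]
mu, sigma = st.mean(logs), st.pstdev(logs)

print(f"Poisson intensity: {lam:.2f} events per month")
print(f"lognormal severity: mu = {mu:.2f}, sigma = {sigma:.2f}")
```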
A scenario analysis is used to capture the diverse opinions, concerns and experience/expertise of key managers, and to represent them in a business model. Scenario analysis offers several benefits that are not addressed by the classical statistical models. It is a useful tool for capturing both the qualitative and the quantitative dimensions of operational risk, and cause-and-effect relationships can be captured with this methodology. Risk maps of each business unit identify where operational risk exposures exist, the severity of the associated risks, whether any controls are in place, and the type of control – damage control, preventive, or detective. Risk maps allow the representation of a wide variety of loss situations, and capture the details of the loss scenarios envisioned by the managers surveyed. The shortcoming of such a model, however, is in its subjectivity, which creates a potential for recording data inconsistently and/or for biasing conclusions if one is not careful. For example, control self-assessments require that business units are honest and forthright about their major operational risks, which can often be embarrassing problems that they would rather not discuss (let alone highlight for senior management!). To counterbalance this shortcoming, business units should be required not only to 'tell me' but also to 'show me'. This can be accomplished through validation processes, such as: pre-established operational risk indicators that are monitored against goals and MAPs; periodic tests to ensure that actual losses and incidents (ex post) result from operational risks that were being monitored through KRIs or at least discussed in the control self-assessments (ex ante); and comparisons between control self-assessments and independent assessments such as internal audits, external audits, regulatory reviews, and customer surveys. In fact, Section 404 of the Sarbanes-Oxley Act, requiring management assessment of internal controls for financial reporting as well as auditor attestation, was designed to ensure such validation.

At the beginning of this section we discussed the need to balance qualitative and quantitative tools. ORM practitioners recognize the pitfalls of using only one approach to modelling operational risk – either top-down or bottom-up – and best practice ORM incorporates elements of both approaches.

III.C.2.5 Key Attributes of the ORM Framework

We will now discuss the attributes of a unified ORM framework, and then how these attributes can underpin a seven-factor economic capital model. A unified ORM framework should satisfy two basic requirements. First, it should support both the measurement and management of operational risks. Second, the ORM framework should incorporate the interdependencies across credit, market and operational risks as part of an overall ERM program, as well as integrate the institution's various risk management and oversight activities (e.g., ORM, audit, compliance, quality, insurance). Based on these two requirements, the key attributes of a unified ORM framework include the following:
(i) Integrating qualitative and quantitative tools. As discussed above, the advantage of qualitative tools is that they can incorporate human experience and judgement in order to capture risks that are subjective – for example, what are the operational risks associated with a new product? On the other hand, the advantage of quantitative tools is that they provide objective indicators that can be used to show aggregate losses, exposures, and trends against established targets. A unified ORM framework should incorporate both advantages.

(ii) Providing early warnings and escalations. The nature of operational risk (i.e., the risk of loss due to people, processes, systems and external events 175) is complex and dynamic. Operational risk cannot be managed effectively based only on backward-looking indicators such as losses, error rates, and incidents. The ORM framework should provide early warning indicators of emerging risk issues, as well as effective escalation processes, so that management can take the appropriate actions. A quantitative example is that an increase in employee absenteeism may be an early warning for increasing turnover and human errors. A qualitative example is competitive intelligence indicating significant investments in a new technology by a key competitor – a technology that, if successful, would render the firm's existing technology obsolete. An ORM framework should establish early warning indicators, and specific consequences should be in place to provide organizational reinforcement. For example, a money management company established specific escalation processes such that the higher the number of customers impacted by an incident, the higher the level of management notified (a minimal sketch of such an escalation rule is given after Figure III.C.2.3 below). This ensures that 'bad news travels up' the organization and that the appropriate level of management responds in a timely manner.

175 The definition of operational risk in this chapter includes business risk, which is notably absent in Pillar I of the Basel II proposals.

(iii) Influencing business activities. One of the most important attributes of an ORM framework is that it influences business actions and decisions; this attribute ensures that operational risks are managed on an ongoing basis. Such influence can be asserted through: (1) corporate policies with respect to guidelines for, and restrictions on, business activities, such as acceptable versus unacceptable sales practices; (2) teamwork between the line units and ORM in new business and product development processes; (3) risk response plans based on ORM indicators and escalations; (4) adjustments in economic capital given operational risk performance and risk mitigation strategies; and (5) positive and negative incentives to motivate appropriate business behaviour. An excellent example of using positive incentives is when GE tied one-third of senior management compensation to the achievement of quality management objectives as part of the company's 'six sigma' programme.
(iv) Reflecting environmental changes. Just as credit risk and market risk frameworks reflect changes in underlying default rates and market prices, an ORM framework should reflect changes in the operational risk environment. For example, increases in industry-wide operational risk losses and incidents may indicate an increase in systemic risk; a number of industry loss-event databases are being developed that can provide this type of information. Other environmental changes include new legal and regulatory requirements, such as those established by the Sarbanes-Oxley Act, the Patriot Act and the Basel II proposals. A company that lacks the processes and systems to comply with these new requirements is likely to face greater operational risk with respect to regulatory scrutiny and legal penalties.

(v) Incorporating risk interdependencies. There are important interdependencies within and across risk types, and in ways that might not be obvious. For example, credit risk is the primary concern for most banks, but inadequate loan documentation (an operational risk) is likely to increase loss severity in the event of a borrower default. Similarly, given that there is a high correlation between volatile prices (a key driver for market risk) and transactional volumes (a key driver for operational risk) during stressed periods, financial institutions must simultaneously manage market risk and operational risk during stressed market conditions (e.g., the Russian crisis during the autumn of 1998). An ERM programme should address such interdependencies in the design of early warning indicators, the development of scenario analysis, and the implementation of risk response plans. For example, financial institutions should establish early warning indicators and risk response plans such that, if these indicators exceed a critical level, management implements pre-established contingency plans, such as reduction of trading limits to reduce market risk exposures and activation of back-up sites to increase processing capacity. As we will discuss in the next section, these interdependencies should also affect the determination of economic capital. Examples of early warning indicators are shown in Figure III.C.2.3.

Figure III.C.2.3: Early warning indicators

Risk category               Early warning indicators
Credit risk                 • Borrower/counterparty stock price declines
                            • Widening of credit spreads in the debt and credit derivatives markets
Market risk                 • Increases in actual and implied price volatilities
                            • Breakdowns in historical price relationships and patterns
Business/operational risk   • Spikes in business growth, profitability, and complexity/change
Enterprise-wide risk        • Increases in any risk concentrations and/or organizational powers
                            • High and undesirable turnover rates
                            • Changes in intra- and inter-risk correlations
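The following is the escalation sketch promised under attribute (ii): a minimal rule mapping the number of customers impacted by an incident to the level of management notified. The thresholds and titles are purely illustrative – the chapter describes the principle, not specific numbers.

```python
def escalation_level(customers_impacted):
    """Map the number of customers impacted by an incident to the
    management level that must be notified (illustrative thresholds)."""
    if customers_impacted >= 10_000:
        return "CEO / board risk committee"
    elif customers_impacted >= 1_000:
        return "business unit head"
    elif customers_impacted >= 100:
        return "department manager"
    return "line supervisor"

print(escalation_level(2_500))  # -> business unit head
```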
III.C.2.6 Integrated Economic Capital Model

Given the five attributes of a unified ORM framework discussed above, what factors should determine economic capital for operational risk? Figure III.C.2.4 shows a seven-factor approach to calculating operational risk capital. While this approach is conceptual, it can be adapted to a firm's specific business mix, size, and risk profile.

Figure III.C.2.4: Operational risk capital calculation. [Schematic of the seven factors:
1. Revenue multiplier
2. Operating margin (revenue volatility; fixed vs. variable expenses)
3. Internal indicators (losses/incidents; risk indicators; risk assessments; audit ratings)
4. External indicators (customers; regulators; event risk exposures)
5. Model risk (model reliance; back-test results)
6. Systemic risk (industry loss experience; banking and settlement failures)
7. Financial risk capital multiplier (credit risk capital; market risk capital – captures linkages between operational risk and financial risk)]

Let us discuss each one of these factors in turn.

(i) Revenue multiplier. This is a top-down estimate of the amount of operational risk capital required by a business or operating unit. The central question is: 'if the business or operating unit were a standalone business, how much capital would it need for operational risk?' Such an estimate can be derived from observing analogues of publicly traded companies in the same or similar businesses, while adjusting for market risk and credit risk. For example, Capital One may be a credit card company analogue, while First Nationwide may be one for mortgage companies; outsourcing firms such as IBM or EDS may be analogues for internal IT functions. The revenue multiplier 176 assumes an average operational risk profile, which can then be adjusted upwards or downwards by the other factors below.

176 For certain businesses, a top-down proxy based on activity or volume might be more appropriate.

(ii) Operating margin. This factor incorporates the degree to which the firm's operating margin is more or less volatile than average, and is often referred to as 'business risk'. A firm's inability to generate sufficient revenue to cover expenses (net of unexpected credit and market risk losses) is a major reason why it needs to hold operational risk capital. Business variables that can increase the required operational risk capital include greater volatility in business volume, weak power to set prices, and higher fixed versus variable expenses.

(iii) Internal indicators. This adjustment reflects the effectiveness of internal controls. A scorecard, with individual weightings, should be developed for the internal quantitative and qualitative indicators; this scorecard would provide an overall adjustment to operational risk capital. Internal indicators would include losses, incidents, risk metrics (e.g., error rates, unreconciled items), risk maps, early warnings, and audit ratings. Each key indicator should also be associated with specific goals and MAPs.

(iv) External indicators. As with internal indicators, a scorecard of external indicators should be developed, and goals and MAPs for external indicators should also be established. External indicators would include customer satisfaction scores and complaints, and regulatory exam findings. Firms that rely on external vendors should also incorporate vendor performance relative to service level agreements. This scorecard would also track exposures to external events, such as fires, earthquakes and acts of terrorism; the economic impact of contingency plans and insurance programmes should also be factored in.

(v) Model risk. This factor reflects the degree to which a firm relies on models, and the quality of such models. A firm should include all models that drive management decisions and actions, such as pricing and valuation models, scenario and simulation models, and risk management models. The primary input is back-testing results against predetermined criteria. For firms that do not rely on models, this may simply be one of the internal indicators.

(vi) Systemic risk. This factor adjusts for dramatic shocks in the business environment. Systemic risk is especially important for highly interconnected industries such as financial services and energy services, where trading activities and counterparty exposures within the industry are significant. Past examples include the Long-Term Capital Management collapse, Y2K readiness, and the Enron bankruptcy. In each of these situations, companies were concerned not only about their direct exposures, but also about the exposures of their business partners and counterparties.

(vii) Financial risk multiplier. This factor is meant to capture the compounding effects between operational, credit, and market risks. It is not portfolio diversification, which may lead to a reduction in aggregate economic capital at the enterprise-wide level; rather, it is a compounding factor that many risk managers ignore. An argument can be made that a variety of operational risk exposures (e.g., rogue trader, inadequate loan documentation, unsavoury sales practices) are compounded in a firm with significant market risk and credit risk exposures. After all, a rogue trader can do much more damage at a bank than at a retail store. Regulators refer to this compounding factor as 'spillover effects'. Cumming and Hirtle (2001) argued that the confluence of variables including market liquidity problems, lack of corporate limberness, and reputational and contagion effects could result in the aggregate risk of a firm exceeding the sum of its individual risks. The financial risk multiplier is meant to capture such spillover effects.
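Figure III.C.2.4 is conceptual and the chapter does not prescribe an equation. One simple way to operationalise it – assuming, purely for illustration, that factors 2 to 7 are each expressed as a multiplicative adjustment around 1.0 applied to the revenue-based baseline – is sketched below. All numbers are invented.

```python
def operational_risk_capital(revenue, revenue_multiplier, adjustments):
    """Baseline ORC = revenue x multiplier (factor 1), then scaled by the
    remaining factors, each an assumed multiplier around 1.0
    (e.g. 1.10 = riskier than average, 0.90 = better controlled)."""
    capital = revenue * revenue_multiplier
    for factor in adjustments.values():
        capital *= factor
    return capital

orc = operational_risk_capital(
    revenue=500.0,            # $m gross revenue of the unit
    revenue_multiplier=0.12,  # factor 1: from standalone analogues
    adjustments={
        "operating margin": 1.05,      # factor 2: margin more volatile than average
        "internal indicators": 0.95,   # factor 3: good audit/KRI scorecard
        "external indicators": 1.00,   # factor 4: neutral
        "model risk": 1.10,            # factor 5: heavy model reliance
        "systemic risk": 1.05,         # factor 6
        "financial risk multiplier": 1.08,  # factor 7: compounding effects
    })
print(f"operational risk capital: ${orc:.1f}m")
```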
III.C.2.7 Management Actions

Assessing and measuring operational risk is important, but pointless unless directed towards the improved management of operational risk, by enhancing internal controls and controlling key risk factors. Simply stated, the goal of ORM is to help management to achieve their business objectives.

The practice of ORM has come a long way in the past several years. Early adopters of more sophisticated ORM have reported significant business benefits, including improved customer service, greater operating efficiency and reduced losses. However, it still has a long way to go. At the annual 2003 operational risk conference organized by the Risk Management Association, Eric Rosengren of the Federal Reserve Bank of Boston said that only three of the 20 largest US banks qualify for the 'advanced measurement approach' for operational risk under Basel II, which is supposed to lead to reduced capital charges. Still, the development of ORM is more than a regulatory compliance issue, and it is clear that the further development of ORM practices must integrate quantitative and qualitative tools.

Once a measurement framework is in place, the next step is to implement a process that identifies actions that will reduce operational losses. These actions include adding human resources, increasing training and development, adding internal controls (e.g., more frequent or more extensive monitoring), improving and/or automating processes, changing organizational structure and incentives, and upgrading systems capabilities. To fully realize these benefits, a mechanism for evaluating and prioritizing potential improvements must be created. The key to effective operational risk mitigation is to establish a cross-functional rapid response team that will address and resolve any emerging operational risk issues. At one business unit at Fidelity Investments these teams were called 'turbo teams'; they responded immediately when operational risk indicators fell below MAP, and reported back to management on their assessments and actions within a few days or weeks. Cost–benefit analysis and readiness assessments are useful tools that should be included in the evaluation process.
In evaluating potential improvements, the potential benefits must be weighed against the total costs of the project (e.g., development, testing, training, implementation and ongoing maintenance). In fact, business executives often turn to IT, or more specifically automation, as the answer to process improvements. However, automation of poorly designed processes can result in significant operational risks in the future, so management must ensure that the organization is ready to take advantage of the technology solutions.

A key requirement for risk mitigation is to understand the root causes of operational risks, such as lack of training or inadequate systems, and then focus corrective actions on these root causes. This can be accomplished through the various process models and quantitative tools discussed above.

Some of the operational risk measurement approaches discussed above should naturally lead to improved operational risk management at the business unit level. A business unit can monitor and improve its operational risk levels by setting operational goals, exposure limits and MAPs on key operational processes. For example, suppose a brokerage group on average processes a million trades per day. This group may specify that its operational goal is that failed trades be less than 50 per day, while its MAP is no more than 100 failed trades per day. Additionally, the group may specify that no more than 40% of daily trades can be processed by one operational centre, in order to spread its reliance across multiple operational centres. A business may also set up procedures through which employees may respond immediately to operational problems and implement the controls necessary to monitor and improve performance.

One of the key objectives of any ORM programme is to ensure that 'bad news travels up an organization'. More importantly, the quantification and modelling of operational risk should lead to more timely communication and escalation of operational risk issues, which would lead to the appropriate decisions and actions on the part of management. Management should clearly communicate when they should be informed, through a cascading set of 'escalation triggers'. Escalation triggers can be defined in terms of the KRIs, such as the level of operational losses, the number of errors, significant policy violations, and the number of customers impacted by an incident. These escalation triggers and procedures should be incorporated into the company's policies and procedures, so that all employees understand what is expected of them.

Additionally, the allocation of economic capital for operational risk, if it captures both performance and behaviour effects, should motivate business units to improve their ORM in order to reduce their capital charges. Business units that take the appropriate actions should receive a 'credit' in their economic capital charges.
III.C.2.8 Risk Transfer

For critical operational risk exposures, a company must decide whether the best strategy is to implement internal controls and/or to execute risk transfer strategies. The two are not mutually exclusive and are often complementary; indeed, the former can reduce the cost of the latter. For example, most companies implement workplace safety procedures (an internal control) and purchase workers' compensation insurance (a risk transfer strategy). Another example is product liability: a company can strengthen product development controls as well as purchase product liability insurance.

Some risk transfer strategies are intended as 'backstops' to internal controls. For example, directors' and officers' liability insurance provides protection against 'wrongful acts'. In the past, insurance managers would purchase such 'backstop' insurance policies based on the structure, cost, and provider rating and service level.

In the context of ERM and ORM, a company should: identify its operational risk exposures and quantify its probabilities, severities and economic capital requirements; establish operational risk limits (e.g., MAPs, economic capital concentration); implement internal controls and develop risk transfer and financing strategies; evaluate alternative providers and structures based on cost–benefit economics (i.e., comparing the cost of risk retention and risk transfer); and integrate its operational risk with its credit risk and market risk in order to assess its enterprise-wide risk–return profile.

Besides risk mitigation through operational processes and controls, there are other financial solutions that management may consider. Companies can establish reserves to cover their expected operational losses; these reserves are considered a form of self-insurance. Additionally, the cost of capital for operational risk (and other risks) should be incorporated into the pricing of a transaction; indeed, market and credit risks are already incorporated into some transaction prices as a matter of practice. Expected losses should be embedded in the pricing of a product. For example, if a business unit performs 10,000 transactions annually, with an expected loss of $80,000 a year due to operational factors, then an adjustment of $8 per transaction could cover such losses. Pricing can also be driven by the target levels of return that the company expects a product to achieve, given competitive pricing and market share objectives. Including an additional adjustment for operational risk makes for a more comprehensive picture and allows for more accurate risk-adjusted pricing.
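A minimal sketch of the expected-loss loading just described: $80,000 of expected annual operational losses spread over 10,000 transactions gives a pricing adjustment of $8 per transaction.

```python
def op_risk_loading(expected_annual_loss, annual_transactions):
    """Per-transaction price adjustment that recovers expected
    operational losses over the year."""
    return expected_annual_loss / annual_transactions

print(op_risk_loading(80_000, 10_000))  # -> 8.0 dollars per transaction
```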
The economic capital framework discussed above is also a useful tool for evaluating the impact of different risk transfer strategies. In executing any risk transfer strategy, the economic benefits include lower expected losses and reduced loss volatility, while the economic costs include insurance premiums, transaction and management costs, and higher counterparty credit exposures. In a sense, the company is both ceding risk and ceding return, resulting in a 'ceded RAROC'. By comparing the ceded RAROCs of various risk transfer strategies, a company can compare different structures, prices and counterparties on an apples-to-apples basis and select the optimal transaction(s). A risk transfer strategy with a ceded RAROC below the firm's cost of equity would add to shareholder value, and vice versa. Figure III.C.2.5 provides the framework for a ceded RAROC analysis.

Figure III.C.2.5: Ceded RAROC analysis. [Different structures (derivatives, insurance, structured finance) are evaluated in a common cost/benefit framework: Ceded RAROC = ceded return / ceded economic capital. The ceded return comprises the cash flows or insurance premium paid, transaction and ongoing management costs, and the reduction in the economic capital funding 'benefit'. The ceded economic capital is the reduction in economic capital held for the transferred risk, less any increase in economic capital for counterparty exposure and for the operating risk of the transaction itself.]

Let us apply the framework using the following example of purchasing a product liability insurance policy at an annual premium of $100,000:

1. Annual insurance premium        $100,000
2. Annual management cost          $20,000
3. Economic capital benefit 177    $60,000
4. Net reduction in economic capital 178    $2,000,000

177 The economic capital benefit represents a funding credit. It is used in matched-maturity funds transfer systems to recognize the interest income from the investment of capital funds. In our example, we assume a funding rate of 3%.
178 The net reduction in economic capital includes the gross reduction of economic capital for the risk exposure that is being insured, minus the increase in counterparty and operational risk capital.

In this example, the ceded RAROC is 9% [= (100,000 + 20,000 + 60,000)/2,000,000]. This represents the effective cost of risk transfer, which can be compared with the effective costs of alternative risk transfer strategies as well as the cost of risk retention. In our example, if the company's cost of economic capital were 10%, then this transaction would add to shareholder value, because the ceded RAROC (the cost of risk transfer) is below the cost of risk retention.
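The same arithmetic in reusable form, so that alternative structures and providers can be compared side by side. The sign convention follows the worked example: the forgone funding benefit on the released capital is treated as a cost of ceding.

```python
def ceded_raroc(premium, mgmt_cost, capital_benefit, capital_released):
    """Effective cost of risk transfer per unit of economic capital released."""
    return (premium + mgmt_cost + capital_benefit) / capital_released

r = ceded_raroc(premium=100_000, mgmt_cost=20_000,
                capital_benefit=60_000, capital_released=2_000_000)
print(f"ceded RAROC: {r:.1%}")  # 9.0%: below a 10% cost of capital,
                                # so the transfer adds shareholder value
```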
III.C.2.9 IT Outsourcing

IT outsourcing is widely considered one of the major business imperatives in today's business world, often described as a 'mega-trend'. The META Group estimates that IT outsourcing is a US$150 billion market, compared to $200–220 billion for software and hardware, and the growth of IT outsourcing is expected to outpace that of software and hardware for the next few years at least. Let us discuss how one might establish an operational risk process for IT outsourcing, based on the processes and tools discussed earlier.

III.C.2.9.1 Stakeholder Objectives

Buyers of outsourcing should first establish an overall outsourcing strategy, including the specific business and financial objectives that outsourcing is expected to achieve. This strategy would identify the IT systems and/or applications that are outsource candidates, the approach to evaluating alternative outsource providers and solutions, and the decision processes and cost–benefit analyses that will result in specific outsourcing transactions. The strategy should also discuss how the outsourcing strategy will support the overall business strategy of the company. Buyers of outsourcing services often cite the following expected benefits: cost savings, quality of services, resource allocation to core activities, shorter time to market, scalability and flexibility in IT resources, enhanced e-business applications, and access to IT skills and advanced technologies. As a sign of more realistic expectations on the part of buyers, expected cost savings cited by buyers have come down from 40–70% a few years ago to 10–20% today.

However, even with reduced expectations, outsourcing arrangements often fail to meet buyers' requirements. A 2004 IT outsourcing study by DiamondCluster 179 noted that 21% of buyers said that they had prematurely terminated an outsourcing agreement in the last 12 months. The most common reasons cited for cancelling outsourcing arrangements include: the provider having financial difficulties (credit risk), provider failure to deliver on commitments (operational risk), and buyer consolidation of outsourcing vendors (business risk). Perhaps one of the most critical risks in IT outsourcing arrangements is the appropriate alignment of objectives between the parties. Given the long-term nature of outsourcing contracts, buyers must seek out a provider that not only offers the optimal technologies and services, but also possesses a compatible business culture, professional standards, and business objectives.

179 DiamondCluster 2004 Global IT Outsourcing Study, available at www.diamondcluster.com.

In addition to these risks, buyers must address a complex set of significant operational risks – such as geopolitical risk, regulatory compliance, data integrity and security, and reputational risk. While project delays and cost overruns are the most commonly reported outcomes from these risks, it might be useful to review a couple of more specific examples. In the area of information security, a woman from Pakistan recently obtained sensitive patient information from the University of California, San Francisco, Medical Center through a medical transcription subcontractor that she worked for, and threatened to post the files on the Internet unless she was paid more money. On the political side, there is a significant backlash against 'exporting jobs' through outsourcing; in response to political pressure, the state of New Jersey has passed legislation to ban IT and business process outsourcing by state government.

III.C.2.9.2 Key Processes

The key processes associated with IT outsourcing include:

(i) Evaluation and selection of outsourcing provider. The process includes documentation of requirements in a request for proposal (RFP), sending out RFPs and evaluating the responses, and negotiating the contract and service level agreement (SLA). This step can take six months to a year, and cost 2–5% of the annual cost of the contract.
(ii) Transitioning to the outsourcing environment. The transition period is perhaps the most challenging stage of an outsourcing initiative. Steps include bringing the provider company professionals onsite for training and knowledge transfer, transferring or terminating existing employees, and establishing the required infrastructure (hardware, software, communication protocols) and operational processes. This step can take an additional three to twelve months, and can cost 5–10% of the contract.

(iii) Ongoing management of the outsourcing contract. On a day-to-day basis, the buyer must manage work processes and communications, monitor provider performance, perform quality tests, audit operations, and integrate the output with other in-house or outsourced IT operations. This requires significant project management resources, and can cost 5–15% of the annual contract.

Each of the above processes, especially ongoing management, should be fully documented in policies and procedures and illustrated in process maps, in which the roles and responsibilities of both parties are clearly established.

III.C.2.9.3 Performance Monitoring

As discussed earlier, performance and risk metrics should be developed as part of an ongoing outsourcing review process. The DiamondCluster study noted that 70% of buyers and 69% of providers monitor performance at least monthly, with the following quantitative metrics being the most common: on-time delivery, defect rates, time to process requests, size of request backlog, service availability, end-user satisfaction, cost effectiveness, standards compliance, and timely and quality staffing. Performance goals and MAPs should be incorporated into SLAs, and a 'scorecard' should be developed to track actual performance against goals and MAPs. In addition to the above performance metrics, the buyer should monitor the provider's key business and risk metrics, such as market share, earnings, stock price performance, debt rating and financial ratios. Additionally, the ongoing 'costs of risk', such as incremental insurance expense and economic capital costs, should be considered. Performance and risk monitoring should not be viewed as a reporting exercise, but as a proactive process to ensure overall project success.

III.C.2.9.4 Risk Mitigation

One of the first and most important steps in mitigating outsourcing risk is to fully evaluate the economic costs and benefits at the onset, including consideration of all critical risk factors. As noted in a survey of 500 human resources executives by Hewitt Associates, 92% of the firms had moved jobs overseas to cut costs; however, less than half of those companies studied the tax environments of the offshore country, and only 34% considered the expense of shutting down US facilities.

Let us take a simple example to see how an outsourcing contract should be evaluated. Suppose a US company is considering moving one of its application development projects offshore to China, which provides a tax-adjusted 50% labour arbitrage on a $10 million contract. The following table shows the cost–benefit analysis for both a best case and a worst case:

($ thousands)                        Best Case      Worst Case
1. Projected labour cost savings     $5000 (50%)    $5000 (50%)
2. Vendor selection                  – 200 (2%)     – 500 (5%)
3. Transition costs                  – 500 (5%)     – 1500 (15%)
4. Ongoing management                – 500 (5%)     – 1000 (10%)
5. Risk costs 180                    – 1000 (10%)   – 1500 (15%)
Net savings:                         $2800 (28%)    $500 (5%)

180 Risk costs include incremental insurance expense for the outsourcing arrangement and the cost of incremental operational risk capital.

The above example shows that the adjusted net savings range from 28% in the best case to only 5% in the worst case. Such an analysis should be performed for different outsourcing strategies and various providers, which may include some of the risk mitigation strategies discussed below. Finally, the projected range of net savings should then be considered against less tangible risks, such as reputational risks and geopolitical risks. The latter should include the exit cost, which is a function of the probability of project failure and the cost of switching operations in-house or to another provider.
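The cost–benefit arithmetic of the table above, expressed as fractions of the contract value, is sketched below; rerunning it with different assumptions for different strategies and providers is the point of the exercise.

```python
def net_savings(labour_savings, vendor_selection, transition, ongoing, risk_costs):
    """Net outsourcing savings as a fraction of annual contract value."""
    return labour_savings - (vendor_selection + transition + ongoing + risk_costs)

best = net_savings(0.50, 0.02, 0.05, 0.05, 0.10)    # -> 0.28
worst = net_savings(0.50, 0.05, 0.15, 0.10, 0.15)   # -> 0.05
print(f"best case: {best:.0%}, worst case: {worst:.0%}")
```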
In addition to performing a full cost–benefit analysis of outsourcing opportunities, other risk mitigation strategies include:

Developing a hybrid outsourcing strategy. Outsourcing experts suggest that the optimal blend of in-house and outsourced IT resources is generally 20–30% in-house and 70–80% outsourced. At a significant scale, even the outsourced component should be diversified across different providers, countries and locations. Other considerations include the mix between 'nearshore' operations (e.g., Canada, Mexico, in the case of a US firm) and offshore operations (e.g., India, China), as well as the possibility of setting up captive outsourcing companies.

Taking an incremental outsourcing approach. Early outsourcing contracts were large and very ambitious arrangements that often failed to live up to buyer and/or provider expectations. For buyers new to outsourcing, or experienced buyers working with a new provider, a more practical approach is to outsource incrementally, to ensure a compatible relationship in terms of expectations, corporate cultures, and technology and service requirements. Once the initial work is performed at a satisfactory level, the outsourcing relationship can be expanded over time.

Negotiating a flexible win–win contract. It is critical that the outsourcing contract is attractive to both parties. An arrangement that is overly favourable to the buyer might not get the appropriate level of service and attention from the provider; conversely, the provider should have 'skin in the game' to provide the right incentives for ongoing performance. For example, the META Group estimates that by 2006, 35% of outsourcing contracts will adopt output-based pricing instead of time and materials (input-based pricing). Also, given the rapid changes in technologies, customer preferences and business requirements, contract terms should incorporate sufficient flexibility so that they do not become obsolete prematurely.

Establishing exit strategies and contingency plans. Companies should establish exit strategies and contingency plans in the event that the outsourcing contract expires, the provider does not deliver as expected, or business conditions require the termination of the contract. These exit strategies and contingency plans should be developed in the early stages and with the participation of the provider(s), because their cooperation will be needed to execute such plans. Moreover, in the post-September 11 world, contingency plans for disaster recovery should be fully developed and tested by the provider and/or buyer.
Developing a compelling stakeholder communication strategy. Outsourcing should continue to be a sensitive political issue for the foreseeable future. Forrester Research estimates that over the next 12 years, 3.3 million US jobs, accounting for $100 billion in wages, will move offshore; such numbers will likely fuel the current backlash. To minimize reputational risks, companies should develop a well thought-out communication strategy with respect to their outsourcing initiatives. Besides communicating to external groups such as customers, unions, and governmental entities, internal communication is important, given the potential for disgruntled employees to undermine outsourcing initiatives.

To develop, coordinate and implement project management controls and the above risk mitigation strategies, companies are putting in place centralized programme management offices (PMOs) as governance structures. These PMOs take a portfolio approach in allocating resources and monitoring vendor performance to ensure optimal performance. The PMOs also represent a centre of excellence for project management skills and resources, including sourcing and managing external resources such as consultants, tax experts and lawyers. Regardless of whether a PMO is established, companies involved in, or planning to initiate, outsourcing arrangements should establish the appropriate operational risk controls discussed in this chapter. Otherwise, given the strategic importance of IT and outsourcing, the 'next best thing' might very well become the company's worst nightmare.

References

Cumming, C M, and Hirtle, B J (2001) The challenges of risk management in diversified financial companies. Federal Reserve Bank of New York Economic Policy Review, March.

Hernandez, J V, Sanchez, L M, and Ceske, R (2000) Quantifying event risk: the next convergence. Journal of Risk Finance, 1(3), pp. 9–23.

Lam, J (2003a) A unified management and capital framework for operational risk. RMA Journal, Feb, pp. 26–29.

Lam, J (2003b) Enterprise Risk Management – from Incentives to Controls. Hoboken, NJ: Wiley.

III.C.3 Operational Value-at-Risk

Carol Alexander 181

181 Chair of Risk Management and Director of Research, ISMA Centre, Business School, University of Reading, UK.

Many firms may wish to apply an 'advanced measurement approach' (AMA) to assess their capital to cover operational risk. Under the new Basel Accord that comes into force at the end of 2006, banks will at first apply a 'top-down' method (either the 'basic indicator' or 'standardised' approach) to assess their operational risk regulatory capital. However, by the end of 2007 they will be able to apply an AMA – and hopefully reduce their regulatory capital charge – provided they meet certain qualitative and quantitative criteria. Rating agencies are another driving force behind the implementation of AMA: banks and corporates that aim for a high credit rating require an accurate assessment of their operational risks to convince rating agencies that capitalization is adequate. The 'top-down' methods provide only a crude estimate of operational risk, based on the unrealistic assumption that operational risks increase proportionally with gross income (see Section III.C.2.4.1).

Quantitative risk management requires an understanding of the 'value-at-risk' (VaR) models that are used to assess market, credit and operational risk capital. Operational VaR modelling is the subject of this chapter. Section III.C.3.1 outlines the 'loss model' approach to computing operational risk capital (ORC). Sections III.C.3.2 and III.C.3.3 examine how to apply some standard functional forms for the frequency distribution and severity distribution. Sections III.C.3.4 and III.C.3.5 describe how each component of ORC is estimated using (a) analytic and (b) simulation methods, and then Section III.C.3.6 explains how the component ORC estimates are aggregated over all business lines and event types to obtain the total ORC estimate for the firm. Section III.C.3.7 concludes.

III.C.3.1 The 'Loss Model' Approach

The actuarial 'loss model' approach has recently become accepted by the industry as the generic AMA for the determination of operational risk regulatory capital for the new Basel 2 Accord (see Sections III.0.3 and III.C.1.7 for further details). Consequently, the loss model approach may also be favoured by rating agencies for firms of high credit quality. But even without this external pressure, many firms will want to adopt operational loss models as a key element of good risk management practice.
Prior to implementing an AMA, the firm must identify events that are linked to operational risks, and map these events to an operational risk 'matrix' such as that based on the Basel 2 consultative documents (see Basel Committee on Banking Supervision, 2001), which is shown in Table III.C.3.1. Each element in the matrix defines an operational risk 'type' by its business line and operational event category. In the AMA, operational risk capital is first assessed separately for each risk type for which the AMA is the designated approach. 182 Then the component estimates are aggregated to obtain the total AMA operational risk capital for the firm. The AMA could be chosen for only the most important risk types, and this depends on the nature of the business: that is, the definition of the important event types and business lines will be specific to the firm's operations. For example, operational losses for clearing and settlements firms may be concentrated in processing risks and systems risks.

182 Corporates are, of course, free to pick and choose which risk types they assess using AMA. Indeed, banks are also afforded some flexibility: under the new Basel Accord they can choose the AMA for some risk types and apply the standardized approach to others. However, once a risk type has been chosen for AMA modelling, the bank will not be allowed in future to apply a more basic risk capital assessment method.

Table III.C.3.1: The operational risk matrix. [The columns are the seven event types: internal fraud; external fraud; employment practices & workplace safety; clients, products & business practices; damage to physical assets; business disruption & system failures; and execution, delivery & process management. The rows are the eight business lines: corporate finance; trading & sales; retail banking; commercial banking; payment & settlement; agency & custody; asset management; and retail brokerage. Each cell of the matrix defines an operational risk type.]

The definition of business units and event types for the operational risk matrix can be specific to the firm. It will be natural to follow pre-existing internal definitions of business units, and to define event types that capture the important operational risks. It should also take account of the granularity that is required for the AMA calculations. Increasing levels of granularity are necessary to include the impact of insurance cover, which may only be available for some event types in certain lines of business.

It is desirable to isolate those elements of the matrix that are likely to be dominant in the final aggregation, as these risk types should be the main priority for risk control. Unfortunately, these can be precisely those risks for which the data are very subjective, consisting of expert opinions or risk self-assessments that have a large element of uncertainty. Somehow, this uncertainty in the data must be included in the risk model: qualitative judgements must be translated into quantitative assessments of risk capital using appropriate statistical methodologies. Many firms now aim to do this through a 'risk self-assessment' process. A risk self-assessment gives a forward-looking, subjective estimate of the loss model parameters.
Risk self-assessment programs can be facilitated in the same way as control self-assessments (see Section III.C.1.9).

Operational risks may be categorised in terms of frequency, the number of loss events during a certain time period, and severity, the impact of the event in terms of financial loss. Risks with very low frequency and high severity, such as a massive fraud or a terrorist attack, could jeopardise the whole future of the firm. These are the risks associated with losses that will lie in the very upper tail of the total loss distribution. Risk capital is not really designed to cover these risks; however, they might be insurable.

Risks with high frequency and low severity, which include credit card fraud and processing risks, can have a high expected loss but will have a relatively low unexpected loss. That is, the range of loss outcomes is relatively narrow. If expected losses for the high-frequency, low-severity risks are covered by the general provisions of the business, the implication is that ORC requirements for these risk types will be relatively low. If this is not the case, then expected losses should be included in the risk capital; but unless expected losses are very high, the risk capital will still be lower than that for medium-frequency, medium-severity risks. These latter risks are the legal risks, the minor frauds, the fines from improper practices, the large system failures and so forth. In general, these should be the main focus of the AMA.

An example of loss data is shown in Table III.C.3.2. For simplicity, and only for the purposes of this illustration, the exact date of each loss is not recorded, only the quarter into which it falls. This allows one to classify data by quarterly frequency – or by semi-annual or annual frequency – but not by monthly or shorter periods. Suppose we choose the quarterly period, so the frequency distribution will be of the number of loss events per quarter. First we ignore the severity data and consider only the dates of the loss events. There were no quarters in which no loss events occurred and no quarters in which only 1 loss event occurred, but there was one quarter in which 2 loss events occurred (2001:Q3) and two quarters in which 3 loss events occurred (2000:Q1 and 2000:Q3). Continuing counting in this way, we can draw an empirical frequency density. Secondly, returning to the example loss data, we now ignore the date of loss, consider only the loss amounts, and hence construct the empirical severity density.

Table III.C.3.2: Example of historical loss experience data (losses in €'000)

2000:Q1   4.45, 13.08, 29.38
2000:Q2   25.92, 39.10, 12.92, 1.24
2000:Q3   8.01, 12.17, 13.88
2000:Q4   53.37, 5.89, 1.32, 7.11
2001:Q1   7.51, 1.17, 1.35, 105.45, 37.24, 16.55
2001:Q2   7.34, 1.35, 1.50, 1.19, 2.80
2001:Q3   3.00, 6.82
2001:Q4   1.73, 231.65, 5.00, 3.10, 26.45, 12.62, 2.32, 71.12, 1.73
2002:Q1   1.12, 4.06, 34.55, 10.24, 24.17, 11.01, 3.89, 187.50, 13.21, 4.49
2002:Q2   2.10, 2.20, 2.31, 25.00, 3.81
2002:Q3   1.48, 1.57, 20.33, 43.78, 3.62, 45.72
2002:Q4   142.59, 20.73, 31.96, 55.60
For some real loss experience data, based on a record going back over 24 months of operational loss events, Figure III.C.3.1 illustrates the empirical frequency density (the number of loss events per month). It shows that, out of the 24 months, there was only one month in which fewer than 10 loss events occurred; there were five months in which between 10 and 19 loss events occurred; three months in which between 20 and 29 loss events occurred; and so forth.

Figure III.C.3.1: Example of (monthly) frequency distribution [histogram of the number of months, out of 24, in which a given number of loss events occurred, with the number of loss events binned from 1 up to 140]

Figure III.C.3.2 illustrates the empirical severity density obtained from the same data. Only losses in excess of €1000 are recorded, so the severity data are truncated at the lower end. Special methods need to be applied to 'detruncate' the severity data so that the full severity density can be recovered.

Figure III.C.3.2: Example of loss severity distribution [empirical density of the loss given event, plotted for losses from 0 to 120]

In summary, for a given operational risk type we construct a discrete frequency density h(n) of the number of loss events n per period, and a continuous density g(l) representing the loss severity, L. The density function f(x) of the total loss distribution is then given by compounding these two densities, usually under the assumption that loss frequency and loss severity are independent. More details of the method for compounding these densities will be given in Section III.C.3.5 below.

Figure III.C.3.3 gives a diagrammatic representation of the relationship between the total loss distribution and its underlying frequency and severity distributions. The basic time period for the frequency density here is one year, and in this case the total loss distribution is also called the 'annual' loss distribution. This is a convenient terminology because later we need to aggregate several 'total' loss distributions over different risk types into a 'total total' loss distribution. By referring to each component as an 'annual' loss distribution (or a 'quarterly' loss distribution if the frequency is measured at quarterly intervals, or 'monthly', and so on) there is no confusion about what is meant by the 'total' loss distribution.

Figure III.C.3.3: Representation of the loss model [the frequency distribution of the number of loss events per period is compounded with the loss severity distribution of the loss given event to give the annual loss density, on which the expected loss and the 99.9th percentile are marked]

Marked on the density function for the total loss distribution are the expected loss, that is, the mean of this distribution, and the unexpected loss at the 99.9th percentile. In general, the unexpected loss at the αth percentile is the difference between the upper αth percentile and the mean of the annual loss distribution. ORC is held to cover all losses, other than highly exceptional losses, that are not already covered by the normal cost of the business.
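Constructing these empirical densities is straightforward to sketch in code. The following Python fragment is a minimal illustration using the loss records of Table III.C.3.2; the variable names are mine, not from the text. It counts loss events per quarter to obtain the empirical frequency density, and then discards the dates to collect the severity sample.

```python
from collections import Counter

# Loss records from Table III.C.3.2: (quarter, loss in euro thousands).
records = [
    ("2000:Q1", 4.45), ("2000:Q1", 13.08), ("2000:Q1", 29.38),
    ("2000:Q2", 25.92), ("2000:Q2", 39.10), ("2000:Q2", 12.92), ("2000:Q2", 1.24),
    ("2000:Q3", 8.01), ("2000:Q3", 12.17), ("2000:Q3", 13.88),
    ("2000:Q4", 53.37), ("2000:Q4", 5.89), ("2000:Q4", 1.32), ("2000:Q4", 7.11),
    ("2001:Q1", 7.51), ("2001:Q1", 1.17), ("2001:Q1", 1.35), ("2001:Q1", 105.45),
    ("2001:Q1", 37.24), ("2001:Q1", 16.55),
    ("2001:Q2", 7.34), ("2001:Q2", 1.35), ("2001:Q2", 1.50), ("2001:Q2", 1.19),
    ("2001:Q2", 2.80),
    ("2001:Q3", 3.00), ("2001:Q3", 6.82),
    ("2001:Q4", 1.73), ("2001:Q4", 231.65), ("2001:Q4", 5.00), ("2001:Q4", 3.10),
    ("2001:Q4", 26.45), ("2001:Q4", 12.62), ("2001:Q4", 2.32), ("2001:Q4", 71.12),
    ("2001:Q4", 1.73),
    ("2002:Q1", 1.12), ("2002:Q1", 4.06), ("2002:Q1", 34.55), ("2002:Q1", 10.24),
    ("2002:Q1", 24.17), ("2002:Q1", 11.01), ("2002:Q1", 3.89), ("2002:Q1", 187.50),
    ("2002:Q1", 13.21), ("2002:Q1", 4.49),
    ("2002:Q2", 2.10), ("2002:Q2", 2.20), ("2002:Q2", 2.31), ("2002:Q2", 25.00),
    ("2002:Q2", 3.81),
    ("2002:Q3", 1.48), ("2002:Q3", 1.57), ("2002:Q3", 20.33), ("2002:Q3", 43.78),
    ("2002:Q3", 3.62), ("2002:Q3", 45.72),
    ("2002:Q4", 142.59), ("2002:Q4", 20.73), ("2002:Q4", 31.96), ("2002:Q4", 55.60),
]

quarters = [f"{y}:Q{q}" for y in (2000, 2001, 2002) for q in (1, 2, 3, 4)]

# Empirical frequency density: how many quarters had 0, 1, 2, ... loss events.
events_per_quarter = Counter(date for date, _ in records)
counts = [events_per_quarter.get(q, 0) for q in quarters]
print(sorted(Counter(counts).items()))   # [(2, 1), (3, 2), (4, 3), ...]

# Empirical severity density: ignore the dates and keep only the loss amounts.
severities = [loss for _, loss in records]
```

Running this confirms the counting in the text: one quarter with 2 events and two quarters with 3 events, with the remaining quarters having between 4 and 10 events.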
The definition of what one means by 'highly exceptional' translates into the definition of a percentile of the loss distribution, such that only losses exceeding this amount are 'highly exceptional'. One should attempt to control these using scenario analysis. As already mentioned, often expected losses are already included in the normal cost of business. For example, the expected loss from credit card fraud may be included in the balance sheet under 'operating costs'. In that case ORC should cover only the unexpected loss at some predefined percentile of the loss distribution.

The ORC is thus defined using the VaR metric, for some risk horizon (defined below) and at some percentile. The Basel Committee recommends the 99.9th percentile and a one-year horizon for the calculation of ORC in banks; internally, for companies wishing to maintain a high credit rating, it is common to use an even higher percentile that is consistent with the desired credit rating. For instance, a firm that targets a AA rating will typically be measuring economic capital at the 99.97th percentile.

III.C.3.2 The Frequency Distribution

The total operational loss refers to a fixed time period over which these events are to be observed. This time period is usually called the risk horizon of the loss model, to emphasise that it is a forward-looking time interval starting from today. For regulatory purposes, both operational and credit risk horizons are set at one year, but for internal purposes it is also common to use risk horizons of less than or more than one year. For ease of exposition we shall henceforth refer only to the one-year horizon – hence the AMA aims to model the annual loss distribution.

Having defined a risk horizon, the probability of a loss event, which has no time dimension, can be translated into the loss frequency, that is, the number of loss events occurring during the risk horizon. In particular, the expected loss frequency, denoted lambda (λ), is the product of the expected total number of events, N, during the risk horizon (including events for which no operational loss was made) and the expected loss probability, p:

λ = Np.  (III.C.3.1)

Sometimes it is convenient to forecast λ directly – this is the case when we cannot quantify the total number of events N and we only observe loss events – and in other cases it is best to forecast N and p separately. For example, N could be the target number of transactions over the next year, and in that case it is p, not λ, that we should attempt to forecast using loss experience and/or risk self-assessment data.

Loss frequency (the number of loss events occurring during the risk horizon) is a discrete random variable: it can only take the values 0, 1, 2, …, N. A fundamental density for such a discrete random variable (which nevertheless is only appropriate under certain assumptions 183) is the well-known binomial density (see Section II.E.4.1),

h(n) = C(N, n) p^n (1 − p)^(N−n),  n = 0, 1, …, N,  (III.C.3.2)

where C(N, n) denotes the binomial coefficient. The binomial frequency is only used when a value for N can be specified. For example, for Trading & Sales/Clients, Products & Business Practices, N could be the target number of deals over the next year; for Retail Banking/External Fraud, N could be the total number of credit cards in issuance during the risk horizon. 184
183 We must assume that the probability of a loss event is the same for all the events in this risk type, and therefore equal to p; and that operational events are independent of each other.
184 Note that here N would be so large that the Poisson distribution can be used instead of the binomial distribution. However, firms may still wish to employ the binomial distribution when, for instance, their target for N over the next year is quite different from the historical value of N.

It is not always possible to specify N. Also, it is difficult to apply the binomial distribution to external consortium data, because the consortium will not normally be recording a value of N for each bank. These restrictions imply that it is often the Poisson distribution that is used in practice. Moreover, since p is normally small, the binomial distribution can often be well approximated by the Poisson distribution. The Poisson distribution has the density function (see Section II.E.4.2)

h(n) = exp(−λ) λ^n / n!,  n = 0, 1, 2, …,  (III.C.3.3)

which has the single parameter λ, the expected frequency, as in (III.C.3.1) above. Incidentally, λ is also equal to the variance of the Poisson distribution. In a risk self-assessment, the question 'what is the expected number of loss events next year?' will directly invoke the parameter λ of the Poisson distribution. Alternatively, one could equate the mean frequency observed empirically with λ. If the empirical frequency density is not well modelled by a Poisson distribution – for example, if one equates the empirical mean frequency with λ but then finds that the sample variance is significantly different from λ – an alternative, more flexible functional form is the negative binomial distribution, with density function

h(n) = C(α + n − 1, n) [1/(1 + β)]^α [β/(1 + β)]^n,  n = 0, 1, 2, …,  (III.C.3.4)

which has mean αβ and variance αβ(1 + β).

There is absolutely no point in applying a statistical test to decide which of the frequency distributions provides the closest fit to loss data: the binomial is only applicable when a 'number of events' can be quantified, and the negative binomial has two parameters so it will always fit better than the Poisson. But of course, this does not imply that one should always choose the negative binomial as the frequency functional form. 185 For instance, it is very difficult to design psychologically meaningful questions in a risk self-assessment that are compatible with a negative binomial frequency.

In fact, in contrast to market and credit risk, for operational risk the precise fitting of data by choosing the best functional form is not a main source of model risk. The model risk arising from inappropriate and/or ad hoc methods when handling the data is a much more important source of operational VaR model risk. However, the main model risk is 'aggregation risk'. This stems from making inappropriate assumptions regarding dependencies when aggregating risks, and it has by far the most influence on the total VaR estimate (see Section III.C.3.6).

185 The choice of functional form for the frequency distribution should depend on both the type of data and the source(s) of the data – internal and/or external loss data and/or risk self-assessments.
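The binomial–Poisson approximation, and the variance test that motivates the negative binomial, are easily checked numerically. The sketch below uses scipy with illustrative parameter values of my own choosing (they are not taken from the text).

```python
import numpy as np
from scipy.stats import binom, poisson, nbinom

N, p = 30_000, 0.0005        # hypothetical volume and loss probability
lam = N * p                  # lambda = Np, as in (III.C.3.1)

# For small p, Binomial(N, p) and Poisson(lambda) are almost indistinguishable.
n = np.arange(0, 40)
print(np.max(np.abs(binom.pmf(n, N, p) - poisson.pmf(n, lam))))  # tiny

# Negative binomial (III.C.3.4), which has mean a*b and variance a*b*(1 + b).
a, b = 5.0, 3.0
nb = nbinom(a, 1.0 / (1.0 + b))      # scipy's (n, p) parametrisation
print(nb.mean(), nb.var())           # 15.0 and 60.0, i.e. a*b and a*b*(1+b)
```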
Example III.C.3.1: Estimating a Poisson frequency from historical data

(i) High-frequency risk type. Suppose historical loss events give the following data on just the number of loss events recorded each month over the last 2 years:

Month:                  1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24
Number of loss events: 20  13  24  26  25  21  17  13  21  30  16  24  31  20  19  21  14  14  15  18  16  21  22  19

The total number of loss events recorded was 480, so the average number of loss events per month is 20. The monthly frequency distribution is therefore estimated as a Poisson distribution with λ = 20. The density function is shown in blue in Figure III.C.3.4.

(ii) Low(er)-frequency risk type. Suppose historical loss events give the following data on just the number of loss events recorded each month over the last 2 years:

Month:                  1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24
Number of loss events:  0  12   3   0   8   4  10   1   0   9   2  10   3   5   3   5   1   7   4   7  10   5   7   4

The total number of loss events recorded is now only 120, so the average number of loss events per month is 5. The monthly frequency distribution is therefore estimated as a Poisson distribution with λ = 5. The density function is shown in red in Figure III.C.3.4.

Figure III.C.3.4: Poisson frequency densities for high-frequency and low-frequency risks [the Poisson densities with λ = 20 (blue) and λ = 5 (red), plotted against the number of loss events per month]
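The estimation in Example III.C.3.1 is a one-line calculation. The following sketch reproduces it, and also prints the sample variance of each series, which is the diagnostic suggested above for deciding between the Poisson and the negative binomial.

```python
import numpy as np
from scipy.stats import poisson

# Monthly loss-event counts from Example III.C.3.1 (i) and (ii).
high_freq = [20, 13, 24, 26, 25, 21, 17, 13, 21, 30, 16, 24,
             31, 20, 19, 21, 14, 14, 15, 18, 16, 21, 22, 19]
low_freq = [0, 12, 3, 0, 8, 4, 10, 1, 0, 9, 2, 10,
            3, 5, 3, 5, 1, 7, 4, 7, 10, 5, 7, 4]

for counts in (high_freq, low_freq):
    lam = np.mean(counts)        # lambda = average number of events per month
    print(f"monthly lambda = {lam:.0f}, "
          f"sample variance = {np.var(counts, ddof=1):.1f}")
    # The fitted monthly density h(n), as plotted in Figure III.C.3.4:
    density = poisson.pmf(np.arange(36), lam)
```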
Notes:
1. Lower-frequency risks have more skewed and leptokurtic frequency densities than high-frequency risks, so that the annual loss distribution will also be highly skewed and leptokurtic for the low-frequency, high-severity risks. This property influences the compound distribution, such as that shown in Figure III.C.3.3.
2. The Poisson annual frequency density is obtained by multiplying the estimated lambda from monthly data (20 and 5 in the above example) by 12.

III.C.3.3 The Severity Distribution

Now consider how to 'fit' a severity distribution. Various functional forms are available for continuous random variables like severity. High-frequency risks can have severity distributions that are relatively lognormal, but low-frequency risks can have severity distributions that are too skewed and leptokurtic to be well captured by the lognormal density function. The lognormal distribution for loss severity, L, has the density function

g(l) = [1/(lσ√(2π))] exp(−(ln l − μ)²/(2σ²))  (l > 0).  (III.C.3.5)

Here the log of the loss severity (or log severity for short, denoted ln L) is normally distributed with mean μ and variance σ². This is a very common distribution in financial mathematics, and more details are given in Section II.E.

Common choices for the more skewed and leptokurtic severities are therefore the gamma density,

g(l) = l^(α−1) exp(−l/β) / [β^α Γ(α)]  (l > 0),  (III.C.3.6)

where Γ(·) denotes the gamma function, and the two-parameter hyperbolic density,

g(l) ∝ exp(−α√(δ² + l²))  (l > 0),  (III.C.3.7)

where the constant of proportionality involves B(·), the Bessel function of the first kind. On first sight these may look daunting, but Excel does have in-built functions for these common distributions (with the obvious names). Other functional forms for the severity that have been considered by some banks include the generalised hyperbolic, lognormal mixtures, and general mixture distributions.

Again, the choice of functional form should depend on both the type of data and the source(s) of the data – internal and/or external loss data and/or risk self-assessments. And again, there is absolutely no point in applying a statistical test to decide which of these distributions provides the closest fit to loss data: the generalised four-parameter hyperbolic distribution will always fit best, simply because it has the most parameters. But this does not imply that one should always choose the generalised hyperbolic distribution as the severity functional form. For example, it is very difficult to design a risk self-assessment that is compatible with any severity density having more than two parameters.

In some databases, for instance those constructed from public, 'newsworthy' events, only very extreme losses are recorded. There is an implicit high threshold for the losses included in the database, and thus some analysts have attempted to model the severity of these losses using the generalised Pareto and other distributions from the class of distributions in 'extreme-value theory' (EVT). Whilst EVT may have found useful applications to high-frequency, tic-by-tic financial market data, one should not forget that it was introduced (almost 50 years ago) to model the distributions of extreme values in repetitive, independent and identically distributed processes, such as those observed in the physical sciences; see, for example, Gumbel (1958) and Embrechts et al. (1997). To attempt to fit a generalised Pareto distribution, or any other extreme-value distribution, to the sparse and fragmented data that are available for very large operational losses is, in my view, a triumph of hope over reason. Nevertheless, some AMAs are currently attempting to incorporate extreme-value densities, and readers should therefore have some understanding of them.

The 'peaks-over-threshold' (POT) model applies when losses over a high and predefined threshold u are recorded. The distribution function Gu of the excess losses, X − u, has a simple relation to the distribution F(x) of the loss severity X. In fact

Gu(y) = prob(X − u < y | X > u) = [F(y + u) − F(u)] / [1 − F(u)].  (III.C.3.8)

For many choices of underlying distribution F(x), the distribution Gu(y) will belong to the class of generalised Pareto distributions (GPDs), given by

Gu(y) = 1 − exp(−y/β)  if ξ = 0,
Gu(y) = 1 − (1 + ξy/β)^(−1/ξ)  if ξ ≠ 0.  (III.C.3.9)

The parameters β and ξ will depend on the type of underlying loss distribution F(x) and on the choice of threshold u. Some generalised Pareto densities for different values of β and ξ are shown in Figures III.C.3.5 and III.C.3.6. Note that the primary effect of increasing β is to increase the range of the density (Figure III.C.3.5), and ξ is called the 'tail index' precisely because as ξ increases so does the weight in the tails of the GPD (see Figure III.C.3.6).
Figure III.C.3.5: Generalised Pareto densities (ξ = 1) [densities for different values of the scale parameter, including β = 2 and β = 5, plotted over losses from 0 to 20]

Figure III.C.3.6: Generalised Pareto densities (β = 1) [densities for different values of the tail index, including ξ = 0, ξ = 0.5 and ξ = 1, plotted over losses from 0 to 20]
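Both the lognormal severity and the POT/GPD model can be fitted with standard routines. The sketch below is illustrative only: the severity sample is lifted from Table III.C.3.2, and the threshold u = 40 is an arbitrary choice of mine, not a value from the text.

```python
import numpy as np
from scipy.stats import lognorm, genpareto

# A small severity sample in euro thousands (from Table III.C.3.2, for illustration).
losses = np.array([4.45, 13.08, 29.38, 25.92, 39.10, 12.92, 1.24, 8.01, 12.17,
                   13.88, 53.37, 105.45, 231.65, 187.50, 142.59, 71.12, 45.72])

# Lognormal severity (III.C.3.5): with loc fixed at 0, ln L ~ N(mu, sigma^2).
sigma, _, scale = lognorm.fit(losses, floc=0)
mu = np.log(scale)
print(f"lognormal: mu = {mu:.2f}, sigma = {sigma:.2f}")

# POT/GPD (III.C.3.8)-(III.C.3.9): fit the excesses over a threshold u.
u = 40.0
excesses = losses[losses > u] - u
xi, _, beta = genpareto.fit(excesses, floc=0)
print(f"GPD: tail index xi = {xi:.2f}, scale beta = {beta:.2f}")
```

With such a tiny sample the GPD estimates are of course extremely noisy, which is precisely the point made above about fitting extreme-value distributions to sparse operational loss data.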
III.C.3.4 The Internal Measurement Approach

Under certain assumptions (which are rather strong, but nevertheless appear to be admissible by regulators), there are simple analytic formulae for the expected loss and the unexpected loss in the annual loss distribution. These formulae are based on what the Basel Committee has called the 'internal measurement approach' (IMA). The basic formula for the IMA risk capital calculation given in the proposed Basel 2 Accord is

ORC = gamma × expected annual loss = γ × NpL,  (III.C.3.10)

where N is a volume indicator (a proxy for the number of operational events), p is the expected probability of a loss event, L is the loss given event, and γ ('gamma') is a multiplier that depends on the operational risk type. Note that NpL only corresponds to the expected annual loss when the loss frequency is binomially distributed and the loss severity is not regarded as a random variable.

Indeed, a very strong assumption of the IMA is that each time a loss is incurred, exactly the same amount is lost (within a given risk type). Introduction of severity uncertainty, as in the 'loss distribution approach' (LDA) described below, will always increase the ORC. Thus the IMA provides only a useful benchmark: a lower bound for the operational risk capital calculated using the full simulation method that we shall describe presently.

Instead of following Consultative Paper 2.5 (Basel Committee, 2001) and writing unexpected loss as a multiple (γ) of expected loss, I write unexpected loss as a multiple phi (φ) of the loss standard deviation. That is,

ORC = φ × standard deviation of annual loss.  (III.C.3.11)

Recall that ORC is measured by a VaR metric, and is either the unexpected loss at the 99.9th percentile or the 99.9th percentile itself, the latter being the case when expected losses are not already provisioned. I have shown elsewhere (Alexander, 2003, p. 151) how the value of φ in (III.C.3.11) can be calculated, and provide statistical tables for φ: it only depends on the percentile and the expected loss frequency. Thus (III.C.3.11) can be rewritten, for a Poisson frequency distribution having expected frequency λ (= Np), as

ORC = φ × √λ × L.  (III.C.3.12)

Because loss severity is assumed to be the same amount, L, every time a loss is made, all the randomness in the annual loss distribution comes only from the frequency distribution. Hence the quantities on the right-hand side of (III.C.3.11) – the mean and the standard deviation – are just L times the respective quantities of the frequency density. Thus L cancels, and φ really just refers to the 99.9th percentile, the mean and the standard deviation of the frequency density:

φ = (99.9th percentile − mean)/standard deviation, if expected losses are provisioned;
φ = 99.9th percentile/standard deviation, otherwise.  (III.C.3.13)

Having obtained φ we can obtain γ as follows: compare (III.C.3.10) and (III.C.3.11). Since ORC is the same in both, equating them gives

γ × expected annual loss = φ × standard deviation of annual loss,

so γ is a multiple of φ: in fact γ = φ × (standard deviation/mean). The same argument as above (i.e. that L cancels) implies that φ and γ are related through a factor (standard deviation/mean) of the frequency density. For instance, for the Poisson frequency, which has mean λ and variance λ,

γ = φ/√λ.  (III.C.3.14)

In this way, once we have estimated λ (either from a risk self-assessment or from loss data, as explained in Section III.C.3.2), it is easy to calibrate γ. We thus construct statistical tables for the gamma factor in the IMA, a different table for each choice of frequency distribution. Table III.C.3.3 gives the Poisson gamma tables (φ is also shown).
Table III.C.3.3: The 'gamma' in the IMA (Poisson frequency)

[The table lists, for expected loss frequencies λ ranging from 100 down to 0.01, the 99.9th percentile of the Poisson frequency distribution together with the corresponding φ and γ. For example, the row for λ = 100 reads: 99.9th percentile 131.805, φ = 3.1805, γ = 0.31805; the rows for low frequencies show very much larger values of γ.]

Source: Alexander (2003). Table reproduced by kind permission of Pearson Education (Financial Times-Prentice Hall).

Note that φ does not change much with λ – it is around 3 or 4, unless the expected number of loss events is less than 1 every 50 years – but γ does change with λ. For instance, when λ = 100 (i.e. a very high-frequency risk) the 99.9th percentile of the Poisson distribution is 131.805, so we have φ = (131.805 − 100)/√100 = 31.805/10 = 3.1805 and γ = 3.1805/√100 = 0.31805. As λ increases above 100, the frequency distribution approaches the normal distribution, so φ tends to a lower limit of 3.09 (this is the 99.9th percentile in the standard normal distribution). For low-frequency risks γ is very large indeed, but for high-frequency risks it is very low. Thus, for a capital charge based only on unexpected loss, γ tends to a lower limit of zero for very high-frequency risks!

Also note that the ORC depends on the unexpected loss, not on the expected loss, and it should increase as the square root of the expected frequency. Hence doubling the size of one's operations should only lead to an increase of √2 in the capital charge. So in the AMA the ORC will not be linearly related to the size of the bank's operations, as it is under the basic indicator or standardised approach. Indeed, this may be an incentive for banks to use the AMA when they are permitted to do so at the end of 2007. The ORC is, however, linearly related to loss severity – yet another reason why high-severity risks (which are by definition also low-frequency) will attract higher capital charges than low-severity risks. The following two examples illustrate these points.

Example III.C.3.2: Credit card operational risk

Question: 50,000 credit cards will be in issuance during the forthcoming year and the expected probability of a credit card fraud on any card is 0.05. If every fraud gives rise to an operational loss of €1000, calculate (a) the expected loss and (b) the ORC at the 99.9th percentile, with and without the assumption that the expected loss is already covered by the normal cost of the business.

Answer:
(a) λ = 50,000 × 0.05 = 2500, so the expected loss = 2500 × 1000 = €2.5 million.
(b) For such a high λ we have φ = 3.1, and ORC is estimated using (III.C.3.12).
(c) The standard deviation of the annual loss is √λ × L = √2500 × 1000 = €50,000, so ORC = 3.1 × 50,000 = €155,000 (assuming expected loss is provisioned for), or ORC = 155,000 + 2.5 million = €2.655m (if expected loss must be included).

The expected losses are normally already provisioned in the normal cost of being in the credit card business. In that case the ORC is there to cover the unexpected losses. The ORC for this type of risk will therefore be very small, and this risk type will have little impact on the total ORC for a retail bank.

This example shows that for high-frequency, low-severity operational risks the expected loss is much greater than the unexpected loss. However, the next example shows that the total ORC will be dominated by the low-frequency, high-severity operational risks.

Example III.C.3.3: Transactions errors versus internal fraud

Show that the expected loss is the same in the following two cases, but that the ORC is far greater for the internal fraud:

Case A: 30,000 transactions are processed in the back office and the expected probability of a human error giving rise to an operational loss is 0.01. Each time a loss occurs the operational loss is €1000.
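The φ and γ factors are easy to recover from the Poisson percentile. The sketch below is a rough check rather than the exact interpolation used to produce Table III.C.3.3 (scipy's ppf returns an integer percentile, so the results differ slightly from the tabulated 131.805), and it reproduces the ORC of Example III.C.3.2.

```python
from math import sqrt
from scipy.stats import poisson

def phi_gamma(lam, q=0.999):
    """phi = (99.9th percentile - mean)/std and gamma = phi/sqrt(lam), as in
    (III.C.3.13) and (III.C.3.14), assuming expected losses are provisioned."""
    percentile = poisson.ppf(q, lam)      # integer-valued percentile
    phi = (percentile - lam) / sqrt(lam)
    return phi, phi / sqrt(lam)

print(phi_gamma(100))      # about (3.2, 0.32) vs. 3.1805, 0.31805 in the table

# Example III.C.3.2: lambda = 2500 and L = 1000 euros.
lam, L = 50_000 * 0.05, 1000.0
phi, _ = phi_gamma(lam)
print(phi * sqrt(lam) * L)  # roughly 155,000 euros, as in the example
```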
The ORC calculations based on (III. That is. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www.C.pdffactory. even if sufficient data are available to generate these distributions.com 412 ... if expected losses are not provisioned for elsewhere. Figure III. the result will be identical to the IMA estimate. the simulated annual loss distribution will not be an accurate representation if the same frequencies and severities are repeatedly sampled.TL 10000 . 186 Figure III. The ORC for this risk type is then the difference between the 99. (III.pdffactory. This shows that operational risk capital will increase more or less in proportion with the severity standard deviation. There are two reasons for this.15) The resulting ‘modified’ IMA estimate is similar to the LDA estimate. for any given risk type: the 186 The use of empirical frequency and severity distributions is not advised. So when loss severity becomes more uncertain.7: Simulating the annual loss distribution 1 1 Severity CDF Frequency CDF 0 0 L 2 …L n …L 1 n Total annual loss from one simulation = Li = TL Repeat to obtain TL 1 . this has a direct.3.C.C.. in which case it is multiplied by the factor (1 + r 2). I considered two cases: Without an assumption of loss severity uncertainty in the LDA. or just the 99. Secondly.. 7) I have compared the LDA estimate of ORC with the result of applying the IMA formula of Section III. 0.3.3. provided enough simulations are used in the LDA calculation.C.4.7 illustrates the first two steps in the simulation algorithm.C. The IMA estimate can be modified to include the assumption of loss severity uncertainty.The PRM Handbook – Volume III 6.……. 2003. there will be no ability to carry out scenario analysis in the model unless one specifies and fits the parameters of a functional form for the severity and frequency distributions. Firstly.TL 10000 ORC Estimate = Percentile (TL 1 .999) – Average (TL 1 . almost linear impact on the capital charge. A very strong conclusion can be drawn. That is. where r = (severity standard deviation/severity mean).3.TL 10000 ) Elsewhere (Alexander.9th percentile and the mean of the simulated annual loss distribution.9th percentile. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. Ch. the risk capital calculation based on the LDA can easily give an ORC estimate that is 10 or 20 times larger than the IMA estimate. in the sense that if one changes then so will the other. but they are not ‘correlated’. For instance. this is an approximate upper bound for the total risk and is often used for regulatory risk capital calculations (to err on the conservative side). workload – affect many types of operational risks. 187 (b) because the ‘total’ we get for the risks is very much influenced by the assumptions we make about the dependencies between the risks. legal risks. It is easy to aggregate variances (and to allow for correlation in the process) but it is not so simple to aggregate VaR. Correlation is a metric that refers to ‘jointly stationary’ random variables with elliptical distributions like the multivariate normal (see Section II.com 413 . Thus risk aggregation is very difficult even within market and credit risks – it is a very thorny issue in operational risk! Regarding dependency assumptions.C. management.C.3). particularly when the VaR metric is applied.3. 
percentiles obey no simple standard rules. 188 Indeed.The PRM Handbook – Volume III LDA ORC is always far greater than the ORC calculated from an analytic formula based on the assumption that severity is non-random (and this includes the IMA formula).E. 188 I prefer not to use the term ‘correlation’ – instead I use the term ‘dependency’. III. Often. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. and (c) because dependencies between risks are very difficult to assess.2.pdffactory. when aggregating risks. This difference will be most pronounced for operational risks where the loss severity is highly uncertain (so that r in (III. and fraud. provided that it can demonstrate that its systems for measuring correlations are sound and implemented with integrity’. 187 Unlike the ‘variance’ risk metric.7) but there is no evidence that operational losses behave in this way. risk drivers associated with ‘human’ risks – such as pay. Dependencies between operational risks are common. It is very difficult to aggregate risks (a) when they are assessed using a VaR metric at an early stage.3. including employment practices.15) is large): for these risk types. they occur whenever two operational risk types share a common key risk driver (see Section III. Operational risks are likely to be dependent. The aggregation of risks – not just operational risks – is currently a hot topic of research.C. the Basel Committee (2001) has stated: ‘The bank will be permitted to recognize empirical correlations in operational risk losses across business lines and event types. transactions processing. training.4.6 Aggregating ORC Having estimated the ORC for each ‘risk type’ for which the AMA is the designated approach. we must now ‘add up’ these ORC estimates to obtain the total ORC for the firm. banks will make just two simple assumptions: Full dependency: This implies risks should simply be added to obtain the total risk. The PRM Handbook – Volume III No dependency: This is the assumption of ‘independence’. But it has shown that small changes in correlation can produce estimates of total operational risk capital that is doubled – or halved – even when aggregating only two annual loss distributions.C.4: Effect of ‘correlation’ on risk aggregation Consider the two annual loss distributions with density functions shown in Figure III. Table III. However the unexpected loss at the 99.3.3. Then Table III.4 in each case.5 shows that the expected loss is hardly affected by the assumptions made about co-dependencies of these two risks: it is approximately 22.7665 54.9 shows the total loss under correlations of = 0. 0.com 414 .3909 22.9th Percentile Unexpected Loss Of course.3. Copyright ? 2004 The Authors and the Professional Risk Manager’s International Association PDF created with pdfFactory Pro trial version www. Example III.4. It implies that the total risk is the square root of the sum of the squares of the component risks.5. this will give an approximate lower bound for the total risk capital since it is unlikely that there will be many large negative dependencies between different operational risks.9th percentile (and at the 99th percentile) is very much affected by the assumption one makes about dependency.5 =0 = 0.3.3714 31. Figure III.5.3951 22. Obviously the effect of correlation assumptions on the aggregation of many annual loss distributions to the total annual loss for the firm will be quite enormous. 
Regarding dependency assumptions, for aggregating different types of operational risks banks will make just two simple assumptions:

Full dependency: This implies risks should simply be added to obtain the total risk. In contrast to market and credit risk, this is an approximate upper bound for the total risk, and it is often used for regulatory risk capital calculations (to err on the conservative side).

No dependency: This is the assumption of 'independence'. It implies that the total risk is the square root of the sum of the squares of the component risks. Obviously, this will give an approximate lower bound for the total risk capital, since it is unlikely that there will be many large negative dependencies between different operational risks.

Thus the total risk estimate will be somewhere between a lower bound given by the square root of the sum of the squared risk estimates and an upper bound given by the sum of the risk estimates. The next example shows that small changes in 'correlation' between operational risk types will have a huge effect on the total risk estimate: for example, the total risk capital based on the aggregation of only two operational risk types can easily be doubled – or halved – depending on the assumption made about their dependency.

Example III.C.3.4: Effect of 'correlation' on risk aggregation

Consider the two annual loss distributions with density functions shown in Figure III.C.3.8. 189 Table III.C.3.5 reports the expected loss, the 99.9th percentile and the unexpected loss of the total loss under correlations of ρ = −0.5, 0 and 0.5, respectively, and Figure III.C.3.9 shows the corresponding total loss distributions. The expected loss is hardly affected by the assumptions made about the co-dependency of these two risks: it is approximately 22.4 in each case. However, the unexpected loss at the 99.9th percentile (and at the 99th percentile) is very much affected by the assumption one makes about dependency.

Table III.C.3.5: Risk capital estimates under different correlation assumptions

                     ρ = −0.5    ρ = 0      ρ = 0.5
Expected Loss        22.3951     22.3909    22.3977
99.9th Percentile    41.7665     48.7658    54.7683
Unexpected Loss      19.3714     26.3749    31.1660

Of course, the values of the correlation parameter were chosen arbitrarily in Example III.C.3.4. But it has shown that small changes in correlation can produce estimates of total operational risk capital that are doubled – or halved – even when aggregating only two annual loss distributions. Obviously, the effect of correlation assumptions on the aggregation of many annual loss distributions into the total annual loss for the firm will be quite enormous.

Figure III.C.3.8: Two annual loss densities [one bimodal density and one gamma density, plotted over losses from 0 to 25]

189 The bimodal density has been fitted by a mixture of two normal densities: with probability 0.7 the normal has mean 6 and standard deviation 2.5, and with probability 0.3 the normal has mean 14 and standard deviation 2. The other annual loss is gamma distributed with α = 7 and β = 2.
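The sensitivity of the aggregate 99.9th percentile to the dependency assumption can be demonstrated by coupling two annual loss distributions with a Gaussian copula and varying the correlation, in the spirit of Example III.C.3.4. The sketch below uses two gamma-distributed annual losses for simplicity (illustrative parameters; only the second matches footnote 189), so the numbers will not reproduce Table III.C.3.5 exactly.

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(42)
M = 500_000

loss1 = gamma(a=4, scale=2)      # illustrative stand-in for the bimodal density
loss2 = gamma(a=7, scale=2)      # the gamma(alpha = 7, beta = 2) of footnote 189

for rho in (-0.5, 0.0, 0.5):
    # Gaussian copula: correlated normals -> uniforms -> loss quantiles.
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=M)
    u = norm.cdf(z)
    total = loss1.ppf(u[:, 0]) + loss2.ppf(u[:, 1])
    el, p999 = total.mean(), np.quantile(total, 0.999)
    print(f"rho = {rho:+.1f}: EL = {el:.1f}, UL at 99.9% = {p999 - el:.1f}")
```

As in the example, the expected loss is essentially invariant to ρ while the unexpected loss moves substantially.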
We may indeed apply a VaR metric to an operational loss distribution. but that does not mean that operational risk analysis is a statistical science with proper economic foundations. and Mikosch. However. high-severity risk types.E). But risk self-assessment questionnaires require careful design and even more careful control of the responses. References Alexander. or external data from public sources. Embrechts. C.) (2003) Operational Risk: Regulation. And software consultants who focus on fitting the ‘best’ functional form to operational risk data are missing the big picture (and making a fast buck in the process!).bis. as if the sparse and unreliable data on operational losses should be treated like the plentiful and reliable data on market prices..org Gumbel. not market or firm behaviour! Thus we really should not be talking about ‘correlations’. When data are subjective the proper statistical methods to use are Bayesian methods (see Chapter II. how these data are handled. Analysis and Management. Operational risks are not at all like market or credit risks – operational risk analysis is about human behaviour. Kl黳pelberg. at the moment we are witnessing attempts to apply some of the traditional ‘classical’ methods that have been developed in market risk to operational risk analysis. London: FTPrentice Hall (Pearson Education).The PRM Handbook – Volume III The next most important source of model risk is the way that data are obtained and.