The Institutional Risk Analyst
published by Lord, Whalen LLC
Copyright 2014 - All Rights Reserved. No Republication Without Permission.
Basel II & Neoclassical Idiocies: Mark to Model Lives Again!
November 1, 2006

Our comment regarding the lessons to be taken from the Amaranth hedge fund collapse drew some sharp criticisms from the risk modeling community (See The IRA, "Counterparty Credit Risk: Amaranth Aftermath"). In particular, our assertion that VaR models are useless for assessing market or credit risk seems to have struck a raw nerve among some members of the NY risk management community. Guess there's nothing to do but start the root canal.

First it needs to be stated that we have a bias. IRA exists in order to explore, develop and deliver fundamental analytical solutions. We started our firm in 2003 because we perceived that the Street had taken quantitative methods for assessing credit risk several bridges too far, resulting in a lengthening string of risk management failures such as Enron, Parmalat and WorldCom.

These corporate scandals, let us recall, were events where fundamental indicators were shining bright red, but none of the derivative models used by quantitative credit analysts detected a problem - until well after the fact. Readers of The IRA will recall our report on the November 2004 meeting of the International Association of Financial Engineers, when one of the founders of Moody's KMV admitted that quantitative models did not provide any advance warning of Enron and other corporate failures. Small wonder then that the IAFE subsequently implored us to remove our verbatim notes of that discussion from our web site.

Just as the economic profession has become dominated by neoclassical thought, which holds that all market participants are rational and fully informed, the risk community has embraced quantitative methods to such an extreme degree that the idea of performing a classical credit analysis on obligors is not even considered. Fascination with models and the mathematical tools borrowed from quantum physics has turned the economics profession - and, we suggest, their brethren in the risk analytics community - into what Philip Ball, writing in the FT on Monday, calls purveyors of "neoclassical idiocies."

For example, in their IMF working paper (06/134) on "Risk Models of the Financial Sector Assessment Program," Avesani, Liu, Mirestean and Salvati state that "over the last ten years, we have witnessed major advances in the field of modeling credit risk. There are now three main approaches to quantitative credit risk modeling: the "Merton-style" approach, the purely empirical econometric approach, and the actuarial approach… Each of these approaches has, in turn, produced several models that are widely used by financial institutions around the world."
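The "Merton-style" approach the authors cite treats a firm's equity as a call option on its assets, so default corresponds to asset value falling below the face value of debt at the horizon. A minimal sketch, with invented inputs (the asset value, debt, drift and volatility below are illustrative, not drawn from any real obligor):

```python
# Sketch of the Merton-style approach: default occurs when firm asset
# value falls below the face value of debt at horizon T. The
# distance-to-default is mapped through the standard normal CDF.
from math import log, sqrt
from statistics import NormalDist

def merton_pd(asset_value, debt_face, mu, sigma, horizon=1.0):
    """Probability of default under the Merton model."""
    dd = (log(asset_value / debt_face)
          + (mu - 0.5 * sigma ** 2) * horizon) / (sigma * sqrt(horizon))
    return NormalDist().cdf(-dd)

# Illustrative firm: $120MM assets vs. $100MM debt, 8% drift, 25% volatility
pd = merton_pd(120.0, 100.0, mu=0.08, sigma=0.25)
```

Note that every input here is a market-derived or assumed quantity; nothing in the calculation requires looking at the obligor's actual financial statements, which is precisely the point made below.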

Notice that these respected researchers do not even mention the idea of using fundamental financial factors to track the behavior of a specific obligor. In the same paper, the authors then articulate "three main approaches to estimating the probabilities of default. One approach is to use historical frequencies of default events for different classes of obligors in order to infer the probability of default for each class. Alternatively, econometric models such as logit and probit could be used to estimate probabilities of default. Finally, when available, one could use probabilities implied by market rates and spreads."
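Two of the three estimation routes described above can be sketched in a few lines: the historical-frequency route divides observed defaults by the number of obligors in a class, while the market-implied route backs a risk-neutral PD out of a credit spread via the common approximation spread ≈ PD × (1 − recovery). All figures below are invented for illustration:

```python
# Sketch of two of the three PD estimation routes named in the paper.
# Inputs are illustrative, not real portfolio data.

def historical_pd(defaults, obligors):
    """PD inferred from the observed default frequency of an obligor class."""
    return defaults / obligors

def spread_implied_pd(spread_bps, recovery_rate):
    """Annualized risk-neutral PD implied by a credit spread
    (the 'credit triangle' approximation: spread = PD * (1 - recovery))."""
    return (spread_bps / 10_000) / (1.0 - recovery_rate)

pd_hist = historical_pd(defaults=12, obligors=1_000)            # 1.2%
pd_mkt = spread_implied_pd(spread_bps=180, recovery_rate=0.40)  # 3.0%
```

Both routes characterize a class or a market price, not the specific obligor - which is the gap the article is pointing at.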

Again, the concept of focusing attention on the behavior of a specific obligor is not even considered among the contemporary methods in use today by risk professionals. Instead, derivative indicators and models, using aggregate studies of past default experience and other data, are employed to estimate the possible future behavior of obligors. So hungry are the modelers for data to feed the great engine of statistical analysis that they even admit to using bond spreads as data inputs when equity market prices are unavailable!

Contemporary risk models, to paraphrase Ball, assume a degree of homogeneity and stability among subjects that simply does not exist. Most market professionals know this statement to be true, yet the risk vendors and the largest financial institutions have such an enormous investment in quantitative methods that they are unwilling to give them up.

The situation today reminds us of another era not too long ago, when the "whiz kids" of Wall Street made the mistake of relying upon "mark to model" methods for managing the risk in their bond portfolios - and wound up damaging their trading desks and running from the law in shame. How quickly we forget the damage done by those SPARC 2 workstations in the hands of math geniuses. When the models and real life diverged, the trade tickets were found hidden in desk drawers as the guilty parties prayed that the real world would move back towards their risk position before anyone found out.

Why are such distinctions important to risk managers and regulators, especially those working to implement Basel II? Because unless and until a more balanced approach to credit risk methodology is adopted by US financial institutions, achieving full implementation of a true "risk weighted" approach to measuring bank capital adequacy will be impossible. "Marking to model" today does the same thing it did in the 1980s: it coils the risk spring with assumptions that are ever more distant from the fundamentals.

Recall, for example, that the original Basel II proposal was meant to encourage banks to calculate specific measures for the risks in each area of business, using consistent definitions for key terms such as Probability of Default ("P(D)"), Loss Given Default, Expected Loss and Exposure at Default. The idea of the original Basel II proposal was to track the P(D) of specific obligors and facilities, aggregate these individual ratings to measure the risks in a given portfolio and institution, and then assign a required capital weight to support these risks.

Banks around the world are spending billions of dollars to comply. From Bahrain to Beijing, they are increasing the safety and soundness of their banking systems by getting to know the fundamentals of their obligors. Unfortunately, the US banking industry, enamored as it is with derivative indicators, has never answered the obligor-specific challenge originally posed by Basel II.

With a few notable exceptions such as Citigroup (NYSE:C), the largest US banks have actually increased their reliance on quantitative methods for measuring credit risk, making it impossible for most large US banks to achieve the most advanced level of Basel II where each customer is internally rated by the bank. Whereas Basel II asks about the P(D) of a specific obligor, the best that most large banks and the vendors which support them are able to do in response is to talk about estimates of risk in a portfolio containing thousands of obligors.

The excessive reliance on quantitative methods to measure credit risk is a troubling development that should concern bank leadership. Why is it that so many of the brain trusts inside institutions are allowed to perpetuate the use of aggregate statistical models when the data and capability exist to build the best bottom-up Basel II credit obligor modeling systems in the world? Had this process begun five years ago, when the new Basel Accord was in formulation, surely the dissonance between the models and the fundamentals would have narrowed to manageable margins by now. It's up to the leadership of the major US banks and the regulatory agencies to begin to ask why this has not happened.

In the meantime, the modeling community shall continue to tack fudge factor on top of fudge factor to twist aggregate statistics to fit expected outcomes. In a paper entitled "Convergence of Credit Capital Models," just published by the International Association of Credit Portfolio Managers and the International Swaps and Derivatives Association, the authors note that new regulatory capital requirements under Basel II have been promulgated that "allow firms following an advanced approach to submit their own estimates of key parameters as input into a single regulatory formula which does not depend on portfolio composition." The authors also note that many of the largest US banks depend primarily on models provided by third-party vendors to calculate credit risk and capital requirements.

Translated into plain terms, for the purpose of determining capital adequacy under Basel II, the largest US banks are proposing to substitute estimates of future expected and unexpected losses for actual analysis of specific obligors. This same derivative methodology will no doubt be proffered to satisfy the requirements of the revised survey for Shared National Credits, which like Basel II, asks banks to provide estimates of P(D) and other credit risk measures for specific obligors.

Regulators are responding to this pressure to aggregate. Some of the responses are innovative, work-saving solutions that prove to us that the United States remains at the vanguard of developing better techniques for managing large and complex economic infrastructures. For instance, we see the utility of implementing lending facility risk matrices as a way to bucket groupings of C&I obligors.
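A facility risk matrix of this sort amounts to a mapping from obligor characteristics to a discrete risk bin. The sketch below assumes a toy scheme (PD bands crossed with secured/unsecured status); the thresholds, labels and obligor names are invented, not taken from any regulatory guidance:

```python
# Sketch of a facility risk matrix: bucket C&I obligors into risk
# bins by PD band and collateral status. All thresholds and labels
# are invented for illustration.

def assign_bin(pd, secured):
    """Map an obligor's PD and collateral status to a risk bucket."""
    if pd < 0.01:
        band = "low"
    elif pd < 0.05:
        band = "medium"
    else:
        band = "high"
    return f"{band}-{'secured' if secured else 'unsecured'}"

facilities = [("Acme", 0.004, True), ("Widgetco", 0.030, False)]
bins = {name: assign_bin(pd, secured) for name, pd, secured in facilities}
```

The bucketing itself is mechanical; as the next paragraph argues, what matters is documenting why each obligor landed in its bin.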

But we also note that nothing in this facility bin design relieves the institution of doing the fundamental work to document why each obligor went into a particular risk bin. Nor does it relieve the institution from its safety and soundness responsibilities to monitor and report, in Pillar III fashion, the bottom-up proof that each credit facility bin is free of "moral hazard" risks to both the regulatory examiners and their fully exposed auditors. Trust us when we say that, as you read these words, some of the leading US trial lawyers are focused on precisely this issue.

Viewed from the perspective of methodology choices, the fact that some of the largest US banks are now pressing regulators and the Congress for a simpler, "standardized" version of Basel II is no surprise, given that so few institutions have demonstrated the willingness or ability to perform obligor-specific analysis. Indeed, while many banks and their advocates in Washington blame the regulators for the lack of progress on Basel II, perhaps it is the neoclassical brain trusts inside many of the largest US banks which are to blame for the confusion and lack of focus in the US when it comes to measuring specific credit risk.

Questions? Comments? [email protected]

The Institutional Risk Analyst is published by Lord, Whalen LLC (LW) and may not be reproduced, disseminated, or distributed, in part or in whole, by any means, outside of the recipient's organization without express written authorization from LW. It is a violation of federal copyright law to reproduce all or part of this publication or its contents by any means. This material does not constitute a solicitation for the purchase or sale of any securities or investments. The opinions expressed herein are based on publicly available information and are considered reliable. However, LW makes NO WARRANTIES OR REPRESENTATIONS OF ANY SORT with respect to this report. Any person using this material does so solely at their own risk and LW and/or its employees shall be under no liability whatsoever in any respect thereof.
