Pillar III Validation: Random Walking in the Land of VAR
May 14, 2007
"Risk management is a serious business. Accordingly, the production of a risk 'measure' must be subjected to the question 'how do you know what you claim to know' � in other words, epistemology."
Last month, a
reader at the University of Michigan asked about our March 27, 2007 comment, "Countrywide Financial: It's All About Liquidity,"
where we asserted that "the use of risk pricing tools such as Value at Risk or 'VAR' models
and other types of statistical routines arguably amplified the effect
of excess liquidity, boosting the throughput of the Wall Street
mortgage origination machine, generating big fees, and vastly expanding
the pool of risk for end-users."
Asked the reader:
"I would like to know how VAR can cause banks to take more risk than what
is acceptable. Doesn't the Fed with its extremely loose monetary policy deserve
some of the blame?"
Good question. The Fed's "easy
money" monetary policy in the early part of the decade, what we've
termed the "Greenspan Effect," clearly contributed to "exuberant" behavior by
investors, pushing risk spreads below the true economic cost for many assets.
Two cases in point from last year: CDS on subprime collateral trading at
annual spreads of less than a quarter of the expected default rate on such
portfolios, and corporate CDS trading through the yields on the underlying
bonds.
But no matter
how pernicious the Greenspan Effect, in our view, the use of risk
measures such as VAR contributes far more significantly to the problem.
Determining "what is acceptable" is the key issue for risk managers, both in
terms of estimating the expected loss in a given scenario and in benchmarking
these forward-looking estimates against actual results.
To review, VAR models summarize the
expected maximum loss (or worst loss) over a target horizon within a given
confidence interval. To us, this is an elegant way of saying "I don't
know." Or to quote the author of Fooled by Randomness, Nassim
Taleb: "There is an internal contradiction between measuring risk (i.e. standard
deviation) and using a tool with a higher standard error than that of the
measure itself." The trouble with VAR models is that the methodology says
nothing about specific
risks regarding specific transactions, yet provides risk practitioners with the
false impression that the particular risk has been measured. So widespread
is the delusion that VAR is effective for measuring the risks of particular
exposures that federal regulators are about to adopt it as the central method
for measuring bank capital adequacy under Basel II (see our comments to the
FDIC on the Basel II proposal).
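
To make the VAR definition above concrete, here is a minimal sketch of a
one-day parametric VAR calculation in Python, resting on the very normality
assumption the methodology depends upon; the portfolio value, confidence
level and return history below are hypothetical, not any bank's model.

import numpy as np
from scipy.stats import norm

def parametric_var(returns, portfolio_value, confidence=0.99, horizon_days=1):
    """One-tailed VAR: the loss we do not expect to exceed at `confidence`."""
    mu = np.mean(returns)            # mean daily return
    sigma = np.std(returns, ddof=1)  # daily return volatility
    # z-score for the left tail of the assumed normal distribution
    z = norm.ppf(1.0 - confidence)
    # worst expected daily return at this confidence, scaled to the horizon
    worst_return = mu + z * sigma * np.sqrt(horizon_days)
    return -worst_return * portfolio_value

# Hypothetical calm-market history: small daily moves, ~0.8% volatility
rng = np.random.default_rng(7)
calm_returns = rng.normal(loc=0.0002, scale=0.008, size=500)

var_99 = parametric_var(calm_returns, portfolio_value=100_000_000)
print(f"1-day 99% VAR: ${var_99:,.0f}")  # roughly $1.8 million on $100mm

Note that everything hangs on two sample statistics, the mean and the
standard deviation of recent returns; nothing in the calculation knows
anything about the specific transactions in the portfolio.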
By providing a technical framework for estimating market
risk that, on the surface at least, has credibility, the Sell Side and the
major rating agencies have turned VAR into a powerful mechanism for
increasing both transaction throughput and leverage. Simply stated, if the
overall risk calculated in the VAR model appears to be low, then additional
risk may be taken.
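
The arithmetic behind that statement is worth spelling out. Below is a
hedged sketch, with hypothetical figures, of how a fixed VAR limit (a "risk
budget") mechanically permits a larger position whenever the model's
measured risk per dollar of exposure falls.

def max_position(var_limit, var_per_dollar):
    """Largest exposure that keeps measured VAR within the limit."""
    return var_limit / var_per_dollar

VAR_LIMIT = 10_000_000  # hypothetical desk-level 1-day 99% VAR limit

# In a normal market the model reads 2 cents of VAR per dollar of exposure;
# after a quiet stretch compresses measured volatility, it reads 0.5 cents.
print(f"normal model: ${max_position(VAR_LIMIT, 0.020):,.0f} of exposure")
print(f"quiet model : ${max_position(VAR_LIMIT, 0.005):,.0f} of exposure")
# Same limit, same desk -- four times the position, simply because the
# model says the risk is low.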
Since the Greenspan
Effect pushed actual
default rates on loans and bonds to near zero, particularly for mortgage
collateral, VAR models became the Sell Side's best friend. By relying on
assumptions of normality in the distribution of possible future events and using
recent historical data, VAR models effectively understate the true
financial risk taken and thus are a key enabler of the vast
expansion of leverage on Wall Street.
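
A small synthetic experiment illustrates the point; the return series and
window lengths below are invented for illustration only.

import numpy as np

rng = np.random.default_rng(42)
stress = rng.normal(0.0, 0.030, size=250)  # an older, turbulent year
calm   = rng.normal(0.0, 0.006, size=250)  # the most recent, quiet year
history = np.concatenate([stress, calm])

def historical_var(returns, confidence=0.99):
    """VAR as the loss at the (1 - confidence) quantile of observed returns."""
    return -np.quantile(returns, 1.0 - confidence)

print(f"VAR from last 250 days only  : {historical_var(history[-250:]):.2%}")
print(f"VAR including the stress year: {historical_var(history):.2%}")
# The short window reports a fraction of the risk in the full sample -- not
# because the world is safer, but because the model has forgotten.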
An interesting possibility comes into view, however, with the
implementation of Pillar III of Basel II, the requirement for market discipline,
where banks will be compelled to publicly disclose and benchmark the
efficacy of VAR models against their actual results, including public
"mark-to-actuals" results for VAR, Defaults, Loss Given Default and Exposure at
Default, etc. Will Pillar III ultimately be the undoing of VAR?
For example, if a large bank
publishes a VAR of say 1% of total investments in a given period, but then takes
a loss of 3% on those same investments in the subsequent period, investors (not
to mention auditors and regulators) will discern pretty quickly that the
bank's management is flying blind.
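
The comparison investors would run is a simple exception count, the logic
behind the Basel "traffic light" backtest: a 99% one-day VAR should be
exceeded on roughly 1% of days, so a persistent surplus of exceptions is
evidence the model understates risk. A hedged sketch with synthetic data:

import numpy as np

def count_exceptions(realized_returns, var_forecasts):
    """Number of days the realized loss breached the VAR forecast."""
    losses = -np.asarray(realized_returns)
    return int(np.sum(losses > np.asarray(var_forecasts)))

rng = np.random.default_rng(0)
# Reality here is fatter-tailed (and more volatile) than the normal model
# assumed when it published a flat 2.33% forecast (the normal 99% quantile).
realized = 0.01 * rng.standard_t(df=3, size=250)
var_99 = np.full(250, 0.0233)

n = count_exceptions(realized, var_99)
print(f"{n} exceptions in 250 days (about 2.5 expected at 99%)")
# Under the Basel traffic-light scheme, excess exceptions push the bank
# into the yellow or red zone and raise its capital multiplier.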
As former Fed Vice
Chairman Roger Ferguson told Congress in 2003: "Pillar III--disclosure--will
highlight any significant differences across banks, in the expectation that
counterparties will penalize inconsistent risk measures."
In the event, regulators will be forced to recommend higher capital levels for
those banks which are not good at predicting future losses, one reason
why the Economic Capital simulation in the IRA Bank Monitor includes
such prudent assumptions about future expected loss.
A number of organizations try to address the
inadequacies of VAR by including obligor-specific factors in their risk models.
One vendor's model, for example, explicitly combines VAR methodology with factors that track the specific issuer risk in debt and equity securities. But far too many organizations simply accept the basic VAR result as good enough.
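
One simple way to express the idea -- without attributing it to any
particular vendor -- is a credit add-on per obligor (probability of default
times loss given default times exposure at default) layered on top of the
market VAR; the structure and all parameters below are illustrative.

def credit_addon(pd, lgd, ead):
    """Expected credit loss for a single obligor: PD x LGD x EAD."""
    return pd * lgd * ead

def total_risk(market_var, obligors):
    """Market VAR plus the sum of obligor-specific expected losses."""
    return market_var + sum(credit_addon(**o) for o in obligors)

obligors = [
    {"pd": 0.02,  "lgd": 0.45, "ead": 50_000_000},  # a weak industrial credit
    {"pd": 0.001, "lgd": 0.40, "ead": 80_000_000},  # a strong utility credit
]

market_var = 1_800_000  # e.g., the hypothetical 1-day 99% VAR from above
print(f"market VAR alone : ${market_var:,.0f}")
print(f"with issuer risk : ${total_risk(market_var, obligors):,.0f}")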
If, as we suspect, the cost of risk is rising back to the
mean after almost a decade of below-average experience, then the Federal Reserve Board
unwittingly may be creating the circumstances for finally
discrediting the use of VAR models for estimating specific financial risks.
Consider the irony: Just as global regulators are enshrining VAR as the
centerpiece of the Basel II regime, markets may demonstrate that this
method of measuring risk is entirely useless -- at least when expected losses
are not hovering around zero.
Over the past decade, we
suspect that VAR models have appeared effective because there was so little risk to measure.
But in an environment where risk events are large and "unusually" frequent,
we suspect that financial institutions will quickly be forced to look
for alternatives. Or in political terms, when the next unexpected
event (or series of events) takes down a large financial institution, we expect
to see trial lawyers and members of Congress quizzing the bankers, their familiars
at the rating agencies and regulators on the efficacy of VAR for predicting specific risks.
During the inevitable process of
recrimination and accusation that will occur following the next big
systemic financial event, we hope that somebody asks members of the Federal Reserve
Board and other federal regulators why they ever agreed
to adopt into regulation VAR as a means of measuring bank capital adequacy. As we
wrote in our comments on Basel II: "By relying upon the false assumption
that financial market events tend to follow a random, normally distributed
pattern, the FDIC and other regulators are about to adopt into [the Basel
II] regulation one of the most specious, dangerous and widely held
misconceptions in the financial world."
In a recent paper, Joshua Rosner and Joe Mason write: "...the big three ratings agencies are often
confronted with an array of conflicting incentives, which can affect choices in
subjective measurements of risk. Of even greater concern, however, is the fact
that the process of creating MBS and CDOs requires the ratings agencies to
arguably become part of the underwriting team, leading to legal risks and even..."