An Introduction to Market Risk Measurement


You are responsible for managing your company’s foreign exchange positions. Your boss, or your boss’s boss, has been reading about derivatives losses suffered by other companies, and wants to know if the same thing could happen to his company. That is, he wants to know just how much market risk the company is taking. What do you say? You could start by listing and describing the company’s positions, but this isn’t likely to be helpful unless there are only a handful. Even then, it helps only if your superiors understand all of the positions and instruments, and the risks inherent in each. Or you could talk about the portfolio’s sensitivities, i.e., how much the value of the portfolio changes when various underlying market rates or prices change, and perhaps option delta’s and gamma’s. However, you are unlikely to win favor with your superiors by putting them to sleep. Even if you are confident in your ability to explain these in English, you still have no natural way to net the risk in your short position
in Deutsche marks against the long position in Dutch guilders. . . . You could simply assure your superiors that you never speculate but rather use derivatives only to hedge, but they understand that this statement is vacuous. They know that the word ‘hedge’ is so ill-defined and flexible that virtually any transaction can be characterized as a hedge. So what do you say? (Linsmeier and Pearson (1996, p. 1)) The obvious answer, ‘The most we can lose is . . . ’ is also clearly unsatisfactory, because the
most we can possibly lose is everything, and we would hope that the board already knows that. Consequently, Linsmeier and Pearson continue, “Perhaps the best answer starts: ‘The value at risk is . . . ’ ”.

So what is value at risk? Value at risk (VaR) is our maximum likely loss over some target period—the most we expect to lose over that period, at a specified probability level. It says that on 95 days out of 100, say, the most we can expect to lose is $10 million or whatever. This is a good answer to the problem posed by Linsmeier and Pearson. The board or other recipients specify their probability level—95%, 99% and so on—and the risk manager can tell them the maximum they can lose at that probability level. The recipients can also specify the horizon period—the next day, the next week, month, quarter, etc.—and again the risk manager can tell them the maximum amount they stand to lose over that horizon period. Indeed, the recipients can specify any combination of probability and horizon period, and the risk manager can give them the VaR applicable to that probability and horizon period.

We then have to face the problem of how to measure the VaR. This is a tricky question, and the answer is very involved and takes up much of this book. The short answer is, therefore, to read this book or others like it.

However, before we get too involved with VaR, we also have to face another issue. Is a VaR measure the best we can do? The answer is no. There are alternatives to VaR, and at least one of these—the so-called expected tail loss (ETL) or expected shortfall—is demonstrably superior. The ETL is the loss we can expect to make if we get a loss in excess of VaR. Consequently, I would take issue with Linsmeier and Pearson’s answer. ‘The VaR is . . . ’ is generally a reasonable answer, but it is not the best one. A better answer would be to tell the board the ETL—or better still, show them curves or surfaces plotting the ETL against probability and horizon period. Risk managers who use VaR as their preferred risk measure should really be using ETL instead. VaR is already passé.

But if ETL is superior to VaR, why bother with VaR measurement? This is a good question, and also a controversial one. Part of the answer is that there will be a need to measure VaR for as long as there is a demand for VaR itself: if someone wants the number, then someone has to measure it, and whether they should want the number in the first place is another matter. In this respect VaR is a lot like the infamous beta. People still want beta numbers, regardless of the well-documented problems of the Capital Asset Pricing Model on whose validity the beta risk measure depends. A purist might
say they shouldn’t, but the fact is that they do. So the business of estimating betas goes on, even
though the CAPM is now widely discredited. The same goes for VaR: a purist would say that VaR
is inferior to ETL, but people still want VaR numbers and so the business of VaR estimation goes
on. However, there is also a second, more satisfying, reason to continue to estimate VaR: we often
need VaR estimates to be able to estimate ETL. We don’t have many formulas for ETL and, as a result,
we would often be unable to estimate ETL if we had to rely on ETL formulas alone. Fortunately, it
turns out that we can always estimate the ETL if we can estimate VaR. The reason is that the VaR is
a quantile and, if we can estimate the quantile, we can easily estimate the ETL—because the ETL
itself is just a quantile average.
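To make the idea concrete, here is a minimal MATLAB sketch (illustrative only, and not taken from the IMRM Toolbox) of estimating VaR as a sample quantile and ETL as the average of the losses beyond it; the P/L data and the confidence level are assumed purely for illustration.

% Illustrative sketch: non-parametric VaR and ETL from a vector of P/L data.
pl = randn(1000,1);                % hypothetical daily P/L observations (assumed data)
cl = 0.95;                         % confidence level (assumed)
losses = sort(-pl, 'descend');     % convert P/L to losses, largest first
n = length(losses);
k = floor(n*(1 - cl));             % number of tail losses beyond the VaR
VaR = losses(k+1);                 % VaR as the relevant order statistic
ETL = mean(losses(1:k));           % ETL as the average of losses exceeding VaR
fprintf('VaR = %.3f, ETL = %.3f\n', VaR, ETL);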
INTENDED READERSHIP
This book provides an introduction to VaR and ETL estimation, and is a more basic, student-oriented
version of Measuring Market Risk, also published by John Wiley. The present book differs from
Measuring Market Risk in cutting out some of the more difficult material—quasi-Monte Carlo
methods, lattice methods, analytical and algorithmic approaches to options VaR, non-parametric
density estimation, copulas, and other either advanced or exotic material. The reader who wants the
more advanced material is therefore advised to go for the other book. However, most students should
find that An Introduction to Market Risk Measurement is better suited to their needs.
To get the most out of the book requires a basic knowledge of computing and spreadsheets,
statistics (including some familiarity with moments and density/distribution functions), mathematics
(including basic matrix algebra), and some prior knowledge of finance, most especially derivatives
and fixed-income theory. Most practitioners and academics should have relatively little difficulty
with it, but for students this material is best taught after they have already done their quantitative
methods, derivatives, fixed-income and other ‘building block’ courses.
USING THIS BOOK
This book is divided into two parts—the chapters that discuss risk measurement, presupposing that
the reader has the technical tools (i.e., the statistical, programming and other skills) to follow the
discussion; and the toolkit at the end, which explains the main tools needed to measure market risk.
This division separates the material dealing with risk measurement per se from the material dealing
with the technical tools needed to carry out risk measurement. This helps to simplify the discussion
and should make the book much easier to read: instead of going back and forth between technique and risk measurement, as many books do, we can read the technical material first; once
we have the tools under our belt, we can then focus on the risk measurement without having to pause
occasionally to re-tool.
I would suggest that the reader begin with the technical material—the tools at the end—and
make sure that this material is adequately digested. Once that is done, the reader will be equipped
to follow the risk measurement material without needing to take any technical breaks. My advice to
those who might use the book for teaching purposes is the same: first cover the tools, and then do
the risk measurement. However, much of the chapter material can, I hope, be followed without too
much difficulty by readers who don’t cover the tools first; but some of those who read the book in
this way will occasionally find themselves having to pause to tool up.
In teaching market risk material over the last few years, it has also become clear to me that one
cannot teach this material effectively—and students cannot really absorb it—if one teaches only
at an abstract level. Of course, it is important to have lectures to convey the conceptual material, but
risk measurement is not a purely abstract subject, and in my experience students only really grasp the
material when they start playing with it—when they start working out VaR figures for themselves
on a spreadsheet, when they have exercises and assignments to do, and so on. When teaching, it is
therefore important to balance lecture-style delivery with practical sessions in which the students
use computers to solve illustrative risk measurement problems.
If the book is to be read and used practically, readers also need to use appropriate spreadsheets
or other software to carry out estimations for themselves. Again, my teaching and supervision
experience is that the use of software is critical in learning this material, and we can only ever claim
to understand something when we have actually measured it. The software and risk material are also
intimately related, and the good risk measurer knows that risk measurement always boils down to
some spreadsheet or other computer function. In fact, much of the action in this area boils down to
software issues—comparing alternative software routines, finding errors, improving accuracy and
speed, and so forth. Any risk measurement book should come with at least some indication of how
risk measurement routines can be implemented on a computer.
It is better still for such books to come with their own software, and this book comes with a CD
that contains two different sets of useful risk measurement software:
• A set of Excel workbooks showing how to carry out some basic risk measurement tasks using Excel: estimation of different types of VaR, and so forth.
• A set of risk measurement and related functions in the form of An Introduction to Market Risk Measurement Toolbox in MATLAB, and a manual explaining their use. My advice to users is to print out the manual and go through the functions on a computer, and then keep the manual to hand for later reference. The examples and figures in the book are produced using this software, and readers should be able to reproduce them for themselves.
Readers are welcome to contact me with any feedback; however, I would ask that they bear in mind
that because of time pressures I cannot provide a query answer service—and this is probably educationally best in any case, because the only way to really learn this material is to struggle
through it. Nonetheless, I will keep the software and the manual up-to-date on my website
(www.nottingham.ac.uk/∼lizkd) and readers are welcome to download updates from there.
In writing this software, I should explain that I focused on MATLAB mainly because it is both
powerful and user-friendly, unlike its obvious alternatives (VBA, which is neither powerful nor
particularly user-friendly, or the C or S languages, which are certainly not user-friendly). I also
chose MATLAB in part because it produces very nice graphics, and a good graph or chart is often
an essential tool for risk measurement. Unfortunately, the downside of MATLAB is that many users
of the book will not be familiar with it or will not have ready access to it, and I can only advise such
readers to think seriously about going through the expense and/or effort to get it.
In explaining risk measurement throughout this book, I have tried to focus on the underlying ideas
rather than on programming code: understanding the ideas is much more important, and the coding
itself is mere implementation. My advice to risk measurers is that they should aim to get to the level
where they can easily write their own code once they know what they are trying to do. However, for
those who want it, the code I use is easily accessible—one simply opens up MATLAB, goes into
the IMRM Toolbox, and opens the relevant function. The reader who wants the code should refer
directly to the program coding rather than searching around in the text: I have tried to keep the text
itself free of such detail to focus on more important conceptual issues.
The IMRM Toolbox also has many other functions besides those used to produce the examples or
figures in the text. I have tried to produce a fairly extensive set of software functions that would cover
all the obvious VaR or ETL measurement problems, as well as some of the more advanced ones.
Users—such as students doing their dissertations, academics doing their research, and practitioners
working on practical applications—might find some of these functions useful, and they are welcome
to make whatever use of these functions they wish. However, before anyone takes these functions too
seriously, they should appreciate that I am not a programmer and anyone who uses these functions
must do so at his or her own risk. As always in risk measurement, we should keep our wits about us
and not be too trusting of the software we use or the results we get.
OUTLINE OF THE BOOK
As mentioned earlier, the book is divided into the chapters proper and the toolkit at the end that deals
with the technical issues underlying (or the tools needed for) market risk measurement. It might be
helpful to give a brief overview of these so readers know what to expect.

The Chapters
The first chapter provides a brief overview of recent developments in risk measurement—market
risk measurement especially—to put VaR and ETL in their proper context. Chapter 2 then looks at
different measures of financial risk. We begin here with the traditional mean–variance framework.
This framework is very convenient and provides the underpinning for modern portfolio theory, but it
is also limited in its applicability because it has difficulty handling skewness (or asymmetry) and ‘fat tails’ (or fatter than normal tails) in our P/L or return probability density functions. We then consider
VaR and ETL as risk measures, and compare them to traditional risk measures and to each other.
Having established what our basic risk measures actually are, Chapter 3 has a first run through
the issues involved in estimating them. We cover three main sets of issues here:
• Preliminary data issues—how to handle data in profit/loss (or P/L) form, rate of return form, etc.
• How to estimate VaR based on alternative sets of assumptions about the distribution of our data and how our VaR estimation procedure depends on the assumptions we make.
• How to estimate ETL—and, in particular, how we can always approximate ETL by taking it as an average of ‘tail VaRs’ or losses exceeding VaR.
Chapter 3 ends with an appendix dealing with the important subject of mapping—the process of
describing the positions we hold in terms of combinations of standard building blocks. We would use
mapping to cut down on the dimensionality of our portfolio, or deal with possible problems caused
by having closely correlated risk factors or missing data. Mapping enables us to estimate market risk
in situations that would otherwise be very demanding or even impossible.
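As a very simple illustration of the idea (a toy sketch with assumed numbers, not the chapter's own treatment), a handful of equity positions might be mapped onto a single index risk factor via assumed betas, so that the portfolio is described in terms of one standard building block rather than many individual positions:

% Hypothetical mapping sketch: equity positions mapped onto an index risk factor.
positions = [1e6; 2e6; 0.5e6];                % market values of three positions (assumed)
betas     = [1.2; 0.8; 1.5];                  % assumed betas against the index
index_ret = -0.03;                            % an illustrative 3% fall in the index
mapped_exposure = positions' * betas;         % index-equivalent exposure of the portfolio
approx_pl = mapped_exposure * index_ret;      % approximate portfolio P/L under the move
fprintf('Index-equivalent exposure = %.0f, approximate P/L = %.0f\n', mapped_exposure, approx_pl);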
Chapter 4 then takes a closer look at non-parametric VaR and ETL estimation. Non-parametric
approaches are those in which we estimate VaR or ETL making minimal assumptions about the
distribution of P/L or returns: we let the P/L data speak for themselves as much as possible. There
are various non-parametric approaches, and the most popular is historical simulation (HS), which
is conceptually simple, easy to implement, widely used and has a fairly good track record. We
can also carry out non-parametric estimation using principal components methods (see Tool No. 4),
and the latter methods are sometimes useful when dealing with high-dimensionality problems
(i.e., when dealing with portfolios with very large numbers of risk factors). As a general rule,
non-parametric methods work fairly well if market conditions remain reasonably stable, and they
are capable of considerable refinement and improvement. However, they can be unreliable if market
conditions change, their results are totally dependent on the data set, and their estimates of VaR and
ETL are subject to distortions from one-off events and ghost effects.
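To give a flavour of the basic mechanics (a stripped-down sketch with simulated stand-in data, not the procedures of the chapter itself), historical simulation applies past risk-factor returns to the current positions to generate a sample of hypothetical P/Ls, and then reads the VaR and ETL off that sample:

% Historical simulation sketch with assumed positions and stand-in return data.
returns = 0.01*randn(500, 3);         % 500 days of returns on 3 risk factors (stand-in for real data)
positions = [2e6, -1e6, 0.5e6];       % current position values in each factor (assumed)
pl_scenarios = returns * positions';  % hypothetical P/L under each historical scenario
losses = sort(-pl_scenarios, 'descend');
cl = 0.99;
k = floor(length(losses)*(1 - cl));
VaR_hs = losses(k+1);                 % historical simulation VaR
ETL_hs = mean(losses(1:k));           % historical simulation ETL
fprintf('HS VaR = %.0f, HS ETL = %.0f\n', VaR_hs, ETL_hs);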
Chapter 5 looks more closely at parametric approaches, the essence of which is that we fit probability
curves to the data and then infer the VaR or ETL from the fitted curve. Parametric approaches
are more powerful than non-parametric ones, because they make use of additional information contained
in the assumed probability density function. They are also easy to use, because they give rise
to straightforward formulas for VaR and sometimes ETL, but are vulnerable to error if the assumed
density function does not adequately fit the data. The chapter discusses parametric VaR and ETL at two different levels—at the portfolio level, where we are dealing with portfolio P/L or returns,
and assume that the underlying distribution is normal, Student t, extreme value or whatever; and at
the sub-portfolio or individual-position level, where we deal with the P/L or returns to individual
positions and assume that these are multivariate normal. This chapter ends with an appendix dealing
with the use of delta–gamma and related approximations to deal with non-linear risks (e.g.,
those arising from options).
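To illustrate the parametric idea at the portfolio level, suppose P/L is normally distributed; VaR and ETL then have simple closed forms. The sketch below uses assumed parameter values and obtains the normal quantile from MATLAB's erfinv so that no toolbox functions are needed; it illustrates the standard formulas rather than the chapter's own code.

% Parametric (normal) VaR and ETL with assumed P/L parameters.
mu = 1000; sigma = 25000; cl = 0.95;          % assumed daily P/L mean and standard deviation
z = sqrt(2)*erfinv(2*cl - 1);                 % standard normal quantile at cl
phi_z = exp(-z^2/2)/sqrt(2*pi);               % standard normal density at z
VaR_n = -mu + sigma*z;                        % normal VaR
ETL_n = -mu + sigma*phi_z/(1 - cl);           % normal ETL (expected shortfall)
fprintf('Normal VaR = %.0f, Normal ETL = %.0f\n', VaR_n, ETL_n);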
Chapter 6 examines how we can estimate VaR and ETL using simulation (or random number)
methods. These methods are very powerful and flexible, and can be applied to many different types of
VaR or ETL estimation problem. Simulation methods can be highly effective for many problems that
are too complicated or too messy for analytical or algorithmic approaches, and they are particularly
good at handling complications like path-dependency, non-linearity and optionality. Amongst the
many possible applications of simulation methods are to estimate the VaR or ETL of options positions
and fixed-income positions, including those in interest-rate derivatives, as well as the VaR or ETL
of credit-related positions (e.g., in default-risky bonds, credit derivatives, etc.), and of insurance and pension-fund portfolios. We can also use simulation methods for other purposes—for
example, to estimate VaR or ETL in the context of dynamic portfolio management strategies.
However, simulation methods are less easy to use than some alternatives, usually require a lot of
calculations, and can have difficulty dealing with early-exercise features.
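As a bare-bones illustration of the approach (a sketch under simple geometric Brownian motion assumptions, with made-up parameters, rather than the chapter's own routines), we might simulate end-of-period values of a one-asset position and infer the VaR and ETL from the simulated distribution:

% Monte Carlo VaR/ETL sketch for a single asset under assumed GBM dynamics.
S0 = 100; mu = 0.08; sigma = 0.25;    % assumed current price, drift and volatility (annualised)
T = 1/252; N = 100000; cl = 0.95;     % one-day horizon, number of trials, confidence level
z = randn(N,1);
ST = S0*exp((mu - 0.5*sigma^2)*T + sigma*sqrt(T)*z);   % simulated end-of-period prices
pl = ST - S0;                         % P/L per unit held
losses = sort(-pl, 'descend');
k = floor(N*(1 - cl));
VaR_mc = losses(k+1);
ETL_mc = mean(losses(1:k));
fprintf('MC VaR = %.3f, MC ETL = %.3f\n', VaR_mc, ETL_mc);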
Chapter 7 considers risk addition and decomposition—how changing our portfolio alters our
risk, and how we can decompose our portfolio risk into constituent or component risks. We are
concerned here with:
• Incremental risks. These are the changes in risk when a factor changes—for example, how VaR
changes when we add a new position to our portfolio.
• Component risks. These are the component or constituent risks that make up a certain total risk—
if we have a portfolio made up of particular positions, the portfolio VaR can be broken down into
components that tell us how much each position contributes to the overall portfolio VaR.
Both these (and their ETL equivalents) are extremely useful measures in portfolio risk management:
amongst other uses, they give us new methods of identifying sources of risk, finding natural hedges,
defining risk limits, reporting risks and improving portfolio allocations.
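As a simple ‘before and after’ sketch of the incremental idea (with made-up P/L scenarios, and not to be confused with the chapter's own estimation methods), the incremental VaR of a candidate position can be approximated by comparing the portfolio VaR with and without it:

% Incremental VaR sketch: VaR with the candidate position minus VaR without it.
base_pl = randn(1000,1);                    % P/L scenarios of the existing portfolio (assumed)
new_pl  = 0.5*randn(1000,1) + 0.2*base_pl;  % P/L scenarios of the candidate position (assumed)
cl = 0.95;
k = floor(length(base_pl)*(1 - cl));
losses_before = sort(-base_pl, 'descend');
losses_after  = sort(-(base_pl + new_pl), 'descend');
incremental_VaR = losses_after(k+1) - losses_before(k+1);   % change in VaR from adding the position
fprintf('Incremental VaR = %.3f\n', incremental_VaR);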
Chapter 8 examines liquidity issues and how they affect market risk measurement. Liquidity
issues affect market risk measurement not just through their impact on our standard measures of
market risk, VaR and ETL, but also because effective market risk management involves an ability
to measure and manage liquidity risk itself. The chapter considers the nature of market liquidity
and illiquidity, and their associated costs and risks, and then considers how we might take account
of these factors to estimate VaR and ETL in illiquid or partially liquid markets. Furthermore, since
liquidity is important in itself and because liquidity problems are particularly prominent in market
crises, we also need to consider two other aspects of liquidity risk measurement—the estimation
of liquidity at risk (i.e., the liquidity equivalent to value at risk), and the estimation of crisis-related
liquidity risks.
Chapter 9 deals with backtesting—the application of quantitative, typically statistical, methods
to determine whether a model’s risk estimates are consistent with the assumptions on which the
model is based or to rank models against each other. To backtest a model, we first assemble a suitable
data set—we have to ‘clean’ accounting data, etc.—and it is good practice to produce a backtest
chart showing how P/L compares to measured risk over time. After this preliminary data analysis,
we can proceed to a formal backtest. The main classes of backtest procedure are:
• Statistical approaches based on the frequency of losses exceeding VaR.
• Statistical approaches based on the sizes of losses exceeding VaR.
• Forecast evaluation methods, in which we score a model’s forecasting performance in terms of a forecast error loss function.
Each of these classes of backtest comes in alternative forms, and it is generally advisable to
run a number of them to get a broad feel for the performance of the model. We can also backtest
models at the position level as well as the portfolio level, and using simulation or bootstrap data as
well as ‘real’ data. Ideally, ‘good’ models should backtest well and ‘bad’ models should backtest
poorly, but in practice results are often much less clear: in this game, separating the sheep from the
goats is often much harder than many imagine.
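To give a flavour of the first class of backtest, a frequency-based test compares the observed number of VaR exceedances with the number the model leads us to expect. The sketch below uses a simple binomial z-statistic on assumed data; it is only one of several possible test statistics and is not necessarily the one used in the chapter.

% Frequency-of-exceedances backtest sketch with assumed data.
pl = randn(250,1);                      % one year of daily P/L (stand-in for real data)
VaR_forecast = 1.645*ones(250,1);       % the model's daily 95% VaR forecasts (assumed constant here)
cl = 0.95;  n = length(pl);
exceedances = sum(-pl > VaR_forecast);  % days on which the loss exceeded the VaR forecast
expected = n*(1 - cl);                  % expected number of exceedances if the model is right
z_stat = (exceedances - expected)/sqrt(n*cl*(1 - cl));   % simple binomial z-statistic
fprintf('Exceedances = %d (expected %.1f), z = %.2f\n', exceedances, expected, z_stat);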
Chapter 10 examines stress testing—‘what if’ procedures that attempt to gauge the vulnerability
of our portfolio to hypothetical events. Stress testing is particularly good for quantifying what we
might lose in crisis situations where ‘normal’ market relationships break down and VaR or ETL risk
measures can be very misleading. VaR and ETL are good on the probability side, but poor on the ‘what if’ side, whereas stress tests are good for ‘what if’ questions and poor on probability questions.
Stress testing is therefore good where VaR and ETL are weak, and vice versa. As well as helping to
quantify our exposure to bad states, the results of stress testing can be a useful guide to management
decision-making and help highlight weaknesses (e.g., questionable assumptions, etc.) in our risk
management procedures.
The final chapter considers the subject of model risk—the risk of error in our risk estimates due
to inadequacies in our risk measurement models. The use of any model always entails exposure to
model risk of some form or another, and practitioners often overlook this exposure because it is out
of sight and because most of those who use models have a tendency to end up ‘believing’ them. We
therefore need to understand what model risk is, where and how it arises, how to measure it, and
what its possible consequences might be. Interested parties, such as risk practitioners and their
managers, also need to understand what they can do to combat it. The problem of model risk
never goes away, but we can learn to live with it.
The Toolkit
The toolkit at the end consists of seven different ‘tools’, each of which is useful for risk measurement
purposes. Tool No. 1 deals with the use of the theory of order statistics for estimating VaR and ETL.
Order statistics are ordered observations—the biggest observation, the second biggest observation,
etc.—and the theory of order statistics enables us to predict the distribution of each ordered observation.
This is very useful because the VaR itself is an order statistic—for example, with 100 P/L
observations, we might take the VaR at the 95% confidence level as the sixth largest loss observation.
Hence, the theory of order statistics enables us to estimate the whole of the VaR probability density
function—and this enables us to estimate confidence intervals for our VaR. Estimating confidence
intervals for ETLs is also easy, because there is a one-to-one mapping from the VaR observations
to the ETL ones: we can convert the P/L observations into average loss observations, and apply the
order statistics approach to the latter to obtain ETL confidence intervals.
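The point can be seen numerically with a rough simulation (this illustrates the sampling variation of the VaR order statistic rather than the closed-form order-statistics results used in the Toolbox); the data-generating assumptions below are purely illustrative:

% Illustration: the VaR, being an order statistic, has its own sampling distribution.
n = 100; cl = 0.95; trials = 5000;
var_estimates = zeros(trials,1);
for i = 1:trials
    losses = sort(-randn(n,1), 'descend');            % losses from an assumed standard normal P/L
    var_estimates(i) = losses(floor(n*(1 - cl)) + 1); % e.g., the sixth largest loss when n = 100
end
sorted_vars = sort(var_estimates);
ci = [sorted_vars(round(0.05*trials)), sorted_vars(round(0.95*trials))];   % rough 90% interval
fprintf('Rough 90%% interval for the 95%% VaR estimate: [%.3f, %.3f]\n', ci(1), ci(2));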
Tool No. 2 deals with the Cornish–Fisher expansion, which is useful for estimating VaR and
ETL when the underlying distribution is near normal. If our portfolio P/L or return distribution is
not normal, we cannot take the VaR to be given by the percentiles of an inverse normal distribution
function; however, if the non-normality is not too severe, the Cornish–Fisher expansion gives us
an adjustment factor that we can use to correct the normal VaR estimate for non-normality. The
Cornish–Fisher adjustment is easy to apply and enables us to retain the simplicity of the normal
approach to VaR in at least some circumstances where the normality assumption itself does not hold.
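To fix ideas, the standard fourth-moment Cornish–Fisher adjustment modifies the normal quantile using the skewness and excess kurtosis of P/L. The sketch below uses assumed moment values and illustrates the textbook expansion rather than the Toolbox code:

% Cornish–Fisher adjusted VaR sketch with assumed P/L moments.
mu = 0; sigma = 1; skew = -0.5; exkurt = 1.2;     % assumed mean, std, skewness and excess kurtosis
cl = 0.95;  p = 1 - cl;
zp = sqrt(2)*erfinv(2*p - 1);                     % normal quantile at the lower tail probability
w  = zp + (zp^2 - 1)*skew/6 + (zp^3 - 3*zp)*exkurt/24 - (2*zp^3 - 5*zp)*skew^2/36;
VaR_normal = -(mu + sigma*zp);                    % VaR under normality
VaR_cf     = -(mu + sigma*w);                     % VaR corrected for non-normality
fprintf('Normal VaR = %.3f, Cornish-Fisher VaR = %.3f\n', VaR_normal, VaR_cf);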
Tool No. 3 deals with bootstrap procedures. These methods enable us to sample repeatedly from
a given set of data, and they are very useful because they give a reliable and easy way of estimating
confidence intervals for any parameters of interest, including VaRs and ETLs.
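A minimal sketch of the idea, resampling with replacement from an assumed P/L sample to obtain a rough confidence interval for the VaR (illustrative only):

% Bootstrap confidence interval for VaR, sketched on assumed P/L data.
pl = randn(500,1);                        % original P/L sample (stand-in for real data)
n = length(pl); cl = 0.95; B = 2000;
k = floor(n*(1 - cl));
var_boot = zeros(B,1);
for b = 1:B
    resample = pl(ceil(n*rand(n,1)));     % draw n observations with replacement
    losses = sort(-resample, 'descend');
    var_boot(b) = losses(k+1);            % VaR estimate from this resample
end
sorted_boot = sort(var_boot);
ci = [sorted_boot(round(0.025*B)), sorted_boot(round(0.975*B))];   % rough 95% interval
fprintf('Bootstrap 95%% interval for VaR: [%.3f, %.3f]\n', ci(1), ci(2));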
Tool No. 4 covers principal components analysis, which is an alternative method of gaining insight
into the properties of a data set. It is helpful in risk measurement because it can provide a simpler
representation of the processes that generate a given data set, which then enables us to reduce the
dimensionality of our data and so reduce the number of variance-covariance parameters that we need
to estimate. Such methods can be very useful when we have large dimension problems (e.g., variance-covariance
matrices with hundreds of different instruments), but they can also be useful for cleaning
data and developing data mapping systems.
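As a bare-bones sketch of the mechanics (on simulated stand-in data, not an example from the Toolbox), we can extract the principal components from the covariance matrix of a set of risk-factor returns and ask how much of the total variance the first few components explain:

% Principal components sketch on simulated risk-factor returns.
common  = randn(1000,1);                                     % an assumed common driver
returns = 0.01*(0.8*common*ones(1,10) + 0.6*randn(1000,10)); % 10 correlated factors (stand-in data)
C = cov(returns);                                            % sample covariance matrix
[V, D] = eig(C);                                             % eigenvectors and eigenvalues
[eigvals, idx] = sort(diag(D), 'descend');                   % order components by variance explained
V = V(:, idx);                                               % principal component loadings
explained = cumsum(eigvals)/sum(eigvals);                    % cumulative proportion of variance explained
fprintf('First 3 components explain %.1f%% of total variance\n', 100*explained(3));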
Tool No. 5 deals with extreme value theory (EVT) and its applications in financial risk management.
EVT is a branch of statistics tailor-made to deal with problems posed by extreme or rare events—and
in particular, the problems posed by estimating extreme quantiles and associated probabilities that go well beyond our sample range. The key to EVT is a theorem—the extreme value theorem—that
tells us what the distribution of extreme values should look like, at least asymptotically. This theorem
and various associated results tell us what we should be estimating, and also give us some guidance
on estimation and inference issues.
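One small numerical illustration is the Hill estimator, a simple tail-index estimator from the EVT literature (used here on simulated Pareto losses purely for illustration, and not necessarily the approach taken in the Toolbox):

% Hill estimator sketch for the tail index of assumed loss data.
losses = rand(2000,1).^(-1/3);          % simulated Pareto losses with true tail index 1/3
sorted_losses = sort(losses, 'descend');
k = 100;                                % number of extreme observations used (a judgment call)
xi_hat = mean(log(sorted_losses(1:k)/sorted_losses(k+1)));   % Hill estimate of the tail index
fprintf('Hill tail-index estimate = %.3f (true value 1/3 here)\n', xi_hat);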
Tool No. 6 then deals with Monte Carlo simulation methods. These methods can be used to price
derivatives, estimate their hedge ratios, and solve risk measurement problems of almost any degree
of complexity. The idea is to simulate repeatedly the random processes governing the prices or
returns of the financial instruments we are interested in. If we take enough simulations, the simulated
distribution of portfolio values will converge to the portfolio’s unknown ‘true’ distribution, and we
can use the simulated distribution of end-period portfolio values to infer the VaR or ETL.
Tool No. 7 discusses the forecasting of volatilities, covariances and correlations. This is one of
the most important subjects in modern risk measurement, and is critical to derivatives pricing,
hedging, and VaR and ETL estimation. The focus of our discussion is the estimation of volatilities,
in which we go through each of the four main approaches to this problem: historical estimation, exponentially
weighted moving average (EWMA) estimation, GARCH estimation, and implied volatility
estimation. The treatment of covariances and correlations parallels that of volatilities, and we end
with a brief discussion of the issues involved with estimating variance–covariance and correlation matrices.
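To take the simplest of these volatility approaches as an example, the EWMA estimator updates the variance forecast recursively as a weighted average of the previous forecast and the latest squared return; a minimal sketch with stand-in return data and an assumed decay factor:

% EWMA volatility forecast sketch with an assumed decay factor.
returns = 0.01*randn(500,1);            % daily returns (stand-in for real data)
lambda = 0.94;                          % decay factor (the value popularised by RiskMetrics)
sigma2 = var(returns);                  % initialise with the sample variance
for t = 2:length(returns)
    sigma2 = lambda*sigma2 + (1 - lambda)*returns(t-1)^2;    % recursive EWMA update
end
fprintf('EWMA volatility forecast = %.4f (daily)\n', sqrt(sigma2));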

Kevin Dowd