In this article, I describe a class of models that is very popular in academia and central bank research circles: affine term structure models. These models attempt to answer an important question: what is the term premium embedded in the yield curve? (See here and here for previous articles on this topic.) I will not even try to cover the mathematics involved here.
I will begin with a personal anecdote which explains how my
philosophy of term structure modelling developed. At the beginning of my career
in finance, I inherited the task of maintaining a model which calculated the
unbiased expected forward level of short rates; i.e., a modelled term premium was
subtracted from observed forward rates. Everybody loved the concept, but snags
developed when it was used.
A typical problem: the forward curve rose by 5 basis points
in response to some data, and a strategist at the firm wanted to say that the
expected fed funds rate rose by 5 basis points, right? Nope. The model decided
that the term premium rose by 7 basis points that day, and so the expected rate
fell by 2 basis points. I would then
be told to inspect the model, because this made no sense. After this happened a
dozen times or so, it became the first mathematical model in my life that I
truly loathed.
Luckily, I had an excuse to “recalibrate” the model, and I clamped down hard on the volatility of the term premium. It was not constant, as I needed some “quant-y” black box stuff in there to justify my salary, but it was stable enough that I did not have to re-examine the model every 20 days. My advice to anyone out there who has to build a model like this: stabilise the volatility of the term premium in your model output by any means possible.
Returning to affine term structure models, I recommend this working paper by David Jamieson Bolder at the Bank of Canada. The paper is dated, but it covers the mathematical basics that are omitted from other papers for reasons of space. The state of the art has moved on, but it will be easier to follow the newer papers once the basic concepts are covered. If I had a working copy of an affine term structure model, I would work backward: start with the final algorithm, and then work out what mathematical model it implies.
The basic idea of an affine term structure model is very similar to the factor analysis used in other parts of finance. The expected path of short rates is modelled as some form of random walk driven by fundamental factors ("unobserved latent factors"), and then a time-varying random term premium is added to reproduce observed bond prices. (The models are called affine because there is an assumption that the term structure is an affine function of the unobserved latent factors, an affine function being a function of the form f(x) = a + bx; in other words, a "linear function plus a constant".)
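To pin down what "affine" buys you, here is a minimal one-factor Gaussian sketch in Python. Everything in it (the parameter values, the AR(1) factor dynamics, the short-rate mapping) is an illustrative assumption, not an estimated model; the point is that the standard bond-pricing recursion produces yields of the form y(n) = a(n) + b(n)x, which is affine in the latent factor x.

```python
import numpy as np

# Risk-neutral AR(1) dynamics for the latent factor:
#   x_{t+1} = mu + phi * x_t + sigma * eps
# All parameter values below are invented for illustration.
mu, phi, sigma = 0.0, 0.95, 0.01
delta0, delta1 = 0.02, 1.0       # short rate: r_t = delta0 + delta1 * x_t

def affine_loadings(n_max):
    """Standard bond-pricing recursion: log P_n(x) = A[n] + B[n] * x."""
    A = np.zeros(n_max + 1)
    B = np.zeros(n_max + 1)
    for n in range(n_max):
        A[n + 1] = A[n] + B[n] * mu + 0.5 * (B[n] * sigma) ** 2 - delta0
        B[n + 1] = B[n] * phi - delta1
    return A, B

A, B = affine_loadings(120)           # maturities of 1 to 120 periods
x = 0.005                             # current value of the latent factor
n = np.arange(1, 121)
yields = -(A[1:] + B[1:] * x) / n     # y_n = a_n + b_n * x: affine in x
```

In a full model, this recursion is run under both the risk-neutral measure (to fit observed yields) and the physical measure (to get expected short rates), and the term premium is the gap between the two curves.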
Central banks are natural consumers of these models. They are not interested in forecasting bond portfolio returns, so they want to strip out the term premium. Market practitioners, on the other hand, should really only be interested in expected returns, and it matters little whether those returns come from the path of short rates or from the term premium.
These models are also very popular in academia, paradoxically because they do not work too well. There is always room to tweak the models, and hence to publish a new paper. (By contrast, look at principal component analysis. Once you fix the estimation period, your estimates of hedging ratios will not change much even if you make sensible changes to the algorithm. This means that the model is useful for practitioners, but there is no capacity to keep publishing papers on the subject.)
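For comparison, here is roughly what that PCA calculation looks like. The input below is random placeholder data standing in for a history of daily yield changes across maturities; only the mechanics matter.

```python
import numpy as np

# Placeholder data: in practice, a (T x M) history of daily yield changes
# at M maturities. Random numbers are used here purely for illustration.
rng = np.random.default_rng(0)
curve_changes = rng.normal(size=(500, 8))

# Eigendecomposition of the covariance matrix of curve changes.
cov = np.cov(curve_changes, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]     # sort factors by variance explained
loadings = eigvecs[:, order]          # columns: "level", "slope", "curvature", ...

# A hedge ratio off the first ("level") factor: the relative exposure of
# two maturities to the dominant factor.
level = loadings[:, 0]
hedge_ratio = level[7] / level[2]
```

The stability the parenthetical describes falls out of this construction: the dominant eigenvectors of the covariance matrix barely move when you tweak details of the estimation.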
As a typical example of how these models have been used recently: imagine that we calibrate an affine term structure model on pre-2008 data. We then freeze the model structure, and watch how its outputs evolve. Imagine that the model's 10-year term premium falls by 150 basis points in recent years, once Quantitative Easing (QE) started. If you are an academic or central bank researcher, you publish a paper explaining that this means that QE has lowered the 10-year bond yield by 150 basis points. However, a cynic might suggest that your model just blew up when it went “out-of-sample”. There is no way of distinguishing these explanations with the data available.
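To make the accounting in that dispute explicit, here is a sketch of the decomposition: the expectations component is the average expected short rate under the (assumed) physical factor dynamics, and the term premium is whatever residual is left against the observed yield. All numbers below are made up for illustration.

```python
import numpy as np

def avg_expected_short_rate(x0, mu_p, phi_p, delta0, delta1, n):
    """Average of E[r_{t+i}], i = 0..n-1, under physical AR(1) factor dynamics."""
    x, rates = x0, []
    for _ in range(n):
        rates.append(delta0 + delta1 * x)
        x = mu_p + phi_p * x          # expected factor one step ahead
    return np.mean(rates)

# Hypothetical inputs: factor value, physical dynamics, short-rate mapping.
expectations = avg_expected_short_rate(0.005, 0.0, 0.97, 0.02, 1.0, n=120)

observed_10y = 0.0165                 # observed 10-year zero-coupon yield
# Ignoring convexity, the fitted term premium is simply the residual:
term_premium = observed_10y - expectations
```

Because the term premium is a residual, anything the expectations component fails to capture out-of-sample lands in it by construction, which is exactly what the cynic is pointing at.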
As a final example, take the recent hammering of the 10-year Treasury: it went from 1.65% to about 3% in a few months (with a small retracement going on at the time of writing). The decomposition you get depends on how you structure your model, but for a lot of term structure models, it is possible that almost all of the move was in the term premium. In other words, the model's expectations for short rates did not move much. (Since the expectations in a lot of these models are based on macro data, they are slow-moving.) This is very unsatisfying to me. I could hope to forecast where the Fed might be going, but I see no way of forecasting such violent moves in an unobservable model variable.
(c) Brian Romanchuk 2013