Attempting to Estimate an Unobservable Variable Will End Badly
In control engineering, there is the notion of an unobservable variable. These are state variables whose values are not only not directly measured, but also cannot be inferred from any manipulation of measured values and known system dynamics. For example, if we can accurately measure the position of a vehicle, we can infer its velocity, so velocity is not unobservable. However, we have no way of determining the vehicle's internal temperature from the position data, and so the temperature is indeed unobservable.

The normal practice is to delete unobservable variables from dynamic systems models. We have no way of determining their values, and they interfere with attempts to estimate the state vector. (Since there is an infinite number of valid solutions, estimation algorithms will not converge.) It is not that these variables do not exist; rather, we cannot say anything useful about them with the available data and known model dynamics (like the vehicle temperature in my example).
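The vehicle example can be checked with the standard observability test from linear systems theory: stack the measurement matrix against powers of the state transition matrix, and see whether the result has full rank. This is a minimal sketch with a made-up three-state model (position, velocity, temperature), where only position is measured:

```python
import numpy as np

# Hypothetical discrete-time vehicle model: state = [position, velocity, temperature].
# Position integrates velocity; temperature evolves on its own, decoupled from motion.
dt = 1.0
A = np.array([[1.0, dt, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.9]])
# We only measure position.
C = np.array([[1.0, 0.0, 0.0]])

# Kalman observability matrix: stack C, C A, C A^2.
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(3)])
rank = np.linalg.matrix_rank(O)
print(rank)  # 2 < 3: position and velocity are observable, temperature is not
```

The rank deficiency (2 instead of 3) is exactly the statement that no manipulation of position measurements can pin down the temperature.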
As I noted in "How to Approach the Term Premium," an aggregate term premium is a variable that we cannot hope to measure with currently available data. Although I have not formally proved that the term premium is unobservable, that certainly appears to be the case. The only way we can say that an aggregate term premium exists is if we can infer measurable effects on other variables.
(As I discussed in that article, investors should probably have their own estimate of the embedded term premium when making investment decisions. Since it is your own estimate, you presumably know what its value is. The catch is that we do not know others' estimates based on market behaviour.)
In other words, researchers are writing hundreds of extremely complex papers discussing a concept that shows little sign of existing. If we want to be careful with what we are doing, we should not take the labelling of these time series as given by the researchers. That is, just because a model output is referred to as a term premium by a researcher, we should not assume that is what the variable really corresponds to. However, I will refer to these model estimates as term premia in this article, as otherwise the text would be confusing.
There's an Infinite Number of Term Premia Estimates
The second issue with term premia estimates is that there is an infinite number of them. We can decompose observed nominal yields in an infinite number of ways, and the rules for the decomposition can change over time. The only restriction is that the decomposition is arbitrage-free, which is a relatively weak restriction (albeit one with complex mathematics).

This is wonderful for researchers, as an infinite number of models implies an infinite number of potential papers. (Of course, computational tractability eliminates most potential models.) However, it makes discussion of these models a question of hitting a moving target.
One typical use of these models is to examine the effect of an event (for example, quantitative easing) on term premia. The abstract of such papers typically reads as follows:
{Event X} caused the term premium at maturity M to move by Y basis points.

Such papers can then be used to prove any number of statements about policy.
The correct way of interpreting such papers is that the researcher has found one term structure model -- out of an infinite number of possibilities -- where event X coincided with a move in the term premium of Y basis points.
Therefore, the usefulness of such research depends upon your prior beliefs about academic and central bank research. If you normally believe the claims of researchers in their abstracts, there is no problem. For those of us with more cynical prior beliefs, such results can easily be explained as being the result of model-mining.
The Decompositions are Dubious
Once we get past the previous high-level problems, which are highly generic, we are left with more model-specific issues. These problems are usually the result of another inherent problem: we have no natural way to decompose observed yields into term premia and the expected path of short rates.

In order to do this, the usual procedure is to force one of the components to follow some estimated value, and then the other component has to equal the residual. (One alternative -- interpreting statistical factors -- is discussed later.) That is, we could force the term premium to be roughly equal to some variable, and then expectations are (roughly) equal to observed yields minus the estimated term premium. Vice-versa if we force expectations to follow some variable.
Some example decompositions I have run across over the years include the following.
- Use a survey of economists to determine the expected path of rates.
- Use a measure that is roughly equivalent to historical volatility (or implied volatility) of rates to determine the term premium.
- For inflation-linked curves, use a fundamental model with 2-3 variables to estimate expected inflation.
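To make the residual construction concrete, here is a minimal numerical sketch. All numbers are invented for illustration; the "proxies" stand in for the survey or volatility measures listed above:

```python
import numpy as np

# Observed 10-year yields (hypothetical, in percent) at four dates.
observed_yield = np.array([4.0, 4.2, 3.9, 4.1])

# Decomposition A: force the term premium to track a (made-up) volatility
# proxy; the expectations component becomes the residual.
term_premium_a = np.array([0.8, 0.9, 0.7, 0.8])
expectations_a = observed_yield - term_premium_a

# Decomposition B: force expectations to track a (made-up) survey;
# the term premium becomes the residual.
expectations_b = np.array([3.5, 3.5, 3.5, 3.6])
term_premium_b = observed_yield - expectations_b

# Both decompositions reproduce the observed yields exactly, yet they
# disagree about the term premium at every date.
print(term_premium_a)
print(term_premium_b)  # differs from A at every date
```

Both decompositions fit the observed data perfectly, which is why the observed yields alone cannot tell us which "term premium" is the real one.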
The problem with all of these techniques is that they are questionable. In most cases, the importance of these assumptions is largely buried under a discussion of the mathematics of the term structure model. However, for those of us who are primarily concerned about the level of the term premium, the results are entirely driven by these fundamental estimation techniques.
There is an alternative way of approaching this problem, which is based on a yield curve model that relies solely on statistical risk factors. The researcher then interprets one or more of these factors as being a term premium. Such an approach appears more reasonable, but the analysis then comes down to battling interpretations of the data. The presumed attraction of term premium models is that they were supposed to eliminate verbal arguments over how to interpret yield curve movements. Since these models are quite distinct, they are not discussed in the rest of this article.
Frequency Domain Problems
In most cases, model estimates for the rate component use data that are at a lower frequency than bond market data. By definition, all of the high frequency components of bond yields ("noise") have to be attributed to the other factor.
In particular, if we have a slow-moving estimate of expected rates, term premia will be oscillating at a high frequency. In my opinion, such a decomposition makes little economic sense. (I would need to justify this intuition in other articles.)
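The frequency mismatch can be illustrated with a toy simulation. All series here are hypothetical: a daily yield built from a slow component plus daily noise, and an "expectations" proxy that only updates quarterly, as a survey would:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily 10-year yield over one year: slow component plus noise.
days = 250
slow = 4.0 + 0.5 * np.sin(2 * np.pi * np.arange(days) / days)
yield_10y = slow + 0.10 * rng.standard_normal(days)

# Rate expectations proxied by a quarterly survey: a step function that
# only updates every 63 business days (a low-frequency series).
expectations = slow[(np.arange(days) // 63) * 63]

# Forcing expectations to follow the survey makes the term premium the residual.
term_premium = yield_10y - expectations

# Volatility of one-day changes: essentially all of the daily "noise" is
# attributed to the term premium, because the survey proxy barely moves.
print(np.std(np.diff(expectations)))  # small: only jumps at quarter ends
print(np.std(np.diff(term_premium)))  # roughly the daily noise scale
```

By construction, the day-to-day wiggles in the yield have nowhere to go but the term premium, which is the decomposition I am arguing makes little economic sense.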
Why are Survey Estimates Dubious?
It would seem that surveys regarding the path of short rates would be a useful estimator for rate expectations. However, these estimates are mainly used for entertainment purposes by market participants. (The people being surveyed tend to take them more seriously, of course.)
The problem with surveys is that they are almost invariably set by a chief economist, who has to work with a committee to set a house view on the economy. Since each committee meeting is a compromise between factions, there is considerable institutional inertia in their estimated path for short rates. Market participants are well aware of the tendency for economists to be stubborn, only throwing in the towel on their views after the bond market has already moved.
Furthermore, there is considerable herding behaviour among economists in surveys. The optimal strategy is to put your view at one end of the consensus. If the outcome is well outside the consensus in your favour, you have the best forecast, and people love you. If the outcome is on the other side, your forecast was only slightly more wrong than the others.
To top things off, what matters for bond pricing is what investors think, not what economists think. Even if an investment firm has a Chief Economist, the positioning of its bond portfolio may bear no resemblance to the Chief Economist's views. Large bond investors are extremely coy about their positioning. If they write public bond market commentary, it may only reflect a desire to get out of a position. (Fiduciary rules should certainly imply that such investors not signal future portfolio shifts.)
Finally, surveys are done at a low frequency (and with an unknown lag), while market makers adjust prices instantly based on incoming data and flows. As discussed in the previous section, this loads all of the high frequency dynamics in the curve onto the term premium, which ends up wiggling around like a greased pig. (The obvious fix to this frequency mismatch is to do a survey of views about the term premium; if it does move at a low frequency, you do not need to worry about aligning survey data to market data.)
Relationship to Realised Excess Returns

If we want to interpret a time series as a term premium, it should have a relationship to future realised excess returns of a bond at that maturity. The deviation of the term premium from future excess returns is equal to the forecast error of the embedded rate expectations series.
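This identity can be checked with simple arithmetic. The numbers below are hypothetical and the excess return proxy is deliberately crude (yield minus realised average short rate over the holding period):

```python
# One-period illustration in basis points (hypothetical numbers).
observed_yield = 400            # yield on the bond at purchase
expected_avg_short_rate = 340   # rate expectations embedded in the model
term_premium = observed_yield - expected_avg_short_rate        # 60 bps

realised_avg_short_rate = 300   # what short rates actually averaged
excess_return = observed_yield - realised_avg_short_rate       # 100 bps (crude proxy)
forecast_error = expected_avg_short_rate - realised_avg_short_rate  # 40 bps

# The term premium deviates from realised excess returns by exactly
# the forecast error of the embedded expectations.
print(excess_return - term_premium == forecast_error)  # True
```

The point is that a "term premium" estimate built on systematically biased rate expectations will systematically miss realised excess returns, which is testable in principle.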
For the long end of the yield curve, we have problems with data limitations. The excess return of a 10-year swap starting in January 2000 is going to be pretty close to the excess return of a 10-year swap starting in February 2000. In order to create completely independent observations, we would need to use January 2010 as the next point we test. (I assume that there would be legitimate ways of taking samples closer together.)
This runs into the problem that bond yields were regulated in the developed countries until the 1970s, or even the early 1980s. Furthermore, we had a major yield cycle within that era, in which it is clear that everyone overestimated future short rates. (This did show up in historical excess returns.)
However, this is not the case for the front end of the curve. For example, in a 25-year period, we have 100 completely independent 3-month instruments issued. Additionally, short rates across currencies are somewhat uncorrelated, increasing the number of potential observations. This allows us to compare the predictions of the yield curve models with actual market behaviour. From an empirical standpoint, such historical analysis is where many term structure models fall apart.
The overall relationship between a term premium and future excess returns is somewhat complicated; I may discuss it again in a later article.
Concluding Remarks
This article outlines the generic problems with term premium estimates derived from term structure models. We can then look at particular models, and see how they relate to the specific technique used.
I may look at one or two examples, but I am not enthusiastic about this task. Any critique pointing out that a particular model has an undesirable property just raises the response that another model does not have that property. Given the infinite number of models available, that is a never-ending game of Whac-a-Mole™. I would rather spend my time looking at techniques that are useful than get bogged down chasing after an unlimited number of techniques that appear to have few redeeming features.
(c) Brian Romanchuk 2017