(Note: This is a speculative, unedited first draft of a section that may make its way into the chapter on neoclassical techniques in my book on recessions. At the time of writing, I am mainly familiar with the standard DSGE models that are meant to describe economic expansions. There are models that may be better suited to capture recessions, mainly developed after 2008. I may need to qualify some statements as part of the writing process. At the same time, it is clear that this generic problem is a widespread issue in the broad literature.)
The difficulties I will outline here are foreign to stock-flow consistent (SFC) models, which conform much more closely to what one might expect to happen in a model. Stock-flow consistent models do not attempt to model future states -- the right decision, in my opinion -- rather, any "expectations" that are embedded in the model for time t are based only on data available at time t. (Technically, since many of the models clear markets at time t, the expectations are in some sense model-consistent, but only for the current time period.) As anyone who has coded such models realises, the annoyance with SFC models is getting the initial conditions aligned with the first period's solution. (Otherwise, the model starts off with unusual dynamics as the lagging information is drawn into line with the current solution.)
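For readers who prefer code, here is a minimal sketch of both points. It is my own toy construction, loosely in the spirit of the simplest SFC models rather than any published one, and the parameter values are purely illustrative: the "expectation" of income at time t is just the previous period's realised income, and a badly chosen set of initial conditions produces a transient while the lagged variables are pulled into line with the solution.

```python
# Toy SFC-style loop: expected income = last period's realised income, wealth
# carried as a lagged stock. Parameter values are purely illustrative.
alpha_income, alpha_wealth, theta, G = 0.6, 0.4, 0.2, 20.0

def run(h_initial, yd_expected_initial, periods=10):
    h, yd_expected = h_initial, yd_expected_initial
    output_path = []
    for _ in range(periods):
        c = alpha_income * yd_expected + alpha_wealth * h  # spend out of expected income and lagged wealth
        y = G + c                                          # goods market clears within the period
        yd = (1.0 - theta) * y                             # realised disposable income
        h += yd - c                                        # wealth accumulates the saving flow
        yd_expected = yd                                   # the time t+1 "expectation" uses only time t data
        output_path.append(round(y, 2))
    return output_path

print(run(h_initial=0.0, yd_expected_initial=0.0))    # misaligned initial conditions: visible transient
print(run(h_initial=80.0, yd_expected_initial=80.0))  # initial conditions at the steady state: no drama
```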
The Linearisation Dodge
From a practical perspective, the nonlinear models that appear in DSGE macro papers are purely ornamental -- we use them to derive a linearised (technically, log-linearised) model. The original nonlinear model (which is what I am referring to here) is then dropped from consideration. The resulting log-linear models are sufficiently primitive that we can interpret time in them in the same way as in SFC models. I will discuss the use of these models in {another section of the book}.

Forward Pricing Time
The concept of a forward pricing time axis will be intuitively understood by anyone involved in fixed income markets, but may seem quite strange to others. I will now quickly outline the concept.

Imagine that we are in a country where fixed income practitioners are quite lazy, and only trade bonds once per year, on Sadie Hawkins Day. All fixed income instruments are annual coupon bonds with maturity dates on Sadie Hawkins Day. Furthermore, this country conveniently rebased its calendar so that we are in year 0.
In year 0, we face an array of fixed income products, including forward rates, that have maturities that are all in years 1, 2, 3, etc. We note that arbitrage opportunities are small, and so we can fit a zero curve and adequately price all instruments. This allows us to back out a fitted forward curve that is close to observed forward rates. The formula for the first forward -- the 1-year rate, starting 1-year forward -- is given by:
(1 + forward) = (1 + r_2)^2 / (1 + r_1),
where r_1, r_2 are the 1- and 2-year zero rates respectively.

If we were somehow able to apply option pricing to such a world (delta hedging would be awkward), we would see that the expected value of the probability distribution for the 1-year rate starting 1-year forward would equal the above forward rate (with all notions of term premia thrown out the window). This leads people to refer to the forward rate as the "expected rate," which ends up being interpreted by practically everyone as the "forecast rate." There is a leap of faith in equating the result of the mathematical expectations operation with a "market forecast," but one could argue that there are decent arguments for treating the two concepts as the same thing.
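As a concrete illustration, the sketch below (with made-up zero rates, not market data) backs out the one-year forward rates implied by a zero curve, using the same formula.

```python
# Annually compounded zero rates for maturities 1, 2, 3, 4 years (illustrative).
zero_rates = [0.020, 0.025, 0.028, 0.030]

def one_year_forwards(zeros):
    """1-year rates starting 1, 2, ... years forward, implied by the zero curve."""
    forwards = []
    for n in range(1, len(zeros)):
        # (1 + r_{n+1})^(n+1) = (1 + r_n)^n * (1 + forward), rearranged:
        forwards.append((1.0 + zeros[n]) ** (n + 1) / (1.0 + zeros[n - 1]) ** n - 1.0)
    return forwards

print(one_year_forwards(zero_rates))  # first entry is (1 + r_2)^2 / (1 + r_1) - 1
```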
What happens when we hit year 1, or later years? Unless we actually entered into forward contracts, we have no reason to care what people in year 0 thought the level of rates might be. We face a new curve, with a brand new forward curve. That is, we have a completely different forward time axis at time point 1. (The dependence of the forward curve on calendar time as well as on forward time makes fixed income mathematics brutal to read.) Although we would like to think that the forward curve at time 0 has some information about the forward curve at time 1, that may be wishful thinking. (In the real world, we only have one trading day between forward curve generations, so they resemble each other. However, if we have a long time step between successive forward curves -- which is a feature of economic models, which are quite often run at the quarterly frequency -- the predictive value of the older curve is extremely limited.) The reason is straightforward: a great deal of information has arrived, causing people to adjust their forecasts, and expectations presumably follow.
DSGE Models and Time
The premise behind DSGE models is that they are dynamic, and so expectations about the future matter. As a result, we do not just face current prices and interest rates, we have a forward curve for everything over the entire model time axis. For perfectly plausible reasons, the forward time axis heads out to infinity, which causes chortling among critics (including myself on occasion).

The bulk of the analysis of the nonlinear model is spent trying to characterise the solution over the forward time axis. The idea is that expected values are coherent, and represent some form of a market clearing operation. I have reservations about how these operations are expressed in the parts of the literature I have examined, but those reservations are not material for this discussion. The key point is that the nonlinear model is only solved in forward time, and I have never run across a discussion of what happens when calendar time passes.
One fairly outlandish interpretation is that at time zero, one gigantic DSGE model was set up, and we are slowly fulfilling the expected path of economic behaviour that was the result of the market clearing at time zero. Such a literal interpretation of the DSGE mathematical setup does not appear to be the intention of anyone using the models.*
Based on the experience of fixed income markets, the natural interpretation is that if we want to fit DSGE models against observed data in calendar time, we would need to re-run the model with updated initial conditions (to reflect new information) at each time point.
Since these models are intended to be internally consistent with the passage of time, if no new information arrived, the new solution at time 1 would align with the forward expectations generated when we solved at time 0. That is, the forward curves would evolve as forecast, with spot rates trading where the forwards were. In fixed income, the implication of spot always hitting the forwards is that every instrument has the same return. Since we have an entire industry devoted to trading bonds, it is safe to say that nobody expects that to happen.
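That property is easy to verify numerically. In the sketch below (again with illustrative rates), next year's zero curve is assumed to land exactly on the forwards implied by today's curve, and every zero-coupon bond ends up earning the same one-year return, namely the current one-year rate.

```python
# Today's 1y..4y zero rates (illustrative numbers).
zero_rates = [0.020, 0.025, 0.028, 0.030]

def price(rate, years):
    return 1.0 / (1.0 + rate) ** years

r1 = zero_rates[0]
for n, r_n in enumerate(zero_rates[1:], start=2):
    # (n-1)-year rate starting one year forward, implied by today's curve.
    fwd = ((1.0 + r_n) ** n / (1.0 + r1)) ** (1.0 / (n - 1)) - 1.0
    # Buy the n-year zero today, sell it in a year when it prices off the forward.
    one_year_return = price(fwd, n - 1) / price(r_n, n) - 1.0
    print(n, round(one_year_return, 6), round(r1, 6))  # identical for every maturity
```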
In fixed income, the mathematical analysis of what happens along the forward time axis is straightforward; the only complications show up in the arbitrage of exotic derivatives. However, when we turn to analysing what happens in calendar time, we have very few definitive mathematical laws, and wild theories run free. When we are looking at calendar time, we are discussing trades on the direction of interest rates (directional trades).
If we want to use fixed income terminology, DSGE models are arbitrage models for economic variables. They could be amazing if anyone ever wanted to arbitrage forward economic time series. Unfortunately, no sane person does that. We are interested in the directional properties of economic time series.
The Time Zero Problem
Although this sounds like academic wrangling, this distinction is critical for nonlinear DSGE models. (Once again, the log-linearisations sidestep this problem, at the cost of being primitive.) The optimality conditions within the models amount to finding a path for variables along the time axis, with conditions relating each time point to the ones coming before and after. Essentially, this is a boundary value problem. The concern is straightforward: time zero is at one endpoint of the time interval, and the endpoints of boundary value problems show the most extreme behaviour.

As an example, we could grab a DSGE model with flexible prices -- a real business cycle (RBC) model. Many of these models have an interesting property: almost all analysis is based on real quantities, and so all dollar amounts are multiplied by either the price or wage variable. (The critical exception is wealth, as discussed below.) We could replace the currency used in the model with a new currency that has a fixed exchange rate to the original currency, and the solution expressed in the new currency will still meet the model conditions. That is, re-denominating the currency changes nothing. (Once again, not all models have this property. One example would be a model where money enters the utility function as an absolute number: in that case, re-denominating the currency by adding zeros to the old currency raises utility, since the absolute number of currency units rises. Since that does not meet economists' prior views about utility, models tend to be invariant to currency re-denominations.)
So what? Imagine that we somehow found a solution to a model that is invariant to currency re-denomination. If we then re-scaled all nominal variables to be twice the original solution, we would still have a solution. The implication is that doubling the inherited wealth (which consists of the government liabilities held from the previous period) corresponds to a solution where all prices are doubled. In other words, there is a constant that relates the inherited wealth to the price level in the model. If this constant were stable, we would see something equivalent to the Quantity Theory of Money, although instead of money, it is inherited wealth. This is a special case of what is known as the Fiscal Theory of the Price Level.
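To make the scaling concrete, here is a toy calculation. It is my own illustrative construction (not a model taken from the fiscal theory literature): the real value of inherited government liabilities is set equal to the present value of assumed real primary surpluses, and so doubling the inherited nominal wealth doubles the implied price level while leaving everything real untouched.

```python
def initial_price_level(nominal_wealth, real_surpluses, real_rate=0.03):
    """Price level implied by equating real inherited wealth to discounted real surpluses."""
    pv_surpluses = sum(s / (1.0 + real_rate) ** (t + 1)
                       for t, s in enumerate(real_surpluses))
    return nominal_wealth / pv_surpluses

surpluses = [5.0] * 30  # assumed constant real primary surpluses (illustrative)
print(initial_price_level(100.0, surpluses))  # some price level P
print(initial_price_level(200.0, surpluses))  # double the inherited wealth: exactly 2P
```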
Why do we see such scaling? In an RBC model, all constraints on prices are relative constraints: prices relative to wages in the current period, discount rates from one period to the next (interest rates), and prices in one period relative to the previous period (inflation). There is nothing to pin down the initial price level. We can multiply the wage and price time series by a constant, and we still have a valid solution. The only thing that allows us to pin down the initial price level is a scaling that is fixed in monetary terms, and that is exactly what the initial wealth represents.
The so-called New Keynesian models add a new effect to pin down the initial price level. Goods in the model are a composite entity, and some prices in the price index are fixed. If we inherited fixed prices from a previous period, those initial prices would also help provide an anchor for the initial price level. We would now have two initial conditions to help pin down the initial price level.
(There is a technical issue: most treatments I have run across assume that all prices are free to move in the initial period, and that stickiness only affects later prices. The model has to explicitly incorporate an initial condition of stuck prices to get away from the world of scaling based on initial wealth. This may explain why it is possible to derive the Fiscal Theory of the Price Level for models with sticky prices.)
We now see the issue: if governments run deficits, private sector wealth is changing. Even if the effect is modulated by sticky prices, the initial price level should be expected to whip around if we re-run the model at different calendar times. Aligning the dynamics of those repeated model runs -- in which events do not unfold as forecast -- with observed data appears to be a very difficult task, and it is unclear how adequately the existing empirical literature does the job.
Time and Recessions
When we turn to the question of recessions, the difference between forward time and calendar time is critical.

Imagine that we started running DSGE models in time period 0, and there is a deep recession starting in time period 10, with an expansion on either side of the recession. Furthermore, we assume that a DSGE model captured this. What happened?
- We ran the model once in time period 10, and it generated a fall in activity starting in time period 10.
- We ran the model each period, and the initial calculated model solution started falling in time period 10.
These are radically different interpretations.
For the first, we need to somehow generate a sharp change in activity that starts in time period 10, while activity on either side is stable. How is this possible, particularly given the smoothing behaviour implied by discounted utility maximisation? Could such an event be forecast in a real sense?
For the second, the instability of the initial solution of DSGE models aids the explanation. It may take much less movement in model parameters to destabilise the initial point of the solution. The downside of this version is that not only do we need to find the nonlinear model solution, we also need to characterise the time series properties of the sequence of solutions generated by re-running the model each period. Given the complexity of the models (and the vagueness of the solution methods), this is a tall order.
Concluding Remarks
Unless we duck into log-linearised models, or look at toy models designed to generate recessions, DSGE macro models are not inherently suited for recession forecasting. I will survey the attempts at explicit recession model building in a later text.
Technical Appendix: Probability and Time
The interpretation of probability in nonlinear DSGE models is also awkward once we start to enquire closely into the time axis.
The models are solved in some sense by appealing to market clearing: demand and supply have to equal each other. However, if we accept the stochastic formalism of the models, the variables are random variables. How can two distinct random variables be guaranteed to equal each other?
I accept that someone could come up with some form of model where such variables are characterised by probability distributions, yet end up equal to each other. However, such a model may bear no resemblance to the models in the literature. My intuition is that if we want to even pretend that we can solve the models, we need a firm order of operations. Firstly, we draw the realisation of all random variables (parameters) for all time points on the forward axis. Secondly, we apply the reaction functions of the agents in the model to the now-fixed parameters. That is, the dynamic model is deterministic, with the only random component being that input parameters get adjusted in some random fashion. Such an interpretation would eliminate almost all of the stochastic boilerplate that clogs the model expositions.
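Here is a minimal sketch of that order of operations, using a textbook toy (log utility, Cobb-Douglas output, full depreciation) whose saving rule k' = alpha*beta*z*k^alpha is known in closed form; all parameter values are illustrative. The entire "random" path is drawn first, and the dynamics applied to it afterwards are purely deterministic.

```python
import random

# Step 1: draw the realisation of every "shock" over the whole forward axis.
alpha, beta, horizon = 0.33, 0.96, 20
random.seed(0)
productivity = [1.0 + random.gauss(0.0, 0.02) for _ in range(horizon)]

# Step 2: apply the agents' reaction function to the now-fixed parameters;
# nothing random happens from this point on.
capital, output_path = 0.2, []
for z in productivity:
    output = z * capital ** alpha
    capital = alpha * beta * output  # closed-form saving rule for this toy model
    output_path.append(round(output, 4))

print(output_path)  # under this reading, any "recession" in the path is known at time 0
```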
If my interpretation of probability is correct, recessions become even harder to interpret if they occur in forward time. At time 0, we would know with certainty all forward parameter values, including "shocks." That is, the recession would be known with certainty in advance, and so path-planning would be based on that known event.
Footnote:
* Disclaimer: one never knows with some academics.