(As an editorial note, this article will likely be the concluding section of a chapter on empirical recession models in an upcoming book about recessions. The argument that these models cannot tell us why recessions happen explains why the rest of the book will slog through the swamps of the competing economic schools of thought. I have not completed my discussion of probit models, but I may put that aside until I am ready to put the manuscript into order. Since I have a hard time supporting equations in my book layouts for all the publishing formats -- particularly e-books -- I want to avoid equation-heavy discussions that are not critically important. As the tone of this article suggests, I am not convinced about the value of probit models as anything other than a curiosity.)
I divide empirical recession models into two classes:
- models that examine the trends in economic activity variables (employment, production, GDP, etc.); and
- models that attempt to forecast recessions based on examining time series that historically appear to offer predictive value (e.g., yield curve slopes).
The activity-based models are more straightforward in the context of this discussion. We can define a recession as a general downturn in activity variables. Using a statistical test to detect such a generalised downturn could be viewed as an alternative way of designating a period as a recession. There are only a few theoretical points to consider.
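Before turning to those points, the following is a minimal sketch of what such a mechanical designation rule could look like; the six-month window and the requirement that three-quarters of the series be falling are arbitrary illustrative assumptions on my part, not a standard definition.

```python
import numpy as np

def downturn_flag(activity, window=6, threshold=0.75):
    """Flag month t as a 'generalised downturn' if at least `threshold` of the
    activity series are lower than they were `window` months earlier.

    activity: array of shape (months, series) holding the levels of activity
    variables (employment, production, etc.). Parameters are illustrative only.
    """
    months = activity.shape[0]
    flags = np.zeros(months, dtype=bool)
    for t in range(window, months):
        # Fraction of series that are lower than they were `window` months ago.
        falling = np.mean(activity[t] < activity[t - window])
        flags[t] = falling >= threshold
    return flags
```

This is essentially a crude diffusion-index rule; the point is only that a purely statistical designation is possible, not that these particular parameters are the right ones.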
The first is that if we blindly grab variables from statistical databases, some of them will just be alternative measures of the same concept, such as measuring employment via surveying employers versus surveying households. Furthermore, other variables are linked by definition. For example, so long as the workforce participation rate is stable, the unemployment rate and the number of people employed move inversely to each other.*
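In symbols (my own illustrative notation: P is the working-age population, p the participation rate, u the unemployment rate, and E employment):

```latex
E = (1 - u)\,p\,P
```

So long as p and P are roughly fixed, E and u must move in opposite directions.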
The second is that we have reason to believe that other variables will be related. For example, if we believe in the concept of a production function, output on a monthly basis should be highly correlated with changes in aggregate hours worked. (This is because the capital stock does not vary appreciably on a month-to-month basis, and productivity is normally fairly stable over the short run.) To the extent this is true, we should expect employment and production to move together by construction.
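A one-line sketch of that logic, using a generic Cobb-Douglas production function purely as an illustrative assumption (Y is output, A productivity, K the capital stock, H aggregate hours):

```latex
Y_t = A_t K_t^{\alpha} H_t^{1-\alpha}
\quad\Rightarrow\quad
\Delta \ln Y_t = \Delta \ln A_t + \alpha\,\Delta \ln K_t + (1-\alpha)\,\Delta \ln H_t
\approx (1-\alpha)\,\Delta \ln H_t ,
```

since the productivity and capital terms are close to zero from one month to the next. Monthly output growth is then roughly proportional to the growth in hours worked by construction.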
The final theoretical issue is: why should recessions be particularly interesting in the first place? If we believed that economic activity was the result of some steady-state glide path plus random disruptions, why should we not view changes in economic variables as being akin to dice rolls? That is, one month employment goes up, the next it goes down, and so on. Even if changes were truly random, we would expect some strings of negative employment changes to show up purely by chance. Furthermore, different economic activity variables could be largely uncorrelated with each other, modulo the issue of variables that are correlated by definition (as previously noted). This concern is the only one that has a direct impact on the discussion of theory: we can either accept or reject models based on whether they generate economic data that is random (or not random). This would most likely be grounds for testing some neoclassical models, such as Real Business Cycle models.
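A small simulation makes the dice-roll point concrete: even independent monthly changes produce the occasional string of declines, so a short run of bad months is not by itself evidence of anything. The probabilities below are arbitrary illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 600 "months" of i.i.d. employment changes: up with probability 0.6, down otherwise.
changes = rng.choice([1, -1], size=600, p=[0.6, 0.4])

# Length of the longest run of consecutive negative months.
longest = run = 0
for c in changes:
    run = run + 1 if c < 0 else 0
    longest = max(longest, run)

print("Longest run of negative months:", longest)
# Runs of three or four down months appear routinely in purely random data;
# what randomness does not generate is broad, synchronised declines across
# many activity variables at once.
```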
Once we hit the much larger class of forecasting models, there is a greater tie to theory. However, even there, it is unclear how strong a conclusion we can draw from the existing models.
One could attempt to build a forecasting model purely by ransacking a database of economic time series. In the worst case, we take 10,000 economic time series, and slap them into the equivalent of a neural net, and have a giant black box that generates recession probability signals.
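As a caricature of that worst case, the sketch below feeds a pile of synthetic series into a small neural network and reads off recession probabilities; the data, the network size, and the library choice are all illustrative assumptions, and the result is exactly the kind of uninterpretable signal being described.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Synthetic stand-in for a large database: 480 months of 200 economic series
# (a real exercise might ransack thousands), plus a 0/1 recession indicator.
X = rng.normal(size=(480, 200))
y = (rng.random(480) < 0.12).astype(int)  # roughly 12% of months labelled "recession"

# A generic black box: fit the classifier and emit recession probabilities.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=1)
model.fit(X, y)
prob_recession = model.predict_proba(X)[:, 1]

print(prob_recession[-12:])  # "recession probability signal" for the last year
# The fit may look fine in-sample, but nothing in the fitted weights tells us
# why the signal moves, which is the point of the black-box objection.
```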
These black box approaches have popped up again in academia and in industry, under the guise of "data science." I ran into the first generation of neural net applications in control systems back in the early 1990s when I was doing my doctorate. Neural nets made almost no headway into the field, precisely because they are black boxes. People want to know whether their engineering systems -- such as aircraft -- are stable when they operate. Having a control law that cannot be modelled makes stability analysis purely a game of simulations. However, all simulations of physical systems are imperfect, and if we have no tractable model for the approximate system, we have no ability to judge how close to instability we are. The applications to economics are straightforward: no matter how well a black box worked on historical data, if we do not know how it worked, we have no way to judge whether that will hold up going forward.
If we eschew black boxes that lump together inputs, we tend to see that the best inputs for forecasting models are based on variables that incorporate some form of expectations. The main classes are:
- financial market variables (e.g., yield curve slopes);
- surveys of economists;
- surveys of industrialists, such as purchasing managers.
I am relatively comfortable with the surveys of industrialists: they are supposed to be measures of what firms see in activity, such as customer orders. If firms see their customers placing fewer orders, it is straightforward to see that they will themselves cut back on production. As a result, it would not be surprising for things like purchasing managers' surveys to mechanically lead other economic activity variables (like employment). When manufacturing was more dominant economically, purchasing managers' surveys were one of the top indicators to track. However, the shift towards services means that there is less of a manufacturing inventory cycle, while it is hard to design surveys for services.
Once we get to surveys of economists or looking at market prices, we run into theoretical difficulties. The human beings behind these expectations-based series have to get their expectations from somewhere.
Various theories may suggest which set of expectations are interesting. In fact, theory has presumably driven which surveys are undertaken and followed. However, different theories could easily come up with the same list of candidate expectations to track. For example, it is not hard to predict that surveys of the state of customer orders would be useful information for judging the state of the manufacturing cycle.
If we focus on the most successful input -- yield curve slopes -- we can see the theoretical problems. Almost any economic model could suggest that yield curve slopes are good indicators. All we need are the following conditions.
- The central bank normally cuts rates when a recession hits.
- Bond market pricing is driven by rate expectations.
- Bond market participants are not completely incompetent in forecasting recessions.
Since it is very easy for these conditions to be true, we would observe that yield curve slopes are useful for predicting recessions regardless of which economic models are best. As a result, we cannot say that the success of the yield curve as a recession predictor offers any support for any particular school of thought.
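For reference, the mechanics of such models are simple: the standard approach in the literature is a probit regression of a recession indicator on the term spread (the approach associated with Estrella and Mishkin). The sketch below uses made-up data purely to show the mechanics, and makes no empirical claim.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Made-up data: the 10-year minus 3-month spread (percentage points), and a
# recession dummy 12 months ahead that is more likely when the curve is flat
# or inverted. Real work would use actual history, lags, and out-of-sample tests.
spread = rng.normal(loc=1.5, scale=1.2, size=400)
prob_true = 1 / (1 + np.exp(2.0 * spread))          # flatter curve -> higher risk
recession_12m_ahead = (rng.random(400) < prob_true).astype(int)

# Probit of the future recession dummy on the current spread.
X = sm.add_constant(spread)
result = sm.Probit(recession_12m_ahead, X).fit(disp=False)

print(result.params)          # negative slope: inversion raises the fitted probability
print(result.predict(X)[:5])  # fitted recession probabilities
```

Note that nothing in such a regression identifies why the spread works; it only quantifies the historical association.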
A mainstream economist might argue that the importance of expectations variables is a point in favour of mainstream economics. I would argue that this is misplaced: just because mainstream economists stick the mathematical expectations operator into their models -- and post-Keynesians generally do not -- does not mean that mainstream economics has a monopoly on the concept of expectations. Post-Keynesians are very well aware of the importance of sentiment in driving economic activity; however, they argue that the general equilibrium framework is flawed. The expectations operator is eschewed on the straightforward argument that we cannot solve the models we want to solve.** The post-Keynesian models are deliberately simplified constructs, mainly for teaching purposes.
Concluding Remarks
I will be addressing the competing schools of thought and their views on recessions in later articles; that would be the core of the book. The question is whether empirical work can support one school of thought over another. When we look at recession prediction models, we will probably only be able to rule out some very weak models.

Footnotes:
* Employment is usually defined in terms of the total number of workers employed, whereas the unemployment rate is the percentage of the labour force that is not working. For example, stay-at-home spouses or students are not looking for jobs, and so are not counted as unemployed. However, the percentage of the population in the labour force is not constant, and so if it fell, it would be possible for the unemployment rate to fall without the number of workers employed changing.
** My contentious argument is that this is where neoclassical economics breaks down; they write down fancy mathematical models, but they are not properly solving them.
(c) Brian Romanchuk 2019