One of the interesting features of neoclassical macro is the vagueness of how the models are supposed to work. One can find popularisations of General Relativity which are meant to be understood by people who just took high school physics. And if one has the misfortune of studying tensors and manifolds, one might even have a chance of guessing at the mathematics behind the explanations. I have not seen anything remotely useful for neoclassical macro at a general reading level, while the more technical introductions have the defect of being expressed in what is best described as “economist mathematics.”
The working paper “How do central banks control inflation? A guide for the perplexed” by Laura Castillo-Martinez and Ricardo Reis is one of the better attempts at an introduction that I have encountered, but it is mathematical. The advantage is that they address the more squirrelly part of the mathematics that other texts tend to bury under a wall of obfuscation. Someone not interested in the mathematics might be entertained by puzzling through the text, but the hidden cost of doing that is that one is entirely reliant upon their textual representations of the models.
Back to Basics
The working paper is relatively straightforward because it remains close to the household optimisation problem. This makes it easier to follow because it is closer to standard mathematics.
We could imagine an optimisation problem for a household. Given an initial stock of money and a future earnings flow, the objective is to generate a sequence of consumption expenditures over an infinite time horizon that optimise a utility function. (Yes, an infinite time horizon is a bit silly, but it is convenient mathematically.) For example, we have $100 to spend on apples, and we want to optimise our lifetime apple consumption utility when we have the full grid of future prices of apples.
We assume that the household is given the time series of future (expected) prices as well as future interest rates that determine the rate of return on an unspent money balance. The utility function is chosen so that the solution will tend to spread out consumption over time. (By contrast, if the utility function said that the utility was given by the square of the number of units consumed, the preference is going to be to consume the entire budget in one shot. For example, assume we could buy 100 apples spread across today and tomorrow. For simplicity, we are indifferent to the date of purchase. If our utility function is the square of apples consumed in a period, the optimal solutions (there are two) are to consume 100 apples either today or tomorrow. But if the utility is the square root of the number of apples consumed per period, then the optimal solution is to consume 50 each day. Utility functions used in neoclassical models are like the square root case.)
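To make the apple example concrete, here is a minimal sketch. The 100-apple budget and the two utility functions come from the example above; the brute-force search over splits is just for illustration, not anything from the paper.

```python
# Brute-force search over ways to split 100 apples between today and tomorrow.
# Total utility is u(today) + u(tomorrow) for a given per-period utility u.

def best_split(u, total=100):
    """Return the split (today, tomorrow) that maximises u(today) + u(tomorrow)."""
    return max(
        ((a, total - a) for a in range(total + 1)),
        key=lambda split: u(split[0]) + u(split[1]),
    )

square = lambda x: x ** 2         # convex utility: rewards bingeing
square_root = lambda x: x ** 0.5  # concave utility: rewards smoothing

print(best_split(square))        # a corner solution: all 100 apples in one period
print(best_split(square_root))   # an even split: 50 apples each period
```

The concave (square root) case is the one that matters: it is why the household optimisation problem spreads consumption over time rather than spending the whole budget at once.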
This is a problem that is not too difficult to pursue with standard 1950s optimal control theory, although optimising on an infinite time horizon is somewhat tricky mathematically courtesy of infinite dimensional spaces being a royal pain in the nether regions (to use mathematical jargon).
However, such a problem was not exactly what economists needed: they wanted prices to be determined within the optimisation problem (as well as determining the optimal consumption path). This is an extremely difficult problem to express in standard mathematics, which is why we end up with “economist mathematics.” However, if the model has a single optimisation problem, one can generally reverse engineer what they are trying to do. (Not the case when they throw in multiple optimisations.)
So, How Do Central Banks Control Inflation?
Although the paper has an expansive title suggesting that it answers how central banks control inflation, it is in fact a survey of a number of neoclassical approaches (which may or may not be internally consistent). As such, it is a good introduction to neoclassical debates. However, it is not an empirical paper, leaving open the question “Do these models stink?”
I am most interested in the first approach, which involves embedding something like a Taylor Rule within a model. So, one might ask: how is a Taylor Rule supposed to control inflation? The answer is somewhat painful, but much cleaner than other texts that I have read that skipped over the mathematical ugliness.
The key theoretical mechanism relies on two alternative specifications of the nominal interest rate. Note that everything here is being expressed in log-linear terms, so we add terms rather than multiply factors. (That is, we do not see (1+i) = (1+r)(1+π), but rather i = r + π. Using additive terms is crucial for the algebra.)
The first is a Taylor Rule: the nominal policy rate (single period) is equal to a constant that is greater than 1 multiplied by the current period inflation rate (so the price change from t-1 to t), plus another term that is given by the rest of the Taylor Rule (which typically incorporates corrections for a non-zero target inflation rate, plus an estimate of the real rate). The key is that the inflation rate from t-1 to t appears.
The second is the Fisher equation, where the nominal interest rate equals the real interest rate in the economy (discussed more below) plus the expected inflation rate from time t to t+1.
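Writing the two specifications out in symbols (the notation here is mine, not necessarily the paper's): let $i_t$ be the nominal policy rate, $\pi_t$ the inflation rate from $t-1$ to $t$, $r_t$ the real rate, $E_t$ the expectation taken at time $t$, $\phi > 1$ the Taylor Rule coefficient, and $v_t$ the residual Taylor Rule terms.

```latex
i_t = \phi \pi_t + v_t              % Taylor Rule, with \phi > 1
i_t = r_t + E_t \pi_{t+1}           % Fisher equation
\phi \pi_t + v_t = r_t + E_t \pi_{t+1}  % same i_t, so equate the two
```

The last line is the relationship between inflation over two time periods that the rest of the argument manipulates.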
Since it is the same nominal interest rate in both equations, we can equate the two expressions. We then get a relationship between the inflation rates over two time periods. Using some algebra (described below in the text block) and a key assumption, we can express inflation rates at any given time as an infinite sum (“summation”) of terms involving variables that we hopefully know. Readers who do not want to wade through the word salad below can skip to the implications.
I will now describe the manipulations. This probably would have been better with equations, but I will try to describe it as text. One could look at the equations in the article instead of my description, but they have a lot of symbols running around in there, and they also skip how the summation is derived. Given the complexity of the expressions, jumping to the summation formula is not a trivial step for anyone who has not seen the equations multiple times.
We rearrange terms in the joined equation to get an equation where the inflation rate between t-1 and t is equal to a simple function of the (expected) inflation rate from time t to t+1. (I am going to drop the “expected” from the description.)
Since we normally refer to the inflation rate between time t-1 and t as inflation at time t, we see that we can specify inflation at time t as a function of inflation at time t+1.
The reason to do this is that we can then use this relationship to specify inflation at time t+1 as a function that includes inflation at time t+2 (since the equation holds for all t, we can relabel). We can then substitute back into the original equation, so that inflation at time t is equal to some terms plus a factor multiplying inflation at t+2. We then keep going, until we end up with inflation at time t equalling a summation of N terms, and a term including inflation at t+N.
We then invoke an assumption that the term including inflation at t+N tends to zero as N goes to infinity (discussed below!), and we end up with an expression for inflation at time t that is a summation of terms that we can calculate without knowing future inflation.
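To check that the repeated substitution actually produces the summation, here is a numerical sketch. The notation is mine: phi > 1 is the Taylor Rule coefficient on inflation, v is the residual Taylor Rule term, r is the real rate, and all expected values are treated as known numbers (which is how they enter the algebra).

```python
# The recursion from equating the Taylor Rule and the Fisher equation
# (log-linear form) is:
#     pi[t] = (pi[t + 1] + r[t] - v[t]) / phi,   with phi > 1.
# Substituting forward N times and assuming the terminal term
# pi[t + N] / phi**N vanishes gives the summation:
#     pi[t] = sum_{k=0}^{N-1} (r[t + k] - v[t + k]) / phi**(k + 1)

phi = 1.5        # Taylor Rule coefficient, greater than 1
N = 200          # horizon; long enough that phi**(-N) is negligible
r = [0.02] * N   # assumed known path of real rates
v = [0.01] * N   # assumed known path of residual Taylor Rule terms

# Backward recursion, starting from an assumed terminal inflation of zero.
pi = 0.0
for t in reversed(range(N)):
    pi = (pi + r[t] - v[t]) / phi

# Closed-form summation obtained by the repeated substitution.
pi_sum = sum((r[k] - v[k]) / phi ** (k + 1) for k in range(N))

print(pi, pi_sum)  # the two agree up to floating-point rounding
```

The agreement is just the algebra; everything interesting hangs on the assumption that the terminal term actually vanishes.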
Since this equation works at t=0 (if the assumptions hold!), the inflation rate from time t=-1 to 0 can be calculated, and so the price level at t=0 is pinned down. (This would not be possible if we did not have the Taylor Rule based on historical inflation, as opposed to expected inflation. I complained about indeterminacy in the past, but including historical inflation in the reaction function is the end run around the issue.)
The problem is that the assumption that allows the summation to converge is entirely based on “we assume that the summation converges” (although expressed in a mathematically equivalent format). The logic is essentially “nobody would believe it if the inflation rate tore off to infinity,” which is precisely not the sort of mathematical logic taught in reputable Real Analysis courses.
The authors even note one of the fundamental issues: the Taylor Rule magnifies inflation deviations. That is not the sort of mathematical system about which I am willing to make leaps of faith regarding the convergence of infinite summations (and the existence and uniqueness of solutions).
Banks - A Red Herring
The article includes a balderdash reference to “banks” that allegedly use “reserves” to invest in “real assets.” Heterodox authors could easily be misled by that text. As always, one needs to take textual assertions about model mathematics made by neoclassicals with a massive grain of salt. There are no “banks” in the model. Instead, they are coming up with a fairy story to motivate an argument about “real interest rates.”
The idea is that if the (expected) real rate of interest on financial investments (reserves/bills that pay the policy rate) departs from the assumed known real rate of return on real assets, then mysterious entities will pop into existence and buy/sell the real assets (which is also the consumption good) versus bills to arbitrage the difference in return. (The real rate of return is supposed to be known because entities know the current period production function, but anyone even passingly familiar with how businesses work realises that this skips a lot of uncertainties.)
In other words, these “bank” entities have no mathematical existence within the model description itself; the only mathematical object is the assumption that the Fisher equation holds (a statement about set elements).
Although this story has a lot of plausibility issues, it is also core to the mathematical manipulations. If the real rate of return at time t is not fixed by the economic laws of nature, the Fisher equation (nominal interest rate equals that real rate of return plus expected inflation) is no longer useful, and we cannot use it to create the summation formula.
The random appearance of “banks” is the sort of thing one has to expect when dealing with economist mathematics. Properly structured mathematics refers to statements about sets, and the sets involved are clearly delineated within the exposition of the model at the beginning. Economist mathematics involves randomly dropping in entities that are not sets in the middle of the exposition, and the reader has to figure out how those entities interact with already existing mathematical entities. And since they refer to real world entities — like banks — one could easily make the mistake of using mathematical operations describing how banks operate in the real world, as opposed to what the authors want the entities to do (“arbitraging” Treasury bills and real assets). It also creates the mistaken impression that such neoclassical models include banking system dynamics, which is definitely not the case here.
Concluding Remarks
If we are to take the model literally, central banks “control inflation” by announcing that they are going to follow a rule that would probably cause the economy to blow up, but nobody really believes it will blow up, so everybody expects inflation to follow some sensible path near the inflation target.
One only needs to re-read that sentence to realise that one is not supposed to take the mathematical models too literally. Instead, one is supposed to assume that the model is an idealised approximation that captures mechanisms that allegedly exist in the real world. The problem with this approach is that if one starts ignoring the core of the mathematical model, there are no objective standards for discussing the quality of the model predictions.
The fundamental issue with neoclassical modelling is that the equilibrium assumption means that everything in the economy is tied together, and mainly influenced by expected values of variables — which are generally not measurable. With all the modelling weight on non-measurable quantities, it is quite hard to deal with what should be straightforward questions, like “What is the effect of an immediate 50 basis point rate hike?,” or even “What was the effect of the Fed rate hike campaign?” The only questions the models are clearly suited for are ones like “What happens if the non-measurable expectation for the production function shifts downwards for the rest of time?”
When this kind of analysis talks about "expected inflation rates," is it meant in the informal sense ("the inflation rates that important people anticipate")? Or is it meant in a more formal sense of some weighted average over a probability distribution?
I am assuming the former (since the latter would probably involve some sort of empirical investigation, upon whose rocks the whole thing would likely soon founder), but if they do mean the more rigorous one, that'd be interesting.