
Tuesday, August 1, 2023

Inflation Targeting In Practice

I have been running into the ongoing debates on nominal GDP targeting, and whether it is superior to inflation targeting. To look at this debate, I need to put my “conventional economist” hat on, since if one accepts the heterodox view that interest rate policy is ineffective, the entire debate is pointless (neither policy “works”). As will become apparent, I think the neoclassical models behind the debate are dubious. But if we accept that interest rates at least sort of work the way that they are conventionally assumed to (e.g., rate hikes dampen growth and inflation), we can have a view on how a change in targets would work in practice.

That said, I am going to ignore nominal GDP targeting in this article, and push that to a later article. (I am still editing my manuscript, so my article writing time is constrained.) So I will just be making some wild assertions about the behaviour of inflation targeting central banks without getting to why I brought it up. However, I think that this is a useful exercise for anyone who is concerned about central bank reaction functions. (If you are trading government bonds, even if interest rate policy does not work the way the neoclassicals believe, you still need to project what central bankers will do.)

Low Frequency Policy Changes

[Figure: Fed Funds target rate (midpoint of the target range for recent data).]

The figure above shows the Fed Funds target rate (or the midpoint of the target range for recent data). The back history is an anachronism: there was no announced formal target for the fed funds rate (which is an interbank rate). (According to this note by Daniel L. Thornton, transcripts suggest that there was an effective target for the fed funds rate starting in 1982.) However, the Fed was coy about targeting the funds rate, only mentioning the target level in August 1997 (as noted on page 48 of the linked report).

If we ignore the erratic Monetarist era of interest rates, we see that the policy rate from the start of the Greenspan era (1987-) generally moved in smooth lines. The cycle looks like this.

  • The economy hits a recession (or policymakers fear a recession), and the policy rate is rapidly cut.
  • The policy rate then is unchanged while policymakers wait for the economy to build up some steam.
  • The policy rate is then hiked at a relatively steady pace until policymakers are convinced that inflationary pressures are quiescent.
  • Whoops, recession — cycle starts over.

There were a small number of wiggles and false starts that did not match that pattern, but the pattern explains the bulk of the movements in the level of interest rates. We could approximate the policy rate by straight lines, with the slope changing at decision points corresponding to entering cut/hike/on hold modes.

If one wanted to literally interpret “low frequency” as the term is used in signal processing, one would slap the time series into a discrete Fourier transform and look at the spectrum. However, that would overstate the “frequency” of the signal, in that the straight lines are hard to approximate with sinusoids. Instead, what matters is the frequency of changes in trend (the inverse of the average time between trend changes). Using that definition, the number of trend changes per year is quite low — four per decade in the past cycles (with most of the switches bracketing short-lived recessions). And if one wants to compare this trend change definition to the standard one using sinusoids, one sinusoid cycle would correspond to one classic interest rate cycle with 4 trend changes (as outlined above). However, we could imagine a slightly different interest rate cycle with a lengthy “on hold” period within a sequence of hikes (or less likely, cuts), which means that there could be more than 4 trend changes in a business cycle.
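To make that definition concrete, here is a minimal sketch of counting trend changes in a policy rate series. The rate path is invented for illustration; it is not actual Fed data.

```python
# Count "trend changes" in a policy rate series: points where the rate
# switches between cutting (-1), on hold (0), and hiking (+1) modes.
import numpy as np

# Hypothetical policy rate path sampled at meeting dates (percent):
# cuts, a hold, a steady hiking cycle, a hold, then emergency cuts.
rates = np.array([5.0, 3.0, 1.0, 1.0, 1.0, 1.25, 1.5, 1.75, 2.0, 2.0, 2.0, 0.5])

def count_trend_changes(rates):
    """Count the points where the direction of the rate trend changes."""
    moves = np.sign(np.diff(rates))
    return int(np.sum(moves[1:] != moves[:-1]))

print(count_trend_changes(rates))  # 4 trend changes in this toy cycle
```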

The Fed is not some strange special case — other developed central banks have behaved in a generally similar fashion since the early 1990s (when there was a bit of silliness defending the ERM). The Bank of Japan absolutely demolished the volatility of the policy rate after the bubble burst.

Gradualism

Why do policy rates follow these somewhat linear trajectories between trend change points? The answer is gradualism, as described in the well-known (but perhaps not well-known enough) speech by Ben Bernanke. I am not a major fan of Ben Bernanke’s output, but the gradualism speech did a good job of capturing this important issue. (I do not know whether anybody published a description that preceded the Bernanke version.)

I described gradualism in Section 3.5 of my exciting summertime thriller Interest Rate Cycles: An Introduction, which is available at reputable online bookstores. I will just offer a short version of the idea using wording which may or may not resemble the original speech.

At the time of the speech, one debate was “Why does the Fed not immediately hike to 3% (from 0%) instead of going by 25 basis points a meeting?” This can be expressed more generally: if the Fed allegedly knows the “optimal” level of the policy rate at any given time, why not set it at that optimal level immediately (ignoring its historical value)? This is a particularly interesting question given that the neoclassical project is based on getting optimal solutions to forward-looking problems.

The answer is: uncertainty. In control engineering, setting the control input to the computed optimal value at all times led to what are known as “bang-bang control laws.” As the name suggests, this is not exactly what you want for a system that you do not want to blow up.1 Both control engineers and central bankers who had not become too enamoured of neoclassical models realised that you need to respect the uncertainty in our understanding of the system being controlled.
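To make the role of uncertainty concrete, here is a toy simulation of a policymaker whose estimate of policy effectiveness is wrong. All the numbers are invented for the sketch (this is not a calibrated model of any economy): jumping straight to the model-implied “optimal” rate destabilises the system, while partial adjustment toward it still converges.

```python
# Toy illustration: aggressive "jump to the optimal rate" control versus
# gradual partial adjustment, when the policy multiplier is misestimated.

TARGET = 2.0      # inflation target (%)
R_NEUTRAL = 2.5   # assumed neutral policy rate (%)
K_TRUE = 1.0      # true sensitivity of inflation to the rate gap
K_BELIEVED = 0.4  # the policymaker's (wrong) estimate of that sensitivity

def simulate(adjust_speed, periods=10):
    """Each period, move the rate a fraction adjust_speed of the way toward
    the rate the (mis-specified) model says closes the inflation gap."""
    pi, r = 5.0, R_NEUTRAL
    path = []
    for _ in range(periods):
        r_star = R_NEUTRAL + (pi - TARGET) / K_BELIEVED  # model-implied rate
        r += adjust_speed * (r_star - r)                 # partial adjustment
        pi -= K_TRUE * (r - R_NEUTRAL)                   # true dynamics
        path.append(round(pi, 2))
    return path

print("jump to optimal:", simulate(1.0))  # oscillates with growing amplitude
print("gradualism:     ", simulate(0.2))  # damped oscillation back toward 2%
```

Strictly speaking, a bang-bang controller switches between extreme settings rather than solving for an “optimal” rate each period, but the failure mode is the same: acting as if the model were exact amplifies the model error instead of damping it.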

Central bankers in practice decided to lag behind the data flow. They only undertook an adjustment in the major trend of interest rates when it was clear that the previous trend no longer made sense. For example, when rates were on hold, they only started hiking or cutting when it clearly had to be done. Once a hike/cut cycle was underway, they knew they were somewhat “behind the curve,” but the pace of hikes/cuts should be fast enough that they will eventually be in a position to make incremental slowdowns in the pace (e.g., skip a meeting), and then eventually reach the “terminal rate” where they re-enter the “on hold” mode.
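That description is close to a state machine, and we can sketch it as one. The thresholds, the 25 basis point step, and the size of the emergency cut below are my own illustrative assumptions, not anything a central bank has published.

```python
# A sketch of the hold/hike/cut "mode" behaviour described above.

def next_policy_rate(rate, mode, inflation_gap, recession_fear):
    """One meeting's decision. mode is "hold", "hiking", or "cutting";
    inflation_gap is forecast inflation minus target (percentage points);
    recession_fear flags a recession scare."""
    STEP = 0.25  # the standard incremental move per meeting

    if recession_fear:
        # Recessions are not symmetric with expansions: cut hard and fast.
        return rate - 4 * STEP, "cutting"
    if mode == "hold":
        # Only leave the trend when the old stance clearly makes no sense.
        if inflation_gap > 1.0:
            return rate + STEP, "hiking"
        if inflation_gap < -1.0:
            return rate - STEP, "cutting"
        return rate, "hold"
    if mode == "hiking":
        # Grind higher at a steady pace until pressures look quiescent.
        if inflation_gap <= 0.0:
            return rate, "hold"  # the "terminal rate"
        return rate + STEP, "hiking"
    # mode == "cutting": keep cutting until inflation stops undershooting.
    if inflation_gap >= 0.0:
        return rate, "hold"
    return rate - STEP, "cutting"

print(next_policy_rate(5.0, "hold", inflation_gap=-2.0, recession_fear=True))
# (4.0, 'cutting')
```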

Although one could pretend this process was symmetric — cutting and hiking cycles are just a sign difference — in practice, recessions are not symmetric with expansions. Things are usually blowing up, and so we see rapid rate cuts to try to re-establish market confidence.

The “gradualism” speech captured how central bankers set rates in practice — which is exactly how the Taylor rule started out: as an empirical description of Fed behaviour. However, one may note the relative importance that the neoclassicals have put on the two observations. The Taylor rule is extremely popular as a concept since it can be built into the core of optimising neoclassical models. Gradualism is turning into a bit of historical arcana because it points to the limitations of those models.

“2% In 2 Years”

With the abstract description of the trends in interest rates out of the way, we can turn to the thinking process.

  1. Policymakers are backwards looking in that the policy rate is set as a small deviation from its previous value.

  2. (Sensible) policymakers are “forward looking with a blind spot” in that they typically aim to bring inflation to the 2% level in two years (the forecast horizon).
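As a worked illustration of these two points, here is a stylised inertial reaction function: the rate is anchored on its own past value (point 1) and leans against the inflation forecast at the two-year horizon (point 2). The smoothing weight and response coefficient are illustrative assumptions, not estimates of any actual central bank’s behaviour.

```python
# Stylised inertial, forecast-based policy rule.

R_NEUTRAL = 2.5   # assumed nominal neutral rate (%)
TARGET = 2.0      # inflation target (%)
SMOOTHING = 0.8   # weight on the previous rate ("small deviation" behaviour)
RESPONSE = 1.5    # response to the forecast inflation gap

def policy_rate(prev_rate, forecast_inflation_2y):
    """Weighted average of the previous rate and a Taylor-type rate built
    from the two-year-ahead inflation forecast."""
    taylor_rate = R_NEUTRAL + RESPONSE * (forecast_inflation_2y - TARGET)
    return SMOOTHING * prev_rate + (1.0 - SMOOTHING) * taylor_rate

# With the forecast glued to 2%, the rule just drifts toward neutral:
print(policy_rate(prev_rate=5.0, forecast_inflation_2y=2.0))  # 4.5
```

Note that when the published forecast invariably reverts to 2% at the horizon, the forward-looking term washes out, and the rule mostly describes a slow drift from the current rate toward neutral — which connects to the next point.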

The second point aligns with the inflation forecast charts produced by the Bank of England and Bank of Canada, which invariably show inflation reverting to target. (The Fed policymaker forecasts are a bit of a horror show since the system encourages “eccentric” regional Fed heads.) Although the projection always reverting to target reflects deliberate messaging, it also reflects genuine belief — the consensus believes that it will be able to stabilise inflation by current and future policy.

Thing is, the central bank skips the ugly bit at the front of the projection. Inflation spikes? Well, that will be transitory. Although they will react to realised inflation being way out of line with target, they are still patient with deviations, and will concoct some measure of “underlying inflation” that is moving in the correct direction. (Some central banks — like the Bank of Canada — have a “dead zone” built around their target, so they do not officially worry until inflation is outside the 1%-3% band.)

Since policymakers are effectively free to explain away what are allegedly temporary deviations of inflation from target, they have considerable freedom of action. They just want to steer the economy towards an outcome of steady growth and stable, low inflation at the end of the forecast horizon. This gives them flexibility to do things like rapid rate cuts in a recession, even though inflation might still be above target.

This flexibility cannot be modelled in equilibrium-based models, which means that the thousands of papers being produced about “optimal monetary policy” cannot capture the status quo. Equilibrium models are forward-looking: they are based on calculating arbitrage-free forward prices for bonds, goods, and wages (using the household consumption function and a production function to relate quantities). If an equilibrium exists, the forward price of goods exists for all time, so there is no uncertainty about future inflation.

That sounds crazy (and not how these models are represented!), but we can use fixed income pricing to explain further. So long as the prices of benchmark bonds are agreed on, everybody can use a fitting methodology and come up with the same forward prices (within the limits of methodological differences). People are uncertain about future realised prices for bonds, but the forward prices are known.
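A minimal numerical example of that point: given agreed zero coupon bond prices, everybody backs out the same forward rates. The prices below are invented for the illustration.

```python
# Forward rates implied by agreed zero coupon bond prices.

zero_prices = {1: 0.9615, 2: 0.9157, 3: 0.8638}  # maturity (years) -> price

def forward_rate(t1, t2):
    """Annually compounded forward rate between years t1 and t2, implied by
    no-arbitrage: holding the t2 bond must match buying the t1 bond and
    rolling at the forward rate."""
    ratio = zero_prices[t1] / zero_prices[t2]
    return ratio ** (1.0 / (t2 - t1)) - 1.0

print(f"1-year rate, 1 year forward: {forward_rate(1, 2):.2%}")   # ~5.00%
print(f"1-year rate, 2 years forward: {forward_rate(2, 3):.2%}")  # ~6.01%
```

People can disagree about where bond prices will actually trade next year, but the forward rates implied by today’s prices are pinned down.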

In an equilibrium model, the central bank has no uncertainty about the future path of inflation and is following some kind of optimising rule, so there is no wiggle room in the path of interest rates. It will react to every wiggle in the economy created by shocks — and those projected reactions will change every time period as we re-run the forward-looking model based on new data.

Average Inflation Targeting?

Average inflation targeting was a short-lived fad among neoclassical Ph.D.s. It was obviously a bad idea, as it was a variant of level targeting, and a deviation from what I describe as “sensible” policymaking. I hope to get back to level targeting when I discuss nominal GDP targeting.

Concluding Remarks

Central banks are in a murky position. Neoclassical theory is the dominant paradigm, and so if one is going to advance in the ranks, one needs to pay lip service to it. At the same time, the theory is a fragile mess that cannot cope with real world concerns like uncertainty. This means that the grownups at these institutions have to make statements that conform to neoclassical academic orthodoxy while relying on institutional memory and gut instincts. Although inflation targeting was an academic darling, it also quietly gives policymakers the flexibility they need in practice.

1. The “banging” name came from the application of optimal control rules to mechanical systems, which resulted in discontinuous jumps in the third derivative of the position of the system, known as “jerk.” The approximate mechanical models that were used to develop control laws ignored the effect of jerk, but in practice, mechanical systems flex, and excessive jerk tended to result in the unwanted self-deconstruction of machinery.


Email subscription: Go to https://bondeconomics.substack.com/ 

(c) Brian Romanchuk 2023
