The figure shows two versions of the "annual inflation rate" for the United States, using seasonally-adjusted headline CPI.
- The percentage change of the index for a month versus the same month 12 months earlier (the standard measure).
- A first-order low-pass filter applied to the monthly annualised changes, with a weight of 0.08 on the latest month. (The usual way in engineering is to specify a filter via a frequency domain representation or a canonical matrix representation, but unless the reader is familiar with those concepts, that is not helpful.)
(The latter calculation creates a state variable that is updated to equal 0.08 times the latest value of the rate of change, plus 0.92 times the previous value of the state variable. I picked the 0.08 parameter value via the scientific method of trying a few guesses and seeing what looked coolest. A short code sketch of both calculations follows.)
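To make the two calculations concrete, here is a minimal Python sketch. It assumes `cpi` is the monthly seasonally-adjusted index level; the compounding convention for annualising the monthly change and the initialisation of the state variable are my choices, not anything specified above.

```python
import numpy as np

def annual_pct_change(cpi):
    """Standard measure: percent change versus the same month 12 months earlier."""
    cpi = np.asarray(cpi, dtype=float)
    return 100.0 * (cpi[12:] / cpi[:-12] - 1.0)

def smoothed_annualised_change(cpi, alpha=0.08):
    """First-order low-pass filter on month-over-month annualised changes.

    State update: state = alpha * latest change + (1 - alpha) * previous state.
    """
    cpi = np.asarray(cpi, dtype=float)
    # Month-over-month change, annualised by compounding (one possible convention).
    monthly = 100.0 * ((cpi[1:] / cpi[:-1]) ** 12 - 1.0)
    out = np.empty_like(monthly)
    state = monthly[0]  # assumed initialisation: start at the first observation
    for i, x in enumerate(monthly):
        state = alpha * x + (1.0 - alpha) * state
        out[i] = state
    return out
```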
There are two reasons to use the usual annual percentage change.
- It is easy to explain to the public, which is certainly why statistical agencies do not use alternatives. I use it almost exclusively in my writing for this reason.
- It is an easy way to deal with seasonality. (The monthly changes do not share that property, which is why I used the seasonally-adjusted series.)
But if you are attempting to forecast (which I am not in my writings here), your focus should be on being correct, not on ease of explanation. You should also be able to work around seasonality yourself, even if you do not just lazily grab the adjusted series (as I did).
The above figure illustrates the problems we run into with the standard method.
- If we look at the period from mid-2017 to mid-2018 (marked in red on the chart), we see the following. The inflation rate dipped in mid-2017 to a low level (around 1.5%). This was temporary, and the adaptive measure recovered to slightly higher than 2%. However, base effects kicked in for the standard measure, and there was a surge to closer to 3%.
- For the latest data point, the two measures converge. However, the standard measure shows a massive surge from below 1.5% over the last two months, whereas the adaptive version had already recovered to around 1 3/4% earlier, and has been accelerating over the past few months.
Technical Appendix
Unfortunately, I am supposed to be working on a consulting project, and I would need to dig up one of my communications systems textbooks to give a more formal explanation. However, I will wing it here. Please keep in mind that I learned this theory around 1990, and have only used it once since (when I taught a communications systems course).
If we look at the specification of the annual percentage change as an impulse response (the usual way of defining a filter in the time domain), we see that it is non-zero for a finite number of periods (12). That is, any change to the signal disappears after 12 periods. Such a filter is classified as a finite impulse response (FIR) filter. Conversely, the first-order filter has an impulse response that is non-zero (albeit really teeny-tiny) over an infinite domain, putting it into the infinite impulse response (IIR) category.
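To illustrate the FIR/IIR distinction, the sketch below feeds a one-month impulse into both filters, treating the annual change as (roughly) a moving sum of the last 12 monthly changes and ignoring scaling conventions; the 60-month horizon is arbitrary.

```python
import numpy as np

N = 60                    # months to trace out (arbitrary horizon)
impulse = np.zeros(N)
impulse[0] = 1.0          # a one-month shock to the monthly inflation rate

# Annual change viewed as a filter on monthly changes: (roughly) a moving sum
# of the last 12 monthly changes. Its impulse response is 12 ones, then zero.
fir_response = np.array([impulse[max(0, n - 11):n + 1].sum() for n in range(N)])

# First-order filter: state = alpha * x + (1 - alpha) * state.
alpha = 0.08
iir_response = np.empty(N)
state = 0.0
for n in range(N):
    state = alpha * impulse[n] + (1.0 - alpha) * state
    iir_response[n] = state

print(fir_response[:15])  # ones for 12 months, then exactly zero
print(iir_response[:15])  # alpha * (1 - alpha)**n: tiny, but never exactly zero
```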
We can convert the impulse response of a filter to its frequency domain representation via the (Discrete) Fourier Transform, and go back via the Inverse Fourier Transform. The cool thing about that transform in discrete time over finite intervals is that you just need to muck around with the input to turn the Fourier Transform into the Inverse Fourier Transform, and so the Fast Fourier Transform allows you to go back and forth with one efficient algorithm.
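One way to see the "muck around with the input" point in NumPy: the inverse DFT can be recovered from the forward DFT by conjugating the input and the output and dividing by the length. (NumPy ships `ifft` anyway; this is just a demonstration of the identity.)

```python
import numpy as np

x = np.random.default_rng(0).normal(size=16)
X = np.fft.fft(x)

# Inverse DFT via the forward DFT: conjugate the input, apply the forward
# transform, conjugate the result, and divide by the length.
x_back = np.conj(np.fft.fft(np.conj(X))) / len(X)

print(np.allclose(x_back, x))  # True
```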
One complicated theorem used in communications systems (whose name I have completely forgotten) runs something like this: if a function is non-zero over a finite "part" of the time domain, it is non-zero over an infinite "part" of the frequency domain -- and vice versa.
There are two practical implications.
- It is impossible to develop an ideal low-pass filter that works in real time. The impulse response of an ideal low-pass filter is non-zero in negative time, which means that the output responds to the input before it happens -- non-causal in systems jargon. We normally assume that time travel for information is not possible. This fits into the story of why you should never use the Hodrick-Prescott filter.
- You never want to use FIR filters -- like a moving average or annual change -- in analysing systems, since the filter injects high-frequency noise into your analysis. (A finite impulse response implies a frequency response that cannot vanish over any band, so garbage gets in at all frequencies.) In my academic field of control systems, we never used FIR filters for this reason, which explains why I have largely forgotten the theory around them. The first-order filter smoothly attenuates noise, at the cost of introducing a lag (see the sketch below). In signal processing, the "no free lunch" story revolves around the trade-off between lag and noise reduction.
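As a rough sketch of that trade-off, the snippet below compares the gain of a 12-month moving sum (scaled to unit gain at zero frequency, standing in for the annual change) against the first-order filter with the 0.08 weight. The frequency grid and scaling are my choices for illustration.

```python
import numpy as np

alpha = 0.08
freqs = np.linspace(1e-3, 0.5, 500)   # cycles per month, up to the Nyquist frequency
z = np.exp(2j * np.pi * freqs)        # points on the unit circle

# 12-month moving sum (the annual change viewed as a filter on monthly changes),
# divided by 12 so that both filters have unit gain at zero frequency.
H_fir = sum(z ** (-k) for k in range(12)) / 12.0

# First-order low-pass filter: H(z) = alpha / (1 - (1 - alpha) * z**-1).
H_iir = alpha / (1.0 - (1.0 - alpha) * z ** (-1))

# The moving-sum gain drops to zero at multiples of 1/12 cycles/month, but
# bounces back up in between (sidelobes), so high-frequency noise gets through.
# The first-order filter's gain falls off smoothly and monotonically instead.
print(np.round(np.abs(H_fir)[::100], 3))
print(np.round(np.abs(H_iir)[::100], 3))
```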
Hello,
Indeed, exponential moving averages are one way of managing "base effects" or sudden spikes in data. Low-pass filters are becoming common among financial analysts because they are among the tools in data visualization packages. However, repeated base effects are generally meaningful and may need to be considered. Great post Brian.
Thanks. Good to see data visualisation packages catching up to 1960s electrical engineering. 😀