Sunday, April 29, 2018

Can We Falsify Models With Time-Varying Parameters?

In a previous article, I argued that having unknown fixed parameters within many economic models does not create much in the way of uncertainty: just extend the range of historical data available, and we can pin down the parameter values. This article covers a related case: what happens if we allow parameters to vary over time? This possibility makes it impossible to produce reliable forecasts with the model. However, such models have another defect: they can be fitted to practically any data set, making the model non-falsifiable. This can be illustrated by thinking about the simplest model of stock index returns. My argument is that the apparent success of mainstream macro modelling techniques relies on the use of such non-falsifiable models.

Introduction

In previous articles (first in series), I discussed the concept of forecastability: is it possible to forecast the future outputs of a mathematical model, based on a past history of public information? (The definition needs to distinguish between the information available to the forecaster versus all the information in the model.) It should be emphasised that this is a property of mathematical models; whether it extends to the real world depends upon whether we believe a particular model is a good representation of reality.

The normal justification of using mathematical economic models is that we can use them to make forecasts, and so one might assume that the model being forecastable is a useful property. My view is that we actually want the opposite: since real world forecasting exercises tend to fail in particular ways, we want models that are similarly hard to forecast. For example, the Quantity Theory of Money provides a straightforward model that is easily forecast. And in the real world, the Quantity Theory of Money is easily rejected empirically (by those of us who do not want to come up with stories to manipulate the data so that they fit the desired theory).

What are Parameters?

When doing mathematics, we need to be careful about the definitions we use. One can legitimately use the same English word in different ways, so long as one makes clear which definition is in use. My intuition for parameters comes from engineering, where they would normally be fixed constants. (A better term for what I have in mind is coefficient.)

If we start with linear systems theory, we are interested in the evolution of a set of state variables, which we stack into a vector denoted $x$ by tradition. If we confine ourselves to the set of linear, time-invariant, finite dimensional systems, the state dynamic equation is of the form:
$$
x(t+1) = A x(t) + B u(t),
$$
where $x$ is the state, $u$ is a vector of input variables, while $A$ and $B$ are fixed matrices of the appropriate size. (In economics, the models are often of the form $x(t) = Ax(t-1) + Bu(t)$, that is, there is no lag from the input to the state variable.) The elements of $A$ and $B$ are fixed parameters.

For example, a simple compound growth system with a growth rate of 1% is given by:
$$
x(t+1) = 1.01 x(t) + u(t)
$$
(with $x,u$ being single-dimensional time series). In this case, the $A$ matrix just has one element ($a_{1,1}$), which has a fixed value of 1.01.
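As a quick illustration (a minimal Python sketch, not drawn from any particular library), we can simulate this system directly; the coefficient 1.01 is hard-coded as a fixed parameter, while $x$ and $u$ are time series.

```python
# Minimal sketch: simulate x(t+1) = 1.01 x(t) + u(t).
# The coefficient a = 1.01 is a fixed parameter; x and u are time series.

def simulate_growth(x0, u, a=1.01):
    """Return the state history of the one-dimensional growth system."""
    x = [x0]
    for u_t in u:
        x.append(a * x[-1] + u_t)
    return x

if __name__ == "__main__":
    # Constant input of 1.0 for 10 periods, starting from x(0) = 100.
    print(simulate_growth(100.0, [1.0] * 10))
```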

Importantly, in a linear system, the dynamic equation cannot contain expressions in which the state variables multiply each other (or enter in other nonlinear ways). There is a strict division between fixed parameters (the $A$, $B$ matrices above) and state variables, which vary over time.

In economic models, we have complicated systems in which state variables do interact in a nonlinear way. For example, if we return to model SIM (which I discussed in the previous article; see the references therein), the consumption function multiplies income and wealth by the alpha parameters (the propensities to consume out of income and wealth). If those parameters are fixed (as I assumed in the previous article), we end up with a linear time-invariant system. However, one could imagine a specification of the model in which the alpha parameters themselves change over time. They are then elevated to being state variables, and we have a nonlinear system. As a result, the economist's intuition for what constitutes a parameter would be a variable that is not an economic time series; such parameters may be either fixed coefficients or allowed to vary.

(In my Python sfc_models framework, all variables are assumed to be time series; the only fixed parameters would be the ones that are hard-coded as constants.)
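To make that distinction concrete, here is a rough Python sketch (not the actual sfc_models implementation; the 0.6 and 0.4 propensities are purely illustrative numbers) of a SIM-style consumption function with fixed versus time-varying alphas.

```python
# Rough sketch (not the sfc_models implementation) of a SIM-style
# consumption function, contrasting fixed and time-varying propensities.

def consumption_fixed(yd, h_prev, alpha1=0.6, alpha2=0.4):
    """Fixed propensities: the equation is linear in (yd, h_prev)."""
    return alpha1 * yd + alpha2 * h_prev

def consumption_time_varying(yd, h_prev, alpha1_t, alpha2_t):
    """The alphas are now time series (state variables), so the equation
    multiplies state variables together: the system becomes nonlinear."""
    return alpha1_t * yd + alpha2_t * h_prev
```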

As I noted in the previous article, forecasting the output of a linear system with fixed parameters is straightforward (assuming that key variables are directly measured, which is the usual condition). Things are very different if we allow for time-varying parameters.

Simplest Stock Market Model

If we denote the total return index for a stock market as $x$, the simplest possible stock market model is:
$$
x(t+1) = a(t)x(t),
$$
(where $a(t)$ can be thought of as $1 + r(t)$, with $r$ the one-period return).

So long as the total return index stays away from zero, this model can reproduce any observed time history for returns. (If the total return index goes to zero, it stays there, according to this model.)
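A short Python sketch (purely illustrative, with a made-up price history) shows why: given any observed index that stays away from zero, we can back out $a(t) = x(t+1)/x(t)$ period by period, and the model then reproduces the data exactly.

```python
# Sketch: given any observed index history that stays away from zero,
# the model x(t+1) = a(t) x(t) fits it exactly by setting a(t) = x(t+1)/x(t).

def fit_a(index_history):
    """Back out the time-varying parameter a(t) from an observed index."""
    return [index_history[t + 1] / index_history[t]
            for t in range(len(index_history) - 1)]

def reproduce(x0, a_series):
    """Rebuild the index from the fitted parameters."""
    x = [x0]
    for a_t in a_series:
        x.append(a_t * x[-1])
    return x

if __name__ == "__main__":
    observed = [100.0, 103.0, 101.5, 110.2, 95.0]  # arbitrary made-up history
    a_fit = fit_a(observed)
    print(reproduce(observed[0], a_fit))  # matches 'observed' (up to rounding)
```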

This is not a useful model for reality, but the question is: why?

The first thing to examine is forecastability. If we assume that the time series $a$ is not public information, the model is non-forecastable. This fits with the empirical results regarding market efficiency, the simplest version of which is that it is hard to forecast stock market returns.

The next thing to think about is statistical tests. As noted, so long as our data set does not have the stock index going to zero and then bouncing back, this model can reproduce any historical time series. (One could imagine a stock index disappearing because of a war or similar calamity, with a new set of equities arising thereafter; from the perspective of the original owners, the new index would be a separate entity.) In other words, the model will always pass statistical tests of validity. This is one reason why I do not see blind reliance on statistical tests as best practice for assessing the usefulness of models.

Instead, the reason why this is a bad model seems to be that it provides no useful information (other than the assumption that stock markets will not go to zero and then rise). It is too flexible: it can fit any observed data set.

In communication systems theory, the notion of information is quantifiable: it is related to the stream of bytes sent down a channel. Unfortunately, in this context, the statement that a model does not convey useful information does not appear to be directly quantifiable. However, we might be able to capture the idea by comparing the model to other models. Is it possible to find a more restrictive class of models that can also be fitted to the historical data? In this case, we could consider models with stochastic volatility, which can also be fitted to historical data, but which cannot reproduce the full range of behaviour allowed by placing no restrictions on period returns.
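To make the contrast concrete, here is another illustrative Python sketch. Rather than a full stochastic volatility model, I use an even more restrictive class (a constant-return model) as the stand-in for the smaller model set; the point is only that the restricted class can be rejected by the data, while the unrestricted time-varying model always passes.

```python
# Sketch of the falsifiability contrast: a constant-return model (one fixed
# parameter) stands in for a restricted model class, versus the unrestricted
# time-varying model, which fits any history that stays away from zero.

def constant_return_fits(index_history, tolerance=1e-6):
    """A constant-return model is rejected whenever observed returns differ."""
    returns = [index_history[t + 1] / index_history[t]
               for t in range(len(index_history) - 1)]
    return max(returns) - min(returns) < tolerance

def time_varying_fits(index_history):
    """The unrestricted model fits any history that stays away from zero."""
    return all(x != 0.0 for x in index_history[:-1])

if __name__ == "__main__":
    observed = [100.0, 103.0, 101.5, 110.2, 95.0]  # arbitrary made-up history
    print(constant_return_fits(observed))  # False: the restricted model is rejected
    print(time_varying_fits(observed))     # True: the flexible model always "passes"
```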

Alternatively, one might find that the model is so flexible that it can be fitted to the output of a wide variety of models whose theoretical content contradicts that of the model in question. In that case, one could argue that the model cannot be falsified, as it can be fitted to almost any plausible data set.

Mainstream Macro

The problem with mainstream macroeconomic theory is that it is too heavily reliant upon time-varying parameters, such as the natural rate of unemployment or the natural interest rate (or whatever fad terms have replaced those names for the concepts). The models appear to offer some information about future developments in the economy, but not a lot, since those key parameters can neither be directly measured nor forecast.

The way to test this is to see whether mainstream economic techniques could be fitted to the output of other classes of models, particularly models that rely on different underlying assumptions (e.g., stock-flow consistent models). Given the flexibility that the natural interest rate provides in adapting to real-world data, I expect that such a fit would be achieved.

In other words, the statistical tests used to "prove" that the policy rate drives activity in the real world could be applied to models in which we know the policy rate has no such effect. This is theoretically awkward, to say the least.

Concluding Remarks

Allowing for drift in key parameters is one way to make model behaviour more interesting. The cost is that the model may end up offering no theoretical content, and being unable to be rejected empirically.

  (c) Brian Romanchuk 2018
