Dr. Jason Smith (publisher of the Information Transfer Economics website) has recently released A Random Physicist Takes On Economics. It is a relatively brief, non-technical discussion of empirical approaches to economics, and it is highly critical of mainstream economic theory. For many of my readers (many of whom also read Jason Smith's work), the criticisms of economics will be an interesting read. They are certainly novel, as they come from an outsider's perspective. However, I have some reservations about his approach (which is only briefly summarised in this book).
Book Description
The book is relatively brief, with the paperback edition having 131 pages. (I have the Kindle ebook version, and so I cannot refer to page numbers.) The text is not too technical, being largely free of equations (at least, none that I recall seeing). The text has an introductory section explaining some of the hairier theoretical topics he discusses.
Smith's Critiques
In a section labelled "The Critique" (the sections are not numbered), Jason Smith summarises his critiques of economics as follows. (The statements below are a direct transcription from the text; I just changed his use of "I" to "Smith/He" in order to avoid confusion with this reviewer.)
- Smith questions the use of overly formal math in economics.
- He questions the role of human agency in economics (refers to the random agent concept, discussed below).
- He questions the role of expectations in economics.
- He questions the understanding of so-called sticky prices in economics.
- He questions how agents are combined "into an economy." (More specifically, how Dynamic Stochastic General Equilibrium -- DSGE -- theory combines agents into an economic model.)
- He questions the interpretation of the price mechanism (as an aggregation of information).
- He questions the place economics has relative to the social sciences.
These criticisms overlap those raised by heterodox economists over recent decades, and so this aspect of the book will find a receptive audience. His analytical approach may be more familiar to those in the physical and applied sciences, and so his criticisms may make more sense to such readers than those of the heterodox economists.
Random Agents
The strongest part of the book is the description of how people acting at random can replicate whatever insights are generated by utility functions when we look at consumption behaviour. The idea is that individuals only have a certain amount of money to make purchases (a budget constraint), and if their choices are taken at random from the set of possible purchases, an increase in one item's price reduces the number of times it is purchased. This replicates the logic of a demand curve generated by a utility function.
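To make the logic concrete, here is a minimal simulation sketch (my own construction, not code from the book): each agent picks a bundle of two goods uniformly at random from the set of bundles it can afford, and the average quantity purchased falls as the price rises -- a downward-sloping demand curve with no utility function in sight.

```python
import random

def average_quantity(price, budget=100.0, other_price=1.0, n_agents=50_000):
    """Average quantity of good 1 bought when each agent picks a bundle
    uniformly at random from its budget set:
        price * q1 + other_price * q2 <= budget
    """
    total = 0.0
    for _ in range(n_agents):
        # Rejection sampling: draw a bundle from the bounding box, and
        # keep it only if the agent can afford it.
        while True:
            q1 = random.uniform(0.0, budget / price)
            q2 = random.uniform(0.0, budget / other_price)
            if price * q1 + other_price * q2 <= budget:
                break
        total += q1
    return total / n_agents

# Quantity demanded falls as the price rises.
for p in (1.0, 2.0, 4.0, 8.0):
    print(f"price {p:4.1f} -> average quantity {average_quantity(p):6.2f}")
```

(Under these assumptions, the average quantity works out to roughly one third of budget/price, so doubling the price roughly halves demand.)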
For readers who are interested in microeconomics, this should be an interesting discussion. Unfortunately, I am not interested in microeconomics, as I see the leap from micro behaviour to macro behaviour as difficult to make: real-world economic sectors comprise a mix of very different agents, each with differing behaviour. Even if we have a credible description of each class of agent, we can obtain relatively arbitrary macro behaviour by changing the mix of agents.
Pricing
His discussion of pricing is curious. He notes a recent study arguing that mainstream models of "sticky prices" are invalid. However, he essentially ignores the decades of empirical work by Keynesian/post-Keynesian economists on how prices are set. For someone who emphasises the importance of the empirical grounding of theory, that is a curious omission.
I have limited knowledge of that post-Keynesian theory, but it emphasises the distinction between flex prices (such as those set in financial markets) and administered prices. For macro analysis, we are often most interested in administered prices, although oil prices are the key "flex price" exception (and possibly bond yields). For example, it is very hard to believe that the highly influential wage settlements reached by IG Metall are the result of union leaders and employers rolling dice behind closed doors.
Economics is not Physics
Although it is fun to be condescending towards mainstream economists, some of Smith's complaints about how practices in economics diverge from those in physics are questionable.
He wastes the reader's time discussing how he was surprised that economic models include a mechanism by which expected future outcomes influence present activity. In physics, the future does not normally influence the present (I only keep up with trends in theoretical physics by watching "The Big Bang Theory," so that statement may be incorrect...). That is one of the reasons why most people do not scour physics textbooks looking for models to apply to finance and economics.
Financial derivatives research is dominated by pure and applied mathematicians, physicists, and engineers (from the theoretical wings of engineering). All of the competent ones are fully aware that real-world outcomes can differ from the probability distributions embedded in their models, and yet all the research I am aware of uses the assumption that the probability distribution of realised outcomes matches the probability distribution used in determining the fair value of financial assets. (This is essentially "rational expectations.") One could reasonably argue that we do not know the "true" probability distributions describing future asset prices, and so we should just price derivatives randomly. However, doing so would lead to a very short and unhappy career as a market maker. The "rational expectations" hypothesis in finance is needed to create models that offer us a chance to operate in the world of financial derivatives -- in other words, it is empirically grounded. Since the assumption works in finance, there is no a priori reason to reject its use in economics.
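To illustrate how deeply that assumption is baked in, here is a minimal Monte Carlo pricing sketch (standard textbook material, not anything specific to the book, and the parameters are invented): the fair value of a European call option is the discounted expected payoff under an assumed lognormal distribution, and the market maker's profit and loss is only well behaved insofar as realised outcomes are actually drawn from something close to that distribution.

```python
import math
import random

def mc_call_price(spot, strike, rate, vol, maturity, n_paths=200_000):
    """Monte Carlo fair value of a European call option, assuming the
    terminal asset price is lognormal. The same distribution is used to
    price the option and (implicitly) to describe realised outcomes --
    the "rational expectations" assumption discussed in the text."""
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = random.gauss(0.0, 1.0)
        terminal = spot * math.exp(
            (rate - 0.5 * vol ** 2) * maturity
            + vol * math.sqrt(maturity) * z
        )
        payoff_sum += max(terminal - strike, 0.0)
    # Discount the average payoff back to today.
    return math.exp(-rate * maturity) * payoff_sum / n_paths

# Roughly 8.9 for these hypothetical parameters.
print(mc_call_price(spot=100.0, strike=100.0, rate=0.02, vol=0.2, maturity=1.0))
```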
He also raises the issue of scope conditions: under what conditions is a model valid? They are there, just not presented as they are in physics -- for reasons I discuss in a later section. For example, economists quite often work with closed economy models: interactions with foreign countries are assumed to be insignificant. This obviously limits the scope of the model.
There are some minor issues. He decries the lack of error bars in economics texts. I dug up some of my old undergraduate electrical engineering texts (including my quantum physics text), and did not spot a single error bar in any of them. Are we to believe that the standard electrical engineering texts are not empirically grounded? In economics, prediction error analysis is done in econometrics texts, and statistical tests are reported in macro research (other than a handful of extremely theoretical papers).
Empirical Weakness of DSGE Economics
I am not a fan of DSGE economics, but I have a complaint that appears to be the exact opposite of Smith's: in my view, there is no practical way to falsify DSGE macro. This lack of falsifiability is why it remains extremely robust to criticism, both from outsiders and from internal reformers.
Meanwhile, Jason Smith takes the view that DSGE macro has weak predictive power. He cites studies that show DSGE model predictions performing worse than simple econometric techniques -- or, of course, his information transfer economics techniques. I discuss information transfer later in this review; in this section, I just discuss the underperformance versus simple models.
Since I view these empirical comparison studies as misleading (for reasons discussed here), I will not delve into them. I will instead offer a simplified example that roughly captures what those studies found.
In the post-1990 environment, one of the best "standard" macro models for predicting inflation in many developed countries has been to extrapolate previous trends and assume that (core) consumer price inflation will remain near 2% (0% in Japan). (Overall CPI inflation oscillates more due to oil prices, but those deviations have been largely transient.) There are some notable exceptions, such as Greece, where euro area policy makers managed the feat of recreating depression conditions in a welfare state.
In other words, DSGE models have been underperforming a function that just returns 0.02. Although this is funny, almost no economist or even financial market participant cares about this. Why?
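For concreteness, that benchmark "model" really is nothing more than the following (a deliberately trivial sketch; the function name is mine):

```python
def core_inflation_forecast(country: str = "US") -> float:
    """Post-1990 naive benchmark: core CPI inflation sticks near 2%
    in most developed countries (near 0% in Japan)."""
    return 0.0 if country == "Japan" else 0.02
```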
We are back to Smith's scope conditions. The scope condition for the "inflation will be 2%" model is the current environment -- one characterised by inflation sticking near 2%. You do not need a doctorate in theoretical physics to see that this is a fairly silly situation. Furthermore, such extrapolations of past behaviour would have failed in the 1970s or the early 1990s (periods of regime change).
The point of economic models is to offer predictions about what will happen in response to changes in the policy regime. For example, what would be the effect on inflation of a large universal guaranteed income scheme? Until the scheme is rolled out nationwide, we have to use theory to guess what the macro effects would be. Since simple models (such as assuming that inflation will stick at 2%) cannot offer any information about such policy shifts, they are not viewed as legitimate contenders, and so their forecasting success over short time intervals is ignored.
Of course, financial market participants do want to make short-term forecasts of economic variables. I used to follow the inflation-linked market, and there is a small industry of analysts who spend their days making short-term inflation forecasts. Needless to say, they did not use DSGE models. Instead, they used partial models of the major components of the CPI, and stuck them together to get an overall forecast. Although these models were accurate, they were entirely at the mercy of policy shifts (a reality of which most of the analysts were aware).
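As a toy illustration of that approach (the component weights and forecast numbers below are invented for illustration, not real data), the aggregate forecast is just a weighted sum of the component-level forecasts:

```python
# Hypothetical CPI component weights and 12-month component forecasts.
weights = {"shelter": 0.33, "food": 0.14, "energy": 0.07, "other": 0.46}
forecasts = {"shelter": 0.028, "food": 0.021, "energy": 0.050, "other": 0.017}

# Stick the partial models together: weight each component's forecast
# by its share of the index.
headline = sum(weights[k] * forecasts[k] for k in weights)
print(f"Headline CPI forecast: {headline:.2%}")
```

A shift in (say) energy policy invalidates the energy component model without warning, which is why such forecasts are at the mercy of policy shifts.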
Information Transfer Economics
The book only offers a limited introduction to what Jason Smith calls information transfer economics. He explains the decision in "Why No Empirical Work?" To summarise, he prefers to wait for validation of his ideas in peer-reviewed journals.
If information transfer economics is even half as successful as Smith claims, it would have incredible commercial value. If I were still at my old job, I would have had an analyst taking apart his work as soon as I saw some of his model outputs. He has dozens of models that appear to offer highly accurate forecasts for important economic and financial variables. Looking for validation in peer-reviewed journals is curious: if the capitalist system is an efficient system for processing information, the commercial success of the techniques should have appeared within months of their release into the public domain.
I would note that I have taught communications systems theory (on which some of the information transfer theory is based), have a background in theoretical applied mathematics, and have about 15 years of experience as a fixed income analyst. In other words, I should be able to test the validity of his empirical claims. However, when I attempted to do so, I hit a rather large roadblock. The entire information equilibrium theory is just a back story for the algorithm he uses to generate forecasts; and like the back story in old-school video games like Pac-Man, it is expendable. However, when I read one of his initial papers, the actual algorithm description was just a reference to source code in a computer language I had never worked with, nor had access to. From my perspective, the source code was effectively undocumented. I was forced to guess how his algorithm worked. On the basis of that guess, I saw little need to pursue analysing the algorithm.
I want to underline that this was purely a guess on my part; I could have made a grievous error for any number of reasons. (For one, long-time readers will note my dour skepticism about mathematical forecasting models.) Although my website certainly has limited traffic, I have a largely self-selected, numerate audience. Some of them are likely to be in a better position than I am to judge Jason Smith's claims about his algorithm.
Concluding Remarks
This book is interesting, adding yet more complaints against mainstream economics. However, we need to understand why DSGE macro is popular before expecting that it might disappear. Information transfer economics offers a challenge to accepted thinking about economics, but the operation of its algorithms needs to be translated into a description that can be analysed by those accustomed to traditional econometrics.
(c) Brian Romanchuk 2017
Comments
Sure, it would be great to have more empiricism in economics. One word of caution, however: economic phenomena are not equivalent to physical phenomena. Orbital mechanics existed before humans entered the scene, and its basic operations are not subject to human manipulation. Economic systems are human creations, not timeless artifacts of nature. The very ground rules of human-created systems are subject to change, evolution, and (dare I say) manipulation.
So when you study economic "data," please realize any empirical generalizations derived are strictly context dependent.
That's one of the problems we are dealing with. Trying to explain various structural shifts with mathematical models is always going to be a challenge.
Thank you for a very informative review. I think you're bang on the money when you say 'The point of economic models is to offer predictions about what will happen in response to changes in the policy regime.' This is the bit that gets everyone flummoxed (Sargent and Lucas in particular), with the result that the model becomes more a judicial system for determining the just price (as Aquinas would have said) than a tool for analysing counterfactuals. To my mind, that's the root cause of the popularity of models that exhibit mixed empirical track records.