It may be that their application to the path of the U.S. policy rate would lead to less disastrous results, but this experience should lead us to treat such policy rules with a certain amount of caution.
A History Of Control Systems Engineering In 4 Paragraphs
Control systems engineering is a branch of applied mathematics in which engineers design the mathematical rules used to control a target system. The target system can be mechanical (e.g., a plane), or a process such as a chemical plant (where flow rates and temperatures must be adjusted to keep the chemical reactions running at a steady pace). Modern controllers are generally implemented as mathematical algorithms running on specialised computers; the role of the control engineer is to implement the algorithm, not the hardware.
Control systems engineering grew out of the (mainly American) war effort during World War II, where the techniques were mainly used in various aeronautics applications. Since the initial work was secret, the most basic results in the field appeared rapidly once the military declassified the techniques*. The key results involved stabilising feedback rules implemented with analog circuitry. These feedback rules had the defect that they had to be applied to linearised system models, looking at one input variable versus one output variable at a time. This meant that the theory could not cleanly treat a complex system with multiple variables as an integrated whole.
The advent of digital computers in the 1960s led to the possibility of developing optimal control rules for more complex systems. The impetus for optimal control came from yet another U.S. government effort: the need for path planning to get spacecraft to the Moon. The lure of optimisation infected economics at this time as well, leading to some overlap between the disciplines. However, when optimal control rules were applied to engineering systems in practice, they failed (for reasons discussed below). The only remnants of the theory within engineering are the Kalman filter and some algorithmic work, particularly for path planning.
Starting in the late 1980s, robust control started to make inroads in control systems. There are similarities between robust and optimal control – the controller is also found as the result of an optimisation problem. However, the philosophy of controller design is completely different, as is the philosophy towards the mathematical modelling of real-world systems. There have been some attempts to apply robust control techniques within economics, but the implications do not appear to be widely understood.
Why Optimal Control Theory Was Unworkable
The working of an optimal control rule can be understood as the following procedure (technically, the rule is the result of an optimisation problem, but the structure of the solution can be interpreted this way):
- Use an optimal rule to estimate the current state of the system. Note that the “state” of the system includes variables that we cannot directly observe. For example, we may be able to measure the position of an object, but we need to use the time series of positions to estimate its velocity. (The standard rule for this estimation is the Kalman filter, which is still heavily used.)
- Find the optimal path to get from the initial state to the desired target state. (This component is an optimisation problem, and the mathematical results are presumably still being used in other contexts.)
- Determine the inputs to the target system that cause the state to follow the optimal trajectory. Roughly speaking, the optimal rule first calculates the inverse of the mathematical model of the target system, and this mathematical inverse is used to cancel out the target system's dynamics. The rule then forces the trajectory to follow the optimal path, regardless of the characteristics of the system you are trying to control. (This is the part of the theory that had to be abandoned.)
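To make the estimation step concrete, here is a minimal scalar sketch of a Kalman filter in Python. Everything here is invented for illustration – the random-walk state model, the noise variances `q` and `r`, and the simulated data – and a real filter tracking position and velocity would use the matrix form, but the predict/update structure is the same:

```python
import random

def kalman_1d(measurements, q=1e-3, r=0.5):
    """Minimal scalar Kalman filter: estimate a (nearly constant)
    hidden level from noisy readings.  q is the assumed process-noise
    variance, r the assumed measurement-noise variance."""
    x, p = measurements[0], 1.0   # initial estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        # Predict: the state is modelled as a random walk, so the
        # estimate carries over and its uncertainty grows by q.
        p += q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x += k * (z - x)
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

random.seed(0)
noisy = [10.0 + random.gauss(0.0, 0.7) for _ in range(200)]  # invented data
est = kalman_1d(noisy)
print(est[-1])  # the estimate settles near the true level of 10.0
```

The gain `k` automatically balances how much to trust the model versus the latest measurement, which is why the same recursion handles both the noisy start-up and the converged steady state.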
This meant that control rules were extremely aggressive, and relied heavily on the mathematical model proposed for the target system. However, what we find is that if we cancel out the system's dynamics imperfectly, the resulting closed loop behaviour can be wildly unstable**. In other words, there is a spectacular amount of model risk associated with these types of control rules. The story that I heard as a grad student is that when these optimal control rules were first applied to an aircraft, the plane almost immediately went out of control and killed the test pilot.
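The model risk shows up even in a one-dimensional toy example (my own illustration, with made-up numbers, not from any textbook). A deadbeat model-inverting rule is applied to a first-order plant: with a perfect model, the output hits the target in one step, but if the true plant gain is 2.5 times the modelled gain, the imperfect cancellation makes the loop diverge in a growing oscillation:

```python
def simulate(b_true, b_model, a=0.9, r=1.0, steps=15):
    """Deadbeat (model-inverting) control of the plant
    y[k+1] = a*y[k] + b_true*u[k].  The controller inverts the *model*
    gain b_model so as to hit the target r in a single step:
    u[k] = (r - a*y[k]) / b_model."""
    y, path = 0.0, []
    for _ in range(steps):
        u = (r - a * y) / b_model   # invert the model: very aggressive
        y = a * y + b_true * u      # the true plant has its own gain
        path.append(y)
    return path

perfect = simulate(b_true=1.0, b_model=1.0)     # model matches reality
mismatched = simulate(b_true=2.5, b_model=1.0)  # gain underestimated 2.5x

print(perfect[-1])     # pinned at the target of 1.0
print(mismatched[-1])  # the error grows geometrically: wildly unstable
```

In the mismatched case the tracking error is multiplied by a factor of magnitude greater than one at every step, so the harder the rule pushes, the worse the blow-up.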
The robust control philosophy was to accept model error. (Robust control is also known as H-infinity control theory.) You start off with the best mathematical model you can come up with for the system. You then find a controller that stabilises that target system, as well as a family of mathematical systems that are “close” to that model. You solve an optimisation problem when designing the controller, but you are trying to find the controller that gives adequate performance while stabilising the largest possible family of mathematical models near your assumed model. You do not care that you do not have a perfect representation of the system you are controlling; you just need the model to be somewhat “close” to the true dynamics.
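The trade-off can be sketched with another toy example (again my own, with made-up numbers): for a first-order plant with an uncertain gain, turning down the feedback gain enlarges the family of true plants that the loop stabilises, at the cost of slower nominal performance. Real robust control solves a much harder optimisation over frequency responses, but the performance-versus-robustness tension is the same:

```python
def stabilized_gain_range(k, a=0.9):
    """For the plant y[k+1] = a*y + b*u under feedback u = k*(r - y),
    the closed-loop pole is (a - b*k).  Return the interval of true
    plant gains b for which |a - b*k| < 1, i.e. the loop is stable."""
    # Require -1 < a - b*k < 1, so b in ((a-1)/k, (a+1)/k) for k > 0.
    return ((a - 1.0) / k, (a + 1.0) / k)

def nominal_decay_factor(k, a=0.9, b_nominal=1.0):
    """Per-step shrinkage of the tracking error on the nominal model."""
    return abs(a - b_nominal * k)

for k in (0.9, 0.6, 0.3):
    lo, hi = stabilized_gain_range(k)
    print(f"k={k}: stable for b in ({lo:.2f}, {hi:.2f}), "
          f"nominal error decay {nominal_decay_factor(k):.2f} per step")
```

The aggressive gain (k=0.9) kills the nominal error in one step but only tolerates true gains up to about 2.1, while the cautious gain (k=0.3) tolerates gains up to about 6.3 at the price of much slower convergence.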
Application To Policy Rates
Using an optimal control rule to determine the policy rate could be fairly harmless in practice. It could just be used as a means to determine a general trajectory (a path-planning problem), while the tuning of the rate in response to economic dynamics could be done in a more sensible fashion. If the model of the economy used is well behaved mathematically, the optimal rule will not be very aggressive, and so there is no reason for the system to blow up. And if you happen to believe that interest rates are not potent in steering the economy and that economic stabilisation is largely achieved by the automatic stabilisers, the debate is completely academic anyway.
My gut feeling is that optimal control rules are just being floated now as a trial balloon to provide some justification for forward guidance policy.
Finally, with regards to the insights of robust control, I think that if they were widely understood, the number of arguments over the role of mathematics in economics would drop dramatically. We need to work with some form of model if we want to get any form of quantitative results, and we have to accept that there will be at least some model error.
Footnotes
* I heard the following anecdote from someone I view as a reliable source. After World War II, when the first lectures on control theory were given at MIT, the students were not allowed to take notes. An Air Force officer stood by the professor, and erased each equation off the board immediately after the prof wrote it down. This allowed the techniques to leak out selectively to industry, without them falling into the "wrong hands".
** For those of you who know what a transfer function is, you end up with a closed loop system where a zero of the numerator almost cancels out a zero of the denominator (a “pole”, in control engineering lingo). Some mathematical result whose name I’ve forgotten shows that such a configuration creates a strong tendency for the closed loop system to oscillate at a particular frequency. (My specialty was nonlinear systems, so I did not worry too much about frequency domain analysis like that.) This means that any noise that hits the system will cause it to vibrate at that frequency, which is generally a bad thing for mechanical devices.
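The ringing can be demonstrated numerically. The sketch below is a generic second-order (AR(2)) simulation, not tied to any particular plant: it compares a closed loop whose pole pair sits near the unit circle (the situation left behind by an imperfect cancellation) with a well-damped loop at the same resonant frequency. The same white noise is amplified many times over in the lightly damped case:

```python
import math
import random

def noisy_loop(pole_radius, pole_angle, noise_std=0.1, steps=2000, seed=1):
    """Simulate a closed loop whose poles sit at pole_radius*exp(+/-i*angle),
    driven by white noise: y[k] = a1*y[k-1] + a2*y[k-2] + e[k]."""
    a1 = 2.0 * pole_radius * math.cos(pole_angle)
    a2 = -pole_radius ** 2
    rng = random.Random(seed)
    y1 = y2 = 0.0
    out = []
    for _ in range(steps):
        y = a1 * y1 + a2 * y2 + rng.gauss(0.0, noise_std)
        y2, y1 = y1, y
        out.append(y)
    return out

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Near-cancellation leaves a pole pair close to the unit circle (radius
# 0.98); the well-damped loop has the same frequency but radius 0.5.
ringing = noisy_loop(0.98, 0.4)
damped = noisy_loop(0.50, 0.4)
print(variance(ringing) / variance(damped))  # amplification is large
```

The lightly damped loop vibrates at roughly its resonant period (about 2*pi/0.4, or 16 samples here), which is the sustained oscillation described in the footnote.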
(c) Brian Romanchuk 2013