Tuesday, September 29, 2015

Rational Expectations and the Microfoundations of Autonomy


One aspect of modern economies that deserves more attention is the variety of beliefs that inform the decision making of households, firms, and governments.  Every asset market includes bulls who believe prices will rise and bears who believe they will fall.  Central banks and national governments draw on diverse macroeconomic models in their policy making, which is evident in the conflicting predictions about the effects of quantitative easing.  And Nobel Prizes in economics have been awarded to economists advancing sharply divergent theories, the most recent example being the 2013 Prize shared by Eugene Fama and Robert Shiller.

On its face this multiplicity of views seems incompatible with the hypothesis of rational expectations.  If all agents have access to the same information and the same (correct) model of the economy, then, instead of a multiplicity of expectations, we would see a uniformity of expectations.  Some of this real-world diversity of expectations can, of course, be explained by “information partitions” in which market participants have access to different pieces of the “information pie.”  The force of this explanation is diminished, however, by the broad dissemination of government statistics and the widespread use of information technology to organize and analyze this data.  Moreover, economists with access to the same data and information processing capabilities nevertheless produce conflicting explanations of historical trends and events, divergent forecasts of future trends, and opposing predictions about the effects of various monetary and fiscal policies.

This multiplicity of outlooks, whether in the form of theories, models, beliefs, or expectations, calls into question the usefulness of the postulate that economies are always in equilibrium.  Even if a rational-expectations, representative-agent model could be calibrated to track some time series of economic data, it could hardly explain the obvious presence of agents with diverse and often conflicting views.  Furthermore, if market participants are acting on the basis of inconsistent expectations, then at least some of these expectations will be disappointed and some plans will have to be revised.  To insist on characterizing an economy in which the participants are planning to buy and sell at different prices as being in equilibrium is simply to insist on a stipulated definition, come what may.

In their anti-Keynesian manifesto, Lucas and Sargent (1979) criticize the lack of microfoundations in the Keynesian models that were developed in the 1950s, 60s, and early 70s.  Many reasons have been offered in defense of microfoundations as a necessary feature of a good macro model, including an implicit appeal to the familiar notion of autonomous agents who form and act upon their own plans and forecasts.  Thus, Lucas and Sargent chastise “economists who ten years ago championed Keynesian fiscal policy as an alternative to inefficient direct controls [now] increasingly favor the latter as ‘supplements’ to Keynesian policy” (emphasis in original).  But it’s the gloss they add to their argument that’s most revealing.  Mocking these old-fashioned Keynesians, they write, “The idea seems to be that if people refuse to obey the equations we have fit to their past behavior, we can pass laws to make them do so” (emphasis in original).  Free and independent agents keep changing their minds in response to new information, so their “past behavior” is, at best, an imperfect guide to their future behavior.


A closer look reveals that the New Classical demand for microfoundations straddles two incompatible ideas.  On the one hand, Lucas and Sargent insist on microfoundations because they believe economic outcomes depend on the rational choices of individuals rather than on the behavior of aggregates.  What is “the Lucas critique” if not a vigorous statement of this point?  On the other hand, genuinely autonomous agents, who choose their own objectives and the means of achieving them, will often hold different views about the future.  Indeed, this is a reasonably good description of what happens in societies when the unquestioned guideposts of custom and tradition give way to some measure of individualism and self-determination.  Thus, while the demand for microfoundations appeals to the idea of independent agents constructing their own action-guiding scenarios, the variety of beliefs that emerge from, and guide the actions of, these agents is suppressed by the premise of rational expectations.  If we really want macroeconomic models that are consistent with free and independent agency, then we need a new “microfoundations of autonomy.”  I’ll return to this topic in a future post.

11 comments:

  1. Professor,
    I enjoy your analyses of current issues and regret that I didn't do more to give you opportunities to shape City of Seattle policies. I also appreciate being included on the distribution list for your treatises on current economic issues.
    Your student,
    Jim Ritch.

  2. This comment has been removed by a blog administrator.

  3. Greg, interesting post!

    I saw your comment on Glasner's site, which led me here. You refer to agents having different views in your post. I'm a frequent reader of Jason Smith's blog (he's one of the people David quoted in his post). He takes that view, but "to the extreme," you might say: he postulates that human behavior may be so complex as to be essentially random from an aggregate view, at least when markets are functioning well. It's when agents become coordinated (like during a panic) that things fall apart.
    http://informationtransfereconomics.blogspot.com/2014/08/against-human-centric-macroeconomics.html

    In what he's doing, he's trying to find approximations for the good times (= random times), with very simple models based on the idea of information transfer. Typically his long-term trend model for an aggregate (such as the price level or NGDP) will be a function of one or two things, and have one or two parameters which must be selected based on the data.
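    To make that concrete, here's a minimal sketch of what a two-parameter trend fit of that flavor looks like. The numbers are made up and the log-linear form is a generic stand-in of my own, not Jason's actual functional forms (see his posts for those):

```python
import numpy as np

# Made-up levels for illustration only -- not real data, and the
# log-linear form below is a generic stand-in, not Jason's model.
ngdp = np.array([14.7, 15.0, 15.5, 16.2, 16.8, 17.4])     # "NGDP"
p    = np.array([96.0, 97.8, 99.9, 102.4, 104.1, 106.0])  # "price level"

# Fit log P = k * log(NGDP) + c: two parameters selected from the data.
k, c = np.polyfit(np.log(ngdp), np.log(p), 1)
trend = np.exp(c) * ngdp ** k
print(f"k = {k:.3f}; fitted trend: {np.round(trend, 1)}")
```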

    Interestingly enough, he can find an "emergent" representative agent from this. But this is more of an afterthought, and not central to his framework.
    http://informationtransfereconomics.blogspot.com/2015/09/the-emergent-representative-agent-1.html


    In any case, he takes the view that macro data is largely uninformative, justifying at most ~10 parameters fit to the data. Past that you're generally just overfitting noise, and you lose any predictive capability. For example, the NY Fed DSGE model has over 40 parameters.

    In other words, the nature of macro data dictates that at this stage in the game, models need to be simple.
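    As a toy illustration of that parameter-count point (entirely my own synthetic data, standing in for neither the NY Fed model nor Jason's), compare a 2-parameter fit and a 40-parameter fit scored on held-out points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a noisy linear trend. Even-indexed points are used
# for fitting; odd-indexed points are held out to score accuracy.
x = np.linspace(0.0, 1.0, 100)
y = 2.0 * x + rng.normal(0.0, 0.2, size=x.size)
x_tr, y_tr = x[::2], y[::2]
x_te, y_te = x[1::2], y[1::2]

for degree in (1, 39):  # 2 fitted parameters vs. 40
    fit = np.polynomial.Chebyshev.fit(x_tr, y_tr, degree)
    rmse = np.sqrt(np.mean((y_te - fit(x_te)) ** 2))
    print(f"{degree + 1:>2} parameters: held-out RMSE = {rmse:.3f}")

# The 40-parameter fit hugs the training points more closely, but it
# typically scores worse on the held-out points: it has fit the noise.
```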

    Replies
    1. I said "information transfer" above, which is true, but really the "good times" (i.e. well functioning market times) are a special case: when there's "information equilibrium." If you consider that demand is an information source (I won't get into why), then the information it transmits is I(D). If you consider the supply as the destination, then we have I(D) >= I(S). In other words you can never have more information arrive at the destination that what was transmitted by the source. Information equilibrium holds when I(D) = I(S). That's the basic theoretical idea (he applies it to much more than just D and S though). We can't say too much about the I(D) > I(S) times unfortunately, but assuming equality leads to some empirically impressive (in my view) simple models of some long term trends in many countries. Events like recessions look like a special kind of noise (dovetailing with Friedman's "plucking" model it turns out), but are times of "non-ideal information transfer" (i.e. when we have to use > rather than =).

      Also, I should point out that information equilibrium is very different from the usual equilibrium we hear about in economics. It's POSSIBLE that a change from random to coordinated behavior could in some sense improve a market; it's just very, very unlikely. It's like all the gas molecules in a room, by chance, moving to one side of the room, and this somehow producing a "benefit." For a market, this kind of random coordination is literally many orders of magnitude more probable than in the room-full-of-gas case, but still vanishingly small (an economy might have ~1e9 "particles" whereas a container of gas may have ~1e24 molecules). And beyond this simple comparative measure, humans aren't exactly molecules, and they DO, at times, panic or coordinate in other ways (in other words, 1e9 molecules are still more random than 1e9 humans). Such times are generally outside the scope of Jason's models, though. Interestingly enough, the usual economic concept of expectations is one thing that can lead to this (almost certainly destructive) coordination: when expectations are wrong, they can cause non-ideal information transfer. It's only when they are very close to correct that an improvement over assuming "maximum ignorance" occurs. And again, I'm not speaking of individuals here (who may well benefit from being 51% correct); I mean this in the macro sense.
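      For scale, here's the back-of-envelope arithmetic behind that comparison (my own toy coin-flip model of "coordination"):

```python
from math import log10

# N independent "particles," each landing on one of two sides of the
# room with probability 1/2. The chance that ALL of them end up on
# the same side is 2 * 0.5**N = 2**(1 - N).
for n, label in [(1e9, "economy (~1e9 agents)"),
                 (1e24, "gas (~1e24 molecules)")]:
    log10_p = (1 - n) * log10(2)
    print(f"{label}: P(all on one side) ~ 10^{log10_p:.3g}")

# ~10^-3e8 for the economy vs. ~10^-3e23 for the gas: astronomically
# more probable, yet still effectively impossible without a
# coordinating mechanism (like a panic).
```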

    2. Hi Tom,

      Thanks for taking the time to comment. I’m not familiar with Jason Smith’s work, but it looks interesting, and I plan to take a look.

      When I think about “information,” I think about the “frameworks” through which the information is interpreted, appraised, etc. Some of these frameworks will be economic models, e.g., Black-Scholes, New Classical, etc.; some will be verbal “models,” e.g., “trickle down,” “clash of civilizations,” etc. Three points seem compelling to me: 1) data, information, etc., are viewed and understood through a variety of models, frameworks, etc.; 2) the expectations of people drawing on different models and frameworks will not be mutually consistent; and 3) since general equilibrium requires common knowledge and mutually consistent expectations, and since these conditions aren’t satisfied in the real world [see 1 & 2 above], GE models have some serious limitations as vehicles for understanding “our world.”

      That’s all for now. Thanks again for your interest.

    3. This comment has been removed by the author.

    4. This comment has been removed by the author.

    5. Hi Greg,

      I erased two longish comments, but the basic point was this: I suspect the sense in which Jason means "information" is different from your meaning. He means a quantifiable measure (e.g., bits) in the technical sense of Shannon's information theory. Check out the permanent links in his right-hand column, where he explains in what sense he means "information."

      OK, thanks Greg!

    6. Hi Tom,

      I'm sure Jason's conception of information is different from mine, which is simply the ordinary-language sense of the word. I'm a bit skeptical of technical definitions of terms, e.g., treating "choices based on information" or "states" in some mathematically defined manner, because they often lose touch with the conceptual vocabulary in which real people actually frame their options.

      I say skeptical, but not closed-minded. If someone can develop a model that helps explain, and maybe even predict, the behavior of recognizable "units," e.g., aggregate consumption, investment, etc., then I'm interested. Despite their methodological arrogance, many New Classical economists seem pretty naive about issues in the philosophy of science and the differences between natural science and social science. Peter Winch's excellent monograph, "The Idea of a Social Science," is a good place to start if you're interested.

      Thanks again, Greg

    7. I'll check Winch out, Greg. Thanks. And there's nothing wrong with skepticism. I'll leave you with a post that I helped inspire (a bit more philosophical):
      http://informationtransfereconomics.blogspot.com/2015/06/falsifiabilite-simplicite-succes-ou-la.html

      I have no idea if Jason's approach will ultimately turn out to be very useful. But I do appreciate his experience in the natural sciences, and his emphasis on keeping empirical checks on what he's trying to do close at hand. His view is different from the view you get on more mainstream blogs (of which there are many good ones, BTW, as I'm sure you already know).

      I'd like to see MORE ideas enter the ring, and I'd like to see people commit up front to what evidence would knock their ideas out of the ring. So: more candidates, and better culling of failed concepts based on the evidence. I have the impression that it's hard to reach consensus on when a theory/framework/model has definitely failed empirically, and thus should be abandoned (no matter how "beautiful" it is).
