In classical economics, agents are modeled as rational Bayesians who take whatever actions
will maximize their expected utility $\mathbb{E}_{\omega \in \Omega}[U(\omega)]$, given their subjective probabilities $\{p_\omega\}_{\omega \in \Omega}$
over all possible states $\omega$ of the world. This, of course, is a caricature that seems almost designed
to be attacked, and it has been attacked from almost every angle. For example, humans are not
even close to rational Bayesian agents, but suffer from well-known cognitive biases, as explored by
Kahneman and Tversky [81] among others.
Furthermore, the classical view seems to leave no room
for critiquing people’s beliefs (i.e., their prior probabilities) or their utility functions as irrational—
yet it is easy to cook up prior probabilities or utility functions that would lead to behavior that
almost anyone would consider insane. A third problem is that, in games with several cooperating
or competing agents who act simultaneously, classical game theory (via Nash's theorem) guarantees
the existence of at least one equilibrium among the agents' possibly randomized strategies. But the usual situation is that there are
multiple equilibria, and then there is no general principle to predict which equilibrium will prevail,
even though the choice might mean the difference between war and peace.
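The multiplicity problem can be made concrete with a small coordination game. The sketch below uses Stag Hunt payoffs, which are an illustrative assumption rather than anything specified in the text: checking every strategy profile for profitable deviations finds two pure-strategy Nash equilibria, one mutually better than the other, with nothing in the equilibrium concept itself to select between them.

```python
# Illustrative sketch: a 2x2 coordination game (Stag Hunt) with two
# pure-strategy Nash equilibria. The payoff numbers are made up for
# illustration; the point is that equilibrium existence alone does not
# predict which outcome prevails.
from itertools import product

ACTIONS = ("stag", "hare")
# PAYOFFS[(row_action, col_action)] = (row_payoff, col_payoff)
PAYOFFS = {
    ("stag", "stag"): (4, 4),   # cooperation: best joint outcome
    ("stag", "hare"): (0, 3),   # a lone stag hunter gets nothing
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),   # safe but inferior equilibrium
}

def is_nash(profile):
    """A profile is a Nash equilibrium if neither player can gain by
    unilaterally deviating to another action."""
    row, col = profile
    row_pay, col_pay = PAYOFFS[profile]
    row_best = all(PAYOFFS[(dev, col)][0] <= row_pay for dev in ACTIONS)
    col_best = all(PAYOFFS[(row, dev)][1] <= col_pay for dev in ACTIONS)
    return row_best and col_best

equilibria = [p for p in product(ACTIONS, ACTIONS) if is_nash(p)]
print(equilibria)  # [('stag', 'stag'), ('hare', 'hare')]
```

Both `('stag', 'stag')` and `('hare', 'hare')` survive the deviation check, so even perfectly rational players face a coordination problem that the theory does not resolve.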
Computational complexity theory can contribute to debates about the foundations of economics
by showing that, even in the idealized situation of rational agents who all have perfect information
about the state of the world, it will often be computationally intractable for those agents to act
in accordance with classical economics. Of course, some version of this observation has been
recognized in economics for a long time. There is a large literature on bounded rationality (going
back to the work of Herbert Simon [122]), which studies the behavior of economic agents whose
decision-making abilities are limited in one way or another.
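One way to picture the bounded-rationality contrast is a toy agent that cannot evaluate the full expectation $\sum_\omega p_\omega U(\omega)$ and instead inspects only a few sampled states. Everything in the sketch below (the states, probabilities, utilities, and the sampling mechanism) is a made-up assumption for illustration, not a model from the literature:

```python
# Illustrative contrast between a fully rational expected-utility maximizer
# and a boundedly rational agent limited to k sampled states. All numbers
# below are invented for the sake of the example.
import random

STATES = ("boom", "normal", "bust")
PROBS = {"boom": 0.2, "normal": 0.5, "bust": 0.3}
# UTILITY[action][state]
UTILITY = {
    "stocks": {"boom": 10, "normal": 4, "bust": -6},
    "bonds":  {"boom": 2,  "normal": 2, "bust": 2},
}

def rational_choice():
    """Classical agent: maximize E[U] = sum_w p_w * U(w) over all states."""
    return max(UTILITY,
               key=lambda a: sum(PROBS[w] * UTILITY[a][w] for w in STATES))

def bounded_choice(k, rng):
    """Boundedly rational agent: estimates utilities from only k
    sampled states, so its choice can differ from the rational one."""
    sample = rng.choices(STATES, weights=[PROBS[w] for w in STATES], k=k)
    return max(UTILITY,
               key=lambda a: sum(UTILITY[a][w] for w in sample) / k)

print(rational_choice())  # 'stocks': E[U] = 0.2*10 + 0.5*4 + 0.3*(-6) = 2.2 > 2
```

The bounded agent's answer depends on which states it happens to sample; with a small `k` it may well pick the safe `'bonds'` option, which is the flavor of behavior the bounded-rationality literature studies.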