The typical representation theorem for expected utility theory can be roughly understood as asserting the following:
If an agent’s preferences satisfy conditions C, then she can be represented as maximising her expected utility under a particular set of credences and utilities.
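In schematic terms, "represented as maximising her expected utility" is usually understood along roughly the following lines, where the notation (a finite set of states $S$, a credence function $p$, a utility function $u$, and a preference relation $\succeq$ over acts) is purely illustrative and not drawn from any particular theorem:

% Illustrative notation only: S, p, u, and \succeq are placeholders, and acts are
% treated, for simplicity, as functions from a finite set of states to outcomes.
\[
  A \succeq B
  \quad\text{iff}\quad
  \sum_{s \in S} p(s)\, u\bigl(A(s)\bigr) \;\ge\; \sum_{s \in S} p(s)\, u\bigl(B(s)\bigr).
\]

That is, the credences and utilities delivered by the theorem are ones under which the agent's preferences track comparisons of expected utility.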
Philosophers have long thought that such theorems tell us something interesting about the connection between agents' preferences on the one hand and their credences and utilities on the other. But the kinds of agents these theorems might tell us about are highly idealised: they have perfectly probabilistically coherent and infinitely precise degrees of belief, full knowledge of all logical and mathematical truths, and an implausible kind of deductive infallibility. Most of us do not look very rational when compared to the kinds of agents usually discussed in decision theory. Of course, we manage to get by, and we generally act in ways that tend to bring about the things we prefer, given how we take the world to be; but we are not even close to ideally rational. It would be nice to have a representation theorem for us, too. In this paper, I will develop an expected utility representation theorem aimed at representing highly irrational agents.