On Rationality, Utility and Self-Interest
Rational Choice Theory - a framework for understanding and analyzing human decision-making - is widely used in economics, political science, sociology, and other social sciences to model and explain how individuals and organizations make choices when faced with a range of options and constraints. Frank P. Ramsey (1903 – 1930), a British philosopher, mathematician, and economist, was instrumental in laying the foundations of the theory of rational choice.
Ramsey argued that an individual's preferences (the way they rank different alternatives) can be represented mathematically by a utility function if these preferences satisfy certain consistency conditions. A utility function assigns a numerical value (utility) to each possible outcome or alternative, reflecting the individual's subjective satisfaction with, or preference for, that outcome. Essentially, it quantifies the individual's “happiness” or “well-being” associated with different choices.
As noted, an individual’s preferences can only be translated into a mathematical utility function if they meet certain consistency conditions. These include the following; a short sketch after the list illustrates how the first two can be checked for a small set of alternatives:
Completeness: This condition requires that an individual can compare and rank all possible pairs of outcomes or alternatives. For example, if I have the choice between chocolate and marshmallows, I must either prefer chocolate to marshmallows, prefer marshmallows to chocolate, or be indifferent between the two. In other words, this condition ensures that there are no gaps or missing comparisons in the individual's preference order.
Transitivity: This crucial condition ensures logical consistency of preferences. For example, if an individual prefers chocolate to marshmallows and marshmallows to gummy bears, then they must also prefer chocolate to gummy bears. Mathematically, if chocolate ≻ marshmallows and marshmallows ≻ gummy bears, then chocolate ≻ gummy bears.
More-is-Better: This condition reflects the assumption that, in general, more of a good thing is better than less. It implies that if one bundle of goods contains at least as much of every good as another bundle, and more of at least one, then the individual prefers the first bundle.
Continuity: Continuity is a technical condition requiring that preferences do not change abruptly: if an individual prefers one bundle to another, bundles sufficiently similar to the first are also preferred to the second. Together with the other conditions, it ensures that the preferences can be represented by a utility function.
Convexity: Convexity is a condition on the shape of indifference curves. It implies that if an individual is indifferent between two bundles of goods, they weakly prefer a mixture of the two to either extreme. In other words, consumers tend to prefer diversified consumption bundles.
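As announced above, here is a minimal sketch of how the first two conditions, completeness and transitivity, could be checked for a small, finite set of alternatives. The alternatives and preference data are invented for illustration; real preference relations typically range over infinitely many bundles, so this is only meant to make the definitions tangible.

```python
from itertools import combinations

# Hypothetical, finite preference data: weakly_prefers[(a, b)] is True if a is
# (weakly) preferred to b. Purely illustrative numbers and names.
alternatives = ["chocolate", "marshmallows", "gummy bears"]
weakly_prefers = {
    ("chocolate", "marshmallows"): True,
    ("marshmallows", "chocolate"): False,
    ("marshmallows", "gummy bears"): True,
    ("gummy bears", "marshmallows"): False,
    ("chocolate", "gummy bears"): True,
    ("gummy bears", "chocolate"): False,
}

def is_complete(alts, prefs):
    """Completeness: for every pair, at least one direction of preference holds."""
    return all(prefs.get((a, b)) or prefs.get((b, a)) for a, b in combinations(alts, 2))

def is_transitive(alts, prefs):
    """Transitivity: if a is preferred to b and b to c, then a is preferred to c."""
    return all(
        not (prefs.get((a, b)) and prefs.get((b, c))) or prefs.get((a, c))
        for a in alts for b in alts for c in alts
        if len({a, b, c}) == 3
    )

if is_complete(alternatives, weakly_prefers) and is_transitive(alternatives, weakly_prefers):
    # Any order-preserving numbering works as a utility function: rank each
    # alternative by how many others it is weakly preferred to.
    utility = {a: sum(bool(weakly_prefers.get((a, b))) for b in alternatives if b != a)
               for a in alternatives}
    print(utility)  # e.g. {'chocolate': 2, 'marshmallows': 1, 'gummy bears': 0}
```

If both checks pass, any order-preserving numbering of the alternatives serves as a utility function in Ramsey's sense: higher numbers simply mean "more preferred."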
Based on these conditions, the theory of rational choice states that a rational actor seeks to make choices that maximize their expected utility. In other words, when presented with multiple options, they choose the one that is expected to provide the highest level of satisfaction, taking into account the resources (e.g., time, money, effort) required to obtain each outcome.
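To make expected-utility maximization concrete, here is a minimal sketch. The options, probabilities, and utility numbers are invented for illustration; the theory only requires that each option can be scored by its probability-weighted utility and that the highest-scoring option is chosen.

```python
# Hypothetical decision: each option is a lottery over (probability, utility) pairs.
# The utilities and probabilities below are assumed purely for illustration.
options = {
    "safe job offer":  [(1.0, 50)],               # certain, moderate utility
    "risky start-up":  [(0.2, 300), (0.8, 10)],   # small chance of a big payoff
    "further studies": [(0.6, 120), (0.4, 20)],   # costs of time/money already netted out
}

def expected_utility(lottery):
    """Expected utility: sum of probability-weighted utilities of the outcomes."""
    return sum(p * u for p, u in lottery)

best = max(options, key=lambda name: expected_utility(options[name]))
for name, lottery in options.items():
    print(f"{name}: EU = {expected_utility(lottery):.1f}")
print("Rational choice:", best)  # here: 'further studies' with EU = 80.0
```

Note that the utility numbers are assumed to already encode whatever resources (time, money, effort) each option requires.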
Key Criticisms
The assumption of human rationality in the theory of rational choice has been a subject of both criticism and debate in the social sciences. A key criticism of traditional rational choice theory is its assumption of perfect rationality: individuals are taken to have unlimited cognitive abilities and to always choose the option that maximizes their utility.
Herbert A. Simon (1916 – 2001), a pioneering figure in economics, cognitive psychology, and computer science, challenged the traditional economic assumption of perfect rationality. Accounting for the fact that individuals’ cognitive abilities are limited, he introduced the concept of ‘bounded rationality’.
Key features of bounded rationality include the following:
Limited Information: Individuals do not have access to complete information about all available choices and their consequences. They must make decisions based on the information they have, which is often incomplete or imperfect.
Limited Computation: Human cognitive processes have finite computational capacity. Individuals cannot process vast amounts of information and perform complex calculations in real-time.
Satisficing: Rather than optimizing (i.e., maximizing utility), individuals often use a strategy called "satisficing." This means they aim to find a solution or make a decision that is "good enough" or satisfactory, rather than seeking the best possible outcome (a short sketch after this list contrasts the two strategies).
Heuristics: Bounded rationality acknowledges that people frequently use heuristics to simplify decision-making. These heuristics are simple rules of thumb that help individuals make quick decisions.
Adaptive Decision-Making: Individuals adapt their decision-making strategies to their environment and the complexity of the decision at hand. In simpler situations, they may employ more deliberate reasoning, while in complex or time-sensitive situations, they may rely more on simple rules of thumb.
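To make the contrast between optimizing and satisficing concrete, here is a small sketch. The options, their scores, and the aspiration level are purely hypothetical; what matters is that the satisficer stops searching as soon as an acceptable option turns up, while the maximizer has to evaluate every option.

```python
# Maximizing versus satisficing on the same (hypothetical) stream of options.
# The scores stand in for utilities that are costly to evaluate one by one.
options = [("A", 62), ("B", 71), ("C", 85), ("D", 90), ("E", 68)]

def maximize(opts):
    """Evaluate every option and pick the best (assumes unlimited time and computation)."""
    return max(opts, key=lambda o: o[1])

def satisfice(opts, aspiration=70):
    """Stop at the first option that clears the aspiration level ('good enough')."""
    for name, score in opts:
        if score >= aspiration:
            return name, score
    return maximize(opts)  # fall back if nothing is good enough

print(maximize(options))   # ('D', 90) - best, but requires scoring all options
print(satisfice(options))  # ('B', 71) - found after only two evaluations
```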
Another challenge to the notion of perfect rationality comes from the field of behavioural economics. The ground-breaking work of Daniel Kahneman and Amos Tversky, conducted in the 1970s and 1980s, shows that people do not evaluate outcomes in terms of final states of wealth or utility, as assumed in traditional economic models. Instead, they assess outcomes in terms of gains and losses relative to a reference point, often their current status or an expected outcome. A key finding of Kahneman and Tversky's work is that individuals tend to feel the pain of losses more acutely than the pleasure of equivalent gains. As a result, they are often willing to take risks to avoid losses but tend to be risk-averse when faced with potential gains.
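One way to picture this asymmetry is with a reference-dependent value function in the spirit of Kahneman and Tversky's work. The functional form and parameter values below are illustrative estimates commonly quoted in the behavioural-economics literature, not figures taken from the text above; this is a sketch, not their exact model.

```python
# A reference-dependent value function in the spirit of prospect theory.
# The parameters are commonly cited illustrative estimates, assumed here for demonstration.
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def value(x, alpha=ALPHA, beta=BETA, lam=LAMBDA):
    """Subjective value of a gain or loss x measured from the reference point."""
    if x >= 0:
        return x ** alpha              # gains: concave (diminishing sensitivity)
    return -lam * (-x) ** beta         # losses: steeper (loss aversion)

print(round(value(100), 1))    # ≈ 57.5   subjective value of a gain of 100
print(round(value(-100), 1))   # ≈ -129.5 the equivalent loss hurts more than twice as much
```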
These insights help to explain phenomena in people’s behaviour that are not accounted for in traditional economic models. One such phenomenon is the so-called framing effect. Experiments show that people's choices are influenced by the way information is presented or "framed." Essentially, the same information can evoke different responses depending on how it is phrased or framed. This effect highlights the psychological and emotional impact that presentation and context can have on decision-making, often leading to irrational or inconsistent choices.
Here's an example of the framing effect: Imagine a scenario where a new medical treatment is being considered, and individuals are given two treatment options:
Option A: This treatment has a 70% success rate in curing patients.
Option B: This treatment has a 30% failure rate, resulting in patients not being cured.
Both Option A and Option B describe the same treatment, but they are framed differently:
Option A emphasizes the positive outcome (success rate).
Option B emphasizes the negative outcome (failure rate).
Studies have shown that when presented with this choice, people tend to prefer Option A (the treatment with a 70% success rate) over Option B (the treatment with a 30% failure rate). This is despite the fact that the two options are logically equivalent.
The framing effect occurs because individuals tend to be risk-averse when faced with gains (positive frames) and risk-seeking when faced with losses (negative frames). In the context of this example, Option A is framed as a gain (success), making it more attractive, while Option B is framed as a loss (failure), making it less attractive.
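To connect this back to the value function sketched earlier, here is a deliberately simplified illustration (the function is reproduced so the snippet runs on its own). It proves nothing about the medical example as such; it only shows that coding the identical outcome as a gain or as a loss changes its subjective evaluation.

```python
# Same medical outcome, two frames, evaluated with the illustrative value
# function from the earlier sketch (parameters are assumed, not from the text).
ALPHA, LAMBDA = 0.88, 2.25

def value(x):
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

cured, not_cured = 70, 30          # out of 100 patients; the treatment is identical

gain_frame = value(cured)          # "70% success rate" is coded as a gain
loss_frame = value(-not_cured)     # "30% failure rate" is coded as a loss

print(round(gain_frame, 1))        # ≈ 42.0  -> the treatment feels attractive
print(round(loss_frame, 1))        # ≈ -44.9 -> the very same treatment feels aversive
```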
The framing effect has been widely observed in various domains, including marketing, finance, and public policy. Marketers often use framing to influence consumers' choices, and policymakers use it to shape public opinion on issues. It underscores the importance of clear and unbiased presentation of information in decision-making processes to minimize cognitive biases like framing.
Another cognitive bias – the so-called endowment effect – was identified in the work of Richard Thaler, also a pioneer in behavioural economics. Experiments show that individuals tend to assign a higher value to objects they own or possess (are endowed with) than to identical objects that they do not own. In other words, people tend to overvalue what they already have, simply because they own it.
Here is a simple demonstration of the endowment effect based on an experimental test. Half of the participants were given a coffee mug for free and subsequently offered the chance to sell it to the participants who had not received one. To sell it, they had to put a price on the mug (which they had received for free), indicating their willingness to sell. Participants who were not given a mug, in turn, were asked how much they would be willing to pay to obtain one. As it turned out, those who had been endowed with a mug asked for more than double the amount the other side was willing to pay.
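A toy version of this comparison is sketched below. The stated prices are invented for illustration and are not the original data (those are reported in Kahneman, Knetsch, and Thaler, 1990); the point is simply that the owners' median asking price comfortably exceeds the non-owners' median offer.

```python
from statistics import median

# Hypothetical stated prices (in dollars) in the style of the mug experiment
# described above; the numbers are invented for illustration.
willingness_to_accept = [7.00, 5.50, 8.00, 6.25, 7.50, 5.00]   # mug owners (sellers)
willingness_to_pay    = [2.50, 3.00, 2.75, 3.50, 2.25, 3.25]   # non-owners (buyers)

wta, wtp = median(willingness_to_accept), median(willingness_to_pay)
print(f"Median asking price:  ${wta:.2f}")
print(f"Median offered price: ${wtp:.2f}")
print(f"WTA/WTP ratio:        {wta / wtp:.1f}")   # > 2 reproduces the pattern in the text
```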
As such, the endowment effect challenges the traditional economic assumption that people make rational decisions based solely on objective factors. Instead, it highlights the influence of psychological ownership and attachment on individuals' perceptions of value. This effect has significant implications for decision-making, pricing, and negotiation, and it is a key concept in behavioural economics.
Utility vs Self-Interest
In a recent book, the German philosopher Julian Nida-Rümelin poses a different challenge to the theory of rational choice. In discussing the assumptions of traditional rational choice theory, he raises the question of whether the utility an individual seeks to maximize necessarily equals their self-interest. Nida-Rümelin denies this, because we do not know the motives underlying the individual's preferences.
He illustrates this point with a simple example. Assume an individual is a pure Kantian, i.e., someone who aligns their preferences, or the maxims that guide their preferences, with the categorical imperative. In short, the Kantian does not follow economic rationality but moral reason. Yet the preferences guiding their actions still fulfill the postulates of the utility theorem mentioned above.
Is the Kantian actor, then, a utility maximizer? In a formal sense, yes, as constituted by the postulates of the rational choice theorem. Yet, since a purely Kantian actor does not optimize their self-interest, the utility function is not a representation of the actor's own interests.
Since the theory of rational choice is blind to the specific motivation behind a given individual's preferences, Nida-Rümelin concludes:
“It is a fallacy that permeates much of the economics literature to assume that modern economic utility theory provides evidence that rational persons maximize their self-interest.”
Implications for Public Policy
In applying economic theory to public policy, it is crucial to exercise caution when leaning heavily on rational choice theory. While the framework has its merits, individuals have cognitive limitations: their decisions are constrained by the information they can access and by their capacity to process it.
Moreover, cognitive biases, such as confirmation bias or the framing effect, can sway decisions in unpredictable ways. Rational choice theory, in its abstraction, is blind to these biases and can provide an oversimplified view of human behaviour. Additionally, the theory is often applied as if individuals acted solely out of self-interest, neglecting the rich tapestry of motivations and values that drive real-world decisions. Therefore, in the complex landscape of public policy, it is imperative to consider these limitations and supplement rational choice theory with a nuanced understanding of human behaviour to formulate more effective and equitable policies.
—————————————————————————————————————
Bibliography:
Ramsey, F. P. (1926). Truth and probability. In Readings in Formal Epistemology: Sourcebook (pp. 21-45). Cham: Springer International Publishing.
Simon, H. A. (1990). Bounded rationality. In Utility and Probability (pp. 15–18).
Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1990). Experimental tests of the endowment effect and the Coase theorem. Journal of Political Economy, 98(6), 1325-1348.
Nida-Rümelin, J. (2023). A theory of practical reason. Springer Nature.
Nida-Rümelin, J. (2020). Eine Theorie praktischer Vernunft. Berlin/Boston: De Gruyter. https://doi.org/10.1515/9783110605440, p. 103. [Note: The quote is my translation from the German edition.]