Human Altruism and Social Structure
This week we discussed the article Strong Reciprocity and the Roots of Human Morality by Gintis et al. (2008) in relation to the evolution of human altruism and morality. The authors set up a dichotomy early in the paper between two opposing camps. In the first camp are those who view human morality as “enlightened self-interest.” According to this view, human morality evolved under individual selection in small, closely related hunter-gatherer groups, in which altruism was favored because nearly all interactions were among close kin rather than strangers. In the modern day, however, humans interact far more commonly with strangers, and on this view human morality is a maladaptive leftover from our hunter-gatherer days. The second camp asserts that human altruism and morality evolved as a result of group selection. The idea dates back to Darwin, who suggested in The Descent of Man that, under natural selection, “tribes” with a higher proportion of altruists would fare better than those with fewer, and thus morality would be favored. To explain why they favor this view, the authors introduce their concept of strong reciprocity, which they define as “a propensity, in the context of a shared social task, to cooperate with others similarly disposed, even at personal cost, and a willingness to punish those who violate cooperative norms, even when punishing is personally costly.” In their view, morality is an innate trait that is favored by group selection and lacks selfishness. While discussing the paper, our group felt that the authors mischaracterized the biological literature on human morality, altruism, and kin selection, and seemed not to understand the gene-centered views of Hamilton and Dawkins.
The authors seemed to feel that group selection is somehow less selfish than kin selection, and they ignored the parallels between the two schools of thought, as well as the fact that group selection can act synergistically with kin selection when the group in question happens to be made up of closely related kin. Overall, the authors appear to have been trying to make a case for why “true altruism” should be favored.
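As an aside not drawn from the article itself, the kin-selection position the group had in mind is conventionally summarized by Hamilton's rule: an altruistic act is favored by selection when

    rb > c,

where r is the coefficient of relatedness between actor and recipient, b is the fitness benefit to the recipient, and c is the fitness cost to the actor. In a small hunter-gatherer band where r is high, even a costly act can satisfy the inequality, which is one sense in which kin selection and group selection over kin-structured groups make overlapping predictions.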
We also discussed the Ultimatum Game from game theory in relation to human morality. In the basic form of the Ultimatum Game there are two players, A and B. A has a set amount of money, say $100, which A must share with B, so A proposes a split that B can either accept or reject. If B accepts the split, both players keep their respective amounts; if B rejects it, neither player keeps any money. The rational strategy in this game would be for B to accept any offer from A, because receiving something is better than nothing; likewise, A should offer B a meager portion of the money (say $1) in order to maximize A's own profit. In experimental trials, however, B subjects tend to reject offers from A that they perceive as unfair, even though doing so leaves both players with nothing, and they show emotional upset when presented with an unfair offer. The level perceived to be fair varies from culture to culture, but the qualitative pattern is similar across cultures. Likewise, A subjects often initially offer rather high splits close to fairness (say 50%), despite this not being the rational choice. In repeated games (with the same pool of participants but different pairs each time), A subjects gradually begin to offer less and B subjects gradually begin to accept less. However, if B is given the opportunity to spend money to punish A for being unfair, the results change: A quickly learns to offer B a fair deal to avoid punishment. This strategy is also not rational; a rational B can only lose money by spending it to punish individuals with whom he or she will not be paired in the future. Yet in experiments, B subjects often choose to punish A. Similar results are observed in the opposite situation, where B can pay a cost to reward A for acting fairly: despite it being irrational to do so, B subjects often choose to reward A.
[Note: “rational” here refers specifically to rational choice theory and not to broader concepts of rationality.]
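The payoff structure described above can be sketched in a few lines of code. This is a minimal illustration, not taken from any of the experiments discussed: the `fair_minded` rule and its 30% threshold are hypothetical stand-ins for the fairness norms observed in subjects, contrasted with the "rational" responder who accepts any positive offer.

```python
def ultimatum(pot, offer, accept_rule):
    """Return (proposer_payoff, responder_payoff) for one round.

    A proposes to give `offer` out of `pot`; B's accept_rule decides.
    """
    if accept_rule(offer, pot):
        return pot - offer, offer
    return 0, 0  # rejection: neither player keeps anything

# The rational-choice responder accepts any positive amount.
rational = lambda offer, pot: offer > 0

# A hypothetical fairness-minded responder rejects offers below 30% of the pot.
fair_minded = lambda offer, pot: offer >= 0.3 * pot

# A $1 offer out of $100: the rational responder keeps the dollar...
print(ultimatum(100, 1, rational))      # (99, 1)
# ...but the fairness-minded responder rejects it, costing both players.
print(ultimatum(100, 1, fair_minded))   # (0, 0)
# A fair split is accepted by both kinds of responder.
print(ultimatum(100, 50, fair_minded))  # (50, 50)
```

The sketch makes the puzzle concrete: rejecting the $1 offer moves B from a payoff of 1 to a payoff of 0, so fairness-driven rejection is strictly worse for B in the one-shot game under rational choice theory.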
The Ultimatum Game has been thoroughly studied in a variety of settings and in tandem with other variables such as culture, status, attractiveness, and gender. The results consistently show that humans tend to act according to some standard of fairness rather than strict rationality, and are pained when acting or being treated unfairly.
Next we discussed the evolution of human society and culture. As with the discussion of the evolution of morality above, there are two main camps. One camp argues that society evolved due to reciprocal altruism and individual-based selection, similarly to the arguments for enlightened self-interest. The other camp argues that society acts as a super-organism and evolved due to group selection. These seem to parallel sociological theories of society (see Meeting 3.3). In tandem with the evolution of society is the evolution of culture. Again, there are two camps with regard to the interplay between cultural evolution and biological evolution. One camp argues that cultures change too quickly to meaningfully affect biological evolution, and the other argues that cultural and biological evolution feed back on each other. This led to a discussion of memes, which can be thought of as “cultural genes” that pass information from one generation to the next through books, songs, ideas, etc. Cultural evolution through memes is somewhat controversial because the units of inheritance and a mutational process are difficult to pin down, but the framework provides a useful simplification. The group did not reach a formal consensus with regard to the evolution of culture and its interplay with biology, but there was no strong opposition to the idea that genetic and cultural evolution interact.
Lastly, we discussed how behavioral heuristics may be involved in the evolution of human morality and altruism. Heuristics can be thought of as behavioral shortcuts or gut reactions to a stimulus. They apply generally to a wide variety of situations and produce behaviors with little or no “rational thought.” The latter point sparked a discussion in the group of whether rational thought and emotions can be separated, especially when discussing heuristics. Most felt that rational thought and emotions cannot be fully separated; again, however, an overall consensus was not reached. The interest in heuristics with respect to the evolution of altruism is whether or not the heuristics humans have evolved are currently adaptive. As with the enlightened self-interest argument, if humans evolved behavioral heuristics during their hunter-gatherer period, when most of the individuals one encountered were close kin, then heuristics to act altruistically toward all would have been favored. In the modern age, however, where individuals interact with a large number of non-kin, such heuristics are no longer favored. Group selection could also help explain the evolution of heuristics if acting altruistically toward those in your “in-group” leads to greater group fitness. The discussion of heuristics also prompted a discussion of their co-option toward other animals, namely pets, especially dogs.
To summarize, we discussed the evolution of human altruism and morality and the two overarching schools of thought that attempt to explain the phenomenon. One school favors individual-based selection, reciprocal altruism, kin selection, and enlightened self-interest. The other favors group selection and the interaction of genes and culture. Both schools of thought can explain the evolution of heuristics, but to different ends. We did not reach any answers or hard conclusions in our discussion, but we certainly left with a lot to ponder.
-Zach Grochau-Wright